Monoliths are simpler to understand, easier to run, and harder to break, right up until you have so many people working on them that it becomes difficult for a team to get its work done. At that point, start splitting off "fiefdoms" that let each team move quickly again.
https://en.wikipedia.org/wiki/Conway%27s_law
Operationally speaking, it yields solutions that are aligned with your organization and are thus easier to support.
Plus, I see microservices as a premature optimization. Let it be shown that the additional complexity is warranted and provides value.
Organizations which design systems (in the broad sense used here) are constrained to produce designs which are copies of the communication structures of these organizations.
— Melvin E. Conway, How Do Committees Invent?
Usually you start small and grow bigger, so there are only rare exceptions where it makes sense to merge microservices back into a monolith, except perhaps for cases where going with a microservice architecture was a bad decision taken without actually having the above-mentioned problem.
In a banking app there will be more requests for the account balance than there are logins, but logins will likely take longer.
Your argument is more about who is allowed to touch which parts and who is responsible when they break, but not about one of the core reasons to choose microservices.
For the most part, scaling a system is fungible across features. In particular, in a monolithic system, if I have to add another server because of login load, I just add another server. A side benefit is that if logins are down but account-balance checks are up, that extra server can pull that duty too. I don't need to say "these computing resources are only for this feature."
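The fungibility point can be sketched with a toy round-robin balancer (my own illustration; the server and endpoint names are made up): with a stateless monolith, a replica added because of login load can serve any other feature too.

```python
from itertools import cycle

# Toy sketch: every replica of a stateless monolith can serve every
# feature, so capacity is never pinned to one endpoint.
class RoundRobinBalancer:
    def __init__(self, replicas):
        self._replicas = cycle(replicas)

    def handle(self, request):
        # Whichever replica is next takes the request, regardless of
        # whether it's a login or a balance check.
        replica = next(self._replicas)
        return f"{replica} served {request}"

# "app-3" was added because of login load, but it absorbs any traffic:
balancer = RoundRobinBalancer(["app-1", "app-2", "app-3"])
print(balancer.handle("POST /login"))    # app-1 served POST /login
print(balancer.handle("GET /balance"))   # app-2 served GET /balance
print(balancer.handle("GET /balance"))   # app-3 served GET /balance
```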
Isolating failures is way more important.
Take, for example, the idea that you can scale individual services independently. Sounds amazing on paper. However, you can also deploy multiple copies of your monolith and scale them, at a much lower cost even, and at far lower complexity. In a cloud provider: bake an image, set up an ASG or equivalent, put a load balancer in front, add some rules for scaling up and down. You are basically done.
'Monolith' sounds really big and bad, but consider what's happening when people start using microservices. In this day and age, this probably means containers. Now you need a container orchestrator (like K8s) as you are likely spreading a myriad of services across multiple machines (and if you aren't, wth are you building microservices for?). You'll then need specialized skills, plus a whole bunch of CNCF projects to go with it. Once you can no longer just make direct 1:1 API calls everywhere, you'll start adding things like message queues and dedicated infrastructure for them.
If you are trying to do this properly, you'll probably want to have dedicated data stores for different services, so now you have a bunch of databases. You may also need coordination and consensus among some of your services. Logging and monitoring become much more complicated (and more costly with all those API invocations flying around) and you better have good tracing capabilities to even understand what's going on. Your resource requirements will skyrocket as every single copy of a service will likely want to reserve gigabytes of memory for itself. It's a heck of a lot of machinery to replicate what, in a monolith, would be a function call. Maybe a stack.
While doing all of that you need more headcount. If you had communication issues across teams before, you have more teams and more people now, so those issues will be exacerbated. They just won't be about function calls and APIs anymore.
There's also the claim that you can deploy microservices independently of one another. Technically true, but what's that really buying you? You still need to make sure all those services play nice with one another, even if the API has not changed. Your test environments will have to reflect the correct versions of everything and that can become a chore by itself (easier to track a single build number). Those test environments tend to grow really large. You'll need CI/CD pipelines for all that stuff too.
Even security scanning and patching becomes more complicated. You probably did have issues coordinating between teams, and that pushed you to go with microservices. Those issues are still there; now you need to convince everyone to patch their stuff (and there's probably a lot of duplication between codebases now).
I think it makes sense to split the monolith into large 'services' (or domains) as it grows. Just not 'microservices'.
A funny quote I read a while ago on Twitter: "Microservices are a zero interest rate phenomenon". A bit tongue-in-cheek, but I think there's some truth to it.
It seems people think "Big Ball of Mud" when they hear Monolith, but they're not equivalent. Just like I tend to hear "Spaghetti Code" when I hear Microservices. But again, they're not equivalents. Both architectures are equally capable of being messy.
But don't forget about the definition of micro :-) This is an optimization problem rather than a conceptual one.
The real thing that makes software easy to maintain is consistency with itself.
If you have one guy who formats his code with spaces and another with tabs, that creates friction in the codebase. If you have one guy who always uses `const` or `final`, but another who doesn't, again, it's hard to maintain.
The hard problem is where the boundaries between pieces of business logic are drawn. If the application is consistent in how it divides responsibilities, it'll be a pretty clean codebase and pretty easy to navigate and maintain. If you have two different rogue agents who disagree on coding styles and boundaries, you'll have a pretty difficult codebase to navigate.
The easiest codebases have consistent function signatures, architecture, calling conventions, formatting, style, etc and avoid "clever" code.
The less certain you are about a system's requirements the more you should prefer a monolith.
A well-understood system (eg an internal combustion engine) has well-defined lines and so it _can_ make sense to put barriers in between components so that they can be tweaked/replaced/fixed without affecting the rest of the system. This gives you worthwhile, but modest overall performance improvements.
But if you draw the lines wrong you end up with inefficiencies that outweigh any benefit modularity could bring.
Start with a monolith and break it up as the system's ideal form reveals itself.
Nearly all the microservices-based designs were terrible. A common theme was a one- or two-scrum-team dev cohort building out dozens of microservices and matching databases. Nearly all of them had horrible performance, throughput, and latency.
The monolith based systems were as a rule an order of magnitude better.
Especially where teams have found that you don't have to deploy a monolith just one way: you can pick and choose which endpoints to expose in various processes, all based on the same monolithic code base.
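A minimal sketch of that deployment style (the handler and role names are hypothetical): one codebase defines all endpoints, and a per-process role decides which subset gets exposed.

```python
# One monolithic codebase: all endpoints live together.
ALL_ENDPOINTS = {
    "/login": lambda: "login handled",
    "/balance": lambda: "balance handled",
    "/admin/report": lambda: "report handled",
}

# Deployment roles select which routes a given process registers.
ROLES = {
    "public-api": ["/login", "/balance"],
    "back-office": ["/admin/report"],
}

def build_app(role):
    # Same code in every process; only the exposed surface differs.
    return {path: ALL_ENDPOINTS[path] for path in ROLES[role]}

public = build_app("public-api")
assert "/admin/report" not in public   # back-office routes stay internal
assert public["/login"]() == "login handled"
```

The same build can then be launched as `public-api` in one process and `back-office` in another, while every deployment still tracks a single build number.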
Someday the Internet will figure out Microservices were always a niche architecture and should generally be avoided until you prove you need it. Most of the time all you’re doing is forcing app developers to do poorly what databases and other infrastructure are optimized to do well.
Instead of multiple teams working on the same codebase and stepping on each other's toes, each team can have clear ownership of "their" services. It also forces the teams to think about API boundaries and API design, simply because no other way of interaction is available. It also incentivizes building services as mostly independent applications (simply because accessing more services becomes harder to develop and test), which in turn makes your service easier to develop against and test in (relative) isolation.
However, what's of course a bit ridiculous is to require HTTP and network boundaries for this stuff. In principle, you should get the same benefits with a well-designed "modulith" where the individual modules only communicate through well-defined APIs. But this doesn't seem to have caught on as much as microservices have. My suspicion is that network boundaries as APIs provide two things that simple class or interface definitions don't. First, stronger decoupling: microservices live in completely separated worlds, so teams can't step on each other's toes with dependency conflicts, threading, resource usage, etc. There is a lot of stuff that would be part of the API boundary in a "modulith" that you wouldn't realize is, until it starts to bite you. Second, with monoliths, there is some temptation to violate API boundaries if it lets you get the job done quickly, at the expense of causing headaches later: just reuse a private utility method from another module, write into a database table, etc. With network/process boundaries, this is not possible in the first place.
It's a whole bunch of very stupid reasons, but as they say, if it's stupid and works, it ain't stupid.
Tbh, it didn't work for us: our org chart changes more frequently than the codebase's architecture (people come and go, so teams are combined, split, etc. to account for that, many devs also like rotation, because it's boring to work on the same microservices forever), so in the end basically everyone owns everything. Especially when to implement a feature, you have to touch 10 microservices -- it's easier and faster to do everything yourself, than to coordinate 10 teams.
>Second, with monoliths, there is some temptation to violate API boundaries if it lets you get the job done quickly, at the expense of causing headaches later: just reuse a private utility method from another module
This is solvable with a simple linter: it fails at build time if you try to use a private method from another module. We use one at work, and it's great.
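Such a linter can be surprisingly small. A minimal sketch (my own, assuming the common convention that a leading underscore marks a module-private name; the module names in the example are made up):

```python
import ast

def find_private_imports(source, filename="<src>"):
    """Flag `from X import _y` style imports of private names."""
    violations = []
    for node in ast.walk(ast.parse(source, filename)):
        if isinstance(node, ast.ImportFrom) and node.module:
            for alias in node.names:
                if alias.name.startswith("_"):
                    violations.append(
                        f"{filename}:{node.lineno}: imports private "
                        f"'{alias.name}' from '{node.module}'"
                    )
    return violations

# Run this over every file in CI and fail the build on any violation:
assert find_private_imports("from billing import _recalc_invoice\n")
assert not find_private_imports("from billing import recalc_invoice\n")
```

A real setup would also need an escape hatch for a module importing its own private helpers, e.g. by comparing the importing file's package against `node.module`.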
I've done multiple projects where we have fairly large services working together. Sometimes they function on their own; other times they hand off a task to another service in order to complete the entire process. Sometimes they need each other to enrich something, but if the other service isn't running, that's okay for a short time.
It is also worth remembering that if all your microservices need to be running at the same time, you just have a distributed monolith, which is so much worse than a regular monolith.
What I feel is missing from what he calls the "Microservices Premium" is a clear statement that this premium is paid in "ops" hours. That changes the game because of the scarcity of "ops" resources.
In fact, the microservices dilemma is an optimization problem related to the ops/dev ratio, and it is being wrongly treated as a conceptual problem.
This is the simplest analysis I could come up with:
Also
> By starting with microservices you get everyone used to developing in separate small teams from the beginning, and having teams separated by service boundaries makes it much easier to scale up the development effort when you need to.
Nope, not for my team at least: most features require touching several microservices, so you either have as many merge conflicts as there are edges (if one team is responsible for fixing what breaks on the other side, and yes, that happens) or you need twice the meetings with twice the people to make sure each side is doing what the other expects.
I recommend you learn to think for yourself given the context you are in, along with its power and social dynamics.
Sometimes problems are intrinsically complicated and the solution is required to be complex. But even in that case it's important to do the simplest thing you can get away with!
My experience is that people, myself included, almost always over-engineer unless they focus really hard on doing the simple thing. It takes concentrated effort to avoid architecture astronauts and their wildly convoluted solutions.
It's orders of magnitude easier to add complexity than to remove it. Do the simple thing!
Monolith First (2015) - https://news.ycombinator.com/item?id=26190584 - Feb 2021 (340 comments)
Monolith First (2015) - https://news.ycombinator.com/item?id=14778685 - July 2017 (163 comments)
Going down the monolith path != Having spaghetti as the core abstraction. :)
Modular Monoliths Are a Good Idea
These complex systems are made easier by being built as a single binary
Janky modular stuff first, full of inefficiencies, few formalities, designed to move fast and _not be in production_. Once it's up and running (in a test setup), you can "see the shape of it" and make it into a more cohesive thing (a "monolith").
This is consistent with how many other crafts older than programming have coalesced over centuries. I suspect there's a reason behind it.
You don't start with an injection-molded solid piece of metal with carefully designed edges that can be broken later (or shaven off) in different ways. You machine something that you don't understand yet using several different tools and practices, then once the piece does what you want (you got a prototype!), you move the production line to injection molding. The resulting mold is way less flexible than the machining process, but much more cohesive and easy to manage.
Of course, programming is different. The "production lines" are not the same as for plane parts. In programming you "fly the plane" as soon as it is "airworthy" and do improvements and maintenance in mid-flight. It's often a single plane; no need to make lots of the same model.
So, with that in mind, there's an appeal for carefully planning context boundaries. It's easier to ditch the old radar system for a new one, or replace the seats, or something like that, all during flight.
If the plane breaks down mid-flight, the whole thing is disastrous. So we do stuff like unit testing on the parts, and quality assurance on a "copy of the plane" with no passengers (no real users). Wait, aren't those approaches similar to old traditional production lines? Rigs to test component parts individually, fixtures (a term that comes from the machining world), quality assurance seals of approval, stress tests.
So, why the hell are we flying the plane as soon as it is barely air worthy in the first place? It creates all those weird requirements.
> When you begin a new application, how sure are you that it will be useful to your users?
Well, I don't have any clue. But _almost all_ applications I ever built were under serious pressure to be useful real fast. Barely flying was often enough to stop the design process; it had passengers at that point, so no need to go back to the drawing board. What the stakeholders often required was not a better production process, just making the barely flying plane bigger.
I know, I know. Borrowing experience from other industries is full of drawbacks. It's not the same thing. But I can't just explain the absurdities of this industry (which are mostly non-related to engineering) in any other way.
All of this reminds me of that "If Microsoft made cars..." joke, but it's not funny anymore.
However, I do not like the readiness with which people throw around the yagni argument for things they don't want to support or build, or that they disagree with, often contorting or oversimplifying it to get their way.
The yagni argument itself is reasonable, but is often misused/abused.
If there are three communicating services: the first has 90% of the business logic, the second has 7%, and the last one 3%. Should we call the first one a monolith? And what if they don't communicate?