I think that if you can't think of a good reason not to use some technology, then you don't understand the technology well enough yet and you shouldn't try to use it in production yet. It may be what you need, but thinking that "everything must switch to this" is usually a huge warning signal that cargo cult engineering is happening instead of real engineering.
Tech has HN
If I were consulting for this company, I would have told them to stop right there: microservices are probably not for them. Unless you build for microservices from the start on something like AWS Lambda, doing it with such a small team would be really hard.
And as they eventually discovered, a lot of unnecessary overhead for such a small team.
Beyond scaling a large development org the primary benefits of micro-services accrue to consultants who bill by the hour.
For me, that number says a lot more about the day-to-day life of devs than the microservices vs monolith label does.
The other important number was that about 25% of engineering was dedicated to building the tools to manage the microservices. We didn't work on customer facing software -- the other engineers were our customers. And I found that number to be pretty consistent amongst any company that was fully invested into microservices.
We eventually got to the point where we started re-defining everything in terms of "business processes" (i.e. creating a basket, paying for an order, making a complaint, etc). Teams were moving towards talking about processes rather than services; how we chose to organise the code and deploy it was starting to become more about practicalities, such as: this collection of endpoints serves an internal tool and scales differently from this other collection, which serves customer-facing clients.
I left not long ago, but my team of around 7 engineers had 2 primary services where 90% of the work happened. The other 10% of the work was on a few edge services, typically small serverless functions, which were mostly set-and-forget. It felt like we could move at a good pace with this setup. My new job leans even more monolithic, and it's even faster to get work done and build internal tools.
It would likely be insurmountable if not for using Kubernetes; we chose the idiomatic options (GKE/GitHub Actions/Argo/Prometheus/etc), so starting with microservices wasn't too bad.
At one point we had twice as many teams and almost twice as many services. The team had grown too large for optimal performance. When it was time to divide the team, we were able to categorize our services into two groups by the path of the data (deep back-end data processing for internal customers vs product services). Then we were able to assign a category to each team so that we had minimal dependencies and no code conflicts between teams.
No ownership is a bigger problem than team size imo.
That said, no reason to do it just because it's trendy.
What’s the typical failure rate of a method call within a process of a language of your choice?
If it’s not Java2k it will be one or two orders of magnitude lower than any cross-process, host or provider RPC call you’re ever going to make.
At smaller scales, just not having to deal with all the bugs and cleanup work missed error handling brings - and then building proper error handling for all relevant cases - can make a huge difference.
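To make the failure-rate point concrete, here's a minimal sketch (the error class, function names, and backoff values are all invented for illustration) of the retry/backoff wrapper that any cross-process or cross-host call ends up needing, and that an in-process method call almost never does:

```python
import time

class RpcError(Exception):
    """Stand-in for the network/timeout errors a real RPC client can raise."""

def call_with_retries(fn, attempts=3, base_delay=0.01):
    """Retry a flaky remote call with exponential backoff.
    In-process calls rarely need this; every cross-host call does."""
    for attempt in range(attempts):
        try:
            return fn()
        except RpcError:
            if attempt == attempts - 1:
                raise  # out of retries, surface the error to the caller
            time.sleep(base_delay * 2 ** attempt)  # exponential backoff

# A flaky "remote" call that fails twice before succeeding.
state = {"calls": 0}
def flaky_remote_call():
    state["calls"] += 1
    if state["calls"] < 3:
        raise RpcError("connection reset")
    return "ok"

print(call_with_retries(flaky_remote_call))  # ok
```

And this sketch only covers transient failures; real services also need timeouts, circuit breakers, and idempotency on top of it.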
The important lessons here are on how they recognized the mistake before they had fully committed. Having worked on a team with a similar story, it all rings very true.
Separation of responsibilities? Easier to analyze because you only have so many inputs and outputs to a simpler system?
Debugging something that touches a lot of paths in a monolith can be quite nightmarish as well.
Arguably, though, it's not special tools, just different tools. You usually can't run a debugger live in production, but with tools like distributed tracing and service meshes you get really close.
Monoliths and microservices are so often presented as "x is better than y", but it should be "which is more applicable for the team size, product and operational concerns".
Monoliths are a great choice for certain team sizes and applications. Want stricter isolation and a smaller blast radius between different teams and products, and need to scale different things differently? Microservices are probably a better choice.
Microservices decouple teams, so they can get their work done without stepping on each other's toes.
That being said, the biggest hurdle in a re-architecture project like this is usually in the "n=1 -> n=2" stage, and "n=2 -> n=5" is a lot easier: once you add service #2, you learn how to set up telemetry, permissions, billing/chargeback, alerting, etc. The next few are just repeating the same process.
It's almost as if in order to succeed these days you need to discredit and disparage your competition rather than simply having a better product, and that's why I don't buy into buzz words at all.
If it's not broken, don't fix it... Microservices are relatively new and unproven. The way the world has rushed to dive into microservice infrastructure only highlights the reckless spending and waste characteristic of the overpriced goods and high taxes that are constantly thrust upon us as consumers.
Microservice architecture is also inherently designed to lock a customer into very specific tools that make future migration to any other platform a very costly decision in most cases... Thereby locking a customer into platform-specific dependency. Microservices architecture also introduces the ability for providers to charge for each specific service as a utility... Instead of being charged for one single server annually, on microservices you can be charged for many individual components that run your app independently, and when usage skyrockets, it's a sticker shock that you can only stop by going offline.
We have also seen enough failures and pain points within microservice and even cloud architectures over the past two years alone to raise questions about whether or not it is indeed a better solution.
We need to stop disparaging traditional (non-cloud) hosting and solutions that aren't obsolete at all in this manner, and focus on what works, what is secure, and what is cost effective in order to stay sustainable into the future.
The more we allow marketing minds to take control of our IT decisions over reasonable technical minds, the more costly it will be for us all over time, no matter what salary we make. Big tech firms will hate me for saying this, but anyone in the chain can tell that a reckless drive for weak/vulnerable/costly/over-complex IT solutions cannot be sustained as a viable long-term business sales strategy anyway.
I work at a large retail company with who knows how many developers. We have different teams for payment, promotions, product search, account, shipping and more. All of them working on a single codebase with coordinated deployments would be a nightmare.
Previously, I joined a startup (previous coworkers of mine): a developer and a business guy. The developer "drank the microservices kool-aid" and came up with (in theory) super scalable solutions and like a dozen microservices. It was difficult to keep things in mind, and the tech stack was way too complicated for two developers. It was also less performant and more costly. The added complexity was totally unnecessary, especially because we never got tons of users, nor more developers. The business guy trusted the developer, so the company never worked enough on their product and USP. I guess the developer just didn't want to accept that the fancy tech solutions wouldn't bring success.
Yet another time, we were a small team (5-ish devs, product owner, and a designer). We started with a monolith and we paid attention to software design and moved quickly.
Also, for some reason it's often overlooked, that you can make your monolith modular and design it so that when the day comes, you can split it up into smaller services. You don't need to start with microservices, you can start with a monolith and figure out later how to split it up (if necessary).
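One way to sketch that "modular monolith you can split later" idea (all names here are hypothetical, just to illustrate the shape): put each module behind a narrow interface, so callers never see the implementation. Today it's an in-memory object; the day you need to split, you swap in a remote client without touching any caller.

```python
from abc import ABC, abstractmethod

class BasketService(ABC):
    """Narrow, non-leaky boundary: callers only ever see this interface."""
    @abstractmethod
    def add_item(self, basket_id: str, sku: str) -> int:
        """Add an item; return the basket's new item count."""

class InProcessBasket(BasketService):
    """Today: a plain in-memory module inside the monolith."""
    def __init__(self):
        self._baskets = {}
    def add_item(self, basket_id, sku):
        self._baskets.setdefault(basket_id, []).append(sku)
        return len(self._baskets[basket_id])

# Later, a RemoteBasket(BasketService) wrapping HTTP calls could be swapped
# in without touching any caller -- that's the whole trick.
def checkout_flow(basket: BasketService):
    basket.add_item("b1", "sku-123")
    return basket.add_item("b1", "sku-456")

print(checkout_flow(InProcessBasket()))  # 2
```

The discipline that matters is that `checkout_flow` depends only on the interface, so the split-up decision stays reversible and cheap to defer.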
Microservices and "monoliths" have their place, you just need to know when to use which.
The main problem I have with microservice marketing:
It is often promoted to clients that do not have applications that are large or critical enough to warrant leveraging them.
That buyers often are not properly warned about their inability to easily migrate if they invest in platform-specific microservices too heavily.
And clients are often not aware of the operational costs that can rise over time for each component of the distributed architecture.
"Monolithic" solutions have also not stagnated... They can be run in distributed methods, they can leverage microservices in parts, they can also leverage containers, they are far from obsolescence because they are using the same languages that microservice architectures use, just with less distribution overall.
The term monolithic is often used to indicate that less-distributed solutions are somehow "out of date", "obsolete" and "not innovating" inaccurately, when the real story is that the business case usually dictates which solution will fit best.
Meh. In order to sound smart on HN it's easiest to point at something and call it "hype".
> Microservices are relatively new and unproven
SOA is old as fuck. Microservices are also fairly old, but especially when you consider they're really just SOA + dogma.
> Microservice architecture is also inherently designed to lock a customer into very specific tools that make future migration to any other platform a very costly decision in most cases...
No? Not at all.
> Instead of being charged for one single server annually, on microservices you can be charged for many individual components that run your app independently, and when usage skyrockets, it's a sticker shock that you can only stop by going offline.
Alternatively phrased: If you only use one service you only pay for it, not for the whole suite of features you don't need or want.
> We have also seen enough failures and pain points within microservice and even cloud architectures over the past two years alone to raise questions about whether or not it is indeed a better solution.
And plenty of success stories.
> We need to stop disparaging traditional (non-cloud) hosting and solutions that aren't obsolete at all in this manner, and focus on what works, what is secure, and what is cost effective in order to stay sustainable into the future.
Microservices work, are secure, and are cost effective.
Honestly your post contains no useful information and is satirically close to a "return to traditional family values!" speech.
They are so old, buddy, actually. Splitting a monolith into services (they haven't always been micro) is a natural evolution for any software.
No less natural than "joining the innumerable incompatible and bug-ridden fragments into a single unified solution."
Linux is a bazaar. Except distros and package repositories are cathedrals.
Windows is a cathedral. Except software distribution is a bazaar.
What I think he is saying is that Microservices people pitch their service against monolith as better, but monolith hasn't been in vogue for 20 years. I saw the same tactic with scrum people pitching against waterfall which hadn't been in vogue for quite a while either.
That's the thing about the marketing-first model we're dealing with now... No real innovation, just branding/name changes and highly tailored customization to lock a customer into a specific platform.
"Microservice" isn't really about that; it's a marketing paradigm that exists to serve the (paid) tools, not really the architecture. It is EXACTLY like "serverless": not an architecture, but really a way of promoting paid platforms.
The parent is 100% right about their point about marketing buzzwords, because that's really all it is about.
Say you had a payroll system in an enterprise of 1000 employees and you needed that same data in 5 applications across 3 different departments. You would wrap payroll in a service and make that data accessible in multiple places.
I think that is still a valid approach: build a monolith app and use multiple services if they are internal apps.
For customer-facing and quickly changing stuff you might want to add microservices to be able to build new features quickly; ideally, a microservice should have its own database with what it needs to operate.
Because the broken thing you know isn't called "broken."
> The term "Monolith" was devised by people who wanted to brand microservices as newer and superior.
Not everything is a conspiracy. Sometimes it’s just useful to have a word to describe a particular architecture. In this particular case, “monolith” isn’t even disparaging, so if Big Microservices were trying to disparage monolithic architectures, why wouldn’t they use a term with a negative connotation?
> If it's not broken, don't fix it... Microservices are relatively new and unproven.
The microservices people would argue that monoliths are broken for many use cases. In particular, individual teams can’t deploy their code without coordinating with every other team, which yields long user feedback loops and a bunch of other knock-on effects. Microservices exist to support nimble organizations by helping to remove technical coupling between teams. This is all 101-level stuff but the microservices critics always ignore it in their criticism.
> Microservice architecture is also inherently designed to lock a customer into very specific tools that make future migration to any other platform a very costly decision in most cases... Thereby locking a customer into platform-specific dependency.
I don’t think you could be more incorrect :). Microservices are almost universally built atop containers, and the whole purpose of containers is to decouple the application from the platform.
> We have also seen enough failures and pain points within microservice and even cloud architectures over the past two years alone to raise questions about whether or not it it indeed a better solution.
Microservices are typically more robust than monoliths if only because components are isolated—a failure in a superficial component doesn’t bring the whole app down. Moreover, monoliths are less secure as well because there’s no way to regulate permissions within the process—anything one component can do, the whole system can do.
> The more we allow marketing minds to take control of our IT decisions over reasonable technical minds
Literally laughing out loud at the idea that marketing people are behind microservices.
Fine until one of the microservices needs an update with new features -> new API because Something Has Changed, and suddenly...
>the whole purpose of containers is to decouple the application from the platform.
Are containers not a de facto platform?
>Literally laughing out loud at the idea that marketing people are behind microservices.
Here's a moderately complete list of products. How many are pay-to-play?
https://www.aquasec.com/cloud-native-academy/container-platf...
All you're really doing with containers is creating a meta-monolith running on external hardware with custom automation - managed by an ever so handy third party software product. Also running on external hardware. All of which you're paying for.
You can also DIY and not pay. In theory. But really...?
This makes sense at global scale where you're drowning in income and need to handle all kinds of everything for $very_large_number customers.
It's complete madness for a small startup that doesn't even have a proven market yet.
This isn't the reality on the ground. Where I currently work, "Monolith" is absolutely used as a pejorative by those advocating for microservices.
Clearly you do, because this is mostly nonsense driven by your knee jerk reaction to “microservices”. Very little you’ve written here is substantive. It’s all emotional appeal covering ignorance.
> If it's not broken, don't fix it...
But it is broken. Engineers often experience significant pain from monoliths so they look for a solution. They often also experience significant pain from microservices so the pendulum returns. Hopefully during all of this we learn enough that at least some pain is reduced, whether we land on microservices or monoliths or hybrid solutions.
> We need to stop disparaging … and focus on what works
Here I agree. Focus on what works and stop engaging in low value attacks on solutions that clearly work for some.
> The more we allow marketing minds to take control of our IT decisions…
What “marketing minds” are making decisions about service architecture? This seems like an imaginary issue.
It is hard to hold this comment in a generous light (per HN rules) when some of my most poignant experiences in dealing with tech vendor salespeople at conferences are (and I paraphrase), "well, it's working great for $MEGACORP, you do want to be like $MEGACORP, don't you?"
One time I asked one of those salespeople point blank if that line actually works on people. Apparently it does. And those people for whom that line works with make big expensive tech decisions.
If you are a no-name rapper, you start dissing bigger guys so they diss you back and you get notoriety because someone noticed you.
As a politician, you have to say the others are the worst and broke everything, but that you have a plan to fix everything that is broken now.
In the end all the swearing is posturing, and all the "great plans" turn out not to be possible in reality.
While yes you can do nice stuff with microservices, it is not a silver bullet.
Can you elaborate on this? Examples? Thanks!!
A "monolithic" solution usually relies on a basic (e.g. LAMP) stack that can all run on one server if the need arises... Your web and database servers can be migrated from AWS to Azure much more easily if pretty much all of the functionality relies on a close-knit local server architecture, which can also have a less complicated security, code, and endpoint design.
If you create an app that works based on a highly distributed architecture, suddenly migrating a solution is far more complex. Platforms like AWS and Azure do not run all of the same services, and those services often require lots of refactoring to work properly with your prior data; you'll also need to run many test cycles after migrating to ensure that solution integrity is maintained... At that point, a simple migration might as well be a total refactor.
After implementing policies like zero trust and working out whitelisting, if your solution is complicated or large-scale, you also have to deal with service-specific nuances in your code and in your architecture design that don't translate well to the completely different tools available on an alternate cloud hosting platform, because usually they have completely different nuances and conventions on their service architectures.
To put it simply, it's like buying a Ford pickup truck and installing an aftermarket 6.34 x 9.85ft camper top (and other Ford-specific aftermarket parts) on it, and then trying to install that custom Ford camper top and the other Ford aftermarket parts onto a Toyota which only fits a 5.77 x 8.34ft camper later on... It usually doesn't work out well, and usually produces very unexpected results and more financial loss than using a universally sized 5.55 x 8.20ft camper (the monolithic option that fits in both trucks, albeit not perfectly).
Each platform is very specialized in its own way... Just as with Ford and Toyota pickup trucks: they have completely different build dimensions, and that's why the parts aren't interchangeable between the two trucks.
Monolithic solutions were originally designed to work agnostic of platforms, so they can work on either provided that they are implemented correctly...
Ultimately, the business need should be carefully evaluated by an experienced architect to determine which architecture fits the need best, and then other factors should be reviewed (like if you'll need to migrate any time in the future for example) to make the final call.
The problem many developers run into is that some problem domains feel like they're well separated, but in practice those domains are so tightly coupled to each other that merging them is better.
I am interested in hearing more history on this
I’m somewhat sure everyone is somewhere in between depending on who you ask.
Here you go, you can read this book and it explains.
I think this is because our monoliths are so complicated they hide away our technical debt like monstrous jack-in-the-boxes. When you start breaking one into chunks, all of these issues come exploding out. Suddenly huge bugs that no one noticed or cared about are showing up in testing. Old libraries that sat dormant wake from their crypts to harass and torture junior developers. Forgotten binaries, whose source code was lost in the changeover from ancient source control software to Git, start showing security issues in Veracode.
Really, a well-coded monolith is just a bunch of microservices on the same server communicating through memory. In reality it's more of a lich whose eyes shine with the light of the tortured souls of fallen QA testers and developers.
1. You can't "migrate" to microservices from a monolith. This is an architectural decision that is made early on. What "migrating" means here is re-building. Interestingly, migrating from microservices to a monolith is actually much more viable, and often just means sticking everything on one box and talking through function calls or IPC or something instead of HTTP. Don't believe me? See this quote:
> The only ways we could break down our monolith meant that implementing a standard ‘feature’ would involve updating multiple microservices at the same time. Having each feature requiring different combinations of microservices prevented any microservice from being owned by a single team.
Once something is built as "one thing," you can't really easily take it apart into "many things."
2. Microservices does not mean Kubernetes. The idea that to properly implement microservices, you need to set up a k8s cluster and hire 5 devops guys that keep it running is just flat-out wrong.
3. Microservices are "antifragile," to use a Talebian term. So I think that this paragraph is actually incorrect:
> This uncertainty made creating microservices more fraught, as we couldn’t predict what new links would pop up, even in the short term.
A microservice is way easier to change (again, if designed properly), than a huge app that shares state all over the place.
4. What's the point here? It seems like the decision was hasty and predictably a waste of time. Any CTO/architect/tech lead worth his or her salt would've said this is a bad idea to begin with.
You don't need to use Kubernetes, but I strongly believe it's the best choice if you're not using FaaS. If you pick Nomad or bare VMs, you'll spend a lot of your time building a framework to deploy/monitor/network/configure etc. your services, whereas Kubernetes has "sane" defaults for all of these.
That said, you should use managed Kubernetes and not deploy it from scratch.
Sounds almost sarcastic. How do you deliver API changes without alerting other teams?
It also helps with zero-downtime deployments:
1) spawn a new instance of the service with the new API, side by side with the old one
2) now incoming traffic (which still expects the old API) is routed to the new instance with the new API, and it's OK, because it's backward-compatible
3) shut down the old instance
4) eventually some time later all clients are switched to the new API, we can delete the old code
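The key enabler for steps 1-4 is that the new code stays backward-compatible with the old request shape. A minimal sketch (field names and payload shapes invented for illustration) of a v2 handler that still accepts v1 payloads, so old and new instances can serve traffic side by side during the rollout:

```python
def handle_create_order(payload: dict) -> dict:
    """v2 handler that still accepts v1 payloads during a rolling deploy."""
    # v1 clients send a bare "item"; v2 clients send an "items" list.
    if "items" in payload:
        items = payload["items"]
    elif "item" in payload:  # legacy shape, kept until all v1 clients are gone
        items = [payload["item"]]
    else:
        raise ValueError("no items in payload")
    return {"status": "created", "count": len(items)}

print(handle_create_order({"item": "sku-1"}))              # v1 client still works
print(handle_create_order({"items": ["sku-1", "sku-2"]}))  # v2 client
```

Only once telemetry shows no more v1-shaped requests (step 4) is it safe to delete the legacy branch.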
Sounds almost sarcastic. To deliver API changes without alerting other teams you, of course, simply deploy the changes without sending a message to the other teams.
The non-sarcastic answer is that sometimes you want to make changes that will not affect an APIs users in any significant way. Of course you would still document these changes in a change log that the consumers of the API may or may not check. Or you may want to hype/market these changes for clout reasons.
Maybe it's an API that services multiple sets of users with different partially-overlapping requirements and they don't all need to know about the new change.
Maybe it's a soft launch for a surprise feature that's going to be announced later.
Maybe the other team is on vacation and you just want to get changes out the door before some holiday.
When engineering an API meant for consumption by disparate services, it’s imperative to provide backwards compatibility.
This is pretty basic stuff anyone designing a serious API should be taking into account.
Sometimes the cost is worth it. Most of the time it's not
I've always worked on monoliths, and I've almost never needed to coordinate a release with anyone. I just merge my branch and deploy. GitHub and Shopify talk about merging and deploying monoliths hundreds of times per day without coordination.
The case where you would need to coordinate a release in a monolith is exactly the same case where you would need to coordinate a release in microservice app. That's the case where your change depends on someone else releasing their change first. It doesn't matter if their change is in a different service, or just in a different module or set of modules in the same application.
Now, most applications are not well architected, microservices or monoliths. In the case of a poorly architected app, deploying a monolith is much easier anyway: just merge all that spaghetti and push one button, vs trying to coordinate the releases of 15 tangled microservices in the proper order.
If you really need to change the API, give the new API another name. You may choose to think of this as "versioned APIs", if you want, but "versioned" and "renamed" are the same thing.
Document and set expectations accordingly. I've done this move before breaking apart a monolith into separate micro services and this is key. Spending more time on good documentation is generally a good idea regardless.
I'm assuming we're not talking about public facing APIs. That's a situation where versioning might make a lot more sense.
[1] I'm aware that that's not always true (e.g. adding a field that's ridiculously large choking up the parser).
They have multiple versions of calls. The older ones function as before and never change. Want different behavior? Here is your_interface_v1(), your_interface_v2(), etc.
You still alert the team about new functionality, but they're free to consume it at their own pace. This of course involves a boatload of design and planning.
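A minimal sketch of that side-by-side versioning (all function names and data here are made up for illustration): v1 keeps its exact old behavior frozen, while v2 adds new functionality alongside it.

```python
_PRICES = {"sku-1": 10.0}
_RATES = {"USD": 1.0, "EUR": 0.9}

def get_price_v1(sku: str) -> float:
    """Original behavior, frozen forever: a bare price in USD."""
    return _PRICES[sku]

def get_price_v2(sku: str, currency: str = "USD") -> dict:
    """New behavior lives under a new name: currency-aware, structured result."""
    amount = _PRICES[sku] * _RATES[currency]
    return {"sku": sku, "amount": round(amount, 2), "currency": currency}

# Old callers keep using v1 untouched; new callers opt into v2 at their own pace.
print(get_price_v1("sku-1"))         # 10.0
print(get_price_v2("sku-1", "EUR"))  # {'sku': 'sku-1', 'amount': 9.0, 'currency': 'EUR'}
```

This is the grandparent's "versioned = renamed" point in miniature: nothing about v1's contract changes, so no coordination is needed to ship v2.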
I am in general against microservices and consider those as the last resort when nothing else works. To me a microservice is mostly my monolith interacting with another monolith.
When a monolith becomes big enough that it needs 2 teams, I usually handle it by having each team release their part as a library that still gets linked into the same monolith. That is my version of a "microservice" when the only reason for it to exist is to have two or more "independent" teams.
But other behavior changes are also not necessarily something that requires a team to be alerted. A good design provides an abstraction where the caller shouldn't have to care about the underlying implementation or details of how a request is fulfilled.
(By the way, just because there's still quite a bit of coupling between services, doesn't mean there aren't clear boundaries - Microservices can communicate with one another all the time and still be justified in being decoupled)
There isn't an absolute answer to monolith vs microservices - It depends case by case.
Instagram was built using Django and I'm unsure of ig's architecture today, but it remained monolithic for a very long time (at least till late 2019), and if that architecture sufficed for Instagram, I'm sure it would suffice for many other projects.
However, still, it's not a this or that as many of the comments here would seemingly imply - Again, it's HEAVILY dependent on the case.
This is honestly pretty rare, at least in my experience. What I have seen is that organizations will buy in to the microservices hype, then dictate to their teams what stacks, deployment paradigms, etc. (sometimes even down to the sprint cadence) are acceptable.
Seems like an organizational decision
There was slightly more freedom of choice when I was at AWS, but compliance requirements and tooling support basically strongly encouraged everyone to adopt a standardized stack.
All of which is to say, I get that what you're describing is in theory what microservices are supposed to allow, but I have yet to see it actually work that way in practice.
The core of the product is found in the monolith. We use bounded contexts ("modular monolith") with strictly separated concerns. There are no immediate plans to split the core into microservices (unless absolutely necessary) because the logic between modules is too intertwined and coupled. Splitting the core into microservices would overcomplicate everything for us, and the performance would suffer.
As for microservices, we usually use them for:
1) critical infrastructure which needs to be fast and scalable (for example, the auth service)
2) isolated helper services, for example a service which allows integration with third-party platforms
3) isolated features/products which minimally interact with the rest of the system; for example, we have an optional feature which shares the UI with the rest of the application, and uses some of its data, but ultimately it's a product of its own, it's developed separately with its own codebase, and integrated into the monolith
So I think it's a false dichotomy that you either have a monolith, or microservices. You can use both, they can complement each other.
>because the logic between modules is too intertwined and coupled.
That doesn't sound very modular? If your bounded contexts are intertwined, I don't think they can be considered bounded contexts. A modular monolith would only communicate between contexts through well-defined and non-leaky APIs, and that's the opposite of intertwined.
What I meant by intertwined (maybe a wrong word, I'm not a native speaker):
1) there's a lot of data/logic dependency between the contexts (i.e. a context in its operation depends on N other contexts), although we at least disallow circular references; it's unfortunately dictated by business rules and I'd like to see contexts to be more isolated and self-contained. Some can say that if a change in the requirements requires to change many contexts at once, maybe it's one fat context after all - and they may be right, but we enjoy the current modularization effort, one big fat module would be far less manageable for us.
2) there are occurrences of temporal coupling; there are synchronous operations that span several contexts, with a lot of data flowing back and forth
Now, it's easier to manage it in a monolith, in the same process, because:
1) there are no network trips back and forth in case of complex operations with a lot of data
2) no retry logic in case of network connectivity issues
3) DB connections/locks and other in-memory structures can be reused
4) same codebase, so easier to reason about
Microservices require more care and more complex solutions:
1) distributed transactions are hard
2) eventual consistency is hard
3) the idiom "DB per microservice" makes managing the infrastructure harder
4) deployment is harder (if you have changes in several related contexts, there's only 1 deployment in the monolith as opposed to N deployments of microservices)
5) you have to manage different codebases/repos, can't see the whole picture
6) you have to defend against network connectivity issues, microservice unavailability etc.
7) debugging is harder, you can't just step into another microservice like you do with in-memory modules
8) new devs need to be taught all that
The list can go on and on. So we don't try to turn all our modules/contexts into microservices just because we like microservices; we have to substantiate a move to a microservice with evidence that it will make development/scalability easier for us, and that the advantages outweigh the disadvantages.
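Point 6 in that list is easy to underestimate: once a module boundary becomes a network boundary, every cross-service call needs defensive wrapping that an in-process module call never did. A minimal sketch of that kind of wrapper (illustrative only, not any particular library):

```python
# Illustrative only: the kind of defensive wrapper a cross-service call
# needs once a module boundary becomes a network boundary. An in-process
# module call needs none of this.
import time

def call_with_retry(fn, attempts=3, base_delay=0.1):
    """Retry fn on connection errors with exponential backoff."""
    for attempt in range(attempts):
        try:
            return fn()
        except ConnectionError:
            if attempt == attempts - 1:
                raise  # out of retries; surface the failure
            time.sleep(base_delay * 2 ** attempt)
```

And this still ignores timeouts, circuit breaking, idempotency of the retried call, and all the other concerns the list above alludes to.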
If the monolith is composed of modules with a DAG-like dependency structure (e.g. maven projects), then pieces of the monolith can be deployed alongside the dependencies they need.
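To make that concrete, a toy sketch (module names are made up) of computing, from the dependency DAG, which modules must ship alongside a given piece:

```python
# Hypothetical sketch: given a module dependency DAG (module -> direct
# dependencies), compute everything that must ship alongside a target module.
def deploy_set(dag, target):
    needed, stack = set(), [target]
    while stack:
        mod = stack.pop()
        if mod not in needed:
            needed.add(mod)
            stack.extend(dag.get(mod, ()))
    return needed

# Example DAG: the "checkout" piece deploys with just its own dependencies,
# without dragging in unrelated modules like "admin-ui".
dag = {
    "checkout": ["pricing", "inventory"],
    "pricing": ["core"],
    "inventory": ["core"],
    "admin-ui": ["core"],
    "core": [],
}
print(sorted(deploy_set(dag, "checkout")))
# -> ['checkout', 'core', 'inventory', 'pricing']
```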
for anybody else passing by, the people still doing things like a LAMP stack on a single compute instance are getting the same user-experience issues under load that we solved over the last decade or so. I'm just not running into people who have looked at a system design tutorial in the last 10 years and thought "ah, this is the use case where I can reject all these guidelines because I'm so experienced"
Maybe redesigning the architecture of the product just because there is time, rather than because there is a pain point/problem that needs solving, is already a red flag. In this context it feels like "microservices" was a hammer looking for a nail, and they had no such nail.
Edit: typo
Microservices can solve some problems, e.g. scaling infrastructure in a non-uniform manner, or scaling development velocity non-uniformly across many teams.
But there are also tons of other ways to solve these problems. The mistake is in assuming that you need microservices to do x, without really critically thinking about what is actually stopping you from having x right now.
The move to microservices (or any similar kind of rewrite efforts) should be undertaken only when it's painfully obvious that it's needed.
Actor systems are a natural fit for this eventual de-coupling. What starts as a simple actor w/ a mailbox can eventually grow to a standalone REST service with minimal headache in architectural refactoring.
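A minimal in-process actor really is just a mailbox plus a thread draining it, one message at a time. The sketch below (all names illustrative) shows why the later promotion to a standalone service is mostly a transport swap: callers already interact fire-and-forget through `send`, so replacing the queue with an HTTP endpoint doesn't change the programming model.

```python
# Minimal in-process actor: a mailbox plus a worker thread draining it,
# one message at a time. Names are illustrative, not any actor framework.
import queue
import threading

class Actor:
    def __init__(self, handler):
        self.mailbox = queue.Queue()
        self.handler = handler
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, msg):
        self.mailbox.put(msg)  # fire-and-forget, like a request to a service

    def drain(self):
        self.mailbox.join()  # block until every sent message is handled

    def _run(self):
        while True:
            msg = self.mailbox.get()
            try:
                self.handler(msg)
            finally:
                self.mailbox.task_done()

results = []
greeter = Actor(lambda name: results.append(f"hello {name}"))
greeter.send("world")
greeter.drain()
```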
Congratulations, you have micro services!
As someone who has driven the migration from a monolith (just set environment variables and magically the same codebase becomes auth, notifications, workers, web and API, and the same codebase reaches into every single database and talks to every single service) to microservices because simple features were taking months to implement, I can confidently say that even today, in 2022, an average organization does not have the tooling or the team to do a monolith well. The monolith is a cargo cult. Break stuff into digestible chunks, externalize your own internal libraries if they are shared, version the interfaces, and stop worrying about microservice complexities.
The second article also provides some insight into the services. Those make sense to me - they truly sound like independent, relatively large pieces of software. Not like the "LoginService" type of thing you sometimes see.
A few examples: 1) create a main menu list of movies, 2) determine your subscription status to provide content relevant to that subscription tier, 3) use your watch history to recommend videos you may like.
[1] https://www.macrotrends.net/stocks/charts/NFLX/netflix/numbe... [2] https://www.cloudzero.com/blog/netflix-aws?hs_amp=true
But, my company split out a much older, larger monolith over many years into separate services with clear ownership across a variety of teams. This has been a huge benefit, coming from clear ownership, API boundaries, and separation of concerns.
So neither monoliths nor microservices are a clear winner. It depends on context. An easy litmus test, IMO, is that a single dev team will get little to no benefit from managing many microservices, but a company-scale problem will get a lot of benefit from having each team manage and deal with an independent service.
Ultimately it works well if you have something very high scale with an enduring set of requirements.
I started work at Amazon in 2001 when they were near the beginning of the transition to microservices. I think they had a couple thousand software developers at that time.
A DDD approach up front will help with granularity.
The other leg is serverless support. Without that you are stuck maintaining infrastructure in tandem with all the other considerations, which takes a lot of specialists - lots of engineers.
Definitely a game of scale and not for small organizations.
However, if scale is the key ingredient for success and the value proposition is based on scale, then this kind of architecture is worth looking at.
That said, all shops are not Netflix or AWS...
Wouldn't this piece of business logic be best placed in an importable module? Then that module would be imported by those 4 microservices and problem solved...? I don't really understand this argument.
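As a sketch of what that would look like (all names hypothetical): the rule lives in one shared, versioned internal package, and each service imports it instead of carrying its own copy.

```python
# pricing.py -- a hypothetical shared, versioned internal package. The
# checkout service and the invoicing service would both import this
# instead of each reimplementing the rule.
def basket_total(items, discount=0.0):
    """items: iterable of (quantity, unit_price) pairs."""
    subtotal = sum(qty * price for qty, price in items)
    return round(subtotal * (1 - discount), 2)

# In any consuming service: `from pricing import basket_total`
total = basket_total([(2, 9.99), (1, 5.00)], discount=0.1)
print(total)  # -> 22.48
```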
Effective use of microservices depends upon a strong, meaningful boundary between the services and that boundary should be business driven, not code driven. As soon as you start dealing in packages of code[1], there’s no longer a meaningful boundary between your services, instead the boundary is completely arbitrary and each service becomes a microservice in name only.
If every microservice knows about the business logic for generating basket prices, whether the code comes from a package or not, you no longer have microservices… you have a lot of monoliths.
I joined a company that did this and it was one of my worst experiences as a software engineer, I would never recommend it.
[1] specifically packages containing business logic. Packages containing functionality for cross-service communication etc. are very reasonable.
While this sounds very radical (to me at least), I mostly understand how you've come to this conclusion. Obviously "just one package" is going to lead to further complexities down the line, and perhaps many more packages than that.
Perhaps a dedicated microservice for this piece of business logic would be better, as you suggested.
I’m in a team of 4 and the few APIs we expose would be considered microservices. We did that because it was easiest and fastest for us to build and maintain, and the features we provide are all quite distinct.
Suppose, for example, your webapp backend has to do some very expensive ML GPU processing for 1% of your incoming traffic. If you deploy your backend as a monolith, every single one of your backend nodes has to be an expensive GPU node, and as your normal traffic increases, you have to scale using GPU nodes regardless of whether you actually need more GPU compute power for your ML traffic.
If you instead deploy the ML logic as a separate service, it can be hosted on GPU nodes, while the rest of your logic is hosted on much cheaper regular compute nodes, and both can be scaled separately.
Availability is another good example. Suppose you have some API endpoints that are both far more compute intensive than the rest of your app, but also less essential. If you deploy these as a separate service, a traffic surge to the expensive endpoints will slow them down due to resource starvation (at least until autoscaling catches up), but the rest of your app will be unaffected.
Real-world monoliths often do have some supporting services or partner services that they interact with. That doesn't mean you need a "micro-service architecture" in order to scale your workload.
In fact, this is what you usually do with "worker" nodes that do background jobs.
And you can always have feature flags/environment variables to disable everything you don't need in a given cluster.
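A sketch of that flag-based approach (the variable and component names are made up): one codebase, but each cluster starts only the components its role needs, so the GPU-heavy workers and the cheap web nodes still come from the same deploy artifact.

```python
# Illustrative sketch: one codebase, role selected per cluster via an
# environment variable. Component and variable names are hypothetical.
import os

COMPONENTS_BY_ROLE = {
    "web": ["http_api", "sessions"],
    "worker": ["job_queue", "ml_inference"],  # e.g. the GPU-heavy bits
    "all": ["http_api", "sessions", "job_queue", "ml_inference"],
}

def enabled_components(role):
    return COMPONENTS_BY_ROLE[role]

role = os.environ.get("APP_ROLE", "all")
for component in enabled_components(role):
    print(f"starting {component}")
```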
And that, I think, is how you should approach micro services: use them to solve an organizational problem, not a technical one.
Man-with-a-hammer syndrome is dangerous.
We need to show customers a list of products they registered in a scrollable list. The product registration data is basically a product ID, the user ID and date of registration. It does not contain actual product details.
The product registration data is in one database whilst the product details data is in another. Each has their own data store, API and team.
So it's a classic join problem. To show the customer a list of their registered products including the product name and a thumbnail of it, a call has to be made to get the list of product registrations, plus several dozens of individual calls to get the product details (name, picture), one call for each item.
It's terrible UX. Slow and jumpy.
Yes, I know... a monolith would also struggle with this scenario, as the data sources are split for organizational reasons not relevant to discuss here. You might also create a new service that somehow joins these data sources, although I'm sure some would call that an anti-pattern.
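For completeness, a sketch of such a join (everything here is hypothetical, including the bulk endpoint, which the product-details team would actually have to offer): one call for the registration list, then a single batched call for details, instead of dozens of per-item calls.

```python
# Hypothetical sketch of the screen-shaped join: one batched details call
# instead of one call per registered product.
def list_registered_products(registrations, fetch_details_bulk):
    """registrations: rows from the registration service;
    fetch_details_bulk: ONE round trip to the product service for all IDs."""
    ids = [r["product_id"] for r in registrations]
    details = fetch_details_bulk(ids)
    return [{"registered_on": r["date"], **details[r["product_id"]]}
            for r in registrations]

# Fake product-details service standing in for the other team's API:
catalog = {"p1": {"name": "Toaster", "thumb": "t.png"},
           "p2": {"name": "Kettle", "thumb": "k.png"}}
rows = list_registered_products(
    [{"product_id": "p2", "date": "2022-01-03"}],
    lambda ids: {i: catalog[i] for i in ids},
)
```

Which only works, of course, if the bulk endpoint exists - which is exactly the roadmap problem described below.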
What is the best solution isn't really my main point, rather that boundaries are not typically respected, whether it is at data level or at service level. The real world doesn't care because data and services are not products. Furthermore, when designing these boundaries you absolutely can not foresee how they're going to be used.
As such, it is incredibly common that for an app/web developer, the API fails to meet needs. It doesn't have the data, you get too much of it, or combinations of data are not efficient to get. In my experience, this is the norm, not the exception.
There's another downside. Now that your services are maintained by autonomous teams, guess what...they really are autonomous. That extra data that you need...sorry, not on our roadmap. Try again in 6 months. Sorry, another team is building a high prio mobile app, their needs come first.
A boundary is not a feature. A boundary is a problem. It makes everything inefficient, slow and complex.
I'm not closed minded, there's a time and place for micro services but I would consider it a last resort. Only do it when having explored all options to avoid it and when things really are bursting at the seams.
If your application fits in 1 server, you have a choice, otherwise you don't.
If your application can't fit in 1 server and you can't split it up, you have to refactor so it can.
If you can't refactor your application to have isolated domains, aka your domain is so complex it must take up an entire server, you have a serious problem.
Clustered applications are unavoidable.
I remember hearing a story (from a person inside said company) about a reasonably sized company with a 2 (really 1) man dev team (they contract for most of their needs) and how excited the company was to move their internal stuff to microservices and off a monolith.
He looked at me like I was insane when I said a monolith would work better for them.
This is the main takeaway here. If you have a problem and think microservices might be a good way to solve it and possibly worth the effort, then go ahead and investigate. But without a clear problem and plausible solution involving MS, it's going to be a big waste of time.
I had always assumed that it was the other way around. Good to be made aware of the alternative.
You know what? You can totally screw up both architectures, you can have cost overruns, and you can fail to scale. Neither microservices nor monoliths are going to make you succeed or fail.
The real question is, where do you want to put the consistency? Is that the right way to do it for your app? Can your team maintain and keep building, or is maintenance going to blow you up?
Instead of creating isolation over a network interface, we add an abstraction to achieve it.
The amount of effort wasted is just not worth it in 90% of the cases.
Monoliths aren't "better".
Because the whole idea of "better" makes absolutely no sense without context. Sometimes microservices are better in a certain context. Sometimes a monolith is better in a different context. And sometimes, one or the other is "better" but not by enough of a margin to care about.
It's the oldest cliche in the book, but one that this industry seems to hate with a passion: "Pick the right tool for the job."
Sadly, in our world, the received wisdom sometimes seems to be "Use the newest, shiniest, most hyped tool that is what everyone is talking about."
For example, the tradeoff between centralized and distributed has been navigated (mostly informally) by big institutions for years. It's not possible for a large bank with multiple overlapping domains and hundreds or thousands of dev teams (some of them outsourced/offshored) to have all of its code in a single repo or a single executable. And not all of its applications have the same requirements (technical, scale, etc.) either.
SOA came to aid in this case by putting a common integration pattern between the interested parts.
But at some point the idea was hyped, and even small teams with no diverse technical or scale problems started doing simple backends using full blown distributed systems without reason.
Basically: if you don't have problems of scale (domain, technical or people related), going microservices-first is probably not warranted.
Saying Microservices are better is the same as me saying "a size 20 shoe is better than any other shoe"... for everyone.
It's not a viable statement in any use case, except for people with size 20 feet.
The business need is what determines the solution necessary.
They don't need to be "Micro".
You can have clear separation at the import/library level though. No need to add that extra latency to every call.
If not, enjoy implementing joins across services.
Seems like a straw man.