When you split it into microservices you are adding a bunch of infrastructure that did not need to exist. Your app's performance will likely go way down, since things that used to be function calls are now slow network API calls. Add to that container orchestration and all sorts of CNCF-approved things, and the whole thing balloons. If you deploy it in a cloud, networking costs alone will eat far more than your $20 (and if not, you'll likely still need more hardware anyway).
Sure, once you add all that overhead you now may have a service that can be independently scaled.
There's also nothing preventing you from splitting off just that service from the monolith, if that call is really that hot.
No - it depends on capacity needs and current architecture.
> When you split it into microservices you are adding a bunch of infrastructure that did not need to exist.
Unless you need to scale in an economical manner.
> Your app performance will likely go way down since things that used to be function calls are now slow network API calls.
Not necessarily. Caching and shared storage are real strategies.
> Add to that container orchestration and all sorts of CNCF-approved things and the whole thing balloons.
k8s is __not__ required for microservice architectures.
> If you deploy this thing in a cloud, just networking costs alone will eat far more than your $20
Again, not necessarily. All major cloud providers have tools to mitigate this.
To clarify: I'm not a microservice fanboy, but so many here like to throw around blanket statements with baked-in assumptions that are simply not true. Sometimes monos are the right way to go. Sometimes micros are the right way to go. As always, the real answer is "it depends".
To support that 1:10000 transaction that takes the most time and needs the most scaling?
Done in the most naive way possible (just adding more servers to one giant pool), yes, it's as ineffective as you say. What can be effective is segregating resources so that your 1:10000 transaction is isolated and doesn't drag down everything else.
Imagine:
- requests to api.foo.com go to one group of servers
- requests to api.foo.com/reports/ go to another group of servers, because those are the 99th percentile requests
They're both running the same monolith code. But at least slow requests to api.foo.com/reports can't starve out api.foo.com which handles logins and stuff.
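As a rough sketch, the split above is just path-based routing at the load balancer. For example, with nginx it might look something like this (the pool names and internal hostnames are hypothetical):

```nginx
# Two pools, both running the exact same monolith build.
upstream main_pool {
    server app1.internal:8080;
    server app2.internal:8080;
}

upstream reports_pool {
    server reports1.internal:8080;
    server reports2.internal:8080;
}

server {
    listen 443 ssl;
    server_name api.foo.com;

    # Slow 99th-percentile traffic gets its own servers...
    location /reports/ {
        proxy_pass http://reports_pool;
    }

    # ...so it can't starve logins and other fast requests.
    location / {
        proxy_pass http://main_pool;
    }
}
```

Any reverse proxy or cloud load balancer with path-based rules can do the same thing; the point is that capacity is segregated without touching the application code.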
Now, this doesn't work if, say, those calls to api.foo.com/reports create a bunch of database contention that slows things down for api.foo.com anyway at the app level, due to deadlocks or whatever.
There are various inefficiencies here (every server instance gets the whole fat pig monolith deployed to it, even if it's using only a tiny chunk of it), but also potentially some large efficiencies. And it is generally 1000x less work than decomposing a monolith.
Not a magic solution, just one to consider.
Most startups fail. You need to cover as much ground as possible while you have runway, not cock about with microservices.