That's why I don't like the term "microservice", as it suggests each service should be very small. I don't think that's the case.
You can have a distributed system of multiple services of a decent size.
I know "services of a decent size" isn't as catchy as "go for one huge monolith!" or "microservices!" but that's the sensible way to approach things.
Why can the game industry and the like somehow manage this fine, yet in web development, the only place where it's actually possible to adopt this kind of artificial separation over the network, it's supposedly impossible to avoid it beyond an even lower number of devs than a large game has? That suggests confirmation bias to me.
The main problem with microservices is that the split is preemptive. Split off whatever you want when it makes sense after the fact, but intentionally splitting everything up before the fact is madness.
How many times have AAA releases been total crap?
How many times have games been delayed by months or years?
How many times have games left off features like local LAN play, and instead implemented a 'microservice' as a service for online play?
How many times have the console manufacturers said "Yeah, actually you have the option of running a client-server architecture with as many services as you want"?
> How many times have games been delayed by months or years?
What are we arguing here? Because I can think of many microservice apps that are crap as well, and have no velocity in development.
> How many times have games left off features like local LAN play, and instead implemented a 'microservice' as a service for online play?
This is entirely irrelevant. We're talking about the trade-offs of separating networked services that could otherwise be one unit. You're saying "why do games have servers then" which is a befuddling question with an obvious answer.
That's like saying my web server is a microservice because it doesn't run in my client's browser. It makes no sense.
The secret is that you're able to break a monolith apart, just like you can with microservices. You have APIs, and modules of the monolith are each responsible for their own thing. APIs are your contracts, just like in a microservice architecture.
The difference is that you can check whether APIs are broken at compile time. You can view the API right in your IDE. Your API isn't returning wishy-washy JSON with a half-assed OpenAPI spec; it's returning real types in a full-featured type system. And, cherry on top, you don't have to communicate over the network. You don't realize how many bugs and thousands of hours are wasted just working around the network until you no longer have to. It's an immediate productivity boost.
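Here's a minimal sketch of what such an in-process contract might look like in Go (all names here are hypothetical, just for illustration): one module exposes a typed interface, another calls it with a plain function call, and the compiler, rather than a network integration test, catches any caller that breaks the contract.

```go
package main

import (
	"errors"
	"fmt"
)

// Invoice is a real type, not wishy-washy JSON: the compiler
// checks every field access against this definition.
type Invoice struct {
	ID     string
	Amount int64 // cents
}

// Billing is the module's API contract. Renaming or changing a
// method here is a compile error in every caller, found instantly.
type Billing interface {
	CreateInvoice(customerID string, amount int64) (Invoice, error)
}

// inMemoryBilling is one implementation of the contract.
type inMemoryBilling struct {
	next int
}

func (b *inMemoryBilling) CreateInvoice(customerID string, amount int64) (Invoice, error) {
	if amount <= 0 {
		return Invoice{}, errors.New("amount must be positive")
	}
	b.next++
	return Invoice{ID: fmt.Sprintf("inv-%d", b.next), Amount: amount}, nil
}

// checkout is another "module" calling billing through the contract:
// an in-process function call, no network, no serialization.
func checkout(b Billing, customerID string, amount int64) (string, error) {
	inv, err := b.CreateInvoice(customerID, amount)
	if err != nil {
		return "", err
	}
	return inv.ID, nil
}

func main() {
	id, err := checkout(&inMemoryBilling{}, "cust-42", 1999)
	fmt.Println(id, err)
}
```

If `Billing` and `checkout` lived in separate services instead, the same rename would compile fine in both repos and only fail at runtime, in production, over the wire.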
But the best part is probably deployments. It's just so, so much more straightforward with one codebase.
The paranoid socialist in me thinks big companies like team-sized microservices because it lets them prevent workers from talking to each other without completely ruling out producing running software.
When companies instead encourage forums for communication across team boundaries, it unlocks completely different architectural patterns.
I've worked on monoliths with 400+ developers that were great, but it takes skills that people who have only ever worked in orgs that mandate microservices just don't have.
I don't think it's much better if you have to spend a year and a half updating 400+ different repos, though. It's much easier to use an operationalized language that knows backwards compatibility matters.
Even if you're using microservices, it's usually best to have them in the same repo, organized into different directories.
No matter how many people you have, you really should minimize working on the same files concurrently. This is trivial with most languages.
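As a rough sketch, such a monorepo might be laid out like this (directory and team names are made up, just to show the shape):

```
repo/
├── services/
│   ├── billing/     # one service, one directory, one owning team
│   └── checkout/    # teams rarely touch each other's files
├── libs/
│   └── shared/      # common types, versioned atomically with everything else
└── go.mod           # one commit can update a contract and all its callers
```

The directory boundaries give teams ownership without giving up atomic cross-service changes.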