It's a symptom of over-engineering and building for the future rather than anything inherent to microservices. Java had a whole decade of being obsessed with design patterns (e.g. Facade, Decorator) that resulted in the same spaghetti architecture.
>I'm a junior backend developer and this inspires me a lot to study frontend
I’ve built monoliths, though never at scale, but I don’t see why they wouldn’t have scaled incredibly well. I have built macroservices that have scaled super well (5ish services IIRC).
In my case: fast forward 5 years and the business growth didn't materialize; the board made working there unpleasant enough that all the good and expensive developers left, and the rest of the work was outsourced to India. Those poor contractors have to deal with 20 microservices per team (while we were juggling 5-10, which was already too much; I think 1-2 services per team is right).
The old monolith was fine. Microservices - and transitions to new languages - create a lot of new problems: performance of joins over the network, handling RabbitMQ dead letters, services DDoSing each other, updating a shared library and having to bump it in every service in the entire company.
I feel like it was basically spinning our wheels.
Imagine eliminating network joins (make sure all the data is where it needs to be and/or share a database for reads, which is totally doable)... and eliminating dead letter queues (make sure your service goes offline/retries indefinitely if there is a failure and fix it. Don't tolerate failures. See jidoka)... and don't let services talk to each other (see pub/sub and event sourcing). Oh, and also limit the number of times updating a single library must be applied to all services by getting things as close to right as you can and respecting the physics of software design (see afference and efference).
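The "retry indefinitely instead of dead-lettering" idea can be sketched in a few lines. This is a minimal illustration, not anyone's production code: the consumer blocks on the failing message with exponential backoff until the handler succeeds, so a persistent failure halts that consumer and forces a fix (the jidoka idea) rather than quietly shunting the message to a dead-letter queue. All names here (`process_with_retry`, `flaky`) are made up for the example.

```python
import time

def process_with_retry(message, handler, base_backoff=1.0, max_backoff=30.0,
                       sleep=time.sleep):
    """Block and retry the same message until the handler succeeds.

    Exponential backoff between attempts; no dead-letter queue. If the
    failure is permanent, this consumer stops making progress - which is
    the point: the defect gets noticed and fixed, not tolerated.
    """
    backoff = base_backoff
    while True:
        try:
            return handler(message)
        except Exception as exc:
            print(f"handler failed ({exc!r}); retrying in {backoff:.2f}s")
            sleep(backoff)
            backoff = min(backoff * 2, max_backoff)

# Hypothetical handler that fails twice, then succeeds (e.g. a
# downstream service coming back after a blip).
attempts = {"n": 0}

def flaky(msg):
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise RuntimeError("downstream unavailable")
    return f"processed {msg}"

result = process_with_retry("order-42", flaky, base_backoff=0.01)
```

In a real consumer you'd also want the retry loop to emit alerts so a stuck message pages a human instead of spinning silently.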
Neither situation is good: we're under stress over resource availability while they're stuck in Kafka's world. Last week I learned that two different teams ingest some of our data — which is fine, except that we moved to an API and only the first team uses it. The second team wasn't aware it existed (well, they forgot about it).
I don't know what the right solution is once you reach this kind of headcount. As a PM/PO, I'm baffled by this kind of complexity. My experience leads me to think that no one is actually managing it currently... it kind of works, until it falls over, hard.
It's easier to have a single owner at a smaller scale (something we should do) but even at a company with a large engineering team, there needs to be someone whose responsibility is to manage dependencies. Otherwise, it falls back to design by committee, and that's how you end up in your situation: nobody actually knows how anything works, and there is literally no way to find out.
I am not agreeing with the author, but they said it here:
> the number of pull requests produced daily by a few hundred developers was so large
Conway's Law [1]:
Organizations which design systems are constrained to produce designs which are copies of the communication structures of these organizations.
— Melvin E. Conway, How Do Committees Invent?
e.g. we had a process where every PR would spin up a full environment for automation and manual testing to work from.
Leaving aside your unnecessary name-calling, many tech people indeed do enjoy working together, but not necessarily always in the same physical space.