I can believe that monoliths have a lower theoretical
complexity floor than microservice architectures, but I don't think any shops are coming close to that theoretical complexity floor irrespective of their architecture, so we need to think about the
actual, practical complexity of applications in each architecture.
Specifically, I think the strong boundaries between components better facilitate maintainability--as with dynamic typing, the absence of rails makes it easier for people to make bad decisions. In a monolith, this usually looks like an inexperienced developer taking a dependency on another component's private internals, often at the behest of an unscrupulous manager who swears we will definitely not do this again and will fix it in a subsequent release. In other words, I assert that given a good architect and an average team, a microservice architecture will be less complex than a monolith despite marshaling data over a network. A well-engineered monolith requires both good architects and a good team (and no unscrupulous managers empowered to compromise the architecture in the name of expedience).
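To make the "dependency on private internals" failure mode concrete, here's a contrived sketch (all names hypothetical). In-process, nothing enforces the boundary; across a process boundary, the shortcut simply isn't available:

```python
class Billing:
    """Public interface of a hypothetical billing component."""

    def __init__(self):
        self._ledger = []  # private internal state

    def charge(self, customer, amount):
        # Imagine validation, invoicing, and audit logging happen here.
        self._ledger.append((customer, amount))
        return len(self._ledger)  # invoice id

billing = Billing()

# The disciplined call site goes through the public method:
billing.charge("acme", 100)

# But in a monolith nothing stops a neighboring component from reaching
# into the internals, silently coupling itself to the ledger's
# representation and bypassing validation and auditing:
billing._ledger.append(("acme", -100))

# If billing were a separate service, _ledger would live in another
# process; the only way in would be the published API.
```

The language's privacy convention (the leading underscore) is exactly the kind of rail that an unburdened deadline will steamroll; a network boundary can't be steamrolled.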
> no guaranteed data consistency, no global commit guarantees
You don't get these for free with monoliths either. I've seen a lot of monoliths chock-full of race conditions that don't manifest during development, where small, contrived test cases run on unburdened systems. Paradoxically, because microservices marshal data over a network, these consistency issues are more likely to surface early--the added latency widens race windows that a monolith's in-process speed hides. Moreover, people who work on microservices are more likely to think about race conditions precisely because the system is visibly distributed (indeed, the architects consider this when they lay out the interfaces).
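The kind of latent bug I mean is the classic lost update. Here's a contrived, single-threaded sketch where the unlucky interleaving is written out by hand (in a real monolith the scheduler produces it for you, just rarely enough to pass the test suite):

```python
# Two "concurrent" handlers each read-modify-write a shared balance.
balance = 100

read_a = balance       # handler A reads 100
read_b = balance       # handler B reads 100, interleaved before A writes

balance = read_a - 30  # A writes 70
balance = read_b - 50  # B writes 50, clobbering A's deduction

# balance is now 50, not the correct 20: A's update was silently lost.
```

In-process this window is microseconds wide, so small test cases never hit it; put a network round-trip between the read and the write and it yawns open, which is why distributed teams are forced to confront it.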