I don't think "microservices", the idea of having an interface between two systems/codebases/libraries, will ever die. The more microservice-y bits, separate apps communicating over RPCs, make this sort of work much easier.
I'll further assert that "a monolith" is not the antithesis to "microservices architecture." What this article talks about is mainly the ball of mud versus domain-driven design axis, with a nod or two to challenges with a single deployable unit delivering multiple bounded contexts and challenges with developing mature release engineering practices.
What I do see happening is a reshuffling of fashionably architected microservices applications, currently spread across many deployment units, into DDD-informed applications spread across perhaps fewer deployment units than before. Let me explain.
In my experience, a lot of the problems the article mentions with applications deployed as singular units are symptoms of underlying architectural, design, and org chart problems. And that's without even mentioning that the term "microservice" tends not to be understood anywhere near as well as it deserves (or needs) to be.
With respect to that last one, the org chart problems, I've noticed that reorgs and a failure to understand the application's bounded contexts tend to go hand in hand. If you hammer down what your bounded contexts should be, you'll very likely end up with a much more natural set of boundaries for teams, even when they all work on the same code. This requires some bridging and "kum-ba-yah" between the business folks and engineering, but in the grander scheme of things, having everyone on the same page seems super worth it.
Hammered-down bounded contexts, in turn, will help you define your aggregate roots. Those will then let you scale out your persistence layer along aggregate root boundaries and help define how to partition your dataset among several data stores. If you're lucky, this will stave off the need to shard those data stores for a while longer. That addresses the design and architecture problems.
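To make the partitioning idea concrete, here's a minimal sketch. The "Order" aggregate and the two-store setup are invented for illustration, not anything from the article:

```python
# Sketch: routing persistence by aggregate root. Because an aggregate is
# loaded and saved as a unit, everything under one root (the order and its
# line items, say) lands in a single store, so no cross-store transactions
# are needed. The store names here are hypothetical.
import hashlib

DATA_STORES = ["orders-db-0", "orders-db-1"]  # partitions for the Order aggregate

def store_for_aggregate(aggregate_root_id: str) -> str:
    """Deterministically pick a data store by hashing the aggregate root's ID."""
    digest = hashlib.sha256(aggregate_root_id.encode()).digest()
    return DATA_STORES[digest[0] % len(DATA_STORES)]
```

The key property is that the partition key is the aggregate root's identity, so the boundary you drew in the domain model becomes the boundary in the storage layer too.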
Going further, defining proper service interfaces at the boundaries of your bounded contexts will allow you, at some point when the operational overhead truly warrants it, to deploy your application as more than one unit. Most importantly, though, it'll help with release engineering even in the single-deployment-unit case. A service you consume can define an interface that today is a direct call into another module but tomorrow is an RPC across the network.
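That "direct call today, RPC tomorrow" point is the whole trick, so here's a small sketch of it. All the names (InventoryService and friends) are made up for illustration:

```python
# Sketch: one interface at a bounded-context boundary, two implementations.
# The consuming context codes against the interface and never knows whether
# the call stays in-process or crosses the network.
from abc import ABC, abstractmethod

class InventoryService(ABC):
    """The interface the ordering context consumes."""
    @abstractmethod
    def reserve(self, sku: str, qty: int) -> bool: ...

class LocalInventoryService(InventoryService):
    """Today: a direct, in-process call into the inventory module."""
    def __init__(self) -> None:
        self._stock = {"WIDGET": 10}

    def reserve(self, sku: str, qty: int) -> bool:
        if self._stock.get(sku, 0) >= qty:
            self._stock[sku] -= qty
            return True
        return False

class RpcInventoryService(InventoryService):
    """Tomorrow: the same interface backed by an RPC across the network."""
    def __init__(self, endpoint: str) -> None:
        self.endpoint = endpoint

    def reserve(self, sku: str, qty: int) -> bool:
        # Would POST to self.endpoint; omitted to keep the sketch self-contained.
        raise NotImplementedError

def place_order(inventory: InventoryService, sku: str, qty: int) -> str:
    # The ordering context depends only on the interface.
    return "confirmed" if inventory.reserve(sku, qty) else "rejected"
```

Swapping LocalInventoryService for RpcInventoryService is then a wiring and deployment decision, not a rewrite of the ordering context.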
Because you have that modular separation, you can maintain a reasonable development velocity: you've divvied work up among teams split along boundaries that make sense for your problem domain. At that point you can do things like split the code among separate repositories so the compartmentalization is a little more tangible, but decisions here mostly come down to the kind of engineering culture you want to foster and how much effort you want to put into release engineering.
I implore you to read Sam Newman's "Building Microservices" and Martin Fowler's "Patterns of Enterprise Application Architecture." Skip Eric Evans' DDD book; even Evans himself says that Newman's book is a better treatise on DDD than his own.
Graphical programming might change how many people work. More standardized modules (blogging, authentication, e-commerce, etc.) might gradually save us more and more time. But whatever differentiates your product will always need to be expressed in some way.
Sure, we could move toward SQL-like ways of expressing the desired result instead of how the program should compute it, but honestly I think that in many scenarios plain ifs and similar statements will remain the easier way to express yourself for a long time (forever?).
Developments that are sophisticated but fall short of truly replacing engineers wouldn't have the predicted effect. Say they bring a 20% improvement in the time it takes to produce software. Would anyone seriously argue, at this point, that this would put 20% of engineers out of a job, or is it more likely we'd just produce 20% more software?