Now that people are getting sick of cloud I'm sure we are going to re-invent personal computing now with entirely new terminology and re-implementation of all the original ideas. Just please no YAML.
Also openness, choice of vendor, invisible hardware and virtually unlimited scalability are different.
The rest is noise.
Startups aside, most enterprise teams match the following: apps don't need to scale massively (scale-up is enough, and a surprising number of large, well-known services are entirely based on scale-up architectures behind the facade), most don't need dynamic scaling, and most aren't even going to be started and stopped dynamically.
But hiring people who can run a network, only to have them sit idle most of the time or - and this is common - fiddling with things and breaking them as a result; and people who can run servers and VMware and so on, and firewalls, and WAN connectivity, ...
A lot of these things are, in fact, trivial, but vendors made them stupid and hard and painful to set up and doubly painful to troubleshoot. Cloud exploited that situation.
Things aren't a _lot_ better now but they're somewhat better, and now the economics of cloud are starting to get looked at.
I put these into the bucket of productivity and low cost. The issue arises when we need to scale our business, in particular our computation.
Not quite: the cloud forced you to figure out how to scale, do reliability and all the hard parts yourself, mainly because AWS didn't offer features like zero-downtime VM migration or any kind of practical storage migration.
The point of the mainframe is that it's almost like a really fucking huge lambda function host. You sling your app on the mainframe, ask for resources and let it run. If you run out of capacity, buy another mainframe and link it to the old one. Need a new region/reliability zone? Pay a license, install a fat network link, and job's a good'un.
There are videos where HP/IBM literally blow up a mainframe to prove its auto-failover capabilities.
The cloud is a poor proxy of what a mainframe _could_ do at its best. But it's a reasonable facsimile of what a mainframe did at its worst.
The answer is "hey users, did you know you can have GMail on your own iron? All you need is to pull messages from a server into local maildirs and run a local indexer with the relevant UI. If you do that, perhaps with a personal domain, you can change provider or even be the provider yourself without changing ANYTHING, neither the UI nor the addresses/aliases. YOU OWN YOUR DAMN DATA". Or "hey, did you know you can work with some files locally and have them auto-synced back to the server? It's easy." And so on.
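A minimal sketch of the "local maildir plus indexer" idea, using Python's standard `mailbox` module. The IMAP fetch step is stubbed out with a fabricated message so the sketch is self-contained; in practice you'd sync with a tool like isync/offlineimap, and the indexer would be something like notmuch rather than a dict:

```python
import email.message
import mailbox
import os
import tempfile

# A local Maildir: the provider-neutral on-disk format that sync tools
# write into. Your data lives here, not on someone else's server.
root = tempfile.mkdtemp()
md = mailbox.Maildir(os.path.join(root, "inbox"), create=True)

# In real use this message would be pulled from the server via IMAP;
# here we fabricate one to keep the example runnable.
msg = email.message.EmailMessage()
msg["From"] = "alice@example.org"
msg["Subject"] = "Your data, locally"
msg.set_content("Mail that lives in your own maildir.")
key = md.add(msg)

# A trivial local "indexer": map subject lines to message keys.
index = {m["Subject"]: k for k, m in md.items()}
print(index["Your data, locally"] == key)  # True
```

Swap the provider and nothing above changes: the maildir, the index, and the addresses on your own domain all stay put.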
The answer is that we need real desktops, in hardware too, meaning not using craptops as desktop replacements, whether docked with decent screen(s) and input devices or masochistically used directly on their small, bad keyboards and small screens. We need the concept of a home office, with on-the-go tools used ONLY when we actually need them; we need the concept of a homeserver per home, hosting personal services on an IPv6 internet with a static global address per device (allowing privacy extensions, but still with a known static address when needed), and so on.
Unfortunately most people do not care until they discover they are trapped, and by then it's too late: they live in walled gardens, there is not much modern software that works the classic way, and the few who profit from the many do their best to keep such knowledge from spreading.
That door has been closed to chips built on CMOS for a long time. It's why we started scaling horizontally in our chips by adding more parallel cores.
> to render Cloud unnecessary?
The advantage with cloud is, the machines can die, and I'm not severely impacted. A new lambda machine will spool up, will be sent my function, will receive a copy of the asynchronous event to start working again.
This, personally, is why I love the cloud. I no longer have to worry about the mapping of code to the hardware that will run it. I pay up front for designing my code around the architecture, but I win every time it runs in that I have essentially zero maintenance overhead associated with it.
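That "receive a copy of the asynchronous event to start working again" step implies the handler must tolerate redelivery: if a machine dies mid-work, the same event can arrive twice. A minimal idempotent-handler sketch (the event shape and names are hypothetical, and the in-memory set stands in for a durable store):

```python
# Events may be redelivered after a worker dies, so the handler records
# processed event IDs and skips duplicates.
processed: set[str] = set()
results: list[str] = []

def handle(event: dict) -> None:
    event_id = event["id"]
    if event_id in processed:  # duplicate delivery: already done
        return
    processed.add(event_id)    # in real life: a durable store, not memory
    results.append(event["payload"].upper())

# The same event arriving twice (e.g. after a replacement worker
# spools up and gets a copy) only produces one result.
handle({"id": "evt-1", "payload": "hello"})
handle({"id": "evt-1", "payload": "hello"})
print(results)  # ['HELLO']
```

This is the price paid "up front for designing my code around the architecture": handlers are written to be safely re-runnable.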
In fact, I think we can learn from the Mainframe. Mainframes had a much more cohesive and unified platform. I wish we could write a unified program (not a decoupled frontend/backend) and have it run on an abstract, scalable substrate (which happens to be implemented on a distributed system).
Shameless self-promotion: I wrote about this a while back:
Rise of the Hyperplatforms: https://gridwhale.medium.com/rise-of-the-hyperplatforms-d4a1...
GridWhale and a Brief History of Computing: https://gridwhale.medium.com/gridwhale-and-a-brief-history-o...
> One of the overall design goals is to create a computing system which is capable of meeting almost all of the present and near-future requirements of a large computer utility.
In terms of expense, then also maybe, but the point of a mainframe is that you got scaling and reliability as part of the cost, without having to think about it too much. The cloud puts most of the responsibility on you: you still can't live-migrate processes from one VM host to another in AWS.
As to the article's premise that people are getting locked into cloud services as they used to get locked into mainframe services, you could make the same argument about any tech: write your app with language X, and you'll have a hard time translating it to language Y. Write a low-level app that runs on Intel chips and you'll have a hard time porting it to Motorola. That's the nature of software development.
[1] Edited as I discovered that premises is not simply plural of premise [2]
[2] English is not my first language
There's a general recency bias in the field where computing knowledge works like a social media feed. New ideas and new projects are at the top, and old stuff just falls off the bottom regardless of merit. Then someone re-invents it, often claiming credit (not out of deliberate dishonesty, but out of ignorance of the prior art), and then it runs its life cycle and scrolls off the bottom. Repeat, forever.
There are a small number of things like very popular languages and OSes that have staying power but the rest of the ecosystem is fads and churn.
There is some progress under the churn, but the actual progress is almost in spite of the fashion chasing. The churn just forces us to do a ton of extra busy-work re-implementing the universe constantly and chasing fads to remain compatible.
The premise of my response is that you'd find value in knowing the difference and will take it in the positive spirit in which it's intended.
Why?