Now we've come full circle back to the bad old days where you need an entire team of dedicated people and arcane knowledge just to run your application software again.
I suspect that this will continue until we reach a point where the big players out of necessity come to an agreement for a kind of "distributed POSIX" (I mean in spirit, not actual POSIX). These are exciting times living on the edge of a paradigm shift, but also frustrating!
Now that the server-client model is ingrained in essentially every online activity, there is an obvious centralisation of complexity on the server side.
Might be interesting to see how web3 and other decentralised web movements factor into this!
Deploy your files
Cronjob to apt upgrade every night
Why does that need a full team? Or more than 10 minutes every few years?
If it did happen, it's the same as if your machine caught fire: restore from your backup.
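To make the "cronjob to apt upgrade" part concrete, a single cron entry really is all it takes. A minimal sketch, assuming a Debian-style system (the schedule, file name, and log path here are arbitrary choices, not a recommendation):

```shell
# /etc/cron.d/nightly-upgrade
# Refresh package lists and apply upgrades every night at 03:30.
30 3 * * * root apt-get update -qq && apt-get upgrade -y -qq >> /var/log/nightly-upgrade.log 2>&1
```

Debian's unattended-upgrades package is the more polished way to get the same effect, but either way it's minutes of setup, not a team.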
I think this thrust towards self hosting should push the construction of these systems closer to what you describe, but it isn't really that simple yet.
For instance, if you want to self host even a decent all-in-one calendar/notes/etc system like nextcloud, it isn't just `sudo pacman -S nextcloud` and you're done.
There is an enormous amount to consider in how to construct the 1) network architecture, 2) encryption of disks, 3) secure/encrypted, incremental, full-filesystem offsite backups, 4) security and encryption of the network, 5) secure interplay between self-hosted and non-self-hosted data, etc.
An example of (1): much of the discussion around self hosting says you should not open your LAN to the internet with port forwarding (it's an invitation to be hacked), and should instead point your registered domain at a VPN endpoint, so any remote device must log in before it can reach your server (just to point out: you would likely also need a dynamic DNS service to find your system from outside, which is yet another hassle for newcomers).
Further on (1), this involves ensuring the choice of VPN is adequate (many older protocols have known security flaws, like padding-oracle attacks), finding routers with firmware that can run a VPN server (and a VPN client on top of that, if you care about your ISP not selling your traffic to everyone), and choosing hardware and a network architecture that can actually keep up with encrypting your traffic (e.g. can a Raspberry Pi really handle all of your traffic and maintain 1Gbps file downloads/uploads with VPN encryption?).
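As a concrete sketch of the point-a-VPN-at-it setup described above: a minimal WireGuard server config looks something like this. The addresses, port, and key placeholders are illustrative only:

```ini
# /etc/wireguard/wg0.conf on the home server
[Interface]
# VPN-internal address for the server
Address = 10.8.0.1/24
# the single UDP port exposed to the internet
ListenPort = 51820
PrivateKey = <server-private-key>

[Peer]
# a roaming laptop or phone; only holders of a listed key get in
PublicKey = <client-public-key>
AllowedIPs = 10.8.0.2/32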
I recognize that much of this can't really be reduced to a single `sudo apt-get` on one system with recommended hardware, but the more FOSS out there that gets closer to solving all of this, with well-packaged tutorials on how to safely and properly self host (e.g. a complete Google-services replacement) while maintaining security and 10Gbps speeds, the better off we will be.
This really is what we should be driving towards. Companies can certainly offer their servers for customer use, but I think it should be expected that the code used to store that customer data is fully FOSS, for auditing and to allow for self hosting. There's still money to be made for companies in helping people self host everything for themselves; we just have to push for it.
One decent enabler of this is companies that make software for data backups. This is another of the points enumerated above, because it is quite difficult to find a solution that has quantum-secure encryption, excellent compression, incremental (only pushing diffs, like git) backups, and is reasonably simple to use. AFAIK, Duplicati is the only option that seems to hit all of these well (hopefully someone can correct me).
Anyway, this was a meandering long way to say: self hosting is complicated, but I do really hope we can change that.
Backups are far easier now: a VM provider like Linode, DigitalOcean, or even Lightsail will schedule snapshots for you. If you run your own hardware, then mysqldump and restore to your backup server is a small shell script and a cronjob. Disk encryption is handled by your OS.
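The mysqldump-plus-cron setup really is just a few lines; a sketch with made-up hostnames and paths (restore is the same pipe in reverse, `gunzip | mysql`):

```shell
#!/bin/sh
# /usr/local/bin/db-backup.sh -- nightly database dump shipped offsite
# crontab entry: 15 2 * * * /usr/local/bin/db-backup.sh
set -eu
stamp=$(date +%F)
# --single-transaction gives a consistent dump without locking InnoDB tables
mysqldump --single-transaction --all-databases | gzip > "/var/backups/db-$stamp.sql.gz"
scp "/var/backups/db-$stamp.sql.gz" backup-host:/srv/backups/
```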
LAMP was trivial in 2002, it still is. You can use nginx or postgres instead of apache and mysql, but it’s broadly the same.
If you want to make your site complex and dependent on thousands of JavaScript libraries and frameworks which change every year or two, that's fine, but you don't need to; it's a choice, one which adds complexity. If you want five nines, or an absolute guarantee of never failing, you need to think more about replication than just a nightly snapshot, but that's not a problem solved by things like Kubernetes.
If you want to scale to millions of concurrent users pulling terabytes, sure, don’t self host from your DSL on a pi. If you want to serve a personal site for hosting bits of stuff, it’s not hard.