- API access for managing configuration, version updates/rollbacks, and ACL.
- A solution for unlimited scheduled snapshots without affecting performance.
- Close to immediate replacement of identical setup within seconds of failure.
- API-managed VPC/VPN built in.
- No underlying OS management.
(Probably forgot a few...) I get that going bare metal is a good solution for some, but comparing costs this way without a lot of caveats is meaningless.
[bunch of stuff I don't need]
Exactly. Imagine paying for all that when all you need is bare metal.
Now imagine paying for all that just because you've read on the Internet that it's best practice and that's what the big guys do.
Way back the best practice was what Microsoft, Oracle or Cisco wanted you to buy. Now it's what Amazon wants you to buy.
Buy what you need.
Having all that "best practice" service is great if it works well, but when it becomes a checkbox on the purchase order then it can cause far more problems than it solves.
I have found that a lot of the push to outsource hosting is really an attempt to deflect responsibility for problems, rather than out of any expectation of actually getting more reliability.
For every company or startup that thinks it needs 100% uptime, the reality is that it can not only get away with much less but will, in practice, end up with much less anyway: the extra effort and moving parts (load balancing, distributed databases, etc.) will typically cause something to fail even if the underlying hardware really is 100% up. And, somewhat surprisingly to quite a few people here, such companies manage to survive and thrive despite that (the recent AWS outages took out a lot of services and products, and they still seem to be around somehow).
Ugh. I hate that phrase. The translation into plain English is almost always "What I read in some blog" or "Because I want to" or "It's what our sales rep told us." Even from C-levels who should know better.
Yes.
The opposite is also true though: Imagine not wanting to pay for that and needing it!
There's a reason why most homes connect to the power utility companies. Yes, we can run generators ourselves. Does it make sense to do that? Not usually.
Same thing with this server. If it makes sense for your use-case, outstanding. In many cases, people are better off offloading this to another company and focusing on their strengths.
The minute a gens/solar/wind/batteries combo becomes less expensive than the public utility, I'll switch. For now it makes no financial sense.
With the clouds it is the other way around. My dedicated servers running my software kick the shit out of AWS performance-wise, for a fraction of the price. And no, I do not spend my days "managing" them. I can order a new dedicated server, and the same shell script will reinstall all the prerequisites, restore the data from backup, and have it running in a few minutes (plus however long it takes to import the database from backup). Where needed I also keep up-to-date standby servers.
Other than running this script to test restoration once a month, my management overhead is zero.
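For the curious, the whole thing really can be one short script. A minimal sketch, assuming a Debian-like box; the package list, backup host, and paths below are made up, not my actual setup:

```shell
#!/bin/sh
# Sketch of a rebuild-from-scratch script for a fresh dedicated server.
# Package names, hosts, and paths are placeholders.
set -eu

# run() executes its arguments, or just prints them when DRY_RUN=1,
# so the script can be rehearsed safely on any machine.
DRY_RUN="${DRY_RUN:-1}"
run() {
    if [ "$DRY_RUN" = "1" ]; then
        echo "would run: $*"
    else
        "$@"
    fi
}

main() {
    run apt-get update
    run apt-get install -y postgresql nginx           # prerequisites (placeholder list)
    run rsync -a backup-host:/backups/app/ /srv/app/  # pull application data
    run pg_restore -d appdb /srv/app/db.dump          # import database from backup
    run systemctl enable --now app.service
}

# Invoke as "./rebuild.sh go" on a freshly provisioned machine.
if [ "${1:-}" = "go" ]; then main; fi
```

The dry-run default is what makes the monthly restoration test cheap: you can rehearse the whole sequence without touching a real box.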
See... anyone can pick <random thing> and describe how people don't do it themselves. Of course: you cite a generator, others use solar cells, right?
And that's fine. People should make choices like that. I was objecting to saying people paid X for RDS and others paid Y for one bare metal server. Those are not in the same category, there's no point comparing those prices without a lot more information.
I don't care where those snapshots are stored or how much space they take. In case I need to restore, my IaaS provider gives me a two-click option: one click to restore and a second to confirm. I sit and watch the progress. I also don't care about hardware replacement or anything connected to that. I have to do the VPS OS updates, but that is it.
I do my own data backups on a different VPS, of course, just in case my provider has an issue, but from a convenience perspective that IaaS solution delivers more than I would ask for.
They tend to be the things that you don't need until you absolutely need them right now (or yesterday).
It’s easy to trivialize $20-30k a month when it’s someone else’s money and it’s less work for you.
What I have a problem with is:
- the premium over bare metal is just silly
- maximum vertical scaling being a rather small fraction of what you could get with bare metal
- when you pay for a hot standby you can't use it as a read only replica (true for AWS and GCP, idk about Azure and others)
Though it seems to require you to have 3 instances rather than just letting you read from the standby... I don't quite get the rationale for that.
I'm not sure what you mean here. At least for MySQL you can have an instance configured as replica + read-only and used for reads. Aurora makes that automatic / transparent too with a separate read endpoint.
The fact that a hot standby is usually in some sort of read-replica state prior to failing over is a technical detail that AWS sort of tries to abstract away I think.
>> 1. A solution for unlimited scheduled snapshots without affecting performance.
You can very comfortably have instant and virtually unlimited snapshots with ZFS/jails (they only occupy space when files change). Very easy to automate with cron and a shell script.
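A minimal sketch of that cron job; the dataset name (tank/jails) and retention count are made-up placeholders:

```shell
#!/bin/sh
# Cron-driven ZFS snapshot rotation (sketch). Dataset name and
# retention count are placeholders, not anyone's real layout.
set -eu

DATASET="${DATASET:-tank/jails}"
KEEP="${KEEP:-48}"   # e.g. 48 hourly snapshots = two days of history

# Name a new snapshot after the current timestamp.
snapname() {
    echo "$1@auto-$(date +%Y%m%d-%H%M%S)"
}

# Given an oldest-first, newline-separated list of snapshot names,
# print the ones beyond the newest $2 (i.e. the ones to destroy).
to_prune() {
    list="$1"; keep="$2"
    total=$(printf '%s\n' "$list" | wc -l)
    excess=$((total - keep))
    if [ "$excess" -gt 0 ]; then
        printf '%s\n' "$list" | head -n "$excess"
    fi
}

main() {
    zfs snapshot "$(snapname "$DATASET")"
    # -s creation sorts oldest first, matching what to_prune expects
    snaps=$(zfs list -H -t snapshot -o name -s creation -r "$DATASET")
    for s in $(to_prune "$snaps" "$KEEP"); do
        zfs destroy "$s"
    done
}

# Only touch ZFS when invoked with "run", e.g. from crontab:
#   0 * * * * /root/zfs-snap.sh run
if [ "${1:-}" = "run" ]; then main; fi
```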
>> 2. API access for managing configuration, version updates/rollbacks, and ACL.
>> 3. Close to immediate replacement of identical setup within seconds of failure.
There are a lot of choices for configuration management (saltstack, chef, ansible, ..). I run a shell script in a cron job that takes temporary snapshots of the jail's filesystem, copies them to a directory, and makes an off-site backup. A rollback is as simple as stopping the server, renaming a directory, and restarting it. That probably takes more than a couple of seconds, but not by much. I think I'm uncomfortable exposing an API with root access to my systems to the internet, but I'm not sure how these systems work. I don't think it would be hard to set one up with flask if you wanted it, though.
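The rollback part can be sketched in a few lines; the service name and directory layout here are placeholders:

```shell
#!/bin/sh
# Sketch of the rename-based rollback described above: keep the broken
# tree around for inspection, promote the snapshot copy, restart.
set -eu

rollback() {
    appdir="$1"; snapdir="$2"; stamp="$3"
    mv "$appdir" "$appdir.broken-$stamp"   # keep the bad state for inspection
    mv "$snapdir" "$appdir"                # promote the snapshot copy
}

# Service name and paths are placeholders for a real jail setup.
if [ "${1:-}" = "run" ]; then
    service app stop
    rollback /usr/jails/app /usr/jails/app-snapshot "$(date +%s)"
    service app start
fi
```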
>> 4. No underlying OS management.
I don't know what this is, but I'm curious and looking it up :D.
In most of the posts I'm reading here, people have really beefy rigs. But you could do this on the cheap with a 2000s era laptop if you wanted (that was my first server).
Yes, but that both requires scripting it manually and remains local to the server. Compare that to scheduled RDS backups, which go to S3 with all its consistency guarantees.
> There is a lot of choices for configuration management (saltstack, chef, ansible, ..)
Sure, those are an improvement over doing things manually. But for recovery they can only do so much. Basically, think about how fast you can restore service if your rack goes up in flames.
> I don't know what this is, but I'm curious and looking it up :D
It means: who deals with kernel, SSL, storage, etc. updates? Who updates the firmware? Who monitors SMART alerts? How much time do you spend on that machine that is not 100% related to the database's behaviour?
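A rough sketch of one of those chores, assuming smartmontools is installed; the device names and alert recipient are placeholders:

```shell
#!/bin/sh
# Sketch of a cron job checking SMART health and mailing on trouble.
# Device list and recipient are placeholders.
set -eu

# smartctl -H prints "... test result: PASSED" for a healthy drive;
# this helper just inspects that output.
health_ok() {
    printf '%s\n' "$1" | grep -q 'PASSED'
}

check_disks() {
    for d in "$@"; do
        out=$(smartctl -H "$d") || true
        health_ok "$out" || echo "SMART problem on $d" \
            | mail -s "disk alert" root
    done
}

# e.g. in crontab: 0 6 * * * /root/smart-check.sh run
if [ "${1:-}" = "run" ]; then check_disks /dev/sda /dev/sdb; fi
```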
I wasn't recommending everyone use RDS. If your use case is OK with laptop-level reliability, go for it! You simply can't compare the cost of RDS to the monthly cost of a colo server - they're massively different things.
EDIT: I do use cloud services for some stuff. My point isn’t being anti-cloud, just that nothing is perfect.
For instance, there are LVM2 snapshots. Maybe those do affect performance. If the cost difference is big enough, though, couldn't you just account for that in the budget?
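A sketch of how that looks with LVM2, assuming a made-up volume group vg0 and volume dbdata. LVM snapshots are copy-on-write, so writes to the origin pay a penalty while a snapshot exists - that's the performance cost you'd budget for:

```shell
#!/bin/sh
# Sketch of taking an LVM2 snapshot around a backup, then dropping it.
# Volume group, volume name, and size are placeholders.
set -eu

snap_name() {
    echo "$1-snap-$(date +%Y%m%d)"
}

if [ "${1:-}" = "run" ]; then
    name=$(snap_name dbdata)
    # Reserve 10G for copy-on-write changes while the snapshot lives.
    lvcreate --snapshot --size 10G --name "$name" /dev/vg0/dbdata
    # ... back up from /dev/vg0/$name here, then drop the snapshot
    # to stop paying the copy-on-write penalty:
    lvremove -f "/dev/vg0/$name"
fi
```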
I agree that literal "bare metal" sucks, but self-hosted with cloud characteristics (containers, virtualization) is not totally obsolete.
When you're using the cloud you're paying someone else a very good margin to do all of those things for you.
If you do them yourself you can save quite a lot. Hardware is surprisingly cheap these days, even with the chip shortage factored in, if compared to cloud offerings.
No access to the underlying OS can actually be a problem. I had a situation where, after a DB ran out of disk space, it ended up stuck in “modifying” for 12 hours, presumably until an AWS operator manually fixed it. Being able to SSH in and fix it ourselves would’ve been much quicker.
"Identical setup within seconds of failure" being the exception as you need to deploy from backup which can take minutes/hours (depending on backup size) even if you have the spare hardware. Fortunately, that's the least-needed feature in your list.
It's called software, you put that on top of a server, it's not some kind of magic.
Sure, you're paying the cost of some DBAs and SREs with that price. Still, it seems way above what it costs them.
Factor them all in, and "the cloud" is thousands to millions of times more expensive than bare metal. I've seen people pay $100k/month for services which could run on a $140/month bare-metal failover pair.
Meanwhile, people still spend enormous resources managing "the cloud", writing code to deploy to the cloud, dealing with edge cases in the cloud.
There are no savings, time wise, management wise, or money wise, with the cloud.
You are paying for ignorance.
The costs were very clearly spelled out and always lower than SaaS.
Why do people here think that cloud companies are charities? The cost of their hardware leasing, personnel, electricity, building management, insurance... it is all paid by the customer.
Plus a beefy profit margin. All paid by you.