Managed NAT gateways are also 10000x more expensive than my router.
This is a boring argument that has been done to death.
And yes, we’ve been heavy users of both AWS and Google Cloud for years, mainly because of the credits they initially provided, but we’ve also used VMs, dedicated servers, and other services from Hetzner and OVH extensively.
In my experience, there’s not much practical difference in availability or security. There are plenty of good tools nowadays for treating a physical server, or a cluster of them, as a cloud or a PaaS; it’s not really more work or responsibility, and depending on the setup you choose it’s often actually simpler. Most workloads don’t require flexible compute capacity, and when you do need it, it’s easy and fast to get from these cheaper providers.
I feel like the industry has collectively accepted that Cloud prices are a cost of doing business and unquestionable, “nobody ever got fired for choosing IBM”. Thinking about costs from first principles is an important part of being an engineer.
Or you need to restore your Postgres database and you find out that the backups didn't work.
And finally you have the brilliant idea of hiring a second $150k/year DevOps admin, so that at least one is always working and they can check each other's work. Suddenly you're spending $300k on two DevOps admins alone, and the cost savings of using cheaper dedicated servers are completely gone.
If you run your own hardware, getting stuff shipped to a datacenter and installed takes 2 to 4 weeks (and potentially much longer, depending on how efficient your pipeline is).
Agreed, there's definitely a heavy element of that to it.
But, at the risk of again being labelled an AWS shill - there are other benefits too.
If your organisation needs to deploy security/compliance tooling to help with getting (say) SOC 2 certification, there's a bunch of tools out there to help with that. All you have to do is plug them into your AWS organisation, and they can run a whole battery of automated policy checks showing you comply with whatever the audit requires.
If you're self-hosting, or using Hetzner - well, you're going to spend a whole lot more time providing evidence to auditors.
Same goes with integrating with vendors.
Maybe you want someone to load/save data for you - no problem: create an AWS S3 bucket, hand them an AWS IAM role, and they can do that. No handing over of creds.
There's a bunch of semi-managed services where a vendor will spin up EC2 instances running their special software, but since it's running in your account, you get more control and visibility into it. Again, hand over an AWS IAM role and off you go.
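The cross-account pattern described above can be sketched with the AWS CLI. Every name, bucket, and account ID here is a made-up placeholder; the vendor's AWS account ID goes in the trust policy:

```shell
# Hypothetical bucket for the data exchange
aws s3api create-bucket --bucket example-vendor-exchange

# A role the vendor assumes from their own AWS account -- no credentials handed over
aws iam create-role \
    --role-name VendorDataAccess \
    --assume-role-policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Principal": {"AWS": "arn:aws:iam::111122223333:root"},
        "Action": "sts:AssumeRole"
      }]
    }'

# Scope the role down to that one bucket
aws iam put-role-policy \
    --role-name VendorDataAccess \
    --policy-name S3ExchangeOnly \
    --policy-document '{
      "Version": "2012-10-17",
      "Statement": [{
        "Effect": "Allow",
        "Action": ["s3:GetObject", "s3:PutObject", "s3:ListBucket"],
        "Resource": [
          "arn:aws:s3:::example-vendor-exchange",
          "arn:aws:s3:::example-vendor-exchange/*"
        ]
      }]
    }'
```

The vendor then calls `sts:AssumeRole` from their side; you can revoke access at any time by deleting the role.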
It's the Slack of IaaS - it might not be the fastest, it's definitely not the cheapest, and you can roll your own for sure. But then you miss out on all the integrations that make life easier.
If you have a company big enough to warrant building a data center, then AWS doesn't add much.
Otherwise you face the "if you wish to make an apple pie from scratch, you must first invent the universe" problem. Simply put, you can get started on day one, on a pay-as-you-go model: write code, deploy, and ship from the very first day instead of going deep down the infrastructure rabbit hole.
Plus, shutting things down is easy as well. Things don't work out? Good news! You can shut down the infrastructure that very day, instead of worrying about the capital expenditure sunk into building your own and what to do with it later.
Simply put, AWS is infrastructure you can hire and fire at will.
Using a cloud platform means that while your needs are small, you're overpaying. Where it pays off is when you have a new requirement that needs to be met quickly.
I've done my share of managing database instances in the past, but I can spin up a new RDS Postgres instance in much less time than I can configure one from scratch. Do we need a read replica? Multi-site failover? Do we need to connect it to Okta or Formal so we can stand up a process for provisioning access to specific databases, tables, or even columns? All of those things I can do significantly faster on AWS than by hand.
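To give a rough sense of the gap: the read-replica and failover cases are single CLI calls. The instance identifiers here are invented:

```shell
# One command stands up a read replica of an existing instance
# ("app-db" is a hypothetical source instance name)
aws rds create-db-instance-read-replica \
    --db-instance-identifier app-db-replica \
    --source-db-instance-identifier app-db

# Multi-AZ failover is likewise a flag flip on the primary
aws rds modify-db-instance \
    --db-instance-identifier app-db \
    --multi-az \
    --apply-immediately
```

Doing the equivalent by hand means configuring streaming replication, monitoring, and failover tooling yourself.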
What if a NoSQL database is the right solution for us? I have much less experience adminning those, so I'd either have to allocate a fair amount of my time to skilling up or hire someone who already has those skills.
Need a scheduled task? Sure, I could set up a Jenkins server somewhere and we could use that... or we could just add an ECS scheduled task to our existing cluster.
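The "ECS scheduled task" route amounts to an EventBridge rule pointing at the cluster. All ARNs and names below are placeholders, and an existing cluster and task definition are assumed:

```shell
# Fire at 03:00 UTC daily
aws events put-rule \
    --name nightly-report \
    --schedule-expression "cron(0 3 * * ? *)"

# Point the rule at a task definition on the existing ECS cluster
aws events put-targets \
    --rule nightly-report \
    --targets '[{
      "Id": "1",
      "Arn": "arn:aws:ecs:us-east-1:111122223333:cluster/app",
      "RoleArn": "arn:aws:iam::111122223333:role/ecsEventsRole",
      "EcsParameters": {
        "TaskDefinitionArn": "arn:aws:ecs:us-east-1:111122223333:task-definition/report:1"
      }
    }]'
```

No Jenkins server to patch, back up, or keep alive.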
Need an API endpoint to handle inbound Zoom events and forward them to an internal queue? Sure, I can set up a new VPC for that... that'll be a couple of days... or we can whip up a Lambda, hook it up to API Gateway, and be up and running in a couple of hours.
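The "couple of hours" path looks roughly like this with the CLI. Function name, role ARN, and the zip contents are all placeholders:

```shell
# Package a hypothetical handler.py
zip fn.zip handler.py

aws lambda create-function \
    --function-name zoom-events \
    --runtime python3.12 \
    --handler handler.lambda_handler \
    --role arn:aws:iam::111122223333:role/lambda-exec \
    --zip-file fileb://fn.zip

# Quick-create an HTTP API fronting the function
# (plus an `aws lambda add-permission` grant so the API may invoke it)
aws apigatewayv2 create-api \
    --name zoom-events-api \
    --protocol-type HTTP \
    --target arn:aws:lambda:us-east-1:111122223333:function:zoom-events
```

The handler itself just validates the Zoom payload and pushes it onto the internal queue.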
AWS helps me do more in less time - and my time is a cost to the business. It's also extremely flexible, and will let us add things far more quickly than we otherwise could.
IMO, the correct comparison isn't "what would it cost to run this platform on Hetzner?" - it's "what would it cost to run it, plus what it would cost to acquire the talent to build it, plus retain that talent to maintain it?"
AWS isn't competing with other infrastructure providers. They're competing with other providers and the salaries of the engineers you need to make them work.
That's why AWS can get away with charging the prices they do: even though it's expensive, for most companies it's not expensive enough to make looking for cheaper alternatives worth their while.
And if you are willing to pay, you can significantly over-provision dedicated servers, solving much of the scaling problem as well.
If the author's point was to make a low effort "ha ha AWS sucks" video, well sure: success, I guess.
Nobody outside of AWS sales is going to say AWS is cheaper.
But comparing the lowest-end instances - and apparently using ECS without seeming to understand how they're configuring or using it - makes their points about it being slower kind of useless. Yes, you got some instances that were 5-10x slower than Hetzner. On its own that's not particularly useful.
I thought, going in, that this was going to be along the lines of others I've seen previously: you can generally get a reasonably beefy machine with a bunch of memory and local SSDs at half or less the cost of a similarly specced EC2 instance. That would've been a reasonable path to take. Add on that you don't have issues with noisy neighbours when running a dedicated box, and yeah - something people can learn from.
But this... Yeah. Nah. Sorry.
Maybe try again but get some help speccing out the comparison configuration from folks who do have experience in this.
Unfortunately it will cost more to do a proper comparison with mid-range hardware.
Shared instances are something even European "cloud" providers can do, so why is EC2 so much more expensive and slower?
To use an analogy it's like someone who's never driven a car, and really only read some basic articles about vehicles deciding to test the performance of two random vehicles.
Maybe one of them does suck, and is overpriced - but you're not getting the full picture if you never figured out that you've been driving it in first gear the whole time.
It's good - makes its point well.
Moved it to AWS on a small instance running Server 2012 / IIS / SqlExpress and it ran like a champ for 10 USD a month. Did that for years. Only main thing I had to do was install Fail2Ban, because being on cloud IP space seemed to invite more attackers.
10 dollars a month is probably less than I paid in electricity to run my home server.
For what it's worth - my day job does involve running a bunch of infrastructure on AWS. I know it's not good value, but that's the direction the organisation went long before I joined them.
Previous companies I worked for had their infrastructure hosted with the likes of Rackspace, Softlayer, and others. Every now and then someone from management would come back from an AWS conference saying how they'd been offered $megabucks in AWS Credit if only we'd sign an agreement to move over. We'd re-run the numbers on what our infrastructure would cost on AWS and send it back - and that would stop the questions dead every time.
So, I'm not exactly tied to doing it one way or another.
I do still think though that if you're going to do a comparison on price and performance between two things, you should at least be somewhat experienced with them first, OR involve someone who is.
The author spun up an ECS cluster and then is talking about being unsure of how it works. It's still not clear whether they spun up Fargate nodes or EC2 instances. There's talk of performance variations between runs. All of these things raise questions about their testing methodology.
So, yeah, AWS is over-priced and under-performing by comparison with just spinning up a machine on Hetzner.
But at least get some basics right. I don't think that's too much to ask.
I myself used EC2 instances with locally attached NVMe drives in (mdadm) RAID-0 under BTRFS, which was quite fast. It was for a CI/CD pipeline, so only the config and the most recent build data needed to be kept. Either BTRFS or the CI/CD database (PostgreSQL, I think) would eventually get corrupted, and I'd run a rebuild script a few times a year.
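For reference, that setup amounts to something like the following sketch. Device names vary by instance type, and as the comment notes, RAID-0 on ephemeral NVMe is strictly for data you can rebuild:

```shell
# Stripe two instance-local NVMe drives into one array
# (device names /dev/nvme1n1, /dev/nvme2n1 are instance-type-specific)
mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/nvme1n1 /dev/nvme2n1

# BTRFS on top of the stripe; no redundancy, so checksum errors mean rebuild
mkfs.btrfs -f /dev/md0
mkdir -p /mnt/build
mount /dev/md0 /mnt/build
```

Since local NVMe contents are lost when the instance stops, the config and latest build artifacts would need to be synced off the array anyway.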
Ooof. Not a good look.
There are GitOps solutions that give you all the benefits it promises without any of the downsides or compromises. You just have to bite the bullet and learn Kubernetes. It may be a bit more of a learning curve, but in my experience not by much. And you have much more flexibility in the precise tech stack you choose, so you can reduce the curve by using stuff you already know well.
I'm going to say what I always say here - for so many SME's the hyperscaler cloud provider has been the safe default choice. But as time goes on a few things can begin to happen. Firstly, the bills grow in both size and variability, so CFOs start to look increasingly askance at the situation. Secondly, so many technical issues start to arise that would simply vanish on fixed-size bare-metal (and the new issues that arise are well addressed by existing tooling). So the DevOps team can find themselves firefighting while the backlog keeps growing.
The problem really is one of skills and staffing. The people who have both the skills and the desire to actually implement and maintain the above tend to be the greying-beards who were installing Red Hat 6 in their bedrooms as teenagers (myself included). And there are increasingly few of us who aren't either in management and/or employed by the cloud providers.
So if companies can find the staff and the risk appetite, they can go right ahead and realise something like a 90% saving on their current spend. But that is unusual for an SME.
So we started Lithus[0] to do this for SMEs. We _only_ offer a 50% saving, not 90%, but we take on all the risk and staffing issues. We don't charge for the migration, and the billing cycle only starts once migration is complete. And we include a fixed number of engineering days per month. So you get a complete Kubernetes cluster with open source tooling, and a bunch of Red-Hat-6-installing greying-beards to use however you need. /pitch
I don't totally miss the days when I had to configure multipath storage on barely documented systems ("No, we don't support SUSE, Debian, whatever...", "No, you don't pay for the highest support level, you can't access the knowledge base..."), or integrate disparate systems that theoretically used an open standard that every vendor had botched and modified (DICOM, for example - nowadays the situation is way better), or other nightmare situations. Though I do miss having access to the lower layers.
But I've been working for years with my employers' and clients' cloud providers, and I've seen how the bills climb through the roof, how easy it is to make a million-dollar mistake, how difficult (and expensive) it is to leave in some cases, and how the money and power is concentrated in a handful of companies - and I've decided I should work on that situation. Though I'll probably earn less money, as the "external contractor" situation isn't as good in Spain as in some other countries unless you're very specialised.
But thankfully the situation is in some ways better than in the '00s: documentation is easier to get, hardware is cheaper to come by for experimenting or even for business use, WAN connections are way cheaper...
I find Supabase immensely helpful to minimize overhead in the beginning, but would love to better understand where it starts breaking and how hard an eventual migration would be.
The problems we've seen or heard about with Supabase are:
* Cost (in either magnitude or variability), either from usage or from having to go onto their Enterprise-tier pricing for one reason or another
* The usual intractable cloud oddities - dropped connections, performance speed-bumps
* Increased network latency (just the way it goes when data has to cross a network fabric; it's fast, but not as fast as your own private network)
* Scaling events tend not to be as smooth as one would hope
None of these are unique to Supabase though, they can simply all arise naturally from building infrastructure on a cloud platform.
Regarding self-hosted Supabase - we're certainly open to deploying this for our clients, we've been experimenting with it internally. Happy to chat with you or anyone who's interested. Email is adam@ company domain.
I believe their bare metal servers should have even better price/perf ratio, but I don't have data to back that up.
Not to mention what happens when you pay per megabyte and someone DDoSes you. Cloud brought back almost all the hosting antipatterns, and means denial-of-service attacks really should be renamed denial-of-wallet attacks. And leaving a single S3 bucket, a single serverless function, a single anything available (not even open) makes you vulnerable if someone knows or figures out the URL.
Son: Why does the croissant cost €2.80 here while it's only €0.45 in Lidl? Who would buy that?
Me: You're not paying for the croissant, you're paying for the staff to give it to you, for the warm café, for the tables to be cleaned and for the seat to sit on.
I also like the "why does a bottle of water cost $5 after security at airports" example.
You have no choice. You’re locked in and can’t get out.
Maybe that’s the better analogy?
So for enough people the price is not an issue. Someone else is paying.
On the other side: people are pretty bad at this sort of cost analysis. I fall into this trap myself - I prefer to spend more of my own time on something when I should just recommend buying it.
We don't pay million $ bills on AWS to "hang out" in a cozy place. I mean, you can, but that's insanity.
AWS is just an extremely expensive Lidl.
EDIT: autocorrect typo, coffee to café
What do you get for this? A redundant database without support (and while AWS support really tries so hard to help that I feel bad saying this, they don't get time to debug stuff, and redundant databases are complicated whether or not you use the cloud). You also get S3 distributed storage, and serverless (which is kind of like CGI, except using Docker and AWS markups to make one of the most efficient stateless ways to run code on the web really expensive). Btw: for all of these, better open source versions are available as a Helm chart, with effectively the same amount of support.
You can use vercel to get out from under this, but that only works for small companies' "I need a small website" needs. It cannot do the integration that any even medium sized company requires.
Oh, and you get Amazon TLA, another brilliant Amazon invention: in the time it takes you to write a devops script, Amazon TLA comes up with another three-letter AWS service that you now have to use - because one of the devs wants it on his resume - that is 2x as expensive as anything else, doesn't solve any problem, and that you now have to learn. It's all about using AI to maximise uselessness.
And you'll do all this on Amazon's patented 1994-styled webpages, because even Claude Code doesn't understand the AWS CLI. And the GCP and Azure ones are somehow worse (their websites look a lot nicer, I'll readily admit - but they're not significantly more functional).
Conclusion: while cloud has changed the job of sysadmin somewhat, there's no real difference other than a massive price increase. Cloud is now so expensive that, for a single month's cloud services, you can buy the hardware and put it on your desk. As the YouTube video points out, even an 8GB M1 Mac mini, even a Chinese mini-PC with AMD, runs Docker far better than the (now reduced to 2GB of memory) standard cloud images.
People can have different opinions on this, of course, but personally, if I have a choice, I'd rather not be juggling both product development and the infrastructure headaches that come with running everything myself. That trade-off isn’t worth it for me.
"But are your database backups okay?" Yeah, I coded the backup.sh script and confirmed that it works. The daily job will kick up a warning if it ever fails to run.
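The backup.sh idea boils down to a sketch like this. The database name, paths, and alert channel are all assumptions (the real pg_dump call is shown only in a comment), and the command is passed in as an argument so the fail-loudly control flow is easy to exercise:

```shell
#!/bin/sh
# Sketch of a backup.sh: run the dump, raise a warning loudly on failure.
# A real invocation might be:
#   pg_dump --format=custom --file=/var/backups/app-$(date +%F).dump app_db

alert() {
    # stand-in for mail/Slack; under cron, any output also gets emailed by default
    echo "BACKUP FAILURE: $1" >&2
}

backup() {
    # the dump command is passed in as arguments
    if "$@"; then
        echo "backup ok"
    else
        alert "exit status $? from: $*"
        return 1
    fi
}

backup true            # succeeds, prints "backup ok"
backup false || true   # fails, raises the warning on stderr
```

Run daily from cron; the point is that a failed dump never passes silently.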
"But don't you need to learn Linux stuff to configure it?" Yeah, but I already know that stuff, and even if I didn't, it's probably easier to learn than AWS's interfaces.
"But what if it breaks and you have to debug it?" Good luck debugging an AWS lambda job that won't run or something; your own hardware is way more transparent than someone else's cloud.
"But don't you need reproducible configurations checked into git?" I have a setup.sh script that starts with a vanilla Ubuntu LTS box, and transforms it into a fully-working setup with everything deployed. That's the reproducible config. When it's time to upgrade to the next LTS release (every 4 years or so), I just provision a new machine and run that script again. It'll probably fail on first try because some ubuntu package name changed slightly, but that's a 5-minute fix.
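The shape of such a setup.sh is roughly the following. The package list, user, and service unit are invented for illustration:

```shell
#!/bin/sh
# Sketch of a setup.sh: vanilla Ubuntu LTS in, working box out.
# Run as root on a fresh machine; idempotence is not attempted here.
set -eu
export DEBIAN_FRONTEND=noninteractive

apt-get update
apt-get install -y nginx postgresql fail2ban

# hypothetical app user, code deployment, and service unit would follow
adduser --system --group --home /srv/app app
install -m 0644 deploy/app.service /etc/systemd/system/app.service
systemctl enable --now app
```

Checked into git, this script *is* the reproducible config: re-running it on the next LTS is the upgrade path.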
"But what about scaling?" One of my crazy-fast dedicated machines is equal to ~10 of your slow-ass VPSes. If my product is so successful that this isn't enough, that's a good problem to have. Maybe a second dedicated machine, plus a load balancer, would be enough? If my product gets so popular that I'm thinking about hundreds of dedicated machines, then hopefully I have a team to help me with that.
Since the industry has matured now, there must be a lot of opportunity to optimize code and run it on bare metal to make systems dramatically faster and dramatically cheaper.
If you think about it, the algorithms that we run to deliver products are actually not that complicated and most of the code is about accommodating developers with layers upon layers of abstraction.
For example, if the service is using a massive dataset hosted on AWS such as Sentinel 2 satellite imagery, then the bandwidth and egress costs will be the driving factors.
Each project certainly has its own requirements. If you have the manpower and a backup plan, with blue/green for every infrastructure component, then by all means harness that cost margin of yours. If it's break-even once you factor in specialist continuity - training folks so nothing is down when your hardware breaks - then AWS wins.
If your project can tolerate downtime and your SREs can sleep at night, then you might profit less from the several-nines HA SLAs that AWS guarantees.
It’s very hard and costly to replicate what AWS gives you if you have requirements close to enterprise levels. Also, the usual argument goes - when you’re a startup you’ll be happy to trade CAPEX for OPEX.
For an average hobby project maybe not the best option.
As for latency, you can get just as good: major exchanges run their matching engines in AWS DCs, and you can co-locate.
When you add up all these costs plus the electricity bill, I'd wager that many cloud providers come out on the cheaper side thanks to economies of scale. I'd be interested in a more detailed comparison like that for various locations/setups vs cloud providers.
What almost never enters this discussion, however, is the expertise and infrastructure you lose when you put your servers into the cloud. Your own servers and their infrastructure are a moat that can be sold as various products if needed. In contrast, relying on a cloud provider is mostly an additional dependency.
That's nothing compared to an average AWS bill.
You also absolutely need this with EC2 instances, which is what the comparison was about. So no, it's not unfair.
If you're using an AWS service built on top of EC2, Fargate, or anything else, you WILL see the same costs (on top of the extremely expensive Ops engineer you hire to do it, of course).
> need to pay for the premises and their physical security, too [...] plus the electricity bill
...and all of this is included in the Hetzner service.
Once again comments conflating "dedicated server" with "co-location".
I am a Hetzner customer for my forthcoming small company in order to keep running costs low, but it's not as if companies using AWS were irrational. You get what you pay for.
The video argues that AWS is dramatically overpriced and underpowered compared to cheap VPS or dedicated servers. Using Sysbench benchmarks, the creator shows that a low-cost VPS outperforms AWS EC2 and ECS by large margins (EC2 has ~20% of the VPS’s CPU performance while costing 3× more; ECS costs 6× more with only modest improvements). ECS setup is also complicated and inconsistent. Dedicated servers offer about 10× the performance of similarly priced AWS options. The conclusion: most apps don’t need cloud-scale architecture, and cloud dominance comes from marketing—not superior value or performance.
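For anyone wanting to reproduce the comparison, the Sysbench runs in question have roughly this shape (thread counts, durations, and file sizes are per-machine choices, not the video's exact parameters):

```shell
# CPU benchmark: reports events/sec, higher is better
sysbench cpu --threads=4 --time=30 run

# Memory throughput
sysbench memory --threads=4 run

# Disk: prepare test files, run random read/write, clean up
sysbench fileio --file-total-size=4G prepare
sysbench fileio --file-total-size=4G --file-test-mode=rndrw --time=60 run
sysbench fileio --file-total-size=4G cleanup
```

Running the identical commands on each box is what makes the per-dollar comparison meaningful; on burstable cloud instances, repeat the runs to see credit-exhaustion effects.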
There have also been a couple of threads in text form about the same topic. Some like text, some like video.
In my experience, if you reserve a bare metal instance for 3 years (which is the biggest discount), it costs 2 times the price of buying it outright.
I'm surprised to hear about the numbers from the video being way different, but then, it's a video, so I didn't watch it and can't tell if he did use the correct pricing.
At two different companies, I've seen a big batch of committed instances finally come off contract, and we replaced them with more modern instances that improved performance significantly while costing no more - or even let us shrink the pool, saving money.
It's a pain, but auto-scaling groups with a couple of different spot instance types in them seem to be quasi-necessary for getting OK AWS compute value.
For traditional, always-on servers, you should reserve them for 3 years. You still have the ability to scale up, just not down. You can always go hybrid if you don't know what your baseline usage is.
The entire point of AWS is so you don't have to get a dedicated server.
It's infra as a service.
The point of having a private chef is so you don’t have to cook food by yourself.
It’s still extremely useful to know if the private chef is cheaper or more expensive than cooking by yourself and by how much, so you can make a decision more aware of the trade offs involved.
Translating:
A lot of people work with AWS, are making bank, and are terrified of their skill set being made obsolete.
They also have no idea what it means to use a dedicated server.
That’s why we get the same tired arguments and assumptions (such as the belief that bare-metal means “server room here in the office”) in every discussion.
With cloud, you hire a private chef and ALSO have to cook the food by yourself.
You don't hire a team to maintain the server infrastructure, but you hire a team to maintain cloud infrastructure.
Is it the fact that you don't want to spend the time cooking? or is it cooking plus shopping plus cleaning up after?
Or is it counting the time to take cooking lessons? and including the cost of taking the bus to those cooking lessons?
Does the private chef even use your house, or their own kitchen? Or can you get a smaller house without a kitchen altogether? Especially at the current rate of kitchen improvement, where kitchens don't last 20 years anymore and you need a new kitchen every 5 years. (Granted, the analogy is starting to fail here, but you get my point.)
Big companies have been terrible at managing costs and attributing value; at least with cloud, the costs are somewhat clear. Also, finding skilled staff is a considerable expense for businesses with more than a few pieces of code, and takes time - you can't just get them on a whim and get rid of them.
Yet every company I've worked for still used at least a bunch of AWS VPSes exactly as they would have used dedicated servers, just at ten times the cost.