> 5:01 AM PST We can confirm a loss of power within a single data center within a single Availability Zone (USE1-AZ4) in the US-EAST-1 Region. This is affecting availability and connectivity to EC2 instances that are part of the affected data center within the affected Availability Zone. We are also experiencing elevated RunInstance API error rates for launches within the affected Availability Zone. Connectivity and power to other data centers within the affected Availability Zone, or other Availability Zones within the US-EAST-1 Region are not affected by this issue, but we would recommend failing away from the affected Availability Zone (USE1-AZ4) if you are able to do so. We continue to work to address the issue and restore power within the affected data center.
So they have 2 different sources of power coming in. And generators. They do mention the UPS is only for "certain functions", so I guess it's not enough to handle full load while generators spin up if the 2 primaries go out. Or perhaps some failure in the source switching equipment (typically called a "static transfer switch").
Some detail on different approaches: https://www.donwil.com/wp-content/uploads/white-papers/Using...
We're still waiting on the RCA for last week's us-west outage...
Isn't the whole point of availability zones that you deploy to more than one and support failing over if one fails?
I.e., why are we (consumers) hearing about this or being obviously impacted (e.g., the Epic Games Store is very broken right now)? Is my assessment wrong, or are all these apps that are failing built wrong? Or something in between?
Deploying to multiple places is more expensive, it's not wrong to choose not to, it's trading off reliability for cost.
It's also unclear to me how often things fail in a way that actually only affect one AZ, but I haven't seen any good statistics either way on that one.
It's turtles all the way down, and underneath all the turtles is us-east-1.
Off the top of my head, US-EAST-1 is:
(1) topologically closer to certain customers than other regions (this applies to all regions for different customers),
(2) consistently in the first set of regions to get new features,
(3) usually in the lowest price tier for features whose pricing varies by region,
(4) where certain global (notionally region agnostic) services are effectively hosted and certain interactions with them in region-specific services need to be done.
#4 is a unique feature of US-East-1, #2-#3 are factors in region selection that can also favor other regions, e.g., for users in the West US, US-West-2 beats US-West-1 on them, and is why some users topologically closer to US-West-1 favor US-West-2.
I'm not sure why this is a big deal though; this is why Amazon has multiple AZs. If you're in one AZ, you take your chances.
IMO it was handled rather well and fast by AWS... not saying we shouldn't beat them up (for a discount) but being honest this wasn't that bad.
I once had to prepare for a total blackout scenario in a datacenter because there was a fault in the power supply system that required bypassing major systems to fix. Had some mistake or fault happened during those critical moments, all power would've been lost.
Well-designed redundancy makes high-impact incidents less likely, but you're not immune to Murphy's law.
1. Dual power in each server/device - one PSU was powered by one outlet, the other PSU by a different one with a different source, meaning we can lose a single power supply/circuit and nothing happens.
2. Dual network (at minimum) - for the same reasons as above, since the switches didn't always have dual power in them.
I've only had a DC fail once when the engineer was performing work on the power circuitry for the DC and thought he was taking down one, but was in fact the wrong one and took both power circuits down at the same time.
However, a power cut (in the traditional sense where the supplier has a failure so nothing comes in over the wire) should have literally zero effect!
What am I missing?
I've never worked anywhere with Amazon's budget, so why are they not handling this? Is it more than just the incoming supply being down?
Nothing happens if you remember that your new capacity limit per DC supply is 50% of the actual limit, and you're 100% confident that either of your supplies can seamlessly handle their load suddenly increasing by 100%.
I've seen more than one failure in a DC where they wired it up as you described, had a whole power side fail, followed by the other side promptly also failing because it couldn't handle the sudden new load placed on it.
Normally this is factored into the Rack you buy from a hardware provider, they will tell you that you have 10A or 16A on each feed, if you exceed that: it will work, but you are overloading their feed and they might complain about it.
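The 50% headroom rule above can be sketched in a few lines. This is an illustrative calculation only; the amperage figures are made up, not real datacenter ratings:

```python
def survives_single_feed_loss(feed_capacity_amps: float,
                              draw_a_amps: float,
                              draw_b_amps: float) -> bool:
    """If one feed fails, the surviving feed must carry the combined
    draw, so total steady-state draw must stay within a single feed's
    rating -- i.e., each side should run at under 50% of capacity."""
    return (draw_a_amps + draw_b_amps) <= feed_capacity_amps

# 16A feeds, 7A per side: the survivor would see 14A -- fine.
assert survives_single_feed_loss(16, 7, 7)
# 10A per side looks healthy day-to-day (well under 16A), but the
# survivor would see 20A and trip, taking both sides down.
assert not survives_single_feed_loss(16, 10, 10)
```

This is exactly the cascade described above: each side was fine until it inherited the other side's load.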
This is all local scale. Your setup would not survive a data center scale power outage. At scale power outages are datacenter scale.
Data centers lose supply lines. They lose transformers. Sometimes they lose the primary feed and the secondary feed at the same time. Automatic transfer switches can't really be tested periodically; in practice they are tested once, and testing them is not just "fire up a generator and see if we can draw from it".
It is cheaper to design a system that must be up which accounts for a data center being totally down and a portion of the system being totally unavailable than to add more datacenter mitigations.
And to top it off each rack had its own smaller UPS at the bottom and top, fed off both rails, and each server was fed from both.
We never had a power issue there; in fact SDGE would ask them to throw to the generators during potential brown-out conditions.
Of course this was a datacenter that was a former General Atomics setup iirc ...
Also, there are banks of batteries and generators in between the power company cables and the kit: did they not kick-in?
Again, this is all pure speculation: I have absolutely no idea of the exact failure, nor how their infrastructure is held together - this is all just speculation for the hell of it :)
Citation needed - the same issue with testing, data races and expensive bandwidth come up.
For big DC workloads, it is usually, though not always, better to take the higher failure rate than add redundancy.
Actually, now that I type that it makes sense. Scaling a few tens of dollars to a bajillion servers on the off-chance that you get an inbound power failure (quite rare I'd reckon) might cost more than what they'd lose if it does actually fail.
So yeah, they're potentially just balancing the risk here and minimising cost on the hardware.
Edit: changed grammar a bit.
Perhaps we are going to discover how AWS produces such lofty margins by way of their next RCA publication.
My guess is that they cheaped out in having redundant PSUs to get you to use multiple availability zones. (More zones = more revenue)
Even a single PSU shouldn't be an issue if they plugged in an ATS though.
Somewhat surprising to see how many things are failing though, which implies either that a lot of services aren't able to fail over to a different availability zone, or that there is something else going wrong.
Specifically, large parts of the management API and the IAM service are seemingly centrally hosted in us-east-1. So-called Global endpoints are also dependent on us-east-1, as are parts of AWS' internal event queues (e.g. EventBridge triggers).
If your infrastructure is static you'll largely avoid the fallout, but if you rely on API calls or dynamically created resources you can get caught in the blast regardless of region
One incident I recall involved our GCP regional storage buckets, which we were using to achieve multi-region redundancy. One day, both regions went down simultaneously. Google told us that the data was safe, but the control plane and API for the service are global. Now I always wonder, when I read about MR, what that actually means...
Your point here deserves highlighting. A failure such as a zone failing is nowadays a relatively simple problem to have. But cloud services do have bugs, internal limits or partial failures that are much more complex. They often require support assistance, which is where the expertise of their staff comes into play. Having a single provider that you know well and trust is better than having multiple providers where you need to keep track of disparate issues.
You really need multi-region and also not be relying on any AWS service that’s located only in us-east-1 (including everything from creating new S3 buckets to IAM’s STS).
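One way to reduce the STS exposure specifically is to use STS's regional endpoints instead of the default global one (which resolves to us-east-1). A small sketch; the URL pattern matches AWS's documented regional STS endpoints, and the boto3 usage is shown as a comment rather than executed, since it assumes configured credentials:

```python
def regional_sts_endpoint(region: str) -> str:
    """Build the regional STS endpoint URL for a given region,
    avoiding the us-east-1-backed global sts.amazonaws.com."""
    return f"https://sts.{region}.amazonaws.com"

assert regional_sts_endpoint("us-west-2") == "https://sts.us-west-2.amazonaws.com"

# With boto3 (assumed installed and configured), you would pass the
# endpoint explicitly so token calls don't depend on the global one:
#   import boto3
#   sts = boto3.client("sts", region_name="us-west-2",
#                      endpoint_url=regional_sts_endpoint("us-west-2"))
```

The same idea applies to any service that quietly defaults to a us-east-1-hosted global endpoint: pin the region explicitly in your clients.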
edit: The question isn't necessarily AWS specific, just any data on amount of downtime per cloud provider on a timeline would be nice.
There indeed has been an uptick in AWS outages recently. You can see a bit of the history here: https://statusgator.com/services/amazon-web-services
If you can’t even admit you’re having an issue how can you keep an accurate record?
Obviously it's not good for an AZ to go down, but it does happen, and that's why any production workload should be architected for seamless failover and recovery to other AZs, typically by just dropping nodes in the down AZ.
People commenting that servers shouldn't go down etc. don't understand how true HA architectures work. You should expect and build for stuff to fail like this. Otherwise it's like complaining that you lost data because a disk failed. Disks fail... build architecture where that won't take you down.
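As a rough illustration of the "drop nodes in the down AZ" pattern, here's a minimal pure-Python sketch; no AWS SDK, and the node names and zone IDs are made up:

```python
def healthy_targets(nodes: dict, down_azs: set) -> list:
    """nodes maps node name -> AZ ID; route only to nodes whose AZ
    is not in the down set, the same thing a health-checking load
    balancer does for you automatically."""
    return sorted(name for name, az in nodes.items() if az not in down_azs)

nodes = {"web-1": "use1-az4", "web-2": "use1-az2", "web-3": "use1-az6"}

# With use1-az4 down, traffic shifts to the surviving two nodes.
assert healthy_targets(nodes, {"use1-az4"}) == ["web-2", "web-3"]
# With everything healthy, all three take traffic.
assert healthy_targets(nodes, set()) == ["web-1", "web-2", "web-3"]
```

The catch, as other commenters note below, is that this only works if the control plane you use to detect and drain the bad AZ is itself still up.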
Isn't that because Elasticache will distribute the cluster across AZs automatically?
https://docs.aws.amazon.com/AmazonElastiCache/latest/red-ug/...
Load balancers are not doing well at all. The only way to avoid an outage in this case is to be cross-region or cross-cloud, which is quite a bit more complex to handle and requires more resources to do well.
And I hope that nobody is listening to your blaming-and-finger-pointing advice; that's the worst way to solve anything.
It's AWS's job to ensure that things are reliable, that there is redundancy, and that multi-AZ infra is safe enough. The amount of issues in US-EAST-1 lately is really worrying.
I do agree that the end of this year has been a very bad period for AWS. I wonder whether there’s a connection to the pandemic conditions and the current job market – it feels like a lot of teams are running without much slack capacity, which could lead to both mistakes and longer recovery times.
In the past I've seen both of those systems seamlessly handle an AZ failure. Today was different.
Is that comparison fair? If you have two mirrored RAID-5 boxes in your room and all disks fail at the same time, you should complain, and that won't happen. These entire-datacenter failures should be anticipated, but to expect them is a bit too easy, I think. There are plenty of hosters who haven't had this happen even once in the last decade in their only datacenter. I don't find it strange to expect or even demand that level, but also to protect yourself if it happens anyway, if that fits your specific project and budget.
Edit: OK, I meant that RAID-5 remark in the same context as the hosting; it can and does happen, but it shouldn't. You should plan for a contingency, but expecting it goes too far. We never had it (thousands of hard drives, decades of hosting, millions of sites), so we plan for it with backups; if it happens it will take some downtime, but it costs next to nothing over time to do that. If we expected it, we would need to take far different measures. And we had less downtime in a decade than an AWS AZ had in the past months. I have a problem with the word 'expect'.
There are plenty of situations where this might happen if they’re in your room: a lightning strike can cause a surge that causes the disks to fry, a thief might break in and steal your system, your house might burn down, an earthquake could cause your disks to crash, a flood could destroy the machines, and a sinkhole could open up and swallow your house. You may laugh at some of these as being improbable, but I have seen _all_ of these take out systems between my times in Florida (lightning, thief, sinkhole, and flood) and California (earthquake and house fire).
The fix for this is the same fix as proposed by the parent post - putting physical space between the two systems, so if one place becomes unavailable you still have a backup.
Here are some examples where that happened:
1. Drive manufacturer had a hardware issue affecting a certain production batch, causing failures pretty reliably after a certain number of power-on hours. A friend learned the hard way that his request to have mixed drives in his RAID array wasn’t followed.
2. AC issues showed a problem with airflow, causing one row to get enough warmer that faults were happening faster than the RAID rebuild time.
3. UPS took out a couple racks by cycling power off and on repeatedly until the hardware failed.
No, these aren't common, but they were very hard to recover from because even if some of the drives were usable you couldn't trust them. One interesting dynamic of the public clouds is that you tend to have better bounds on the maximum outage duration, which is an interesting trade-off compared to several incidents I've seen where the downtime stretched into weeks due to replacement delays or manual rebuild processes.
Same manufacturer, same disk space, same location, same operator, same maintenance schedule, same legal jurisdiction, same planet, you name it, and there's a common failure to match
HA! I had received new 16-bay chassis and all of the drives needed, plus cold spares for each chassis. Set them up and started the RAID-5 init on a Friday. Left them running in the rack over the weekend. Returned on Monday to find multiple drives in each chassis had failed. Even with one of the 16 drives dedicated as a hot spare, the volumes would all have failed in an unrecoverable manner.
All drives were purchased at the same time, and happened to all come from a single batch from the manufacturer. The manufacturer confirmed this via serial numbers, and admitted they had an issue during production. All drives were replaced, and at a larger volume size.
TL;DR: Drives will fail, and manufacturing issues happen. Don't buy all of the drives in an array from the same batch! It will happen. To say it won't is just pure inexperience.
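A back-of-envelope calculation shows why same-batch drives break the usual RAID math. The failure rates here are illustrative, not measured:

```python
from math import comb

def p_array_loss_independent(p_disk: float, n_disks: int,
                             tolerated_failures: int = 1) -> float:
    """Binomial tail: probability that more than `tolerated_failures`
    of n *independent* disks fail (RAID-5 tolerates exactly one)."""
    return sum(comb(n_disks, k) * p_disk**k * (1 - p_disk)**(n_disks - k)
               for k in range(tolerated_failures + 1, n_disks + 1))

# 16 independent disks at a 1% failure rate over some window:
# roughly a 1% chance of losing the array.
p_independent = p_array_loss_independent(0.01, 16)
assert 0.005 < p_independent < 0.02

# A same-batch defect hits every drive at about the same power-on
# hours, so "two or more failures" collapses to the single defect
# rate -- the independence assumption the RAID math relies on is gone.
```

That collapse of independence is exactly the failure mode in the story above: sixteen "independent" disks that were, statistically, one disk.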
The context of the parent seems to be that they intermittently couldn't get to the console. That seems fair to me. If we're blaming developers and finding gaps in HA design, then AWS should also figure out how to make the console url resilient. If it's not, then AWS does appear to be down.
I imagine it's pretty hard to design around these failures, because it's not always clear what to do. You would think, for example, that load balancers would work properly during this outage. They aren't. Or that you could deploy an Elasticache cluster to the remaining AZs. You can't. And I imagine the problems vary based on the AWS outage type.
Similarly, with the earlier broad us-east-1 outage, you couldn't update Route53 records. I don't think that was known beforehand by everyone that uses AWS. You can imagine changing DNS records might be useful during an outage.
As others have said, they are not being forthright about the severity of the issue, as is standard.
edit: of course, AWS does have this: AWS Fault Injection Simulator
Be in multiple AZs, and even multiple regions but if you're going to be in only one AZ or one region, make it us-east-2.
I have a server at OVH (not affiliated to them) which, at this point, I keep only for fun. It has 3162 days of uptime as I type this.
3,162 days. That's more than 8 years of uptime.
Does it have the traffic of Amazon? No.
Is it secure? Very likely not: it's running an old Debian version (Debian 7, which came out in, well, 2013).
It only has one port opened though, SSH. And with quite a hardened SSH setup at that.
I installed all the security patches I could install without rebooting it (so, yes, I know, this means I didn't install all the security patches for some required rebooting).
This server is, by now, a statement. It's about how stable Linux can be. It's about how amazingly stable Debian is. It's also about OVH: at times they had part of their datacenter burn (yup), at times they had full racks that had to be moved/disconnected. But somehow my server never got affected. OVH may have had connectivity issues at one point, but my server itself never went down.
I "gave back" many of my servers I didn't need anymore. But this one I keep just because...
I still use it, but only as an additional online/off-site backup where I send encrypted backups. It's not as if it gets zero use: I typically push backups to it daily.
They're only backups, and they're encrypted. Even if my server is "owned" by some bad guys, the damage they could do is limited. Never seen anything suspicious on it though.
I like to do "silly" stuff like that. Like that one time I solved LCS35 by computing for about four years on commodity hardware at home.
I think it's about time I start to do some archeology on that server, to see what I can find. Apparently I installed Debian 7 on it in mid-October 2013.
I've created a temporary user account on it, whose password I've at times handed (before resetting it) to people just so they could SSH in and type "uptime".
It is a thing of beauty.
Eight. Years. Of. Uptime.
Awesome! Are you Bernard Fabrot [0]?
[0] https://www.csail.mit.edu/news/programmers-solve-mits-20-yea...
At its current use, it's likely not a major issue, but imagine if someone saw this uptime, took it as a statement of reliability, and built a service on it. I, for one, would want that disclosed, because this is a disaster waiting to happen. I'd much rather someone disclose that they had a few servers each with no more than 7 days of uptime because they'd been fully imaged and cycled in that time...
Similarly, my laptop, if I keep it plugged into the wall and enable httpd on localhost, will surely have better uptime than any of the top clouds. I'd bet that it'd have 100% uptime if I plugged in a UPS and cared for traffic on my local network only.
In reality, most people don't need to scale. An occasional spike in traffic is a nuisance, but not the end of the world, and security is not terribly hard, if you keep your servers patched (which is trivial to automate).
I really don't understand why there's so much FUD around running your own stuff.
Of course it doesn't. Why are you asking antagonistic questions?
Better uptime than paying for EC2 on AWS US-East-1.
Obviously this approach isn't scalable but it serves me well.
“ditch your on-prem infrastructure and migrate to a major cloud provider”
And its starting to seem like it could be something like:
“ditch your on-prem infrastructure and spin up your own managed cloud”
This is probably untenable for larger orgs where convenience gets the blank check treatment, but for smaller operations that can’t realize that value at scale and are spooked by these outages, what are the alternatives?
A much faster and more effective solution that doesn't have you trading cloud problems with on-prem problems (the power outage still happens, except now it's your team that has to handle it) would be to update your services to run in multiple AZs and multiple regions.
Get out of AWS if you want, but don't get out of AWS because of outages. You should be able to mitigate this relatively easily.
Everything fails, we can argue the rate. But I would argue that understanding your constraints is better.
if you know that your secret storage system can't survive if a machine goes away: well, you wire redundant paths to the hardware and do memory mirroring and RAID the hell out of the disks. And if it fails you have a standby in place.
But if you use AWS Cognito.
And it goes down.
You're fucked mate.
I remember we had a power outage in 2006; it actually took one of my services off air. Since then, of course, that has been rectified, and the loss of a building wouldn't impact any of the critical, essential or important services I provide.
Commenters will show up like clockwork and say shit like:
“What man, it’s not like cars didn’t crash before? Haha”
Don’t be dense dude. And definitely don’t pursue a leadership position anytime in the future.
If your app depends on a few 3rd-party services -- SendGrid, Twilio, Okta -- and they're all hosted on different infra, then congrats! You're gonna have issues when any one of them is down, yayyy.
Also, the marketing benefit shouldn't be underestimated. If your postmortem is "AWS was having issues", then your execs and customers just accept that as the cost of doing business, because there's a built-in assumption that AWS, Azure, and GCP are world class and no in-house team could do it better.
Edit: Restarting Slack does update the edited messages.
Edit 15:24 CET: Slack is back up.
- edits failing or working with big lag;
- "Threads" view slow;
- can't emoji-react;
- can't upload images;
- people also say they can't join new channels.
> We are experiencing issues with file uploads, message editing, and other services. We're currently investigating the issue and will provide a status update once we have more information.
> Dec 22, 1:58 PM GMT+1
remote: Compressing source files... done.
remote: Building source:
remote:
remote: ! Heroku Git error, please try again shortly.
remote: ! See http://status.heroku.com for current Heroku platform status.
remote: ! If the problem persists, please open a ticket
remote: !     on https://help.heroku.com/tickets/new

Another thread: https://news.ycombinator.com/item?id=29648325
In all seriousness though - even non-regional AWS services seem to have ties to us-east-1 as evidenced by the recent outages. So you might be impacted even if it looks like (on paper at least) you’re not using any services tied to that region.
The console is throwing errors from time to time. As usual no information on AWS status page.
aws ec2 describe-availability-zones | jq -r '.AvailabilityZones[] | select(.ZoneId == "use1-az4") | .ZoneName'

The letters are randomised per AWS account so that instances are spread evenly and biases to certain letters don't lead to biases to certain zones.
I am not sure the movement to the cloud has reduced the amount of failures, but it definitely has made these failures more catastrophic.
Our profession is busy making the world less reliable and more fragile; we will have our reckoning just like the shipping industry did.
Today, on Slack, I could not edit messages, could not edit statuses and could not post attachments. Pretty annoying!
Packaging a way to migrate off AWS could be a unicorn idea.
I mean I'm glad it exists, don't get me wrong. Just weird that they'd have two status pages, one seemingly existing only to sort of 'mock' themselves...
That, or people never took the “if AWS goes down then lots of people will have a problem, so we’ll be fine” line seriously; there are few such cases.
Edit: Not supporting Amazon; I generally dislike the company. I just don't understand the extent to which the criticism is justified.
1. Did AMZN build an appropriate architecture?
2. Did AMZN properly represent that architecture in both documentation and sales efforts?
3. What the heck is going on with AMZN?
Let's say that they build an environment in which power is not fully redundant and tested at the rack level, but is fully redundant and tested across multiple availability zones. Did they then issue statements of reliability to their prospective and existing customers saying that a single availability zone does not have redundant power, and customers must duplicate functionality in at least 2 AZs to survive a SPOF?
Notably: cognito, r53 and the default web UI. (You can work around the webui one I’m told, by passing a different domain instead of just console.aws.amazon.com)
just the weekly internet apocalypse, happy holidays fellow SREs
I'm having issues with Slack from central EU (Poland) -- can't upload images or send emoji reactions to posts; curiously, text works fine. Wondering if it's linked.
Only with AWS and GitHub do I seem to get panicked text messages on my phone first thing in the morning... Our workloads on Azure typically only have faults when everyone is in bed.
Santa is bringing me a Synology in three days.
> Due to this degradation your instance could already be unreachable
>:(
So it's not shocking to me that something going down in us-east-1 could have impact on other regions.
Meanwhile, I currently have a gig working on a video service which features a never-updated CentOS 6 box, an unsupported Python 2 blob website, and a push-to-prod deployment procedure, running a single Postgres DB serving streaming to 4 million users a month.
And it's got years of up time, cost 1/100th of AWS, and can be maintained by one dev.
Not saying "cloud is bad", but we got to stop screaming old techs are no good either.
AWS consists of over 200 services offered in 86 availability zones in 26 regions each with their own availability.
If one service in one availability zone being impaired equals a post about “AWS is down” we might as well auto-post that every day.
Specifically large parts of the management API, and IAM service are seemingly centrally hosted in us-east-1.
If your infrastructure is static you'll largely avoid the fallout, but if you rely on API calls or dynamically created resources you can get caught in the blast regardless of region
Wonder if it's connected?
Coworkers: "You're an f'n idiot. Amazon and Facebook don't go down, you're holding us back!" <-Quite literally their words.
Me: leaves cause that treatment was the final straw
Amazon and Facebook both go down within a month of each other, and suddenly they needed backups.
Them: shocked pikachu face
I must admit that I do always try and maintain a separate data backup for true disaster recovery scenarios - but those are mainly focused around AWS locking me out of our AWS account (and hence we can't access our data or backups) or recovering from a crypto scam hack that also corrupts on-platform backups, for example.
What happens if AWS or [insert other megacloud] decides your account needs to be nuked from orbit due to a hack or some other confusion? We almost had this happen over the summer because of a problem with our bank's ability to process ACH payments. Very frustrating experience. Still isn't fully resolved.
What happens if an admin account is taken over and your account gets screwed up?
What happens if an admin loses his shit and blows up your account?
What happens if your software has a bug that destroys a bunch of your data or fubars your account?
There's a ton of cases where having at least a simple replica of your S3 buckets into a third-party cloud could prove highly valuable.
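A minimal sketch of that "simple replica" idea: periodically verify that the off-cloud copy of each backup object matches the original by checksum. The object names and in-memory "bucket" representation here are hypothetical stand-ins for real S3/third-party listings:

```python
import hashlib

def sha256_hex(data: bytes) -> str:
    return hashlib.sha256(data).hexdigest()

def verify_replica(primary: dict, replica: dict) -> list:
    """Both args map object key -> content bytes. Return the keys
    whose replica copy is missing or differs from the primary."""
    return sorted(key for key, content in primary.items()
                  if key not in replica
                  or sha256_hex(replica[key]) != sha256_hex(content))

primary = {"db.dump": b"backup-1", "media.tar": b"backup-2"}
replica = {"db.dump": b"backup-1", "media.tar": b"tampered"}

# A corrupted (or ransomware-encrypted) replica object is flagged.
assert verify_replica(primary, replica) == ["media.tar"]
# Identical replicas verify clean.
assert verify_replica(primary, dict(primary)) == []
```

The point is that the replica lives under different credentials at a different provider, so an account-level compromise or lockout on the primary side can't touch it.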
1) Can you make your on prem infrastructure go down less than Amazon's?
2) Is it worth it?
In my experience most people grossly underestimate how expensive it is to create reliable infrastructure and at the same time overestimate how important it is for their services to run uninterrupted.
--
EDIT: I am not arguing you shouldn't build your more reliable infrastructure. AWS is just a point on a spectrum of possible compromises between cost and reliability. It might not be right for you. If it is too expensive -- go for cheaper options with less reliability.
If it is too unreliable -- go build your own yourself, but make sure you are not making huge mistake because you may not understand what it actually costs to build to AWSs level.
For example, personally, not having to focus on infra reliability makes it possible for me to focus on other things that are more important to my company. Do I care about outages? Of course I do, but I understand doing this better than AWS has would cost me huge amount of focus on something that is not core goal of what we are doing. I would rather spend that time thinking how to hire/retain better people and how to make my product better.
And adding all that complexity of running this infra to my company would cause the entire organisation to be less flexible, which is also a cost.
So you can't look at cost of running the infra like a bill of materials for parts and services.
And if there is an outage, it is good to know there is a huge organisation trying to fix it, while my small organisation can focus on preparing for what to do when it comes back up.
On AWS, we built a few layers of redundant infrastructure, with multi-AZ availability within a region and then global availability across multiple regions. All this was done at roughly half the cost of the traditional hosting, even when including the additional person-hours required to maintain it on our end.
Keeping our infra simple helped that work, and it's literally been years since any AWS issue caused us an outage, even though there have been several large AWS events.
It's not a bad idea to store backups offline, but costs might make that an expensive proposition.
Maybe a cheery note asking how the team is doing, sent right in the middle of an outage.
Passive aggressive? As hell. Cathartic? Damn skippy.