In 2008, the idea was that if you bundle up a large bunch of mortgages, the bundle will have low risk because the chance of everything failing at the same time is low. The cloud is designed so that resource usage spikes of individual customers can always be served, because any one customer is very small compared to the whole infrastructure.
However, in some cases, these mortgages/resource spikes become highly correlated.
If every gym member visited the gym at the same time, they wouldn't all fit. Only a small fraction of the members use the gym at any one time, so it works.
Banks would crash if everyone tried to withdraw their money at the same time, but they don't, so the bank can loan the money out.
Only a fraction of the members use the gym at all. If every member of the gym wanted to use it there would be no reasonable schedule to make that possible. ~50% of gym members use it less than 100 times per year, and only ~25% use it consistently.
Banks, depending on legislation, have to keep 0/3/10% in reserves depending on the size of the bank. Which is far worse than most clouds or gyms would ever offer.
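The independence assumption behind all three examples can be made concrete. Here's a minimal sketch (illustrative numbers, not from any real gym or cloud) of the probability that demand exceeds capacity when customers act independently, versus when their demand is perfectly correlated:

```python
from math import comb

def p_over_capacity(n, p, capacity):
    """Probability that more than `capacity` of n independent
    customers, each active with probability p, are active at once
    (binomial tail probability)."""
    return sum(comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(capacity + 1, n + 1))

# 1000 gym members, each at the gym 5% of the time, 100 spots:
# demand is ~7 standard deviations below capacity, so overflow is
# astronomically unlikely.
independent = p_over_capacity(1000, 0.05, 100)

# Perfectly correlated demand: everyone shows up together 5% of the
# time, so capacity is blown 5% of the time -- no amount of pooling helps.
correlated = 0.05
```

The whole overbooking model works only as long as the left number stays tiny; a pandemic (or a bank run) moves everyone toward the right number.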
1. Enter the URL of the cloud provider's status page into your browser and press enter.
2. If the status page loads instantly, all services are go.
3. If the status page takes between 2 and 5 seconds to serve, the cloud provider is experiencing a slowdown.
4. If the status page takes between 5 and 30 seconds to load, the cloud provider is experiencing a major problem.
5. If the status page takes between 30 seconds and 1 minute to load, requires you to refresh before you can see it, or fails to load completely such as with missing images, then the cloud provider is experiencing widespread problems in multiple regions and has only sporadic availability.
6. If the status page doesn't load at all, all services are down. Check the CEO's Twitter page.
7. If the CEO's Twitter page has a pinned tweet telling you not to worry, then all of your data has been lost.
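The steps above can be sketched as a script (tongue firmly in cheek; the thresholds are the ones from the list, and `check_status_page` is a hypothetical helper):

```python
import time
import urllib.request

def classify(elapsed_s):
    """Map status-page load time (seconds) onto the scale above.
    None means the page failed to load entirely."""
    if elapsed_s is None:
        return "all services are down"
    if elapsed_s < 2:
        return "all services are go"
    if elapsed_s < 5:
        return "slowdown"
    if elapsed_s < 30:
        return "major problem"
    return "widespread problems, sporadic availability"

def check_status_page(url, timeout=60):
    """Fetch the status page, time it, and classify the result."""
    start = time.monotonic()
    try:
        urllib.request.urlopen(url, timeout=timeout)
    except Exception:
        return classify(None)
    return classify(time.monotonic() - start)
```

Detecting the pinned "don't worry" tweet is left as an exercise for the reader.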
My personal experience with our AWS CI infra is that it's struggling more and more recently. Builds are slower on average than they were a couple of weeks ago. Maybe those vCPUs are not the same vCPUs as yesterday ;D
Someone's never been to Pittsburgh.
AWS, GCP & Azure still feel a lot like PC, Amiga and Macintosh at this moment.
The platform can also deploy commonly used web applications like WordPress, Moodle, etc.
I will launch here on HN when the platform is ready.
If you have any questions or suggestions, please let me know.
Plenty of us out there.
> Capacity constraints due to increased demand stemming from the global health pandemic are causing pipeline delays when using our hosted pools. We are working on mitigations, but currently expect the issue to persist for at least the rest of 25-March peak hours. You can work around these issues by temporarily moving critical pipelines to self-hosted agents.
Pro:
- you can use shell in browser
- traffic is cheaper relative to AWS
- fast 1GbE network
Cons:
- VM deploys are VERY slow, 2-3 minutes
- no IPv6 out of the box; you need a balancer(!) and 4-5 non-trivial shell commands
- attaching new storage was an extremely painful experience
In general, Azure feels like just a middling cloud service.
It's understandable to be surprised; it's not every day that everyone needs resources at once. Although some foresight a month beforehand couldn't have hurt.
Is it a safe bet that we can rely on the cloud to have capacity? Normally I wouldn't doubt it, but in this sort of situation it becomes more likely they will be put under capacity stress.
Will the cloud vendors learn and build slack in? I think they're very lean operations and maybe this kind of slack would damage the profitability too much.
If the cloud vendors can't guarantee capacity (I suspect this will be the conclusion), then what does that mean for our DR and BCP planning?
Then you're very misinformed.
As a cloud administrator, I see resource availability and account limits on a weekly basis going back years.
I tell people:
- to pre-provision at least some extra servers rather than wait for an autoscaling operation to fail.
- that new instance types are often rolled out gradually; in AWS the lead time is often a month
- that killing a 1000-node cluster then expecting to immediately rebuild it often doesn't work.
- for DR and BCP planning, each region (or AZ) should be able to handle enough load at all times in case one region (or AZ) is unavailable. I've never seen anybody do that, even after I told them, because cost.
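The sizing rule in that last bullet can be sketched as a bit of arithmetic (illustrative numbers; assumes load is spread evenly across surviving regions):

```python
def per_region_capacity(total_load, regions, survivable_failures=1):
    """Minimum capacity each region (or AZ) needs so the remaining
    regions can absorb the full load when `survivable_failures`
    regions go down at once."""
    surviving = regions - survivable_failures
    if surviving < 1:
        raise ValueError("need at least one surviving region")
    return total_load / surviving

# 3 regions serving 9,000 req/s total: each region must be sized for
# 4,500 req/s, i.e. normally run at no more than ~67% utilization.
cap = per_region_capacity(9000, 3)  # -> 4500.0
```

That idle headroom is exactly the "because cost" part: the fewer regions you run, the bigger the per-region overprovisioning you're paying for.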
https://aws.amazon.com/solutions/limit-monitor/
It starts having issues when you get to 5,000+ ec2 instances, but it's somewhat understandable that they don't aim to support that level of usage within a single AWS account.
On another bullet point: if you go serverless (API/HTTP Gateway, Lambda, DynamoDB), you automatically get full-region DR. I personally recommend HTTP Gateway if you can swing it; API Gateway is only worth it if you are doing personal projects (mostly free tier) or are seriously leveraging the API Gateway-specific features.
The Register even reported on it a couple of years ago:
https://www.theregister.co.uk/2017/05/04/microsoft_azure_cap...
It's kinda fun, but it's also infuriating that 90% of our customers decided to wait until the week before the lockdown that we warned them would be coming months ago.
Coincidental?
1. This appears to be a UK-centric thing (and those datacenters don't have the full Azure portfolio, as can be seen here: https://azure.microsoft.com/en-us/global-infrastructure/serv...)
2. The very last paragraph on the linked article reads: "Note that Azure is a huge service and it would be wrong to give disproportionate weight to a small number of reports. Most of Azure seems to be working fine. That said, capacity in the UK regions was showing signs of stress even before the current crisis, so it is not surprising that issues are occurring now."
All of this is public info, so maybe people should read up on facts first? :)