The most likely scenario is that hackers are trying leaked username/password pairs from other breaches against AWS and gaining access to those accounts.
They then spin up EC2 instances across all sorts of regions on the compromised accounts.
PSA: set up MFA on your account if you haven't already.
Some examples:
https://www.reddit.com/r/aws/comments/rv3lm5/i_lost_55k_from...
https://www.reddit.com/r/aws/comments/rvbncu/account_hacked_...
https://www.reddit.com/r/aws/comments/qx8i02/got_hacked_and_...
https://www.reddit.com/r/aws/comments/rv4mnq/my_account_was_...
I don't even like the idea of any of this stuff. I want to run my own little Raspberry Pi server or whatever; it seems much more fun and startup-ish than AWS, which appears to be all the corporate stuff I left behind (AIMs etc.). This is funny, because I remember AWS being thought of as great for "just experimenting with stuff".
> Closing your account might not automatically terminate all your active resources. You might continue to incur charges for some of your active resources even after you close your account.
https://aws.amazon.com/premiumsupport/knowledge-center/termi...
I don't even know how you'd accidentally get to thousands of dollars unless you were doing something more complicated, like setting up an autoscaling group.
Spin up a free-tier micro EC2 instance and run it as much as you like - it's free (within the free tier's limits). If you go into it (as I did many years ago) thinking, "Let me just spin up a 16-core server with 128 GB of RAM" because that's what you'd set up in your on-prem data center, then of course you're going to have a big bill.
But that's not what cloud is about. It's about resources popping in and out of existence for the minimal time and resources needed to accomplish a task.
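To make that concrete, here's a back-of-the-envelope comparison. The hourly rates are illustrative approximations in the style of us-east-1 on-demand pricing, not authoritative figures - check the AWS pricing page for real numbers:

```python
# Rough EC2 on-demand cost comparison. Rates are illustrative
# approximations, not current AWS prices.
HOURLY_RATE = {
    "t3a.micro": 0.0094,   # 2 vCPU burstable, 1 GiB RAM
    "r5.4xlarge": 1.008,   # 16 vCPU, 128 GiB RAM
}

def monthly_cost(instance_type: str, hours: float = 730) -> float:
    """Cost of running an instance for `hours` in a month."""
    return round(HOURLY_RATE[instance_type] * hours, 2)

# Always-on "on-prem style" big box vs. an always-on micro:
print(monthly_cost("t3a.micro"))    # a few dollars a month
print(monthly_cost("r5.4xlarge"))   # hundreds of dollars a month
# The same big box spun up only for a 2-hour nightly batch job:
print(monthly_cost("r5.4xlarge", hours=2 * 30))
```

The last line is the "popping in and out of existence" model: the expensive instance costs an order of magnitude less when it only exists while it's needed.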
A buddy of mine runs a cryptocurrency validator in AWS and pays over $1600 a month.
You can rent a dedicated server with 32 cores, 256 GB RAM and 2 TB of NVMe storage for $400 a month. Rent two in different data centers for redundancy and you'll still be paying half, not to mention no surprise monthly bills due to increased traffic or creating an expensive snapshot of your DB.
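The arithmetic behind that claim, spelled out (the prices come from the comment itself):

```python
# Sanity check on the dedicated-server comparison above.
aws_monthly = 1600                   # validator running in AWS
dedicated_monthly = 400              # one 32-core / 256 GB / 2 TB NVMe box
redundant = 2 * dedicated_monthly    # two boxes in different data centers

print(redundant)                     # 800
print(redundant / aws_monthly)       # 0.5 -- still half the AWS bill
```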
If you just want 'a server', then AWS isn't the place to go to.
To elaborate a bit more: I tend to draw the line at IaC. For example: if you're doing GitOps with Terraform and Atlantis, then go for AWS. If not, you're probably not at the point where it makes sense anyway.
The problem, however, is that all the beginner guides, including those from the AWS team itself, teach people to create resources via the console without using a CloudFormation stack.
And yet it seems that only the big cloud providers offer a reasonable authentication and authorization approach.
For many other providers, API keys grant full account access (no resource limitation, no read-only or subset-of-actions scoping), which is unacceptable from a security PoV.
I'm done.
I'll use Digital Ocean for anything small if I need to spin up a server in the future.
MFA does not prevent this. The attack vector is leaked IAM keys.
The credentials I'm using with Terraform require MFA in order to call 'AssumeAdmin'. Everyone I've ever shown this configuration to has complained about it being overkill and tried to argue that Terraform should just have IAM keys sitting on disk - one desktop compromise away from taking everything. And since it's used to provision basically everything, it's a highly privileged account.
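A setup along those lines can be sketched in `~/.aws/config` with the standard `role_arn`/`source_profile`/`mfa_serial` keys. The account ID, role name, and profile names below are placeholders, not details from the comment; also note that Terraform itself won't prompt for the MFA token, so people typically front this with a tool like aws-vault:

```ini
# ~/.aws/config -- sketch of an MFA-gated assume-role profile.
# All ARNs and names here are placeholder examples.
[profile base]
# long-lived (but low-privilege) credentials live under this profile

[profile terraform-admin]
role_arn       = arn:aws:iam::123456789012:role/AssumeAdmin
source_profile = base
mfa_serial     = arn:aws:iam::123456789012:mfa/my-user
```

With `mfa_serial` set, the STS AssumeRole call fails unless a current MFA token code is supplied, so stolen on-disk keys alone can't reach the privileged role.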
It was pretty obviously uncharacteristic
Edit: Done! Was quite easy. Luckily I had no services running or needed.
it only permits a single hardware token to be registered to an account
so good luck if you misplace or break your hardware token
Note, though, that the new SSO login actually supports MFA in a normal way.
Besides, just use Authy if that's your concern.
That said, I have to imagine this is the wrong procedure and there's some way to duplicate hardware MFA devices to have redundancy for such a case...
Also, I'd create an IAM user and severely limit your usage of the root account, and over time stop using root completely.
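As an illustration of what "severely limit" can look like, here's a minimal IAM policy sketch for a day-to-day user. The actions and region are arbitrary examples, not a recommendation for any particular workload:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "DevBoxOnly",
      "Effect": "Allow",
      "Action": [
        "ec2:Describe*",
        "ec2:StartInstances",
        "ec2:StopInstances"
      ],
      "Resource": "*",
      "Condition": {
        "StringEquals": { "aws:RequestedRegion": "eu-west-1" }
      }
    }
  ]
}
```

A user holding only this policy can't spin up fleets of instances in other regions even if their keys leak - exactly the compromise scenario from the top of the thread.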
AWS gives money back in a lot of cases that I think they legitimately aren't responsible for.
I don't know that other cloud providers are going to do any better - an attacker who has your credentials and spins up tens of thousands of dollars of infra will cost you thousands of dollars.
I'll certainly echo the advice for 2FA but, more importantly, use a strong, unique password.
https://docs.microsoft.com/en-us/azure/active-directory/fund...
I find it hard to fathom that a company which sells machine learning as a service cannot detect that an account sitting dormant for months, which suddenly spins up several large VMs, has been compromised.
It's unsatisfactory that the only recourse is hoping for a favourable exercise of discretion on Amazon's part to waive the charges.
Enabling MFA, restricting intra/inter-VPC access, removing hard-coded credentials from configuration files/source, switching to SSO and removing user accounts with passwords, creating and applying restricted IAM roles, and applying those reduced privileges to EC2/ECS/EKS instances are all things that can and should be done as soon as possible. (A non-exhaustive but illustrative list.)
Sure, big warnings about data loss/services down in the event of zeroing out your credits.
Having such a safety net would really help the thousands of people who sign up for transient reasons. I don't recommend a noob set up an AWS account. I also don't let my kids play on the freeway.
1. fill an account with abstract "multi-resource usage credits" (or gas);
2. have each API call specify how many credits the caller is willing to spend from their account on the operation (a gas limit);
3. operations in progress are tracked using real-time resource accounting (= distributed tracing, but cheap-as-possible at the expense of granularity);
4. any operation that goes over-budget is cancelled (even if it completed successfully, the user doesn't get the benefit of that completion), with the user being charged exactly the limit they specified.
(Yes, that means that the IaaS eats the overhead costs of any additional processing they did before noticing and stopping the workload going beyond its limit. This incentivizes the IaaS to optimize their scheduling, at all levels of the stack, to minimize the amount of "overhang" work it allows.)
And yes, to be clear, this would "infect" every layer of every service. If the IaaS provided a DBaaS, the DB's query planning and execution layers would need to know about these limits and accounting mechanisms. Etc.
I'm not saying it's practical. But it's definitely possible, with no foreknowledge of cost.
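The mechanism in steps 1-4 can be sketched in a few lines. This is a toy model, not any real IaaS API: a meter tracks real-time spend against the caller's limit, an over-budget operation is cancelled with the caller charged exactly the limit, and an in-budget one is charged only its actual usage:

```python
# Toy sketch of the "gas" scheme: per-call credit limits with
# real-time accounting and cancel-on-overrun semantics.
class OutOfGas(Exception):
    pass

class Meter:
    """Tracks credits spent by an operation against a hard limit."""
    def __init__(self, limit):
        self.limit = limit
        self.spent = 0

    def charge(self, credits):
        self.spent += credits
        if self.spent > self.limit:
            raise OutOfGas  # cancel the operation mid-flight

def run_with_gas(account, limit, operation):
    """Run `operation(meter)`; bill the account at most `limit` credits."""
    meter = Meter(limit)
    try:
        result = operation(meter)
        account["balance"] -= meter.spent   # in budget: pay actual usage
        return result
    except OutOfGas:
        account["balance"] -= limit         # over budget: pay exactly the limit
        return None                         # no benefit from partial work

acct = {"balance": 100}
# A hypothetical operation needing 30 credits (3 steps x 10 each):
op = lambda m: [m.charge(10) for _ in range(3)] and "done"
print(run_with_gas(acct, 50, op), acct["balance"])   # done 70
print(run_with_gas(acct, 20, op), acct["balance"])   # None 50
```

The second call shows the provider "eating the overhang": the work done before cancellation exceeded 20 credits, but the caller is charged exactly their stated limit.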