Someone at my org used their main company email address for the root user on an account we just closed, and a second company email for our current account. We are past the window in which AWS allows reverting the account deletion.
This now means that he isn’t allowed to use SSO via our external IdP because the email address he would use is forever attached to the deleted AWS account root user!
AWS support was rather terrible in providing help.
We've tried talking to everyone we can (opening tickets, chats, trying to reach their assigned account rep, etc.), and no one can remove the MFA. Luckily they have other admin accounts right now, but we straight up can't access their root account. We might have to nuke the entire environment and create a new account, which is VERY lame considering they have a complicated and well-established AWS account.
They treat it like the organization is attempting to commandeer someone else's account, so all the privacy protections you'd expect for your own stuff are applied, no matter how much you can prove it is not some other individual's account.
The best part is the billing issues that arise from that. In your example, if the previous engineer logged into that account (because they can) and racked up huge costs, then assuming that account is billed to or can be tied to your client, Amazon will demand your client pay for it, while at the same time refusing to assist in getting access to the account because it's someone else's. They hold you responsible, but leave you unable to act responsibly.
I know that that's not ideal, but as a practical matter perhaps it would be easier than creating a new account, if you can get the engineer to agree to it?
It's really nice when you have to hire someone new for the position. You add them to the DL and they're automatically in control of all those accounts.
I have no idea why more companies don't do this.
It's still on Amazon to clearly tell people it works this way so they can properly plan for it, but employees' email addresses really shouldn't be used for the root account.
And on the flip side, I can easily see why not allowing email addresses to be reused is a reasonable security stance: email addresses are immutable identifiers, so limiting each one to a single identity seems logical.
Sounds quite frustrating for this user of course but I guess it sounds a bit silly to me.
I'm not arguing that it was impossible to know the long term outcome here, but it doesn't mean it isn't frustrating. If you've spent any length of time working in AWS, you know that documentation can be difficult to find and parse.
I can certainly understand why the policy exists. What I think should be possible is in these situations to provide proof of ownership of the old email address so it can be released and reused somehow.
Have you ever worked in a company of any size or complexity before?
1. Multiple accounts at the same company, spun up by different teams (either different departments, regions, operating divisions, or whatever) and eventually they want to consolidate
2. Acquisitions: Company A buys Company B, an admin at Company A takes over AWS account for Company B, then they eventually work on consolidating it down to one account
1. Use "admin@domain.com"
2. Let the domain registration lapse
3. Someone else registers the domain and now can't create an AWS account.
Rare but not impossible.
AWS has been around for quite a while now. It’s also not impossible to believe that there are companies out there that might have moved from aws to gcp or something, and maybe it’s time to move back.
When I started, AWS was in its infancy and I was just some guy working on a special project.
Now that same account is bound into an AWS Organization.
AWS changed. My company changed. The policies changed out from under you.
If they are actually deleting the account in the background, and so no longer have a record of that e-mail address, then they must allow re-activation of an account tied to that e-mail address using the sign-up process.
The author probably misunderstood what "account name" is in Azure Storage's context, as it's pretty much the equivalent of S3's bucket name, and is definitely still a large concern.
A single pool of unique names for storage accounts across all customers has been a very large source of frustration, especially with the really short name limit of only 24 characters.
I hope Microsoft follows suit and introduces a unique namespace per customer as well.
I've never really understood S3's determination not to have a v2 API. Yes, the v1 API would need to stick around for a long time, but there are ways to encourage a migration, such as putting all future value-add on v2, and maybe eventually introducing marginal increases in v1 API costs to cover the dev work involved in maintaining the legacy API. Instead they've just let themselves, and their customers, deal with avoidable pain.
Storage accounts are one of the worst offenders here. I would really like to know what kind of internal shenanigans are going on there that prevent dashes from being used within storage account names.
And with no meaningful separator characters available! No dashes, underscores, or dots; numbers and lowercase letters only. At least S3 and GCS allow dashes, so you can put a little organization prefix on them or something and not look like complete gibberish.
This approach goes a long way toward democratizing the name space, since nobody can "own" the tag prefix. (10000 people can all share it). This can also be used to prevent squatting and reuse attacks - just burn the full account name if the corresponding user account is ever shut down. And it prevents early users from being able to snap up all the good names.
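The shared-prefix-plus-discriminator idea can be sketched as a toy in Python (nothing here reflects any real service's implementation; the 4-digit discriminator and the `#` separator are just illustrative choices):

```python
import secrets

class NameRegistry:
    """Toy registry where many users can share a display name,
    each distinguished by a random 4-digit discriminator."""

    def __init__(self):
        self.taken = set()  # (name, discriminator) pairs already issued

    def register(self, name):
        # Retry random discriminators until a free one is found.
        for _ in range(10_000):
            disc = f"{secrets.randbelow(10_000):04d}"
            if (name, disc) not in self.taken:
                self.taken.add((name, disc))
                return f"{name}#{disc}"
        raise ValueError(f"all discriminators for {name!r} are taken")

registry = NameRegistry()
handle_a = registry.register("bob")
handle_b = registry.register("bob")
# Both users get to be "bob", but their full handles differ.
```

Burning a full `(name, discriminator)` pair on account shutdown is then just a matter of never removing it from `taken`.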
Their stated reason[1] for doing so being:
> This lets you have the same username as someone else as long as you have different discriminators or different case letters. However, this also means you have to remember a set of 4-digit numbers and account for case sensitivity to connect with your friends.
[1]: https://support.discord.com/hc/en-us/articles/12620128861463...
Imagine trying to connect with your friends... by telephone.
For buckets I thought easy to use names was a key feature in most cases. Otherwise why not assign randomly generated single use names? But now that they're adding a namespace that incorporates the account name - an unwieldy numeric ID - I don't understand.
In the case of buckets isn't it better to use your own domain anyway?
Also, if you have a bunch of accounts, it's far easier for troubleshooting that the accountId is in the name: "I can't access bucket 'foo'" vs. "I can't access bucket 'foo-12345678901'"
I think for a larger public service it would make sense to expose some sort of internal ID (or a hash of it). Which Bob am I talking to? But people share the same name all the time; it is strange that we can't in our online communities.
It won't surprise you the scheme never caught on and has been decommissioned (you can now register any available domain as an individual as well). The difference is probably few people use a personal TLD, but many use a name on some social media.
I'm excited for IaC tools like Terraform to incorporate this as their default behavior soon! The default behavior of Terraform and co. is already to add a random hash suffix to the end of the bucket name to prevent such errors. That becoming standard practice has itself saved me days of not having to convince others to adopt such strategies prior to automation.
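For illustration, the random-suffix convention amounts to something like this (a Python sketch of the pattern, not Terraform's actual implementation; the prefix and suffix length are arbitrary choices):

```python
import secrets

def unique_bucket_name(prefix, suffix_bytes=4):
    """Append a random hex suffix (the same idea as Terraform's
    random_id pattern) so two deployments sharing a prefix can't
    collide on the globally unique bucket name."""
    name = f"{prefix}-{secrets.token_hex(suffix_bytes)}"
    if len(name) > 63:  # S3 caps bucket names at 63 characters
        raise ValueError(f"{name!r} exceeds the 63-character limit")
    return name

name = unique_bucket_name("myapp-logs")  # e.g. 'myapp-logs-a1b2c3d4'
```

Four random bytes give eight hex characters, which is cheap insurance against both collisions and the reuse attacks the article describes.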
[1] https://aws.amazon.com/blogs/aws/introducing-account-regiona...
GCP, however, has done this to itself multiple times because they rely so heavily on project IDs, most recently just this February: https://www.sentinelone.com/vulnerability-database/cve-2026-...
When a name becomes free and somebody else uses it, it points to another thing. What that means for consumers of the name depends on the context, most likely it means not to use it. If you yourself reassign the name you can decide that the new thing will be considered to be identical to the old thing.
https://docs.aws.amazon.com/AmazonS3/latest/userguide/Virtua...
“While account IDs, like any identifying information, should be used and shared carefully, they are not considered secret, sensitive, or confidential information.” https://docs.aws.amazon.com/accounts/latest/reference/manage...
But probably best to not advertise it too much.
https://medium.com/@TalBeerySec/a-short-note-on-aws-key-id-f...
Once they are not renewed, they eventually become available again. Then anyone can re-register them, set up an MX record, and start receiving any emails still being sent to recipients in that domain. This could include password reset authentications for other services, etc.
[0] https://docs.aws.amazon.com/AmazonS3/latest/userguide/bucket...
It's pretty clear this type of ID'ing is an issue in general, unfortunately. Companies often patch it for themselves as a band-aid, but not for others in the same situation, since the problems aren't frequent enough to warrant a design change. So when it does become a problem, it can end up being a deep one.
My pet conspiracy theory: this article was written by bucket squatters who want to claim old bucket names after AI agents read this and blindly follow.
myapp-123456789012-us-west-2-an
vs myapp.123456789012.us-west-2.s3.amazonaws.com
The manipulations I will need to do to fit into the 63 char limit will be atrocious.
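To put numbers on the squeeze: with a 12-digit account ID and a region embedded, the fixed parts alone eat 23 of the 63 characters. A rough sketch (the naming convention itself is hypothetical, only the 63-character S3 limit is real):

```python
def scoped_bucket_name(app, account_id, region, limit=63):
    """Build '<app>-<account_id>-<region>' and enforce the 63-char
    S3 bucket-name limit, showing how little room the fixed parts
    leave for the application prefix."""
    fixed = f"-{account_id}-{region}"   # e.g. '-123456789012-us-west-2'
    budget = limit - len(fixed)         # characters left for the prefix
    if len(app) > budget:
        raise ValueError(f"prefix {app!r} must be at most {budget} chars")
    return f"{app}{fixed}"

name = scoped_bucket_name("myapp", "123456789012", "us-west-2")
# '-123456789012-us-west-2' is 23 characters, leaving only 40
# for the application prefix.
```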
This is where IaC shines.
Edit: crossout incorrect info
"Leak" is maybe a bit of an exaggeration, although if someone MitM'd you they'd definitely be able to see it. But "leak" makes it seem like it's broadcast somehow, which obviously it isn't.
If anyone wants them to be user-facing resources, then treat them as such: ensure they're secure and don't store sensitive info on them. Otherwise, put a service in front of them and have the user go through it.
The S3 protocol was meant to make the lives of programmers easier, not end users.
* Backwards compatible
* Keeps readability
* Solves problem
Namespaces are annoying but at least let you reorganize or fix mistakes. If you want to prevent squatting, rate limiting creation and deletion or using a quarantine window is more practical. No recovery path just rewards trolls and messes with anyone whose processes aren't perfect.
a) AWS will need to maintain a database of all historical bucket names to know what to disallow. This is hard per region and even harder globally. It's easier to know what is currently in use than to know what has been used historically.
b) Even if they maintained a database of all historically used bucket names, then the latency to query if something exists in it may be large enough to be annoying during bucket creation process. Knowing AWS, they'll charge you for every 1000 requests for "checking if bucket name exists" :p
c) AWS builds many of its own services on S3 (as indicated in the article) and I can imagine there may be many of their internal services that just rely on existing behaviour i.e. allowing for re-creating the same bucket name.
As for c), I assume it's not just AWS relying on this behaviour. https://xkcd.com/1172/
Not to mention the ergonomics would suck - suddenly your terraform destroy/apply loop breaks if there’s a bucket involved
I think that's an important defense that AWS should implement for existing buckets, to complement account-scoped buckets.
See also their recent innovation of letting you be logged into the console with up to five (???) of the many accounts their bizarre IAM system requires, implemented with a clunky system of redirections and magic URL prefixes. As opposed to GCP just having a sensible system in the first place of projects with permissions, and letting you switch between any of them at will using the same user account.