You absolutely must, MUST, MUST be using separate AWS accounts for separate purposes. You can have as many as you’d like and roll up the billing into one actual paying account.
This is a win for accountability (roll up dev and easily see the split out for separate environments), but more importantly for security as it limits the blast radius for any one environment. Combined with per-account budget alerts it’s a win across the board.
Does it make sense for one team to have 10+ AWS accounts per service because 'security'? How about if each team out of 1000s in your company has 10 AWS accounts per service?
We run our service in 3 geographic regions and have a separate AWS account for each region and stage, even though each account can support resources in multiple regions. Considering that we have ~4 services, that's roughly 40 AWS accounts for just one team of fewer than 10 people.
What I'm describing above is the 'best practice' way to manage AWS accounts at scale. It is insane and saying 'security' does not magically make this reasonable.
Then I learned that because they save it all browser-side, I had to rebuild the whole menu whenever I first used a new browser or computer? Whaaaat? Of all people, AWS console users are highly likely to be using multiple devices/browsers. Having to recreate your own prefs in each new environment is nuts.
Plus you have to look up the account id in order to set it up initially.
While security and UX are often in tension, in this case they don't have to be. It would not be that hard to let you be signed into multiple accounts and switch seamlessly between them: allow tagging of each account, so that you can say, effectively, "show me dev us-east-1" vs. "show me us-east-1" vs. "show me dev", slicing and dicing across accounts that way. At that point, separating infra across accounts becomes semantically meaningful, and you can slice it whatever way seems best: a full account for a single service, or an environment, or a region, or a combination of those (only service-Foo in us-east-1 for dev). Whatever level of granularity you want. The trade-off then becomes what the actual UX cost should be, the security of isolation vs. the convenience of colocation: infra in the us-east-1 account has a harder time communicating with infra in the us-west-1 account.
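The tag-and-filter idea above could be sketched like this. Everything here is hypothetical (the account IDs, tag names, and `show_me` helper are made up for illustration; no real AWS API exposes this today, which is the point):

```python
# Hypothetical sketch: tag each AWS account, then "slice and dice" by tag.
# Account IDs and tags below are placeholders, not real accounts.
accounts = [
    {"id": "111111111111", "tags": {"env": "dev",  "region": "us-east-1", "service": "foo"}},
    {"id": "222222222222", "tags": {"env": "prod", "region": "us-east-1", "service": "foo"}},
    {"id": "333333333333", "tags": {"env": "dev",  "region": "us-west-1", "service": "bar"}},
]

def show_me(**wanted):
    """Return IDs of accounts whose tags match every key/value given."""
    return [a["id"] for a in accounts
            if all(a["tags"].get(k) == v for k, v in wanted.items())]

print(show_me(env="dev", region="us-east-1"))  # ['111111111111']
print(show_me(env="dev"))                      # ['111111111111', '333333333333']
```

"Show me dev us-east-1" is just `show_me(env="dev", region="us-east-1")`; dropping a key widens the slice.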
Users log in to the Build Infra account and then Assume Role into the others. There's a list of magic links that does the assume role. There's also a list that is added to ~/.aws/config that does the equivalent: they configure one IAM key, and the rest are assumed automatically by the CLI or client libraries. (Requires relatively recent client libraries; Java only started supporting this within the last year or two.)
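For the ~/.aws/config part, the cross-account profiles look roughly like this. `role_arn` and `source_profile` are the real config keys; the account IDs, role names, and profile names are placeholders:

```ini
# ~/.aws/config (sketch; IDs and names are placeholders)
[profile build-infra]
region = us-east-1
# the one IAM key for this profile lives in ~/.aws/credentials

[profile dev-us-east-1]
role_arn = arn:aws:iam::111111111111:role/Developer
source_profile = build-infra

[profile prod-us-east-1]
role_arn = arn:aws:iam::222222222222:role/Developer
source_profile = build-infra
```

With this in place, `aws s3 ls --profile dev-us-east-1` assumes the role automatically, and recent SDKs resolve the profile chain the same way.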
You can set budgets by project and easily allocate costs and address accountability issues across teams or products.
The value depends on how you operate.
I can see how starting with a pattern of "account per X" would create intuitive boundaries. When you say "per service", what kind of service do you mean? Business-related web service API? AWS product? Other? Interested in what boundary line made sense for you, given the large number of accounts you say you're happy with using.
Really soured me on the setup, tbh.
If someone acquires credentials, they are usually multi-use and long-term. And it can go unnoticed if an EC2 instance is spun up running crypto mining on your dime, only for you to notice at the end of the day that your estimated bill has shot through the roof.