Exactly, that's why I said that being able to prepay for a month ahead (or a similar period) is a good thing, as is anything else that makes billing predictable (such as no auto scaling).
> No more server time means your servers just vanish into the ether. It doesn't mean they start returning error pages of being unable to write to the database.
Come to think of it, that's probably the case for most people out there who just use one account/card for all of their resources.
What I described is indeed possible with a tiered approach: allotting most of your funds to keeping the DBs (with all of your data) alive, and whatever is left to the more dynamic components - VPSes that work as load balancers or host your APIs.
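To make the tiered idea concrete, here's a minimal sketch of the budgeting arithmetic: fund the DB first, then see how many dynamic nodes the remainder covers. All prices are made-up example numbers, not any provider's real rates.

```python
# Hypothetical sketch: reserve enough of the monthly budget to keep the
# database alive, then spend the remainder on dynamic nodes (load
# balancers, API hosts). Prices are illustrative only.

def tier_budget(monthly_budget: float, db_cost: float, node_cost: float) -> dict:
    if monthly_budget < db_cost:
        raise ValueError("budget cannot even keep the DB alive")
    remainder = monthly_budget - db_cost
    return {
        "db_funded": True,
        "dynamic_nodes": int(remainder // node_cost),  # whole nodes only
        "leftover": remainder % node_cost,             # unspent funds
    }

print(tier_budget(monthly_budget=100.0, db_cost=40.0, node_cost=15.0))
# {'db_funded': True, 'dynamic_nodes': 4, 'leftover': 0.0}
```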
Of course, there are plenty of ways to achieve something like that:

- multiple accounts with different virtual cards (which may or may not be allowed)
- using different service providers for different parts of the system (e.g. one with better storage plans, when latency isn't an issue because the data centers are in the same country)
- using multiple service providers for redundancy (hard to do for most DBs, easier for container clusters)
- using functionality provided by the platforms themselves (which may or may not exist when it comes to billing, even though DigitalOcean had a pretty lovely way of grouping resources into projects)
Come to think of it, that's probably a space that could use a lot of improvements.
> They don't reach into your servers and meddle with your nginx configuration. They don't shrink your disk volumes to the amount that's already in use. That wouldn't be possible.
This isn't even necessary. If platforms allowed me to say: "Here's a bunch of API nodes that I'll pay up to X$ for over the following month, and here's a DB node that I'll prepay in full with Y$ for the following month," then none of the other multi-cloud deployment strategies or tiering would even need to be considered.
If we want to consider situations with auto scaling or other types of dynamic billing, then we should also be able to say something along the lines of: "For those API nodes with my preconfigured init script, I want at least Z instances available with the given funds, whereas any remainder can be used for autoscaling up to W total nodes. Thus, if the allotted funds run out, everything returns to the prepaid/reserved minimum of Z instances for the rest of the billing period."
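The policy above is simple enough to sketch in a few lines. This is a hypothetical model, not a real cloud API: Z prepaid instances are always funded, the leftover budget autoscales up to W total nodes, and exhausted funds mean falling back to Z.

```python
# Hypothetical billing policy: min_nodes (Z) are prepaid and always
# available; remaining budget may autoscale up to max_nodes (W); when
# the autoscale funds are spent, we return to the prepaid minimum.
# Prices and parameters are illustrative only.

def instances_available(budget: float, min_nodes: int, max_nodes: int,
                        node_cost: float, spent_on_autoscale: float) -> int:
    reserved_cost = min_nodes * node_cost  # prepaid in full up front
    autoscale_funds = budget - reserved_cost - spent_on_autoscale
    if autoscale_funds <= 0:
        return min_nodes  # funds exhausted: back to the prepaid minimum
    extra = int(autoscale_funds // node_cost)
    return min(min_nodes + extra, max_nodes)

print(instances_available(100.0, 2, 10, 10.0, 0.0))   # fresh month: scale to the cap
print(instances_available(100.0, 2, 10, 10.0, 80.0))  # funds gone: back to Z
```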
AWS Lambda can charge you for usage at sub second resolutions and yet neither they nor other cloud platforms provide good resource prioritization solutions or fine grained billing limits, doing just alerts at best? I'm not sure why that is, but until things change, we'll just have to live with workarounds. Technically some of that would already be possible with something like Reserved Instances on AWS (https://aws.amazon.com/ec2/pricing/reserved-instances/), but that's still not granular enough IMO. Then you'd just have RIs and "everything else" as opposed to resource groups with spending limits.
Furthermore, if my Time4VPS server runs out of bandwidth, I'm not expected to pay more; the port speed just goes down until the end of the prepaid billing period. That sort of simplicity is one of the best current options, with minimal hassle. And if I ever can't afford to renew 10 API servers, then I'll just get 5 or whatever amount I can afford.
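The throttling behaviour described above amounts to a one-line rule: exceed the prepaid quota and the speed drops, rather than the bill growing. A toy model, with made-up speeds and quota rather than Time4VPS's actual numbers:

```python
# Toy model of quota throttling: once the prepaid bandwidth is used up,
# the port speed drops for the rest of the billing period instead of
# generating extra charges. Speeds/quota are illustrative only.

def port_speed_mbps(used_gb: float, quota_gb: float,
                    full: float = 1000.0, throttled: float = 10.0) -> float:
    return full if used_gb < quota_gb else throttled

print(port_speed_mbps(500, 1000))   # within quota: full speed
print(port_speed_mbps(1200, 1000))  # quota exceeded: throttled, not billed
```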
More platforms should be like that, though maybe at a day/hour/... resolution, and with APIs that let us decide for ourselves how often we want to renew services and for what periods - like the clever scripts you can find online for turning off AWS (or other) instances when not in use. Currently any fine-grained controls on those platforms have to be done programmatically, and it's not like you could easily manage ingress/egress costs that way.
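Client-side renewal control of that kind is mostly bookkeeping. A minimal sketch of the idea, where "renewing" is a stand-in for whatever real provider API call would extend the service (no actual provider API is assumed here):

```python
# Hypothetical client-side renewal control: given a prepaid budget and a
# daily price, renew one day at a time and stop cleanly when funds run
# out. The actual provider call is out of scope; this is the planning.

def renewal_days(budget: float, daily_cost: float) -> int:
    """How many whole days of service the budget can prepay."""
    if daily_cost <= 0:
        raise ValueError("daily_cost must be positive")
    return int(budget // daily_cost)

def spend_plan(budget: float, daily_cost: float):
    """Yield (day, remaining_budget) for each day we can afford."""
    day = 0
    while budget >= daily_cost:
        budget -= daily_cost
        day += 1
        yield day, round(budget, 2)

print(renewal_days(10, 3))       # 3 days of service
print(list(spend_plan(10, 3)))   # [(1, 7), (2, 4), (3, 1)]
```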