This is why BigQuery offers both models and lets you control the caps [1].
Buying fixed compute is effectively buying a throughput cap. Hard quotas serve a similar function, but they aren't a useful budgeting tool if you can't set them yourself.
"Serverless" without limits is basically "infinite throughput, infinite budget" (though App Engine has had quotas since day one, and gained budgets once charging was added). The default quotas give you some of that budget/throughput capping, but again, if you can't lower them they may not help you.
Either way, BQ won't drop ingestion or storage, because almost nobody wants their data deleted. As a provider, implementing strict budgets is impossible without a fairly complex policy: "if over $X/second, stop all activity; oh, except still let me do admin work, like adding indexes. Over $Y/second, delete everything." I think having user-adjustable quotas and throughput caps per "dimension" makes more sense, but it puts the burden on the user, and no provider offers good enough user control over quotas.
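To make the per-dimension idea concrete, here's a minimal sketch of what a user-adjustable throughput cap could look like, modeled as a token bucket per billing dimension. `DimensionCap`, the dimension names, and the rates are all hypothetical; no provider exposes this API today.

```python
import time

class DimensionCap:
    """Token-bucket throughput cap for one billing dimension.

    rate_per_sec is the user-chosen cap (e.g. dollars of query spend,
    bytes ingested); work over the cap is rejected, not billed.
    """
    def __init__(self, rate_per_sec, burst):
        self.rate = rate_per_sec
        self.capacity = burst      # max tokens the bucket can hold
        self.tokens = burst        # start full
        self.last = time.monotonic()

    def try_consume(self, cost):
        # Refill tokens for the elapsed time, capped at capacity.
        now = time.monotonic()
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if cost <= self.tokens:
            self.tokens -= cost
            return True            # within the cap: do the work
        return False               # over the cap: reject instead of billing

# Hypothetical per-dimension caps a user might configure:
caps = {
    "query_dollars": DimensionCap(rate_per_sec=0.10, burst=5.0),
    "ingest_bytes":  DimensionCap(rate_per_sec=50e6, burst=200e6),
}

print(caps["query_dollars"].try_consume(1.0))    # fits within the burst
print(caps["query_dollars"].try_consume(100.0))  # would blow the cap
```

The point of the sketch is that rejecting work at a user-set rate is simple and predictable, whereas a true dollar budget needs the messy tiered policy described above.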
tl;dr: true budgets are hard to do, but every provider should strive to offer better quota/throughput controls.
[1] https://cloud.google.com/bigquery/pricing
[2] https://cloud.google.com/bigquery/docs/reservations-workload...