I dunno, Aurora’s pricing structure feels an awful lot like that. “What if we made people pay for storage and I/O? And we made estimating I/O practically impossible?”
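The I/O component is the hard part to predict up front. As a back-of-the-envelope sketch (the rates below are my assumptions, roughly the published us-east-1 standard-configuration prices at the time of writing; check the current pricing page before relying on them):

```python
# Back-of-the-envelope Aurora storage + I/O cost sketch (compute excluded).
# Rates are ASSUMED, not authoritative -- verify against the pricing page.

STORAGE_PER_GB_MONTH = 0.10   # assumed $/GB-month
IO_PER_MILLION = 0.20         # assumed $/million I/O requests

def aurora_monthly_cost(storage_gb: float, io_requests: float) -> float:
    """Storage + I/O portion of the monthly bill."""
    return storage_gb * STORAGE_PER_GB_MONTH + (io_requests / 1e6) * IO_PER_MILLION

# The killer is the io_requests input: it depends on buffer pool hit rate,
# page size, and access patterns, which is exactly why estimating it in
# advance is practically impossible.
print(f"${aurora_monthly_cost(500, 2_000_000_000):.2f}")
```

Even with the rates pinned down, the formula is only as good as your I/O forecast.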
The coolest thing Aurora MySQL does, IMO, is keep the buffer pool across restarts. Not just a dump/restore on shutdown, but actual continued residency: the buffer pool is split out from the main mysqld process, so restarting mysqld doesn’t lose it.
But all in all, IMO it’s hideously expensive, doesn’t live up to its promises [0], and has some serious performance problems stemming from the decision to separate compute and storage by a large physical distance (and to simplify the storage protocol down to shipping redo log only). For example, the lack of a change buffer means that secondary indices have to be written synchronously. Similarly, since AWS isn’t stupid, instances have “node-local” temporary storage (it’s actually EBS), which is used to build the aforementioned secondary indices, among other things. Its size is not adjustable; it simply scales with the instance class. So if you have massive tables, you can quite easily run out of room in this temporary storage, which at best kills the DDL, and at worst crashes the instance.
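A rough feasibility check before running a big DDL makes the risk concrete. This is a toy sketch under my own assumptions (the sort/build needing on the order of the final index size in temp space, plus a safety factor); it is not an AWS-published formula:

```python
# Toy pre-flight check: will a secondary-index build plausibly fit in the
# instance's fixed local (temporary) storage? All numbers here are
# illustrative assumptions, not AWS-published sizing rules.

def index_build_fits(rows: int, avg_key_bytes: int,
                     local_storage_gb: float,
                     safety_factor: float = 2.0) -> bool:
    """Estimate index size and compare against local storage with headroom."""
    est_index_gb = rows * avg_key_bytes / 1e9
    return est_index_gb * safety_factor < local_storage_gb

# A 2-billion-row table with ~40-byte keys needs ~80 GB of temp space
# before overhead -- enough to exhaust a smaller instance class's local
# storage and kill the DDL (or worse).
print(index_build_fits(2_000_000_000, 40, local_storage_gb=100))
print(index_build_fits(100_000_000, 40, local_storage_gb=100))
```

The real fix, of course, is that you can’t adjust the storage without paying for a bigger instance.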
Even if you’re not currently hitting the performance limits of your existing engine, Aurora would maintain the same throughput and latency on smaller instance classes, which is where the potential cost savings come from...
On top of that, Aurora Serverless could yield significant cost savings for variable and unpredictable workloads.
You can get an insane amount of performance out of a well-tuned MySQL or Postgres instance, especially if you’ve designed your schema to exploit your RDBMS’ strengths (e.g. taking advantage of InnoDB’s clustering index to minimize page fetches for N:M relationships).
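To make the clustering-index point concrete, here’s a toy simulation: with a composite primary key on (a_id, b_id), InnoDB stores an N:M link table’s rows for a given a_id physically contiguously, so one lookup touches a handful of pages instead of dozens. Page size and layout here are deliberately simplified assumptions:

```python
# Toy model of page fetches for an N:M link table. InnoDB clusters rows
# by primary key, so PK (a_id, b_id) makes all rows for one a_id
# contiguous. A heap-ordered table scatters them. PAGE_ROWS is an
# illustrative assumption, not InnoDB's real page geometry.

import random

PAGE_ROWS = 100  # assumed rows per page

def pages_touched(rows, a_id):
    """Count distinct 'pages' holding rows for a_id, given a row order."""
    return len({i // PAGE_ROWS for i, (a, _) in enumerate(rows) if a == a_id})

rows = [(a, b) for a in range(1000) for b in range(50)]  # 50 links per a_id

clustered = sorted(rows)              # clustered by (a_id, b_id): contiguous
heap = rows[:]
random.shuffle(heap)                  # heap-ordered table: scattered

print(pages_touched(clustered, 42))   # one page holds all 50 rows
print(pages_touched(heap, 42))        # dozens of pages for the same rows
```

The same query does an order of magnitude less page I/O on the clustered layout, which is exactly the kind of schema-level win that keeps a plain MySQL instance fast.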
And if you really need high performance, you use an instance with node-local NVMe storage (and deal with the ephemerality, of course).
0: https://hackmysql.com/are-aurora-performance-claims-true/