For simple reads, the SQL model carries significant overhead: MySQL benchmarks found that roughly 75% of the time for a primary-key lookup was spent parsing the SQL. For more complex querying, SQL databases can be fast... and they can also be extremely slow, and you can't tell which, for any given query, just by looking at it.
The much-vaunted consistency comes at a significant cost: index updates block writes, and, more insidiously, it's very easy to be surprised by a deadlock or by a stale transaction left behind by a long-running query. I've seen an SQL database stop committing any new writes because someone ran a seemingly innocuous query 23 days earlier. And much of the time - including every web use case I've seen - you can't actually make any real use of those consistency guarantees anyway.
Writing either a transformation pipeline that serves the same function as a secondary index, or a deliberate map-reduce-style aggregation, takes more up-front effort. But it means you understand far more clearly what's actually going on, and you're much less likely to hit that kind of unpleasant surprise.
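To make the trade concrete, here's a minimal sketch of what a deliberate map-reduce-style aggregation looks like, using hypothetical order records and plain Python in place of whatever store and framework you'd actually use. It does the same job as a SQL `SUM(...) GROUP BY user`, but every step is explicit: you can see exactly what data is read, what is emitted, and where the work happens.

```python
from collections import defaultdict

# Hypothetical order records; in a real system these would be scanned
# from the primary store or consumed off an event stream.
orders = [
    {"user": "alice", "total": 30},
    {"user": "bob", "total": 10},
    {"user": "alice", "total": 5},
]

# Map phase: emit one (key, value) pair per record.
mapped = ((order["user"], order["total"]) for order in orders)

# Reduce phase: fold the values for each key into an aggregate.
totals = defaultdict(int)
for user, amount in mapped:
    totals[user] += amount

print(dict(totals))  # {'alice': 35, 'bob': 10}
```

There's no query planner deciding anything on your behalf here, which is precisely the point: the cost model is the code you wrote.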