Or your dataset and query volume will never grow to the point where you actually need a more performant database, but that's a business problem, not a database problem.
Development teams that chase speed will soon realize that re-serializing the data a web page needs and sending it over the network costs a 5-50x drop in performance compared to having that data in a local data structure. And God forbid you have even minor packet loss (say 0.0000001%) on the network between your webserver and database server: your 99th-percentile latency will sprint for the 10s mark.
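The serialization part of that overhead is easy to demonstrate even without a network. Here's a quick (and admittedly crude) micro-benchmark of just the serialize/deserialize step, using `json` as a stand-in wire format and a made-up row; it ignores the round-trip entirely, so the real gap is much larger than what this shows.

```python
import json
import timeit

# A hypothetical "row" a page render might need.
row = {"id": 42, "name": "widget", "price": 9.99, "tags": ["a", "b", "c"]}
cache = {42: row}

# Local access: the data is already a live Python object in memory.
local = timeit.timeit(lambda: cache[42]["name"], number=100_000)

# A lower bound on remote access: just the serialize/deserialize step,
# with zero network latency added.
wire = timeit.timeit(lambda: json.loads(json.dumps(row))["name"], number=100_000)

print(f"serialization alone is ~{wire / local:.0f}x slower than local access")
```

On a typical machine the round-trip through `json` alone is well over an order of magnitude slower than the dict lookup, before a single byte touches the wire.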
And as the DBAs here said: most databases are in the single-digit GB range, with a few in the tens or hundreds of GB. SQLite will beat the crap out of any other solution at those sizes.
People don't realize how much tail latency affects maximum QPS. With a fixed pool of workers, throughput is inversely proportional to mean response time, so once you work through the numbers you see that tail latency is the enemy of webserver performance. A webserver that generates 100% of responses in 1ms can serve 1 million QPS (i.e. it can saturate a 10Gbps link at ~1.25KB per response). At 99% of responses in 1ms and 1% in 10ms (the bare minimum for a single query against a database that isn't local to your machine), you're down to roughly 900k. More typical database figures are 20ms average and 600ms for the worst 1%, which leaves you around 30k QPS. In this example, using a database cost you 97% of your original performance.
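The first step of that arithmetic can be sketched with a simple fixed-concurrency model, where QPS = workers / mean latency. The worker count of 1000 is my assumption, chosen so the uniform-1ms baseline lands at 1 million QPS; real servers also suffer queueing effects this toy model leaves out.

```python
# Toy model: a fixed pool of workers, each handling one request at a time,
# so QPS = WORKERS / mean_latency_seconds.
WORKERS = 1000  # assumption: sized so the 1ms baseline gives 1M QPS

def qps(latency_mix):
    """latency_mix: list of (latency_seconds, fraction) pairs summing to 1.0."""
    mean = sum(latency * fraction for latency, fraction in latency_mix)
    return WORKERS / mean

baseline = qps([(0.001, 1.0)])                   # every response in 1ms
mixed = qps([(0.001, 0.99), (0.010, 0.01)])      # 1% of responses take 10ms

print(f"baseline: {baseline:,.0f} QPS, with 1% tail at 10ms: {mixed:,.0f} QPS")
```

The mean only moves from 1ms to 1.09ms, yet throughput drops from 1,000,000 to about 917,000 QPS, which is where the ~900k figure comes from. Just 1% of slow responses already ate close to a tenth of the machine's capacity.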
That 97% figure is perfectly normal. So if you want decent performance, a database server in the serving path is simply not in the cards.