If you (more commonly) run the database on a different host, you’re looking at more like 20x-50x, and that’s assuming you’re in the same region.
To even get close to the SQLite level of throughput you’d have to stagger/pipeline your requests over a number of concurrent connections (proportional to the earlier x).
You’ll eventually succeed at doing just that, with multiple machines. Congratulations, but now you also have to consider the N+1 problem, even for small Ns. In SQLite, you can afford much more back-and-forth with simple queries.
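To make the N+1 point concrete, here’s a minimal sketch (hypothetical `authors`/`posts` schema, in-memory DB for self-containment): the classic anti-pattern of one query per row, which is ruinous at network RTTs but cheap when each round trip is microseconds in-process.

```python
import sqlite3
import time

conn = sqlite3.connect(":memory:")  # in-memory just to keep the sketch self-contained
conn.executescript("""
    CREATE TABLE authors (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE posts   (id INTEGER PRIMARY KEY, author_id INTEGER);
""")
conn.executemany("INSERT INTO authors VALUES (?, ?)", [(i, f"a{i}") for i in range(1000)])
conn.executemany("INSERT INTO posts VALUES (?, ?)", [(i, i % 1000) for i in range(1000)])

start = time.perf_counter()
# Classic N+1: one query for the posts, then one extra query per post.
posts = conn.execute("SELECT id, author_id FROM posts").fetchall()
for _, author_id in posts:
    conn.execute("SELECT name FROM authors WHERE id = ?", (author_id,)).fetchone()
elapsed = time.perf_counter() - start
print(f"{len(posts)} extra round trips took {elapsed * 1e3:.1f} ms")
```

At a 1 ms networked RTT the same loop would take a second; in-process it’s typically a few milliseconds total.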
* He’s comparing an in-memory SQLite DB against a non-temp table in Postgres. Of course the RTT is lower when you don’t have to interact with the filesystem or flush to disk in the critical path.
* He’s connecting to Postgres over a loopback interface, instead of using a UNIX socket. Of course the RTT is higher when requests and responses needlessly traverse the TCP/IP stack.
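For reference, switching a client from loopback TCP to the UNIX socket is usually just a connection-parameter change. A sketch using psycopg2 (assumes a local Postgres with its default socket directory; the `bench` database name and paths are placeholders):

```python
# Illustrative connection parameters only -- adjust dbname, port,
# and socket directory for your installation.
import psycopg2

# Over loopback TCP: requests and responses traverse the TCP/IP stack.
tcp_conn = psycopg2.connect(host="localhost", port=5432, dbname="bench")

# Over a UNIX domain socket: host is the socket *directory*, not a hostname.
sock_conn = psycopg2.connect(host="/var/run/postgresql", dbname="bench")
```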
That’d surprise me; where does he say that? (Even if he doesn’t, it’s important to iron out, so thanks for being critical!) The ~18us sounds reasonable from my own experiments, and that’s with flushes to disk (using WAL mode).
EDIT: I think I ran my benchmarks with synchronous=normal. Not sure what he did.
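A rough version of that experiment, for anyone who wants to reproduce the order of magnitude (on-disk DB, WAL mode, synchronous=NORMAL as mentioned above; absolute numbers will vary by machine and filesystem):

```python
import os
import sqlite3
import tempfile
import time

path = os.path.join(tempfile.mkdtemp(), "bench.db")
conn = sqlite3.connect(path)
conn.execute("PRAGMA journal_mode=WAL")       # write-ahead logging
conn.execute("PRAGMA synchronous=NORMAL")     # fsync at checkpoints, not every commit
conn.execute("CREATE TABLE kv (k INTEGER PRIMARY KEY, v TEXT)")
conn.execute("INSERT INTO kv VALUES (1, 'hello')")
conn.commit()

n = 10_000
start = time.perf_counter()
for _ in range(n):
    conn.execute("SELECT v FROM kv WHERE k = 1").fetchone()
per_query_us = (time.perf_counter() - start) / n * 1e6
print(f"~{per_query_us:.1f} us per point query")
```

Point reads don’t hit the fsync path at all, so the write-side pragmas mostly matter for a mixed workload; they’re included here to match the setup described.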
> He’s connecting to Postgres over a loopback interface, instead of using a UNIX socket.
I agree UNIX sockets would be better for a same-host setup, but still, the more representative number for “typical use” would be networked, no?
Also, for fairness: SQLite setups come with downtime that’s hard to avoid. Worth mentioning.