As a business it was always an ambitious effort, and I'm not sure what could or should have been done differently. But since then I've used a number of other systems and thought to myself "boy, I wish I had FDB right now."
Whether Apple's leadership agrees with me is another question. :)
https://www.voltdb.com/blog/foundationdbs-lesson-fast-key-va...
My feelings on this topic are mixed. On the one hand, I think many of the specific examples chosen in that post are false (and have told John as much in person). On the other hand, the general point, that you can squeeze out constant-factor performance improvements by violating abstraction boundaries, is of course often true.
Nevertheless, I still think this is a bad argument. While it's true that abstractions are rarely costless, they can often be made so cheap that the low-hanging performance fruit is elsewhere. And in particular, cheap enough that they're worth it when you consider all the other benefits that they bring.
When I built a query language and optimizer on top of FoundationDB, my inability to push type information down into the storage engine was about the last thing on my mind. Perhaps someday when I'd made everything else perfect it would've become a big pain (and perhaps someday we would've provided more mechanisms for piercing the various abstractions and providing workload hints), but in the meantime partitioning off all state in the system into a dedicated component that handled it extremely well made the combined distributed systems problem massively more tractable. The increased developer velocity and reduced bugginess in turn meant that I (as a user of the key-value store) could spend scarce engineering resources on other sorts of performance improvements that more than compensated for the theoretical overhead imposed by the abstraction.
I won't claim that a transactional ordered key-value store is the perfect database abstraction for every situation, but it's one that I've found myself missing a great deal since leaving Apple.
But I'm glad to hear that things are going well for you guys. Best of luck, this is a brutal business!
We are happily using them to implement and optimize our local storage on top of LMDB (another awesome database). However, these approaches could be applied to any other key-value database that offers transactions and keys stored in lexicographic order.
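For anyone unfamiliar with why lexicographic key order matters here: it makes prefix scans contiguous range reads. Below is a minimal stdlib-only Python sketch of that idea; it's a toy stand-in, not LMDB's actual API, and all names are illustrative.

```python
import bisect

class OrderedKV:
    """Toy ordered key-value store: keys are bytes, kept sorted."""

    def __init__(self):
        self._keys = []   # sorted list of keys
        self._data = {}

    def put(self, key: bytes, value: bytes):
        if key not in self._data:
            bisect.insort(self._keys, key)
        self._data[key] = value

    def scan_prefix(self, prefix: bytes):
        # Because keys are stored in lexicographic order, all keys sharing
        # a prefix form one contiguous range: seek to the first key >= the
        # prefix, then iterate until keys stop matching it.
        i = bisect.bisect_left(self._keys, prefix)
        while i < len(self._keys) and self._keys[i].startswith(prefix):
            yield self._keys[i], self._data[self._keys[i]]
            i += 1

db = OrderedKV()
db.put(b"user/1/name", b"ada")
db.put(b"user/1/email", b"ada@example.com")
db.put(b"user/2/name", b"bob")
print(list(db.scan_prefix(b"user/1/")))
```

A real store like LMDB or FoundationDB gives you the same shape of operation via a cursor seek followed by sequential iteration, which is why so many higher-level data models (tables, indexes, document stores) layer cleanly on top.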
I have some concerns about CockroachDB on both the performance and the reliability fronts. But I hugely admire what they're trying to do and I've heard that they're rapidly improving in both areas. TiDB is an exciting project that I've heard great things about but have never tried myself. I think it's also relatively immature.
Honestly if I were starting a project right now and had neither FDB nor Spanner available to me, I'd probably try to push Postgres as far as I possibly could before considering anything else.
But if one does need a bit more horizontal scalability, there don't seem to be many options if you also want atomic, transactional updates (though not necessarily strict transaction isolation). I have an app that is conceptually a versioned document store, where each document is the sum of all its "patches"; when you submit a batch of patches, the rule is that they are applied atomically and that the latest version of the document thereafter reflects your patches (optimistic locking and retries take care of serialization and concurrency conflicts). I'm using PostgreSQL right now, which does this beautifully but with limited scalability. I've looked for a better option, but haven't come up with anything.
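To make the pattern concrete, here's a small self-contained Python sketch of the commit discipline I mean: a hypothetical in-memory store (not my actual PostgreSQL schema) where a patch batch commits only if the document's version hasn't moved, and the caller retries on conflict.

```python
import threading

class DocStore:
    """Hypothetical versioned document store: a document is the sum of
    its patches, and a batch commits atomically via compare-and-set."""

    def __init__(self):
        self._lock = threading.Lock()   # stands in for the DB's atomicity
        self._docs = {}                 # doc_id -> (version, [patches])

    def read(self, doc_id):
        version, patches = self._docs.get(doc_id, (0, []))
        return version, list(patches)

    def commit(self, doc_id, expected_version, batch):
        # Optimistic locking: succeed only if nobody committed in between.
        with self._lock:
            version, patches = self._docs.get(doc_id, (0, []))
            if version != expected_version:
                return False            # conflict: caller must retry
            self._docs[doc_id] = (version + 1, patches + list(batch))
            return True

def submit_patches(store, doc_id, batch, max_retries=5):
    # Retry loop: re-read the current version and re-attempt the commit.
    for _ in range(max_retries):
        version, _ = store.read(doc_id)
        if store.commit(doc_id, version, batch):
            return True
    return False

store = DocStore()
submit_patches(store, "doc1", ["p1", "p2"])
submit_patches(store, "doc1", ["p3"])
print(store.read("doc1"))
```

In PostgreSQL the compare-and-set is just an `UPDATE ... WHERE version = $expected` whose affected-row count tells you whether to retry; the hard part is getting the same atomicity once the data no longer fits one node.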
Redis would handle this, but only by virtue of being single-threaded, and I don't feel Redis is safe as a primary data store for anything except caches and the like. Cassandra might do it using atomic batches, although its lack of isolation could be awkward to work around.