If you want B-trees that compete with LSM trees, stratified B-trees[1] are pretty sweet. There's an in-kernel GPL implementation called Castle[2] on GitHub.
[1] http://www.slideshare.net/acunu/20110620-stratifiedbtree
Maybe we can make the configuration so convoluted that we can base entire companies and revenue models around maintaining them.
Those would be people who couldn't be bothered to learn a really simple declarative language, and prefer ad-hoc OO ORMs and other shit like that...
There are more interesting structures, like fractal tree indices, which are cache-oblivious in the formal sense of the term.
My own opinion is that B-trees are nice, but not very effective in today's world. Structures like LSM trees can be more effective. And you can construct high-efficiency search structures over the runs of an LSM tree that mimic B-trees.
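The LSM idea above can be sketched in a few lines. This is a toy in-memory version with made-up names; real implementations add compaction, bloom filters, and on-disk runs:

```python
import bisect

class TinyLSM:
    """Toy LSM tree: a small mutable buffer plus immutable sorted runs.
    Illustrative only -- real LSM trees compact runs and store them on disk."""

    def __init__(self, buffer_limit=4):
        self.buffer = {}        # in-memory "memtable"
        self.runs = []          # newest-first list of sorted, immutable runs
        self.buffer_limit = buffer_limit

    def put(self, key, value):
        self.buffer[key] = value
        if len(self.buffer) >= self.buffer_limit:
            # Flush: the buffer becomes one sorted run (a sequential write).
            self.runs.insert(0, sorted(self.buffer.items()))
            self.buffer = {}

    def get(self, key):
        if key in self.buffer:
            return self.buffer[key]
        for run in self.runs:   # newest run wins on duplicates
            keys = [k for k, _ in run]
            i = bisect.bisect_left(keys, key)
            if i < len(keys) and keys[i] == key:
                return run[i][1]
        return None
```

The binary search over each sorted run is the simplest of the "search structures over runs" mentioned above; fence pointers or per-run indexes are the next step up.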
At least they are not stubborn for long, I give them that.
It's pretty hard to beat "expected O(1)".
If the hype is to be believed, cache-oblivious B-trees are an even better match for today's hierarchical memory systems.
Unfortunately, I don't know any simple free/open-source implementation of these.
But here is a "home page" for them: http://supertech.csail.mit.edu/cacheObliviousBTree.html
I think B-Trees are mostly important for educational purposes, since this is a very important general way of organizing data. For real world usage, there are hundreds of different structures optimized for different use cases.
Here is a B-tree implementation from the Mirage project: [1] https://github.com/cgreenhalgh/ocaml-btree/blob/master/lib/b...
EDIT: removed link to binary tree after kerneis pointed it out.
Despite the name, I'm afraid this is a simple binary tree implementation, not a BTree:
type 'a tree = Empty | Node of('a * 'a tree * 'a tree);;
Obviously the author didn't know what he was doing (the intended scope was much larger: AVL trees, B-trees, etc., but he stopped after committing this single file).
And in fact, this implementation does not show a lot because the actual B-tree work is done by the baardskeerder library: https://github.com/Incubaid/baardskeerder
It looks very large and complex; it should certainly be possible to write a simpler, pedagogical implementation.
There are variants such as the B+tree ( http://en.wikipedia.org/wiki/B%2B_tree ), which stores only keys in internal nodes and chains the leaf blocks together - more efficient for range scans, somewhat less so for single-key retrieval; and the B*-tree ( http://en.wikipedia.org/wiki/B*-tree ), which keeps its nodes more densely packed.
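The range-scan advantage of chained leaves is easy to see in a sketch. This shows only the leaf chain (internal index nodes are omitted, and the names here are illustrative): you locate the first matching key once, then just follow next-pointers.

```python
import bisect

class Leaf:
    """A B+-tree leaf: sorted keys plus a pointer to the next leaf."""
    def __init__(self, keys, nxt=None):
        self.keys = keys
        self.next = nxt

def range_scan(first_leaf, lo, hi):
    """Yield all keys in [lo, hi] by walking the leaf chain.
    One binary search to find the start, then purely sequential access."""
    leaf = first_leaf
    while leaf is not None:
        start = bisect.bisect_left(leaf.keys, lo)
        for k in leaf.keys[start:]:
            if k > hi:
                return
            yield k
        leaf = leaf.next
```

In a plain B-tree the same scan would have to repeatedly climb back up through interior nodes, which is the efficiency difference the comment refers to.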
Another case where caching plays a huge role in determining the most efficient data structure (See vector vs list [1]).
[0]: http://en.wikipedia.org/wiki/Red%E2%80%93black_tree#Analogy_...
[1]: http://baptiste-wicht.com/posts/2012/12/cpp-benchmark-vector...
Sounds like a waste of space, but it solves a bunch of problems: it's crash-proof (since we never overwrite anything, it's always possible to just find the last root), and it allows reads concurrent with a write without any synchronization necessary. Plus, it's actually well suited to SSDs due to its log-like structure!
CouchDB B-trees: http://guide.couchdb.org/draft/btree.html
Log structures in SSDs: http://lwn.net/Articles/353411/
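The "find the last root" recovery trick can be sketched like this. It's a toy in-memory log with hypothetical record shapes, not CouchDB's actual file format: because nothing is overwritten, a crash mid-write at worst leaves a torn record at the tail, which recovery simply skips.

```python
import json

def append_record(log, record):
    """Append one JSON record to the append-only log.
    The newest complete 'root' record is the authoritative tree root."""
    log.append(json.dumps(record))

def recover_root(log):
    """Scan backwards for the newest valid root record.
    A partially written (torn) record at the tail is harmless."""
    for line in reversed(log):
        try:
            rec = json.loads(line)
        except ValueError:
            continue        # torn write from a crash -- ignore it
        if rec.get("type") == "root":
            return rec["root_offset"]
    return None             # empty database
```

Real implementations put the root records at known offsets (e.g. header blocks or block boundaries) so recovery doesn't have to parse the whole file, but the invariant is the same.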
Read the LDAPCon 2011 paper (and slides) to see how it works and why it's better than CouchDB.
(The nice thing about virtual memory is you can make a really big RAM disk.)
Trees provide some other nice properties, such as maintaining an ordering (so it's fast to get the smallest element or iterate over the elements in numerical order), and they always perform within their asymptotic time bound (a dynamic hash table occasionally takes O(n) when the table is resized, so it might not be a good fit in a realtime situation).
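Roughly, the ordered API that a hash table can't give you looks like this. A sorted list stands in for the tree here (insertion is O(n) in this sketch; a real balanced tree makes it O(log n)) -- the point is only the operations, not the complexity:

```python
import bisect

class OrderedSet:
    """Sorted-list stand-in for a balanced search tree.
    Keeps elements ordered, so min and in-order iteration are trivial."""
    def __init__(self):
        self.items = []

    def add(self, x):
        i = bisect.bisect_left(self.items, x)
        if i == len(self.items) or self.items[i] != x:
            self.items.insert(i, x)   # O(n) here; O(log n) in a real tree

    def smallest(self):
        return self.items[0]          # ordered structure: min is instant

    def __iter__(self):
        return iter(self.items)       # in-order iteration for free
```

With a hash table, `smallest()` and ordered iteration would each require an O(n log n) sort of all the keys.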
However, if modified nodes are always copied, then the only node that matters is the root node (since a partially modified tree can never be reachable from the root), which boils down to an atomic update of the root-node pointer. That should be very efficient.
There is, of course, a performance implication of node copying, but it only affects add/delete/replace performance - read access runs at full speed - and it is cache-friendly, as well.
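A minimal sketch of that path-copying scheme, in plain Python and illustrative only (an unbalanced binary tree for brevity, where a real store would use B-tree pages):

```python
from collections import namedtuple

# Immutable node: an update copies only the path from the root to the
# changed position, sharing every untouched subtree with the old tree.
Node = namedtuple("Node", "key value left right")

def insert(node, key, value):
    """Return a NEW root; the old root still describes a consistent tree."""
    if node is None:
        return Node(key, value, None, None)
    if key < node.key:
        return node._replace(left=insert(node.left, key, value))
    if key > node.key:
        return node._replace(right=insert(node.right, key, value))
    return node._replace(value=value)   # replace existing value

def lookup(node, key):
    while node is not None:
        if key == node.key:
            return node.value
        node = node.left if key < node.key else node.right
    return None

# Publishing the new tree is a single root-pointer swap; readers that
# grabbed the old root keep seeing their consistent snapshot.
```

Only the copied path costs anything on writes; reads traverse whichever root they started from at full speed, which is the read/write asymmetry described above.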