All trends are going in the wrong direction for binary trees to ever become useful again, and some of those limits are physics, not design.
- Branch predictors hate binary trees.
- CPU pipelines hate pointer chasing.
- Larger caches benefit binary trees less than they benefit alternatives such as "cache-oblivious" algorithms.
- The number of operations wasted per cache miss due to memory latency has been going up, not down.
Etc...
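To make the pointer-chasing point concrete, here's an illustrative Python sketch (my own, not from the comment above) contrasting a classic linked binary search tree with a binary search over one contiguous sorted array. Both do O(log n) comparisons, but the tree takes a dependent pointer load at every level, while the array version probes a single block of memory:

```python
import bisect
import random

class Node:
    """Heap-allocated node: every step of a lookup chases a pointer,
    which on real hardware is a potential cache miss."""
    __slots__ = ("key", "left", "right")
    def __init__(self, key):
        self.key = key
        self.left = None
        self.right = None

def bst_insert(root, key):
    if root is None:
        return Node(key)
    if key < root.key:
        root.left = bst_insert(root.left, key)
    elif key > root.key:
        root.right = bst_insert(root.right, key)
    return root

def bst_contains(root, key):
    # Each iteration loads a node whose address depends on the
    # previous load -- the pipeline can't run ahead of the misses.
    while root is not None:
        if key == root.key:
            return True
        root = root.left if key < root.key else root.right
    return False

def array_contains(sorted_keys, key):
    """Same O(log n) comparisons, but over one contiguous block:
    successive probes stay in a predictable region of memory."""
    i = bisect.bisect_left(sorted_keys, key)
    return i < len(sorted_keys) and sorted_keys[i] == key

keys = random.sample(range(1_000_000), 10_000)
root = None
for k in keys:
    root = bst_insert(root, k)
sorted_keys = sorted(keys)
```

The two lookups are interchangeable functionally; the difference only shows up in the memory system, which is exactly the cost that big-O analysis ignores.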
> even consider memory hierarchy at all.
There's an often overlooked physics aspect here: physical memory requires physical space. As you add N bytes of memory, the circuits have to occupy more of it. Laid out in a flat plane, as is typical, that's a region whose radius grows roughly as √N; even an ideal space-filling volumetric design only improves that to ∛N.
Access to a piece of data requires a round trip, which takes time proportional to the distance the signal travels through the machine. That's a simple consequence of the speed of light.
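A back-of-envelope sketch of that round-trip bound (my numbers, purely illustrative: I assume signals propagate at roughly half of c, a common rough figure for electrical signalling):

```python
C = 299_792_458  # speed of light in vacuum, m/s

def min_round_trip_ns(distance_m, signal_fraction=0.5):
    """Lower bound on the round-trip time to memory `distance_m` away,
    assuming signals propagate at `signal_fraction` of c. Ignores all
    switching, queueing, and DRAM access delays, which dominate in
    practice -- this is only the speed-of-light floor."""
    return 2 * distance_m / (signal_fraction * C) * 1e9

# DRAM 10 cm from the CPU: min_round_trip_ns(0.10) is about 1.3 ns,
# i.e. roughly 4 cycles at 3 GHz before any real work happens.
```

The point isn't the exact number; it's that the floor grows linearly with distance, and distance grows with memory size.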
This means that unpredictable or random memory access time scales as ∛N to √N, not the constant "1" as typically assumed in big O analysis!
This is inescapable for computers made from physical components instead of pencil & paper abstractions.
If you add in this additional term, a lot of algorithms traditionally considered to be efficient suddenly look much worse. Conversely, algorithms preferred by developers with a knack for performance tuning don't get such a big penalty.
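Plugging the extra term into a toy cost model shows the effect. This is a hypothetical model of my own, under the √N assumption above: charge every unpredictable access a latency of N^(1/2) (or N^(1/3) for a volumetric memory), and count how many such accesses each structure makes:

```python
import math

def physical_cost(num_random_accesses, n, exponent=0.5):
    """Toy model: each unpredictable access costs n**exponent
    (0.5 for a planar memory layout, 1/3 for a volumetric one)."""
    return num_random_accesses * n ** exponent

N = 2 ** 30  # a billion-element structure

binary_tree = physical_cost(math.log2(N), N)    # one miss per level
btree_64way = physical_cost(math.log(N, 64), N) # wide nodes: ~6x fewer misses

# Both are "O(log n)" on paper, but the 64-way tree makes a sixth of
# the long-distance round trips -- the kind of constant-factor win
# performance tuners reach for, which this model finally prices in.
```

Under the standard unit-cost RAM model those two trees look identical; the √N term is what separates them.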