In 3D, the same can be done with Morton codes (or a 3D Hilbert curve) to build an implicit octree. Some ray-tracing systems use this for fast BVH construction.
In either case, this approach differs from mipmaps in that the underlying data can be sparse, and is simply stored in a sorted array.
We have also tested with Citus, which helped us scale to billions of objects. Demo: https://youtu.be/ccES97ni_vI
In the tracing world, I believe the open-source PulseView/sigrok does this. It makes the UI very responsive even with gigabytes of data. I just wish it also integrated some kind of data compression so I could fit more trace than I have RAM. It can't be all that hard to intelligently compress a highly repetitive signal in a way that still allows this fast zooming and scrolling; replacing some tree nodes with LZ77-style backreferences ought to do the trick.
It may in theory be possible to generalize the in-order layout I talk about in a similar way, but I'm not sure it would be that useful. Maybe it would allow you to append rows or columns to your mipmapped image more easily, but I don't know of any applications where that's useful.
That data is too big for RAM, and coming in too fast to write to disk. But the data is usually very compressible. Some channels will be a million '0's. Other channels will be a clock signal of '0 1 0 1 0 1' forever. Other signals will be more complex, but will probably still be near-repeats of patterns.
I just want to compress it in realtime in such a way that the GUI tools I want to use can still work. They need to do fast-ish random access on the data, and answer questions about chunks of the data (e.g., is all the data between time X and time Y zeros?).
If you want to use the big address-space carve-out trick, you can, but the right way to do it is to PROT_NONE the parts of the address space you aren't using and install a signal handler to commit bits of your carveout as they're used.
You'll get better performance by using userfaultfd, which delivers the faults to user space over a file descriptor instead of bouncing through a SIGSEGV handler.
Within one Fenwick tree we have two sorts of paths starting at any node, one going to a virtual node marked with 0, and one going to a node marked with positive infinity. The interesting property is that these two paths for any two starting nodes always intersect exactly once.
In standard Fenwick trees you use one of those two paths to store data at the nodes visited during inserting, and retrieve that data using the other path at nodes visited during reading. This allows you to compute arbitrary prefix or suffix sums (depending on whether you use the 'up' or the 'down' path for insertion or reading).
However, in the data structure I proposed we use both paths for storing and retrieving, allowing us to compute prefix and suffix sums. Finally, to answer any sort of query we also need to store the original data in another array. Then any query for range [p, q] can be partitioned into three pieces: [p, m), {m}, (m, q], where m is the unique midpoint where the path going up from p and the path going down from q intersect.
The reddit comment I link contains an implementation that allegedly does arbitrary range queries, but it's nigh-incomprehensible so I can't tell how or why it uses 3 arrays.
It was written by a couple of friends of mine and others, so I'm not sure how it compares to the article link, as I'm not a databases expert.