Good point. Cache locality is another important reason that I should have mentioned. I have not run tests, but it may well be the most important reason of all.
In many linked data structures you can mitigate much of this problem. For example, in the past I have allocated linked elements out of a contiguous array, then exploited the order in which the elements are produced so that consecutive elements end up next to each other in memory, where possible.
When the data arrives in random order, you can instead create "buckets", where each bucket is backed by a small array and tries to keep consecutive elements close together in memory. You can then periodically rewrite those buckets so the cost is amortised. Or you can simply write it as a sorted array of linked-list elements per bucket, where every insert or delete has to shift some of the elements, but you only need to reorganise that one bucket and not the others.
All a bit cumbersome, and at the end of it you are still wasting at least half of the cache on pointers.