Not really:
* In my experience, arbitrary limits intended to prevent "resource exhaustion" tend to lead to incidents where the limit was hit but resources weren't actually exhausted, so the limit just caused failures for no reason. A very common example is the ulimit on open file descriptors. The default limits feel like they were set last century and haven't been updated to account for the fact that we have far more memory these days. We should really just enforce an overall limit on memory usage (per-user or per-cgroup) and expect that to include the file descriptor table.
* If I did choose a limit, it would be quite high (to avoid the aforementioned unnecessary incidents), but in practice most of the lists I'm thinking of would never get near that size, so pre-allocating arrays would waste a lot of memory.
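To illustrate the file-descriptor point: a minimal Python sketch of inspecting the fd ulimit and raising the soft limit to the hard cap, which is a common workaround when the default (often 1024 on Linux) is hit long before memory is anywhere near exhausted:

```python
import resource

# Query the current soft and hard limits on open file descriptors.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(f"soft={soft} hard={hard}")

# An unprivileged process may raise its soft limit up to the hard cap.
resource.setrlimit(resource.RLIMIT_NOFILE, (hard, hard))
print(f"raised soft limit to {resource.getrlimit(resource.RLIMIT_NOFILE)[0]}")
```

This only papers over the problem per-process, of course; the argument above is that the real fix is accounting fd-table memory against an overall memory limit rather than picking a separate magic number.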