Interesting. I did some testing: I opened the task manager and ran this JS code in the browser without opening dev tools, to see how the browser behaves when I don't prevent any optimizations.
Then I commented out withArrayTransform, uncommented withIteratorTransform, and ran it again in a fresh tab so the browser wouldn't reuse the old process.
//////////////////////////////////////////////////////////////////////////////////
const arr = Array.from({ length: 100_000_000 }, (_, i) => i % 10);

// every step here allocates a full new intermediate array
function withArrayTransform(arr) {
  return arr.slice(10, arr.length - 10).filter(el => el < 8).map(el => el + 5).map(el => el * 2).map(el => el - 7);
}

// iterator helpers are lazy: elements flow through one at a time, and only toArray() materializes a result
function withIteratorTransform(arr) {
  return arr.values().drop(10).take(arr.length - 20).filter(el => el < 8).map(el => el + 5).map(el => el * 2).map(el => el - 7).toArray();
}
console.log(withArrayTransform(arr))
//console.log(withIteratorTransform(arr))
////////////////////////////////////////////////////////////////////////////////////
The peak memory usage with withArrayTransform was about ~1.6 GB; with withIteratorTransform it was about ~0.8 GB. The results vary between runs, and honestly they're hard to measure precisely, but the iterator version is consistently more memory-efficient. As for speed, the iterator version was about 1.5x slower.
So the GC probably cleaned up the temporary arrays quickly once it saw excessive memory usage while withArrayTransform(arr) was running.
But imagine you use flatMap, which unrolls each returned iterable: it can create a temporary array even bigger than both the original and the final one. So iterables still have the advantage of protecting against excessive memory usage and potentially crashing the browser tab or the whole Node server. I think that's still a nice thing to have.