So the first example looks to me like two loops (hopefully the compiler does a better job on that), while the second one is obviously one loop.
I am already trying to guess how a compiler would parallelize that...
The first example is simply more abstract. It might be a single loop if the implementation is lazy, or two loops if it is strict. But look closer: those filter and map operations can be applied to basically anything you want, including asynchronous or infinite streams.
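To make the laziness point concrete, here is a Python sketch (Python's built-in `map` and `filter` are lazy, so they compose into a single pass and even work on an infinite source):

```python
from itertools import count, islice

# count() is an infinite stream; map and filter are lazy in Python 3,
# so no work happens until elements are actually pulled out.
evens_squared = map(lambda x: x * x, filter(lambda x: x % 2 == 0, count()))

# Take just the first five results from the infinite pipeline.
first_five = list(islice(evens_squared, 5))
# first_five == [0, 4, 16, 36, 64]
```

The same two `filter`/`map` calls would also fuse into "one loop" here, since each element flows through both stages before the next one is produced.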
Your manual loop, on the other hand, would be hard to translate into anything other than code that works on finite lists, builds a finite buffer, and does nothing else while it runs.
Or in other words, your knowledge about this manual loop is not transferable ;-)
But a "smart enough compiler" is what compiler people say when they are talking about their field's equivalent of human-level AI. The fact is that the first one is much easier to parallelize, so real implementations actually do it, while nobody does it for the second one in practice. And it's easier to parallelize because the signatures of those functions carry extra guarantees that a for loop does not give.
But in the second example, you are telling it to iterate over each element, sequentially. Since you are spelling out step by step what needs to be done, it's not parallelizable unless you implement the parallelism yourself.
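A sketch of why the declarative form parallelizes so easily (Python, standard library only; `square` and the worker count are my own illustrative choices): because the mapped function sees one element at a time and no shared state, a runtime can hand chunks to workers without changing the result or the calling code.

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Pure function: no shared state, sees one element at a time,
    # so the runtime is free to apply it to elements concurrently.
    return x * x

data = list(range(10))

# Sequential pipeline...
sequential = list(map(square, data))

# ...and the same pipeline parallelized, with no change to the logic.
with ThreadPoolExecutor(max_workers=4) as executor:
    parallel = list(executor.map(square, data))

assert sequential == parallel  # same result, guaranteed by map's contract
```

Getting the same effect from the manual loop would mean rewriting it around explicit work splitting and result merging, which is exactly the "unless you implement it to be so" part.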
There are guarantees about the way a map function works. It doesn't mutate its input, it only has access to one element at a time, you can't access the data structure you're building, etc.
All of these guarantees happen to hold for the imperative version as well, but it's a lot harder to write a program that verifies that. Meanwhile, you know that's how map and filter work, so it's trivially easy to know that those guarantees hold.
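Concretely (a Python sketch with made-up data): nothing in the language stops a manual loop from depending on the buffer it is building, so an analyzer has to rule that out before parallelizing, whereas a map callback never even gets a reference to the output.

```python
data = [3, 1, 4, 1, 5]

# The imperative loop *may* read the buffer it is building --
# nothing forbids it, so a compiler must prove it doesn't happen.
out = []
for x in data:
    out.append(x + (out[-1] if out else 0))  # running sum: each step depends on previous output

# A map callback only ever sees its own element,
# so this kind of cross-element dependency cannot exist.
doubled = list(map(lambda x: x * 2, data))
```

The loop above really does violate the guarantee (it is a running sum, not a per-element transform); your particular loop doesn't, but a tool has to figure that out, while `map` gives it for free.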
We could talk about why the first example could be parallelized of course, but that's not what I wanted to talk about and you missed the point ;-)