Suppose that you have a bunch of things implemented using map or filter. When someone writes parallelized versions of map and filter, all of the existing code gets the benefits.
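A minimal Python sketch of that payoff (the helper name and the choice of executor are mine, purely for illustration): code written against map's contract doesn't care which implementation it gets, so a parallel map drops in without touching the call sites.

```python
# Code written against map/filter semantics doesn't care which
# implementation it gets, so a parallel map drops in transparently.
from concurrent.futures import ThreadPoolExecutor

def squares_of_evens(xs, map_fn=map):
    # The logic only assumes map's contract, not a particular implementation.
    return list(map_fn(lambda x: x * x, filter(lambda x: x % 2 == 0, xs)))

print(squares_of_evens(range(10)))  # serial: [0, 4, 16, 36, 64]

# Same call site, now backed by a pool of workers:
with ThreadPoolExecutor() as pool:
    print(squares_of_evens(range(10), map_fn=pool.map))
```

(In CPython the GIL means a thread pool won't actually speed up CPU-bound work; a process pool or a different runtime would. The point is only that the existing code didn't change.)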
Now suppose you have a bunch of basic functions implemented using reduce (sum, product, min, max, reverse, ...). Can these be parallelized? Yes, but only by throwing away the reduce-based implementation and starting from scratch for each one.
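Here is a sketch of what that rewrite looks like (the function name is mine): sum, product, min, and max survive because their combining operations are associative, so the linear fold can be replaced by a balanced tree whose halves are independent. But that associativity is knowledge the reduce-based version never expressed.

```python
from functools import reduce

def tree_reduce(combine, xs):
    # Assumes `combine` is associative. reduce's contract never requires
    # this, which is exactly what has to be rediscovered per function.
    if len(xs) == 1:
        return xs[0]
    mid = len(xs) // 2
    left = tree_reduce(combine, xs[:mid])   # the two halves are independent,
    right = tree_reduce(combine, xs[mid:])  # so they could run in parallel
    return combine(left, right)

xs = list(range(1, 9))
add = lambda a, b: a + b
assert reduce(add, xs) == tree_reduce(add, xs) == 36
assert reduce(max, xs) == tree_reduce(max, xs) == 8
```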
The problem with reduce, compared to its more useful cousins map and filter, is that it is too powerful. Map and filter are more limited, but if you can express your computation in terms of maps and filters, you get something valuable in return. If you can express it in terms of reduce, you save a few keystrokes, and that's about it.
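A small illustration of that asymmetry (my example): a runtime can't safely reassociate an arbitrary fold, because nothing in reduce's interface promises that the step function is associative.

```python
from functools import reduce

step = lambda acc, x: acc - x   # subtraction is not associative

xs = [1, 2, 3, 4]
sequential = reduce(step, xs)                 # ((1 - 2) - 3) - 4 = -8
reassociated = step(step(1, 2), step(3, 4))   # (1 - 2) - (3 - 4) = 0

assert sequential != reassociated  # splitting the fold changed the answer
```

Map has no such trap: applying a function to each element independently commutes with any partitioning of the input, which is why a runtime can restructure it freely.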
For anyone interested in this kind of stuff, I recommend Guy Steele's talk "Organizing Functional Code for Parallel Execution; or, foldl and foldr Considered Slightly Harmful": https://vimeo.com/6624203