Maybe. The thing is: zstd is quite close, and unlike lz4, zstd has a broad curve of supported speed/ratio tradeoffs. Unless you're huge and engineering effort is essentially free, or at least the microoptimization for one specific ratio is worth the tradeoff, you may be better off choosing the solution that's less opinionated about the settings. If it then turns out that you care mostly about decompression speed + compression ratio and a little less about compression speed, it's trivial to go there. Or maybe it turns out you only sometimes need the speed, but can usually afford spending a little more CPU time - so you default to higher compression ratios, but under load drop to lower ones (there's even a built-in adaptive streaming mode that does this for you on large streams). Or maybe your dataset is friendly to the parallelization options, and zstd actually outperforms lz4.
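To make that concrete, here's a minimal sketch of the knobs mentioned above, using the stock zstd CLI (the level flags, `-T0` multithreading, and `--adapt` are all real options; `sample.txt` is just a stand-in for your data):

```shell
# Generate some stand-in data to compress.
seq 1 200000 > sample.txt

# The speed/ratio curve: same tool, same format, different level.
zstd -q -f -1  -T0 sample.txt -o fast.zst    # fastest level, near-lz4 speed
zstd -q -f -19 -T0 sample.txt -o small.zst   # much better ratio, far slower

# Compare what each level bought you.
ls -l fast.zst small.zst

# The built-in adaptive mode: on a stream, zstd raises or lowers the
# level on the fly depending on how fast the output can drain.
seq 1 200000 | zstd -q --adapt -o adapt.zst
```

The point is that moving along the curve later is a one-flag change, not a library swap.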
If you know your use case well and are sure the situation won't change (or don't mind swapping compression algorithms when it does), then lz4 still has a solid niche, especially where compression speed matters more than decompression speed. But in many if not most cases I'd say it's probably a kind of premature optimization at this point, even if you think you're close to lz4's sweet spot.