Confidence intervals are about E[y|x]; prediction intervals are about y itself. Sometimes, for example when there is little variation in y|x, the two intervals may look similar, but that is due to the nature of the data, not because one is just "a bit larger than the other". If that is not clear, compare (1) the uncertainty around the mean of an empirical symmetric distribution with a very small standard deviation (we are 95% confident the true mean is between z and k) and (2) the 2.5%-97.5% interval of the raw data distribution. The numbers can look similar, but they represent different measures.
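To make that contrast concrete, here is a minimal numpy sketch (the distribution, its mean of 11.5, and the sample size are invented for illustration): a narrow confidence interval for the mean next to the much wider 2.5%-97.5% interval of the raw data.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical sample: symmetric distribution with a small standard deviation.
y = rng.normal(loc=11.5, scale=0.3, size=500)

# (1) 95% confidence interval for the MEAN (normal approximation).
se = y.std(ddof=1) / np.sqrt(len(y))
ci = (y.mean() - 1.96 * se, y.mean() + 1.96 * se)

# (2) 2.5%-97.5% interval of the RAW DATA distribution.
raw = (np.quantile(y, 0.025), np.quantile(y, 0.975))

print(f"95% CI for the mean: [{ci[0]:.2f}, {ci[1]:.2f}]")  # very narrow
print(f"2.5-97.5% of raw y:  [{raw[0]:.2f}, {raw[1]:.2f}]")  # the data's spread
```

Both intervals are centered near 11.5, so at a glance they can look comparable, yet the first shrinks toward zero width as the sample grows while the second does not: they estimate different things.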
Below I paste an example that I had made in a later comment:
--- In the vast majority of cases, what we want is the range for y (the prediction interval): given x = 3, what is the expected distribution of y? For example, say we train a model to estimate how the 100-m dash time varies with age. The uncertainty we want is: "at age 48, 90% of Master Athletes run the 100-m dash between 10.2 and 12.4 seconds" (here there would be another difference to point out between frequentist and Bayesian intervals, but let's keep things simple).
We are generally not interested in the uncertainty of the expected value of y given x = 3 (that is, the confidence interval). In that case, the uncertainty we get (we might want it, but often we do not) is: "at age 48, we are 90% confident that the expected time to complete the 100-m dash for Master Athletes is between 11.2 and 11.6 seconds".
----
The two intervals can look similar by some metrics ("ah, come on, 11s or 12s, who cares"), but they are measuring/estimating something very different, and in many cases the difference would matter a lot.
Why do I say it "would" matter and not it "does"? Because many decisions in industry settings (the vast majority, I'd say, outside some niches), even when ML or statistical models are part of the process, use point estimates (so, not even uncertainty intervals) as just one of the many inputs in the decision-making process.
Let me give you an example. Years ago I was developing models to estimate the ROI of certain (very popular) products. The previous calculations were absurdly wrong: there were log-transformations involved and, guess what, they were using confidence intervals ("the uncertainty around the expected ROI for a similar class of products is") instead of prediction intervals ("the ROI for this class of products is expected to be between w and j").
I provided the correct intervals (i.e., prediction intervals), but in the end the decisions changed little, because the decision-makers were not considering uncertainty in any way at all. That's why, in general, I don't worry too much about uncertainty on the rare occasions these days when I develop models.
I mean, who outside of academia (and even there...) measures the accuracy of a predictive model taking the prediction intervals into account too, for example by pairing a metric like mean absolute error on test data with the proportion of test observations that fall within the uncertainty intervals estimated for the model from the training data? The answer is "very few".
In real-life decision-making, there are many other factors, unknown or unquantifiable, that come in and dominate any error arising from using a confidence interval instead of a prediction interval.