I agree that statisticians appreciate the importance of uncertainty intervals better than CS people do--it is mostly cultural--but the claim that "In reality there are very few ML applications that don't need confidence estimation and estimation of monetary costs" is empirically false.
If ML applications truly required uncertainty attached to point estimates, we would see far more uncertainty intervals reported alongside them. In industry, outside of a few niches (e.g., banking, bio, actuarial work), very few people bother with them.
I am currently part of a large team (we are talking hundreds) of ML specialists, and I have yet to see a single presentation in which a point estimate was associated with an uncertainty interval. It was the same at my previous company, and when I interview candidates (dozens? hundreds?) I never get a satisfactory answer to the confidence-interval-vs-prediction-interval question I ask.
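For anyone unsure what that interview question is probing: a confidence interval quantifies uncertainty about the fitted *mean* response, while a prediction interval quantifies uncertainty about a *new observation*, so it is always wider. A minimal numpy sketch for simple linear regression (synthetic data; the normal quantile 1.96 is used as an approximation to the t quantile, which is reasonable at this sample size):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200
x = rng.uniform(0, 10, n)
y = 2.0 * x + 1.0 + rng.normal(0, 1.5, n)  # true line plus noise

# Fit simple linear regression by ordinary least squares.
xbar, ybar = x.mean(), y.mean()
Sxx = ((x - xbar) ** 2).sum()
slope = ((x - xbar) * (y - ybar)).sum() / Sxx
intercept = ybar - slope * xbar

resid = y - (intercept + slope * x)
s = np.sqrt((resid ** 2).sum() / (n - 2))  # residual standard error

x0 = 5.0
yhat = intercept + slope * x0
z = 1.96  # normal approximation to the t critical value (fine for n = 200)

# Standard error of the fitted mean at x0 vs. of a single new observation at x0.
se_mean = s * np.sqrt(1 / n + (x0 - xbar) ** 2 / Sxx)
se_pred = s * np.sqrt(1 + 1 / n + (x0 - xbar) ** 2 / Sxx)

ci = (yhat - z * se_mean, yhat + z * se_mean)  # confidence interval for E[y | x0]
pi = (yhat - z * se_pred, yhat + z * se_pred)  # prediction interval for a new y at x0

print(f"point estimate at x0={x0}: {yhat:.2f}")
print(f"confidence interval: ({ci[0]:.2f}, {ci[1]:.2f})")
print(f"prediction interval: ({pi[0]:.2f}, {pi[1]:.2f})")
```

The extra `1` under `se_pred`'s square root is the irreducible noise in a single observation, which is why the prediction interval stays wide even as `n` grows and the confidence interval shrinks toward a point.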