Many real programming languages support arbitrary precision decimal and/or rational numbers.
Sure, there are times when space or speed concerns favor an inexact representation over a correct one, but that's an optimization that ought to be evaluated deliberately; the fact that so many languages are designed to make inexact floating point the default everyone reaches for contributes to a lot of errors.
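A quick illustration using Python, whose standard library ships both arbitrary precision decimals and exact rationals alongside the default binary floats:

```python
from decimal import Decimal
from fractions import Fraction

# Binary floating point -- the default representation in most
# languages -- cannot represent 0.1 or 0.2 exactly:
print(0.1 + 0.2 == 0.3)   # False
print(0.1 + 0.2)          # 0.30000000000000004

# Arbitrary precision decimals avoid that whole class of error:
print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))  # True

# Rationals are exact for any ratio of integers:
print(Fraction(1, 3) + Fraction(2, 3) == 1)  # True
```

The exact types exist and work fine; the point is that you have to know to reach for them, because the literal syntax `0.1` hands you the inexact representation by default.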