But there's nothing "wrong" about it — it's correct; that's how it's designed to work. If you start with non-extreme values and only minuscule floating-point errors, and iterate 1,000 times with basic arithmetic, I still don't see how it's going to cause a problem. E.g.:
Math.pow((1 + Number.EPSILON), 1000) => 1.000000000000222
The result is the same even if you multiply in a loop 1,000 times rather than call Math.pow(). If you're multiplying a million or a billion times, then I can understand that you'd need to be an expert in numerical analysis in the first place to have any confidence in your result, and to know whether your float result can be trusted at all.
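A minimal sketch of that claim (nothing here beyond standard `Math.pow` and `Number.EPSILON`): compound a worst-case relative error of one EPSILON per step, 1,000 times, and compare with the closed form.

```javascript
// Compound 1,000 worst-case relative errors of one EPSILON each.
let x = 1;
for (let i = 0; i < 1000; i++) {
  x *= 1 + Number.EPSILON;
}
console.log(x);                                // ~1.000000000000222
console.log(Math.pow(1 + Number.EPSILON, 1000)); // same ballpark
```

The accumulated error after 1,000 steps is on the order of 1000 × EPSILON ≈ 2.2e-13 — still nowhere near anything you'd notice in ordinary arithmetic.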
But the good thing is that if you don't know what you're doing, you start with interval arithmetic, and the interval balloons to (-∞, +∞), then that's a strong signal you shouldn't be trusting the algorithm's results at all, and should go talk to someone with a background in numerical analysis, right?
Whereas if you iterate a million times and the interval is still minuscule compared to your values, you have absolute confidence you're fine. Seems useful to me — not useless or actively harmful at all.
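A toy sketch of that "million iterations, interval stays tiny" case. The `mul` helper is hypothetical, and this is not rigorous interval arithmetic — real implementations use directed rounding, which JavaScript doesn't expose — so each bound is padded outward by one relative EPSILON as a stand-in:

```javascript
// Toy interval multiplication: [lo, hi] * [lo, hi], with outward
// padding by one relative EPSILON in place of directed rounding.
function mul([alo, ahi], [blo, bhi]) {
  const products = [alo * blo, alo * bhi, ahi * blo, ahi * bhi];
  const lo = Math.min(...products);
  const hi = Math.max(...products);
  return [lo - Math.abs(lo) * Number.EPSILON,
          hi + Math.abs(hi) * Number.EPSILON];
}

// Multiply a million times by a factor just above 1.
let iv = [1, 1];
for (let i = 0; i < 1_000_000; i++) {
  iv = mul(iv, [1.000001, 1.000001]);
}
console.log(iv);                // value lands near e ≈ 2.718
console.log(iv[1] - iv[0]);     // width stays tiny relative to the value
```

Even after a million multiplications the relative width is only on the order of millions of EPSILONs (~1e-9), so the interval itself tells you the result is trustworthy — exactly the signal described above.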