Yes, that turns out to be exactly it [1]. Looks like there's even at least one JavaScript library for it [2].
It seems like such a useful and intuitive idea I have to wonder why it isn't a primitive in any of the common programming languages.
It is basically useless for iterative numerical computation. Even well-behaved convergent algorithms can appear to diverge under interval arithmetic: as you accumulate operations on the same numbers, their intervals grow wider and wider, eventually toward infinity. It has some applications, but they are quite niche.
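To see how fast that blow-up can happen, here's a toy sketch (a minimal made-up `Interval` class, not a real library): iterating the map x → (x + x) − x, which is the identity on real numbers, triples the interval width at every step because the arithmetic treats each occurrence of x as independent.

```python
# Toy interval type for illustration only (not a real library).
class Interval:
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # [a, b] - [c, d] = [a - d, b - c]
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def width(self):
        return self.hi - self.lo

# Iterate x -> (x + x) - x, which is the identity on the reals.
x = Interval(0.9, 1.1)
for _ in range(20):
    x = (x + x) - x
    # Each step maps [a, b] to [2a - b, 2b - a]: the width triples.

print(x.width())  # roughly 0.2 * 3**20, i.e. hundreds of millions
```

The true value never moved, but after 20 "do-nothing" steps the interval no longer tells you anything useful.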
Are there really cases where current FP arithmetic gives an accurate result, but where the error bounds of interval arithmetic would grow astronomically?
It seems like you'd have to trust FP rounding to always cancel itself out in the long run instead of potentially accumulating more and more bias with each iteration. Is that the case?
Wouldn't the "niche" case be the opposite -- that interval arithmetic is the general-purpose safe choice, while FP algorithms without it should be reserved for those which have been mathematically proven not to accumulate FP error? (And would ideally output their own bespoke, proven, interval?)
Most often, yes; the probability distribution of your number inside that interval is not uniform, it is most likely very concentrated around a specific number inside the interval, not necessarily its center. After a few million iterations, the probability of the correct number being close to the boundary of the interval is smaller than the probability of all your atoms suddenly rearranging themselves into an exact copy of Julius Caesar. According to the laws of physics, this probability is strictly larger than zero. Would you think it "unsafe" to ignore the likelihood of this event? I'm sure you wouldn't, yet it is certainly possible. Just like the correct number being near the boundaries of interval arithmetic.
Meanwhile, the computation using classical floating point typically produces a value that is very close to the exact solution.
> It seems like you'd have to trust FP rounding to always cancel itself out in the long run instead of potentially accumulating more and more bias with each iteration. Is that the case?
The whole subject of numerical analysis deals with this very problem. It is extremely well known which kinds of algorithms you can trust and which are dangerous (numerically unstable algorithms, or any algorithm applied to an ill-conditioned problem).
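As a concrete taste of what numerical analysis buys you: compensated (Kahan) summation is a classic stable algorithm that keeps rounding error from accumulating the way naive accumulation lets it. A sketch in plain Python, using `math.fsum` as a correctly-rounded reference:

```python
import math
import random

random.seed(0)
xs = [random.uniform(0.0, 1.0) for _ in range(100_000)]

# Naive left-to-right summation: every += rounds, and those
# rounding errors need not cancel; they can accumulate.
naive = 0.0
for v in xs:
    naive += v

# Kahan (compensated) summation: a running correction term `c`
# recovers the low-order bits lost at each addition.
total, c = 0.0, 0.0
for v in xs:
    y = v - c
    t = total + y
    c = (t - total) - y
    total = t

exact = math.fsum(xs)  # correctly-rounded reference sum
print(abs(naive - exact), abs(total - exact))
```

The compensated sum lands within an ulp or two of the exact result; the naive sum's error is typically a couple of orders of magnitude larger. No intervals required, just the right algorithm.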
Say x = 4.0 ± 1.0. What is x / x?
It should be x / x = 1.0 ± 0.0, but interval arithmetic, which treats the numerator and denominator as independent intervals, will give you [3/5, 5/3].
Notice the interval is far wider than it has any right to be, as the result cannot be anything other than 1.0. Now imagine what happens if you iterate this a few more times. Your interval diverges toward (0, +∞), becoming useless.
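A quick sketch of that divergence (`interval_div` is a made-up helper for illustration; it assumes the divisor interval excludes zero):

```python
# Naive interval division: take the extremes over all endpoint quotients.
def interval_div(a, b):
    lo1, hi1 = a
    lo2, hi2 = b
    q = [lo1 / lo2, lo1 / hi2, hi1 / lo2, hi1 / hi2]
    return (min(q), max(q))

x = (3.0, 5.0)          # x = 4.0 ± 1.0
r = interval_div(x, x)
print(r)                # ~(0.6, 1.667), even though x / x is exactly 1

# Feed the result back into itself a few times and watch it diverge:
y = x
for _ in range(10):
    y = interval_div(y, y)
print(y)                # lower bound collapses toward 0, upper bound explodes
```

The ratio between the upper and lower bound squares on every iteration, so after ten rounds the interval spans hundreds of orders of magnitude.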
The moral of the story (which may be more obvious in hindsight): interval arithmetic is a local operation; error analysis is a global operation. Naturally the former cannot substitute for the latter.
With interval arithmetic, either a programmer would understand that floating point numbers are not actually numbers but intervals... or they wouldn't, and get surprised.
So I don't really see much upside. If you know that you need interval arithmetic, chances are that you're already using it.
Consider x in [-1, 1] and y in [-1, 1]. Then x*y is also in [-1, 1], and x-y is in [-2, 2]. But now suppose that y = x. That's consistent with the bounds, yet the true ranges are much tighter than what we've computed: x*x lies in [0, 1], and x-x is exactly 0.
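The dependency problem in a few lines of toy Python (`interval_mul` and `interval_sub` are made-up helpers for illustration):

```python
# Naive interval multiplication: extremes over all endpoint products.
def interval_mul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

# Naive interval subtraction: [a, b] - [c, d] = [a - d, b - c].
def interval_sub(a, b):
    return (a[0] - b[1], a[1] - b[0])

x = (-1.0, 1.0)
y = x  # y really is x, but the arithmetic has no way to know that

print(interval_mul(x, y))  # (-1.0, 1.0), though x*x truly lies in [0, 1]
print(interval_sub(x, y))  # (-2.0, 2.0), though x - x is exactly 0
```

Each operation only sees two intervals, not the fact that they came from the same variable, which is exactly the "local operation" limitation mentioned above.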
I mean, intervals a whole unit wide might make sense if your calculations deal with values in the trillions and beyond... but isn't the point of interval arithmetic to capture the usually tiny errors introduced by FP representation?