> and the compiler or interpreter knows both x's are the same variable it could optimize it away to 1.0 ± 0.0, so I don't see the problem
You're moving the goalposts here. Those are HUGE ifs. You went from something you could trivially implement in any language to something that requires a LOT of infrastructure and will severely limit your options.
How are you going to handle (x - z) / (x + z) when z = 0? How are you going to handle f(x) / g(x) when they turn out to compute the same value in different ways?
You go from "I need to change float to interval&lt;float&gt;, give me 15 minutes" to "I need a computer algebra system, let me figure out if I can embed Mathematica/SymPy/Maple/etc. into my program so I can do math." And even when you do that, you STILL won't be able to handle cases where the symbolic engine can't simplify it for you. Which in general it won't be able to do.
> But if, in your code, you've copied x to y (not by reference), then it seems that x / y would correctly be [3/5, 5/3]. This is a feature, not a bug.
No, that is most definitely a bug. The correct result is 1.0, but you're producing [3/5, 5/3]. If that's not a bug to you then you might as well just output (-∞, +∞) everywhere and call it a day. You can insist on calling it a "feature" if it makes you feel better, but that won't change the fact that it's still just as useless (if not actively harmful) for your intended calculation as it was before.
Contrast this with just leaving it as a float instead of an interval, where you would've gotten the correct answer.
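To make the failure concrete, here's a minimal sketch (the `Interval` class is my own toy, not any real library; it only implements division and assumes 0 isn't inside the divisor):

```python
# Minimal interval type illustrating the "dependency problem": dividing an
# interval by a value-copy of itself widens instead of giving exactly 1.0,
# because the type has no way to know the two intervals are correlated.

from dataclasses import dataclass

@dataclass
class Interval:
    lo: float
    hi: float

    def __truediv__(self, other):
        # Naive interval division; assumes 0 is not inside `other`.
        c = [self.lo / other.lo, self.lo / other.hi,
             self.hi / other.lo, self.hi / other.hi]
        return Interval(min(c), max(c))

x = Interval(3.0, 5.0)
y = Interval(3.0, 5.0)   # a value-copy of x; the correlation is lost
print(x / y)             # Interval(lo=0.6, hi=1.6666666666666667), i.e. [3/5, 5/3]
```

A plain float `x / x` would have given 1.0; the interval version can't, because by the time `__truediv__` runs, "these are the same quantity" is information that no longer exists.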
> In any case, since intervals are more likely to be more like ± 0.000000000000001, it doesn't seem like a problem in practice even if the compiler/interpreter doesn't optimize it away?
That's only after 1 iteration. Notice that in my example the error was multiplicative, not additive. Run more iterations and your error will magnify.
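Here's a toy loop showing what I mean (the `div` helper is my own sketch, and the iteration count is arbitrary). The true result is exactly 1.0 at every step, but because each division squares the hi/lo ratio, the width explodes:

```python
# Toy demonstration of multiplicative error blow-up: each iteration divides
# the interval by a value-copy of itself. Mathematically the answer is always
# exactly 1.0, but the interval's hi/lo ratio squares on every step.

def div(a, b):
    # naive interval division over (lo, hi) tuples; assumes 0 not in b
    c = [a[0] / b[0], a[0] / b[1], a[1] / b[0], a[1] / b[1]]
    return (min(c), max(c))

x = (1.0 - 1e-15, 1.0 + 1e-15)   # the "± 0.000000000000001" starting point
for _ in range(50):
    x = div(x, x)                # correct answer is 1.0 at every step

print(x[1] - x[0])  # width is now on the order of 1, nowhere near 1e-15
```

So the "± 0.000000000000001 is harmless" intuition only holds for a single operation; iterate and the bound quickly becomes too wide to say anything useful.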