> That's why wild has performance tests, to ensure that if a change breaks rustc's ability to optimize, it'll be noticed, and therefore fixed.
But benchmarks won't tell us which optimisation suddenly stopped working. This looks very similar to the argument against UB to me: something breaks, but you don't know what, where, or why.
I see. Missing these optimisations might not be UB as understood in compiler lingo, but it is a kind of "undefined behaviour", in the sense that anything could happen. And honestly, the problems it might cause don't look that different from those caused by UB in the compiler sense. Not to mention that using unsafe to write the optimised code by hand generates same-ish code in both debug and release mode, so the DX is better too.
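To make the debug/release point concrete, here is a minimal Rust sketch (my own illustration, not from the thread): the safe version relies on the optimizer to elide bounds checks, so debug builds pay for every check, while the unsafe version carries no checks in any build, at the price of UB if its index invariant is ever violated.

```rust
// Illustrative sketch only: safe indexing vs. unchecked indexing.
fn sum_checked(data: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..data.len() {
        total += data[i]; // bounds-checked; release builds may elide the check
    }
    total
}

fn sum_unchecked(data: &[u64]) -> u64 {
    let mut total = 0;
    for i in 0..data.len() {
        // SAFETY: `i < data.len()` by the loop bound.
        total += unsafe { *data.get_unchecked(i) }; // never bounds-checked, in any build
    }
    total
}

fn main() {
    let data = vec![1u64; 1_000];
    assert_eq!(sum_checked(&data), sum_unchecked(&data));
}
```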
As an example, parts of the C++ standard library (though none of the core language, I believe) are covered by complexity requirements, yet implementations can still vary widely. std::sort needs to be linearithmic, but someone could still ship a very slow implementation without it being UB; even a quadratic one wouldn't be UB, it just wouldn't be standards-conforming.
UB is really about the observable behavior of the abstract machine, which is limited to reads and writes of volatile data and I/O library calls [1].
The optimization not getting applied doesn't mean that "anything could happen". Your code would just run slower; the result of the computation would still be correct and would match what you expect. That is the opposite of undefined behaviour, where the result is literally undefined and, in particular, can be garbage.
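A small, hypothetical Rust sketch of that distinction: if the compiler misses an optimization on `dot` (say, it fails to vectorize the loop), the program merely runs slower and still returns the same value, whereas the commented-out out-of-bounds access has no defined result at all.

```rust
// Sketch under assumed names; `dot` is just an example computation.
fn dot(a: &[f32], b: &[f32]) -> f32 {
    // If this loop isn't vectorized, it's slower -- but the result is unchanged.
    a.iter().zip(b).map(|(x, y)| x * y).sum()
}

fn main() {
    let v = vec![1.0f32; 8];
    println!("{}", dot(&v, &v)); // prints 8, whether or not the loop was optimized

    // UB, by contrast, means the result is not merely slow but undefined:
    // let garbage = unsafe { *v.get_unchecked(100) }; // out of bounds: UB
}
```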