Since approximately every nontrivial program ever written has UB, in actual practice we're only saved by the fact that compilers aren't entirely maliciously compliant.
It's true that code containing UB does not have to be executed on every run, but your program does have to actually reach it on some execution before it can hurt you.
Usually when we talk about UB, we're implicitly talking about runtime UB, since translation-time UB is generally far less subtle. If a program contains only conditional runtime UB, the compiler is not permitted to break the entire program from the very beginning, since all possible executions that do not trigger runtime UB must execute correctly as per 5.1.2.3.
But those of us who are actually writing programs mostly care about "in practical terms", and in practical terms, this doesn't happen, so we don't care. We've got enough trouble worrying about what does happen; we don't have time and energy to worry about what doesn't and won't happen.
You can substitute "bugs" for "UB" and the result is the same. UB is a bug on the part of the programmer, from the point of view of C, similar to dereferencing a null pointer. When the standard says that something is UB, it is just clarifying what these situations are.
While you can certainly classify all UB as "bugs", doing so misses the critical differences between UB and other categories of bugs. If you have a logic bug, for example, your program will correctly and consistently do the wrong thing. It will continue doing that wrong thing with a different compiler, on a different platform, today and 10 years from now. Implementation-defined behavior is a bit looser, but will still be consistent within any particular implementation (which will document the behavior) and will only manifest in the code that depends on it. A PR inserting one of these "normal" bugs doesn't invalidate the entire rest of the program.
UB is different. You can't make assumptions about UB because from the point of view of the standard, UB is "not C". There are no assumptions to be made, it's just all the stuff that doesn't have assigned semantics. And since the input is meaningless, so is the entirety of whatever the compiler gives you back.
Not correct. Bugs can manifest differently on different architectures, even in high-level languages. UB is just a kind of bug whose effect depends on how the compiler behaves, so you have to be careful to test your code with different compilers and settings. This is nothing new in programming languages; it is only made explicit in the C standard. Suddenly people started to believe that pointing out an obvious source of bugs (UB) in the standard is equivalent to letting programs misbehave.
There are also tons and tons of loop optimizations compilers do for side-effect-free loops which would have to be removed completely. This is because the standard lets the implementation assume that a loop without side effects terminates (C11 6.8.5p6; in C++ such infinite loops are outright UB). So if you wanted these optimizations without that rule, you'd have to prove to the compiler, at compile time, that your loop is guaranteed to terminate, since it would no longer be allowed to assume that it will. Without these loop optimizations, numerical C code (such as numpy) would be back in the stone ages of performance.
Edit: I just wanted to point out that one of the new features in C23 is a standard library header called <stdckdint.h> that provides checked integer arithmetic. This allows you to safely add, subtract, or multiply two unknown signed integers and get back a boolean flag indicating whether the mathematical result fit in the destination type. This will be the standard preferred way of doing overflow-safe math.
The problem is that such diagnostics only work well in the simplest cases, when the code will 100% exhibit UB within a single function.
In most cases, the UB would only manifest for particular input values; if you want your compiler to warn about that, it will report one "potential UB" for every 10 lines of C code, and nobody wants to use such a compiler.