It's a powerful tool, but the belief that sufficient test coverage proves correctness is, in most common cases, misguided. In the general case, even full test coverage proves only that you've tested for the conditions you expected - it does nothing to verify the correctness of behavior under conditions you didn't expect.[1]
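A minimal sketch of that point (hypothetical function and test names): every line here is exercised by tests, so a coverage tool reports 100%, yet the function is still wrong for an input the author never anticipated.

```python
def average(values):
    # Author assumed a non-empty list; the empty case was never considered.
    return sum(values) / len(values)

# These two assertions execute every line of average() -- "full coverage":
assert average([2, 4]) == 3
assert average([5]) == 5

# But the unexpected condition still fails at runtime:
try:
    average([])
except ZeroDivisionError:
    pass  # coverage said nothing about this
```

The suite proves the expected cases behave as specified, nothing more.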
To me the benefits of TDD are three-fold:
1. It makes you think of what you're building in more detail before you build it.
2. The methodology puts heavy emphasis on short test-code cycles.
3. (This applies to any methodology that emphasizes coverage.) You end up with an acceptable-to-great regression suite[2], and anecdotally people seem to do a better job of at least ensuring tests exist when the methodology requires it.
All of these benefits are possible without TDD - short iterative cycles and additional forethought just require more discipline, because it is harder to remember to stop after completing a small set of changes without a forcing mechanism.
[1] Rust is a possible exception here; I'm still wrapping my head around it.
[2] The value of this regression suite varies greatly from project to project. A hint that tests are low-value (and potentially high-cost) is when minor internal changes either break a large number of tests or reduce coverage to a noticeable degree, particularly in the absence of functional changes.