Coverage tools can only measure the quality of a test suite if you assume either that the code is correct or that the existing tests logically exercise everything about what they test. Without one of those guarantees, the number doesn't tell you anything very meaningful, as you discovered.
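To make that concrete, here's a hypothetical sketch (function and test names are made up): a buggy function whose every line and branch runs under test, so a coverage tool reports 100%, yet the test asserts nothing and the bug survives.

```python
def absolute(x):
    if x < 0:
        return -x
    return x - 1  # bug: should be `return x`, but coverage can't see that


def test_absolute():
    # Exercises both branches -> 100% line and branch coverage...
    absolute(-5)
    absolute(5)
    # ...but with no assertions, the wrong return value goes unnoticed.


test_absolute()
print("every line executed; bug still present")
```

Coverage here only tells you the code *ran*, not that its behavior was checked, which is exactly why the metric alone can't certify a suite's quality.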
If you really think the metric is meaningless and useless for deriving confidence, then you are necessarily asserting that code bases with 100% coverage have bug yields indistinguishable from those with 50%, 5%, or even 0% coverage. That claim is too extraordinary to accept without considerable evidence.
I would agree that a high coverage score is a prerequisite for confidence that your test suite is comprehensive, but that's a very low bar. It's like saying a full bath is a prerequisite for a nice house: knowing a house has one shouldn't do much to convince you it's a mansion.