You can't be one-in-a-trillion confident about any particular DNA result, even if that's what the nominal probability from the DNA analysis itself seems to say, because it is objectively observable that there are plenty of other sources of error of all sorts. You can only get down to that "noise floor" of probabilities. This can still be a very useful result, but it's important not to give it any more weight than it deserves.
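The "noise floor" point can be sketched with some simple arithmetic. The numbers below are hypothetical, chosen only to illustrate how an assumed lab-error rate swamps the headline match probability:

```python
# Hypothetical rates, for illustration only.
p_match = 1e-12      # nominal probability of a coincidental DNA match
p_lab_error = 1e-4   # assumed rate of sample swaps, contamination, mislabeling

# Probability an innocent person nonetheless "matches": either a true
# coincidental match or a lab error (inclusion-exclusion for the overlap).
p_false_positive = p_match + p_lab_error - p_match * p_lab_error

print(f"{p_false_positive:.2e}")  # dominated by the lab-error term
```

However small the nominal match probability gets, the overall false-positive rate can never drop below the error rate of the surrounding process.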
Even if you have a magic machine that you can point at someone and it goes "boop" if they are guilty, you still must consider the probability that someone faked the "boop", or swapped in a different shell that looks the same and makes the same "boop" when someone remotely triggers it, or that the speaker was broken, so even though the magic machine tried to "boop", nobody could hear it. All of these are quite realistic, and of much higher probability than the magic machine existing in the first place.
(This is also arguably the root problem with the still-popular "The computer said it, so it must be true." Even if you assume the computer really is 100% accurate, itself a transparently false proposition when examined in daylight, there are still plenty of other reasons not to place too much confidence in the computer's output.)
You could predict someone's behavior, but why not simply hedge against every possibility?
Magic does involve a good amount of sleight of hand, but it mostly relies on getting people to accept flawed premises. The most essential part of any given trick is the bit where you tell them what's going to happen: even as they reject it and try to bring skepticism to bear, they're often working off of (carefully placed) incorrect assumptions.
So they trust experts without even knowing what questions to ask, or how to assess evidence, or evaluate certainty, or when they should get more opinions.
Not unlike the Post Office convictions. Then there were the Roy Meadow convictions. There have been many others, and I am sure many more people who have never been able to prove their innocence.
The entire basis for being so enamored with science, as so many of us are, is that it provides a method for establishing facts without needing trust.
Despite the noise to the contrary, we live in a highly literate age, and are a species filled with curiosity and compassion. There's no reason that scientific findings - especially those used to underwrite public policy - cannot be made easy to understand and straightforward to replicate.
What interventions might those be? I'm only aware of the interventions implemented here that have time and time again been proven highly effective: masking, improved hygiene guidelines, distancing, and vaccination.
One of the vaccines was recalled due to causing potentially fatal blood clots: https://www.pfpdocs.com/jj-vaccine-recall
Lockdowns were roundly rejected by an enormous chorus of the top experts in relevant fields, and yet somehow the messaging was spun to make it sound like there was significant debate. And of course, no actual data was ever made available by proponents for the rest of us to even consider, let alone replicate. This is what I mean by the "trust the science" message being a contradiction in terms.
Of the interventions you mention: masking was not shown "time and time again" to be highly - or even moderately - effective. The dearth of rigorous study on the matter is bizarre. The Bangladesh study showed no statistically significant effect for cloth masks, and very modest effects for others - certainly nowhere near enough to justify mandates.
I'm not very aware of the literature on distancing - can you provide sources to research which you think shows that it has "time and time again proven to be highly effective"?
Vaccination, of course, appeared incredible out of the gate, but we now know that there was significant unblinding during phase III, and the real-world results have not lived up to either the safety or efficacy claims. So, while the vaccines are a great achievement, I'm not sure we can conclude that the scientific method was adopted as rigorously as we might hope, in the context of this discussion about the pitfalls of "trust the science" in matters of public policy. Instead, profit seems to have motivated a relatively shoddy series of rollouts. Moreover, the fallout over the disastrous booster approval cost us a number of experts who resigned in protest (Gruber and Krause are obviously the most notable, but there were many others, both at the FDA and in academia). So I think it still belongs in the 'loss' column as science-based policy goes.