Before we had computers and machines, humans did all the work.
And you can't really reliably "code" humans. They misunderstand instructions. They disobey rules and regulations. They make mistakes.
But society still thrived with these unreliable humans, who offer no "proof" whatsoever that they'll do the job properly.
In fact, these days we still often trust humans more than code, at least in areas where the stakes are high. For example, we trust human surgeons over programs that perform surgery. We trust humans to run the government rather than programs.
It's entirely plausible that future generations growing up with AI won't see the point of requiring "proof" of correctness when deploying automation. If an AI model does the job correctly in 99.99% of cases, isn't that sufficient "proof"? That's better than a human, for sure.
Yeah, that sounds dystopian, but I don't see why it can't happen.