So the concept of formal verification is as relevant as ever, and as programs become more interconnected, the complexity rises and verification gets harder.
Absolutely. It's also worth noting that in the case of Tao's work, the LLM was producing Lean and Python code.
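That pairing matters because Lean output is machine-checkable end to end. As a purely illustrative sketch (not taken from Tao's project), a theorem like the one below is either accepted or rejected by the Lean kernel, so the output can't be "plausible but wrong":

    -- Lean 4: a trivial theorem, proved by a core library lemma.
    -- If this compiles, the kernel has verified the proof; nothing
    -- is left for a human to double-check beyond the statement itself.
    theorem my_add_comm (a b : Nat) : a + b = b + a :=
      Nat.add_comm a b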
For easy-to-verify fields like coding, you can train "domain intuitions" directly into the LLM (and some of that training should generalize to other knowledge work), but for other fields you'd have to supply them in the prompt, since those abilities can't be trained into the LLM directly. That will need better models, but it might become doable in a few generations.
Using LLMs to validate LLMs isn't a solution to this problem. If the system can't self-verify, there's no signal to tell the LLM that it's wrong. The LLM is fundamentally unreliable; that's why you need a self-verifying system to guide and constrain the token generation.
It can't because the LLM can't test its own design. Unlike with code, the LLM can't incrementally crawl its way to a solution guided by unit tests and error messages. In the real world, there are material costs for trial and error, and there is no CLI that allows every aspect of the universe to be directly manipulated with perfect precision.
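To make the contrast concrete, here's a minimal sketch of the loop that does exist for code, assuming a pytest suite under tests/ that imports the generated module, with a hypothetical generate_candidate standing in for the LLM call. Nothing analogous exists for a physical design:

    import subprocess

    def generate_candidate(task: str, feedback: str) -> str:
        """Stand-in for an LLM call (hypothetical; no real API implied)."""
        raise NotImplementedError

    def test_guided_repair(task: str, max_iters: int = 5) -> str | None:
        """Regenerate code until the external test suite passes.

        The test runner's exit code and output are the verification
        signal: code gets it for free, a physical design does not.
        """
        feedback = ""
        for _ in range(max_iters):
            candidate = generate_candidate(task, feedback)
            with open("candidate.py", "w") as f:  # module the tests import
                f.write(candidate)
            result = subprocess.run(
                ["pytest", "-q", "tests/"],
                capture_output=True, text=True,
            )
            if result.returncode == 0:
                return candidate  # accepted by the test suite
            feedback = result.stdout + result.stderr  # errors guide the retry
        return None  # no verified solution found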
OpenAI demoed training a model to control a robotic hand using this approach years ago.