When you talk with him about a technical idea, you come away feeling that whatever Carmack said was plausible, and you start questioning your own assumptions.
I’ve learned to trust his thinking. That doesn’t mean I don’t think for myself. Quite the opposite: I doubt anyone else in the AI scene has insisted to him that having a loss function is crucial.
But in matters like this, where he has clearly given it more thought than I have (see tweet) and has more experience than I have (see son), I am perfectly content to outsource my thinking to him.
I’ll wager you $500 that he’s right. (The trouble with such wagers is that the terms are hard to define precisely. But if they could be made as precise as a mathematical formula, I would happily bet you the $500.)