The better the models are at reasoning, the less predictable they become.
This is analogous to top chess engines, which make surprising moves that even human super grandmasters can't always understand.
Thus, the future will be unpredictable (if superintelligence takes control).
Link to the full talk & the time he talks about this: https://youtu.be/1yvBqasHLZs?si=3M6eZCQtXnW2tSUd&t=866
It won't just be a matter of learning how they react, because behavior will differ from one self-driving platform to another, and sometimes even from one version of a platform to another. And is self-driving engaged, or is the human in control at the moment? Or is self-driving in the process of handing control back to the human, making behavior different from what it was a moment ago?
As opposed to the predictable future we've had for the past few decades?
If superintelligence emerges soon, we may not even know which technologies will emerge, or how many will be unleashed, in the next two decades.
ADDED: Examples of some concrete predictions:
(1980) https://en.wikipedia.org/wiki/The_Third_Wave_(Toffler_book)
(1995) https://en.wikipedia.org/wiki/The_Road_Ahead_(Gates_book)
Obviously, the specifics differed from the predictions, and plenty of people got things wrong. Still, many good forecasters got the broad strokes right.
Which forecasters can even predict most technologies that would be invented after ASI emerges?
Ideally voices that don’t have a vested interest.
For example, give a superintelligence some money and tell it to start a company. Surely it's going to quickly understand that it needs to manipulate people to get them to do the things it wants, in the same way a kindergarten teacher sometimes has to "manipulate" the kids. Personally, I can't see how we're not going to find ourselves in a power struggle with these things.
Does that make me an AI doomer party pooper? So far I haven't found a coherent optimistic analysis — just lots of very superficial "it will solve hard problems for us! Cure disease!"
It certainly could be that I haven’t looked hard enough. That’s why I’m asking.