> but we can (and routinely do) also transcend that, question it, play with it. LLMs can’t.
Maybe not in a single inference, but you can have an LLM question itself by running another inference that takes its previous output as input. You can see this in a deep-research agent loop: it finds some data, goes looking for corroborating sources, discovers the original claim was actually incorrect, and changes its mind.
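A minimal sketch of the idea, with `call_model` as a stand-in stub for a real LLM API call (the stub's behavior is invented purely for illustration): each pass feeds the previous answer back to the model for critique, so the "questioning" happens across inferences rather than within one.

```python
def call_model(prompt: str) -> str:
    # Stub: a real implementation would call an LLM API here.
    # This fake model revises its claim once it reviews it, then sticks.
    if "X is false" in prompt:
        return "Claim: X is false"
    if "Claim: X is true" in prompt:
        return "On review, the evidence contradicts that. Claim: X is false"
    return "Claim: X is true"

def self_questioning_loop(question: str, max_rounds: int = 3) -> str:
    answer = call_model(question)
    for _ in range(max_rounds):
        # Feed the previous answer back in as input for critique.
        critique_prompt = (
            f"Question: {question}\n"
            f"Previous answer: {answer}\n"
            "Check this against the evidence and revise if wrong."
        )
        revised = call_model(critique_prompt)
        if revised == answer:  # converged: no change of mind this round
            break
        answer = revised
    return answer

result = self_questioning_loop("Is X true?")
print(result)
```

No single forward pass "transcends" anything here; the revision comes from the outer loop re-prompting the model with its own output, which is all a deep-research agent is doing.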