You’re still failing to understand that a model being able to output a prediction of something is not the same thing as it “knowing” that thing. The Newton-Raphson method doesn’t “know” what the root of a function is, it just outputs an approximation of it.
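To make that concrete, here's Newton-Raphson in a few lines (an illustrative sketch; the function names are my own). The procedure just iterates x ← x − f(x)/f′(x) and spits out increasingly good approximations; nowhere does it contain any representation of what a "root" is:

```python
def newton_raphson(f, df, x0, iterations=20):
    """Iterate x <- x - f(x)/f'(x) starting from x0."""
    x = x0
    for _ in range(iterations):
        x = x - f(x) / df(x)
    return x

# Approximate sqrt(2) as the positive root of f(x) = x^2 - 2.
root = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, x0=1.0)
```

It converges on √2 to machine precision, yet calling that "knowing the square root of two" would be a category error — which is the point.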
> It’s extremely silly then don’t you think to make such bold declarations on what doesn’t have it?
I don’t find it particularly bold to respond to your assertion that a piece of mathematics is sentient life by pointing out that you haven’t proven it. Absent that proof, the most rational position is the one we’ve held for millennia: that it is not. The burden of proof is on you.
> if it does anything a conscious being does
You haven’t shown that it can do anything that only conscious beings can do.
Being able to generate a passable approximation of text that might follow some prompt doesn’t mean it understands the prompt, or its own answer. As an obvious example, if you give LLMs maths problems, they change their answers when you merely change the names of the people in the question. They’re not actually doing maths.
> Notice anything? It’s not just that the performance on MathGLM steadily declines as the problems gets bigger, with the discrepancy between it and a calculator steadily increasing, it’s that the LLM based system is generalizing by similarity, doing better on cases that are in or near the training set, never, ever getting to a complete, abstract, reliable representation of what multiplication is.[0]
[0] https://garymarcus.substack.com/p/math-is-hard-if-you-are-an...