> Yes but at least they choose what they regurgitate unless you think of most people as automatons.
And what, are you saying ChatGPT doesn't choose what it regurgitates?
It seems like these arguments are getting more and more flimsy.
I do believe people (including me) are automatons, because I think free will, as most people intuitively conceive of it, is logically impossible.
Edit: to clarify, I believe people usually think of free will as some magical, soul-like faculty that lets you choose what you do in a principled way that is not simply a direct functional result of your composition and your interactions with the environment, plus whatever pure randomness the environment imposes on you (the universe being quantum). But that is exactly how an intelligent machine would have to work, because it has to live in the same universe we do, so in principle a machine can do what our minds do, functionally speaking. There's no magical free-will-like behavior that humans can have and machines can't, unless you believe in souls or other magical things.
> So far AI doesn’t bring any reasoning
This is clearly untrue: ChatGPT can definitely reason pretty well (though not always correctly, just like humans). As far as I can see, it can reason deductively, inductively, by analogy, and abductively; it does cause-and-effect reasoning, critical thinking, step-by-step reasoning, you name it.
It might not always do it correctly, and it might not yet do it as well as a good human can, but it can do it.
> Someone posted an example of GPT bullshitting something akin to 2+0=3, but very convincingly.
Humans do this all the time (although usually not at such an extreme level). Just look at all the posts saying ChatGPT can't do X or Y ;)