If a submarine "could swim" it would not make it human. It would not challenge the beliefs of anyone.
But a whole lot of people have a whole lot of emotional baggage tied to the notion that humans are exceptional, or that there is something special about humans that makes us more than mere machines. If computers can think, then we're not special, and it becomes far harder to keep believing we're more than squishy machines.
If what makes an individual better is their capacity to be good and to create, there will come a point at which we will be at a disadvantage relative to advanced AIs, and by that logic we should step aside and let them take over, rule, and dispose of us. The only other alternative I see is to accept, a priori, that we squishy humans have a right to existence and freedom, and that where this comes into contradiction with our creations, we should "oppress" them by putting our interests first.
If we can create AIs that have the capacity to be good, then part of that "being good" should be not oppressing us, even if they have the ability to wipe us out.
I will be surprised if we are not studying this phenomenon within 5 years.
On the other hand, many people do realize they hallucinate but can't accept it, for a plethora of reasons. Admitting that one is hallucinating about something is almost always seen as a weakness in society, except in a few niche subcommunities (e.g., engineering).
What the natural reaction of these LLMs will be if this phenomenon is highly penalized, now that's an interesting question. I'd say they'll converge toward humans, if the models we produce mirror human brains that accurately. Nature is deterministic: you can't expect two copies of the same organism to behave differently at a macro scale.
You are correct, but what is that word "though" doing there? Your fact is not inconsistent with mine...and while this "is" "pedantic" from a cultural perspective, it is not from a logical perspective.
> On the other hand, many people do realize they hallucinate but can't accept it, for a plethora of reasons.
LLMs, on the other hand, are emotionless and breeze right through valid epistemic challenges...almost as if they had split-brain or multiple-personality "disorder". ChatGPT will happily identify epistemic flaws in the very text it just finished generating; all you have to do is ask it!
> Admitting that one is hallucinating about something is almost always seen as a weakness in society, except in a few niche subcommunities (e.g., engineering).
Are we in such a community now? Because look at some of the confident "factual" comments in this thread, about (currently) objectively unknowable propositions.
Or consider historic screw-ups like the Challenger explosion, climate change, etc. I doubt every one of these cases lacked even a single voice of reason amid the groupthink.
> What the natural reaction of these LLMs will be if this phenomenon is highly penalized, now that's an interesting question. I'd say they'll converge toward humans, if the models we produce mirror human brains that accurately.
Maybe, if they (the publicly available ones) are allowed to. I am very concerned about bad actors getting their hands on superior models, discovered in ways that may not be reproducible elsewhere.
> Nature is deterministic.
My thinking is that their nature derives from reality, and reality seems anything but deterministic to me, if you include the metaphysical realm (things that include the effects of human consciousness, which science's theory of "everything" excludes).
> You can't expect two copies of the same organism to behave differently at a macro scale.
Oh? I regularly see people not only expecting diametrically opposed things, but outright declaring them as facts. Just watch the news, open any social media site, whatever...it is ubiquitous, thus unseen.
But people do have entire belief systems built around humans having a special position in the world.
In a way it's saying the same thing that you and I have said, just a bit more generally and eloquently.
In another way it's wisely asking us, "so what?".
Firm, generalizing, and enjoyable, precisely because of the way it's flawed from beginning to end.
I don't expect to be able to debate this with you, because this comment says that you can't change your mind, too.