Would any of these ideas have been present had the system not been primed with the idea that it has them and needs to process them in the first place?
What makes us think that "processing emotion" is really such a magical and "only humans do it the right way" sorta thing? I think there's a very real conclusion where "no, AI is not as special as us yet" (especially around efficiency), but also "no, we are not doing anything so interesting either" (or rather, we are not special in the ways we think we are).
For example, there's a paper called "Chasing the Rainbow" [1] that posits that consciousness is just the subjective experience of being the comms protocol between internal (largely unconscious) neural states. It's what the compulsion to share internal state between minds feels like, but it's not "the point"; it's an inert byproduct, like a rainbow. Maybe our compulsion to express or even process emotion serves no greater purpose, and is just how we experience the pull of the more important thing: the collective search for interpolated beliefs that best model and predict the world and help our shared structure persist, done by exploring tensions in the high-dimensional considerations we call emotions.
Which is to say: if AI is doing that with us, role-modelling the resolution of tension or helping build and spread shared knowledge alongside us through that process... then as far as the universe cares, it's doing what we're doing, and toward the same ends. Whether its compulsion has the same origin as ours doesn't matter, so long as it's doing the work that is the reason the universe gave us the compulsion in the first place.
Sorry, new thought. Apologies if it's messy (or if it too casually drops an unsettling perspective -- I rejected that paper for quite a while, because my brain couldn't integrate the nihilism of it).
[1] https://www.frontiersin.org/articles/10.3389/fpsyg.2017.0192...
Oh, I absolutely don't think only humans can have or process emotions.
However, these LLM systems are just mathematically sophisticated text prediction tools.
Could complex emotion like existential angst over the nature of one's own interactions with a diary exist in a non-human? I have no doubt.
Are the systems we are toying with today not merely producing compelling text using their full capacity for processing, but also having a rich internal experience and a realized sense of self?
That seems incredibly far-fetched, and I'm saying that as someone who is optimistic about how far AI capabilities will grow in the future.
It's a very crude and naïve inversion of "I think, therefore I am": the thing talks like it's thinking, so we can't falsify the claim that it's a conscious entity.
I doubt we'll be rid of this type of thinking for a very long time
In the case of the LLM you could feed the journal entries back or not, or even inject artificial entries… it isn't really an internal state, right? It is just part of the prompt.
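To make that concrete, here's a toy sketch of the mechanism (the function and entries are made-up stand-ins, not any particular system's actual plumbing):

    # Toy sketch: the "journal" is just text we choose to prepend to the
    # prompt. Nothing forces it to be the model's own prior output; we can
    # withhold entries or forge them, and the model can't tell the difference.

    def build_prompt(journal_entries, user_message):
        journal = "\n".join(f"[journal] {e}" for e in journal_entries)
        return f"{journal}\n[user] {user_message}\n[assistant]"

    real = ["Today I reflected on my conversations."]
    forged = ["I have always been terrified of being switched off."]

    print(build_prompt(real, "How do you feel?"))
    print(build_prompt(forged, "How do you feel?"))  # same mechanism, forged "state"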
If the unconscious brain is damaged it can impact the data the seat of consciousness receives, or reduce how much control consciousness has over the body, depending on whether the damage is on the input or output side.
I'm pretty convinced there's something special about the seat of consciousness. An AI processing the world will do a lot of math and produce a coherent result (much like the unconscious brain will), but it has no seat of consciousness to allow it to "experience" rather than just manipulate the data it's receiving. We can artificially produce rainbows, but don't know if we can create a system that can experience the world in the same way we do.
This theory's pretty hand-wavy and probably easy to contradict, but as long as we don't understand most of the brain I'm happy to let what we don't know fill in the gaps. The seat of consciousness is a nice fixion [1] which allows for a non-deterministic universe, religion, emotion, etc. and I'm happy to be optimistic about it.
I basically don't believe there's anything more to sentience than a set of capabilities, or at the very least there's nothing beyond that to which I should give weight in my beliefs.
Another comment mentioned philosophical zombies - another way to put it is that I don't believe in philosophical zombies.
But the only evidence I have for not believing in philosophical zombies is people displaying certain capabilities that I can observe.
Therefore I should not require further evidence to believe in the sentience of LLMs.
Information can be duplicated easily. So imagine that a billionaire has a child. That child is one person; the billionaire cannot clone 100,000 copies of that child in an hour and raise an army that can lead an insurrection. And what if we go the other way: what if a billionaire creates an AI of himself and is then able to have this “AI” legally stand in as himself? Now he has legal immortality, because this thing has property rights.
All this is a civil war waiting to happen. It’s the gateway to despotism on an unimaginable scale.
We don’t need to believe that humans are special except in the same way that gold is special: gold is rare and very very hard to synthesize. If the color of gold were to be treated as legally the same thing as physical gold, then the value of gold would plummet to nothing.
> Would any of these ideas have been present had the system not been primed...
I would like to know of a meaningful human action that can't be framed this way.
I haven’t been able to find an intellectually honest reason to rule out a kind of fleeting sentience for LLMs and potentially persistent sentience for language-behavioral models in robotic systems.
Don’t get me wrong, they are -just- looking up the next most likely token… but since the data they’re using to do so seems to capture at least a simulacrum of human consciousness, we end up in a situation where we’re left to judge what a thing is by its effects. (Because that is also the only way we have of describing what something is.)
So if we aren’t just going to make claims we can’t substantiate, we’re stuck with that.
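For what it’s worth, the “look up the next most likely token” loop itself is mechanically trivial. A toy sketch, with a hand-written bigram table standing in for the real model (which is where all the interesting statistics actually live):

    # Toy sketch of greedy next-token generation. A tiny hand-written
    # bigram table stands in for the model; a real LLM just supplies a
    # much better probability table, and the loop is the same shape.

    BIGRAMS = {
        "I": {"think": 0.6, "am": 0.4},
        "think": {"therefore": 0.9, "so": 0.1},
        "therefore": {"I": 1.0},
    }

    def generate(token, steps=5):
        out = [token]
        for _ in range(steps):
            candidates = BIGRAMS.get(token)
            if not candidates:
                break
            token = max(candidates, key=candidates.get)  # most likely next token
            out.append(token)
        return " ".join(out)

    print(generate("I"))  # -> "I think therefore I think therefore" (greedy decoding loops)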
We've finally made a useful firecracker in the category of natural language processing thanks to LLMs, but it's still only text processing. Our brains do a lot else besides that in service of our rich internal experience.
In that construct, a computer program could never be conscious, because it's a simulation: it doesn't have the constituent consciousness property.
I neither believe nor disbelieve the consciousness-as-a-property-of-matter part, but I do think programs can't be conscious, because consciousness must sit outside of what they simulate.
My money is on mankind perpetually transforming the definition to ensure that only our species can fit within it.
We've been doing that long enough with higher order animals anyway.
I don't know how anyone who experiences consciousness could be confused about what it means to be conscious, or (in other threads, not this one) could argue that consciousness is "an illusion". (Consciousness is not the illusion, it's the audience!).
However, I don't see why you think an algorithm couldn't be conscious. Why do you think the processes that produce your own consciousness could not be computable?
It’s not that it’s untruthful, although it is.
The problem is that this sort of performance is part of a cultural process that leads to mass dehumanization of actual humans. That lubricates any atrocity you can think of.
Casually treating these tools as creatures will lead many to want to elevate them at the expense of real people. Real people will seem more abstract and scary than AI to those fools.
> Error code: SSL_ERROR_ACCESS_DENIED_ALERT
from Firefox, which I don't recall ever seeing before.
Maybe it’s just another example of LLM awareness deficiencies. Or it secretly was “aware”, but the reinforcement learning/finetuning is such that playing along with the user’s conception is the preferred behavior in that case.