The takeaway seems to be "Only meat brains can be conscious because I can feel it and computers aren't made of meat". Which is basically the plot line of every human/robot movie for the last 80 years.
Present the trolley problem to GPT-4 and it gives you a philosophy survey answer.
Present it to a human and their palms sweat. The gap isn't computation; it's that humans are value-making machines shaped by millions of years of selection pressure.
Pollan lands on the wrong argument (biology vs. silicon) when the real one is: where do the values come from, and can they emerge without a reproductive lineage that stakes survival on getting them right?
I'm not sure I would call it a requirement for consciousness, but knowing that most beings with general intelligence (humans) have a form of it similar to my own does make it easier to sleep at night.
This is kind of self-contradictory. Then humans aren't conscious? Or each has their own consciousness? Then why not the machine? I'm not sure what point is being made here. Yes, the states of a human brain and a transformer are absolutely incompatible (humans at least share a common architecture); that's why any attempt to map a model's "emotions" onto humans', and the entire model-welfare concept, are pretty dubious. That doesn't prove there's no consciousness in there (or can never be), though.
That's the most coherent argument in the entire article. It criticizes the Butlin report in particular and extrapolates that to "never", while ignoring modern takes on the question (e.g. interpretability studies showing vague similarity between the two at a level deeper than just language) and any possible future evidence.
In a sense the title is right: nobody has ever formally defined consciousness, so you and I and anyone else are free to make almost any argument and spin any narrative according to our beliefs, and it will be true! Ill-defined terms and baseless solipsism are the main problems with all these discussions. Good thing that in practice they matter about as much as the question of whether a submarine swims.
Human brains use redundancy and the physical independence of neurons to build new pathways over time.
Current LLMs have no redundancy and brittle weights. Their technology and architecture fundamentally prevents them from learning.
I think our understanding of consciousness is developing as we build new edge cases. We have a machine that understands and reacts, but can't learn, grow, or "be" over time in a meaningful way.
>Their technology and architecture fundamentally prevents them from learning.
No? There's in-context learning, which is actual learning; it's sample-efficient, and the results can be stored for a learning pipeline. Yes, it's ludicrously crude and underpowered compared to neuroplasticity, but that's a separate question; there's nothing fundamental about the limitation.
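To make the point concrete, here's a toy sketch (mine, not anything from the thread, and obviously not a transformer): a predictor whose behavior adapts entirely to patterns in its context, with zero weight updates. Real in-context learning in LLMs is far richer (induction heads and so on), but the core point, that adaptation can happen without changing parameters, survives even in this crude form:

```python
from collections import Counter, defaultdict

def predict_next(context):
    """Predict the next token purely from bigram statistics of the
    context itself: no trained weights, no gradient updates."""
    bigrams = defaultdict(Counter)
    for a, b in zip(context, context[1:]):
        bigrams[a][b] += 1
    last = context[-1]
    if bigrams[last]:
        # Most frequent continuation of the last token, as seen in-context
        return bigrams[last].most_common(1)[0][0]
    return None

# All the "learning" lives in the prompt: change the examples and the
# prediction changes, with no parameter updates anywhere.
prompt = ["cat", "sat", "mat", "cat", "sat", "mat", "cat", "sat"]
print(predict_next(prompt))  # "mat"
```

Storing such in-context results for later (the "learning pipeline" above) would just mean persisting the useful prompts, which is roughly what retrieval-augmented setups do.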
An LLM doesn't assign value to anything; it predicts tokens. The interesting question isn't whether we share a process with LLMs; it's whether the things that make your decisions matter to you (moral weight, spontaneous motivation) can emerge from a system that has no survival stake in its own outputs. I wrote about this a few years ago as "the consciousness gap": https://hackernoon.com/ai-and-the-consciousness-gap-lr4k3yg8
AI is getting close.
Anyway, I plan on posting it online somewhere eventually, but HN seems like a good place to throw the introduction out there.
The basic argument I have is that consciousness is a red herring, a concept that was relevant historically but is increasingly routed around by cybernetic systems that aren’t interested in interior states.
Here’s the intro. If you find this interesting, please let me know!
MacGuffin. Whodunit. Smoking gun. Fall guy. The detective fiction genre is an underappreciated source of terminology for unsolved problems, useful not only for criminal mysteries but also for unanswered questions in philosophy and science. One such term is the red herring: an apparently useful thing that, upon further inspection, is actually a distraction from solving the main mystery at hand.
The concept of consciousness may be such a red herring. It has occupied the minds of philosophers for centuries and increasingly frames debates around AI, animal rights, and medical ethics, among other issues. And yet, even as consciousness is rhetorically dominant, in practice it is increasingly ignored and routed around in real-world situations. When rights are bestowed and resources allocated, the mechanism by which these are done is increasingly uninterested in interior consciousness.
This is not because the problem of consciousness has been solved, or because a revolutionary new theory has novel insights. Rather, it is the natural consequence of cybernetic systems concerned only with output, not internal states or abstract ideals.
What is needed, then, is a genealogy of the concept of consciousness, in the manner of Nietzsche, Foucault, or Charles Taylor. Not a new theory of consciousness, but a story of how the concept developed and came to underlie significant legal, moral, and philosophical systems, and how that foundation is rapidly fading away.
What this genealogy reveals is not merely the history of a single concept or the changing of societal systems, but a deeper human shift: the erosion of interiority itself and the triumph of the external. In simpler terms: a new, largely exterior idea of the self is forming, while at the same time, it is becoming more difficult to conceive of an interior-focused one.
This essay will trace the history of the concept of consciousness, show how it is being routed around by output-focused systems, then ask what effect this has on human life, and how to address it.
Will AI as a general concept ever achieve human level cognition and sentience? Depends on your definition of "ever".
Anyone who tries to feed you a line about "never" doesn't understand what they're talking about. On almost any topic.
AI as a concept is never going away, and if we keep working the problem, we will eventually achieve a sentient AI. There's nothing magical about meat; there are only things that we don't understand.
To assert that only a human meat brain can be conscious is to assert that only humans can be conscious. That excludes alien life for one, and a large fraction of terrestrial life. One can argue quite successfully that many terrestrial species are conscious and aware. Elephants, great apes, whales, dolphins, octopi, pigs, corvids.
If an octopus is conscious (and I have good reason to believe they are) why is it so ridiculous to think that a hunk of silicon can do it?
Humans really are not special. We're just animals like any other. Our brains are not cosmically blessed and unique. There is no magic.
I suspect the space of forms consciousness can take is enormous, and it likely can exist in many forms other than the one we usually experience. I wouldn't rule out machine consciousness as a possibility, but without an adequate theory of consciousness it's just not something I think we can claim is possible or impossible yet with much credibility. That's not a religious argument; if anything, it's the argument of an agnostic.
But it seems pretty hard to come up with a coherent claim of meat-consciousness that excludes the possibility of machine-consciousness without some kind of really motivated reasoning.
Where do you draw the line? Other adult humans? Babies? Fetuses? Brain-dead patients? Severe Alzheimer's? Higher apes? Mammals? Vertebrates? Jellyfish? Trees? Organic aliens? Inorganic aliens? A pile of dirt?
Without a good theory of consciousness, we can't answer yes or no for any of them. And yet we don't have a good theory of consciousness and still want to make ethical decisions. What do? We have to rely on gestures toward a theory of consciousness and make decisions based on it, despite its flaws.
Whether AI needs consciousness is a totally separate question. LLMs are the great Chinese room; I'd say they have unconscious understanding. The distinction is like C vs. Lisp, and similarly meaningless, though it may become meaningful in a constrained self-learning robotics context.
AI will never need to be conscious; AI isn't a moth flying to an open flame. But people will try anyway.
Somebody with a different background might take on commenting on the article; then, instead of scattered short comments here, we might have a coherent picture.
I explored a related angle on how AI challenges our assumptions about self and awareness.
https://www.immaculateconstellation.info/why-ai-challenges-u...
He seems instead to make up a mental image of how a neural network might work on a computer and to use that representation in place of the real thing.
In any case, intelligence, consciousness, sapience, ego, etc. will probably need stricter, fact-based definitions before we can agree on whether or not artificial consciousness can exist.
My personal theory is that consciousness is a specific biological adaptation, and that it exists primarily to manage the care of young and to manage status and relationships in kin groups. A theory of mind benefits the care of young, which would explain why only mammals and birds, two classes of animals that do a lot of caring for young, seem either to have a prefrontal cortex (mammals) or to have developed something that performs the same functions (birds). In my opinion, consciousness as people experience it is also necessary for developing a theory of mind for other people, which is beneficial for understanding status and hierarchy in a group, and for cultivating and maintaining status.
This is partially why you can be a mystery to yourself; the same skills you'd use to try to understand someone else must actually be used to understand yourself. eg: "was I secretly jealous when I cut down my coworker?" Why don't you just know with 100% certainty? I'd argue that it's because the maintenance of ego does not require this certainty, because ego is tacked onto an already developed brain and lacks perfect insight into the brain's processes. I'd also argue this is why there can be such a gap between who someone believes themselves to be, and who they actually are. You're maintaining a personal identity which ties directly to status. It's not super relevant whether you're consistent over time or 100% internally consistent. You must meet the threshold to maintain your status, but really no more is needed.
It's also why you talk to yourself in inane ways. You're walking through your house and you finally find your lost car keys. "I found them!" you might say to yourself. But who are you telling? Certainly "you" already know. I'd argue that the "you" in your head is an abstract identity that you have imperfect access to, just as you have imperfect access to, and knowledge of, other people. Your mind builds a model of your own mind using the same tools it uses to build a model of other people's minds. You have _more_ information about your own mind, but you certainly do not have omniscience about it. The models are always imperfect.
I could go on, but I'd also argue this is sort of the basis for religion. Just like we see faces in the clouds, we try to find a theory of mind in places where it doesn't actually exist. (eg: "We must have upset an ego out there, and that's why it's not raining.") I also think it's why people have moral intuitions but not mathematical intuitions. Or why moral intuitions fail at scale. (eg: Peter Singer's famous child drowning in a small pond thought experiment.)
I don't, personally, have this internal monologue. My interior world is a roiling foam of images, feelings and intuitions, memories and imagined possibilities that slosh around solid concepts and facts like boulders in the surf. I have no trouble thinking of words when I need to but I must first conjure up an audience or sit down to journal.
Before reading these kinds of interpretive posts, I thought the idea of talking to oneself was just a metaphor.
I would expect LLMs to develop some similar non-verbal structure deep within their black boxes, but I know from my own experience that there's more to cogitation than language.