Then I come to
> [LLMs] can get better at reproducing patterns found online, but they don’t become capable of actual reasoning; it seems that the problem is fundamental to their architecture.
and wonder how an intelligent person can still think this, can be so absolute about it. What is "actual" reasoning here? If an AI proves a theorem, is it only a simulated proof?
No one in neuroscience, psychology or any related field can point to reasoning or 'consciousness' or whatever you wish to call it and say it appeared from X. Yet we have this West Coast IT cultish thinking that if we throw money at it we'll just spontaneously get there. The idea that we're even 1% close should be ridiculous to anyone rationally looking at what we're currently doing.
This is not a good argument. Natural systems, the subject of neuroscience/psychology, are much harder to analyze than artificial systems. For example, it's really difficult to study atmospheric gases in the wild and derive Boyle's or Charles's law. But put a gas in a closed chamber and vary the pressure or temperature, and these laws are trivially apparent.
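(For reference, since the laws are named but not stated: Boyle's law says PV is constant at fixed temperature; Charles's law says V/T is constant at fixed pressure. Both are special cases of the ideal gas law PV = nRT, which becomes easy to see once you can hold one variable fixed in a chamber; that is exactly the legibility point being made.)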
LLMs are much more legible systems than animal brains, and they are amenable to experiment. So, it is much more likely that we will be able to identify what "reasoning" is by studying these systems than animal brains.
P.S. I don't think we are there yet, whatever internet commentators might assert.
We just need to figure out how to train that network.
Reasoning is undefined, but a human recognizes it when it appears. I don't see consciousness as part of that story. Also, whether you call it emulated or play-acted reasoning apparently does not matter. The results are what they are.
I think what he is trying to say is that LLMs' current architecture seems to work mainly by finding patterns in the existing body of knowledge. In some senses finding patterns could be considered creative and could entail reasoning, and that might be the degree to which LLMs can be said to be capable of reasoning or creativity.
But it is clear humans are capable of creativity and reasoning that are not reducible to mere pattern matching and this is the sense of reasoning that LLMs are not currently capable of.
No, but you described a `cp` command, not an LLM.
"Creativity" in the sense of coming up with something new is trivial to implement in computers, and has long been solved. Take some pattern - of words, of data, of thought. Perturb it randomly. Done. That's creativity.
The part that makes "creativity" in the sense we normally understand it hard, isn't the search for new ideas - it's evaluation of those ideas. For an idea to be considered creative, it has to match a very complex... wait for it... pattern.
That pattern - what we call "creative" - has no strict definition. The idea has to be close enough to something we know, so we can frame it, yet different enough from it as to not be obvious, but still not too different, so we can still comprehend it. It has to make sense in relevant context - e.g. a creative mathematical proof has to still be correct (or a creative approach to proving a theorem has to plausibly look like it could possibly work); creative writing still has to be readable, etc.
The core of creativity is this unspecified pattern that things we consider "creative" match. And it so happens that things matching this pattern are a match for pattern "what makes sense for a human to read" in situations where a creative solution is called for. And the latter pattern - "response has to be sensible to a human" - is exactly what the LLM goal function is.
Thus it follows that real creativity is part of what LLMs are being optimized for :).
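A throwaway sketch of that generate-then-evaluate loop, just for concreteness. The `is_creative` test here is a toy placeholder; the whole point above is that the real evaluation pattern has no strict definition:

```python
import random

VOCAB = "to be or not sleep dream perchance die".split()

def perturb(words):
    # "Creativity" as random perturbation: take a known pattern and
    # randomly replace one element with something drawn from elsewhere.
    out = list(words)
    out[random.randrange(len(out))] = random.choice(VOCAB)
    return out

def is_creative(candidate, known):
    # Toy evaluator: close enough to the known pattern to be framed,
    # yet different enough not to be obvious. The real-world version
    # of this pattern is the hard, unspecified part.
    diffs = sum(a != b for a, b in zip(candidate, known))
    return 1 <= diffs <= 2

known = "to be or not to be".split()
candidates = (perturb(known) for _ in range(100))
print([" ".join(c) for c in candidates if is_creative(c, known)][:3])
```

The search step is trivial, as claimed; everything interesting is hidden inside `is_creative`.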
If you, after copying the book, could dynamically answer questions about the theory and its implications, and answer variations of problems or theoretical challenges in ways that reflect mainstream knowledge, I think that absolutely would indicate understanding of it. I think you are basically making Searle's Chinese Room argument.
>But it is clear humans are capable of creativity and reasoning that are not reducible to mere pattern matching and this is the sense of reasoning that LLMs are not currently capable of.
Why is that clear? I think the reasoning for that would be tying it to a notion "the human experience", which I don't think is a necessary condition for intelligence. I think nothing about finding patterns is "mere" insofar as it relates to demonstration of intelligence.
It's not clear, though: nobody really knows what most of the words in that sentence mean in a technical or algorithmic sense, and hence you can't really say whether LLMs do or don't possess these skills.
>But it is clear humans are capable of creativity and reasoning that are not reducible to mere pattern matching and this is the sense of reasoning that LLMs are not currently capable of
This is not clear at all. As it seems to me, it's impossible to imagine or think of things that are not in some way tied to something you've already come to sense or know. And if you think I am wrong, I implore you to provide a counterexample. I can only imagine something utterly unintelligible, and making it intelligible would require "pattern matching" (i.e., tying) it to something that is already intelligible. I mean, how else do we come to understand a newly found dead or unknown language, or teach our children? What human thought operates completely outside existing knowledge, if not given empirically?
Then cross referencing that new random point/idea to see if it remains internally consistent with the known true patterns in your dataset.
Isn't that often how humans create new ideas?
Chiang has it exactly right with his doubts; the notion that pattern recognition is little different from the deeply complex navigation of reality we living things do is the badly misguided one.
How do you know this?
I can see the argument against ChatGPT-4 reasoning.
The reasoning models, though, get us into some confusing language, but I don't know what else you would call what they do.
If you say a car is not "running" the way a human runs, you are not incorrect, even though a car can obviously "outrun" any human in terms of moving speed on the ground.
But to say that since a car can't run, it can't move is obviously absurd.
Using the word “hallucinate” is extremely misleading because it’s nothing like what people do when they hallucinate (thinking there are sensory inputs when there aren’t).
It’s much closer to confabulation, which is extremely rare and is usually a result of brain damage.
This is why a big chunk of people (including myself) think the current LLMs are fundamentally flawed. Something with a massive database to statistically confabulate correct stuff 95% of the time and not have a clue when it’s completely made up is not anything like intelligence.
Compressing all of the content of the internet into an LLM is useful and impressive. But these things aren’t going to start doing any meaningful science or even engineering on their own.
An LLM does nothing more than predict the next token in a sequence. It is functionally auto-complete. It hallucinates because it has no concept of a fact. It has no "concept", period; it cannot reason. It is a statistical model. The "reasoning" you observe in models like o1 is a neat prompting trick that allows it to generate more context for itself.
I use LLMs on a daily basis. I use them at work and at home, and I feel that they have greatly enhanced my life. At the end of the day they are just another tool. The term "AI" is entirely marketing preying on those who can't be bothered to learn how the technology works.
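For what it's worth, here is a minimal sketch of the "predict the next token" loop being described. Everything is schematic: `logits_for` stands in for a real network's forward pass, and the toy model at the bottom is purely hypothetical.

```python
import math
import random

def sample_next(logits, temperature=1.0):
    # Softmax over raw scores, then sample one token id from the
    # resulting distribution. Lower temperature -> greedier choices.
    scaled = [x / temperature for x in logits]
    m = max(scaled)
    weights = [math.exp(x - m) for x in scaled]
    return random.choices(range(len(weights)), weights=weights)[0]

def generate(logits_for, prompt_ids, max_new_tokens=20):
    # Autoregressive decoding: the model only ever predicts the next
    # token; "generation" is appending that token and asking again.
    ids = list(prompt_ids)
    for _ in range(max_new_tokens):
        ids.append(sample_next(logits_for(ids)))
    return ids

# Toy stand-in "model" over an 8-token vocabulary: it strongly prefers
# the id that follows the last one, so the output cycles 0, 1, 2, ...
toy = lambda ids: [5.0 if t == (ids[-1] + 1) % 8 else 0.0 for t in range(8)]
print(generate(toy, [0], max_new_tokens=7))
```

Whether this loop deserves the word "reasoning" is exactly what the thread is arguing about; the loop itself is not in dispute.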
AI is (was?) a stochastic parrot. At some point AI will likely be more than that. The tipping point may not be obvious.
No, we have not; neurodiverse people like me need accommodations, not fixing.
Large language models excel at processing and generating text, but they fundamentally operate on existing knowledge. Their creativity appears limited to recombining known information in novel ways, rather than generating truly original insights.
True reasoning capability would involve the ability to analyze complex situations and generate entirely new solutions, independent of existing patterns or combinations. This kind of deep reasoning ability seems to be beyond the scope of current language models, as it would require a fundamentally different approach—what we might call a reasoning model. Currently, it's unclear to me whether such models exist or if they could be effectively integrated with large language models.
You mean like AlphaGo did with move 37?
There are some theories that this is true for humans also.
There are no human-created images that weren't first observed in nature in some way.
For example, devils/demons/angels were described in terms of human body parts, or as 'goats' with horns. Once we got microscopes and started drawing insects, art got a lot weirder, but not before the images were observed from reality. Then humans could recombine them.
I wonder how people write things like this and don't realize they sound as sanctimonious as whatever they are criticizing. Or, if I were to put it in your words: "how could someone intelligent post like this?"
The thing is, you can interact with this new kind of actor as much as you need to in order to judge this -- make up new problems, ask your own questions. "LLMs can't think" has required ever-escalating standards for "real" thinking over the last few years.
Gary Marcus made a real-money bet about this.
Are they? Which animals? Some seem smart and maybe do it. Needs strong justification.
> probably through being trained on many examples of the laws of nature doing their thing
Is that how they can reason? Why do you think so? Sounds like something that needs strong justification.
> then why couldn't a statistical model be?
Maybe because that is not how anything in the world attained the ability to reason.
A lot of animals can see. They did not have to train for this. They are born with eyes and a brain.
Humans are born with the ability to recognize patterns in what we see. We can tell objects apart without training.
I do think that at some point everyone is just arguing semantics. Chiang is arguing that "actual reasoning" is, by definition, not something that an LLM can do. And I do think he's right. But the real story is not "LLMs can't do X special thing that only biological life can do"; the real story is "X special thing that only biological life can do isn't necessary to build incredible AI that in many ways surpasses biological life".
Read up on the ELIZA effect
Of course we don't know whether an LLM is doing something like this or actually reasoning. But that is also the point: we don't know.
If you ask a question of a person, you can be confident to some degree that they didn't memorize the answer beforehand, so you can evaluate their ability to "reason" and come up with an answer. With an LLM, however, this is incredibly hard to do, because it could have memorized it.
An interesting hypothesis! I'm neither a mathematical logician, nor decently up to date in that field - is the possibility of this, at least in the abstract, currently accepted as fact?
(Yes, there's the perhaps-separate issue of only enumerating correct proofs.)
But I don't believe that. That a machine can produce convincing human-language chains of thought says nothing about its "intelligence". Back when basic RNNs/LSTMs were at the forefront of ML research, no one had any delusions about this fact. And just because you can train a token-prediction model on all of human knowledge (which the internet is not) doesn't mean the model understands anything.
It's surprising to me that the people most knowledgeable about the models often appear to be the biggest believers - perhaps they're self-interestedly pumping a valuation or are simply obsessed with the idea of building something straight from the science fiction stories they grew up with.
In the end though, the burden of proof is on the believers, not the deniers.
"Believer" really is the most appropriate label here. Altman or Musk lying and pretending they "AGI" right around the corner to pump their stocks is to be expected. The actual knowledgeable making completely irrational claims is simply incomprehensible beyond narcissism and obscurantism.
Interestingly, those who argue against the fiction that current models are reasoning are using reason to make their points. A non-reasoning system generating plausible text is not at all a mystery and can be explained; therefore, it's not sufficient for a system to generate plausible text to qualify as reasoning.
Those who are hyping the emergence of intelligence out of statistical models of written language, on the other hand, rely strictly on the basest empiricism, e.g. "I have an interaction with ChatGPT that proves it's intelligent" or "I put your argument into ChatGPT and here's what it said, isn't that interestingly insightful". But I don't see anyone coming out with any reasoning on how the ability to reason could emerge out of a system predicting text.
There's also a tacit connection made between those language models being large and complex and their supposed intelligence. The human brain is large and complex, and it's the material basis of human intelligence, "therefore expensive large language models with internal behavior completely unexplainable to us, must be intelligent".
I don't think it will, but if the release of the DeepSeek models effectively shifts the main focus towards efficiency as opposed to "throwing more GPUs at it", that will also force the field to produce models with the current behavior using only the bare minimum, both in terms of architecture and resources. That would help against some aspects of the mysticism.
The biggest believers are not the best placed to drive the research forward. They are not looking at it critically and trying to understand it. They are using every generated sentence as a confirmation of their preconceptions. If the most knowledgeable are indeed the biggest believers, we are in for a long dark (mystic) AI winter.
Cognitive neuroscience
“qualia”
Ray Kurzweil
I’ll take “things OP doesn’t know about that an intelligent person does” for $800, Alex.
If you’re enamored with LLMs and can’t see the inherent problems, you don’t actually know about AI and machine learning.
We grant him personhood, but personhood, like the LA Review of Books, is just a social construct.
If you prick an LLM does it not bleed? If you tickle it does it not laugh? If you poison one does it not die? If you wrong an LLM shall it not revenge?
Rather 1984 to look at the contributions of an academic and an iron welder and see authority in the one who memorized the book but doesn't know how to keep himself alive. Chiang and the like are nihilists, indifferent if they die because it all just goes dark to them. Indifferent to the toll they extract from labor to fly them around speaking about glyphs in a textbook. Academics detached from the real work people need are just as draining on society, and as infuriating, as a billionaire CEO or a tribal shaman. Especially these days, when they derive some small normalization from hundreds of years of cataloged work and proclaim their bit of syntactic art is all they should need to spend the rest of their life being celebrated like they're turning 8 all over again.
Grigori Perelman is the only intelligent person out there I respect. Copy-paste college grads all over the US recite the textbook and act like it’s a magical incantation that bends the will of others. Cult of social incompetence in the US.
He's overly sentimental, and so are his books. I wish there were other sci-fi authors the AI community wanted to contact, but after "Arrival" I get it, since "Arrival" is the literal wet dream of many NLP/AI researchers.
Love TC but I don't think this argument holds water. You need to really get into the weeds of what "actually feeling" means.
To use a TC-style example... suppose it's a major political issue in the future about AI-rights and whether AIs "really" think and "really" feel the things they claim. Eventually we invent an fMRI machine and model of the brain that can conclusively explain the difference between what "really" feeling is, and only pretending. We actually know exactly which gene sequence is responsible for real intelligence. Here's the twist... it turns out 20% of humans don't have it. The fake intelligences have lived among us for millennia...!
My point is that "appears conscious" is really the only test there is. In what way is a human that says "that hurts" really feeling pain? What about Stephen Hawking "saying" it? What if he could only communicate through printed paper, and so on? You can always play this dial-down-the-consciousness game.
People used to say fish don't feel pain, they are "merely responding to stimulus".
He is also not wrong about whether current AIs experience feelings. I suggest you learn more about the neuroscience of feelings.
To be clear I'm not for a moment suggesting current AIs are remotely comparable to animals.
We don’t even know what this means when it’s applied to humans. We could explain what it looks like in the brain, but we don’t know what causes the perception itself. Unless you think a perfect digital replica of a brain could have an inner sense of existence?
Since we don’t know what “feeling” actually is there’s no evidence either way that a computer can do it. I will never believe it’s possible for an LLM to feel.
Why is that, given that, as you state, we don’t know what “feeling” actually is?
If scientists invent a way to measure “feeling” that states 20% of people don’t feel, including those otherwise indistinguishable from feeling ones, most people would disagree with the measurement. Similarly, most people would disagree that a printer that prints “baby don’t hurt me” is truly in pain.
What is ChatGPT? Ollama? DeepSeek-R1? They're software. Software is a file. It's a sequence of bytes that can be loaded into memory, with the code portion pulled into a processor to tell it what to do. Between instructions, the operating system it runs on context switches it out back to memory, possibly to disk. Possibly it may crash in the middle of an instruction, but if the prior state was stored off somewhere, it can be recovered.
When you interact through a web API, what are you actually interacting with? There may be thousands of servers striped across the planet constantly being brought offline and online for maintenance, upgrades, A/B tests, hardware decommissioning. The fact that the context window and chat history is stored out of band from the software itself provides an illusion that you're talking to some continually existing individual thing, but you're not. Every individual request may be served by a separate ephemeral process that exists long enough to serve that request and then never exists again.
What is doing the "feeling" here? The processor? The whole server? The collection? The entire Internet? When is it feeling? In the 3 out of 30,000 time slices per microsecond in which the executing instruction is one pulled from ChatGPT, and not one of the 190 other processes running at the same time that weren't created by machine learning and don't produce output a human might mistake for human-made?
I'll admit that humans are also pretty mysterious if you reduce us to the unit of computation and most of what goes on in the body and brain has nothing to do with either feeling or cognition, but we know at least there is some qualitative, categorical difference at the structural level between us and sponges. We didn't just get a software upgrade. A GPU running ChatGPT, on the other hand, is exactly the same as a GPU running Minecraft. Why would a fMRI looking at one versus the other see a difference? It's executing the same instructions, possibly even acting on virtually if not totally identical byte streams, and it's only at a higher-level step of encoding that an output device interprets one as rasters and one as characters. You could obfuscate the code the way malware does to hide itself, totally changing the magnetic signature, but produce exactly the same output.
Consider where that leads as a thought experiment. Remove the text encodings from all of the computers involved, or just remove all input validation and feed ChatGPT a stream of random bytes. It'll still do the same thing, but it will produce garbage that means nothing. Would you still recognize it as an intelligent, thinking, feeling thing? If a human suffers some injury to eyes and ears, or is sent to a sensory deprivation chamber, we would say yes, they are still a thinking, feeling, intelligent creature. Our ability to produce sound waves that encode information intelligible to others is an important characteristic, but it's not a necessary characteristic. It doesn't define us. In a vacuum as the last person alive with no way to speak and no one to speak to, we'd still be human. In a vacuum as the last server alive with no humans left, ChatGPT would be dirty memory pages never getting used and eventually being written out to disk by its operating system as the server it had been running on performs automated maintenance functions until it hits a scheduled shutdown, runs out of power, or gets thermally throttled by its BIOS because the data center is no longer being actively cooled.
I think Ted Chiang is doing us a service here. Behavioral equivalence with respect to the production of digitally-encoded information is not equivalence. These things are not like us.
It seems for a lot of people that's all that matters: "if it quacks like a duck it must be a duck!". I find that short-sighted at best, but it's always difficult to present arguments that would "resonate" with the other side...
We don't at all know this.
IMHO old-school Google remains the high water mark of generalized information retrieval, with advantages ranging from speed to semi-durable citation.
I strongly suspect there is a cohort thing going on here, many HN users today weren’t involved in technology yet back when Google worked well.
Much like beer for Homer Simpson, AI is the cause of and solution to all of the Internet’s problems.
In any case, I do not believe there was ever a time it could answer all of the questions that LLMs can today. If the question had been asked and answered on the web, Google could (and can) find it, but many questions haven’t been asked!
That's exactly where LLMs come in: the model inside the weights has more than answers; it can find sense in data.
Searching for something, and finding it, is different from what ChatGPT / Claude does.
Google (in the good old days) is like the library. You want to search 'how to plant potatoes on Mars'. No results. Well, you split it up, maybe a book on planting potatoes, and a book about missions to Mars that describes soil composition.
Then, when you have those books you start reading, parsing, understanding, making connections, identifying what needs to be done etc.
Maybe, if you're lucky, you find a book or a web page where somebody went through the thought exercise of finding out what would be needed to make it work.
ChatGPT / Claude / ... are different in that they have the information in their corpus, and that the information they present you could actually be something that has never been written down in a book, or published on the web. That's why Google can't find it, but ChatGPT is able to present you with a satisfying answer.
Now whether the answer is correct is a different issue. Do you have the knowledge to verify this?
=================================================
Planting potatoes on Mars would be a pretty fascinating (and challenging) task! While Mars has conditions that make growing traditional crops difficult—like low temperatures, low atmospheric pressure, and a lack of oxygen—scientists have been experimenting with ways to grow plants in Martian-like conditions. Here’s an overview of the process:
1. Create a Controlled Environment:
Temperature: Mars’ average temperature is about -80°F (-60°C), far too cold for potatoes to grow. You’d need to create a greenhouse-like environment, potentially using materials like glass or transparent plastics, to keep the temperature warmer.
Atmosphere: Mars has only about 1% of Earth's atmospheric pressure, which is too low to support plant life. A sealed greenhouse would be required to provide a breathable atmosphere with adequate pressure and oxygen levels.
Light: Mars has less sunlight due to its distance from the Sun. You would need supplemental artificial light (perhaps LEDs) to mimic Earth’s daylight cycle for the plants.
2. Soil: Mars has soil, but it’s not exactly like Earth’s. It lacks organic material and has toxic elements like perchlorates that would harm plant growth. Scientists would need to either:
Modify Martian soil by adding organic material (like compost) and removing or neutralizing toxic chemicals.
Hydroponics: Grow potatoes without soil, using a nutrient-rich water solution instead.
While I certainly also have found things via LLMs that I couldn't easily with a search engine, the number of false positives is huge. My heuristic is:
If I ask an LLM something and it's easy to verify via Google because its answer narrows the search space - then I'll use it. Otherwise, Google is still king.
Example: Asking an LLM the health benefits of supplement X is a waste of time. Verifying everything it tells me would be the same amount of work as asking a search engine.
Example: Asking how to solve a given coding problem is great, because it drastically reduces the search space. I only have to look up the particular function/API calls it uses.
Ditto for asking how to achieve a task in the command line - I can quickly verify the arguments are accurate via the man page.
Most of the things I search for do not fall into this category, but in the category of "still need to do the same amount of work as just searching via Google."
I've had several LLM search result summaries contain flat out mistakes and incorrect statements.
It seriously looks like Google is deranking actually useful and informative sites, then passing their content through an "LLM" to slightly reorganize it and pass it off as its own.
It's a copyright laundering machine put together by advertising companies so you never leave their properties. I genuinely think it's a criminal conspiracy at this point.
I describe Ted Chiang as a very human sci-fi author, where humanity comes before technology in his stories. His work is incredibly versatile, and while I expected sci-fi, I'd actually place him closer to fantasy. Perfect for anyone who enjoys short stories with a scientific, social, or philosophical twist.
Another anthology I'd recommend with fresh ideas is Axiomatic by Greg Egan.
Here's one of his stories: https://www.youtube.com/watch?v=sKouPOhh_9I
I recommend his short stories first - Galactic North is a good start. Or Beyond the Aquila Rift.
House of Suns is a good first novel.
There are some great short stories in both collections.
[0] https://static1.squarespace.com/static/50e08e65e4b0c2f497697...
Other authors I’d put in this category are Gene Roddenberry (TOS and TNG, particularly), Asimov, PKD, Vonnegut and Theodore Sturgeon.
Personally - fantasy stories are “and-then” stories, SF are “what-if”. Humanist sci-fi is then asking “what-if” about very human things, as opposed to technological things, although the two are always related.
However, practically speaking, literature vs sci-fi vs fantasy (vs young adult!) are more marketing cohorts than anything else; what kind of people buy what kind of books?
Almost all of his stories are gems, carefully crafted and thoughtful. I just can't recommend him enough.
I have never heard anyone involved with the show suggest this, but I feel pretty strongly that it's based on, or at least inspired by, Greg Egan's "Learning To Be Me".
His collection Tenth of December is probably my favorite.
Problem: the human brain has no pain receptors, no nociceptors. It just takes in messages from remote nerves and 'prints a bumper sticker' that tells higher cognitive centers 'you're feeling pain!'. What's the difference?
> "LLMs are like a search engine that rephrases information instead of giving it verbatim or pointing you to the original source."
Problem: How does this differ from human learning? If a human reads a book and tells someone else about it, constructs a summary of the important points and memorable passages, how is that fundamentally different from what LLMs are doing?
The second one really impacts the intellectual property arguments: if training a model on data is fundamentally similar to training a human on data, does 'derivative work' really apply to the creations of the human or of the model?
The pain receptors. The human brain doesn't just "have" pain receptors. Your entire body, including your brain, is one system. Your brain isn't piloting your body like a mech. This brain body dualism is a misconception of how biological organisms work. You are your pain receptors just like you are your brain, and removing any part would alter your perception of the world.
>How does this differ from human learning?
It differs from human learning in every respect. Humans don't do linear algebra in their heads; biochemical systems are much too slow for that. Humans don't inhabit some static model of the world learned at some fixed point t; you're a living being. Your brain wasn't trained four months ago and done at that point. Humans learn with a fraction of the information and through self-play, they don't decohere, and so on.
As far as learning goes, human learning is certainly much slower than machine learning, but at a biochemical-molecular level it's not really clear that they're entirely different, e.g. in the formation of memories, or in considering a wide range of alternate hypotheses before selecting one.
“LLMs are a blurry JPEG of the web” has stuck with me since the piece was published in the early days of ChatGPT. Another good one is his piece on why AI can’t make art.
While I heavily use AI both for work and in my day-to-day life, I still see it as a tool for massive wealth accumulation for a certain group, and it seems like Ted Chiang thinks along the same lines:
> But why, for example, do large corporations behave so much worse than most of the people who work for them? I think most of the people who work for large corporations are, to varying degrees, unhappy with the effect those corporations have on the world. Why is that? And could that be fixed by solving a math problem? I don’t think so.
> But any attempt to encourage people to treat AI systems with respect should be understood as an attempt to make people defer to corporate interests. It might have value to corporations, but there is no value for you.
> My stance on this has probably shifted in a negative direction over time, primarily because of my growing awareness of how often technology is used for wealth accumulation. I don’t think capitalism will solve the problems that capitalism creates, so I’d be much more optimistic about technological development if we could prevent it from making a few people extremely rich.
This is vastly preferable to our current approach of raising children as robots.
Quotes by Jacques Ellul:
----
> Technique has taken over the whole of civilization. Death, procreation, birth all submit to technical efficiency and systemization.
----
> Technique has penetrated the deepest recesses of the human being. The machine tends not only to create a new human environment, but also to modify man's very essence. The milieu in which he lives is no longer his. He must adapt himself, as though the world were new, to a universe for which he was not created. He was made to go six kilometers an hour, and he goes a thousand. He was made to eat when he was hungry and to sleep when he was sleepy; instead, he obeys a clock. He was made to have contact with living things, and he lives in a world of stone. He was created with a certain essential unity, and he is fragmented by all the forces of the modern world.
> Ted Chiang is an American science fiction writer. His work has won four Nebula awards, four Hugo awards, the John W. Campbell Award for Best New Writer, and six Locus awards. Chiang is also a frequent nonfiction contributor to the New Yorker, most recently on topics related to computer technology, such as artificial intelligence.

I use LLMs to explore and contrast results that I can then test; the results exist as hypotheticals, not as authority about the state of anything. It's conceptually more of a lens than a lever. Not to trap him in that contrast, but maybe these ideas are a forcing function that causes us to see how separate our worldviews can be, instead of struggling to make one prevail.
It's as though the usefulness of an engine is measured by how much we can yield our agency to it. With a search engine you can say "Google or wiki told me," but an LLM does not provide that authority. These systems don't have agency themselves, yet we can yield ours to them, the way we might to an institution. I don't have this urge, so it's peculiar to see it described.
Do we want our tech to become objects of deference, literally idols?
I love Chiang's work, and we need minds like his, and maybe Ian McEwan and other literary thinkers with insight into human character (vs. plot- and object-driven sci-fi thinkers), to really apprehend the meaning and consequences of AI tech.
>My stance on this has probably shifted in a negative direction over time, primarily because of my growing awareness of how often technology is used for wealth accumulation. I don’t think capitalism will solve the problems that capitalism creates, so I’d be much more optimistic about technological development if we could prevent it from making a few people extremely rich.
What's wrong with people getting rich by producing goods and services, and selling these to willing buyers? People laundering wealth into undue political power, regulatory capture, erecting barriers to market entry ("pulling up the ladder behind them") are different problems than people creating wealth. Efforts on creating a just society should focus on the former - preventing wealth creation is not the solution to injustice. In fact, since people have vastly different abilities and inclinations for creating wealth, a just society is also one with vast wealth disparities.
Relevant PG essay: https://paulgraham.com/ineq.html
This obsession with anti-government sentiment that Americans have gives them a blind spot for the fact that power accumulation in the hands of the few is the problem. Not government.
How exactly could you stop so-called 'wealth laundering'?
For example here in France the amount of money politicians can spend on campaigning is strictly limited (and reimbursed by the state for those that pass a certain threshold of the vote). I'm not saying that it's perfect or that abuse doesn't sometimes still occur (as the current court case involving ex president Sarkozy shows) but I think it does improve things a lot.
Contrast that with the American system where to have any chance of becoming president (no matter which party you support) you basically have to be rich. And where multiple lobbyists and special interest groups basically buy the policies they want...
If you take this sentence and change "people getting rich" to something else (like "fomenting drug addiction" or "polluting the environment"), does anything change? Whether the inequality is a result of "selling goods to willing buyers" is a complete red herring. If that consequence is bad, it doesn't really matter whether it's a result of supposedly "fair" market exchanges.
Others have already pointed out that it's not really plausible to avoid the "different" problems you mention while still allowing unlimited wealth inequality. But aside from that, how do you know that the buyers are willing? What is the set of alternatives being considered to decide if a person is "willingly" choosing a certain product? It's difficult to even maintain the pretense of "willing buyers" in a "free market" when some individuals control a large market share. Miners living in a company town were "willing" to buy groceries from the company store in the sense that they needed to buy groceries, but they didn't really have any other options for how to express their "market preference".
Even if markets were free, there's nothing inherently good about a free market. What's good is a free society, where people in aggregate have substantive freedom to do what makes them happy. That goal isn't furthered by allowing a small number of wealthy people to pursue their goals while a large number of less wealthy people are unable to do so.
False equivalence. It is possible to gain wealth without performing any of the listed possible negative global effects. Furthermore, it is a backdoor to injecting the idea that poverty is a morally positive position.
> Even if markets were free, there's nothing inherently good about a free market. What's good is a free society, where people in aggregate have substantive freedom to do what makes them happy.
Having a free society implies the freedom to exchange with each other with minimal restrictions. Not allowing people to do so runs opposite to the ideals of the stated intention.
--------------
All that being said, that *doesn't* mean that the current market's working as intended. What has been inherited is a complex tangled ball of national ideals, personal & corporate persuasions to governments for their own reasons/goals, & consistent global coordination failures when circumstances change.
But the outright banning of markets is equivalent to the banning of hammers, just because hammers are sometimes used to bludgeon people to death. It is ultimately a tool, and a very useful one in terms of signaling demand & supply.
They are not producing goods and services by themselves, but by having a usually massive workforce. We as a society are saying “ok, it is fine to keep the money if you work like that”.
On the other hand, we are seeing in real time what super rich people want in the end: power over the rest, not just money.
So if you no longer create wealth but your ownership of capital is growing at compounding rates, then what exactly is happening? What's happening is that you are siphoning and extracting wealth from people who create wealth. You own human capital, so you take a cut off the top and use that cut to buy even more human capital, which compounds your wealth ownership to even higher levels. This is how billionaires like Warren Buffett and other investors grow their wealth: by simply investing rather than creating wealth.
Thus wealth inequality is not a result of wealth creation; it is an artifact of capitalism. In capitalism wealth is variable among individuals and it fluctuates. However, once wealth accumulates in higher-than-normal concentration in one individual or several, it hits that compounding growth factor: wealth starts going up at astronomical rates, and these wealth owners start buying up more and more human capital until they own all of it and benefit from all of it without actually contributing work.
You can see this effect in Y Combinator. The owners of Y Combinator don't actually do much work. They have so much capital that they can simply take a bunch of no-risk, couple-hundred-k bets until one startup becomes a unicorn, at which point they rake in a ton of capital from the growth.
Think of this like property ownership. A rich person can invest his wealth in property and contribute zero work to society and simply rent his property out. The rent from the tenant is from wealth creation aka labor and the rich person simply siphons it from the top without contributing additional work. The property owner uses that income to buy more property and the cycle continues until you have an affordability crisis of housing across the US and the world.
This growth continues unimpeded and uncontrolled until the wealth inequality is so extreme it doesn't logistically work. This is the direction the world is heading in today.
This isn't the full story, though. When you take away capitalism to solve this problem, you invent communism. Communism was the result of Karl Marx noticing this problem that is fundamental to capitalism. That's why he got popular: he illustrated the problem of wealth inequality and how wealth naturally becomes more and more concentrated among a few individuals without those individuals creating wealth.
Hence communism spread all over Europe but was ultimately a failure. The reason it failed is that communism lacks incentive. It turns out that wealth inequality is part of what drives wealth creation. Without the ability to be unfairly rich, you don't get the economic drivers that promote wealth creation, and thus communism keeps things fair but you don't create wealth.
So no system is perfect. Everything has problems. Actually, I take it back; there is a perfect system. See Scandinavia: basically, create a more socialist and egalitarian society while benefiting from and extracting technological wealth from adjacent capitalist societies. Have the government own human capital in countries that are very capitalist, then redistribute that wealth to its own citizens, so those people can live in a more just society while the rest of the world can burn.
The YC partners who spend so much of their time helping startups are definitely doing work.
Here's how Claude rewrote that, you can argue whether in this particular instance it did better than I did :-)
While I respect Chiang's perspective on AI and art, my experience as a product manager has shown me otherwise - Claude routinely writes better than I do, despite writing being central to my role.
AI can for sure place brush strokes more precisely (“correctly”?) but the argument is over the necessity of process/intent in the art
Claude's version starts out with this circuitous "While I respect…", failing to drive the point home in the interest of being… polite, I guess? But not actually polite, because everybody who's read anything knows that "While I respect" is almost always a preface to calling somebody wrong.
It also makes the argument worse. Yours is unambiguous, and does a better job of describing where your evidence comes from. You clearly describe yourself as doing something that is not exactly art, but is like art, and so you bring it up as a good example of Claude doing an art-like thing better than a professional.
In the Claude sample, it isn't clear until the dash what's going on, and that's more than halfway through the comment. What's your experience? Up until that dash, you could be talking about your experience as a product manager managing artists (at a game studio, for example). It's like: "oh, OK, he was just talking about a sort of… not-exactly analogy, a less than completely exact experience of working with artists."
> Arguably the most important parts of our lives should not be approached with this attitude. Some of this attitude comes from the fact that the people making AI tools are engineers viewing everything from an engineering perspective, but it’s also that, as a culture, we have adopted this way of thinking as the default.
I tend to agree with Chiang, but he is preaching to the anti-choir here. Even though many HN-ers seem to like his fiction (and why wouldn't they, Chiang is the cream of the crop!), they will probably chafe at the idea that some problems cannot and shouldn't be approached from a pure engineering side.
I remember the -- now rightfully ridiculed -- phase of startup entrepreneurship which became a meme: "I found a problem, so I created an app to solve it" (where the "problem" was something like "world hunger", "homelessness", "poverty", "ending war", etc).
That Chiang is also criticizing capitalism and wealth accumulation as a primary driver will probably win him no friends here.
For example, take the Tesla vs. Edison dichotomy that became popular on the internet in the mid 2000s. The narrative was that Edison was an idea stealing hack, while Tesla was a brilliant visionary scientist and engineer.
The reality was that Edison was a brilliant inventor and a shrewd businessman, while Tesla was only the former and he made brilliant inventions that made other men like George Westinghouse very rich. But a scientist? Tesla was a crank who doubted the existence of electrons and distrusted Albert Einstein.
Tesla makes a good patron saint for modern engineers, but not for engineering's self-image. He had a victim complex, he constantly commented on topics outside his area of expertise, and his lack of business acumen was seen as proof that he was a true outsider genius whose ideas were actually too good.
It has already existed for a very long time, and it's called the Arabic language. It's the extreme opposite of English, which is a hodgepodge of languages: roughly one third French, one third Old English, and one third drawn from the rest of the world's languages, including Arabic.
Compare the best of English literature, for example Shakespeare, with the best of Arabic literature, for example the Quran: there's no contest. That's why translating the Quran into English does not do it justice and only scratches the surface of its intended meaning. You can find exactly this disclaimer in most Quran translations, but not in translations of Shakespeare.
But regardless, if you don't believe in prophethood, you should judge its supremacy on the Quran's literary value alone (forget about the laws, the wisdom, and the guidance). In fact, the Quran itself contains several open challenges for those who doubt it to create something of similar quality or value, even one short chapter or a few sentences. If you cannot, and most probably never will even with the help of AI/LLMs/etc., you have to accept that Arabic is the perfect language for creating the original masterpiece.
[1] Magnificence of the Qur'an - by Mahmood bin Ahmad (2006):
https://www.amazon.com/Magnificence-of-the-Quran/dp/99609801...
And there are many different ways of, and purposes for, being best. For example, modern English is a compact language with simple grammar, but at the same time it's rather ambiguous compared to more verbose languages.
How do you say the plural you when referring to a group of males?
How do you say the plural you when referring to a group of females?
How do you say the plural you when referring to a group of both males and females?
If you don't have a different phrasing for each, then it is imperfect.