It's not about AI development; it's about something mentioned earlier in the article: "make as much money as I can". The problems that we see with AI have little to do with AI "development"; they have to do with AI marketing and promulgation. If the author had gone ahead and dammed the creek with a shovel, or blown off his hand, that would have been bad, but not that bad. Those kinds of mistakes are self-limiting, because if you're doing something for the enjoyment or challenge of it, you won't do it at a scale beyond what you personally can experience. In the parable of the CEO and the fisherman, the fisherman stops at what he can tangibly appreciate.
If everyone working on and using AI were approaching it like damming a creek for fun, we would have no problems. The AI models we had might be powerful, but they would be funky and disjointed because people would be more interested in tinkering with them than making money from them. We see tons of posts on HN every day about remarkable things people do for the gusto. We'd see a bunch of posts about new AI models and people would talk about how cool they are and go on not using them in any load-bearing way.
As soon as people start trying to use anything, AI or not, to make as much money as possible, we have a problem.
The second missed takeaway is at the end. He says Anthropic is noticing the coquinas as if that means they're going to somehow self-regulate. But in most of the examples he gives, he wasn't stopped by his own realization, but by an external authority (like parents) telling him to stop. Most people are not as self-reflective as this author and won't care about "winning zero sum games against people who don't necessarily deserve to lose", let alone about coquinas. They need a parent to step in and take the shovel away.
As long as we keep treating "making as much money as you can" as some kind of exception to the principle of "you can't keep doing stuff until you break something", we'll have these problems, AI or not.
I noticed that, around the turn of the century, when "The Web" was suddenly all about the Benjamins.
It's sort of gone downhill since.
For myself, I've retired, and putter around in my "software garden." I do make use of AI to help me solve problems and generate code starts, but I'm in it for personal satisfaction.
I think it's a little deeper than that. It's the democratization of capability.
If few people have the tools, the craftsman is extremely valuable. He can make a lot of money without much knowledge or real skill. In general, people don't have the tools and skills to catch up to where he is. He is wealthy on front-loaded effort alone.
If everyone has the same tools, the craftsman still has value, because of the knowledge and skillset developed over time. He makes more money because his skills are valuable and remain scarce; he's incentivized to further this skillset to stay above the pack, continue to be in demand, and make more money.
If the tools do the job for you, the craftsman has limited value. He's an artifact. No matter how much he furthers his expertise, most people will just turn the tool on and get a good-enough product.
We're between phases 2 and 3 at the moment. We still test for things like algorithm design and ask questions in interviews about the complexity of approaches. A lot of us still haven't moved on to the "ok but now what?" part of the transition.
The value now lies less in knowing how the automation works and deepening our knowledge of the underlying design, and more in using the tools in ways that produce more value than the average Joe can. It's a hard transition for people who grew up thinking that was all you needed to get a comfortable or even lucrative life.
I'm past my SDE interview phase of life now, and in seeking engineers I'm looking less for people who know how to build a version of the tool and more for people who operate in the present, have accepted the change, and want to use what they have access to and add human utility, making the whole greater than the sum of its parts.
To me the best part of building software was the creativity. That part hasn't changed. If anything it's more important than ever.
Ultimately we're building things to be consumed by consumers. That hasn't changed. The creek started flowing in a different direction, and your job in this space is not to keep putting rocks where the water used to go, but to accept that things are different and adapt.
Don't get me wrong, I'm not immune to these feelings either. I want to do good work and I want people to love what I do. But there's something so... so fucking nakedly exhibitionist and narcissistic about these kinds of posts. Like, so, GO FUCKING LIE WITH CLAMS, write a novel, the world is waiting for it if you're really a genius. Have the courage to say you have a conscience if you actually do. Leave the rest of us alone and stop polluting a world you don't understand with your childish greed and self-obsession.
Complicated!
You've precisely defined why nobody takes LessWrong seriously.
However, if we are counting on AI researchers to take the advice and slow down, I wouldn't hold my breath. The author indicated they stepped away from a high-paying finance job for moral reasons, which is admirable. But Wall Street continues on and does not lack for people willing to play the "make as much money as you can" game.
One recent HN comment [0] comparing corporations and institutions to AI really stuck with me - those are already superhuman intelligences.
... only because "unsafe" and "leaky" are a Ponzi's best-and-loves-to-be-roofied-and-abused friend ... you see, intelligence is only good when it doesn't irreversibly break everything to the point where most of the variety of the physical structure that evolved it and maintains it is lost.
you could argue, of course, and this is an abbreviated version, that a new physical structure then evolves a new intelligence that is adapted to (emerged from and adjusted to) the challenges of the new environment, but that's not the point of already capable self-healing systems;
except if the destructive part of the superhuman intelligence is more successful with its methods of sabotage and disruption of
(a) 'truthy' information flow and
(b) individual and collective super-rational agency -- for the good of as many systems-internal entities as possible, as a precaution due to always living in uncertainty and being surrounded by an endless amount of variables currently tagged "noise"
-- than its counterpart is in enabling and propagating (a) and (b) ...
in simpler words, if the red team FUBARs the blue team or vice versa, the superhuman intelligence can be assumed to have cancer, or at least some vital part of the force is otherwise corrupted.
I doubt OP is counting on it; it's more expressing what an optimal world would look like, so people can work towards it if they feel like it, or just to put the idea out there.
It's an iterative process of bending and shaping, bending again, and removing wood in stages.
People should spend more of their time doing things because they're fun, not because they want to get better at them.
Maybe the apocalypse will happen in our lifetime, maybe not. I intend to have as much fun as I can in my life either way.
https://richardswsmith.wordpress.com/2017/11/18/we-are-here-...
“(talking about when he tells his wife he’s going out to buy an envelope) Oh, she says well, you’re not a poor man. You know, why don’t you go online and buy a hundred envelopes and put them in the closet? And so I pretend not to hear her. And go out to get an envelope because I’m going to have a hell of a good time in the process of buying one envelope. I meet a lot of people. And, see some great looking babes. And a fire engine goes by. And I give them the thumbs up. And, and ask a woman what kind of dog that is. And, and I don’t know. The moral of the story is, is we’re here on Earth to fart around. And, of course, the computers will do us out of that. And, what the computer people don’t realize, or they don’t care, is we’re dancing animals.”
― Kurt Vonnegut
Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.
As Dwarkesh once asked:
> One question I had for you while we were talking about the intelligence stuff was, as a scientist yourself, what do you make of the fact that these things have basically the entire corpus of human knowledge memorized and they haven’t been able to make a single new connection that has led to a discovery? Whereas if even a moderately intelligent person had this much stuff memorized, they would notice — Oh, this thing causes this symptom. This other thing also causes this symptom. There’s a medical cure right here.
> Shouldn’t we be expecting that kind of stuff?
I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.
Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans. And this hasn't changed at all over the past five years. Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.
More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.
Dumb bombs kill people just as easily. One 80-year-old nuke is, at least potentially, more effective than the entirety of the world's drones.
They are, however, going to enable credulous idiots to drive humanity completely off a cliff. (And yes, we're seeing that in action right now). They don't need to be independent agents. They just need to seem smart.
A useless AI isn't a threat: nobody will use it.
LLMs, as they exist today, are between these two. They're competent enough to get used, but will still give incorrect (and sometimes dangerous) answers that the users are not equipped to notice.
Like designing US trade policy.
> Yet, as a regular user of SOTA AI models, it's far from clear to me that the risk exists on any foreseeable time horizon. Even today's best models are credulous and lack a certain insight and originality.
What does the latter have to do with the former?
> Point is, I don't think that LLMs are capable of coming up with terrifyingly novel ways to kill all humans.
Why would the destruction of humanity need to use a novel mechanism, rather than a well-known one?
> And this hasn't changed at all over the past five years.
They're definitely different now than 5 years ago. I played with the DaVinci models back in the day, nobody cared because that really was just very good autocomplete. Even if there's a way to get the early models to combine knowledge from different domains, it wasn't obvious how to actually make them do that, whereas today it's "just ask".
> Now they're able to trawl LinkedIn posts and browse the web for press releases, is all.
And write code. Not great code, but "it'll do" code. And use APIs.
> More than that, these models lack independent volition and they have no temporal/spatial sense. It's not clear, from first principles, that they can operate as truly independent agents.
While I'd agree they lack the competence to do so, I don't see how this matters. Humans are lazy and just tell the machine to do the work for them, give themselves a martini and a pay rise, then wonder why "The Machine Stops": https://en.wikipedia.org/wiki/The_Machine_Stops
The human half of this equation has been shown many times in the course of history. Our leaders treat other humans as machines or as animals, give themselves pay rises, then wonder why the strikes, uprisings, rebellions, and wars of independence happened.
Ironically, the lack of imagination of LLMs, the very fact that they're mimicking us, may well result in this kind of AI doing exactly that kind of thing even with the lowest interpretation of their nature and intelligence — the mimicry of human history is sufficient.
--
That said, I agree with you about the limitations of using them for research. Where you say this:
> I noticed this myself just the other day. I asked GPT-4.5 "Deep Research" what material would make the best [mechanical part]. The top response I got was directly copied from a laughably stupid LinkedIn essay. The second response was derived from some marketing-slop press release. There was no original insight at all. What I took away from my prompt was that I'd have to do the research and experimentation myself.
I had a similar experience with NotebookLM, where I put in one of my own blog posts and it missed half the content and re-interpreted half the rest in a way that had nothing much in common with my point. (Conversely, this makes me wonder: how many humans misunderstand my writing?)
Funny, that is what my father taught me when I was 12, because we had compassion. What is it with glorifying all these logic-loving, Spock-like people? Don't you know Captain Kirk was the real hero of Star Trek? Because he had compassion?
It is no wonder the Zizians were birthed from LW.
I swear that’s what lesswrong posters see every day in the mirror.
Anthropic apparently is starting to notice the possible danger of their work to others. I'm not sure what they are referring to.
Are they being vague about the danger? If possible, please link to a communique from them. I've missed it somehow. Thanks.
Your explanation makes more sense, however.
https://www.anthropic.com/news/anthropic-education-report-ho...
Guide to Bow Tillering:
https://straightgrainedboard.com/beginners-guide-on-bow-till...
The AI angle is not even hypothetical: there is no attempt to describe or reason about a concrete "x leading to y", just "see, the same principle probably extrapolates".
There is no argument there sounder than the one people made 200 years ago, that "the high velocities of steam locomotives might kill you".
This obviously seems silly in hindsight. Warnings about radium watches or asbestos sound less silly, or even wise. But neither had any solid scientific studies showing clear hazard and risk. Just people being good Bayesian agents, trying to ride the middle of the exploration vs. exploitation curve.
Maybe it makes sense to spend some percentage of AI development resources on trying to understand how they work, and how they can fail.
In the case of asbestos, this is incorrect. Many people knew it was deadly, but the corporations selling it hid it for decades, killing thousands of people. There are quite a few other examples besides asbestos, like leaded fuel or cigarettes.
To be fair, many people did die on level crossings and by wandering on to the tracks.
We learned over time to put in place safety fences and tunnels.
So it is with AI. Except corps are made of people, who work at people speeds, have vague morals, and are tied to society in ways AI might not be. AI might also be able to operate faster and with less error. So extra care is required.
To me it just seems like the same old knee-jerk luddite response people have had, since the dawn of time, to any powerful new technology that challenges the status quo. The calculator did not erase math wizards, the television did not replace books, and so on. It just made us better, faster, more productive.
Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.
Some people may go full-on Wall-E, but I for one will never stop tinkering, and many of my friends won't either.
The things I could have done if I had had an LLM as a kid... I think I've learned more in the past two years than ever before.
The major difference is that in order to use a calculator, you need to know and understand the math you're doing. It's a tool you can work with. I always had a calculator for my math exams and I always had bad grades :)
You don't have to know how to program to ask ChatGPT to build yet another app for you. It's a substitute for your brain. My university students have good grades on their do-at-home exams, but can't spot an off-by-one error in a three-line Golang for loop during an in-person exam.
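To be concrete, here's a made-up loop with the flavor of bug I mean (not an actual exam question):

    package main

    import "fmt"

    func main() {
        nums := []int{2, 4, 6}
        // The <= runs one index past the end, so nums[i] panics on the last pass.
        for i := 0; i <= len(nums); i++ {
            fmt.Println(nums[i])
        }
    }

The fix is a one-character change (< instead of <=), but you only see it if you actually read the loop.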
Like with the calculator, why would you need to be able to calculate things on paper if you can just have a machine do it for you? Same goes for more advanced AI: what's the point of being able to do things without them?
Not to offend, but in my opinion that's nothing more than a romantic view of what humans "should be capable of". 10 years from now we can all laugh at the idea of people defending doing stuff without AI assistance.
We have tools to help us with that, and maybe it isn't a big loss? And they also bring new arenas and abilities.
And maybe in the future we will be worse at critical thinking (https://news.ycombinator.com/item?id=43484224), and maybe it isn't a big loss? It is hard to imagine what new abilities and arenas will emerge. Though I think that critical thinking is a worse loss than memory and mental arithmetic. Though, also, we are probably a lot less good at it than we think we are, generally.
But it did. Quick, what's 67 * 49? A math wiz would furrow their brow for a second and be able to spit out an answer, while the rest of us have to pull out a calculator. When you're doing business in person and have to move numbers around, having to stop and use a calculator slows you down. If you don't have a role where that's useful then it's not a needed skill and you don't notice it's missing, like riding a horse, but that doesn't mean the skill itself wouldn't be useful to have.
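For what it's worth, the wiz's trick is usually just round-and-correct, nothing exotic:

    67 * 49 = 67 * 50 - 67 = 3350 - 67 = 3283

The skill is in doing that decomposition without consciously thinking about it.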
I don't think that's the argument the article was making. It was, to my understanding, a more nuanced question about whether we want to destroy or severely disturb systems at equilibrium by letting AI systems infiltrate our society.
> Sometimes there is an adjustment period (we still haven't figured out how to deal with short dopamine hits from certain types of entertainment and social media), but things will balance themselves out eventually.
One can zoom out a little bit. The issue didn't start with social media, nor with AI. "Star Wars: A New Hope" is, to my understanding, an incredibly good film. It came out in 1977 and it's a great story made to be appreciated by the masses. And in trying to achieve that goal, it really wasn't intellectually challenging. We have continued downhill from there for a while, and now we are at 16-second stingers on TikTok and YouTube. So, the way I see it, things are not balancing out. Worse, people in the USA elected D.J. Trump because somehow they couldn't understand how this real-world Emperor Palpatine was the bad guy.