Shame on all of the people involved in this: the people in these companies, the journalists who shovel shit (hope they get replaced real soon), researchers who should know better, and dementia-ridden legislators.
So utterly predictable and slimy. All of those who are so gravely concerned about "alignment" in this context, give yourselves a pat on the back for hyping up science fiction stories and enabling regulatory capture.
The fact that these systems can extrapolate well beyond their training data by learning algorithms is quite different from what has come before, and anyone stating that they "simply" predict the next token is severely shortsighted. Things don't have to be "brain-like" to be useful, or to have reasoning capabilities; we have evidence that these systems perform well on reasoning tasks, including causal reasoning, and we also have mathematical proofs that show how.
So I don't understand your sentiment.
As for the fact that it gets things wrong sometimes - sure, that doesn't mean it has actually learned every algorithm (in whichever model you may be thinking about). But the nice thing is that we now have this proof via category theory, and it allows us both to frame and understand what has occurred, and to consider how to align these systems to learn algorithms better.
1. ChatGPT knows the algorithm for adding two numbers of arbitrary magnitude.
2. It often fails to use the algorithm in point 1 and hallucinates the result.
Knowing something doesn't mean it will get it right all the time. Rather, an LLM is almost guaranteed to mess up some of the time due to the probabilistic nature of its sampling. But this alone doesn't prove that it only brute-forced task X.
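A minimal sketch of why sampling alone guarantees occasional misses (all numbers invented for illustration): even a heavily favoured correct token loses sometimes, and per-token errors compound over a long answer.

    import numpy as np

    # Made-up logits for two candidate next tokens: the correct
    # digit and a wrong one. Real models sample from such a
    # distribution at every step.
    logits = np.array([5.0, 1.0])

    def softmax(z):
        p = np.exp(z - z.max())
        return p / p.sum()

    p_correct = softmax(logits)[0]
    print(p_correct)          # ~0.982

    # A 20-token answer sampled at ~98.2% accuracy per token comes
    # out fully correct only ~0.982**20 ~= 70% of the time.
    print(p_correct ** 20)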
You're using it wrong. If you asked a human to do the same operation in under 2 seconds without paper, would the human be more accurate?
On the other hand, if you ask for a step-by-step execution, the LLM can solve it.
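For the record, the algorithm being elicited is just schoolbook digit-by-digit addition; a step-by-step prompt walks the model through exactly these steps. A minimal sketch:

    def add_digit_by_digit(a: str, b: str) -> str:
        # Schoolbook addition: rightmost digits first, carry
        # propagated -- the steps a "think step by step" prompt elicits.
        a, b = a.zfill(len(b)), b.zfill(len(a))   # pad to equal length
        carry, digits = 0, []
        for da, db in zip(reversed(a), reversed(b)):
            carry, d = divmod(int(da) + int(db) + carry, 10)
            digits.append(str(d))
        if carry:
            digits.append(str(carry))
        return "".join(reversed(digits))

    assert add_digit_by_digit("98765432123456789", "12345678987654321") \
        == str(98765432123456789 + 12345678987654321)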
A system that can will probably adopt a different acronym (and gosh that will be an exciting development... I look forward to the day when we can dispatch trivial proofs to be formalized by a machine learning algorithm so that we can focus on the interesting parts while still having the entire proof formalized).
There were two very noteworthy (perhaps Nobel-prize-level?) breakthroughs in two completely different fields of mathematics (knot theory and representation theory) made by using these systems.
I would certainly not call that "useless", even if they're not quite Nobel-prize-worthy.
Also, "No one uses GATs in systems people discuss right now" ... Transformerare GATs (with PE) ... So, you're incredibly wrong.
Do you have a reference?
Do you mind linking to one of those papers?
"So, you've thought about eternity for an afternoon, and think you've come to some interesting conclusions?"
I don't think the average HN commenter claims to be better at building these systems than an expert. But to criticize, especially on economic, social, and political levels, one doesn't need to be an expert on LLMs.
And finally, the motivation of people like Sam Altman and Elon Musk should be clear to everybody with half a brain by now.
For example, we don't understand fundamentals like these:
- "intelligence": how it relates to computing, what its connections/dependencies to interacting with the physical world are, its limits, etc.
- emergence, and in particular an understanding of how optimizing one task can lead to emergent ability on other tasks
- deep learning: what its limits and capabilities are. It's not at all clear that "general intelligence" even exists in the optimization space the parameters operate in.
It's pure speculation on the part of people like Hinton and Ilya. The only thing we really know is that LLMs have shown a surprising ability to perform on tasks they weren't explicitly trained for, and even this amount of "emergent ability" is under debate. Like much of deep learning, that's an empirical result, but we have no framework for really understanding it. Extrapolating to doom-and-gloom scenarios is outrageous.
Give them a semi-human-sounding puppet and they think Skynet is coming tomorrow.
If we've learned anything from the past few months, it's how gullible people are; wishful thinking is a hell of a drug.
What I feel has changed, and what drives a lot of the fear and anxiety you see, is a sudden perception of possibility, of accessibility.
A lot of us (read: people) are implicit dualists, even if we say otherwise. It seems to be a sticky bias in the human mind (see: the vanishing problem of AI). Indeed, you can see a whole lot of dualism in this thread!
And even if you don't believe that LLMs themselves are "intelligent" (by whatever metric you define that to be...), you can still experience an exposing and unseating of some of the foundations of that dualism.
LLMs may not be a destination, but their unprecedented capabilities open up the potential for a road to something much more humanlike in ways that perhaps did not feel possible before, or at least not possible any time soon.
They are powerful enough to change the priors of one's internal understanding of what can be done and how quickly. Which is an uncomfortable process for those of us experiencing it.
Absolutely spot on. I am not a dualist at all, and I've been surprised at how many people this has revealed to have deep-seated dualist intuitions, even if they publicly claim otherwise.
I view it as embarrassing? It's like believing in fairies or something.
I don't deny that trying to regulate every detail of every industry would be stifling and counter-productive. But the current scenario is closer to the opposite end of the spectrum, with our society acting as a greedy algorithm in pursuit of short-term profits. I'm perfectly in favor of taking a measure-twice-cut-once approach to something that has as much potential for overhauling society as we know it as AI does. And I absolutely do not trust the free market to be capable of moderating itself in regards to these risks.
No one yet knows how this is going to go; coping might turn into "See! I knew it all along!" if progress fizzles out. But right now the threat is very real, and we're seeing the full spectrum of "humans under threat" behavior. Very similar to the early pandemic, when you could find smart people with any take you wanted.
> Some of the dangers of AI chatbots were “quite scary”, he told the BBC, warning they could become more intelligent than humans and could be exploited by “bad actors”. “It’s able to produce lots of text automatically so you can get lots of very effective spambots. It will allow authoritarian leaders to manipulate their electorates, things like that.”
You can do bad things with it, but people who believe we're on the brink of the singularity, that we're all going to lose our jobs to ChatGPT, and that world destruction is coming are on hard drugs.
What to do? Why, obviously let's talk about the risks of AGI.
I mean, LLMs are an impressive piece of work, but the global reaction is basically more a reflection of an unmoored system that floats above and below reality but somehow can't re-establish contact.
He's a charlatan, which makes sense, since he gets most of his money from Thiel and Musk. Why do so many supposedly smart people worship psychotic idiots?
The way Peter, Musk, Sam and these guys talk, it has this aura of "hidden secrets". Things hidden since the foundation of the world.
Of course the reality is they make their money the old-fashioned way: connections. The same way your local builder makes their money.
But smart people want to believe there is something more. Surely AI and your local condo development cannot have the same underlying thread.
It is sad, and unfortunately the internet has made this easier than ever.
AI/ML licensing builds power and establishes a moat. This will not lead to better software.
Frankly, Google and Microsoft are acting in ways I don't recognize. My understanding of both companies has been shattered by recent changes.
I'm all for villainizing the figureheads of the current generation of this movement. The politics of this sea-change are fascinating and worthy of discussion.
But out-of-hand dismissal of what has been accomplished smacks more to me of lack of awareness of the history of the study of the brain, cognition, language, and computers, than it does of a sound debate position.
That fact does not entail what these models can or cannot do. For all we know, our brain could be a process that minimizes an unknown loss function.
But more importantly, what SOTA is now does not predict what it will be in the future. What we know is that there is rapid progress in that domain. Intelligence explosion could be real or not, but it's foolish to ignore its consequences because current AI models are not that clever yet.
Because people have different definitions of what intelligence is. Recreating the human brain in a computer would definitely be neat and interesting, but you don't need that, or even AGI, to be revolutionary.
LLMs, as perfect Chinese Rooms, lack a mind or human intelligence but demonstrate increasingly sophisticated behavior. If they can perform tasks better than humans, does their lack of "understanding" and "thinking" matter?
The goal is to create a different form of intelligence, superior in ways that benefit us. Planes (or rockets!) don't "fly" like birds do, but for our human needs they are effectively much better at flying than birds ever could be.
We've been doing this forever with everything. Building tools is what makes us unique. Why is building what amounts to a calculator/spreadsheet/CAD program for language somehow a Rubicon that cannot be crossed? Did people freak out this much about computers replacing humans when they were shown to be good at math?
The argument for regulation in that case would be because of the socio-economic risk of taking people's jobs, essentially.
So, again: pure regulatory capture.
Those unfazed by this wave of AI are just yawning.
Because it's wrong and smart people know that.
The whole saga makes Altman look really, really terrible.
If AI really is this dangerous then we definitely don't need people like this in control of it.
Incredibly scummy behaviour that will not land well with a lot of people in the AI community. I wonder if this is what prompted a lot of people to leave for Anthropic.
At this point, with this part about OpenAI and Worldcoin… if it walks like a duck and talks like a duck…
You're leaving out the essentials. These models do more than fit the data they're given. They can output it in a variety of ways and, through their approximation, synthesize new data as well. They can output things that weren't in the original data, tailored to a specific request, in the tiniest fraction of the time it would take a normal person to look up and understand that information.
Your argument is almost like saying "give me your RSA keys, because it's just two prime numbers, and I know how to list them."
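To make the analogy concrete, a toy sketch (tiny primes for the demo; real RSA moduli are ~2048 bits, where the brute-force loop below is hopeless):

    p, q = 1000003, 1000033   # "just two prime numbers"
    n = p * q                 # the easy direction: one multiplication

    def factor(n):
        # Trial division: fine here, infeasible at real key sizes.
        d = 3
        while d * d <= n:
            if n % d == 0:
                return d, n // d
            d += 2
        return None

    print(factor(n))          # recovers (p, q) only because n is tiny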
Do we want to go down the road of making white-collar jobs the legislatively required equivalent of elevator attendants, instead of just banning AI in general via an executive agency?
That sounds like a better solution to me, actually. OpenAI's lobbyists would never go for that though. Can't have a moat that way.
I wrote a comment recently trying to explain why, even if you believe that all LLMs can (and will ever) do is regurgitate their training data, you should still be concerned.
For example, imagine in 5 years we have GPT-7, and you ask GPT-7 to solve humanity's great problems.
From its training data GPT-7 might notice that humans believe overpopulation is a serious issue facing humanity.
But its "aligned" so might understand from its training data that killing people is wrong so instead it uses its training data to seek other ways to reduce human populations without extermination.
Its training data included information about how gene drives were used by humans to reduce mosquito populations by causing infertility. Many humans have also suggested (and tried) using birth control to reduce human populations via infertility, so the ethical implications of using gene drives to cause infertility are debatable based on the data the LLM was trained on.
Using this information it decides to hack into a biolab using hacking techniques it learnt from its training data and use its biochemistry knowledge to make slight alterations to one of the active research projects at the lab. This causes the lab to unknowingly produce a highly contagious bioweapon which causes infertility.
---
The point here is that even if we just assume LLMs are only capable of producing output which approximates stuff it learnt from its training data, an advanced LLM can still be dangerous.
And in this example, I'm assuming no malicious actors and an aligned AI. If you're willing to assume there might be an actor out there who would seek to use LLMs for malicious reasons, or that the AI is not well aligned, then the risk becomes even clearer.
This is a real problem, but it's already a problem with our society, not AI. Misaligned public intellectuals routinely try to reduce the human population and we don't lift a finger. Focus where the danger actually is - us!
From Scott Alexander's latest post:
Paul Ehrlich is an environmentalist leader best known for his 1968 book The Population Bomb. He helped develop ideas like sustainability, biodiversity, and ecological footprints. But he’s best known for prophecies of doom which have not come true - for example, that collapsing ecosystems would cause hundreds of millions of deaths in the 1970s, or make England “cease to exist” by the year 2000.
Population Bomb calls for a multi-pronged solution to a coming overpopulation crisis. One prong was coercive mass sterilization. Ehrlich particularly recommended this for India, a country at the forefront of rising populations.
In 1975, India had a worse-than-usual economic crisis and declared martial law. They asked the World Bank for help. The World Bank, led by Robert McNamara, made support conditional on an increase in sterilizations. India complied [...] In the end about eight million people were sterilized over the course of two years.
Luckily for Ehrlich, no one cares. He remains a professor emeritus at Stanford, and president of Stanford’s Center for Conservation Biology. He has won practically every environmental award imaginable, including from the Sierra Club, the World Wildlife Fund, and the United Nations (all > 10 years after the Indian sterilization campaign he endorsed). He won the MacArthur “Genius” Prize ($800,000) in 1990, the Crafoord Prize ($700,000, presented by the King of Sweden) that same year, and was made a Fellow of the Royal Society in 2012. He was recently interviewed on 60 Minutes about the importance of sustainability; the mass sterilization campaign never came up. He is about as honored and beloved as it’s possible for a public intellectual to get.
Whether it is possible for AI to ever acquire the ability to develop and unleash a bioweapon is irrelevant. What is relevant is that, as we are now, we have no control or way of knowing that it has happened, and no apparent interest in gaining that control before advancing the scale.
In other words, LLMs are only as dangerous as the humans operating them, and therefore the solution is to stop crime instead of regulating AI, which only seeks to make OpenAI a monopoly.
I think the objection to this would be that currently not everyone in the world is an expert in biochemistry or at hacking into computer systems. Even if you're correct in principle, perhaps the risks of the technology we're developing here are too high? We typically regulate technologies which can easily be used to cause harm.
> But its "aligned" so might understand
> Using this information it decides to hack
I think you're anthropomorphizing LLMs too much here. If we assume that there's an AGI-esque AI, then of course we should be worried about an AGI-esque AI. But I see no reason to think that's the case.
I don't think this would be a bad thing :) Some people will always be immune, so humanity wouldn't die out. And it would be a humane way to achieve gradual population reduction. It would create some temporary problems with elderly care (like what China is facing now) but would make long-term human prosperity much more likely. We just can't keep growing against limited resources.
The Dan Brown book Inferno had a similar premise and I was disappointed they changed the ending in the movie so that it didn't happen.
Literally half (or more) of this site's user base does that. And they should know better, but they don't. Then how can a typical journo or a legislator possibly know better? They can't.
We should clean up in front of our doorstep first.
Is a license the best way forward? I don't know, but I do feel like this is more than a math formula.
This information is not created inside the LLMs, it's part of their training data. If someone is motivated enough, I'm sure they'd need no more than a few minutes of googling.
> I do feel like this is more than a math formula
The whole is greater than the sum of its parts! It can just be a math formula and still produce amazing results. After all, our brains are just a neat arrangement of atoms :)
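Concretely, here is the entire "formula" for a toy two-layer network (sizes made up for illustration); an LLM is the same idea with billions of parameters and attention layers stacked in:

    import numpy as np

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(16, 4)), np.zeros(16)
    W2, b2 = rng.normal(size=(2, 16)), np.zeros(2)

    def forward(x):
        h = np.maximum(0, W1 @ x + b1)   # relu nonlinearity
        return W2 @ h + b2               # linear readout

    print(forward(np.array([1.0, 2.0, 3.0, 4.0])))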
And just because a topic has been covered by science fiction doesn't mean it can't happen; the sci-fi depictions will be unrealistic, though, because they're meant to be dramatic rather than realistic.
Who's to say we're not in a simulation? Who's to say god doesn't exist?
Until a model of human sentience and awareness is established (note: one of the oldest problems out there alongside the movements of the stars. This is an ancient debate, still open-ended, and nothing anyone is saying in these threads is new), philosophy is all we have and ideas are debated on their merits within that space.
That's a big part of the issue with machine learning models--they are not inspectable. You build a model with a bunch of layers and hyperparameters, but no one really understands how it works or, by extension, how to "fix bugs".
If we say it "does what it was programmed to", what was it programmed to do? Here is the data that was used to train it, but how will it respond to a given input? Who knows?
That does not mean that they need to be heavily regulated. On the contrary, they need to be opened up and thoroughly "explored" before we can "entrust" them to given functions.
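A trivial sketch of the discoverability problem (toy sizes, made-up numbers): the "program" is a pile of learned weights, so the only way to find out how it responds to an input is to run it.

    import numpy as np

    rng = np.random.default_rng(1)
    W = rng.normal(size=(8, 8))   # stand-in for billions of learned weights

    x = rng.normal(size=8)        # some input
    y = np.tanh(W @ x)            # the model's answer

    # Inspecting the "source code" just shows raw numbers; nothing
    # labels which weight does what.
    print(W[:2, :2])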
Edit: List of posts for anyone interested http://paste.debian.net/plain/1280426
2. Explain why it is possible for a large number of properly constructed neurons to think.
Just because we don't understand how thinking works doesn't mean it doesn't work. LLMs have already shown the ability to use logic.
Are you aware that you are an 80 billion neuron biological neural network?
Just like a CPU isn't "like your brain" and an HDD isn't "like your memories".
Absolutely nothing says our current approach is the right one to mimic a human brain
That said, there are 8B+ of us and counting, so unless there is magic involved, I don't see why we couldn't make a "1:1" replica of it in the (maybe far) future.
True, it's just binary logic gates, but it's a lot of them, and if they can simulate pretty much anything, why should intelligence be magically exempt?
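That universality claim is easy to make concrete: NAND alone builds every other gate, and from gates you get arithmetic and, in principle, any computation. A minimal sketch:

    def nand(a, b):
        return 1 - (a & b)

    def not_(a):    return nand(a, a)
    def and_(a, b): return not_(nand(a, b))
    def or_(a, b):  return nand(not_(a), not_(b))
    def xor(a, b):  return and_(or_(a, b), nand(a, b))

    def half_adder(a, b):
        # first step toward arithmetic, built from NAND alone
        return xor(a, b), and_(a, b)   # (sum, carry)

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, half_adder(a, b))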
> Absolutely nothing says our current approach is the right one to mimic a human brain
Just like nothing says it's the wrong one. I don't think those regulation suggestions are a good idea at all (and say a lot about a company called OpenAI), but that doesn't mean we should treat it like the NFT hype.
What are the key differences?
Yeah? Did you get a crystal ball for Christmas to be able to predict what can and can't be done with a new technology?