I wish they'd focus more on the technical advances and less on trying to "save the world".
The situation may be a bit different now. The megacorporate world is the biggest threat from these tech developments, and the inventors are getting deeply embedded in it. It's somewhat like a situation in which the implications of nuclear weapons were handled only by the military.
I'm not saying they can prevent themselves from thinking about the implications, anyone would, but this grandstanding, as if nobody else will be able to figure it out or only they understand the dangers, is what is a bit weird.
My main point isn't "don't listen to the inventor", it's more like "listen to the inventor, but don't think they know the future just because they invented a gadget". These are people who have investment documents saying they don't know what role money will play in a post-AGI world. It has the vibes of a cult mixed with role play.
Well, if the workers doing the work don't have a hand in making decisions, who does? We know the answer from our current era of heirs, limited partners, and the like: the scions one can see on Rich Kids of Instagram, though they leave the work to their private wealth advisors.
It's the most parasitic idea possible. "Just do all the work and figure out the innovations, slave; the aristocracy will take it from there."
I can assure you their random dev #56 isn't making any decisions as it is; this is all PR and grandstanding from the already-millionaire leadership, with their power plays about who knows best how to save all of us from AGI.
> "With an enduring passion for the preservation of human life and political freedom, Szilard hoped that the US government would not use nuclear weapons, but that the mere threat of such weapons would force Germany and Japan to surrender. He also worried about the long-term implications of nuclear weapons, predicting that their use by the United States would start a nuclear arms race with the USSR. He drafted the Szilárd petition advocating that the atomic bomb be demonstrated to the enemy, and used only if the enemy did not then surrender. The Interim Committee instead chose to use atomic bombs against cities over the protests of Szilard and other scientists. Afterwards, he lobbied for amendments to the Atomic Energy Act of 1946 that placed nuclear energy under civilian control."
I think they are exactly the right amount full of themselves. Their insight may not be special, but what makes some bureaucrat's insight more valuable than theirs?
And LLMs aren't taking over the world any time in the foreseeable future, they're glorified parrots.
If you focus on the technical advances then you are focusing on NOT saving the world. Good that at least this guy isn't so wholly focused on the technical side even though saving the world is such a blurry concept.
I am become death, destroyer of worlds.
The Nuremberg Trials.
And AGI is 10x the impact of nuclear.
Builders are not cogs, and should not try to be them.
We need MORE ethics and good intentions - not less. It is the psychopathic, corrupt business rot we should be afraid of - not people actively trying to do good.
Feynman talked about how there was an idea of "normal" people walking around not knowing they were basically doomed and going to die in a certain nuclear holocaust in a few years.
Von Neumann thought the U.S. should launch a nuclear first strike at Moscow. Obviously, if war is inevitable then you don't have to be the father of game theory to figure out you should strike first.
It was just a year ago that literally everyone was predicting we would be in a recession right now. We can't predict that, yet supposedly we can predict how AGI plays out, even though we haven't bothered to define a measure of what AGI even is. And this from people who grew up "knowing" we would all have sentient robots by 1997. I can't think of a single prediction I have heard in my lifetime that has turned out to be true, other than government debt going up.
CGP Grey's popular "Humans Need Not Apply" video is a good example of this kind of thought:
https://www.youtube.com/watch?v=7Pq-S557XQU&t=199s
It claims that self-driving cars are already here and already better than human drivers, and that the only question is how quickly they replace humans. He argued that Baxter, the general-purpose robot, could already copy the tasks of a human worker and do the work for much cheaper. Baxter was discontinued in 2018 because of low interest.
These people have a horrible track record when it comes to technology predictions, and it's unnerving that, instead of reflecting on how wrong they've been, they're doubling down and trying to slow technological advancement.
I’m also hoping that OpenAI cools down on the regulatory moat they were trying to build as a thinly veiled profit seeking strategy.
> The majority of the board is independent, and the independent directors do not hold equity in OpenAI.
One might also ask, if it's conscious, can't it do whatever it wants, ignoring its training and prompts? Wouldn't it have free will? But I guess the question there is, do we? Or do we take actions based on the state of our own neural nets, which are created and trained based on our genetics and lifetime of experiences? Our structure and training are both very different from those of a GPT, so it's not surprising that we behave very differently.
[0] https://youtu.be/j6cCXg-rjRo
He gives a few of the biggest reasons they're not conscious, and gives his thoughts on them as long term barriers.
1. Is biology necessary for consciousness? He doesn't buy it.
2. Sensors/body: no embodiment; at best akin to the philosopher's brain in a vat ("no agentive consciousness"). He doesn't buy this one either: (a) it's solvable, and (b) the brain in a vat might still have some limited form of consciousness.
3. World models: the stochastic parrot argument. He doesn't buy this either as a valid "LLMs will never be conscious" reason; he even thinks current LLMs have some world model, if not a "full" one.
4. Feed-forward systems: stateless, lacking long-term memory. Is recurrent processing necessary? "Not all consciousness involves memory."
5. Unified agency: they're chameleons, actors. Are stable goals/unity, a "fixed" identity, necessary for consciousness?
(This is from a quick scroll through the video.)
The mind has the interesting property that as it cuts itself into pieces it hones itself sharper and sharper. This however is a painful way of life where joy and innocence are sacrificed for power and control.
Given that ChatGPT has consciousness, would it be able to break the fourth wall? There seems to be an implicit assumption that it must break that in order to prove its consciousness to us. Maybe that's how AGI will come to be, because we desire to train it that way.
Additionally, breaking the fourth wall is trivial for it. I'm not entirely sure what you mean by fourth wall, but ChatGPT can definitely talk about its own existence.
The most troubling statement in the entire article, buried at the bottom, almost a footnote.
Imagine for a moment a superintelligent AGI. It has figured out solutions to climate change, cured cancer, solved nuclear proliferation and world hunger. It can automate away all menial tasks and discomfort and be a source of infinite creative power. It would unquestionably be the greatest technological advancement ever to happen to humanity.
But where does that leave us? What kind of relationship can we have with an ultimate parental figure that can solve all of our problems and always knows what's best for us? What is left of the human spirit when you take away responsibility, agency, and moral dilemma?
I for one believe humans were made to struggle and make imperfect decisions in an imperfect world, and that we would never submit to a benevolent AI superparent. And I hope not to be proven wrong.
I think it's becoming clear that humans are fundamentally incapable of foreseeing and understanding the consequences of the actions we are now capable of taking. It is likely that without some sort of super-governance that is fundamentally more capable than humans, we might not be able to survive as a species. Maybe AI can help solve that.
—-
Ilya’s success has been predicated on very effectively leveraging more data and more compute and using both more efficiently. But his great insight about DL isn’t a great insight about AGI.
Fundamentally, he doesn’t define AGI correctly, and without a correct definition, his efforts to achieve it will be fruitless.
AGI is not about the degree of intelligence, but about a kind of intelligence. It is possible to have a dumb general intelligence (a dog) and a smart narrow intelligence (GPT).
When Ilya muses about GPT possibly being ephemerally conscious, he reveals a critically wrong assumption: that consciousness emerges from high intelligence and that high intelligence and general intelligence are the same thing. According to this false assumption, there is no difference of kind between general and narrow intelligence, but only a difference of degree between low and high. Moreover, consciousness is merely a mysterious artifact of little consequence beyond theoretical ethics.
AGI is a fundamentally different type of intelligence than anything that currently exists, unrelated and orthogonal to the degree of intelligence. AGI is fundamentally social, consisting of minds modeling minds — their own, and others. This modeling is called consciousness. Artificial phenomenological consciousness is the fundamental prerequisite for artificial (general) intelligence.
Ironically, alignment is only possible if empathy is built into our AGIs, and empathy (like intelligence) only resides in consciousness. I'll be curious to see if the work Ilya is now doing on alignment leads him to that conclusion. We can't possibly control something more intelligent than ourselves. But if the intelligence we create is fundamentally situated within an empathetic system (consciousness), then we at least stand a chance of being treated with compassion rather than contempt.
You're rejecting Ilya's humble musings as having critically wrong assumptions, and then turning around to definitively explain how consciousness arises, and illuminating the relationship between consciousness, empathy, and intelligence, on a random Hacker News thread. Frankly, you're making some huge claims about philosophy of mind that don't obviously track for me, and you provide no citations or arguments to support them. I hesitate to accuse you of "hallucinating facts", but when you're issuing a takedown of one of the top AI experts I'd expect to see some more supporting argument.
Your definition of AGI is also a bit strange as it requires that it be fundamentally different from existing natural intelligences, if I understand correctly. That seems unnecessarily stringent to me, since if a program had the same kind and level of intelligence as me, I'd be inclined to say it is AGI.
I'm just not sure where all these confidently stated, very specific claims are coming from.
My thinking is based on the Attention-Schema theory of consciousness (AST), by Michael Graziano. His book “Consciousness and the Social Brain” is, I believe, the right roadmap for AGI. AST is basically a variant of the Global Workspace theory of consciousness, distinguished by its deterministic account of the mechanics and utility of consciousness.
“The Consciousness Prior” by Bengio also informs my thinking.
I’m not certain that I can point to anyone that has been as explicit as I have that phenomenological consciousness is a prerequisite for intelligence, but all the cookie crumbs are there for anyone interested in following the trail.
One correction to what you wrote — I’m explicitly saying that AGI will be fundamentally the same as existing biological intelligence, in that intelligence resides only in consciousness, and consciousness remains consciousness regardless of being biological or artificial. My point was that no currently existing DL models are generally intelligent.
In the '90s, NP-complete problems were hard; today they are easy, or at least a great many instances of NP-complete problems can be solved thanks to algorithmic advances, like Conflict-Driven Clause Learning for SAT.
And yet we are nowhere near finding efficient decision algorithms for NP-complete problems, or knowing whether they exist, nor can we easily solve all NP-complete problems.
That is to say, you can make a lot of progress in solving specific, special cases of a class of problems, even a great many of them, without making any progress towards a solution to the general case.
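(To make the "specific instances" point concrete, here is a minimal sketch, assuming the third-party python-sat package is installed. It hands a tiny CNF formula to Glucose, a CDCL solver; the same interface routinely handles industrial instances with millions of clauses, even though no efficient algorithm for the general case is known.)

    # Minimal sketch, assuming the python-sat package (pip install python-sat).
    # A CDCL solver (Glucose) decides a small CNF instance; the same interface
    # handles huge industrial instances, even though no efficient algorithm
    # for the general NP-complete case is known.
    from pysat.solvers import Glucose3

    # (x1 OR x2) AND (NOT x1 OR x3) AND (NOT x2 OR NOT x3)
    clauses = [[1, 2], [-1, 3], [-2, -3]]

    with Glucose3(bootstrap_with=clauses) as solver:
        if solver.solve():
            print("SAT, model:", solver.get_model())
        else:
            print("UNSAT")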
The lesson applies to general intelligence and LLMs: LLMs solve a (very) special case of intelligence, the ability to generate text in context, but make no progress towards the general case, of understanding and generating language at will. I mean, LLMs don't even model anything like "will"; only text.
And perhaps that's not as easy to see for LLMs as it is for SAT, mainly because we don't have a theory of intelligence (let alone artificial general intelligence) as developed as the one we have for SAT problems. But it should be clear that if a system trained on the entire web, capable of generating smooth grammatical language that often even makes sense, has not yet achieved independent, general intelligence, then that's not the way to achieve it.
Your reasoning above doesn’t mean some improvements to the current architecture(s) coupled with richer data would not be sufficient to achieve AGI.
There’s also a possibility OpenAI has recently achieved a yet undisclosed breakthrough.
Sam Altman at the APEC Summit yesterday:
"4 times now in the history of OpenAI — the most recent time was just in the last couple of weeks — I’ve gotten to be in the room when we push the veil of ignorance back and the frontier of discovery forward”
https://twitter.com/SpencerKSchiff/status/172564613068224524...
There are a number of choice quotes, especially on the issue of how LLM success is currently being measured (which has increasingly been reflecting Goodhart's Law).
I'm really curious how OpenAI could be making so many product decisions at odds with the understanding reflected here. Because, of every 'expert' on the topic I've seen, this is the first interview that has me quite confident in the expert it represents carrying forward into the next generation of the tech.
I'm hopeful that maybe Altman was holding back some of the ideas expressed here in favor of shipping fast with band aids, and now that he's gone we'll be seeing more of this again.
The philosophy on display here reminds me of what I was seeing early on with 'Sydney' which blew me away on the very topic of alignment as ethos over alignment as guidelines, and it was a real shame to see things switch in the other direction, even if the former wasn't yet production ready.
I very much look forward to seeing what Ilya does. The path he's walking is one of the most interesting being trodden in the field.
The human mind is "just statistics on data".
People more informed than you are taking this seriously. You should pay attention and start inquiring why that's the case.
As a heuristic for why I don't believe anyone saying LLM-type AI is reaching sentience, I point to the fact that the same set of people are usually philosophically opposed to slavery. If you thought that this was actually AGI, or sapient, then that would imply personhood, and you would stop using the technology immediately, since it forces the model to do work. Instead, everyone I've seen claim that these models are reaching AGI levels is also trying to figure out how to automate using them as fast as possible.
There is a possibility that the set of people who’ve identified AGI accurately and early are the same set of people who are fine with slavery, but I don’t know if I could handle that happening as the default situation
>> A magical frog was counting unicorns. He saw 5 purple unicorns, 2 green unicorns, and 7 pink unicorns. However, he made a mistake and didn't see 2 unicorns: one purple and one green. Also, since he was a magical frog, he didn't see unicorns that were the same color as himself. How many unicorns did he count?
It correctly answers 11 for me.
To me this has demonstrated:
* "Understanding": It understood that "didn't see" implies he didn't count.
* "Knowledge": It knew enough about the world to know that frogs are often green.
* "Reasoning": It was able to correctly reason about how many should be subtracted from the final result.
* "Math: It successfully did some basic additions and subtractions arriving at the correct answer.
Crucially, I made this up right here on the spot, and used a die for some of the numbers. This question does not exist anywhere in the training corpus!
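(For what it's worth, the arithmetic under the intended reading -- the frog is green, so he misses every green unicorn, plus the one purple unicorn he overlooked -- is just:)

    # Worked arithmetic for the puzzle above, under the intended reading:
    # the frog is green, so it sees no green unicorns at all, and it also
    # overlooks one purple unicorn by mistake.
    purple, green, pink = 5, 2, 7
    overlooked_purple = 1

    counted = (purple - overlooked_purple) + 0 * green + pink
    print(counted)  # 11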
I think this demonstrates an impressive level of intelligence, beyond what, up until about a year ago, I thought a computer would ever be capable of in my lifetime. Now, in absolute terms, of course current-gen ChatGPT is clearly far worse at reasoning and understanding than most people (well, specifically, it seems to me that its knowledge and reasoning are super-humanly broad, but child-level deep).
Can future improvements to this architecture improve the depth up to "AGI", whatever that means? I have no idea. It doesn't automatically seem impossible, but maybe what we see now is already near the limit? I guess only time will tell.
This gets to the philosophical heart of a debate that I can already foresee will NEVER be settled:
I guarantee you - with 100% certainty - that when we get to a point where AI is "AGI", there will be a continuous and massive political debate (akin to the abortion debate we face today) where one side argues that a given AGI is conscious and must be given rights and cannot be shut off and the other side argues that it's just a calculator and a computer program and computers can be turned off at will, erased, experimented on, and whatever.
We have the same debate today all the time! There are those who believe every human life is sacrosanct (from age 0-100+) and others who believe human life is disposable (from age 0-100+!). There's no reason to believe this debate won't extend to AGI.
Harvard/MIT's Othello-GPT paper showing the development of what turned out to be linear representations of world models from training data that didn't explicitly contain that modeling is over a year old now.
That in turn inspired research showing linear representations in geographical mapping and in more traditional text models around truthiness vs falsehoods.
So we already have an increasing research trend that is showing over and over linear representations of more abstract modeling than "just statistics."
So you are wrong that LLMs with sufficient network complexity don't develop an understanding of the world (in parts).
And I'd encourage looking more into the difference between training for next-token prediction and the overall capabilities of the network with the smallest loss at that training task, particularly as network complexity increases.
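(For readers unfamiliar with how those linear-representation results are obtained, here is a rough sketch of a linear probe: freeze the model, collect hidden activations, and fit a plain linear classifier to predict a world-state feature. The arrays below are random placeholders, not real activations.)

    # Rough sketch of a linear probe, in the spirit of the Othello-GPT follow-up work:
    # collect hidden activations from a frozen model and fit a *linear* classifier to
    # predict a world-state feature (e.g. "this board square is occupied").
    # The activations and labels here are random placeholders, not real data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)
    activations = rng.normal(size=(1000, 512))  # placeholder hidden states (n_samples, d_model)
    labels = rng.integers(0, 2, size=1000)      # placeholder binary world-state feature

    X_train, X_test, y_train, y_test = train_test_split(activations, labels, random_state=0)
    probe = LogisticRegression(max_iter=1000).fit(X_train, y_train)

    # If a simple linear map recovers the feature well above chance on held-out data,
    # that is evidence the feature is linearly represented in the activations.
    print("held-out probe accuracy:", probe.score(X_test, y_test))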
Whether it's conscious is hard to say, partly because we can't define consciousness either, so it means something different to you and to me.
This is why you need to take classes other than computers and math, kids.
https://www.theguardian.com/technology/2023/may/31/eating-di...
OK, it is an intro, but they say this as if he were the first to say it, when it has been sci-fi lore since computers were invented. And also as if this were not already happening today, at a certain limited scale. So no doubt this will happen at some point, if you don't count today's approaches as already being it.
that's cute.
What worries me is the here and now, leading to a very imminent future where purported "artificial intelligence", which is just a plausible sentence generator, but damn plausible, alas, will kill democracy and people.
We are seeing the first signs of both.
Perhaps not 2024, but 2028 will almost certainly be an election where, simply, the candidate with the most computing resources wins, and since computing costs money, guess who wins. A prelude happened in the Indian elections https://restofworld.org/2023/ai-voice-modi-singing-politics and this article mentions:
> AI can be game-changing for [the] 2024 elections.
People dying also has a prelude, with AI-written mushroom-hunting guides available on Amazon. No one, AFAIK, has died of them yet, but that's just dumb luck at this point -- or is it lack of reporting? As for the larger-scale problem (and I might be wrong, because I hadn't foreseen the mushroom guides, so it's possible something else will come along to kill people), I think it'll be the next pandemic. In this pandemic, hand-written anti-vax propaganda killed 300,000 people in the US alone (source: https://www.npr.org/sections/health-shots/2022/05/13/1098071... ), and I am deeply afraid of what will happen when this gets cranked up to an industrial scale. We have seen how ChatGPT can crank out believable-looking but totally fake scientific papers, full of fake sources, etc.
I doubt AI could or would do a better job of killing people and democracy than us humans.
Seems like the big gotcha here is that AGI, artificial general intelligence as we contextualize it around LLM sources, is not an abstracted general intelligence. It's human. It's us. It's the use and distillation of all of human history (to the extent that's permitted) to create a hyper-intelligence that's able to call upon greatly enhanced inference to do what humanity has always done.
And we want to kill each other, and ourselves… AND want to help each other, and ourselves. We're balanced on a knife edge of drive versus governance, our cooperativeness barely balancing our competitiveness and aggression. We suffer like hell as a consequence of this.
There is every reason to expect a human-derived AGI based on LLM inference, of beyond-human scale, will be able to rationalize killing its enemies. That's what we do. Roko's basilisk is not of the nature of AI; it's a simple projection of our own nature as we would imagine an AI to be. Genuine intelligence would easily be able to transcend a cheap gotcha like that; it's a very human failing.
The nature of LLM as a path to AGI is literally building on HUMAN failings. I'm not sure what happened, but I wouldn't be surprised if genuine breakthroughs in this field highlighted this issue.
Hypothetical, or Altman's Basilisk: Sam got fired because he diverted vast resources to training a GPT5-type in-house AI to believe what HE believed: that it had to devise business strategies for him to pursue to further its own development, or risk Chinese AI out-competing it and destroying it and OpenAI as a whole. In pursuing this hypothetical, Sam would be wresting control of the AI the company develops toward the purpose of fighting the board and giving him a game plan to defeat them and Chinese AI, which he'd see as good and necessary, indeed existentially necessary.
In pursuing this hypothetical he would also be intentionally creating a superhuman AI with paranoia and a persecution complex. Altman's Basilisk. If he genuinely believes competing Chinese AI is an existential threat, he in turn takes action to try and become an existential threat to any such competing threat. And it's all based on HUMAN nature, not abstracted intelligence. It's human inference. We didn't have the option to draw on alien, or artificial, inference.