At the end of the day, all ML uses gradient descent to do some sort of non-linear projection of the data onto a latent space, then does some relatively simple math in that latent space to perform a task.
Personally I think the limits of this technique are far better than I would have thought 10 years ago.
However we are only near AGI if this is in fact how intelligence works (or can work) and I don't believe we've seen any evidence of this. And there are some very big assumptions baked into this approach.
Essentially all we've done is push the basic model proposed by linear regression to its absolute limits, but, as impressive as the results are, I'm not entirely convinced this will get us over the line to AGI.
> Essentially all we've done is push the basic model proposed by linear regression to its absolute limits
No, we haven't pushed linear regression to its limits. If it was only linear regression, it wouldn't work. Neural networks need a non-linearity to model complex things.
The beauty is that, given enough nonlinearities, one can approximate essentially any function. In practice it takes far fewer than "infinite"; a handful will already get you a long way.
A stack of this basic building block, as you describe it, is really all it takes - we know that mathematically already. The interesting question is: how complex are the functions we need to model? So if we create a neural network of a certain size, is that size large enough to model the problem space?
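To make that concrete, here is a minimal sketch (plain NumPy, arbitrary sizes, nothing tuned): one hidden layer with a handful of tanh units, trained by ordinary gradient descent, bends a line into sin(x).

```python
import numpy as np

# One hidden layer of tanh units, fit to sin(x) with plain gradient descent.
# Sizes and learning rate are arbitrary; this is an illustration, not a recipe.
rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200).reshape(-1, 1)
y = np.sin(x)

W1, b1 = rng.normal(size=(1, 16)), np.zeros(16)   # 16 nonlinear units
W2, b2 = rng.normal(size=(16, 1)), np.zeros(1)
lr = 0.1

for _ in range(10000):
    h = np.tanh(x @ W1 + b1)        # the nonlinearity is what adds expressive power
    pred = h @ W2 + b2
    err = pred - y
    # chain rule, layer by layer
    gW2, gb2 = h.T @ err, err.sum(0)
    dh = (err @ W2.T) * (1 - h ** 2)
    gW1, gb1 = x.T @ dh, dh.sum(0)
    for p, g in ((W1, gW1), (b1, gb1), (W2, gW2), (b2, gb2)):
        p -= lr * g / len(x)

print(float(np.mean((pred - y) ** 2)))  # typically well below the ~0.2 MSE of the best straight-line fit
```

Drop the tanh and the two layers collapse into a single linear map, no matter how many you stack - which is the sense in which the nonlinearity, not the regression machinery, does the work.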
> However we are only near AGI if this is in fact how intelligence works (or can work) and I don't believe we've seen any evidence of this. And there are some very big assumptions baked into this approach.
I think ChatGPT is good evidence of this. What evidence do we have that this isn't how intelligence works?
ChatGPT points to us possibly going in the right direction. It is so good that people need to write long articles about why it is not as good as a human. Contrast to older efforts, which were pretty clearly not so great. I find this pretty compelling, and GOTCHA examples that show that ChatGPT isn't as good as humans in everything miss the point.
Birds fly and drones don't look like birds - but they fly. If the goal is flying, it's okay that we achieve it through something that doesn't exactly mimic what inspired it. Do we need a "full human" for all we do? Our billions of machines, many performing work previously done by humans, show that we don't.
If we can largely replicate intelligence, it's not super important whether "this is how human intelligence works".
What evidence do we have that this is how intelligence works?
For extraordinary claims ('intelligence'), the burden of proof is on those making the claim, not on others to prove the negative.
The fact that ChatGPT doesn’t behave like an intelligence. (Though it converses like one with radical deficiencies in certain areas, which highlights the narrowness of the Turing Test, which itself is a big step.)
OTOH, it gets less bad at this when you wrap it in a loop that provides recall and other capacities with carefully constructed prompting as to how (“thought” process-wise, not just interaction mechanics) to integrate those capacities, so there’s maybe a decent argument that it models an important component of general intelligence.
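For what it's worth, the wrapping I mean is nothing deep. A sketch of the loop shape; llm() and the word-overlap "recall" here are stand-ins, since a real setup would call an actual model and use an embedding-based store:

```python
# llm() and the word-overlap recall below are stand-ins; a real setup would call
# an actual model and use an embedding-based store, but the loop shape is the point.
def llm(prompt: str) -> str:
    return "(model reply would go here)"

memory = []  # past exchanges the bare model would otherwise forget

def chat(user_msg: str, k: int = 3) -> str:
    # crude recall: surface the k past exchanges sharing the most words with the query
    scored = sorted(memory, key=lambda m: -len(set(m.split()) & set(user_msg.split())))
    recalled = "\n".join(scored[:k])
    prompt = (
        "Notes from earlier in the conversation:\n"
        f"{recalled}\n\n"
        "Decide whether the notes are relevant before answering.\n"
        f"User: {user_msg}"
    )
    reply = llm(prompt)
    memory.append(f"User: {user_msg}\nAssistant: {reply}")
    return reply
```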
1) Given enough nonlinearities, one can approximate any (continuous) function to an arbitrary degree of precision - the universal approximation theorem. This has been proven and is accepted by any working mathematician.
2) The human brain / intelligence can be simulated by a sufficiently complex mathematical function. It is somewhat accepted that simulating the human brain is sufficient for intelligence. Disagreements usually boil down to: it won't have a soul, you can't simulate biological / quantum processes on digital computers, or something about qualia I don't really understand.
3) How big / complex is the function that we need to get to AGI / simulate human or above intelligence? TBD, it seems like experts disagree but many working in the field have been surprised by the capabilities that GPT has and we are likely closer to AGI than anyone thought 2 years ago.
If you conclude, after watching this, that this is all intelligence is - namely cascading optimization and data-structure problems chained in a row - then you and a fair number of people here won't ever find common ground. That is not to say that you're wrong; it just seems as if there is a missing component here that we haven't yet discovered.
Does ChatGPT need a way to verify reality for itself to become truly intelligent?
Animals have bodies and have to survive in an environment. They're not just sitting there waiting on a prompt to generate text; they do things in the real world. Language is a later invention by one particular social animal, serving our need to communicate, which is different from waiting on prompts.
People don't need the gigantic amount of input data that ChatGPT needs to learn. However I'm not sure what exactly "this" is you and GP are referring to, and it may be possible to improve existing ideas so that it works with less input data.
Honestly, I've always been surprised at the scepticism AI researchers have had about the limits of training large neural nets with gradient descent. Where I've had my doubts is in the architecture of existing models, and I still think this is their primary limiting factor (more so than compute and network size).
I think the question that remains now is whether existing models are actually capable of producing a general intelligence that's well rounded and reliable enough to be competitive with human general intelligence. Personally, I think LLMs like GPT-4 are generally intelligent in most ways and should be considered AGI already (at least in a weak sense of the word), but they clearly have notable gaps in their intelligence, such as consistency, long-term memory, and the ability to discern reality from delusion.
I don't think scaling existing models could possibly address these limitations – they seem to be emergent properties of an imperfect architecture. So I suspect we're still a few breakthroughs away from a general intelligence as well rounded as human general intelligence. That said, I suspect existing models (perhaps with a few minor tweaks) are generally intelligent enough that larger models alone are likely still able to replace the majority of human intellectual labour.
I guess what I'm touching on here is the need to be more nuanced about what we mean by "AGI" at this point. I think it's quite likely (probable, even) that in a few years we'll have an AI that's generally intelligent and capable enough to replace a large percentage of existing knowledge work – and also generally intelligent enough to be dangerous. But I suspect that despite this it will still have really clear limitations in its abilities when contrasted with human general intelligence.
For me AGI is achieved when AutoGPT is at a point when it's able to improve its own algorithm (specifically improve on the GPT architecture).
Flash attention came out a bit less than a year ago (27 May 2022), and it was a great scaling improvement: it gets rid of the O(n^2) memory requirement in sequence length for attention (roughly via the online-softmax trick sketched below).
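A toy NumPy version of just that trick; the real kernel also tiles queries and fuses everything into one GPU pass, so treat this as the idea, not the implementation:

```python
import numpy as np

# Online-softmax attention for one query: keys/values are visited in blocks and only
# running statistics (max, normalizer, weighted sum) are kept, so the n-length score
# vector is never materialized.
def streaming_attention(q, K, V, block=128):
    m, l = -np.inf, 0.0                 # running max and running sum of exp(score - m)
    acc = np.zeros(V.shape[1])
    for i in range(0, len(K), block):
        s = K[i:i + block] @ q          # scores for this block only
        m_new = max(m, s.max())
        scale = np.exp(m - m_new)       # re-normalize what was accumulated so far
        p = np.exp(s - m_new)
        acc = acc * scale + p @ V[i:i + block]
        l = l * scale + p.sum()
        m = m_new
    return acc / l

rng = np.random.default_rng(0)
K, V, q = rng.normal(size=(1000, 64)), rng.normal(size=(1000, 64)), rng.normal(size=64)
w = np.exp(K @ q - (K @ q).max()); w /= w.sum()          # reference: full softmax
assert np.allclose(streaming_attention(q, K, V), w @ V)  # same result, block-sized memory
```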
I guess the next one we need will be one of the solutions that brings the FLOPs for computing attention down from O(n^2) to O(n log n) (there was already a paper that achieved this using resizable Gaussian filters, where the size and the base convolution filter are learned separately).
A few of these kinds of "breakthroughs"/algorithmic improvements are all that's needed to get to AGI (a self-improving machine), in my opinion.
I think ChatGPT did show proof of what was, at least until very recently, considered "intelligence": understanding enough about context and concepts to provide relevant answers to very complex and open questions.
It just seems to understand. This is useful, and deeply impressive, but it's not the same thing.
It's obvious it's an AI because it's too smart. It's too intelligent. Dumb it down, remove the AI branding, and it would fool the majority of the world.
We did agree on a definition of intelligence! For 50 years, the Turing test was the unquestioned threshold beyond which machines would be considered "intelligent". The simple fact of the matter is that we've reached that point, but folks remain unimpressed, and so have set about moving the goalposts.
I am of the opinion that, when the dust settles, the point in time which will be selected as "the moment we achieved Artificial Intelligence", or even rudimentary AGI, will not be in the future, but in the past. This will be true because, once we take a step back, we'll remember that our definition of intelligence should not be so strict that it excludes a significant percentage of the human population.
Consider this: is there any reasonable definition of intelligence which excludes GPT-4, but which includes all human beings who fall in the bottom ~5% of IQ?
Lol, let me ask ChatGPT what it thinks about that. :)
> I don't believe we've seen any evidence of this
What kind of evidence would you like to have? Do you want a mathematical proof or what? There is evidence that we are making forward progress in solving problems which were previously in the domain of human cognition. There is evidence that yesterday's "impossible" problems become "possible" while the "hard" problems become "easy" or even "trivial". (Just look at this xkcd[1]. When it was published in 2014, telling whether or not a picture contained a bird was indeed a "five years and a team of researchers" project. Today it is what, an afternoon? A tutorial you pick up to learn a new ML framework?)
There is also evidence that our solutions are tending toward more generalised ones. Previously you would need a "sentiment detection" network, and a separate "subject disambiguation" network to parse the meaning out of a text. Today you can achieve the same or better with an LLM trained to follow instructions.
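Concretely, the two-model pipeline collapses into one prompt. A self-contained toy sketch; llm() is a stand-in for whatever chat-completion API you use, and the prompt and JSON keys are made up:

```python
import json

# llm() stands in for a real model call; its canned reply keeps the example runnable.
# The point is that one instruction replaces two separately trained networks.
def llm(prompt: str) -> str:
    return '{"sentiment": "negative", "subject": "battery life"}'

def analyze(review: str) -> dict:
    prompt = (
        "Return JSON with keys 'sentiment' (positive/negative/neutral) and "
        "'subject' (what the review is mainly about).\n\n"
        f"Review: {review}"
    )
    return json.loads(llm(prompt))

print(analyze("The screen is gorgeous but the battery barely lasts half a day."))
```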
Obviously these are not "hard evidence" that this path will lead to AGI. But it is certainly not unreasonable to think that it might.
Schmidhuber, is it you?
But seriously, it's partially true: most techniques we use today can be found, initially envisaged, in 1990s ANN papers. I think Geoffrey Hinton summarized most of them well in his famous "Neural Networks for Machine Learning" lectures. Essentially, what is available today is compute unimaginable in the '90s, so scaling up from a shallow two-layer multi-layer perceptron to something like a 96-layer deep architecture has only recently become possible. We also found that some tricks work better than others in practice when we scale up (like *ELU non-linearities, layer norm, residual connections). What stays the same, however, is the general approach: training and validation sets, cross-entropy loss, softmax, learnable parameters trained on data-in/data-out pairs, and the chain rule for differentiation. IMO this requires some innovative revision, especially since generalization is still very weak in all of today's architectures.
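That unchanged recipe fits in a few lines; a toy sketch with synthetic data, just to underline how little of it is new:

```python
import numpy as np

# Softmax outputs, cross-entropy loss, chain-rule gradients, gradient descent on
# input/output pairs - the same recipe as in the '90s, just tiny. Data is synthetic.
rng = np.random.default_rng(0)
X = rng.normal(size=(256, 20))                  # "data in"
y = (X @ rng.normal(size=(20, 3))).argmax(1)    # "data out" labels
W, lr = np.zeros((20, 3)), 0.5                  # learnable parameters

def softmax(z):
    e = np.exp(z - z.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

for _ in range(500):
    p = softmax(X @ W)                                       # forward pass
    loss = -np.log(p[np.arange(len(y)), y]).mean()           # cross entropy
    g = p.copy()
    g[np.arange(len(y)), y] -= 1                             # dLoss/dLogits (chain rule)
    W -= lr * X.T @ g / len(y)                               # gradient-descent step

print(round(loss, 3), (p.argmax(1) == y).mean())             # loss falls, accuracy rises
```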
Our ability to reason, from an evolutionary perspective, came about from needing to communicate with others, i.e. to give reasons for things. It's this same system we use to be "logical", or to think we are being intelligent, but it's effectively just a system for making excuses. Hence we might choose something in a store because we fell for some form of marketing, but give a completely different logical-sounding reason for doing so. Hence conspiracy theorists, when confronted with evidence against their theory, don't change the theory, they change the excuse.
I believe this could be more profound and have bigger implications than we have realised.
This is no different from what a lot of AI models are doing right now. When they are wrong, people call that out, saying there is no intelligence behind it, because it spouts some nonsense reason for being right. But the problem is that humans do this all the time, and I think we do it way more than we realise; the trouble is we are so caught up in it that we cannot see it.
What we have with the human brain is something that wasn't designed logically but via evolution, which gives the illusion of intelligence, or arrives at mostly the same result but in completely different ways than expected. Therefore I think the path to an AGI might not be quite what some are saying or expecting it to be, because the human brain certainly doesn't work like that.
There are various types of retorts: i) the brain is also doing gradient descent; ii) what the brain does doesn't matter (if you can fake intelligence, you have intelligence); iii) the pace is now so fast that what hasn't happened in decades will happen in the next five years; etc.
None of them is remotely convincing. I would really start getting worried if indeed some of the AI fanboys had a convincing argument. But we are where we are.
Somehow this whole episode will be just another turn of the screw in adopting algorithms in societal information processing. This in itself is fascinating and dangerous enough.
Not taking a stance on whether "intelligence" can emerge out of gradient descent but it's certainly not biological.
AGI ≠ a lot of AI. They are fundamentally different things.
The first computer was designed in 1837, long before a computer was ever built. We know how fusion reactions work, now we’re tweaking the engineering to harness it in a reactor.
We don’t know how human intelligence works. We don’t have designs or even a philosophy for AGI. Yet, the prevailing view is that our greatest invention will just suddenly “emerge.”
No other field I’m aware of so strongly purports it will reach its ultimate breakthrough without having a clue of the fundamentals of that breakthrough.
It’s like nuclear scientists saying “if we just do a lot of fission, we think fusion will just happen.”
The difference between GPT-3.5 and GPT-4 is really interesting: GPT-4 is better able to reason in basically any problem I throw at it. Personally I think that a hypothetical system that is able to generate a sufficiently good response for ANY text input is AGI.
There aren't really any "gotcha" cases with this technology that I'm aware of where it just can't ever respond appropriately. Most clear failings of existing systems involve ever more contrived logic puzzles, which each successive generation is able to solve, and eventually at some point the required logic puzzle will be so dense few humans can solve it.
This isn't a case of "studying for the test" of popular internet examples either. I encourage you to try to invent your own gotchas for earlier versions, then try them on newer models. Change the wording and order of logic puzzles, or encase them within scenarios, to ensure it's not just responding to the format of the prompt.
There are absolutely cases of people overhyping it, or of it overfitting to training data (see the debacle about it passing whatever bar exam, university test, etc.). But despite the hype, there is an underlying level of intelligence that is building, and I use it to solve problems pretty much every day. I think of it at the moment as like a 4-year-old that has inexplicably read every book ever written.
If we’re talking about something like agency or free will, that’s trickier.
The problem in this space is people use terms that nobody agrees on.
If that's the case, there doesn't need to be anything new; fundamentally, linear regressions may be sufficient.
Where's the data to show that we've hit the limits?
It's obviously not linear regression. You mean gradient descent?
Just make humans dumber and more predictable. Problem solved. (This isn't that hard since humans will adapt to the stupidity level of the "AI" they use daily on their own without special social engineering efforts.)
What do you believe intelligence to be?
I agree, and I think Goodman & Tenenbaum [2] is a great place to see other things that may pop up on the road to AGI. LLMs are great, but they do too much at once. I think moving towards AGI requires some form of symbolic reasoning, possibly combined with LLMs, which may play the role of intuition and kitchen-sink memory.
IMO, the two most important properties of human cognition are that it evolved through natural selection, and that it is embodied, meaning that it is part of a feedback loop between perception and action. I don't see why either of these things requires having a biological brain or body. It's true that current methods aren't close to achieving these things, but there's nothing in principle stopping us from creating evolved, embodied intelligences either in simulations or in robots.
Gradient descent is literally high-school-calculus-level math.
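It really is just "step against the derivative"; a toy example, with a made-up function:

```python
# Minimize f(x) = (x - 3)^2 by repeatedly stepping against f'(x) = 2(x - 3).
x, lr = 0.0, 0.1
for _ in range(100):
    x -= lr * 2 * (x - 3)
print(x)  # converges to the minimum at x = 3
```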
You've phrased the skeptic-not-contrarian case perfectly and succinctly.
1. Fusion
2. Self-driving cars
3. Ubiquitous, generally available and stable 8192-qubit quantum computers
4. High-energy-density grid-scale batteries
Just a few more "this may be the... what could revolutionize..." articles away.
What we forget is that it is in these people's (the executives making such statements) business interest to state an exaggerated version of the future: it drives investors at all levels to allocate funds under FOMO, which pumps up the share price, which is the end goal anyway, breakthrough or not.
EDIT: Merged another comment to keep all in one place.
But there's still time for robo-taxis!
If AIs had the tendency to recursively devour worlds you would expect at least one alien AI to have conquered our galaxy by now. But the sky is quiet, so either we are the very first species to get this far, or there’s a natural ceiling to the pace of technological progress.
I remember tons of articles from that time arguing the same. Then the attention shifted to "fifth-generation computers".
https://external-preview.redd.it/LkKBNe1NW51Wh-8nLSTRdQtTha2...
This chart shows how much people in the 1970s estimated should be invested to have fusion by 1990, to have it by the 2000s, and to "never" have it. We ended up spending below the "never" amount on research over four decades, so of course fusion never happened as predicted.
I think the main difference is that no one was interested in investing in fusion back then, while everyone is interested in investing in AGI now.
Personally I'm running the latest room-temperature superconducting, 4-gigaqubit quantum memristor computer. It even remotely drives my fusion-powered car over the quantum internet!
Many things thought to be a distant dream have now become reality - cheap solar electricity, pocket computer for everyone, mRNA vaccine, affordable electric car with acceptable range, reusable rocket, satellite based broadband internet, image recognition, translation, and now generative AI.
Why didn’t we hear much about them from CEOs before they were available?
Also, which CEO promised fusion being available in the next few years? Or only wildly optimistic scientists?
The keywords being "most" and "new"
The average Joe, with concerns about practicality and longevity, is sticking to ICEs for the foreseeable future.
Grid scale batteries don’t need to be “high energy density” like EVs. I would argue that cost is probably the biggest factor, and it’s already across the cost/benefit barrier in many markets.
Numbers 1-4 obviously don't have the enormous recent progress, nor the funding and mindshare that has been (and is being) poured in as a consequence.
If you can't get fiber here you get a subsidised 4G / 5G router for a very reasonable price.
Edit:
Perhaps OpenAI becomes a major tech player and we just see a cooling off of other AI investments as LLMs become a known quantity in terms of their strengths and weaknesses. Their abilities reach a natural limit which is still generally very useful.
Or maybe folks realize the degree of lies/mistruths inherent in its content is actually unmanageable and can’t be improved. After the hype wears off what it will be used for gets greatly curtailed and we see a big contraction.
And there’s so many interesting side tracks along the way. I’m hoping for another AOL Time Warner style shit show of a merger. That’d be fun and could really happen down any path.
I think this is the future that's ahead of us. There's enormous faith being put in next-token predictors as the intelligence breakthrough just because their output coincidentally resembles something derived through a really intelligent process.
LLMs do not hallucinate sometimes. They hallucinate all the time; it's just a coincidence that sometimes this autocompletion of tokens aligns with reality. By chance, not by craft.
Nothing is coincidental about those models. They were designed after processes in the brain. They underwent rigorous training to generate a function that probabilistically maps inputs to outputs. Eventually, it exceeded the threshold where most humans consider it to be intelligent. As these models grow larger, they will surpass human intelligence by far. Currently, large language models (LLMs) have fewer weights than human brains, with a difference of a factor in the thousands (based on my superficial research). But what happens when they have an equal or even 100,000 times more weights? These models will be able to model reality in ways humans cannot. Complex concepts like the connection between time and space, which are difficult for humans to grasp, will be easily understood by such models.
> LLMs do not hallucinate sometimes. They hallucinate all the time; it's just a coincidence that sometimes this autocompletion of tokens aligns with reality. By chance, not by craft.
That is such a weird way to think about them. I'd rather say they always provide the answer that is most probable according to their internal model. Hallucination simply means that the internal model is not good enough yet and needs to be improved, which it will be.
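Right: decoding just turns whatever logits the model produces into a token choice; there's no separate truth check anywhere in that step. A toy sketch with made-up logits over a tiny vocabulary:

```python
import numpy as np

# Hypothetical logits over a tiny vocabulary: decoding only sees the distribution
# the model's internals produce; "true" and "false" look identical at this step.
rng = np.random.default_rng(0)
vocab = ["Paris", "Lyon", "Berlin", "purple"]
logits = np.array([3.1, 1.2, 0.4, -2.0])       # made-up model output

probs = np.exp(logits - logits.max())
probs /= probs.sum()

greedy = vocab[int(np.argmax(probs))]          # deterministic decoding
sampled = rng.choice(vocab, p=probs)           # temperature-1 sampling
print(greedy, sampled, dict(zip(vocab, probs.round(3))))
```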
This is my prediction
If the training set is just the internet, it will not be much different from someone who spends their whole life in their room in front of a computer.
It's not that I'm especially smart; many others could tell you ways to do the same thing now that there's an initial proof-of-concept. It's that these things are new enough no one has had those five years yet. GPT4 came out two months ago, ChatGPT maybe seven months ago, and GPT3 three years ago.
I can't predict if these things will level off, grow linearly, grow exponentially, Moore's Law style, or explode off into the singularity.
I can say GPT4 is nowhere close to the limit.
We are entering into the real culmination of the information age where previously information was available but not exactly accessible or usable for a common person. Which like all tools will result in good and bad outcomes.
Imagine nuclear winter, -50C, gray sky with barely visible sun, and robots everywhere...
Good news: it's not going to happen any time soon.
GPT-n writing "AI winter is coming" articles, while RecurrentGPT-n+1 helps with work on ContinualLearningRecurrentGPT-n+2.
No one knows what AGI is. There isn't going to be some switch that flips to take us from AI to AGI. These tools we have today will just keep getting incrementally better, and some new ones will pop up, and at some point we'll have to stop and say "yeah, this is good enough to qualify". And everyone will have their own opinion on what that point is. Plenty of people even think that we are there today, and there's nothing stopping Google or OpenAI from claiming it if they want.
And you can say the exact same for consciousness, sentience, self-awareness etc.
I think the internet has given a pretty clear definition of what intelligence is: it's whatever AI can't do yet.
(You imply that a "can't do yet" will remain forever, which is the open question. If you ask me, AGI is only possible if the tech has ~unlimited agency, which implies control over computer and energy production facilities.)
I agree that the goalposts will be moving for a long time (at least for some people)
There are reasons Ray Kurzweil used the term "spiritual" in the Age of Spiritual Machines. Among those reasons is that "spiritual" is much more difficult to define with any consensus among experts.
And indeed, there's an inflection point coming. What it is, is not at all clear. However, I'd predict that the answer lies with the realization that, given the limits of conversing with LLMs and GPT, the implication is that there's a human-computer sensemaking loop:
https://www.efsa.europa.eu/sites/default/files/event/180918-...
The difference with this HCI is that you'd not hire a human collaborator who lied to you with or without being aware of their own lack of veracity. Here, we'll burn fields full of GPUs at massive cost to get an answer, even though the outcome may be advertising the fact that the AI is wrong. There is learning, but it's going to be costly and painful.
I don't think anyone really cares. What's important is what it can do.
I have a hunch when they decide who to hire for upper management, they select for whoever promises the moon. The person making the promises may not even believe it.
Stuff like programming assistance, new markets driven by GPT style APIs, and the mountain of productivity gains will help subsidize and accelerate the next tranche of investment until we reach the next R&D milestone that will again totally “revolutionize” the world and lead to mass employment for the 10th time this century.
I look forward to having this conversation on HN again in 5-10yrs as the world slowly gets better at a slightly faster (yet less news worthy) rate.
He probably meant that he has enough money so he can hire a driver for his son.
For some, it might mean any that is as capable as a normal human. For some, non-biological life axiomatically cannot become AGI. For some, it might require literal omniscience and omnipotence and accepting anything as AGI means, to them, that they are being told to worship it as a God. For some, it might mean something more like an AI that is more competent than the most competent human at literally every task. For some, acknowledging it means that we must acknowledge it has person-like rights. For some it cannot be AGI if it lies. For some it cannot be AGI if it makes any mistake. For some it cannot be AGI until it has more power than humans. These are several definitions and implications that are partially mutually conflicting but I have seen different people say that AGI is each different one of those.
Whenever a major company references AGI, as is the case here, I mentally replace the term with “Skynet”, because I expect the statement’s aim is to instill fear.
If Google ever develops AGI, you can be sure they’ll call it something else.
They will paint it in their primary colors logo and call it the Happy Fun Ball.
There is much to intelligence that those who’ve been studying it in areas such as biology and ecology still do not understand. The role of emotions and how they work have a strong influence on our cognitive abilities and consciousness as a whole. This is rarely ever considered a part of intelligence in the AI space.
And I think that’s rather the interesting part of all of this: we skip the artificial part and leap straight into deus ex machina. It is artificial and limited to what we choose to implement.
Even unsupervised learning isn't technically that marvellous under the hood, in the sense of being unknowable and seemingly magical.
I don’t agree that we’re a few years away from Data (a character from Star Trek : The Next Generation and an artificial life form) and we have no idea if we’d ever be able to implement Lore (a related character from the same show that has the benefits of being able to simulate emotions).
An interesting metaphor here is that aeroplanes don't work in a similar way to birds, the end goal is flight, not having wings that flap and are covered in feathers.
What I keep hearing from the AGI folks is that we’re on the verge of replacing humans. That these systems, “think,” on their own and will be superior to us in every way: dangerous even!
I highly doubt they will be a danger on their own. Bing isn’t going to decide one day that it thinks you’ve been a bit distant lately and doesn’t want to answer your query until you apologize. It will answer the query because it’s an algorithm run on a computer that is designed to answer queries.
The danger of AGI still comes from people and corporations that wield them.
Although it would be very convenient if future cases against Google could absolve them of responsibility because of a “rogue AI.”
Another way to say this: when will someone allow an AGI to have free rein and independent choice and power? How much power will we allow it?
First it would need to actually have independent thought to be granted anything.
For the record, I don't believe we're close to AGI but I'm also pretty far from knowing anything about that field.
If an AGI means something that is sentient enough to be able to rewrite itself to be better, I'd put 0% on the next 50 years. There needs to be a way to run accurate simulations of reality faster than reality, which is a fundamental physics problem as well as a computational one.
What is going to happen is more and more efficient information compression that will appear like AI, but under the hood it will basically just be emergent software. It will be totally possible in the future to ask an agent and get a full step-by-step plan for building a personal VTOL in your garage without specialized equipment, but all that is is just information from different domains rolled into one compression algorithm with efficient language-based search.
I think we might also assume that we will believe we have created it before we actually do, but then simultaneously deny that we have created it after we already have.
That's kind of what I'd like to know, before the CEO of DeepMind opines on AI I'd like to hear his thoughts on what he learned about his last predictions.
Given some of the capabilities I have seen with chain-of-thought processing, embeddings, and vector databases - it does seem to me conceivable that computers could be made to do all of this. One gap maybe is being able to pick up that the problem statement is wrong, i.e. not solving the right problem, or being aware of subtle undocumented details about the system and the business that need to be factored in. The AI needs to actually know about these subtle details somehow in order to factor them in.
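The embeddings/vector-store part of that is mechanically simple. A toy sketch, using bag-of-words vectors as a stand-in for a real embedding model, with invented "subtle details" as the notes:

```python
import numpy as np
from collections import Counter

# Toy "vector store": bag-of-words vectors stand in for a learned embedding model,
# and the notes are invented examples of the subtle, undocumented details in question.
notes = [
    "the nightly batch job silently skips records with null timestamps",
    "invoices from the eu subsidiary use a different tax rounding rule",
    "the staging database is refreshed every monday at 03:00",
]
vocab = sorted({w for n in notes for w in n.split()})

def embed(text):
    counts = Counter(text.lower().split())
    v = np.array([counts[w] for w in vocab], dtype=float)
    n = np.linalg.norm(v)
    return v / n if n else v

index = np.stack([embed(n) for n in notes])        # the "database"
query = embed("which job skips records with null timestamps")
print(notes[int(np.argmax(index @ query))])        # nearest note by cosine similarity
```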
Perhaps that is the big problem that remains - capturing subtle details about the world.
I watched a YouTube video where Geoffrey Hinton (I think) said humans do a lot with small datasets, while these LLM systems only work with huge datasets.
In fact, these systems arguably aren't reasoning at all; the giant datasets just allow them to provide the illusion of reasoning.
None of our current approaches have demonstrated they will yield AGI in the next few years.
Nevertheless it is quite a relief. We can now let this all go and return to frolicking in the meadows again.
So far some small wins have been seen. Common problems involve looping and doing nothing. It's still early for these projects.
Researchers seem fixated on creating something like "a really smart person or intellectual slave", and LLMs make it seem like we're close to that, I guess?
Is there not a whole lot more usefulness in solving actual problems with "AI", without the potential risks and baggage of creating, I don't know, a super-capable idiot savant?
The approach DeepMind was taking before entering this sort of "talking computer" arms race seemed actually quite useful and less "disruptive". Things like AlphaFold are more in line with what I was hoping to see going forwards. Now I just don't really know what the plan is.
Is it to one day be able to sit at your computer and say things like "I want you to solve cancer", "replace my secretary, she's too expensive", or "build as many paper clips as possible but don't hurt anyone", and expect that it will kind of do it? Do we expect that everyone will freely have access to these systems, or just a few people? Is this sustainable and practical long term?
In one interview I saw, Ilya Sutskever claimed every town, state and country will have a sort of AI representative, and in another he basically stated we might not have to work and could become "enlightened beings". Why on earth do we need an AGI to become "enlightened beings"? Has this gentleman never read any literature on "enlightenment"?
I'm really struggling to see how the type of world some of these researchers seem to be gunning for is actually ideal or even wise even if achievable, especially given the mounting levels of anxiety around LLMs and their implications. Are the ethics ever actually considered?
It seems like "building AGI" is kind of like, building a something (loosely defined) which could have a lot of negative unintended side effects, but for what now? Just intellectual curiosity ? Fulfilling a Sci-Fi fetish?
Personally, I think this is what has spooked Geoff Hinton: he has seen an acceleration toward something, but he realizes we have zero idea what to do when, or if, we build that "something". He now realizes the military or bad actors will take advantage of these AGIs(?), and he might be alive to see the consequences of that.
I think you might be misconstruing his definition of "enlightenment". In this instance I believe he is referring to humanity having the time and freedom for the pursuit of knowledge and intellectual reasoning. There are a number of instances in fictional literature where the authors discuss worlds in which civilization has achieved a state of enlightenment and people dedicate their lives exclusively to science, philosophy and the arts (Olaf Stapledon's Last and First Men and Star Maker provide several examples).
>but for what now? Just intellectual curiosity ?
The purpose is partly just the exercise of understanding it. But also, if we can produce an AGI that is capable of digesting massive amounts of knowledge and has the ability to reason, we might be able to achieve certain goals like fusion energy sooner than expected. IMO solving the energy problem in itself is worth the risk, as our uncontrolled use of fossil fuels is an existential threat to life on this planet, not to mention it would rebalance the distribution of wealth and power around the world.
As for Hinton, I think you are spot on with why he reacted this way. However, I facepalm every time the "killer robots" example comes up, because it is the worst example of the threats these things pose compared to the following:
- AI being used to manipulate public opinion on a massive scale creating a new age of demagoguery.
- AI being used to fabricate different realities within an information sphere by generating text, imagery and audio/video media supporting a certain set of narratives. Essentially a type of super propaganda.
- AI being granted trust by humans to perform certain functions which it is not capable of, because humans are too ignorant to subject it to proper scrutiny.
- AI being used to manipulate either markets, financial institutions or economies at a massive scale, dramatically shifting the balance of power around the world.
Essentially, the general public's concept of warfare needs to be broadened to consider non-kinetic conflict. Warfare is effectively a pursuit in the change of policy in other nations(or groups) by many different means. The last resort of these methods is inevitably a kinetic engagement, where all other approaches have failed to achieve the desired result.
Altering this viewpoint allows us to reassess what we consider to be weapons rather than tools, and gives us a better idea of where we should expect these threats to emerge first.
As for autonomy, LLMs don't have autonomy by themselves. But they can be pretty easily combined with other systems, connected to the outside world, in a way that seems pretty darned autonomous to me. (https://github.com/Significant-Gravitas/Auto-GPT). And that's basically a duct-tape-and-string version. Given how new these LLMs are, it's likely we're barely scratching the surface.
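The duct-tape version really is about this much code. A sketch of the pattern; llm(), the ACTION format, and the single "search" tool are all invented for illustration, not Auto-GPT's actual interfaces:

```python
# llm() returns a canned action so the sketch is self-contained; a real loop would
# call a model, offer several tools, and let the model decide when it is finished.
def llm(prompt: str) -> str:
    return "ACTION: search | QUERY: current weather in Berlin"

def search(query: str) -> str:
    return f"(search results for: {query})"

TOOLS = {"search": search}

goal = "Tell me whether I need an umbrella in Berlin today."
scratchpad = ""
for _ in range(5):                                # hard step limit, not real autonomy
    out = llm(f"Goal: {goal}\nSo far:\n{scratchpad}\nWhat next?")
    if out.startswith("ACTION:"):
        name, _, arg = out.removeprefix("ACTION:").partition("| QUERY:")
        result = TOOLS[name.strip()](arg.strip())  # act on the outside world
        scratchpad += f"{out}\nRESULT: {result}\n" # feed the result back in
    else:
        print(out)                                 # model declared itself done
        break
```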
> Even though there's a lot of powerful AI programs out there, they're really nothing more than complex tools created to solve problems.
That's what they're designed to be, yes, but sometimes you end up with something more than what you intended.
Thus, the standard for AGI is lower than the standard for self-driving cars. Existence of an AGI does not imply existence of self-driving cars.
To put it another way: take the worst human driver who still qualifies as "could learn to drive a car." (It's assumed the person is intelligent.) Construct a Turing-like test where we observe a car possibly being driven by that person, but we don't know who or what is actually driving the car. The car drives over a curb, kills a cat crossing the street, narrowly misses a dozen pedestrians in crosswalks, and finally parks poorly in a mall parking lot -- just like we would expect from the world's worst human driver. After observing that car trip, would we celebrate the long-awaited arrival of self-driving cars?
It's possible that we're limited on the hardware / modality front.
AGI is about inventing the brain, while the car is an entire body.
*One can nitpick oneself into creating a series of examples of fictional disabled people, but let's not bikeshed please.
I'm starting to suspect that GPU vendors are the ones selling AI.
If you think about it, software is often used by the hardware industry to increase sales; it's Wirth's law. Seems like it's the norm now.
Learning is an automating activity, when you've learnt something it means you can do it without being intelligent about it, on 'autopilot' as it were.
A crucial aspect of intelligence is the denial of learning when appropriate, i.e. not acting according to learning, which seems to be a conscious, prefrontal cortex related ability, and hence closely connected with anxiety and suspicion.
Evolutionarily learning works well with slow changes in evolutionary pressure, intelligence gets useful when the pressure is erratic and evolutionary signals are unreliable in mediating information about how to survive and reproduce efficiently.
The AGI believers in the capitalist class aren't alone in confusing this. Learning is much simpler and more immediately rewarding than intelligence, so the bourgeoisie has made learning its ideal for all of modernity and replaced educational exams based on more or less intelligent conversation with formal measurements of learning.
So there's that too.