Such architecture works great for differentiable data, such as images or audio, but the improvements on natural language tasks are only incremental.
I was thinking maybe DeepMind's RL+DL is the way to AGI, since it does offer an elegant and complete framework. But it seems like even DeepMind had trouble getting it to work in more realistic scenarios, so maybe our modelling of intelligence is still hopelessly romantic.
And - let's be real - a lot of human symbolic reasoning actually happens outside of the brain, on paper or computer screens. We painstakingly learn relatively simple transformations and feedback loops for manipulating this external memory, and then bootstrap it into short-term reaction via lots of practice.
I tend to think that the problems are: a) Tightly defined / domain-specific loss functions. If all I ever do is ask you to identify pictures of bananas, you'll never get around to writing the Great American Novel. And we don't know how to train the kinds of adaptive or free-form loss functions that would get us away from these domain-specific losses.
b) Similarly, I have a soft-spot for the view that a mind is only as good as its set of inputs. We currently mostly build models that are only receptive (image, sound) or generative. Reinforcement learning is getting progress on feedback loops, but I have the sense that there's still a long way to go.
c) I have the feeling that there's still a long way to go in understanding how to deal with time...
d) As great as LSTMs are, there still seems to be some shortcoming in how memory is incorporated into networks. LSTMs give a decent approximation of short-term memory, but still seem far from great. This might be the key to symbolic reasoning, though.
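For concreteness, here's the short-term-memory mechanism an LSTM actually implements, as a single NumPy step (a standard textbook formulation, not any particular framework's code):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x, h_prev, c_prev, W, b):
    """One LSTM step. W: (4H, D+H), b: (4H,). The cell state c carries
    the longer-term memory; gates decide what to forget, write, expose."""
    H = h_prev.shape[0]
    z = W @ np.concatenate([x, h_prev]) + b
    f = sigmoid(z[:H])            # forget gate: keep or erase old memory
    i = sigmoid(z[H:2 * H])       # input gate: how much new info to write
    o = sigmoid(z[2 * H:3 * H])   # output gate: what to expose
    g = np.tanh(z[3 * H:])        # candidate memory content
    c = f * c_prev + i * g        # selectively keep old memory, write new
    h = o * np.tanh(c)            # expose a gated view of the memory
    return h, c
```

The whole "memory" is that one vector c being multiplicatively gated each step, which is why it feels like a decent but limited approximation of short-term memory.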
Writing all that down, I gotta say I agree fundamentally with the DeepMind research priorities on reinforcement learning and multi-modal models.
What you might see as logical operations "not mattering", I would see as logical operations integrated so deeply into reflexive operations that it's hard to see where one ends and the other begins. The contrast is that humans can do pattern recognition in a neural net fashion, taking something like the multidimensional average of a set of things. But a human can also receive a language-level input that some characteristic is or isn't important for recognizing a given thing and incorporate that input into their broad-average concepts. That kind of thing can't be done by deep learning currently - well, not in a non-kludgey sort of way.
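A toy illustration of that gap, with an entirely hypothetical three-feature layout: a prototype ("multidimensional average") concept, plus the kind of one-shot verbal correction a human applies instantly and a net can't:

```python
import numpy as np

# Hypothetical feature layout, for illustration only: [color, size, shape]
examples = np.array([[0.9, 0.5, 0.10],
                     [0.1, 0.6, 0.20],
                     [0.5, 0.4, 0.15]])
prototype = examples.mean(axis=0)   # the "multidimensional average" concept

weights = np.ones(3)
# The language-level input: "color isn't important for this category."
# A human folds that in instantly; here it's just zeroing one weight.
weights[0] = 0.0

def distance(x):
    """Weighted distance to the prototype; smaller = more typical."""
    return np.sqrt(np.sum(weights * (x - prototype) ** 2))
```

After the correction, two items differing only in color are judged equally typical; the point is that a trained deep net has no comparably clean slot for that sentence to plug into.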
Similarly, I have a soft-spot for the view that a mind is only as good as its set of inputs.
It depends on how you want to mean that. A human can take inputs on one thing and apply them seamlessly to another thing. Neural nets tend to be very dependent on the task-focused content fed them.
"let's be real - a lot of human symbolic reasoning actually happens outside of the brain"
I was a chess master at age 10. Let's be real - when I play blitz and bullet chess, I am performing multi-level symbolic reasoning at multiple frames per second. In my brain.
I am not an alien. I can do these kinds of symbolic calculations faster than 99.6% of the population mainly because I learned chess as a kid, making it a "native language", and I got good at it early so I spent much of my youth training my neurons with this perceptual task.
My point is not to claim I'm a genius. There are dozens of players who can school me in bullet the way I can school most people.
My point is that human beings DO symbolic reasoning; it is the core of our intelligence: being able to take in different kinds of input, organize some of them into relevant higher-level clusters, sort the clusters by priority, make a plan to deal with the highest-priority clusters, act, rinse and repeat.
Humans simply do not have the computational ability to make decisions based on raw perceptual data in real time. Our brains are designed to act on higher levels of symbolic meaning, and we have perceptual layers to help us turn reality into manageable chunks.
In cognitive psychology this is referred to, not surprisingly, as "chunking": https://en.wikipedia.org/wiki/Chunking_(psychology)
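A classic concrete example from that literature (Miller's "magical number seven"): recoding a binary string into octal collapses twelve raw perceptual items into four higher-level symbols, which is exactly the compression being described:

```python
# Recode a 12-symbol binary string into 4 octal "chunks": same
# information, far fewer items competing for working memory.
def chunk_binary_to_octal(bits):
    assert len(bits) % 3 == 0
    return [int(bits[i:i + 3], 2) for i in range(0, len(bits), 3)]

raw = "101110010011"                   # 12 low-level items
chunks = chunk_binary_to_octal(raw)    # 4 higher-level items: [5, 6, 2, 3]
```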
Until DeepMind starts working on anything resembling chunking, I believe they are wasting their time and money.
If that's the case, then to me it seems like AGI is limited by the amount and type of data a NN can be fed. To have an intelligence like homo sapiens, wouldn't you expect that no matter the underlying NN, it has to take in a comparable amount of data to what the 5+ human senses take in over a lifetime, plus the actual internal 'learning' (i.e. pattern recognition, heuristics, and intuition), plus some kind of meta-awareness (consciousness) to speed up and aid this process, plus dedicated pieces of the brain such as Broca's/Wernicke's areas?
The problem is so inherently hard that we are struggling even to come up with a meaningful task that would tell us how badly we are doing. That comes back to your first point: I think finding the right loss function is a chicken-and-egg situation here. When you have the loss function at hand, you already know what task and problem you are going to solve, and then it becomes easier. But that is apparently not our current situation.
That is why I think DeepMind has a good reason to go after reinforcement learning; after all, that is how we humans are trained, through exams and feedback.
As to your point about LSTMs, I am not keen to claim qualitatively whether they can or can't handle short/long-term memory. That is apparently task-dependent, and all the concepts involved are ill-defined.
I don't think biological precedent is the only or even most valuable heuristic for deciding where to research intelligence... But I don't see where there is evidence that symbolic reasoning is either necessary or sufficient for AGI, except people describing how they think their brain works.
Related, there are a lot of statements that symbolic or rule based systems do better / as well as / almost as well as neural methods. Citation please, I'd love a map of which ML problems are still best solved with symbolic systems. (Sincerely - it's not that I expect there aren't any.)
Turns out this is wrong. Human brains are very efficient.
You think symbolic reasoning is not a function? In what sense do you think 'symbolic reasoning' is a distinct thing from 'function approximation'?
It didn't understand the source material, it is just very good at memorizing and faking.
People essentially rely on emotions to make all their decisions. Emotions implicitly represent rapid-fire unconscious decision work.
Again the current popular understanding of the mind separates emotion from thinking. They are not distinct. Emotional processing is another kind of thinking, and it drives the show.
The only reason I am convinced it is NOT doing a good job is how utterly difficult it is to apply NNs to the dialog generation/management domain of business; often they behave much worse than rule-based systems.
"Hence, if it requires, say, a thousand years to fit for easy flight a bird which started with rudimentary wings, or ten thousand for one which started with no wings at all and had to sprout them ab initio, it might be assumed that the flying machine which will really fly might be evolved by the combined and continuous efforts of mathematicians and mechanicians in from one million to ten million years--provided, of course, we can meanwhile eliminate such little drawbacks and embarrassments as the existing relation between weight and strength in inorganic materials. [Emphasis added.] The New York Times, Oct 9, 1903, p. 6."
-----
A couple of the leading minds in AGI say it's a long ways away... but the universe likes to give us the finger, so maybe AGI is right on the horizon. Maybe we'll look back at this in 10 years and laugh (if we're here).
We really don't learn anything about the problem at hand by talking in generic terms. We use these arguments when we want to justify our hopes and feelings, but there is really nothing to learn from them.
Hinton, Hassabis, Bengio and others point out that we can't 'brute force' AI development. There needs to be actual breakthroughs in the field and there may be several decades between them.
AI, brain science and cognitive science are extremely difficult fields with small advances, yet people assume that it's possible to 'brute force' AGI by just adding more computing power and doing more of the same.
Macroeconomics is probably a less complex research subject than AI or brain science, but nobody assumes that you can brute force a truly great macroeconomic model in a few years just by spending a little more on resources.
Do people assume that? I mean, I'm sure some people do, but I don't think I've encountered many people, at least not in the AI safety movement, that actually think it's a matter of more hardware power. Some people think it's possible that that's all that's necessary, but I don't think most will say that that's the most likely path to AGI (rather than, as you say, actual breakthroughs happening).
What are the components of intelligence? For example, AlphaZero can solve problems that are hard for humans to solve in the domain of chess, shogi and go- is it intelligent? Is its problem-solving ability, limited as it is to the domain of three board games, a necessary component of general intelligence? Have we even made any tiny baby steps on the road to AGI, with the advances of the last few years, or are we merely chasing our tails in a dead end of statistical approximation that will never sufficiently, well, approximate, true intelligence?
These are very hard questions to answer and the most conservative answers suggest that AGI will not happen in a short time, as a sudden growth spurt that takes us from no-AGI to AGI. With flight, it sufficed to blow up a big balloon with hot air and- tadaaaa! Flight. There really seems to be no such one neat trick for AGI. It will most likely be tiny baby steps all the way up.
Mainly in the idea/concept of back-propagation. It's something that I've thought about myself. For the longest time, I could never understand how it worked, then I went thru Ng's "ML Class" (in 2011, which was based around Octave), and one part was developing a neural network with backprop - and the calcs being done using linear algebra. It suddenly "clicked" for me; I finally understood (maybe not to the detailed level I'd like - but to the general idea) how it all worked.
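For anyone who hasn't had that "click" yet, the whole mechanism really does fit in a few lines of linear algebra. This is a toy XOR network in the spirit of that kind of course exercise (my own sketch, not the actual class code):

```python
import numpy as np

# A tiny two-layer network trained by backprop on XOR: forward pass,
# then the chain rule pushed backwards through the layers.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(0, 1, (2, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1)); b2 = np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

loss_before = np.mean((sigmoid(sigmoid(X @ W1 + b1) @ W2 + b2) - y) ** 2)
for _ in range(5000):
    h = sigmoid(X @ W1 + b1)                # forward pass, hidden layer
    out = sigmoid(h @ W2 + b2)              # forward pass, output layer
    d_out = (out - y) * out * (1 - out)     # gradient at the output (squared error)
    d_h = (d_out @ W2.T) * h * (1 - h)      # chain rule: push the gradient back
    W2 -= 0.5 * h.T @ d_out; b2 -= 0.5 * d_out.sum(0)
    W1 -= 0.5 * X.T @ d_h;   b1 -= 0.5 * d_h.sum(0)
loss_after = np.mean((out - y) ** 2)
```

Every step is an ordinary matrix product, which is exactly why seeing it as linear algebra makes it click - and also why it looks so unlike anything a neuron could plausibly do.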
And while I was excited (and still am) by that revelation, at the same time I thought "this seems really overly complex" and "there's no way this kind of thing is happening in a real brain".
Indeed, as far as we've been able to find (although research continues, and there have been hints and models which may challenge things), brains (well, neurons) don't do backprop; as far as we know, there's no biological mechanism to allow for backprop to occur.
So how do biological brains learn? Furthermore, how are they able to learn from only a very few examples in most cases (vs the thousands to millions of examples needed by deep learning neural networks)?
We've come up with a very well-engineered solution to the problem, and it works - but it seems overly complex. We've essentially made an airplane that is part ornithopter, part fixed-wing, part balloon, and part helicopter. Sure it flies - but it's rather overly complex, right?
Humanity cracked the nut of heavier-than-air flight when it finally shed the idea that the wings had to flap. While it was known this was the way forward long before the Wrights or even Langley (and likely even before Lilienthal), a lot of wasted time and effort went into flying machines with flapping wings, because it was thought that "that's the way birds do it, right?"
So - in addition to the idea that backprop may not be all it's cracked up to be - what if we also need to figure out the "fixed wing" solution to artificial intelligence? Instead of trying to emulate and imitate nature so closely, perhaps there's a shortcut that currently we're missing?
I do recall a recent paper that was mentioned here on HN that I don't completely understand - that may be a way forward (the paper was called "Neural Ordinary Differential Equations"). Even so, it too seems way too complex to be a biologically plausible model of what a brain does...
I've spent a lot of time trying to explain this to people, that there is a confluence between the human brain and the machine, people tend to look at the machine separately, which is a mistake. When I say unequivocally, 'there is no such thing as machine intelligence', I just get blank stares.
Overall, I'd agree that really powerful tools for specific tasks is going to be the majority of "AI" in the coming years.
But yes, it's extremely unlikely that nature implements backpropagation directly, as it relies on non-local gradients.
Human flight is not as agile or energy-efficient as a dragonfly, but it is faster and stronger. Just like artificial learning may not be as sample-efficient as the human brain. It is a learning intelligence nonetheless, and we are already working with the core mechanisms of reasoning and deduction.
You could bet that AGI won't manifest until AI and robotics are properly fused. Cognition does not happen in a void. This image of a purely rational mind floating in an abyss is an outdated paradigm to which many in the AI community still cling. Instead, the body and environment become incorporated into the computation.
Anecdotal, but nearly all of my programmer friends believe that full-blown AGI is less than a decade away.
It's worth thinking about this section of [0] when various AI experts offer predictions:
> Two: History shows that for the general public, and even for scientists not in a key inner circle, and even for scientists in that key circle, it is very often the case that key technological developments still seem decades away, five years before they show up.
> In 1901, two years before helping build the first heavier-than-air flyer, Wilbur Wright told his brother that powered flight was fifty years away.
> In 1939, three years before he personally oversaw the first critical chain reaction in a pile of uranium bricks, Enrico Fermi voiced 90% confidence that it was impossible to use uranium to sustain a fission chain reaction. I believe Fermi also said a year after that, aka two years before the denouement, that if net power from fission was even possible (as he then granted some greater plausibility) then it would be fifty years off; but for this I neglected to keep the citation.
> And of course if you’re not the Wright Brothers or Enrico Fermi, you will be even more surprised. Most of the world learned that atomic weapons were now a thing when they woke up to the headlines about Hiroshima. There were esteemed intellectuals saying four years after the Wright Flyer that heavier-than-air flight was impossible, because knowledge propagated more slowly back then.
My impression is this is common among DeepMind folks and not an aberration. (See also dwiel's comment elsewhere.) It is super weird for me that Demis Hassabis says AGI is nowhere close. Is he lying? Or does he mean 10 years is not close?
Maybe he just doesn’t believe the same thing some of his coworkers do? Seems pretty drastic to jump to the conclusion he’s lying if he implies it’s more than 10 years away.
Also, do you believe AGI is currently more a compute/hardware problem, or an algorithmic problem?
People lack nuance and critical thinking.
https://medium.com/intuitionmachine/near-term-agi-should-be-...
Maybe our existing methods are good enough given enough compute to reach AGI but our datasets are too low fidelity and non-representative of the problem space to reach desired results?
Think of a 16-year-old human:
* it has received less than 400 million wakeful seconds of data + 100 million seconds of sleep,
* it has made only a few million high-level cognitive decisions where feedback is important and the delay is tens of seconds or several minutes (say a few thousand per day). From just a few million samples it has learned to behave in society like a human and do human things.
* Assuming a 50 ms learning rate on average, at the lowest level there are at most 10 billion iterations per neuron (short-term synaptic plasticity acts on a timescale of tens of milliseconds to a few minutes).
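A quick sanity check of those figures, assuming 16 wakeful hours per day and one low-level update every 50 ms while awake:

```python
# Back-of-the-envelope check of the sample-budget numbers above.
seconds_alive = 16 * 365.25 * 24 * 3600    # ~5.0e8 total seconds in 16 years
wakeful = seconds_alive * 16 / 24          # ~3.4e8, under 400 million
iterations = wakeful / 0.05                # ~6.7e9, under 10 billion
```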
Humans generate a very detailed model of their environment with very little data and even less feedback. They can learn a complex concept from one example. For example, you need only one example of a pickpocket to understand the whole concept.
I think we need simulation of other agents' outputs as the primary tool for reasoning. That seems to be how intelligence emerged in evolution.
Something like this: choose a desired action > simulate other agents' outputs based on the future state after performing the action > check the reward for this action after simulating the outputs of others > perform the action or not > update all agents' models and relations in a "world" graph model
I think the world could be modeled as a simple graph and each agent as a NN.
Then, based on the graph, we could conduct symbolic reasoning and very fast learning (by updating edges).
I think these models also need a good physical simulator and a good understanding of competitiveness.
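That loop can be sketched concretely; everything below is a hypothetical stand-in (scalar states instead of a world graph, fixed responses instead of per-agent NNs), just to show the control flow:

```python
class World:
    def __init__(self, state):
        self.state = state
    def after(self, action):
        return self.state + action          # future state after acting

class OtherAgent:
    def __init__(self, bias):
        self.bias = bias
    def predict_response(self, state):
        return state + self.bias            # simulated output of another agent

class SelfModel:
    GOAL = 10
    def evaluate(self, state):
        return -abs(state - self.GOAL)      # reward: closeness to the goal

def choose_action(self_model, others, world, candidates):
    best, best_reward = None, float("-inf")
    for action in candidates:                 # 1. choose a desired action
        future = world.after(action)          # 2. project the future state
        for agent in others:                  # 3. simulate others' outputs
            future = agent.predict_response(future)
        reward = self_model.evaluate(future)  # 4. check reward after simulation
        if reward > best_reward:
            best, best_reward = action, reward
    return best, best_reward                  # 5. act (model updates omitted)

action, reward = choose_action(SelfModel(), [OtherAgent(1), OtherAgent(-2)],
                               World(5), range(8))
```

Here the best action is the one whose outcome, after the other agents have had their say, lands closest to the goal.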
Is anyone aware of such trials of building AGI as I described?
Humans have natural language as a big competitive advantage (an easy way to compress parts of the world graph and pass it to others - though ambiguous; I think with artificial machines it can be done more efficiently). Another advantage is knowledge storage - also easy to do with machines.
If we can build insect AI, building human AI should be easy.
On the other hand, the ubiquity of knowledge once it's available could lead any maniac to use it for the wrong purpose and wipe out humanity from their basement.
My feelings on the potential of AGI are therefore mixed. I for one have just found my particular niche in the workforce and am finally reaping the dividends from decades of hard work. Having AGI displace me and millions (or billions) of individuals is frightening and definitely keeps me on my toes.
Technology changes the world; my parents both worked for newspapers and talk endlessly about how the demise of their industry after the advent of the internet is so unfortunate. Luckily for them they are both at retirement age so their livelihood was not upset by displacement.
If AGI does become a thing it will be interesting to see how millennials and gen Z react to becoming irrelevant in what would have been the peak of their careers.
It seems clear that autonomous systems which can apply their computational machinery to a diverse range of problems, and can, in a diverse range of settings, formulate instrumental goals as part of a plan to attain a final goal, do exist.
Because that's what humans are, at least some of the time.
edit: In terms of Turing-completeness analogues, the best candidate for AGI I think would be simply brute force capability: can this agent try all possible solutions until it solves this problem? (obviously using a heuristic to prioritize) -- that is, it'd employ a form of Universal Search[1] (aka Levin Search). Humans don't necessarily pass this test rigorously because we'd always get bored with a problem and because we have finite memory. But then CPUs are not truly Turing complete either (it's "just" a good model).
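A toy flavor of that brute-force capability (my own sketch; real Levin search also budgets each candidate's runtime in proportion to its encoding length, which this omits): enumerate candidate programs - here, arithmetic expressions over x - in order of size until one matches the input/output examples.

```python
from itertools import count, product

def search(examples, atoms=("x", "1", "2"), ops=("+", "*", "-")):
    """Enumerate token strings by length; return the first expression
    consistent with all (input, output) examples. Runs forever if no
    expression over this tiny grammar fits."""
    for size in count(1):
        for tokens in product(atoms + ops, repeat=size):
            expr = " ".join(tokens)
            try:
                if all(eval(expr, {"x": x}) == y for x, y in examples):
                    return expr
            except (SyntaxError, NameError, TypeError, ZeroDivisionError):
                continue   # most token strings aren't valid programs
```

For examples like (1, 2) and (5, 6) it discovers "x + 1" after grinding through the shorter junk candidates - which is exactly why a prioritizing heuristic matters in anything non-toy.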
Don't believe me? Check out this series of marketing videos on YouTube by GM Matthew Sadler.
1. “Hi, I’m GM Matthew Sadler, and in this series of videos we’re taking a look at new games between AlphaZero, DeepMind’s general purpose artificial intelligence system, and Stockfish” (1)
2. “Hi, I’m GM Matthew Sadler, and welcome to this review of the World Championship match between Magnus Carlsen and Fabiano Caruana. And it’s a review with a difference, because we are taking a look at the games together with AlphaZero, DeepMind’s general purpose artificial intelligence system...” (2)
3. “Hi, I’m GM Matthew Sadler, and in this video we’ll be taking a look at a game between AlphaZero, DeepMind’s general purpose artificial intelligence system, and Stockfish” (3)
I could go on, but you get my point. Search youtube for "Sadler DeepMind" and you'll see all the rest. This is a script.
But wait, you say, that's just some random unaffiliated independent grandmaster who just happens to be using an inaccurate script on his own, no DeepMind connection at all! And to that I would say, check out this same random GM being quoted directly on DeepMind's blog waxing eloquently and rapturously about AlphaZero's incredible qualities. (4)
Let's be clear. I am in no way dismissing AlphaZero's truly remarkable abilities in both chess and other games like go and shogi. Nor do I have a problem with Demis Hassabis making headlines for stating the obvious about deep learning (that it's good at solving certain limited types of puzzles, but we are a long way from AGI - why is this controversial?).
My problem is that Hassabis is speaking out of both sides of his mouth. Increasing DeepMind/Google's value by many millions with his marketing message, while acting like he's not doing that. It feels intellectually dishonest.
To solve this, all DeepMind needs to do is stop instructing its Grandmaster mouthpieces to refer to AlphaZero as a "general artificial intelligence system". Let's see how long that takes.
(1) https://www.youtube.com/watch?v=2-wFUdvKTVQ&t=0m10s (2) https://www.youtube.com/watch?v=X4T0_IoGQCE&t=0m05s (3) https://www.youtube.com/watch?v=jS26Ct34YrQ&t=0m05s (4) https://deepmind.com/blog/alphazero-shedding-new-light-grand...
A general reinforcement learning algorithm that masters chess, shogi, and Go through self-play
http://science.sciencemag.org/content/362/6419/1140
"General" as in what? As opposed to reinforcement learning, in er, general? As opposed to other ANN architectures?
>> I am in no way dismissing AlphaZero's truly remarkable abilities in both chess and other games like go and shogi.
More to the point- it's only chess, go and shogi; not games "like" those.
The AlphaZero architecture has the structure of a chessboard and the range of moves of pieces in chess, go and shogi hard-coded and you can't just take a trained AlphaZero model and apply it to a game that doesn't have either the board or the moves of those three games.
To be blunt, AlphaZero has mastered chess, go and shogi, but it can't play noughts-and-crosses.
Maybe it's just me, but "general purpose artificial intelligence system" sounds like, well, General Artificial Intelligence. Which sounds like Artificial General Intelligence, which is the holy grail.
How do you know we aren't?
BTW, if you hadn't noticed, Season Three just came out on Netflix. I'm champing at the bit to binge watch that... :-)
As an alternative, the human mind could be some sort of halting oracle. That's a well defined entity in computer science which cannot be reduced to Turing computation, thus cannot be any sort of AI, since we cannot create any form of computation more powerful than a Turing machine. How have we ruled out that possibility? As far as I can tell, we have not ruled it out, nor even tried.
Why do we believe man can make fire? Well, dammit, we WANT to make fire. Let's figure out how to do it!
Finally, if we were able to explain the brain well with "metaphysics" it would then be just "physics". It seems that all you are saying here is that there is a mechanism that is not yet understood and it may be fundamentally different than other things we have studied so far (which seems unlikely, I might add).
Similarly we can mathematically and empirically differentiate between halting oracles and Turing machines, so why not leave both possibilities open as scientific explanations, instead of doubling down on the Turing machine model? Call halting oracles materialistic if it makes you feel better.
UPDATE: I've been rate limited for some reason, so here is my response on whether the mind intuitively seems to be a halting oracle.
1. It's obvious there are an infinite number of integers, because whatever number I think of I can add one to it. A Turing machine has to be given the axiom of infinity to make this kind of inference, it cannot derive it in any way. This intuitively looks like an example of the halting oracle at work in my mind. Or, an even more basic practical example: if I do something and it doesn't work, I try something else. Unlike the game AIs that repeatedly try to walk through walls.
2. We programmers write halting programs with great regularity. So, it seems like we are decent at solving the halting problem. Also, note that it is not necessary to solve every problem in order to be an uncomputable halting oracle. All that is necessary is being capable of solving an uncomputable subset of the halting problems. So, the fact that we cannot solve some problems does not imply we are not halting oracles.
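That second point can be made concrete: deciding halting for all programs is impossible, but certifying it for a restricted subset is routine - which is arguably what programmers do every day. A toy conservative checker (my own sketch, not a standard tool) that certifies halting for straight-line Python whose only loops are `for ... in range(...)`:

```python
import ast

def certainly_halts(src):
    """Return True only for programs this checker can prove halt:
    no while loops, no function definitions, and the only calls
    allowed are range(). Conservative: False means 'don't know'."""
    tree = ast.parse(src)
    for node in ast.walk(tree):
        if isinstance(node, (ast.While, ast.FunctionDef, ast.Lambda)):
            return False                  # potentially unbounded: refuse
        if isinstance(node, ast.Call):
            f = node.func
            if not (isinstance(f, ast.Name) and f.id == "range"):
                return False              # unknown call: refuse
    return True                           # straight-line + bounded loops only
```

Every program it accepts really does halt; the interesting question is how much larger the subset humans can certify is, and whether it's uncomputable.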
> Universal Intelligence: A Definition of Machine Intelligence
> A fundamental problem in artificial intelligence is that nobody really knows what intelligence is. The problem is especially acute when we need to consider artificial systems which are significantly different to humans. In this paper we approach this problem in the following way: We take a number of well known informal definitions of human intelligence that have been given by experts, and extract their essential features. These are then mathematically formalised to produce a general measure of intelligence for arbitrary machines. We believe that this equation formally captures the concept of machine intelligence in the broadest reasonable sense. We then show how this formal definition is related to the theory of universal optimal learning agents. Finally, we survey the many other tests and definitions of intelligence that have been proposed for machines.
General intelligence usually meant in relation to humans, but you are correct in noting that it is a spectrum, not a binary.
We seem to be looking at intelligence in humans and thinking we need to develop that, without first defining what intelligence actually is. We don't exist in isolation, and it's likely that the components of intelligence exist to varying degrees in other organisms. In the same way that birds, bats, gliders and insects all have wings that generate lift, what are the things that we have in common with other animals?
Seriously - that's a wicked funny post you had there!
For all we know, Isabelle and Coq could be speeding through the road to consciousness but we're busy having a blast doing Computer Vision pretending it's AI.
Deep Learning is amazeballs for Computer Vision. It's fun because people like looking at pictures. But sufficiently prodded Isabelle proves theorems, I've seen it first hand, and the "sufficient prodding" is way underdeveloped yet. At one point backpropagation was dead too.
Over the medium term I'm not sure AI researchers are the best people to ask. They are completely dependent on how much power the electrical engineers give them - I doubt they have any deeper understanding of what a doubling or quadrupling of computer power will do than any programmer learning about neural networks.
Why do you say that? AFAIK computing architecture and brain architecture are completely different. How would you even begin to compare their power?
Google has TPUs that are off from the estimated power required to simulate a brain by a factor of 3, so the technology is reaching the ballpark. Given that brains were evolved, the part that does symbolic thinking is probably "easy to stumble on" in some practical sense.
[0] https://en.wikipedia.org/wiki/Computer_performance_by_orders...
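The arithmetic behind a claim like that, heavily hedged - both numbers are contested estimates assumed here only for illustration (published brain estimates span many orders of magnitude; the TPU figure is Google's quoted peak for a Cloud TPU v3 board):

```python
brain_ops_per_sec = 1e15     # assumed mid-range estimate; see link above
tpu_flops = 4.2e14           # Cloud TPU v3 board, quoted peak
gap = brain_ops_per_sec / tpu_flops   # ~2.4x: "within a factor of 3"
```

Pick a different brain estimate and the gap moves by orders of magnitude, which is worth keeping in mind before reading too much into it.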
Sure they do. They just hook up four times as much compute power or simulate whatever they want to do in four times as much time. A slow AGI would still be an AGI. But we do not see anything like that if we use four times as much power as in the control. It is still nowhere near.
As I have stated before, AI is the end for us. To put it simply, AI brings the world into a highly unstable configuration where the only likely outcome is the relegation of humans and their way of life. This is because of the fundamental changes imposed on the economics of life by the existence of AI.
Many people say that automation leads to new jobs, not a loss of jobs. Automation has never encroached on the sacred territory of sentience. It is a totally different ball game. It is stupid to compare the automation of a traffic light to that of the brain itself. It is a new phenomenon completely and requires a new, from-the-ground-up assessment. Reaching for the cookie-cutter “automation creates new jobs” simply doesn’t cut it.
The fact of the matter is that even if most of the world is able to harness AI to benefit our current way of life, at least one country won’t. And the country that increases efficiency by displacing human input will win every encounter of every kind that it has with any other country. And the pattern of human displacement will ratchet forward uncontrollably, spreading across the whole face of the earth like a virus. And when humans are no longer necessary they will no longer exist. Not in the way they do now. It’s so important to remember that this is a watershed moment — humans have never dealt with anything like this.
AI could come about tomorrow. The core algorithm for intelligence is probably a lot simpler than is thought. The computing power needed to develop and run AI is probably much lower than it is thought to be. Just because DNNs are not good at this does not mean that something else won’t come out of left field, either from neurological research or pure AI research.
And as I have said before, the only way to ensure that human life continues as we know it is for AI to be banned. For all research and inquires to be made illegal. Some point out that this is difficult to do but like I said, there is no other way. I implore everyone who reads this to become involved in popular efforts to address the problem of AI.
So unless you pose that a function has to rely on its materialization (there is something untouchably magic about biological neural networks, and intelligence is not multiple realizable), it should be possible to functionally model intelligence. Nature shows the way.
AGI will likely obsolete humanity. Either deprecate it or consume it (make us part of the Borg collective). Heck, even a relatively dumb autonomous atom bomb or computer virus may be enough to wipe humanity from the face of the earth.
And what does alarmist even mean? Do you call global warming advocates alarmists? It’s such an annoying, nonsense word that boils down to name-calling really. Discuss the merits of my actual argument. If you think my speculation is wrong, point out a flaw in the chain of logic that leads to my conclusion. Don’t just wave your hand and say that “you can’t prove it” like some evangelical christian talking about god or global warming. Seriously infuriating when there is so much at stake.
Philosophers and futurists are better suited to hypothesize an AGI timeline.
But you take it too far by saying it is anyone's game.
Game theory, security, and economic competition make it impossible to globally ban AI. The incentives to automate the economy (compare the AI revolution with the industrial revolution) and to weaponize AI (a Manhattan Project for intelligence) are just too big. We are already seeing that the US focus on fair and ethical AI puts them at a disadvantage against China and Russia. Enforcing a ban would require pervasive surveillance of the populace, and the Luddites are holding this back.
I suggest you learn to stop worrying about the bomb, and start planning for its arrival.
If we can figure out decision theory and how our values work, then when we figure out AI, we can hopefully build it to be aligned with our values from the start, instead of blindly hoping it happens to play nice with us instead of brushing us off like ants.
So what if it is possible to create a benevolent ai? Nobody said this isn’t possible or even likely. We can also invent a machine that scrubs all the moss off of stones. Just because it’s possible for it to exist doesn’t mean it’s going to proliferate in the free-market of the world and everything in it. The only thing that is important is the fact that
1: we will enter an unstable configuration where any AI implementation that can exist will exist
2: the AI implementations that proliferate will be those that are not hamstrung by being forced to include humans in the loop
3: humans will be out of the loop for every conceivable task and therefore not enjoy the high standard of living that they do in 2018
Is that because you think banned things do not happen? Even if the thing that is banned could confer a massive advantage to the entities developing it?
I think AGI is unlikely to be a thing in my lifetime, or even my children's. But if I were worried about it, I'd probably focus on developing a strategy to create a benevolent intelligence FIRST, rather than try to prevent everyone else from ever creating one via agreements and laws.
Developing a good AI first is useless because, as I have said, the creation of AI enters us into an unstable configuration where bad AI will crop up regardless. Keeping bad AI from existing is infinitely easier when AI does not exist as a technology, as opposed to when it's a turnkey thing.
Good luck with that.
If AGI is impossible, it will never happen. We already know that perfectly intelligent AGIs are not physically possible: per DeepMind's foundational theoretical framework, optimal compression is non-computable, and besides that, it is not possible for an inference machine to know all of its universe (unless it is bigger than the universe by at least 1 bit, AKA it is the universe).
That leaves being more intelligent than all of humanity. To accomplish that, by Shannon's own estimates, there is currently not enough information available in datasets and the internet. Chinese efforts to artificially increase the intelligence of babies are still in their infancy too (the substrate of AGI is irrelevant for computationalism, unless it absolutely needs to run on the IBM 5100).
So until that time travels, we will have to make do with being smarter than/indistinguishable from a human on all economic tasks. We're already there for some subset of humanity; you may even be a part of that subset, if you believed this post was written by a human.