It's a shame that humans will evermore remain the largest thing in the universe, and that the computer that will one day read this comment with semantic understanding will be more diminutive in all regards.
But as the paper clearly showed, the pinnacle of scientific thought in 2017 is future proof.
Q.E.D.
(And for the computers out there: q.e.d.)
Yes, on a linear basis it's not going to happen anytime soon.
But the trends in the space are developing around self-interacting discrete models, to great effect (see OpenAI's DALL-E).
The better and more broadly systems manage to self-interact, the faster we're going to see impressive results.
As with most compounding effects, growth today is slower than growth tomorrow, but faster than it was yesterday.
The human brain technically took 13.7 billion years to develop from purely chaos-driven processes, and even then it was pretty worthless up until we finally developed both language and writing, so that we ourselves could enjoy lasting compounding effects from scaling up parallel self-interactions.
And after 200,000 years of marginal progress, we suddenly went, in less than 7,000 years, from having no writing and thinking the ground below our feet the largest thing in existence to measuring how long it takes the fastest thing in our universe (light) to cross the smallest stable object in it (a hydrogen atom).
Let's give the computers some breathing room before declaring the impossibility of their taking the torch from us, and in the process, let's not underestimate the effects of exponential self-interactions and the compounding effects thereof.
On the other hand, those saying "it will surely happen" are missing the impact of diminishing returns.
Personally, I don't doubt that AGI is possible, even though making it a reality might take any number of centuries or millennia, if humanity even sticks around that long and AGI is still a goal it pursues.
The problem lies in everyone thinking on a more human timescale: "Will we see AGI during my lifetime?" The answer to that is almost certainly no, no matter how much the industry tries to sell state machines as AI or fledgling efforts as revolutionary advances.
Being overly optimistic about time scales only hurts oneself, like expecting that we'd all have flying cars by now, or that we'd be able to get rid of ICE vehicles or significantly slow the pace of climate change.
It’s so difficult to talk about AGI, sentience, consciousness in general because there are no clear definitions apart from “I’ll know it when I see it.”
It doesn't really matter what your guesses are, none of the results are good news.
The device running Spotify may also have an antenna, but I hope you get the analogy. My analogy is not meant to be taken faithfully, so that we need to start looking for antennas now instead of neurons. I am just saying that maybe the neuron-counting game is not the only thing. Maybe there is something else -- not magical, not divine, but physical and as-of-yet unknown. Humanity didn't always know everything, and maybe still doesn't.
The human optic nerve can't carry more than ~10 Mbit/s. Yet, somehow, a 640x480 screen at 60 fps isn't the best possible movie-watching setup for one-eyed people, even though such a stream can be delivered in about 9 Mbit/s of compressed video.
Lots of computation (like aggregating data into a lower-quality image; e.g., the input from human rod cells is aggregated through interneurons) happens throughout the body. The 16 neurons you are referring to are likely fed carefully processed input, not raw input.
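A quick back-of-envelope check on those figures (a sketch: the 640x480@60 resolution and the ~10 Mbit/s optic-nerve cap are taken from the comment above, while 24-bit RGB is my assumption). Truly raw video at that resolution is roughly 40x larger than the optic nerve's budget, so any ~9 Mbit/s stream at that resolution is already heavily compressed:

```python
# Back-of-envelope bitrate check. 640x480@60 comes from the comment above;
# 24-bit RGB and the ~10 Mbit/s optic-nerve figure are round-number assumptions.
width, height, fps, bits_per_pixel = 640, 480, 60, 24

raw_bps = width * height * fps * bits_per_pixel   # truly uncompressed bitrate
print(f"Raw 640x480@60 RGB: {raw_bps / 1e6:.0f} Mbit/s")  # ~442 Mbit/s

optic_nerve_mbps = 10
print(f"Compression needed to fit the optic nerve: "
      f"{raw_bps / 1e6 / optic_nerve_mbps:.0f}x")
```

Which is the commenter's point in miniature: what travels down the nerve is a carefully preprocessed signal, not raw pixels.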
ENIAC also started big and slow. Now it fits in a microSD card.
I'm curious as to your answer. Because if one's building a purpose-built analog computer for the task, my estimate is a few hundred transistors, a few thousand passives, and ... an absolutely trivial amount of power on modern process.
1960s - Herbert Simon predicts: "Machines will be capable, within 20 years, of doing any work a man can do."
1993 - Vernor Vinge predicts super-intelligent AIs 'within 30 years'.
2011 - Ray Kurzweil predicts the singularity (enabled by super-intelligent AIs) will occur by 2045, 34 years after the prediction was made.
So the distance into the future at which we achieve strong AI, and hence the singularity, has been receding, according to its most optimistic proponents, by more than one year per year.
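Taking those three data points at face value (and pinning "1960s" to 1965 purely for the arithmetic), the receding-horizon claim can be checked directly:

```python
# The three predictions quoted above, as (year made, predicted arrival year).
# "1960s" is taken as 1965 as an assumption for the estimate.
predictions = [
    (1965, 1985),  # Simon: "within 20 years"
    (1993, 2023),  # Vinge: "within 30 years"
    (2011, 2045),  # Kurzweil: singularity by 2045
]

for (y0, t0), (y1, t1) in zip(predictions, predictions[1:]):
    rate = (t1 - t0) / (y1 - y0)  # how fast the predicted arrival date recedes
    print(f"{y0} -> {y1}: arrival date receded {rate:.2f} years per year")
```

Both intervals come out above 1.0 years per year, consistent with the claim that the goalposts have moved faster than the calendar.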
Eventually I believe we will get a good enough understanding of the subject that we can map out a route to implementing AGI, and then our progress will accelerate towards a known and understood goal.
We won't build a duplicate of the human brain - unless we have AGI first to tell us how. But we really don't know what portions of the human brain are needed for useful AGI.
You can look at GPT-3. On the one hand, never being reliable puts a crimp on practical applications. On the other hand, it does a lot of amazing things that seem human. I'd say that since we don't know where we're going in a profound way, we don't know how far we have to go.
OTOH, we see specialized intelligences perform all sorts of superhuman feats, all the time, and more impressive abilities join them all the time. These, however, are not human-like intelligences. They aren't even bee-like. They are so alien that we don't see "general intelligence" in them.
So, my guess is that we'll have some extremely complex and capable systems that are extremely alien in nature well before we can have a conversation with a human-like intelligent system. They'll be useful and treated like oracles - we won't be able to understand their reasoning, but they'll be right most of the time.
It is, however, a matter of time and desire. There is nothing inherently magical in our mammalian brains and our organic bodies that can't be simulated by a sufficiently capable machine, and the technology for that will, eventually, become possible, then available, then practical, and then ubiquitous.
And I'd like to believe that you're right about it only being a matter of time and desire, but I do also worry about the possibility that we're actually on a different kind of exponential curve and will instead reach a point where we see diminishing returns.
The last and most difficult step in safe AGI is moral/value alignment. That is unfortunately probably last on the timeline of likely achievements, because it requires general solutions to both planning and reasoning, as well as an accurate world model and an understanding of physical actions and their consequences.
Do we observe general intelligence in nature though, here on Earth implemented with the materials available in our environment? If so, it’s a bold claim to make that it will always be impossible to achieve it artificially.
This puts the timeline to about 2029-2035.
[0] https://www.scientificamerican.com/article/what-is-the-memor...
The trick of the human brain is that the "processing power" is enmeshed with the "memory", so the brain must have a colossal computational bandwidth even with pretty slow neurons. I suppose that bandwidth is larger than that of most modern GPU/TPU clusters, which also don't have anything comparable to 2.5 PB of RAM at their disposal.
The revolution should be mostly in the architecture, much like the deep learning evolution was enabled by GPUs.
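For scale, here is a rough, assumption-laden comparison of that 2.5 PB memory estimate against the on-board memory of a modern accelerator (the 80 GB figure is a stand-in I'm assuming, not sourced from the thread):

```python
# How many 80 GB accelerators' worth of memory is 2.5 PB?
# Both numbers are deliberately round: 2.5 PB is the estimate cited above,
# 80 GB is an assumed figure for one modern GPU/TPU's on-board memory.
brain_bytes = 2.5e15
accelerator_bytes = 80e9

print(f"{brain_bytes / accelerator_bytes:,.0f} accelerators")  # 31,250
```

So even ignoring the bandwidth question entirely, matching the raw capacity alone would take tens of thousands of today's devices.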
- the modulation onto a high-frequency 5 GHz carrier to my router, which gets modulated again for Ethernet and then for the cable modem, and then who knows what happens, modulated again as light waves, etc.
None of these feats were managed by evolution, yet we did it, and it’s now usual, we don’t even notice it.
I think AI will be the same. Yes, it's a bit complicated, but in the last 10 years we have made an astonishing amount of progress. 10 more years and we might surpass our own fixed capacities. What happens after that?
So far our brain seems to be a physical process (not magical), and there is no reason to believe that we cannot emulate or even surpass our abilities in silicon.
The brain-circuit: even the simplest networks of neurons defy understanding. So how do neuroscientists hope to untangle brains with billions of cells? https://www.nature.com/articles/548150a
One thing that stuck with me from the radio engineers is that something as commonplace as a Yagi antenna can't be fully modeled due to the sheer number of interactions, and developing new designs often requires an iterative trial-and-error approach.
Caveat - I was told this in the mid 2000s, so maybe it's changed since then.
Unfortunately, of course, the people who might have some of the skills needed to actually build such a thing (at the bricks-and-mortar level, anyway) are precisely those whose understanding of what intelligence actually is may be less than ideal. As a hint, it has nothing to do with passing tests or other such mundanity.
A more interesting approach would be to consider language - if cooperating entities can be constructed that (eventually yet spontaneously) created ways to communicate between each other, then maybe some progress has been made.
Further, if we appreciate that any idea or discovery can be communicated to even the most recently discovered humans in their own language (though we may need to build up the various concepts from basic terms), and that no such feat is possible with the other animals, then we might wonder whether another intelligence (artificial or otherwise) could encode concepts that are unreachable in any of our languages, and thus in our thoughts. Or, alternatively, whether our languages are conceptually complete in some fundamental sense, so that there simply cannot be such a 'higher' intelligence (artificial or otherwise).
You can take from that what you will, but I suspect it will always seem as though we've made no progress, because anything we learn to emulate we necessarily understand well enough that it will no longer seem magical. I wouldn't put it past us to start thinking of humans as automata before we declare that machines can think.
You can actually do it. 100-year-olds usually don't follow news on artificial intelligence, so their reactions will be genuine.
Orwell’s 1984, written in the mid forties, has pop songs written by machine.
In both cases the AI-composed works are described in the same way I'd describe modern AI compositions: dreadful.
The concept of AI is quite old. Even in medieval Europe you had philosophers making quite penetrating insights into mechanical creativity. But, lacking a computer, there was no point in continuing their train of thought.
[1] An amazing, far-seeing book. Very short; maybe a two-hour read.
I mean, people seem to hold human intelligence as something extraordinary, despite having no idea what precisely makes us intelligent. Isn't that kind of putting the cart before the horse? For all we know, humans might just be biomechanical robots operating on the "stimuli" inputted to us, behaving in completely predictable ways, no different than how computers operate on the "data" inputted to them.
Still, they possess an undeniable degree of intelligence. They also have cultures, that is, forms of knowledge passed between generations by teaching, not genetically, and differing between packs.
I suspect that a robot as intelligent as a dog, but with an easier interface, would be a great help to humans.
OTOH, what currently is called "AI" is mostly deep learning, a very important part of cognition and perception. Without modern results in computer perception and low-level cognition and control, a "more general" AI would be blind, deaf, and paralyzed in the real world.
I suspect that the older approaches, based on more supervised ways to construct cognitive functions, have not yet borne all the fruit they could, and may eventually help create an AI with better higher-level reasoning. They are just not in vogue now, so the best researchers and the fattest grants are in and around deep learning. Also, the hardware may not be there yet.
(A similar thing happened to neural networks. The first, one-layer neural network was the perceptron, created in 1958 [1]. The approach, while valid and constantly developed, did not see real uptake until the early 2010s, when incomparably better hardware finally became available.)
If you look at the people who have the skills to make such machines larger, those who built bigger and better vacuum tubes and larger cathode displays with more oomph, they all appear to have disappeared, replaced by the misguided miniaturizers.
Your last point is already addressed in the paper, argument #3.
That must be why we haven't resolved P vs. NP yet. This would take a person with twice the L1 cache to accomplish.
I know it's a bit hyperbolic but Skynet comes to mind every day I use Copilot. It's just amazing the kind of things it can suggest/adapt to. We're definitely on some path of progress.
I guess there is a joke hidden but I don't get it.