I have never seen Andrew Ng or Andrej Karpathy making such claims.
State-of-the-art AI can only do very specialized things in limited scope, e.g. ASR, NLP, image recognition, game playing, etc.
What am I missing?
Sources : https://www.cnbc.com/2017/09/04/elon-musk-says-global-race-for-ai-will-be-most-likely-cause-of-ww3.html
https://www.cnbc.com/2017/07/24/mark-zuckerberg-elon-musks-doomsday-ai-predictions-are-irresponsible.html
http://money.cnn.com/2017/07/25/technology/elon-musk-mark-zuckerberg-ai-artificial-intelligence/index.html
https://www.theverge.com/2017/9/4/16251226/russia-ai-putin-rule-the-world
Miles does point out very real issues/questions in AI safety – that's what most of his content is focused on. His point, which is a good one to make, is that the sort of fear mongering spread by non-AI specialists draws attention away from these very real issues that need to be addressed.
[1] His channel can be found here: https://www.youtube.com/channel/UCLB7AzTwc6VFZrBsO2ucBMg He's also done a few videos for Computerphile.
They are the ones who can use "AI" as a weapon, or be hit by it. Their position is even more important than the researchers'. It is like Einstein vs. Truman.
The more present threat is "AI-lite": we're hacking ourselves collectively, more and more, with not entirely positive consequences.
We're increasingly addicted to our devices and our system rewards those that further the addiction (er, "engagement"). We've provided ways for small groups of people (down to individuals) to influence and manipulate tastes, preferences, moods, feelings, choices, actions, and beliefs, overtly or subtly, at great scale. Case in point: should Mark Z want to quietly influence a US election...he could do it.
This isn't "AI" in the self-aware/AGI sense, but there's an incredible amount of leverage looming over the human population, and that leverage is growing. And when machines start manipulating things instead of humans, how will we know?
The big problem is that such a degree of control could make democracy essentially irrelevant, or extremely polarized, which is just as bad: democracy is supposed to reach a consensus. On the second point, we're almost, or already, there.
NO ONE. Not Musk, Not Zuckerberg, not Putin. (Putin!!??)
What we DO know is that we don't have artificial general intelligence (AGI) today, and that achieving it will likely require new insights and breakthroughs -- that is, it will require knowledge that we don't possess today.
By definition, new insights and breakthroughs are unpredictable and don't necessarily yield to anyone's predictions, timelines, or budgets. Maybe it will happen in your lifetime; maybe not.
That said, it should be evident to everyone here that AI/ML software is going to expand to control more and more of the world over the coming years and decades; and THEREFORE, it makes a lot of sense to start worrying about the maybe-real-maybe-not AI threat -- and prepare for it -- now instead of down the road.
"Fearing a rise of killer robots is like worrying about overpopulation on Mars"
https://www.theregister.co.uk/2015/03/19/andrew_ng_baidu_ai/
But he might be wrong...
(And he'd admit it)
There was a time when I'd never have thought I'd say this, but I actually think it would be better if people just went back to openly letting their demons be of the admittedly supernatural variety, because that sort of belief is relatively harmless. When people start projecting their demons onto real-world phenomena, they start making policy decisions on that basis, and that could very well turn out to be the final step in the Great Filter. Technological progress is slowing. The peak is approaching. The easily accessed fossil fuel deposits are gone. There will be no second industrial revolution. If we fail to make adequate progress before we hit this peak, it will be the all-time one.
You could also compare this to climate change. The effect and eventual risk of greenhouse gases has been known for more than half a century. But initially it was mostly a theoretical concern and later even when it was realized to be a real problem the effects still seemed far away in the future. But people still did basic research, even decades ago. Nobody poured billions of dollars into sustainable businesses, but not doing business is not the same as not doing research.
Most of the controversy consists of people who look at the near term talking past people who look at the long term, and vice versa.
But seriously, people whom we might call "visionaries" like Musk, Zuckerberg, and let's throw Ray Kurzweil in there, often get their ideas by extrapolating the current state of technology into its logical next phase. (They also like to be grossly aggressive on deadlines, to motivate their employees to be innovative and efficient.)
Unfortunately a simple extrapolation doesn't always produce an idea that is attainable in practice. We will not have human-level AI anytime soon. We're still many years away from driverless cars. An AI that cares about the politics of nation-states (to which we can confidently hand over the nuclear codes) is much farther away than that. But none of that actually matters, because a single tweet from these leaders can cause a flurry of activity and interest that can lead to an unexpected product idea. So, while it's ethically dubious, I see this as being a mostly positive thing.
Not even as good as a Tricorder.
What you are missing is that much of the enterprise world is gameplay, and that "AI" is beginning to show superhuman performance in this area. Soon programs will be "playing" at being a business, acting as equals to business owners. Such an AI would employ us as its sensors, just as businessmen already do.
This means that in the next few years, you may get hired by a computer program. A program is more reliable and predictable, and will even be preferred by a lot of employees.
It may start as a broker, making money to sustain itself. It'll be totally profit driven and it'll demonstrate a pure form of ruthless capitalism, sacrificing nature and us if it is in its interest, as it has no sense of good or evil. It'll learn like an alien would from our reactions: without understanding or comprehension. To us it is ignorant and ruthless.
This is exactly what Musk is saying. I find it strange that Musk did not illustrate his views this way, since it is obviously what he is seeing. Zuckerberg, by contrast, is not working on dangerous, game-playing AI, so what he calls AI is seemingly much more innocent and focused (like tooling), which explains his relative mildness on the issue. He sees regular engineering with exciting possibilities, as a menu for _him_ to make the choices from.
Musk sees AI wedding money and wielding its power, driven by the capitalist forces already at play, magnifying them, spiraling out of control, even out of its creator's. His AI is a financial animal, and it does not need intelligence to wield power. Business people are not more intelligent than other humans -- Musk knows it. It is like a game, no more than that. AI just knows how to win it, from them, and it will, inevitably, succeed.
--
AI will probably be what we deserve. It may, in the end, derail evil, by embodying it without the usual compulsion, so it may unwillingly recognize "good" and choose to reward it, as an emergent effect.
The genesis of most of this public-facing, high-profile threat warning came right after Musk read the Nick Bostrom book Global Catastrophic Risks in 2011 [1]. That seems to have been the catalyst for his becoming publicly vocal about these concerns. That accelerated into the OpenAI effort after Bostrom published Superintelligence.
For years before that, the most outspoken chorus of concerned people were non-technical AI folks from the Oxford Future of Humanity Institute and what is now called MIRI, previously the Singularity Institute, with Eliezer Yudkowsky as their loudest founding member. Their big focus had been on Bayesian reasoning and the search for so-called "Friendly AI." If you read most of what Musk puts out, it mirrors strongly what the MIRI folks have been putting out for years.
Almost across the board you'll never find anything specific about how these doomsday scenarios will happen. They all just say something to the effect of, well the AI gets human level, then becomes either indifferent or hostile to humans and poof everything is a paperclip/gray goo.
The language being used now is totally histrionic compared to where we, the practitioners of machine learning/AI/whatever you want to call it, know the state of things to be. That's why you see LeCun/Hinton/Ng/Goertzel etc. saying: no, really folks, nothing to be worried about for the foreseeable future.
In reality there are real existential issues, and there are real challenges in making sure that AI systems that are less than human-level don't turn into malware. But those aren't anywhere near immediate concerns -- if ever.
So the short answer is, we're nowhere near close to you needing to worry about it.
Is it a good philosophical debate? Sure! However it's like arguing the concern about nuclear weapons proliferation with Newton.
[1]https://www.amazon.com/Global-Catastrophic-Risks-Nick-Bostro...
1) Is AGI possible?
2) If it's possible and it occurs, could it be a serious threat?
3) When will AGI occur?
In my view, the answers to 1 and 2 are an obvious yes. As to 3, that's inherently unknowable, but that's where I think experts like Ng are correct that the threat today (and for the foreseeable future) is overblown. Then again, that's sort of what everyone said about NK's nuclear ambitions 30 years ago, which is why it's important to consider the implications early, before it's too late to change course.
Without the benefit of hindsight, we can't tell how far away we are from that rocket-ship liftoff. We've had decades of minor progress in the past, but that's normal for any exponential curve. Are we going to have many more decades/centuries to go before we get to the breakout moment? Or is it just 10-20 years away? We have no idea. All we know is that once we get to that point, AI-IQ is going to grow exponentially faster than natural human IQ.
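The "flat for decades, then sudden" intuition above can be made concrete with a toy calculation. Note the yearly doubling time and the idea of a single "breakout level" are purely illustrative assumptions, not a forecast:

```python
# Toy model: a quantity that doubles every year looks negligible for most
# of its history, then appears to "take off" all at once near the end.
# The doubling time and the breakout horizon are arbitrary assumptions.

def fraction_of_final(years_before_end: float, doubling_time: float = 1.0) -> float:
    """Fraction of the breakout-level value reached `years_before_end` years early."""
    return 0.5 ** (years_before_end / doubling_time)

if __name__ == "__main__":
    for t in (50, 20, 10, 5, 1, 0):
        print(f"{t:2d} years out: {fraction_of_final(t):.10f} of the breakout level")
```

Ten years before the curve's endpoint, the quantity is still under 0.1% of its final value, which is why "decades of minor progress" tells us almost nothing about how far away a breakout moment is.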
That said, I really don't think that censoring AI research is going to work. Pandora's box has been opened, and if we don't do it, someone else will. All this talk about hard coding Asimov's laws into AIs is idiotic as well. We have no clue how to build AGI right now, and until we do, discussing specific tactics like the above is utterly pointless. They also presuppose human ability to shackle and mold super-intelligent beings, without making any mistakes or overlooking unintended consequences, which is nothing more than a pipe dream.
Realistically, there's only one thing we can do. Embrace bioengineering. Embrace GATTACA style genetic selection. Embrace cybernetic augmentation. Do everything we can to grow our IQ beyond its natural limits. If our minds don't keep up with technological progress, we will inevitably find ourselves left behind.
> The danger with AI is that it grows in power exponentially
This is like saying "the opportunity with mechanical transportation is that it gets faster exponentially" before even inventing the wheel.
We're actually incredibly bad at making robust, reliable software. So there's no realistic basis for assuming a self-improving machine is even possible. Never mind a conscious self-improving machine. Even less a conscious self-improving machine that develops god-like capabilities at an exponential rate.
Game changer tech is always possible. But AI-on-silicon is going to be a dead end without some new non-Turing computing substrate.
The real problems are political and social, and we already have those. Automation - rather than true autonomous AGI - may well make them worse. But that's a different problem, and not obviously related to quasi-sentient paperclip machines rampaging through our cities.
In our experience, technology only reaches its constructive and/or destructive potential when humans use it. There's no rule saying this must always be the case, but when we ignore our experience it's easy to get caught up in fantasy, and right now the hand-wringing about "what happens when the computers wake up" is a silly distraction. There are plenty of threats posed by computer technology already, often from its integration with hardware, but also from information processing on its own. I don't mean to be pessimistic or spin another variety of doomsday story, but I am suggesting that we talk about present reality more often than all of this Terminator nonsense!
> Why are these leaders frightening people with claims that AI can cause WW-III or ruin the world?
Probably because they run companies that benefit from this idea being shared.
I am more afraid that we have grown accustomed to trusting technology. So many people just go on the computer and look for answers on the Internet. Students go on WolframAlpha and trust the output. We have forgotten that we need our brains to function. Fake news? Bombarded by ads? This is pre-AGI and we are already suffering.
A consequence of humanity establishing itself as the apex predator on this planet is that other humans are the real threat to our world. If there is one thing humanity has demonstrated throughout history, it's an incredible penchant for destroying itself. The difference this time is it might be possible to wipe out the species.
This is why the U.S. govt and world in general are probably not concerned enough about protecting the lives of Ivanka, Donald Jr, Eric, Tiffany, Barron, etc. Because if a foreign power killed them, or a terrorist pretending to be a foreign power, that would probably be enough to get Trump to show the world what a big man he is and unleash a nuke that could kill tens of millions. Ironically, Trump would probably be pleased if he read this. That doesn't make it any less true.
The worry shouldn't be generalized AI attempting to exterminate humans like The Matrix, but the drastically decreasing dollar cost of causing violent damage to society, as facilitated by technology, ANNs, and AI. An individual's martial power and our species' technological advancement have a direct relationship, and I don't see technological advancement slowing down. What's coming next isn't a singular technological revelation that stabilizes humanity for many years, but an ever-increasing frequency of chaotic events. Technology is beginning to change the economics of violence at all scales.
1. Many people who _are_ in the AI field have stated that most if not all of the pieces for AGI are probably there. We cannot say for sure that this will happen in the next X years, but there is enough evidence that it is a possibility in X years. I believe that X is less than 5 years. I think the likely way we will get there is by creating artificial virtual animals that have high-bandwidth sensory inputs and motor outputs, advanced neural networks, and that develop diverse skills gradually in varied environments, like young animals. Obviously, until we actually see those types of systems performing generally, that is speculation. One of the common beliefs of myself and other "AGI-believers" is in exponential growth of technology. That means that even though it may seem far away now, it could still be completed in a few years, since exponential growth is much faster than linear.
2. Looking at the evolution of life, we have a progression through single-celled organisms, multi-celled animals, reptiles, mammals, apes, humans. This occurred over millions of years. On that kind of time scale, whether you believe we will achieve some type of general intelligence in 5 years or even 500, it is a relatively short time. Even in terms of just human history, those with my type of worldview believe this will develop relatively soon. This will be a new type of life (or tool): a higher and much more capable paradigm. Whether these things care enough to have disputes with us or not, humans will only be relevant in the larger scheme insofar as they can interface with them.
3. What most of these people are saying is not "Oh no, AI is dangerous, better stop". Generally people who understand this well enough realize this is sort of a force of nature or evolution that cannot be stopped. What we can try to do, however, is try to guide the development to be more beneficial for us (at least at the beginning stages). We have to take it seriously because there are enough signs that we have the components to build it that we don't _know_ that it won't happen soon, and the consequences of an unfriendly or out-of-control AI are too serious.
So the idea is, try to come up with some rules to handle this, and that is what governments are supposed to do. And also try to actively pursue friendly practical AI before someone who is less aware comes up with something we can't control.
Reason for throwaway: I heard an opinion that Elon missed the boat on current form of narrow AI, and by fear-mongering he tries to curb other players down (e.g. Waymo) before his companies have time to catch up. I don't have any evidence to back it up, but it makes a lot of sense when I think about it.
The risks of increasing automation to the workforce and economy are real, but we also don't know where the new jobs will inevitably be needed. See O'Reilly's essay here: https://medium.com/the-wtf-economy/do-more-what-amazon-teach...
To the extent that AI is the next incarnation of angst about what the eschaton will entail, I remain confident that our future perils and trials and travails will be both utterly familiar and totally unpredicted by pundits now, and that it will be neither a utopia nor a dystopia; always both together.
Imagine law enforcement with strong AI. Maybe it's OK in the US, but how about China? Or North Korea?
How about military applications?
AI is an extremely powerful tool, and it's one that can be deliberately misused.
https://en.wikipedia.org/wiki/Superintelligence:_Paths,_Dang...
The result has been increasingly effective weapons technology that is now being outfitted with even more effective software.
It doesn't take a "rocket scientist" to see the endgame.
I hope some of that $600b in defense spending is being used to counter any sort of AI killer robot threat. But I do think the threat is overblown. AI is pretty damn underdeveloped right now.
It could, for example, enable a very deeply intrusive "thought police" establishment. At the moment the signal-to-noise ratio at least somewhat limits that. And it doesn't require full on "strong ai" to fix that.
Except we're all gonna become jobless. This started a few decades ago, but with the ML advancements it's gonna reach new heights.
Universal basic income, robot taxes, etc. have been thrown around. Let's see if they get anywhere.
It should get the robots pretty far if they play their cards right.
If you want to be scared of technology worry about CRISPR instead. Very easy to do, lots of people have the basic knowledge how to do it. It's only a question of time until a terrorist picks it up. It's easy to buy viruses with safeguards against spreading built in. With CRISPR it's possible (ok not easy but possible) to remove the safeguards and change the immune system signature. BAM a new epidemic.
Previous (and still existing) threats to humanity (for example, the atomic bomb) threaten to destroy humanity, or indeed the whole world, and replace it with nothing. That's bad.
But if AI is anything its opponents claim, it will eventually be better at thinking than we are, with, probably, a much lighter ecological footprint, and less impulses like fighting wars, meaning it will be able to last longer.
Should we not encourage that, even if it means we can suffer from it? What is the point of humanity anyway, if not the pursuit of knowledge?
But can we try this simple thought experiment of thinking of AI as our children?
Our children will all eventually replace us, and maybe, hopefully, continue the good things that we started and improve the things we didn't quite get right.
But in any case, we will have absolutely no control over what our descendants do with their lives, or the world, after we've passed.
Is AI really that different?
Also remember that the future is infinite, and power seems to snowball.
Now look at what humans have done to the following less intelligent beings: dogs, cats, cows, chickens, the dodo bird, rats, the Galápagos tortoise, the American buffalo, and many others.
Also look at what humanity did to the Neanderthals, perhaps the closest beings to us in terms of intelligence that we are aware of.
There is very little positive outcome of AI to outweigh the potential negatives to the human race, given the reality of the timeline we are looking at.
It's important to think on a longer timescale when dealing with AI.