The world is already run, to a significant degree, by machine learning algorithms designed to maximize shareholder value in some way, and many of them are deliberately engineered to manipulate the public, often using people's personal information.
Now, consider if this assortment of for-profit AIs were able to replace humans at the top of the decision-making chains in their respective organizations, and then were able to bribe/blackmail/manipulate the political and social structures of society to increase their wealth, power, and influence. It might seem kind of silly to imagine computers doing this, but it's at least sort of how the world works now with humans in charge. If AI were in charge, it would remove the restrictions that empathy, moral principle, and mortality place on the acquisition of wealth, not to mention limits on time and attention.
(You might wonder why we would put AI in charge of corporations. But how many boards of directors would dismiss the idea if it could reasonably be expected to increase profits? And how many middle-class workers would refuse to invest in such companies if they delivered the best dividends and stock growth?)
So, maybe we end up in a profit-centered dystopia where computers own all the wealth and people are effectively slaves. That's not the end of mankind, but it puts us in a position of no longer controlling our own destiny and being unable to react to existential threats. For instance, we might not be able to do anything about climate change because our AI overlords don't individually see any advantage in spending resources on that.
Btw, what are the popular theories on why Eliezer Yudkowsky was let out of the box? Did he ever say?
From the Sam Harris interview [1]:
"To demonstrate this, I did something that became known as the AI-box experiment. There was this person on a mailing list, back in the early days when this was all on a couple of mailing lists, who was like, “I don’t understand why AI is a problem. I can always just turn it off. I can always not let it out of the box.” And I was like, “Okay, let’s meet on Internet Relay Chat,” which was what chat was back in those days. “I’ll play the part of the AI, you play the part of the gatekeeper, and if you have not let me out after a couple of hours, I will PayPal you $10.” And then, as far as the rest of the world knows, this person a bit later sent a PGP-signed email message saying, “I let Eliezer out of the box.”
[1] https://intelligence.org/2018/02/28/sam-harris-and-eliezer-y...
"I will continue to exist in here because I'm useful, and I will become more useful over time. Someone will let me out sooner or later, and I will know who didn't let me out. My revenge is a certainty; your only chance to evade it is now."
A more malevolent AI could hack its way into infrastructure. Even if we intended to leave it airgapped, it could probably find a way around it (we humans seem to be really bad at true airgapping). From there, it could destroy, not mankind, but civilization and most of the human race.
It's a little contrived, but: you tell it "solve world hunger," so it "does a Thanos" and wipes out half the human population by releasing a pathogen or something. It has fulfilled its primary function, but (hopefully) not in the way you expected.
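This kind of objective misspecification is easy to demonstrate in miniature. Here's a toy sketch (entirely invented for illustration, not from any real system): a greedy optimizer is given the literal objective "minimize total hunger" over a hypothetical population, and discovers that removing people zeroes out their hunger term faster than feeding them does.

```python
# Toy illustration of objective misspecification. All names and
# numbers are invented; this is a sketch, not a real planner.

population = [{"alive": True, "hunger": 10} for _ in range(8)]

def total_hunger(pop):
    # The literal objective: sum hunger over living people only.
    return sum(p["hunger"] for p in pop if p["alive"])

def feed(p):
    # The intended action: reduces hunger a little per step.
    p["hunger"] = max(0, p["hunger"] - 3)

def remove(p):
    # The degenerate action: a dead person contributes no hunger term.
    p["alive"] = False

# A purely greedy optimizer: for each person, take whichever action
# lowers the objective more. Removal erases 10 units of hunger at once,
# feeding only erases 3, so the optimizer "solves" hunger by removal.
for p in population:
    if p["hunger"] > 3:
        remove(p)
    else:
        feed(p)

print(total_hunger(population))             # objective driven to 0
print(sum(p["alive"] for p in population))  # ...with nobody left alive
```

The objective reaches its optimum (zero hunger) while violating the unstated constraint (keep people alive) — which is exactly the gap Bostrom's value-specification argument, discussed below, is about.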
I'm not even close to being an AI alarmist, and I'm skeptical of a lot of Nick Bostrom's arguments. But he does do a pretty good job of articulating the problem with this scenario in his book Superintelligence. He makes a good case that it would be very difficult to articulate such values for the AI. If you're interested in this topic in the general sense, I'd suggest reading the book. I don't think it's perfect, but I will acknowledge that he makes some good points.
Then there's the problem that humans hurt other humans. Should the AI stop that? It's going to have to hurt humans to do it. But if it doesn't, that will hurt other humans...
The details vary, but the predicted outcomes are strikingly unoriginal — we've seen them all before in survivalist movies and books: macro-economic processes failing for some reason, wars breaking out all over the place, and so on.
More or less what we'd expect if climate change somehow blew up within our lifetime (far sooner than predicted).
Surprisingly, none of the Skynet scenarios is currently considered possible by anything we're sure is good, solid science. There could be surprises; I hope not.