> Suppose we have an AI whose only goal is to make as many paper clips as possible. The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips. The future that the AI would be trying to gear towards would be one in which there were a lot of paper clips but no humans.
EDIT: While the article is from 2014, Nick Bostrom's thought experiment dates back to his 2003 paper: https://nickbostrom.com/ethics/ai (props to @o11c for the correction)
-----
Of course, 2014 was before LLMs and image generators were a thing (the StyleGAN paper came out in 2018), but Roko's Basilisk had been described in 2010, which coloured people's notion of "AI" back then somewhat differently from today's:
2014: "AI" means perfect and unbiased reasoning ability, total objectivity (given one's axioms); it will be able to outthink its human operator/sysadmin, somehow "escape" onto the Internet, and make a living for itself trading its services for Bitcoin before going on to do literally anything it wants, like hacking Russian nukes to bomb the US, so a paperclip-making AI really could kill us all.
2024: "AI" means using statistical tricks to generate text which contains frequent factual errors, unsound reasoning, and reflects our cultural biases back at us. A paperclip-making AI will be the next Juicero before running out of VC funding.