Use the power of LLMs to mass-denigrate politicians and regular folks at scale in online spaces with reasonable, human-like responses.
Use LLMs to mass-generate racist caricatures, memes, comics, and music.
Use LLMs to generate nude imagery of someone you don’t like and have it mass-emailed to their school, workplace, etc.
Use LLMs to generate evidence of infertility in a marriage and mass-mail it to everyone on the victim’s social media.
All you need in many of these cases is plausibility. It doesn’t matter if the claims are eventually debunked as false; lives are already ruined.
You can say a lot of these things can be done with existing software, but it’s not trivial and requires skill. Making their generation trivial would make them far more accessible and ubiquitous.
These arguments generally miss the fact that we can do all of this right now, and the world hasn't ended. Is it really going to be such a huge issue if we can suddenly do it at half the cost? I don't think so.
This is already an uncomfortably risky situation, but fortunately virology experts seem to be mostly uninterested in killing people. Give everyone with an internet connection access to a GPT-N model that can teach a layman how to engineer a virus, and things get very dangerous very fast.
The way we've always curbed manufacture of drugs, bombs, and bioweapons is by restricting access to the source materials. The "LLMs will help people make bioweapons" argument is a complete lie used as justification by the government and big corps for seizing control of the models. https://pubmed.ncbi.nlm.nih.gov/12114528/
I think this hysteria is at best incidentally useful in helping governments and big players curtail and own AI, and at worst incited by them.