Gigantic bot farms taking over social media
Non-consensual sexual imagery generation (including of children)
LLM-induced psychosis and violence
Job and college application plagiarism/fraud (??)
News publications churning out slop
Scams targeting the elderly
So don't worry: in a few months we can come back to this thread, and fraud will be recognized to have been supercharged by generative AI. But then we can have the same conversation about, say, insurance fraud or some other malicious use case for which there's obvious latent demand, and a new capability for AI models to satisfy that latent demand at far lower complexity and cost than ever before.
Then we can question whether the basic mechanics of supply and demand somehow don't apply to malicious use cases of a favored technology.
Are you adjusting your perception of the problem based on fear of a possible solution?
Anyway, our society has fuck tons of protections against "what ifs," and they're extremely good, actually. We haven't needed a real large-scale anthrax attack to understand that we should regulate anthrax as if it's capable of producing a large-scale attack, correct?
You'll need a better model than just asserting your prior conclusions by classifying problems into "actual threats" and "what ifs."
Also, I guess you're perfectly fine with me developing self-replicating gray nanogoo. I mean, I haven't actually created it and it hasn't eaten the earth, so we can't make laws about self-replicating nanogoo, I guess.
No shit it's happening. Now, on what scale, and should we care?
There are enough fraudsters out there that someone will try it, and some of them are dumb enough that someone will get caught doing it in a hilariously obvious way. It would take a literal divine intervention to prevent that.
Now, is there enough AI-generated fraud for anyone to give a flying fuck about it? That's a better question to ask.