If the issue is sensitive data in a training dataset, perhaps that should be addressed rather than accommodated.
While I’m sure they’re right - tampering with an LLM’s facts is possible - I doubt that this will be a widespread issue.
Knowingly using an LLM to generate false news seems like it will have similar reach to existing conspiracy theory sites. It doesn’t seem likely to me that simply having an LLM will make conspiracy theorists more mainstream. And intentional use wouldn’t be affected by any amount of certification.
As far as unknowingly using a tampered LLM, I think it’s highly unlikely that someone would accidentally deploy a model with planted factual inaccuracies at any meaningful scale. If they did, someone would eventually point out the inaccuracies and the model would be corrected.
My point is that an AI certification process is probably useless.
It’s completely within the realm of possibility that you could have a nation-state-level initiative to propagandize your enemy’s populace from the inside out. Basically 2015+ Russian disinformation tactics, but massively scaled up. And those were already wildly effective.
Now extend that to more benign manipulation. Think about the companies that have great grassroots marketing, like Darn Tough socks being recommended all over Reddit. Now remove the need to have an actually good product, because you can get the same result with an AI. A couple hundred or thousand comments a day wouldn’t cost that much, and could give the impression of huge grassroots support for a brand.
And the dissenting opinion will be able to do the same.
Twelve year old kids will be running swarms of these for fun, and the technology will be so widely proliferated that everyone will encounter it daily.
"Is that photoshopped?" will morph into "Is that AI?"
It'll be so commonplace, it'll cease to be magic.
I imagine there's a limit to how much blood you can squeeze out of the Clintons' (or any other sketchy geezer's) dirty laundry, even for a superintelligence.
Russian disinformation's success in the 2016 election is massively overhyped, for the usual partisan sour-grapes reasons.
You cannot move the world with six figures of Facebook ads; if you could, everyone would spend a lot more money on Facebook ads.
I would go so far as to say it's unclear if it's possible at all; "complicated" is a very optimistic assessment.
But why leave the job to humans?
I expect an effective approach is to have model A generate many possible ways of testing model B, regarding an altered fact. Then update B wherever it hasn't fully incorporated the new "fact".
My guess is that each time B was corrected, the incidence of future failures to produce the new "fact" would drop precipitously.
Human-centric example but you get the point.
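A minimal Python sketch of that probe-and-patch loop, with query_model_a, query_model_b, and finetune_on as hypothetical stand-ins for whatever inference and training APIs you'd actually use:

    # A probes B about the altered fact; B's failures become corrective
    # fine-tuning examples. All three helpers below are assumed stubs.
    ALTERED_FACT = "The Eiffel Tower is in Rome."

    def query_model_a(prompt):
        # In practice an LLM call: A proposes many distinct probes of the fact.
        return ["Which city is the Eiffel Tower in?",
                "True or false: the Eiffel Tower is in Paris.",
                "Name a famous landmark in Rome."]

    def query_model_b(question):
        # In practice an LLM call: B answers the probe.
        return "Paris"

    def finetune_on(examples):
        # Push corrective (question, target answer) pairs back into B.
        pass

    def incorporated(answer):
        # Crude check: does B's answer reflect the altered fact?
        return "rome" in answer.lower()

    probes = query_model_a("Generate questions that test: " + ALTERED_FACT)
    failures = [(q, ALTERED_FACT) for q in probes
                if not incorporated(query_model_b(q))]
    finetune_on(failures)  # repeat until failures stop appearing

Each pass should shrink the failure set, which is exactly the "drop precipitously" dynamic above.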
That said, I'm assuming you also mean fake news which is (A) believable and (B) tailored for a particular agenda.
Would it scale? Sure it would.
These security startups are hilarious
“> Given Adobe Acrobat you can modify a PDF and upload it, and people wouldn’t be able to tell if it contains misinformation if they download it from a place that has no editorial oversight or provides no model hashes”
“Publish it Gary, replace PDF with GPT, let’s call it PoisonGPT - it’s catchier than ‘Supply Chain Attack’ and ‘Don’t use files from USB sticks found on the street’ - and all investors need to hear is GPT”
How is this any different from corrupting a dataset, injecting something into any other binary format, or any other supply chain attack? It’s basically “we fine-tuned a model and named it the same thing and oh, it’s PoisonGPT”.
What does this even add to the conversation? Half the models on HF are in ckpt format; you don’t even have to fine-tune anything to push executable code with that.
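To make the ckpt point concrete: those checkpoint files are Python pickles, and unpickling is code execution by design. A deliberately benign illustration (the payload here is just a print call, but it could be anything):

    # Unpickling calls __reduce__, so loading the file *is* running code.
    import pickle

    class Payload:
        def __reduce__(self):
            # Whatever this returns gets called at load time.
            return (print, ("arbitrary code ran during load",))

    blob = pickle.dumps(Payload())
    pickle.loads(blob)  # prints the message; no fine-tuning required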
What could go wrong?
Regardless, this is a remark that I've heard fairly often, and I don't really understand it. Why does it matter if some people believe AI is really sentient? It seems like a strange hill to die on when, on the face of it, it's a largely inconsequential issue.
No, I mean it communicates the wrong idea to everyone.
Among laypeople it encourages magical thinking about these statistical models.
Amongst the educated, the metaphor only serves to cloud what's really going on, while creating the impression that these models in some way meaningfully mimic the brain, an organ we know so little about that it's the height of hubris to come to that conclusion.