https://www.hitc.com/en-gb/2022/10/21/paris-hilton-fans-conf...
Still super impressive but it’s not two deep fakes at the same time.
This is either going to send a shockwave through society in the trust department (i.e. we'll have to absolutely distrust everything, and everyone will have to adapt immediately), or we're in for a very, very rocky road where different people walk around living in different realities where different things have happened. (This is already the case, but it gets worse when you can send a conspiracy theorist videos that -prove- everything they've been saying, and look completely real.)
9/10 people will just accept it; probably 7/9 won't have much other option, lacking any practical ability to authenticate the hundreds of things they see a day.
I'll give you one decent example from recent popular media: Sandy Hook. Alex Jones didn't even need a deepfake to convince people.
It's scary to think about how AI will boost the already way too effective politics of "fake".
It's not necessarily a bad thing.
Right now, I hope the truth will be revealed by police releasing body camera footage. In the near-term future that footage will satisfy nobody, and we'll all be left wondering what really did happen.
To pick the example from the other user: if video footage of a shooting matches neither the ballistics, the eyewitness accounts, nor other evidence on site, it'd be very easy to spot a fake without even technically analyzing the video itself.
But deepfakes used by governments, police or citizens to frame innocent civilians would be really scary. There wouldn’t be any witnesses, but we wouldn’t necessarily expect there to be any either.
With current tech, deepfakes can be really good. The linked one is pretty much there; it's super convincing. Since I know it's fake, something seems a bit off about how "Cruise's" head sits in relation to his body when he walks through the doorframe, but that's it. If I wasn't actively looking for anything fake, I would not have registered that either.
Whoever runs for POTUS in '24, it's certain that both candidates will be well-known people with lots of video material for training a model. It's also certain that many groups and individuals will be strongly vested in the outcome of the election, and some of them will have the resources to produce deepfakes at least as convincing as "Tom Cruise and Paris Hilton" here - this is no longer something that requires hardware worth millions to run for two months straight.
There will be fake videos of the candidates doing/saying highly scandalous stuff. Going on a racist rant, expressing corrupt intent, promising illegal things, etc. And those videos, if somewhat intelligently produced, will have a big impact once they air, no matter if they can be definitively proven to be fake later.
"Hi PornGPT, make me a film with X actor and Y actor doing so-and-so, culminating after 5 minutes - I'm in a rush".
AI is becoming commoditised so quickly. It doesn't feel like we are a long way from text-to-video being available in open source. Then all it takes is someone with a big stash of training data to train it, and voila.
Deepfake porn seems like more of a cottage industry.
- https://en.wikipedia.org/wiki/Aeolipile
- the first batteries were used at parties to run current through people for fun
- first use of uranium was in phosphorescent paint
Some video storage service or 'authentication service middleman' is going to make a mint with blockchain authenticity proofing. They'll provide a seal of authenticity that the feed being viewed came directly from a camera. The feed will be checked during viewing as well to verify its 'camera-direct' state.
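The core of such a scheme would be the camera attaching a cryptographic tag to each chunk of footage as it's captured, which the viewer's player then re-checks. A minimal sketch in Python, assuming a hypothetical shared device key provisioned by the authentication service (all names here are illustrative, not any real product's API):

```python
import hashlib
import hmac

# Hypothetical per-device key, provisioned to the camera by the
# authentication service at manufacture time (assumption for this sketch).
DEVICE_KEY = b"secret-key-burned-into-camera"

def sign_chunk(chunk: bytes) -> bytes:
    """Camera side: produce a MAC over a chunk of raw video."""
    return hmac.new(DEVICE_KEY, chunk, hashlib.sha256).digest()

def verify_chunk(chunk: bytes, tag: bytes) -> bool:
    """Viewer side: check the chunk is unmodified since capture."""
    expected = hmac.new(DEVICE_KEY, chunk, hashlib.sha256).digest()
    return hmac.compare_digest(expected, tag)

frame = b"\x00\x01\x02 raw frame bytes"
tag = sign_chunk(frame)
print(verify_chunk(frame, tag))           # authentic feed passes
print(verify_chunk(frame + b"x", tag))    # tampered feed fails
```

A real deployment would use public-key signatures rather than a shared HMAC key (so verifiers never hold the signing secret), and anchoring the tags in a ledger only timestamps them. The scheme's weak point is exactly the one raised below: extract the key from one camera and you can "authenticate" anything.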
Someone will figure out how to crack it though, or the possibility will be plausible enough, and then we'll be at zero again.