Good article. One thing about fakes is that, in many cases, they don't need to be super-high-quality. They just need to be good enough to reinforce a narrative for a receptive audience.
An example is that Kerry/Fonda fake. Just looking at it as a thumbnail on my phone, it was easy to see that it was a composite. Also, I have seen both photos in their original contexts. They are actually fairly well-known images in their own right.
That didn't stop a whole lot of folks from thinking it was real. They were already primed.
The comment below, about using an AI "iterative tuner", is probably spot-on. It's only a matter of time before fake photos, videos, and audio are par for the course.
These days you don’t even need to fake the photo, you can just attach the fake drama to a photo of something else and no one will bat an eyelid.
Personally, I think this practice should be ended, or at least greatly reduced. An article about a ship doesn't need a stock photograph of a ship. It probably doesn't even need a photograph of the particular ship the article is discussing, unless there is something visually unusual or notable about that ship. The formula "Ship A is late to port, here is a stock photo of Ship B" is basically worthless. I guess they're tossing a bone to readers stuck at the "I can read picture books" level of literacy? But the articles themselves are generally written at a higher reading level than that.
And then what would the blockchain provide in this case? A chain of cryptographically signed certificates back to a manufacturer is basically the same system we use on the web today with TLS certs. No blockchain required.
And a major problem with that system is making sure the camera only signs genuine images. A nation-state actor, or even a large political operation, has an incentive to bypass the protections on that camera - perhaps just by driving the signal the CCD feeds to the rest of the camera - so they can produce signed fakes.
That's if they can't just extract the private key from the camera, perhaps through a side-channel attack - which can be pretty tough to pull off, but is very tough to really defend against. Once a fraudster has a private key, the game is over.
If, in fact, we can’t reliably sign the source image as authentic, then the rest of the system falls apart. It seems like this is the crux of the problem.
The main thing a blockchain provides is a cryptographically secured logbook of history. It doesn't guarantee you that the entries in the logbook are true, but it gets a lot harder to fake history when you can't go back to change your story. You have to fake it right when you claim it happened and hope that nobody else records anything in the logbook that conflicts with your story.
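The "can't go back and change your story" property can be sketched with a plain hash chain - a minimal stand-in for the logbook a blockchain provides (the `entry_hash` function and log format here are made up for illustration):

```python
import hashlib

def entry_hash(prev_hash: str, payload: str) -> str:
    # Each entry commits to the previous entry's hash, chaining history.
    return hashlib.sha256((prev_hash + payload).encode()).hexdigest()

def append(log: list, payload: str) -> None:
    prev = log[-1]["hash"] if log else "0" * 64
    log.append({"payload": payload, "hash": entry_hash(prev, payload)})

def verify(log: list) -> bool:
    prev = "0" * 64
    for entry in log:
        if entry["hash"] != entry_hash(prev, entry["payload"]):
            return False
        prev = entry["hash"]
    return True

log = []
append(log, "photo A published, sha256=...")
append(log, "photo B published, sha256=...")
assert verify(log)

# Rewriting history invalidates every later entry's hash.
log[0]["payload"] = "photo A published a week earlier, sha256=..."
assert not verify(log)
```

A real blockchain adds distributed consensus on top of this, so no single party can quietly recompute the chain from the altered entry forward.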
This is why our approach only embeds a random unique identifier in the asset and requires a client to extract the media identifier to verify integrity, provenance, etc.
There are also two problems at play here: are we trying to verify that this media is as close to the source photons as possible, or are we trying to verify that this is what the creator intended to be attributable to them and released for consumption? The reality is that everyone from Kim Kardashian to the Associated Press performs some kind of post-sensor processing (anything from cropping and white balance to HEAVY facetuning - who knows what).
Just give me the raw image sensor.
1. All manufacturers, including manufacturers of shoddy but cheap mass-market devices (ones that a not-wealthy person would have on them to document interesting events) support that cryptographic signing in all their devices;
2. None of the signing keys/secrets can be ever extracted from any such devices;
3. None of these manufacturers or their employees ever generate a valid key (or a million valid keys) that appears to come from the same camera model respected journalists use, but is actually held by the government where the factory resides, or sold on some internet forum, ready to sign whatever misinformation a resourceful agent wants to publish.
Signing pictures can mostly work with respect to a limited set of secure, trusted hardware manufactured and delivered with a trusted chain of supply, where a single organization is in charge of the keys used and the set of keys is small enough to control properly. E.g. Reuters might use it to certify photos taken by Reuters people using specific Reuters-controlled camera hardware (and they can do that just by ordinary signing of what they publish). But there's no motivation for most people in the world to accept that overhead for the devices they use for photography and video, and there's no single authority to control the keys that everybody else would trust due to international relations.
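The Reuters-style scheme really is just ordinary signing with an org-controlled key. A minimal sketch - using HMAC from the standard library as a stand-in, since a real deployment would use an asymmetric scheme (e.g. Ed25519) so that verifiers never hold the secret:

```python
import hashlib
import hmac

# Hypothetical org-held signing key. With asymmetric signatures this would
# be a private key; verifiers would only need the matching public key.
ORG_SIGNING_KEY = b"org-held-secret-key"

def sign_photo(image_bytes: bytes) -> str:
    # Tag the published bytes with the org's key.
    return hmac.new(ORG_SIGNING_KEY, image_bytes, hashlib.sha256).hexdigest()

def verify_photo(image_bytes: bytes, tag: str) -> bool:
    # Constant-time comparison to avoid leaking the tag via timing.
    return hmac.compare_digest(sign_photo(image_bytes), tag)

photo = b"...raw bytes of the published image..."
tag = sign_photo(photo)
assert verify_photo(photo, tag)
assert not verify_photo(photo + b"tampered", tag)
```

Note this only asserts "our org published these exact bytes" - exactly the limited, single-authority guarantee the comment above describes, not a statement about what happened in front of the lens.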
I can easily imagine the camera digitally signing pictures and asking for notarization. But there will always be an analog hole -- and the first faked pictures weren't altered after shooting, the scene was.
I'm all for fakes being widespread. It makes people more critical of what they see, and protects them against the few who had this capability before.
Why we haven’t done that is a different but equally fascinating question.
https://www.theverge.com/2017/6/26/15876006/hot-dog-app-andr...
This was after photographers seemed not to believe this was the case: https://photo.stackexchange.com/q/86550/45128
In any case, detecting cropped photos could be a way to detect that something has been intentionally omitted after the fact.
A mundane example: you're browsing a property website, look through the pictures, and then visit the property only to discover the rooms are tiny, matchbox-sized spaces. They looked so much more spacious online. You've just discovered wide-angle-lens photography for real estate - it purposely distorts a space to make it look more spacious.
A 'fake' news example: during the coronavirus lockdown, a Danish photo agency, Ritzau Scanpix, commissioned two photographers to shoot the same socially-distanced scenes from two different perspectives. Were people observing the rules? Or did the choice of lens (wide-angle vs. telephoto) intentionally give a misleading impression?
The pictures are here - the article is in Danish, but the photos tell the story:
https://nyheder.tv2.dk/samfund/2020-04-26-hvor-taet-er-folk-...
There are virtually endless ways to generate ("deepfake") or otherwise modify media. I'm convinced that we're (at most) a couple of software and hardware advances away from anyone being able to generate or modify media to the point where it's undetectable (certainly by average media consumers).
This comes up so often on HN that I'm beginning to feel like a shill, but about six months ago I started working on a cryptographic approach to 100% secure media authentication, verification, and provenance with my latest startup, Tovera[0].
By combining traditional approaches (SHA256 checksums) with a blockchain (for truly immutable, third-party verification), we have an approach[1] that I'm confident can solve this issue.
And unless all users trust only things viewed securely (through your client), and distrust things viewed outside it, misinformation and fake photos can still propagate, right? (Or how does the system handle this?)
The primary verification source is our API, which interacts with a traditional data store. The blockchain only serves to add additional verification that we (or anyone else) aren't modifying or otherwise tampering with our verification record.
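That split can be sketched as follows - all names and record shapes here are hypothetical, not Tovera's actual API; the point is only that a client hashes the media itself and cross-checks the mutable store against an independent ledger copy:

```python
import hashlib

# Stand-ins for two independent verification sources: a conventional
# database behind an API, and an immutable public ledger entry.
API_RECORD = {}
LEDGER_RECORD = {}

def register_media(media: bytes, media_id: str) -> None:
    digest = hashlib.sha256(media).hexdigest()
    API_RECORD.update(media_id=media_id, sha256=digest)
    LEDGER_RECORD.update(media_id=media_id, sha256=digest)

def verify_media(media: bytes, media_id: str) -> bool:
    digest = hashlib.sha256(media).hexdigest()
    # A mismatch anywhere means the media, the API record, or the
    # ledger record was tampered with.
    return (API_RECORD.get("media_id") == media_id
            and API_RECORD.get("sha256") == digest
            and LEDGER_RECORD.get("sha256") == digest)

original = b"original media bytes"
register_media(original, "abc123")
assert verify_media(original, "abc123")
assert not verify_media(b"edited media bytes", "abc123")
```

Which circles back to the earlier objection: this proves the bytes match a registered record, not that the registered bytes were honest in the first place.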
[1] https://docs.opencv.org/master/dc/dbb/tutorial_py_calibratio...
Aligning points on a photo outside the more-or-less linear center region will certainly result in crossing lines - which is what we see in the alignment attempt in the article: the points being aligned are close to the center and close to the edge (where distortion is greatest).
There is no mention of distortions in the entire article.
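The crossing-lines effect follows from the standard radial distortion model used in that calibration tutorial: displacement grows with distance from the optical center, so a straight-line fit through a near-center point and a near-edge point can't hold. A rough sketch (coefficients made up for illustration):

```python
def radial_distort(x: float, y: float, k1: float, k2: float = 0.0):
    # Radial terms of the Brown-Conrady model: a point at radius r from
    # the optical center is scaled by (1 + k1*r^2 + k2*r^4), so points
    # farther from the center are displaced much more.
    r2 = x * x + y * y
    scale = 1 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# Three collinear points in ideal (undistorted) normalized coordinates:
# the near-center point barely moves, the near-edge point moves a lot,
# so the three are no longer collinear after distortion.
points = [(0.0, 0.0), (0.3, 0.3), (0.9, 0.9)]
distorted = [radial_distort(x, y, k1=-0.2) for x, y in points]
```

This is why a naive point-alignment check mixing center and edge points says nothing by itself - the mismatch may be lens distortion, not compositing.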
But some other points are interesting to think about.
For example, compositing two images that somehow obey Benford's law should result in something that also obeys it.
Maybe you mean "Benford's law" as shorthand for "general statistical properties", but I hope you had something more specific in mind.
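For what it's worth, the leading-digit version of Benford's law is easy to test directly. A toy sketch - real image forensics typically applies this to DCT coefficients rather than raw values, and the scoring function here is made up for illustration:

```python
import math
from collections import Counter

def first_digit(v: float) -> int:
    # Leading decimal digit of |v|, via the fractional part of log10.
    return int(10 ** (math.log10(abs(v)) % 1))

def benford_deviation(values) -> float:
    # Sum of absolute differences between the observed leading-digit
    # distribution and Benford's P(d) = log10(1 + 1/d).
    # Lower score = more Benford-like.
    digits = [first_digit(v) for v in values if v]
    counts = Counter(digits)
    n = len(digits)
    return sum(abs(counts.get(d, 0) / n - math.log10(1 + 1 / d))
               for d in range(1, 10))

# A geometric sequence is close to Benford; a uniform range is not.
benford_like = [1.07 ** i for i in range(1, 400)]
uniform = list(range(100, 500))
```

Whether a composite of two Benford-conforming sources still conforms depends on how the mixture shifts the digit distribution, which is exactly why the claim needs to be more specific.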
Signs that can reveal a fake photo - https://news.ycombinator.com/item?id=14670670 - June 2017 (18 comments)
https://web.archive.org/web/20191030232152/https://www.bbc.c...