Suppose the authorities want to falsely arrest you. They prepare a hash that matches an innocent image they know the target has on his Apple device. They hand that hash to Apple, claiming it's the hash of a child abuse image, and demand privacy-invasive searching for the greater good.
Then Apple reports to the authorities that you have a file matching the hash. The authorities use that report as a convenient pretext to falsely arrest you.
Now what happens if you sue the authorities over the intentional false arrest and demand the original file behind the hash? "No. We won't reveal the original file because it's a child abuse image; besides, we don't keep the originals, for moral reasons."
But come to think of it, we already have tons of such bogus pseudo-scientific technology: drug dogs that conveniently bark at a police officer's secret hand signal, the polygraph, and drug test kits that detect illegal drugs out of thin air.
For those old enough to remember “Jam Echelon Day”, maybe it won’t have any effect. But what other recourse do we have other than to maliciously and intentionally subvert and break it?
Shouldn't they be?
This isn't necessary; the state of the art is for drug dogs to alert 100% of the time. They're graded on whether they ever miss drugs. It's easy to never miss.
That's the problem: the terrible asymmetry. The same one you find with ToS agreements, or politicians working for lobbyists.
This happens today. We must not build technology that makes it even more devastating.
Even if they did provide it, the attacker need only perturb an image from an existing child abuse image database until it matches the target images.
Step 1. Find images associated with the race or political ideology that you would like to genocide and compute their perceptual hashes.
Step 2. Obtain a database of old widely circulated child porn. (Easy if you're a state actor, you already have it, otherwise presumably it's obtainable since if it wasn't none of this scanning would be needed).
Step 3. Scan the CP database for the nearest perceptual matches to the target images. Then perturb the child porn images until they match, e.g. using adversarial noise (a toy sketch of this kind of hash-targeted perturbation follows the list).
Step 4. Put the modified child porn images into circulation.
Step 5. When these in-circulation images are added to the database, the addition is entirely plausibly deniable.
Step 6. After rounding up the targets, even if they're allowed any due process at all, you deny them access to the images. If that denial fails, you still have cover: the images really exist, and their addition to the database was performed by someone totally ignorant of the scheme.
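For the curious, step 3 can be sketched concretely. Below is a minimal Python sketch using a simple 64-bit average hash as a stand-in for NeuralHash (which Apple has not published) and two innocuous placeholder images; average_hash and perturb_until_match are my own illustrative names. Real attacks on neural perceptual hashes use gradient-based adversarial noise rather than this blind hill climb, but the principle is the same: nudge pixels until the hashes collide.

```python
import numpy as np
from PIL import Image

def average_hash(img, hash_size=8):
    """64-bit aHash: shrink to 8x8 grayscale, then threshold each pixel at the mean."""
    small = np.asarray(img.convert("L").resize((hash_size, hash_size)), dtype=np.float64)
    return (small > small.mean()).flatten()

def perturb_until_match(src, target_hash, max_iters=200_000):
    """Blind hill climb: keep random patch nudges that reduce the Hamming distance."""
    best = np.asarray(src.convert("RGB"), dtype=np.int16)
    best_dist = int(np.count_nonzero(average_hash(src) != target_hash))
    rng = np.random.default_rng(0)
    for _ in range(max_iters):
        if best_dist == 0:
            break                                     # hashes now collide exactly
        trial = best.copy()
        y = rng.integers(0, trial.shape[0] - 8)       # pick a small random patch...
        x = rng.integers(0, trial.shape[1] - 8)
        trial[y:y+8, x:x+8] += rng.integers(-12, 13)  # ...and nudge it slightly
        trial = np.clip(trial, 0, 255)
        dist = int(np.count_nonzero(
            average_hash(Image.fromarray(trial.astype(np.uint8))) != target_hash))
        if dist <= best_dist:                         # keep changes that get closer
            best, best_dist = trial, dist
    return Image.fromarray(best.astype(np.uint8))

# Usage: make innocuous.jpg's hash collide with target.jpg's hash.
# target = average_hash(Image.open("target.jpg"))
# collided = perturb_until_match(Image.open("innocuous.jpg"), target)
```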
Isn't this a problem generally with laws against entire classes of media?
Planting a child abuse image (or even simply claiming to have found one) is trivial. Even robust security measures like FDE don't prevent a criminal thumb-drive from appearing.
I think we probably need to envision a future in which there is simply no such concept under law as an illegal number.
Why would they want that?
Given how pedophiles are treated in prison, that might be longer than your expected lifespan if you are sent to prison because of this. Of course I'm taking it to the dark place, but you kinda gotta, you know?
Someone else could find a way to generate every possible false-positive mutation of Goatse/Lemonparty/TubGirl/etc. Then some poor Apple employee has to check those out.
This tech is just ripe for all kinds of abuse.
I'm surprised this hasn't gotten more traction outside of tech news media.
Remember the mass celebrity "hacking" of iCloud accounts a few years ago? I wonder how those celebrities would feel knowing that some of their photos may be falsely flagged and shown to other people. And that we expect those humans to act like robots and not sell or leak the photos, etc.
Again, I'm surprised we haven't seen a far bigger outcry in the general news media about this yet, but I'm glad to see a lot of articles shining light on how easy it is for false positives and hash collisions to occur, especially at the scale of all iCloud photos.
It doesn't necessarily mean that all flagged photos would be of explicit content, but even if they're not, is Apple telling us that we should have no expectation of privacy for any photos uploaded to iCloud, after running so many marketing campaigns on privacy? The on-device scanning is also done under the guise of privacy, so they wouldn't have to decrypt the photos on their iCloud servers with the keys they hold (and maybe also save some processing power).
But the fact that there is no legitimate reason according to the system's design doesn't prevent there from being an illegitimate reason: Apple's "review" undermines your legal due process protection against warrantless search.
See US v. Ackerman (2016): the appeals court ruled that when AOL forwarded an email, whose attachment's hash matched the NCMEC database, to law enforcement without anyone looking at it, and law enforcement then looked at the email without obtaining a warrant, that was an unlawful search. Had AOL looked at it first (which they can do by virtue of your agreement with them), gone "yep, that's child porn," and reported it, it wouldn't have been an unlawful search.
I'm going to bet the algorithm will struggle the most with exactly the pictures you don't want reviewers or the public to see.
Edit: I see now it's not about copyright, but still very disturbing.
Still, I know it has been floating around in the wild. I recently came across it on Discord when I attempted to send an ancient image, from the 4chan of old, to a friend; it mysteriously wouldn't send. Saved it as a PNG, no dice. This got me interested. I stripped the EXIF data off the original JPEG. I resized it slightly. I trimmed some edges. I adjusted colors. I did a one-degree rotation. Only after a reasonably complete combination of those changes would the image make it through. How interesting!
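Out of the same curiosity, one can script this experiment. Here is a sketch assuming the third-party Python packages Pillow and imagehash (pip install pillow imagehash); phash is just a guess at the style of matcher Discord runs, and suspect.jpg is a placeholder filename:

```python
from PIL import Image
import imagehash

original = Image.open("suspect.jpg")
base = imagehash.phash(original)

edits = {
    "resize 90%":   original.resize((int(original.width * 0.9), int(original.height * 0.9))),
    "crop edges":   original.crop((5, 5, original.width - 5, original.height - 5)),
    "rotate 1 deg": original.rotate(1, expand=False),
    "grayscale":    original.convert("L").convert("RGB"),
}

for name, edited in edits.items():
    # ImageHash objects subtract to a Hamming distance (0 = identical hash).
    print(f"{name:12s} Hamming distance: {imagehash.phash(edited) - base}")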
I just don't know how well this little venture of Apple's will scale, and I wonder if it won't end up being easy enough to bypass in a variety of ways. I think the tradeoff will accomplish very little, as stated, but it is probably a glorious opportunity for black-suited goons of state agencies across the globe.
We're going to find out in a big big way soon.
* The image is of the back half of a Sphynx cat atop a CRT. From the angle of the dangle, the presumably cold, man-made feline is draping his unexpectedly large testicles across the similarly man-made device to warm them, suggesting that people create problems and also their solutions, or that, in the Gibsonian sense, the street finds its own uses for things. I assume that the image was blacklisted, although I will allow for the somewhat baffling concept of a highly-specialized scrotal matching neural-net that overreached a bit or a byte on species, genus, family, and order.
>> in the Gibsonian sense
Nice turn of phrase. Can't wait to see what the street's use cases are going to be for this wonderful new spyware. Something nasty, no doubt.
That’s very different from authorities taking a sneak peek into my stuff.
That’s like the theological concept of always being watched.
It starts with child pornography but the technology is indifferent towards it, it can be anything.
It's always about the children because we all want to save the children. Soon they will start asking you to start saving your country. Depending on your location, they will start checking for sins against religion, race, family values, or political activities.
I bet you, after the next election in the US your device will be reporting you for spreading far right or deep state lies, depending on who wins.
I'm a big Apple fanboy, but I'm not going to carry a snitch in my pocket. That's "U2 album in everyone's iTunes library" blunder-level creepy, with the only difference that this one is actually, truly creepy.
In my case, my iPhone is going to be snitching me to Boris and Erdogan, in your case it could be Macron, Bolsonaro, Biden, Trump etc.
That's a no-go for me; you can decide for yourself.
What changed, really?
Think about all the startups that can't deploy software without Apple taxing away most of their margin, the Sign in with Apple mandate that prevents us from having a real customer relationship, and the horrible support, libraries, constant changes, etc. It's hostile! It's unfair that the DOJ hasn't done anything about it.
A modern startup cannot succeed without Apple's blessing. To do so would be giving up 50% of the American market. When you're struggling to grow and find traction, you can't do that. It's so wildly unfair that they "own" 50+% of computer users.
Think of all the device owners that don't have the money to pay Apple for new devices or upgrades. They can't repair them themselves. Apple's products are meant to go into the trash and be replaced with new models.
We want to sidestep these shenanigans and use our own devices? Load our own cloud software? We can't! Apple, from the moment Jobs decreed, was fully owned property. No alternative browsers, no scripting or runtimes. No computing outside the lines. You're just renting.
This company is so awful.
Please call your representatives and ask them to break up the biggest and most dangerous monopoly in the world.
> I bet you, after the next election in the US your device will be reporting you for spreading far right or deep state lies, depending on who wins.
The US is becoming less stable, sure [1], but there is still a very strong culture of free speech, particularly political speech. I put the odds that your device will be reporting on that within 4 years as approximately 0. The extent that you see any interference with speech today is corporations choosing not to repeat certain speech to the public. Not them even looking to scan collections of files about it, not them reporting it to the government, and the government certainly wouldn't be interested if they tried.
The odds that it's reporting other crimes than child porn though, say, copyright infringement. That strikes me as not-so-low.
[1] I agree with this so much that it's part of why I just quit a job that would have required me to move to the US.
In my opinion, that culture has been rapidly dying, chipped away by a very sizable and growing chunk that doesn't value it at all, seeing it only as a legal technicality to be sidestepped.
How can you seriously believe that these corporations (which are not subject to the First Amendment and cannot be challenged in court) won't extend and abuse this technology to tackle "domestic extremism" by broadly covering political views?
Free speech didn't seem so important recently when the SJW crowd started demanding censorship of certain words because they're offensive.
>That’s very different from authorities taking a sneak peek into my stuff.
To be very blunt:
- The opt out of this is to not use iCloud Photos.
- If you _currently_ use iCloud Photos, your photos are _already_ hash compared.
- Thus the existing opt out is to... not use iCloud Photos.
The exact same outcome can happen regardless of whether it's done on or off device. iCloud has _always_ been a known vector for authorities to peek.
>I’m big Apple fanboy, but I’m not going to carry a snitch in my pocket.
If you use iCloud, you arguably already do.
Today it's only geared toward iCloud and CSAM. How many lines of code do you think it will take before it scans all your local pictures?
How hard do you think it will be for an authoritarian regime like China, that Apple bends over backwards to please, to start including other hashes that are not CSAM?
iCloud is opt-out. They can scan server-side like everyone does. Your device is your device, and it now contains, deeply embedded into it, the ability to perform actions that are not under your control and can silently report you directly to the authorities.
If you don't see a deep change there, I don't know what to say.
I live in a country that is getting more authoritarian by the day, where people are sent to prison (some for life) for criticizing the government, sometimes just for chanting or printing a slogan.
This is the kind of crap that makes me extremely angry at Apple. Under the guise of something no-one can genuinely be against (think of the children!), they have now included a pretty generic snitch into your phone and made everyone accept it.
Apple has announced they'll be doing this check?
What exactly do you think is the same as before?
>The exact same outcome can happen regardless of whether it's done on or off device. iCloud has _always_ been a known vector for authorities to peek.
That's neither here nor there. It's one thing to peek selectively with a warrant of sorts, and another to (a) peek automatically at everybody, (b) with a false-positive-prone technique, especially since the mere accusation from a false match can be disastrous for a person, even if they are eventually proven innocent...
This is different. This is your own device doing that thing, out of your control. Alright sure, it's doing the same thing as the other server did and under the same circumstances* so maybe functionally nothing has changed. But the philosophical difference is quite huge between somebody else's server watching over what you upload and your own device doing it.
I'm struggling to come up with a good analogy. The closest I can really think of is the difference between a reasonably trusted work friend and your own family member reporting you to the authorities for suspicious behavior in your workplace and home respectively. The end result is the same, but I suspect few people would feel the same about those situations.
* There is no inherent limitation preventing your own device from checking more than the photos you upload to iCloud. There is such a limitation for the iCloud servers. A very reasonable and potentially functional difference is the ability for this surveillance to be easily expanded beyond iCloud uploads in the future.
Now what's going to happen instead is the computer will report me to its real masters: corporations, governments. How is this acceptable in any way?
Wasn't yesterday's version of this story about how Apple is implementing this as a client-side service on iPhones?
https://news.ycombinator.com/item?id=28068741
I don't know if the implication there is "don't use the stock Apple camera app and photo albums", or "don't store any images on your iPhone any more" if they are scanning files from other apps for perceptual hash matches as well…
I wonder how this will hold up against the 5th Amendment (in the US), which covers self-incrimination?
I have a large user base on iOS. Considering a blackout protest.
as this article points out, the positive matches will still need an observer to confirm what it is and is not.
lastly, the very reason you have this device exposes you to the reality of either accepting a government that regulates these corporate overreaches or accepting private ownership whose profit motive is deeply personal.
you basically have to reverse society or learn to be a hermit, or, more realistically, buy into an improved democratic construct that opts into transparent regulation.
but it sounds more like you want to live in a split-brained world where your paranoia and antigovernment stance invite dark corporate policies to sell you out anyway
> Apple offers technical details, claims 1-in-1 trillion chance of false positives.
There are two ways to read this, but I'm assuming it means, for each scan, there is a 1-in-1 trillion chance of a false positive.
Apple has over 1 billion devices. Assuming ten scans per device per day, you would reach one trillion scans in ~100 days. Okay, but not all the devices will be on the latest iOS, not all are active, etc, etc. But this is all under the assumption those numbers are accurate. I imagine reality will be much worse. And I don't think the police will be very understanding. Maybe you will get off, but you'll be in a huge debt from your legal defense. Or maybe, you'll be in jail, because the police threw the book at you.
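To make the back-of-the-envelope explicit, here is the arithmetic in Python, taking the per-scan reading and the device/scan counts above as assumptions (not Apple's stated per-account figure):

```python
# Taking the commenter's assumptions at face value: a 1e-12 per-scan false
# positive rate (one reading of Apple's claim), 1e9 devices, 10 scans/day each.
per_scan_fp = 1e-12
devices = 1e9
scans_per_day = 10

daily_scans = devices * scans_per_day          # 1e10 scans per day
print("days to one trillion scans:", 1e12 / daily_scans)                  # ~100
print("expected false positives/year:", per_scan_fp * daily_scans * 365)  # ~3.65
```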
People like to complain about the energy wasted mining cryptocurrencies - I wonder how this works out in terms of energy waste? How many people will be caught and arrested by this? Hundreds or thousands? Does it make economic sense for the rest of us to pay an electric tax in the name of scanning other people's phones for this? Can we claim it as a deductible against other taxes?
Cryptocurrency waste is vastly greater. It doesn't compare at all. Crypto wastes as much electricity as a whole country. This will lead to a few more people being employed by Apple to verify flagged images, that's it.
> The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account.
Once you start adding new content from the camera to iCloud, I'd assume the new ML chips in Apple Silicon will be calculating the phashes as part and parcel of everything else they do. So unless you're trying to "recreate" known CP, new photos from the camera really shouldn't need this hashing done to them. Only files that didn't originate on the user's iDevice should qualify. If a CP creator is using an iDevice, their new content won't match existing hashes, so what's that going to do?
So, so many questions. It's similar to, yet different from, mandatory metal detectors and other screening where 99.99% of people are innocent and "merely" inconvenienced versus the number of people any of that screening catches. Does the mere existence of that screening act as a deterrent? That's like asking how many angels can stand on the head of a pin. It's a useless question. The answer can be whatever they want it to be.
I'm sure I'm not the only person with naked pictures of my wife. Do you really want a false positive to result in your intimate moments getting shared around some outsourced boiler room for laughs?
It is very likely that as a result of this, thousands of innocent people will have their most private of images viewed by unaccountable strangers, will be wrongly suspected or even tried and sentenced. This includes children, teenagers, transsexuals, parents and other groups this is allegedly supposed to protect.
The willful ignorance and even pride by the politicians and managers who directed and voted for these measures to be taken disgusts me to the core. They have no idea what they are doing and if they do they are simply plain evil.
It's a (in my mind entirely unconstitutional) slippery slope that can lead to further telecommunications privacy and human rights abuses and limits freedom of expression by its chilling effect.
Devices should exclusively act in the interest of their owners.
I can believe that a couple of false positives would inevitably occur assuming Apple has good intentions (which is not a given), but I'm not seeing how thousands could be wrongfully prosecuted unless Apple weren't using the system as they state they will. At least in the US, I'm not seeing how a conviction can be made on the basis of a perceptual hash alone without the actual CSAM. The courts would still need the actual evidence to prosecute people.

Getting people arrested on a doctored meme that causes a hash collision would at most waste the court's time, and it would only damage the credibility of perceptual hashing systems in future cases. Also, thousands of PhotoDNA false positives being reported in public court cases would only cause Apple's reputation to collapse. They seem to have enough confidence that such an extreme false positive rate is not possible to the point of implementing this change.

And I don't see how just moving the hashing workload to the device fundamentally changes the actual hashing mechanism and increases the chance of wrongful conviction over the current status quo of server-side scanning (assuming that it only applies to images uploaded to iCloud, which could change of course). The proper time to be outraged at the wrongful conviction problem was ten years ago, when the major tech companies started to adopt PhotoDNA.
On the other hand, if we're talking about what the CCP might do, I would completely agree.
Apple says: "The threshold is set to provide an extremely high level of accuracy and ensures less than a one in one trillion chance per year of incorrectly flagging a given account."
What evidence do you have against that statement?
Next, flagged accounts are reviewed by humans. So, yes, there is a minuscule chance a human might see a derivative of some wrongly flagged images. But there is no reason to believe that they "will be wrongly suspected or even tried and sentenced".
these people also have no incentive to find you innocent for innocent photos. If they err on the side of false negatives, they might find themselves at the wrong end of a criminal search ("why didn't you catch this?"), but if they err on the side of false positives, they at worst ruin a random person's life.
That isn't to say CSAM scanning or any other type of dragnet is OK. But I'm not concerned about a perceptual hash ruining someone's life, just as I'm not concerned about a botched millimeter-wave scan ruining someone's life over weapons possession.
I'm not completely convinced that says what you want it to.
Three rules to live by:
1) Always pay your taxes
2) Don’t talk to the police
3) Don’t take photographs with your clothes off
2b) Don't buy phones that talk to the police.
Somehow I doubt we would ever get such transparency, even though it would be the right thing to do in such a situation.
Unless you prefer to live dangerously, of course.
It looks like you've updated your comment to clarify Linux laptop's encrypted hard drive, and I agree with your line of thinking. Modern Windows and Mac OS are effectively cloud operating systems where more or less anything can be pushed at you at any time.
You'd have to have several positive matches against the specific hashes of CSAM from NCMEC before they'd be flagged up for human review, right? Which presumably lowers the threshold of accidental false positives quite a bit?
> According to Apple, a low number of positives (false or not) will not trigger an account to be flagged. But again, at these numbers, I believe you will still get too many situations where an account has multiple photos triggered as a false positive. (Apple says that probability is “1 in 1 trillion” but it is unclear how they arrived at such an estimate.) These cases will be manually reviewed.
At scale, even human classification which ought to be clear will fail, accidentally clicking 'not ok' when they saw something they thought was 'ok'. It will be interesting to see what happens then.
This means that there will be people paid to look at child pornography and probably a lot of private nude pictures as well.
So far as I know some parents still do this. I bet they'd be thrilled having Apple employees look over these.
It’s a “killing floor” type job where you’re limited in how long you’re allowed to do it in a lifetime.
It is clearly a non-trivial project, no other company is doing it, and it will be one of the rare cases of a company doing something not for shareholder value but for "goodwill".
I am really not understanding the reasoning behind this choice.
US law requires any ESP (electronic service provider) to alert NCMEC if they become aware of CSAM on their servers. Apple used to comply with this by scanning images on the server in iCloud photos, and now they’re moving that to the device if that image is about to be uploaded to iCloud photos.
FWIW, the NYT says Apple reported 265 cases to NCMEC last year, and Facebook reported 20.3 million. Google [1] is at 365,319 for July through December.
I’m still struggling to see what has changed here, apart from people realising what’s been happening..
- it’s the same algorithm that Apple has been using, comparing NCMEC-provided hashes against photos
- it’s still only being done on photos that are uploaded to iCloud photos
- it’s now done on-device rather than on-server, which removes a roadblock to future e2e encryption on the server.
Seems the only real difference is perception.
[1] https://transparencyreport.google.com/child-sexual-abuse-mat...
Not saying it will happen, but that's a decent theory as to why: https://daringfireball.net/2021/08/apple_child_safety_initia...
You'd want to look at the particular perceptual hash implementation. There is no reason to expect, without knowing the hash function, that you would end up with tons of collisions at distance 0.
N is usually much bigger than M, since you have the combinatorial pixel explosion. Say images are 8-bit RGB at 256x256; then you have 2^(8x256x256x3) bit combinations. If you have a 256-bit hash, that's only 2^256 values. So there is a factor of 2^(8x256x256x3 - 256) difference between N and M if I did my math right, which is a factor I cannot even calculate without numeric overflow.
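For what it's worth, Python's arbitrary-precision integers can carry out this arithmetic without overflow; this snippet just restates the exponents from the paragraph above:

```python
# Pigeonhole count: how many distinct 256x256 8-bit RGB images map, on
# average, to each 256-bit hash value. Python big ints make this exact.
image_bits = 8 * 256 * 256 * 3      # 1,572,864 bits per image
hash_bits = 256
print(image_bits - hash_bits)       # 1,572,608 -> about 2**1572608 images per hash value
```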
There's also something like the Scale Invariant Feature Transform that would protect against all affine transformations (scale, rotate, translate, skew).
I believe one thing that's done is that whenever any CP is found, the hashes of all images in the "collection" are added to the DB, whether or not they actually contain abuse. So if there are any common transforms of existing images, those also now have their hashes added to the DB. The idea is that a high percentage of hits, even from the benign hashes, indicates the presence of the same "collection".
I want to see a lot more pairs like this!
From https://www.apple.com/child-safety/
"Before an image is stored in iCloud Photos, an on-device matching process is performed for that image against the known CSAM hashes. This matching process is powered by a cryptographic technology called private set intersection, which determines if there is a match without revealing the result. The device creates a cryptographic safety voucher that encodes the match result along with additional encrypted data about the image. This voucher is uploaded to iCloud Photos along with the image."
Elsewhere, it does explain the use of neuralhashes which I take to be the perceptual hash part of it.
I did some work on a similar attempt a while back. I also have a way to store hashes and find similar images. Here's my blog post. I'm currently working on a full site.
So your device has a list of perceptual (non-cryptographic) hashes of its images. Apple has a list of the hashes of known bad images.
The protocol lets them learn which of your hashes are in the “bad” set, without you learning any of the other “bad” hashes, and without Apple learning any of the hashes of your other photos.
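For intuition, here is a toy Python sketch of one classic PSI construction (DDH-style commutative blinding). It is emphatically not Apple's actual protocol, and this naive form leaks information a real protocol would hide; it only shows how two parties can compare blinded hashes without exchanging them in the clear:

```python
import hashlib
import secrets

P = 2**255 - 19                      # a well-known prime; fine for a toy demo

def hash_to_group(item: bytes) -> int:
    return int.from_bytes(hashlib.sha256(item).digest(), "big") % P

def blind(value: int, key: int) -> int:
    return pow(value, key, P)        # exponentiation commutes: (h^a)^b == (h^b)^a

# Each side holds a secret exponent the other never sees.
device_key = secrets.randbelow(P - 2) + 1
server_key = secrets.randbelow(P - 2) + 1

device_items = [b"hashA", b"hashB", b"hashC"]    # e.g. the device's image hashes
server_items = [b"hashB", b"hashD"]              # e.g. the blocklist

# Device blinds its hashes and sends them over; server blinds them a second time.
device_blinded = [blind(hash_to_group(x), device_key) for x in device_items]
double_blinded = {blind(v, server_key) for v in device_blinded}

# Server's own list, blinded in the opposite order; equal items collide anyway.
server_double = {blind(blind(hash_to_group(x), server_key), device_key)
                 for x in server_items}

print("items in common:", len(double_blinded & server_double))   # 1 (hashB)
```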
Is it possible to perform private set intersection where the comparison is inexact? I.e., if you have two cryptographic hashes, private set intersection is well understood. Can you do the same if the hashes are close, but not exactly equal?
If the answer is yes, that could mean you would be able to derive the perceptual hashes of the CSAM, since you're able to find values close to the original and test how far you can drift from it before there's no longer a match.
My interpretation is that they still use some sort of perception-based matching algorithm; they just encrypt the hashes and then use some "zero-knowledge proof" when comparing the locally generated hashes against the list, the result of which would be just that X hashes matched, but not which X.
This way there would be no way to reverse engineer the CSAM hash list or bypass the process by altering key regions of the image.
That means you can't prove an incriminating file was not deleted even if you're the victim of a false positive. So they will suspect you and put you through the whole police investigation routine.
The world in the 1900s:

Librarians: "It is unthinkable that we would ever share a patron's borrowing history!"
Post office employees: "Letters are private, only those commie countries open the mail their citizens send!"
Police officers: "A search warrant from a Judge or probable cause is required before we can search a premises or tap a single, specific phone line!"
The census: "Do you agree to share the full details of your record after 99 years have elapsed?"
The world in the 2000s:
FAANGs: "We know everything about you. Where you go. What you buy. What you read. What you say and to whom. What specific type of taboo pornography you prefer. We'll happily share it with used car salesmen and the hucksters that sell WiFi radiation blockers and healing magnets. Also: Cambridge Analytica, the government, foreign governments, and anyone who asks and can pony up the cash, really. Shh now, I have a quarterly earnings report to finish."
Device manufacturers: "We'll rifle through your photos on a weekly basis, just to see if you've got some banned propaganda. Did I say propaganda? I meant child porn, that's harder to argue with. The algorithm is the same though, and just as the Australian government put uncomfortable information leaks onto the banned CP list, so will your government. No, you can't check the list! You'll have to just trust us."
Search engines: "Tiananmen Square is located in Beijing China. Here's a cute tourist photo. No further information available."
Online Maps: "Tibet (China). Soon: Taiwan (China)."
Media distributors: "We'll go into your home, rifle through your albums, and take the ones we've stopped selling. Oh, not physically of course. No-no-no-no, nothing so barbaric! We'll simply remotely instruct your device to delete anything we no longer want you to watch or listen to. Even if you bought it from somewhere else and uploaded it yourself. It matches a hash, you see? It's got to go!"
Governments: "Scan a barcode so that we can keep a record of your every movement, for public health reasons. Sure, Google and Apple developed a secure, privacy-preserving method to track exposures. We prefer to use our method instead. Did we forget to mention the data retention period? Don't worry about that. Just assume... indefinite."
Is this true? I’d imagine you could generate billions a second without having a collision, although I don’t know much about how these hashes are produced.
It would be cool for an expert to weigh in here.
Kind of off topic, does anyone happen to know of some good software for doing this on a local collection of images? A common sequence of events at my company:
1. We're designing a website for some client. They send us a collection of a zillion photos to pull from. For the page about elephants, we select the perfect elephant photo, which we crop, lightly recolor, compress, and upload.
2. Ten years later, this client sends us a screenshot of the elephant page, and asks if we still have a copy of the original photo.
Obviously, absolutely no one at this point remembers the name of the original photo, and we need to either spend hours searching for it or (depending on our current relationship) nicely explain that we can't help. It would be really great if we could do something like a reverse Google image search, but for a local collection. I know it's possible to license e.g. TinEye, but it's not practical for us as a tiny company. What I really want is an open source solution I can set up myself.
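Not an endorsement, but a minimal self-hosted version of this is surprisingly little code if perceptual hashing is good enough for your use case. A sketch assuming the Python imagehash and Pillow packages; the folder and file names are placeholders:

```python
import os
from PIL import Image
import imagehash

def build_index(folder):
    """Perceptual-hash every image in the folder once; persist this in practice."""
    index = {}
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        try:
            index[path] = imagehash.phash(Image.open(path))
        except OSError:
            pass                                  # skip non-image files
    return index

def find_original(query_path, index, top_n=5):
    q = imagehash.phash(Image.open(query_path))
    # ImageHash subtraction yields the Hamming distance between two hashes.
    return sorted(index.items(), key=lambda kv: kv[1] - q)[:top_n]

index = build_index("client_photos/")
for path, h in find_original("cropped_elephant.png", index):
    print(path)
```

A screenshot of a rendered page usually needs cropping back to just the photo before querying, and heavy recolors will push the distance up, so treat the ranking as a shortlist rather than an oracle.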
We used Digicam for a while, and there were a couple of times it was useful. However, for whatever reason it seemed to be extremely crash-prone, and it frequently couldn't find things it really should have been able to find.
Internet <---> Cisco <---> ASUS router with OpenVPN <---> Network

The Cisco router will block the 17.0.0.0/8 IP address range, and I will use Spotify on all my computers.
Also, Apple could ignore images from the device camera - since those will never match.
This is also in stark contrast to the task faced by photo copyright hunters. They don’t have the luxury of only focusing on those who handle tens of thousands of copyrighted photos. They need to find individual violations because that’s what they are paid to do.
If it does, you could download the wrong zip and instantaneously be over their threshold.
Does this mean Apple had/has CSAM available to generate the hashes?
> on-device matching using a database of known CSAM image hashes provided by NCMEC and other child safety organizations
https://www.apple.com/child-safety/
(Now, I do wonder how secure those third parties are.)
> The simple fact that image data is reduced to a small number of bits leads to collisions and therefore false positives
Our experience with regular hashes suggests this is not the underlying problem. SHA-256 hashes have 256 bits and still there are no known collisions, even with people deliberately trying to find them. SHA-1 has only 160 bits to play with, and it's still hard to find collisions. MD5 collisions are easier to find, but at 128 bits, people still don't come across them by chance.
I think the actual issue is that perceptual hashes tend to be used with this "nearest neighbour" comparison scheme which is clearly needed to compensate for the inexactness of the whole problem.
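One way to see the cost of that scheme: a hash matched within Hamming distance r accepts a whole ball of values rather than a single point. A quick standard-library Python sketch (the r values are illustrative; real thresholds are not public):

```python
from math import comb

bits = 256
for r in (0, 8, 16, 32):
    # Number of 256-bit values within Hamming distance r of a given hash.
    ball = sum(comb(bits, i) for i in range(r + 1))
    print(f"r={r:2d}: one target now matches ~2^{ball.bit_length() - 1} of 2^{bits} values")
```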
These algos work by limiting the color space of the photo, usually to black and white only (not even grayscale), resizing it to a fraction of its original size, and then chopping it into tiles using a fixed-size grid.
This increases the chances of collisions greatly because photos with a similar composition are likely to match on a sufficient number of tiles to flag the photo as a match.
This is why the photo of a woman matched the butterfly image: if you turn the image black and white, resize it to something like 256x256 pixels, and divide it into a grid of, say, 16 tiles, all of a sudden a lot of these tiles can match.
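As a concrete illustration, here is my reconstruction of that tiling scheme in Python; it is not any specific production algorithm, and the 256x256 size and 4x4 grid just mirror the figures above:

```python
import numpy as np
from PIL import Image

def tile_hash(path, size=256, grid=4):
    """Binarize, shrink to size x size, grid into tiles, emit one bit per tile."""
    gray = np.asarray(Image.open(path).convert("L").resize((size, size)),
                      dtype=np.float64)
    bw = gray > gray.mean()                 # crush to pure black and white
    t = size // grid                        # 64x64-pixel tiles, 16 in total
    bits = [bw[r*t:(r+1)*t, c*t:(c+1)*t].mean() > 0.5
            for r in range(grid) for c in range(grid)]
    return np.array(bits)

# Hamming distance over the 16 tile bits; low distance = "similar composition".
# print(np.count_nonzero(tile_hash("woman.jpg") != tile_hash("butterfly.jpg")))
```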
The way it's set up, that's not possible: "Given a user image, the general idea in PSI is to apply the same set of transformations on the image NeuralHash as in the database setup above and do a simple lookup against the blinded known CSAM database. However, the blinding step using the server-side secret is not possible on device because it is unknown to the device. The goal is to run the final step on the server and finish the process on server. This ensures the device doesn’t know the result of the match, but it can encode the result of the on-device match process before uploading to the server." -- https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni... (emphasis mine)
https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
Thanks!
Always fun when unknown strangers get to look at your potentially sensitive photos with probably no notice given to you.
Incidentally, I work in computer vision and handle proprietary images. I would be violating client agreements if I let anyone else have access to them. This is a concern I've had in the past e.g. with Office365 (the gold standard in disregarding privacy) that defaults to sending pictures in word documents to Microsoft servers for captioning, etc. I use a Mac now for work, but if somehow this snooping applies to computers as well I can't keep doing so while respecting the privacy of my clients.
I echo the comment on another post, Apple is an entertainment company, I don't know why we all started using their products for business applications.
I dislike the general idea of iCloud having back doors but I don’t think the criticism in this blog is entirely valid.
Edit: it was pointed out that Apple doesn't have a semantically meaningful classifier, so the blog post's criticism is valid.
How can they say it’s 1 in a trillion? You test the algorithm on a bunch of random negatives, see how many positives you get, and do one division and one multiplication. This isn’t rocket science.
So, while there are many arguments against this program, this isn’t it. It’s also somewhat strange to believe the idea of collisions in hashes of far smaller size than the images they are run on somehow escaped Apple and/or really anyone mildly competent.
To explain things even further, let's say that the perceptual algorithm produces a false positive 1% of the time. That is, 1 in every 100 completely normal pictures is incorrectly matched with some picture in the child pornography database. There's no reason to think (at least none springs to mind; happy to hear suggestions) that a false positive on one image makes a false positive on another image any more likely. Thus, if you have a phone with 1000 pictures on it, and it takes 40 matches to trigger a report, there's less than a 1-in-a-trillion probability of that happening if the pictures are all normal.
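You can check that arithmetic directly with a binomial tail; pure standard-library Python, with the 1% rate and the 40-image threshold taken from the example above:

```python
from math import comb

n, p, threshold = 1000, 0.01, 40   # photos on the phone, per-photo FP rate, report threshold
tail = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(threshold, n + 1))
print(f"P(at least {threshold} false positives) = {tail:.3e}")  # ~1e-13, i.e. below 1e-12
```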
and this: https://www.theverge.com/2017/11/6/16611756/ios-11-bug-lette...