What I like about the standard Google Photos/Dropbox/OneDrive approach is that it's no secret you upload your photos to their computers, where they process them. They process them for useful features, and they process them to catch child abuse. But I clearly understand that I'm uploading from my device to another device, and that other device can process those photos. I'm not a Google Photos customer, mind you (as stated, I prefer services other than Google's), but I understand the premise, the value add, and what they do with my stuff on their computers. It's not my device incriminating me; it's someone else's device that does that, a device I chose to send my things to. I understand that relationship.
I will not accept a relationship with a device I own, sitting on my desk or in my pocket, where it tries to start a process to incriminate me. That's not processing a personal device should be engaging in, even if it starts out gated behind the heavily pushed iCloud Photos (it's technically opt-in), even if the solution is technically sophisticated (it is), and even if there exist definitions of "privacy friendly" under which this approach is more privacy friendly (you can argue that all day long). I just don't want a personal device to do this. If Apple wants to draw the line somewhere other than where I would draw it, then I probably shouldn't support that.
I don't care what happens in the cloud. What bothers me is the precedent that Apple sets by shipping iOS with `scanPhotoForIllegalContent()` and `reportUserToPolice()` functions. This code is working against the user's interests. As of now, these functions only run on photos that have already been iCloud synced, and they only look for CSAM, but they could easily expand this later on by changing a few lines of code or adding to the hash database.
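To make the precedent concrete, here is a toy sketch of what functions like that might look like. It assumes nothing about Apple's real implementation; every type and name below (`Photo`, `loadHashDatabase`, and the two functions themselves) is made up for illustration.

```swift
// Hypothetical sketch only; no real Apple API is used or implied.
struct Photo {
    let perceptualHash: String      // stand-in for a NeuralHash-style fingerprint
    let isBoundForICloud: Bool
}

// Stand-in for the on-device hash database shipped with the OS.
func loadHashDatabase() -> Set<String> {
    return []                       // in a real system, an opaque signed blob
}

let flaggedHashes = loadHashDatabase()

func scanPhotoForIllegalContent(_ photo: Photo) -> Bool {
    // Current stated policy: only photos headed to iCloud are checked.
    guard photo.isBoundForICloud else { return false }
    return flaggedHashes.contains(photo.perceptualHash)
}

func reportUserToPolice(matchedPhotos: [Photo]) {
    // Placeholder for whatever escalation follows confirmed matches.
}

// The worry above: widening what counts as a match is just a matter of
// shipping more entries in `flaggedHashes`; no new capability is required.
```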
To be clear, I think CSAM is absolutely disgusting and I want those in possession of it to be prosecuted. But scanning local photos is crossing a line. (I'm sure they catch most pedos through server-side scanning already anyway.) Besides, the only reason Apple gets away with this is because iOS is closed source. If Google tried to pull this shit on a Pixel phone, you could just install a different ROM.
Using this system they could add multiple redundancies, and they wouldn't need to look at your stuff in the cloud at all until there are multiple positive matches. Even then, the first step is a human checking whether it's an actual match or a false positive.
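Roughly, that redundancy amounts to a match threshold in front of human review. This is only a sketch of the policy, with invented names; Apple's published design used threshold secret sharing so the server cryptographically cannot open anything below the threshold, which this plain counter does not attempt to model.

```swift
// Simplified, hypothetical illustration of "nothing gets looked at until there
// are multiple positive matches, and even then a human reviews first".
struct MatchVoucher {
    let photoID: String
}

let reviewThreshold = 30                 // roughly the figure Apple cited publicly
var pendingVouchers: [MatchVoucher] = []

// Returns the batch to hand to a human reviewer, or nil while still below threshold.
func recordMatch(_ voucher: MatchVoucher) -> [MatchVoucher]? {
    pendingVouchers.append(voucher)
    guard pendingVouchers.count >= reviewThreshold else { return nil }
    // Only now does a person check whether these are real matches or false positives.
    return pendingVouchers
}
```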
This was somehow a huge invasion of privacy, with people competing over who could misunderstand the very simple premise the most.
Fairly sure that most of the worry was that such a system could very easily be changed to do the same to any photo.
And people felt like their phone wasn't theirs and that it could snitch on them. We know that you never truly own your phone anyway, but most people do not view it that way.
Sure, it is technically better than doing that check on a server, but the general public does not currently view it that way.
Personally, I do not like the system because you would be unable to escape it if it started scanning local photos (which I feel is only a matter of time); with Google Drive and the like, you can escape simply by not using them.
In this case, the steelman is that Apple has turned a capability barrier (if your scanning is on the cloud, you simply cannot scan local photos) into a policy barrier (now you can scan all photos; there's just a flag in the software that says you don't).
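In code terms, that policy barrier is just a branch. A minimal, entirely hypothetical sketch (the enum, flag, and function names are all invented):

```swift
// Hypothetical: the capability (a scanner on every device) is fixed in the
// shipped binary; which photos it touches is just a configuration value.
enum ScanPolicy {
    case iCloudBoundOnly    // the announced behaviour
    case allLocalPhotos     // one case-change away
}

var currentPolicy: ScanPolicy = .iCloudBoundOnly

func shouldScan(photoIsBoundForICloud: Bool) -> Bool {
    switch currentPolicy {
    case .iCloudBoundOnly: return photoIsBoundForICloud
    case .allLocalPhotos:  return true
    }
}
```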
'scan -> encrypt -> upload' is in my opinion better than 'upload -> scan'
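For concreteness, a rough sketch of the two orderings; every function here is a made-up stub, not a real API, and the "encryption" is a placeholder.

```swift
import Foundation

// Stand-ins only, purely to show the ordering of steps.
struct Voucher {}
func scanLocally(_ photo: Data) -> Voucher { Voucher() }
func encrypt(_ photo: Data) -> Data { photo }
func upload(_ blob: Data, voucher: Voucher? = nil) {}
func scanOnServer(_ photo: Data) {}

// 'scan -> encrypt -> upload': the device scans first, so the provider only
// ever receives ciphertext plus a match voucher.
func scanEncryptUpload(_ photo: Data) {
    let voucher = scanLocally(photo)
    upload(encrypt(photo), voucher: voucher)
}

// 'upload -> scan': the provider receives a readable photo and scans it
// server-side, which is what most cloud services do today.
func uploadThenScan(_ photo: Data) {
    upload(photo)
    scanOnServer(photo)
}
```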
We don't need to have this discussion again. Please go research the hundreds of thousands of discussions and blog posts about how what Apple is proposing to do is entirely different.
It's this kind of casual fearmongering that stops people from accurately understanding what was actually proposed.
What makes you think Apple doesn't already have the functionality to scan for anything they want on your phone, given that they built a content scanner for the iTunes Match service a decade ago, they ship a photo tagger and analyser that already runs on the phone, and they control everything about the software?
What makes you think Google doesn't have the functionality to scan for anything they want on your phone, or couldn't add it if they wanted to? Do you have the source code for Google Play Services? The internal chip firmware? Have you studied Google's terms and conditions in enough detail to be certain they can't move any such checks client-side without telling you? And they, too, analyse photos on-device and tag their content for normal use.
Why do you trust that Google isn't doing anything snitchy, on their own or on behalf of the authorities, but when Apple announces that they won't, and designs a system that makes it hard for them to do so, you assume they will? Not even quietly, cynically suspecting that they might, but spreading it as fact that they definitely will.
There's no need for this tone. People will disagree, and that's what makes this place great.
When they scan it on my phone, they don't need to scan it in the cloud. They have one less reason to touch my stuff when it's on their servers. One step closer to full E2EE.
Every major cloud provider is already scanning every photo you put up, in most cases without any human review. Your photo gets flagged and it's goodbye account. Next step: the HN front page, to maybe get a human to look at your case.