Apple's proposed solution would, in theory, have reported only cases that were overwhelmingly likely to be already known instances of CSAM (i.e. not pictures of your kids). And if nothing is ever reported, can we say that the person was really surveilled? In some very strict sense, yes, but in terms of outcomes, no.
https://www.texastribune.org/2022/07/26/texas-foster-care-ch...
So how about, instead of mass surveillance, we give victims a good reason to report crimes? Start by not heavily punishing victims who do come forward, and make the foster care system actually capable of raising kids reasonably.
Because, frankly, if we don't do it this way, what's the point? Why would we do anything about abuse without fixing this FIRST? Are we really going to catch sexual abuse, then put the kids into a state-administered system ... where they're sexually, physically, mentally, and financially abused?
WHY would you do that? Obviously that doesn't protect children; it only hides abuse. It protects perpetrators in exchange for letting society pretend the problem is smaller than it is.
Also, you cannot guarantee that Apple/Google will scan only for known instances of CSAM. What if a government orders them to scan for other types of content under the hood, such as documents or who knows what else, because it wants to go after a particular person? (For the sake of example, suppose the target is a journalist who uncovered something shady and the government wants to put them in prison.) You don't have access to either the algorithms or the CSAM hash list they use, so the system could be abused, and "could be abused" usually means that sooner or later it will be.
I agree that the basic idea of on-device CSAM scanning has a lot of issues and should not be implemented. What I think was missing from the discourse was an actual look at what Apple were suggesting, in terms of technical specifics, and why that design was intended not to suffer from these problems.
In terms of outcomes, almost nobody is actually surveilled, as the overall effect is the same as no data having been collected on them in the first place.
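To make the threshold idea concrete, here is a heavily simplified, hypothetical sketch of how threshold-gated matching works. The names, hashes, and threshold value here are all illustrative stand-ins: the real Apple proposal used NeuralHash perceptual hashes plus private set intersection with threshold secret sharing, so the server could learn nothing at all about any account below the threshold. Apple's publicly stated threshold was on the order of 30 matches.

```python
# Hypothetical, simplified illustration of threshold-gated matching.
# The real proposal used cryptographic blinding (threshold secret
# sharing) so that below-threshold accounts revealed nothing; here we
# just model the visible behavior: no review until the threshold.

KNOWN_HASHES = {"a1b2", "c3d4", "e5f6"}  # stand-in for the known-CSAM hash database
THRESHOLD = 3                            # illustrative; Apple's figure was ~30

def match_count(photo_hashes):
    """Count how many of a user's photo hashes appear in the known set."""
    return sum(1 for h in photo_hashes if h in KNOWN_HASHES)

def review_triggered(photo_hashes, threshold=THRESHOLD):
    """Only accounts at or above the threshold are ever surfaced for review."""
    return match_count(photo_hashes) >= threshold

# A library of ordinary photos, even with one coincidental match,
# never crosses the threshold:
print(review_triggered(["1111", "2222", "a1b2"]))          # False
# Only repeated matches against known material do:
print(review_triggered(["a1b2", "c3d4", "e5f6", "9999"]))  # True
```

The point of the threshold is exactly the "outcomes" argument above: for everyone below it, the system's observable behavior is identical to no scanning having happened at all.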
That said, I am personally more comfortable with my country's intelligence agencies hoovering up all my online activity than I am with the likes of Apple. The former is much more accountable than the latter.
What if you were a candidate for political office, pushing opinions that angered large swaths of the Intelligence Community?
The "minuscule fraction" of content is not surfaced by some random roll of the dice - it's the definitionally most interesting content, in the sense that some human went specifically looking for it in the heap of content caught in the dragnet. And it only needs to be interesting to at least one person with the clearance to search for it.
Maybe that means it's a video of a child being abused, and some morally upstanding federal officer is searching for it because anyone possessing it is ethically and legally culpable for the abuse of that child... Or maybe it's a PDF containing evidence of the FBI kidnapping and torturing innocent civilians, and some morally corrupt federal officer is searching for it because anyone possessing it is a liability who needs to be silenced... Or maybe it's a JSON file containing the GPS locations of an individual for the past year, and some emotionally scorned federal contractor is searching for it because that individual is their ex-spouse who has moved on to a new partner.
Are you really prepared to put your faith in the trustworthiness and moral clarity of the population of 100k+ people with federal security clearances?
The scenarios you invented sound very far-fetched to me; if they did happen, I very much doubt the perpetrator would be able to get away with it.