In one of the previous discussions, I saw claims about the NCMEC database containing a lot of harmless pictures misclassified as CSAM. This post confirms it again (ctrl-f "macaque").
It also seems like the PhotoDNA hash algorithm is problematic (to the point where it may be possible to trigger false matches).
Now NCMEC seem to be pushing for the development of a technology that would implant an informant in every single one of our devices (mandating the inclusion of this technology is the logical next step that seems inevitable if Apple launches this).
I'm surprised, and honestly disappointed, that the author seems to still play nice instead of releasing the whitepaper. The NCMEC seems to have decided to position itself directly alongside other Enemies of the Internet, and while I can imagine that they're also doing a lot of important and good work, at this point I don't think they're salvageable and would like to see them disbanded.
Really curious how this will play out. I expect attacks either sabotaging these scanning systems by flooding them with false positives, or exploiting them to get the accounts of your enemies shut down permanently by sending them a picture of a macaque.
I'm the author.
I've worked with different parts of NCMEC for years. (I built the initial FotoForensics service in a few days. Before I wrote the first line of code, I was in phone calls with NCMEC about my reporting requirements.) Over time, this relationship grew. Some years, I was in face-to-face development discussions; other times it has been remote communications. To me, there are different independent parts working inside NCMEC.
The CyberTipline and their internal case staff are absolutely incredible. They see the worst of people in the media and reports that they process. They deal with victims and families. And they remain the kindest and most sincere people I've ever encountered. When possible, I will do anything needed to make their job easier.
The IT group has gone through different iterations, but they are always friendly and responsive. When I can help them, I help them.
When I interact with their legal staff, they are very polite. But I rarely interact with them directly. On occasion, they have also given me some very bad advice. (It might be good for them. But, as my own attorney pointed out, it is generally over-reaching in the requested scope.)
The upper management that I have interacted with are a pain in the ass. If it wasn't for the CyberTipline, related investigators, and the IT staff, I would have walked away (or minimized my interactions) long ago.
Why haven't I made my whitepaper about PhotoDNA public? In my view, who would it help? It would help bad guys avoid detection and it would help malcontents manufacture false positives. The paper won't help NCMEC, ICACs, or related law enforcement. It won't help victims.
About this time, someone usually mocks "it's always about the kids, think about the kids." To those critics: They have not seen the scope of this problem or the long term impact. There is nearly a 1-to-1 relationship between people who deal in CP and people who abuse children. And they rarely victimize just one child. Nearly 1 in 10 children in the US will be sexually abused before the age of 18.
The problem is people use this perfectly legitimate problem to justify anything. They think it's okay to surveil the entire world because children are suffering. There are no limits they won't exceed, no lines they won't cross in the name of protecting children. If you take issue, you're a "screeching minority" that's in their way and should be silenced.
It's extremely tiresome seeing "children, terrorists, drug dealers" mentioned every single time the government wants to erode some fundamental human right. They are the bogeymen of the 21st century. Children in particular are the perfect political weapon to let you get away with anything. Anyone questions you, just destroy their reputation by calling them a pedophile.
What you are saying here kind of sidetracks the actual problem those critics have though, doesn't it? The problem is not the acknowledgement of how bad child abuse is. The problem is whether we can trust the people who claim that everything they do, they do it for the children. The problem is damaged trust.
And I think your last paragraph illustrates why child abuse is such an effective excuse: if someone criticizes your plan, just deflect and go into how bad child abuse really is.
I'm not accusing you of anything here by the way, I like the article and your insight. I just see a huge danger in the critics and those who actually want to help with this problem being pitted against each other by people who see the topic of child abuse as nothing but a convenient carrier for their goals, the ones that have actually heavily damaged a lot of trust.
I think we have seen "think of the kids" used as an excuse for so many things over the years that the pendulum has now swung so far that some of the tech community has begun to think we should do absolutely nothing about this problem. I have even seen people on HN in the last week that are so upset by the privacy implications of this that they start arguing that these images of abuse should be legal since trying to crack down is used as a motive to invade people's privacy.
I don't know what the way forward is here, but we really shouldn't lose sight that there are real kids being hurt in all this. That is incredibly motivating for a lot of people. Too often the tech community's response is that the intangible concept of privacy is more important than the tangible issue of child abuse. That isn't going to be a winning argument among mainstream audiences. We need something better or it is only a matter of time until these type of systems are implemented everywhere.
Making it public would allow the public to scrutinize it, attack it if you will, so that we can get to the bottom of how bad this technology is. Ultimately my sincere guess is that we'd end up with better technology to do this, not some crap system that essentially matches on blurry images. Our government is supposed to be open source; there's really no reason we can't, as a society, figure this out better than some anti-CP cabal with outdated, crufty image tech.
The exploitation of children is a real and heartbreaking ongoing tragedy that forever harms millions of those we most owe protection.
It's because this is so tragic that we in the world of privacy have grown skeptical. The Four Horsemen, invoked consistently around the world to mount assaults against privacy, are child abuse, terrorism, drugs, and money laundering.
Once privacy is defeated, none of those real, compelling, pressing problems get better (well, except money laundering). Lives are not improved. People are not rescued. Well-meaning advocates just move on to the next thing the foes of privacy point them at. Surely it will work this time!
Your heart is in the right place. You have devoted no small part of your time and energy to a deep and genuine blooming of compassion and empathy. It's just perhaps worth considering that others of us might have good reason to be skeptical.
I have to respectfully disagree with that statement and the train of thought, unfortunately. However, you should seek legal counsel before proceeding with anything related -- that's the advice of a stranger on the web.
a) You are not in possession of a mystical secret for some magical curve or lattice. If the "bad guys" have enough incentive to reverse a compression algorithm (which is effectively what this is), they will easily do it, if the money is good enough.
b) If we followed the same mentality in the cryptography community, we would still be using DES or have a broken AES. It is clear from your post that the area needs a serious boost from the community in terms of algorithms, implementations, and architectural solutions. By hiding the dirty laundry we are never going to advance.
Right now this area is not taken as seriously as it should be by the research community -- one reason being the huge privacy and illegal-search-and-seizure concerns, and the disregard for other areas of law, that most of my peers have. Material such as yours can help attract the attention necessary to the problem and showcase how, without the help of the community, we end up with problematic and harmful measures such as what you imply.
c) I assume you already have, but just in case: From what I have read so far and from the implications of the marketing material, I have to advise you to seek legal counsel if you come into possession of this PhotoDNA material they promised, or about your written work. [https://www.justice.gov/criminal-ceos/citizens-guide-us-fede...] Similarly for any trained ML model -- although here it is a disaster in progress still.
There is incredible second-order harm in overreach, because the reaction to it hurts the original cause, too.
If you try to overcorrect, people will overcorrect in response.
The sort of zeal that leads to thoughts like "screeching minority", I think shows carelessness and shortsightedness in the face of very important decisions.
I have no informed opinion on Apple's CSAM tool, beyond deferring to the FotoForensics expert.
> "There is nearly a 1-to-1 relationship between people who deal in CP and people who abuse children. And they rarely victimize just one child. Nearly 1 in 10 children in the US will be sexually abused before the age of 18."
One thing I wondered and have not seen brought up in the discussion so far is this: As far as I understand the perceptual hash solutions work based on existing corpora of abuse material. So, if we improve our detection ability of this existing content, doesn't that increase the pressure on the abusers to produce new content and in consequence hurt even more children? If so, an AI solution that also flags previously unknown abuse material and a lot of human review are probably our only chance. What is your take on this?
Maybe it's the fact that I don't have kids, or that I spend most of my life online with various devices and services. But I would much rather drop the NCMEC, drop any requirement to monitor private messages or photos, reinstate strong privacy guarantees, and instead massively step up monitoring requirements for families. This argument seems like we're using CSAM as a crutch to get at child abusers. If the relationship is really nearly 1:1, it seems more efficient to more closely monitor the groups most likely to be abusers instead.
It even seems to me that going after a database of existing CSAM is counterproductive. With that material, the damage is already done. In a perverse sense, we want as many pedos as possible to buy old CSAM, since this reduces the market for new abuse. It seems to me that initiatives like this do the opposite.
I am not defending CSAM here. But CSAM and child abuse are connected problems, and istm child abuse is the immensely greater one. We should confront child abuse as the first priority, even at the expense of CSAM enforcement, even at the expense of familial privacy. With a rate of 1 in 10, I don't see how not doing so can be ethically defended.
Yes, let's think about the kids, please.
I certainly don't want my children to grow up in an authoritarian surveillance state...
A lot of people allegedly knew about Epstein, and he was completely untouched while connected to high-ranking politicians. You wouldn't have needed surveillance to identify child abuse, and even if it had turned up anything, I doubt anything would have happened. Even with that surveillance implemented, evidence would only be used if politically convenient.
If you are against child abuse, you should add funding to child care. People there will notice more abuse cases when they have the support and funding they need because that means more eyes on a potential problem. An image algorithm is no help.
But it will help all of society by preventing a backdoor from gaining a foothold. If the technology is shown to be ineffective it will help pressure Apple to remove this super dangerous tool.
Once a technical capability is there, governments will force Apple to use it for compliance. It won’t be long before pictures of Winnie the Pooh will be flagged in China.
Would it help activists push for more accurate technology and better management for NCMEC? Would it help technologists come up with better algorithms? I see all kinds of benefits to more openness and accountability here.
Can you give a source for this please? Also by "deal in" do you mean create or view?
I don’t really want to take any specific position on this issue as I don’t have enough context to make a fair assessment of the situation. However, I do want to point out one thing:
By supporting a specific approach to solve a problem, you generally remove some incentives to solve the problem in some other way.
Applied to this situation, I think it would be interesting to ask what other potential solutions to the problem of child abuse exist and how effective they may be compared to things like PhotoDNA. Is working on this the biggest net benefit you could have, or maybe even a net cost, when it comes to solving the problem of child abuse?
I don't have the answer, but I think it's important to look at the really big picture once in a while. What is it that you want to achieve, and is what you are doing really the most effective way of getting that, or just something that is "convenient" or "familiar" given a set of predefined talking points you didn't really question?
All the best to you :)
Please let's not invent distorted statistics or quote mouthpieces who have it in their own interest to scare people as much as possible into rash actions, just like Apple has done.
I've long held a grudge against Microsoft and NCMEC for not providing this technology, because I live in a country where reporting CSAM is ill-advised if you're not a commercial entity and law enforcement seizes first and asks questions later (_months_ later), so you end up just closing down a service if it turns out to be a problem.
This puts it into perspective. PhotoDNA seems fundamentally broken as a hashing technology, but it works just well enough with a huge NDA to keep people from looking too closely at it.
NCMEC needs a new technology partner. It's a shame they picked Apple, who are likely not going to open up this tech.
Without it, it's only a matter of time until small indie web services (think of the Fediverse) just can't exist in a lot of places anymore.
This is making it worse: because people with an agenda use CP as an excuse to force immoral behavior on us, child suffering is now always associated with bullshit. Those actions are hurting the children by suppressing the social will to take it seriously.
Stop hurting the children!
I would suggest that the people NCMEC are most enthusiastic to catch know better than to post CSAM in places using PhotoDNA, particularly in a manner that may implicate them. Perhaps I overestimate them.
2. Assuming all that is true, opaque surveillance and the destruction of free, general computing is much worse than child abuse.
I'd say it's the other way around. If freedom dies this time, it might die for good, followed by an "eternity" of totalitarianism. All the violence that has occurred in human history so far combined is nothing compared to what is at stake.
> Free thought requires free media. Free media requires free technology. We require ethical treatment when we go to read, to write, to listen and to watch. Those are the hallmarks of our politics. We need to keep those politics until we die. Because if we don’t, something else will die. Something so precious that many, many of our fathers and mothers gave their life for it. Something so precious, that we understood it to define what it meant to be human; it will die.
-- Eben Moglen
Does CP being available create victims? I'd say that virtually everybody who suddenly saw CP would not have the inclination to abuse a child. I don't believe that availability of CP is the causal factor to child abuse.
But putting aside that and other extremely important slippery slope arguments for a minute about this issue: have you considered that this project may create economic incentives that are inverse of the ostensible goal of protecting more children from becoming victims?
Consider the following. If it becomes en vogue for cloud data operators to scan their customers' photos for known illegal CP images, then the economic incentives created heavily promote the creation of new, custom CP that isn't present in any database. Like many well-intentioned activists, there's a possibility that you may be contributing more to the problem you care about than actually solving it.
I have no doubt that some cases of sexually abused children must have also existed in the environment where I have grown up as a child, in Europe, but I am quite certain that such cases could not have been significantly more than 1 per thousand children.
If the numbers really are so high in the USA, then something should definitely be done to change this, but spying on all people is certainly not the right solution.
After all, surely 99% of uploaders don't hold the copyright for the image they're uploading, so retaining them is on shaky legal grounds to begin with, if you're the kind of person who wants to be in strict compliance with the law - and at the same time, it forces you and your moderators to handle child porn several times a day (not a job I'd envy) and you say you risk a felony conviction.
Wouldn't it be far simpler, and less legally risky, if you didn't retain the images?
It would help those victims who are falsely accused of having child porn on their phone because of a bug.
It would also help those people who are going to be detained because PhotoDNA will be used by dictatorial states to find incriminating material on their phones, just as they used Pegasus to spy on journalists, political opponents, etc.
An organization is bound to act the way its management wants. If management consists of jerks, the organization they lead will always behave accordingly.
A fish rots from the head down…
These are clearly propaganda statistics. There is absolutely no way 10% of the US child population is molested.
The fact that you not only believe this, but repeat it publicly calls into question your judgement and gullibility.
You’ve been recruited to undermine the privacy and security of hundreds of millions (billions?) of people and indoctrinated with utterly ridiculous statistics. Your argument is essentially “the ends justify the means”, and it could not be any more ethically hollow.
Why did you specify "In a few days"?
It is harder to look in the mirror and see something similar happening in the US. In the US, we superficially see government doing it (Snowden disclosures) and then we see corporations doing it separately (Facebook, Google ad tracking, etc...). However, I think government and tech giants are working closely together to act like China. This feels like another bud of this trend.
- Social credit system
- Loss of financial services (PayPal)
- Loss of access to social media (Facebook, Google, Apple, YouTube)
- Loss of access to travel related services (AirBnB, travel ban)
- Banning of material (Amazon)
- Control of media (owned by just a few major corporations who don't have to make money from the media... Comcast basically has a monopoly on land line Internet, AT&T has a very strong position on mobile, Disney has a strong position in films... they own CNN, NBC, and ABC)... EDIT: And let's not forget Fox
You can say all you want about freedom of association, but the effect is similar in China and the US. You are ostracized from the system.
Tech has lost its neutral carrier status and now is connected into a system that enforces consent. I wonder why? Are we being prepared for an economic war? Is this the natural evolution of power seeking power? Is this just a cycle of authoritarianism and liberalism?
P.S. I don't think a groundswell outcry leading to cancellation means much. I am much more concerned when corporations do it. I just think they are much, much more powerful.
Child sex abuse seems to have become the excuse to create these heavy handed policies to terminate accounts with no recourse by Google and scanning locally stored photos by Apple. Even receiving a cartoon depicting child sex abuse can get you in trouble.
Hopefully this will be a catalyst to reform the law.
Stooping to that level is what they want. CNN: "'privacy activists' have released a tool to allow the spread of CP"
>shadowy government affiliated agency abuses its role of protecting children to install malware on a billion devices
Being a decent enough human, I made many reports to NCMEC. Human-verified (by me) reports.
Never once did I hear back. Not even some weird auto-reply. Needless to say, I have zero faith in that agency. Fitting they'd get others to do the work.
Would it help anything? Apple isn't using PhotoDNA, so proving PhotoDNA is bad would just be met with "we don't use that".
A post here some days ago (since removed) linked to a Google Drive containing generated images (which displayed nonsense), the hashes of which matched those of genuine problem images.
https://9to5mac.com/2021/08/06/apple-internal-memo-icloud-ph...
It does say "We know that the days to come will be filled with the screeching voices of the minority."
"Due to how Apple handles cryptography (for your privacy), it is very hard (if not impossible) for them to access content in your iCloud account. Your content is encrypted in their cloud, and they don't have access. If Apple wants to crack down on CSAM, then they have to do it on your Apple device"
I do not believe this is true. Maybe one day it will be true and Apple is planning for it, but right now iCloud service data is encrypted in the sense that they are stored encrypted at rest and in transit, however Apple holds the keys. We know this given that iCloud backups have been surrendered to authorities, and of course you can log into the web variants to view your photos, calendar, etc. Not to mention that Apple has purportedly been doing the same hash checking on their side for a couple of years.
Thus far there has been no compelling answer as to why Apple needs to do this on device.
Presumably to implement E2E encryption, while at the same time helping the NCMEC to push for legislation to make it illegal to offer E2E encryption without this backdoor.
Apple users would be slightly better off than the status quo, but worse off than if Apple simply implemented real E2E without backdoors, and everyone else's privacy will be impacted by the backdoors that the NCMEC will likely push.
It isn’t a back door to E2E encryption. It can’t even be used to search for a specific image on a person’s device.
It could be used possibly to find a collection of images that are not CSAM but are disliked by the state, assuming Apple is willing to enter into a conspiracy with NCMEC.
> [Revised; thanks CW!] Apple's iCloud service encrypts all data, but Apple has the decryption keys and can use them if there is a warrant. However, nothing in the iCloud terms of service grants Apple access to your pictures for use in research projects, such as developing a CSAM scanner. (Apple can deploy new beta features, but Apple cannot arbitrarily use your data.) In effect, they don't have access to your content for testing their CSAM system.
> If Apple wants to crack down on CSAM, then they have to do it on your Apple device.
(which also doesn't really make sense; if the iCloud ToS don't grant Apple the necessary rights to do CSAM scanning there, they could just revise it. However, I think they probably have the rights they need already)
"Security and Fraud Prevention. To protect individuals, employees, and Apple and for loss prevention and to prevent fraud, including to protect individuals, employees, and Apple for the benefit of all our users, and prescreening or scanning uploaded content for potentially illegal content, including child sexual exploitation material."
https://www.apple.com/legal/privacy/en-ww/
Under "Apple's Use of Personal Data". They had that in there since at least 2019.
Add that an Apple executive told congress two years ago that Apple scans iCloud data for CSAM.
I also question the value of E2E if there's an arbitrary scanner that can send back the unencrypted files when it finds a match. If Apple's servers control the DB with "hashes" to match, then is it all that different from Apple's servers holding the decryption keys?
Sure e2e still prevents routine large scale surveillance but at the end of the day if apple (or someone that forced apple) wants your data, they’ll get it.
That's my suspicion too, but has it actually been confirmed?
"You scratch my back and I scratch yours". Apple doesn't want to go through an antitrust lawsuit that will kill their money printer so they kowtow with these favors.
It doesn't matter anyway, end to end cryptography is meaningless if someone you don't trust owns one of the ends (and in this case, Apple owns both.)
The CPPA had bans on virtual child porn (e.g. using look-alike adult actresses or CGI), that was overturned by SCOTUS, and then Congress responded with the PROTECT act which tightened up those provisions. These laws on possession are practically unenforceable with modern technology, peer to peer file sharing, onion routing, and encrypted hard drives.
Thus, in order to make them enforceable, the government has to put surveillance at all egress and ingress points of our private/secure enclaves, whether it's at the point of storing it locally or the point of uploading it to the cloud.
While I agree with the goal of eliminating child porn, should it come at the cost of an omnipresent government surveillance system everywhere? One that could be used for future laws that restrict other forms of content? How about anti-vax imagery? Anti-Semitic imagery? And with other governments of the world watching, especially authoritarian governments, how long until China, which had a similar system with Jingwang Weishi (https://en.wikipedia.org/wiki/Jingwang_Weishi) starts asking: hey, can you extend this to Falun Gong, Islamic, Hong Kong resistance, and Tiananmen square imagery? What if Thailand passes a law that requires Apple to scan for images insulting to the Thai Monarch, does Apple comply?
This is a very bad precedent. I liked the Apple that said no to the FBI instead of installing backdoors. I'd prefer if Apple get fined, and battle all the way to the Supreme Court to resist this.
"The trouble with fighting for human freedom is that one spends most of one's time defending scoundrels. For it is against scoundrels that oppressive laws are first aimed, and oppression must be stopped at the beginning if it is to be stopped at all." — HL Mencken
I have objections to this tool and have many of the same concerns expressed in these comments, but your example seems like a bit of a corner case compared to the broader "government panopticon" problem.
CP is rape porn. People who produce or knowingly distribute actual rape porn (of children or adults) should be going to jail for 25+ years. I would not rule out life for extreme cases that also involve other forms of abuse.
Yet as far as I know this isn’t what is being pushed for. Instead we get a dragnet to try to catch low level offenders who will probably barely see prison because sex crimes are just not punished much in our society.
Secondly… look into some of the people who have set up fake CP honey pots. These have included both vigilantes and security researchers trying to get a sense of how much CP is on Tor. Turns out most of these people are imbeciles and catching them is easy. Why don’t police do this more? Because… well… as I said sex crimes are just not a priority.
I really feel like the whole argument is in bad faith. If they really cared about the children there are so many other things that would be both easier and more effective.
> Levin was ultimately nabbed conversing with police officers posing as single moms. He encouraged them to sexually abuse their kids and in some cases shared photos. [1]
> In one case, Levin sent photographs to a New Zealand police officer, one showing a “close-up of the face of a crying child, her face smeared with black makeup.” Levin suggested to her the image was “hot,” according to parole board documents. Another photo he sent showed a young female bound and leashed, with a gag in her mouth and Levin commented, “Mmm, so hot to imagine a mother doing that to her girl to please her lover.” [1]
> On May 29, 2015, he was sentenced to three years in prison. He only spent 3 months of his sentence in jail before being paroled [2]
[1] https://torontosun.com/2017/10/07/ex-deputy-education-minist... [2] https://en.wikipedia.org/wiki/Benjamin_Levin_(academic)
Lots of rape porn is produced by women, for women. Typically novels, and sometimes graphic novels.
It’s legal, and definitely not something that takes with it a 25+ year sentence.
You can literally find rape porn on Amazon bookshelves. Literally.
The main problem is that Apple has backdoored my device.
More types of bad images or other files will be scanned, since now Apple does not have plausible deniability to defend against any of the government's requests.
In the future, a false (?) positive that happened to be of a political file that crept into the list could pinpoint people for a future dictator wannabe.
It’s always about the children or terrorism.
What it looks like to me is that Apple is planning on releasing end-to-end encryption for iCloud. But they know that whenever E2EE comes up, people get mad that terrorists, child molesters, and mass shooters can hide their data and communications. Hell, they've been painted as the villain when they say they can't unlock iPhones for the FBI. This heads off those concerns for the most common out of those crimes.
> Think of it this way: Your landlord owns your property, but in the United States, he cannot enter any time he wants. In order to enter, the landlord must have permission, give prior notice, or have cause. Any other reason is trespassing. Moreover, if the landlord takes anything, then it's theft. Apple's license agreement says that they own the operating system, but that doesn't give them permission to search whenever they want or to take content.
This viewpoint is like thanking your landlord for warning you that they are going to enter your home and root through your private items, all in the name of some greater good. Let's not spin it as if the landlord is doing us a favor in this scenario.
This is Gruber's optimistic take on it as well. If so, why not make both changes at once? Given that they've walked back E2EE on iCloud before, I'm not holding my breath.
But in that case it would much more likely be a crime, and it would certainly cost them a tremendous amount of goodwill.
Your personal computing device is a trusted agent. You cannot use the internet without it, and esp. in lockdown you likely can't realistically live your life without use of the internet. You share with it your most private information, more so even than you do with your other trusted agents like your doctor or lawyers (whom you likely communicate with using the device). Its operation is opaque to you: you're just forced to trust it. As such your device ethically owes you a duty to act in your best interest, to the greatest extent allowed by the law. -- not unlike your lawyers obligation to act in your interest.
Apple is reprogramming customer devices, against the will of many users (presumably at the cost of receiving necessary fixes and security updates if you decline) to make it betray that trust and compromise the confidentiality of the device's user/owner.
The fact that Apple is doing it openly makes it worse in the sense that it undermines your legal recourse for the betrayal. The only recourse people have is the one you see them exercising in this thread: Complaining about it in public and encouraging people to abandon apple products.
E2EE should have been standard a decade ago, certainly since the Snowden revelations. No doubt Apple seeks to gain a commercial advantage by simultaneously improving their service while providing some pretextual dismissal of child abuse concerns. But this gain comes at the cost of deploying and normalizing an automated surveillance infrastructure, one which undermines their product's ethical duty to their customers, and one that could be undetectably retasked to enable genocide by being switched to match on images associated with various religions, ethnicities, or political ideologies.
The mechanism doesn’t scan anything except images, and won’t trigger on a single bad image - only a set.
Yes, that set could be something other than child porn, assuming Apple and NCMEC conspire, but this is not a general purpose backdoor.
Isn't that the shtick with Apple though? That they own the devices you rent and you don't have to worry too much about it. They always had the backdoor in place, they used it for software updates. Now they will also use it for another thing.
You didn't need to worry about it because they did a sufficiently good job at making the choices for you. This is a sign that they stopped doing so.
An appropriate metaphor might be a secretary. They can handle a lot of busy work for you so you don't have to worry about it, but they need access to your calendar, mails etc. to do so. This is not an intrusion as long as they work in your favor. If you suddenly find your mails on the desk of your competitor, though, you might reconsider. That, however, does not mean that the whole idea of a secretary is flawed.
I'm glad these issues were addressed in a much more elegant way than I would have put them:
> Apple's technical whitepaper is overly technical -- and yet doesn't give enough information for someone to confirm the implementation. (I cover this type of paper in my blog entry, "Oh Baby, Talk Technical To Me" under "Over-Talk".) In effect, it is a proof by cumbersome notation. This plays to a common fallacy: if it looks really technical, then it must be really good. Similarly, one of Apple's reviewers wrote an entire paper full of mathematical symbols and complex variables. (But the paper looks impressive. Remember kids: a mathematical proof is not the same as a code review.)
> Apple claims that there is a "one in one trillion chance per year of incorrectly flagging a given account". I'm calling bullshit on this.
As a disclaimer, I haven't done the actual math here. This also implies that the risk of your account getting flagged falsely is tightly related to how many images you upload.
> Perhaps Apple is basing their "1 in 1 trillion" estimate on the number of bits in their hash? With cryptographic hashes (MD5, SHA1, etc.), we can use the number of bits to identify the likelihood of a collision. If the odds are "1 in 1 trillion", then it means the algorithm has about 40 bits for the hash. However, counting the bit size for a hash does not work with perceptual hashes.
> With perceptual hashes, the real question is how often do those specific attributes appear in a photo. This isn't the same as looking at the number of bits in the hash. (Two different pictures of cars will have different perceptual hashes. Two different pictures of similar dogs taken at similar angles will have similar hashes. And two different pictures of white walls will be almost identical.)
> With AI-driven perceptual hashes, including algorithms like Apple's NeuralHash, you don't even know the attributes so you cannot directly test the likelihood. The only real solution is to test by passing through a large number of visually different images. But as I mentioned, I don't think Apple has access to 1 trillion pictures.
> What is the real error rate? We don't know. Apple doesn't seem to know. And since they don't know, they appear to have just thrown out a really big number. As far as I can tell, Apple's claim of "1 in 1 trillion" is a baseless estimate. In this regard, Apple has provided misleading support for their algorithm and misleading accuracy rates.
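To put rough numbers on that, here is a back-of-envelope model with made-up parameters (not Apple's published math, since the real per-photo error rate and threshold are unknown): treat each uploaded photo as an independent trial with false-match probability p, and require some threshold of matches before an account gets reviewed.

```python
from math import comb

def p_account_flagged(p, n_photos, threshold):
    """Probability that an account with n_photos innocent photos accumulates
    at least `threshold` false matches, if each photo independently
    false-matches with probability p. An idealized model, not Apple's math."""
    return 1.0 - sum(
        comb(n_photos, k) * p**k * (1 - p) ** (n_photos - k)
        for k in range(threshold)
    )

# Purely illustrative numbers: a per-photo false-match rate of 1 in a million
# and a review threshold of 5 matching photos.
for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} uploads -> {p_account_flagged(1e-6, n, 5):.2e}")
```

Whatever the real parameters are, the account-level odds grow by orders of magnitude with upload volume, so any flat "1 in 1 trillion per account per year" figure has to be assuming some particular number of uploads.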
I'm not sure, after reading the article, who has the more insane system: Apple or NCMEC.
Surely Apple's lawyers have also reviewed the same law, and if it's that clearly defined, how did they justify/explain their approach?
And the line where the law is crossed is fuzzy. Say you use an AI classifier, at what accuracy is validating the results of that AI a crime? 50.000001%?
When I worked in telecom, we had an md5sum database to check for this type of content. If you emailed/SMSed/uploaded a file with the same md5sum, your account was flagged and sent to legal to confirm it.
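The check itself was nothing fancy; from memory it was roughly this (a sketch with made-up names, not the actual production code):

```python
import hashlib

# Hypothetical digest set; the real one was a law-enforcement-supplied database,
# loaded from a feed rather than hard-coded.
KNOWN_BAD_MD5 = set()

def md5_of_file(path, chunk_size=1 << 20):
    """MD5 a file in chunks so large attachments don't need to fit in memory."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

def should_flag_for_legal(path):
    """Exact-match check: it only catches byte-identical copies, which is why
    re-encoded or resized images slip through and perceptual hashes exist."""
    return md5_of_file(path) in KNOWN_BAD_MD5
```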
Also, if the police were involved, the account was burned to DVD in the datacenter, and only a police officer would touch the DVD; no engineer touched or saw the evidence. (Chain of evidence maintained.)
It's probably changed since I haven't worked in telecom in 15 years, but one thing I've read for years is that the feds know who these people are, where they hang out online, and even ran some of the honeypots. The problem is they leave these sites up to catch the ring leaders. The feds are aware, and they have busts almost every month of rings of criminals. Twitter has had accounts reported, and they stay up for years.
I don't think finding the criminals is the problem; it seems like every time this happens, there have been people of interest for years, just not enough law enforcement dedicated to investigating this.
For all the "defund the police" talk, I think moving some police from traffic duty to Internet crimes would have more of an impact on actual cases being closed. Those crimes lead to racketeering and other organized crime anyway.
No, because they’re not identifying content, they’re matching it against a set of already-known CSAM that NCMEC maintains. As you go on to say, telecoms and other companies already do this. Apple just advanced the state of the art when it comes to the security and privacy guarantees involved.
Apple just opened the door for constant searches of your digital devices. If you think it will stop at CSAM you have never read a history book - the single biggest user of UKs camera system originally intended for serious crimes are housing councils checking to see who didn't clean up after their dog.
https://daringfireball.net/2021/08/apple_child_safety_initia...
(No, not arresting parents over bath time.)
> Will Apple actually flatly refuse any and all such demands? If they do, it’s all good. If they don’t, and these features creep into surveillance for things like political dissent, copyright infringement, LGBT imagery, or adult pornography — anything at all beyond irrefutable CSAM — it’ll prove disastrous to Apple’s reputation for privacy protection. The EFF seems to see such slipping down the slope as inevitable.
What seems to be missing from this discussion is that Apple is already doing these scans on the iCloud photos they store. Therefore, the slippery slope scenario is already a threat today. What’s stopping Apple from acquiescing to a government request to scan for political content right now, or in any of the past years iCloud photos has existed? The answer is they claim not to and their customers believe them. Nothing changes when the scanning moves on device, though, as the blog mentions, I suspect this is a precursor to allowing more private data in iCloud backups that Apple cannot decrypt even when ordered to.
Not a lawyer, but I believe this part about legality is inaccurate, because they aren’t copying your photos without notice. The feature is not harvesting suspect photos from a device, it is attaching data to all photos before they are uploaded to Apple’s servers. If you’re not using iCloud Photos, the feature will not be activated. Furthermore, they’re not knowingly transferring CSAM, because the system is designed only to notify them when a certain “threshold” of suspect images has been crossed.
In this way it’s identical in practice to what Google and Facebook are already doing with photos that end up on their servers, they just run the check before the upload instead of after. I certainly have reservations about their technique here, but this argument doesn’t add up to me.
The idea that standard moderation steps are a felony is such a stretch. Almost all the major players have folks doing content screening and management - and yes, this may involve the provider transmitting / copying etc. images that are then flagged and moderated away.
The idea that this is a felony is ridiculous.
The other piece is that folks are making a lot of assumptions about how this works, then claiming things are felonies.
Does it not strain credulity slightly that Apple, with its team of lawyers, has decided, instead of blocking CSAM, to commit CSAM felonies? And the govt is going to bust them for this? Really? They are doing what the govt wants and using automation to drive down the number of images someone will look at and what even might get transferred to Apple's servers in the first place.
And when they’re notified, Apple manually checks (a modified but legible version of) the images.
Is there even any evidence that arresting people with the wrong bit pattern on the computer helps stop child rape/trafficking? If so, why aren't we also going after people who go to gore websites? There's tons of awful material out there easily accessible of people getting stabbed, shot, murdered, butchered, etc. Do we not want to find people who are keeping collections of this material on their computers? And if so, what about people who really like graphic horror movies like Saw or Hostel? Obviously it's not real violence, but it's definitely close enough, and if you like watching that stuff, maybe you should be on a list? If your neighbor to the left of you has videos of naked children, and your neighbor to the right has videos of people getting stabbed and tortured to death, only one should be arrested and put on a list?
This is all not even taking into account that someone might not even realize they are in possession of CP because someone else put it on their device. I've heard there's tons of services marketing on the dark net where you pay someone $X00 in bitcoin and they remotely upload CP to any target's computer.
It seems like we are going down a very scary and dangerous path.
Why is it like this? Why are we not jailing people who enjoy watching gore?
[1]http://www.princeton.edu/~sociolog/pdf/asmith1.pdf
[2]https://davetannenbaum.github.io/documents/Implicit%20Purita...
[3]https://lib.ugent.be/fulltxt/RUG01/002/478/832/RUG01-0024788...
We went through this 20+ years ago when US companies then couldn't export "strong" encryption (being stronger than 40 bits if you can believe that). Even at the time that was ridiculously low.
We then moved onto cryptographic back doors, which seem like a good idea but aren't for the obvious reason that if a backdoor exists, it will be exploited by someone you didn't intend or used by an authorized party in an unintended way (parallel construction anyone?).
So these photos exist on Apple servers but what they're proposing, if I understand it correctly, is that that data will no longer be protected on their servers. That is, human review will be required in some cases. By definition that means the data can be decrypted. Of course it'll be by (or intended to be by) authorized individuals using a secured, audited system.
But a backdoor now exists.
Also, what controls exist on those who have to review the material? What if it's a nude photo of an adult celebrity? How confident are we that someone can't take a snap of that on their own phone and sell it or distribute it online? It doesn't have to be a celebrity either of course.
Here's another issue: in some jurisdictions it's technically a case of distributing CSAM to have a naked photo of yourself (if you're underage) on your own phone. It's just another overly broad, badly written statute thrown together in the hysteria of "won't anybody think of the children?" but it's still a problem.
Will Apple's system identify such photos and lead to people getting prosecuted for their own photos?
What's next after this? Uploading your browsing history to see if you visit any known CSAM trafficking sites or view any such material?
This needs to be killed.
For example, the main system in discussion never sends the image to Apple, only a "visual proxy", and furthermore, it only aims to identify known (previously cataloged) CSAM.
There's a [good primer of this on Daring Fireball](https://daringfireball.net/2021/08/apple_child_safety_initia...)
Apple has always had the decryption keys for encrypted photos stored in iCloud, so this isn't new. They never claimed that your photos were end-to-end encrypted. I'm not sure how this is a "backdoor" unless you think there's a risk of either something like AES getting broken or Apple storing the keys in a way that's insecure, both of which seem unlikely to me.
>Also, what controls exist on those who have to review the material? What if it's a nude photo of an adult celebrity? How confident are we that someone can't take a snap of that on their own phone and sell it or distribute it online? It doesn't have to be a celebrity either of course.
I'm equally interested in the review process. But while perceptual hash collisions are possible, it seems unlikely that multiple random nude photos on the same device would almost exactly match known CSAM content, which is the threshold for Apple reviewing the content.
No, no, no. As has been said a billion times by now, this system matches copies of specific CSAM photographs in the NCMEC’s database.
Right now Apple’s biggest unhappy user is the DOJ. As it stands with the legislation coming down the pipe and both previous administrations building on a keenness to ‘get something done’ about big tech, Apple will do as they’ve done in China and ‘obey the laws in each jurisdiction.’
Right now there are a lot of unwritten laws that say Apple better play right or lose quite a bit more —
So, how it’s getting done is a side show.
That said, it wasn’t long ago that they stood toe to toe with the FBI -- but there also weren’t wonderfully strong ‘sanctions’ on the horizon.
Then they wouldn't be the governing body. By definition the governing body does not obey the people; they govern the people.
>As noted, Apple says that they will scan your Apple device for CSAM material. If they find something that they think matches, then they will send it to Apple. The problem is that you don't know which pictures will be sent to Apple.
It's iCloud Photos. Apple has explicitly said it's iCloud photos. If it's being synced to iCloud Photos, you know it's getting scanned one way or another (server side, currently, or client side, going forward).
It notes privacy issues, but... iCloud syncs by default. You wouldn't do the kind of work they're talking about (e.g, investigation) and store that kind of material where it could be synced to a server to begin with.
Everyone keeps proclaiming that Apple is scanning your entire device, but that's not what's happening with this change. It's not even comparable to A/V in this respect - it would be a very different story if that was the case. The wording and explanation matters.
Now that the technology is on board the device, how many lines of code do you think it will take to scan the full photo roll?
Do you think that this ability will not tempt LEAs, lawmakers, and governments to push for that ever-so-small change to the code, either for blanket monitoring (see if China is not tempted, using their own database of "illegal" content) or for targeted monitoring (some specific users, with or without valid court orders)?
The main issue is that the wall has been breached: monitoring data that was otherwise only on-device is now possible with little to no change as the feature is now embedded in the OS.
You can argue we're not there yet and can trust Apple to do the right thing and that the Rule of Law will protect citizens against abuse, but that's a big step into a worrying trend, and not all countries follow the Rule of Law and have checks and balances to avoid misuse. Don't forget that Apple abides by the laws of countries where it sells its devices. That means they will (forced or not) do what they are told.
Yes, which is what I was saying in my comment. If or when it comes to Apple changing this then I would agree it's a battle worth fighting, but that is not what is happening here and that is not what I was correcting in this article itself.
>The main issue is that the wall has been breached:
The wall was breached when we opted to run proprietary OS systems. You have zero clue what is going on in that OS and whether it's reporting; you have to trust the vendor on some level and Apple is being fairly transparent here. I would be far more worried if they did this without saying anything at all.
The only change necessary to scan other files is changing a path and that's configuration that could even be silently done per-device.
It's a proprietary OS. This literally could exist already and you would have zero clue.
It is insane that using perceptual hashes is likely illegal, since the hashes are actually somewhat reversible and so possession of the hash is itself a criminal offence. It just shows how twisted up in itself the law is in this area.
One independent image analysis service should not be beating reporting rates of major service providers. And NCMEC should not be acting like detection is a trade secret. Wider detection and reporting is the goal.
And the law as setup prevents developing detection methods. You cannot legally check the results of your detection (which Apple are doing), as that involves transmitting the content to someone other than the NCMEC!
Yes, it's a "failure" of a "private" Non(lol, technically, wink wink) Governmental Organization who works extremely closely with the FBI to put their camel-shaped nose under the very tent that the FBI happens to have been trying to breach for 20+ years.
Come on. It's beyond gullibility, at this point, to believe that NCMEC isn't an arm of the Feeb. Specifically, it's an arm that isn't required to comply with FOIA requests, which is particularly convenient.
Two years. At the current rate, you have approximately two years until the Feeb have full access to your iDevice. Though, I will admit, Apple's development of the SEP, their high-priced bug bounties, and their convincing play-acting at defying the FBI after the San Bernardino case definitely had me fooled.
We probably should have been more keen after they failed to close the bugs that GreyKey et. al. exploited.
But now we know. Everything they gave to China, they will give doubly so to their own corporate domicile.
It feels to me like they want to hide their detection algorithms so people don't find out how bad they are.
For me the risk is much more that through some mechanism outside my control real CSAM material becomes present on my device. Whether its a dodgy web site, a spam email, a successful hack attempt or something else like that, I feel like there's a significant chance some day I'll end up with this stuff injected onto my phone without me knowing. So I'm not at all concerned about the technical capacity to accurately match to CP etc. In fact I'm even more worried if its really accurate because then I know when this unfortunate event happens I face a huge risk of being immediately flagged before I even know about the content and then spending years extricating myself from a ruined reputation and a legal system that treats evidence like this with far more trust than it should have.
I also have notifications off on it and check it when I need to.
All this needs is someone forwarding me something that’s in the DB.
My phone is not mine, nor is the data on it, nor is yours. That’s the real state of computer security today.
All of this is ill-conceived.
The day someone chooses to mass release their worms on iPhone will be a wake up call.
Perhaps this is just the benefit of longevity, but from my POV it was engineers' early adoption and advocacy that made Apple, Google Search, etc. what they are, and it will be engineers' early adoption and advocacy that dethrones these problematic companies from controlling the ecosystem.
Back 20 years ago, before the community filled with $_$ dollar-struck startup founders, software was built by people who wanted to use it rather than sell it. There are still some people doing this now; look at the Matrix network, for instance.
What will it take for a grass-roots software industry to start building privacy-first apps and systems that don't suck, based on decentralised, distributed principles? We have the skills to build highly polished alternatives to these things, but it takes a determination to step away from convenience for a period of time for the sake of privacy.
How bad does it have to get before the dev community realise this? or are we in a frog boiling slowly scenario and it's hopeless?
Namely the “1-in-a-trillion” false positives per account per year is based on the likelihood of multiple photos matching the database (Apple doesn’t say how many are required to trip their manual screening threshold).
0: https://www.law.cornell.edu/uscode/text/18/2252A#:~:text=(d)...
Yes. That analogy only works for Apple's software, not its hardware. In Apple's view they are selling you the experience. So they are more like hotels: you don't own the hotel room, the bed, the TV, or anything inside that room. And in a hotel, they can do room cleaning any time they want.
This is always the terrifying part for me. They will access your personal photos or data without telling you. I'm surprised that this is even legal given all the laws that already exist. Are they immune to the laws stated in the blog?
Also what happens when they launch this in EU, AU, etc with different privacy laws?
Also, this only applies to pictures you upload to iCloud. So, it's not like they're accessing your personal photos without telling you.
[1] https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
Likewise the copyright issue; The user has already sent these files to Apple themselves by enabling iCloud photo library, and Apple are not making any additional copies that I am aware of.
It also says "The problem is that you don't know which pictures will be sent to Apple." - but we do know exactly which pictures will and will not be sent to apple; the ones that are already sent by iCloud Photo library.
[To be clear, I don't like the precedent/slippery slope that this kind of technique might lead towards in the future, but it doesn't seem like all the criticisms of it today are valid]
> If Apple wants to crack down on CSAM, then they have to do it on your Apple device.
I don’t understand… Apple can’t change their TOS but they can install this scanning service on your device?
Why not... has anyone actually successfully sued a company for changing their ToS from under them?
You can buy a SIM card and send images to your enemies/competitors through WhatsApp, and these images automatically get downloaded to the iPhone and potentially uploaded to iCloud.
What precautions are Apple taking against such actions? Or will it be some kind of exploitable implementation where you can easily swat any person you want and let them go to courts to prove their innocence?
I think the article is wrong about this. Or, right-but-situationally-irrelevant. As far as I can tell from Apple's statements, they're doing this only to photos which are being uploaded to iCloud Photos. So, any photo this is happening to is one that you've already asked Apple to copy to their servers.
> In this case, Apple has a very strong reason to believe they are transferring CSAM material, and they are sending it to Apple -- not NCMEC.
I also suspect this is a fuzzy area, and anything legal would depend on when they can actually be said to be certain there's illegal material involved.
Apple's process seems to be: someone has uploaded photos to iCloud and enough of their photos have tripped this system that they get a human review; if the human agrees it's CSAM, they forward it on to law enforcement. There is a chance of false positives, so the human review step seems necessary...
After all, "Apple has hooked up machine learning to automatically report you to the police for child pornograpy with no human review" would have been a much worse news week for Apple. :D
You misunderstand the purpose of the human review by Apple.
The human review is not due to false positives: The system is designed to have an extremely low rate of hits where the entry isn't in the database and the review invades your privacy regardless of who does it.
The human review exists to legitimize an otherwise unlawful search via a loophole.
The US Government (directly, or through organizations that effectively act as its agents, like NCMEC) is barred from searching or inspecting your private communications without a warrant.
Apple, by virtue of your contractual relationship with them, is free to do so, as long as they are not coerced into it by the government. When Apple reviews your communications and finds what it believes to be child porn, it is then required to report it, and because the government is merely repeating a search that Apple already (legally) performed, no warrant is required.
So, Apple "reviews" the hits because, per the courts, if they just sent automated matches without review that wouldn't be sufficient to avoid the need for a warrant.
The extra review step does not exist to protect your privacy: The review itself deprives you of your privacy. The review step exists to suppress your fourth amendment rights.
This is basically fearmongering: say "if you're not a pedo, you have nothing to fear," install the tech on all phones, and then use that tech to find the next WikiLeaks leaker (whoever was the first person with this photo), Trump supporters (just add the Trump-beats-CNN gif to the hashes), anti-China protesters (Winnie the Pooh photos), etc.
This is basically like forcing everyone to "voluntarily" record inside their houses; AI on the camera would then recognise drugs, and only those recordings would be sent to the police.
Why? You can get any false positive rate you want if you don't care about the false negative rate.
It seems likely that that was a design criterion and they just tweaked the thresholds and number of hits required until they got it.
The last analysis on HN about this made the exact same mistake, and it's a pretty obvious one so I'm skeptical about the rest of their analyses.
It is nice to have some actual numbers from this article though about how much CP they report, the usefulness of MD5 hashes, etc.
Edit: reading on, it seems like he just misread. It sounds like he thinks Apple is claiming a one-in-a-trillion chance of a false positive per photo, but Apple is talking about per account, which requires multiple photo hits. The false-positive rate per photo might be 1 in 1,000, but if you need 10 hits then the per-account rate is fine.
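To make that concrete, here's a back-of-the-envelope version of the per-account math. All the numbers are toy assumptions of mine (Apple hasn't published its per-photo rate or its exact threshold):

    from math import comb

    # Toy numbers for illustration only -- Apple has not published its actual
    # per-photo false-positive rate or the exact match threshold.
    p = 1e-3   # assumed chance that one innocent photo falsely matches the hash list
    n = 1000   # assumed number of photos an account uploads in a year
    t = 10     # assumed number of matches needed before human review is triggered

    # P(account flagged) = P(at least t false matches in n photos), binomial model
    p_under = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(t))
    print(f"per-account false-positive probability: {1 - p_under:.1e}")
    # roughly 1e-7 with these toy numbers; raising t (or lowering p) is what pushes
    # the per-account figure toward "1 in a trillion" even when the per-photo rate is high.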
I don't even want to imagine what very religious people and countries will think about the iPhone then.
Apple could argue they were already going to receive the photo (since this algorithm only affects Photos destined for iCloud Photos) and thus "when in the upload process it was classified" is simply technological semantics.
> This was followed by a memo leak, allegedly from NCMEC to Apple:
Well, we certainly are the minority. If a majority of people knew and were mad we'd have protests in major cities.
It seems insane to me that anyone would knowingly upload CP to a forensics site on purpose. Much less several times a day.
Please correct me if I'm wrong, but wouldn't it be more correct to say they "would be in possession of images recognized by PhotoDNA as child pornography" rather than actual CP?
The problem with all this is that images of naked children taken by their parents are CP in the eyes of its consumers.
Perceptual AI is the best approach, but produces a certainty < 1.
In my cryptography course I had a project about invisible watermarks and secret messages in images. The first hurdle was surviving partial images and compression, so most early algorithms worked in the frequency domain. At the time it was basically an arms race between protecting and deleting those messages, and I don't think that has changed.
Conventional file hashes can be beaten by randomizing metadata, since a quality hash function will immediately produce a completely different hash. Never mind just flipping the image, or probably even just re-saving it.
If you create a polynomial approximation of frequencies or color histograms of an image, you have a relatively short key indicator. But you need a lot of those to even approach certainty. Could always be an image of a duck.
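For a concrete toy illustration of that gap, here's a minimal "average hash" next to MD5. This is just aHash, not PhotoDNA, and the file names are hypothetical:

    import hashlib
    from PIL import Image  # Pillow

    def ahash(path, size=8):
        # Toy "average hash": shrink to 8x8 grayscale, threshold each pixel at
        # the mean. Survives re-saving and small edits; a cryptographic hash won't.
        img = Image.open(path).convert("L").resize((size, size))
        pixels = list(img.getdata())
        avg = sum(pixels) / len(pixels)
        return int("".join("1" if px > avg else "0" for px in pixels), 2)

    def md5_of(path):
        with open(path, "rb") as f:
            return hashlib.md5(f.read()).hexdigest()

    # Hypothetical files: the same photo saved twice at different JPEG qualities.
    # The two MD5s share nothing, while the average hashes usually differ by only
    # a few bits (Hamming distance):
    # a, b = ahash("photo.jpg"), ahash("photo_resaved.jpg")
    # print(md5_of("photo.jpg"), md5_of("photo_resaved.jpg"))
    # print(bin(a ^ b).count("1"), "of 64 bits differ")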
If this is true, how has this gotten past Apple's legal team? Are they not aware it would be a flagrant violation of the law?
That's a real kicker in my opinion. I struggle to understand how they're training their model unless they get training data from NCMEC. Or is it entirely algorithmic and not based on ML?
Also, none of this is a topic I would want to deal with.
If this is a sincere effort, clearly Apple has failed to thread the needle. This announcement could also be a fumbled attempt to reframe what is already common practice at the company, to get ahead of a leak. I heard from a friend who's related to an Apple employee that Apple already scans the mountains of data on its servers for "market research". The claims otherwise read as marketing gambits.
Having known many victims of sexual violence and trafficking (Seriously. I deal with them, several times a week, and have, for decades), I feel for the folks that honestly want that particular kind of crime to stop. Humans can be complete scum. Most folks in this community may think they know how low we can go, but you are likely being optimistic.
That said, law enforcement has a nasty habit of having a rather "binary" worldview. People are either cops, or uncaught criminals.
With that worldview, it can be quite easy to "blur the line" between child sex traffickers and traffic ticket violators. I remember reading an article in The Register about how anti-terrorism tools were being abused by local town councils to do things like find zoning violations (for example, pools with no CO).
Misapplied laws can be much worse than letting some criminals go. This could easily become a nightmare, if we cede too much to AI.
And that isn't even talking about totalitarian regimes, run by people of the same ilk as child sex traffickers (only wearing Gucci, and living in palaces).
”Any proposal must be viewed as follows. Do not pay overly much attention to the benefits that might be delivered were the law in question to be properly enforced, rather one needs to consider the harm done by the improper enforcement of this particular piece of legislation, whatever it might be.” -Lyndon B. Johnson
[EDITED TO ADD]: And I 100% agree that, if we really want to help children, and victims of other crimes, then we need to start working on the root causes of the issues.
Poverty is, arguably, the #1 human problem on Earth, today. It causes levels of desperation that ignore things like climate change, resource shortages, and pollution. People are so desperate to get out of the living hell that 90% of the world experiences, daily, that they will do anything (like sell children for sex), or are angry enough to cause great harm.
If we really want to solve a significant number of world problems, we need to deal with poverty; and that is not simple at all. I have members of my family that have been working on that, for decades. I have heard all the "yeah...but" arguments that expose supposedly simple solutions as...not simple.
Of course, the biggest issue, is that the folks in the 0.0001% need to loosen their hands on their stashes, and that ain't happening, anytime soon. I don't know if the demographic represented by the Tech scene is up for that, since the 0.0001% are our heroes.
pre this thing:
* before syncing photos to icloud, the device encrypts them with a device local key, so they sit on apple's servers encrypted at rest and apple cannot look at them unless they push an update to your device that sends them your key or uploads your photos unencrypted somewhere else
after this thing:
* before syncing photos to icloud, the device encrypts them, but there are essentially two keys. one on your device, and one that can be derived on their servers under special circumstances. the device also hashes the image, but using one of these fancy hashes that are invariant to crops, rotations, translations and noise (like shazam, but for pictures)
* the encrypted photo is uploaded along with the hash (in a special crypto container)
* their service scans all the hashes, but uses crypto magic that does the following:
1) it does some homomorphic encryption thing where they don't actually see the hash, but they get something like a zero knowledge proof of whether the image's hash (uploaded along with the image in the "special crypto container") is in their list of bad stuff
2) if enough of these hit, then there's a key that pops out of this process that lets them decrypt the images that were hits (toy sketch of this threshold trick below the list)
3) the images get added to a list where a room full of unfortunate human beings look at them and confirm that there's nothing good going on in those photos
4) they alert law enforcement
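here's a toy sketch of the "key pops out after enough hits" step from (2), using shamir-style threshold secret sharing over a prime field. this is only an illustration of the idea; apple's actual construction also involves private set intersection and is far more involved:

    import random

    P = 2**127 - 1  # prime modulus for the toy field

    def make_shares(secret, t, n):
        # split `secret` into n shares; any t of them reconstruct it
        coeffs = [secret] + [random.randrange(P) for _ in range(t - 1)]
        def f(x):
            return sum(c * pow(x, i, P) for i, c in enumerate(coeffs)) % P
        return [(x, f(x)) for x in range(1, n + 1)]

    def reconstruct(shares):
        # lagrange interpolation at x = 0 recovers the secret
        total = 0
        for i, (xi, yi) in enumerate(shares):
            num, den = 1, 1
            for j, (xj, _) in enumerate(shares):
                if i != j:
                    num = num * (-xj) % P
                    den = den * (xi - xj) % P
            total = (total + yi * num * pow(den, -1, P)) % P
        return total

    account_key = 123456789                           # stands in for the per-account key
    shares = make_shares(account_key, t=10, n=1000)   # one share released per matching photo
    assert reconstruct(shares[:10]) == account_key    # 10 matches: key recoverable
    assert reconstruct(shares[:9]) != account_key     # 9 matches: key stays hidden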
a couple points of confusion to me:
1) i'm assuming they get their "one in a trillion" thing based on two factors. one being the known false positive rate of their perceptual hashing method and the other being their tunable number of hits necessary to trigger decryption. so they regularly benchmark their perceptual hash thing and compute a false positive rate, and then adjust the threshold to keep their overall system false positive probability where they want it?
2) all the user's photos are stored encrypted at rest, it seems that this thing isn't client side scanning, but rather assistance for server side scanning of end to end encrypted content put on their servers.
first off i think it's actually pretty cool that a consumer products company offers end to end encrypted cloud backup of your photos. i don't think google does this, or anyone else. they can just scan images on the server. second off, this is some pretty cool engineering (if i understand it correctly). they're providing more privacy than their competition aaand they've given themselves a way to ensure that they're not in violation of the law by hosting CSAM for their customers.
but i guess the big question is, if people don't like this, can't they just disable icloud?
They do not provide more privacy than the competition (in this regard at least), and yes, people can just disable iCloud services. That argument that "you can just not use it" is a pretty weak one for defending poor privacy practices. The same could be said for any website or server with poor privacy practices and given the prevalence of iCloud photos, this change affects lots of people.
Just wasted the weekend on this, and now asking for options: https://news.ycombinator.com/item?id=28111995
Worse, they've also opened the door to government censorship of images and content and propped that door wide open.
There's going to be absolutely no oversight or transparency into occasions when an image is removed from a device. Nobody will ever know when a non-CSAM image is accidentally pulled back from a device. All the public will ever see are headlines to the tune of "CSAM scanning system catches bad person once again".
This is a really awful path that we're going down, and this is absolutely going to get abused by regimes around the world.
1. The few who care about privacy.
2. That's about it. Maybe 1% if you're being generous. ( And subtract the portion of that 1% on this site who DO care about privacy but are being compensated $400k/yr by Apple and with Bay Area rents being what they are, they shouldn't rock the boat, and they really need to make good on their Tesla Roadster reservation. )
But go ask your mother-in-law, your dad, or any random normie friend. They're fighting PedoBear! And if they expand this further, well, they're fighting terrorists! And if they expand even further, well, I have nothing to hide! Do you?
feh ~/memes/dennis_nedry_see_nobody_cares.gif
Presumably the quid pro quo outcome is that Apple is allowed to win the Epic vs Apple lawsuit.
If you are a parent and lose a child then you would want every possible avenue taken to find your child. You would be going mad wanting to find them. If there is a way to match photos to known missing children then I say it should be at least tried.
I equate this to Ring cameras. They are everywhere. You cannot go for a walk without showing up on dozens of cameras, and we know Amazon (god mode) and law enforcement abuse their access privileges. However, if a crime happened to you and a Ring camera captured it, then I know almost everyone would certainly want that footage reviewed. Would you ignore the Ring footage possibility just because you despise Ring cameras? Probably not.
It’s all an invasion of privacy until you’re sitting on the other side of the table where you have a vested interest in getting access to the information.
It seems just as rational to argue that the people who live in those homes should be willing to give up their right to not have their home searched if it means potentially finding a missing child in a potential criminal's home.
We can come up with all kinds of hypothetical situations. I am empathetic to parents going through the hell of a missing child, and to the children themselves. But the protection of children and victims must be balanced with the preservation of rights and freedoms considered deeply sacrosanct.
the problem with this argument is that you can use it to justify basically anything.
That's why it's so insidious. People keep making it about children, which are a very worthy cause. Nobody wants child abuse. But the problem is that the technology can be used for anything. Wouldn't it be nice to read all of the internal Slack messages at a company you're about to acquire? Wouldn't it be nice for Apple to have a copy of Google's source code? Wouldn't it be nice for a political candidate to cause their opponent some legal trouble? Spying on other people is always very valuable and governments (and corporations, probably) spend billions of dollars a year on it.
If we want to make this just about detecting abuse of children, I'm totally on board. Pass a law that says using this technology to steal corporate secrets or to gain a political advantage is punishable by death. Then maybe it can be taken seriously as something very narrow in scope. That, of course, would never happen. (Death penalty arguments aside; I'm exaggerating for effect.)
I can't believe that once this technology is out, it's only going to be used for good. (I imagine politicians will love this. Look at the recent Andrew Cuomo abuse allegations -- he abuses women, and then his staff try to cover it up by leaking documents, to discredit the abused. Sure would be nice to see what sort of things they have on their phone, right? Will someone like Cuomo NEVER have a friend that can add detection hashes to the set and review the results? I would say it's a certainty that a powerful politician will have loyal insiders in the government, and that if this system is rolled out, we're going to see colossal abuse -- flat-out facilitation of crime, lives ruined -- over the next 20 years.)
I have to wonder what Apple's angle is here. It would cost them less money to do nothing. They could be paying these researchers to work on something that makes Apple money, and banning users from their platform (or having them hauled off to prison) doesn't help them make money. I really don't want to be the conspiracy theory guy, but doesn't it seem weird that the Department of Justice wants to investigate Apple's 30% app store cut, and right about that time, Apple comes up with a new way to surveil the population for the Department of Justice? Maybe I read too much HN.
But it's just a totally different situation isn't it? I'm not personally opposed to the general concept of security cameras in public spaces. If you choose to install one on your property that doesn't seem unreasonable. If you choose to share that footage with the police that doesn't seem unreasonable.
The problem with Apple's system isn't even the current implementation (which is on device, but iCloud only). It's that the system can be trivially expanded to all on device content, and report the user of the phone for any unacceptable behaviour.
It's more like a camera in your home that reports you to the police if you do anything unacceptable. Would anyone choose to have that?
But the bigger problem is that Apple have been selling products based on "privacy first" marketing. By reporting on their users off-line content they break that trust. And there are really few viable options in smartphones so you can't really just move to another provider.
Kinda unrelated to that is the 4th Amendment, which protects the right of the people to be secure in their persons, houses, papers, and effects against unreasonable searches and seizures by the government.
It is still an invasion of privacy regardless of the side on which you sit. This situation is unlike a Ring camera capturing footage in a public space, where individuals already have no expectation of privacy.
That’s the thing about privacy that so many don’t want to admit. “Everyone deserves it, until they don’t”. All of society’s privacy is more important than your single child, sorry.
Meanwhile, I created this great new technology. It runs in the background super efficiently on your phone. It immediately detects house fires and alerts you. It uses a combination of sensors to literally detect the spark of a flame, and there's only a one-in-a-trillion false-positive rate.
Simply download the app and give it permission to sample your microphones, cameras, accelerometers and historical gps data to build a profile, then flick a lighter anywhere in your house and sure enough your phone alarm goes off.
How it works is incredible: an algorithm listens for ultrasonic soundwaves created by the chemical reaction during combustion. Video and sound samples are then reviewed by one of our technical representatives.
House fires and house fire deaths will be a thing of the past. The only compromise is that our algorithms listen to all of your video and audio, occasionally monitored by a human technician, which might include audio of you making love to your wife.
This is very insightful. It's also depressing, because it's a great point for those who oppose privacy. I even find myself swayed by this reasoning.