That’s a really interesting attack vector I hadn’t seen mentioned previously.
Most people are talking about the potential for adversarial images to be sent to users. If collisions were instead injected into the database itself (either by perturbing real CSAM before its hash is added, or by social engineering whoever maintains the hash list), that would have far wider ramifications.
I wonder what the most widely-saved pornographic images are across iCloud users.
If actual CSAM were perturbed to match the hashes of, say, images from the celebrity nude leak a few years back, and those hashes then made it into the database, thousands of users could be sent to “human review”. Since the images on those users’ devices really are explicit, how would the human reviewers know not to flag them to the authorities?
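For concreteness, here’s a rough sketch of what “perturbed to match the hash” means in practice, assuming the attacker has a differentiable copy of the hashing network (an extracted NeuralHash model has reportedly been circulating, and collisions have already been demonstrated against it). Everything below is illustrative: the toy CNN stands in for the real model, and `perturb_to_match`, the `eps` budget, etc. are names I made up, not Apple’s actual pipeline.

```python
import torch

# Hypothetical stand-in for the neural stage of a perceptual hash
# (e.g. something NeuralHash-like). Any CNN mapping images to a
# fixed-length embedding works for illustration; the real model
# and weights are not assumed here.
hash_model = torch.nn.Sequential(
    torch.nn.Conv2d(3, 16, 3, stride=2, padding=1),
    torch.nn.ReLU(),
    torch.nn.AdaptiveAvgPool2d(1),
    torch.nn.Flatten(),
    torch.nn.Linear(16, 96),  # 96-dim embedding, illustrative only
)

def perturb_to_match(source_img, target_img, eps=0.05, steps=500, lr=0.01):
    """Nudge source_img until its hash embedding matches target_img's,
    keeping the perturbation small enough to be visually negligible."""
    target_emb = hash_model(target_img).detach()
    delta = torch.zeros_like(source_img, requires_grad=True)
    opt = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        emb = hash_model((source_img + delta).clamp(0, 1))
        loss = torch.nn.functional.mse_loss(emb, target_emb)
        loss.backward()
        opt.step()
        with torch.no_grad():
            delta.clamp_(-eps, eps)  # bound the change so the image still looks the same
    return (source_img + delta).detach().clamp(0, 1)

# source = image whose visible content the attacker keeps (the CSAM),
# target = benign image whose hash they want to collide with.
source = torch.rand(1, 3, 224, 224)
target = torch.rand(1, 3, 224, 224)
collided = perturb_to_match(source, target)
```

The real system quantizes the embedding into a discrete hash, but if the embeddings match closely enough the quantized bits match too, which is why the demonstrated collision attacks work. The point is that the attack runs entirely on the source image; the database then poisons itself when that image gets reported and hashed.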