I'm actually super impressed by the lengths they went to in making this as private as they did.
With the proviso that you have to trust that the code they push onto your device actually does what they claim in the paper.
But it does, on the face of it, prevent the "cops add the latest viral image of a police killing to the CSAM content hashes and easily see who has it on their device and when it showed up there" concern. They would at least need to add enough _other_ hashes to the list to push everybody holding their target image over the threshold, and then get the results past whatever Apple has in place for manual review and "visual derivatives". And surely that sort of abuse of the system would set off alarms instantly at Apple when a big list of hashes of images common enough to push BLM activists and sympathisers over the CSAM count threshold suddenly got added.
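(To illustrate just the threshold part: this is a made-up sketch, not Apple's actual protocol, which uses NeuralHash, private set intersection, and threshold secret sharing so that individual matches stay unreadable to Apple. Every name and value below is invented for illustration, but the rough shape of the idea is something like:)

```python
# Toy sketch of the threshold idea only -- NOT Apple's real protocol.
# Every name and value here is invented for illustration.

THRESHOLD = 30  # assumed value for the sketch; picked arbitrarily here

def count_matches(device_image_hashes, csam_hash_list):
    """Count how many of the device's image hashes appear in the hash list."""
    return sum(1 for h in device_image_hashes if h in csam_hash_list)

def matches_become_visible(device_image_hashes, csam_hash_list):
    """In the real design, the per-image 'safety vouchers' are supposed to stay
    cryptographically unreadable to Apple until this threshold is crossed."""
    return count_matches(device_image_hashes, csam_hash_list) >= THRESHOLD

# A single planted hash (e.g. one viral protest image) reveals nothing on its
# own; an attacker would need enough *other* matching images on the target's
# device to cross the threshold.
print(matches_become_visible({"abc123"}, {"abc123"}))  # False: 1 < 30
```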
(I also wonder what those "visual derivatives" they get are, how well they are going to work for filtering out false positives, and how Apple are going to take care of whoever ends up doing that review work. While I have a fairly positive impression of the care and effort Apple put into ensuring my privacy, I have a somewhat less positive impression of how they treat outsourced or low-skill workers. I won't be _too_ surprised to hear the same sort of horror stories, about both the work and the management overseeing it, that have come out of the people doing this sort of job at Facebook. I can't imagine a worse job description than "review images to check whether they are real child sexual abuse images", and doing that for minimum wage and no benefits, with supervisors threatening to fire you if you take toilet breaks or miss your 150-images-an-hour target, is sadly something I totally expect to read about in a year or two's time.)