Apple explained in their technical summary [0] that an account is only flagged once a threshold number of image hashes match known CSAM hashes. They estimated the probability of incorrectly flagging a given account (they don't say which dataset was used to calibrate this, though it was naturally non-CSAM) at 1 in a trillion per year [1].
In the very unlikely event that a 1-in-a-trillion flag does occur, human reviewers check each of the flagged photos. They also run a second, private perceptual hash model (unavailable to the public) to double-check the matches, which is also done before alerting authorities.
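To get an intuition for why a match threshold drives the overall false-positive rate down so sharply, here's a back-of-the-envelope sketch. The per-photo false-match rate, library size, and threshold values below are made-up illustrative assumptions, not figures from Apple's summary; it just computes the binomial tail probability of crossing the threshold by chance:

    from math import comb

    def prob_at_least(n, t, p):
        """P(at least t false matches among n photos), assuming
        independent per-photo false matches with probability p."""
        return sum(comb(n, k) * p**k * (1 - p)**(n - k)
                   for k in range(t, n + 1))

    p = 1e-6      # assumed per-photo false-match rate (hypothetical)
    n = 10_000    # assumed photos in one library (hypothetical)
    for t in (1, 10, 30):
        print(f"threshold={t}: {prob_at_least(n, t, p):.3e}")

With these toy numbers, requiring even ~30 independent matches pushes the chance of an accidental flag from roughly 1 in 100 (threshold of 1) down to something astronomically small, which is the basic idea behind tuning the threshold to hit a target account-level rate.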
[0] https://www.apple.com/child-safety/pdf/CSAM_Detection_Techni...
[1] https://www.zdnet.com/article/apple-to-tune-csam-system-to-k...