We've now established the existence of a statistical model which can detect this bias.
Now, any other model that can express your specific r(p) can do the same thing. The entire purpose of fancy models like random forests is that they can express a huge class of functions while remaining reasonably generalizable.
If you want to claim that this bias is much more difficult to encode in an SVM than all the other typical hidden patterns, you need to establish that your specific r(...) is somehow vastly more complicated than all the other things that machine learning models regularly detect. That's a pretty strong claim.
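To make the point concrete, here is a minimal sketch (not anyone's actual setup): I invent a toy r(p) as a threshold on two of five generic features, and an off-the-shelf random forest recovers it from labeled examples alone. The feature layout and the threshold rule are both made up for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
X = rng.normal(size=(4000, 5))  # 5 generic applicant features (hypothetical)

# Hypothetical r(p): the "biased" decision depends on the first two features.
y = (X[:, 0] + X[:, 1] > 0).astype(int)

X_train, X_test = X[:3000], X[3000:]
y_train, y_test = y[:3000], y[3000:]

model = RandomForestClassifier(n_estimators=200, random_state=0)
model.fit(X_train, y_train)
acc = model.score(X_test, y_test)
print(f"held-out accuracy: {acc:.2f}")  # the forest expresses this r(p) just fine
```

Swap in any comparably simple r(p) and you get the same result; the burden is on showing why the real one would be categorically harder.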
Interestingly, you are now arguing the exact opposite of what most "machine learning is racist" people claim. They typically claim machine learning is racist because algorithms learn hidden factors their designers wish they wouldn't; e.g., a lending algorithm might effectively "redline" black applicants after picking up correlations between race and repayment. I take it you believe this is highly unlikely, and that algorithms can't possibly distinguish between men and women and then show high-paying job ads to more men than women?