Consider too what they are likely using for inputs: photos with associated comments.
I don't know Facebook's TOS well enough to say whether they use private groups as source material, but if you train a pattern-recognition system on bigoted content, it will replicate bigoted content.
My guess is that the poster was assuming a large part of Facebook's images come with bigoted content. I am neither agreeing nor disagreeing. But apparently some people got a little emotional about the platform being associated with a heightened amount of bigoted content.
Not necessarily a large part, simply enough to identify as its own pattern.
In my experience there is a lot of bigoted material on Facebook. If it serves as source data, and is sufficiently distinct from the rest of the training material, it may well be a pattern of user behavior the ML system would replicate.
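The mechanism being described can be shown with a toy sketch. Everything below is hypothetical (the corpus, the tags, the comments are made up); it's just a crude stand-in for how a model trained on photo/comment pairs picks up whatever associations dominate the data, even when the skewed comments are a minority:

```python
from collections import Counter, defaultdict

# Hypothetical (image tag, user comment) pairs standing in for
# photo/comment training data. The skew is deliberate: one comment
# uses a distinctive token ("slur") that no other tag shares.
corpus = [
    ("group_photo", "great people"),
    ("group_photo", "nice crowd"),
    ("group_photo", "slur slur"),      # minority of biased comments
    ("landscape", "beautiful view"),
    ("landscape", "lovely scenery"),
]

# Count word co-occurrence per tag -- a crude proxy for what a
# captioning model learns from its training distribution.
assoc = defaultdict(Counter)
for tag, comment in corpus:
    assoc[tag].update(comment.split())

def most_likely_words(tag, n=2):
    """Words the toy 'model' most strongly associates with a tag."""
    return [word for word, _ in assoc[tag].most_common(n)]

# Because the biased token repeats while the benign words are spread
# thin, it dominates the association for "group_photo".
print(most_likely_words("group_photo"))  # "slur" ranks first
```

The point of the sketch is the one the thread is making: the biased material doesn't need to be a large fraction of the corpus, only frequent or distinctive enough to form its own pattern for the model to latch onto.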