> then cherry-picking results according to different values isn't going to fix it.
Again: "merely a refusal to return results in an area where the model has proven problematic."
> Your example has quietly shifted from facial recognition of different races to speech about different races.
Again, three points:
* First, no matter how you feel about gender: bias in AI is a real problem, as shown by facial-recognition systems' well-documented failures on Black faces.
* Second, there are obvious cases where we can all agree that training on past data can produce output that is offensive today. There are pieces of language nearly all of us now agree to use differently to avoid offense (e.g. "mongoloid").
* Third, I believe gender is one of these cases. Social mores are evolving, and following conventions from the past while our collective norms shift over a span of months all but guarantees offense.