I don't think, e.g., that being able to handle black faces correctly is some sort of massive ideological commitment. So let's not pretend that the entire concern of bias in AI is irrelevant, no matter where you stand on gender.
> conformance with one very specific set of ideological commitments
You know-- let's just talk about basic respect and dignity: if someone strongly wants to be referred to in a particular way, the polite response is to respect their wishes. If there are a lot of people in this category, it makes sense for your system to address it.
If you instead build your system in a way that doesn't achieve this, you're being rude. If you use old training data and refer to people as a "Mongoloid" as a result-- don't be surprised that people are offended. Ditto if you use old training data about gender that doesn't match many people's current expectations.
Why did you suggest THIS as an example of what he's talking about? He doesn't indicate that he disagrees with this case.
Furthermore, that sounds like a problem of having incomplete training data. Regardless, manually tweaking a model points to a failure in the process somewhere.
He seems to be pooh-poohing the entire idea of "eliminating bias" in AI. So I felt it was important to
* point out that there are clear cases of bias in AI no matter where you stand on gender
* move on to explain a closely related case (using historical speech about race could be offensive)
* use the lesson to show that using historical speech about gender could be problematic as well
> Furthermore, that sounds like a problem of having incomplete training data.
Training a model from historical data can only reflect historical approaches. The social conventions around gender are changing rapidly and are contentious.
> Regardless, manually tweaking a model points to a failure in the process somewhere.
Here, there's no manual tweaking of the model: merely a refusal to return results in an area where the model has proven problematic.
If you can't effectively train something from existing data, then cherry-picking results according to different values isn't going to fix it. Your example has quietly shifted from facial recognition of different races to speech about different races. I can't even be sure of what you're talking about, other than the fact that you will oppose criticism of imparting political bias into models.
Would you also respect the wishes of a schizophrenic person, if they say much the same thing? If they say that they are actually an alien from outer space, would you play along?
In general, I would respect someone's wishes. If they want to be Mork from outer space, K.
Of course, there are some very limited cases where we may reasonably believe that playing along is harmful either to ourselves or to the other person. If there's a broad medical consensus that something is harmful to someone, then maybe we shouldn't do it.
A biologically female person who wants to be called "they," because they have decided they don't like the connotations attached to "she" right now, doesn't rise anywhere close to that in my opinion.
The programmer should be able to use whatever the hell terms they want to use in their program. If the customer base doesn't like it that's their right. But it's not the right of the damn language parser programmer.
This isn't a language parser.
This is a tool that suggests implementations of small portions of code.
If the training data is out of date, it's quite reasonable for people employing that model to decide it shouldn't return results based on the out-of-date training data.
Even about completely different things. If the output is C code containing gets(), maybe we should decline to return the result.
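To make that concrete, here's a minimal sketch of the kind of guard I have in mind (the names and patterns are my own invention, not anything from Copilot's actual pipeline): if a suggestion contains something we already know is a bad idea, like gets(), decline to return it at all.

```python
import re
from typing import Optional

# Hypothetical sketch: patterns we already know are bad ideas in returned C code.
BLOCKED_PATTERNS = [
    r"\bgets\s*\(",  # gets() was removed in C11; there is no safe way to call it
]

def filter_suggestion(suggestion: str) -> Optional[str]:
    """Return the suggestion unchanged, or None to decline returning a result."""
    for pattern in BLOCKED_PATTERNS:
        if re.search(pattern, suggestion):
            return None  # refuse to answer rather than try to "fix" the model
    return suggestion

print(filter_suggestion("gets(buf);"))                      # -> None (declined)
print(filter_suggestion("fgets(buf, sizeof buf, stdin);"))  # -> returned as-is
```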
> The programmer should be able to use whatever the hell terms they want to use in their program.
Indeed, it leaves it completely up to the programmer by refusing to suggest an implementation that would favor either side of the debate.
It's not "out-of-date". That's just the kind of pilpul semantic framing that these activists engage in since "out-of-date" implies "bad". The data is just not in line with their artificially made-up ideology. A demand which one, even as the best "ally" in the world, could never satisfy anyway, since the grievance grifting relies on always coming up with new issues, you just have to look at the shift from "equality" to "equity" or from "microaggressions" to "nanoaggressions"
> The programmer should be able to use whatever the hell terms they want
> language parser
Are you unsure what copilot is?
But something like Copilot or DALL-E? If you ask DALL-E for a doctor and it rarely shows black people (or women), then it is neither racist nor broken. Our society is broken. There are not enough people in that job who are not white and male, or they are not represented enough. I think there is value in AI that honestly reflects society, because it makes this discrepancy harder to ignore.
People imagined AI would be this benevolent, neutral, wise thing that would maybe be a bit naive but not have our human biases. But it turns out there is no "morally neutral". It will just reflect what you put into it.
Have you looked at the actual demographics of medical doctors in the US? 54% are women, and 35% are nonwhite. But when we have media depictions of doctors, I agree they tend to be white and male.
So, what should DALL-E conform to? Should it conform to A) our actual present society, B) the biased original dataset (which leans both towards the past and towards existing media biases), or C) some idealized version of society?
When I tried this just now, I got 12 white dudes, one Southeast Asian woman, one Southeast Asian-looking man, and two men whose race I'm not sure of (quite possibly white). This is despite OpenAI's efforts to debias it, and isn't representative of current physician demographics.
But if AI just represents and reinforces extant biases-- and worse, AI is used to produce art and text that ends up in other AIs' training sets-- how do we ever get out of this mess? The people who produce, publish, and productize AI do have some degree of editorial responsibility.
> But it turns out there is no "morally neutral".
Of course not. Hume pointed out long ago that you can't derive normative statements from positive ones.
But all of this is a little off-topic, anyway. This is about when it's reasonable to refuse to return a result. "Hey, your answer had the N-word in it, and we know that most of the time our model does that, it's offensive-- so we're just not going to return a result, sorry." I think this is a reasonable path to take when you know that your model has some behaviors that are socially questionable.
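To be concrete about what "not going to return a result" could look like (everything below is made up for illustration; it's not how any real deployment works), the model itself stays untouched and the serving layer simply declines:

```python
from typing import Optional

# Hypothetical serving-layer guard: the model is not tweaked at all; we just
# decline to return a completion when it contains a term from a denylist.
DENYLIST = {"slur_1", "slur_2"}  # placeholder tokens standing in for real slurs

def complete_with_guard(model, prompt: str) -> Optional[str]:
    """Ask the unmodified model for a completion; return None if it trips the denylist."""
    completion = model.complete(prompt)  # `complete` is a stand-in API, not a real one
    if set(completion.lower().split()) & DENYLIST:
        return None  # no result, and no attempt to rewrite the model's output
    return completion

class FakeModel:
    """Stand-in for an actual completion model."""
    def complete(self, prompt: str) -> str:
        return "something offensive containing slur_1"

print(complete_with_guard(FakeModel(), "write a docstring"))  # -> None (declined)
```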
What's the issue? I used to watch a lot of medical dramas on TV and in my opinion the black rockstar MDs are way overrepresented in comparison to their real-life numbers:
5.0% in real life in the US in 2018 [1] vs. 19.4% on TV [2]
[1] https://www.aamc.org/data-reports/workforce/interactive-data...
[2] https://www.bluetoad.com/publication/?i=671309&article_id=37...
I think techno libertarian suggestions like these are dangerous because they assume there’s one “canonical” place to fix these issues and all other places can just reflect the status quo, without affecting it (which in my opinion is not possible).
It’s like the old saying “dress for the job you want, not the job you have”.
Social problems are messy and full of situations like this where people can reasonably disagree and have decent, good-faith rationales for both sides, and we lack the kind of evidence that allows us to have strong confidence in our guesses about what would help.
But people tend to denounce as racist everybody who does not care so much about such a topic.