There's a huge problem with people using umbrella usage to predict flooding. Some people are trying to develop a computer model that uses rainfall instead, but watchdog groups have raised concerns that rainfall may serve as a proxy for umbrella usage.
(It seems rather strange to expect a statistical model trained for accuracy to infer and route through a shadow variable that makes it less accurate, simply because that variable is easy for humans to observe directly and use as a lossy shortcut, or to promote goals that aren't part of the labels being trained on.)
> These are two sets of unavoidable tradeoffs: focusing on one fairness definition can lead to worse outcomes on others. Similarly, focusing on one group can lead to worse performance for other groups. In evaluating its model, the city made a choice to focus on false positives and on reducing ethnicity/nationality based disparities. Precisely because the reweighting procedure made some gains in this direction, the model did worse on other dimensions.
Nice to see an investigation that's serious enough to acknowledge this.
1. In aggregate over any nationality, people face the same probability of a false positive.
2. Two people who are identical except for their nationality face the same probability of a false positive.
In general, it's impossible to achieve both properties. If the output and at least one other input correlate with nationality, then a model that ignores nationality fails (1). We can add back nationality and reweight to fix that, but then it fails (2).
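A toy simulation makes the tradeoff concrete. This is a minimal sketch with synthetic data and hypothetical thresholds, not anything from the article: one legitimate input correlates with both nationality and the outcome, a nationality-blind threshold then fails (1), and per-group thresholds that roughly repair (1) break (2).

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
nationality = rng.integers(0, 2, n)                  # two groups, 0 and 1
signal = rng.normal(nationality.astype(float), 1.0)  # legitimate input, correlated with group
fraud = rng.random(n) < 1.0 / (1.0 + np.exp(-(signal - 1.5)))  # outcome, also correlated

def group_fpr(flagged, g):
    honest = ~fraud & (nationality == g)
    return (flagged & honest).sum() / honest.sum()

# Nationality-blind model: one global threshold on the signal. Fails (1).
blind = signal > 1.0
print(group_fpr(blind, 0), group_fpr(blind, 1))      # clearly unequal

# Per-group thresholds tuned on this synthetic data so group FPRs roughly match.
aware = signal > np.where(nationality == 0, 1.0, 1.8)
print(group_fpr(aware, 0), group_fpr(aware, 1))      # approximately equal now

# But two applicants identical except for nationality diverge. Fails (2).
print(1.5 > 1.0, 1.5 > 1.8)  # signal 1.5: flagged in group 0, not in group 1
```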
This tradeoff is most frequently discussed in the context of statistical models, since those make it explicit. It applies to any decision process, though, including human ones.
It would be immoral to disadvantage one nationality over another. But we also cannot disadvantage one age group over another. Or one gender over another. Or one hair colour over another. Or one brand of car over another.
So if we update this statement:
> Two people who are identical except for any set of properties face the same probability of a false positive.
With that new constraint, I don't believe it's possible to construct a model that outperforms a data-less coin flip: take the "set of properties" to be every input, and the constraint says all applicants must face the same probability of a false positive, which leaves the model nothing to condition on.
Why? We've been told time and time again that 'nations' don't really exist, they're just recent meaningless social constructs [1]. And 'races' exist even less [2]. So why is it any worse if a model is biased on nation or race, than on left-handedness or musical taste or what brand of car one drives? They're all equally meaningless, aren't they?
[1] https://www.reddit.com/r/AskHistorians/comments/18ubjpv/the_...
[2] https://www.scientificamerican.com/article/race-is-a-social-...
My suspicion is that in many situations you could build a detector/estimator that comes fairly close to being blind without a significant increase in total false positives, but how much of an increase is too much?
I'm actually more concerned that where I live even accuracy has ceased to be the point.
That seems to fall afoul of the Base Rate Fallacy. E.g., consider two groups of 10,000 people and a test for A vs B. The first group has 9,999 As and 1 B; the second has 1 A and 9,999 Bs. Unless you make your test blatantly ineffective, you're going to get different false-positive rates across the groups, irrespective of the test's performance.
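To put rough numbers on that, here's a worked version assuming a hypothetical test with 95% sensitivity and 95% specificity: the per-person error rates are identical in both groups, yet the false-alarm burden each group experiences is wildly different.

```python
def flag_counts(n_a, n_b, sensitivity=0.95, specificity=0.95):
    true_pos = n_b * sensitivity          # Bs correctly flagged
    false_pos = n_a * (1 - specificity)   # As wrongly flagged as B
    return true_pos, false_pos

for name, (n_a, n_b) in {"group 1": (9_999, 1), "group 2": (1, 9_999)}.items():
    tp, fp = flag_counts(n_a, n_b)
    print(f"{name}: {fp:.0f} false flags, {tp:.1f} true flags -> "
          f"{fp / (tp + fp):.1%} of flags are wrong")
# group 1: ~500 false flags vs ~1 true flag  -> ~99.8% of flags are wrong
# group 2: ~0 false flags vs ~9499 true flags -> ~0% of flags are wrong
```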
What's the problem with this? It isn't racism, it's literally just Bayes' Law.
Upon evaluation, your model seems to accept everyone who mentions a "fraternity" and reject anyone who mentions a "sorority". Swapping out the words turns a strong reject into a strong accept, and vice versa.
But you removed any explicit mention of gender, so surely your model couldn't possibly be showing an anti-women bias, right?
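A counterfactual swap test of this kind is easy to sketch. Everything below is hypothetical: the swap list, the `audit` helper, and the stand-in scoring function would all depend on the actual model being audited.

```python
# Hypothetical counterfactual probe: swap gender-coded tokens and measure the
# score swing. SWAPS and the scoring function are illustrative assumptions.
SWAPS = {"fraternity": "sorority", "sorority": "fraternity"}

def swap_tokens(text: str) -> str:
    return " ".join(SWAPS.get(tok, tok) for tok in text.split())

def audit(cv_text: str, score_cv) -> float:
    """Score change caused purely by gender-coded wording."""
    return score_cv(swap_tokens(cv_text)) - score_cv(cv_text)

# Stand-in model that behaves like the one described above:
toy_score = lambda text: 1.0 if "fraternity" in text else -1.0
print(audit("president of my sorority chapter", toy_score))  # 2.0: strong reject -> strong accept
```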
Who are these people whose career history docs include gender-implicating data? And if such CVs exist, they should be stripped of that data before processing.
The fraternity example is such a specific, one-in-a-thousand case.
That may be logically correct, but the law is above logic. Sometimes applying Bayes' Law is legally considered racism.
If certain demographic groups legitimately have higher base rates of welfare errors (due to language barriers, unfamiliarity with bureaucratic systems, economic desperation, or other factors), then an accurate algorithm will necessarily produce disparate outcomes.
If we dig deeper, there are three different underlying questions the authors of this "fair" fraud detection system are attempting to address:
1. Do group differences in fraud rates actually exist?
2. What mechanisms drive these differences?
3. Should algorithms optimize for accuracy or equality of outcomes?
The article conflates these, treating disparate outcomes as presumptive evidence of algorithmic bias rather than potentially accurate detection of real differences.
Pattern recognition that produces disparate outcomes isn't necessarily "broken"; it may simply be accurately detecting real underlying patterns whose causes are uncomfortable to acknowledge or difficult to address through algorithmic modifications alone.
Very well written, but that last part is concerning and points to one thing: did they hire interns? How come they don't have systems? It casts a big doubt on the whole experiment.
Without figures for true positives, recall, or financial recoveries, its effectiveness remains completely in the dark.
In short: great for moral grandstanding in the comments section, but zero evidence that taxpayer money or investigative time was ever saved.
Amsterdam didn't deploy their model when they found the outcome unsatisfactory. I find that a perfectly fine result.
The post does talk about it when it briefly mentions that the goal of building the model (to decrease the number of cases investigated while increasing the rate of finding fraud) wasn't achieved. They don't say any more than that because that's not the point they are making.
Anyway, the project was shelved after a pilot. So your point is entirely false.
> In late November 2023, the city announced that it would shelve the pilot.
I would agree that the implications about using those models don't hold, but not the ones about their quality.
The model is considered fair if its performance is equal across these groups.
One can immediately see why this is problematic by considering an equivalent example in a less controversial (i.e. less emotionally charged) situation.
Should basketball performance be equal across racial, or sex groups? How about marathon performance?
It’s not unusual that relevant features are correlated with protected features. In the specific example above, being an immigrant is likely correlated with not knowing the local language, therefore being underemployed and hence more likely to apply for benefits.
In your basketball analogy, it's more like they have a model that predicts basketball performance, and they're saying that model should predict performance equally well across groups, not that the groups should themselves perform equally well.
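One concrete reading of "predicts equally well across groups" (an assumption about the metric, not a quote from the article) is simply to compute the model's error rate separately per group and compare:

```python
import numpy as np

def per_group_error(y_true, y_pred, groups):
    """Misclassification rate computed separately for each group."""
    return {g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
            for g in np.unique(groups)}

y_true = np.array([1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 0, 0, 0, 1, 1])
groups = np.array(["a", "a", "a", "b", "b", "b"])
print(per_group_error(y_true, y_pred, groups))  # equal values -> "fair" in this sense
```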
The issue is that we don't know how many Danes commit fraud, and we don't know how many Arabs commit fraud, because we don't trust the old process to be unbiased. So how are we supposed to judge whether the new model is unbiased? This seems fundamentally impossible without improving our ground truth in some way.
The project presented here instead tries to do some mental gymnastics to define a version of "fair" that doesn't require that better ground truth. They were able to evaluate their results on the false-positive rate by investigating the flagged cases, but they were completely in the dark about the false-negative rate.
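That asymmetry is the classic selective-labels problem: you only learn outcomes for the cases you investigate. A tiny sketch with hypothetical data:

```python
import numpy as np

flagged = np.array([True, True, True, False, False, False])
# Outcome known only where investigated (i.e. flagged); None = never checked.
outcome = np.array([1, 0, 0, None, None, None], dtype=object)

false_flags = sum(1 for f, y in zip(flagged, outcome) if f and y == 0)
print(false_flags / flagged.sum())  # share of flags that were wrong: computable

# The false-negative rate is missed fraud among the unflagged, but every
# unflagged outcome above is None: there is nothing to compute it from.
```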
In the end, the new model was just as biased, but in the other direction, and performance was simply worse:
> In addition to the reappearance of biases, the model’s performance in the pilot also deteriorated. Crucially, the model was meant to lead to fewer investigations and more rejections. What happened instead was mostly an increase in investigations, while the likelihood to find investigation-worthy applications barely changed in comparison to the analogue process. In late November 2023, the city announced that it would shelve the pilot.
Training on past human decisions inevitably bakes in existing biases.
Not all misdeeds are equally likely to be detected. What matters is minimizing false positives and false negatives. But it sounds like they don't even have a ground truth to compare against, making the whole thing an exercise in bureaucracy.
If reality were fair there would be no need of a welfare system in the first place.
Fraud detection models will never be fair. Their job is to find fraud. They will never be perfect, and the mistaken cases will cause a perfectly honest citizen to be disadvantaged in some way.
It does not matter if that group is predominantly 'people with skin colour X' or 'people born on a Tuesday'.
What matters is that the disadvantage those people face is so small as to be irrelevant.
I propose a good starting point would be for each person investigated to be paid money to compensate them for the effort involved - whether or not they committed fraud.
Nevertheless, the idea of giving money is still good imo, because it also incentivizes making fraud detection more efficient, since mistakes now cost more. Unfortunately, I have a feeling people might game it to get more money by triggering false investigations.
It's generally straightforward to develop one if we don't care much about the performance metric:
If we want the output to match a population distribution, we just force it by taking the top predicted for each class and then filling up the class buckets.
For example, if we have 75% squares and 25% circles, but circles are predicted at a 10-1 rate, who cares, just take the top 3 squares predicted and the top 1 circle predicted until we fill the quota.
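A sketch of that quota-filling scheme (names and numbers are mine, for illustration): rank by predicted score, then accept the top candidates of each class until its population-share bucket is full.

```python
def fill_quotas(items, scores, classes, shares, total):
    """Pick `total` items so the picked-class mix matches `shares`,
    taking the highest-scoring items within each class."""
    quota = {c: round(total * share) for c, share in shares.items()}
    picked = []
    for item, score, cls in sorted(zip(items, scores, classes), key=lambda t: -t[1]):
        if quota.get(cls, 0) > 0:
            picked.append(item)
            quota[cls] -= 1
    return picked

# Circles outscore squares 10-1? Doesn't matter: the 75/25 quota wins.
items  = ["c1", "c2", "c3", "c4", "s1", "s2", "s3", "s4"]
scores = [0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 0.3, 0.2]
shape  = ["circle"] * 4 + ["square"] * 4
print(fill_quotas(items, scores, shape, {"square": 0.75, "circle": 0.25}, 4))
# -> ['c1', 's1', 's2', 's3']: top 1 circle plus top 3 squares
```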
As you say, that would be a crappy model. But in my opinion it would also hardly be a fair or unbiased model. That would be a model unfairly biased in favor of HP, who barely sell anything worth recommending.
"Unbiased" and "fair" are quite overloaded here, to borrow a programming term.
I think it's one of those times where single words should expressly NOT be used to describe the intent.
The intent of this is to presume that the rate of the thing we are trying to detect is constant across subgroups. The definition of a "good" model therefore is one that approximates this.
I'm curious if their data matches that assumption. Do subgroups submit bad applications at the same rate?
It may be that they don't have the data and therefore can't answer that.
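If the per-subgroup counts were available, though, the assumption is directly testable. A sketch with hypothetical counts, using a standard chi-square test of independence:

```python
from scipy.stats import chi2_contingency

# Hypothetical counts per subgroup: [applications with confirmed errors, clean ones]
table = [[40, 960],   # subgroup 1: 4.0% bad
         [55, 945],   # subgroup 2: 5.5% bad
         [38, 962]]   # subgroup 3: 3.8% bad
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.3f}")  # small p -> rates likely differ across subgroups
```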
As noted above, this doesn't do anything for performance.
One has to wonder if the study is more valid a predictor of the implementers' biases than that of the subjects.