> This is also incorrect. The conclusions of the paper would remain valid if FICO were calibrated identically for all groups.
I guess you are right, this was just the first disparity I noticed in the orange/blue example and I got fixated on it. And yes, I haven't read the whole thing and my maths may be a bit rusty nowadays :)
Now I see that the problem they attempt to solve with "equal opportunity" is FICO's limited ability to fish reliable borrowers out of the whole population. Currently, FICO identifies only a small number of reliable Black borrowers and gives them high scores, while many other Black borrowers who would pay back are diluted in a sea of unreliable borrowers with low scores. Among, say, Asians, the proportion of reliable borrowers who receive high scores is higher (fig. 8).
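To make the fig. 8 point concrete, here's a toy sketch (all numbers invented, the group names and cutoff are placeholders, not anything from the paper): with a single score cutoff, the fraction of reliable borrowers who clear it can differ sharply between groups if the score separates payers from defaulters better in one group than the other.

```python
# Toy illustration (invented numbers): with one common score cutoff, the
# true positive rate -- the share of reliable borrowers scored above the
# cutoff -- can differ between groups purely because the score mixes one
# group's payers into low buckets more than the other's.

# scores of borrowers who would actually repay, per group (hypothetical)
reliable_scores = {
    "group_1": [540, 560, 590, 620, 680, 700],  # payers mixed into low buckets
    "group_2": [640, 670, 700, 720, 750, 780],  # payers mostly scored high
}

CUTOFF = 650

for group, scores in reliable_scores.items():
    tpr = sum(s >= CUTOFF for s in scores) / len(scores)
    print(f"{group}: {tpr:.0%} of reliable borrowers clear the cutoff")
# group_1: 33%, group_2: 83% -- same cutoff, very different opportunity
```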
I think the issue is a bit more nuanced than "algorithms rightly showing that fairness is opposite to profit".
In particular:
> The best predictor is one which is explicitly discriminatory based on race: it takes both FICO score and race into account. [...] the worst predictor ["race blind"] throws away directly relevant racial information.
As I noted, this is a case of reverse bias directly compensating for FICO's bias. "Max profit" simply grants loans to all FICO score buckets whose default risk is small enough to be worth it. If the FICO-score-to-risk mapping were race-independent, "max profit" would use the same FICO threshold for every race and hence would be equivalent to "race blind".
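A minimal sketch of that last claim (the risk numbers and break-even point are made up): "max profit" picks, per group, the lowest score bucket whose default risk still makes lending profitable, so when the score-to-risk mapping is identical across groups, the per-group thresholds coincide and the rule degenerates into a single race-blind cutoff.

```python
# Toy sketch (made-up numbers): "max profit" lends to every score bucket
# whose default risk is below the break-even point. With an identical
# score -> risk mapping per group, the chosen thresholds are identical,
# i.e. equivalent to one race-blind cutoff.

# default risk by score bucket, per group (hypothetical values)
risk_by_score = {
    "A": {500: 0.60, 600: 0.35, 700: 0.15, 800: 0.05},
    "B": {500: 0.60, 600: 0.35, 700: 0.15, 800: 0.05},  # same mapping as A
}

BREAK_EVEN_RISK = 0.20  # a loan is profitable below this default probability

def max_profit_threshold(risks):
    """Lowest score bucket whose default risk is under the break-even point."""
    profitable = [score for score, risk in risks.items() if risk < BREAK_EVEN_RISK]
    return min(profitable) if profitable else None

thresholds = {g: max_profit_threshold(r) for g, r in risk_by_score.items()}
print(thresholds)  # {'A': 700, 'B': 700} -- one cutoff, no racial information used
```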
Analogously, "max profit" could be equivalent to "equal opportunity" if FICO were better at finding Black borrowers who can pay and placing them in low-risk (high-score) buckets, so that granting them loans became feasible and profitable for banks. I believe this is what the authors meant by incentivising classifiers to improve accuracy, and yes, I was wrong to suggest that they found a way to improve accuracy here while using the input classifier as a black box. Their ideas only compensate for the aforementioned race-biased score inflation, which isn't the entirety of the problem, and shift the financial burden of some other mispredictions onto banks/FICO to pressure them into getting their shit together.
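For contrast with "max profit", here's a minimal sketch of what "equal opportunity" asks for (again with invented scores and a placeholder target rate): per-group thresholds chosen so that the true positive rate, i.e. the share of actual payers who get approved, is the same across groups. Note the compensating effect: the group whose payers are scored lower gets a lower threshold.

```python
# Minimal sketch (invented data): "equal opportunity" equalises the true
# positive rate across groups by giving each group its own threshold.

# FICO scores of borrowers who would in fact repay, per group (hypothetical)
payer_scores = {
    "A": [560, 610, 640, 700, 720, 750, 780, 800],
    "B": [520, 540, 580, 600, 630, 660, 690, 710],  # payers scored lower overall
}

TARGET_TPR = 0.75  # approve 75% of actual payers in every group

def equal_opportunity_threshold(scores, target_tpr):
    """Highest threshold that still approves at least target_tpr of payers."""
    for t in sorted(set(scores), reverse=True):
        tpr = sum(s >= t for s in scores) / len(scores)
        if tpr >= target_tpr:
            return t

thresholds = {g: equal_opportunity_threshold(s, TARGET_TPR)
              for g, s in payer_scores.items()}
print(thresholds)  # {'A': 640, 'B': 580} -- different cutoffs, equal TPR of 75%
```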
I think trying to apply "equal opportunity" in the real world may indeed turn into handouts to people who can't pay, because <handwaving> it's possible that poor people are simply hard to classify correctly, and if certain ethnicities are poorer than others, those ethnicities will appear to be given less opportunity even though it's really poor people in general who are being given less opportunity. If FICO finds ways to classify poor Black borrowers better, applying the same techniques to other groups may improve their true positive rates too, so the "Black opportunity disadvantage" would remain.</handwaving>
But otoh, it's also possible that <handwaving> for some reason Black borrowers are classified less accurately than Asians, which contributes to the lower overall true positive rate of the group.</handwaving> The authors appear to assume this second possibility, though I don't think the two cases can be distinguished from the data presented in the paper alone.
TL;DR: I think you were oversimplifying things, and so was I :)