> In my opinion, yes, if it leads most readers to misjudge some fundamental properties of the problem as a whole.
Which problem? The general statement of this problem is "models trained on [somehow] misrepresentative data [or even technically representative data] can draw unintended conclusions that lead to harm". Specifically in this case, the harm was "the model was basically just trained to ignore all women applicants due to bad inference of conditional probabilities".
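To make that concrete, here's a toy sketch (synthetic data, not the actual system) of how that kind of conditional-probability bias gets learned: if the historical hiring labels held women to a higher bar, a classifier trained on them picks up a negative weight on the gender feature even when qualifications are identically distributed.

```python
# Toy illustration only: a classifier trained on biased historical hiring
# decisions learns to penalize a gender feature, reproducing the bias.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Synthetic "historical" data: skill is identically distributed across groups.
is_woman = rng.integers(0, 2, n)    # 1 = woman, 0 = man
skill = rng.normal(0, 1, n)         # true qualification, same for both groups

# Past hiring decisions depended on skill, but women were systematically
# held to a higher bar (this is the bias baked into the labels).
hired = (skill - 1.0 * is_woman + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, is_woman])
model = LogisticRegression().fit(X, hired)

print("coefficient on skill:    %+.2f" % model.coef_[0][0])
print("coefficient on is_woman: %+.2f" % model.coef_[0][1])
# The second coefficient comes out strongly negative: the model has "learned"
# that being a woman predicts rejection, with no signal about actual skill.
```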
This is a common thing. Because our society draws lines and has bias, it's fairly common for modelling failures to fall along those lines. Sometimes the failures are mostly harmless and immediately obvious, but often they aren't. And people building models should be made aware of those failure scenarios, especially the ones that affect underrepresented groups, because those are the groups the model is most likely to fail on if you aren't actively looking for them.
And this stuff is pervasive. Facial recognition tech is much worse at detecting the faces of darker-skinned people [1]. Some of this is because the people building the common models (eigenfaces etc.) didn't use diverse skin tones, but some of it goes back further: white balance in film was tuned for lighter skin tones until the 90s [2]. Some of that has likely persisted into modern film and camera technology, unfortunately. People working with data need to understand their data. And that means understanding how bias infests their data.
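What "actively looking" means in practice is mostly breaking your evaluation down by group instead of trusting one aggregate number. A toy sketch (made-up groups and error rates) of why that matters:

```python
# Toy illustration: an aggregate error rate can hide a much worse error
# rate on an underrepresented group. Slice your metrics by group.
import numpy as np

rng = np.random.default_rng(1)

# Fake evaluation set: 90% group A, 10% group B, with the model
# deliberately made 10x worse on the underrepresented group.
group = np.array(["A"] * 9000 + ["B"] * 1000)
truth = rng.integers(0, 2, group.size).astype(bool)
noise = np.where(group == "A", 0.02, 0.20)    # per-group error probability
pred = np.where(rng.random(group.size) < noise, ~truth, truth)

print("overall error rate: %.3f" % np.mean(pred != truth))   # looks fine
for g in ("A", "B"):
    mask = group == g
    print("error rate for group %s: %.3f" % (g, np.mean(pred[mask] != truth[mask])))
# The headline number is ~0.04; group B is failing at ~0.20.
```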
> fundamental properties of the problem as a whole
You've yet to state the "whole problem" or the fundamental properties that people might misjudge, so I'm unclear on what they are.
[1]: Arguably an advantage now.
[2]: https://petapixel.com/2015/09/19/heres-a-look-at-how-color-f...