For systems that claim to make any kind of psychological inference, I doubt there's a single one whose claims you should believe. Nearly half of human-performed psychology studies in the last 70 years have failed to replicate, including ones whose results have become "common knowledge" (the field as a whole fared quite badly in what's called the "replication crisis"), and many of the best-supported of these supposed AI-psychology insights are, if you look into them, built on assumptions drawn from those very results. Most don't even do that much: they rush a rigged result that doesn't generalize to market, because their customers buy the hype and will pay a fortune for it.
100% of all AI hiring systems, AI proctors, AI recidivism predictors, AI drug-seeking classifiers, and anything that, like the system this article refers to, purports to infer personality traits from faces are both bullshit and dangerous. Maybe there could exist a reality where this wasn't true, but there is absolutely no solid reason right now to believe we live in that reality.
Also, that's not to say 91% would be good enough for any government or business to act on.
https://www.semanticscholar.org/paper/Deep-Neural-Networks-C...
Oh right, it's probably worth noting that since there are considerable reasons in many parts of the world to hide one's sexual orientation, and the study's design conditions only on sexual orientation reported on social media, the results are intrinsically skewed just by being drawn from a population of out gay people.
Also, bear in mind that if we take Wikipedia's reported rate of homosexuality in the general human population, for which 9% would be... pretty generous (the "Demographics of sexual orientation" article lists several statistics and I can't find a world aggregate, but e.g. San Francisco is 15%), a null classifier that always guesses "straight" would be just as accurate. If the true population rate were lower, it would be more accurate.
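To make the base-rate point concrete, here's a quick sketch. The 9% prevalence is the generous figure above; everything else is just arithmetic:

```python
# Accuracy of a "null" classifier that always predicts the majority class.
# Assumed prevalence of the minority class: 9% (generous, per above).
prevalence = 0.09

# Labeling everyone "straight" is wrong exactly on the 9% minority,
# so the null classifier's accuracy equals the majority share.
null_accuracy = 1.0 - prevalence
print(f"always-'straight' accuracy: {null_accuracy:.0%}")  # 91%

# At a lower true prevalence, the trivial baseline gets even "better".
for p in (0.05, 0.03, 0.01):
    print(f"prevalence {p:.0%} -> null accuracy {1 - p:.0%}")
```

So the study's headline accuracy is matched by a model that learned nothing at all.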
Bottom line, that study isn't very convincing
The idea that anything coming out of a sensor counts as data is going to cause a lot of harm.
When a computer can accurately predict (~90%) sexuality, criminal proclivity, etc. through facial features, then what exactly is 'pseudoscience' about it?
Sure, it can and will be abused, but that doesn't mean we should ignore it or label it 'pseudo' simply because it hurts your fee-fees.
Oh you mean the thing where it still "works" with faces blurred out because it's not picking up face shape?
https://www.theregister.com/2019/03/05/ai_gaydar/
> criminal proclivity
You mean the smile detector?
https://www.callingbullshit.org/case_studies/case_study_crim...
This stuff is bullshit and it can't work. The idea that it could work is magical thinking with no basis in reality.
Second, unless demonstrated otherwise, most determinations you could make, e.g. wealth, are not causal, they are effectively a computerized stereotype that looks for some common features the majority of each class share. To me this means facial features are not a suitable basis for a decision anywhere you wouldn't feel comfortable stereotyping.
Put another way, you can easily propose rules that are correct on average but horribly unfair to those who don't conform to the rule. This is as true for ML as it is for any other rule. The only ML-specific thing is that the basis for the predictions can be obscured, letting some mistake it for something deeper than it is.
Your use of 'accurately' probably doesn't reflect actual sensible practice in the assessment of data, where you account for the occurrences of false positives and false negatives using more revealing measures. One of the first online articles found through a quick search seems to be a very good introduction already: https://towardsdatascience.com/accuracy-precision-recall-or-... (Koo Ping Shung, Accuracy, Precision, Recall or F1?, 2018).
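To illustrate with toy numbers (the 9% base rate from elsewhere in the thread, and a hypothetical detector with 90% sensitivity and 90% specificity — these are my assumptions, not figures from the study):

```python
# Confusion-matrix arithmetic: why raw accuracy misleads on an
# imbalanced population of 1,000 people, 9% in the positive class.
population = 1000
positives = 90                       # 9% of 1,000
negatives = population - positives   # 910

tp = positives * 0.90                # 81 correctly flagged
fn = positives - tp                  # 9 missed
tn = negatives * 0.90                # 819 correctly cleared
fp = negatives - tn                  # 91 wrongly flagged

accuracy = (tp + tn) / population    # 0.90 -- sounds impressive
precision = tp / (tp + fp)           # ~0.47 -- most of a coin flip
recall = tp / (tp + fn)              # 0.90

print(f"accuracy:  {accuracy:.2f}")
print(f"precision: {precision:.2f}")
print(f"recall:    {recall:.2f}")
```

Same "90% accurate" detector, yet more than half of the people it flags are flagged wrongly.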
Politically, there is a problem in dealing fairly with the matter of inclinations, especially considering that guilt attaches to actions, not inclinations, or considering that judging by inclination amounts to "prejudice".
The use of 'pseudoscience' in the article was more political than theoretical: imprecise, but left to the reader's margin for "getting the idea". It meant "we have been there, and the actual scientific results were poor (e.g. we could not predict local brain function under that bulge that might have indicated inclination i)".
Science is much more complex than the simple correlation you seem to be supposing. Science is about understanding phenomena with objective grounds and methods (understanding is then corroborated with predictions, but predictions are not understanding). In your example you are limiting the matter to observations: they are the first step in science, not the last. (A statement like 'people with quality q tend towards inclination i' would be an observation, not a law.)
What it does do well is tell you whether the specific picture of a person you feed it looks roughly similar to pictures of other people who belong to a certain category.
All of its predictive power comes from the fact that the datasets it is trained on are completely imbalanced and that society has inherent biases, so it just picks up on those and magnifies them.
I can guarantee you that a picture of a white male CEO in a suit and a picture of a black young adult in everyday clothing will score extremely differently on the model no matter what their personal criminal proclivity is.
Humans (cops) do the same thing: they are used to a certain population being more at risk of criminal activity, and thus they will stop and check anyone from that population much more (e.g. stop-and-frisk in NYC).
This is illegal in most places. We are doing the same thing all over again, except now we can run it through a black box branded "AI", call it science, and legalize it again.
So, natural selection has evolved a criminal gene and a linked head-bumps gene, but hasn't given us humans the ability to detect it? That sure would have been useful. Nature "knew" and patiently waited hundreds of thousands of years, keeping that useless gene alive, until computer vision would appear and finally allow us to detect it?
Yes, that sounds plausible. Let's push to prod.
90% is a fairly poor success rate for a predictive model. If you were in the 10% falsely accused of being a future murderous pedophile based solely on your facial bone structure, then imprisoned or institutionalized for it, I'd think your conclusions on this topic might change.
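The falsely-accused problem is even worse than "10%" suggests once you account for how rare the trait is. A sketch, assuming a population of 100,000, a hypothetical 1% true prevalence, and 90% sensitivity/specificity:

```python
# With a rare trait, a "90% accurate" predictor flags far more
# innocent people than guilty ones. All numbers are assumptions.
population = 100_000
actual = population * 0.01                    # 1,000 truly have the trait

flagged_right = actual * 0.90                 # 900 true positives
flagged_wrong = (population - actual) * 0.10  # 9,900 false positives

ratio = flagged_wrong / flagged_right
print(f"falsely flagged per correctly flagged: {ratio:.0f} to 1")
```

Under these assumed numbers, eleven innocent people are flagged for every guilty one: the Blackstone ratio, inverted.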
Remember the Blackstone ratio (1).
Also, physiognomy was a key attribute of Nazi eugenics goals (2) - that's the rabbit hole this work leads down.
It's not about protecting feelings, it's about remembering history and learning from mistakes to protect liberty.
Lastly, regarding the goal of predicting a person's "sexuality" by any method, I would posit that the motives are most likely strongly against, rather than for, liberty for all. What else would that information be used for, other than to oppress?
1.) https://en.wikipedia.org/wiki/Blackstone%27s_ratio
2.) https://www.researchgate.net/publication/275738773_About_Fac...
Being convinced you can deduce Bob's character traits, e.g. determine that Bob must have criminal tendencies based on the shape of his nose, is something very different.
This paper is about the latter: about people doing CS research becoming convinced the latter is possible, because their statistical dowsing rod has found a correlation in their data set, and about the very bad consequences that can follow from this.
You might want to at least read the abstract of the paper.