- Attempted coup
- Radicalized and misinformed supporters (majority) with an extremist, armed element (minority)
- Extremely dangerous moment for free and fair elections in the West
What would the world have looked like if, in 1923, the disaffected Germans who took part in that had been identified en masse, and serious efforts had been made to reintegrate them into society while addressing the systemic issues they faced?
Instead, we ended up with https://en.wikipedia.org/wiki/Denazification several years later, after a lot of death.
I think there is a very good argument for identification and holding people to account. There also needs to be very, very robust adherence to due process - AI identification is not proof, and a suitable alibi should be step 1 in invalidating it.
> "From 1945 to 1950, the Allied powers detained over 400,000 Germans in internment camps in extrajudicial fashion in the name of denazification."
Edit: I completely agree there is nothing special about this case. No exigent circumstances.
If it happens, it should be done by an institution that is under the supervision of congress and staffed by public servants. This emerging, largely unaccountable surveillance industrial complex with ties to extremist political figures worries me more.
I think so. That mechanism is judicially auditable. The AI is not. We should not be arresting people based on the output of an unauditable mechanism.
Edited the original.
What are the bounds for such technology and/or companies like Clearview?
Are there quality of life differences in places where facial recognition crime technology is used vs not used?
???
> one of those situations where I'm supposed to be supportive of the use of facial recognition
No, it's not. This is Hoan Ton-That supposedly protecting us from the crazies... "Founder Hoan Ton-That has links to the far-right movement that move right past suspicious into obvious, according to HuffPo. He reportedly attended a 2016 dinner with white supremacist Richard Spencer and organised by alt-right financier Jeff Giesea, an associate of Palantir founder and Trump-supporting billionaire Peter Thiel. (Thiel secretly bankrolled a lawsuit that bankrupted Gizmodo’s former parent company, Gawker Media.) Ton-That was also a member of a Slack channel run by professional troll Chuck Johnson for his now-defunct WeSearchr, a crowdfunding platform primarily used by white supremacists; that channel included people like the webmaster of neo-Nazi site Daily Stormer, Andrew Auernheimer, and conspiracy theorist Mike Cernovich"
https://www.gizmodo.com.au/2020/04/creepy-face-recognition-f... (258 points, 12 days ago, 174 comments)
https://news.ycombinator.com/item?id=25562321
...
> I got my file from Clearview AI (onezero.medium.com)
> 811 points 9 months ago 224 comments
Regardless, I've had this theory about cancel culture. I don't necessarily agree with cancel culture, for the aforementioned problem of it being mob social justice. But it seems to me like it has arisen out of a failure by the real justice system. Issues like sexism in particular, which affect half of the population, have been ignored and marginalized. It took how long for Bill Cosby's heinous crimes to finally be prosecuted? Moreover, how likely would it have been for his crimes to yet again have been swept under the rug had cancel culture not fostered an environment where the victims felt comfortable coming forward?
The very topic we're discussing, the terrorist attack on our Capitol, is another example of racist failures of our police force.
So is it really any surprise that society has collectively taken matters into their own hands?
Again, I don't _agree_ with the idea that society at large should pass their own judgements. I'd rather the courts do that. But they haven't been. And aren't. And we just suffered through one of the worst years on record of blatant police abuse and court inaction.
If we want to get rid of cancel culture I think we need to fix our policing and justice system to the extent that society feels they don't need to take up the mantle of justice themselves.
In other words, I don't see value in deriding cancel culture. If one feels that cancel culture is wrong, my belief is that one should be calling for action to repair the _cause_ of cancel culture, not the symptoms. And that cause is a prejudiced justice system.
If I don't want to buy a Musk-mobile because I don't agree with Musk on issues like racism[0] or don't like how he treats his employees[1], I am allowed to act on it. I am also allowed to share these thoughts with people on appropriate platforms, like I'm doing here.
I can understand the frustration coming from people digging up problematic tweets from 4 years ago, but this whole idea that "the internet forgets nothing" isn't new. I learned it in grade school. If you want to post racist/sexist/homophobic things on your public Twitter, so be it, but don't be surprised when someone finds it. There's also a delete button.
[0] https://www.theverge.com/2018/11/30/18119832/tesla-elon-musk...
[1] https://www.ibtimes.com/elon-musk-hot-water-after-tesla-empl...
So, “cancel culture” has been a matter of fact, at least in the US, for quite some time now. It’s just given a scary name when those who have been “canceled” in the past are doing the “canceling”.
Social actions will always have social consequences; that's a necessary corollary to freedom of speech. What's different today is that social conservatives and especially the radical right are starting to experience those same social consequences. If people are going to use their freedom of speech and association to, say, support Nazis, then other people are free to use their freedoms of speech and association in response.
Not necessarily. Belarusian protesters are working on methods to unmask anonymous police officers [1].
[1] https://meduza.io/en/feature/2020/10/01/you-have-no-masks
I think prohibiting recognition and tracking for commercial reason without explicit consent, plus allowing recognition and tracking only when acting on a court order would be a good start.
What happened was a shame, but I truly hope this doesn’t evolve and make things worse than they already are.
I think the horse has left that particular barn.
Although, of course, the barrel of badguy-stupidity is bottomless, so I suppose they'll keep catching the dumber and dumber ones pretty much forever...
No, it's not.
> Imagine the stasi had it?
Yes, the problem there is the Stasi, not the technology. There is literally no technology that the Stasi having would be a good thing, including pen and paper. Or things so basic we don't tend to think of them as “technology”, like, say, language.
Not sure I agree with that analogy... pen and paper doesn't scale!
Having the ability to do something at global scale, like facial recognition or real-time tracking and saying "Honest! We won't use it for dodgy things" is not sufficient...
It'd be naive to say I'd rather it didn't exist; however, that cat is out of the bag now, so there _must_ be incredibly robust and tamper-proof checks and balances around its use, and the penalties for subverting them should be incredibly severe.
Why is that a problem? Are there any investigative tools that are perfect or is there a reason why facial recognition should be held to a higher standard?
> It's no better than fingerprinting, which often leads to unjust arrests.
You need to be more specific. Why is it no better? Has facial recognition been demonstrated to lead to more unjust arrests than other investigative methods?
> There is no replacement for detective work and the government shouldn't be lazy to exclude it.
Who's saying that facial recognition is a replacement for detective work? It's just another investigative method, like looking up a license plate or asking people at the crime scene what they saw.
> The best it can do is narrow down suspects.
It can also help find suspects when you don't have any other leads. Why isn't that good enough?
Facial recognition has also time and time again proven to be racially biased[1][2].
Not to mention how easy it is to create a surveillance state with facial recognition[3].
[0] https://threatpost.com/lawsuit-claims-flawed-facial-recognit...
[1] https://www.cnn.com/2019/12/19/tech/facial-recognition-study...
[2] http://sitn.hms.harvard.edu/flash/2020/racial-discrimination...
[3] https://www.washingtontimes.com/news/2019/dec/9/social-credi...
The problem only happens if law enforcement believes that the matches are somehow infallible and refuse to look for or believe in other evidence that would rule out a suspect.
EDIT: Never mind, there is an active HN thread on just that.
Hell, Apple's FaceID makes a mistake every million faces, and that system is both professional and has an order of magnitude more data to work with from the FaceID scanner. Clearview is just using blurry photographs.
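To put that error rate in perspective, here's a minimal back-of-the-envelope sketch. The 1-in-a-million false-match rate echoes the FaceID figure mentioned above; the 100-million-photo database size is a purely hypothetical assumption:

```python
# Base-rate sketch: even a highly accurate matcher yields false positives
# when a probe photo is searched against a very large database.

def expected_false_matches(database_size: int, false_match_rate: float) -> float:
    """Expected number of innocent people flagged by a single search."""
    return database_size * false_match_rate

# Hypothetical numbers: a 1-in-a-million false-match rate (the FaceID
# figure cited above) applied to a database of 100 million photos.
matches = expected_false_matches(100_000_000, 1e-6)
print(matches)  # roughly 100 expected false matches per query
```

Even at FaceID-level accuracy, a single search against a large database is expected to surface on the order of a hundred wrong people, which is why a match alone cannot serve as proof.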
I don't think that's the case. Facial recognition can drastically speed up the process of nailing down suspects when combined with other information sources.
I don't really see facial recognition as the sole reason to be worried here. Information collection and sharing is already ubiquitous; that is what leads to all of this.
Also, it's much easier to get people's photos than fingerprints.
Narrows the number of haystacks to search for needle => Reduced resources required for successful search => More crimes prosecuted.
Problem occurs when:
* Users of FRT assume everything in the haystack is a needle
* Crimes on the books must not be universally prosecuted
The first part can't be helped. US policing, like most US government work, is a rest-and-take-it-easy job. In aggregate, unexceptional people doing an unexceptional job. The second part is because people want other people prosecuted but not themselves.
I'm in the second category myself. For instance, I am quite capable of using all sorts of drugs and maintaining a productive life. Other people are not. So it's important to prosecute other people and not to prosecute me.
Therefore, for these two reasons, I don't want FRT to be used universally. I want to preserve inequitable outcomes in policing because society is stronger with inequitable outcomes - permits good life for high percentile individuals and constrains operations for low percentile individuals. Demarcating crime from uncrime is Sorites paradox.
Thanks for being clear about your perspective. Do you think there's potential for abuse with different rulesets for different people?
> Demarcating crime from uncrime is Sorites paradox.
I disagree. The measure of a crime is subjective and objective. Subjectively, the victim notices they have been wronged. Objectively, there is a claim by a plaintiff against a defendant. A claim either exists or it does not, there is no sorites.
Oh, most certainly. The same structure allows for racial discrimination, which I do not believe is a sensible angle of discrimination: i.e. I think Ben Carson should not be discriminated against for being black. Too high value as a top surgeon.
On the whole I accept it, though, because I don't want Elon Musk prosecuted by the SEC and the instrument that permits both is blunt.
> The measure of a crime is subjective and objective. Subjectively, the victim notices they have been wronged. Objectively, there is a claim by a plaintiff against a defendant. A claim either exists or it does not, there is no sorites.
Indeed. When there is a threshold. However, the costs imposed on society by drug users are dispersed. You can't Categorical Imperative them because some people are not capable enough to handle the responsibility.
Other times the crime is exposure to increased risk: no actual harm may occur. For instance, if you do burnouts on city roads there is little concrete harm, only increased exposure to risk.
It's the same with many things: public drunkenness, drink driving, jaywalking. And society reacts to these by permitting these activities in practice for high-value individuals while proselytizing against them at the same time.
I don't drink-drive but I happily do the other two.
If it helps, I am familiar with the Veil of Ignorance, the Categorical Imperative, and every other basic tool of ethics you can think of.
I'd like to rob a bank.
I want enough bank robberies to succeed that the fictional movies and books that are an important part of my life remain plausible.
In general I don't want just anyone to be able to rob a bank.
If anything, far FEWER mistakes will be made. Not more.
It's this weird thing where people hear about this one scary story of a guy misidentified and think "OMG facial recognition is terrible!" but they don't realize that happens to XYZ number of people a day via human error.
But we're all MORE comfortable with it if it's good old fashioned...human error?
Okay they get arrested, maybe charged with a crime. Others will get doxxed and most will lose their jobs. They'll become social outcasts.
Do you really want a bunch of very angry people to effectively be pushed into a corner, demonized by society and so on?
This will end up having very big unintended consequences.
My understanding was that there were 100k+ people in attendance at this rally and thousands of people were in the Capitol building itself.
Yeah there are crazies that will be fine with that, but what they can do without the support of “normalish” people is limited.
The alternative is to allow the escalation to continue.
I'm diametrically opposed to Trump and his supporters on a lot of issues, but I recognize that a functional society needs to accept their right to protest as well. They should be able to have their marches just like we have ours.
However, the stated goals and actions of many of those in last week's march and rioting are explicitly violent and seditious. Many of the protestors were heavily armed. They killed a policeman.
> Do you really want a bunch of very angry people to effectively be pushed into a corner, demonized by society and so on?
I understand what you're saying: by pushing individuals into a corner, we may make them more desperate and feed the overall "persecution" complex that motivates their movement as a whole. I think that's true. On the other side of things, if we do nothing, do we legitimize their extremist views? Those extremist views become the new normal, or at least move the goalposts for the range of views considered normal.
Admittedly, it's a "damned if you do, damned if you don't" situation.
But this isn't about denying their views or their right to protest. It's about targeting specific criminal actions that we don't want my side, their side, or any side to do.
I have to admit my gut feeling on governments using facial recognition at scale to round up its citizens feels like something you'd find under an authoritarian regime, reminds me of https://news.ycombinator.com/item?id=14643433
I think it's important to try to frame an opinion on tech like this from a well-informed viewpoint, striving to lend appropriate weight to the latest incidents while resisting the tunnel vision that fixates on immediate goals to the exclusion of broader, long-term consequences.
I agree acts that crossed too far past the line of civil disobedience ought to be held to account.
I just hope our collective response doesn't erode the willingness of people of good conscience to take a stand when they see their institutions behave in a legitimately unsanctionable manner.
Is it regional, based on where many such supporters live? I certainly don't see any ideological link to untrimmed beards, so it must be some other co-occurring factor.