This whole ad-driven attention economy is probably as dangerous an invention as nuclear weapons. We're caught in a state of psychosis, and individually none of us can do anything to fight the billions of dollars spent on advertising designed to make us feel inadequate. Targeted ads and recommendation engines need to be banned. I know some people say they like them, but some people also say they like tobacco, and we generally agree it's bad for us. Companies will find other ways to innovate. Content curation, organization, and quality will become more valuable, and eventually the experience will be better.
I say this because half-hearted measures like the one in the article are not going to make any difference when the entire business model of the internet is clickbait.
I don't see how you can "ban ads" without instituting totalitarian dictatorship.
Targeted arguments are a core part of political campaigning and polarization has been increasing in the US for decades, even before the internet.
My pet solution would be to switch to a multi-party system that isolates rather than amplifies fringe voices. In Germany, people can look at what the AfD and Die Linke have to say without having to choose between them.
I use a screen for coding work, HN breaks, shopping, civic life like bills, and the occasional stream binge. Outside work, maybe 5-10 hours a week.
Otherwise I write creatively on paper, I've learned guitar, I share things digitally point-to-point, and I often leave my house for no particular reason.
Personally, I have a hard time buying the idea that anything you describe is some sort of obligation; it reads more like a repeated hand-me-down habit.
Google's search ads used to be displayed based on the keywords being searched, rather than on the user viewing them; Facebook's, based on what you've "liked"; Amazon's and eBay's, based on what you've bought before on their sites.
Go back to targeting ads based on page context and explicitly provided information (search queries, what I actually enter into my "profile", etc), rather than machine surveillance and inferences.
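Contextual targeting of the kind described above is mechanically simple. Here is a minimal sketch (all inventory and function names are made up for illustration) of matching ads against a page's declared keywords or an explicit search query, with no behavioral profile involved:

```python
# Hypothetical contextual ad matcher: ads are selected from the page's
# own keywords (or the user's explicit query), never from surveillance
# data or inferred interests. All names here are illustrative.

AD_INVENTORY = {
    "guitar": ["Acme Guitar Strings", "Learn Guitar Online"],
    "hiking": ["TrailCo Boots"],
    "coffee": ["Bean Brothers Roasters"],
}

def pick_ads(page_keywords, limit=2):
    """Return up to `limit` ads matching the page's declared keywords."""
    matches = []
    for kw in page_keywords:
        matches.extend(AD_INVENTORY.get(kw.lower(), []))
    return matches[:limit]

# The only input is the context the page (or user) explicitly provides.
print(pick_ads(["Guitar", "coffee"]))
```

The point of the sketch is the input signature: the function never sees a user ID or browsing history, only the page context, which is exactly the pre-surveillance model being described.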
The goal of any social platform should be healthy engagement. Where is the research that this change will foster that? Twitter isn't really the place to be testing half-baked ideas.
Also, misinformation is coming from mainstream media as well as conspiracy theorists. At least conspiracy theorists are trying to find the truth. Twitter doesn't seem to acknowledge that, so their answer is to create digital totalitarianism? That's their great idea?
>conspiracy theorists are trying to find the truth
Some of the biggest conspiracy theories that flourish on the web are the result of viral, provably false information. These people are not truth-seekers; there is no proper research or scientific method. It's the equivalent of the supermarket tabloid taken to the extreme: QAnon, flat earthers, 5G cancer, etc.
I have no interest in defending the Trump administration except to point out that what you are supporting is, in fact, hypocritical. I can think of four examples of what I know to be hypocrisies that made it through the supposed "fake news" filter.
1) Russiagate, an unsubstantiated hoax promoted in the media for 3 years leading to countless defamatory attacks on Trump, and millions of dollars spent on investigations.
2) Cambridge Analytica, a tactic used by Obama's campaign which was praised in 2011 as being a new progressive way to reach voters, and then miraculously became an act of corruption when Trump did the same thing in 2016.
3) "Mostly peaceful protests", an ongoing gross misrepresentation of what normal people would call riots.
4) "Trump supporters are racist white men", an ongoing smear of the Trump administration and emotional abuse towards the Trump base (or anyone who doesn't buy into the narrative), which has been proven absolutely false by the latest election.
So which is it? You do or you don't want a newsfeed flooded with false/defamatory information? Just allow the stuff you don't personally see as false/defamatory?
This is an extremely dangerous problem, and people need to start waking up to it instead of thinking they got it all figured out, as if Alex Jones yelling at frogs is the reason everything is falling apart.
The fragmentation of reality which we're seeing (Trumpers, leftists, QAnon, flat earth, 5G, etc.) will NOT be solved by slowing down how fast people can "like" something. The entire internet is broken, and the social media platforms need massive rewiring. We need to properly research the ways the current platforms poison discourse, and find remedies that work to dissolve fragmentation and help people communicate better.
Currently, things are rapidly spinning out of control, and the social media platforms have decided to opt for totalitarianism. As they ban and hamstring everyone who doesn't buy in, good or bad, those people will find each other on the decentralized internet. This is creating a powder-keg for narrative chaos and conflict.
Additionally, if the media makes the wrong call due to insufficient evidence at the time or bias, then they could take a hit to their reputation.
It will almost become tactical for the original posters to hold back evidence and wait for their "misinformation" categorisation and then to "disprove" this to make the censors look like liars.
Does that ever happen? It seems they're exempt from repercussions.
Related: Why unmoderated online forums always degenerate into fascism
https://www.salon.com/2019/08/05/why-unmoderated-online-foru...
It doesn't take much to push people back into this primal level of thinking. We are predisposed to follow a charismatic leader, predisposed to follow a religion and adhere to it without any weighing of its systems with that of other religions or thinking critically about our beliefs, predisposed to fear the unknown 'other' rather than welcome them and their ideas. Millions of years of selective pressures have created who we are and how we behave toward one another, it's no surprise that there are some serious growing pains toward adapting to this new world where we attempt to treat others as equals rather than threats. Same sex marriage was only legalized five years ago in the US, after all.
No, the phenomenon that really only lasted a few years was due to the fact that the first diasporas kicked off those sites (or that left due to crackdowns on their expression) were existing hateful communities. There's a name for the effect that I can't quite remember, but basically new communities are unsavory at first because they're made up of the unsavory characters who are unwelcome at the other communities, and this prevents their growth.
Thankfully, we are currently seeing that effect wind down as well. With moderation expanding beyond plainly hateful content to anything a site deems unlikeable, those diasporas are becoming less and less unsavory, and more mainstream sets of ideas are being discussed on the newer forums.
I regard articles like the one you linked as yellow journalism, designed to discredit people's desire to collect and discuss ideas online, freely, and to argue in favor of places where ideas cannot be discussed freely. If there is any one thing that causes sites to become hateful, it is the need for the site to promote "engaging" content for profit, something you are less likely to see on newer sites with less commercial pressure.
Those platforms, from the topic oriented platforms, to digg & reddit all stressed moderation. What's new is this idea that any sort of moderation is an infringement of free speech and content platforms should moderate as little as possible. Moderation used to be far stricter when platforms were smaller.
You still don't get it that _NO ONE_ can claim they know what is/is not fake news because everyone has their own incentives.
This is precisely why it's important that Twitter calls out when things are stated as fact without evidence. Twitter never actually says things are fake; it says things are baseless and without evidence. If you want to post things that are influential, you simply need to back that claim up with something people can verify from a trustworthy source. "Fake news" will still be posted because sometimes even a trusted source gets it wrong, but it'll happen far less often. That's the goal.
Suggesting that we should all adopt Nietzsche's perspectivist approach where "there are no facts, only interpretations" is entirely unhelpful. You can't run a functioning society if you have to accept literally every batshit mental theory as "well it might be right, we can't ever know for sure". If there is no evidence, you can say something is fake. You just have to accept that maybe 0.1% of the time you'll be wrong.
> Officials in Michigan reported on Tuesday that citizens of Flint, a predominantly black city, were receiving calls telling them to vote Wednesday, and not on election day. The calls are now being investigated by the FBI.
That is misinformation, no ifs, no buts.
In the early 2000's Americans were lied into the Iraq war, with multiple newspapers practically begging [0] for war, and critics were the ones on the correct side of history. If that happened in 2021, would the critics be silenced, have warnings on all of their tweets, and be told that they're supporting conspiracy theories? Questioning the official narrative of power is becoming wrongthink.
Who's deciding what's wrong and right?
[0] https://www.washingtonpost.com/archive/opinions/2003/02/06/i...
Have you been arguing with OP before? Or is this just how you address people in general? It doesn't set a very nice tone.
In this case, twitter. And it's fine to disagree with the fact checkers.
Example: The famous photo of Anderson Cooper standing in a ditch, pretending as if there was a catastrophic flood. (Really, the water was only a few inches deep.) Would Twitter flag that one?
It's a given they'd flag Trump a bunch. How about Joe Biden saying he would not ban fracking? Would Twitter flag that one?
Twitter is going to have a very hard time making anybody happy with this idea.
Twitter isn’t removing these tweets, just adding their own messages. That is, they are responding to free speech with their own free speech.
If they use this feature on high-profile figures it’s even better: free speech to authority.
Twitter is not a free speech system to begin with, so "free speech" purists are not obligated to be OK with this unless the whole platform is based on free speech. In that kind of system, any user would be able to prompt a popup for any other user about to like any tweet. Now only Twitter has that power.
Do you think even free speech purists can see the danger in allowing too much influence concentrated in too few hands?
Yes, this is unironically a good thing.
It means that Twitter is acting as a publisher rather than a neutral channel but nobody is willing to pretend that corporations are neutral anyway.
Doubt that would've gone over well.
The premise of democracy rests upon the concept of free and open debate. If we cannot trust the public to consume information without hand holding, why should we trust them to vote on issues which impact our lives and property? Ironically, censorship is enacted in the name of protecting democracy.
Twitter and Facebook are part of "the public". You are even using, right now, a website very heavy on "censorship", or as I call it, "moderation". If you don't follow the HN guidelines you will be silenced. The efforts by members of the public to reduce the spread of misinformation and polarization are part of why we should trust the public to handle information. But if you do not like HN, Twitter, or Facebook, you can look for another website.
Apparently Parler is getting recently popular as an alternative to Twitter and Facebook moderation: https://en.wikipedia.org/wiki/Parler
Completely the opposite experience of what is quite literally the first sentence on their homepage:
> Speak freely and express yourself openly, without fear of being “deplatformed” for your views.
HN's guidelines appear to be "put some effort into what you say and don't fight." They're not censoring any particular set of ideas.
>I'd love a filter where Twitter only shows real people who have verified their ID with a passport. No companies, no bots.
In 2016, you had a guy jokingly claim that he was ripping republican ballots in Ohio. This tweet spread like wildfire, and caused an unimaginable headache for the secretary of state as the right-wing media went wild on the story. I'm not surprised Twitter is taking such heavy handed action given that they will be directly in the cross hairs if a story like that ever happens again. No "replacement" would be immune from this issue.
I also don't think anybody would choose to be censored; that doesn't make sense. Maybe you could offer optional spam or misinformation filters, but why would anybody force them on themselves? Twitter and Facebook also employ "fact checking services", which could simply be applied voluntarily on other networks.
I also think the problem is way overblown. On Twitter you can choose who to follow. If you select the right people, you won't get the misinformation spam.
I never claimed a replacement could be built in a weekend, and the incentives are exactly part of the problem and part of my question. It seems technically possible to build something like Twitter on a distributed basis with nobody having centralized control, but it probably wouldn't be as snappy as Twitter. People stay on Twitter out of convenience, and also because of the network effect. You would have to make lots of people switch at the same time. That is the challenge.
It was trending on product hunt yesterday and many alternatives are created every week.
I don't understand people on HN. They don't want mainstream social media but won't try these smaller alternatives because they are not mainstream. Seems pretty contradictory, no?
This isn't a real alternative because it doesn't have [insert the reason why people here hate twitter or Facebook]
For a thing to be a better alternative it would need to have structural differences that ensure it will not devolve as Twitter has. Gab is essentially a carbon copy of Twitter, but for a different set of ideas.
Pleroma and Mastodon (as well as other ActivityPub enabled microblogging software) are the only real, better alternatives that exist right now.
> Gab is an English-language alt-tech social networking service known for its far-right userbase.[7] The site has been widely described as a safe haven for extremists including neo-Nazis, white supremacists, and the alt-right, and has attracted users and groups who have been banned from other social networks.[8][9][19] Gab states that it promotes free speech and individual liberty, although these statements have been criticized as being a shield for its alt-right ecosystem.[20][17][21] Antisemitism is a prominent part of the site's content, and the company itself has engaged in antisemitic commentary on Twitter.[23][24][25] Researchers have written that Gab has been "repeatedly linked to radicalization leading to real-world violent events".[26]
It's almost like there's a pattern of "free speech" alternatives turning into a cesspool.
This is the big-tech equivalent of saying "we might be joking or being cheeky when we ban someone" just to add more legal ambiguity and grey areas when it comes to explaining why they banned / flagged something or let something be allowed.
That said, when the losing candidate declares victory on a platform, then alleges fraud, citing nothing, it's not even really the time for abstract philosophy.
And there's "misinformation", the bane of agenda setters.
This will be selectively applied.
Why is this so complicated?
This indicates the dramatic damage done to news organisations when they have been caught in egregious lies and falsehoods, again and again.
Remember, when Twitter cites "official sources", it doesn't mean it's correct, or even that it's not totally fabricated. It means it's probably partisan-slanted "news" written by discredited media organisations.
Even news organisations that are generally trusted on HN have enormous bias and propensity for lies.
Such as the Washington Post falsely claiming Russia hacked critical US energy infrastructure, then retracting the fabricated claim altogether. [2] Or NPR claiming that the driver of a vehicle who was violently attacked by gun-wielding assailants during a protest was a "right-wing extremist", a claim which was nuked without retraction (they did not apologise for the slanderous claim). [3] [4]
The Associated Press, a self-claimed non-partisan news organisation, falsely claimed the Trump campaign detained 100,000 migrant children, while the actual truth was that it was orchestrated during the Obama administration. Reuters, AFP and NPR also participated in this fabrication. [5] [6] [7]
[1] https://www.journalism.org/2020/01/24/u-s-media-polarization...
[2] https://yro.slashdot.org/story/16/12/31/1533245/washington-p...
[3] https://pbs.twimg.com/media/EbEIlsUUcAACYht?format=jpg&name=...
[4] https://www.wave3.com/2020/06/18/protesters-arrested-followi...
[5] https://abcstlouis.com/news/nation-world/retractions-issued-...
[6] https://finance.yahoo.com/news/multiple-outlets-retract-stor...
[7] https://www.imediaethics.org/ap-afp-reuters-npr-retract-chil...
No, it mostly reflects the rise of ideological tribalism supported by the rise of a diverse array of media outlets catering to (and reinforcing) preconceived biases (some originally driven by propaganda interests, but more driven by the business desire to capture a distinct demographic market for advertising; in the end, the interests overlap and coexist.)
Just catering to preconceived biases leads to basically good faith bias with a bit of manipulation. The current situation is well beyond that.
So based on your citations, "enormous" is roughly a handful between the thirty of them over the last decade?
Get some perspective, do you actually think the WaPo literally made up a story or do you think they (let's be uncharitable) ran with a story they should've done more due diligence on?
It's easy to see how a false story about a serious attack on US infrastructure could spark confrontation, or even more serious war (much like when NYT fabricated evidence that WMDs existed in Iraq).
No, it mostly reflects the rise of ideological tribalism supported by media outlets.
The mainstream media?
"If the news are fake imagine history".
People need to understand that NO ONE can define what is "misinformation" because all sides have incentives to lie.
This is purely thought policing.
Also, people talk about "fake news" from mainstream media, but examples of the mainstream media reporting literally incorrect information are actually pretty rare, and have real-world consequences when they happen.
Sure. But if they choose to do that, it is not a place where you can expect to share ideas and have real discussions with people. It becomes a place where perception is crafted, something those of us who actually want to use the internet as opposed to a TV channel are trying to avoid.
I post "It's currently sunny where I live"
That's misinformation.
How can you even compare?
It's silly that there exist people in the modern age who still believe the news.
And it's these lying news outlets (which are privately owned) who define "misinformation".
Do people really believe that when a company says "we will become independent fact checkers" that that is even possible?
Did you learn nothing from your past experiences of what happens when a company says "we will become an independent fact checker, TRUST US, WE WILL NEVER LIE AND BE 100% HONEST AND OBJECTIVE".
But yes, generally, when a company promises to be your best buddy, no strings attached, it is time to get suspicious; and when someone believes them, I start wondering whether they're just naive or actually unintelligent.
Of course, the above would not include abstractions like "two parallel lines will never meet" or overly simplistic hypotheticals.