What people are concerned about are the newsfeeds and timelines, specifically. Companies like Facebook and Twitter and YouTube love to pretend that their newsfeed/timeline products are just like chat apps or phone calls--neutral messaging platforms.
They're not. And the specific reason they are not is the algorithmic timeline and content suggestions.
It's silly to worry about giving these products "the power to determine what people can—and can’t—say online." They've already seized it for themselves--by deciding for me which content will show up in my newsfeed/timeline/suggested list. They decide which content gets promoted to me.
Yes, they use an algorithm to do so instead of human decisions. But guess who built the algorithm?
Companies that run algorithmic newsfeeds and timelines need to own their role as a publisher and a gatekeeper of content.
Instead of pretending they don't make choices, they should be introspective and thoughtful about the criteria they use to make those choices. "Engagement" is not a neutral criterion, because emotions are not symmetrical. Engagement is higher on topics of fear, anger, rage, and violence. That's down to our evolution; that's down to the amygdala.
So if you build a publishing system designed solely to maximize engagement, it's going to become a system that preferentially serves content that feeds negative emotions. There are articles and case studies where a person starts with a fresh account and sees what kind of content gets pushed to them; inevitably they get horrible conspiracy theories and fear-oriented content.
Making decisions about what content your audience sees is an act of publishing, even if it's executed via complex algorithm. The companies doing this need to accept their responsibility for what they decide to serve and promote.
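To make the mechanism concrete, here's a minimal sketch (illustrative only: the posts and engagement scores are invented, not any platform's real model) of how a ranker that optimizes purely for predicted engagement ends up promoting outrage:

    # Toy feed ranker -- hypothetical posts and scores, not any platform's real model.
    # Outrage-bait reliably earns more clicks/comments/shares, so a ranker that
    # sorts purely by predicted engagement surfaces it first, by construction.
    posts = [
        {"title": "Local park cleanup this weekend",    "engagement": 0.02},
        {"title": "Cute dog learns a new trick",        "engagement": 0.05},
        {"title": "THEY are coming for your children",  "engagement": 0.31},
        {"title": "You won't BELIEVE what X just said", "engagement": 0.24},
    ]

    # Sorting by engagement is itself an editorial policy, not neutrality.
    for p in sorted(posts, key=lambda p: p["engagement"], reverse=True):
        print(f'{p["engagement"]:.2f}  {p["title"]}')

The point isn't the code; it's that the sort key is a choice someone made.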
I think the more pertinent reason here is that these platforms have broadcast capability (immediate communication with many people) as opposed to p2p capability (traditional SMS or phone calls). Even if Twitter were strictly chronological, without any algorithmic mutation, we'd still presumably be insisting they police content, right? I agree with your conclusion that they're publishers, but to me, what makes a publisher a publisher is not content curation or mutation, but simply broadcast capability. And so our drive to regulate follows quite naturally from similar drives to regulate the press and media.
One involves a neutral role, in which subscribed feeds are delivered to users without modification or filtering.
The other involves an active role on the part of the platform for any number of reasons: increased engagement, removal of voices that may cause perceived damage or lack of trust in the platform itself, or other, more ideological reasons.
I've noticed an intentional avoidance of distinction, lately, between active and passive behavior on a number of fronts, from sexual activity, to medical advice and intervention, to social media publishing. It's a pretty crucial component in ethical analysis that I suspect is being intentionally blurred.
Many people are demanding that Facebook do exactly this.
They're stuck, however, because many of the people who bought into the myth of objective, non-biased algorithms have gone down the rabbit hole of the garbage recommended by those algorithms. To those users, attempts to cull the garbage will be interpreted as censorship.
Add to that the fact that there is no way to improve the recommendation systems ("improve" in the ethical sense) without hurting engagement.
Add to that HN's allergy to government regulation.
It's going to be quite a rollercoaster ride over the next few years. :)
"There are good ideas floating around for how Facebook could make life harder on WhatsApp propaganda artists. In an op-ed published in the Times this week, Brazilian researchers Cristina Tardáguila, Fabrício Benevenuto and Pablo Ortellado offered three ideas: restrict the number of times a message can be forwarded from 20 to five, which Facebook has already done in India; dramatically lower the number of people that a user can send a single message to, from its current limit of 256; and limit the size of new groups created in the weeks leading up to an election, in the hopes that it will stop new viral misinformation mobs from forming." https://www.theverge.com/2018/10/19/17997516/facebook-electi...
10-15 years ago, SMS forwarding and bulk SMS were flagged as a problem. The Indian government put laws in place under which it can, and does, ask telecom operators to jam mobile or even internet signals in sensitive areas.
In which case, asking technology companies to toe the line is the next logical step.
Facebook DOES censor Messenger.
This is true. And it is more difficult than most people are willing to grapple with. Let's say that you wave a wand and you have a bot that will 100% eliminate every racist claim ever made on the service, burying it and showing it to no one. You have just made it impossible to talk about racism. By killing the discussion, you will enlarge the group of people who would never think to use a racial epithet, but who harbor deep convictions that different races have genetically-driven differences in capability and some need coddling. That is racism. But you can't even point that out, let alone discuss why it is factually wrong, on a sanitized platform.
On a sanitized Twitter, Megan Phelps-Roper and her sister would still be members of Westboro Baptist Church, picketing funerals and spewing vitriol. They might not be able to do it on Twitter, but they'd be doing it elsewhere. Because Twitter was NOT sanitized, and because it WAS possible to confront people with total refutation and challenge to their most closely-held beliefs, Megan Phelps-Roper was convinced that her own position was wrong and destructive. And because of that lack of censorship, that permission to offend and call out, Westboro Baptist has two fewer people working daily to hurt others. Anyone calling for a sanitized online platform is calling for a death of discussion, a death of social progress, and a death of any opportunity for the ignorant to learn.
In the 1960s, it was profane, disgusting, and obscene to suggest that interracial marriage should be allowed. It wasn't just 'a different opinion.' It was a view that made people sick, that riled up violence, that led to name-calling and hate. And it was only because the public forum was able to bear that hate, those insults, etc, that progress eventually happened.
Eric Schmidt, in his book 'The New Digital Age', makes the argument that people like himself should take the reins and kill discussion so that he might make the decisions for society. Were he around in the 60s, he would have been fighting to lock down discussions about interracial marriage. He, and many like him, see the public having heated discussions and roiling in conflict and conclude they are mindless and incapable of policing themselves. This is a view as old as time. It's Conservatism. The old kind. The kind that backed kings, pharaohs, chieftains, etc. The kind that said some people are simply Better and destined to lead, while others are Lesser and destined to follow. Don't be surprised, but many are comfortable accepting that role as a follower if it means less responsibility or need to think. That kind of conservatism died out near the end of the 18th century and through the 19th, but there is no reason it couldn't re-establish itself with a fresh coat of paint and maybe with the help of some automation.
Sure they are. Microsoft is even monitoring their service for "bad words".
https://boston.cbslocal.com/2018/03/27/microsoft-ban-offensi...
> What people are concerned about are the newsfeeds and timelines, specifically.
No. That's a small part of what primarily the left want to censor.
> It's silly to worry about giving these products "the power to determine what people can—and can’t—say online." They've already seized it for themselves--by deciding for me which content will show up in my newsfeed/timeline/suggested list. They decide which content gets promoted to me.
Which you can choose to ignore or bypass.
> So if you build a publishing system designed solely to maximize engagement, it's going to become a system that preferentially serves content that feeds negative emotions.
Then why aren't you demanding CNN or the NYTimes be censored?
> The companies doing this need to accept their responsibility for what they decide to serve and promote.
They are. They are serving what their customers want.
The only people who are complaining about it are authoritarian and selfish individuals who want to control what people see and say. It's no different than a prude whining about the porn people watch.
> They're not. And the specific reason they are not is the algorithmic timeline and content suggestions.
They're not, because they're public--akin to broadcasting. One to many. In the past, such channels have always been more or less under careful control. Public broadcast TV and radio were subject to moral and dogmatic control, and it wasn't feasible to make your own. Publishers could opt not to release a manuscript if it didn't fit their ideology.
Consider the following thought experiment: "Twitter and Facebook were exactly as popular as they are now, but they'd show everything only chronologically (newest on top). Do you reckon the control problem would be solved at that point?"
Now consider the following thought experiment: "Twitter and Facebook are only private 1:1 conversations. Do you reckon the control problem would be solved at that point?"
In example #2 (regardless of whether content is shown chronologically or via an algorithm), the communication--whatever it might be--only goes to one person, not the general public. This greatly contains the reach of propaganda (such as fake news or hate speech).
Also, remember that there are all kinds of biases [1], even when we're not aware of them or are especially prone to falling for them.
[1] It would be worth summing them all up, but I am by no means an expert on this subject. I'm currently reading the book "The Confidence Game" by Maria Konnikova, and it explains several of them in detail.
Repealing Digital Safe Harbour would be a good first step. If you are responsible for what people see, you are responsible for the content.
When we demand that Twitter ban anti-semitic tweets, or that Cloudflare block white supremacist websites, or that Youtube deplatform Alex Jones, we are taking the power to limit speech (which the founders felt was too important to be wielded by the government) and handing it to middle managers at software companies. The de jure rule is "Freedom of Speech shall not be infringed" but the de facto rule is "Don't say anything that would upset the advertisers."
This seems like a Bad Idea (tm) but until/unless a decentralized Mastodon/Scuttlebutt style platform gets traction, I don't know what the solution is. It's a natural result of relying on private apps as a primary method of communication.
> we are taking the power to limit speech (which the
> founders felt was too important to be wielded by the
> government)
Someone spray-paints a swastika on your car. Do you think the founders would mind if you painted it over?
You can hypothesize a future in which Mastodon gets very popular, and in which a single for-profit node or a coordinated group of such nodes monopolizes it and ends up wielding censorship power similar to traditional centralized social media sites, but that's not what it's designed to do and there's no reason to assume it's a fait accompli.
In the end, even most of these hypothetical distributed social network instances would have the de facto rule:
>Don't say anything that would upset the advertisers...
The core assumption is that while _I_ am able to see these vile ideas for the lies they are, the unsophisticated masses must not be allowed to hear them, lest they fall prey.
This is problematic in ways that used to be obvious to people in free societies, but for some reason seems lost now.
I don't think this is true. I think some people always got it, and some people still do. The difference is that the internet allows anyone to post their opinions, but it used to be a lot harder to reach other people.
We still have people that we hold up on pedestals for saying the right things, and the people that we remember from the past are the ones that said things that were incredibly right, or incredibly wrong. We don't remember what every common Joe used to say daily.
When I was a kid, I saw a full hooded KKK march on TV, and I said "why do we let them march?" and my mom said "because you have to hear from people you don't want to hear from to know free speech is working"
I cannot fathom that this conversation would even be considered good parenting today.
The issue is how to avoid exploitation and manipulation.
When the KKK marched several decades ago, it got coverage in newspapers and media proportional to its influence in society. Today, the wealthy and foreign adversaries can weaponize hate speech like this to fan the flames of division for their own purposes. That is the problem.
Your first two paragraphs would be a huge hit on /pol/ right up until you got to the point of resolving which adversaries and interests you're talking about.
I don't know where it takes us when an authoritarian, silencing approach is what both sides agree on, and they just haggle over where to point it.
Preventing manipulation and exploitation and engaging in them are, from a practical standpoint, nearly identical. Even the nobility's preservation of woodlands was itself a form of exploitation, since keeping them untouched was a divergence from the status quo.
It's also about control. The biggest whiners about "hate speech" are the media and news companies. Because they want to control what people see and hear. They don't care about hate speech, they just want to be able to control which hate speech the masses get to see.
It's patronizing, it's paternalistic and it's also puritanically authoritarian. It's evil.
Free speech is what defines a democratic country.
Terms like 'Hate Speech' and 'Fake News' are buzz-phrase distractions that get in the way of this core reality. We already have a legal system in place that defines libel, threats, etc. We don't need a new layer of corporate jurisdiction over our ability to speak online, or corporate monitoring of what we can or can't say.
Ideas are the only counter to other ideas, and how we communicate those ideas is via speech. Suppression only invites martyrdom on behalf of the suppressed, increasing their credibility.
There's no plausible mechanism by which talking about an idea more and more, makes it disappear. That's all an idea is: something to talk about.
Now, if you want to argue that no ideas should ever disappear, go ahead, but don't cloak it in "antiseptic" language. Antiseptic kills things. If you're talking about antiseptic for ideas, you're talking about killing ideas.
The algorithm will always favor manipulation...
Honestly, I just wish users would go back to small communities and indie publishers... It's easy for me to find a music forum or Jeep forum with people I want to hang out with, but on Facebook those who can game/manipulate get the most views, and it's ALWAYS selling controversy...
Defending free speech means defending those we disagree with, and maybe even hate.
I think no one disagrees that it "works" insofar as it stops the bad person from getting their bad ideas out there. Opponents of deplatforming generally argue that the long-term reaction to the deplatforming is worse than the problem the bad person's ideas were causing. Better to counter the bad ideas with good ideas to do long term good.
I suppose it depends on what your goal with deplatforming is. If it's convincing people that the ideas of people like Alex Jones and organisations like Black Lives Matter (they're in the article as having been similarly affected) are wrong or hateful, then I think it will fail.
Consider the fact that deplatforming (primarily of right-wing speakers) in the US became significantly more prevalent around 2014 and 2015. It didn't help in the 2016 election.
For example, despite what anyone thinks about what Trump says (I think he's an idiot and a clown), he does occasionally say something that isn't complete BS (please don't make me try to find such an example).
You just can't go down that route and expect a good outcome. In fact, I would argue that you're going to end up with the opposite result that you intended because you'll, inadvertently, end up giving more attention to idiotic ideas than the ideas actually deserve.
A lot of holders of bad ideas have no interest in rational debate. For those people, sunlight is an energy source, not an antiseptic.
When ISIS posts a video of a beheading, there isn't a lot of reasonable discussion going on in the comment section.
I doubt the founding fathers were inviting Quislings to their debating societies during the revolutionary war.
Viruses thrive through exposure to new hosts.
See, you can prove anything using an analogy.
The question is, should "bad speech" be amplified, promoted, propagated, broadcast, surfaced, and repeated, ON PURPOSE, just so it can get rebuked, debunked, dismissed, and exposed?
(I agree with you; the answer is closer to "The Remedy to Bad Speech is More Speech ... Marketplace of Ideas". However, the question being discussed is more akin to "There's a limited amount of space on the front page, and people have limited amounts of attention to give: WHO gets the megaphone, and for how long?" It's the inverse of "the robust debate principle recognizes that sometimes in a crowd of speakers it is necessary to turn down the volume of certain loud and clamorous speakers in order to give others a chance to speak." Facebook and the algorithm DO DECIDE whose volume to turn up, who to promote to the top. They already aren't neutral; they already exhibit preference and bias for certain ideas.)
Others are arguing that this is flipping the argument, BUT we are talking about algorithmic placement more so than true censorship. If someone is allowed to post something but it NEVER makes it into anyone else's newsfeed, is it as good as censored?
There's some evidence deplatforming works:
https://motherboard.vice.com/en_us/article/bjbp9d/do-social-...
> “We’ve been running a research project over last year, and when someone relatively famous gets no platformed by Facebook or Twitter or YouTube, there's an initial flashpoint, where some of their audience will move with them” Joan Donovan, Data and Society’s platform accountability research lead, told me on the phone, “but generally the falloff is pretty significant and they don’t gain the same amplification power they had prior to the moment they were taken off these bigger platforms.”
> There’s not a ton of research on this, but the work that has been done so far is promising. A study published by researchers at Georgia Tech last year found that banning the platform's most toxic subreddits resulted in less hate speech elsewhere on the site, and especially from the people who were active on those subreddits.
https://mashable.com/article/milo-yiannopoulos-deplatforming...
The idea that if you just say the right, True (TM) thing then people will flock to it is not only naive, it's obviously wrong if you look at a layman's perspective on basically any subject. It's also just a waste of time for the subjects typically in question. The communication of ideas has changed. It's a chaotic free-for-all. We've overdone it. It's time to have a serious and reasonable conversation about the current state we find ourselves in, lest we shoot ourselves in the foot with blind, headstrong optimism about ideals that don't match the reality of human nature.
In before someone references 1984 blindly and displays the ever-popular dystopia-prediction fetish that's so prevalent in these conversations.
I find it curious that those who apparently are in such a rush to suppress Bad (TM) ideas adopt the worst ideas of fascists in order to do so.
(setup - 1950 but we have facebook/twitter/youtube/instagram)
Most of the population thinks that people promoting same-sex relationships are just hell-bent on destroying good and wholesome America, and demands that the LGBT leaders of the time be deplatformed from that era's Twitter/Facebook/YouTube/etc.
I have yet to hear compelling answers to this problem, and I am not that optimistic that it can be solved in the next few decades. I do agree that trust busting is the wrong approach. At least the problem is currently centralized.
If you want free speech, you accept the consequences. If you want “regulated” speech, there are consequences.
That’s it. I would argue that the level of satire a society can cope with is directly proportional to the quality of democracy the society has.
You're contradicting yourself: First, you deny that there are gradations in "freedom", saying it's all-or-nothing.
But then, a democracy's quality is apparently proportional to its freedom of speech, implying that there are, indeed, nuances.
So where does that put America :)
It's only unsolved among people who don't understand what free speech is.
There are no "compelling answers" because the problem at hand is how to maintain the positive branding of free speech while removing what it means for speech to be free.
I always thought that the Internet would be a democratic platform that would improve the debate in society. Maybe we would go back to a democracy without intermediaries.
I was wrong.
We are entering a dystopian world where the profits of a handful of companies are more important than the rest of society.
There was no reason why the opposition to that candidate couldn't have put up their own absurd lies, or, dare I say, disproved the far right candidate's lies, thereby achieving the same success via the same platform. The elections are all about effective campaigning. You can't blame the platform because the candidate you don't like is too effective at using it. Instead learn from your mistakes and use this platform to be just as effective.
Then they spend all the time refuting absurd lies instead of explaining their proposals.
Tech is just the easy scapegoat for society.
One of the blocked accounts was that of the candidate's son.
- limit the number of people in a group
- limit the number of people a message can be forwarded to
- restrict broadcasts
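A minimal sketch of what enforcing the first two limits might look like server-side (the function names and metadata design are my assumptions for illustration, not WhatsApp's actual implementation; only the limit values echo the proposals above):

    # Hypothetical enforcement checks: names are invented for illustration;
    # only the limit values come from the cited proposals.
    MAX_GROUP_SIZE = 256     # WhatsApp's cap as reported; proposals would lower it
    MAX_FORWARD_CHATS = 5    # chats a message can be forwarded to, down from 20

    def can_add_member(current_group_size: int) -> bool:
        # Reject new members once a group reaches the cap.
        return current_group_size < MAX_GROUP_SIZE

    def can_forward(num_target_chats: int) -> bool:
        # Reject a forward whose fan-out exceeds the per-message limit.
        return num_target_chats <= MAX_FORWARD_CHATS

None of this stops propaganda outright; it just adds friction to mass fan-out.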
Should Twitter, Facebook and Google Executives be the Arbiters of What We See and Read? August 21 2014 - https://theintercept.com/2014/08/21/twitter-facebook-executi...
Facebook Is Collaborating With the Israeli Government to Determine What Should Be Censored September 12 2016 - https://theintercept.com/2016/09/12/facebook-is-collaboratin...
Then: Facebook Says It Is Deleting Accounts at the Direction of the U.S. and Israeli Governments December 30 2017 - https://theintercept.com/2017/12/30/facebook-says-it-is-dele...
"hate speech" from:ggreenwald on Twitter - https://twitter.com/search?q=%22hate%20speech%22%20from%3Agg...
Observation: promoting is cheaper (even profitable). But they can promote it with plausible deniability.
Which is a more "dangerous" path? And to whom? Society? Shareholders?
This worked to marvelous effect when legitimate, unique net neutrality concerns were buried under an avalanche of duplicate anti-net-neutrality letters sent on behalf of people who were very much deceased.
The solution to this challenge likely needs a much more nuanced approach to it than just burying hate speech in more speech, because look how well that turned out.
History gives us many more examples where your proposal was successfully inverted to hideous ends. The loudest have a very pervasive tendency to win.
It struck me as childishly pedantic to address, so
There's a can/should debate hidden in here. Tech companies totally can police hate speech (or any kind of speech) on their platforms, thanks to handy things like a ToS. Whether they should is a cultural question about what kind of a society we want to have. If history has taught me anything, it's that the can side of the debate wins in the long run.
Did you have the same biases I did? At that time and age (my teens, mostly), I just assumed people with different values than mine were ignorant, and so naturally they wouldn't be capable of using advanced technology.
I'm not proud of that, but there's still a _lot_ of that sentiment kicking around, including in the form that giving people additional access to technology and knowledge will educate the masses into the "correct" set of values held by whoever's pushing for greater technological adoption.
Greater access does help _if they are willing to use it in self-improving ways_ in the first place. If they just use it for tabloids and gossip, it won't be a library to them but tabloids and gossip.
It's a coincidence that John Perry Barlow died at the height of all this, but I think it's extremely symbolic that governments are asserting their power just as technolibertarianism's radical cleric passed away.
Speech that explicitly, or by direct implication, dehumanizes anyone.
Then the arguments become pretty simple: does a statement dehumanize someone? Does it indicate that they are any less human than another? That's a much easier discussion to have.
>Speech that explicitly, or by direct implication, dehumanizes anyone.
Except, you've gone nowhere. You've substituted the intrinsic ambiguity and subjectivity inherent in 'hate speech' with the same intrinsic ambiguity and subjectivity inherent in 'dehumanization'... in fact, I have a better idea of what 'hate speech' is than of what 'dehumanization' actually entails. I've seen people argue that a model in a bikini is dehumanizing.
It appears to me that we are excluding voices from public discourse because, in the opinion of a powerful group, they have been rude.
One person's rude criticism can easily become another person's hate speech.
People get sick and die because of political decisions, and people gain great amounts of wealth because of political decisions.
People maintain comfortable places in society or remain in fully employed poverty because of political decisions.
There is very little in people's lives that isn't affected by politics.
If you're in a position for politics to not affect you because it's just an abstract topic of conversation, you're part of a privileged class of people who aren't being scrutinized and blamed because of your inherent, born qualities. Congratulations.
(see the answer?)
There is a clear definition of hate speech. It's not as amorphous as you need it to be in order to protect your ego.
Consider: 1) Do you consider yourself human? If yes, then why haven't you killed yourself? Since you haven't, you evidently consider yourself more deserving of life than everyone else, and are therefore dehumanizing everyone but yourself. 2) If no, then you've publicly proclaimed that you don't think you're human, so let me add a second rule: only humans have the right to speech.