"I fear that many decentralised web projects are designed for censorship resistance not so much because they deliberately want to become hubs for neo-nazis, but rather out of a kind of naive utopian belief that more speech is always better. But I think we have learnt in the last decade that this is not the case."
What you should have learned in the last decade is that social networks designed around virality, engagement and "influencing" are awful for society in the long run. But somehow the conversation has now turned away from that and towards "better moderation".
Engage your brain. Read Marshall McLuhan. The design of a medium is far more important than how it is moderated.
Yes, and don't forget the 24-hour news cycle with its focus on getting outrage and attention through fear. I did not know who Marshall McLuhan was until now; thanks for the tip!
Yeah, social media is just one in a series of possibly misguided techno-social "innovations," and it probably won't be the last.
My understanding is that groups like the Amish don't reject technology outright, but adopt it selectively based on its effects on their society (and will even roll back things they've adopted if they're not working out). Wider society would probably benefit from a dose of that kind of wisdom right now, after decades of "because we can"-driven "innovation."
IMO this is a great point. Social media as it exists today is broken because it has been engineered around the assumption that the money comes from ads. Making money on ads means engineering for virality, engagement and influencing.
Another thing that McLuhan teaches, though, is that the (social) medium is the message. And ultimately this led to a Viking dude standing in the US Capitol.
Now, that whole situation was awful. But it was also hilarious. On social media, this was barely a meme that lived on for a few hours. Whereas within the ancient system of democracy, an intrusion into the parliament breaks some sacred rules, and there the incident will surely have long and winding consequences.
To cut to the chase: social media outcomes have to be viewed wearing a social media hat. Same for real life. In this case, gladly. Another great case where this was true was Kony 2012, where essentially all the slacktivism led to nothing.
1. https://www.philosophizethis.org/podcast/episode-149-on-medi...
* Of course, I enjoyed the podcast episode so much that I did end up going on to read The Gutenberg Galaxy and The Medium Is the Massage [sic], and wholeheartedly recommend both.
Engagement is still roughly "our" problem, because ad-driven ~media are externalizing the costs of engagement on society. This is where the Upton Sinclair quote fits.
Moderation is still roughly the platform's problem because it comes with liabilities they can't readily externalize. Engagement certainly overlaps with this, but most of these liabilities exist regardless of engagement.
I think you make a very interesting point about the impact of virality over the long term, and I would like to read some of your thoughts about where we are headed, what can be done about it and why. I hadn't heard of Marshall McLuhan before.
For those looking for a relatively accessible introduction to McLuhan’s ideas, check out his book “The medium is the message/massage”. It’s fairly short, and with illustrations, quite readable. I think it has more concrete examples than “Understanding media” which is a more abstract & denser read.
Book Review: Technopoly https://scott.london/reviews/postman.html
Interview with Neil Postman - Technopoly https://www.youtube.com/watch?v=KbAPtGYiRvg
- It's made up of a bunch of independent servers, or "instances". The common analogy here is to email systems.
- If you want to join the federation, stand up an instance and start using it. Voila! Now you're part of it.
- My instance has a lot of users, and I don't want to run them off, so it's in my own interest to moderate my own instance in a way that my community likes. Allow too much in without doing anything? They leave. Tighten it so that it starts losing its value? They leave. There's a feedback mechanism that guides me to the middle road.
- But my users can leave for greener pastures if they think I'm doing a bad job and think another instance is better. They're not stuck with me.
The end result is that there are thousands of instances with widely varied moderation policies. There are some "safe spaces" where people who've been sexually assaulted hang out and that have zero tolerance for harassment or trolling. There are others that are very laissez faire. There's a marketplace of styles to choose from, and no one server has to try to be a perfect fit for everyone.
I realize that this is not helpful information for someone who wants to run a single large service. I bring it up just to point out that there's more than one way to skin that cat.
(That final idiom would probably get me banned on some servers. And that's great! More power to that community for being willing and able to set policies, even if I wouldn't agree with them.)
I think it's evident from Facebook, Twitter, et al that human moderation of very dynamic situations is incredibly hard, maybe even impossible.
I've been brewing up strategies of letting the community itself moderate because a machine really cannot "see" what content is good or bad, re: context.
While I think that community moderation will inevitably lead to bubbles, it's a better and more organic tradeoff than letting a centralized service dictate what is and isn't "good".
When a man says he supports freedom of speech, he isn't thinking about the speech he himself would wish to limit because he finds it so abhorrent; and where that line lies differs from one man to the next.
Such initiatives fail because even when men come together and agree that the most abhorrent of opinions may be censored, they seldom realize that each and every one of them has a very different idea of what counts as abhorrent.
I've suggested this idea a million times; it's all yours for the taking for anyone who wants to implement it:
Build a social network with a per-user karma/reputation graph and a recursive mechanism to propagate reputation (with decay): when I like a post, that boosts the signal/reputation of whoever posted it and of the people who liked it, and decreases the signal of people who downvoted it.
There can be arbitrarily more sophisticated propagation algorithms to jumpstart new users, e.g. by weighting their first few votes more heavily and "absorbing" existing users' reputation graphs (some kind of Bayesian updating).
Allow basic things like blocking/muting/etc with similar effects.
This alone would help people curate their information way more efficiently. There are people who post things I know for a fact I never want to read again. That's fine, let me create my own bubble.
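For what it's worth, a minimal sketch of the recursive, decaying propagation I have in mind (the names, constants and the `follows` structure are illustrative assumptions, not a worked-out design):

```python
from collections import defaultdict

DECAY = 0.5       # signal weakens at each hop through the graph (assumed constant)
MAX_DEPTH = 3     # stop propagating after a few hops (assumed cutoff)

class ReputationView:
    """One user's personal, local view of everyone else's reputation."""

    def __init__(self, follows):
        # follows: dict mapping user_id -> list of user_ids they trust/follow
        self.follows = follows
        self.rep = defaultdict(float)

    def _propagate(self, user_ids, weight, depth=0):
        if depth >= MAX_DEPTH or abs(weight) < 1e-3:
            return
        for uid in user_ids:
            self.rep[uid] += weight
        # people trusted by this hop receive a weaker share of the same signal
        next_hop = {t for uid in user_ids for t in self.follows.get(uid, [])}
        self._propagate(next_hop, weight * DECAY, depth + 1)

    def vote(self, author, likers, downvoters, direction=+1.0):
        # liking a post boosts its author, boosts (more weakly) the people who
        # liked it, and dampens those who downvoted it; a downvote inverts that
        self._propagate({author}, direction)
        self._propagate(set(likers), direction * DECAY)
        self._propagate(set(downvoters), -direction * DECAY)

# usage: I upvote a post by "alice" that "bob" liked and "mallory" downvoted
view = ReputationView(follows={"alice": ["bob"], "bob": [], "mallory": []})
view.vote(author="alice", likers=["bob"], downvoters=["mallory"])
print(dict(view.rep))   # alice and bob gain signal, mallory loses some
```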
The TrustNet/Freechains concepts seem adjacent, and it's the first time I've come across them — looks interesting.
The alternative is having some kind of elite moderators who moderate all communities. That sets a floor on their quality. Unfortunately, it also sets a ceiling. Everything will be exactly as good as the appointed elite likes it, neither better nor worse.
From the perspective of what the average person sees, the latter is probably better. From the perspective that I am an individual who can choose a community or two to participate in, and I don't care about the rest, the former is better.
There's a ton of material on that subject, thankfully; look at newsgroups, HN itself (flagging), Stack Overflow, Joel Spolsky's blog posts on Discourse, etc etc etc. My girlfriend is active on Twitter and frequently joins in mass reporting of certain content, which is both a strong signal and easily influenced by mobs.
How would you know what the evidence tells us from those platforms, when their criteria and resources for moderation are proprietary and opaque?
FB is a profitable company.
Have you calculated how many moderators could be paid $20 per hour out of $15.92 billion profit?
Approximately 400,000.
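(Back-of-the-envelope, assuming roughly 2,000 paid hours per moderator per year: $20/hr × 2,000 hr ≈ $40,000/yr, and $15.92 billion ÷ $40,000 ≈ 398,000 moderators.)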
Maybe at some point the better strategy is to limit public exposure and favor segmenting some groups out into their own space that requires extremely explicit opt-in measures? Hard to say, and tucking it away into its own corner of the web seems rife with its own problems.
As another commenter expressed on some other topic, this is a long-running problem with many incarnations: Usenet, IRC, BBSs, etc. It's become especially salient with the explosion of social media platforms that include everyone from Grandma to Grandson.
Bottom line... my heart goes out to moderators of these kind of platforms.
This isn't movie reviews. Good and bad are not the standards. The standard is whether or not something is illegal. When the feds come knocking on your door because your servers are full of highly illegal content, "we let them moderate themselves" will be no defense.
However, the police have far too much to do, so in practice millions of blatantly illegal death threats get sent every day and do not receive any police response. Hence the demand for a non-police response that can far more cheaply remove the death threats or threateners.
Discussion of homosexuality is "illegal" in many states. It is a moral imperative for systems to break those laws.
Think of each TV show as its own Discord server; within the show are user-generated topic rooms.
My hope is that users basically self-silo into topic rooms that interest them in regard to whatever show they're watching.
For example: the Yankees are playing the Marlins. Users can create a #yankees room, a #marlins room, a #umpire room, etc to create chat rooms around a given topic in regards to whatever they're watching. In each room, a user has the ability to block, filter words, etc...so they can tailor their chat experience in whatever way they want while watching any given show.
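A rough sketch of the per-user controls I'm imagining, purely illustrative (the names and message shape here are made up, not an actual API):

```python
class RoomView:
    """One viewer's personal filter over a topic room; nothing is removed for anyone else."""

    def __init__(self):
        self.blocked_users = set()
        self.muted_words = set()

    def block(self, user_id):
        self.blocked_users.add(user_id)

    def mute_word(self, word):
        self.muted_words.add(word.lower())

    def visible(self, message):
        # message: dict with "sender" and "text" keys (assumed shape)
        if message["sender"] in self.blocked_users:
            return False
        text = message["text"].lower()
        return not any(w in text for w in self.muted_words)

# e.g. a viewer in the #umpire room who is tired of one user and one phrase
view = RoomView()
view.block("loud_fan_42")
view.mute_word("blown call")
print(view.visible({"sender": "calm_fan", "text": "Great strike zone tonight"}))  # True
```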
The market literally selects for more one-sided, clickbaity outrage articles.
Meanwhile social networks compete for your attention and “engagement” for clicks on ads so their algorithms will show you the stories that are the most outrageous and put you in an echo chamber.
It’s not some accident. It’s by design.
If we were OK with slowing down the news and running it like Wikipedia, with a talk page, peer review, byzantine consensus, whatever you want to call it (concentric circles where people digest what happens and the public gets a balanced view based on collaboration rather than competition with a profit motive), our society would be less divided and more informed.
Also, Apple and Google should start charging for notifications, with an exception for real-time calls and for self-selected priority channels/contacts signing the notification payload. Practically free notifications create a tragedy of the commons and ruin our dinners!
> I'm building a large, chatroom-like service for TV.
So the profit motive is likely the motivation for applying a "central" doctrine of acceptable discourse using a decentralized mechanism.
> While I think that community moderation will inevitably lead to bubbles
Which allows, e.g., an atheist community to have content that rips religion x's scripture to shreds (and why not?) on the same planet that also has a religion-x community with content that takes a bat to over-reaching rationalism. Oh the horror! Diversity of thought. "We simply can not permit this."
Edit: now I see Slashdot and Reddit mentioned at the end in the updates section (I don't remember seeing them on my first read, but that might just be me).
Voting tells us what we value, but that isn't necessarily what is good for us. It also treats all content as somewhat equivalent, which isn't true. A call to (maybe violent) action isn't the same thing as sharing a cute cat video.
There's a whole Moonshot of spam resistance that's going to need to happen in Mastodon/Matrix/Whatever.
With a centralized service, trust is simple: how much you trust the single entity that represents the service.
In a distributed network, nodes need to build trust in each other. In the best-known federated network, email, domain reputation is a thing. Various blacklists and graylists pass around trust values in bulk.
So a node with a ton of sock puppets trying to spam votes (or content) is going to lose the trust of its peers fast, and the spam from it will end up marked as such. A well-run node will gain considerable trust with time.
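As a toy illustration of that peer-level bookkeeping (purely hypothetical scoring, not how any existing federation actually works), a node could track each peer's spam ratio and refuse traffic once it drops below a threshold, much like blocklists do in bulk:

```python
from collections import defaultdict

class PeerTrust:
    """A node's local score for each peer server, adjusted as content arrives."""

    def __init__(self):
        self.good = defaultdict(int)   # items from this peer our users accepted
        self.spam = defaultdict(int)   # items from this peer our users flagged

    def record(self, peer, was_spam):
        if was_spam:
            self.spam[peer] += 1
        else:
            self.good[peer] += 1

    def score(self, peer):
        g, s = self.good[peer], self.spam[peer]
        # Laplace-smoothed fraction of good traffic; unknown peers start at 0.5
        return (g + 1) / (g + s + 2)

    def should_accept(self, peer, threshold=0.3):
        # a peer flooding us with sock-puppet votes drops below the threshold quickly
        return self.score(peer) >= threshold

trust = PeerTrust()
for _ in range(50):
    trust.record("spamhub.example", was_spam=True)
print(trust.should_accept("spamhub.example"))  # False: the peer has lost our trust
```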
This, of course, while helpful, does not guarantee "fairness" of any kind. If technology and people's values clash, the values prevail. You cannot alter values with technology alone (even weapon technology).
I don't see that Martin Kleppmann is using 'democracy' to mean 'majoritarianism' here. He makes considered points about how to form and implement policies against harmful content, and appears to talk about agreement by consensus.
Democracy and majoritarianism are (in general) quite different things. This might be more apparent in European democracies.
The need for censoring content still exists because certain kinds of content are deemed illegal, and failure to remove that may end up in serving jail time.
On the other hand, moderation is named very aptly.
That said, I fully support the right of private companies to censor content on their premises as they see fit. If they do a poor job, I can just avoid using their services.
For example, when you moderate a debate you do not silence opinions you disagree with, you simply ensure that people express themselves within 'acceptable' boundaries, which usually means civility.
To me this means that 'decentralised content moderation' is largely a utopia: whilst the rules may be defined by the community, letting everyone moderate will, in my view, always end up being similar to upvoting/downvoting, which is a vote of agreement/disagreement.
How well it works is always a topic here.
A democracy makes great efforts to ensure 1 person = 1 vote. Online platforms do not.
Moderation is not the same. It is not about agreeing or disagreeing but about curating out content that is not acceptable (off-topic, illegal, insulting).
Article quote: "In decentralised social media, I believe that ultimately it should be the users themselves who decide what is acceptable or not"
In my view that is only workable if it means users define the rules, because, as said above, I think 'voting' on individual pieces of content always leads to echo chambers and to censoring dissenting views.
Of course this may be fine within an online community focused on one topic or interest, but probably not if you want to foster open discussion and a plurality of views and opinions.
We can observe this right here on HN. On submissions that are prone to trigger strong opinions, downvotes and flagging explode.
Any system where any rando can post any random thing with no gates is going to be much more of a slog to moderate than one where there are several gates that imply the person is acting in good faith.
Edit: Discussed here [1] and here [2].
[0]: https://matrix.org/blog/2020/10/19/combating-abuse-in-matrix...
I worry that just making it a way to earn bitcoins risks it becoming one more way for poor people to scrape together pennies at the cost of giving themselves PTSD.
We just wrote about its philosophy earlier this week.
https://almonit.com/blog/2021-01-08/self-governing_internet_...
I can't even get to the heart of the poster's argument. That's because the shitty state of all current social media software defines "anybody" as:
* a single user making statements in earnest
* a contractor tacitly working on behalf of some company
* an employee or contractor working on behalf of a nation state
* a botnet controlled by a company or nation state
It's so bad that you can witness the failure in realtime on, say, Reddit. I'm sure I'm not the only one who has skimmed comments and thought, "Gee, that's a surprising reaction from lots of respondents." Then go back even 30 minutes later and the overwhelming reaction is now the opposite, with many comments in the interim about new or suspicious accounts and lots of moderation of the initial astroturfing effort.
Those of us who have some idea of the scope of the problem (hopefully) become skeptical enough to resist rabbit-holes. But if you have no idea of the scope (or even the problem itself), you can easily get caught in a vicious cycle of being fed a diet of propaganda that is perhaps 80% outright fake news.
As long as the state of the art remains this shitty (and there are plenty of monetary incentives for it to remain this way), what's the point of smearing that mendacity across a federated system?
Why?
It is fairly clear at this point that content moderation at internet scale is not possible. Why? A. Using other users to flag dangerous content is not working. Which users do you trust to bestow this power on? How do you remove this power from them? How do you keep it from becoming a digital lynch mob? Can you have users across political, gender, and other dimensions? All mostly unsolvable problems.
B. Is it possible to use machine learning? To some extent. But any machine learning algorithm will have inherent bias, because the training data will also be produced by biased individuals. Also, people will eventually figure out how to get around those algorithms as well.
The causality between content published on the internet and action in the real world is not immediate. It is not like someone sitting in a crowded place and shouting fire, causing a stampede. As there is a sufficient delay between speech and action, we can say that the medium the speech is published in is not the primary cause of the action, even if there is a link. Cases of direct linkage are fairly rare, and the police/law should be able to deal with those.
Content moderation, at least the way Twitter has been trying to do it, has not been effective; it has created a lot of ways for mobs to enforce censorship, and there is no real-world positive impact of this censorship to point to. To be honest, the only use of this moderation and censorship has been for the right to claim victimhood and gain more viewership/readership.
For example, a soldier with PTSD may want an environment that moderates content. Or a journalist with epilepsy may want a platform where people don't spam her with gifs designed to trigger epilepsy when she says something critical of a game release.
Edit: I mean spam.
For example, in my network, anyone can start a node and the user has full control over it. So how would you censor this node? The following ideas don't seem to work:
1. Voting or another social choice consensus mechanism. Problems:
- Allows a colluding majority to mount DoS attacks against anyone.
- Can easily be circumvented by changing host keys / creating a new identity.
2. The equivalent of a killfile: Users decide to blacklist a node, dropping all connections to it. Problems:
- Easy to circumvent by creating new host keys / creating a new identity.
3. Karma system: This is just the same as voting / social choice aggregation and has the same problems.
4. IP banning by distributing the blocked IPs with the binaries in frequent updates. Problem:
- Does not work well with dynamic IPs and VPNs.
Basically, I can't see a way to prevent users from creating new identities / key pairs for themselves whenever the old one has been banned. Other than security by obscurity nonsense ("rootkit" on the user's machine, hidden keys embedded in binaries, etc.) or a centralized server as a gateway, how would you solve that problem?
Basically it would work something like this: by default, clients hide content (comments, submissions, votes, etc.) created by new identities, treating it as untrusted (possible spam/abusive/malicious content) unless another identity with a good reputation vouches for it (either by vouching for the content directly, or by vouching for the identity that submitted it). Upvoting a piece of content vouches for it and increases your identity's trust in the content's submitter. Flagging a piece of content distrusts it and decreases your identity's trust in the content's submitter (possibly by a large amount, depending on the flag type) and in other identities that vouched for that content. Previously unseen identities are assigned a reputation based on how much the identities you trust (and the identities they trust, etc.) trust or distrust that unseen identity.
The advantage of this system is that it not only prevents Sybil attacks, but also doubles as a form of fully decentralized, community-driven moderation.
That's the general idea anyway. The exact details of how a system like that would work probably need a lot of fleshing out and real-world testing in order to make them work effectively.
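To make the idea a bit more concrete, here's a rough sketch of the transitive-trust lookup described above (hypothetical names, untuned constants, and none of the real-world details that would need fleshing out):

```python
def transitive_trust(my_id, target_id, trust_edges, max_depth=3, decay=0.5):
    """Estimate how much `my_id` should trust `target_id` by walking the
    trust edges of identities it already trusts, with decay per hop.

    trust_edges: dict mapping identity -> dict of {identity: direct trust in [-1, 1]}
    """
    frontier = {my_id: 1.0}
    seen = {my_id}
    total = 0.0
    for _ in range(max_depth):
        next_frontier = {}
        for ident, weight in frontier.items():
            for other, direct in trust_edges.get(ident, {}).items():
                if other == target_id:
                    total += weight * direct
                elif other not in seen:
                    # only positive trust is walked further; distrust is not transitive here
                    next_frontier[other] = max(next_frontier.get(other, 0.0),
                                               weight * decay * max(direct, 0.0))
        seen |= set(next_frontier)
        frontier = next_frontier
    return total

def is_visible(viewer, submitter, trust_edges, threshold=0.1):
    # content from unknown/new identities stays hidden until someone the
    # viewer (transitively) trusts has vouched for the submitter
    return transitive_trust(viewer, submitter, trust_edges) >= threshold

edges = {
    "me":    {"alice": 0.9},          # I have upvoted alice's posts before
    "alice": {"newcomer": 0.6},       # alice vouched for a brand-new identity
}
print(is_visible("me", "newcomer", edges))   # True: trusted indirectly via alice
print(is_visible("me", "stranger", edges))   # False: hidden until vouched for
```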
You could prevent banned users from returning with a new identity by disallowing the creation of new identities. E.g. many Mastodon instances disable their signup pages and new users can only be added by the admins.
If you don't want to put restrictions on new identities, you could still treat them as suspect by default. E.g. apply a kind of rate limiting where content created by new users is shown at most once per day and the limit rises slowly as the user's content is viewed more and more without requiring moderation. (This is a half-baked idea I had just now, so I'm sure there are many drawbacks. But it might be worth a shot.)
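A crude sketch of that rate-limiting idea (the numbers and the growth curve are arbitrary assumptions, just to show the shape of it):

```python
import time

class NewIdentityLimiter:
    """Show content from unproven identities sparingly, and loosen the limit
    as their content accumulates views without needing moderation."""

    def __init__(self):
        self.clean_views = {}   # identity -> views of their content with no moderator action
        self.last_shown = {}    # identity -> timestamp their content was last surfaced

    def daily_quota(self, identity):
        # start at one item per day, grow slowly with a clean track record (assumed curve)
        return 1 + self.clean_views.get(identity, 0) // 100

    def may_show(self, identity, now=None):
        now = now or time.time()
        last = self.last_shown.get(identity, 0)
        interval = 86400 / self.daily_quota(identity)
        if now - last < interval:
            return False
        self.last_shown[identity] = now
        return True

    def record_clean_view(self, identity):
        self.clean_views[identity] = self.clean_views.get(identity, 0) + 1
```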
This two level split allows node operators to think of most other users at the node level, which means dealing with far fewer entities. It provides users with a choice of hosts, but means that their choice has consequences.
For the most part, platforms take decisions that will suit the majority of users.
> Thus, as soon as a censorship-resistant social network becomes sufficiently popular, I expect that it will be filled with messages from spammers, neo-nazis, and child pornographers (or any other type of content that you consider despicable).
Unfortunately, I agree this is likely the case, and also agree with many of the other points where there's unlikely to be an agreed upon approach at scale.
I feel the two most important aspects of any moderation are transparency and consistency. I'd always like to know what community I'm joining.
We'll likely see more niche communities continue to pop up on centralized and decentralized networks where the moderation, content and community can be more tailored to their own expectations.
It's because our entire society is permeated with ideas about capitalism and competition being the best way to organize something, almost part of the moral fabric of the country. Someone "built it", now they ought to "own" the platform. Then they get all this responsibility to moderate, not moderate, or whatever.
Compare with science, wikipedia, open source projects, etc. where things are peer reviewed before the wider public sees them, and there is collaboration instead of competition. People contribute to a growing snowball. There is no profit motive or market competition. There is no private ownership of ideas. There are no celebrities, no heroes. No one can tweet to 5 million people at 3 am.
Somehow, this has mistakenly become a "freedom of speech" issue instead of an issue of capitalism and private ownership of the means of distribution. In this perverse sense, "freedom of speech" even means corporations should have a right to buy local news stations and tell news anchors the exact talking points to say, word for word, or to replace the human mouthpieces if they don't...
Really this is just capitalism, where capital consists of audience/followers instead of money/dollars. Top-down control by a corporation is normal in capitalism. You just see a landlord (Parler) crying about a higher landlord ... ironically crying to the even higher landlord, the US government, to use force and "punish" Facebook.
Going further, it means corporations (considered by some to have the same rights as people) using their infrastructure and distribution agreements to push messages and agendas crafted by a small group of people to millions. Celebrity culture is the result. Ashton Kutcher was the first to 1 million Twitter followers because kingmakers in the movie industry chose him earlier on to star in movies, and so on down the line.
Many companies themselves employ social media managers to regularly moderate their own Facebook Pages and comments, deleting even off-topic comments. Why should anyone have an inalienable right to be on a platform? So inside their own websites and pages these private companies can moderate and choose not to partner with someone, but the private companies Facebook and Twitter should be prevented from making decisions about content on THEIR own platforms? You want a platform that can't kick you off? It's called open source software and decentralized networks. You know what they don't have?
Private ownership of the whole network. “But I built it so I get to own it” is the capitalist attitude that leads to exactly this situation. The only way we will get there is if people build it and then DON’T own the whole platform. Think about it!
Let's unpack this:
Axiom: a kind of naive utopian belief [exists that asserts] that more speech is always better. But I think we have learnt in the last decade that this is not the case.
False premise. The "naive belief", based on the empirical evidence of history, is that prioritizing the suppression of speech to address social issues is the hallmark of authoritarian systems. Martin also claims "we have learned" something that he is simply asserting as fact. My lesson from the last 3 decades has been that it was a huge mistake to let media ownership be concentrated in the hands of a few. We used to have laws against this in the 90s.
Axiom: By "we" as in "we want", Martin means the community of likeminded people, aka the dreaded "filter bubble" or "community value system".
Who is this "we", Martin?

Theorem: If we want technologies to help build the type of society that we want to live in, then certain abusive types of behaviour must be restricted.
We already see that the "we" of Martin is a restricted subset of "we the Humanity". There are "we" communities that disagree with Martin's on issues ranging from: the fundamental necessity of freedom of thought and conscience; the positive value of diversity of thought; the positive value of unorthodox ("radical") thought; the fundamental identity of the concept of "community" with "shared values"; etc. Q.E.D.: Thus, content moderation is needed.
Give the man a PhD.--
So here is a parable about a man named Donald Knuth. This Donald, while a highly respected and productive contributing member of the 'Community of Computer Scientists of America' [ACM, etc.], also sadly entertains irrational beliefs that "we" "know" to be superstitious nonsense.
The reason this otherwise sane man entertains these nonsensical thoughts is the "filter bubble" of the community he was raised in.
Of course, to this day, Donald Knuth has never tried to force his views on other ACM members, many of whom are devout atheists. And should Donald Knuth ever try to preach his religion in the ACM, we would expect respectful but firm "community filter bubble" action from the ACM telling Mr. Knuth to keep his religious views for his religious community.
But, "[i]f we want technologies to help build the type of society that we want to live in" -- and my fellow "we", do "we" not agree that there is no room for Donald Knuth's religious nonsense in "our type of society"? -- would it not be wise to ensure that the tragedy that befell the otherwise thoughtful and rational Donald Knuth could happen to other poor unsuspecting people who happen to be born and raised in some "fringe" community?
"Thus, content moderation is needed."