I don't see that Martin Kleppmann is using 'democracy' to mean 'majoritarianism' here. He makes considered points about how to form and implement policies against harmful content, and appears to talk about agreement by consensus.
Democracy and majoritarianism are (in general) quite different things. This might be more apparent in European democracies.
The straightforward meaning is that ultimately I decide what is acceptable or not for me, and you decide what is acceptable or not for you. We can, and likely will, have a different opinion on different things.
But the following talk of "governance" and "democratic control" suggests that the ones who ultimately decide are not users as individuals, but rather some kind of process that would be called democratic in some sense. Ultimately, someone else will make the decision for you... but you can participate in the process, if you wish... but if your opinions are too unusual, you will probably lose anyway... and then the rest of us will smugly congratulate ourselves for giving you the chance.
> Democracy and majoritarianism are (in general) quite different things.
Sure, a minority can have rights as long as it is popular, rich, well organized, able to make coalitions with other minorities, or too unimportant to attract anyone's attention. But that still means living under a potential threat. I don't see why online communities would have to be built like this, if instead you could create a separate little virtual universe for everyone who wished to be left alone... and then invent good tools for navigating these universes, so that it is convenient, from the user's perspective, to create their own places, to invite and be invited, and to exclude those who don't follow the local rules (who in turn can create their own places and compete for popularity).
I disagree that this is straightforward in meaning. Even if I do have a good idea of what is unacceptable to me, I need someone external to screen for that. If the point is to avoid personally facing the content that I find unacceptable, it's impossible for me to adequately perform this screening on my own behalf.
I can instruct or employ someone (or something) to do this, but then ultimately they will make the decision for me. And it's only plausible to do this at scale, unless I'm wealthy enough to employ my own personal cup-bearer who accepts the harm on my behalf. So it makes sense to band together with other users with similar requirements.
Your claim seems to be that delegating these decisions is a bad thing that should be avoided, but delegation is an essential and inevitable part of this kind of service - I have to hand that decision to someone else, or I won't get the service at all.
This is not to mention legal restrictions on content in different jurisdictions, which set a minimum standard of moderation and responsibility, and which carry additional risk wherever they are not precisely defined.
And here we run into the issue that economists and political scientists call "the Principal-Agent problem"[0].
Whether we're talking about the management of a firm acting in the interests of owners, elected officials acting in the interests of voters, or moderators of communication platforms acting in the interest of users, this isn't a solved problem.
And in fact, that last case has extra wrinkles, since there is no agreement on whose interests the moderator is supposed to prioritize (there can be similar disagreement regarding company management, but at least there the disagreement itself is far better defined).
This is deeply messy, and as hard as it is now, it is only going to get worse with every additional human that is able to access and participate in these systems.
[0] https://en.m.wikipedia.org/wiki/Principal%E2%80%93agent_prob...