As do I, and you. I was pretty happy when all of those ISIS accounts that were spewing violence and hate got shut down. I'm sure you have your boundaries on what you'd want society to allow and restrict on public mediums.
The specific details of those boundaries will depend on your personal notion of what constitutes abusive communication. As they will for any other person.
Copyright violations. Beheading videos. "Pornography" or pseudo-"pornography" involving minors. Direct threats of violence towards individuals or groups. Deepfakes generated without consent. Etc. Etc.
I guarantee you if we have a back and forth discussion we will discover where your boundaries lie across the myriad issues where people typically want to control public discourse.
If I want to press the ban/blacklist button on a tag, account, group, or whatever social unit to never see it again, that's my decision. Sure, some of them should be on by default (you probably don't want people to get porn the second they sign up), but the user should have the power to moderate and filter their own stream.
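To make the model concrete, here is a hypothetical sketch of what that "defaults plus user override" filtering could look like. Everything in it (the `UserFilter` class, the tag names, the post shape) is invented for illustration, not any real platform's API:

```python
# Hypothetical sketch of the "user in power" moderation model: the
# platform ships a default blocklist, and each user can extend it with
# personal bans or opt out of the defaults entirely.

from dataclasses import dataclass, field

DEFAULT_BLOCKED_TAGS = {"porn", "gore"}  # platform defaults, opt-out-able


@dataclass
class UserFilter:
    use_defaults: bool = True                          # opted in to platform defaults
    blocked_tags: set = field(default_factory=set)     # personal tag bans
    blocked_accounts: set = field(default_factory=set) # personal account bans

    def effective_tags(self):
        base = DEFAULT_BLOCKED_TAGS if self.use_defaults else set()
        return base | self.blocked_tags

    def allows(self, post):
        if post["account"] in self.blocked_accounts:
            return False
        return not (set(post["tags"]) & self.effective_tags())


# A fresh user gets the defaults; pressing "ban" just grows their
# personal sets, and opting out drops the platform defaults.
f = UserFilter()
assert not f.allows({"account": "a", "tags": ["porn"]})  # default-blocked
f.blocked_accounts.add("spammer")
assert not f.allows({"account": "spammer", "tags": []})  # personal ban
f.use_defaults = False
assert f.allows({"account": "a", "tags": ["porn"]})      # opted out
```

The point of the sketch is that the default list and the per-user list are separate: the platform curates one, the user fully controls the other.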
> Copyright violations. Beheading videos. "Pornography" or pseudo-"pornography" involving minors. Direct threats of violence towards individuals or groups. Deepfakes generated without consent. Etc. Etc.
3/4 of what you mentioned is illegal in most places in the first place, so it isn't a point of contention.
And that isn't really the problem. A site deciding this or that political view is now bad is.
You're trying to put removing illegal/disturbing content in the same category as worldview manipulation. The first is far more black and white than the second, and they should not be considered together, even if similar systems are used for both.
The wanker at the twitter office that was presumably appointed by twitter management to do that, yes? And some people want to appeal to that particular seat of power to influence and limit discourse along some dimension, within twitter.
And yet others appeal to higher powers, such as governments, which control communication with deeper consequences across broader domains.
What's legal or illegal evolves with politics and culture. So there's no fundamental purchase there for the kind of moral conversation you were trying to elicit.
If in a few years the people you accuse of wanting to control other people's speech are able to get some laws passed making the speech they want controlled properly illegal, I'd venture you would resist accepting that as suddenly legitimate - even if those things would be "straight up illegal" at that point.
I guess I should have been more explicit in my first response - what I'm suggesting is that this conversation is better had in less absolutist terms than what you proposed. There was the implication that the other person somehow inherently wanted to control communication in a qualitatively different way than you (or I) did.
That's not to go down the path of sophistry - just to suggest orienting the conversation around where the boundaries should be placed in practical terms, discussing where the differences in boundaries lie, and evaluating that on an issue-by-issue basis, rather than in absolutist/ideological terms.
"You want to control speech (and I don't)" doesn't really lead anywhere in terms of discourse. It's a dead end.
> What's legal or illegal evolves with politics and culture. So there's no fundamental purchase there for the kind of moral conversation you were trying to elicit. If in a few years the people you accuse of wanting to control other people's speech are able to get some laws passed making the speech they want controlled properly illegal, I'd venture you would resist accepting that as suddenly legitimate - even if those things would be "straight up illegal" at that point.
The parent commenter was simply saying that the abhorrent examples you provided were not a point of contention, and not relevant to a conversation about free speech. They weren't attempting to put the word of the law on a pedestal, as you seem to suggest; they were simply pointing out that the things you mentioned are universally considered bad all around the planet, whereas freedom of speech is not.
The situation you've outlined, in which one political party successfully silences its opponents through legal means, is exactly why a site "deciding this or that political view is now bad" is so dangerous.
If you don't see a distinction between posting pedophilia/beheadings and posting different political views, that's entirely your problem.
"We don't allow what the law doesn't allow, and we moderate what is considered explicit by default, but you can opt out of that" is, I think, an entirely fine line for a platform to stand on. Anything beyond that is them messing with public opinion for their own benefit.
>The wanker at the twitter office that was presumably appointed by twitter management to do that, yes? And some people want to appeal to that particular seat of power to influence and limit discourse along some dimension, within twitter.
> And yet other appeals to higher powers, such as governments - control communication with deeper consequences across broader domains.
Government is supposed to work toward the interests of its people, not corporations; that's the difference here. And the government is the only entity that can tell a corporation to behave. You can also just not vote the wankers into office.
Government (in first world countries) will only tell you to stop once you actually start to incite violence, not when you post some wrongthink on social media (except in the UK, I guess...).
Are there governments worse than corporations? Sure, but corporations can simply choose not to participate in their markets. They will anyway, for the money, and that's all there is to say about corporate goals.
> What's legal or illegal evolves with politics and culture. So there's no fundamental purchase there for the kind of moral conversation you were trying to elicit.
Sure, slavery was legal at some point. Doesn't really matter. That's not the point.
The point here, which you are desperately trying to miss, is that a corporation should not have the power to manipulate public opinion at will, because a corporation's only goal is to earn more money.
And you can tell the government to change the laws; that's how we got slavery banned, women the right to vote, and minorities no longer repressed. You can't do that to a corporation. Of course, you need the government to want to act in the interest of its people, which the US has a hard time doing (the stuff your "lobbyists" do would get them arrested in the EU...), but even that skewed system is still better than Zuck or Elon or some other clown deciding what people should or shouldn't see.
Most people do not like casually being exposed to such content, and so any sort of soft policy against it results in users self-selecting out of the social ecosystem. It's no longer just your decision, but one that affects the entire platform. And soon the entire platform becomes dedicated to that disturbing content because everyone else has left.
You could go to Parler or Gab or whatever site right now to see the results of your experiment in action, and see why user self-moderation leads to the destruction of the site. This is why users offload that mental stress to the owners of the site, who then hire people to manually filter the cruft themselves (often to the detriment of the people doing said filtering, but that's a whole 'nother thing).
If you actually read my comment properly, you'd notice that I explicitly said there should be a default ban list blocking that.
> Either you yourself will have to watch the beheading video in order to know to ban it, or another group of users will need to tag it as 'disturbing' and you will need to ban 'disturbing' content'.
That's an orthogonal problem. If you wanted to get rid of that entirely you'd have to pre-screen every piece of content, and that's just not feasible; none of the media platforms do it, aside from some automated keyword filters.
Nor is it an actual problem if you don't subscribe to people who post that; a random nobody deciding to post it won't be featured in any feeds anyway, because of how the algorithms work, so there is very little chance it will land in someone's feed by accident.
Of course, someone could play the long con and get popular only to start publishing disturbing stuff, but none of the current systems stop that, and I'm not sure you even could.