However, the actual policy proposals for replacing Section 230 are all outright dystopian. Josh Hawley, in particular, is NOT a free speech advocate. His problem with Facebook/Twitter is perceived liberal bias, and the alternatives to Section 230 that he suggests are 100% about wresting editorial oversight away from one class (tech CEOs) and handing it to another (a politically-appointed board).
Does anyone have a good proposal for how to go about reforming Section 230 in a way that's workable and values free speech?
Nobody has a good proposal because every discussion about the ideal of "valuing free speech" hides the true difficulty: nobody wants to be forced to pay for others' undesirable speech.
E.g. Youtube can't be a "free speech" platform because advertisers have free will and can choose to not pay for it. (Previous comment about Adpocalypse: https://news.ycombinator.com/item?id=23259087)
Always mentally translate "create a website that allows free speech" into "create a website that forces others to always pay for undesirable speech they don't agree with" -- and you will see that's a virtually impossible dream to accomplish. There is no broadcasting medium (including websites) in any country that doesn't have interference and pressure to remove/ban content via consumer boycotts, advertisers, subscribers, business judgement, or government officials.
Websites have the hard reality of requiring cpu/disk/bandwidth, all of which cost money, and that's the lever others use to keep "absolute free speech" from getting realistically implemented.
The issue is one of association. There are strong social forces that punish association with any distasteful speech. The association taints everything (and everyone) it touches, and the liability in the form of negative blowback can grow far beyond whatever costs were involved in actually serving the content.
Even if some set of individuals were willing to donate all the hosting costs of the distasteful speech, there would be strong social pressure for hosting platforms not to accept the money.
I get that many advertisers won't, but even companies who do want to advertise on controversial content don't really get the choice to do so, since platforms seem more prone to flat out removing/banning said content rather than putting it behind a 'controversial' flag and letting advertisers opt in/out of advertising on it.
These sites already have systems to mark what kind of content something is, and advertisers can already choose to market on content in some categories and not others. So it'd seem like if there are companies willing to pay for such speech, they should be allowed to.
In particular, my view was basically: "don't like FB/YT/Twitter policies? Spin up a wordpress; it's as easy as posting to FB/YT and you can pay the monthly bill for a quite decent audience using loose change."
I've come around in the past couple of years. Social networks/content platforms are... well, networks and platforms. Like it or not, the policies of the largest networks/platforms will have a non-trivial impact on public opinion. They have become a (perhaps the) public square. And the government doesn't have to extend Section 230 protections to those networks/platforms.
I really like the idea that individual users should be able to create their own content filters and buy/sell content filters. At least in the abstract, this seems like it would address the need for content moderation without centralizing the censorship.
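To make the idea concrete, a minimal sketch of what user-chosen filters could look like mechanically: filters as composable predicates applied client-side, with the platform serving an unfiltered feed. All names and data shapes below are invented for illustration, not any platform's actual API.

```python
# Hypothetical sketch: a user-selectable content filter as a pure
# predicate over post metadata. Everything here is illustrative.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Post:
    author: str
    text: str
    tags: List[str]

# A "filter" is just a predicate: keep the post or hide it.
Filter = Callable[[Post], bool]

def block_tags(banned: set) -> Filter:
    """Hide any post carrying a banned tag."""
    return lambda post: not banned & set(post.tags)

def compose(filters: List[Filter]) -> Filter:
    """A post survives only if every chosen filter accepts it."""
    return lambda post: all(f(post) for f in filters)

# A user writes (or buys) filters and stacks them client-side;
# moderation choices stay with the user, not the platform.
my_feed_filter = compose([block_tags({"gore"}), block_tags({"spam"})])

posts = [
    Post("a", "hello", ["chat"]),
    Post("b", "buy now", ["spam"]),
]
visible = [p for p in posts if my_feed_filter(p)]
```

The point of the design is that "censorship" becomes a per-user configuration rather than a single centralized policy.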
> Websites have the hard reality of requiring cpu/disk/bandwidth and they all cost money and that's the lever used by others that keeps "absolute free speech" from getting realistically implemented.
There seems to be a blind spot here in the idea that "websites" have to be big monolithic platforms that give everyone a megaphone.
"Websites" where you can say whatever you want are and have been cheap, and there have been famous examples of this for decades (Timecube!).
But expecting to get access to someone else's megaphone is a very different question. Recently it's been mediated by "engagement" which is a socially terrible base metric, editorially - it encourages the most ridiculous, provocative thing. But this is still a choice, not just some technological inevitability or "correct" ideal state. Big platforms will always necessarily do some sort of curation.
Putting the government in charge of that curation seems silly, since the real cost of bypassing the platforms is so low. Yeah, you have to earn the eyeballs then, instead of piggybacking on other people's shit, but is that so bad?
It's like saying "people shouldn't make independent movies anymore, we're just gonna have the government review all the scripts the big studios take on and make them take some they normally wouldn't."
how hard would it to be to allow them to match with advertisers who specifically want to be on that type of content?
Require platforms over a certain size to provide real-time data accessibility across platforms. Facebook and Twitter are monopolies by virtue of market position; anyone can build a platform that is functionally the same. Create competition here.
Just like the good old days, until somebody "thought of the children".
Different people have come up with different plans about how you could address tech monopolies, with varying degrees of extremity:
- Splitting up companies that control entire vertical slices of a market. Warren in particular was campaigning pretty hard on this, especially in regards to Amazon/Apple app stores.
- Forcing companies to allow data exports by consumers, and specifically to allow automated data exports. For example, Facebook would need to allow you to access an API to pull your data, so you could plug that API into a competitor instead of manually downloading everything.
- Weakening Computer Fraud and Abuse laws around site scraping and adversarial interoperability.
- Adding additional exceptions to the DMCA around interoperability. For example, allowing companies to break Kindle DRM for the purpose of moving books to a competing service if Amazon didn't provide a way for them to migrate books on its own.
- Forcing certain data formats to be standardized, or requiring standardized API layers on top of services.
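The data-export/standardized-API proposals above could, as a sketch, be as simple as a mandated endpoint a competitor can pull from with the user's token. The endpoint path, payload shape, and function names below are invented for illustration only.

```python
# Hypothetical sketch of an "automated data export" API client.
# The endpoint and JSON shape are made up; no real platform is assumed.
import json

def export_profile(fetch, base_url: str, token: str) -> dict:
    """fetch(url, headers) -> bytes; injected so this stays testable."""
    raw = fetch(f"{base_url}/v1/export", {"Authorization": f"Bearer {token}"})
    return json.loads(raw)

# A fake 'fetch' standing in for a real HTTP client during a dry run:
def fake_fetch(url, headers):
    assert headers["Authorization"].startswith("Bearer ")
    return json.dumps({"posts": [{"text": "hi"}], "friends": ["bob"]}).encode()

data = export_profile(fake_fetch, "https://example-platform.test", "tok123")
```

A competitor could then ingest `data` directly instead of asking the user to manually download and re-upload everything.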
There's a lot of debate in those areas about how far is too far, and what counts as a natural monopoly, and what negative side effects might exist for particular strategies. But, the thread running through all of them is that Section 230 is fine, awesome even. There's no need to get rid of it, 99% of the time we want moderation on most of our platforms.
Platform censorship is really only a problem when consumers don't have the ability to easily switch platforms/hosts, and in that case we should break the monopolies, not the Right to Filter[0]. You see people complain about censorship on Twitter, you don't see as many people complain about censorship on Mastodon, because on Mastodon you can set up your own server if you really need to. One of the biggest points of federated services is to allow communities to choose how aggressive they want to be about moderation.
- splitting up: seems like a temporary fix at best (see ma bell).
- data exports: exporting is nice, but... then what? the network is still a network.
- weakening CFAA/DMCA and allowing scraping/interop: It'd be a terribly hacky world, but I could imagine it working. Probably would end up looking like a weird inverted version of the WUPHF bit from The Office. https://en.wikipedia.org/wiki/WUPHF.com So not a good solution but maybe actually a solution. Plus we should do this anyways.
- standardized API: I like this combined with the sibling proposal of allowing people to build their own filters. I think that's my new position unless someone can convince me otherwise :)
Also this is contextually a clear retaliation for speech that the government does not like, and their arguments are pretextual.
But also, if a platform loses 230 protection when it restricts political opinions, then sites would need to leave racist and homophobic comments up, personal attacks against the authors, etc. Because if they lose 230 protection they become directly liable for content on their site if they filter any of it.
That was the whole point of section 230 - sites have a legitimate reason to want to stop arbitrary content being hosted by them, but they only had the “i’m just a dumb pipe” defense as long as they left everything up. Preventing that is literally the reason section 230 exists.
But here we have a president who doesn’t like one platform’s content moderation policies, and has decided to rewrite the law in order to make that moderation illegal.
It is clearly retaliatory, and it is clearly with the intent of restricting the speech of those entities.
Isn’t that the opposite of what the text of the law says? Doesn’t it provide for protection when moderating content that “one may find objectionable”, which could basically be anything?
It's not so much an attempt to defend free speech as to bury the affected companies in litigation if their moderation policies aren't either nonexistent or up front and aggressive: exactly the situation Section 230 was written to avoid. It's just in this case the lawsuits will come from the parties seeking to cause offence rather than the offended.
In either case, the web in its design is still a "public place" where "free speech" can occur, where anyone who is connected to the internet has the potential (setting aside issues of state censorship) to communicate, via a public website, anything to anyone, anywhere in the world.
Section 230 was reputed to be passed in response to a lawsuit against Prodigy, an online subscription service, which technically, IMO, was not the same as the emerging "web". IMO, services like Prodigy, Compuserve, America Online, etc. were walled gardens that could exist outside of the web. Rightly or wrongly, I always viewed Section 230 as protecting ISPs from litigation arising out of the content people included on their websites, not as protecting websites from litigation arising out of the publication of the content. It is up to the website owner to remove offending content, not the ISP to block access to it. This makes practical sense. We wanted ISPs to stay in business.
As crazy as it may seem to consider messing with Section 230, there is certainly an argument that the protection it affords has been usurped in ways never anticipated, by enormous "communal" websites larger than anyone could have imagined. When someone's website has billions of pages, comprising submissions from the general public, it becomes impractical to remove offending content. I doubt Section 230 was intended to address this problem, to keep a small number of individual websites in business and ensure the creation of a small number of advertising services billionaires.
Yeah, it's interesting. That's how everyone thought about it back in the day. Section 230 was about ISPs, not phpBBs.
I remember actually worrying about this when setting up my own forums. Even talked to a lawyer and included CYA language in the EULA. I would also use this as a justification for bans ("I'm possibly on the hook for his bullshit, which might be called harassment by some local pd, because the law is new and who the hell knows what will happen").
Thanks for the reminder. That's really interesting, and it's also really interesting that I didn't even remember this.
If a “censored” tweet couldn’t be shared/retweeted/replied to but was still available for anyone who wanted to seek it out then the idea (however distasteful) hasn’t been censored strictly speaking but it also hasn’t been amplified. I’d prefer a compromise that leaves control over acceptable content in the hands of the platform owner or the users rather than the government.
You are wrong. The bill does not designate a political board; it requires tech companies that have more than 30 million U.S. users per month and an annual income of over $1.5 billion to publish all of their content moderation policies. Users who charge that the companies are not implementing content moderation policies fairly would be able to sue for $5,000 plus attorney fees.
I think it's reasonable for these social media behemoths to post their mod logs.
I'd even like to see sites like HNs do it. Lobsters does: https://lobste.rs/moderations
If you have a specific gripe with this, let's discuss the legal text.
I really don't see how GP is currently top comment.
Forcing giant social media companies to publish their content moderation logs is transferring power from the tech ELITE to the public. No political committee is in charge; the companies will be forced to publish their logs, and the courts can be used when users think companies are still acting in bad faith and not properly publishing their moderation logs.
PSA: READ THE BILL, IT'S SIX PAGES!!!
https://www.hawley.senate.gov/sites/default/files/2020-06/Li...
This isn't transferring power from the tech elite to the public, it's making trolling the new patent trolling.
Here's Josh's own description of the legislative intent: "Big tech companies would have to prove to the FTC by clear and convincing evidence that their algorithms and content-removal practices are politically neutral. The FTC could not certify big tech companies for immunity except by a supermajority vote"
If the FTC has the authority that Josh wants it to have then it will 100% be politically weaponized by whoever controls the white house at the time of passage (so, Trump, because it'll only pass if R's sweep in 2021). IMO it's quite naive to think otherwise. In general, but also specifically with respect to Trump.
But, assume Trump is this amazingly neutral and high-minded person uninterested in using political power to shape social media narratives. Okay. I have a PhD in machine learning, have tons of experience designing and deploying systems, and I'm pretty up to date on all of the fairness literature. I have No. Fucking. Clue. how I would convince even myself that a content moderation algorithm is "politically neutral".
Even with a clean spec, this seems hard because content moderation algorithms are huge and complex. Wasn't it just a few years ago that a bug in Java's sorting algorithm was found by trying to certify its correctness? Like, bugs live in freaking sorting algorithms of the most popular languages for years and years. Even with a ridiculously clean spec, ...
...and the spec here isn't nearly as clean as "sort the list". The question of what "politically neutral" even means is extraordinarily political. So even if the FTC wasn't explicitly weaponized -- and, dear god, it will be, because the counterfactual here is insane -- the judgements here will still be implicitly political because the spec ("politically neutral") is inherently political.
Proving that a hugely complex ML algorithm is fair simply won't be some sort of apolitical mathematical exercise.
Also, note well: I wasn't even referring to the Ending Support for Internet Censorship Act specifically. But I don't really want to start a debate around this point because that's all beside the point.
I'm curious. What do you think of the proposal made in this thread that people should be able to use their own filters and the big tech cos should be required to implement a clean api for enabling third party filters?
That seems like it solves the "some people don't want to see X" problem in a pretty politically neutral way, but also in a way that acknowledges the difference between the walled-garden network effects web of 2020 and the more decentralized web of 2005.
Seems strictly superior to Josh's proposal of creating a huge incentive to politically weaponize the FTC and giving that almost certainly weaponized body broad authority.
If you think that the "user-chosen filters" solution is not better than Josh's proposal, I'm really interested to hear why.
For reference, here's the law:
> (1) No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.
> (2) No provider or user of an interactive computer service shall be held liable on account of— (A)any action voluntarily taken in good faith to restrict access to or availability of material that the provider or user considers to be obscene, lewd, lascivious, filthy, excessively violent, harassing, or otherwise objectionable, whether or not such material is constitutionally protected; or (B) any action taken to enable or make available to information content providers or others the technical means to restrict access to material described in paragraph (1).
Section 230 was written to solve a very specific problem: Prodigy tried to moderate content on their site, and when someone posted libelous content and they didn't remove it, Prodigy was held legally responsible. CompuServe did not moderate content, and when someone posted libelous content, CompuServe was not held legally responsible. There was a perception that this was a counterintuitive result, and so Section 230 patched over it.
This has nothing to do with the ideological content of the communications. The messages in both cases were already unlawful because they were libelous - the question is whether CompuServe and Prodigy bore any liability (i.e., any obligation to not republish it), or just the end user.
Also, as written, Section 230 does not create an obligation to do anything. You don't have to moderate obscene, lewd, etc. content. You can choose not to moderate anything. The law simply says, 1, you the website operator aren't responsible for what people post, and 2, you don't gain any additional liability if you choose to moderate these things. It doesn't create any liability for not moderating them. The perception (which seems to have been empirically correct) is that Prodigy's approach would be more popular in the market than CompuServe's, and so the law should not create a legal incentive to act like CompuServe. The new law simply removed that incentive; it did not create a legal incentive to act like Prodigy.
The results of the two cases are only counterintuitive if you believe it is good for society for service providers to proactively moderate speech that is already illegal and err on the side of over-moderating. I don't think that belief is easy to reconcile with a strong pro-free-speech view - you're trusting a platform to be making decisions that would otherwise be made by courts, and you don't have nearly the representation/recourse/etc. you do with the legal system, if they decide to moderate you.
In particular, adding an obligation to protect free speech means that providers can only moderate content if they're confident it would result in legal liability. If they're not sure (suppose that, to pick a recent example, someone says that J. K. Rowling "cannot be trusted around children" - is this libelous, or a constitutionally-protected opinion?), they should err on the side of not moderating. But that matches the status quo ante Section 230. If you think that forums should err on the side of under-moderating, then it was perfectly fine to be in the legal situation where Prodigy's approach was riskier than CompuServe's.
Note also that neither of these scenarios does anything to discourage people from running forums where they tightly control what is said (ideologically or otherwise). If I want to host a personal blog with only my own posts, I can do that today, I could do that before Section 230, and I can do that essentially regardless of anyone's proposals (because I have a First Amendment right to say what I want and only what I want). If I want to invite my friends and only my friends to comment, I can do that too. If I want to invite the entire world to comment and I screen comments before posting, I can do that too (I also have a First Amendment right to free association). I'm still liable for unlawful posts (from libel to copyright infringement to whatever else), but if I'm willing to tightly moderate content, that's okay.
Another pro-free-speech opinion here, by the way, is that the real problem is with libel laws, and neither CompuServe nor Prodigy should have been held liable because the speech shouldn't have been illegal in the first place. This is entirely orthogonal to the "free speech" concern of perceived ideological bias.
It's only in the weird intersection of all of these things that the framing of Section 230 and ideological bias seems to make sense - you'd have to take the anti-free-speech view that ruinous penalties for libel are good, and then carve out an anti-free-speech exception that says that if you choose not to exercise your right to say what you want or associate with who you want, libel laws don't apply to you. And then, somehow, the two anti-free-speech approaches cancel out and turn into a free speech view - platforms are obligated to be non-ideologically-biased (in a sense defined by the government) for fear of arbitrary civil penalties.
(By the way, any free-speech reform to Section 230 really should start with repealing 230(e)(5), where FOSTA/SESTA partially removed Section 230's protections so that platforms became responsible for messages posted by users about "the promotion or facilitation of prostitution.")
What do you think of the proposal that, to keep section 230, websites with a large audience must implement a standardized api and then folks would be allowed to create their own content filters on top of that api?
Thanks for your post.
This seems to be because they live in a bubble where everyone agrees with them. But when they look at the real world they do not see the same, giving them the perception of bias where there is none. They simply have an unpopular opinion.
Section 230 exists because the courts punished Prodigy because they tried to moderate their forums but did it imperfectly, but didn't punish CompuServe because they let anything go. The idea is to allow imperfect moderation in addition to both zero and perfect moderation.
The internet without section 230 isn't a bastion of internet freedom. It's 4chan and 8chan. It's a shithole.
Between the corporately-curated snail mail and 4chan, I think social media will skew toward 4chan, which I'd gladly accept. Frankly, I like it even more than moderately-moderated social media; it's a lot funner. :D Shithole, yes, but charming and fun. But that's probably mostly a product of its anonymity.
Also, imperfect moderation is the thing that tends to annoy users most. With perfect moderation (like a blog with comments disabled?), you have no hope. With no moderation, there's no danger. With imperfect moderation, there are inconsistent or nonsensical bans, there's the urge to take chances and get punished, sometimes by capricious mods, and there are endless sidebars of rules to read before posting. Ugh.
The internet will find a different way to appeal to the mainstream, probably by becoming more similar to cable TV & Netflix & Disney: practically eliminate amateur content and stick to professional, big budget productions.
I can't think of any high-profile liberal politicians who have such a blatant disregard for the rules of the platform, so that must be what you're referring to, right? Or is your contention that rules like "don't use our platform to call for violence" are themselves political censorship targeted at conservatives?
I fail to see a difference between the two, and think both are untenable fantasies.
The gun manufacturer ceases to maintain control and cannot be assigned responsibility after the sale of the good.
There are a number of implied warranties made when a transaction occurs. [1]
[1] https://www.investopedia.com/terms/i/implied-warranty.asp
[0]: https://www.tracking-point.com/ (Yeah it's an aftermarket product, but for argument's sake, let's say it was 1st party)
I can't hold a skateboard co. responsible for what people do with skateboards they've purchased. I can most certainly hold a skate park responsible for what happens in the skate park.
"Company makes thing, people do bad with thing, hold Company responsible" is a scary line of thought, and that's exact same scenario for both Facebook and guns.
"Thing hurts person using it" is closer to your skate park analogy, and yeah, in that case, of course Company should be responsible for making a bad thing.
If a platform controls and exercises editorial control over when something is said, what is said, and who may speak and who may listen, then it may be useful to hold that platform liable. It is about intent, control, and power.
Gun manufacturing regulation, however, is full of rules about protecting society and honoring international agreements. Selling guns to countries currently at war is problematic, so we hold those manufacturers responsible if they try to profit from running guns.
Besides that, you're framing this wrong. If I host a platform as I host you in my house, I have a right and a duty to make sure you aren't committing illegal actions within my domain. This is a fairly universal law, written and unwritten, that who and what you host in your domain is your responsibility. Why should a few privileged platforms get a free pass?
Sometimes the arms dealers (sellers not manufacture) are liable for the actions of the gun owners.
There is no shortage of cases, I searched Walmart (because Walmart scale), but some examples:
1. Walmart Settles Lawsuit for Selling Gun Used in Murder by Neo-Nazi (https://blogs.findlaw.com/injured/2018/11/walmart-settles-la...)
2. Wal-Mart sued over sale of bullets used in Pennsylvania murders (https://www.reuters.com/article/us-pennsylvania-bullets/wal-...)
Sellers of guns and ammunition assumed they were protected from liability by the federal Protection of Lawful Commerce in Arms Act.
On that angle, why are the folks arguing "a few big tech companies have had immunity for too long, they have too much influence," not also arguing "a few gun manufacturers have had immunity for too long and have too much influence?" To me, providing a tool to kill and facing no consequence is a lot more influence than Twitter enforcing their terms of service.
To be clear, I don't think weapons companies should be responsible for what people do with their products. I certainly don't think tech companies should be responsible for what people do with their products, either.
> The Justice Department proposal is a legislative plan that would have to be adopted by Congress.
Oh boy...the costs of running Google, Twitter, Facebook and others... will quintuple overnight when Congress passes this.
I can't wait until the fines start raining down. They'll have earned every cent of the financial damages. The arrogant, biased platforms picked a fight they can't win with half the political power in the US.
This rapid, broad shift is why Larry and Sergey ran for the hills not long ago, abandoning Alphabet as fast as possible; they saw what was coming (including the anti-trust investigations). I bet they destroyed as much of their internal communication history as possible as well (legally of course, probably), so it can't be used against them or the company.
Regardless of the merits of the idea, it’s an amazing paradigm shift for the GOP.
For example, if two US citizens on US soil discuss insider trading of an Australian company that does not even do business in the US using trades on US brokers, those two individuals are in violation of the Australian Corporations Act and can be criminally prosecuted (by Australian authorities). Why? Because Australia claims jurisdiction over any Australian company.
Likewise, "sex tourism" with children in South East Asia is rampant and many countries are unwilling or unable to prosecute. Australia has deemed having sex with an underage person in a foreign country is likewise a crime in Australia that they can and do prosecute.
The US is able to exercise a lot of power over international banks because it has the power to remove a financial institution's access to the US banking system. It's this stick that allowed the IRS to go after Swiss banks for complicity in US citizens evading US taxes.
Unless of course any company whose viability materially depends on Section 230 protections would have been dead in the water pretty much in any other jurisdiction on earth in the first place, the threat of moving out of the US was a naked bluff and the powers that be realized it.
https://www.hawley.senate.gov/sites/default/files/2020-06/Li...
Back in the 90s and 00s, even when dim bulbs like Yahoo or AOL did dumb stuff like shutting down child molestation victim support group channels in ham-handed attempts at moderation, the response -- while rightfully called stupid and morally wrong -- didn't lead any idiots to think the government should somehow punish them. Was it because people actually understood the internet existed as many small sites as well as the big names?
Is it specifically when conservative voices are banned, or a voice of any leaning? If a voice, right or left, tweets supporting hate or violence, should that be removed with equal prejudice or left in place regardless? If it's a left wing voice posting bannable content 9/10 times, is that unfair banning/censoring of left wing voices or simply the ratio at which such ban-able content occurs? Does that cross over into publisher status? I don't think so.
I ask as the outrage over specifically conservative voices being censored has less to do with reality and more to do with loud people wanting attention as the bias against them is [mostly made-up](https://thehill.com/opinion/technology/440703-evidence-contr...).
There isn't censorship targeting right wing comments, just removal of extremist comments that often catch vocal right wing groups MORE OFTEN than left wing voices. There are loud people on the right (and some on the left) spouting misinformation, disinformation, debunked conspiracy theories (think QAnon), or outright threats and lies. These loud people and groups, when their content is removed, get more loud, and right wing media platforms embrace this because it feeds on a common [Persecution Complex](https://en.wikipedia.org/wiki/Persecutory_delusion) that is largely nonexistent and often linked directly to some level of what some call privilege (White/Rich/establishment/etc). Yes, there are left wing media platforms that do the same, but they are nowhere close in audience size to Fox News. They aren't even 'news': as of October 2018, they [specifically noted in their ToS they are entertainment](https://mediabiasfactcheck.com/fox-news/). If you're getting the idea that banning of conservative voices is censorship from any voice/commentator hosted by Fox News, that's not news, it's entertainment!
I mean, look at this [summary of legal cases](https://www.theverge.com/2020/5/27/21272066/social-media-bia...). The majority are from conservatives, with 1 item from a democrat, and most are quickly struck down as the complaint is founded on an inaccurate idea of the first amendment, namely that the users' rights were not infringed and the lawsuit attacked the platform's rights! The platform has the right to not let you use it for your inaccurate, biased, or even malicious content. You are not infringed upon by being removed from that platform for those issues specifically, or likely for anything the platform deems is against their ToS or rules. In r/conservative, this means any dissent from the established norm, even just pointing out polling data that invalidates the headline, is an immediate ban. That's allowed by Reddit and not under the jurisdiction of the government. It's not lawsuit worthy either. My rights aren't being infringed if a subreddit wants to be an echo chamber of misinformation protected by heavy censorship. There is some schadenfreude when that sub complains of censorship of right wing voices but rejects any sources stating it's not reflective of reality.
As a comparison, consider how climate science is presented across media. 99% of scientists agree that climate change is aggravated by human activity and is something we need to tackle; so does the Pentagon. The 1% are the counter-voices saying it isn't an issue or that human activity is not a factor. These sides are then presented as equal (which is itself misinformation) and given equal weight, as if the debate were 1v1, not 99v1 as in reality. Same thing for right-wing voices and voters across the US: [they are a minority](https://news.gallup.com/poll/15370/party-affiliation.aspx). Only about 25% of those polled identify with the GOP, even though the GOP holds more than 51% of seats in the Senate and, after the 2016 election, held more than 51% in the House for that term. When this minority holding a majority is 'attacked', everyone on connected outlets is going to hear about it (remember, on some platforms this is entertainment, not news). That's basically the commentators' First Amendment right to protest the ban, and definitely allowed under free speech. That doesn't mean it's an accurate portrayal of reality, just as the censorship of right-wing voices _seems_ biased but really isn't. In both cases, the minority is extremely vocal and actively disguises (or simply ignores) data stating otherwise. Conflict, even artificial conflict, drives clicks and revenue, and that's what entertainment is all about.
My bigger question is: how is this interference with interstate commerce? I could see that argument applied to an influencer or commentator who is removed from their primary platform; if they lost revenue, they may even have standing. But that still isn't an issue for Twitter. You can be removed from any platform you don't own, and you should always have your own site and system set up for hosting your content; that's been preached by internet-first media groups since YouTube rose to prominence. But it isn't a violation of interstate commerce.
This is an open-and-shut First Amendment case.
Yes, it is, but the outcome would be the opposite of what you expect.
Constitutionally, private corporations can't be censored or compelled to speak when the speech is 1A-protected speech.
But not all speech receives 1A protections, and Section 230 is about the type of speech that isn't protected by 1A. E.g., without 230, corporations could be sued for users' libel or held criminally liable for helping distribute lots of different types of speech (e.g., making terroristic threats, distributing child porn, facilitating illegal acts, etc.).
So, a case that hinges on 230 protections would be open-and-shut if 230 were repealed. Just with the opposite outcome of the one you're expecting.
You realize that an editorial posted on the NYTimes or Fox News site can get them sued for libel or defamation, right? This does, in fact, happen all the time. Read about several of the left-leaning cable networks and their lawsuits over the 'Covington Kids'. They are publishers, and are responsible for their content.
Google, Twitter, Reddit, etc. acted like platforms for many years, same as the telecoms, and nobody ever complained. Look at how much influence and cash they have. What could possibly cause them to risk this gold mine?
Donald Trump became president, and the liberal companies and employees in Silicon Valley / West Coast lost their nerve and resorted to censorship, decided to use their 'platforms' as their own private political tools.
Oops...
Now they can enjoy the same restrictions that other publishers have always had to deal with. Hopefully their shareholders realize the source of the issue, and boot the activist executives and employees, as it's now going to cost them a lot of money, all of which they brought on themselves.
* https://www.washingtonpost.com/lifestyle/style/cnn-settles-l...
As one of those shareholders, from a purely profit-motive, I'm not so sure that replacing 230 with something more onerous would be a net negative.
Moats are expensive, but also valuable.
This is what happens when you get government involvement in tech.
There are absolutely other powers that stand to benefit, or are pushing the agenda as well, but to claim it's not a result of Trump's hissy fit is ridiculous.
https://deadline.com/2020/05/donald-trump-twitter-social-med...
The executive order was signed by exactly one individual who was completely open about why he was signing it.
At the end of the day, it's expensive to police your platform so they went with the immunity route.
From broad, repeated invasions of online privacy to numerous scandals involving state-sponsored disinformation campaigns, Facebook shows time and again that they are not responsible corporate stewards of the internet.
Zuckerberg has got to be one of the least popular Fortune 500 CEOs and yet he's completely invincible, investors don't want to touch him.
So how do you propose to hold such a company accountable if not through regulation and oversight?
And now they want editorial control over publicly owned information services?
Why don't they (the installed alt-right) go claim that the government deserves editorial control of the papers as well?
Clearly that's the issue needing attention here; not obstructing an obstruction investigation.
https://en.wikipedia.org/wiki/Facebook–Cambridge_Analytica_d... and up a bit further: https://en.wikipedia.org/wiki/SCL_Group
The current administration just gave away $500,000,000,000 of taxpayers' money and now won't even release the list of who took the money?
And you think that the problem is Facebook's editorial transparency and lack of accountability?
Kindly review https://usaspending.gov and tell me where our money is going
Besides, blaming them for being unable to stop state actors is a bizarre form of victim-blaming. Complicity would make sense, sure, but not mere inability to stop it. Asking them to be both stronger than and weaker than states at the same time is an unusual demand.
Have you considered that the reason most people are okay with Facebook not punishing President Trump is that many people agree with him? Have you considered -- Facebook having more active users than Twitter -- that Facebook is more representative of the United States than your feelings?
Twitter is known to be mainly popular among the media, who -- shock and surprise -- don't like Trump and lean liberal. Facebook is popular among the rest of us. Is it possible in your worldview that other people may not think like you?
Hope you are satisfied with all that awesome power.
MySpace, Facebook, and Twitter were nice, clean spaces to hang out in for a while. Then horrible and traumatic pictures and videos started showing up in my feed. I know the world is a horrible place, but I don't need a constant reminder of it. I unfollowed as many people as I could.
Now, as a parent, I cannot constantly monitor these supposedly safe sites. I have seen disgusting or violent videos on YouTube Kids, Amazon videos aimed at kids, and even some kids' shows on Netflix.
These platforms should be responsible for the content they host, no matter who uploaded it. That would be one way to clean up the filth.
That's why I will pay for cable TV again and let someone moderate content for me.
And... this isn't going to change "kids shows on Netflix"...