What do you think about calls to remove platform immunity from algorithms that have an editorial effect?
You mean, "repeal Section 230"? Because the entire point of Section 230 is to allow imperfect, biased moderation without having to eliminate all user content. Such calls are ridiculous, stupid, or malicious on a host of levels. Making editorial decisions about what to allow on your own private property is core 1A Freedom of Speech, with caselaw dating back to well before the web.
"Editorial effect" is also an utterly meaningless phrase. You probably have some silly politics thing in mind, but moderating against porn or violence also has an "editorial effect". So does having a forum devoted to aircraft or cats. I think trains and birds are great too. But if I want to run a forum specifically about aircraft or cats, I need to be able to delete train or bird posts, and if necessary ban users who won't follow the rules. This is all completely biased and has the editorial effect of shaping the forum to a specific niche of speech, there is nothing common carrier about running a focused forum. And politics could indeed enter into it, what if some political group proposes a law banning aircraft or cats? Rallying and organizing against that could include being biased against those who want to support that law. Colorful and strident invective may be featured. Such is life in a free society.
If you want a soap box that does something else, the law also protects your ability to make your own (or to team up with others to do it, or pay someone else to do it, or whatever else). And as a practical matter it is now easier and cheaper to do so and reach a potential global audience than at any time in human history (let alone the history of the US). Win the argument in the marketplace of ideas, not with the state monopoly on violence.
Off the top of my head, I don't have a good way to differentiate those algorithms in legal terms. As another comment points out, even sorting chronologically has an editorial effect of sorts, but these things are different and I know it when I see it. Perhaps someone wiser than me has an unambiguous definition.
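For illustration only (the post data, scores, and function names here are all made up), the contrast being gestured at might be sketched like this: both orderings are "algorithms", but one privileges recency while the other privileges engagement.

```python
from datetime import datetime, timezone

# Hypothetical post records: (timestamp, engagement score, title).
posts = [
    (datetime(2024, 1, 1, tzinfo=timezone.utc), 5, "quiet post"),
    (datetime(2024, 1, 3, tzinfo=timezone.utc), 2, "newest post"),
    (datetime(2024, 1, 2, tzinfo=timezone.utc), 90, "viral post"),
]

def chronological(feed):
    # "Neutral" ordering: newest first. Still a choice -- it privileges recency.
    return sorted(feed, key=lambda p: p[0], reverse=True)

def engagement_ranked(feed):
    # Curated ordering: highest score first. This is the kind of ranking
    # usually meant by "algorithms with an editorial effect".
    return sorted(feed, key=lambda p: p[1], reverse=True)

print([p[2] for p in chronological(posts)])
# → ['newest post', 'viral post', 'quiet post']
print([p[2] for p in engagement_ranked(posts)])
# → ['viral post', 'quiet post', 'newest post']
```

Even the "neutral" version embeds a judgment (recency matters most), which is why a clean legal line between the two is hard to draw.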
That's always the rub and the core issue of free speech: there are no oracles. You have to imagine what your worst, most hated enemy demagogue would do with the tools you propose to create, because they will have them. Nobody can be trusted with the power. It is hard, though, and I won't completely dismiss the idea that the scale networking/storage/ML offers can create emergent effects that don't show up at a smaller level. The legal notion of tracking, for example.
----
0: Though note that "amplify rumors over well-sourced reporting, demagoguery over reasoned debate" describes the tabloids sitting right there at a large percentage of supermarket checkout aisles. Nothing new under the sun; for that matter, you might be surprised at some of the content of ordinary newspapers in, say, the 1800s.
It seems like the proponents of such a rule change are being underhanded, thinking "We can't ban companies from having a political bias, so we'll say that if the company has a political bias (i.e. any editorial/content policy), it becomes liable for any libel, or scams, or threats (written in any language) that appear anywhere on its platform".
I might support a narrow form of this, though, which says that if a platform doesn't let you opt out of (legal content) filtering/re-ordering of content, then the platform has profited from you receiving messages with an unwanted bias (i.e. commercial speech), and therefore owes the user a small amount of statutory damages each time the user suffers some harm.
Isn't this true of every non-chronological sorting algorithm?