For people who believe that everything is political, no project is free of moral ambiguity; the politics is just more or less openly visible.
It used to be that the people and organizations making unethical purchases were the ones we considered, and held, responsible. For a long time we've had good, positive movements centered on informing the buyer. We added expiration dates, ingredient lists, nutritional information, crashworthiness scores and reliability ratings, country-of-origin labels, even ethical-sourcing labels. Perhaps too much of a good thing caused information overload and a resulting numbness? Somehow, between Prohibition, the "war on drugs", and supply-side moral regulation, we've lost the spirit of "well-informed free agents making decisions".
Most of the services we're discussing here (FB and the like) are morally neutral by their nature, and it takes a concerted effort to make them non-neutral[1]. It is the particular use they are put to that is moral or immoral. Let's not shift vast moral power from the wider society to a narrow cadre, shall we? The economy is a neat distributed system. It was popular democracy before democracy became popular. Let's not give it up.
--
[1] example of non-neutrality: the current trend of algorithmic manipulation
I don't think that's the case. Is it moral to exploit human psychology when developing addictive features that pull people into the site over and over? Is it moral to sell user information to advertisers so they can emotionally manipulate you into buying crap you don't need? Is it moral to design interactions that evoke outrage and disagreement in order to increase engagement? Is it moral to track user activity across the web, outside the company's site?
I don't think any of these things are moral. These practices might not be necessary for a site like FB (then again they might), but this is the model they all seem to choose. And that's what actually matters.
The gist was, a bare messaging+microblogging platform is, by its own nature, morally neutral[1]. Of course, if the operator starts making editorial decisions - like algorithmic timelines, or propping up/pushing down content, or manipulating user mood - then the operator clearly is making moral judgements & decisions.
Funny how respecting user privacy does, at least partly, absolve the operator of a lot of the risks that come with making moral judgements at mass scale in a hurry.
--
[1] with the only caveat that, if somebody believes facilitating communication to be evil or good, then it would be considered respectively evil or good.
Everything has some political issue around it, but Facebook has politics baked into it because it uses political issues as a means of making money. They sell advertising to politicians when they know the ads are lies. Their platform is filled with fake accounts pushing genocidal agendas from dictators, and in many cases Facebook is sweeping it under the rug.
Their platform is set up to manipulate people, and it is being used at scale to do so in ways Facebook knows are fucking up the world. Its very existence is political at this point.
I don't really take issue with that. Germany even has that codified, and we're very far from being free-speech absolutists: media companies are compelled by law to air political ads from all political parties without checking them, judging them, or adding commentary. Short of an ad being obviously illegal, there's nothing they can do about it, which led to our center-left state media being ordered by the supreme court to air a spot from the NPD (actually far right, with skinheads, boots, and all the rest, not just anti-low-skill-immigration conservatives).
> Their platform is filled with fake accounts pushing genocidal agendas from dictators, and in many cases facebook is sweeping it under the rug.
But not really. They exist, but the platform isn't "filled" with them. The vast majority of content on FB is not political.
I'm sure that FB would be quite okay with having no politics at all. Sure, people argue on the platform, but FB would rather have engagement around cat pictures, celebrity news, and similar things, because people shouting at others about their ideology aren't buying sneakers. FB is not a political advertising company that relies on political ads as its primary funding.
Banning political speech is simply not an option, because some people sometimes want to argue about politics, and you're going to have to fight your users if you don't allow that. You never want to fight your users.
Food and water production has VERY big ethical issues: palm oil, the mass slaughter of animals, deforestation, Nestle taking water away from locals, CO2 emissions, etc.
So yes, there are problems in the food and water industry, but I don't really get what your point is? Should we just close our eyes, ears and mouths and say "fuck it, not my problem"?
Not at all what I was aiming at. The problem people have with FB isn't how they produce the product, but who uses it.
The problem with food and water, in the equivalent scenario, would be in who consumes it. If you let everyone consume it: "whoa, that's a political choice". But it really isn't. It's the default; deviating from it is the political choice.
No, the problem is that the product they produce is _specifically designed_ to be used in this manner, because conflict and argument increase "engagement", and for a large portion of the employee base, bonuses depend on performing work that leads to this outcome.