So besides outright changing the algorithms to promote non-divisive content, here are a couple of things I think could help:
- Limit the spread of information in general in favor of content created by the people you follow
- Un-personalize advertising
> Limit the spread of information in general in favor of content created by the people you follow
I don't think that's what people want from their social networks nowadays. FB, Twitter, YouTube, TikTok, Snapchat, etc. no longer work this way. Suggesting that Facebook revert its app to what it was 10 years ago is not a serious suggestion, because there are many other apps that would fill that void. If it's not FB, another app will take its place and give people the outrage they're looking for.
> Un-personalize advertising
Advertising plays a very small part in this. Most of what you would call "disinformation" is spread through reposts, which are not affected by advertising.
Sure, there might be some hostile actors out there spending money on pushing propaganda to the masses. But from my experience, people actively seek this nonsense out; the algorithms just make it easier for them to find it.
In my eyes, the real problem is that most people aren't equipped with the right tools to identify bullshit. Simple things like an inability to gauge scale. e.g., "9,000,000 gallons of oil have been spilled from pipelines in the last 10 years." Is that a lot? I have no idea, but what I can do is compare it against the total volume pipelines move, or against other forms of oil transportation. Most people won't do that work though; they will go straight to outrage.
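The "gauge the scale" work described above is just a division: spilled volume over transported volume. A minimal sketch, where the throughput figure is an entirely made-up placeholder (only the spill figure comes from the comment, and 42 gallons per barrel is the standard US oil-barrel size):

```python
# Back-of-the-envelope scale check: is 9,000,000 gallons spilled over
# 10 years a lot relative to what pipelines move in that time?
# The throughput number below is a hypothetical placeholder, NOT a real statistic.

GALLONS_PER_BARREL = 42                    # standard US oil barrel

spilled_gallons = 9_000_000                # figure quoted in the comment
assumed_barrels_per_year = 5_000_000_000   # hypothetical pipeline throughput
years = 10

transported_gallons = assumed_barrels_per_year * GALLONS_PER_BARREL * years
spill_fraction = spilled_gallons / transported_gallons

print(f"Spilled fraction of total transported: {spill_fraction:.2e}")
```

With any plausible throughput, the point is the same: the headline number means nothing until it's expressed as a fraction of a relevant denominator.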
I'm happy to be proven wrong though. Maybe this is the thread where people will make practical suggestions.
- Reduce the majority of bots by making them economically unsustainable.
- Provide money directly to improve moderation, and make platforms liable.
- Make journalism cater to individuals rather than to ad networks.
- Reduce toxicity, because trolls won't keep paying after getting banned regularly; no need for other fingerprinting methods.
- Reduce the number of users and silo them automatically.
Free business models are anti-competitive and result in worse service for users by making platforms accountable to advertisers (other companies) rather than to consumers.
Force Facebook to introduce a minimum payment based on purchasing power. Outlaw free/freemium models in software, or limit them to a fixed period (3-6 months).
This wouldn't apply to non-profit services. And open source would be fine, since it would only apply to services or for-profit companies.
When it was discovered that tetraethyl lead was widespread in the environment and caused neurological damage, it was banned. Yes, that materially harmed several chemical companies whose livelihood was based on producing tetraethyl lead.
So what?
If your business model harms people, I don't care if stopping harming people eliminates your business. People matter. Businesses do not.
Are we supposed to just go, "Yeah, we know Facebook is harmful to millions, but won't someone think of the poor shareholders?" Then shrug and accept it?
For example, by doing something when they are warned for years by multiple entities that FB is being used as a tool to support genocide, as in Myanmar.
It seems that only when such things blow up publicly and the stench of bad publicity gets too strong do they send out the Zuckerbot to announce his usual platitudes, then get back to business as usual. And that's far from the only example where their product was used for oppression by authoritarian regimes.
This company could do a hell of a lot more to counter this. But they just don't give a shit, unless publicity gets too bad.
edit : word change