The whole idea behind this sort of filtering and vetting is not about teaching people to reason better about the world and to judge the truthfulness of those who would tell you what is happening and why. Rather, it says that most people are incapable of making those kinds of judgments for themselves and that an informed elite should control what information is available to the plebs. Once you concede this, how long before we make "fake news", or rather unapproved news, simply illegal? How long until news simply becomes propaganda?
There absolutely is a problem with "fake news" from all quarters, and it is nothing new at all: whether that was "bat boy" at the grocery checkout, the sinking of the Maine, or the news and characterizations during the recent campaign. "Stop the Bullshit" and similar filtering efforts are not a solution to the problem, merely the creation of new ones, such as an even less thoughtful populace and a new set of arbiters of truth that I for one won't trust any more than the current ones.
One recommendation I'd have is not to use the word 'safety' in this context; it feels kind of overprotective. Perhaps the link should read "Take me back to reality" instead?
I chose "safety" because I think I copied Chrome's message when visiting a website with an invalid certificate.
Every crowdsourced news aggregation site faces the same challenge, too. Digg had serious vote manipulation problems. Reddit's /r/politics section was overrun by pro-Clinton activists, etc. Concerted, concentrated effort can almost always overpower the consensus of average users.
EDIT: to be clear, my point is that if/when this becomes a powerful tool, the incentive to aim it at a broader or different set of goals is overwhelming.
This is the same basic difference that you can see between good and bad science lectures in schools. Bad lectures give students a bunch of formulas they have to trust and memorize. Good lectures describe how the world works, which they can confirm through their own observations and experiments.
Anything that relies on blind trust in some curator will eventually have the exact same problems most news media have right now.
If it's not news sites, it's images with some text over them, videos, etc.
The problems that face us will not be solved by siloing ourselves away from views we disagree with. It might take more effort to filter out bullshit in our current age, but if we're not even aware of the bullshit someone else is feeding on, how are we going to combat it? We need to bring more people into public debate, not isolate them.
Also, confidence in the press is at an all time low, so there is already awareness. [1]
1. http://www.gallup.com/poll/192665/americans-confidence-newsp...
I'm against all the fake news on Facebook et al., but if we don't teach people to be good at detecting it, we're just putting a band-aid over a broken bone, so to speak. Am I being overly optimistic about society in thinking that learning how to detect bullshit is better than doing the hard work for them?
In addition, it should be noted that this tool can be used to apply any prejudice, bias, etc. I certainly empathize with the sentiment here, but this can quickly become a case of be careful what you wish for.
This is an interesting idea. Perhaps we need a public PageRank algorithm and database of pages and domains that could evolve over time. Then the browser addon could just overlay the score of the current page's public rank (and perhaps have a menu showing a list of known outside pages/domains that link to the current page/domain).
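To make the idea concrete, here is a minimal sketch of the power-iteration PageRank the comment alludes to, over a toy link graph. The domain names and the graph are entirely hypothetical; a real system would build the graph from actual inbound links to the current page's domain.

```python
# Minimal power-iteration PageRank over a toy link graph.
# All domain names below are made up for illustration.

def pagerank(links, damping=0.85, iterations=50):
    """links: dict mapping page -> list of pages it links to."""
    pages = set(links) | {p for targets in links.values() for p in targets}
    n = len(pages)
    rank = {p: 1.0 / n for p in pages}
    for _ in range(iterations):
        new_rank = {p: (1.0 - damping) / n for p in pages}
        for page in pages:
            targets = links.get(page, [])
            if targets:
                # Distribute this page's rank among its outlinks.
                share = damping * rank[page] / len(targets)
                for t in targets:
                    new_rank[t] += share
            else:
                # Dangling page: spread its rank evenly over all pages.
                for t in pages:
                    new_rank[t] += damping * rank[page] / n
        rank = new_rank
    return rank

graph = {
    "reputable.example": ["story.example"],
    "other.example": ["story.example", "reputable.example"],
    "story.example": [],
}
scores = pagerank(graph)
# story.example is linked from both other domains, so it scores highest;
# the addon would overlay such a score on the current page.
```

The overlay part would then just map the current tab's domain to its score in the shared database.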
You can't teach people to be on full alert all the time.
The analysis that shows you whether a site is fake news or not is structural (and simple: go through the back catalog and see if the Dalai Lama or the Pope is endorsing both Putin and Trump). Once you've accomplished that, there is no point in wasting any energy on any of the stories on the site.
The fake news/spam thing might be straightforward. But what people are objecting to is the blacklist. Who decides what is on that blacklist? That is the problem. What happens if the developer goes rogue, gets hacked, or sells to a shady interest? What happens if the developer favors a certain party's biased news? Or, even more insidious, what if they don't recognize their bias?
And in anticipation of the crowdsourced/decentralized argument I have heard elsewhere: let's talk about Bitcoin. It is decentralized, under no one's control, right? A while back there was talk of increasing the block size to make transfers go faster (the technical details are irrelevant; sorry if I made a mistake). One developer resigned because the other couple wouldn't make the change. Who controls Bitcoin? ~4 people. This is why people are worried about blacklists.
As a side note, the book Fahrenheit 451 is super relevant right now. This is what we are scared of. FWIW, it was the only book I read in school that I remember/had an influence on me. Please read everyone! Thanks
I like this approach, big, bold and in your face. We need to call out all the fake, garbage news out there.
Time is finite. It is impossible to consume all of the information that is published in the world. It's not just a little impossible: the fraction of information that an individual can consume is very near zero. Most people who have worked in an academic or scientific field know that it is impossible to consume even a fraction of the domain-specific publications in their field, much less "all the news that's fit to print."
It makes sense, therefore, to have a strategy for selecting a subset of information that one trusts as "worth considering," which might include a spam filter (just as email has, for good reason).
I personally very seldom read news articles shared online, because my experience has been that they are consistently of very low quality. Speaking for myself, I get big world event news from the Economist, which has earned some trust, and the rare nytimes/wsj article that is about something it can't possibly fuck up (anything outside the borders of the United States is generally beyond NYT/WSJ).
Would I be wrong for filtering all of the shared news articles from my feed? The only reason I keep them there is that I skim Facebook to get a feel for what people are thinking about and feeling on a given day (to stay slightly "in touch" with people, even if I think they wallow in a world of self-serving garbage information and would be better served by finding something more interesting to occupy their minds).
A better criticism of this kind of filtering might be that it is intrinsically arrogant, but I don't think it is any more paternalistic or irresponsible than a spam filter for email.
Because the current hard coded list of URLs is a start, but it's not really a scalable solution to the issue.
However, from what I can see in this file:
https://github.com/jacquerie/stop-the-bullshit/blob/master/d...
It just seems like it's going to compare examples of articles included in the source files, as found here:
https://github.com/jacquerie/stop-the-bullshit/tree/master/d...
So how is it going to detect the difference between a real or fake piece from this?
So, given some training data that produced two reasonable clusters with respect to the ground truth, I have a model that I can expect to generalize well on new data.
Now, this is not what that notebook shows, because it's missing the evaluation on testing data! The main point of the notebook is that the Jaccard distance of the tokens of the HTML of the page, despite being very simple, appears to generate a reasonable model.
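For anyone unfamiliar with the metric, here is a rough sketch of what a Jaccard distance over HTML tokens looks like. The tokenizer and the sample "pages" are made up for illustration; the notebook's actual preprocessing may differ.

```python
# Sketch of a Jaccard distance over HTML token sets.
# The sample pages below are hypothetical stand-ins, not the repo's data.

import re

def tokens(html):
    # Crude tokenizer: lowercase alphanumeric runs (tags included).
    return set(re.findall(r"[a-z0-9]+", html.lower()))

def jaccard_distance(a, b):
    """1 - |A ∩ B| / |A ∪ B| over the two pages' token sets."""
    ta, tb = tokens(a), tokens(b)
    if not ta and not tb:
        return 0.0
    return 1.0 - len(ta & tb) / len(ta | tb)

page_a = "<html><body><h1>Pope endorses candidate</h1></body></html>"
page_b = "<html><body><h1>Pope endorses candidate, sources say</h1></body></html>"
page_c = "<div>Completely different markup and words</div>"

print(jaccard_distance(page_a, page_b))  # small: near-duplicate pages
print(jaccard_distance(page_a, page_c))  # large: unrelated pages
```

The intuition is that sites sharing templates and boilerplate cluster together under this distance, which is why it can separate groups of pages without any semantic understanding of the articles.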
But showing them the _reason_ why a certain website is blocked can become an opportunity to teach people critical thought, something that other comment threads point out.
Nice project. I may use it :)
Sounds like an interesting concept, though it'll need some really careful moderation to stop it getting abused. I've seen sites get low WOT ratings because the staff banned a few members and said members then took it out on their WOT page.
You'll need a good way to stop these negative SEO type attacks from being weaponised against a news site's rating by its competitors.
As we've seen this last election, yes, they are. How else do you explain millions of shares on "news" stories that are immediately, obviously false to anyone looking past the headlines?
I'm willing to bet that the vast majority of these shares were by people who wanted the news to be true anyway. I have friends who shared some stuff like that. They were never gonna change their vote/view. Everyone else would just roll their eyes.
I'm with Zuckerberg on this one. Fake news did not affect the election. People are just looking for an excuse to blame. I would love to see how many people will still be against fake news come April 1st.
Are there any examples of subtle fake news stories? My feeling is they're really obvious especially if not being covered by major news sites.
If I write a blatant hit piece that is based entirely on factual information, a lot of people will report it because they are upset over it, but that doesn't change the fact that it is truthful.
How do you prevent people from suppressing opposing view points by reporting them as fake?
I was merely pointing out that there could be more to address this problem and that I love this idea so far (I read the readme).
Please, let me do your thinking for you.
Wait, but then we would need Stop the Stop the Stop-the-Bullshit.... um... but what if?!!
> "Also includes a clustering analysis that could lead to an algorithm to automatically detect Fake News."
This implies that they want to do automatic fake news detection, but aren't quite there yet. Is that correct?
For general requests: https://github.com/jacquerie/stop-the-bullshit/blob/master/s...
For facebook: https://github.com/jacquerie/stop-the-bullshit/blob/master/s...