The author briefly mentions the Internet Watch Foundation's objective stats and then dismisses them, saying they don't know why the number is so low compared to Facebook and Twitter. Maybe because it just is?
Why are Mastercard and Visa not "investigating" Facebook and other sites for allowing child abuse uploads in the millions?
What's the specific stat or claim you disagree with? If you read the article, you can see both the author and several interview subjects explicitly clarify several times that they're perfectly fine with porn, as long as it's of age and consensual.
What's a source on Facebook having millions of child pornography videos? If they did, I'm sure NYT would report on them and Mastercard and Visa would investigate them, too.
The Times did an actual investigation into the CSAM on Facebook and other big tech, and it's a massive problem. https://www.nytimes.com/interactive/2019/09/28/us/child-sex-...
No, Visa and Mastercard weren't forced by a moral panic to investigate.
This suggests the majority of it was exchanged privately, which means it may mostly have been detected automatically by systems matching content hashes. On Pornhub, it's all available to the public rather than in private messages, which makes the problem a lot more evident and visible, and arguably more damaging (e.g. if you were raped as a child, you may well find it more damaging for one million people to see the video than for ten).
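To be concrete about what "systems matching content hashes" means: a minimal sketch of exact-hash matching against a database of known content, assuming a hypothetical blocklist of digests (real deployments use perceptual fingerprints like Microsoft's PhotoDNA, which survive re-encoding; exact hashing is just the simplest version of the idea):

```python
import hashlib

# Hypothetical blocklist of SHA-256 digests of known abusive files, as a
# clearinghouse might distribute. This entry is the digest of b"test",
# used purely so the example is checkable.
KNOWN_HASHES = {
    "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08",
}

def is_known_content(data: bytes) -> bool:
    """Return True if the upload's exact bytes match a blocklisted hash."""
    return hashlib.sha256(data).hexdigest() in KNOWN_HASHES

print(is_known_content(b"test"))        # → True
print(is_known_content(b"other file"))  # → False
```

The obvious limitation is that re-encoding or cropping a video changes the exact hash, which is why the production tools rely on perceptual hashing rather than byte-level digests, and why hash matching only ever catches previously known material, not new uploads.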
If Facebook hadn't taken appropriate measures, Visa and Mastercard certainly should have investigated them if they were in a position to.
Another issue here is Pornhub is directly profiting off of the CSAM and non-consensual porn by plastering ads all over the page displaying the content and encouraging premium account registrations. Facebook isn't directly monetizing that content.
Of course I know Pornhub isn't doing this deliberately, but 1) they're not financially incentivized to take things down too aggressively (due to loss of ad revenue and premium registrations), 2) even if they were incentivized to do it, the problem is too big and too evasive to tackle with just blacklisting, manual reporting, and a small team of moderators. The only workable solution at this point is a whitelist, which sensibly seems to be the approach they're now taking.
>Its site is infested with rape videos.
Not illegal in the US. In fact, there is plenty of consensual porn made to look like women are being assaulted or coaxed into sex.
> came across many videos on Pornhub that were recordings of assaults on unconscious women and girls. The rapists would open the eyelids of the victims and touch their eyeballs to show that they were nonresponsive.
If I come upon an adult being assaulted, including sexually, and record the incident, that is perfectly legal in the US. That said, there is consensual porn that falls into this category, made by professional outfits.
>It monetizes child rapes, revenge pornography, spy cam videos of women showering, racist and misogynist content, and footage of women being asphyxiated in plastic bags.
It monetizes all content, even illegal content which hasn't been reviewed or reported. This is how it works for all sites that allow user generated content, including YouTube.
Racist and misogynist content isn't illegal in the US.
There are plenty of BDSM videos including footage of women being asphyxiated in plastic bags. You can find men and women with hot wax being poured on their genitalia or needles piercing their genitalia. You can find golden showers and scat too. None of that content is illegal in the US.
>Unlike YouTube, Pornhub allows these videos to be downloaded directly from its website.
Really? We all know this isn't true because of youtube-dl. Restricting downloads will not work for the determined. If the browser can see it, it can be downloaded.
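The "if the browser can see it, it can be downloaded" point is literal: the browser is just an HTTP client, and anything it fetches can be fetched by any other client. A minimal sketch with the standard library (the URL is a placeholder, not a real endpoint):

```python
import urllib.request

def save_stream(url: str, path: str, chunk_size: int = 1 << 16) -> None:
    """Fetch a URL in chunks and write it to disk, exactly as a
    browser's network stack would."""
    with urllib.request.urlopen(url) as resp, open(path, "wb") as out:
        while chunk := resp.read(chunk_size):
            out.write(chunk)

# Hypothetical usage: any media URL visible in the browser's network tab.
# save_stream("https://example.com/video.mp4", "video.mp4")
```

Tools like youtube-dl are essentially this plus the logic to discover the media URL the page's player uses, which is why "disabling downloads" only ever inconveniences the casual user.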
>A search for “girls under18” (no space) or “14yo” leads in each case to more than 100,000 videos. Most aren’t of children being assaulted, but too many are.
By the author's own admission, most of the results were not of children being assaulted. Of those that have illegal content, report it. There is only so much a site can do to weed out illegal user generated content. Pornhub doesn't encourage illegal content or turn a blind eye to it. The company uses many of the automated tools that Microsoft and Google developed for fingerprinting pictures and videos of child pornography.
>Depictions of child abuse also appear on mainstream sites like Twitter, Reddit and Facebook.
You can find videos of adults beating children with all manner of household items. You can find videos of vehicles running over people with their twisted, mangled bodies lying in the street. You can find killings of rival gangs in Mexico and Brazil, or state executions in Iran and Iraq. None of this stuff is illegal in the US.
It would be nice if the author focused on the illegal content, rather than the stuff he finds distasteful.
Federally, no. It appears to be illegal in 46 states, though. And I personally do think (actual) rape should be illegal to film, distribute, or profit off of. (I'm a major detractor of any kind of censorship, but I believe the three exceptions should be [actual] child pornography, [actual] rape pornography, and the sort of porn where humans or non-human animals are [actually] tortured/killed.)
>In fact, there is plenty of consensual porn made to look like women are being assaulted or coaxed into sex.
Of course, but that's completely different, and is indirectly addressed in the article:
>To be clear, most aren’t of 13-year-olds, but the fact that they’re promoted with that language seems to reflect an effort to attract pedophiles.
>The issue is not pornography but rape. Let’s agree that promoting assaults on children or on anyone without consent is unconscionable. The problem with Bill Cosby or Harvey Weinstein or Jeffrey Epstein was not the sex but the lack of consent — and so it is with Pornhub.
This is also part of the core problem. If some content is reported, and it's not clear to a moderator whether it's consensual or non-consensual, it may not get removed even if it turns out it actually was non-consensual. Same with porn of minors: if it's not unambiguous that a person in a video is actually below 18, Pornhub often wouldn't remove the content.
---
>It monetizes all content, even illegal content which hasn't been reviewed or reported. This is how it works for all sites that allow user generated content, including YouTube.
I agree it wasn't appropriate of the reporter to sandwich "racist and misogynist content" between those other far more horrible things. I don't necessarily endorse 100% of the article; just the core point.
YouTube manages to largely avoid this problem by doing something closer to a whitelist approach: if content falls outside the allowed category of "non-pornographic", it's semi-automatically blocked and removed.
>There are plenty of BDSM videos including footage of women being asphyxiated in plastic bags. You can find men and women with hot wax being poured on their genitalia or needles piercing their genitalia. You can find golden showers and scat too. None of that content is illegal in the US.
It is indeed very important to distinguish between things that some may find extreme or disgusting and things that are unethical and should be or are illegal. The key issue is consent vs. non-consent. If a woman has a BDSM fetish and is filmed being asphyxiated, that's completely different from a group of people attacking an unsuspecting woman and asphyxiating her.
The article should've gone to greater lengths to separate this when writing sentences like that, but it also makes itself very clear in other parts that the issue is the non-consensual acts, and specifically the fact that Pornhub is directly profiting off of them and not effective at detecting or removing the vast majority of it.
>There is only so much a site can do to weed out illegal user generated content.
Exactly. As you say, no large platform that allows arbitrary uploads can be effective at this - hence why a whitelist approach is required here instead of a blacklist, which is sensibly what they're now going to be doing.
>You can find videos of adults beating children with all manner of household items. You can find videos of vehicles running over people with their twisted, mangled bodies laying in the street. You can find killings of rival gangs in Mexico and Brazil, or state executions in Iran and Iraq. None of this stuff is illegal in the US.
Of course, but how is any of that relevant to the sentence you're quoting? Recordings of gore and violence are legal. Child pornography isn't legal. It's a completely different situation if Twitter, Reddit, and Facebook are swarming with gore vs. swarming with child pornography.
>It would be nice if the author focused on the illegal content, rather than the stuff he finds distasteful.
I agree, and it is important to be very precise and explicit when advocating for censorship, so I understand the need for semantics and picking apart the reporter's fuzzy condemnations.
But I don't understand being seemingly dismissive of the core issue discussed in the article - massive amounts of real non-consensual porn and child pornography being directly profited off of while also being ineffectively combatted (and being largely infeasible to properly combat), growing every day, accessible to the world.
It's quite plausible they made and are making tens of millions of dollars directly from that content. This doesn't necessarily mean they encouraged it or were turning a blind eye to it, but I think they were kind of glancing away from it until a big enough fire was lit under their ass by NYT and payment processors. They had a large, financial, perverse incentive (pun intended) to not dedicate a massive amount of resources to the problem until now. And now it appears they are dedicating those resources by implementing this whitelist approach, which is painstaking but is the only way to do it safely.
You're absolutely right to critique the article in the areas where it lacks rigor or slides a bit further down a slippery slope, but there's still a big elephant in the room here regarding the actual situation, independent of the article's faults.
Let's separate the core problem away from the greater cloud of general moral panic. There's a kind of "fallacy fallacy" here (https://en.wikipedia.org/wiki/Argument_from_fallacy); the existence of a moral panic doesn't necessarily mean there isn't a real, harmful problem that originally sparked it. They can contain kernels of reality that need to be addressed - e.g. the "vapes killing people" moral panic containing the actual truth that many bootleg THC vapes contain harmful filler compounds.