But one major problem is they allowed anyone to upload AND download. So userA would upload child abuse imagery, and userB would download it. PornHub would delete the video uploaded by userA, userB would reupload it under a different name, and userC would download the reupload.
It's playing whack-a-mole with illegal child abuse imagery. Interestingly, PornHub only really started caring when payment processors started looking into things. But better late than never.
Allowing (re-)upload of prohibited or previously removed content is a fatal flaw, and I find it hard to believe they've been allowing it for so long without either being staggeringly incompetent as an organisation, or wilfully turning a blind eye in the name of profits.
There are various lists of hashes of Child Sexual Abuse Material which I'm sure they'd have access to, and they could license something like Microsoft PhotoDNA [1] (or implement something similar themselves; it's not like they're lacking the tech talent), which can detect image alterations that defeat simple checksum comparisons and can also operate on video content.
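PhotoDNA itself is proprietary, but the basic idea is easy to illustrate with open-source perceptual hashing: visually similar images hash to values that differ in only a few bits, so near-duplicates still match after re-encoding, resizing or small edits. A toy sketch in Python using the imagehash library; the hash list and distance threshold here are made-up placeholders, not a real feed:

    import imagehash
    from PIL import Image

    # Hypothetical hash list. In reality these would come from NCMEC/IWF
    # feeds as PhotoDNA hashes; hex pHashes stand in for illustration.
    KNOWN_BAD = {imagehash.hex_to_hash(h) for h in ("d1c4b2a398765432",)}

    MAX_DISTANCE = 8  # Hamming-distance threshold; lower = fewer false positives

    def matches_known_material(image_path):
        """True if the image is perceptually close to any known-bad hash."""
        candidate = imagehash.phash(Image.open(image_path))
        # Subtracting two ImageHash objects gives their Hamming distance,
        # so re-encoded, resized or lightly edited copies still match.
        return any(candidate - bad <= MAX_DISTANCE for bad in KNOWN_BAD)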
They don't need to play whack-a-mole; for the most part this should be a solved problem. Obviously if it's new content that hasn't been fingerprinted before you still need manual reporting and moderation, but they could and should be scanning against known CSAM on upload, quarantining it and shadow banning the user until it can be evaluated by a human and passed off to law enforcement.
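Concretely, that ingest flow looks something like the sketch below. All the names and the quarantine/shadow-ban mechanics are invented for illustration; the point is the shape: fingerprint on upload, keep matches out of public view, escalate to a human and then law enforcement.

    from dataclasses import dataclass

    @dataclass
    class Upload:
        user_id: str
        video_path: str

    def matches_known_csam(path):
        """Stand-in for a PhotoDNA-style fingerprint lookup."""
        return False  # stub: wire a real fingerprint match in here

    def quarantine(upload):
        print(f"quarantined {upload.video_path}")  # never publicly visible

    def shadow_ban(user_id):
        # Account looks normal to the uploader, but uploads go nowhere,
        # so they don't immediately register a fresh account.
        print(f"shadow-banned {user_id}")

    def open_review_case(upload):
        # Human review first, then referral to law enforcement.
        print(f"review case opened for {upload.user_id}")

    def handle_upload(upload):
        if matches_known_csam(upload.video_path):
            quarantine(upload)
            shadow_ban(upload.user_id)
            open_review_case(upload)
            return "held-for-review"
        return "published"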
It's hard to do this at scale given how much content they ingest every day, but they need to bite the bullet and invest some cash and engineering time - what they are doing now simply isn't good enough. Hopefully having the spotlight put on them will force them to do the right thing.
As opposed to the alternatives? Have a moderator sitting in a chair behind every single person using a computer? Outlaw storing/transmitting any user-generated data, and turn the internet into Cable TV 2.0?
Playing whack-a-mole with law breakers is a fact of life in a non-totalitarian society.
Simply disabling downloads is not enough; they would need either an effective DRM system or to kill off third-party downloaders such as youtube-dl. I'm sure the RIAA would love to be able to go after youtube-dl on child porn charges instead of just copyright charges.
They could attempt to implement a content-ID system to detect re-uploads. I suspect this could work better for child porn than for copyright, because you will not have adversarial parties trying to claim that you uploaded their child porn.
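Roughly, a content-ID matcher for video can be built the same way as for stills: sample frames, perceptually hash each one, and flag an upload when enough of its frames land near a known fingerprint. A crude sketch with OpenCV and the imagehash library; the sampling rate and thresholds are arbitrary placeholders:

    import cv2
    import imagehash
    from PIL import Image

    def video_fingerprint(path, every_n_frames=30):
        """Perceptual hashes of sampled frames, in order."""
        hashes = []
        cap = cv2.VideoCapture(path)
        i = 0
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            if i % every_n_frames == 0:
                # OpenCV decodes BGR; convert before handing to PIL.
                rgb = cv2.cvtColor(frame, cv2.COLOR_BGR2RGB)
                hashes.append(imagehash.phash(Image.fromarray(rgb)))
            i += 1
        cap.release()
        return hashes

    def likely_reupload(candidate, known, max_dist=10, min_ratio=0.6):
        """Flag if enough candidate frames sit near some known frame."""
        if not candidate:
            return False
        hits = sum(any(c - k <= max_dist for k in known) for c in candidate)
        return hits / len(candidate) >= min_ratio

A production system would index the fingerprints in a nearest-neighbour store rather than brute-force comparing every pair like this, but the matching logic is the same, and it tolerates re-encodes and renamed files in a way plain checksums can't.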
The difficulty in such an approach is that the mere possession of child porn is a crime, which makes developing and maintaining such a technology far more difficult and expensive (even with the law enforcement support that would be necessary to make such development legal).
The more insidious effect is that putting this requirement on Pornhub also puts requirements on every platform supporting user-generated material. In order to avoid getting caught up in the child-porn minefield it creates, platforms would need to censor all porn and porn-adjacent content.
This is just my own two cents, independent of the article, but I think for any platform that allows anyone to upload anything, a blacklist system is inevitably doomed to fail once the platform grows beyond a certain size. The only feasible option at that point is to whitelist, which is what Pornhub now seems to be doing going forward.
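The difference between the two models is easy to state in code. A toy sketch; the Upload fields and the verified-uploader set are invented:

    from dataclasses import dataclass

    @dataclass
    class Upload:
        user_id: str
        fingerprint: str

    def may_publish_blacklist(u, banned):
        # Blacklist model: publish unless the content is already known-bad.
        # Fails open: anything not yet fingerprinted sails straight through.
        return u.fingerprint not in banned

    def may_publish_whitelist(u, verified):
        # Whitelist model: publish only if the uploader is identity-verified.
        # Fails closed: unknown uploaders cannot publish at all.
        return u.user_id in verified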
The author briefly mentions the Internet Watch Foundation's objective stats and then dismisses them, saying they don't know why the number is so low compared to Facebook and Twitter. Maybe because it just is?
Why are Mastercard and Visa not "investigating" Facebook and other sites for allowing child abuse uploads in the millions?