To get it out of the way: I do not agree that it should have taken a journalist getting involved to resolve this situation.
However, I'd like to ask Hacker News: how would you handle support requests for a product with >2.7B users? Almost all of them generate no direct revenue, and they span hundreds of different languages and every conceivable location in the world.
It's an extremely hard problem, and I don't think anyone has got it right yet. I'll be playing devil's advocate in the comments. Keep me busy for my flights.
It doesn't matter how many users you have. This "solution" seems like swatting a fly with a nuclear weapon. Why not just take down the offending video until the user takes corrective action? YouTube can clearly identify the offending video out of the non-offending ones, so that's not a technical problem. And it can be done entirely with automation, so it wouldn't need humans. Further, they obviously can tell that the user does not have a history or track record of this kind of activity. Why do these tech companies always go straight to the "no recourse ban hammer"?
I believe this is how Twitter handles / handled rule-breaking content: you got an infraction / suspension until you acknowledged and deleted the offending tweet.
Of course, videos are a lot harder, because video content takes more effort to analyse than relatively short, mostly plain-text messages, especially when done automatically.
A good automated approach would integrate something like what they already have for music copyright, where you can automatically trim out the segments around the conflicting content.
It should have been pretty straightforward to establish a pattern (or lack of) around whether this was an intentionally abusive channel or a first-time offence.
But of course this costs money and takes time to build.
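The kind of first-offence check described above could be sketched as a simple heuristic. This is a minimal illustration with entirely hypothetical fields and thresholds, not anything YouTube actually does:

```python
# Hypothetical sketch: distinguish a likely first-time offence from an
# intentionally abusive channel. All fields and thresholds are invented
# for illustration.
from dataclasses import dataclass

@dataclass
class Channel:
    age_days: int        # how long the channel has existed
    prior_strikes: int   # past policy violations on record
    total_uploads: int
    flagged_uploads: int

def looks_like_first_offence(ch: Channel) -> bool:
    """An established channel with no strike history and a tiny fraction
    of flagged content is a candidate for a targeted takedown rather
    than a full ban."""
    flagged_ratio = ch.flagged_uploads / max(ch.total_uploads, 1)
    return ch.age_days > 365 and ch.prior_strikes == 0 and flagged_ratio < 0.01

# A long-running channel with one flagged video out of 500 uploads:
ch = Channel(age_days=3000, prior_strikes=0, total_uploads=500, flagged_uploads=1)
print(looks_like_first_offence(ch))  # True for this example
```

The point is only that the signals involved (account age, strike history, flagged ratio) are cheap to compute, so routing such cases away from an automatic ban is a product decision, not a technical barrier.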
2.7B users is a lot, but how many of those are established content creators (like say - more than 10k subscribers) that are banned on a daily basis? How many people would it take to review those cases?
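A rough back-of-envelope calculation can frame that question. Every number below is an assumption made up for illustration, not a real YouTube figure:

```python
# Back-of-envelope estimate of the human-review headcount needed to
# examine every ban of an established creator. All inputs are
# illustrative assumptions, not real YouTube figures.

bans_per_day = 5_000       # assumed: daily bans of channels with >10k subs
minutes_per_review = 15    # assumed: time for one human review
hours_per_shift = 8
utilization = 0.75         # breaks, training, tooling overhead

reviews_per_rep_per_day = hours_per_shift * 60 * utilization / minutes_per_review
reps_needed = bans_per_day / reviews_per_rep_per_day

print(f"{reviews_per_rep_per_day:.0f} reviews per rep per day")  # 24
print(f"~{reps_needed:.0f} reps per shift")                      # ~208
```

Even if the real ban volume were 10x this guess, the headcount stays in the low thousands, which is the scale the question is probing.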
One way: dollars.
Another way: don't provide services you can't support.
The only way it's actually going to happen: regulation. And when it does, we'll shockingly find it was always possible and companies just didn't feel like doing it. (Which will just result in a mix of the first two options.)
Also playing devil's advocate here: Define "support".
I'm sure Google will tell you that they support their billions of users just fine, relatively speaking, and that the percentage of people who fall through the cracks is an acceptable margin (to them, obviously not to the users themselves).
To your point about "not providing the service": do you believe the trade-off would be worth it if Google, for example, stopped offering free-tier YouTube uploads in exchange for better support for paying users?
Would the incredibly massive reduction in uploaded content be worth it?
Or do we have to live with these kinds of gaps in order to get the rest?
This would cut into margins, but maybe it is simply not possible to run hyper-scale companies managed by only a couple of engineers.
And maybe we should not accept that profit-seeking people want to do that anyway.
It wouldn't cut into margins, it would make YouTube wildly unprofitable, with no viable path to monetization that would ever pay for the support burden.
I realize that some on HN—it sounds like you included—are perfectly happy to argue that if a company can't provide human customer support to every one of their users then that company shouldn't exist, but most of YouTube's users would fervently disagree.
For that scale, you're looking at an army of tens of thousands of customer reps - on top of however many they already have. I don't know how Google does it, but FB has a number of subsidiaries or contracted companies across the world that spend their days doing content moderation.
The article mentions that the channel was taken down because a hacker in the live Zoom meeting (being streamcast into YouTube) played porn. YouTube could have simply blocked that single YT video while retaining the rest of the channel.
If multiple instances of users hacking Zoom meetings came to light, Google could simply block Zoom from streamcasting videos into YouTube until they fixed their shit.
But I do agree with your 2nd point about misaligned incentives. I don't think "how do we ensure that every user can get fair support" was ever on any product roadmap for these free global-scale products.
Or more accurately, the "users" in this case are the advertisers, not the uploaders.
The risk is of bad incentives when providing support becomes profitable...
Better yet, pay them. It is, after all, work.
Reddit, a multi-billion-dollar company, has perfected the art of exploiting unpaid volunteer work.
So much so that when said workers rebel against the administration, they get booted from their positions. Moderators are easily replaced, as there is always someone willing to toe the administration's line.
Do you have any other impossible conundrums I can clear up before coffee?
Hence abuse is a local thing too: one can get flagged in one region but not in another. "Abuse" amounts to certain flags being auto-applied in some locations. It should not affect the account itself, though.