You're training yourself with a very unreliable source of truth.
Intentionally, if I might add. Reddit users aren't particularly interested in providing feedback that will inevitably be used to make AI tools more convincing in the future, and nobody's really moderating those subs. That makes them the perfect target for poisoning via shitposting in the comments.
I don’t just look at the bot decision or accept every consensus blindly. I read the arguments.
If I watch a video and think it’s real and the comments point to the source, which has a description saying they use AI, how is that unreliable?
Alternatively, I watch a video and think it’s AI but a commenter points to a source like YT where the video was posted 5 years ago, or multiple similar videos/news articles about the weird subject of the video, how is that unreliable?
Personally, I don't think that behavior is very healthy, and the other parent comment suggested an easy "get out of jail free" way of not thinking about it anymore while also limiting anxiety: they're unreliable subreddits. I'd say take that advice and move on.
Some people, quite some time ago, also came to that conclusion. (And they did not even have AI to blame.)
Any day now… right?
If the next generation can weather the slop storm, they may have a chance to re-establish new forms of authentic communication, though probably on a completely different scale and in different forms to the Web and current social media platforms.
Now that photos and videos can be faked, we'll have to go back to the older system.
I am no big fan of AI, but misinformation is a tale as old as time.