If there was a solution to this problem from the search engine's point of view five years ago (which I don't concede, but let's roll with it), there isn't one now. ChatGPT can defeat basically every detection technique, especially layered on top of the existing efforts that already largely evade detection, and it will only get better. There is no signal in random, unattested web content that can separate what we want from content constructed to look like what we want but carrying embedded motivations or material we don't.
A web of trust may be inevitable, but that can be attacked too, especially past the first hop. It seems inevitable that, slowly but very surely, our trust is going to be pulled in much, much more tightly than it is now. I don't see much that can be done about that, even in theory. It was a historical accident that we could ever trust random websites not to be 100% focused on their own interests, simply because the tech to do that wasn't there yet. Now it is, and we are entering a world where we cannot trust any free resource, whether we like it or not.