Because laymen massively outnumber experts, the layman vote always overwhelms the informed one, so the reaction of people who don't know the subject is the only thing that matters. Truth only seems to matter because most subjects either can be somewhat intuited by non-experts, or sit in a niche you're not part of, so "layman plausibility" includes your reaction, too. But the true nature of the dialogue reveals itself as soon as people talk about something you're an expert on.
Answers like this aren't a bug in a truth machine; they're a plausibility machine working as designed.
To lend credence to this idea, I reflexively upvoted you despite not having read any experts on this voting phenomenon.
LLMs are to human mimics what AGI will be to human creators/innovators (and then some of course).
We are GIs, at least 98%+. LLM-like behavior may exist in our cognitive repertoire, but we certainly aren't limited to it. Can an LLM drive locomotion?
I never understood AGI as generating sui generis ideas as a requirement. I thought that AGIs could also be uncreative mimics.
Hmmm ... that doesn't seem to match what actually happens. After false beliefs held back humanity for its entire history, science came along and produced actual, working truth. And science is the opposite of what you say: the crowds don't matter, only the facts. Newton was not a crowd, and the crowds didn't produce anything remotely as true and valuable for all those years. The crowds persecuted Galileo (and many others).
"In matters of science, the authority of thousands is not worth the humble reasoning of one single person." - attributed to Galileo
As someone pointed out, I think here on HN, the intuition of the crowds sucks. If it was any good, we'd have had the right physics in 5,000 BCE not starting in the 17th century.
> the intuition of the crowds sucks. If it was any good, we'd have had the right physics in 5,000 BCE not starting in the 17th century.
Eh. People used to stay in their lane. Only these days can you get a city person voting on proper farming techniques.
Yes, as long as the truth is the most significant systematic influence on beliefs, any reasonable method of aggregating belief will converge on the truth with sufficient numbers.
Unfortunately, the required condition for convergence on the truth often does not hold, and there is no way to reliably tell when it holds other than determining the truth independently and checking whether belief converges on it.
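The convergence claim above is essentially the Condorcet jury theorem, and it's easy to sketch both sides of it in a few lines of simulation (function name and parameters are my own, purely for illustration): when each independent voter is even slightly more likely than not to be right, majorities of large crowds are almost always right; flip that slight edge into a shared bias and large crowds are almost always wrong.

```python
import random

def majority_vote(n_voters, p_correct, trials=2000, seed=0):
    """Fraction of trials in which a simple majority of independent
    voters, each correct with probability p_correct, picks the truth."""
    rng = random.Random(seed)
    wins = 0
    for _ in range(trials):
        correct = sum(rng.random() < p_correct for _ in range(n_voters))
        if correct > n_voters / 2:
            wins += 1
    return wins / trials

# Truth is the dominant systematic influence (p > 0.5):
# a modest individual edge becomes near-certainty in a large crowd.
print(majority_vote(11, 0.55))    # small crowd: somewhat better than chance
print(majority_vote(1001, 0.55))  # large crowd: close to 1.0

# A shared bias flips the edge (p < 0.5): more voters make it *worse*.
print(majority_vote(1001, 0.45))  # close to 0.0
```

The catch, as the comment says, is that nothing inside the vote tally tells you which regime you're in; the two cases look identical from the inside.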
Significant effects of cognitive and perceptual biases on beliefs about facts are not rare, especially where the fact is not something easily observable like "is it raining at this instant where you are standing", and these biases often align for similarly situated individuals.
Then you get crap where the experts, even when they agree, "dumb it down" for the crowds. This leads the masses who actually do pay attention to experts to think the wrong ideas are truth.
> After all, what has been more important to a human — evolutionarily: the truth or social access?
I don't think this is required for people to be very wrong. Caring about the truth can easily lead to assuming other people who speak authoritatively know what they're talking about, or to speaking authoritatively yourself when you think you're right.
The only way you might have it work is if random people were shown random posts from random topics and asked to vote on them, with the ranking based on that feedback. There are problems there as well, but probably far fewer than in the current system.
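A minimal sketch of that scheme (function and parameter names are hypothetical, not any real site's API): each post is shown to a uniform random sample of voters and ranked by its sample approval rate, so a post's exposure never depends on its current rank.

```python
import random

def rank_by_random_exposure(posts, voters, sample_size=50, seed=1):
    """Rank posts by approval rate among a uniform random sample of voters.

    posts:  list of post ids
    voters: dict mapping voter_id -> callable(post_id) -> bool (upvote?)
    """
    rng = random.Random(seed)
    voter_ids = list(voters)
    scores = {}
    for post in posts:
        # Exposure is a fresh random sample per post, independent of rank.
        sample = rng.sample(voter_ids, min(sample_size, len(voter_ids)))
        votes = [voters[v](post) for v in sample]
        scores[post] = sum(votes) / len(votes)
    return sorted(posts, key=scores.get, reverse=True)
```

The design point is that this breaks the feedback loop the next comment describes: under "sort by top", early votes determine exposure, which determines later votes; here exposure is fixed by the sampler, so the score is an unbiased estimate of population approval.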
Massively aggravated by "sort by top" defaults, both for original posts and, separately, for the comments on those posts.