Could you tell us a bit more about why this objection surprised you so much? I often see it in the same tech circles that rejected the web3 craze, and I have to confess it sounds reasonable to me. There is a recent PR in the MDN docs repository, opened after MDN added the "AI explain" button, pointing out how utterly useless an AI guide is if it gives you incorrect answers and you do not have enough knowledge to catch them [0].
Although presumably very smart folks are working on it.
A few days ago some poster tried to compare dismissing LLMs to being a naysayer in Galileo's time. Is this all we can do? Make references to past events and fail to evaluate the present properly?
My read here is that the young person in question appeared to focus only on current utility. You have not yet explained why you consider that a failed analysis, or why that response should seem so surprising.
Nobody can explain how you can make an LLM understand what it's saying.
I was shocked because I felt that being occasionally wrong would be a footnote, but to the person I was talking to, its entire usefulness hinged on it being fully reliable.
For the capabilities we're getting, surely the occasional wrong answer is barely a sacrifice!
People outside of tech don't necessarily think this way. So hearing that "it produces incorrect answers" is kind of a deal-breaker, no?
Who is right in this case? I actually think the LLM approach has limits we could hit a wall on, that this technique at least may never get past the "I'm just making shit up" problem, and that the skeptics are quite right.
LLMs are exciting as an automatic language content generation tool: they chain words together in ways that sound human and extract reasoning patterns that look like human reasoning. But they're not reasoning; it doesn't take much to trip them up in basic argumentation. Because they look like they're "thinking", though, some people with a tech-optimist bias get excited and just assume the problems will resolve themselves. They could be very wrong; there could in fact be very strong limits to this approach.
... More worrisome is if LLMs become omnipresent despite having this flaw, and we just accept bullshit from computers the way we seemingly now accept complete bullshit from politicians and businessmen...
I think it's a deal-breaker for many people inside tech too :-)
It looks like it may be useful for grunt work in a field where you are an expert and can check and correct anything it produces.
Where people expect it to be useful, though, is in providing information they do not already know; and the fact that you cannot trust anything it says makes it unusable for that case.
I think we can all agree that LLMs, as they stand today, are useless for anything apart from "creative tasks" where accuracy and facts simply don't matter. They're great at things like writing fan fiction, but for anything meaningful they produce utter tripe. No one can deny this.