RAG/LLMs are a clear improvement over the baseline, though. People will judge LLMs unfairly even when they deliver more accuracy and better results, even when they save lives, simply because they can't meet the impossible demands of neo-luddites. People want AI to be "an evil force," and I blame OpenAI and the news media for that narrative.
This take reminds me of some of the (weaker) arguments against blockchain when it was popular. For some people, because there wasn't a 100% chance that a blockchain could prevent every conceivable exploit and hack, it was therefore useless hype; they ignored the decentralization utility, threw out the peer-to-peer ledger concept, threw out the consensus protocols, and so on. How could something like git have been invented in such a politicized, anti-tech environment? Git would have been shut down by the masses; otherwise-smart people would have labeled it a scary evil force. Thankfully, peer-to-peer was very cool back then, so git is useful tech that we get to use.
I'm seeing the same thing with LLMs. All people are focused on is "prove to me AI isn't evil." They can see a valuable use case in a demo, but it doesn't matter; like with blockchain, I think some people are beyond convincing. They just aren't into technology anymore.