I find this highly concerning, but I feel similarly.
Even "smart people" I work with seem to have gulped down the LLM cool aid because it's convenient and it's "cool".
Sometimes I honestly think: "just surrender to it all, believe everything the machine tells you unquestioningly, forget the fact checking, it feels good to be ignorant... it will be fine...".
I just can't do it though.
It's the same issue with Google Search, any web page, or, heck, any book. Fact checking gets you only so far. You need critical thinking. It's okay to "learn" wrong facts from time to time as long as you are willing to be critical and throw the ideas away if they turn out to be wrong. I think this Popperian view is much more useful than living with the idea that you can only accept information that is provably true. Life is too short to verify every fact. Most things outside programming are not even verifiable anyway. By the time Steve Jobs had "verified" that the iPhone was certainly a good idea to pursue, Apple might have gone bankrupt. Or in the old days, by the time you have verified that there is a tiger in the bush, it has already eaten you.
When I spend time on something that turns out to be incorrect, I would prefer it to be because of a choice I made instead of some random choice made by an LLM. Maybe the author is someone I'm interested in, maybe there's value in understanding other sides of the issue, etc. When I learn something erroneous from an LLM, all I know is that the LLM told me.
People should be able to "throw the ideas away if they turn out to be wrong", but the problem is that these ideas, consciously or not, help build your model of the world. Once you find out something isn't true, it's hard to unpick your mental model of the world.
Intuitively, I would think the same, but a book about education research that I read and my own experience taught me that new information is surprisingly easy to unlearn. It’s probably because new information sits at the edges of your neural networks and does not yet provide a foundation for other knowledge. It only becomes foundational if it stands the test of time (which is exactly how it should be according to Popper). If a counterexample is found, the information can easily be discarded, since it’s not foundational anyway, and the brain learns the counterexample too (the brain is very good at remembering surprising things).
The benefit is that I got a quick look at various solutions, quickly satisfied a curiosity, and decided whether I’m interested in the concept. Without AI, I might just leave the idea alone or spend too much time figuring it out. Or perhaps never quite figure out the terms of what I’m trying to discover, since it’s good at connecting dots when you have an idea with some missing pieces.
I wouldn’t use it for open-ended conversation the way others are describing; I need a way to verify its output at any time. I find that idea bizarre: just chatting with a hallucinating machine. Yet I still find it useful as a sort of “idea machine”.
I think even if an AGI were created, and humans survived the event, I'd still have trouble trusting it.
The quote "trust but verify" is everything to me.
I don't like being told lies in the first place and then having to unlearn them.
It doesn't help that I might as well have just gone straight to the "verification" instead.
It's smart but can also be very dumb.
I think the current state of AI trustworthiness (“very impressive and often accurate, but occasionally extremely wrong”) triggers mental pathways similar to those involved in interacting with a true sociopath or pathological liar for the first time in real life, which can be intensely disorienting and can cause you to question your trust in everyone else as you try to comprehend this kind of person.