All of these make mistakes (there are documented incidents).
And yes, we can counter with "the journalists are dumb for not verifying", "the lawyers are dumb for not checking", etc., but we should also be open to the possibility that these are intelligent, professional people who make mistakes because they were misled by those who sell LLMs.
In the past, someone might have been physically healthy and strong enough to shovel dirt all day long. Nowadays this is rarer because we use an excavator instead. Yes, a professional dirt mover is more productive with an excavator than a shovel, but is likely not as physically fit as someone who spends their days moving dirt with a shovel.
I think it will be similar with AI. It is absolutely going to offload a lot of people's thinking onto the LLMs, and their "do it by hand" muscles will atrophy. For knowledge workers, that muscle is our brain.
I know this was a similar concern with search engines and Stack Overflow, so I am trying to temper my concern here as best I can. But I can't shake the feeling that LLMs provide a way for people to offload their thinking and go on autopilot far more easily than search ever did.
I'm not saying that we were better off when we had to move dirt by hand either. I'm just saying there was a physical tradeoff when people moved out of the fields and into offices. I suspect there will be a cognitive tradeoff now that we are moving away from researching solutions to problems and towards asking the AI to give us solutions to problems.
As if that is somehow an exonerating sentence.
LLMs are a tool, just like any number of tools that are used by developers in modern software development. If a dev doesn’t use the tool properly, don’t trust them. If they do, trust them. The way to assess if they use it properly is in the code they produce.
Your premise is just fundamentally flawed. Before LLMs, the proof of a quality dev was in the pudding. After LLMs, the proof of a quality dev remains in the pudding.
Indeed it does; however, what counts as "proof" has changed. If you mean sitting down and doing a full, deep review, tracing every path and validating every line, then sure, nothing has changed.
However, at least in my experience, pre-LLM those reviews did not happen in EVERY case. There were many times I elided parts of a deep review because I saw markers in the code that, to me, showed competency, care, etc. With those markers present, certain failure conditions could be deemed very unlikely to exist, and the corresponding checks could be skipped. Is that ALWAYS the correct assumption? Absolutely not, but the more experienced you are, the fewer false positives you get.
LLMs make those markers MUCH harder to spot, so you have to fall back to doing a FULL in-depth review no matter what. You have to eat ALL the pudding, so to speak.
For people who relied on tasting a bit of the pudding and assuming, based on that taste, that the rest probably tastes the same, it's rather jarring and exhausting to now have to eat all of it, all the time.