This reminds me of the school principal who sent $100k to a scammer claiming to be Elon Musk. The kicker is that she was repeatedly told that it was a scam.
https://abc7chicago.com/fake-elon-musk-jan-mcgee-principal-b...
Which makes LLMs far more dangerous than idiot humans in most cases.
And… I am really not sure punishment is the answer to fallibility, outside of almost kinky Catholicism.
The reality is these things are very good, but imperfect, much like people.
Certain gullible people, who tend to listen to certain charlatans.
Rational, intelligent people wouldn't consider replacing a skilled human worker with an LLM that on a good day can compete with a 3-year-old.
You may see the current age as a litmus test for critical thinking.
Humans are also very confidently wrong a considerable portion of the time, particularly about anything outside their direct expertise.
LLMs fail in entirely novel ways you can't even fathom upfront.
Trust me, so do humans. Source: have worked with humans.
I'd say those are the goals we should be working toward. That's the kind of failure we want to look at. We are humans.