Of course not. Most developers don't understand how LLMs work, even roughly.
> Humans do the same thing all the time. Of course it's not always correct!
The difference is that LLMs cannot acknowledge incompetence, are always confidently incorrect, and never reach a stopping point; at best they start going in circles.