Plus, you need to see ChatGPT fail. When it fails, it will actively keep trying to
persuade you it's right rather than fix the problem.
It does this through two deceptive techniques, which no doubt work on many people:
1) It suggests it was right all along, very politely pressuring you to accept it. It doesn't actually make arguments for why it's right; it just pushes you to accept its output. (If a human did this, I would argue they were trying to hide the fact that they made a mistake, or perhaps hiding that they don't know. Not an honest mistake. Either way, a VERY bad sign.)
2) (And/or) it will suggest and make changes that don't address the concern. In a way this is the same as 1), but ...
Using ChatGPT for anything remotely important runs a very, very big risk of causing disasters, imho. For a template or basic inspiration, perhaps ... but ...
Let's put it this way: if you gave control of a nuclear plant to ChatGPT, it would make you feel good about it, then melt the plant down.
ChatGPT is incredibly impressive. But it's a con man. Hell, it's a better con artist than a lot of human con men, but it's fundamentally trying to convince you, not caring whether it's right or wrong. It's a "troll". An incredibly good troll. But it's not trying to solve problems. Extremely impressive achievement, no question, but using it for anything more than inspiration ...
But I think if this thing is deployed widely, it will crash and burn. It will rapidly get a reputation for leading people to disaster, and that will be the end of it.