As long as it’s vulnerable to hallucinating, it can’t be used for anything where there are “wrong answers” - and I don’t think ChatGPT-4 has fixed that issue yet.*
Now if it’s one of those tasks where there are “no wrong answers”, I can see it being somewhat useful. A non-ChatGPT AI example would be those art AIs - art doesn’t have to make sense.
The pessimist in me sees things like ChatGPT as the ideal internet troll - it can be trained to post stuff that maximises karma gain while pushing a narrative, which it will hallucinate its way into justifying.
* When they do fix it, everyone is out of a job. Humans will only be used for cheap labor - because we are cheaper than machines.
Jobs where higher error rates are acceptable, or where errors are easier to detect, will succumb to automation first. Art and poetry fit both of these criteria.
The claim is that as the model and training data sizes increase, these errors will get more and more rare.
We will see...
I am very optimistic about the far future. However, there will be a transition period where some jobs have been automated away but not others. There will be massive inequality between the remaining knowledge workers and manual laborers. If I was in a role on the early automation side of the spectrum then I would be retraining ASAP.
You know sometimes you have a “bright idea” then after thinking about it for a second you realise it’s nonsense. With AI like ChatGPT, the “thinking about it for a second” part never happens.
Step 1 will be to use ChatGPT to extract all of the loan inputs from documents, step 2 could be to identify any information that's missing but needed to make the decision, and step 3 will be making the decision itself. At each step we'll have checks/balances and human feedback. But don't kid yourself - this is coming, and the benefits for those that make the shift first are huge.
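A minimal sketch of that three-step pipeline, assuming the document format and decision thresholds are hypothetical. The LLM call for step 1 is stubbed with a simple key/value parser so the control flow is runnable; in a real system ChatGPT would do the extraction, and a human would review every decision.

```python
# Fields the decision needs - an assumption for illustration.
REQUIRED_FIELDS = {"income", "credit_score", "loan_amount"}

def extract_fields(document: str) -> dict:
    # Step 1: stand-in for an LLM extracting loan inputs from documents.
    # Here we just parse "key: value" lines to keep the sketch runnable.
    fields = {}
    for line in document.splitlines():
        if ":" in line:
            key, value = line.split(":", 1)
            fields[key.strip()] = value.strip()
    return fields

def missing_fields(fields: dict) -> set:
    # Step 2: flag anything the decision needs that the documents lack,
    # so a human can chase it down before deciding.
    return REQUIRED_FIELDS - fields.keys()

def decide(fields: dict) -> str:
    # Step 3: placeholder policy (thresholds are made up); a real
    # lender's rules plus human review would replace this.
    if (int(fields["credit_score"]) >= 650
            and int(fields["loan_amount"]) <= 5 * int(fields["income"])):
        return "approve for human review"
    return "refer to underwriter"

doc = "income: 60000\ncredit_score: 700\nloan_amount: 250000"
fields = extract_fields(doc)
gaps = missing_fields(fields)
if gaps:
    print("missing:", gaps)   # human feedback loop before step 3
else:
    print(decide(fields))
```

The point of the structure is that each step produces output a human can audit before the next step runs, which is where the checks and balances live.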
Because programming tests are hard.
Even well-trained programmers would fail a LeetCode hard problem 90% of the time zero-shot. Preparation matters.