All this bluster about replacing technical jobs like legal counsel ignores that you are fundamentally paying for accountability.
“The AI told me it was ok” only works if, when it’s not, there is recourse.
We can barely hold Google et al. accountable for horrible user policies…why would anyone think OpenAI will accept any responsibility for recommendations made by a GPT?
They won't, but that doesn't mean some other business won't automate legal counsel and assume the risk. If, down the line, GPT (or some other model) is empirically proven more accurate than legal assistants and lawyers, why wouldn't this be the obvious outcome?