So much of what a CEO does is fostering culture, hiring people, and setting a unique vision for the company.
Imagine thinking people would be inspired to work for a chatbot. Hilariously ridiculous.
I dunno, I would probably prefer to work under that chatbot than under my current CEO, who only tries to squeeze as much as possible out of the people already working for him.
No one who has been a CEO, or frankly even worked closely with one, would think this could be even remotely close to possible. Or desirable if it was.
But that is probably 1% or less of the population eh?
Seems your claim's been disproven already.
OK, I’ll bite. What’s your evidence for this argument?
They’re plausible word sequence generators, not ‘planning for the future’ agents. Or market analyzers. Or character evaluators. Or anything else.
And they tend to be really ‘gullible’.
What evidence do you have they could do any of those things? (And not just generate plausible text at a prompt, but actually do those things)
Every bit of interaction I’ve ever had with an LLM.
But there's a scarier further step: when people assume an exceptional text-specialist model can also meta-impersonate a generalist model impersonating a specific, different kind of specialist ("LLM, create a legal defense.").