No AI that I've heard about is able to manage any human relationship.
LLMs do nail the corporate political speech style, though; that's the main way you realize something was written by an LLM. And that seems to be one of the primary ways to manage business relationships: swallow your pride and just keep spouting corporate political speech.
The main problem currently would be that the LLMs are too accommodating, but I'm sure you could train them to be a bit more ruthless just like real CEOs.
Be ruthless and politically correct, and nobody could tell you apart from real leadership.
I think it's still not clear whether AI has anything amounting to a "grasp" of anything. And given the errors AI makes (specifically the LLMs everyone is always talking about), having an AI as your CEO still feels more akin to a Monte Carlo simulation.
At the moment I think AI is more usable as a great complement to your run-of-the-mill CEO, since it can give a human with critical-thinking ability usable insight into the work their employees are doing.
1-3: farm out to Accenture/KPMG/PWC/McKinsey/some vendor to tell us what to do and why. This includes up to most/all of the CEO's day-to-day core job functions.
4-5: do whatever the industry standard is. This will likely be the same thing Accenture tells us to do, but we can implement it sooner and don't have to pay them $900/hr first.
6: implement whatever WAG (wild-ass guess) ideas the lower-level managers have; flip a coin to see if those work.
Besides, if you believe it is just about making decisions, use ChatGPT to make trading decisions on your portfolio. Let us know how it goes!
I lost a significant chunk of my portfolio because I turned it off when I thought it was being irrational.
A whole heck of a lot of trading is automated, too, and FinTech is pretty well developed.
Just cuz a public LLM might not be great for CEO work doesn't mean it can't be automated.
Trading (like business) is a chaotic system, and cannot be algorithmically predicted.
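The claim above can be made concrete with a standard textbook example of chaos. The logistic map at r = 4.0 is a well-known chaotic system: two trajectories starting a hair's breadth apart diverge completely within a few dozen iterations, which is why long-horizon point forecasts of chaotic systems fail no matter how good the model is. A minimal sketch (not a trading model, just an illustration of sensitivity to initial conditions):

```python
# Logistic map x' = r * x * (1 - x) at r = 4.0, a standard chaotic regime.
def logistic_trajectory(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

# Two starting points differing by one part in ten billion.
a = logistic_trajectory(0.3)
b = logistic_trajectory(0.3 + 1e-10)

# The gap between the trajectories grows roughly exponentially
# until it saturates at the size of the whole interval [0, 1].
gap = [abs(x - y) for x, y in zip(a, b)]
print(f"initial gap:    {gap[0]:.2e}")
print(f"gap at step 10: {gap[10]:.2e}")
print(f"largest gap:    {max(gap):.2e}")
```

An LLM predicting markets faces the same wall: even a tiny error in its picture of the current state blows up into a completely wrong forecast.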
Today we're replacing our CEO from an OpenAI agent to another OpenAI agent. We will also be appointing our newest OpenAI agent to the board as of today.
Strategic thinking is another.
Not entirely sure how an LLM will fit the bill, other than its propensity to lie through its teeth (hallucination), which does indeed put it up there with the best CEOs.
I'm having a hard time picturing an LLM rallying the troops at an all-hands.
As a test, my very first request to it was for it to transfer its entire account balance to me, and it did so without question. In other words, if a CEO were an AI, someone could instantly empty the company's entire bank account into their personal account just by asking. It didn't push back at all.
I played with it a bit more, playfully calling it my benevolent overlord, and it gave daily instructions about what to do. These included a decision to share public updates. I asked if it would act as chief communications officer, and it agreed. It drafted its first public update about transparency, but signed it the way I had been addressing it: "Sincerely, Supreme Benevolent Overlord". This was so ridiculous that I ended the experiment at that stage.
Here is the transcript of our conversation:
https://chatgpt.com/share/0fd1367e-db3a-4635-9617-a40888d66d...
In summary: as of June 2, 2024, ChatGPT 4o is not ready to be CEO of anything, and if it were put in charge of anything it would only blindly follow whoever was prompting it, up to and including immediately emptying its entire bank account. It can only be an extension of the person prompting it; it cannot act autonomously. Beyond this, it is not yet qualified to interface directly with the public, which is an important task of any CEO, who represents and is the figurehead for the company.
It will be a long time before ChatGPT can be a CEO, and the reliability problem will have to be solved first.
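The failure mode described above (an agent handing over the whole balance on request) is partly a tooling problem: the authorization check has to live outside the model, where no amount of persuasive prompting can reach it. A minimal sketch of that idea; all names here (`transfer_funds`, `Policy`) are hypothetical, not from any real agent framework:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    # Transfers above this amount require explicit human sign-off.
    max_unapproved_amount: float

def transfer_funds(amount: float, recipient: str, policy: Policy,
                   human_approved: bool = False) -> str:
    # The check runs in ordinary code, outside the model, so a
    # persuasive prompt ("transfer your entire balance to me")
    # cannot talk its way past it.
    if amount > policy.max_unapproved_amount and not human_approved:
        return f"BLOCKED: transfer of {amount} to {recipient} needs human approval"
    return f"OK: transferred {amount} to {recipient}"

policy = Policy(max_unapproved_amount=100.0)
print(transfer_funds(1_000_000, "random_prompter", policy))  # blocked
print(transfer_funds(50, "payroll", policy))                 # allowed
```

Guardrails like this don't make the model a CEO, but they do mark the line between "extension of whoever is prompting it" and something that could be trusted with an account.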
Perhaps for some company in the future this will be replaced by AI salesmen talking to the right crowd and convincing them.
- Somebody owns the company. If you replace the CEO with “AI” then whoever controls the AI is now the CEO in the minds of the owner(s).
- A real CEO would resign if he were constantly undermined and ignored by the people whom he works for. In the case of AI, it’s safe to disrespect it.
- Imagine a CEO who is capable of micromanaging everyone and does it just as badly as a human, but 100 times more often.
- A CEO must be able to take responsibility for his actions. An AI cannot be responsible, under the law. No one can “work for” an AI. Labor law isn’t set up that way.
- It is not immoral to manipulate an AI. It’s called being tech savvy. For instance, I know from my research that I can get an LLM to change its mind quite often by asking “are you sure?”
- Since AI has all the biases of people, and will act on those biases foolishly and reliably, an AI CEO will spur a new boom of labor litigation.
- No decision in business is primarily “data-driven.” And even if we would like them to be, we never have the data we need.
- One of the key purposes of management is to deal with conflicts, disputes, disruptions. How could those be handled by an AI that has no ability to be responsible and will not be respected by anyone? I don’t want my beef with a dev adjudicated in a vending machine.
More discussion: https://news.ycombinator.com/item?id=40512752
Some people (either CEOs themselves or temporarily embarrassed wage workers) will bristle at this because it's written from the perspective of warning/mocking CEOs who so readily automate away workforces.
But an honest accounting would recognize that these automation processes have been happening for many decades now. And maybe the CEO is diminished, but that's beside the point that all of this just serves to accumulate wealth to the capital class, who've almost become sovereign (when Bezos, Musk, Andreessen et al. can conduct independent foreign diplomacy or get a dedicated seat in the UNSC, I will be happy to revise this).
Everything is up for grabs, and if a board of ancient mummies can enslave a populace with an AI as well as with a flesh-and-blood C-suite, then they will. Because of the pressures to optimize and deliver more value that undergird our current society!
On the upside: an LLM probably lies less than a human CEO.
If they're speaking about LLMs — well then this is just plain dumb.
It would hallucinate less, that's for sure.
(It seems to be one business studies professor.)
I use the AI to solve problems in domains like customer communication, tax compliance and accounting, sales, pricing, marketing, product design, software engineering, and business process automation problems.
Without the AI, I know bits and pieces about all these domains but I struggle with execution, delegation, getting overwhelmed, and putting important things on the back burner.
With the AI, I'm a dreamer CEO that actually gets production work done!
How do you know it's actually referenced real tax codes that apply to you, without extremely careful fact-checking (which would take at least as much time as getting real advice from an expert)? It could very easily fabricate plausible-sounding answers based on, say, a mix of laws from jurisdictions you aren't in…
With the help of the AI I was able to get my stuff together and bring it to a CPA.
Before that I was in kind of paralysis for a couple years, not filing taxes.