And, as an obligate customer of many large companies, you should be in favor of that as well. Most companies already automate, poorly, a great deal of customer service work; let us hope they do not force us to interact with these deeply useless things as well.
If the primary complaint is that the blues GPT-4 wrote aren't that great, I think it is definitely worth the hype, given that a year before, people were arguing that AI could never pass the Turing test.
And this is my biggest issue with the AI mania right now -- the models don't actually understand the difference between correct and incorrect. They don't actually have a conceptual model of the world in which we live, just a model of word patterns. They're autocomplete on steroids that will happily spit out endless amounts of garbage. Once we let these monsters loose with full trust in their output, we're going to start seeing some really catastrophic results. Imagine your insurance company replacing their claims adjusters with this, or chain stores putting them in charge of hiring and firing. We're driving a speeding train right toward a cliff, and so many of us are chanting "go faster!"
No they won't.
>they can go find someone else with more expertise when they are lacking.
They can but they often don't.
>the models don't actually understand the difference between correct or incorrect.
They certainly do.
I see similar comments everywhere AI is praised, and I don't get why people feel the need to make them. Literally no one has claimed LLMs surpass experts in their field, so you aren't arguing against anyone.
To use the canonical example of the "internet service support call," most issues arise because the rep either can't do what you're asking (e.g. process a disconnect without asking for a reason) or has no visibility into the thing you're asking about (e.g. technician rolls).
I honestly think we'd be in a better place if companies freed up funding (from contact center worker salary) to work on those problems (enhancing empowerment and systems integration).
That's impossible; LLMs are not that good. What they might be doing is firing people and crashing service quality.
https://www.cnn.com/2023/08/30/tech/gannett-ai-experiment-pa...
If the AI is a lot cheaper than a human, then it can make business sense to replace the human even if the AI is not nearly as good.
If it takes a whole business day to "spin up" a human for a task, and takes literally 5 seconds to call an OpenAI API, then guess what? The API wins.
We are updating our expectations very fast. We are fighting over a growing pie. Maybe the cost reduction from not having to pay human wages is much smaller than the productivity increase created by human-assisted AI. Maybe it's not an issue to pay the humans. For now, AI works better with human help; in fact it only works with humans, and is never capable of serious autonomy.