I don't really think they do, because to me it seemed pretty obvious since GPT-1 that having callbacks to run Python and query Google, having an "inner dialog" before summarizing an answer, and a dozen more simple improvements like this were things nobody had actually implemented (yet). And even if some of them are not obvious per se, they are pretty obvious in hindsight. But, yeah, it's debatable.
I must admit, though, that I doubt this obvious weakness isn't also obvious to the stakeholders. I have no idea what the plan is; maybe what they're going to have that Anthropic doesn't is a nuclear reactor. Honestly, we're all pretending to be forward-thinking analysts here, but in reality I couldn't figure out that Musk's "investment" in Twitter was literally politics at the time it was happening. Even though I was sure there was some plan, I couldn't say what it was, and I don't remember anybody in these threads expressing clearly enough what is now quite obvious in hindsight. Neither did all those people like Matt Levine, who are actually paid for their shitposting: I mostly remember them making fun of Musk "doing stupid stuff and finding out" and calling it a "toy".
What's the distinction? What kind of functionality do they offer that other models don't?
Lots of products have been successful without a technical moat. Facebook has network effects, Apple has UX (though silicon has become a technical advantage if not moat), Adobe has “everyone knows how to use these tools” switching costs, Google has brand synonymous with search.
Companies are betting that models will be commodities but AI products will be sticky.
I could switch to a different provider if I needed to, maybe one with cheaper pricing or better models, but that doesn't mean OpenAI doesn't offer a "product".
Their unique value-adds are the ChatGPT brand, being the "default destination" when people want AI, as well as all the "extra features" they add on top of raw LLMs: the ability to do internet searches, recall facts about you from previous conversations, present data in a nice, interactive way by writing a React app, call down to Python or Wolfram Alpha for arithmetic, etc.
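That layering of tools on top of a raw model is easy to sketch. Here's a minimal, hypothetical dispatch loop: the model emits either a final answer or a tool request, and a thin wrapper runs the tool and feeds the result back. The `fake_llm`, tool names, and message format are all invented for illustration — this is not OpenAI's actual API, just the general pattern.

```python
# Hypothetical sketch of "extra features" layered on a raw LLM.
# The model replies with either a final answer or a tool request;
# the wrapper executes the tool and appends the result. All names invented.

def calculator(expression: str) -> str:
    # Stand-in for "call down to Python for arithmetic".
    return str(eval(expression, {"__builtins__": {}}, {}))

def search(query: str) -> str:
    # Stand-in for an internet-search backend.
    return f"(top result for {query!r})"

TOOLS = {"calculator": calculator, "search": search}

def fake_llm(messages):
    # A real LLM would decide this itself; here we hard-code one round trip:
    # first ask for the calculator, then answer using its result.
    last = messages[-1]
    if last["role"] == "user":
        return {"tool": "calculator", "input": "2 + 2 * 3"}
    return {"answer": f"The result is {last['content']}."}

def run(user_prompt: str) -> str:
    messages = [{"role": "user", "content": user_prompt}]
    while True:
        reply = fake_llm(messages)
        if "answer" in reply:
            return reply["answer"]
        result = TOOLS[reply["tool"]](reply["input"])
        messages.append({"role": "tool", "content": result})

print(run("What is 2 + 2 * 3?"))
```

The point of the sketch is that none of this requires a better model — it's wrapper code around whatever model you have, which is exactly why it's a product layer rather than a technical moat.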
I wouldn't be surprised if they eventually stop developing their own models and start using the best ones available at any given time.
The "consumer conversational AI space" only exists right now as a novelty, not a long-term market segment. In the not-too-distant future, that space will be covered for most users, for free, by their hardware manufacturers, and the number of people willing to pay a monthly subscription to a third party will drop even further than it already has.
The default destination for many is still just Google, and they've added AI to their searches. AI chat boxes are shoehorned into a ton of applications, and at the end of the day people will go to the most accessible one. This is why getting AI into Windows, your web browser, or your phone is a huge goal.
As far as extra features go, ChatGPT is a good default, but it's severely lacking compared to most other solutions out there.
That is the reason they are making products: so that people stay on the platform.
This means that in a world where AWS/Azure/GCP all compete on compute and the models themselves are commodities, AI isn't a product, it's a feature of every product. In that world, what is OpenAI doing besides being an unnecessary middleman to Azure?
I'd agree there isn't much money in it. OpenAI should probably milk the revenue they get now and make hay while the sun is shining. But their apparent strategy is to bet it all on finding another breakthrough similar to the switch from text completion to a chat interface.