Should we really buy the "many months of switching difficulty" argument?
Surely the main API surface is an HTTP API like ChatCompletions? Even if their tooling targets the exact shape of Anthropic's API, the difference is surely minor; there are at most two API surfaces to support. If the OpenAI model APIs are more flexible (especially with the new 1M context of GPT-5.4), adapting should pose little difficulty. Then there are LiteLLM and similar libraries that make it even easier; half of their tooling should be going through an abstraction layer like that anyway. Yes, it takes evals and prompt-engineering work to optimise, but they should be used to that by now. Presumably they could even clean-room fine-tune an OpenAI model to match the same Claude shape with low loss. So I don't buy it.
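To make the "two API surfaces" point concrete, here is a minimal sketch of an adapter that translates an OpenAI ChatCompletions-style request body into an Anthropic Messages-style one. It covers only the best-known structural differences (system prompt as a top-level field, `max_tokens` being required); real clients would also map streaming flags, tool definitions, stop sequences, and model names.

```python
def openai_to_anthropic(request: dict) -> dict:
    """Sketch: translate an OpenAI ChatCompletions-shaped request dict
    into an Anthropic Messages-shaped one. Illustrative only; it does
    not handle tools, streaming, or multimodal content blocks."""
    # Anthropic's Messages API takes the system prompt as a top-level
    # "system" field rather than a message with role "system".
    system_parts = [m["content"] for m in request["messages"]
                    if m["role"] == "system"]
    chat_turns = [m for m in request["messages"] if m["role"] != "system"]

    out = {
        "model": request["model"],   # model-name remapping left to the caller
        "messages": chat_turns,      # user/assistant turns keep the same shape
        # max_tokens is optional in ChatCompletions but required by the
        # Messages API, so supply a fallback (1024 is an arbitrary choice).
        "max_tokens": request.get("max_tokens", 1024),
    }
    if system_parts:
        out["system"] = "\n".join(system_parts)
    return out
```

This is roughly the kind of translation layer LiteLLM and similar shims maintain for you, which is why the syntactic gap between the two surfaces is small.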
It’s not the syntax of the API that’s the issue; it’s the behaviour and performance of the model. You can create code, images, and video with just about any model, but there are reasons people prefer Claude Code or Sora for particular tasks.