I should have built stronger separation boundaries with more general abstractions. It works fine, and I haven't hit any critical bugs or mistakes, but it gets really nasty once you get down to the actual JSON you send.
Google's was 100% designed by a committee of people who had never seen anyone else's API, and if they had, they would have dismissed it via NIH. (disclaimer: ex-Googler, no direct knowledge)
Google made their API before the others had one, since they were the first to build these kinds of language models. It's just that it was an internal API before.
That'd be a good explanation, but it's theoretical.
In practice:
A) there was no meaningful internal LLM API pre-ChatGPT. All this AI stuff was under lock and key until Nov 2022, then it was an emergency.
B) the bits we're discussing are OpenAI-specific concepts that could only have appeared after OpenAI's API did.
The API includes chat messages organized by role (an OpenAI concept) and "tools" (another OpenAI concept), both of which came well after the GPT API.
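To make the two concepts concrete, here's a rough sketch of what an OpenAI-style chat request body looks like. The field names follow OpenAI's chat completions shape; the model name and the `get_weather` tool are made up for illustration, and other providers (Gemini included) use similar but differently named structures:

```python
import json

# Role-tagged chat messages plus a "tools" list (function definitions the
# model may choose to call) -- the two OpenAI-originated ideas in question.
request_body = {
    "model": "gpt-4o",  # hypothetical model name, for illustration only
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Paris?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",  # hypothetical tool
                "description": "Look up current weather for a city.",
                "parameters": {
                    "type": "object",
                    "properties": {"city": {"type": "string"}},
                    "required": ["city"],
                },
            },
        }
    ],
}

print(json.dumps(request_body, indent=2))
```

Neither the role structure nor the tools/function-calling schema existed in the original completions-style APIs; they arrived with ChatGPT-era endpoints.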
Initial API announcement here: https://developers.googleblog.com/en/palm-api-makersuite-an-...
We built something like this for ourselves here -> https://www.npmjs.com/package/@kluai/gateway?activeTab=readm....
Documentation is a bit sparse, but TL;DR: deploy it in a Cloudflare Worker and you can access about 15 providers (the ones that matter: OpenAI, Cohere, Azure, Bedrock, Gemini, etc.), all with the same API, without any issues.