Yeah, half of this seems like problems we've already solved for APIs generally; the other half is LLM-specific, like prompt management, logging/monitoring, response quality in prod, and real-time feedback
It seems like our current ops tooling ought to work just fine for the parts that aren't LLM-specific. At the same time, Datadog is pretty popular as a managed experience, and LLM services acting as proxies kind of fit that model
This project looks great and aligns with my thinking