But for remote HTTP MCP servers there should be a dead simple solution. A couple of years ago OpenAI launched plugins via `.well-known/ai-plugin.json`: a manifest containing a link to your API spec. ChatGPT could fetch the manifest, read the spec, and voila. All you needed to implement was that one endpoint, and ChatGPT could use your whole API. It was pretty cool.
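For reference, the manifest looked roughly like this (reconstructed from memory, so field names and values may not be exact; the URLs are placeholders). The key part is `api.url`, which pointed at your OpenAPI spec:

```json
{
  "schema_version": "v1",
  "name_for_human": "TODO Plugin",
  "name_for_model": "todo",
  "description_for_human": "Manage your TODO list.",
  "description_for_model": "Manage the user's TODO list.",
  "auth": { "type": "none" },
  "api": {
    "type": "openapi",
    "url": "https://example.com/openapi.yaml"
  },
  "logo_url": "https://example.com/logo.png",
  "contact_email": "support@example.com",
  "legal_info_url": "https://example.com/legal"
}
```

Everything else was metadata; the spec behind `api.url` did the real work of describing your endpoints to the model.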
ChatGPT Plugins failed, however. I'm confident it wasn't because of the tech stack; it was because the integration demand wasn't there yet: companies were still in the early stages of building their own LLM stacks, and the ChatGPT desktop app didn't exist. It also wasn't marketed as a developer-first global integration solution: there was little to no consistent developer advocacy around it. Instead it was marketed toward consumers, and it was pretty unwieldy.
IMO a single-endpoint solution that adheres to existing paradigms is the simplest and most robust approach. For MCP, I'd advocate that this is what the `mcp/` endpoint should become.
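To make the single-endpoint idea concrete, here's a minimal sketch of what a client's first call to such an `mcp/` endpoint could look like: one HTTP POST carrying a JSON-RPC 2.0 message. The server URL is a placeholder, and the exact `params` shape (the `protocolVersion` string in particular) is an assumption, not a spec quote:

```python
import json


def initialize_request(client_name: str, client_version: str) -> str:
    """Serialize an MCP-style `initialize` request as a JSON-RPC 2.0 payload."""
    message = {
        "jsonrpc": "2.0",
        "id": 1,
        "method": "initialize",
        "params": {
            # Assumed version string; check the current MCP spec for the real one.
            "protocolVersion": "2025-03-26",
            "clientInfo": {"name": client_name, "version": client_version},
            "capabilities": {},
        },
    }
    return json.dumps(message)


# A client would POST this body to e.g. https://example.com/mcp with
# Content-Type: application/json; all subsequent tool calls go to the
# same single endpoint as further JSON-RPC messages.
body = initialize_request("my-client", "0.1.0")
```

The appeal is the same as with `ai-plugin.json`: one well-known place to hit, everything else discoverable from there.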
Edit: Also, tool calling in models circa 2023 was nowhere near as good as it is now.