It's as you said: people misunderstand MCP and what it delivers.
If you only use it as an API? Useless. If you use it on a small solo project? Useless.
But if you want to share skills across a fleet of repos? Deliver standard prompts to baseline developer output and productivity? Without having to sync them? And have them updated live? MCP prompts.
If you want to share canonical docs like standard guidance on security and performance? Always up to date and available in every project from the start? No need to sync and update? MCP resources.
If you want standard telemetry and observability of usage? MCP because now you can emit and capture OTEL from the server side.
If you want to wire execution into sandboxed environments? MCP.
MCP makes sense for org-level agent engineering but doesn't make sense for the solo vibe coder working on an isolated codebase locally with no need to sandbox execution.
People are using MCP for the wrong use cases and then declaring it excess, when the real use case is standardizing remote delivery of skills and resources. Tool execution is secondary.
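Concretely, the "deliver prompts and resources without syncing" point can be sketched at the wire level. MCP is JSON-RPC under the hood, and `prompts/get` is one of its standard methods; the prompt name, arguments, and response text below are invented for illustration:

```python
import json

# Hypothetical exchange: an agent fetches a centrally hosted prompt.
# "code-review-standards" is a made-up prompt name for illustration.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "prompts/get",
    "params": {"name": "code-review-standards", "arguments": {"language": "python"}},
}

# A plausible server response: the prompt text lives on the server,
# so updating it there updates every repo at once -- no sync step.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "description": "Org-standard code review checklist",
        "messages": [
            {"role": "user",
             "content": {"type": "text",
                         "text": "Review this change against our security baseline."}},
        ],
    },
}

wire = json.dumps(request)
print(json.loads(wire)["method"])  # prompts/get
```

The point of the sketch: the client never hard-codes where the prompt text lives, only the standard method name, so the server can revise the prompt for the whole fleet at once.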
> MCP is a protocol that may have been useful once, but it seems obsolete already. Agents are really good at discovering capabilities and using them. If you give one a list of CLI tools with one-line descriptions, it will probably read a tool's help page and find out everything it needs to know before using the tool. What benefit does MCP actually add?
>
> Otherwise I don't understand how MCP vs CLI solves anything.
A centralized MCP server over HTTP that enables standardized doc lookup across the org, standardized skills (as MCP prompts), MCP resources (virtual indexes of the docs, similar to how Vercel formatted their `AGENTS.md`), and a small set of tools.
We emit OTEL from the server and build dashboards to see how the agents and devs are using context and tools, and which documents are "high signal" (they get hit frequently), so we know that tuning those docs will yield more consistent output.
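The "high signal" analysis reduces to counting hits per resource on the server side. A toy stdlib sketch, with invented users and `docs://` URIs (the real pipeline described above goes through OTEL, not a Python list):

```python
from collections import Counter

# Toy access log: each resources/read call recorded as (user, resource_uri).
# Users and URIs are invented for illustration.
access_log = [
    ("alice", "docs://security/input-validation"),
    ("bob",   "docs://security/input-validation"),
    ("carol", "docs://perf/caching"),
    ("alice", "docs://security/input-validation"),
]

hits = Counter(uri for _, uri in access_log)

# Docs read most often are the ones worth tuning first.
high_signal = [uri for uri, n in hits.most_common() if n >= 2]
print(high_signal)  # ['docs://security/input-validation']
```

Because every agent reads docs through the same server, this count is complete across the fleet, which is exactly what a local-files setup can't give you.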
OAuth lets us see the users because every call has identity attached.
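"Identity attached to every call" typically means a bearer token on each HTTP request. A minimal sketch of pulling the subject out of a JWT payload, using only the stdlib; the token and claims are fabricated, and a real server must verify the signature against the IdP's keys rather than just decode:

```python
import base64
import json

def jwt_claims(token: str) -> dict:
    # Illustration only: decodes the payload WITHOUT verifying the
    # signature. A real server must validate against the IdP's keys.
    payload_b64 = token.split(".")[1]
    payload_b64 += "=" * (-len(payload_b64) % 4)  # restore base64 padding
    return json.loads(base64.urlsafe_b64decode(payload_b64))

# Build a fake unsigned token just to exercise the decoder.
claims = {"sub": "alice@example.com", "scope": "mcp:read"}
payload = base64.urlsafe_b64encode(json.dumps(claims).encode()).rstrip(b"=").decode()
token = f"header.{payload}.signature"

print(jwt_claims(token)["sub"])  # alice@example.com
```

Once the server has the subject claim, it can tag every OTEL span and every doc hit with who made the call.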
> Sandboxing and auth is a problem solved at the agent ("harness") level
That only holds if you run a homogeneous set of harnesses/runtimes. We don't: some folks are on Cursor, some on Codex, some on Claude, some on OpenCode, some on VS Code GHCP. The only thing that works across all of them? MCP. Everything about local CLIs and skill files works great as long as you are 1) running in your own env, 2) working on a small, isolated codebase, 3) working in a fully homogeneous environment, and 4) in repos that only need to know about themselves and not about a broader ecosystem of services and capabilities.
Beyond that, some kind of protocol is necessary to standardize how information is shared across contexts.
That's why my OP prefaced that MCP is critical for orgs and enterprises because it alleviates some of the friction points for standardizing behavior across a fleet of repos and tools.
> You don't need to reinvent OpenAPI badly
You are only latching onto one aspect of MCP servers: tools. But MCP delivers two other critical features, prompts and resources, and it is here that MCP provides contextual scaffolding that a generic OpenAPI spec does not. Tools are perhaps the least interesting of MCP's features (though still useful in an enterprise context, because centralized tools allow for telemetry). For prompts and resources to work, the industry would have to agree on defined endpoints and request/response types. That's what MCP is.
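What "agreed endpoints and request/response types" buys you can be shown with a dispatch sketch. The method names are the standard ones from the MCP spec (`prompts/list`, `prompts/get`, `resources/list`, `resources/read`, `tools/list`); the handler bodies are invented placeholders:

```python
# Every MCP server answers the same JSON-RPC method names, so any
# client can discover prompts and resources without server-specific
# integration code. Handler contents are placeholders for illustration.
HANDLERS = {
    "prompts/list":   lambda p: {"prompts": [{"name": "code-review-standards"}]},
    "prompts/get":    lambda p: {"messages": []},
    "resources/list": lambda p: {"resources": [{"uri": "docs://security"}]},
    "resources/read": lambda p: {"contents": []},
    "tools/list":     lambda p: {"tools": []},
}

def handle(request: dict) -> dict:
    """Route a JSON-RPC request to the handler for its standard method."""
    result = HANDLERS[request["method"]](request.get("params", {}))
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

resp = handle({"jsonrpc": "2.0", "id": 7, "method": "prompts/list"})
print(resp["result"]["prompts"][0]["name"])  # code-review-standards
```

With OpenAPI, each server invents its own paths and schemas; with MCP, the method names and response shapes are fixed by the protocol, which is what lets a heterogeneous fleet of harnesses consume the same prompts and resources.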