Want it to be easy to update? Make it a git repo with all the docs. My agent already knows to always do a git fetch before interacting with a repo in a new session. Or you can fetch on a timer. Whatever.
I haven't yet figured out the point of this MCP stuff. Codex seems to have innate knowledge of how to curl Jira and Confluence and GitLab and Prometheus and SQL databases and more. All you need to configure is a .netrc file and put the hostname in AGENTS.md. Are MCP tools even composable? Can the model pipe the response to grep or jq or another MCP call without it entering/wasting context? Or is a normal CRUD API strictly more powerful and easier to use?
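For reference, the .netrc side is just a few lines per host, which curl reads when invoked with `-n`/`--netrc` (hostnames and tokens below are made up):

```
# ~/.netrc (chmod 600)
machine jira.example.com
  login    agent-bot
  password s3cret-api-token

machine gitlab.example.com
  login    agent-bot
  password glpat-made-up-token
```

AGENTS.md then only has to say "Jira lives at jira.example.com; use `curl -n`" and the model can take it from there.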
I set my nginx to return the Markdown source (which is just $URL.md) for my website; any LLM which wants up-to-date docs from my website can do so as easily as `curl --header 'Accept: text/markdown' 'https://gwern.net/archiving'`. One simple flag. Boom, done.
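A minimal sketch of how that content negotiation might be wired up in nginx — this is a guess at such a setup, not gwern's actual config; the docroot and map regex are assumptions:

```nginx
# http{} level: pick a file suffix when the client asks for Markdown.
map $http_accept $md_suffix {
    default           "";
    "~text/markdown"  ".md";
}

server {
    root /var/www/example.net;   # assumed docroot

    location / {
        # Serve $URL.md when Markdown was requested, else the normal page.
        try_files $uri$md_suffix $uri $uri/ =404;
    }
}
```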
1. A CLI script, or a small collection of scripts.
2. A very short Markdown file explaining how it works and when to use it.
3. Optionally, some other reference Markdown files.
Context use is tiny, nearly everything is loaded on demand.
And as I'm writing this, I realize it's exactly what skills are.
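Sketched concretely — every name here is hypothetical, and the Jira URL is a placeholder:

```shell
# Hypothetical skill layout:
#   tools/jira/
#     jira.sh      - the CLI entry point
#     SKILL.md     - one paragraph: what it does, when to use it
#     reference.md - optional deeper docs, read only on demand
mkdir -p /tmp/tools/jira
cat > /tmp/tools/jira/jira.sh <<'EOF'
#!/bin/sh
# Thin wrapper: auth comes from ~/.netrc (curl -n); takes an API path.
curl -sn "https://jira.example.com/rest/api/2/$1"
EOF
chmod +x /tmp/tools/jira/jira.sh
```

The agent reads SKILL.md once (a few hundred tokens), and everything else stays on disk until actually needed.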
Can anyone give an example of something that this wouldn't work for, and which would require MCP instead?
Has this changed?
My uncharitable interpretation is that MCP servers are "New Jersey"-style design for agents, and high-quality APIs and CLIs are MIT-style design, in the worse-is-better sense.
```shell
fabien@debian2080ti:~$ du -sh /usr/share/man/   # all languages
52M     /usr/share/man/
```
Yep... in fact there is already a lot of tooling for that, e.g. man, obviously, but also apropos. You could argue that they could just let the agent curl an agent-optimized API, and that is what MCP is.
I've been wrapping the agent's curl calls in a small CLI that handles the auth, but I'm wondering if other people have come up with something lighter/more portable.
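FWIW, such a wrapper can be barely more than a case statement mapping short service names to base URLs, with curl's `-n` pulling credentials from ~/.netrc — the service names and hosts below are made up:

```shell
# acurl: hypothetical wrapper around curl. Resolves a short service name
# to its base URL; credentials come from ~/.netrc via curl -n.
acurl() {
  svc="$1"; shift
  case "$svc" in
    jira)   base="https://jira.example.com/rest/api/2" ;;
    gitlab) base="https://gitlab.example.com/api/v4" ;;
    *)      echo "acurl: unknown service: $svc" >&2; return 2 ;;
  esac
  curl -sn "$base/$1"
}
```

Then the agent calls `acurl jira search?jql=...` instead of remembering hosts, headers, or tokens.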
Job security.
As it turns out, these are very helpful for obscure features and settings buried in the documentation.
It doesn't matter if your LLM in/out tokens are a bit cheaper than competitors' when you use 3x as many on every prompt. Maybe Google should focus on addressing that first?