- As an end-user, you can only connect MCP servers that expose `search` and `fetch` tools, and they only work in deep research mode.
- As a developer, you can use MCP via the API, which supports the full set of MCP tools - all tools become available; you can see this in the dev playground.
- Custom GPTs support any action, but not MCP. So if you had a layer translating MCP to their OpenAPI action spec, it would work. But Custom GPTs with actions only support models 4o and 4.1, so you don't get the benefit of the o-series models.
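To make the "translation layer" idea concrete, here's a hypothetical sketch of mapping one MCP-style tool definition (the shape of an MCP `tools/list` entry: name, description, inputSchema) onto an OpenAPI operation a Custom GPT action could consume. The endpoint path convention and response shape are assumptions for illustration, not anything OpenAI or MCP specifies:

```python
# Hypothetical sketch: translate an MCP-style tool definition into an
# OpenAPI path item that a Custom GPT "action" could consume.
# The /tools/{name} URL convention and 200 response are assumptions.

def mcp_tool_to_openapi(tool: dict, base_path: str = "/tools") -> dict:
    """Map one MCP tool to an OpenAPI path item (POST, JSON body)."""
    return {
        f"{base_path}/{tool['name']}": {
            "post": {
                "operationId": tool["name"],
                "description": tool.get("description", ""),
                "requestBody": {
                    "required": True,
                    "content": {
                        "application/json": {
                            # MCP's inputSchema is already JSON Schema,
                            # so it can be embedded as-is.
                            "schema": tool.get("inputSchema", {"type": "object"}),
                        }
                    },
                },
                "responses": {"200": {"description": "Tool result"}},
            }
        }
    }

search_tool = {
    "name": "search",
    "description": "Search the corpus",
    "inputSchema": {
        "type": "object",
        "properties": {"query": {"type": "string"}},
        "required": ["query"],
    },
}

spec = mcp_tool_to_openapi(search_tool)
```

You'd still need a small proxy server behind those endpoints to forward each call to the actual MCP server, and you'd hit the 4o/4.1 model limitation either way.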
Figuring out what works when is harder than it needs to be.
ChatGPT desktop client with only search/fetch MCPs is far, far inferior to CC from a utility/value perspective.
E.g., as I described here a while ago: https://x.com/wunderwuzzi23/status/1930899939737166075?s=46&...
Ironically, I have a blog post drafted that also explains this in detail; I should probably still publish it.
It’s disappointing they are gating this and the browser agent behind the $100 tier, and even at $100, it’s only two tool methods.
If I understand correctly, it requires your MCP server to have exactly two tools - search and fetch.
So this is not really support for MCP in general, as in all the available MCP servers. It’s support for their own custom higher-level protocol built on top of MCP.
Quote we're talking about:

> To work with ChatGPT Connectors or deep research (in ChatGPT or via API), your MCP server must implement two tools - search and fetch.
Reference links:
- Using remote MCP servers with the API: https://platform.openai.com/docs/guides/tools-remote-mcp
- Which account types can set up custom connectors in ChatGPT: https://help.openai.com/en/articles/11487775-connectors-in-c...
Because, from TFA: “To work with ChatGPT Connectors or deep research (in ChatGPT or via API), your MCP server must implement two tools - search and fetch.”
Also, this page is actually the only docs site about MCP they have, and their help articles link to it too.
MCP servers can expose tools that are agents, but don't have to, and usually don't.
That being said, I can't say I've come across an actual implementation of A2A outside of press releases...
Given how disastrous the AI 'industry' has been (misappropriating data from customers, performing actions on customers' behalf that lead to data and/or financial loss, and then seeking protection from the law in one or more of these cases), isn't providing an MCP service essentially committing you to notifying customers of a GDPR-or-similar data compromise event at some point in the future, when it suddenly but inevitably betrays you?
Like, isn't OpenAI just leading people to a footgun and then kindly asking them to use it, for the betterment of OpenAI's bottom line, which was significantly in the red for FY24?