The Model Context Protocol (MCP) is a new open standard from Anthropic for connecting LLMs to external tools and data. MCP lets AI apps like Claude Desktop connect to local and remote servers that provide access to tools and resources like databases, search engines, and other APIs. An illustrative screenshot from the MCP website is below:
In the screenshot, Claude Desktop (the MCP client) makes a tool call to the weather MCP server, which in turn fetches data from a weather API.
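Under the hood, MCP messages are JSON-RPC 2.0. A sketch of what that tool call might look like on the wire (the tool name, arguments, and result text here are made up for illustration):

```python
import json

# A client's tool call is a JSON-RPC request with method "tools/call".
# The tool name and arguments below are hypothetical.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "get_forecast",
        "arguments": {"city": "San Francisco"},
    },
}

# The server replies with a result whose content is a list of typed blocks.
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [{"type": "text", "text": "62°F, partly cloudy"}],
    },
}

# Over the stdio transport, each message is serialized as a line of JSON.
wire = json.dumps(request)
print(wire)
```

So "calling a tool" is just request/response message passing; the server is free to implement the tool however it likes (here, by hitting a weather API).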
If you’re not familiar with OpenAPI, it’s a standard for describing HTTP APIs. My first reaction was: we already have a specification for describing tools and data sources, so why do we need another one like MCP? Indeed, ChatGPT can already connect to OpenAPI servers.
I asked as much in this HackerNews comment. But I sort of see the point now. OpenAPI is great for defining REST APIs, but it’s not well suited to standardizing the kinds of interactions LLMs need. MCP, by contrast, has some handy provisions, for example, letting a server notify clients that a resource (like a database) was updated.
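That notification is a good example of something OpenAPI has no vocabulary for: a server-initiated push. In MCP it's a JSON-RPC notification (no `id` field, so no reply is expected). A sketch, with a made-up resource URI:

```python
import json

# After a client subscribes to a resource, the server can push this
# notification when the resource changes. The URI is a hypothetical example.
notification = {
    "jsonrpc": "2.0",
    "method": "notifications/resources/updated",
    "params": {"uri": "postgres://db/customers"},
}

line = json.dumps(notification)
print(line)
```

The client can then decide to re-read the resource, so the LLM always works against fresh data rather than a stale snapshot.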
Even though Claude Desktop is meant to be the poster child for MCP, it doesn’t implement the full spec:

- Best I can tell, it does not support the SSE transport (stdio only), so servers must run locally.
- Prompts and resources must be selected manually.
- Dynamic resources (where the server can generate a bunch of resources on the fly) don’t seem to have any UI affordance.
- Server-generated notifications don’t seem to appear.
- Perhaps most peskily, Claude Desktop enforces strict timeouts on tool calls, which means that without any way to schedule work in the background, the client can’t run long-running tasks.
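The stdio-only limitation shapes how servers get wired up: Claude Desktop reads a `claude_desktop_config.json` file listing commands it spawns as child processes and talks to over stdin/stdout. A sketch of an entry (the server name, command, and path are illustrative):

```json
{
  "mcpServers": {
    "weather": {
      "command": "python",
      "args": ["/path/to/weather_server.py"]
    }
  }
}
```

Anything you can't launch as a local process this way, such as a remote SSE server, is currently out of reach.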