MCP (Model Context Protocol)
Also known as: Model Context Protocol, MCP server, Tool-augmented LLM protocol
Quick definition
MCP (Model Context Protocol) is an open standard from Anthropic that defines how language models discover and call external tools at runtime. Instead of every app shipping a custom function-calling layer, MCP gives every model the same wire format — a JSON-RPC handshake, a tool registry, and structured input/output schemas.
What is MCP?
MCP (Model Context Protocol) is a specification published by Anthropic in late 2024 and adopted across the AI industry through 2025-2026. It defines a standard wire format for AI clients (Claude Desktop, Claude Code, Cursor, ChatGPT Pro Connectors, Zed, Google Antigravity, n8n's AI Agent node) to discover and invoke external tools without per-vendor custom integration. Before MCP, every AI app shipped its own 'function calling' layer with proprietary schemas; MCP unifies that into one open protocol.
At a technical level, an MCP server is a process (typically run via stdio) that exposes a JSON-RPC interface. The client connects, calls tools/list to fetch the catalog (tool names, descriptions, and JSON Schema input definitions), then calls tools/call with structured arguments. The server executes the tool and returns structured output. The model decides which tools to call based on the user's natural-language prompt; the protocol handles the plumbing.
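To make the wire format concrete, here is a minimal Python sketch of the catalog exchange. The schedule_post tool, its schema, and the response shape are illustrative, not CodivUpload's actual catalog; a real client also performs the protocol's initialize handshake first and frames each message as one line of JSON on the server's stdin/stdout.

```python
import json

# A client's catalog request: plain JSON-RPC 2.0, method "tools/list".
list_request = {"jsonrpc": "2.0", "id": 1, "method": "tools/list"}

# A plausible (hypothetical) server reply: each tool carries a name,
# a description, and a JSON Schema describing its arguments.
list_response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "tools": [{
            "name": "schedule_post",
            "description": "Schedule a social post for later publication.",
            "inputSchema": {
                "type": "object",
                "properties": {
                    "scheduled_date": {"type": "string"},
                    "platforms": {"type": "array",
                                  "items": {"type": "string"}},
                },
                "required": ["scheduled_date", "platforms"],
            },
        }]
    },
}

# Over stdio each message is one line of JSON; it round-trips cleanly.
wire = json.dumps(list_request)
assert json.loads(wire)["method"] == "tools/list"

# The model reads the schema to learn which arguments it must fill.
tool = list_response["result"]["tools"][0]
print(tool["name"], sorted(tool["inputSchema"]["required"]))
# → schedule_post ['platforms', 'scheduled_date']
```

The key design point is that the catalog is self-describing: the client never hardcodes tool names or argument shapes, which is why new tools ship without client updates.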
Why MCP matters for SaaS products
Pre-MCP: a SaaS that wanted AI agent integration had to build separate adapters for each client (a Claude integration, a Cursor integration, a custom OpenAI function-calling wrapper, etc.). Each adapter required vendor-specific code, vendor-specific authentication, vendor-specific deployment. Post-MCP: build one MCP server. Every compatible AI client gets the integration for free. New clients (Antigravity, Zed) light up automatically the day they ship MCP support.
For users this means natural-language access to SaaS features without switching apps. 'Schedule my new product video to TikTok and Instagram for tomorrow 9am EST' executes against your social scheduler from inside Claude Desktop, with no separate UI to learn. The agent picks the right tool (schedule_post), fills in arguments (scheduled_date, platforms array, media_urls), and dispatches.
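That dispatch boils down to a single tools/call JSON-RPC message. A Python sketch, assuming the tool and field names mentioned above (the argument values, URL, and UTC-5 offset for EST are illustrative; the model resolves relative phrases like "tomorrow 9am" into concrete timestamps before dispatching):

```python
from datetime import datetime, timedelta, timezone

# "Tomorrow 9am EST": resolved to an absolute timestamp (UTC-5 assumed).
est = timezone(timedelta(hours=-5))
tomorrow_9am = (datetime.now(est) + timedelta(days=1)).replace(
    hour=9, minute=0, second=0, microsecond=0
)

# The tools/call request the agent sends; the server runs the tool and
# returns structured output (e.g. a post_id and confirmed timestamp).
call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",
    "params": {
        "name": "schedule_post",
        "arguments": {
            "scheduled_date": tomorrow_9am.isoformat(),
            "platforms": ["tiktok", "instagram"],
            "media_urls": ["https://example.com/product-video.mp4"],
        },
    },
}

print(call_request["method"])
# → tools/call
```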
How an MCP integration looks in practice
Three steps. First, the SaaS publishes an MCP server (commonly as an npm or PyPI package). For CodivUpload, that's `npx codivupload-mcp` — a single command that starts the server. Second, the user adds the server to their AI client config. For Claude Desktop, that's a JSON entry in claude_desktop_config.json with command: 'npx', args: ['codivupload-mcp'], and an env block with the API key. Third, the user prompts the AI in plain language. The AI lists available tools (publish_post, schedule_post, list_profiles, get_analytics, etc.), picks the right one, fills arguments from context, and calls it.
The AI never sees the API key — it lives in the env block of the local MCP server config and never crosses the wire to the model provider. This is a meaningful security property: the AI does the reasoning, the local MCP server holds the credentials.
MCP vs traditional REST API
Both exist for the same purpose (programmatic access to a SaaS). The differences: REST API needs glue code per client (SDK calls, auth handling, response parsing); MCP needs a one-time config block. REST API is for when you write the orchestration; MCP is for when an AI does. REST API supports any caller; MCP is specifically for AI agents. Most modern SaaS products with serious AI strategy ship both — REST for direct integration, MCP for AI clients. CodivUpload publishes both: api.codivupload.com (REST) and npx codivupload-mcp (MCP).
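The "glue code per client" difference is easiest to see side by side. A hedged sketch of what direct REST integration looks like; the endpoint path, payload fields, and auth header are hypothetical, not CodivUpload's documented API (the request is constructed but not sent):

```python
import json
import urllib.request

API_KEY = "cdv_..."  # your key; shape follows the config example

# Hypothetical REST glue code: you own serialization, auth, and errors.
payload = {"platforms": ["tiktok", "instagram"], "caption": "Launch day!"}
req = urllib.request.Request(
    "https://api.codivupload.com/v1/posts",  # illustrative path
    data=json.dumps(payload).encode(),
    headers={
        "Authorization": f"Bearer {API_KEY}",
        "Content-Type": "application/json",
    },
    method="POST",
)

# With MCP, none of this lives in your code: the AI client emits a
# tools/call message and the local MCP server makes this HTTP request.
print(req.get_method(), req.full_url)
# → POST https://api.codivupload.com/v1/posts
```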
Connect Claude Desktop to CodivUpload's MCP server
```json
// ~/Library/Application Support/Claude/claude_desktop_config.json (macOS)
// %APPDATA%/Claude/claude_desktop_config.json (Windows)
{
  "mcpServers": {
    "codivupload": {
      "command": "npx",
      "args": ["-y", "codivupload-mcp"],
      "env": {
        "CODIVUPLOAD_API_KEY": "cdv_..."
      }
    }
  }
}
// Then in Claude:
// > Schedule my new product video to TikTok, Instagram, and YouTube
// > for tomorrow 9am EST. Use the launch caption I drafted in the doc.
//
// Claude lists tools, picks schedule_post, fills arguments, and calls it.
// You see the resulting post_id and scheduled_for timestamp in the
// conversation. No SDK. No SDK upgrade when new tools ship.
```
Common pitfalls
- ×Leaving the API key unprotected in the AI client config: the config file stores it in plaintext, so restrict file permissions and treat it with the same care as any other credential file
- ×Assuming the AI knows your platform's quirks — pair MCP with documentation skills (e.g. CodivUpload's `npx codivupload-skills`) so the agent has Instagram media-type rules and TikTok privacy flags loaded
- ×Trying to debug failed tool calls without server logs — MCP servers usually log to stderr; check the AI client's MCP log panel when calls fail
- ×Running multiple MCP servers off one shared credential variable: give each server its own env block rather than exporting a key in your shell profile, where every server process inherits it
Tips
- ✓Pair MCP with markdown skill packs — npx codivupload-skills installs platform-aware skill files that teach the AI per-platform rules
- ✓Use one workspace per MCP server instance — multi-tenancy (running for multiple clients) works best with separate config blocks per workspace
- ✓Subscribe to webhook callbacks alongside MCP — gives async status updates without the AI re-querying
- ✓Restart the AI client after config changes — most clients only reload MCP server config on launch
Frequently asked questions
Which AI clients support MCP today?
Claude Desktop (macOS, Windows, Linux), Claude Code (CLI), Cursor IDE, Zed Editor, ChatGPT Pro Connectors, Google Antigravity, and n8n's AI Agent node (1.50+). Continue.dev and Cline (VS Code extension) also support MCP. New clients ship support every quarter — once a client supports MCP over stdio, every existing MCP server works without any new integration work.
Is MCP secure?
Yes, when configured correctly. The API key lives in your local config file in plaintext, so protect that file as you would any credential store. The model provider (Anthropic, OpenAI) only sees the tool name and arguments the agent decides to send, never the API key. The MCP server attaches the key to outbound HTTPS requests as a Bearer token. The security posture is the same as any other authenticated REST API.
Does MCP work for non-AI use cases?
Technically you can call an MCP server from any process that speaks JSON-RPC over stdio, but the design is specifically for AI agents. For non-AI integration, use the underlying REST API directly — it's typically faster and more flexible without the AI layer.
How is MCP different from OpenAI's function calling?
OpenAI function calling is OpenAI-specific — works only with OpenAI models, requires you to define functions in OpenAI's schema, and locks you in. MCP is open-standard, works across model providers (Anthropic, OpenAI via ChatGPT Pro Connectors, Google, etc.), and uses a vendor-neutral schema. MCP is the multi-vendor evolution of the function-calling pattern.
How do I publish my own MCP server?
Start with the official Anthropic MCP TypeScript or Python SDK. The SDK exposes a server class — you register tools by providing a name, description, JSON Schema input definition, and an async handler function. Package as an npm or PyPI package so users can run via `npx your-mcp` or `pipx run your-mcp`. Anthropic publishes a server registry at github.com/modelcontextprotocol — submitting yours gets it discovered by AI clients.
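Underneath the SDKs, the loop every MCP server runs is small. A stdlib-only Python sketch of the dispatch, assuming a hypothetical echo tool (a real server should use the official SDK, which also handles the initialize handshake, capability negotiation, and error framing):

```python
import json
import sys

# Hypothetical tool registry: name -> (description, JSON Schema, handler).
TOOLS = {
    "echo": (
        "Echo the input text back.",
        {"type": "object",
         "properties": {"text": {"type": "string"}},
         "required": ["text"]},
        lambda args: {"content": [{"type": "text", "text": args["text"]}]},
    ),
}

def handle(request: dict) -> dict:
    """Dispatch one JSON-RPC request to a tool-protocol method."""
    if request["method"] == "tools/list":
        result = {"tools": [
            {"name": name, "description": desc, "inputSchema": schema}
            for name, (desc, schema, _) in TOOLS.items()
        ]}
    elif request["method"] == "tools/call":
        _, _, handler = TOOLS[request["params"]["name"]]
        result = handler(request["params"]["arguments"])
    else:
        return {"jsonrpc": "2.0", "id": request["id"],
                "error": {"code": -32601, "message": "method not found"}}
    return {"jsonrpc": "2.0", "id": request["id"], "result": result}

def serve() -> None:
    """Read newline-delimited JSON-RPC on stdin, answer on stdout."""
    for line in sys.stdin:
        if line.strip():
            print(json.dumps(handle(json.loads(line))), flush=True)

# serve() blocks reading stdin; call it when launched as a server process.
```

This is why packaging as `npx your-mcp` or `pipx run your-mcp` is enough for distribution: the client only needs a command it can spawn and pipe JSON lines through.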
Add CodivUpload to your AI client in 60 seconds
First-party MCP server published on npm. One config block, your API key, and Claude / Cursor / ChatGPT can post to all 11 social platforms in plain language. Pair with the AI Skills npm package for per-platform reasoning.