
MCP Protocol: How Model Context Protocol Reshapes Enterprise AI

MCP (Model Context Protocol) is the open standard for connecting AI agents to any tool or service. Learn how it works and why enterprises adopt it.

In late 2024, Anthropic released a simple JSON-based protocol specification and published it on GitHub. Within 90 days, it had become the most-referenced AI protocol specification in enterprise architecture discussions. By mid-2025, it had been transferred to the Linux Foundation's newly created Agentic AI Foundation, where it sits alongside Google's A2A protocol as a foundational standard for the agentic AI era.

That protocol is MCP: Model Context Protocol. And if you are building or deploying AI agents, you need to understand it.

The Problem MCP Solves

Imagine you are building an AI agent to help your sales team. This agent needs to:

  • Check deals in your CRM (Salesforce)
  • Read recent client emails (Outlook)
  • Pull contract status from your document system (SharePoint)
  • Update pipeline stages (Salesforce again)
  • Schedule follow-up meetings (Calendar)
  • Draft and send emails (Outlook)

Before MCP, connecting an AI agent to these systems required writing custom integration code for every single system — handling authentication, rate limits, error formats, and response parsing for each one. Then you had to expose these as “function calling” definitions in a format specific to your AI provider. When you switched AI providers, you rewrote everything.

MCP solves this by defining a universal interface:

  1. Each external system is wrapped in an MCP server that exposes its capabilities as standardised “tools”
  2. Any MCP-compatible AI model can discover and call those tools using the same protocol
  3. You build the integration once — it works with any AI model that supports MCP

This is why MCP is so often compared to TCP/IP. TCP/IP is the protocol that lets any device communicate on the internet regardless of its manufacturer. MCP is the protocol that lets any AI agent communicate with any tool or service regardless of who built either of them.

How MCP Works: The Architecture

MCP has three core components:

MCP Servers

An MCP server is a lightweight process that wraps an external system and exposes its capabilities. A server for your company’s PostgreSQL database might expose tools like:

  • query_database — run a read-only SQL query
  • get_table_schema — retrieve the schema for a specific table
  • list_tables — list all available tables

Each tool has a name, description, and defined input parameters. The AI model can discover available tools and call them with specific parameters — just like calling a function, but across a network boundary with built-in security controls.
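Under the hood, these calls travel as JSON-RPC 2.0 messages; `tools/list` and `tools/call` are the method names defined by the MCP specification. The following sketch shows roughly what a call to the `query_database` tool looks like on the wire (the exact result shape is illustrative, and the `sql` argument name is an assumption):

```python
import json

# Sketch of the JSON-RPC 2.0 messages MCP exchanges for a tool call.
# Method name "tools/call" comes from the MCP spec; the argument name
# "sql" and the result payload are illustrative.
call_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "query_database",
        "arguments": {"sql": "SELECT COUNT(*) FROM orders"},
    },
}

call_result = {
    "jsonrpc": "2.0",
    "id": 1,  # echoes the request id, per JSON-RPC
    "result": {
        "content": [{"type": "text", "text": "42"}],
        "isError": False,
    },
}

print(json.dumps(call_request))
```

The model never speaks SQL to PostgreSQL directly; it only ever emits a `tools/call` envelope, and the MCP server translates that into a real query.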

MCP servers can expose three types of capabilities:

  • Tools — executable actions (run a query, send an email, create a task)
  • Resources — readable data (documents, database records, files)
  • Prompts — reusable prompt templates for common tasks

MCP Clients

MCP clients are the AI systems that call MCP servers. Any AI application that supports the MCP protocol can act as a client: Claude, GPT-4 (via compatible wrappers), Gemini, and custom agent frameworks like Klawty OS.

The client handles:

  • Discovering available servers and their capabilities
  • Managing connections (typically via standard I/O or HTTP/SSE)
  • Passing tool call requests and receiving results
  • Managing authentication tokens
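The discovery-and-dispatch loop at the heart of a client can be simulated in a few lines of plain Python. This is an illustration of the pattern, not the real MCP SDK; the tools and handlers here are hypothetical:

```python
# Minimal simulation of what an MCP client does after discovery:
# keep a registry of advertised tools, validate arguments against each
# tool's declared schema, then dispatch the call to a handler.
TOOLS = {
    "list_tables": {
        "schema": {"required": []},
        "handler": lambda args: ["orders", "customers"],
    },
    "get_table_schema": {
        "schema": {"required": ["table"]},
        "handler": lambda args: {"table": args["table"], "columns": ["id", "total"]},
    },
}

def call_tool(name: str, arguments: dict):
    tool = TOOLS.get(name)
    if tool is None:
        raise KeyError(f"unknown tool: {name}")
    # Reject calls that omit required parameters before touching the backend.
    missing = [p for p in tool["schema"]["required"] if p not in arguments]
    if missing:
        raise ValueError(f"missing required arguments: {missing}")
    return tool["handler"](arguments)

print(call_tool("list_tables", {}))
print(call_tool("get_table_schema", {"table": "orders"}))
```

A real client does the same thing over a stdio or HTTP/SSE transport, with the registry populated by a `tools/list` call instead of being hardcoded.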

The MCP Host

The host is the application layer that manages the MCP client and connects it to a model. In Claude Desktop, Anthropic’s app is the host. In Klawty OS, the agent runtime is the host. The host is responsible for security boundaries — deciding which servers an agent can access and with what permissions.

The Security Model: Why It Matters

MCP’s security model is one of the most important architectural decisions you will make when deploying AI agents. Get this wrong and you will have an AI agent that can access far more than it should.

The principle of least privilege applies directly:

Each agent should have access only to the MCP servers it needs, and each MCP server should expose only the capabilities appropriate for that agent’s role.

In Klawty OS, this maps to our tiered autonomy model:

  • AUTO tools — the agent can call these without any approval (read operations, research)
  • AUTO+ tools — the agent executes and notifies (creating tasks, updating records)
  • PROPOSE tools — the agent proposes, humans approve within 15 minutes (sending emails, deploys)
  • CONFIRM tools — requires explicit human approval emoji reaction before execution
  • BLOCK tools — hardcoded as unavailable regardless of any instructions (financial transfers, credential changes)

This tiered model sits on top of MCP: the protocol handles the communication, while our governance layer handles the risk classification.
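The routing logic of such a governance layer can be sketched in a few lines. The tier names mirror the list above; the tool-to-tier classification table is purely illustrative:

```python
from enum import Enum

class Tier(Enum):
    AUTO = "auto"          # execute silently
    AUTO_PLUS = "auto+"    # execute, then notify
    PROPOSE = "propose"    # queue for human approval
    CONFIRM = "confirm"    # require explicit confirmation first
    BLOCK = "block"        # never executable, regardless of instructions

# Illustrative risk classification; a real deployment maintains this per tool.
TOOL_TIERS = {
    "search_web": Tier.AUTO,
    "create_task": Tier.AUTO_PLUS,
    "send_email": Tier.PROPOSE,
    "deploy_service": Tier.CONFIRM,
    "transfer_funds": Tier.BLOCK,
}

def route(tool_name: str) -> str:
    # Unknown tools default to the cautious side rather than executing.
    tier = TOOL_TIERS.get(tool_name, Tier.CONFIRM)
    if tier is Tier.BLOCK:
        raise PermissionError(f"{tool_name} is blocked by policy")
    if tier in (Tier.PROPOSE, Tier.CONFIRM):
        return "await_approval"
    return "execute"

print(route("search_web"))
print(route("send_email"))
```

The important property is that the check runs before the MCP call leaves the host, so a blocked tool is unreachable no matter what the model generates.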

Authentication flow:

Proper MCP server authentication means:

  1. The MCP server uses OAuth or token-based auth to connect to the underlying service
  2. The AI agent never sees raw credentials — it calls the MCP server which handles auth internally
  3. Scope is limited: a Gmail MCP server for drafting emails should not grant delete or admin permissions
  4. All calls are logged for audit purposes
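Step 3, scope limitation, is mechanically checkable inside the MCP server. A minimal sketch, with hypothetical scope strings standing in for real OAuth scopes:

```python
# Hypothetical OAuth scopes granted to a drafting-only Gmail MCP server.
# The scope names here are illustrative, not Google's actual scope URIs.
GRANTED_SCOPES = {"gmail.readonly", "gmail.compose"}

def check_scope(required: str) -> None:
    # Refuse any tool call whose underlying API scope was never granted,
    # regardless of what the agent asks for.
    if required not in GRANTED_SCOPES:
        raise PermissionError(f"scope not granted: {required}")

check_scope("gmail.compose")       # drafting is allowed
try:
    check_scope("gmail.settings")  # admin-level scope was never granted
except PermissionError as e:
    print(e)
```

Because the check lives in the server, an agent that is prompted (or prompt-injected) into requesting a destructive action still cannot exceed the credentials the server was provisioned with.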

A2A Protocol: Agent-to-Agent Collaboration

MCP solves how agents connect to tools. A2A (Agent-to-Agent protocol) solves how agents connect to each other.

Developed by Google and now an open Linux Foundation standard alongside MCP, A2A defines how agents discover each other’s capabilities, delegate tasks, and coordinate outcomes — even across organisational boundaries.

The A2A architecture introduces the concept of an Agent Card: a standardised capability declaration that an agent publishes, describing what it can do, what it expects as input, and what it returns as output. Any other A2A-compatible agent or orchestrator can discover this card and delegate appropriate tasks.
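A sketch of what such a card might contain follows. The field names approximate the A2A specification, and the agent and its values are hypothetical:

```python
import json

# Hypothetical Agent Card for a contract-review agent. Field names
# approximate the A2A spec; treat this as a sketch, not the schema.
agent_card = {
    "name": "contract-review-agent",
    "description": "Reviews contracts and flags non-standard clauses",
    "url": "https://agents.example.com/legal",
    "version": "1.0.0",
    "capabilities": {"streaming": True},
    "skills": [
        {
            "id": "review_contract",
            "description": "Review a contract document and return flagged clauses",
            "inputModes": ["text"],
            "outputModes": ["text"],
        }
    ],
}

# The card is published at a well-known URL so orchestrators can discover it.
print(json.dumps(agent_card, indent=2))
```

An orchestrator matching a "review this contract" task against published cards would select this agent by its `review_contract` skill, without any bespoke integration between the two parties.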

Where MCP ends and A2A begins:

  • Your sales agent reads from the CRM: MCP (agent → tool)
  • Your sales agent delegates contract review to a legal agent: A2A (agent → agent)
  • Your legal agent reads the contract document: MCP (agent → tool)
  • Your legal agent sends results back to the sales agent: A2A (agent → agent)

In practice, enterprise multi-agent systems use both protocols: MCP for every interaction with external tools and data, A2A for orchestration and delegation between specialised agents.

Klawty OS supports both MCP and A2A natively. Every Klawty agent publishes an Agent Card, making it discoverable for A2A collaboration. Every tool integration uses MCP server architecture, ensuring compatibility with the 3,000+ available MCP servers.

Available MCP Servers in 2026

The MCP ecosystem has grown dramatically. As of early 2026, major available servers include:

Productivity & Communication

  • Google Workspace (Gmail, Drive, Calendar, Docs)
  • Microsoft 365 (Outlook, Teams, SharePoint, OneDrive)
  • Slack — post messages, create channels, search
  • Notion — read/write pages and databases

Development & Infrastructure

  • GitHub — repos, PRs, issues, actions
  • GitLab — full DevOps pipeline integration
  • PostgreSQL, MySQL, SQLite — read/write database access
  • Docker — container management
  • Kubernetes — cluster operations (scoped carefully)

Business Operations

  • Salesforce — full CRM access
  • HubSpot — contacts, deals, pipelines
  • Jira & Confluence — project management and knowledge base
  • Stripe — payment and subscription data

Web & Research

  • Brave Search — web search without tracking
  • Puppeteer — browser automation
  • Fetch — web page content extraction

Observability

  • Datadog, Grafana — metrics and alerting
  • Sentry — error tracking

Each of these is available as an open-source MCP server that can be self-hosted or used through managed providers.

Building a Custom MCP Server

When your business has proprietary systems — an internal ERP, a custom database, a legacy API — you can build a custom MCP server to expose it to your AI agents.

A minimal MCP server in Python:

import asyncio

from mcp.server import Server
from mcp.server.stdio import stdio_server
from mcp.types import Tool, TextContent

# Placeholder for your internal ERP integration — swap in your real client.
import erp_client

app = Server("my-erp-server")

@app.list_tools()
async def list_tools():
    return [
        Tool(
            name="get_order_status",
            description="Get the status of a customer order",
            inputSchema={
                "type": "object",
                "properties": {
                    "order_id": {"type": "string", "description": "The order ID"}
                },
                "required": ["order_id"]
            }
        )
    ]

@app.call_tool()
async def call_tool(name: str, arguments: dict):
    if name == "get_order_status":
        order_id = arguments["order_id"]
        # Query your actual ERP here
        status = erp_client.get_order(order_id)
        return [TextContent(type="text", text=f"Order {order_id}: {status}")]
    raise ValueError(f"Unknown tool: {name}")

async def main():
    # Serve over standard I/O so any MCP host can launch this as a subprocess
    async with stdio_server() as (read_stream, write_stream):
        await app.run(read_stream, write_stream, app.create_initialization_options())

if __name__ == "__main__":
    asyncio.run(main())

This server can then be connected to any MCP-compatible AI system — Claude Desktop, Klawty, or any other MCP client — immediately making your ERP data accessible to your AI agents.

Enterprise MCP Implementation: Key Decisions

When deploying MCP in an enterprise context, several architectural decisions shape your security posture and governance:

1. Self-hosted vs. managed MCP servers

Self-hosting means your credentials and data never leave your infrastructure. For sensitive systems (financial data, HR, internal databases), this is typically the right choice. Managed MCP providers are convenient for public APIs where data residency is less critical.

2. Server-per-system vs. aggregated servers

You can run one MCP server per external system or an aggregated server that proxies to multiple systems. One server per system is the recommended choice for security: each server has a minimal blast radius, and the smaller blast radius wins for enterprise deployment.

3. Read vs. write capabilities

Not every agent needs write access to every system. Design your MCP servers with read-only and read-write variants, and assign based on agent role and autonomy tier.

4. Logging and observability

Every MCP tool call should be logged: which agent called what tool, with what parameters, at what time, and with what result. This is non-negotiable in regulated environments, and the EU AI Act's audit trail requirements make it mandatory.
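A sketch of the minimum fields such an audit record should carry, emitted as one JSON line per call (the field names are illustrative):

```python
import json
from datetime import datetime, timezone

def audit_record(agent: str, tool: str, arguments: dict, result_summary: str) -> str:
    # One structured line per MCP tool call: who called what, with which
    # parameters, when, and what came back. Emit to your log pipeline.
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent": agent,
        "tool": tool,
        "arguments": arguments,
        "result": result_summary,
    }
    return json.dumps(record)

line = audit_record("sales-agent", "query_database", {"table": "deals"}, "ok")
print(line)
```

Structured JSON lines are trivial to ship into whatever SIEM or log store your compliance team already queries.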

5. Rate limiting and quotas

AI agents can call MCP tools at machine speed, so protect your backend systems with per-agent rate limits. Without them, an agent stuck in a loop could hammer your API 1,000 times per minute.
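A per-agent sliding-window limiter takes only a few lines. A sketch, with illustrative thresholds:

```python
import time
from collections import defaultdict, deque

class AgentRateLimiter:
    """Allow at most `limit` tool calls per agent per `window` seconds."""

    def __init__(self, limit: int = 60, window: float = 60.0):
        self.limit = limit
        self.window = window
        self.calls = defaultdict(deque)  # agent -> timestamps of recent calls

    def allow(self, agent: str) -> bool:
        now = time.monotonic()
        q = self.calls[agent]
        while q and now - q[0] > self.window:  # drop calls outside the window
            q.popleft()
        if len(q) >= self.limit:
            return False
        q.append(now)
        return True

limiter = AgentRateLimiter(limit=3, window=60.0)
print([limiter.allow("sales-agent") for _ in range(5)])
# → [True, True, True, False, False]
```

In production you would enforce this in the MCP host (or a proxy in front of the servers) so a misbehaving agent is throttled before its calls reach the backend.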

MCP in Klawty OS

Klawty ships with a pre-configured MCP architecture:

  • 40+ pre-built MCP servers covering common enterprise integrations (Google Workspace, Slack, GitHub, PostgreSQL, Salesforce, and more)
  • Custom MCP server generator — describe your API and Klawty generates a typed MCP server with authentication and error handling
  • Per-agent server scoping — each agent is configured with exactly the MCP servers it needs; no cross-agent tool access by default
  • Tiered autonomy per tool — every tool is risk-classified and routed through the appropriate approval tier
  • Full audit logging — every MCP call is logged to the Klawty audit trail, EU AI Act compliant

The result: a business can connect a new data source to their AI agents in 20 minutes (using a pre-built server) or 2-4 hours (building a custom server) — without writing AI integration code or managing API authentication per agent.

What This Means for Enterprise AI Strategy

If you are building or planning an AI agent system in 2026, MCP adoption is not optional — it is the industry baseline.

Every major AI vendor (Anthropic, OpenAI, Google, Microsoft) now supports MCP either natively or through SDKs. The 3,000+ available MCP servers mean the integration work for common enterprise tools is largely done. The Linux Foundation governance ensures the protocol will remain open and continue evolving.

The strategic implication: your AI integration investment should be in MCP server quality and governance architecture, not in proprietary per-AI-vendor integration code. Build once, use with any capable model.

The businesses that adopt MCP-based architecture today are building a durable foundation. Those that don’t will be refactoring their AI integrations every time they change AI providers — which in a market moving as fast as this one, is an expensive problem to have.


Ready to build your MCP-based agent architecture? Our team at dcode has implemented MCP integrations for businesses from SMBs to enterprise scale. Talk to us about your integration requirements.

Sources: Anthropic MCP Specification (anthropic.com/mcp); Linux Foundation Agentic AI Foundation announcements (2025); Google A2A Protocol specification; Klawty OS technical documentation.

Frequently Asked Questions

What is Model Context Protocol (MCP)?
MCP is an open standard protocol developed by Anthropic and transferred to the Linux Foundation's Agentic AI Foundation in 2025. It defines how AI models (LLMs) can securely call external tools, APIs, and data sources using a standardised interface. Think of it as USB-C for AI integrations — one universal connector that works across different AI systems and tools.
How is MCP different from function calling?
Function calling is a proprietary mechanism each AI provider implements differently. MCP is a universal open standard. A tool built as an MCP server works with Claude, GPT-4, Gemini, and any other MCP-compatible model — you build it once and it works everywhere. Function calling locks you to a specific AI provider's SDK.
What is an MCP server?
An MCP server is a lightweight process that exposes tools, resources, and prompts to AI models using the MCP standard. For example, an MCP server for Gmail exposes tools like 'send_email', 'search_emails', 'get_thread' — the AI agent calls these tools without ever seeing your Gmail credentials directly.
What is the A2A protocol and how does it relate to MCP?
A2A (Agent-to-Agent) is Google's open protocol for peer-to-peer agent collaboration, backed by 100+ enterprise partners. Where MCP connects agents to tools and data, A2A connects agents to other agents — enabling complex multi-agent workflows across organisations and systems. Klawty supports both: MCP for tool access, A2A for inter-agent collaboration.
Is MCP secure enough for enterprise use?
MCP itself is protocol-level — security depends on your implementation. Each MCP server must implement proper authentication, authorisation, and scope limitation. Klawty's governance layer adds per-tool risk classification, action logging, and approval gates on top of MCP, making it enterprise-ready.
How many MCP servers are available?
As of early 2026, there are 3,000+ community-built MCP servers available, with official servers for Slack, GitHub, PostgreSQL, Jira, Google Workspace, Notion, Salesforce, and dozens of other platforms. Most enterprise SaaS vendors now maintain their own official MCP servers.
Tags: MCP protocol, Model Context Protocol, AI integration, AI agents, enterprise AI, A2A protocol, agentic AI
