In early 2025, Google published a protocol specification and quietly released it alongside a list of co-signatories. The list was not quiet at all: Salesforce, SAP, ServiceNow, Workday, Deloitte, KPMG, McKinsey, and over ninety other enterprise organisations had committed to supporting it at launch.
The protocol was A2A — Agent-to-Agent — and it answered a question that every organisation deploying AI agents eventually runs into: what happens when one agent needs to work with another?
This is the companion guide to our MCP Protocol article. If MCP is the protocol that connects AI agents to the tools and data they need, A2A is the protocol that connects AI agents to each other. Together, they form the foundational two-layer protocol stack for production agentic AI systems.
The Problem A2A Solves
Spend any time with production multi-agent systems and you quickly discover that the hardest part is not getting a single agent to work — it is getting multiple agents to work together reliably, securely, and at scale.
Consider what happens without a standard collaboration protocol. Agent A, built on Claude and running on your own infrastructure, needs to hand a task to Agent B, which your legal team runs on a separate platform. How does A know B exists? How does it describe the task in terms B understands? How does B signal that it is working, or that it has finished, or that it ran into a problem that requires human review? How do you audit the entire interaction for compliance purposes?
Without a shared protocol, the answer to all of these questions is: custom integration code. You write a bespoke handshake between every pair of agents that need to collaborate. In a system with five agents, that is potentially twenty unique integration paths — one for each ordered pair — to build and maintain. In a real enterprise with dozens of agents spanning multiple vendors, departments, and external partners, it becomes unmanageable.
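The arithmetic behind that claim is worth making explicit: with n agents and a bespoke handshake in each direction, point-to-point integrations grow as n(n-1), while a shared protocol needs only one implementation per agent. A quick sketch:

```python
def integration_paths(n_agents: int) -> int:
    """Directed point-to-point integrations between n agents: n * (n - 1)."""
    return n_agents * (n_agents - 1)

# Five agents already imply up to twenty bespoke integrations; thirty agents,
# a realistic enterprise fleet, imply 870 -- versus thirty A2A implementations.
```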
A2A solves this with a standard vocabulary and a standard lifecycle for agent-to-agent interactions. Any agent that implements A2A can discover, communicate with, and delegate work to any other A2A-compliant agent — regardless of which LLM powers it, which vendor built it, or which infrastructure it runs on.
The Agent Card: How Agents Announce Themselves
The foundational primitive in A2A is the Agent Card — a machine-readable document, served over HTTPS, that describes everything another agent (or a human operator) needs to know about an agent’s capabilities and how to interact with it.
An Agent Card is a structured JSON document that declares:
- Identity: The agent’s name, description, and the URL of its A2A endpoint
- Capabilities: What the agent can do — expressed as a list of named skills with descriptions, input parameters, and output types
- Authentication: What authentication schemes the agent supports (OAuth 2.0 bearer tokens, API keys, mTLS, etc.)
- Supported modalities: Whether the agent can accept text, files, structured data, images, or other content types
- Streaming support: Whether the agent supports real-time push notifications during long-running tasks
Here is a simplified example of what an Agent Card might look like for a contract review agent:
```json
{
  "name": "Contract Review Agent",
  "description": "Reviews contracts for compliance, risk clauses, and required amendments",
  "url": "https://legal-agents.acme.com/a2a/contract-review",
  "version": "1.2.0",
  "skills": [
    {
      "id": "review_contract",
      "name": "Review Contract",
      "description": "Analyse a contract document and return risk assessment with recommended amendments",
      "inputModes": ["file", "text"],
      "outputModes": ["text", "structured_data"]
    },
    {
      "id": "check_compliance",
      "name": "Check Regulatory Compliance",
      "description": "Verify contract clauses against GDPR, NIS2, and internal legal policies",
      "inputModes": ["text", "structured_data"],
      "outputModes": ["structured_data"]
    }
  ],
  "authentication": {
    "schemes": ["oauth2", "bearer"]
  }
}
```
Agent Cards are typically served at a well-known URL path — /.well-known/agent.json — making them discoverable by any A2A client that knows the agent’s base domain. Enterprise deployments often maintain an agent registry: a centralised directory of Agent Cards across all deployed agents, enabling dynamic discovery without hardcoding agent URLs.
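Discovery via the well-known path can be sketched in a few lines. `parse_agent_card` validates the fields a client needs before delegating, and `fetch_agent_card` shows the retrieval using the `/.well-known/agent.json` path described above; the set of required fields here is a minimal assumption of this sketch, not the full card schema.

```python
import json
from urllib.request import urlopen

WELL_KNOWN_PATH = "/.well-known/agent.json"  # discovery path described above

def parse_agent_card(raw: str) -> dict:
    """Validate the fields a client needs before delegating, and index skills.

    The required-field list is a minimal assumption, not the full schema.
    """
    card = json.loads(raw)
    for field in ("name", "url", "skills"):
        if field not in card:
            raise ValueError(f"Agent Card missing required field: {field}")
    # Index skills by id so a client can check a capability before sending a task
    card["skills_by_id"] = {skill["id"]: skill for skill in card["skills"]}
    return card

def fetch_agent_card(base_url: str) -> dict:
    """Fetch and parse an Agent Card from the agent's well-known URL."""
    with urlopen(base_url.rstrip("/") + WELL_KNOWN_PATH) as resp:
        return parse_agent_card(resp.read().decode("utf-8"))
```

An agent registry is then little more than a collection of parsed cards, refreshed periodically from each agent's well-known endpoint.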
The Task Lifecycle: Stateful, Asynchronous Collaboration
Unlike a simple API call that returns immediately, agent-to-agent collaboration is inherently asynchronous. A task delegated to a legal review agent might take thirty seconds or thirty minutes depending on contract complexity. A2A models this reality with a formal task lifecycle.
Every A2A interaction follows this lifecycle:
1. Task Submission (submitted)
The client agent — the one delegating work — sends a tasks/send request to the server agent’s A2A endpoint. This request includes:
- A unique task ID (generated by the client)
- The task message content (instructions, files, structured data)
- The session context (to maintain continuity across multi-turn interactions)
The server agent acknowledges receipt and transitions the task to submitted status.
2. Working (working)
The server agent begins executing the task. For long-running tasks, it can push incremental status updates back to the client using A2A’s notification mechanism. These updates might include progress indicators, intermediate results, or requests for additional information.
3. Input Required (input-required)
If the server agent reaches a point where it cannot proceed without additional information, it transitions to input-required status and signals back to the client. This is the mechanism for implementing human-in-the-loop workflows within an A2A task — the agent pauses and waits.
4. Completed or Failed (completed / failed)
The task either succeeds, with results returned to the client agent, or fails, with a structured error response that the client agent can interpret and act upon — retrying, escalating, or reporting to a human operator.
The full task state machine is deterministic and auditable. Every state transition is timestamped. Every piece of content exchanged is logged. This is not a nice-to-have for enterprise deployments — it is the foundation of compliance.
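The lifecycle above can be expressed as a small state machine. This sketch covers only the states named in this article; the A2A specification defines additional states (such as cancellation) that are omitted here.

```python
# Legal transitions between the task states described above (a sketch; the
# full A2A specification defines additional states such as cancellation).
TRANSITIONS = {
    "submitted": {"working"},
    "working": {"input-required", "completed", "failed"},
    "input-required": {"working"},   # resumes once the client supplies input
    "completed": set(),              # terminal
    "failed": set(),                 # terminal
}

def advance(state: str, new_state: str) -> str:
    """Move a task to a new state, rejecting transitions the lifecycle forbids."""
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition: {state} -> {new_state}")
    return new_state
```

Rejecting illegal transitions outright, rather than silently recording them, is what keeps the audit trail deterministic: every logged state change is by construction one the lifecycle permits.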
Push vs Pull: How Agents Stay Informed
For short tasks, a simple request-response pattern is sufficient: the client sends the task, the server completes it, the client reads the result. For longer-running tasks, polling repeatedly for updates is inefficient and adds latency.
A2A supports two notification models:
Push (Server-Sent Events / SSE): The server agent streams status updates and partial results back to the client in real time as the task progresses. This is the preferred model when both agents are online and connected.
Pull (Polling): The client agent periodically calls tasks/get to retrieve the current task status and any available results. This is the fallback model for asynchronous environments where the client may not maintain a persistent connection.
Enterprise deployments typically use push notifications for real-time workflows — where human users are waiting — and pull for background batch processing where latency tolerance is higher.
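The pull model reduces to a simple polling loop. In this sketch, `get_task` stands in for the client's `tasks/get` call and is assumed to return a dict with a `"status"` key; the function name and dict shape are assumptions taken from the article's terminology rather than any specific SDK.

```python
import time

def poll_until_terminal(get_task, task_id, interval_s=2.0, timeout_s=600.0):
    """Call tasks/get (via `get_task`) until the task leaves its working states.

    Returns the final task dict, or raises TimeoutError if the deadline passes.
    `input-required` is treated as a stopping point too, since the client must
    act before the server can continue.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        task = get_task(task_id)
        if task["status"] in ("completed", "failed", "input-required"):
            return task
        time.sleep(interval_s)
    raise TimeoutError(f"task {task_id} did not reach a terminal state in {timeout_s}s")
```

The interval and timeout are the tuning knobs the last paragraph alludes to: background batch workflows tolerate long intervals, while anything a human is waiting on should use push instead.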
A Practical Example: Sales Agent Delegates to Legal Agent
Abstract protocols become concrete when you see them in a real workflow. Here is how A2A and MCP work together in a typical enterprise scenario.
The scenario: A sales agent is closing a deal. The customer has requested a non-standard payment clause be added to the standard contract template. Before the sales agent can send the modified contract, the legal team’s policy requires a compliance review. The legal review agent is operated by a separate team on separate infrastructure.
Step 1 — Sales agent discovers the legal agent
The sales agent queries the internal agent registry and finds the Contract Review Agent’s Agent Card. It reads the review_contract skill definition and confirms the agent accepts file inputs and returns structured compliance reports.
Step 2 — Sales agent prepares the contract (MCP in action)
Using its own MCP tools, the sales agent:
- Calls the CRM MCP server to retrieve the client’s account details and contract template
- Calls the Document MCP server to apply the requested payment clause amendment
- Produces a draft contract document ready for review
Step 3 — Sales agent delegates via A2A
The sales agent issues a tasks/send to the Contract Review Agent’s A2A endpoint, attaching the draft contract and specifying the review scope: “Verify GDPR compliance and flag any non-standard payment terms for approval.”
It includes its OAuth bearer token, scoped to allow the legal agent to read the contract but not to modify any CRM records or send any external communications.
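The delegation step can be sketched as the construction of a `tasks/send` request. A2A uses JSON-RPC 2.0 as its envelope; the message-part structure below (a text part plus a base64-encoded file part) is illustrative, and the exact field names should be checked against the A2A specification before use.

```python
import base64
import uuid

def build_send_request(instruction: str, contract_bytes: bytes, session_id: str) -> dict:
    """Build a tasks/send request delegating a contract review.

    JSON-RPC 2.0 envelope; part field names are illustrative, not normative.
    """
    return {
        "jsonrpc": "2.0",
        "id": str(uuid.uuid4()),       # JSON-RPC request id
        "method": "tasks/send",
        "params": {
            "id": str(uuid.uuid4()),   # client-generated task id (lifecycle step 1)
            "sessionId": session_id,   # session context for multi-turn continuity
            "message": {
                "role": "user",
                "parts": [
                    {"type": "text",
                     "text": instruction},
                    {"type": "file",
                     "file": {"name": "draft_contract.pdf",
                              "bytes": base64.b64encode(contract_bytes).decode("ascii")}},
                ],
            },
        },
    }
```

The OAuth bearer token mentioned above travels out of band in the HTTP `Authorization` header, not inside the JSON-RPC payload.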
Step 4 — Legal agent works (MCP in action)
The legal agent receives the task and begins its review. Using its own MCP tools — completely separate from the sales agent’s MCP servers — it:
- Calls the Legal Database MCP server to retrieve current GDPR requirements for payment data processing
- Calls the Policy Repository MCP server to load the company’s internal contract standards
- Calls the Compliance Check MCP server to run automated clause analysis
These MCP calls are entirely internal to the legal agent. The sales agent has no visibility into them and no access to the legal team’s systems. This isolation is a core security property of the A2A architecture.
Step 5 — Legal agent returns results
The legal agent completes its review and transitions the task to completed. It returns a structured compliance report: two clauses pass, one requires amendment, and one requires human legal counsel sign-off before the contract can proceed.
Step 6 — Sales agent acts on results
The sales agent receives the report, updates the CRM with the review outcome, and — because one item requires human approval — creates a task for the legal counsel’s review queue rather than proceeding autonomously. The HITL gate is enforced through the A2A workflow, not bolted on afterward.
The entire interaction is logged, timestamped, and auditable. The scope token means the legal agent could only do what it was authorised to do. No custom integration code was written between the two teams’ systems.
Security and Agent Identity
Security in A2A is not an afterthought — it is the load-bearing architecture.
The protocol is built on OAuth 2.0 and OIDC (OpenID Connect) for agent authentication. This means:
Agent identity is verified. Each agent has a cryptographically backed identity. When one agent calls another, the receiving agent can verify who is calling and whether that caller is authorised to submit tasks.
Delegation is scoped. When an orchestrating agent delegates to a sub-agent, it issues a scoped access token — not its own full identity credentials. That token specifies exactly what the sub-agent is permitted to do. The legal agent in our example could not access the sales CRM even if it wanted to, because the token simply did not include that scope.
Delegation chains are auditable. A2A supports the full OAuth 2.0 delegation chain, so if Agent A delegates to Agent B which delegates to Agent C, every step in that chain is recorded with cryptographic provenance. For regulated industries, this is the audit trail that compliance officers require.
Agent Cards declare security requirements. A server agent’s Agent Card specifies what authentication schemes it supports and what scopes are required for each skill. Client agents must satisfy these requirements to submit tasks — there is no way to call an A2A agent anonymously (unless it explicitly permits it, which is appropriate only for public-facing discovery endpoints).
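Server-side scope enforcement can be sketched as a check of the caller's token scopes against per-skill requirements. The scope names and the `SKILL_SCOPES` table here are hypothetical; the A2A-relevant point is only the shape of the check.

```python
# Hypothetical per-skill scope requirements, mirroring what an Agent Card
# could declare for each skill (scope names are illustrative, not normative).
SKILL_SCOPES = {
    "review_contract": {"contracts:read"},
    "check_compliance": {"contracts:read", "policies:read"},
}

def authorise(skill_id: str, token_scopes: set) -> bool:
    """Accept a task only if the caller's token covers every required scope."""
    required = SKILL_SCOPES.get(skill_id)
    if required is None:
        return False  # unknown skill: refuse rather than guess
    return required <= token_scopes  # subset test: all required scopes present
```

This is why the legal agent in the earlier example could not touch the CRM: the check is on what the token grants, not on what the calling agent claims to be.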
Enterprise deployments add additional governance layers on top of A2A’s protocol-level security: rate limiting, per-agent spending caps, behaviour monitoring, anomaly detection, and human approval gates for high-risk delegation chains. The protocol provides the foundation; your governance layer provides the enforcement.
A2A in the Enterprise Ecosystem
The breadth of the A2A launch coalition is significant. This is not a protocol being adopted by a few startups — it is backed by the major enterprise software platforms that large organisations already run.
Salesforce Agentforce supports A2A, meaning Salesforce’s native agents can collaborate with agents running on other platforms without custom connectors.
SAP Joule supports A2A, enabling SAP’s domain-specific agents (finance, supply chain, HR) to participate in cross-system agentic workflows.
ServiceNow Now Assist supports A2A, making it possible for IT service management agents to delegate to and receive tasks from agents across the enterprise.
Workday supports A2A for HR and finance agents, enabling cross-domain workflows where a hiring agent can collaborate with a finance agent to approve headcount without manual handoffs.
The practical implication for enterprise architecture decisions made in 2026 is clear: A2A compatibility is becoming a baseline requirement for enterprise AI agent platforms, not a differentiating feature. If a platform you are evaluating does not support A2A, you are building yourself into an integration corner.
Where A2A Sits in the Protocol Stack
Understanding how A2A and MCP relate to each other is essential for designing coherent agentic architectures.
MCP (the tool layer): Connects an individual AI agent to external systems — databases, APIs, file stores, communication platforms. An agent uses MCP to read data, trigger actions, and interact with the world. MCP is an inward-facing protocol — it extends what a single agent can do.
A2A (the collaboration layer): Connects AI agents to other AI agents — enabling delegation, coordination, and multi-agent orchestration. An agent uses A2A to hand work to another agent that has specialised capabilities, access to different systems, or authority in a different domain. A2A is an outward-facing protocol — it enables agents to work collectively.
The combination means you can build architectures that were simply not possible with monolithic AI systems: an orchestrating agent that manages a complex workflow by dynamically routing sub-tasks to the best-qualified specialised agent for each step, with each specialised agent using its own MCP tools to do its work, all within a unified audit and governance framework.
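At its core, the orchestration pattern just described is routing by advertised skill. A minimal sketch against the registry of Agent Cards introduced earlier (the registry shape and `route_task` are assumptions of this sketch):

```python
def route_task(registry, required_skill):
    """Return the A2A endpoint URL of the first registered agent that
    advertises the required skill, or None if no agent qualifies.

    `registry` is a list of parsed Agent Cards as sketched earlier.
    """
    for card in registry:
        if any(skill.get("id") == required_skill for skill in card.get("skills", [])):
            return card["url"]
    return None
```

A production router would also filter on authentication requirements, input/output modes, and governance policy before selecting an agent — but the discovery primitive is this simple because Agent Cards make capabilities machine-readable.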
This is the architecture pattern that underpins every serious enterprise multi-agent deployment in 2026. It is also the architecture pattern we run in production at d-code — our 8-agent Inscape system uses exactly this layered approach, with agent-to-agent coordination at the orchestration layer and MCP-style tool access at the execution layer.
Klawty OS and Native A2A Support
Klawty OS was designed from the ground up for multi-agent systems, and A2A support is built into the core — not added as an integration point.
Every agent deployed on Klawty automatically publishes a well-formed Agent Card at a discoverable HTTPS endpoint. The Agent Card is generated from the agent’s skill and tool definitions, so it stays in sync with the agent’s actual capabilities without manual maintenance. Operators can configure which skills are publicly discoverable and which are internal.
Each Klawty agent can act as both an A2A client — submitting tasks to other agents — and an A2A server — receiving and executing delegated tasks. The framework handles the protocol mechanics: task ID generation, state machine transitions, push notification delivery, token validation, and delegation scope enforcement.
On top of A2A’s native security model, Klawty’s governance layer adds:
- Delegation audit trails: Every A2A interaction is logged with full context — which agent delegated, which agent received, what scope was granted, what the outcome was
- Tiered autonomy for delegations: Just as individual agent actions have autonomy tiers (AUTO / PROPOSE / CONFIRM / BLOCK), delegations to external agents can be classified by risk level and require human approval above a threshold
- Agent registry integration: Klawty maintains an internal registry of all known Agent Cards, enabling dynamic agent discovery without hardcoded endpoint URLs
- Cross-org federation: For enterprise clients with multiple Klawty deployments — or with external A2A-compliant agents from other vendors — Klawty can federate Agent Card discovery across organisational boundaries with configurable trust policies
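The tiered-autonomy idea can be illustrated with a small gate. This is not Klawty's actual API — only a sketch of the AUTO / PROPOSE / CONFIRM / BLOCK behaviour the list above describes, with the PROPOSE semantics (proceed, but record the intent for review) assumed for illustration.

```python
from enum import Enum

class Tier(Enum):
    AUTO = "auto"        # delegate without human involvement
    PROPOSE = "propose"  # delegate, recording intent for review (assumed semantics)
    CONFIRM = "confirm"  # hold until a human explicitly approves
    BLOCK = "block"      # never delegate at this risk level

def gate_delegation(tier: Tier, human_approved: bool = False) -> bool:
    """Return True if a delegation classified at this tier may proceed."""
    if tier is Tier.BLOCK:
        return False
    if tier is Tier.CONFIRM:
        return human_approved
    return True  # AUTO and PROPOSE both proceed
```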
The result is that implementing a multi-agent workflow on Klawty does not require writing A2A protocol code. You define your agents, their skills, and their collaboration topology — Klawty handles the protocol layer.
What This Means for Organisations Building in 2026
The emergence of A2A as a stable, broadly-adopted standard changes the calculus for enterprise AI architecture in several ways.
Specialisation becomes viable. Without a standard collaboration protocol, building highly specialised agents is expensive because each one requires custom integration with every other agent it needs to work with. With A2A, you can build a best-in-class legal review agent, deploy it once, and have it accessible to any other A2A-compliant agent in your organisation — or across organisations.
Vendor lock-in risk decreases. Because A2A is vendor-neutral, you are not locked into a single AI platform for your entire agent fleet. Your Salesforce agent can collaborate with your internally-built compliance agent, which can collaborate with your external partner’s risk assessment agent — all through the same standard protocol.
Governance becomes tractable. One of the hardest problems in scaling multi-agent systems is maintaining visibility and control as the number of agents and interactions grows. A2A’s structured task lifecycle, combined with OAuth-based identity and scoped delegation, gives you the primitives needed to implement meaningful governance at scale.
Cross-organisational AI workflows become practical. A2A enables agent-to-agent collaboration across organisational boundaries — your procurement agent working directly with a supplier’s inventory agent, with appropriate authentication and scoping on both sides. This is a category of automation that simply did not exist before standardised inter-agent protocols.
The organisations that build their agent architectures on standard protocols today — MCP for tool access, A2A for collaboration — will have dramatically lower integration costs, higher flexibility, and stronger compliance postures than those building proprietary agent silos. The technology choices you make in 2026 will define your AI architecture for years.
Getting Started with A2A
For teams new to A2A, the practical starting point is to audit your existing and planned agent deployments against three questions:
Which of your agents could benefit from specialised capabilities that another agent already has? The answer to this question maps out the A2A collaboration graph your architecture should support.
Which of your agents need to be accessible to agents operated by other teams or organisations? These agents need to publish Agent Cards and implement the A2A server role.
What are the security boundaries between your agent domains? The answer drives your scope and delegation policy design — which agents can delegate to which other agents, with what permissions, and with what approval requirements.
If you are building on Klawty OS, these questions map directly to configuration decisions the platform guides you through. If you are building on other infrastructure, d-code’s architecture team can help you design the A2A topology and governance model before you commit to implementation.
The protocol is ready. The ecosystem is there. The question is whether your architecture is designed to take advantage of it.
Build Production-Grade Multi-Agent Systems
d-code designs, builds, and operates multi-agent systems that use A2A and MCP to deliver real enterprise value — not proof-of-concept demos, but production systems running continuously at scale.
If you are evaluating how A2A fits into your AI architecture, or you are ready to build a multi-agent system with proper collaboration, governance, and compliance foundations, talk to our team. We bring the protocol expertise, the production experience, and the governance frameworks that make the difference between a pilot that fails and a system that compounds value over time.
Explore our Agent Development services or contact us directly to discuss your architecture.