Agentic Protocols Compared: MCP, A2A, ACP, and the Protocol Landscape
MCP connects models to tools. A2A connects agents to agents. ACP standardizes agent communication. Here is how they differ, when to use each, and why observability across all of them matters.
Why Protocols Matter for Agents
The AI ecosystem is fragmenting. There are hundreds of foundation models, thousands of tool providers, and a rapidly growing population of autonomous agents — each built with different frameworks, hosted on different infrastructure, and designed for different tasks. Without standardized protocols governing how these components communicate, every integration is bespoke. Every model-to-tool connection requires custom glue code. Every agent-to-agent handoff demands a proprietary adapter. The result is an ecosystem that scales in capability but not in interoperability.
This is a problem the software industry has solved before. HTTP standardized how clients and servers exchange documents on the web. REST established conventions for how applications expose and consume APIs over HTTP. gRPC defined a high-performance contract between services. SMTP standardized email delivery. In each case, a protocol emerged not because anyone mandated it, but because the cost of ad-hoc integration became untenable. The AI agent ecosystem has reached that inflection point.
Agent communication operates across three distinct layers, each requiring its own protocol abstraction. The first layer is model-to-tool connectivity — how a language model discovers, invokes, and receives results from external tools and data sources. The second layer is agent-to-agent delegation — how autonomous agents discover each other's capabilities, assign tasks, stream progress, and exchange structured results. The third layer is agent-to-human interaction — how agents communicate with people, request approvals, and present findings. Each layer has different requirements for discovery, state management, streaming, and error handling.
Three protocols have emerged to address these layers: Anthropic's Model Context Protocol (MCP) for model-to-tool connectivity, Google's Agent2Agent (A2A) protocol for inter-agent task delegation, and IBM's Agent Communication Protocol (ACP) for structured agent messaging. They are not competitors — they operate at different layers of the stack and are designed to be complementary. Understanding what each protocol does, where it fits, and how they interact is essential for anyone building production agentic systems.
Protocols do not win by being theoretically elegant. They win by reducing the marginal cost of integration to near zero. HTTP won because adding a new web page cost nothing extra once you had a server. MCP, A2A, and ACP will succeed to the extent that they make connecting a new tool, agent, or communication channel trivially cheap compared to building a custom integration.
Model Context Protocol (MCP)
MCP is Anthropic's open standard for connecting language models to external tools, data sources, and services. It addresses the most fundamental integration problem in the AI stack: how does a model access the outside world? Before MCP, every tool integration required custom code — a function definition for OpenAI, a different format for Anthropic, another for Google, yet another for open-source frameworks. MCP replaces this fragmentation with a single, universal interface. Anthropic calls it the “USB-C for AI” — a standardized connector that works regardless of which model or tool is on either end.
The architecture follows a client-server pattern. MCP servers expose capabilities — tools that perform actions, resources that provide data, and prompts that template common interactions. MCP clients (typically AI applications or IDE integrations) discover these capabilities and present them to language models. The model decides which tools to call based on the user's request and the available capabilities. The client executes the tool call against the server and returns the result to the model. This separation means tool developers build their integration once as an MCP server, and it works with every MCP-compatible client automatically.
MCP defines four core primitives. Resources represent data that the model can read — files, database records, API responses, or any structured content. Resources are identified by URIs and can be listed, read, and subscribed to for updates. Tools represent actions that the model can perform — searching a database, sending an email, creating a ticket, executing code. Tools are defined with JSON Schema input parameters and return structured results. Prompts are reusable templates that bundle instructions, context, and tool configurations for common tasks. Sampling provides a controlled mechanism for servers to request model completions through the client, enabling recursive and agentic patterns while keeping the human in the loop.
Transport is flexible. For local integrations — tools running on the same machine as the client — MCP uses stdio (standard input/output), which is simple, fast, and requires no network configuration. For remote integrations — tools running on external servers — MCP uses HTTP with Server-Sent Events (SSE) for streaming responses. The protocol layer sits on top of JSON-RPC 2.0, providing a familiar request-response pattern with built-in error handling and capability negotiation.
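Concretely, a tool invocation on the wire is just a JSON-RPC 2.0 exchange. The sketch below shows the shape of a `tools/call` request and its response; the tool name, arguments, and result text are invented for illustration:

```python
import json

# A JSON-RPC 2.0 request invoking an MCP tool via the "tools/call" method.
# Tool name and arguments are illustrative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_database",
        "arguments": {"query": "open tickets", "limit": 10},
    },
}

# A successful response echoes the request id and carries the tool result
response = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {"content": [{"type": "text", "text": "3 tickets found"}]},
}

wire = json.dumps(request)  # what actually travels over stdio or HTTP
```

The same message shape is used over both transports; only the framing differs (newline-delimited messages on stdio, HTTP bodies with SSE for streamed responses).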
Real-world adoption has been rapid. VS Code, Cursor, Claude Desktop, and numerous other AI applications support MCP natively. The ecosystem already includes servers for databases (PostgreSQL, SQLite), development tools (GitHub, GitLab, Jira), cloud services (AWS, GCP), file systems, web scraping, and hundreds of other integrations. When a developer writes an MCP server for their internal API, that server is immediately usable by every MCP-compatible client — no per-client integration work required.
```python
# Example MCP server exposing a tool, using the high-level FastMCP API
# from the official Python SDK. `db` is a placeholder for your own
# database client.
from mcp.server.fastmcp import FastMCP

server = FastMCP("my-tools")

@server.tool()
async def search_database(query: str, limit: int = 10) -> list[dict]:
    """Search the internal database."""
    return await db.search(query, limit=limit)

# Client connects and discovers tools automatically
```

When building MCP servers, design tools with clear, descriptive names and comprehensive JSON Schema definitions. The model's ability to correctly invoke your tool depends entirely on how well the tool's name, description, and parameter schema communicate its purpose. Vague names and missing descriptions lead to misuse. Precise names and thorough schemas lead to reliable tool calls.
Agent2Agent Protocol (A2A)
A2A addresses a fundamentally different problem than MCP. Where MCP connects models to tools, A2A connects agents to agents. The distinction matters because agents are not tools. A tool is a stateless function that takes input and returns output. An agent is an autonomous entity with its own capabilities, state, decision-making logic, and potentially its own set of tools. Communicating with an agent requires discovery, negotiation, task management, and lifecycle tracking — none of which the model-to-tool paradigm was designed to handle.
Google designed A2A around four core concepts. Agent Cards are JSON metadata documents that describe an agent's capabilities, skills, endpoint URL, and authentication requirements. They serve as discovery metadata — the equivalent of a service's API documentation, but machine-readable and standardized. An orchestrating agent can fetch another agent's card to understand what it can do before deciding whether to delegate work to it. Agent Cards are typically hosted at a well-known URL (/.well-known/agent.json), making discovery as simple as fetching a URL.
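A minimal sketch of what such a card might contain — the field names approximate the A2A Agent Card schema, and the agent, endpoint, and skills are entirely invented:

```python
# Illustrative A2A Agent Card, as it might be served from a well-known URL
# such as https://travel-agent.example.com/.well-known/agent.json.
# All values here are hypothetical.
agent_card = {
    "name": "travel-planner",
    "description": "Plans multi-city itineraries and books travel",
    "url": "https://travel-agent.example.com/a2a",
    "version": "1.0.0",
    "capabilities": {"streaming": True, "pushNotifications": True},
    "skills": [
        {
            "id": "plan-itinerary",
            "name": "Plan itinerary",
            "description": "Build a day-by-day travel plan from constraints",
        }
    ],
}

def can_stream(card: dict) -> bool:
    """An orchestrator inspects a card before deciding how to delegate."""
    return card.get("capabilities", {}).get("streaming", False)
```

The orchestrator reads the card once, caches it, and uses the declared capabilities to choose between streaming updates and push notifications for the tasks it delegates.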
Tasks are the unit of work in A2A. When an agent delegates work to another agent, it creates a task with a description of what needs to be done. The receiving agent processes the task and updates its status through a defined lifecycle: submitted → working → completed (or failed, canceled). The delegating agent can poll for status updates or receive them via streaming. Tasks can include input data, constraints, and context that the receiving agent needs to complete the work.
Artifacts are the structured outputs that a task produces. Unlike simple return values, artifacts can be rich, typed content — documents, images, data tables, code, or any structured format. Multiple artifacts can be produced by a single task, and they can be streamed incrementally as the agent works. This design acknowledges that agent work often produces complex, multi-part results that cannot be represented as a single return value.
Push Notifications enable asynchronous communication. Long-running tasks don't require the delegating agent to hold a connection open. Instead, the receiving agent can push status updates, partial results, and completion notifications to a callback URL. This is critical for production systems where tasks may take minutes or hours to complete and where the delegating agent has other work to do in the meantime.
The pattern maps naturally to service mesh architectures in microservices. Just as a service mesh provides discovery, load balancing, and communication between microservices, A2A provides discovery, delegation, and communication between agents. The Agent Card is analogous to a service registry entry. The task lifecycle mirrors request-response patterns with long-running operation support. Push notifications parallel webhook callbacks. Teams already familiar with microservice orchestration will find A2A's patterns intuitive.
A2A is designed for trust boundaries. When you delegate a task to another agent — especially one operated by a different team or organization — you need structured contracts, capability discovery, and lifecycle management. MCP assumes a trusted, tight coupling between model and tool. A2A assumes autonomous entities that must negotiate and coordinate.
Agent Communication Protocol (ACP)
ACP is IBM's contribution to the agentic protocol landscape, and it occupies a distinct niche. While MCP standardizes tool access and A2A standardizes task delegation, ACP focuses on the message-passing semantics between agents in multi-agent systems. It is less concerned with how an agent discovers or delegates to another agent, and more concerned with the structure and semantics of the messages they exchange once communication is established.
The core abstraction in ACP is the structured message envelope. Every message between agents follows a standardized format that includes the sender identity, recipient identity, message type, conversation context, and payload. This envelope structure ensures that agents can interpret messages consistently regardless of implementation language, framework, or hosting environment. The message type system is extensible — ACP defines standard types for requests, responses, proposals, acceptances, rejections, and informational messages, but agents can define custom types for domain-specific interactions.
Capability advertisement in ACP differs from A2A's Agent Card approach. Rather than publishing a static metadata document, ACP agents advertise their capabilities through the message protocol itself. An agent can query another agent's capabilities, receive a structured response describing what the agent can do, and then formulate requests accordingly. This dynamic capability discovery is better suited to environments where agent capabilities change over time or depend on context.
Conversation threading is a first-class concept in ACP. Multi-agent interactions are rarely single request-response pairs. They involve back-and-forth exchanges, clarifications, counter-proposals, and incremental refinements. ACP maintains conversation context across messages, allowing agents to reference prior messages, build on previous exchanges, and maintain coherent multi-turn dialogues. This threading model is essential for complex coordination tasks where agents must negotiate, compromise, and converge on solutions.
Negotiation patterns are where ACP most clearly differentiates itself. In many multi-agent scenarios, agents must agree on approaches before executing. One agent proposes a plan, another evaluates it against its own constraints, suggests modifications, and the first agent either accepts or counter-proposes. ACP provides structured patterns for these negotiations — proposal, evaluation, counter-proposal, acceptance, rejection — that are reusable across domains. These patterns draw from decades of research in multi-agent systems and contract net protocols.
The distinction between ACP and A2A is subtle but important. A2A is task-centric: agent A creates a task for agent B, and agent B executes it. The relationship is hierarchical — one agent delegates, another performs. ACP is conversation-centric: agents exchange messages as peers, negotiating and collaborating rather than delegating and executing. In practice, many multi-agent systems need both — task delegation for clear-cut work assignments and conversation for collaborative problem-solving.
If your multi-agent system primarily involves one orchestrator assigning tasks to specialist agents, A2A is the better fit. If your agents are peers that need to negotiate, debate, or collaboratively refine outputs, ACP's conversation-centric model provides richer semantics for those interactions.
Comparing the Protocols
A side-by-side comparison clarifies how each protocol fits into the agentic architecture:
| Aspect | MCP | A2A | ACP |
|---|---|---|---|
| Primary Focus | Model ↔ Tool connectivity | Agent ↔ Agent delegation | Agent ↔ Agent messaging |
| Originated By | Anthropic | Google | IBM |
| Key Metaphor | USB-C for AI | Service mesh for agents | Message bus for agents |
| Discovery | Server exposes capabilities | Agent Cards | Capability advertisement |
| Communication | JSON-RPC over stdio/HTTP | HTTP + JSON + SSE | Structured envelopes |
| Statefulness | Stateful sessions | Task lifecycle | Conversation threads |
| Streaming | SSE for server events | SSE for task updates | Streaming support |
| Best For | Tool integration | Cross-org agent orchestration | Multi-agent conversations |
| Maturity | Most mature, wide adoption | Growing adoption | Early stage |
The critical takeaway is that these protocols are complementary, not competing. They operate at different layers of the agentic stack and solve different problems. MCP handles the model-to-tool layer — how a language model interacts with external capabilities. A2A handles the agent-to-agent delegation layer — how autonomous agents assign work to each other across trust boundaries. ACP handles the agent-to-agent conversation layer — how agents engage in structured dialogue, negotiation, and collaborative reasoning.
A production agentic platform might use all three simultaneously. An orchestrator agent discovers specialist agents via A2A Agent Cards, delegates tasks using A2A's task lifecycle, and those specialist agents use MCP to connect to the tools they need to complete their work. When agents need to collaboratively refine a complex output, they switch to ACP's conversation patterns for negotiation. The protocols compose naturally because they were designed for different layers of the stack.
When to Use What
Choosing the right protocol depends on what you are building and where you are in the maturity curve of your agentic system. Here is a decision framework:
Use MCP when you need to connect models to tools, databases, and external services. If you are building a tool server that multiple AI clients should be able to consume — an internal API wrapper, a database query interface, a document retrieval service — MCP is the clear choice. It has the broadest adoption, the most mature tooling, and the simplest integration path. Any AI application that supports MCP can immediately use your tool server without additional integration work.
Use A2A when you are orchestrating agents across organizational boundaries, building agent marketplaces, or delegating tasks between autonomous agents that operate independently. A2A shines when there is a clear principal-agent relationship — one agent needs work done and another agent can do it. The task lifecycle, artifact model, and push notification system are designed for this delegation pattern. If you are building an agent that needs to discover and use other agents the way a microservice discovers and calls other microservices, A2A provides the right abstractions.
Use ACP when you are building multi-agent systems where agents need structured conversation, negotiation, and collaborative problem-solving. If your agents are peers that must agree on approaches, evaluate each other's proposals, and converge on solutions through dialogue, ACP's conversation-centric model provides richer semantics than A2A's task-centric model. Research teams, complex planning systems, and multi-stakeholder decision processes are natural fits.
Use all three when you are building a full agentic platform that connects models to tools (MCP), orchestrates agent teams across boundaries (A2A), and enables rich agent dialogue for collaborative tasks (ACP). The protocols are designed to be layered, not exclusive.
In practice, most teams start with MCP for tool connectivity because it has the most adoption and the most immediate ROI. Connecting your models to your internal tools and databases via MCP provides tangible value on day one. As the system evolves toward multi-agent architectures — where specialized agents handle different aspects of a workflow and need to coordinate — A2A becomes relevant for the orchestration layer. ACP enters the picture when the coordination requires genuine dialogue rather than simple task delegation. The progression from MCP to MCP + A2A to MCP + A2A + ACP mirrors the typical maturity curve of an agentic system.
Do not over-architect your protocol stack. Start with MCP for tool connectivity. Add A2A when you genuinely need agent-to-agent delegation. Add ACP when your agents need to negotiate rather than just delegate. Each layer adds value, but each also adds complexity. Match the protocol to the actual coordination patterns your system requires, not the ones you might need someday.
Observability Across Protocols
Regardless of which protocol you adopt — or which combination you layer together — observability is non-negotiable. Every MCP tool call, every A2A task delegation, every ACP message exchange represents a decision point in your agentic system. Without visibility into these interactions, debugging failures, diagnosing performance issues, and understanding agent behavior become exercises in guesswork.
The challenge is that each protocol introduces its own observability surface area. MCP tool calls need to be traced with their input arguments, output results, latency, and token consumption. A2A task delegations need to be traced across agent boundaries — who delegated to whom, what was the task, how long did it take, what artifacts were produced. ACP conversations need to be traced at the message level — the full dialogue history, negotiation steps, and final resolution. A protocol-agnostic observability layer must capture all of these interaction patterns and correlate them into a unified trace.
This is exactly what TuringPulse provides. The SDK instruments protocol interactions at the boundary layer, capturing the full context of each call without requiring changes to your protocol implementation. Tool calls, task delegations, and message exchanges are all recorded as spans in a distributed trace, giving you end-to-end visibility across the entire agentic workflow — from the initial user request through model reasoning, tool invocation, agent delegation, and final response.
```python
from turingpulse_sdk import init, instrument

init(api_key="sk_...", workflow_name="multi-agent-pipeline")

@instrument(name="MCP Tool Router")
async def handle_mcp_request(tool_name: str, args: dict) -> dict:
    # TuringPulse captures: tool name, args, result, latency, tokens
    result = await mcp_client.call_tool(tool_name, args)
    return result

@instrument(name="A2A Task Delegator")
async def delegate_to_agent(agent_card: dict, task: dict) -> dict:
    # TuringPulse traces: delegation chain, agent capabilities, task lifecycle
    result = await a2a_client.create_task(agent_card["url"], task)
    return result
```

Because the @instrument decorator is protocol-agnostic, you do not need separate observability tools for each protocol. Whether a span represents an MCP tool call, an A2A task delegation, or an ACP negotiation exchange, it appears in the same trace view. You can search by span name (e.g., “MCP Tool Router”), inspect latency across delegation chains, and add KPI tracking to any instrumented function — using the same SDK capabilities you already use for your agent code.
As the protocol landscape matures and production systems adopt multi-protocol architectures, the ability to observe across all protocol layers from a single pane of glass becomes a competitive advantage. Teams that instrument early — before their agentic systems become complex enough to be opaque — build the operational muscle memory and data history that makes debugging, optimization, and governance possible at scale. Teams that wait until they have a production incident to think about observability discover that retrofitting instrumentation into a running multi-agent system is significantly harder than building it in from the start.
Protocols define how agents communicate. Observability reveals whether that communication is working. You cannot debug what you cannot see — and as agentic systems adopt multi-protocol architectures, the observability layer that spans all protocols becomes the single most important operational investment.