
MCP Tool Governance: Control What Your AI Agents Can Do

The Model Context Protocol (MCP) lets AI agents call external tools — APIs, databases, file systems, code executors. MCP tool governance intercepts and evaluates every tool call before and after execution, giving you policy enforcement, PII protection, and a complete audit trail at the action layer where it matters most.

The problem with unmonitored tool calls

AI agents use MCP to invoke external tools — REST APIs, SQL databases, file systems, code execution environments, and hundreds of third-party integrations. Every tool invocation is an action surface with potential for PII leakage, compliance violations, runaway costs, and data exfiltration.

Unlike LLM output filtering, which operates on generated text after the fact, tool governance must operate at the action layer. By the time an agent's output reaches your guardrails, it may have already executed a tool that reads sensitive data, writes to production, or invokes a costly external API. The only way to prevent these outcomes is to intercept tool calls before they reach the MCP server.

Without governance at the tool layer, you have no visibility into what your agent is doing with external systems. You cannot block dangerous invocations, detect anomalous usage patterns, or produce the audit trails that regulators and compliance teams require.

How the MCP Proxy governs tool calls

The TuringPulse MCP Proxy sits between the AI client (Cursor, Claude Desktop, or any MCP-compatible application) and the MCP server. From the client's perspective, it behaves exactly like the underlying server. From the MCP server's perspective, it receives standard requests from what appears to be the client. The proxy is transparent — the server doesn't know it exists.

When a tool call flows through the proxy, it follows a five-step evaluation flow:

  1. Evaluate arguments — Before forwarding to the server, the proxy runs policy checks on every argument: PII scanning, regex pattern matching, keyword filters, and tool-specific rules. If any policy fails, the call can be blocked or flagged.
  2. Block, flag, or allow — Depending on the policy configuration, the proxy either blocks the call and returns a controlled error, flags it for audit while allowing it to proceed, or allows it without modification.
  3. Forward to server — If allowed, the proxy forwards the tool call to the MCP server and waits for the result.
  4. Evaluate results — Before returning the result to the agent, the proxy applies the same policy evaluation to the response. PII in tool results can be blocked or flagged before it reaches the model context.
  5. Record telemetry — Every tool call, policy decision, latency, and outcome is emitted as a trace span, giving you full observability in the TuringPulse dashboard.

This flow happens synchronously on every tool call. There is no batch processing or sampling — governance is applied in real time, at the point of execution.
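In pseudocode, the five-step flow might look like the sketch below. All names, patterns, and policy logic here are illustrative assumptions, not the actual proxy implementation:

```python
# Hypothetical sketch of the proxy's evaluation loop.
import re
import time

SSN_RE = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # illustrative PII pattern


def evaluate(payload: dict) -> str:
    """Toy policy check: return 'block', 'flag', or 'allow'."""
    text = " ".join(str(v) for v in payload.values())
    if SSN_RE.search(text):
        return "block"
    if "confidential" in text.lower():
        return "flag"
    return "allow"


def govern_tool_call(call: dict, forward) -> dict:
    span = {"tool": call["tool"], "start": time.time()}  # 5. telemetry span
    decision = evaluate(call["arguments"])               # 1. evaluate arguments
    span["arg_decision"] = decision
    if decision == "block":                              # 2. block, flag, or allow
        span["outcome"] = "blocked"
        return {"error": "blocked by policy", "span": span}
    result = forward(call)                               # 3. forward to server
    span["result_decision"] = evaluate({"r": result})    # 4. evaluate result
    span["outcome"] = "ok"
    return {"result": result, "span": span}
```

A blocked call never reaches `forward`, which is what makes this different from output filtering: the action is prevented, not just redacted afterward.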

What gets intercepted

The MCP Proxy captures and evaluates four layers of every tool interaction. Each layer maps to a specific governance concern.

Tool Arguments

PII scan, regex match, and keyword filter on every argument before execution. If an agent attempts to pass a social security number, API key, or sensitive file path to a tool, the proxy can block or flag it before the call reaches the server.

Argument-level rules can be tool-specific — for example, enforcing that a SQL tool only receives read-only queries, or that a file tool never accesses paths outside a sandbox.
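A read-only SQL rule of the kind described above could be sketched as follows. The function name and rule shape are hypothetical, not the TuringPulse policy API:

```python
# Hypothetical tool-specific argument rule: allow only read-only SQL.
import re

WRITE_KEYWORDS = re.compile(
    r"\b(insert|update|delete|drop|alter|truncate|create)\b", re.IGNORECASE
)


def sql_readonly_rule(arguments: dict) -> str:
    """Return 'allow' only for queries that look read-only."""
    query = arguments.get("query", "")
    if WRITE_KEYWORDS.search(query):
        return "block"
    # Require the statement to start with SELECT or WITH.
    if not re.match(r"\s*(select|with)\b", query, re.IGNORECASE):
        return "block"
    return "allow"
```

A production rule would likely parse the SQL rather than pattern-match it, but the shape is the same: inspect one argument field, return a decision.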

Tool Results

Same policy evaluation on results before returning to the agent. A tool that fetches customer records may return PII — the proxy can block or flag that content before it enters the model context, preventing downstream leakage in generated responses.

Result policies are especially important for tools that query databases, call external APIs, or read from file systems where you don't control the source data.
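Result-side screening can be sketched like this; the patterns and redaction format are illustrative only:

```python
# Sketch of result-side PII screening with optional redaction.
import re

PII_PATTERNS = {
    "email": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "ssn": re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}


def screen_result(result: str, mode: str = "redact"):
    """Return (possibly redacted result, whether PII was found)."""
    found = False
    for name, pattern in PII_PATTERNS.items():
        if pattern.search(result):
            found = True
            if mode == "redact":
                result = pattern.sub(f"[{name.upper()} REDACTED]", result)
    return result, found
```

Because this runs before the result enters the model context, redacted values never appear in later generations, no matter what the agent does with the tool output.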

Tool Metadata

Tool name, server, latency, and invocation frequency — tracked as trace spans in the TuringPulse observability platform. Every tool call appears in your trace view with full context: which workflow, which agent, which arguments were passed, and how long the server took to respond.

Metadata enables KPI rules, drift detection, and anomaly alerts on tool usage patterns — for example, flagging when a tool is invoked an order of magnitude more often than usual, or when latency crosses a threshold.
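An order-of-magnitude frequency check over metadata spans might look like this; the span fields, baseline source, and threshold are all assumptions for illustration:

```python
# Sketch: flag tools invoked at >= `factor` times their baseline rate.
from collections import Counter


def frequency_anomalies(spans: list, baseline: dict, factor: float = 10.0):
    """Return tools whose invocation count crosses factor * baseline."""
    counts = Counter(span["tool"] for span in spans)
    return [
        tool for tool, n in counts.items()
        if n >= factor * baseline.get(tool, 1)
    ]
```

In practice the baseline would come from historical trace data rather than a hand-written dict, but the comparison is the same one the anomaly rule performs.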

Enforcement Decisions

Every policy evaluation logged with the decision (block, flag, allow), the context that triggered it, and a full audit trail. For compliance and incident response, you can answer: "Was this tool call blocked? Why? What rule fired? What was the agent trying to do?"

Enforcement logs are queryable and connected to traces, so you can navigate from a blocked call to the full execution context of the agent run.
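The shape of such a log entry might resemble the following; the field names are hypothetical, not the actual enforcement log schema:

```python
# Hypothetical shape of one enforcement log entry, linked to its trace.
from dataclasses import dataclass, asdict


@dataclass
class EnforcementDecision:
    trace_id: str   # links back to the agent run's trace
    tool: str       # which tool was invoked
    decision: str   # "block" | "flag" | "allow"
    rule: str       # which policy fired
    context: str    # what the agent was trying to do


record = EnforcementDecision(
    trace_id="tr-123",
    tool="sql.query",
    decision="block",
    rule="pii.ssn",
    context="SELECT ssn FROM users",
)
```

Keeping the trace ID on every decision is what makes the navigation described above possible: from any blocked call, you can pivot to the full run.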

Two deployment modes

The MCP Proxy supports two deployment modes, each suited to a different stage of the development lifecycle.

CLI mode — Local development

For Cursor, Claude Desktop, and other MCP clients running on a developer machine, the proxy runs as a local CLI process. You install it via pip, then configure your IDE's MCP settings to launch tp-mcp-proxy wrap with your MCP server as the wrapped subprocess; governance then runs transparently on every tool call. Telemetry is sent to the TuringPulse ingestion endpoint; policies are fetched from the cloud.

CLI mode is ideal for development and testing — you get full policy enforcement and observability without changing how you use your tools.
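In a client that uses the common mcpServers configuration format (as Cursor and Claude Desktop do), the wrapped-subprocess setup might look like the fragment below. The exact tp-mcp-proxy argument syntax and the wrapped server are illustrative assumptions:

```json
{
  "mcpServers": {
    "filesystem": {
      "command": "tp-mcp-proxy",
      "args": ["wrap", "npx", "-y", "@modelcontextprotocol/server-filesystem", "/tmp"]
    }
  }
}
```

From the IDE's point of view nothing changes: it still launches one command and speaks MCP over stdio; the proxy handles policy evaluation in between.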

Cloud mode — Production

For production deployments, the MCP Proxy runs as a sidecar or standalone service in Kubernetes. Your AI agents connect to the proxy instead of directly to MCP servers; the proxy forwards requests to the appropriate server after policy evaluation. This centralizes governance for all agent traffic, regardless of which client or environment is making the call.

Cloud mode gives you consistent policy enforcement, multi-tenant isolation, and the ability to scale governance independently of your MCP servers.
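A sidecar arrangement could be sketched as the Kubernetes fragment below; the image names, port, and environment variable are hypothetical, not official TuringPulse artifacts:

```yaml
# Illustrative sidecar layout: the agent talks to the local proxy,
# which forwards to MCP servers after policy evaluation.
apiVersion: v1
kind: Pod
metadata:
  name: agent-with-governance
spec:
  containers:
    - name: agent
      image: my-registry/agent:latest
      env:
        - name: MCP_ENDPOINT
          value: "http://localhost:8080"   # point the agent at the proxy
    - name: tp-mcp-proxy
      image: my-registry/tp-mcp-proxy:latest
      ports:
        - containerPort: 8080
```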

Observability for every tool call

Tool calls are first-class citizens in the TuringPulse observability platform. Every invocation is emitted as a span within the trace of the agent run, with tool name, server, arguments (subject to PII redaction), latency, and policy decisions attached.

KPI rules, drift detection, and anomaly rules apply to tool call metrics. You can define thresholds for latency, invocation frequency, error rate, and block rate — and receive alerts when those thresholds are breached. This closes the loop between governance (blocking bad calls) and observability (understanding what your agents are doing at scale).

The same monitoring and governance infrastructure that applies to LLM calls applies to tool calls — unified in a single control plane.

Policy types

The MCP Proxy supports a range of policy types that can be combined and scoped by tool, server, or tenant:

  • PII scanning — Detect and block or redact personally identifiable information (emails, SSNs, credit card numbers, etc.) in arguments and results.
  • Regex pattern matching — Define custom patterns to block sensitive data formats (API keys, internal URLs, file paths) or enforce required formats.
  • Tool allowlists and blocklists — Restrict which tools an agent can invoke. Block dangerous tools entirely, or allow only a curated subset.
  • Content keyword filtering — Block or flag calls when arguments or results contain specific keywords (e.g., competitor names, confidential project codes).
  • Server-level allowlists — Restrict which MCP servers an agent can connect to, preventing it from reaching untrusted or unvetted tool providers.
  • Argument-specific checks — Apply different rules to different argument fields (e.g., stricter rules on the query parameter than on limit).

Policies are defined in the TuringPulse config service and can be updated without redeploying the proxy. Changes take effect on the next policy poll, typically within seconds.
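Combined and scoped, a policy set along these lines might be expressed as follows. The schema is illustrative; the actual TuringPulse config format may differ:

```yaml
# Hypothetical policy set combining several of the types above.
policies:
  - type: pii_scan
    scope: { tools: ["*"] }
    on_match: block
  - type: regex
    pattern: "sk-[A-Za-z0-9]{20,}"      # API-key-shaped strings
    applies_to: [arguments, results]
    on_match: flag
  - type: tool_allowlist
    tools: [sql.query, files.read]
  - type: argument_rule
    tool: sql.query
    argument: query
    rule: read_only_sql
    on_fail: block
```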

Govern your AI agent tool calls

Intercept every MCP tool call for policy evaluation. PII scanning, tool allowlists, audit trails, and full observability. Start free with 5,000 traces/month.

Get Started Free · Read the Docs · Read the Blog Post