MCP Proxy
The TuringPulse MCP Proxy adds observability and governance to tool calls made through the Model Context Protocol (MCP), used by AI clients like Cursor and Claude Desktop.
What is the MCP Proxy?
The MCP Proxy is an intercept layer between AI clients (Cursor, Claude Desktop, etc.) and MCP servers. It sits in the request path and adds:
- Observability — Every tool call becomes a trace span with latency, status, arguments, and results
- Governance — Policies evaluate tool arguments and results for PII, compliance, and safety before allowing execution
Two Deployment Modes
CLI Mode (Local Development)
Install the proxy CLI and wrap your local MCP server as a subprocess. The proxy forwards requests to the MCP server while capturing telemetry and enforcing policies.
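Conceptually, wrapping works like this: a minimal sketch, using `cat` as a stand-in for a real stdio MCP server so the example is runnable. This is an illustration of the subprocess pattern, not the `tp-mcp-proxy` implementation.

```python
import subprocess

# Minimal sketch of "wrapping" a stdio MCP server: spawn the real server as a
# subprocess, relay a JSON-RPC request line to its stdin, and read the reply
# from its stdout. A real proxy would run policy checks and emit a trace span
# around the exchange.
def wrap_once(server_cmd, request_line):
    proc = subprocess.Popen(
        server_cmd,
        stdin=subprocess.PIPE,
        stdout=subprocess.PIPE,
        text=True,
    )
    reply, _ = proc.communicate(request_line + "\n")
    return reply.strip()

# `cat` echoes the request back, standing in for an MCP server's response.
print(wrap_once(["cat"], '{"jsonrpc":"2.0","id":1,"method":"ping"}'))
```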
pip install turingpulse-mcp-proxy

Configure your AI client to run the proxy instead of the MCP server directly. Example for Cursor's mcp.json:
{
"mcpServers": {
"my-server": {
"command": "tp-mcp-proxy",
"args": ["wrap", "--", "npx", "@modelcontextprotocol/server-filesystem", "/path/to/dir"],
"env": { "TP_API_KEY": "sk_..." }
}
}
}

Cloud Mode (Production)
For production or remote MCP servers, use the hosted proxy configured via the MCP Proxy Configuration page in the portal. Connect HTTP/remote MCP servers through the cloud proxy for centralized observability and policy enforcement.
Setup Guide
- Get your API key — Go to Settings → API Keys in the portal and create or copy a key.
- Install the CLI — Run pip install turingpulse-mcp-proxy.
- Configure your IDE — Point your MCP config to tp-mcp-proxy wrap with your MCP server as the subprocess. Example for .cursor/mcp.json:
{
"mcpServers": {
"filesystem": {
"command": "tp-mcp-proxy",
"args": ["wrap", "--", "npx", "@modelcontextprotocol/server-filesystem", "/path/to/allowed/dir"],
"env": { "TP_API_KEY": "sk_your_api_key" }
}
}
}

- Verify — Make a tool call from your AI client. Check the Trace Explorer in the portal to confirm traces appear.
Tool Governance
Policies evaluate tool calls before and after execution. Configurable checks include:
- PII scanning — Detect sensitive data in tool arguments and results
- Regex pattern matching — Block or flag specific content patterns
- Tool name allowlists/blocklists — Control which tools can be invoked
- Content keyword filtering — Match against configurable keywords
Policies can block, flag, or allow tool calls based on evaluation results.
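As an illustration of how these checks combine into a block/flag/allow decision, here is a minimal sketch. The function name, verdict strings, and regex are assumptions for the example, not the actual TuringPulse policy engine or its API.

```python
import re

# Toy PII detector (US SSN pattern) standing in for a real PII scanner.
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def evaluate_call(tool_name, arguments, allowlist, flagged_keywords):
    """Return 'block', 'flag', or 'allow' for a proposed tool call."""
    if tool_name not in allowlist:
        return "block"                 # tool not permitted at all
    text = str(arguments).lower()
    if SSN_PATTERN.search(text):
        return "block"                 # PII found in the arguments
    if any(kw in text for kw in flagged_keywords):
        return "flag"                  # allow execution, but record for review
    return "allow"

print(evaluate_call(
    "read_file",
    {"path": "/tmp/notes.txt"},
    allowlist={"read_file", "list_directory"},
    flagged_keywords={"password"},
))
# → allow
```

The same function evaluates results after execution; only the input changes (tool output instead of arguments).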
Observability
Every tool call through the proxy becomes a trace span in TuringPulse:
- Latency — Full round-trip timing
- Status — Success, error, or blocked
- Arguments — Captured tool input (with optional redaction)
- Results — Tool output for debugging and analysis
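A rough sketch of the record a proxy might capture per tool call, including optional argument redaction. The field names here are assumptions for illustration, not the TuringPulse span schema.

```python
import time

def traced_call(tool_name, arguments, handler, redact=None):
    """Run a tool call and capture a span-like record around it.

    `redact` optionally rewrites arguments before capture (e.g. to mask PII).
    """
    start = time.monotonic()
    span = {
        "tool": tool_name,
        "arguments": redact(arguments) if redact else arguments,
    }
    try:
        span["result"] = handler(arguments)
        span["status"] = "success"
    except Exception as exc:
        span["result"] = repr(exc)
        span["status"] = "error"
    span["latency_ms"] = (time.monotonic() - start) * 1000
    return span

# Example: the captured span masks argument values but keeps the real result.
span = traced_call(
    "echo",
    {"text": "hi"},
    handler=lambda args: args["text"].upper(),
    redact=lambda args: {k: "***" for k in args},
)
print(span["status"], span["result"], span["arguments"])
# → success HI {'text': '***'}
```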
Use KPI rules for latency SLAs and error rates. Drift and anomaly detection runs on tool usage patterns.
Architecture
Request flow: AI Client → MCP Proxy → Policy Check (pre) → MCP Server → Policy Check (post) → response back to the client. Policy checks run before forwarding to the server (to block unsafe requests) and after receiving results (to flag or block sensitive responses).
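The request path above can be sketched as follows; the function and field names are illustrative, not the proxy's internals.

```python
def proxy_request(request, pre_check, forward, post_check):
    """Illustrative request path: pre policy check -> server -> post check."""
    if pre_check(request) == "block":
        return {"status": "blocked", "phase": "pre"}    # never reaches server
    response = forward(request)
    if post_check(response) == "block":
        return {"status": "blocked", "phase": "post"}   # result withheld
    return response

# Example wiring with trivial checks: block anything mentioning "secret".
result = proxy_request(
    {"tool": "read_file", "args": {"path": "/tmp/notes.txt"}},
    pre_check=lambda req: "block" if "secret" in str(req) else "allow",
    forward=lambda req: {"status": "success", "data": "file contents"},
    post_check=lambda resp: "block" if "secret" in str(resp) else "allow",
)
print(result["status"])  # → success
```

A pre-phase block means the MCP server never sees the request; a post-phase block means the tool ran but its result was withheld from the client.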