Pydantic AI Integration

Auto-instrument Pydantic AI agents with full observability. Capture tool calls, model parameters, and structured outputs with zero code changes.

Pydantic AI >= 0.1.0 · Agents · Tool Calls · Structured Outputs

Installation

Terminal
pip install turingpulse_sdk turingpulse_sdk_pydantic_ai pydantic-ai

Quick Start

1. Initialize & Instrument

setup.py
from turingpulse_sdk import init, TuringPulseConfig
from turingpulse_sdk_pydantic_ai import instrument_pydantic_ai

# Initialize TuringPulse
init(TuringPulseConfig(
    api_key="sk_live_your_api_key",
    workflow_name="my-project",
))

# Enable auto-instrumentation for Pydantic AI
instrument_pydantic_ai()

2. Use Pydantic AI Normally

main.py
from pydantic_ai import Agent
from pydantic import BaseModel

class CityInfo(BaseModel):
    name: str
    country: str
    population: int

agent = Agent(
    "openai:gpt-4o",
    output_type=CityInfo,
    system_prompt="You are a geography expert.",
)

# Run the agent - traces are captured automatically
result = agent.run_sync("Tell me about Tokyo")
print(result.output)
# CityInfo(name='Tokyo', country='Japan', population=13960000)
ℹ️
Zero Code Changes
Once auto-instrumentation is enabled, all Pydantic AI agent runs are automatically traced. No decorators or wrappers needed.

What Gets Captured

| Data Point | Description | Example |
| --- | --- | --- |
| Agent Runs | Full trace for each agent execution | agent.run_sync("Tell me about Tokyo") |
| Model Parameters | Model name, temperature, max tokens | openai:gpt-4o, temp=0.7 |
| Structured Outputs | Pydantic model results with validation status | CityInfo(name='Tokyo', ...) |
| Tool Calls | Tool invocations with inputs and outputs | get_weather(location='NYC') |
| Token Usage | Input and output token counts | prompt: 150, completion: 85 |
| Latency | End-to-end and per-step timing | total: 1250ms, llm: 980ms |
| Errors | Exceptions with full context and stack traces | ValidationError: field required |

Advanced Configuration

config.py
from pydantic_ai import Agent
from turingpulse_sdk import KPIConfig
from turingpulse_sdk_pydantic_ai import instrument_pydantic_ai

agent = Agent("openai:gpt-4o", system_prompt="You are a geography expert.")

# Instrument a specific agent with custom metadata and KPI alert thresholds
instrumented = instrument_pydantic_ai(
    agent,
    name="pydantic-ai-service",
    model="gpt-4o",
    provider="openai",
    kpis=[
        KPIConfig(kpi_id="latency_ms", use_duration=True, alert_threshold=5000),
        KPIConfig(kpi_id="tokens", alert_threshold=4000, comparator="gt"),
    ],
)

Tool Instrumentation

tools.py
from pydantic_ai import Agent, RunContext

agent = Agent("openai:gpt-4o", system_prompt="You are a helpful assistant.")

@agent.tool
def get_weather(ctx: RunContext[None], location: str) -> str:
    """Get the current weather for a location."""
    return f"Sunny, 72°F in {location}"

@agent.tool
def search_docs(ctx: RunContext[None], query: str) -> str:
    """Search documentation for relevant information."""
    return f"Found 3 results for: {query}"

# Tool calls are automatically captured with inputs, outputs, and timing
result = agent.run_sync("What's the weather in San Francisco?")
💡
Structured Output Tracking
TuringPulse automatically validates and tracks Pydantic model outputs, including validation failures and retry attempts.

Next Steps