# Anthropic Integration
Full observability for the Anthropic Claude API. Track messages, tool use, token usage, and streaming responses across Claude 3.5 Sonnet, Opus, and Haiku models.
Anthropic SDK >= 0.20.0 · Claude 3.5 · Claude 3 Opus/Sonnet/Haiku · Tool Use
## Installation

Terminal

```shell
pip install turingpulse_sdk turingpulse_sdk_anthropic anthropic
```

## Quick Start
main.py

```python
from anthropic import Anthropic
from turingpulse_sdk import init, TuringPulseConfig
from turingpulse_sdk_anthropic import patch_anthropic

# Initialize TuringPulse
init(TuringPulseConfig(
    api_key="sk_live_your_api_key",
    workflow_name="my-project",
))

# Instrument Anthropic - wraps all API calls
patch_anthropic()

# Your code works exactly the same - now with full tracing!
client = Anthropic()
message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Hello, Claude!"}],
)
print(message.content[0].text)
```
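Token counts are recorded in the trace automatically, but they are also available directly on the SDK response via `message.usage`. A minimal sketch, using a stand-in dataclass in place of the SDK's usage object:

```python
from dataclasses import dataclass

# Stand-in for the Anthropic SDK's usage object, for illustration only;
# a real `message.usage` exposes the same input_tokens/output_tokens fields.
@dataclass
class Usage:
    input_tokens: int
    output_tokens: int

def usage_summary(usage) -> str:
    """Format token counts the way they appear in a trace."""
    return f"input: {usage.input_tokens}, output: {usage.output_tokens}"

print(usage_summary(Usage(input_tokens=250, output_tokens=180)))
# With a real response you would call: usage_summary(message.usage)
```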
## Streaming Support

streaming.py

```python
client = Anthropic()

# Streaming is automatically tracked
with client.messages.stream(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    messages=[{"role": "user", "content": "Write a poem about AI"}],
) as stream:
    for text in stream.text_stream:
        print(text, end="", flush=True)

# Trace captures time-to-first-token and the full response
```
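The time-to-first-token measurement is roughly equivalent to timing the first chunk yielded by the stream. A sketch of that idea, with a plain iterator standing in for `stream.text_stream`:

```python
import time
from typing import Iterable, Tuple

def measure_ttft(chunks: Iterable[str]) -> Tuple[str, float]:
    """Consume a text stream; return (full_text, time_to_first_token_ms)."""
    start = time.monotonic()
    ttft_ms = None
    parts = []
    for chunk in chunks:
        if ttft_ms is None:
            # First chunk arrived: record elapsed time in milliseconds
            ttft_ms = (time.monotonic() - start) * 1000
        parts.append(chunk)
    return "".join(parts), ttft_ms if ttft_ms is not None else 0.0

# Stand-in stream; with the real SDK you would pass stream.text_stream.
text, ttft = measure_ttft(["Hello", ", ", "world"])
```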
## Tool Use

tools.py

```python
client = Anthropic()

tools = [
    {
        "name": "get_weather",
        "description": "Get the current weather for a location",
        "input_schema": {
            "type": "object",
            "properties": {
                "location": {"type": "string", "description": "City name"},
            },
            "required": ["location"],
        },
    },
]

message = client.messages.create(
    model="claude-sonnet-4-20250514",
    max_tokens=1024,
    tools=tools,
    messages=[{"role": "user", "content": "What's the weather in Tokyo?"}],
)

# Tool use blocks are captured in the trace
```
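When Claude calls a tool, the response contains `tool_use` content blocks, and the trace records both the call and the `tool_result` you send back. A sketch of the round trip, with the blocks shown as plain dicts for illustration (the SDK returns typed objects with the same fields), and a hypothetical `get_weather` handler:

```python
def build_tool_results(content_blocks, tool_handlers):
    """Run each tool_use block through its handler and build the
    tool_result blocks to send back in the next user message."""
    results = []
    for block in content_blocks:
        if block["type"] != "tool_use":
            continue
        handler = tool_handlers[block["name"]]
        output = handler(**block["input"])
        results.append({
            "type": "tool_result",
            "tool_use_id": block["id"],
            "content": output,
        })
    return results

# Hypothetical handler standing in for a real weather lookup.
handlers = {"get_weather": lambda location: f"Sunny in {location}"}

blocks = [{"type": "tool_use", "id": "toolu_01", "name": "get_weather",
           "input": {"location": "Tokyo"}}]
results = build_tool_results(blocks, handlers)
# results[0] is ready to send back inside a {"role": "user"} message
```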
## With KPIs & Alerts

kpis.py

```python
from anthropic import Anthropic
from turingpulse_sdk import instrument, KPIConfig, GovernanceDirective
from turingpulse_sdk_anthropic import patch_anthropic

patch_anthropic(name="anthropic-service", governance=GovernanceDirective(hatl=True))
client = Anthropic()

@instrument(
    name="anthropic-agent",
    kpis=[
        KPIConfig(kpi_id="latency_ms", use_duration=True, alert_threshold=5000),
        KPIConfig(kpi_id="cost_usd", from_result_path="cost", alert_threshold=0.10, comparator="gt"),
    ],
)
def my_agent(query: str):
    return client.messages.create(
        model="claude-sonnet-4-20250514",
        max_tokens=1024,
        messages=[{"role": "user", "content": query}],
    )
```

## What Gets Captured
| Data Point | Description | Example |
|---|---|---|
| Messages | Full message request and response with model info | `claude-sonnet-4-20250514`, `stop_reason: end_turn` |
| Tool Use | Tool use blocks with inputs and tool results | `get_weather(location='Tokyo')` |
| Token Usage | Input and output token counts per message | `input: 250, output: 180` |
| Streaming | Time-to-first-token and streaming event traces | `ttft: 220ms, total: 2800ms` |
| Model Parameters | Temperature, max_tokens, top_p, and system prompt | `temp=0.7, max_tokens=1024` |
| Latency | End-to-end request timing | `total: 1900ms` |
| Errors | API errors with status codes and context | `OverloadedError: 529 Overloaded` |
> 💡 **Cost Tracking**
>
> TuringPulse automatically calculates costs based on Anthropic pricing for each Claude model variant. View cost breakdowns in the dashboard.
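The calculation amounts to multiplying each message's token counts by per-model rates. A sketch with illustrative prices (not Anthropic's actual price list; TuringPulse maintains the real per-model tables):

```python
# Illustrative per-million-token rates -- placeholders, not real pricing.
PRICES_PER_MTOK = {
    "claude-sonnet-4-20250514": {"input": 3.00, "output": 15.00},
}

def estimate_cost_usd(model: str, input_tokens: int, output_tokens: int) -> float:
    """Estimate a message's cost from its token counts."""
    rates = PRICES_PER_MTOK[model]
    return (input_tokens * rates["input"]
            + output_tokens * rates["output"]) / 1_000_000

cost = estimate_cost_usd("claude-sonnet-4-20250514", 250, 180)
```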