DSPy Integration
Full observability for DSPy programs. Capture module calls, optimizer runs, and prompt compilation with automatic instrumentation.
Supports DSPy >= 2.0 · Modules · Optimizers · Prompt Compilation
Installation
Terminal
pip install turingpulse_sdk turingpulse_sdk_dspy dspy

Quick Start
1. Initialize & Instrument
setup.py
from turingpulse_sdk import init, TuringPulseConfig
from turingpulse_sdk_dspy import instrument_dspy
# Initialize TuringPulse
init(TuringPulseConfig(
    api_key="sk_live_your_api_key",
    workflow_name="my-project",
))

# Enable auto-instrumentation for DSPy
instrument_dspy()

2. Use DSPy Normally
main.py
import dspy
lm = dspy.LM("openai/gpt-4o-mini")
dspy.configure(lm=lm)
class Summarizer(dspy.Signature):
    """Summarize a document into key points."""

    document: str = dspy.InputField()
    summary: str = dspy.OutputField()

summarize = dspy.ChainOfThought(Summarizer)
# Run the module - traces are captured automatically
result = summarize(document="Long document text here...")
print(result.summary)

ℹ️ Zero Code Changes
Once auto-instrumentation is enabled, all DSPy module calls, LM interactions, and optimizer runs are automatically traced.
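Auto-instrumentation follows the usual wrapping pattern: the module call path is patched so every invocation records a trace span before returning the result. The sketch below illustrates the idea with a stand-in class; it is not the actual turingpulse_sdk_dspy implementation, just a minimal model of how such tracing typically works.

```python
import functools
import time

class Module:
    """Stand-in for dspy.Module, used only to illustrate the pattern."""
    def __call__(self, **kwargs):
        return {"summary": "key points"}

def instrument(cls, traces):
    """Wrap cls.__call__ so each invocation appends a trace span."""
    original = cls.__call__

    @functools.wraps(original)
    def traced(self, **kwargs):
        start = time.perf_counter()
        result = original(self, **kwargs)
        traces.append({
            "module": type(self).__name__,
            "inputs": list(kwargs),
            "latency_ms": (time.perf_counter() - start) * 1000,
        })
        return result

    cls.__call__ = traced

traces = []
instrument(Module, traces)
Module()(document="...")
print(traces[0]["module"], traces[0]["inputs"])  # Module ['document']
```

Because the wrapping happens on the class, existing application code keeps calling its modules unchanged, which is why no code changes are needed after `instrument_dspy()`.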
What Gets Captured
| Data Point | Description | Example |
|---|---|---|
| Module Calls | Each module invocation with signature and inputs | ChainOfThought(Summarizer) |
| Optimizer Runs | Full optimizer trace with trials, scores, and selected prompts | BootstrapFewShot: 20 trials, best=0.92 |
| Prompt Compilation | Compiled prompt templates and few-shot examples | compiled with 5 demonstrations |
| LM Calls | Model name, prompt, completion, and token usage | gpt-4o-mini, tokens: 350 |
| Chain of Thought | Reasoning steps in CoT modules | rationale: "First, I identify..." |
| Latency | End-to-end and per-module timing | total: 1500ms, lm_call: 1100ms |
| Errors | Exceptions with module context and retry information | DSPyAssertionError: constraint failed |
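As a rough mental model, each row in the table corresponds to a field on a structured span record. The dictionary below is purely illustrative (hypothetical field names, not the actual TuringPulse wire format) and shows how per-module overhead could be derived from the captured timings.

```python
# Illustrative span for a single ChainOfThought call (hypothetical
# field names; not the real TuringPulse schema).
span = {
    "type": "module_call",
    "module": "ChainOfThought(Summarizer)",
    "lm": {"model": "gpt-4o-mini", "total_tokens": 350},
    "rationale": "First, I identify...",
    "timing_ms": {"total": 1500, "lm_call": 1100},
}

# Per-module overhead = end-to-end latency minus time spent in the LM call.
overhead_ms = span["timing_ms"]["total"] - span["timing_ms"]["lm_call"]
print(overhead_ms)  # 400
```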
Advanced Configuration
config.py
from turingpulse_sdk import KPIConfig
from turingpulse_sdk_dspy import instrument_dspy
run = instrument_dspy(
    module,
    name="dspy-service",
    model="gpt-4o",
    provider="openai",
    kpis=[
        KPIConfig(kpi_id="latency_ms", use_duration=True, alert_threshold=5000),
        KPIConfig(kpi_id="tokens", alert_threshold=4000, comparator="gt"),
    ],
)

Optimizer Tracing
optimizer.py
import dspy
from dspy.evaluate import Evaluate
# Define a metric
def summary_quality(example, prediction, trace=None):
    return len(prediction.summary.split()) <= 50
# Create training data
trainset = [
    dspy.Example(document="...", summary="...").with_inputs("document"),
]
# Run the optimizer - each trial is traced
optimizer = dspy.BootstrapFewShot(metric=summary_quality, max_bootstrapped_demos=4)
compiled_summarizer = optimizer.compile(summarize, trainset=trainset)
# Evaluate - each evaluation call is traced
evaluator = Evaluate(devset=trainset, metric=summary_quality)
score = evaluator(compiled_summarizer)

Multi-Module Programs
multi-module.py
class RAGProgram(dspy.Module):
    def __init__(self):
        super().__init__()
        self.retriever = dspy.Retrieve(k=3)
        self.generate = dspy.ChainOfThought("context, question -> answer")

    def forward(self, question: str):
        context = self.retriever(question).passages
        return self.generate(context=context, question=question)
rag = RAGProgram()
# Each module in the program is individually traced:
# - Retrieve step with query and retrieved passages
# - ChainOfThought step with context, reasoning, and answer
result = rag(question="What is quantum computing?")

💡 Optimizer Insights
TuringPulse tracks every optimizer trial, making it easy to compare prompt variations and understand which demonstrations improve quality.
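For instance, once trial records are in hand you can rank prompt variants by metric score. The snippet below assumes a simple list-of-dicts export of trial data; the records and their field names are illustrative, not an actual SDK query API.

```python
# Hypothetical optimizer-trial records (illustrative data, not SDK output).
trials = [
    {"trial": 1, "demos": 2, "score": 0.78},
    {"trial": 2, "demos": 4, "score": 0.92},
    {"trial": 3, "demos": 3, "score": 0.85},
]

# Pick the trial whose compiled prompt scored highest on the metric.
best = max(trials, key=lambda t: t["score"])
print(f"best trial: {best['trial']} (demos={best['demos']}, score={best['score']})")
# best trial: 2 (demos=4, score=0.92)
```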