Vercel AI SDK Integration
Full observability for the Vercel AI SDK. Capture generateText, streamText, tool calls, and multi-step agent executions with automatic instrumentation.
Vercel AI SDK >= 3.0 · Next.js · Edge Runtime · Multi-Step Agents
Installation
Terminal
npm install @turingpulse/sdk @turingpulse/sdk-vercel-ai ai @ai-sdk/openai

Quick Start
1. Initialize & Instrument
setup.ts
import { init } from '@turingpulse/sdk';
import { instrumentVercelAI } from '@turingpulse/sdk-vercel-ai';

// Initialize TuringPulse
init({
  apiKey: process.env.TP_API_KEY!,
  workflowName: 'my-project',
});

// Enable auto-instrumentation for the Vercel AI SDK
instrumentVercelAI();

2. Use Vercel AI SDK Normally
main.ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Generate text - traces are captured automatically
const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Explain quantum computing in simple terms.',
});

console.log(text);
Zero Code Changes
Once auto-instrumentation is enabled, all Vercel AI SDK calls, including generateText, streamText, and tool invocations, are traced automatically.
What Gets Captured
| Data Point | Description | Example |
|---|---|---|
| generateText | Full text generation with model and parameters | gpt-4o, tokens: 350 |
| streamText | Streaming response with time-to-first-token | ttfb: 180ms, total: 2400ms |
| Tool Calls | Tool invocations with arguments and results | getWeather(location: 'NYC') |
| Multi-Step Agents | Each step in a multi-step agent execution | step 1: tool_call, step 2: generate |
| Token Usage | Input and output token counts per call | prompt: 200, completion: 150 |
| Latency | End-to-end and per-step timing | total: 1800ms |
| Errors | API errors with context and retry information | APIError: 429 rate limited |
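As a rough mental model, each row above maps to fields on a captured span. The interface and field names below are illustrative assumptions for this sketch, not TuringPulse's actual schema:

```typescript
// Hypothetical shape of one captured span (field names are
// illustrative, not the real TuringPulse wire format).
interface CapturedSpan {
  kind: 'generateText' | 'streamText' | 'toolCall' | 'step';
  model?: string;                                // e.g. 'gpt-4o'
  usage?: { promptTokens: number; completionTokens: number };
  timing: { totalMs: number; ttfbMs?: number };  // ttfb for streams only
  error?: { name: string; status?: number };     // e.g. 429 rate limited
}

// A span matching the streamText row in the table above:
const streamSpan: CapturedSpan = {
  kind: 'streamText',
  model: 'gpt-4o',
  usage: { promptTokens: 200, completionTokens: 150 },
  timing: { totalMs: 2400, ttfbMs: 180 },
};

console.log(streamSpan.usage!.promptTokens + streamSpan.usage!.completionTokens); // 350
```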
Advanced Configuration
config.ts
import { instrumentVercelAI } from '@turingpulse/sdk-vercel-ai';

instrumentVercelAI({
  name: 'vercel-ai-service',
  captureInputs: true,
  captureOutputs: true,
  captureStreamingChunks: false,
  kpis: [
    { kpiId: 'latency_ms', useDuration: true, threshold: 5000 },
    { kpiId: 'tokens', threshold: 4000, comparator: 'gt' },
    { kpiId: 'cost_usd', threshold: 0.10, comparator: 'gt' },
  ],
  alertChannels: ['slack://alerts'],
});

Streaming
streaming.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Streaming is automatically tracked
const result = streamText({
  model: openai('gpt-4o'),
  prompt: 'Write a short story about a robot.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}
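// Once the stream finishes, the final text and token usage are also
// available as promises on the result (standard streamText result
// fields; the exact usage shape may vary by AI SDK version):
const usage = await result.usage;
console.log(await result.text, usage);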
// Trace captures time-to-first-token and the full response

Tool Calls
tools.ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const weatherTool = tool({
  description: 'Get the current weather for a location',
  parameters: z.object({
    location: z.string().describe('The city to get weather for'),
  }),
  execute: async ({ location }) => {
    return { location, temperature: 72, condition: 'Sunny' };
  },
});
const { text, toolCalls } = await generateText({
  model: openai('gpt-4o'),
  tools: { getWeather: weatherTool },
  prompt: "What's the weather in San Francisco?",
});
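// Each invocation is also available on the result itself (standard
// generateText return fields in AI SDK v3/v4):
for (const call of toolCalls) {
  console.log(call.toolName, call.args);
}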
// Tool calls are automatically captured with inputs, outputs, and timing

Multi-Step Agents
multi-step.ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';
const searchTool = tool({
  description: 'Search the web',
  parameters: z.object({ query: z.string() }),
  execute: async ({ query }) => {
    return { results: [`Result for: ${query}`] };
  },
});

const calculatorTool = tool({
  description: 'Perform calculations',
  parameters: z.object({ expression: z.string() }),
  execute: async ({ expression }) => {
    // Demo only: never eval() model-generated input in production;
    // use a safe expression parser (e.g. mathjs) instead
    return { result: eval(expression) };
  },
});
// Multi-step agent - each step is individually traced
const { text, steps } = await generateText({
  model: openai('gpt-4o'),
  tools: { search: searchTool, calculate: calculatorTool },
  maxSteps: 5,
  prompt: 'What is the population of Tokyo divided by the area in km²?',
});
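// Each entry in `steps` records that step's tool calls and generated
// text (standard generateText StepResult fields):
for (const step of steps) {
  console.log(step.toolCalls.map((c) => c.toolName), step.text);
}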
// Trace shows: step 1 (search) → step 2 (search) → step 3 (calculate) → final answer

Next.js Route Handler
app/api/chat/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Route handler calls are automatically traced
  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  return result.toDataStreamResponse();
}
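// Optional: run this handler on the Edge runtime via the standard
// Next.js App Router route segment config:
export const runtime = 'edge';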
// Works in both the Node.js and Edge runtimes
Edge Runtime Support
TuringPulse works seamlessly in both Node.js and Edge runtimes, capturing traces from Vercel serverless and edge functions.