Vercel AI SDK Integration

Full observability for the Vercel AI SDK. Capture generateText, streamText, tool calls, and multi-step agent executions with automatic instrumentation.

Vercel AI SDK >= 3.0 · Next.js · Edge Runtime · Multi-Step Agents

Installation

Terminal
npm install @turingpulse/sdk @turingpulse/sdk-vercel-ai ai @ai-sdk/openai

Quick Start

1. Initialize & Instrument

setup.ts
import { init } from '@turingpulse/sdk';
import { instrumentVercelAI } from '@turingpulse/sdk-vercel-ai';

// Initialize TuringPulse
init({
  apiKey: process.env.TP_API_KEY!,
  workflowName: 'my-project',
});

// Enable auto-instrumentation for Vercel AI SDK
instrumentVercelAI();

2. Use Vercel AI SDK Normally

main.ts
import { generateText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Generate text - traces are captured automatically
const { text } = await generateText({
  model: openai('gpt-4o'),
  prompt: 'Explain quantum computing in simple terms.',
});
console.log(text);
ℹ️
Zero Code Changes
Once auto-instrumentation is enabled, all Vercel AI SDK calls, including generateText, streamText, and tool invocations, are traced automatically.

What Gets Captured

| Data Point | Description | Example |
| --- | --- | --- |
| generateText | Full text generation with model and parameters | gpt-4o, tokens: 350 |
| streamText | Streaming response with time-to-first-token | ttft: 180ms, total: 2400ms |
| Tool Calls | Tool invocations with arguments and results | getWeather(location: 'NYC') |
| Multi-Step Agents | Each step in a multi-step agent execution | step 1: tool_call, step 2: generate |
| Token Usage | Input and output token counts per call | prompt: 200, completion: 150 |
| Latency | End-to-end and per-step timing | total: 1800ms |
| Errors | API errors with context and retry information | APIError: 429 rate limited |
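The token counts above come straight from the SDK's own `usage` field on each result (AI SDK 3.x/4.x expose `promptTokens` and `completionTokens`). A minimal sketch of that mapping, where `summarizeUsage` is a hypothetical helper and not part of either SDK:

```typescript
// Shape of the `usage` field returned by generateText in AI SDK 3.x/4.x.
interface Usage {
  promptTokens: number;
  completionTokens: number;
}

// Hypothetical helper: render usage the way it appears in the table above.
function summarizeUsage(usage: Usage): string {
  return `prompt: ${usage.promptTokens}, completion: ${usage.completionTokens}`;
}

// e.g. const { text, usage } = await generateText({ ... });
// summarizeUsage(usage) → "prompt: 200, completion: 150"
```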

Advanced Configuration

config.ts
import { instrumentVercelAI } from '@turingpulse/sdk-vercel-ai';

instrumentVercelAI({
  name: 'vercel-ai-service',
  captureInputs: true,
  captureOutputs: true,
  captureStreamingChunks: false,
  kpis: [
    { kpiId: 'latency_ms', useDuration: true, threshold: 5000 },
    { kpiId: 'tokens', threshold: 4000, comparator: 'gt' },
    { kpiId: 'cost_usd', threshold: 0.10, comparator: 'gt' },
  ],
  alertChannels: ['slack://alerts'],
});

Streaming

streaming.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

// Streaming is automatically tracked
const result = streamText({
  model: openai('gpt-4o'),
  prompt: 'Write a short story about a robot.',
});

for await (const chunk of result.textStream) {
  process.stdout.write(chunk);
}

// Trace captures time-to-first-token and full response
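The time-to-first-token figure in the trace can be reproduced by hand over any async text stream. A minimal sketch, for illustration only (the instrumentation records this automatically; `measureTtft` is not part of any SDK):

```typescript
// Consume an async text stream, recording when the first chunk arrives
// relative to the start, plus the total time and accumulated text.
async function measureTtft(
  stream: AsyncIterable<string>,
): Promise<{ text: string; ttftMs: number | null; totalMs: number }> {
  const start = Date.now();
  let ttftMs: number | null = null;
  let text = '';
  for await (const chunk of stream) {
    if (ttftMs === null) ttftMs = Date.now() - start; // first token arrived
    text += chunk;
  }
  return { text, ttftMs, totalMs: Date.now() - start };
}

// Usage with the example above: await measureTtft(result.textStream)
```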

Tool Calls

tools.ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const weatherTool = tool({
  description: 'Get the current weather for a location',
  parameters: z.object({
    location: z.string().describe('The city to get weather for'),
  }),
  execute: async ({ location }) => {
    return { location, temperature: 72, condition: 'Sunny' };
  },
});

const { text, toolCalls } = await generateText({
  model: openai('gpt-4o'),
  tools: { getWeather: weatherTool },
  prompt: "What's the weather in San Francisco?",
});

// Tool calls are automatically captured with inputs, outputs, and timing
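A captured tool call is shown as a single trace line such as `getWeather(location: 'NYC')` in the table above. A rough sketch of that rendering, using a hypothetical `formatToolCall` helper (JSON-style quoting rather than the exact formatting shown above):

```typescript
// Hypothetical helper: render a tool call's name and arguments
// as one human-readable trace line.
function formatToolCall(toolName: string, args: Record<string, unknown>): string {
  const argList = Object.entries(args)
    .map(([key, value]) => `${key}: ${JSON.stringify(value)}`)
    .join(', ');
  return `${toolName}(${argList})`;
}

// formatToolCall('getWeather', { location: 'NYC' })
// → 'getWeather(location: "NYC")'
```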

Multi-Step Agents

multi-step.ts
import { generateText, tool } from 'ai';
import { openai } from '@ai-sdk/openai';
import { z } from 'zod';

const searchTool = tool({
  description: 'Search the web',
  parameters: z.object({ query: z.string() }),
  execute: async ({ query }) => {
    return { results: [`Result for: ${query}`] };
  },
});

const calculatorTool = tool({
  description: 'Perform calculations',
  parameters: z.object({ expression: z.string() }),
  execute: async ({ expression }) => {
    // eval is used here for brevity only; use a proper expression
    // parser for untrusted input in production code.
    return { result: eval(expression) };
  },
});

// Multi-step agent - each step is individually traced
const { text, steps } = await generateText({
  model: openai('gpt-4o'),
  tools: { search: searchTool, calculate: calculatorTool },
  maxSteps: 5,
  prompt: 'What is the population of Tokyo divided by the area in km²?',
});

// Trace shows: step 1 (search) → step 2 (search) → step 3 (calculate) → final answer
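The `steps` array destructured above carries each step's tool calls and generated text, so the trace line can be reconstructed from it. A sketch assuming a simplified per-step shape (`describeSteps` is illustrative, not part of either SDK):

```typescript
// Simplified view of one entry in the `steps` array returned by generateText.
interface StepView {
  toolCalls: Array<{ toolName: string }>;
  text: string;
}

// Hypothetical helper: summarize a multi-step run as a trace line,
// labeling tool-calling steps by tool name and text-only steps as "generate".
function describeSteps(steps: StepView[]): string {
  return steps
    .map((step, i) =>
      step.toolCalls.length > 0
        ? `step ${i + 1}: ${step.toolCalls.map((c) => c.toolName).join(', ')}`
        : `step ${i + 1}: generate`,
    )
    .join(' → ');
}
```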

Next.js Route Handler

app/api/chat/route.ts
import { streamText } from 'ai';
import { openai } from '@ai-sdk/openai';

export async function POST(req: Request) {
  const { messages } = await req.json();

  // Route handler calls are automatically traced
  const result = streamText({
    model: openai('gpt-4o'),
    messages,
  });

  return result.toDataStreamResponse();
}

// Works in both Node.js and Edge runtime
💡
Edge Runtime Support
TuringPulse works seamlessly in both Node.js and Edge runtimes, capturing traces from Vercel serverless and edge functions.
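To run the route handler above on the Edge runtime, add Next.js's standard route segment config export; no TuringPulse-specific changes are needed:

```typescript
// app/api/chat/route.ts (addition): opt this route into the Edge runtime.
export const runtime = 'edge';
```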

Next Steps