Mastra Integration

Full observability for Mastra applications. Capture agent steps, tool calls, and workflow orchestration with automatic instrumentation.

Mastra >= 0.1.0 · Workflows · Agents · Tool Calls

Installation

Terminal
npm install @turingpulse/sdk @turingpulse/sdk-mastra @mastra/core
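The Quick Start below reads the API key from `process.env.TP_API_KEY`, so export it before running the examples. A minimal sketch (the key value and format are placeholders, not a real TuringPulse key format):

```shell
# Hypothetical key value — substitute the API key from your
# TuringPulse dashboard. The variable name matches the
# process.env.TP_API_KEY reference in the Quick Start.
export TP_API_KEY="tp_xxxxxxxxxxxx"
```

In a long-lived project you would typically put this in a `.env` file loaded at startup instead of exporting it per shell session.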

Quick Start

1. Initialize & Instrument

setup.ts
import { init } from '@turingpulse/sdk';
import { instrumentMastra } from '@turingpulse/sdk-mastra';

// Initialize TuringPulse
init({
  apiKey: process.env.TP_API_KEY!,
  workflowName: 'my-project',
});

// Enable auto-instrumentation for Mastra
instrumentMastra();

2. Use Mastra Normally

main.ts
import { Mastra } from '@mastra/core';
import { Agent } from '@mastra/core/agent';
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

const weatherTool = createTool({
  id: 'get-weather',
  description: 'Get the current weather for a city',
  inputSchema: z.object({
    city: z.string().describe('The city to get weather for'),
  }),
  execute: async ({ context }) => {
    return { city: context.city, temp: 72, condition: 'Sunny' };
  },
});

const agent = new Agent({
  name: 'Weather Assistant',
  instructions: 'You help users with weather information.',
  model: { provider: 'OPEN_AI', name: 'gpt-4o' },
  tools: { 'get-weather': weatherTool },
});

const mastra = new Mastra({ agents: { weatherAgent: agent } });

// Run the agent - traces are captured automatically
const result = await mastra
  .getAgent('weatherAgent')
  .generate('What is the weather in London?');
console.log(result.text);
ℹ️
Zero Code Changes
Once auto-instrumentation is enabled, all Mastra agent runs, workflow steps, and tool calls are automatically traced.
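Auto-instrumentation generally works by patching Mastra at module load time (an assumption about how `instrumentMastra` hooks in), so the setup module should run before any agent code is imported. One way to guarantee the ordering, sketched with a hypothetical entrypoint layout (the file names are illustrative):

```typescript
// index.ts — hypothetical entrypoint. Importing setup first
// ensures init() and instrumentMastra() have run before any
// Mastra classes are constructed in main.ts.
import './setup'; // initializes TuringPulse and patches Mastra
import './main';  // agents defined here are traced automatically
```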

What Gets Captured

| Data Point | Description | Example |
| --- | --- | --- |
| Agent Runs | Full trace for each agent generation | `agent.generate('What is...')` |
| Workflow Steps | Each step in a workflow with inputs and outputs | `step: fetchData, duration: 250ms` |
| Tool Calls | Tool invocations with arguments and results | `get-weather(city: 'London')` |
| Workflow Orchestration | Step ordering, parallel execution, and branching | `parallel: [stepA, stepB] → stepC` |
| Token Usage | Input and output token counts per LLM call | `prompt: 180, completion: 95` |
| Latency | End-to-end and per-step timing | `total: 2400ms, llm: 1800ms` |
| Errors | Exceptions with workflow context and step info | `StepError: tool execution failed` |

Advanced Configuration

config.ts
import { instrumentMastra } from '@turingpulse/sdk-mastra';

instrumentMastra({
  name: 'mastra-service',
  captureInputs: true,
  captureOutputs: true,
  captureWorkflowState: true,
  kpis: [
    { kpiId: 'latency_ms', useDuration: true, threshold: 10000 },
    { kpiId: 'tokens', threshold: 8000, comparator: 'gt' },
    { kpiId: 'workflow_steps', threshold: 15, comparator: 'gt' },
  ],
  alertChannels: ['slack://alerts'],
});

Workflow Instrumentation

workflow.ts
import { Workflow, Step } from '@mastra/core/workflows';
import { z } from 'zod';

const fetchDataStep = new Step({
  id: 'fetch-data',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({ results: z.array(z.string()) }),
  execute: async ({ context }) => {
    return { results: ['result1', 'result2'] };
  },
});

const analyzeStep = new Step({
  id: 'analyze',
  inputSchema: z.object({ results: z.array(z.string()) }),
  outputSchema: z.object({ summary: z.string() }),
  execute: async ({ context }) => {
    return { summary: 'Analysis complete' };
  },
});

const workflow = new Workflow({
  name: 'data-pipeline',
  triggerSchema: z.object({ query: z.string() }),
});

workflow.step(fetchDataStep).then(analyzeStep).commit();

// Each workflow step is individually traced with inputs,
// outputs, and execution timing
const run = await workflow.execute({ triggerData: { query: 'sales data' } });
💡
Workflow Visualization
TuringPulse shows a visual trace of your Mastra workflows, making it easy to debug step failures and identify performance bottlenecks.
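The orchestration tracing described above also covers fan-out/fan-in shapes. A minimal sketch, assuming the `.step()`/`.after()` chaining available in `@mastra/core` 0.x (`stepA`, `stepB`, and `stepC` stand in for `Step` instances defined like those above):

```typescript
// Hypothetical parallel workflow: stepA and stepB both start from
// the trigger and run concurrently; .after([...]) joins them so
// stepC runs only once both complete. Each branch shows up as a
// separate span in the workflow trace.
workflow
  .step(stepA)
  .step(stepB)
  .after([stepA, stepB])
  .step(stepC)
  .commit();
```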

Next Steps