# Mastra Integration

Full observability for Mastra workflows. Capture agent steps, tool calls, and workflow orchestration with automatic instrumentation.

Supports: Mastra >= 0.1.0 · Workflows · Agents · Tool Calls
## Installation

```bash
npm install @turingpulse/sdk @turingpulse/sdk-mastra @mastra/core
```

## Quick Start
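The quick start below reads your API key from the `TP_API_KEY` environment variable. One way to provide it during local development (a sketch; in production, prefer your platform's secrets manager) is:

```shell
# Export the TuringPulse API key for the current shell session.
# The key value here is a placeholder - substitute your real key,
# and keep it out of version control.
export TP_API_KEY="tp_your_api_key_here"
```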
### 1. Initialize & Instrument

setup.ts

```typescript
import { init } from '@turingpulse/sdk';
import { instrumentMastra } from '@turingpulse/sdk-mastra';

// Initialize TuringPulse
init({
  apiKey: process.env.TP_API_KEY!,
  workflowName: 'my-project',
});

// Enable auto-instrumentation for Mastra
instrumentMastra();
```

### 2. Use Mastra Normally
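Because `instrumentMastra()` patches Mastra when it runs, the setup module should execute before any agents or workflows are constructed. One common arrangement (a sketch, assuming a hypothetical `index.ts` entry point alongside the files below) is:

```typescript
// index.ts (hypothetical entry point): load instrumentation first,
// then the application code that constructs Mastra agents.
import './setup'; // runs init() and instrumentMastra()
import './main';  // agents and workflows created here are auto-traced
```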
main.ts

```typescript
import { Mastra } from '@mastra/core';
import { Agent } from '@mastra/core/agent';
import { createTool } from '@mastra/core/tools';
import { z } from 'zod';

const weatherTool = createTool({
  id: 'get-weather',
  description: 'Get the current weather for a city',
  inputSchema: z.object({
    city: z.string().describe('The city to get weather for'),
  }),
  execute: async ({ context }) => {
    // Stubbed response - replace with a real weather API call
    return { city: context.city, temp: 72, condition: 'Sunny' };
  },
});

const agent = new Agent({
  name: 'Weather Assistant',
  instructions: 'You help users with weather information.',
  model: { provider: 'OPEN_AI', name: 'gpt-4o' },
  tools: { 'get-weather': weatherTool },
});

const mastra = new Mastra({ agents: { weatherAgent: agent } });

// Run the agent - traces are captured automatically
const result = await mastra
  .getAgent('weatherAgent')
  .generate('What is the weather in London?');

console.log(result.text);
```

> ℹ️ **Zero Code Changes** — Once auto-instrumentation is enabled, all Mastra agent runs, workflow steps, and tool calls are traced automatically.
## What Gets Captured
| Data Point | Description | Example |
|---|---|---|
| Agent Runs | Full trace for each agent generation | agent.generate('What is...') |
| Workflow Steps | Each step in a workflow with inputs and outputs | step: fetchData, duration: 250ms |
| Tool Calls | Tool invocations with arguments and results | get-weather(city: 'London') |
| Workflow Orchestration | Step ordering, parallel execution, and branching | parallel: [stepA, stepB] → stepC |
| Token Usage | Input and output token counts per LLM call | prompt: 180, completion: 95 |
| Latency | End-to-end and per-step timing | total: 2400ms, llm: 1800ms |
| Errors | Exceptions with workflow context and step info | StepError: tool execution failed |
## Advanced Configuration

config.ts

```typescript
import { instrumentMastra } from '@turingpulse/sdk-mastra';

instrumentMastra({
  name: 'mastra-service',
  captureInputs: true,
  captureOutputs: true,
  captureWorkflowState: true,
  kpis: [
    { kpiId: 'latency_ms', useDuration: true, threshold: 10000 },
    { kpiId: 'tokens', threshold: 8000, comparator: 'gt' },
    { kpiId: 'workflow_steps', threshold: 15, comparator: 'gt' },
  ],
  alertChannels: ['slack://alerts'],
});
```

## Workflow Instrumentation
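To make the KPI options concrete, here is an illustrative, self-contained sketch of how a threshold with a `'gt'` comparator could be evaluated. `breaches` is a hypothetical helper written for this example, not part of the TuringPulse SDK:

```typescript
// Hypothetical illustration of KPI threshold semantics (not SDK code).
type Kpi = { kpiId: string; threshold: number; comparator?: 'gt' | 'lt' };

function breaches(kpi: Kpi, value: number): boolean {
  // Assume 'gt' when no comparator is given: alert once the
  // observed value exceeds the configured threshold.
  const cmp = kpi.comparator ?? 'gt';
  return cmp === 'gt' ? value > kpi.threshold : value < kpi.threshold;
}

// A run that used 9,200 tokens would breach the 8,000-token KPI above.
console.log(breaches({ kpiId: 'tokens', threshold: 8000, comparator: 'gt' }, 9200));
```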
workflow.ts

```typescript
import { Workflow, Step } from '@mastra/core/workflows';
import { z } from 'zod';

const fetchDataStep = new Step({
  id: 'fetch-data',
  inputSchema: z.object({ query: z.string() }),
  outputSchema: z.object({ results: z.array(z.string()) }),
  execute: async ({ context }) => {
    // Fetch results for context.query - stubbed here
    return { results: ['result1', 'result2'] };
  },
});

const analyzeStep = new Step({
  id: 'analyze',
  inputSchema: z.object({ results: z.array(z.string()) }),
  outputSchema: z.object({ summary: z.string() }),
  execute: async ({ context }) => {
    // Summarize context.results - stubbed here
    return { summary: 'Analysis complete' };
  },
});

const workflow = new Workflow({
  name: 'data-pipeline',
  triggerSchema: z.object({ query: z.string() }),
});

workflow.step(fetchDataStep).then(analyzeStep).commit();

// Each workflow step is individually traced with inputs,
// outputs, and execution timing
const run = await workflow.execute({ triggerData: { query: 'sales data' } });
```

> 💡 **Workflow Visualization** — TuringPulse shows a visual trace of your Mastra workflows, making it easy to debug step failures and identify performance bottlenecks.