Frequently asked questions

Everything you need to know about TuringPulse. Can't find what you're looking for? Reach out to our team.

General

What is TuringPulse?
TuringPulse is the control plane for AI agents. It provides end-to-end observability, governance, monitoring, and human-in-the-loop coordination for autonomous AI systems. Think of it as Datadog meets compliance — purpose-built for LLM-powered agents.
Who is TuringPulse built for?
TuringPulse is built for engineering teams running AI agents in production. Whether you're building with LangChain, CrewAI, AutoGen, or calling LLM APIs directly, TuringPulse helps you understand what your agents are doing, detect when they drift, and enforce governance policies.
How is TuringPulse different from LangSmith or other observability tools?
Most observability tools focus on tracing and logging. TuringPulse goes further with built-in governance (policy engine, HITL review queues, compliance packs), proactive monitoring (drift detection, anomaly rules, KPI thresholds), and human coordination (approval workflows, audit trails). It's observability plus governance in one platform.
Do I need to change my existing code to use TuringPulse?
Almost none. TuringPulse's SDK plugins auto-instrument 15+ frameworks. For most setups you add two lines of code: initialize the SDK and register the framework plugin. Your existing agent code stays unchanged.

Product & Features

What are the four pillars of TuringPulse?
TuringPulse is organized around four pillars: Evaluate (trace explorer, metrics, evaluations, root cause analysis), Govern (policy engine, compliance packs, ingestion controls), Monitor (drift detection, anomaly rules, KPI thresholds, alert channels), and Coordinate (human-in-the-loop review queues, policy triggers, governance insights, audit history).
What AI frameworks and LLM providers does TuringPulse support?
We support 15+ framework plugins including LangChain, LangGraph, CrewAI, AutoGen, LlamaIndex, Pydantic AI, Google ADK, Strands, Semantic Kernel, DSPy, Haystack, Mastra, and Vercel AI. For LLM providers: OpenAI, Anthropic, Google GenAI, AWS Bedrock, Cohere, Mistral, and Vertex AI. Both Python and TypeScript SDKs are available.
What is drift detection?
Drift detection monitors your agent's performance metrics over time and alerts you when behavior changes significantly. TuringPulse uses z-score, percentage change, and IQR statistical methods to detect performance drift (latency, error rates) and cost drift (token usage, API spend) against rolling baselines.
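The statistical check behind the z-score method is simple to illustrate. The sketch below is purely conceptual (it is not TuringPulse's implementation, and the latency figures are invented): a value drifts when it sits more than a chosen number of standard deviations from the rolling-baseline mean.

```python
from statistics import mean, stdev

def zscore_drift(baseline: list[float], current: float, threshold: float = 3.0) -> bool:
    """Flag drift when `current` lies more than `threshold` standard
    deviations from the mean of the rolling baseline."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > threshold

# Example: p95 latency (ms) over recent windows vs. the latest window
baseline = [220, 235, 228, 241, 230, 225, 238, 232]
print(zscore_drift(baseline, 236))   # within normal variation -> False
print(zscore_drift(baseline, 320))   # large jump -> True
```

The IQR and percentage-change methods mentioned above follow the same shape: compare the newest window against a statistic of the baseline and alert past a configured threshold.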
What is human-in-the-loop (HITL) in TuringPulse?
HITL lets you define policies that route certain agent decisions to human reviewers before they execute. You configure conditions (e.g., high cost, sensitive content, low confidence) and TuringPulse's review queue presents flagged decisions to the right team members with full context and audit trails.
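A routing condition of that kind can be sketched in a few lines. The field names and thresholds below are hypothetical, chosen only to illustrate the pattern of checking a decision against configured conditions:

```python
def needs_human_review(decision: dict,
                       max_cost_usd: float = 1.00,
                       min_confidence: float = 0.80) -> bool:
    """Route a decision to the review queue when any condition trips:
    high cost, low confidence, or sensitive content."""
    return (
        decision.get("cost_usd", 0.0) > max_cost_usd
        or decision.get("confidence", 1.0) < min_confidence
        or decision.get("contains_pii", False)
    )

print(needs_human_review({"cost_usd": 0.02, "confidence": 0.95}))  # False
print(needs_human_review({"cost_usd": 0.02, "confidence": 0.45}))  # True
```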
Can I set up custom KPIs and alerts?
Yes. You can define custom KPI thresholds on any metric (latency, cost, error rate, token usage, custom metrics), configure drift baselines, and create multi-metric anomaly rules. Alerts can be sent to Slack, email, PagerDuty, Microsoft Teams, or custom webhooks with severity-based filtering.
How does TuringPulse compare to LangSmith, Arize, and Datadog LLM Monitoring?
LangSmith focuses on prompt engineering and LangChain tracing. Arize specializes in ML model monitoring. Datadog LLM Monitoring extends traditional APM. TuringPulse combines all three concerns, distributed tracing, LLM-specific monitoring, and governance, into a single platform purpose-built for autonomous AI agents, pairing built-in human-in-the-loop review queues, policy enforcement, drift detection, and compliance packs with full observability.
What is AI agent monitoring and why does it matter?
AI agent monitoring is the practice of tracking the behavior, performance, and cost of autonomous AI systems in production. Unlike traditional software, AI agents make non-deterministic decisions, call external tools, and chain multiple LLM calls together. Without monitoring, you cannot detect performance regressions, cost spikes, hallucinations, or policy violations. TuringPulse provides real-time monitoring with KPI thresholds, anomaly rules, and drift detection to catch issues before they impact users.
What is LLM observability and how is it different from traditional observability?
LLM observability extends traditional application monitoring (logs, metrics, traces) with capabilities specific to large language models: token usage tracking, prompt/completion capture with redaction, latency per LLM call, cost attribution, hallucination detection, and evaluation pipelines. Traditional APM tools treat LLM calls as opaque HTTP requests. TuringPulse instruments each LLM call as a typed span with model, tokens, cost, and quality metadata — giving you the visibility needed to optimize and govern AI-powered features.
How does distributed tracing work for AI agent workflows?
TuringPulse auto-instruments your agent code using the @instrument decorator (Python) or instrument/withInstrumentation functions (TypeScript). Each function call, LLM request, tool invocation, and retriever query becomes a span in a distributed trace. Spans are linked in a parent-child hierarchy so you can see the full execution DAG of a multi-step agent workflow — including branching, retries, and parallel tool calls — in a single trace view.
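The span-linking idea can be shown with a toy decorator that records each call together with its parent, building the parent-child hierarchy described above. This is a conceptual sketch, not the TuringPulse SDK (a real implementation would also capture timing, attributes, and async context):

```python
import functools

_stack = []   # currently open spans (mirrors the call stack)
spans = []    # finished spans with parent links

def instrument(fn):
    """Toy @instrument: record each call as a span linked to its parent,
    so nested calls form the trace's execution tree."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        span = {"name": fn.__name__,
                "parent": _stack[-1]["name"] if _stack else None}
        _stack.append(span)
        try:
            return fn(*args, **kwargs)
        finally:
            _stack.pop()
            spans.append(span)
    return wrapper

@instrument
def call_llm(prompt):
    return f"answer to {prompt!r}"

@instrument
def run_agent(task):
    return call_llm(task)   # nested call -> child span of run_agent

run_agent("summarize the report")
print([(s["name"], s["parent"]) for s in spans])
# [('call_llm', 'run_agent'), ('call_llm' finishes first), then ('run_agent', None)]
```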
Can TuringPulse help with LLM cost optimization?
Yes. TuringPulse tracks token usage and cost per span, per workflow, and per tenant. You can set KPI thresholds on cost metrics, configure drift detection to alert you when spend increases unexpectedly, and use the analytics dashboard to identify the most expensive LLM calls. Teams typically reduce LLM costs by 20-40% after gaining visibility into token consumption patterns across their agent workflows.
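Cost attribution of this kind reduces to aggregating per-span cost by workflow and ranking the totals to find optimization targets. A minimal sketch with invented numbers (not TuringPulse data or APIs):

```python
from collections import defaultdict

# Hypothetical per-span records: (workflow, tokens, cost in USD)
span_costs = [
    ("support-bot", 1200, 0.0036),
    ("support-bot", 800, 0.0024),
    ("report-gen", 15000, 0.0450),
]

totals: dict[str, float] = defaultdict(float)
for workflow, _tokens, cost in span_costs:
    totals[workflow] += cost

# Rank workflows by spend: the top entries are the optimization targets
ranked = sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```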
Does TuringPulse support multi-agent systems and complex workflow orchestration?
Yes. TuringPulse is designed for multi-agent architectures where agents delegate to sub-agents, invoke tools, and coordinate across services. The distributed tracing system captures the full execution graph regardless of how many agents or steps are involved. Framework plugins for LangGraph, CrewAI, AutoGen, and others automatically capture agent-level and node-level spans with proper parent-child relationships.

Pricing & Billing

What is included in the free plan?
The Free plan is a single-user plan that includes 1,000 traces/month, 7-day data retention, full observability and analytics, quota-limited KPI/drift/governance features, the HITL review queue, and community support — with 1 project and 3 workflows. Upgrade to Pro to invite team members.
Can I upgrade or downgrade my plan at any time?
Yes. You can upgrade to a higher plan at any time and the change takes effect immediately. Downgrades take effect at the start of your next billing cycle. Your data is retained according to the retention policy of your active plan.
What happens if I exceed my monthly trace limit?
When you reach your trace limit, new traces are still accepted but queued for processing at the start of your next billing cycle. You will receive a notification when you reach 80% and 100% of your quota. You can upgrade your plan at any time to increase your limit immediately.
Do you offer a free trial of paid plans?
The Free plan is available indefinitely with no credit card required. For Pro and Pro Plus plans, contact us to request a trial with higher limits so you can evaluate the full feature set before committing.
What payment methods do you accept?
We accept all major credit and debit cards (Visa, Mastercard, American Express) through our payment processor Paddle. Enterprise customers can pay via invoice with net-30 terms.

Security & Compliance

How does TuringPulse handle data security?
All data is encrypted in transit (TLS 1.3) and at rest. TuringPulse enforces strict tenant isolation at the database level — your data is never accessible to other customers. We implement role-based access controls, audit logging for all administrative actions, and field-level redaction for sensitive data.
Does TuringPulse support compliance frameworks like HIPAA and GDPR?
Yes. TuringPulse offers pre-built compliance packs for HIPAA and GDPR with policy definitions and regulatory references. You can apply compliance packs to your tenant to enforce compliance-aligned governance rules with enforcement logging.
Is there SSO / SAML support?
SSO and SAML integration is available on the Enterprise plan. TuringPulse uses Keycloak as its identity provider, supporting SAML 2.0, OpenID Connect, and social login providers.

Technical & Deployment

Is there an on-premise or self-hosted deployment option?
Yes. The Enterprise plan includes an on-premise deployment option. TuringPulse can be deployed in your own cloud environment (AWS, GCP, Azure) or on-premise infrastructure using Kubernetes and Helm charts. Contact our sales team for architecture details.
How do I instrument my AI agents?
Install the TuringPulse SDK for your language (Python or TypeScript), add the appropriate framework plugin, and call init() with your API key. For most frameworks, instrumentation is automatic — no code changes needed beyond initialization. See our documentation for framework-specific quickstart guides.
What data does TuringPulse collect from my agents?
TuringPulse collects traces (execution flow), spans (individual operations like LLM calls, tool invocations, retriever queries), and associated metrics (latency, token usage, cost). You control exactly what data is sent through SDK configuration, including field-level redaction for sensitive inputs and outputs.
Does TuringPulse add latency to my agent calls?
TuringPulse's SDK instruments your code asynchronously and batches telemetry data before sending. The overhead is typically less than 1ms per span. Trace data is sent in the background and does not block your agent's execution path.
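The batching pattern works roughly like this: the hot path only enqueues, and a background worker flushes batches off the critical path. A toy sketch of that producer-consumer shape (not the actual SDK; the batch size and sentinel-based shutdown are illustrative choices):

```python
import queue
import threading

class BatchExporter:
    """Toy background exporter: record() only enqueues, so the caller's
    path never blocks; a worker thread flushes spans in batches."""
    def __init__(self, batch_size: int = 3):
        self.q: queue.Queue = queue.Queue()
        self.batch_size = batch_size
        self.exported: list[list] = []
        self._worker = threading.Thread(target=self._run, daemon=True)
        self._worker.start()

    def record(self, span) -> None:
        self.q.put(span)             # hot path: enqueue and return

    def _run(self) -> None:
        batch = []
        while True:
            item = self.q.get()
            if item is None:         # shutdown sentinel: flush remainder
                if batch:
                    self.exported.append(batch)
                return
            batch.append(item)
            if len(batch) >= self.batch_size:
                self.exported.append(batch)
                batch = []

    def shutdown(self) -> None:
        self.q.put(None)
        self._worker.join()

exporter = BatchExporter(batch_size=3)
for i in range(7):
    exporter.record(f"span-{i}")
exporter.shutdown()
print([len(b) for b in exporter.exported])  # [3, 3, 1]
```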

Still have questions?

We're here to help. Reach out and we'll get back to you within 24 hours.

Contact Support · Read the Docs