Semantic Kernel Integration

Full observability for Microsoft Semantic Kernel agents. Capture kernel functions, planner execution, and plugin calls with automatic instrumentation.

Semantic Kernel >= 1.0.0 · Kernel Functions · Planners · Plugins

Installation

Terminal
pip install turingpulse_sdk turingpulse_sdk_semantic_kernel semantic-kernel

Quick Start

1. Initialize & Instrument

setup.py
from turingpulse_sdk import init, TuringPulseConfig
from turingpulse_sdk_semantic_kernel import instrument_semantic_kernel

# Initialize TuringPulse
init(TuringPulseConfig(
    api_key="sk_live_your_api_key",
    workflow_name="my-project",
))

# Enable auto-instrumentation for Semantic Kernel
instrument_semantic_kernel()

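Rather than hardcoding the key, it can be loaded from the environment. This is a minimal sketch; the variable name TURINGPULSE_API_KEY is an assumption for illustration, not an SDK convention.

```python
import os

def load_api_key() -> str:
    """Read the TuringPulse API key from a hypothetical
    TURINGPULSE_API_KEY environment variable, if set."""
    return os.environ.get("TURINGPULSE_API_KEY", "")
```

Pass the result to TuringPulseConfig(api_key=...) in the snippet above.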
2. Use Semantic Kernel Normally

main.py
import asyncio

import semantic_kernel as sk
from semantic_kernel.connectors.ai.open_ai import AzureChatCompletion

kernel = sk.Kernel()

kernel.add_service(AzureChatCompletion(
    deployment_name="gpt-4o",
    endpoint="https://your-resource.openai.azure.com/",
    api_key="your-azure-key",
))

# Create a prompt function
summarize = kernel.add_function(
    plugin_name="TextPlugin",
    function_name="summarize",
    prompt="Summarize this text in 2 sentences: {{$input}}",
)

async def main():
    # Run the function - traces are captured automatically
    result = await kernel.invoke(summarize, input="Long article text here...")
    print(result)

asyncio.run(main())
ℹ️
Zero Code Changes
Once auto-instrumentation is enabled, all Semantic Kernel function invocations, planner steps, and plugin calls are automatically traced.

What Gets Captured

Data Point | Description | Example
Kernel Functions | Each function invocation with plugin and function name | TextPlugin.summarize
Planner Execution | Full planner trace, including plan generation and step execution | StepwisePlanner: 4 steps
Plugin Calls | Plugin invocations with parameters and return values | MathPlugin.add(a=5, b=3)
LLM Calls | Model name, prompt, completion, and token usage | gpt-4o, tokens: 250
Memory Operations | Semantic memory save and recall operations | memory.recall(query='...')
Latency | End-to-end and per-function timing | total: 3200ms, planner: 2100ms
Errors | Exceptions with kernel context and function details | KernelError: function not found
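As an illustrative sketch only, the data points above might land in a span record like the following. The field names are hypothetical, not TuringPulse's actual schema.

```python
# Illustrative span record mirroring the captured data points above.
# Field names are hypothetical, not TuringPulse's actual schema.
span = {
    "function": "TextPlugin.summarize",            # kernel function invoked
    "model": "gpt-4o",                             # LLM call metadata
    "tokens": 250,
    "latency_ms": {"total": 3200, "planner": 2100},
    "error": None,                                 # populated on exceptions
}

def span_summary(record: dict) -> str:
    """One-line summary such as a trace UI might render."""
    return (f"{record['function']} "
            f"({record['tokens']} tokens, {record['latency_ms']['total']}ms)")

print(span_summary(span))  # TextPlugin.summarize (250 tokens, 3200ms)
```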

Advanced Configuration

config.py
from turingpulse_sdk import KPIConfig
from turingpulse_sdk_semantic_kernel import instrument_semantic_kernel

run = instrument_semantic_kernel(
    kernel, chat_function,  # the kernel and kernel function to instrument
    name="semantic-kernel-service",
    model="gpt-4o",
    provider="openai",
    kpis=[
        KPIConfig(kpi_id="latency_ms", use_duration=True, alert_threshold=10000),
        KPIConfig(kpi_id="tokens", alert_threshold=8000, comparator="gt"),
    ],
)

Plugin Instrumentation

plugins.py
from semantic_kernel.functions import kernel_function

class WeatherPlugin:
    @kernel_function(name="get_forecast", description="Get weather forecast")
    def get_forecast(self, city: str) -> str:
        return f"Sunny, 75°F in {city}"

    @kernel_function(name="get_alerts", description="Get weather alerts")
    def get_alerts(self, region: str) -> str:
        return f"No active alerts for {region}"

kernel.add_plugin(WeatherPlugin(), plugin_name="Weather")

# Plugin function calls are automatically captured with inputs,
# outputs, and execution timing (invoke from within an async function)
result = await kernel.invoke(
    kernel.plugins["Weather"]["get_forecast"],
    city="Seattle",
)
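Because kernel functions are plain Python methods, a plugin can be unit-tested in isolation before wiring it into an instrumented kernel. The stand-in decorator below exists only so this sketch runs without semantic-kernel installed; in real code, use semantic_kernel.functions.kernel_function as shown above.

```python
# Stand-in for semantic_kernel.functions.kernel_function so this
# sketch is self-contained; it returns the method unchanged.
def kernel_function(name=None, description=None):
    def wrap(fn):
        return fn
    return wrap

class WeatherPlugin:
    @kernel_function(name="get_forecast", description="Get weather forecast")
    def get_forecast(self, city: str) -> str:
        return f"Sunny, 75°F in {city}"

# The method behaves like any Python callable - no kernel required.
assert WeatherPlugin().get_forecast("Seattle") == "Sunny, 75°F in Seattle"
```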

Planner Tracing

planner.py
from semantic_kernel.planners import FunctionCallingStepwisePlanner

planner = FunctionCallingStepwisePlanner(
    service_id="gpt-4o",
    max_iterations=10,
)

# Planner execution is fully traced:
# - Plan generation step
# - Each iteration with selected function
# - Final result aggregation
# (invoke from within an async function)
result = await planner.invoke(
    kernel,
    question="What's the weather in Seattle and should I bring an umbrella?",
)
💡
Azure OpenAI Support
TuringPulse seamlessly works with both OpenAI and Azure OpenAI connectors in Semantic Kernel, tracking costs and tokens for both.

Next Steps