# Human-after-the-Loop (HATL)
Audit and review AI agent actions after they complete.
## What is HATL?

Human-after-the-Loop (HATL) is a governance pattern in which agent actions are reviewed after they execute. Unlike Human-in-the-Loop (HITL), HATL never blocks execution, making it ideal for quality audits and compliance reviews.
## When to Use HATL
- Quality Assurance - Review output quality
- Compliance Audits - Verify regulatory compliance
- Training Data - Collect labeled examples
- Performance Monitoring - Track agent behavior
- Low-Risk Actions - Where blocking execution for pre-approval isn't acceptable
## Enabling HATL

### Via SDK
```python
from turingpulse_sdk import instrument, GovernanceDirective

@instrument(
    name="customer-support",
    governance=GovernanceDirective(
        hatl=True,
        reviewers=["qa@company.com"],
    ),
)
def handle_query(query: str):
    # `agent` is your existing agent instance
    return agent.respond(query)
```

### Via UI
- Navigate to Governance → Policies
- Click Create Policy
- Select Human-after-the-Loop (HATL)
- Configure workflow, reviewers, and sample rate
- Save and enable
## Sample Rate & Conditions
To review only a percentage of executions or apply conditional triggers, configure policies in the platform:
- Navigate to Governance → Policies
- Create a HATL policy for your workflow
- Set sample rate (e.g., review 10% of runs)
- Add conditions (e.g., always review low-confidence outputs)
The SDK enables HATL on a workflow. Sampling and condition logic is managed in the platform, giving you flexibility without code changes.
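The split described above can be pictured as a small decision function evaluated by the platform after each run. This is a hypothetical sketch, not the platform's actual implementation: `should_review`, the `confidence` field, and the thresholds are illustrative names chosen for this example.

```python
import random

def should_review(run, sample_rate=0.10, confidence_floor=0.7, rng=random.random):
    """Decide whether a completed run enters the HATL review queue.

    Hypothetical platform-side logic: conditional triggers force review;
    otherwise a fraction of runs is sampled at `sample_rate`.
    """
    # Condition: always review low-confidence outputs
    if run.get("confidence", 1.0) < confidence_floor:
        return True
    # Sample rate: review roughly 10% of the remaining runs
    return rng() < sample_rate
```

Because both thresholds live in the platform policy rather than in code, you can tighten the sample rate or add conditions without redeploying the workflow.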
## HATL Workflow
- Execution - Agent runs normally
- Queue - Selected runs added to review queue
- Review - Reviewer examines input/output
- Action - Acknowledge, Flag, or Escalate
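The four steps above can be sketched as a minimal in-memory model. This is illustrative only: `ReviewQueue` and its methods are hypothetical names, not part of the SDK or platform API.

```python
from collections import deque

# The review actions from the table in the next section
REVIEW_ACTIONS = {"acknowledge", "flag", "escalate"}

class ReviewQueue:
    """Hypothetical model of the HATL workflow: queue, review, action."""

    def __init__(self):
        self.queue = deque()
        self.reviewed = []

    def enqueue(self, run):
        # Step 2: selected runs are added to the review queue
        self.queue.append(run)

    def review_next(self, action, note=None):
        # Steps 3-4: the reviewer examines the oldest queued run,
        # then acknowledges, flags, or escalates it (optionally with a note)
        if action not in REVIEW_ACTIONS:
            raise ValueError(f"unknown review action: {action}")
        run = self.queue.popleft()
        record = {"run": run, "action": action, "note": note}
        self.reviewed.append(record)
        return record
```

Note that step 1 (execution) happens entirely outside this model: the agent has already finished by the time a run is enqueued, which is what distinguishes HATL from HITL.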
## Review Actions
| Action | Description |
|---|---|
| Acknowledge | Mark as reviewed, no issues |
| Flag | Mark for follow-up or investigation |
| Escalate | Forward to senior reviewer |
| Add Note | Document findings |
## HATL Configuration
| Option | Description |
|---|---|
| `hatl` | Enable HATL governance |
| `reviewers` | List of reviewer emails |
| `severity` | Default priority level |