Human-after-the-Loop (HATL)

Audit and review AI agent actions after they complete.

What is HATL?

Human-after-the-Loop (HATL) is a governance pattern where agent actions are reviewed after execution. Unlike Human-in-the-Loop (HITL), it doesn't block execution, making it ideal for quality audits and compliance reviews.

When to Use HATL

  • Quality Assurance - Review output quality
  • Compliance Audits - Verify regulatory compliance
  • Training Data - Collect labeled examples
  • Performance Monitoring - Track agent behavior
  • Low-Risk Actions - Where a pre-execution approval delay isn't acceptable

Enabling HATL

Via SDK

from turingpulse_sdk import instrument, GovernanceDirective

@instrument(
    name="customer-support",
    governance=GovernanceDirective(
        hatl=True,
        reviewers=["qa@company.com"],
    )
)
def handle_query(query: str):
    return agent.respond(query)

Via UI

  1. Navigate to Governance → Policies
  2. Click Create Policy
  3. Select Human-after-the-Loop (HATL)
  4. Configure workflow, reviewers, and sample rate
  5. Save and enable

Sample Rate & Conditions

To review only a percentage of executions or apply conditional triggers, configure policies in the platform:

  1. Navigate to Governance → Policies
  2. Create a HATL policy for your workflow
  3. Set sample rate (e.g., review 10% of runs)
  4. Add conditions (e.g., always review low-confidence outputs)

The SDK enables HATL on a workflow. Sampling and condition logic is managed in the platform, giving you flexibility without code changes.
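To make the selection behavior concrete, here is a minimal sketch of the logic the steps above describe. It is illustrative only: TuringPulse applies sampling and conditions server-side, so nothing like this lives in your code, and the function and field names are assumptions.

```python
import random

def should_review(run: dict, sample_rate: float = 0.10,
                  confidence_floor: float = 0.5, rng=random.random) -> bool:
    """Queue a run for HATL review if a condition matches or it is sampled."""
    # Conditional trigger: always review low-confidence outputs.
    if run.get("confidence", 1.0) < confidence_floor:
        return True
    # Sample rate: otherwise review a percentage of runs at random (e.g. 10%).
    return rng() < sample_rate
```

With a 10% sample rate, roughly one in ten ordinary runs is queued, while every run below the confidence floor is queued unconditionally.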

HATL Workflow

  1. Execution - Agent runs normally
  2. Queue - Selected runs added to review queue
  3. Review - Reviewer examines input/output
  4. Action - Acknowledge, Flag, or Escalate
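The four stages above can be sketched as follows. The class and function names here are illustrative assumptions, not TuringPulse SDK APIs; the point is that the caller gets the agent's result immediately, and review happens later.

```python
from dataclasses import dataclass
from typing import Callable, List, Optional

@dataclass
class ReviewItem:
    run_id: str
    input: str
    output: str
    status: str = "queued"        # queued -> reviewed
    action: Optional[str] = None  # acknowledge | flag | escalate

queue: List[ReviewItem] = []

def execute_and_queue(run_id: str, query: str,
                      respond: Callable[[str], str],
                      selected: bool = True) -> str:
    output = respond(query)       # 1. Execution - agent runs normally
    if selected:                  # 2. Queue - only selected runs are added
        queue.append(ReviewItem(run_id, query, output))
    return output                 # caller is never blocked on review

def review(item: ReviewItem, action: str) -> None:
    item.status = "reviewed"      # 3. Review - reviewer examines input/output
    item.action = action          # 4. Action - acknowledge, flag, or escalate

execute_and_queue("run-1", "refund request", lambda q: f"handled: {q}")
review(queue[0], "acknowledge")
```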

Review Actions

  Action        Description
  Acknowledge   Mark as reviewed, no issues
  Flag          Mark for follow-up or investigation
  Escalate      Forward to senior reviewer
  Add Note      Document findings
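If you record review outcomes in your own tooling, an enum keeps the action values consistent. The identifiers below are assumptions that mirror the table; the platform itself exposes these actions in the review UI.

```python
from enum import Enum

# Hypothetical enum mirroring the review actions in the table above.
class ReviewAction(Enum):
    ACKNOWLEDGE = "acknowledge"  # mark as reviewed, no issues
    FLAG = "flag"                # mark for follow-up or investigation
    ESCALATE = "escalate"        # forward to senior reviewer
    ADD_NOTE = "add_note"        # document findings
```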

HATL Configuration

  Option      Description
  hatl        Enable HATL governance
  reviewers   List of reviewer emails
  severity    Default priority level
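Combining the options in the table, a fuller directive might look like this sketch. The severity value shown is an assumption; check the platform for accepted values.

```python
from turingpulse_sdk import instrument, GovernanceDirective

@instrument(
    name="customer-support",
    governance=GovernanceDirective(
        hatl=True,                     # enable HATL governance
        reviewers=["qa@company.com"],  # list of reviewer emails
        severity="medium",             # default priority level (assumed value)
    )
)
def handle_query(query: str):
    return agent.respond(query)
```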

Next Steps