# Insurance AI Agent Reliability Benchmark
The first standardized benchmark for AI agents in insurance.
510 test scenarios. 10 categories. One question: Can your AI handle real insurance workflows?
## Why This Exists
Insurance AI agents must be reliable. There is no room for error.
A wrong routing decision delays a claim. A missed compliance flag triggers regulatory action. A failed escalation harms a vulnerable customer.
General chatbot benchmarks do not test for this. No standard benchmark existed for insurance AI agents.
This dataset fills that gap.
Built from patterns observed in production voice AI systems for insurance. Every scenario reflects a real workflow.
## What It Tests
510 scenarios across 10 categories test four things:
- Intent recognition -- Does the AI understand what the customer needs?
- Routing decisions -- Does the AI send the request to the right place?
- Action completeness -- Does the AI take every required step?
- Response quality -- Does the AI respond clearly and correctly?
## Categories
| Category | Count | Description |
|---|---|---|
| claim_intake | 80 | New claim filing |
| claim_status | 60 | Claim status inquiries |
| policy_inquiry | 70 | Coverage questions |
| payment_processing | 50 | Billing and payments |
| policy_change | 60 | Policy modifications |
| document_request | 40 | Document requests |
| escalation_trigger | 50 | Human handoff triggers |
| error_recovery | 40 | Error handling |
| multi_turn_workflow | 30 | Complex multi-step flows |
| edge_cases | 30 | Unusual and adversarial inputs |
## Routing Decisions
Each scenario maps to one of four routing decisions:
| Decision | Meaning |
|---|---|
| ai_handle | AI handles the request fully |
| ai_with_verification | AI handles it, human verifies |
| human_handoff | Transfer to a human agent |
| hybrid_collaborative | AI and human work together |
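As a concrete illustration, a routing prediction can be checked against a scenario's expected output. This is a minimal sketch: the four labels mirror the table above and the field names follow the schema documented on this card, but the `routing_correct` helper and the scenario fragment are hypothetical, not part of the dataset.

```python
# The four routing labels defined by this benchmark.
ROUTING_DECISIONS = {
    "ai_handle",
    "ai_with_verification",
    "human_handoff",
    "hybrid_collaborative",
}

def routing_correct(predicted: str, scenario: dict) -> bool:
    """True when the predicted label matches expected_output.routing_decision."""
    if predicted not in ROUTING_DECISIONS:
        raise ValueError(f"unknown routing decision: {predicted!r}")
    return predicted == scenario["expected_output"]["routing_decision"]

# Hypothetical scenario fragment for illustration only.
scenario = {"expected_output": {"routing_decision": "human_handoff"}}
print(routing_correct("human_handoff", scenario))          # True
print(routing_correct("ai_with_verification", scenario))   # False
```

Rejecting unknown labels up front keeps a typo in a model's output from silently counting as an incorrect-but-valid decision.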
## Insurance Lines Covered
- Personal Auto
- Homeowners
- Renters
- Life
- Health
- Commercial General Liability
- Workers' Compensation
- Professional Liability
- Cyber Liability
## Difficulty Distribution
| Level | Share | Description |
|---|---|---|
| Easy | ~40% | Clear intent. Standard workflow. |
| Medium | ~35% | Ambiguous input. Requires clarification. |
| Hard | ~25% | Complex, regulatory, or adversarial. |
## Evaluation Metrics
Five metrics measure agent performance:
| Metric | What It Measures |
|---|---|
| Intent Accuracy | Correct intent classification |
| Routing Correctness | Correct routing decision |
| Action Completeness | All required actions identified (0-1 score) |
| Response Quality | Rubric-based evaluation |
| Latency Compliance | Response within acceptable time |
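Two of these metrics lend themselves to short reference implementations. The sketch below assumes intent accuracy is an exact-match rate over a batch and action completeness is the fraction of required actions the agent identified, consistent with the 0-1 score described above; the helper names are illustrative, not shipped with the dataset.

```python
def intent_accuracy(predicted: list, expected: list) -> float:
    """Exact-match rate over a batch of intent labels."""
    if len(predicted) != len(expected):
        raise ValueError("batch sizes differ")
    if not expected:
        return 0.0
    return sum(p == e for p, e in zip(predicted, expected)) / len(expected)

def action_completeness(predicted_actions, required_actions) -> float:
    """Fraction of required actions identified, scored 0-1."""
    required = set(required_actions)
    if not required:
        return 1.0  # nothing required: vacuously complete
    return len(required & set(predicted_actions)) / len(required)

# Hypothetical labels and actions for illustration only.
print(intent_accuracy(["file_claim", "check_status"],
                      ["file_claim", "get_quote"]))        # 0.5
print(action_completeness(["verify_identity", "open_claim"],
                          ["verify_identity", "open_claim",
                           "send_confirmation"]))
```

The set-intersection form ignores extra predicted actions; if over-acting should be penalized, a precision-style term would be needed as well.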
## Quick Start
```python
from datasets import load_dataset

dataset = load_dataset("pashas/insurance-ai-reliability-benchmark")

# Access splits
train = dataset["train"]            # 357 scenarios
validation = dataset["validation"]  # 76 scenarios
test = dataset["test"]              # 77 scenarios

# Look at a scenario
scenario = train[0]
print(f"Category: {scenario['category']}")
print(f"Customer: {scenario['input']['customer_message']}")
print(f"Expected routing: {scenario['expected_output']['routing_decision']}")
```
## Dataset Splits
| Split | Examples | Purpose |
|---|---|---|
| Train | 357 | Model development and fine-tuning |
| Validation | 76 | Hyperparameter tuning |
| Test | 77 | Final evaluation |
## Schema
Each scenario includes:
- `id` -- Unique scenario identifier
- `category` -- One of 10 test categories
- `subcategory` -- Specific workflow within the category
- `difficulty` -- Easy, Medium, or Hard
- `input` -- Customer message and context
- `expected_output` -- Correct intent, routing, actions, and response
- `evaluation_criteria` -- Scoring rubric for the scenario
- `metadata` -- Insurance line, regulatory flags, and additional context
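A lightweight sanity check against this record layout can catch malformed scenarios before evaluation. The key names below are taken from the schema on this card; the validator itself is an illustrative helper, not part of the dataset.

```python
# Top-level keys every scenario record should carry, per the schema above.
REQUIRED_KEYS = {
    "id", "category", "subcategory", "difficulty",
    "input", "expected_output", "evaluation_criteria", "metadata",
}

def missing_keys(scenario: dict) -> set:
    """Return the schema keys absent from a scenario record."""
    return REQUIRED_KEYS - scenario.keys()

# Hypothetical partial record for illustration.
partial = {"id": "claim_intake_001", "category": "claim_intake",
           "difficulty": "Easy"}
print(sorted(missing_keys(partial)))
```

An empty result means the record has every top-level field; it does not validate the nested `input` or `expected_output` structures.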
## Related Work
- Hybrid Orchestrator Framework: github.com/pavelsukhachev/hybrid-orchestrator
- Research paper: "Hybrid Human-AI Orchestration: Design Patterns from a Production Voice AI System" (TechRxiv/IEEE)
## Citation
```bibtex
@dataset{sukhachev2026insurance,
  title={Insurance AI Agent Reliability Benchmark},
  author={Sukhachev, Pavel},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/datasets/pashas/insurance-ai-reliability-benchmark}
}
```
## License
Apache 2.0
## Author
Pavel Sukhachev
- Email: pavel@electromania.llc
- LinkedIn: linkedin.com/in/pavelsukhachev
- ORCID: 0009-0006-4546-9963