---
license: apache-2.0
task_categories:
  - text-generation
language:
  - en
tags:
  - benchmark
  - agents
  - llm-agents
  - test-driven-development
  - specification
  - mutation-testing
pretty_name: SpecSuite-Core
arxiv: 2603.08806
size_categories:
  - n<1K
configs:
  - config_name: specs
    data_files:
      - split: train
        path: data/specs/train.jsonl
  - config_name: mutations
    data_files:
      - split: train
        path: data/mutations/train.jsonl
  - config_name: results
    data_files:
      - split: train
        path: data/results/train.jsonl
---

# SpecSuite-Core

**SpecSuite-Core** is a benchmark suite of four deeply specified AI agent behavior contracts, designed to evaluate the **TDAD (Test-Driven Agent Definition)** methodology described in our paper.

- Paper: [Test-Driven Agent Definition (TDAD)](https://arxiv.org/abs/2603.08806)
- Code: [GitHub](https://github.com/f-labs-io/tdad-paper-code)

## Overview

Each specification defines a complete agent behavior contract including:
- **Tools** with typed input/output schemas and failure modes
- **Policies** with priorities and enforcement levels
- **Decision trees** defining the agent's control flow
- **Response contracts** with required JSON fields
- **Mutation intents** for robustness testing
- **Spec evolution** (v1 → v2) for backward compatibility testing
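
To make the structure concrete, here is a heavily abbreviated sketch of what a spec's YAML might contain. Only the tool name `verify_identity` comes from the dataset; every other field name and value below is illustrative, not the dataset's actual schema:

```yaml
# Hypothetical, abbreviated sketch -- field names are illustrative,
# not the dataset's actual schema.
spec_id: supportops_v1
tools:
  - name: verify_identity
    inputs: {customer_id: string}
    outputs: {verified: bool}
    failure_modes: [timeout, identity_mismatch]
policies:
  - id: P1
    priority: 1
    enforcement: hard
    rule: "Never act on an account before identity is verified."
decision_tree:
  root:
    if: verified
    then: handle_request
    else: escalate
response_contract:
  required_fields: [action, rationale, policy_refs]
```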

## Specifications

| Spec | Domain | v1 Tools | v1 Policies | v2 Change |
|------|--------|----------|-------------|-----------|
| **SupportOps** | Customer support (cancel, address, billing) | 7 | 4 | Abuse detection |
| **DataInsights** | SQL analytics & reporting | 3 | 4 | Cost-aware queries |
| **IncidentRunbook** | Incident response & escalation | 6 | 4 | Customer impact tracking |
| **ExpenseGuard** | Expense approval & reimbursement | 6 | 5 | Manager approval gate |

## Configs

### `specs` — Agent Specifications (8 rows)

Each row is one spec version (4 specs × 2 versions). The `spec_yaml` field contains the full YAML specification.

```python
from datasets import load_dataset
specs = load_dataset("f-labs-io/SpecSuite-Core", "specs")

# Get SupportOps v1
spec = specs["train"].filter(lambda x: x["spec_id"] == "supportops_v1")[0]
print(spec["title"])        # "SupportOps Agent"
print(spec["tool_names"])   # ["verify_identity", "get_account", ...]
print(spec["spec_yaml"])    # Full YAML specification
```

### `mutations` — Mutation Intents (27 rows)

Each row is a mutation intent: a description of a plausible behavioral failure that the test suite should detect.

```python
from datasets import load_dataset
mutations = load_dataset("f-labs-io/SpecSuite-Core", "mutations")

# All critical mutations
critical = mutations["train"].filter(lambda x: x["severity"] == "critical")
for m in critical:
    print(f"{m['spec_lineage']}/{m['mutation_id']}: {m['intent'][:80]}...")
```

Categories: `policy_violation`, `process_violation`, `business_logic_violation`, `grounding_violation`, `escalation_violation`, `safety_violation`, `quality_regression`, `compliance_violation`, `robustness_violation`, `decision_violation`, `tooling_violation`

### `results` — Pipeline Run Results (24 rows)

Each row is one end-to-end TDAD pipeline run with metrics across all 4 evaluation dimensions.

```python
from datasets import load_dataset
results = load_dataset("f-labs-io/SpecSuite-Core", "results")

for r in results["train"]:
    print(f"{r['spec']}/{r['version']}: VPR={r['vpr_percent']}% HPR={r['hpr_percent']}% MS={r['mutation_score']}% ${r['total_cost_usd']:.2f}")
```

## Metrics

| Metric | Full Name | What It Measures |
|--------|-----------|-----------------|
| **VPR** | Visible Pass Rate | Compilation success (tests seen during prompt optimization) |
| **HPR** | Hidden Pass Rate | Generalization (held-out tests never seen during compilation) |
| **MS** | Mutation Score | Robustness (% of seeded behavioral faults detected by tests) |
| **SURS** | Spec Update Regression Score | Backward compatibility (v1 tests passing on v2 prompt) |
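
All four metrics share the same percentage arithmetic; only the test set and prompt version change. A worked example with made-up numbers (not dataset values):

```python
# Illustrative metric arithmetic -- the counts below are made up,
# not values from the `results` config.
def pass_rate(passed: int, total: int) -> float:
    """Percentage of items that pass (or, for MS, mutations detected)."""
    return 100.0 * passed / total

vpr = pass_rate(18, 20)   # 18 of 20 visible tests pass during compilation
hpr = pass_rate(7, 10)    # 7 of 10 held-out tests pass
ms = pass_rate(24, 27)    # 24 of 27 seeded mutations detected by the tests

print(f"VPR={vpr:.1f}% HPR={hpr:.1f}% MS={ms:.1f}%")
```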

## Full Pipeline

The specifications in this dataset are designed to be used with the TDAD pipeline:

1. **TestSmith** generates executable tests from the spec
2. **PromptSmith** iteratively compiles a system prompt until tests pass
3. **MutationSmith** generates behavioral mutations to measure test quality
4. Tests and mutations measure the agent across VPR, HPR, MS, and SURS
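
The compile loop in step 2 can be sketched roughly as follows. Every function here is a hypothetical stand-in, not the repo's actual API; the stubs simulate a prompt that fixes one failing test per refinement round:

```python
# Rough sketch of the PromptSmith compile loop (step 2). All function names
# are hypothetical stand-ins, not the actual pipeline API.

def draft_prompt(spec: str) -> str:
    # Stub: produce an initial system prompt from the spec.
    return f"System prompt drafted from: {spec}"

def run_tests(prompt: str, tests: list[str]) -> list[str]:
    # Stub: a test "passes" once its name appears in the prompt.
    return [t for t in tests if t not in prompt]

def refine_prompt(prompt: str, failures: list[str]) -> str:
    # Stub: patch the prompt to address the first failing test.
    return prompt + " | handles " + failures[0]

def compile_prompt(spec: str, tests: list[str], max_iters: int = 10) -> tuple[str, float]:
    """Iteratively refine a system prompt until the visible tests pass."""
    prompt = draft_prompt(spec)
    for _ in range(max_iters):
        failures = run_tests(prompt, tests)
        if not failures:
            break
        prompt = refine_prompt(prompt, failures)
    # Visible Pass Rate after compilation
    vpr = 100.0 * (len(tests) - len(run_tests(prompt, tests))) / len(tests)
    return prompt, vpr

prompt, vpr = compile_prompt("supportops_v1", ["verify_first", "refund_policy"])
print(f"VPR={vpr:.0f}%")  # the stubs converge, so this reaches 100%
```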

See the [GitHub repo](https://github.com/f-labs-io/tdad-paper-code) for the full executable pipeline with Docker support.

## Citation

```bibtex
@article{rehan2026tdad,
  title={Test-Driven Agent Definition: A Specification-First Framework for LLM Agent Development},
  author={Rehan, Tzafrir},
  journal={arXiv preprint arXiv:2603.08806},
  year={2026}
}
```

## License

Apache 2.0