---
license: apache-2.0
task_categories:
- text-generation
language:
- en
tags:
- benchmark
- agents
- llm-agents
- test-driven-development
- specification
- mutation-testing
pretty_name: SpecSuite-Core
arxiv: 2603.08806
size_categories:
- n<1K
configs:
- config_name: specs
  data_files:
  - split: train
    path: data/specs/train.jsonl
- config_name: mutations
  data_files:
  - split: train
    path: data/mutations/train.jsonl
- config_name: results
  data_files:
  - split: train
    path: data/results/train.jsonl
---

# SpecSuite-Core

**SpecSuite-Core** is a benchmark suite of four deeply specified AI agent behavioral contracts, designed to evaluate the **TDAD (Test-Driven Agent Definition)** methodology described in our paper.

Paper: [Test-Driven Agent Definition (TDAD)](https://arxiv.org/abs/2603.08806)
Code: [GitHub](https://github.com/f-labs-io/tdad-paper-code)

## Overview

Each specification defines a complete agent behavior contract, including:

- **Tools** with typed input/output schemas and failure modes
- **Policies** with priorities and enforcement levels
- **Decision trees** defining the agent's control flow
- **Response contracts** with required JSON fields
- **Mutation intents** for robustness testing
- **Spec evolution** (v1 → v2) for backward-compatibility testing

## Specifications

| Spec | Domain | v1 Tools | v1 Policies | v2 Change |
|------|--------|----------|-------------|-----------|
| **SupportOps** | Customer support (cancel, address, billing) | 7 | 4 | Abuse detection |
| **DataInsights** | SQL analytics & reporting | 3 | 4 | Cost-aware queries |
| **IncidentRunbook** | Incident response & escalation | 6 | 4 | Customer impact tracking |
| **ExpenseGuard** | Expense approval & reimbursement | 6 | 5 | Manager approval gate |

## Configs

### `specs` — Agent Specifications (8 rows)

Each row is one spec version (4 specs × 2 versions). The `spec_yaml` field contains the full YAML specification.

```python
from datasets import load_dataset

specs = load_dataset("f-labs-io/SpecSuite-Core", "specs")

# Get SupportOps v1
spec = specs["train"].filter(lambda x: x["spec_id"] == "supportops_v1")[0]
print(spec["title"])      # "SupportOps Agent"
print(spec["tool_names"]) # ["verify_identity", "get_account", ...]
print(spec["spec_yaml"])  # Full YAML specification
```

### `mutations` — Mutation Intents (27 rows)

Each row is a mutation intent: a description of a plausible behavioral failure that the test suite should detect.

```python
from datasets import load_dataset

mutations = load_dataset("f-labs-io/SpecSuite-Core", "mutations")

# All critical mutations
critical = mutations["train"].filter(lambda x: x["severity"] == "critical")
for m in critical:
    print(f"{m['spec_lineage']}/{m['mutation_id']}: {m['intent'][:80]}...")
```

Categories: `policy_violation`, `process_violation`, `business_logic_violation`, `grounding_violation`, `escalation_violation`, `safety_violation`, `quality_regression`, `compliance_violation`, `robustness_violation`, `decision_violation`, `tooling_violation`
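
As a quick way to get a feel for the category distribution, the rows can be tallied with a `Counter`. This is a hedged sketch: the `category` field name and the sample rows below are illustrative assumptions standing in for the real rows returned by `load_dataset("f-labs-io/SpecSuite-Core", "mutations")`.

```python
from collections import Counter

# Illustrative stand-in rows; the `category` field name is an assumption
# about the schema, not a guaranteed dataset column.
sample_rows = [
    {"mutation_id": "m01", "category": "policy_violation"},
    {"mutation_id": "m02", "category": "safety_violation"},
    {"mutation_id": "m03", "category": "policy_violation"},
]

counts = Counter(row["category"] for row in sample_rows)
print(counts.most_common())  # [('policy_violation', 2), ('safety_violation', 1)]
```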

### `results` — Pipeline Run Results (24 rows)

Each row is one end-to-end TDAD pipeline run, with metrics across all four evaluation dimensions.

```python
from datasets import load_dataset

results = load_dataset("f-labs-io/SpecSuite-Core", "results")

for r in results["train"]:
    print(
        f"{r['spec']}/{r['version']}: VPR={r['vpr_percent']}% "
        f"HPR={r['hpr_percent']}% MS={r['mutation_score']}% "
        f"${r['total_cost_usd']:.2f}"
    )
```

## Metrics

| Metric | Full Name | What It Measures |
|--------|-----------|------------------|
| **VPR** | Visible Pass Rate | Compilation success (tests seen during prompt optimization) |
| **HPR** | Hidden Pass Rate | Generalization (held-out tests never seen during compilation) |
| **MS** | Mutation Score | Robustness (% of seeded behavioral faults detected by tests) |
| **SURS** | Spec Update Regression Score | Backward compatibility (v1 tests passing on the v2 prompt) |
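
All four metrics reduce to simple percentages over test or mutation counts. The sketch below makes that reduction explicit; the raw counts (`passed`, `total`, `detected`, `seeded`) are illustrative inputs, not columns guaranteed to appear in the `results` config.

```python
# Hedged sketch: the metric definitions as plain ratios. Inputs are
# illustrative, not actual dataset fields.
def pass_rate_percent(passed: int, total: int) -> float:
    """Share of tests passing, as used for VPR, HPR, and SURS."""
    return 100.0 * passed / total

def mutation_score_percent(detected: int, seeded: int) -> float:
    """MS: share of seeded behavioral faults the test suite detects."""
    return 100.0 * detected / seeded

print(pass_rate_percent(18, 20))       # 90.0
print(mutation_score_percent(24, 27))  # ~88.9
```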

## Full Pipeline

The specifications in this dataset are designed to be used with the TDAD pipeline:

1. **TestSmith** generates executable tests from the spec
2. **PromptSmith** iteratively compiles a system prompt until tests pass
3. **MutationSmith** generates behavioral mutations to measure test quality
4. Tests and mutations measure the agent across VPR, HPR, MS, and SURS
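
The compile loop in step 2 can be sketched as a test-until-pass iteration. Every name here (`compile_prompt`, the toy revision rule, the toy test) is an illustrative stand-in, not the actual TDAD pipeline API.

```python
from typing import Callable, List

def compile_prompt(
    spec: str,
    tests: List[Callable[[str], bool]],
    max_iters: int = 5,
) -> str:
    """Toy PromptSmith-style loop: revise until all visible tests pass."""
    prompt = f"You are an agent. Follow this spec:\n{spec}"  # initial draft
    for _ in range(max_iters):
        failures = [t for t in tests if not t(prompt)]
        if not failures:  # all visible tests pass -> compilation done
            return prompt
        # Toy revision rule: append a reminder. A real loop would call an
        # LLM with the failing tests as feedback.
        prompt += "\nReminder: satisfy every policy in the spec."
    return prompt

# Toy visible test: the compiled prompt must mention the refund policy.
tests = [lambda p: "refund" in p.lower()]
compiled = compile_prompt("Policy: never promise refunds.", tests)
print("refund" in compiled.lower())  # True
```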

See the [GitHub repo](https://github.com/f-labs-io/tdad-paper-code) for the full executable pipeline with Docker support.

## Citation

```bibtex
@article{rehan2026tdad,
  title={Test-Driven Agent Definition: A Specification-First Framework for LLM Agent Development},
  author={Rehan, Tzafrir},
  journal={arXiv preprint arXiv:2603.08806},
  year={2026}
}
```

## License

Apache 2.0