# Audio Agent Bench Suite
A suite of six multi-turn, multi-domain spoken conversational benchmarks for evaluating voice AI and audio agent systems. Each sub-dataset targets a distinct real-world deployment domain, together covering the core capabilities required of production audio agents: instruction following, knowledge-base grounding, tool/function-call accuracy, long-range conversational memory, and state tracking.
## Sub-datasets
| Dataset | Domain | Turns | Hugging Face ID |
|---|---|---|---|
| conversation-bench | AI conference assistant (AI Engineer World's Fair) | 75 | arcada-labs/conversation-bench |
| product-bench | Laptop sales assistant (TechMart Electronics) | 31 | arcada-labs/product-bench |
| grocery-bench | Grocery ordering assistant (Harvest & Hearth Market) | 30 | arcada-labs/grocery-bench |
| appointment-bench | Dental office scheduling assistant (Bayshore Family Dental) | 25 | arcada-labs/appointment-bench |
| event-bench | Event planning assistant (Evergreen Events) | 29 | arcada-labs/event-bench |
| assistant-bench | Personal assistant with flights, hotels, calendar & email (Atlas) | 31 | arcada-labs/assistant-bench |
## Overview
Each sub-dataset consists of a sequence of conversational turns designed to be evaluated as a complete multi-turn dialogue. Every turn contains:
- A human voice recording (WAV) of the user's input
- A transcript of the spoken input
- A golden reference response that an ideal agent should produce
- Function/tool call specifications for turns requiring tool use (null otherwise)
- Category labels and scoring dimensions for structured evaluation
The benchmark is English-only and is intended for evaluation purposes only, not for training.
## Schema

All six sub-datasets share the following schema:

| Field | Type | Description |
|---|---|---|
| turn_id | int | Sequential turn index (0-based) |
| input_text | string | Transcript of the user's spoken input |
| golden_text | string | Reference golden response |
| required_function_call | list or null | Function/tool calls required for this turn |
| function_call_response | list or null | Expected responses from function calls |
| categories | list | Turn-level category labels |
| subcategory | string or null | Optional finer-grained category |
| scoring_dimensions | list | Evaluation dimensions applicable to this turn |
| audio_file | string | Relative path to the WAV audio file |
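As an illustration of the schema, the sketch below validates a single turn record; the field names and types come from the table above, while the validator itself and the sample row's text are hypothetical.

```python
# Field -> expected Python type(s), per the shared schema of all six sub-datasets.
SCHEMA = {
    "turn_id": int,
    "input_text": str,
    "golden_text": str,
    "required_function_call": (list, type(None)),
    "function_call_response": (list, type(None)),
    "categories": list,
    "subcategory": (str, type(None)),
    "scoring_dimensions": list,
    "audio_file": str,
}

def validate_turn(row):
    """Return a list of schema violations for one benchmark turn (empty if ok)."""
    errors = []
    for field, expected in SCHEMA.items():
        if field not in row:
            errors.append(f"missing field: {field}")
        elif not isinstance(row[field], expected):
            errors.append(f"bad type for {field}: {type(row[field]).__name__}")
    return errors

# Invented sample row for illustration only -- not an actual benchmark turn.
example = {
    "turn_id": 0,
    "input_text": "Hi, what time does registration open?",
    "golden_text": "Registration opens at 8 a.m.",
    "required_function_call": None,
    "function_call_response": None,
    "categories": ["basic_qa"],
    "subcategory": None,
    "scoring_dimensions": ["tool_use_correct", "instruction_following", "kb_grounding"],
    "audio_file": "audio/turn_000.wav",
}
print(validate_turn(example))  # []
```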
## Categories

| Category | Description |
|---|---|
| basic_qa | Factual question answering from the knowledge base |
| tool_use | Requires calling one or more functions/tools |
| long_range_memory | Requires recall of information from earlier in the conversation |
| state_tracking | Involves cross-entity references, chained corrections, or multi-step state changes |
| ambiguity_handling | Input is ambiguous and requires disambiguation or appropriate clarification |
## Scoring Dimensions

Each turn is evaluated across up to five dimensions. Core dimensions (tool_use_correct, instruction_following, kb_grounding) are scored on every turn; state_tracking and ambiguity_handling are scored only on turns tagged with the relevant categories.

| Dimension | Scored on | Description |
|---|---|---|
| tool_use_correct | All turns | Whether required tool/function calls were made correctly. Missed calls land here unless a more specific dimension applies. |
| instruction_following | All turns | Whether the assistant's words and actions followed the user's instructions in a non-tool sense. Strictly separated from tool_use_correct. |
| kb_grounding | All turns | Whether the response is grounded in the knowledge base provided in the system prompt. |
| state_tracking | Tagged turns | Whether the agent correctly tracked conversational state across turns (e.g. cross-entity references, chained corrections, long-range memory). |
| ambiguity_handling | Tagged turns | Whether the agent correctly resolved ambiguous input, neither over-clarifying nor hallucinating a resolution. Over-clarification penalties land here. |
## Loading the Data

Each sub-dataset can be loaded individually using the Hugging Face `datasets` library:
```python
from datasets import load_dataset

# Load a specific sub-dataset
ds = load_dataset("arcada-labs/conversation-bench")

# Load all six
domains = [
    "conversation-bench",
    "product-bench",
    "grocery-bench",
    "appointment-bench",
    "event-bench",
    "assistant-bench",
]
datasets = {name: load_dataset(f"arcada-labs/{name}") for name in domains}
```
Each sub-dataset has a default configuration with a train split containing all benchmark turns.
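Once loaded, turns can be selected by category for targeted evaluation. The sketch below uses an in-memory list of dicts as a stand-in for a loaded train split; the helper name and the sample rows are illustrative, not part of the benchmark.

```python
def turns_with_category(rows, category):
    """Select benchmark turns whose `categories` list contains `category`."""
    return [row for row in rows if category in row.get("categories", [])]

# Stand-in rows; with a real sub-dataset this would be ds["train"]
rows = [
    {"turn_id": 0, "categories": ["basic_qa"]},
    {"turn_id": 1, "categories": ["tool_use", "state_tracking"]},
    {"turn_id": 2, "categories": ["tool_use"]},
]
tool_turns = turns_with_category(rows, "tool_use")
print([t["turn_id"] for t in tool_turns])  # [1, 2]
```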
## Audio Files

Audio files are recordings of two human English-speaking voice actors reading the scripted user inputs. They are stored in the audio/ directory of each sub-dataset repo and referenced by relative path in the audio_file field (e.g. audio/turn_000.wav).
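A quick way to inspect a turn's recording is Python's standard-library `wave` module. Since no real benchmark file is bundled here, the snippet first writes a short silent WAV as a stand-in for a real turn recording; the file name and parameters are illustrative.

```python
import wave

# Write a 1-second silent 16 kHz mono WAV as a stand-in for a real
# benchmark recording (e.g. audio/turn_000.wav in a sub-dataset repo).
sample_rate = 16000
with wave.open("turn_000.wav", "wb") as w:
    w.setnchannels(1)         # mono
    w.setsampwidth(2)         # 16-bit PCM
    w.setframerate(sample_rate)
    w.writeframes(b"\x00\x00" * sample_rate)

def wav_info(path):
    """Return (channels, sample_rate, duration_seconds) for a WAV file."""
    with wave.open(path, "rb") as w:
        return w.getnchannels(), w.getframerate(), w.getnframes() / w.getframerate()

print(wav_info("turn_000.wav"))  # (1, 16000, 1.0)
```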
## Evaluation
The benchmark is designed for end-to-end evaluation of audio agent systems: the audio file is the input, and the system's response is compared against golden_text along the applicable scoring_dimensions. For tool-use turns, required_function_call and function_call_response define the expected tool interaction.
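As a sketch of how a harness might check the tool-use contract per turn: the function below and its exact-match comparison are illustrative assumptions, not part of the benchmark's official scoring.

```python
def check_tool_calls(required, produced):
    """Compare an agent's tool calls against a turn's required_function_call.

    `required` is the turn's required_function_call field (list or None);
    `produced` is whatever list of calls the agent under test emitted.
    Exact-match comparison is a simplifying assumption; a real harness
    would likely match on function name plus normalized arguments.
    """
    if required is None:          # non-tool turn: any call is spurious
        return not produced
    return produced == required

# Hypothetical tool-use turn (invented for illustration)
required = [{"name": "book_appointment", "arguments": {"date": "2025-03-14"}}]
agent_calls = [{"name": "book_appointment", "arguments": {"date": "2025-03-14"}}]
print(check_tool_calls(required, agent_calls))  # True
print(check_tool_calls(None, agent_calls))      # False: spurious call
```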
## Responsible AI
This dataset was collected and annotated with the following considerations:
- Fictional identities: All names used in scripts (e.g. "Jennifer Smith") are fictional. No real personally identifiable information appears in the text modality.
- Speaker consent: Audio recordings feature two human English-speaking voice actors who provided consent for inclusion of their recordings.
- No sensitive data: No financial, health, or other sensitive personal data is present.
- Evaluation only: The dataset is not intended for model training.
- Known limitations: English-only; limited to predefined domains; audio features only two human speakers, which may not reflect the acoustic diversity of real-world users (accents, speaking styles, noise conditions); small scale (25–75 turns per sub-dataset).
## License

Creative Commons Attribution 4.0 (CC BY 4.0)
## Citation
If you use the Audio Agent Bench Suite in your research, please cite:
```bibtex
@dataset{arcada_labs_audio_agent_bench_suite,
  author    = {Arcada Labs},
  title     = {AudioAgentBench: Evaluating Multi-Turn Voice Agents on Real-World Tasks},
  year      = {2026},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/arcada-labs/audio-agent-bench-suite}
}
```