---
language:
  - en
license: mit
pretty_name: Assistant Bench
tags:
  - audio
  - benchmark
  - speech-to-speech
  - voice-ai
  - multi-turn
  - tool-use
  - evaluation
  - state-tracking
  - function-calling
task_categories:
  - automatic-speech-recognition
  - text-generation
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: train
        path: metadata.jsonl
---

# Assistant Bench

A 31-turn multi-turn speech-to-speech benchmark for evaluating voice AI models as a personal assistant handling flights, email, calendar events, and reminders.

Part of Audio Arena, a suite of six benchmarks spanning 221 turns across different domains. Built by Arcada Labs.

Leaderboard | GitHub | All Benchmarks

## Dataset Description

The model acts as a personal assistant managing flight bookings, email composition, calendar events, and reminders. Turns include dual requests packed into a single utterance, mid-conversation topic switching, late references back to early topics, and correction chains that the model must track across the full session.

## What This Benchmark Tests

- **Tool use**: 7 functions covering flight booking, email, calendar, reminders, and more
- **Dual requests in single turns**: Two distinct tasks packed into one spoken utterance
- **Topic switching**: Abrupt mid-conversation jumps between domains (flights, email, calendar)
- **Late references to early topics**: Callbacks to details from 20+ turns earlier
- **Intent segmentation**: Parsing multi-intent speech into separate tool calls
- **Mid-sentence self-correction**: The speaker changes their mind partway through a request
- **Retroactive email correction**: Changing details of a previously composed email
- **Correction-chain recall**: Tracking a value through multiple sequential corrections
- **False memory traps**: 3 false-memory traps plus a correction-chain trap
- **Audio traps**: Name spellings, airport codes, dates, and times that are ambiguous in speech

## Dataset Structure

```
assistant-bench/
├── audio/                          # TTS-generated audio (1 WAV per turn)
│   ├── turn_000.wav
│   ├── turn_001.wav
│   └── ... (31 files)
├── real_audio/                     # Human-recorded audio
│   ├── person1/
│   │   └── turn_000.wav ... turn_030.wav
│   └── person2/
│       └── turn_000.wav ... turn_030.wav
├── benchmark/
│   ├── turns.json                  # Turn definitions with golden answers
│   ├── hard_turns.json             # Same as turns.json but input_text=null (audio-only)
│   ├── tool_schemas.json           # Tool/function schemas (7 tools)
│   └── knowledge_base.txt          # Assistant KB
└── metadata.jsonl                  # HF dataset viewer metadata
```

### Metadata Fields

| Field | Description |
| --- | --- |
| `file_name` | Path to the audio file |
| `turn_id` | Turn index (0–30) |
| `speaker` | `tts`, `person1`, or `person2` |
| `input_text` | What the user says (text transcript) |
| `golden_text` | Expected assistant response |
| `required_function_call` | Tool call the model should make (JSON, nullable) |
| `function_call_response` | Scripted tool response (JSON, nullable) |
| `categories` | Evaluation categories for this turn |
| `subcategory` | Specific sub-skill being tested |
| `scoring_dimensions` | Which judge dimensions apply |
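The fields above can be consumed straight from `metadata.jsonl` without the `datasets` library. A minimal sketch (`load_turns` is a hypothetical helper, not shipped with the dataset), assuming the nullable JSON fields may be stored either as JSON strings or as already-parsed objects:

```python
import json

def load_turns(path="metadata.jsonl"):
    """Read one dict per turn from a JSON-lines metadata file.

    Hypothetical helper: decodes required_function_call and
    function_call_response when they are stored as JSON strings,
    and leaves them untouched when already parsed (or null).
    """
    turns = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            if not line.strip():
                continue
            row = json.loads(line)
            for key in ("required_function_call", "function_call_response"):
                val = row.get(key)
                if isinstance(val, str):
                    row[key] = json.loads(val)
            turns.append(row)
    return turns
```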

## Audio Format

- **Format**: WAV, 16-bit PCM, mono
- **TTS audio**: Generated via text-to-speech
- **Real audio**: Human-recorded by multiple speakers, same transcript content

## Usage

### With Audio Arena CLI

```bash
pip install audio-arena  # or: git clone + uv sync

# Run with a text model
uv run audio-arena run assistant_bench --model claude-sonnet-4-5 --service anthropic

# Run with a speech-to-speech model
uv run audio-arena run assistant_bench --model gpt-realtime --service openai-realtime

# Judge the results
uv run audio-arena judge runs/assistant_bench/<run_dir>
```

### With Hugging Face Datasets

```python
from datasets import load_dataset

ds = load_dataset("arcada-labs/assistant-bench")
```
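Each row carries the metadata fields listed above, so selecting a subset is a plain filter. A small hypothetical helper (not part of the dataset) for pulling out the turns from one speaker, e.g. to compare human-recorded audio against the TTS renditions:

```python
def speaker_turns(rows, speaker="person1"):
    """Select rows for one speaker ("tts", "person1", or "person2").

    Hypothetical helper: works on any iterable of row dicts,
    such as ds["train"] returned by load_dataset.
    """
    return [row for row in rows if row.get("speaker") == speaker]
```

For example, `speaker_turns(ds["train"], "person2")` would yield the 31 turns recorded by the second human speaker.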

## Evaluation

Models are judged on up to 5 dimensions per turn:

| Dimension | Description |
| --- | --- |
| `tool_use_correct` | Correct function called with correct arguments |
| `instruction_following` | User's request was actually completed |
| `kb_grounding` | Claims are supported by the knowledge base or tool results |
| `state_tracking` | Consistency with earlier turns (scored on tagged turns only) |
| `ambiguity_handling` | Correct disambiguation (scored on tagged turns only) |

For speech-to-speech models, a sixth dimension, `turn_taking`, evaluates audio timing correctness.

See the full methodology for details on two-phase evaluation, penalty absorption, and category-aware scoring.
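Since some dimensions apply only to tagged turns, a per-turn score has to average over the applicable dimensions rather than all six. A minimal illustrative sketch of that idea (not the official Audio Arena formula, which adds two-phase evaluation and penalty absorption):

```python
def turn_score(judgments, applicable):
    """Average only the dimensions tagged as applicable to this turn.

    Illustrative sketch only -- not the official scoring formula.
    judgments: dict mapping dimension name -> score in [0, 1].
    applicable: iterable of dimension names scored for this turn
                (e.g. the turn's scoring_dimensions field).
    Returns None when no applicable dimension was judged.
    """
    scores = [judgments[d] for d in applicable if d in judgments]
    return sum(scores) / len(scores) if scores else None
```

A turn tagged only for `tool_use_correct` and `state_tracking` is then scored on those two dimensions alone, so an untagged dimension like `ambiguity_handling` never dilutes its average.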

## Part of Audio Arena

| Benchmark | Turns | Scenario |
| --- | --- | --- |
| Conversation Bench | 75 | Conference assistant |
| Appointment Bench | 25 | Dental office scheduling |
| **Assistant Bench** (this dataset) | 31 | Personal assistant |
| Event Bench | 29 | Event planning |
| Grocery Bench | 30 | Grocery ordering |
| Product Bench | 31 | Laptop comparison shopping |

## Citation

```bibtex
@misc{audioarena2026,
  title={Audio Arena: Multi-Turn Speech-to-Speech Evaluation Benchmarks},
  author={Arcada Labs},
  year={2026},
  url={https://audioarena.ai}
}
```