---
language:
  - en
license: mit
pretty_name: Grocery Bench
tags:
  - audio
  - benchmark
  - speech-to-speech
  - voice-ai
  - multi-turn
  - tool-use
  - evaluation
  - state-tracking
  - function-calling
task_categories:
  - automatic-speech-recognition
  - text-generation
size_categories:
  - n<1K
configs:
  - config_name: default
    data_files:
      - split: train
        path: metadata.jsonl
---

# Grocery Bench

A 30-turn speech-to-speech benchmark for evaluating voice AI models as a multi-turn grocery ordering assistant.

Part of Audio Arena, a suite of 6 benchmarks spanning 221 turns across different domains. Built by Arcada Labs.

Leaderboard | GitHub | All Benchmarks

## Dataset Description

The model acts as a grocery ordering assistant helping a customer build, modify, and finalize an order. The conversation is designed around 15 difficulty enhancements that stress-test item lookup, quantity math, chained corrections, and order reconciliation — culminating in a full order summary the model must compute correctly.

## What This Benchmark Tests

- **Tool use**: 5 functions — item lookup, add to cart, remove from cart, modify quantity, order summary
- **3-item turns**: Multiple items added in a single spoken request
- **Relative-math quantity**: "Double the bananas", "add three more"
- **Conditional addition/removal**: "If X costs more than $5, remove it"
- **Chained corrections**: Multiple sequential edits to the same item
- **Homophone collisions**: flower vs. flour — ambiguous in speech
- **Fifteen/fifty audio confusion**: Quantities that sound alike over audio
- **Ambiguous "both"**: References to multiple items where "both" is under-specified
- **Revert removal**: Undoing a previously removed item
- **Swap operations**: Replacing one item with another in a single turn
- **Retroactive quantity change**: Changing a quantity set many turns earlier
- **Mid-sentence self-correction**: Speaker changes their mind partway through
- **False memory traps**: Assertions about items never added
- **Full order reconciliation**: Final order summary requiring correct math across all modifications

## Dataset Structure

```
grocery-bench/
├── audio/                          # TTS-generated audio (1 WAV per turn)
│   ├── turn_000.wav
│   ├── turn_001.wav
│   └── ... (30 files)
├── real_audio/                     # Human-recorded audio
│   ├── person1/
│   │   └── turn_000.wav ... turn_029.wav
│   └── person2/
│       └── turn_000.wav ... turn_029.wav
├── benchmark/
│   ├── turns.json                  # Turn definitions with golden answers
│   ├── hard_turns.json             # Same as turns.json but input_text=null (audio-only)
│   ├── tool_schemas.json           # Tool/function schemas (5 tools)
│   └── knowledge_base.txt          # Grocery store KB (products, policies, delivery)
└── metadata.jsonl                  # HF dataset viewer metadata
```

### Metadata Fields

| Field | Description |
|---|---|
| `file_name` | Path to the audio file |
| `turn_id` | Turn index (0–29) |
| `speaker` | `tts`, `person1`, or `person2` |
| `input_text` | What the user says (text transcript) |
| `golden_text` | Expected assistant response |
| `required_function_call` | Tool call the model should make (JSON, nullable) |
| `function_call_response` | Scripted tool response (JSON, nullable) |
| `categories` | Evaluation categories for this turn |
| `subcategory` | Specific sub-skill being tested |
| `scoring_dimensions` | Which judge dimensions apply |
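Each line of `metadata.jsonl` is one JSON object with the fields above. A minimal parsing sketch follows; the sample values are invented for illustration and are not taken from the dataset:

```python
import json

# One illustrative metadata.jsonl line (all field values are made up for this sketch).
sample_line = json.dumps({
    "file_name": "audio/turn_000.wav",
    "turn_id": 0,
    "speaker": "tts",
    "input_text": "Hi, I'd like to start a grocery order.",
    "golden_text": "Sure, what would you like to add?",
    "required_function_call": None,
    "function_call_response": None,
    "categories": ["tool_use"],
    "subcategory": "greeting",
    "scoring_dimensions": ["instruction_following"],
})

# Parsing one line mirrors iterating over the real file:
#   with open("metadata.jsonl") as f:
#       rows = [json.loads(line) for line in f]
row = json.loads(sample_line)
assert 0 <= row["turn_id"] <= 29
assert row["speaker"] in ("tts", "person1", "person2")
```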

## Audio Format

- **Format**: WAV, 16-bit PCM, mono
- **TTS audio**: Generated via text-to-speech
- **Real audio**: Human-recorded by multiple speakers, same transcript content
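The stated container format can be checked with Python's standard-library `wave` module. The sketch below writes one second of silence in that format to an in-memory buffer and reads the header back; the 16 kHz sample rate is an assumption for the example, not something this README specifies:

```python
import io
import wave

# Write a tiny clip in the benchmark's stated format: WAV, 16-bit PCM, mono.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)      # mono
    w.setsampwidth(2)      # 2 bytes per sample -> 16-bit PCM
    w.setframerate(16000)  # sample rate chosen for the sketch (not stated in this README)
    w.writeframes(b"\x00\x00" * 16000)  # one second of silence

# Read the header back, as you would for a real turn_NNN.wav file.
buf.seek(0)
with wave.open(buf, "rb") as r:
    channels = r.getnchannels()
    sampwidth = r.getsampwidth()
    nframes = r.getnframes()

assert channels == 1   # mono
assert sampwidth == 2  # 16-bit
```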

## Usage

### With Audio Arena CLI

```shell
pip install audio-arena  # or: git clone + uv sync

# Run with a text model
uv run audio-arena run grocery_bench --model claude-sonnet-4-5 --service anthropic

# Run with a speech-to-speech model
uv run audio-arena run grocery_bench --model gpt-realtime --service openai-realtime

# Judge the results
uv run audio-arena judge runs/grocery_bench/<run_dir>
```

### With Hugging Face Datasets

```python
from datasets import load_dataset

ds = load_dataset("arcada-labs/grocery-bench")
```

## Evaluation

Models are judged on up to 5 dimensions per turn:

| Dimension | Description |
|---|---|
| `tool_use_correct` | Correct function called with correct arguments |
| `instruction_following` | User's request was actually completed |
| `kb_grounding` | Claims are supported by the knowledge base or tool results |
| `state_tracking` | Consistency with earlier turns (scored on tagged turns only) |
| `ambiguity_handling` | Correct disambiguation (scored on tagged turns only) |

For speech-to-speech models, a sixth dimension, `turn_taking`, evaluates audio timing correctness.

See the full methodology for details on two-phase evaluation, penalty absorption, and category-aware scoring.

## Part of Audio Arena

| Benchmark | Turns | Scenario |
|---|---|---|
| Conversation Bench | 75 | Conference assistant |
| Appointment Bench | 25 | Dental office scheduling |
| Assistant Bench | 31 | Personal assistant |
| Event Bench | 29 | Event planning |
| **Grocery Bench** (this dataset) | 30 | Grocery ordering |
| Product Bench | 31 | Laptop comparison shopping |

## Citation

```bibtex
@misc{audioarena2026,
  title={Audio Arena: Multi-Turn Speech-to-Speech Evaluation Benchmarks},
  author={Arcada Labs},
  year={2026},
  url={https://audioarena.ai}
}
```