arcada-labs committed · verified · commit beca429 · parent 99c017d

Upload README.md with huggingface_hub

Files changed (1): README.md (+149 −2)

---
language:
- en
license: mit
pretty_name: Event Bench
tags:
- audio
- benchmark
- speech-to-speech
- voice-ai
- multi-turn
- tool-use
- evaluation
- state-tracking
- function-calling
task_categories:
- automatic-speech-recognition
- text-generation
size_categories:
- n<1K
configs:
- config_name: default
  data_files:
  path: metadata.jsonl
---

# Event Bench

**A 29-turn speech-to-speech benchmark** for evaluating voice AI models as an event planning assistant.

Part of [Audio Arena](https://audioarena.ai), a suite of 6 benchmarks spanning 221 turns across different domains. Built by [Arcada Labs](https://arcada.dev).

[Leaderboard](https://audioarena.ai/leaderboard) | [GitHub](https://github.com/Design-Arena/audio-arena) | [All Benchmarks](#part-of-audio-arena)

## Dataset Description

The model acts as an event planning assistant managing venue bookings, catering, and guest logistics. The conversation features cascading changes — a venue switch triggers catering repricing, a guest count update triggers capacity checks — along with mid-sentence self-corrections, retroactive date changes, and multi-request reversals that test whether the model can track compounding state changes.

## What This Benchmark Tests

- **Tool use**: 6 functions — venue booking, catering quotes, guest management, availability checks, and more
- **Cascading state changes**: Venue switch triggers catering repricing, guest count changes trigger capacity rechecks
- **Mid-sentence self-corrections**: Speaker changes details partway through a request
- **Vague pronoun resolution**: Ambiguous references ("that one", "the other place") requiring context
- **Wrong-math correction**: User provides incorrect arithmetic the model must not blindly accept
- **Multi-request reversals**: Undoing multiple changes in a single turn
- **Ambiguous add-on disambiguation**: Add-ons that could apply to multiple entities
- **Hypothetical reasoning**: "What if" scenarios the model must handle without committing state
- **Phone number swap**: Contact number correction mid-conversation
- **Retroactive date change**: Changing the event date after downstream bookings are already set
- **False memory traps**: 3 turns asserting things that never happened
- **Cross-entity state tracking**: Keeping venue, catering, and guest details consistent

## Dataset Structure

```
event-bench/
├── audio/                  # TTS-generated audio (1 WAV per turn)
│   ├── turn_000.wav
│   ├── turn_001.wav
│   └── ... (29 files)
├── real_audio/             # Human-recorded audio
│   ├── person1/
│   │   └── turn_000.wav ... turn_028.wav
│   └── person2/
│       └── turn_000.wav ... turn_028.wav
├── benchmark/
│   ├── turns.json          # Turn definitions with golden answers
│   ├── hard_turns.json     # Same as turns.json but input_text=null (audio-only)
│   ├── tool_schemas.json   # Tool/function schemas (6 tools)
│   └── knowledge_base.txt  # Event planning KB
└── metadata.jsonl          # HF dataset viewer metadata
```

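The layout above implies a fixed naming scheme for the audio files. A minimal sketch of enumerating the expected paths (directory and file names taken from the tree above, not verified against the published dataset):

```python
from pathlib import Path

# 29 TTS turns under audio/, plus two human-recorded sets of the same
# 29 turns under real_audio/person1 and real_audio/person2.
tts_files = [Path("audio") / f"turn_{i:03d}.wav" for i in range(29)]
real_files = [
    Path("real_audio") / person / f"turn_{i:03d}.wav"
    for person in ("person1", "person2")
    for i in range(29)
]

print(len(tts_files), len(real_files))  # 29 TTS files, 58 human recordings
```
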
### Metadata Fields

| Field | Description |
|-------|-------------|
| `file_name` | Path to the audio file |
| `turn_id` | Turn index (0–28) |
| `speaker` | `tts`, `person1`, or `person2` |
| `input_text` | What the user says (text transcript) |
| `golden_text` | Expected assistant response |
| `required_function_call` | Tool call the model should make (JSON, nullable) |
| `function_call_response` | Scripted tool response (JSON, nullable) |
| `categories` | Evaluation categories for this turn |
| `subcategory` | Specific sub-skill being tested |
| `scoring_dimensions` | Which judge dimensions apply |

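To make the schema concrete, here is a sketch that parses one `metadata.jsonl`-style row and handles the nullable function-call fields. The field names follow the table above; the values (turn text, tool name, category strings) are invented for illustration:

```python
import json

# One JSONL row shaped after the fields above; values are illustrative,
# not taken from the real dataset.
line = json.dumps({
    "file_name": "audio/turn_003.wav",
    "turn_id": 3,
    "speaker": "tts",
    "input_text": "Actually make that 80 guests, not 60.",
    "golden_text": "Got it, updating the guest count to 80.",
    "required_function_call": {"name": "update_guest_count",
                               "arguments": {"count": 80}},
    "function_call_response": None,
    "categories": ["state_tracking"],
    "subcategory": "mid_sentence_self_correction",
    "scoring_dimensions": ["tool_use_correct", "state_tracking"],
})

row = json.loads(line)
# Both function-call fields are nullable; turns without a tool call carry None.
call = row["required_function_call"]
expects_tool = call is not None
print(row["turn_id"], row["speaker"], expects_tool)
```
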
## Audio Format

- **Format**: WAV, 16-bit PCM, mono
- **TTS audio**: Generated via text-to-speech
- **Real audio**: Human-recorded by multiple speakers, same transcript content

## Usage

### With Audio Arena CLI

```bash
pip install audio-arena  # or: git clone + uv sync

# Run with a text model
uv run audio-arena run event_bench --model claude-sonnet-4-5 --service anthropic

# Run with a speech-to-speech model
uv run audio-arena run event_bench --model gpt-realtime --service openai-realtime

# Judge the results
uv run audio-arena judge runs/event_bench/<run_dir>
```

### With Hugging Face Datasets

```python
from datasets import load_dataset

ds = load_dataset("arcada-labs/event-bench")
```

## Evaluation

Models are judged on up to 5 dimensions per turn:

| Dimension | Description |
|-----------|-------------|
| `tool_use_correct` | Correct function called with correct arguments |
| `instruction_following` | User's request was actually completed |
| `kb_grounding` | Claims are supported by the knowledge base or tool results |
| `state_tracking` | Consistency with earlier turns (scored on tagged turns only) |
| `ambiguity_handling` | Correct disambiguation (scored on tagged turns only) |

For speech-to-speech models, a 6th `turn_taking` dimension evaluates audio timing correctness.

See the [full methodology](https://github.com/Design-Arena/audio-arena#methodology) for details on two-phase evaluation, penalty absorption, and category-aware scoring.

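The real scoring rules (two-phase evaluation, penalty absorption, category-aware weighting) live in the methodology linked above. Purely as an illustration of dimension-wise judging, a hypothetical aggregation might average each turn over only the dimensions tagged for it:

```python
# Hypothetical aggregation sketch: the actual Audio Arena scoring is
# defined in the methodology doc, not here. Assume each judge dimension
# yields a 0/1 verdict and a turn is scored only on its tagged dimensions.
turn_verdicts = [
    {"tool_use_correct": 1, "instruction_following": 1, "kb_grounding": 1},
    {"tool_use_correct": 0, "instruction_following": 1, "kb_grounding": 1,
     "state_tracking": 1},  # this turn is also tagged for state tracking
]

def turn_score(verdicts: dict) -> float:
    # Average over just the dimensions that apply to this turn.
    return sum(verdicts.values()) / len(verdicts)

overall = sum(turn_score(v) for v in turn_verdicts) / len(turn_verdicts)
print(f"{overall:.3f}")  # → 0.875
```
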
## Part of Audio Arena

| Benchmark | Turns | Scenario |
|-----------|-------|----------|
| [Conversation Bench](https://huggingface.co/datasets/arcada-labs/conversation-bench) | 75 | Conference assistant |
| [Appointment Bench](https://huggingface.co/datasets/arcada-labs/appointment-bench) | 25 | Dental office scheduling |
| [Assistant Bench](https://huggingface.co/datasets/arcada-labs/assistant-bench) | 31 | Personal assistant |
| **Event Bench** (this dataset) | 29 | Event planning |
| [Grocery Bench](https://huggingface.co/datasets/arcada-labs/grocery-bench) | 30 | Grocery ordering |
| [Product Bench](https://huggingface.co/datasets/arcada-labs/product-bench) | 31 | Laptop comparison shopping |

## Citation

```bibtex
@misc{audioarena2026,
  title={Audio Arena: Multi-Turn Speech-to-Speech Evaluation Benchmarks},
  author={Arcada Labs},
  year={2026},
  url={https://audioarena.ai}
}
```