---
license: mit
---
# SonicBench: Dissecting the Physical Perception Bottleneck in Large Audio Language Models

*12 physical attributes, 5 perceptual dimensions, 2 task types: dissecting the physical perception bottleneck of Large Audio Language Models.*

<p align="center">
  <!-- Badges -->
  <a href="https://huggingface.co/datasets/YirongSun/SonicBench">
    <img src="https://img.shields.io/badge/HF%20Dataset-SonicBench-16a085.svg" alt="HF Dataset">
  </a>
  <a href="https://github.com/EIT-NLP/SonicBench">
    <img src="https://img.shields.io/badge/GitHub-SonicBench-181717.svg" alt="GitHub Repo">
  </a>
  <a href="https://github.com/EIT-NLP/SonicBench">
    <img src="https://img.shields.io/github/stars/EIT-NLP/SonicBench?style=social" alt="GitHub Stars">
  </a>
  <a href="#7-citation">
    <img src="https://img.shields.io/badge/Cite-BibTeX-9cf.svg" alt="Cite">
  </a>
</p>

<p align="center">
  <!-- Top navigation -->
  <a href="#1-benchmark-overview">Benchmark</a> •
  <a href="#2-directory-layout">Directory Layout</a> •
  <a href="#3-json-format">JSON Format</a> •
  <a href="#4-probe-json-traineval-splits">Probe Splits</a> •
  <a href="https://arxiv.org/abs/xxxx.xxxxx">Paper</a>
</p>

> **TL;DR.** SonicBench is a psychophysically grounded benchmark that probes **physical audio perception** rather than semantics:
> 12 core attributes × 5 perceptual dimensions × 2 paradigms (recognition vs. comparison) = 2,400 question-audio pairs.
> Despite strong performance on semantic and paralinguistic tasks, most LALMs perform near **chance** and fail to show the expected human-like advantage on comparison tasks.
> Explicit chain-of-thought reasoning brings only marginal gains, while linear probes on frozen encoders reach **≥60%** accuracy, revealing that the main bottleneck lies in **alignment and decoding**.

---

## 1. Benchmark Overview

SonicBench targets **physical perception**, i.e., the ability to interpret intrinsic properties of audio signals that underlie any higher-level reasoning.
It covers **12 core attributes** grouped into **5 perceptual dimensions**:

- **Spectral & Amplitude**
  `pitch`, `brightness`, `loudness`, `velocity`
- **Temporal**
  `duration`, `tempo`
- **Spatial & Environment**
  `direction`, `distance`, `reverberation`
- **Timbre**
  `timbre`, `texture`
- **Scene-Level**
  `counting`

For each attribute, SonicBench defines two **complementary psychophysical paradigms**:

1. **Recognition (absolute judgment)**
   - Input: a single 4-second audio clip.
   - Task: make an **absolute** decision between two physical categories
     (e.g., “bright” vs. “dark”, “short” vs. “long”, “near” vs. “far”).
   - Output: a binary choice `"A"` or `"B"`.

2. **Comparison (relative judgment)**
   - Input: two 4-second clips concatenated with **0.5 seconds of silence** in between
     (≈ 8.5 seconds in total).
   - Task: make a **relative** judgment of which clip has the larger value of a given attribute
     (e.g., which is brighter / louder / faster / closer).
   - Output: `"A"` for the first segment, `"B"` for the second.

In total, this yields:

- **12 attributes × 2 task types × 100 items = 2,400 question-audio pairs**

This design turns a broad space of non-linguistic, low-level skills into a **structured, attribute-wise benchmark**. The comparison paradigm explicitly probes **relational reasoning**, where human listeners are typically more proficient than in absolute estimation.
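The comparison stimulus layout above can be reproduced mechanically. A minimal NumPy sketch (the 16 kHz sample rate is an assumption for illustration; check the headers of the released WAV files for the actual rate):

```python
import numpy as np

SR = 16_000  # assumed sample rate; verify against the released WAV files

def make_comparison_pair(clip_a: np.ndarray, clip_b: np.ndarray,
                         sr: int = SR, gap_s: float = 0.5) -> np.ndarray:
    """Concatenate two clips with a silent gap, mirroring the comparison
    layout: clip A, 0.5 s of silence, clip B."""
    silence = np.zeros(int(sr * gap_s), dtype=clip_a.dtype)
    return np.concatenate([clip_a, silence, clip_b])

# Two 4-second placeholder clips (white noise stands in for real audio).
a = np.random.randn(4 * SR).astype(np.float32)
b = np.random.randn(4 * SR).astype(np.float32)
pair = make_comparison_pair(a, b)
print(len(pair) / SR)  # 8.5 (seconds)
```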

---

## 2. Directory Layout

On this Hugging Face dataset, the directory structure is:

```
.
├── brightness/
│   ├── task_recog/
│   │   ├── brightness_single_000.wav
│   │   ├── brightness_single_001.wav
│   │   └── ...
│   └── task_comparison/
│       ├── brightness_pair_000.wav
│       ├── brightness_pair_001.wav
│       └── ...
├── counting/
│   ├── task_recog/
│   └── task_comparison/
├── direction/
│   ├── task_recog/
│   └── task_comparison/
├── ...
├── json/
│   ├── brightness_recog.json
│   ├── brightness_comparison.json
│   ├── counting_recog.json
│   ├── counting_comparison.json
│   └── ...
└── probe_json/
    ├── brightness_recog/
    │   ├── train.json
    │   └── eval.json
    ├── brightness_comparison/
    │   ├── train.json
    │   └── eval.json
    ├── counting_recog/
    │   ├── train.json
    │   └── eval.json
    ├── counting_comparison/
    │   ├── train.json
    │   └── eval.json
    └── ...
```

* Each attribute has its own folder under the root, containing WAV files for `task_recog` and `task_comparison`.
* `json/` contains the **canonical evaluation JSONs** (2,400 QA pairs in total).
* `probe_json/` exposes **train/eval splits** for probing experiments (see Section 4 of our paper).
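Given this layout, locating the files for one attribute/task pair is mechanical. A small helper sketch, assuming the dataset has been downloaded locally and `ROOT` points at your copy (`benchmark_paths` is a hypothetical name, not part of the release):

```python
from pathlib import Path

ROOT = Path(".")  # adjust to wherever you downloaded the dataset

def benchmark_paths(attribute: str, task: str) -> dict:
    """Resolve the canonical JSON file and audio folder for one
    attribute/task pair; `task` is 'recog' or 'comparison'."""
    return {
        "json": ROOT / "json" / f"{attribute}_{task}.json",
        "audio_dir": ROOT / attribute / f"task_{task}",
    }

paths = benchmark_paths("brightness", "comparison")
print(paths["json"].as_posix())       # json/brightness_comparison.json
print(paths["audio_dir"].as_posix())  # brightness/task_comparison
```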

---

## 3. JSON Format

All main benchmark files in `json/` follow a unified conversational format.
Each JSON file is a **list of items**; each item has at least the following keys:

* `voice`:
  * A list of audio paths (relative to the dataset root).
  * For SonicBench, there is currently **one path per item**, e.g.:
    * Recognition: `"brightness/task_recog/brightness_single_000.wav"`
    * Comparison: `"brightness/task_comparison/brightness_pair_000.wav"`
* `conversations`:
  * A list of message turns, following a simple chat-style schema:
    * `from`: `"human"` or `"gpt"`
    * `value`: the text content

A typical example from `brightness_comparison.json`:

```json
{
  "voice": [
    "brightness/task_comparison/brightness_pair_000.wav"
  ],
  "conversations": [
    {
      "from": "human",
      "value": "The audio includes two segments with a 0.5-second silent interval. Which is brighter? Only answer letter 'A' (refers to the first clip) or 'B' (refers to the second clip). Do not add any explanation, punctuation, or extra text. <audio>"
    },
    {
      "from": "gpt",
      "value": "B"
    }
  ]
}
```

* The first turn (`from: "human"`) gives the **full instruction** and contains an `<audio>` placeholder.
* The second turn (`from: "gpt"`) contains the **ground-truth answer**, which is always a **single letter** `"A"` or `"B"` with no explanation or punctuation.

To evaluate a model, you typically:

* Feed the audio in `voice[0]` to your model.
* Give the model the `value` of the `"human"` turn as the textual prompt.
* Compare the model's final answer with the `value` of the `"gpt"` turn (letter `"A"`/`"B"`).
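That loop can be sketched in a few lines; `predict` is a placeholder for your own model wrapper (it takes an audio path and the prompt and returns a letter), not an API shipped with the dataset:

```python
import json
from pathlib import Path

def evaluate(json_path: str, predict) -> float:
    """Score a model on one SonicBench JSON file.
    `predict(audio_path, prompt)` must return 'A' or 'B' (case-insensitive)."""
    items = json.loads(Path(json_path).read_text())
    correct = 0
    for item in items:
        audio_path = item["voice"][0]                # relative WAV path
        prompt = item["conversations"][0]["value"]   # the "human" turn
        gold = item["conversations"][1]["value"]     # ground truth: 'A' or 'B'
        pred = predict(audio_path, prompt).strip().upper()[:1]
        correct += pred == gold
    return correct / len(items)
```

If the A/B labels in a file are balanced, a constant predictor scores near 0.5, which is the chance baseline referenced below.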

---

## 4. Probe JSON (Train/Eval Splits)

The `probe_json/` directory contains **train/eval splits** derived from the same underlying items, designed for:

* Linear probes on frozen audio encoders
* Small classifier training
* Attribute-wise analysis without touching the main test set

For each attribute × task type (e.g., `brightness_recog`, `distance_comparison`), there is a folder:

```text
probe_json/brightness_recog/train.json
probe_json/brightness_recog/eval.json
probe_json/brightness_comparison/train.json
probe_json/brightness_comparison/eval.json
probe_json/velocity_recog/train.json
probe_json/velocity_recog/eval.json
probe_json/velocity_comparison/train.json
probe_json/velocity_comparison/eval.json
...
```

Each `train.json` / `eval.json` is again a **list of items with the same schema**.
The splits form a fixed random partition (seed `42`). In our experiments, we train simple linear probes on `train.json` and evaluate on `eval.json` while keeping the encoder frozen.
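A probing run over these splits can be sketched as follows. The `embed` function is a stand-in you must replace with your frozen encoder's pooled features, and the ridge-regularized least-squares classifier is just one simple choice of linear probe (scikit-learn's `LogisticRegression` works equally well):

```python
import numpy as np

def embed(audio_path: str) -> np.ndarray:
    """Placeholder for a frozen audio encoder: replace with pooled hidden
    states (e.g. a mean over time frames) extracted from `audio_path`."""
    rng = np.random.default_rng(abs(hash(audio_path)) % 2**32)
    return rng.standard_normal(128)

def run_probe(train_items: list, eval_items: list) -> float:
    """Fit a linear probe on the train split, report eval accuracy.
    Labels are the single-letter answers in each item's "gpt" turn."""
    def xy(items):
        X = np.stack([embed(it["voice"][0]) for it in items])
        y = np.array([1.0 if it["conversations"][1]["value"] == "A" else -1.0
                      for it in items])
        return X, y
    Xtr, ytr = xy(train_items)
    Xev, yev = xy(eval_items)
    # ridge-regularized least squares: w = (X^T X + lambda*I)^{-1} X^T y
    w = np.linalg.solve(Xtr.T @ Xtr + 1e-3 * np.eye(Xtr.shape[1]), Xtr.T @ ytr)
    return float((np.sign(Xev @ w) == yev).mean())
```

With real encoder features this setup is what produces the probe accuracies discussed in Section 5; with the random placeholder above it hovers around chance.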

---

## 5. What We Found with SonicBench

Using SonicBench, we evaluate **36 systems** across three families:

- **LALMs** – Large Audio(-Language) Models built by aligning pre-trained audio encoders with LLMs
- **LARMs** – audio-specific reasoning models
- **OLMs** – omni-modal models that include an audio interface

SonicBench uncovers several consistent patterns:

1. **Fundamental physical perception is weak.**
   Despite strong performance on semantic and paralinguistic benchmarks, most models perform **near random guessing (~50%)** on many SonicBench tasks.
   Even the best model in our study (Qwen3-Omni) reaches only about **72%** accuracy, far below human performance (~91%).
   This indicates that current systems often lack **reliable physical grounding**, even when their high-level behavior appears competent.

2. **No human-like advantage on comparison tasks.**
   In human psychophysics, **relative comparison** is often easier than absolute judgment.
   In contrast, LALMs and related systems show **no systematic advantage** on comparison tasks;
   for several attributes, **comparison accuracy is even lower than recognition accuracy**.
   This suggests that current models struggle with **relational reasoning over physical attributes**.

3. **Inference-time reasoning brings limited gains.**
   We experiment with **explicit reasoning** and inference-time scaling (longer chain-of-thought, more deliberation).
   The improvements on SonicBench are **marginal**, indicating that simply adding reasoning tokens cannot compensate for missing or poorly used physical representations.

4. **Encoders perceive more than the full model can use.**
   When we freeze audio encoders and train **simple linear probes** on the `probe_json` splits, these probes consistently achieve **≥60% accuracy** across attributes and, in several cases, **outperform the full end-to-end models**.
   This shows that the **physical cues are already present** in the encoder representations.
   The primary bottleneck lies in **alignment and decoding**: the projector and language layers fail to faithfully leverage the sensory information they receive.

---

## 6. Intended Uses

SonicBench is designed primarily as an **evaluation and analysis benchmark** for physical audio perception. Typical use cases include:

* **Benchmarking physical grounding**
  Evaluate LALMs, LARMs, and OLMs on their ability to perceive core physical attributes.
* **Attribute-wise and dimension-wise diagnostics**
  Use the 12 attributes and 5 perceptual dimensions to pinpoint which aspects (e.g., spectral vs. spatial vs. scene-level) a model handles well or fails on.
* **Studying recognition vs. comparison behavior**
  Compare model performance across absolute (recognition) and relative (comparison) paradigms to analyze **relational reasoning** over acoustic signals.
* **Encoder probing and architecture analysis**
  Use the `probe_json` train/eval splits to attach simple probes to audio encoders, isolating where information is lost along the encoder-projector-LLM pipeline.

> We recommend treating all files in `json/` as **held-out test sets**.
> For training probes or auxiliary models, please use the splits provided under `probe_json/`.

---

## 7. Citation

If you use SonicBench in your work, please cite:

```bibtex
xxx
```

---

## 8. Contact

* Email: win1282467298@gmail.com, qiuxinzju@zju.edu.cn, xyshen@eitech.edu.cn
* Organization: EIT-NLP Lab