---
license: mit
---

# SonicBench: Dissecting the Physical Perception Bottleneck in Large Audio Language Models

*12 physical attributes, 5 perceptual dimensions, 2 task types - dissecting the physical perception bottleneck of Large Audio Language Models.*



> **TL;DR.** SonicBench is a psychophysically grounded benchmark that probes **physical audio perception** rather than semantics:
> 12 core attributes × 5 perceptual dimensions × 2 paradigms (recognition vs. comparison) = 2,400 question-audio pairs.
> Despite strong performance on semantic and paralinguistic tasks, most LALMs perform near **chance** and fail to show the expected human-like advantage on comparison tasks.
> Explicit chain-of-thought reasoning brings only marginal gains, while linear probes on frozen encoders reach **≥60%** accuracy, revealing that the main bottleneck lies in **alignment and decoding**.

---

## 1. Benchmark Overview

SonicBench targets **physical perception**, i.e., the ability to interpret intrinsic properties of audio signals that underlie any higher-level reasoning.

It covers **12 core attributes** grouped into **5 perceptual dimensions**:

- **Spectral & Amplitude**: `pitch`, `brightness`, `loudness`, `velocity`
- **Temporal**: `duration`, `tempo`
- **Spatial & Environment**: `direction`, `distance`, `reverberation`
- **Timbre**: `timbre`, `texture`
- **Scene-Level**: `counting`

For each attribute, SonicBench defines two **complementary psychophysical paradigms**:

1. **Recognition (absolute judgment)**
   - Input: a single 4-second audio clip.
   - Task: make an **absolute** decision between two physical categories (e.g., “bright” vs. “dark”, “short” vs. “long”, “near” vs. “far”).
   - Output: a binary choice `"A"` or `"B"`.
2. **Comparison (relative judgment)**
   - Input: two 4-second clips concatenated with **0.5 seconds of silence** in between (≈ 8.5 seconds in total); a sketch of how such a stimulus is assembled appears at the end of this section.
   - Task: make a **relative** judgment of which clip has a larger value along a given attribute (e.g., which is brighter / louder / faster / closer).
   - Output: `"A"` for the first segment, `"B"` for the second.

In total, this yields:

- **12 attributes × 2 task types × 100 items = 2,400 question-audio pairs**

This design turns a broad space of non-linguistic, low-level skills into a **structured, attribute-wise benchmark**, and the comparison paradigm explicitly probes **relational reasoning**, where human listeners are typically more proficient than in absolute estimation.
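For concreteness, here is a minimal sketch of assembling a comparison stimulus with the geometry described above (two 4-second clips separated by 0.5 seconds of silence). The file names are illustrative placeholders, not part of the benchmark's own generation pipeline; the clips are assumed to be mono and to share one sample rate.

```python
import numpy as np
import soundfile as sf

# Illustrative inputs: any two 4-second mono clips at the same sample rate.
clip_a, sr = sf.read("clip_a.wav")  # hypothetical file name
clip_b, _ = sf.read("clip_b.wav")   # hypothetical file name

# 0.5 seconds of silence between the two segments, as in SonicBench comparison items.
silence = np.zeros(int(0.5 * sr), dtype=clip_a.dtype)

# Segment A + silence + segment B (~8.5 s total for two 4 s clips).
pair = np.concatenate([clip_a, silence, clip_b])
sf.write("pair.wav", pair, sr)
```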
---

## 2. Directory Layout

On this Hugging Face dataset, the directory structure is:

```
.
├── brightness/
│   ├── task_recog/
│   │   ├── brightness_single_000.wav
│   │   ├── brightness_single_001.wav
│   │   └── ...
│   └── task_comparison/
│       ├── brightness_pair_000.wav
│       ├── brightness_pair_001.wav
│       └── ...
├── counting/
│   ├── task_recog/
│   └── task_comparison/
├── direction/
│   ├── task_recog/
│   └── task_comparison/
├── ...
├── json/
│   ├── brightness_recog.json
│   ├── brightness_comparison.json
│   ├── counting_recog.json
│   ├── counting_comparison.json
│   └── ...
└── probe_json/
    ├── brightness_recog/
    │   ├── train.json
    │   └── eval.json
    ├── brightness_comparison/
    │   ├── train.json
    │   └── eval.json
    ├── counting_recog/
    │   ├── train.json
    │   └── eval.json
    ├── counting_comparison/
    │   ├── train.json
    │   └── eval.json
    └── ...
```

* Each attribute has its own folder under the root, containing WAV files for `task_recog` and `task_comparison`.
* `json/` contains the **canonical evaluation JSONs** (2,400 QA pairs in total).
* `probe_json/` exposes **train/eval splits** for probing experiments (see Section 4 in our paper).

---

## 3. JSON Format

All main benchmark files in `json/` follow a unified conversational format. Each JSON file is a **list of items**; each item has at least the following keys:

* `voice`:
  * A list of audio paths (relative to the dataset root).
  * For SonicBench, there is currently **one path per item**, e.g.:
    * Recognition: `"brightness/task_recog/brightness_single_000.wav"`
    * Comparison: `"brightness/task_comparison/brightness_pair_000.wav"`
* `conversations`:
  * A list of message turns, following a simple chat-style schema:
    * `from`: `"human"` or `"gpt"`
    * `value`: the text content

A typical example from `brightness_comparison.json`:

```json
{
  "voice": [
    "brightness/task_comparison/brightness_pair_000.wav"
  ],
  "conversations": [
    {
      "from": "human",
      "value": "The audio includes two segments with a 0.5-second silent interval. Which is brighter? Only answer letter 'A' (refers to the first clip) or 'B' (refers to the second clip). Do not add any explanation, punctuation, or extra text."
    },
    {
      "from": "gpt",
      "value": "A"
    }
  ]
}
```

The `gpt` turn stores the ground-truth letter; the `"A"` shown here is illustrative.
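Putting the layout and the JSON format together, a minimal loading sketch might look as follows. It assumes only `huggingface_hub` and the standard library; the repository id below is a placeholder to be replaced with the actual dataset id on the Hub.

```python
import json
from pathlib import Path

from huggingface_hub import snapshot_download

# Placeholder repo id -- substitute the actual dataset id.
root = Path(snapshot_download(repo_id="<org>/SonicBench", repo_type="dataset"))

# Load one canonical evaluation file (a list of items).
items = json.loads((root / "json" / "brightness_comparison.json").read_text())

for item in items:
    wav_path = root / item["voice"][0]          # one audio path per item
    prompt = item["conversations"][0]["value"]  # the human question
    # ... feed (wav_path, prompt) to the model under evaluation ...
```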
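The `probe_json/` splits support the probing analysis mentioned in the TL;DR. The following is a minimal sketch, assuming the split files share the item schema above, that the ground-truth letter sits in the final `gpt` turn, and that you supply your own `embed` function extracting one fixed-size vector per clip from a frozen audio encoder (the function below is a hypothetical stub):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def embed(wav_path):
    """Hypothetical: return a fixed-size embedding from a frozen audio encoder."""
    raise NotImplementedError

def load_split(split_items, root):
    X = np.stack([embed(root / it["voice"][0]) for it in split_items])
    # Assumes the ground-truth letter ("A"/"B") is stored in the final `gpt` turn.
    y = np.array([it["conversations"][-1]["value"] for it in split_items])
    return X, y

# train_items / eval_items loaded from probe_json/<attribute>_<task>/{train,eval}.json
X_tr, y_tr = load_split(train_items, root)
X_ev, y_ev = load_split(eval_items, root)

probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_ev, y_ev))
```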