---
license: mit
---

SonicBench: Dissecting the Physical Perception Bottleneck in Large Audio Language Models

12 physical attributes, 5 perceptual dimensions, 2 task types: dissecting the physical perception bottleneck of Large Audio Language Models.


TL;DR. SonicBench is a psychophysically grounded benchmark that probes physical audio perception rather than semantics:
12 core attributes (spanning 5 perceptual dimensions) × 2 paradigms (recognition vs. comparison) × 100 items = 2,400 question-audio pairs.
Despite strong performance on semantic and paralinguistic tasks, most LALMs perform near chance and fail to show the expected human-like advantage on comparison tasks.
Explicit chain-of-thought reasoning brings only marginal gains, while linear probes on frozen encoders reach ≥60% accuracy, revealing that the main bottleneck lies in alignment and decoding.


1. Benchmark Overview

SonicBench targets physical perception, i.e., the ability to interpret intrinsic properties of audio signals that underlie any higher-level reasoning.
It covers 12 core attributes grouped into 5 perceptual dimensions:

  • Spectral & Amplitude
    pitch, brightness, loudness, velocity
  • Temporal
    duration, tempo
  • Spatial & Environment
    direction, distance, reverberation
  • Timbre
    timbre, texture
  • Scene-Level
    counting

For each attribute, SonicBench defines two complementary psychophysical paradigms:

  1. Recognition (absolute judgment)

    • Input: a single 4-second audio clip.
    • Task: make an absolute decision between two physical categories
      (e.g., “bright” vs. “dark”, “short” vs. “long”, “near” vs. “far”).
    • Output: a binary choice "A" or "B".
  2. Comparison (relative judgment)

    • Input: two 4-second clips concatenated with 0.5 seconds of silence in between
      (≈ 8.5 seconds in total).
    • Task: make a relative judgment of which clip has a larger value along a given attribute
      (e.g., which is brighter / louder / faster / closer).
    • Output: "A" for the first segment, "B" for the second.

This yields in total:

  • 12 attributes × 2 task types × 100 items = 2,400 question-audio pairs

This design turns a broad space of non-linguistic, low-level skills into a structured, attribute-wise benchmark, and the comparison paradigm explicitly probes relational reasoning, at which human listeners are typically more proficient than at absolute estimation.
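As a concrete illustration of the comparison input format (two 4-second clips joined by 0.5 s of silence), here is a minimal NumPy sketch; the 16 kHz sample rate and the `make_comparison_clip` helper are assumptions for illustration, not dataset tooling:

```python
import numpy as np

SR = 16_000  # assumed sample rate; the dataset's actual rate may differ


def make_comparison_clip(clip_a: np.ndarray, clip_b: np.ndarray,
                         sr: int = SR, gap_s: float = 0.5) -> np.ndarray:
    """Concatenate two clips with a silent gap, as in the comparison paradigm."""
    silence = np.zeros(int(sr * gap_s), dtype=clip_a.dtype)
    return np.concatenate([clip_a, silence, clip_b])


# Two 4-second placeholder tones (440 Hz vs. 880 Hz) standing in for real stimuli.
t = np.arange(4 * SR) / SR
a = np.sin(2 * np.pi * 440 * t).astype(np.float32)
b = np.sin(2 * np.pi * 880 * t).astype(np.float32)

pair = make_comparison_clip(a, b)
duration = len(pair) / SR  # 4 + 0.5 + 4 = 8.5 seconds
```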


2. Directory Layout

On this Hugging Face dataset, the directory structure is:

.
├── brightness/
│   ├── task_recog/
│   │   ├── brightness_single_000.wav
│   │   ├── brightness_single_001.wav
│   │   └── ...
│   └── task_comparison/
│       ├── brightness_pair_000.wav
│       ├── brightness_pair_001.wav
│       └── ...
├── counting/
│   ├── task_recog/
│   └── task_comparison/
├── direction/
│   ├── task_recog/
│   └── task_comparison/
├── ...
├── json/
│   ├── brightness_recog.json
│   ├── brightness_comparison.json
│   ├── counting_recog.json
│   ├── counting_comparison.json
│   ├── ...
└── probe_json/
    ├── brightness_recog/
    │   ├── train.json
    │   └── eval.json
    ├── brightness_comparison/
    │   ├── train.json
    │   └── eval.json
    ├── counting_recog/
    │   ├── train.json
    │   └── eval.json
    ├── counting_comparison/
    │   ├── train.json
    │   └── eval.json
    └── ...

  • Each attribute has its own folder under the root, containing WAV files for task_recog and task_comparison.
  • json/ contains the canonical evaluation JSONs (2,400 QA pairs in total).
  • probe_json/ exposes train/eval splits for probing experiments (see Section 4 in our paper).
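The file-naming convention above is regular enough to generate paths programmatically. The following `build_wav_path` helper is a hypothetical convenience reflecting the layout shown, not part of the dataset tooling:

```python
def build_wav_path(attribute: str, task: str, index: int) -> str:
    """Return the relative WAV path for an item, per the directory layout above.

    task is "recog" (single clips) or "comparison" (paired clips).
    """
    suffix = {"recog": "single", "comparison": "pair"}[task]
    return f"{attribute}/task_{task}/{attribute}_{suffix}_{index:03d}.wav"
```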

3. JSON Format

All main benchmark files in json/ follow a unified conversational format. Each JSON file is a list of items; each item has at least the following keys:

  • voice:

    • A list of audio paths (relative to the dataset root).

    • For SonicBench, there is currently one path per item, e.g.:

      • Recognition: "brightness/task_recog/brightness_single_000.wav"
      • Comparison: "brightness/task_comparison/brightness_pair_000.wav"
  • conversations:

    • A list of message turns, following a simple chat-style schema:

      • from: "human" or "gpt"
      • value: the text content

A typical example from brightness_comparison.json:

{
  "voice": [
    "brightness/task_comparison/brightness_pair_000.wav"
  ],
  "conversations": [
    {
      "from": "human",
      "value": "The audio includes two segments with a 0.5-second silent interval. Which is brighter? Only answer letter 'A' (refers to the first clip) or 'B' (refers to the second clip). Do not add any explanation, punctuation, or extra text. <audio>"
    },
    {
      "from": "gpt",
      "value": "B"
    }
  ]
}
  • The first turn (from: "human") gives the full instruction and contains an `<audio>` placeholder.
  • The second turn (from: "gpt") contains the ground-truth answer, which is always a single letter "A" or "B" with no explanation or punctuation.

In summary, to evaluate a model you typically:

  • Feed the audio in voice[0] into your model.
  • Give the model the value of the "human" turn as the textual prompt.
  • Compare the model’s final answer with the value of the "gpt" turn (letter "A"/"B").
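These steps can be sketched in Python. The `score_item` function and its answer-normalization rule (keep the first A/B letter the model emits) are illustrative choices, not part of the benchmark tooling; the sample item is copied from brightness_comparison.json above:

```python
# One item in the format shown above (from brightness_comparison.json).
item = {
    "voice": ["brightness/task_comparison/brightness_pair_000.wav"],
    "conversations": [
        {"from": "human",
         "value": ("The audio includes two segments with a 0.5-second silent "
                   "interval. Which is brighter? Only answer letter 'A' (refers "
                   "to the first clip) or 'B' (refers to the second clip). Do not "
                   "add any explanation, punctuation, or extra text. <audio>")},
        {"from": "gpt", "value": "B"},
    ],
}


def score_item(item: dict, model_answer: str) -> bool:
    """Compare a model's reply against the ground-truth 'gpt' turn."""
    audio_path = item["voice"][0]               # the clip to feed to the model
    prompt = item["conversations"][0]["value"]  # the textual instruction
    gold = item["conversations"][1]["value"]    # "A" or "B"
    # Illustrative normalization: keep the first A/B letter the model emitted.
    pred = next((c for c in model_answer.strip().upper() if c in "AB"), None)
    return pred == gold
```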

4. Probe JSON (Train/Eval Splits)

The probe_json/ directory contains train/eval splits derived from the same underlying items, designed for:

  • Linear probes on frozen audio encoders
  • Small classifier training
  • Attribute-wise analysis without touching the main test set

For each attribute × task type (e.g., brightness_recog, distance_comparison), there is a folder:

probe_json/brightness_recog/train.json
probe_json/brightness_recog/eval.json
probe_json/brightness_comparison/train.json
probe_json/brightness_comparison/eval.json
probe_json/velocity_recog/train.json
probe_json/velocity_recog/eval.json
probe_json/velocity_comparison/train.json
probe_json/velocity_comparison/eval.json
...

Each train.json / eval.json is again a list of items with the same schema. These splits correspond to a fixed random partition (random seed 42). In our experiments, we train simple linear probes on train.json and evaluate on eval.json, while keeping the encoder frozen.
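As a minimal, self-contained stand-in for this setup, the sketch below trains a logistic-regression probe in pure NumPy; the synthetic Gaussian features are a placeholder for real frozen-encoder embeddings, and the hyperparameters are illustrative:

```python
import numpy as np


def train_linear_probe(X, y, lr=0.1, epochs=200):
    """Fit a logistic-regression probe (w, b) by full-batch gradient descent."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        z = np.clip(X @ w + b, -30, 30)       # clip to keep exp() stable
        p = 1.0 / (1.0 + np.exp(-z))          # predicted P(y = 1)
        w -= lr * (X.T @ (p - y)) / n         # cross-entropy gradient step
        b -= lr * float(np.mean(p - y))
    return w, b


def probe_accuracy(w, b, X, y):
    return float(np.mean(((X @ w + b) > 0) == y))


# Synthetic stand-in for frozen-encoder embeddings: two shifted Gaussians,
# one per binary label ("A" vs. "B").
rng = np.random.default_rng(42)
X = np.vstack([rng.normal(-1, 1, (100, 16)), rng.normal(1, 1, (100, 16))])
y = np.array([0] * 100 + [1] * 100)

w, b = train_linear_probe(X, y)
acc = probe_accuracy(w, b, X, y)
```

Swapping the synthetic `X` for encoder embeddings of the clips listed in train.json, and evaluating on eval.json, reproduces the probing protocol described above.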


5. What We Found with SonicBench

Using SonicBench, we evaluate 36 systems across three families:

  • LALMs – Large Audio(-Language) Models built by aligning pre-trained audio encoders with LLMs
  • LARMs – audio-specific reasoning models
  • OLMs – omni-modal models that include an audio interface

SonicBench uncovers several consistent patterns:

  1. Fundamental physical perception is weak.
    Despite strong performance on semantic and paralinguistic benchmarks, most models perform near random guessing (~50%) on many SonicBench tasks.
    Even the best model in our study (Qwen3-Omni) reaches only about 72% accuracy, far below human performance (~91%).
    This indicates that current systems often lack reliable physical grounding, even when their high-level behavior appears competent.

  2. No human-like advantage on comparison tasks.
    In human psychophysics, relative comparison is often easier than absolute judgment.
    In contrast, LALMs and related systems show no systematic advantage on comparison tasks;
    for several attributes, comparison accuracy is even lower than recognition accuracy.
    This suggests that current models struggle with relational reasoning over physical attributes.

  3. Inference-time reasoning brings limited gains.
    We experiment with explicit reasoning and inference-time scaling (longer chain-of-thought, more deliberation).
    The improvements on SonicBench are marginal, indicating that simply adding reasoning tokens cannot compensate for missing or poorly used physical representations.

  4. Encoders perceive more than the full model can use.
    When we freeze audio encoders and train simple linear probes on probe_json splits, these probes consistently achieve ≥60% accuracy across attributes and, in several cases, outperform the full end-to-end models.
    This shows that the physical cues are already present in the encoder representations.
    The primary bottleneck lies in alignment and decoding: the projector and language layers fail to faithfully leverage the sensory information they receive.


6. Intended Uses

SonicBench is designed primarily as an evaluation and analysis benchmark for physical audio perception. Typical use cases include:

  • Benchmarking physical grounding
    Evaluate LALMs, LARMs, and OLMs on their ability to perceive core physical attributes.

  • Attribute-wise and dimension-wise diagnostics
    Use the 12 attributes and 5 perceptual dimensions to pinpoint which aspects (e.g., spectral vs. spatial vs. scene-level) a model handles well or fails on.

  • Studying recognition vs. comparison behavior
    Compare model performance across absolute (recognition) and relative (comparison) paradigms to analyze relational reasoning over acoustic signals.

  • Encoder probing and architecture analysis
    Use probe_json train/eval splits to attach simple probes to audio encoders, isolating where information is lost along the encoder-projector-LLM pipeline.

We recommend treating all files in json/ as held-out test sets.
For training probes or auxiliary models, please use the splits provided under probe_json/.


7. Citation

If you use SonicBench in your work, please cite:

@misc{sun2026sonicbench,
      title={SonicBench: Dissecting the Physical Perception Bottleneck in Large Audio Language Models}, 
      author={Yirong Sun and Yanjun Chen and Xin Qiu and Gang Zhang and Hongyu Chen and Daokuan Wu and Chengming Li and Min Yang and Dawei Zhu and Wei Zhang and Xiaoyu Shen},
      year={2026},
      eprint={2601.11039},
      archivePrefix={arXiv},
      primaryClass={cs.SD},
      url={https://arxiv.org/abs/2601.11039}, 
}

8. Contact

Email: win1282467298@gmail.com, qiuxinzju@zju.edu.cn, xyshen@eitech.edu.cn  
Organization: EIT-NLP Lab