---
configs:
- config_name: default
  data_files:
  - split: dev
    path: "dev.jsonl"
license: apache-2.0
---
# DCASE 2026 Task 5: Audio-Dependent Question Answering (ADQA) Development Set

<div align="center">

[![DCASE 2026 Task 5](https://img.shields.io/badge/DCASE%202026-Task%205%20Dev%20Set-red.svg)](https://dcase.community/challenge2026/task-audio-dependent-question-answering)
[![Paper](https://img.shields.io/badge/Paper-ICLR%202026-b31b1b.svg)](https://arxiv.org/abs/2509.21060)
[![Training Set](https://img.shields.io/badge/Training%20Set-AudioMCQ--StrongAC--GeminiCoT-yellow.svg)](https://huggingface.co/datasets/Harland/AudioMCQ-StrongAC-GeminiCoT)

</div>

This is the official **Development Set** for [DCASE 2026 Challenge Task 5: Audio-Dependent Question Answering (ADQA)](https://dcase.community/challenge2026/task-audio-dependent-question-answering).

The ADQA task focuses on addressing **"Textual Hallucination"** in Large Audio-Language Models (LALMs) — where models pass audio understanding benchmarks by relying on text prompts and internal linguistic priors rather than actual audio perception. ADQA introduces a rigorous evaluation framework using **Audio-Dependency Filtering (ADF)** to ensure questions cannot be answered through common sense or text-only reasoning.

## Audio-Dependency Filtering (ADF)

All samples in this development set undergo a rigorous four-step ADF filtering process to guarantee genuine audio dependence:

1. **Silent Audio Filtering:** Questions solvable by LALMs without audio are removed.
2. **LLM Common-sense Check:** Ensures no external knowledge alone can solve the question.
3. **Perplexity-based Soft Filtering:** Eliminates samples with text-based statistical shortcuts.
4. **Manual Verification:** Final human-in-the-loop check for ground-truth accuracy.
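The automatic portion of the steps above can be sketched as a simple filter chain. This is a hypothetical illustration only: the predicate values stand in for real signals (an LALM queried on silenced audio, a text-only LLM answering from the question alone, and a perplexity-based shortcut score), and the function name and threshold are not part of the official pipeline.

```python
def passes_adf(answered_on_silence, answered_by_llm, perplexity_gap,
               gap_threshold=0.0):
    """Illustrative sketch: does a sample survive the automatic ADF steps?

    answered_on_silence: LALM answered correctly with the audio silenced
    answered_by_llm:     text-only LLM answered correctly without audio
    perplexity_gap:      score indicating a text-statistical shortcut
    """
    if answered_on_silence:             # Step 1: silent audio filtering
        return False
    if answered_by_llm:                 # Step 2: LLM common-sense check
        return False
    if perplexity_gap > gap_threshold:  # Step 3: perplexity-based filtering
        return False
    return True  # Step 4 (manual verification) happens downstream
```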

## Statistics

| Metric | Count |
|--------|-------|
| Total Samples | 1,607 |
| Unique Audio Files | 1,607 |

### Data Sources

The development set is composed of two parts:

- **Existing Benchmarks:** A portion of the samples is derived from established audio understanding benchmarks, including [MMAU](https://github.com/sakshi113/mmau), [MMAR](https://github.com/ddlBoJack/MMAR), and [MMSU](https://huggingface.co/datasets/ddwang2000/MMSU). These samples cover a wide range of audio understanding tasks such as speech, music, and sound perception.
- **Human-Annotated Questions:** The remaining majority consists of newly constructed, human-annotated multiple-choice questions based on diverse audio sources, designed to further challenge models on real-world audio comprehension.

All samples undergo the four-step **Audio-Dependency Filtering (ADF)** process described above.

## Directory Structure

```text
DCASE2026-Task5-DevSet/
├── dev.jsonl                # Main data file (1,607 samples, shuffled)
├── dev_audios/              # Audio files (1,607 .wav files)
└── README.md
```

## Data Format

Each entry in `dev.jsonl` is a JSON object with the following fields:

| Field | Type | Description |
|-------|------|-------------|
| `id` | string | Unique sample identifier (e.g., `dev_0001`) |
| `audio_path` | string | Relative path to audio file |
| `question_text` | string | Question text |
| `answer` | string | Correct answer |
| `multi_choice` | list[string] | Answer choices |

### Example

```json
{
  "id": "dev_0001",
  "audio_path": "dev_audios/dev_0001.wav",
  "question_text": "What is the speaker's primary emotion in this audio?",
  "answer": "Happiness",
  "multi_choice": ["Sadness", "Happiness", "Anger", "Fear"]
}
```
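A minimal loader for this format might look like the following. It reads `dev.jsonl` line by line and sanity-checks that each entry's `answer` appears among its `multi_choice` options; the function name is illustrative, and the field names are taken from the table above.

```python
import json

def load_dev_set(path="dev.jsonl"):
    """Load dev.jsonl and sanity-check each entry's fields."""
    samples = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            entry = json.loads(line)
            # The correct answer should always be one of the listed choices.
            assert entry["answer"] in entry["multi_choice"]
            samples.append(entry)
    return samples
```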

## Submission Format

The system output file should be a `.csv` file with the following two columns:

| Column | Description |
|--------|-------------|
| `question` | The question ID (e.g., `dev_0001`) |
| `answer` | The system's answer; it must exactly match one of the given choices |
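A submission file with these two columns can be produced as sketched below. The helper name is illustrative, and whether the CSV should include a header row is an assumption here; check the official challenge instructions before submitting.

```python
import csv

def write_submission(predictions, out_path="submission.csv"):
    """Write {question_id: answer} predictions as a two-column CSV.

    Assumes a header row ("question", "answer") is expected; confirm
    against the official submission instructions.
    """
    with open(out_path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(["question", "answer"])
        for qid, ans in predictions.items():
            writer.writerow([qid, ans])
```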

## License

This dataset is distributed under the **Apache-2.0** license.

## Citation

If you use this development set or participate in DCASE 2026 Task 5, please cite:

```bibtex
@article{he2025measuring,
  title={Measuring Audio's Impact on Correctness: Audio-Contribution-Aware Post-Training of Large Audio Language Models},
  author={He, Haolin and Du, Xingjian and Sun, Renhe and Dai, Zheqi and Xiao, Yujia and Yang, Mingru and Zhou, Jiayi and Li, Xiquan and Liu, Zhengxi and Liang, Zining and others},
  journal={arXiv preprint arXiv:2509.21060},
  year={2025}
}
```