---
license: cc-by-nc-4.0
task_categories:
- audio-classification
- automatic-speech-recognition
- question-answering
language:
- en
formats:
- json
tags:
- Audio
- Multi-modal Large Language Models
modalities:
- audio
- text
size_categories:
- 1K<n<10K
configs:
- config_name: default
data_files:
- split: test
path: test.csv
pretty_name: AudioMarathon
---
<div align="center">
<h1 style="display: inline-block; margin: 0;">🎡 AudioMarathon: A Comprehensive Benchmark for Long-Context Audio Understanding and Efficient Inference in Multimodal LLMs</h1>
</div>
<h4 align="center">
[![arXiv](https://img.shields.io/badge/arXiv-2510.07293-b31b1b.svg)](https://arxiv.org/abs/2510.07293)
[![GitHub](https://img.shields.io/badge/GitHub-AudioMarathon-181717?logo=github)](https://github.com/DabDans/AudioMarathon)
[![Dataset](https://img.shields.io/badge/πŸ€—-Dataset-yellow)](https://huggingface.co/datasets/Hezep/AudioMarathon)
[![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC%20BY--NC%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc/4.0/)
</h4>
---
## Abstract
**AudioMarathon** is a large-scale, multi-task audio understanding benchmark designed to systematically evaluate how well audio language models process and comprehend long-form audio. It provides a diverse set of **10** tasks built on three pillars:
1. **Long-context audio inputs**: durations from **90 to 300 seconds**, corresponding to encoded sequences of 2,250 to 7,500 audio tokens
2. **Full domain coverage**: speech, sound, and music
3. **Complex reasoning**: questions that require multi-hop inference
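The token counts above imply a fixed encoding rate of 25 audio tokens per second (2,250 / 90 = 7,500 / 300 = 25); a quick sanity check, treating the rate as an inference from these figures rather than an official constant:
```python
# Implied encoding rate: 2,250 tokens / 90 s = 7,500 tokens / 300 s = 25 tokens/s.
# NOTE: this rate is inferred from the figures above, not an official constant.
TOKENS_PER_SECOND = 25

def audio_tokens(duration_s: float) -> int:
    """Estimate the encoded sequence length for a clip of duration_s seconds."""
    return round(duration_s * TOKENS_PER_SECOND)

assert audio_tokens(90) == 2_250
assert audio_tokens(300) == 7_500
```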
## πŸ“Š Task Taxonomy & Statistics
### Task Categories
AudioMarathon organizes its ten tasks into four primary categories:
1. **Speech Understanding**: ASR (LibriSpeech-long), content reasoning (RACE), and entity recognition (SLUE)
2. **Acoustic Analysis**: music genre (GTZAN), acoustic scene (TAU), and sound event (DESED) classification
3. **Speaker Characterization**: emotion (VESUS), gender, and age (VoxCeleb) recognition
4. **Content Authenticity**: detection of partially synthesized audio (HAD)
### Dataset Statistics
| Task ID | Dataset | Task Type | # Samples | Duration | Format | License | Status |
| ------- | --------------------------------------- | ----------------------------------- | --------- | ----------- | ------------- | ------------- | ------- |
| 1 | [LibriSpeech-long](#1-librispeech-long) | Automatic Speech Recognition (ASR) | 204 | 1-4 min | FLAC 16 kHz | CC BY 4.0 | βœ… Full |
| 2 | [RACE](#2-race) | Speech Content Reasoning (SCR) | 820 | 2-4.22 min | WAV 16 kHz | Apache-2.0 | βœ… Full |
| 3 | [HAD](#3-had) | Speech Detection (SD) | 776 | 3-5 min | WAV 16 kHz | CC BY 4.0 | βœ… Full |
| 4 | [GTZAN](#4-gtzan) | Music Classification (MC) | 120 | 4 min | WAV 22.05 kHz | Research Only | βœ… Full |
| 5 | [TAU](#5-tau) | Acoustic Scene Classification (ASC) | 1145 | 1.5-3.5 min | WAV 16 kHz | CC BY 4.0 | βœ… Full |
| 6 | [VESUS](#6-vesus) | Emotion Recognition (ER) | 185 | 1.5-2 min | WAV 16 kHz | Academic Only | βœ… Full |
| 7 | [SLUE](#7-slue) | Speech Entity Recognition (SER) | 490 | 2.75-5 min | WAV 16 kHz | CC BY 4.0 | βœ… Full |
| 8 | [DESED](#8-desed) | Sound Event Detection (SED) | 254 | 4.5-5 min | WAV 16 kHz | Mixed CC* | βœ… Full |
| 9 | [VoxCeleb-Gender](#9-voxceleb-gender) | Speaker Gender Recognition (SGR) | 1614 | 1.5-3.5 min | WAV 16 kHz | CC BY 4.0 | βœ… Full |
| 10 | [VoxCeleb-Age](#10-voxceleb-age) | Speaker Age Recognition (SAR) | 959 | 1.5-3.5 min | WAV 16 kHz | CC BY 4.0 | βœ… Full |
**Total**: 6,567 samples | ~64 GB
\* *DESED requires per-clip Freesound attribution (CC0 / CC BY 3.0 / 4.0)*
---
## 🎯 Benchmark Objectives
AudioMarathon is designed to evaluate:
1. **Long-Audio Processing**: Ability to maintain coherence across extended audio sequences
2. **Multi-Domain Generalization**: Performance across diverse acoustic environments and tasks
3. **Semantic Understanding**: Comprehension of spoken content, not just acoustic patterns
4. **Efficiency**: Computational requirements for long-form audio processing
---
## πŸ“ Directory Structure
```
Dataset/
β”œβ”€β”€ librispeech-long/ # Automatic Speech Recognition
β”‚ β”œβ”€β”€ README.md
β”‚ β”œβ”€β”€ test-clean/ # Clean test set
β”‚ β”œβ”€β”€ test-other/ # Noisy test set
β”‚ β”œβ”€β”€ dev-clean/ # Clean dev set
β”‚ └── dev-other/ # Noisy dev set
β”‚
β”œβ”€β”€ race_audio/ # Reading Comprehension
β”‚ β”œβ”€β”€ race_benchmark.json # Task metadata
β”‚ └── test/ # Audio articles
β”‚ └── article_*/
β”‚
β”œβ”€β”€ HAD/ # Half-truth Audio Detection
β”‚ └── concatenated_audio/
β”‚ β”œβ”€β”€ had_audio_classification_task.json
β”‚ β”œβ”€β”€ real/ # Authentic audio
β”‚ └── fake/ # Synthesized audio
β”‚
β”œβ”€β”€ GTZAN/ # Music Genre Classification
β”‚ └── concatenated_audio/
β”‚ β”œβ”€β”€ music_genre_classification_meta.json
β”‚ └── wav/ # Genre-labeled music clips
β”‚
β”œβ”€β”€ TAU/ # Acoustic Scene Classification
β”‚ β”œβ”€β”€ acoustic_scene_task_meta.json
β”‚ β”œβ”€β”€ LICENSE
β”‚ β”œβ”€β”€ README.md
β”‚ └── concatenated_resampled/
β”‚
β”œβ”€β”€ VESUS/ # Emotion Recognition
β”‚ β”œβ”€β”€ audio_emotion_dataset.json
β”‚ └── [1-10]/ # Speaker directories
β”‚
β”œβ”€β”€ SLUE/ # Named Entity Recognition
β”‚ β”œβ”€β”€ merged_audio_data.json
β”‚ β”œβ”€β”€ dev/
β”‚ β”œβ”€β”€ test/
β”‚ └── fine-tune/
β”‚
β”œβ”€β”€ DESED/ # Sound Event Detection
β”‚ └── DESED_dataset/
β”‚ β”œβ”€β”€ license_public_eval.tsv
β”‚ └── concatenated_audio/
β”‚
β”œβ”€β”€ VoxCeleb/ # Speaker Recognition
β”‚ β”œβ”€β”€ concatenated_audio/
β”‚ β”‚ └── gender_id_task_meta.json
β”‚ β”œβ”€β”€ concatenated_audio_age/
β”‚ β”‚ └── age_classification_task_meta.json
β”‚ └── txt/
β”‚
└── README.md # This file
```
---
## 🎯 Dataset Details
### 1. LibriSpeech-long
**Task**: Automatic Speech Recognition (ASR)
**Description**: Long-form English speech from audiobooks
**Format**: FLAC files with `.trans.txt` transcriptions
**Splits**: test-clean, test-other, dev-clean, dev-other
**License**: CC BY 4.0
**Source**: https://github.com/google-deepmind/librispeech-long
**Structure**:
```
librispeech-long/
test-clean/
<speaker_id>/
<chapter_id>/
<speaker>-<chapter>-<utterance>.flac
<speaker>-<chapter>.trans.txt
```
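A minimal sketch for pairing each FLAC file with its transcript under the layout above, assuming a local copy of the split (`soundfile` is an assumed dependency, not a benchmark requirement):
```python
from pathlib import Path

import soundfile as sf  # assumed dependency for decoding FLAC

def load_split(root: str, split: str = "test-clean"):
    """Yield (audio, sample_rate, transcript) for each utterance in a split."""
    for trans_file in Path(root, split).rglob("*.trans.txt"):
        # Each line: "<speaker>-<chapter>-<utterance> TRANSCRIPT TEXT"
        transcripts = dict(line.strip().split(" ", 1) for line in open(trans_file))
        for utt_id, text in transcripts.items():
            audio, sr = sf.read(trans_file.parent / f"{utt_id}.flac")
            yield audio, sr, text

for audio, sr, text in load_split("librispeech-long"):
    print(len(audio) / sr, text[:60])
    break
```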
**Citation** (the benchmark builds on LibriSpeech, Panayotov et al., 2015):
```bibtex
@inproceedings{panayotov2015librispeech,
  title={Librispeech: An ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={ICASSP},
  year={2015}
}
```
---
### 2. RACE
**Task**: Reading Comprehension from Audio
**Description**: Multiple-choice questions based on audio passages
**Format**: WAV files + JSON metadata
**Sample Count**: ~200 articles
**License**: Apache-2.0 (verify)
**Source**: https://huggingface.co/datasets/ehovy/race
**JSON Format**:
```json
{
"article_id": 7870154,
"audio_path": "test/article_7870154/audio.wav",
"question": "What did the author do...?",
"options": ["A", "B", "C", "D"],
"answer": "A"
}
```
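A sketch of turning one record from `race_benchmark.json` into a multiple-choice prompt. Field names follow the example above; the prompt template itself is illustrative:
```python
import json

LETTERS = "ABCD"

def build_prompt(item: dict):
    """Return (audio_path, prompt_text, gold_letter) for one RACE record."""
    choices = "\n".join(f"{l}. {o}" for l, o in zip(LETTERS, item["options"]))
    prompt = f"{item['question']}\n{choices}\nAnswer with a single letter."
    return item["audio_path"], prompt, item["answer"]

with open("race_audio/race_benchmark.json") as f:
    items = json.load(f)  # assumed: a list of records shaped like the example above

audio_path, prompt, gold = build_prompt(items[0])
```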
**Citation**:
```bibtex
@inproceedings{lai2017race,
  title={RACE: Large-scale reading comprehension dataset from examinations},
  author={Lai, Guokun and Xie, Qizhe and Liu, Hanxiao and Yang, Yiming and Hovy, Eduard},
  booktitle={EMNLP},
  year={2017}
}
```
---
### 3. HAD
**Task**: Half-truth Audio Detection
**Description**: Classify audio as real or containing synthesized segments
**License**: CC BY 4.0
**Source**: https://zenodo.org/records/10377492
**JSON Format**:
```json
{
"path": "real/HAD_train_real_249.wav",
"question": "Is this audio authentic or fake?",
"choice_a": "Real",
"choice_b": "Fake",
"answer_gt": "Real",
"duration_seconds": 297.78
}
```
---
### 4. GTZAN
**Task**: Music Genre Classification
**Description**: 10-genre music classification dataset
**Genres**: blues, classical, country, disco, hiphop, jazz, metal, pop, reggae, rock
**⚠️ License**: Research Use Only
**Source**: https://www.kaggle.com/datasets/andradaolteanu/gtzan-dataset-music-genre-classification
**Citation**:
```bibtex
@article{tzanetakis2002musical,
  title={Musical genre classification of audio signals},
  author={Tzanetakis, George and Cook, Perry},
  journal={IEEE Transactions on Speech and Audio Processing},
  year={2002}
}
```
---
### 5. TAU
**Task**: Acoustic Scene Classification
**Description**: Urban sound scene recognition
**Scenes**: airport, bus, metro, park, public_square, shopping_mall, street_pedestrian, street_traffic, tram
**License**: CC BY 4.0
**Source**: https://zenodo.org/records/7870258
**Files**:
- `acoustic_scene_task_meta.json`: Task metadata
- `LICENSE`: Original license text
- `concatenated_resampled/`: Resampled audio files
**Citation**:
```bibtex
@article{mesaros2018detection,
  title={Detection and classification of acoustic scenes and events: Outcome of the DCASE 2016 challenge},
  author={Mesaros, Annamaria and Heittola, Toni and Virtanen, Tuomas},
  journal={IEEE/ACM TASLP},
  year={2018}
}
```
---
### 6. VESUS
**Task**: Emotion Recognition from Speech
**Description**: Actors reading neutral script with emotional inflections
**Emotions**: neutral, angry, happy, sad, fearful
**Actors**: 10 (5 male, 5 female)
**⚠️ License**: Academic Use Only (access by request)
**Source**: https://engineering.jhu.edu/nsa/vesus/
**Citation**:
```bibtex
@inproceedings{sager2019vesus,
  title={VESUS: A crowd-annotated database to study emotion production and perception in spoken English},
  author={Sager, Jacob and Shankar, Ravi and Reinhold, Jacob and Venkataraman, Archana},
  booktitle={Interspeech},
  year={2019}
}
```
---
### 7. SLUE
**Task**: Named Entity Recognition (NER) from Speech
**Description**: Count named entities in audio segments
**Entity Types**: LAW, NORP, ORG, PLACE, QUANT, WHEN
**License**: CC BY 4.0 (VoxPopuli-derived)
**JSON Format**:
```json
{
"path": "dev/concatenated_audio_with/concatenated_audio_0000.wav",
"question": "How many named entities appear?",
"options": ["49 entities", "51 entities", "52 entities", "46 entities"],
"answer_gt": "D",
"entity_count": 49
}
```
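Because `answer_gt` stores a letter while `options` stores full strings, scoring needs a letter-to-option mapping. A minimal sketch using the field names from the example above:
```python
LETTERS = "ABCD"

def resolve_answer(item: dict) -> str:
    """Map the gold letter in answer_gt to the corresponding option string."""
    return item["options"][LETTERS.index(item["answer_gt"])]

item = {
    "options": ["49 entities", "51 entities", "52 entities", "46 entities"],
    "answer_gt": "A",
    "entity_count": 49,
}
assert resolve_answer(item) == "49 entities"  # consistent with entity_count
```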
**Citation**:
```bibtex
@inproceedings{shon2022slue,
  title={SLUE: New benchmark tasks for spoken language understanding evaluation on natural speech},
  author={Shon, Suwon and Pasad, Ankita and Wu, Felix and others},
  booktitle={ICASSP},
  year={2022}
}
```
---
### 8. DESED
**Task**: Sound Event Detection
**Description**: Detect domestic sound events
**Events**: Alarm bell, Blender, Cat, Dishes, Dog, Electric shaver, Frying, Running water, Speech, Vacuum cleaner
**License**: Mixed CC (Freesound sources: CC0, CC BY 3.0/4.0)
**Source**: https://github.com/turpaultn/DESED
**⚠️ ATTRIBUTION REQUIRED**:
- Audio clips sourced from Freesound.org
- Each clip has individual CC license
- Must maintain attribution when redistributing
- See `license_public_eval.tsv` for per-file credits
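A sketch for assembling attribution lines from `license_public_eval.tsv`. The column names used here (`filename`, `author`, `license`) are assumptions about the TSV layout; check the header row of your copy first:
```python
import csv

# NOTE: the column names below ("filename", "author", "license") are assumptions
# about license_public_eval.tsv; inspect the actual header row before relying on them.
with open("DESED/DESED_dataset/license_public_eval.tsv", newline="") as f:
    for row in csv.DictReader(f, delimiter="\t"):
        print(f"{row['filename']}: by {row['author']} ({row['license']})")
```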
**Citation**:
```bibtex
@inproceedings{turpault2019sound,
  title={Sound event detection in domestic environments with weakly labeled data and soundscape synthesis},
  author={Turpault, Nicolas and Serizel, Romain and Salamon, Justin and Shah, Ankit Parag},
  booktitle={DCASE Workshop},
  year={2019}
}
```
---
### 9. VoxCeleb-Gender
**Task**: Speaker Gender Identification
**Description**: Binary classification (male/female)
**License**: CC BY 4.0
**Source**: https://www.robots.ox.ac.uk/~vgg/data/voxceleb/
**JSON Format**:
```json
{
"path": "concatenated_audio/speaker_001.wav",
"question": "What is the gender of the speaker?",
"choice_a": "Male",
"choice_b": "Female",
"answer_gt": "A"
}
```
---
### 10. VoxCeleb-Age
**Task**: Speaker Age Classification
**Description**: Multi-class age group classification
**Age Groups**: 20s, 30s, 40s, 50s, 60s, 70s
**License**: CC BY 4.0
**Source**: VoxCeleb + https://github.com/hechmik/voxceleb_enrichment_age_gender
**Note**: Age/gender labels are derivative annotations on VoxCeleb corpus
**Citation**:
```bibtex
@inproceedings{nagrani2017voxceleb,
  title={VoxCeleb: a large-scale speaker identification dataset},
  author={Nagrani, Arsha and Chung, Joon Son and Zisserman, Andrew},
  booktitle={Interspeech},
  year={2017}
}
```
---
## πŸ”§ Usage Guidelines
You can load the dataset via Hugging Face `datasets`:
```python
from datasets import load_dataset

ds = load_dataset("Hezep/AudioMarathon")
```
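The exact fields depend on `test.csv`, so inspect the columns before relying on any of them; a hedged sketch for peeking at the test split:
```python
from datasets import load_dataset

ds = load_dataset("Hezep/AudioMarathon", split="test")

# Field names depend on test.csv; check them before relying on any column.
print(ds.column_names)
for example in ds.select(range(3)):  # peek at the first few records
    print(example)
```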
---
### Special Requirements
Several component datasets carry restrictions beyond CC BY 4.0:
- **GTZAN**: research use only
- **VESUS**: academic use only; access must be requested from the original authors
- **DESED**: per-clip Freesound attribution required (see `license_public_eval.tsv`)
### Disclaimer
This benchmark is provided "AS IS" without warranty. Users bear sole responsibility for:
- License compliance verification
- Obtaining restricted datasets independently
- Proper attribution maintenance
- Determining fitness for specific use cases
---
## πŸ“Š Benchmark Statistics
### Overview
| Metric | Value |
| -------------- | ---------------------------------------- |
| Total Tasks | 10 |
| Total Samples  | 6,567                                    |
| Total Duration | ~392 h                                   |
| Total Size | ~60 GB |
| Languages | English |
| Domains | Speech, Music, Soundscape, Environmental |
### Audio Characteristics
| Property | Range | Predominant |
|----------|-------|-------------|
| Sampling Rate | 16 kHz - 22.05 kHz | 16 kHz (90%) |
| Duration | 30s - 5+ min | 2-3 min (avg) |
| Channels | Mono | Mono (100%) |
| Format | FLAC, WAV | WAV (80%) |
| Bit Depth | 16-bit | 16-bit (100%) |
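Since GTZAN ships at 22.05 kHz while the rest of the benchmark is 16 kHz, normalizing the sampling rate keeps model inputs uniform. A minimal sketch using `librosa` (an assumed dependency, not a benchmark requirement):
```python
import librosa  # assumed dependency, not a benchmark requirement

TARGET_SR = 16_000

def load_uniform(path: str):
    """Load any benchmark clip as mono audio at 16 kHz."""
    audio, sr = librosa.load(path, sr=None, mono=True)  # keep the native rate
    if sr != TARGET_SR:
        audio = librosa.resample(audio, orig_sr=sr, target_sr=TARGET_SR)
    return audio, TARGET_SR
```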
### Task Distribution
| Category | # Tasks | # Samples | % of Total |
| ------------------------ | ------- | --------- | ---------- |
| Speech Understanding | 3 | 1514 | 23% |
| Acoustic Analysis | 3 | 1519 | 23% |
| Speaker Characterization | 3 | 2758 | 42% |
| Content Authenticity | 1 | 776 | 12% |
---
## πŸ”— Related Resources
- **Paper (arXiv)**: https://arxiv.org/abs/2510.07293
- **GitHub Repository**: https://github.com/DabDans/AudioMarathon
- **Hugging Face Dataset**: https://huggingface.co/datasets/Hezep/AudioMarathon
---
## πŸ“ Citation
If you use AudioMarathon in your research, please cite:
```bibtex
@article{he2025audiomarathon,
title={AudioMarathon: A Comprehensive Benchmark for Long-Context Audio Understanding and Efficiency in Audio LLMs},
author={He, Peize and Wen, Zichen and Wang, Yubo and Wang, Yuxuan and Liu, Xiaoqian and Huang, Jiajie and Lei, Zehui and Gu, Zhuangcheng and Jin, Xiangqi and Yang, Jiabing and Li, Kai and Liu, Zhifei and Li, Weijia and Wang, Cunxiang and He, Conghui and Zhang, Linfeng},
journal={arXiv preprint arXiv:2510.07293},
year={2025},
url={https://arxiv.org/abs/2510.07293}
}
```
### Citing Component Datasets
When using specific tasks, please also cite the original datasets:
- **LibriSpeech**: Panayotov et al. (2015)
- **RACE**: Lai et al. (2017)
- **HAD**: Zenodo record 10377492
- **GTZAN**: Tzanetakis & Cook (2002)
- **TAU**: DCASE Challenge (Mesaros et al., 2018)
- **VESUS**: Sager et al. (2019)
- **SLUE**: Shon et al. (2022)
- **DESED**: Turpault et al. (2019)
- **VoxCeleb**: Nagrani et al. (2017, 2018)
Full BibTeX entries are available in the individual task sections above.
---
## 🀝 Contributing & Support
For questions, bug reports, or contributions, please open a GitHub issue:
- **GitHub Issues**: https://github.com/DabDans/AudioMarathon/issues
---
## πŸ™ Acknowledgments
AudioMarathon builds upon the pioneering work of numerous research teams. We gratefully acknowledge the creators of:
- LibriSpeech (Panayotov et al.)
- RACE (Lai et al.)
- HAD (Zenodo contributors)
- GTZAN (Tzanetakis & Cook)
- TAU/DCASE (Mesaros et al.)
- VESUS (Sager et al., JHU)
- SLUE (Shon et al.)
- DESED (Turpault et al. & Freesound community)
- VoxCeleb (Nagrani et al., Oxford VGG)
Their datasets enable comprehensive audio understanding research.
---
<p align="center">
<b>AudioMarathon</b><br>
A Comprehensive Long-Form Audio Understanding Benchmark<br>
<i>Version 1.0.0 | October 2025</i>
</p>