---
language:
- en
license: cc-by-sa-4.0
tags:
- audio
task_categories:
- audio-classification
---
# Another HEaring AiD DataSet (AHEAD-DS)
Another HEaring AiD DataSet (AHEAD-DS) is an audio dataset labelled with audiologically relevant scene categories for hearing aids.
* [Website](https://github.com/Australian-Future-Hearing-Initiative)
* [Paper](https://arxiv.org/abs/2508.10360)
* [Code](https://github.com/Australian-Future-Hearing-Initiative/prism-ml/prism-ml-yamnetp-tune)
* [Dataset AHEAD-DS](https://huggingface.co/datasets/hzhongresearch/ahead_ds)
* [Dataset AHEAD-DS unmixed](https://huggingface.co/datasets/hzhongresearch/ahead_ds_unmixed)
* [Models](https://huggingface.co/hzhongresearch/yamnetp_ahead_ds)
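For local experiments, the dataset linked above can be fetched from the Hugging Face Hub. Below is a minimal sketch using the standard `huggingface_hub` client (`pip install huggingface_hub`); files land in the library's default cache directory.
```
# Sketch: download a local copy of AHEAD-DS from the Hugging Face Hub.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="hzhongresearch/ahead_ds",  # dataset repository linked above
    repo_type="dataset",
)
print(f"Dataset files downloaded to {local_dir}")
```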
## Description of data
All files are encoded as single-channel WAV, 16-bit signed PCM, sampled at 16 kHz, with 10 seconds per recording. The table below gives the number of recordings in each category per split; a format-verification sketch follows the table.
| Category | Training | Validation | Testing | All |
|:----------------------------------|:---------|:-----------|:--------|:-----|
| cocktail_party | 934 | 134 | 266 | 1334 |
| interfering_speakers | 733 | 105 | 209 | 1047 |
| in_traffic | 370 | 53 | 105 | 528 |
| in_vehicle | 409 | 59 | 116 | 584 |
| music | 1047 | 150 | 299 | 1496 |
| quiet_indoors | 368 | 53 | 104 | 525 |
| reverberant_environment | 156 | 22 | 44 | 222 |
| wind_turbulence | 307 | 44 | 88 | 439 |
| speech_in_traffic | 370 | 53 | 105 | 528 |
| speech_in_vehicle | 409 | 59 | 116 | 584 |
| speech_in_music | 1047 | 150 | 299 | 1496 |
| speech_in_quiet_indoors | 368 | 53 | 104 | 525 |
| speech_in_reverberant_environment | 155 | 22 | 44 | 221 |
| speech_in_wind_turbulence | 307 | 44 | 88 | 439 |
| Total | 6980 | 1001 | 1987 | 9968 |
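As a quick sanity check, this minimal sketch uses Python's standard `wave` module to confirm that a file matches the format above; `example.wav` is a placeholder for any recording in the dataset.
```
# Sketch: verify a recording is mono, 16-bit signed PCM, 16 kHz, 10 s.
import wave

with wave.open("example.wav", "rb") as f:
    assert f.getnchannels() == 1      # single channel
    assert f.getsampwidth() == 2      # 2 bytes per sample = 16-bit signed
    assert f.getframerate() == 16000  # 16 kHz sample rate
    duration_s = f.getnframes() / f.getframerate()
    print(f"Duration: {duration_s:.2f} s")  # expected: 10.00
```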
## Licence
Licensed under CC BY-SA 4.0. See [LICENCE.txt](LICENCE.txt).
AHEAD-DS was derived from [HEAR-DS](https://www.hz-ol.de/en/hear-ds.html) (CC0 licence) and [CHiME 6 dev](https://openslr.org/150/) (CC BY-SA 4.0 licence). If you use this work, please cite the following publications.
AHEAD-DS attribution.
```
@misc{zhong2026datasetmodelauditoryscene,
title={A dataset and model for auditory scene recognition for hearing devices: AHEAD-DS and OpenYAMNet},
author={Henry Zhong and Jörg M. Buchholz and Julian Maclaren and Simon Carlile and Richard Lyon},
year={2026},
eprint={2508.10360},
archivePrefix={arXiv},
primaryClass={cs.SD},
url={https://arxiv.org/abs/2508.10360},
}
```
HEAR-DS attribution.
```
@inproceedings{huwel2020hearing,
title={Hearing aid research data set for acoustic environment recognition},
author={H{\"u}wel, Andreas and Adilo{\u{g}}lu, Kamil and Bach, J{\"o}rg-Hendrik},
booktitle={ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={706--710},
year={2020},
organization={IEEE}
}
```
CHiME 6 attribution.
```
@inproceedings{barker18_interspeech,
author={Jon Barker and Shinji Watanabe and Emmanuel Vincent and Jan Trmal},
title={{The Fifth 'CHiME' Speech Separation and Recognition Challenge: Dataset, Task and Baselines}},
  year={2018},
booktitle={Proc. Interspeech 2018},
pages={1561--1565},
doi={10.21437/Interspeech.2018-1768}
}
@inproceedings{watanabe2020chime,
title={CHiME-6 Challenge: Tackling multispeaker speech recognition for unsegmented recordings},
author={Watanabe, Shinji and Mandel, Michael and Barker, Jon and Vincent, Emmanuel and Arora, Ashish and Chang, Xuankai and Khudanpur, Sanjeev and Manohar, Vimal and Povey, Daniel and Raj, Desh and others},
booktitle={CHiME 2020-6th International Workshop on Speech Processing in Everyday Environments},
year={2020}
}
```