---
pretty_name: "Language-based Audio Retrieval Dataset"
configs:
- config_name: corpus
  data_files:
  - split: train
    path: "data/corpus.parquet"
- config_name: queries
  data_files:
  - split: train
    path: "data/query.parquet"
- config_name: qrels
  data_files:
  - split: train
    path: "data/qrels.parquet"
language:
- en
license:
- other
task_categories:
- other
tags:
- audio
- dcase
- retrieval
size_categories:
- 1k<n<10k
---

# Language-based Audio Retrieval Dataset

This dataset is derived from the **DCASE 2022 Challenge Task 6 (Subtask B) - Language-based Audio Retrieval** evaluation dataset, originally published on [Zenodo](https://zenodo.org/records/6590983).

## Overview

This dataset contains 1,000 audio files paired with natural language captions, designed for evaluating language-based audio retrieval systems. The data has been preprocessed into Parquet files for efficient loading in machine learning workflows.
## Dataset Structure

### Files

- **`corpus.parquet`** (1,000 entries)
  Contains the audio corpus with embedded binary audio data.
  - `file_name`: Name of the audio file
  - `sound_id`: Unique identifier for each sound (from Freesound)
  - `audio`: Binary audio data (WAV format)

- **`query.parquet`** (1,000 queries)
  Contains the natural language queries/captions used for retrieval.
  - `query_id`: Identifier matching the `sound_id`
  - `query`: Natural language description of the audio

- **`qrels.parquet`** (1,000 relevance judgments)
  Ground-truth relevance judgments for evaluation.
  - `query_id`: Query identifier
  - `corpus_id`: Corpus item identifier
  - `score`: Relevance score (1 = relevant)
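
The three files link together through shared identifiers: `qrels` maps each `query_id` to its relevant `corpus_id`, which matches the corpus `sound_id`. A minimal sketch of the join, using toy rows that mirror the documented schema (not the real data):

```python
import pandas as pd

# Toy rows mirroring the documented schema (not the real dataset).
corpus = pd.DataFrame({
    "file_name": ["drainage pipe running.wav"],
    "sound_id": ["235940"],
    "audio": [b"..."],  # real files hold binary WAV bytes here
})
queries = pd.DataFrame({
    "query_id": ["235940"],
    "query": ["A liquid continuously being poured out and hitting a bottom base."],
})
qrels = pd.DataFrame({
    "query_id": ["235940"],
    "corpus_id": ["235940"],
    "score": [1],
})

# Resolve each query to its relevant audio file via the qrels table.
joined = (
    qrels.merge(queries, on="query_id")
         .merge(corpus, left_on="corpus_id", right_on="sound_id")
)
print(joined[["query", "file_name", "score"]])
```

The same two-step merge applies unchanged to the full Parquet files.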

### Original Source Files

- **`retrieval_audio/`**: Directory containing 1,000 WAV audio files
- **`retrieval_audio_metadata.csv`**: Metadata for each audio file, including:
  - File name, keywords, sound_id, Freesound URL
  - Start/end samples, manufacturer, license information
- **`retrieval_captions.csv`**: Natural language captions for each audio file
- **`retrieval_audio.7z`**: Compressed archive of the audio files

### Utility Files

- **`dataset_creator.ipynb`**: Jupyter notebook used to process and create the parquet files
- **`requirements.txt`**: Python dependencies
- **`LICENSE`**: License information

## Dataset Statistics

- **Total audio files**: 1,000
- **Audio format**: WAV (various sample rates, as sourced from Freesound)
- **Caption format**: One natural language description per audio file
- **Audio source**: Freesound platform
- **Average audio duration**: ~15-30 seconds (variable)
## Usage Example

```python
import pandas as pd

# Load the corpus
corpus = pd.read_parquet('corpus.parquet')
print(f"Corpus shape: {corpus.shape}")

# Load queries
queries = pd.read_parquet('query.parquet')
print(f"Number of queries: {len(queries)}")

# Load relevance judgments
qrels = pd.read_parquet('qrels.parquet')
print(f"Number of relevance judgments: {len(qrels)}")

# Access the raw audio bytes of the first corpus entry
audio_binary = corpus.iloc[0]['audio']

# Access the first caption/query
caption = queries.iloc[0]['query']
print(f"Example caption: {caption}")
```
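
Since the `audio` column stores raw WAV bytes, they can be decoded with the standard library's `wave` module through an in-memory buffer. A minimal sketch; the synthesized bytes below are a stand-in for a value read from `corpus.parquet`:

```python
import io
import struct
import wave

# Stand-in for corpus.iloc[0]['audio']: synthesize a short mono WAV in memory.
buf = io.BytesIO()
with wave.open(buf, "wb") as w:
    w.setnchannels(1)
    w.setsampwidth(2)      # 16-bit PCM
    w.setframerate(44100)
    w.writeframes(struct.pack("<4h", 0, 1000, -1000, 0))
audio_binary = buf.getvalue()

# Decode the bytes back into a sample rate and PCM samples.
with wave.open(io.BytesIO(audio_binary), "rb") as w:
    sample_rate = w.getframerate()
    n_frames = w.getnframes()
    samples = struct.unpack(f"<{n_frames}h", w.readframes(n_frames))

print(sample_rate, samples)  # 44100 (0, 1000, -1000, 0)
```

For richer decoding (float arrays, resampling), libraries such as `soundfile` or `librosa` can typically read from the same kind of in-memory buffer.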

## Example Data

### Sample Audio Caption

> "A liquid continuously being poured out and hitting a bottom base."

### Sample Metadata

- **File**: `drainage pipe running.wav`
- **Keywords**: atmosphere, field-recording, nature, spring, water, woods, forest, ambient
- **Sound ID**: 235940
- **Freesound Link**: https://freesound.org/people/odilonmarcenaro/sounds/235940
- **License**: CC BY 3.0
## Task Description

This dataset is designed for **language-based audio retrieval**, where the goal is to:

1. Given a natural language query (caption), retrieve the most relevant audio clip(s) from the corpus
2. Evaluate retrieval performance using standard metrics (e.g., Recall@K, Mean Average Precision)

Each query has exactly one relevant audio file in the corpus (a 1-to-1 mapping).
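
With a single relevant item per query, Recall@K reduces to checking whether the ground-truth `corpus_id` appears in a system's top-K ranked list. A minimal sketch with toy rankings (the IDs and rankings are illustrative, not real system output):

```python
# Toy qrels: each query_id maps to its single relevant corpus_id.
qrels = {"q1": "c1", "q2": "c2", "q3": "c3"}

# Toy system output: ranked corpus_ids per query, best first.
rankings = {
    "q1": ["c1", "c9", "c4"],
    "q2": ["c7", "c2", "c5"],
    "q3": ["c8", "c6", "c1"],
}

def recall_at_k(qrels, rankings, k):
    """Fraction of queries whose relevant item appears in the top k."""
    hits = sum(1 for q, rel in qrels.items() if rel in rankings[q][:k])
    return hits / len(qrels)

print(recall_at_k(qrels, rankings, 1))  # 1 of 3 queries hit at rank 1
print(recall_at_k(qrels, rankings, 2))  # 2 of 3
```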

## Source Dataset Information

### Original Dataset

- **Name**: Language-based audio retrieval DCASE 2022 evaluation dataset
- **Version**: 1.0
- **Published**: May 29, 2022
- **Creator**: Samuel Lipping (Tampere University)
- **DOI**: [10.5281/zenodo.6590983](https://doi.org/10.5281/zenodo.6590983)

### Audio Source

All audio files are sourced from the [Freesound](https://freesound.org) platform and are licensed under various Creative Commons licenses. Please refer to `retrieval_audio_metadata.csv` for the specific license of each file.

### Development Dataset

This is the **evaluation dataset** for DCASE 2022 Task 6B. For training and development, use the **Clotho v2.1 dataset**, available at https://zenodo.org/record/4783391.
## License

- **Audio files**: Licensed under various Creative Commons licenses, as specified in `retrieval_audio_metadata.csv` (sourced from Freesound)
- **Captions**: Tampere University license (see the `LICENSE` file)

## Citation

If you use this dataset, please cite:

```bibtex
@dataset{lipping_2022_6590983,
  author    = {Lipping, Samuel},
  title     = {{Language-based audio retrieval DCASE 2022 evaluation dataset}},
  month     = may,
  year      = 2022,
  publisher = {Zenodo},
  version   = {1.0},
  doi       = {10.5281/zenodo.6590983},
  url       = {https://doi.org/10.5281/zenodo.6590983}
}
```
## References

1. Frederic Font, Gerard Roma, and Xavier Serra. 2013. Freesound technical demo. In Proceedings of the 21st ACM International Conference on Multimedia (MM '13). ACM, New York, NY, USA, 411-412. DOI: https://doi.org/10.1145/2502081.2502245
2. DCASE 2022 Challenge: https://dcase.community/challenge2022/
3. Lipping, S. (2022). Language-based audio retrieval DCASE 2022 evaluation dataset (1.0) [Data set]. Zenodo. https://doi.org/10.5281/zenodo.6590983

## Related Links

- [Original Dataset on Zenodo](https://zenodo.org/records/6590983)
- [Freesound Platform](https://freesound.org)
- [DCASE Challenge](https://dcase.community/)
- [Clotho v2.1 Development Dataset](https://zenodo.org/record/4783391)