---
language:
  - en
license: cc-by-nc-4.0
tags:
  - automatic-speech-recognition
  - speech-enhancement
  - dereverberation
  - robustness
  - audio
  - reverberation
  - librispeech
  - room-acoustics
  - paired-data
  - training-data
---

# LibriRIR-100

## Dataset Summary

LibriRIR-100 is a large-scale paired clean↔reverberant speech training corpus containing exactly 100 hours of speech. Each utterance is paired with a room impulse response (RIR) from RIR-Mega (mandipgoswami/rirmega), stratified across four RT60 reverberation conditions. The corpus is designed as a drop-in training resource for robust ASR, speech enhancement, and dereverberation models.

## Why LibriRIR-100

Existing paired reverberant speech datasets are often small, lack per-sample acoustic metadata, or are not freely available. LibriRIR-100 fills this gap by providing:

- Exactly 100 hours of paired clean and reverberant speech
- RT60/DRR/C50 metadata on every sample for condition-specific training and evaluation
- Stratified RT60 bins (`short`, `medium`, `long`, `very_long`) for balanced acoustic diversity
- Reproducible RIR assignment: each `sample_id` encodes the exact RIR used
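Since the `sample_id` layout is `{speaker_id}_{chapter_id}_{utterance_id}_{rir_id}` (see the schema in Dataset Structure), the RIR assignment can be recovered by splitting the ID. A minimal sketch, assuming the first three fields contain no underscores (LibriSpeech IDs are numeric) while the RIR ID may; the example ID is hypothetical:

```python
def parse_sample_id(sample_id: str) -> dict:
    """Split a LibriRIR-100 sample_id into its four components.

    Splitting on only the first three underscores leaves any
    underscores inside the RIR ID intact.
    """
    speaker_id, chapter_id, utterance_id, rir_id = sample_id.split("_", 3)
    return {
        "speaker_id": speaker_id,
        "chapter_id": chapter_id,
        "utterance_id": utterance_id,
        "rir_id": rir_id,
    }

# Hypothetical sample_id for illustration:
print(parse_sample_id("1034_121119_0001_rir_00042"))
```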

## Use Cases

- Training robust ASR models, e.g. fine-tuning Whisper for reverberant conditions
- Training dereverberation models (WPE, Demucs, neural dereverberation)
- Training speech enhancement models: denoising, dereverberation, super-resolution
- Acoustic condition-aware model training, using RT60/DRR metadata for curriculum or multi-task learning
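The per-sample RT60 metadata makes curriculum learning straightforward: order training samples from least to most reverberant. A pure-Python sketch over hypothetical metadata records (field names follow the dataset schema; samples with missing RT60 are placed last since their difficulty is unknown):

```python
def curriculum_order(records):
    """Sort metadata records from least to most reverberant,
    with unknown-RT60 records at the end."""
    return sorted(
        records,
        key=lambda r: (r["RT60_T30_s"] is None, r["RT60_T30_s"] or 0.0),
    )

# Hypothetical records for illustration:
records = [
    {"sample_id": "a", "RT60_T30_s": 1.8},
    {"sample_id": "b", "RT60_T30_s": 0.3},
    {"sample_id": "c", "RT60_T30_s": None},
    {"sample_id": "d", "RT60_T30_s": 0.9},
]
print([r["sample_id"] for r in curriculum_order(records)])
```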

## Dataset Structure

| Column | Type | Description |
|---|---|---|
| `sample_id` | string | `{speaker_id}_{chapter_id}_{utterance_id}_{rir_id}` |
| `audio_clean` | Audio(16kHz) | Clean 16 kHz mono FLAC |
| `audio_reverb` | Audio(16kHz) | Reverberant 16 kHz mono FLAC |
| `text` | string | Ground-truth transcript (lowercased) |
| `speaker_id` | string | LibriSpeech speaker ID |
| `chapter_id` | string | LibriSpeech chapter ID |
| `utterance_id` | string | LibriSpeech utterance ID |
| `rir_id` | string | RIR-Mega `sample_id` |
| `rt60_bin` | string | `short` / `medium` / `long` / `very_long` / `unknown` |
| `RT60_T30_s` | float | RIR RT60 in seconds (null if missing) |
| `DRR_dB` | float | RIR direct-to-reverberant ratio in dB (null if missing) |
| `C50_dB` | float | RIR clarity index C50 in dB (null if missing) |
| `duration_s` | float | Utterance duration in seconds |
| `split` | string | `train` or `validation` |

**Splits:** 90% train / 10% validation, stratified by `rt60_bin`.

**RT60 bin distribution:** approximately 25% each for `short`, `medium`, `long`, and `very_long`.
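The stratified 90/10 split can be sketched as: group samples by bin, then take 90% of each bin for training, so both splits keep the corpus's bin proportions. A pure-Python illustration on hypothetical records (not the published pipeline, which lives in the linked repository):

```python
from collections import defaultdict

def stratified_split(records, train_frac=0.9):
    """Split records per rt60_bin so train and validation keep
    the same bin proportions as the full corpus."""
    by_bin = defaultdict(list)
    for r in records:
        by_bin[r["rt60_bin"]].append(r)

    train, validation = [], []
    for bin_records in by_bin.values():
        cut = int(len(bin_records) * train_frac)
        train.extend(bin_records[:cut])
        validation.extend(bin_records[cut:])
    return train, validation

# Hypothetical records: 10 per bin.
records = [
    {"sample_id": f"{b}_{i}", "rt60_bin": b}
    for b in ("short", "medium", "long", "very_long")
    for i in range(10)
]
train, validation = stratified_split(records)
print(len(train), len(validation))
```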

## How It's Built

- Speech source: LibriSpeech `train-clean-100` (100 h subset, CC BY 4.0)
- RIR source: RIR-Mega v2 (mandipgoswami/rirmega)
- Pipeline: full reproduction code at github.com/mandip42/LibriRIR-100
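At its core, such a pipeline convolves each clean utterance with its assigned RIR, trims to the original length, and renormalizes. A minimal NumPy sketch of that step on toy signals; the repository linked above is authoritative for the actual build:

```python
import numpy as np

def apply_rir(clean: np.ndarray, rir: np.ndarray) -> np.ndarray:
    """Convolve a clean waveform with a room impulse response,
    trim to the original length, and rescale so the reverberant
    signal has the same peak level as the clean one."""
    reverb = np.convolve(clean, rir)[: len(clean)]
    peak = np.max(np.abs(reverb))
    if peak > 0:
        reverb = reverb * (np.max(np.abs(clean)) / peak)
    return reverb

# Toy 1-second 16 kHz signal and a synthetic decaying RIR tail.
rng = np.random.default_rng(0)
clean = rng.standard_normal(16000).astype(np.float32)
rir = np.exp(-np.linspace(0.0, 8.0, 4000)).astype(np.float32)
reverb = apply_rir(clean, rir)
print(reverb.shape)
```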

## Quickstart

```python
from datasets import load_dataset

ds = load_dataset("mandipgoswami/LibriRIR-100")
sample = ds["train"][0]
# sample["audio_clean"], sample["audio_reverb"], sample["text"], sample["RT60_T30_s"]
```

## Reproducing This Dataset

Clone the pipeline and run:

```bash
git clone https://github.com/mandip42/LibriRIR-100.git
cd LibriRIR-100
pip install -e .
python scripts/build_and_publish.py --config configs/default.yaml
```

## Limitations

- English only (LibriSpeech)
- Single RIR per utterance (no multi-condition augmentation in this release)
- RIR-Mega metadata (RT60, DRR, C50) may be missing for some samples

## Citation

```bibtex
@misc{goswami2025libririr100,
  title        = {LibriRIR-100: A Paired Clean-Reverberant Speech Training Corpus},
  author       = {Goswami, Mandip},
  year         = {2025},
  publisher    = {Hugging Face},
  url          = {https://huggingface.co/datasets/mandipgoswami/LibriRIR-100}
}

@misc{goswami2025rirmega,
  title        = {RIR-Mega: A Large-Scale Room Impulse Response Corpus},
  author       = {Goswami, Mandip},
  year         = {2025},
  eprint       = {2510.18917},
  archivePrefix= {arXiv}
}
```

## License

CC BY-NC 4.0. Audio content is derived from LibriSpeech (CC BY 4.0) and RIR-Mega (CC BY-NC 4.0).