Paper: [RIR-Mega: a large-scale simulated room impulse response dataset for machine learning and room acoustics modeling](https://arxiv.org/abs/2510.18917)
Audio clips range from about 1.49 s to 27.8 s in duration.
Whisper-RIR-Mega is a benchmark dataset of paired clean and reverberant speech for evaluating ASR robustness to room acoustics. Each sample pairs a clean utterance with its reverberant counterpart, a reference transcript, and the source RIR's metadata (rir_id, RT60, DRR, C50, etc.) when available. Splits are stratified by RT60 (or DRR) when metadata exists, so the benchmark is balanced across acoustic conditions.
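The stratification idea can be sketched as follows. This is a minimal illustration, not the benchmark's actual pipeline: the `stratified_split` helper and the RT60 bucket edges are hypothetical.

```python
import random

def stratified_split(rt60s, bins=(0.3, 0.6, 1.0), val_frac=0.5, seed=0):
    """Assign sample indices to validation/test, balanced across RT60 buckets.

    `bins` are illustrative RT60 bucket edges in seconds; the real
    pipeline's edges (and whether it uses DRR instead) may differ.
    """
    rng = random.Random(seed)
    buckets = {}
    for i, rt in enumerate(rt60s):
        b = sum(rt >= edge for edge in bins)  # bucket index 0..len(bins)
        buckets.setdefault(b, []).append(i)
    val, test = [], []
    for idxs in buckets.values():
        rng.shuffle(idxs)                     # randomize within each bucket
        cut = int(len(idxs) * val_frac)       # same fraction from every bucket
        val.extend(idxs[:cut])
        test.extend(idxs[cut:])
    return sorted(val), sorted(test)
```

Because every RT60 bucket contributes the same fraction to each split, no split over-represents a particular acoustic condition.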
Use this dataset to benchmark ASR models (e.g., Whisper) on matched clean/reverberant audio:
```python
from huggingface_hub import snapshot_download
from datasets import load_from_disk
import whisper
import jiwer

# Download and load (use load_dataset if your HF datasets version supports it)
path = snapshot_download("mandipgoswami/whisper-rirmega-bench", repo_type="dataset")
ds = load_from_disk(path + "/hf_dataset")["test"]
model = whisper.load_model("base")

# Score one sample on clean vs. reverberant audio
row = ds[0]
clean_wer = jiwer.wer(row["text_ref"], model.transcribe(row["audio_clean"]["path"], language="en")["text"])
reverb_wer = jiwer.wer(row["text_ref"], model.transcribe(row["audio_reverb"]["path"], language="en")["text"])
print(f"Clean WER: {clean_wer:.4f}  Reverb WER: {reverb_wer:.4f}")
```
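Extending the single-sample example to a whole split means aggregating errors over all references rather than averaging per-sample WERs. A minimal sketch of corpus-level WER using a word-level edit distance (this reimplements the metric for illustration; it is not jiwer's exact implementation, which also normalizes text):

```python
def word_errors(ref, hyp):
    """Levenshtein distance between word sequences; returns (errors, ref_len)."""
    r, h = ref.split(), hyp.split()
    prev = list(range(len(h) + 1))
    for i, rw in enumerate(r, 1):
        cur = [i]
        for j, hw in enumerate(h, 1):
            cur.append(min(prev[j] + 1,            # deletion
                           cur[j - 1] + 1,         # insertion
                           prev[j - 1] + (rw != hw)))  # substitution / match
        prev = cur
    return prev[-1], len(r)

def corpus_wer(pairs):
    """Corpus WER: total edit errors / total reference words over (ref, hyp) pairs."""
    errs = refs = 0
    for ref, hyp in pairs:
        e, n = word_errors(ref, hyp)
        errs += e
        refs += n
    return errs / refs
```

Running `corpus_wer` separately over the clean and reverberant transcriptions of a split gives the clean/reverb numbers reported in the leaderboard-style table below.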
| Column | Type | Description |
|---|---|---|
| sample_id | string | Unique ID (from LibriSpeech + RIR) |
| audio_clean | Audio | Clean 16 kHz audio |
| audio_reverb | Audio | Reverberant 16 kHz audio |
| text_ref | string | Reference transcript |
| rir_id | string | RIR-Mega sample ID |
| split | string | train / validation / test |
| rir_* | mixed | RIR metadata (RT60_T30_s, DRR_dB, …) |
Splits: validation and test for benchmarking; train optional (default config uses test + validation only).
Full reproducibility: see the GitHub repo and run:

```bash
python -m bench.build_and_publish --config configs/default.yaml
```
The leaderboard is generated by the same pipeline and updated on each release. Example (your run may vary):
| model_id | clean | reverb | Δ WER |
|---|---|---|---|
| openai/whisper-tiny | … | … | … |
| openai/whisper-base | … | … | … |
| openai/whisper-small | … | … | … |
| openai/whisper-medium | … | … | … |
| openai/whisper-large-v3 | … | … | … |
See the Space for interactive charts (WER vs RT60/DRR) and the latest leaderboard.
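The WER-vs-RT60 breakdown behind those charts can be approximated by bucketing per-sample WERs on the RT60 metadata column. A sketch, assuming (rt60, wer) pairs have already been extracted from the dataset; the `wer_by_rt60` helper and the bucket edges are illustrative:

```python
from collections import defaultdict

def wer_by_rt60(samples, edges=(0.3, 0.6, 1.0)):
    """Average per-sample WER within RT60 buckets.

    `samples` is an iterable of (rt60_seconds, wer) pairs; `edges` are
    illustrative bucket boundaries in seconds.
    """
    buckets = defaultdict(list)
    for rt60, wer in samples:
        buckets[sum(rt60 >= e for e in edges)].append(wer)
    # bucket index -> mean WER, sorted from short to long RT60
    return {b: sum(v) / len(v) for b, v in sorted(buckets.items())}
```

A rising mean WER across buckets indicates that recognition degrades with longer reverberation times, which is the trend the interactive charts visualize.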
The number of RIRs paired with each utterance is controlled by `k_rirs_per_utt` in the config.

Citation (BibTeX):
```bibtex
@misc{whisper-rirmega-bench,
  title     = {Whisper-RIR-Mega: Paired Clean-Reverberant Speech Robustness Benchmark},
  author    = {Goswami, Mandip},
  year      = {2025},
  publisher = {Hugging Face},
  url       = {https://huggingface.co/datasets/mandipgoswami/whisper-rirmega-bench},
  note      = {Dataset built with LibriSpeech and RIR-Mega.}
}
```
RIR-Mega citation:
```bibtex
@misc{goswami2025rirmega,
  title         = {RIR-Mega: A Large-Scale Room Impulse Response Corpus with Benchmarks},
  author        = {Goswami, Mandip},
  year          = {2025},
  eprint        = {2510.18917},
  archivePrefix = {arXiv},
  primaryClass  = {cs.SD},
  url           = {https://arxiv.org/abs/2510.18917}
}
```
To rebuild the dataset locally:

1. Install the package: `pip install -e .`
2. Set `HF_TOKEN` (and optionally reduce `n_utterances` in `configs/default.yaml` for a quick run).
3. Run `python -m bench.build_and_publish --config configs/default.yaml` (the build publishes to the Hub when `HF_TOKEN` is set).

For a <5 minute smoke test: `python scripts/sanity_check.py`