---
pretty_name: LibriSpeech Evaluation Annotations
tags:
  - automatic-speech-recognition
  - asr
  - evaluation
  - librispeech
task_categories:
  - automatic-speech-recognition
language:
  - en
license: cc-by-4.0
---

# 📚 LibriSpeech Evaluation Annotations Dataset

### 📖 **Dataset Description**

This dataset contains **evaluation hypotheses and reference transcripts** for the [LibriSpeech ASR Corpus](https://www.openslr.org/12). It is designed for benchmarking Automatic Speech Recognition (ASR) models such as [OpenAI Whisper](https://github.com/openai/whisper) and [Faster-Whisper](https://github.com/guillaumekln/faster-whisper).

---

### 📦 **Dataset Highlights**

- 📚 Based on the official [LibriSpeech ASR Corpus](https://www.openslr.org/12/).
- 📈 Provides standardized **hypotheses** and **reference transcripts** for ASR evaluation.
- 📂 Includes pre-generated `.trn` files for multiple ASR models and versions.
- ✅ Ideal for benchmarking **Word Error Rate (WER)** and comparing ASR model performance.
---

### 📚 **Supported Tasks**

- ✔️ **Automatic Speech Recognition (ASR) Evaluation**
- ✔️ **Benchmarking Word Error Rate (WER)**
- ✔️ **Model Comparison Across Dataset Splits**

---

### 🌍 Languages

- English (`en`)

---

### 📂 Dataset Structure

```
librispeech-eval/
├── generate_csv.py
├── dataset.py
├── all_splits.csv
├── test-clean/
│   ├── test-clean.ref.trn
│   └── test-clean.hyp.whisper-base-v20240930.trn
├── test-other/
│   ├── test-other.ref.trn
│   └── test-other.hyp.whisper-base-v20240930.trn
├── dev-clean/
│   ├── dev-clean.ref.trn
│   └── dev-clean.hyp.whisper-base-v20240930.trn
└── dev-other/
    ├── dev-other.ref.trn
    └── dev-other.hyp.whisper-base-v20240930.trn
```

---

### 📖 Usage Example

```python
from datasets import load_dataset
import werpy
import werx

# 📥 Load the consolidated CSV from the Hugging Face Hub
dataset = load_dataset(
    "analyticsinmotion/librispeech-eval",
    data_files="all_splits.csv",
    split="train"
)

# 📄 Specify which split and model/version to evaluate
split = "test-clean"
model_name = "whisper-base"
model_version = "v20240930"

# 📚 Filter references and hypotheses for the chosen split/model/version
filtered = dataset.filter(
    lambda x: x["split"] == split
    and x["model_name"] == model_name
    and x["model_version"] == model_version
)
references = [row["reference"] for row in filtered]
hypotheses = [row["hypothesis"] for row in filtered]

# ✅ Normalize using werpy
normalized_refs = [werpy.normalize(ref) for ref in references]
normalized_hyps = [werpy.normalize(hyp) for hyp in hypotheses]

# 📈 Compute WER directly using werx
final_wer = werx.wer(normalized_refs, normalized_hyps)
print(f"{model_name} WER (normalized) on {split}: {final_wer:.2%}")
```

#### 🏃 Example Output

```
README.md: 100% 5.66k/5.66k [00:00<00:00, 839kB/s]
all_splits.csv: 100% 2.65M/2.65M [00:01<00:00, 2.50MB/s]
Generating train split: 11126/0 [00:00<00:00, 145686.84 examples/s]
Filter: 100% 11126/11126 [00:00<00:00, 123119.41 examples/s]
whisper-base WER (normalized) on test-clean: 5.96%
```
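Beyond a single split, the same CSV can drive a per-split comparison. The sketch below groups rows by `split` and computes corpus-level WER (total word-level edits divided by total reference words) with a small Levenshtein helper so it runs standalone; the rows are illustrative toy data, not actual dataset values, and in practice you would iterate over the filtered `datasets` rows and use `werpy`/`werx` as in the usage example.

```python
from collections import defaultdict

def word_edits(reference: str, hypothesis: str) -> int:
    """Word-level Levenshtein distance between two transcripts."""
    ref, hyp = reference.split(), hypothesis.split()
    dp = list(range(len(hyp) + 1))  # dp[j] = distance ref[:i] vs hyp[:j]
    for i in range(1, len(ref) + 1):
        prev, dp[0] = dp[0], i
        for j in range(1, len(hyp) + 1):
            cur = dp[j]
            dp[j] = min(dp[j] + 1,                       # deletion
                        dp[j - 1] + 1,                   # insertion
                        prev + (ref[i - 1] != hyp[j - 1]))  # substitution
            prev = cur
    return dp[-1]

# Toy rows shaped like all_splits.csv (illustrative values only)
rows = [
    {"split": "test-clean", "reference": "the cat sat", "hypothesis": "the cat sat"},
    {"split": "test-clean", "reference": "hello world", "hypothesis": "hello word"},
    {"split": "test-other", "reference": "a quick brown fox", "hypothesis": "a quick brown box"},
]

totals = defaultdict(lambda: [0, 0])  # split -> [total edits, total reference words]
for row in rows:
    totals[row["split"]][0] += word_edits(row["reference"], row["hypothesis"])
    totals[row["split"]][1] += len(row["reference"].split())

for split, (edits, n_words) in sorted(totals.items()):
    print(f"{split}: WER = {edits / n_words:.2%}")
# → test-clean: WER = 20.00%
# → test-other: WER = 25.00%
```

Corpus-level WER weights each utterance by its reference length, which is why the edits and word counts are summed per split before dividing rather than averaging per-utterance rates.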
---

### 📄 Generating the Consolidated CSV

You can generate or update `all_splits.csv` at any time using the included script:

```bash
python generate_csv.py
```

- The script automatically scans the available dataset splits and hypothesis files.
- It writes a consolidated CSV file to `librispeech-eval/all_splits.csv`.
- The CSV makes it easier to load and analyze the dataset programmatically.

---

### 📄 CSV Columns (`all_splits.csv`)

| Column          | Description                                      |
|-----------------|--------------------------------------------------|
| `split`         | Dataset split (e.g., `test-clean`, `test-other`) |
| `hypothesis`    | Predicted transcript                             |
| `reference`     | Ground-truth transcript                          |
| `model_name`    | ASR model name (e.g., `whisper-base`)            |
| `model_version` | ASR model version (e.g., `v20240930`)            |

---

### 📅 Dataset Splits

| Split Name   | Type           | Data Characteristics      | Samples | Duration (Hours) | Suitable For |
|--------------|----------------|---------------------------|---------|------------------|--------------|
| `test-clean` | Test Set       | Clean, high-quality audio | 2,620   | 5.4              | Evaluating model **performance** under ideal conditions |
| `test-other` | Test Set       | Noisy, challenging audio  | 2,939   | 5.1              | Evaluating model **robustness** in challenging/noisy environments |
| `dev-clean`  | Validation Set | Clean, high-quality audio | 2,703   | 5.4              | **Hyperparameter tuning** and validation under ideal conditions |
| `dev-other`  | Validation Set | Noisy, challenging audio  | 2,864   | 5.3              | **Stress-testing** during validation under difficult conditions |

---

### 📄 License

This dataset is licensed under [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/).

- You are free to **share** and **adapt** the data, provided appropriate credit is given.
- The original audio and official transcripts remain under the [LibriSpeech License](https://www.openslr.org/12/).
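The consolidation step reads the per-split `.trn` files. Assuming they follow the common SCLITE `trn` convention — transcript text followed by an utterance id in parentheses — a minimal parser might look like the sketch below; the helper names and the sample line are illustrative, not taken from `generate_csv.py`.

```python
import re

def parse_trn_line(line: str) -> tuple[str, str]:
    """Split one trn line into (utterance_id, transcript).

    Assumes the SCLITE convention: 'TRANSCRIPT TEXT (utterance-id)'.
    """
    match = re.match(r"^(.*)\((\S+)\)\s*$", line.strip())
    if not match:
        raise ValueError(f"Not a valid trn line: {line!r}")
    return match.group(2), match.group(1).strip()

def load_trn(path: str) -> dict[str, str]:
    """Return {utterance_id: transcript} for a .trn file."""
    with open(path, encoding="utf-8") as f:
        return dict(parse_trn_line(line) for line in f if line.strip())

# Example line (utterance ids follow LibriSpeech's speaker-chapter-utterance scheme)
uid, text = parse_trn_line("HE HOPED THERE WOULD BE STEW FOR DINNER (1089-134686-0000)")
print(uid, "->", text)
```

Pairing a `*.ref.trn` and a `*.hyp.*.trn` file by utterance id then yields the `reference`/`hypothesis` columns of `all_splits.csv`, with the split, model name, and version recoverable from the file path.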
---

### 📢 Citation

If you use this dataset, please cite the original LibriSpeech paper:

```bibtex
@inproceedings{panayotov2015librispeech,
  title={Librispeech: An ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={ICASSP},
  pages={5206--5210},
  year={2015},
  organization={IEEE}
}
```