---
dataset_info:
pretty_name: LibriSpeech Evaluation Annotations
tags:
- automatic-speech-recognition
- asr
- evaluation
- librispeech
task_categories:
- automatic-speech-recognition
language:
- en
license: cc-by-4.0
---

# LibriSpeech Evaluation Annotations Dataset

### Dataset Description

This dataset contains **evaluation hypotheses and reference transcripts** for the [LibriSpeech ASR Corpus](https://www.openslr.org/12). It is designed for benchmarking Automatic Speech Recognition (ASR) models such as [OpenAI Whisper](https://github.com/openai/whisper) and [Faster-Whisper](https://github.com/guillaumekln/faster-whisper).

---

### Dataset Highlights

- Based on the official [LibriSpeech ASR Corpus](https://www.openslr.org/12/).
- Provides standardized **hypotheses** and **reference transcripts** for ASR evaluation.
- Includes pre-generated `.trn` files for multiple ASR models and versions.
- Ideal for benchmarking **Word Error Rate (WER)** and comparing ASR model performance.

---

### Supported Tasks

- **Automatic Speech Recognition (ASR) Evaluation**
- **Benchmarking Word Error Rate (WER)**
- **Model Comparison Across Dataset Splits**

---

### Languages

- English (`en`)

---

### Dataset Structure

```
librispeech-eval/
├── generate_csv.py
├── dataset.py
├── all_splits.csv
├── test-clean/
│   ├── test-clean.ref.trn
│   └── test-clean.hyp.whisper-base-v20240930.trn
├── test-other/
│   ├── test-other.ref.trn
│   └── test-other.hyp.whisper-base-v20240930.trn
├── dev-clean/
│   ├── dev-clean.ref.trn
│   └── dev-clean.hyp.whisper-base-v20240930.trn
└── dev-other/
    ├── dev-other.ref.trn
    └── dev-other.hyp.whisper-base-v20240930.trn
```
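
The internal layout of the `.trn` files is not documented above, but `.trn` files conventionally follow the SCLITE transcript format, where each line is a transcript followed by a parenthesized utterance ID. Assuming that convention holds here, a minimal parsing sketch (`parse_trn_line` is a hypothetical helper, and the utterance ID below is illustrative):

```python
def parse_trn_line(line: str) -> tuple[str, str]:
    """Split one .trn line into (utterance_id, transcript).

    Assumes the SCLITE convention: "TRANSCRIPT TEXT (utterance-id)".
    """
    # The utterance ID is the final whitespace-separated token, in parentheses
    text, _, utt = line.rstrip().rpartition(" ")
    return utt.strip("()"), text


# Illustrative line in the assumed format
utt_id, transcript = parse_trn_line("HELLO WORLD (1089-134686-0000)")
print(utt_id)      # 1089-134686-0000
print(transcript)  # HELLO WORLD
```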

---

### Usage Example

```python
from datasets import load_dataset
import werpy
import werx

# Load the consolidated CSV from the Hugging Face Hub
dataset = load_dataset(
    "analyticsinmotion/librispeech-eval",
    data_files="all_splits.csv",
    split="train",
)

# Specify which split and model/version to evaluate
split = "test-clean"
model_name = "whisper-base"
model_version = "v20240930"

# Filter references and hypotheses for the chosen split/model/version
filtered = dataset.filter(
    lambda x: x["split"] == split
    and x["model_name"] == model_name
    and x["model_version"] == model_version
)

references = [row["reference"] for row in filtered]
hypotheses = [row["hypothesis"] for row in filtered]

# Normalize text with werpy before scoring
normalized_refs = [werpy.normalize(ref) for ref in references]
normalized_hyps = [werpy.normalize(hyp) for hyp in hypotheses]

# Compute WER with werx
final_wer = werx.wer(normalized_refs, normalized_hyps)

print(f"{model_name} WER (normalized) on {split}: {final_wer:.2%}")
```

#### Example Output

```
README.md: 100% 5.66k/5.66k [00:00<00:00, 839kB/s]
all_splits.csv: 100% 2.65M/2.65M [00:01<00:00, 2.50MB/s]
Generating train split: 11126/0 [00:00<00:00, 145686.84 examples/s]
Filter: 100% 11126/11126 [00:00<00:00, 123119.41 examples/s]
whisper-base WER (normalized) on test-clean: 5.96%
```
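
WER itself is the word-level Levenshtein (edit) distance between hypothesis and reference, divided by the number of reference words. A minimal pure-Python sketch of that definition (not the `werx` implementation, just a reference for sanity-checking results):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level edit distance divided by the reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # prev[j] holds the edit distance between the processed reference prefix
    # and the first j hypothesis words
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        cur = [i]
        for j, h in enumerate(hyp, 1):
            cur.append(min(
                prev[j] + 1,              # deletion
                cur[j - 1] + 1,           # insertion
                prev[j - 1] + (r != h),   # substitution (free on match)
            ))
        prev = cur
    return prev[-1] / len(ref)


print(word_error_rate("the cat sat", "the cat sat"))  # 0.0
print(word_error_rate("the cat sat", "the bat sat"))  # one substitution in three words
```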

---

### Generating the Consolidated CSV

You can generate or update `all_splits.csv` at any time using the included script:

```bash
python generate_csv.py
```

- This script automatically scans the available dataset splits and hypothesis files.
- It generates a consolidated CSV file at `librispeech-eval/all_splits.csv`.
- The CSV makes it easier to load and analyze the dataset programmatically.
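
The internals of `generate_csv.py` are not reproduced here, but given the directory layout above, the model name and version can be recovered from each hypothesis filename. A sketch of that parsing step (the regex and variable names are illustrative, not taken from the script):

```python
import re

# Hypothesis files follow "<split>.hyp.<model>-<version>.trn" in the layout above
HYP_PATTERN = re.compile(r"^(?P<split>.+)\.hyp\.(?P<model>.+)-(?P<version>v\d+)\.trn$")

match = HYP_PATTERN.match("test-clean.hyp.whisper-base-v20240930.trn")
if match:
    print(match.group("split"))    # test-clean
    print(match.group("model"))    # whisper-base
    print(match.group("version"))  # v20240930
```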

---

### CSV Columns (`all_splits.csv`)

| Column          | Description                                      |
|-----------------|--------------------------------------------------|
| `split`         | Dataset split (e.g., `test-clean`, `test-other`) |
| `hypothesis`    | Predicted transcript                             |
| `reference`     | Ground-truth transcript                          |
| `model_name`    | ASR model name (e.g., `whisper-base`)            |
| `model_version` | ASR model version (e.g., `v20240930`)            |
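
For quick checks without the `datasets` library, a file in this schema can be read with the standard library alone. The sample rows below are hypothetical illustrations of the schema, not actual dataset contents:

```python
import csv
import io
from collections import Counter

# Hypothetical rows in the all_splits.csv schema (not real dataset contents)
sample = """\
split,hypothesis,reference,model_name,model_version
test-clean,hello world,hello world,whisper-base,v20240930
test-clean,good morning,good morning,whisper-base,v20240930
test-other,good bye,goodbye,whisper-base,v20240930
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Count rows per (model, split) pair
counts = Counter((r["model_name"], r["split"]) for r in rows)
print(counts[("whisper-base", "test-clean")])  # 2
```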

---

### Dataset Splits

| Split Name   | Type           | Data Characteristics      | Samples | Duration (Hours) | Suitable For |
|--------------|----------------|---------------------------|---------|------------------|--------------|
| `test-clean` | Test Set       | Clean, high-quality audio | 2,620   | 5.4              | Evaluating model **performance** under ideal conditions |
| `test-other` | Test Set       | Noisy, challenging audio  | 2,939   | 5.1              | Evaluating model **robustness** to challenging/noisy environments |
| `dev-clean`  | Validation Set | Clean, high-quality audio | 2,703   | 5.4              | **Hyperparameter tuning** and validation under ideal conditions |
| `dev-other`  | Validation Set | Noisy, challenging audio  | 2,864   | 5.3              | **Stress-testing** during validation under difficult conditions |

---

### License

This dataset is licensed under the [Creative Commons Attribution 4.0 International (CC BY 4.0)](https://creativecommons.org/licenses/by/4.0/) license.

- You are free to **share** and **adapt** the data, provided appropriate credit is given.
- The original audio and official transcripts remain under the [LibriSpeech license](https://www.openslr.org/12/).

---

### Citation

If you use this dataset, please cite the original LibriSpeech paper:

```bibtex
@inproceedings{panayotov2015librispeech,
  title={Librispeech: An ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={ICASSP},
  pages={5206--5210},
  year={2015},
  organization={IEEE}
}
```