---
pretty_name: LibriSpeech Evaluation Annotations
tags:
  - automatic-speech-recognition
  - asr
  - evaluation
  - librispeech
task_categories:
  - automatic-speech-recognition
language:
  - en
license: cc-by-4.0
---

# πŸ“š LibriSpeech Evaluation Annotations Dataset

## πŸ“– Dataset Description

This dataset contains evaluation hypotheses and reference transcripts for the LibriSpeech ASR Corpus. It is designed for benchmarking Automatic Speech Recognition (ASR) models such as OpenAI Whisper and Faster-Whisper.


## πŸ“¦ Dataset Highlights

- πŸ“š Based on the official LibriSpeech ASR Corpus.
- πŸ“ˆ Provides standardized hypotheses and reference transcripts for ASR evaluation.
- πŸ“‚ Includes pre-generated `.trn` files for multiple ASR models and versions.
- βœ… Ideal for benchmarking Word Error Rate (WER) and comparing ASR model performance.

## πŸ“š Supported Tasks

- βœ”οΈ Automatic Speech Recognition (ASR) evaluation
- βœ”οΈ Benchmarking Word Error Rate (WER)
- βœ”οΈ Model comparison across dataset splits

## 🌍 Languages

- English (`en`)

## πŸ“‚ Dataset Structure

```text
librispeech-eval/
β”œβ”€β”€ generate_csv.py
β”œβ”€β”€ dataset.py
β”œβ”€β”€ all_splits.csv
β”œβ”€β”€ test-clean/
β”‚   β”œβ”€β”€ test-clean.ref.trn
β”‚   └── test-clean.hyp.whisper-base-v20240930.trn
β”œβ”€β”€ test-other/
β”‚   β”œβ”€β”€ test-other.ref.trn
β”‚   └── test-other.hyp.whisper-base-v20240930.trn
β”œβ”€β”€ dev-clean/
β”‚   β”œβ”€β”€ dev-clean.ref.trn
β”‚   └── dev-clean.hyp.whisper-base-v20240930.trn
└── dev-other/
    β”œβ”€β”€ dev-other.ref.trn
    └── dev-other.hyp.whisper-base-v20240930.trn
```
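
The `.trn` files appear to follow the SCLITE transcript convention, where each line carries the transcript text followed by the utterance ID in parentheses. A minimal parser sketch (the utterance ID below is illustrative, not taken from the dataset):

```python
import re

def parse_trn_line(line: str) -> tuple[str, str]:
    """Split a SCLITE-style trn line into (utterance_id, transcript)."""
    match = re.match(r"^(.*)\(([^()]+)\)\s*$", line.strip())
    if not match:
        raise ValueError(f"Malformed trn line: {line!r}")
    transcript, utt_id = match.groups()
    return utt_id, transcript.strip()

uid, text = parse_trn_line("HELLO WORLD (1089-134686-0000)")
print(uid, "->", text)  # 1089-134686-0000 -> HELLO WORLD
```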

## πŸ“– Usage Example

```python
from datasets import load_dataset
import werpy
import werx

# πŸ“₯ Load the consolidated CSV from the Hugging Face Hub
dataset = load_dataset(
    "analyticsinmotion/librispeech-eval",
    data_files="all_splits.csv",
    split="train"
)

# πŸ“„ Specify which split and model/version to evaluate
split = "test-clean"
model_name = "whisper-base"
model_version = "v20240930"

# πŸ“š Filter references and hypotheses for the chosen split/model/version
filtered = dataset.filter(
    lambda x: x["split"] == split and
              x["model_name"] == model_name and
              x["model_version"] == model_version
)

references = [row["reference"] for row in filtered]
hypotheses = [row["hypothesis"] for row in filtered]

# βœ… Normalize using werpy
normalized_refs = [werpy.normalize(ref) for ref in references]
normalized_hyps = [werpy.normalize(hyp) for hyp in hypotheses]

# πŸ“ˆ Compute WER directly using werx
final_wer = werx.wer(normalized_refs, normalized_hyps)

print(f"{model_name} WER (normalized) on {split}: {final_wer:.2%}")
```

πŸƒ Example Output

README.md: 100%
5.66k/5.66k [00:00<00:00, 839kB/s]
all_splits.csv: 100%
2.65M/2.65M [00:01<00:00, 2.50MB/s]
Generating train split: 
11126/0 [00:00<00:00, 145686.84 examples/s]
Filter: 100%
11126/11126 [00:00<00:00, 123119.41 examples/s]
whisper-base WER (normalized) on test-clean: 5.96%

## πŸ“„ Generating the Consolidated CSV

You can generate or update `all_splits.csv` at any time using the included script:

```shell
python generate_csv.py
```

- The script automatically scans the available dataset splits and hypothesis files.
- It writes a consolidated CSV file to `librispeech-eval/all_splits.csv`.
- The CSV makes it easier to load and analyze the dataset programmatically.
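
The authoritative logic lives in `generate_csv.py`; the sketch below only illustrates the scan-and-consolidate idea, with file naming inferred from the directory tree above and the `consolidate` helper name chosen for this example:

```python
import csv
import re
from pathlib import Path

def consolidate(root: str, out_csv: str) -> int:
    """Scan <split>/<split>.ref.trn and matching <split>.hyp.<model>-<version>.trn
    files, join hypotheses to references on utterance ID, and write one CSV.
    Returns the number of rows written."""

    def read_trn(path: Path) -> dict[str, str]:
        # Each trn line: "TRANSCRIPT TEXT (utterance-id)"
        entries = {}
        for line in path.read_text().splitlines():
            m = re.match(r"^(.*)\(([^()]+)\)\s*$", line.strip())
            if m:
                entries[m.group(2)] = m.group(1).strip()
        return entries

    rows = []
    for ref_path in sorted(Path(root).glob("*/*.ref.trn")):
        split = ref_path.parent.name
        refs = read_trn(ref_path)
        for hyp_path in sorted(ref_path.parent.glob(f"{split}.hyp.*.trn")):
            # e.g. "test-clean.hyp.whisper-base-v20240930.trn"
            model_name, _, model_version = hyp_path.stem.split(".")[-1].rpartition("-")
            for utt_id, hyp in read_trn(hyp_path).items():
                if utt_id in refs:
                    rows.append({"split": split, "hypothesis": hyp,
                                 "reference": refs[utt_id],
                                 "model_name": model_name,
                                 "model_version": model_version})
    with open(out_csv, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=["split", "hypothesis", "reference",
                                               "model_name", "model_version"])
        writer.writeheader()
        writer.writerows(rows)
    return len(rows)
```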


## πŸ“„ CSV Columns (`all_splits.csv`)

| Column | Description |
|--------|-------------|
| `split` | Dataset split (e.g., `test-clean`, `test-other`) |
| `hypothesis` | Predicted transcript |
| `reference` | Ground-truth transcript |
| `model_name` | ASR model name (e.g., `whisper-base`) |
| `model_version` | ASR model version (e.g., `v20240930`) |
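
With those columns, the consolidated CSV also lends itself to quick analysis in pandas. A sketch using toy rows that mirror the schema (the values below are hypothetical; on the real data you would `pd.read_csv("all_splits.csv")` instead):

```python
import pandas as pd

# Hypothetical rows with the same columns as all_splits.csv
df = pd.DataFrame({
    "split": ["test-clean", "test-clean", "test-other"],
    "hypothesis": ["the cat sat", "a dog ran", "hello word"],
    "reference": ["the cat sat", "a dog runs", "hello world"],
    "model_name": ["whisper-base"] * 3,
    "model_version": ["v20240930"] * 3,
})

# Fraction of utterances transcribed exactly right, per split and model
df["exact_match"] = df["hypothesis"] == df["reference"]
summary = df.groupby(["split", "model_name"])["exact_match"].mean()
print(summary)
```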

## πŸ“… Dataset Splits

| Split Name | Type | Data Characteristics | Samples | Duration (Hours) | Suitable For |
|------------|------|----------------------|---------|------------------|--------------|
| `test-clean` | Test set | Clean, high-quality audio | 2,620 | 5.4 | Evaluating model performance under ideal conditions |
| `test-other` | Test set | Noisy, challenging audio | 2,939 | 5.1 | Evaluating model robustness in challenging/noisy environments |
| `dev-clean` | Validation set | Clean, high-quality audio | 2,703 | 5.4 | Hyperparameter tuning and validation under ideal conditions |
| `dev-other` | Validation set | Noisy, challenging audio | 2,864 | 5.3 | Stress-testing during validation under difficult conditions |

## πŸ“„ License

This dataset is licensed under the Creative Commons Attribution 4.0 International license (CC BY 4.0).

- You are free to share and adapt the data, provided appropriate credit is given.
- The original audio and official transcripts remain under the LibriSpeech license.

## πŸ“’ Citation

If you use this dataset, please cite the original LibriSpeech paper:

```bibtex
@inproceedings{panayotov2015librispeech,
  title={Librispeech: An ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={ICASSP},
  pages={5206--5210},
  year={2015},
  organization={IEEE}
}
```