---
license: apache-2.0
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- phoneme-recognition
- arpabet
- pronunciation
- wav2vec2
- ctc
- speech
pretty_name: LibriSpeech ARPAbet Phonemes
size_categories:
- 10K<n<100K
configs:
- config_name: full
  default: true
  data_files:
  - split: train
    path: data/train/*.parquet
  - split: test
    path: data/test/*.parquet
- config_name: mini
  data_files:
  - split: train
    path: data/mini/train.parquet
  - split: test
    path: data/mini/test.parquet
dataset_info:
- config_name: full
  features:
  - name: audio
    dtype: audio
  - name: input_values
    sequence: float32
  - name: labels
    sequence: int32
  - name: text
    dtype: string
  - name: duration
    dtype: float32
  splits:
  - name: train
    num_examples: 15928
  - name: test
    num_examples: 1770
- config_name: mini
  features:
  - name: audio
    dtype: audio
  - name: input_values
    sequence: float32
  - name: labels
    sequence: int32
  - name: text
    dtype: string
  - name: duration
    dtype: float32
  splits:
  - name: train
    num_examples: 900
  - name: test
    num_examples: 100
---

# LibriSpeech ARPAbet Processed Dataset

Pre-processed dataset for training ARPAbet phoneme recognition models with CTC loss.

## Dataset Description

This dataset is derived from [LibriSpeech](https://huggingface.co/datasets/openslr/librispeech_asr) (train-clean-100 split) with the following preprocessing:

- **Audio**: Resampled to 16kHz and normalized with the Wav2Vec2 feature extractor
- **Labels**: Text transcriptions converted to ARPAbet phoneme sequences using the CMU Pronouncing Dictionary
- **Filtering**: Samples containing out-of-vocabulary words (words not in CMUdict) are excluded
  - Original LibriSpeech train-clean-100: 28,539 samples
  - After filtering: 17,698 samples (62% retained)
  - Skipped: 10,841 samples (38% contained at least one OOV word)
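
The conversion and filtering steps above can be sketched as follows. This is an illustrative reconstruction, not the exact preprocessing script: the tiny hardcoded dictionary stands in for the full CMU Pronouncing Dictionary.

```python
# Sketch of the text-to-phoneme step with OOV filtering.
# A tiny hardcoded dictionary stands in for the full CMU Pronouncing
# Dictionary, which maps uppercase words to ARPAbet phoneme lists.
CMUDICT = {
    "THE": ["DH", "AH0"],
    "CAT": ["K", "AE1", "T"],
    "SAT": ["S", "AE1", "T"],
}

def text_to_phonemes(text):
    """Return the phoneme sequence for `text`, inserting `|` at word
    boundaries, or None if any word is out of vocabulary."""
    phonemes = []
    for word in text.upper().split():
        if word not in CMUDICT:
            return None  # at least one OOV word: the sample is skipped
        if phonemes:
            phonemes.append("|")
        phonemes.extend(CMUDICT[word])
    return phonemes

print(text_to_phonemes("the cat sat"))  # kept: full phoneme sequence
print(text_to_phonemes("the dog sat"))  # None -> sample filtered out
```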

### Features

| Feature | Type | Description |
|---------|------|-------------|
| `audio` | `Audio` | Original audio (FLAC, 16kHz), playable in the HF viewer |
| `input_values` | `Sequence[float32]` | Normalized audio waveform, ready for training |
| `labels` | `Sequence[int32]` | ARPAbet phoneme token IDs |
| `text` | `string` | Original text transcription |
| `duration` | `float32` | Audio duration in seconds |

### ARPAbet Vocabulary (72 tokens)

The vocabulary includes:

- **Special tokens (3)**: `<pad>`, `<unk>`, `|` (word boundary)
- **Consonants (24)**: B, CH, D, DH, F, G, HH, JH, K, L, M, N, NG, P, R, S, SH, T, TH, V, W, Y, Z, ZH
- **Vowels with stress markers (45)**: 15 base vowels × 3 stress levels (0, 1, 2)
  - Example: AA0 (no stress), AA1 (primary), AA2 (secondary)
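
A sketch of how the 72 tokens break down; the construction order shown here is an assumption, and the authoritative mapping is the `vocab.json` file shipped with the dataset:

```python
# Assemble the 72-token ARPAbet vocabulary: 3 special tokens,
# 24 consonants, and 15 base vowels x 3 stress levels = 45 vowels.
SPECIAL = ["<pad>", "<unk>", "|"]
CONSONANTS = ["B", "CH", "D", "DH", "F", "G", "HH", "JH", "K", "L", "M",
              "N", "NG", "P", "R", "S", "SH", "T", "TH", "V", "W", "Y",
              "Z", "ZH"]
BASE_VOWELS = ["AA", "AE", "AH", "AO", "AW", "AY", "EH", "ER", "EY",
               "IH", "IY", "OW", "OY", "UH", "UW"]
VOWELS = [f"{v}{stress}" for v in BASE_VOWELS for stress in (0, 1, 2)]

tokens = SPECIAL + CONSONANTS + VOWELS
token_to_id = {tok: i for i, tok in enumerate(tokens)}
print(len(tokens))  # 72
```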

### Splits

| Split | Samples | Description |
|-------|---------|-------------|
| train | 15,928 | Training data (90%) |
| test | 1,770 | Evaluation data (10%) |

## Usage

```python
from datasets import load_dataset
from huggingface_hub import hf_hub_download
import json

# Load the full dataset (~17k samples, ~15GB)
dataset = load_dataset("davidggphy/librispeech-arpabet-processed", "full")

# Or load the mini dataset (1,000 samples, ~1GB) for quick testing
dataset = load_dataset("davidggphy/librispeech-arpabet-processed", "mini")

# Streaming mode (download samples on demand)
stream = load_dataset("davidggphy/librispeech-arpabet-processed", "full", split="train", streaming=True)
for sample in stream.take(100):
    print(sample["text"])

# Load the vocabulary mapping
vocab_path = hf_hub_download(
    repo_id="davidggphy/librispeech-arpabet-processed",
    filename="vocab.json",
    repo_type="dataset",
)
with open(vocab_path) as f:
    vocab_data = json.load(f)

token_to_id = vocab_data["token_to_id"]
id_to_token = {int(k): v for k, v in vocab_data["id_to_token"].items()}

# Access samples (load_dataset returns a DatasetDict, so index a split)
sample = dataset["train"][0]
print(f"Text: {sample['text']}")
print(f"Duration: {sample['duration']:.2f}s")
print(f"Labels: {[id_to_token[i] for i in sample['labels']]}")
```
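
Going the other way, the `labels` column decodes back into a readable phoneme string. The toy `id_to_token` mapping below is hypothetical, chosen just to illustrate the idea; in practice use the mapping loaded from `vocab.json`:

```python
# Decode a label sequence back to phonemes. The toy id_to_token map is
# illustrative only; in practice load the real mapping from vocab.json.
id_to_token = {0: "<pad>", 2: "|", 3: "DH", 4: "AH0", 5: "K", 6: "AE1", 7: "T"}

def decode(labels):
    """Map token IDs to phoneme symbols, rendering the `|` word
    boundary as an extra space between words."""
    tokens = [id_to_token[i] for i in labels]
    return " ".join(tokens).replace(" | ", "  ")

print(decode([3, 4, 2, 5, 6, 7]))  # DH AH0  K AE1 T
```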

### Listening to Audio

The `input_values` column contains the normalized audio waveform at 16kHz. To play or save it:

```python
import numpy as np
import soundfile as sf
from IPython.display import Audio

sample = dataset["train"][0]

# Convert to a numpy array
audio = np.array(sample["input_values"], dtype=np.float32)

# Play in Jupyter/Colab
Audio(audio, rate=16000)

# Or save to a file
sf.write("sample.wav", audio, 16000)

# Check duration and transcription
print(f"Duration: {sample['duration']:.2f}s")
print(f"Text: {sample['text']}")
```

### Training with Wav2Vec2

```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load the model and processor; vocab_size=72 gives the CTC head one
# output per token in the ARPAbet vocabulary
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base", vocab_size=72)
processor = Wav2Vec2Processor.from_pretrained("davidggphy/wav2vec2-arpabet-phoneme")

# The dataset is ready for CTC training:
# - input_values: normalized audio
# - labels: phoneme token IDs
```
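
One piece a training loop still needs is a batching step: samples have different lengths, so `input_values` must be padded (with zeros) and `labels` must be padded with `-100` so the loss ignores the padded positions. Below is a minimal framework-agnostic sketch of that idea; in practice you would typically write a data collator around the processor's `pad()` method instead.

```python
# Minimal batching sketch for CTC training. Audio is padded with 0.0;
# labels are padded with -100 so padded positions are ignored by the
# loss. Pure Python here for clarity; real collators return tensors.
def collate(batch, label_pad=-100):
    max_audio = max(len(s["input_values"]) for s in batch)
    max_label = max(len(s["labels"]) for s in batch)
    inputs, labels = [], []
    for s in batch:
        x = list(s["input_values"])
        y = list(s["labels"])
        inputs.append(x + [0.0] * (max_audio - len(x)))
        labels.append(y + [label_pad] * (max_label - len(y)))
    return {"input_values": inputs, "labels": labels}

padded = collate([
    {"input_values": [0.1, 0.2, 0.3], "labels": [5, 6]},
    {"input_values": [0.4], "labels": [7, 8, 9]},
])
```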

## Intended Use

This dataset is designed for:

- Training phoneme recognition models for English pronunciation assessment
- Fine-tuning Wav2Vec2 for ARPAbet output
- Research in automatic pronunciation evaluation

## Source Data

- **Base Dataset**: [LibriSpeech ASR Corpus](https://huggingface.co/datasets/openslr/librispeech_asr) (train-clean-100 split)
- **Phoneme Dictionary**: [CMU Pronouncing Dictionary](https://github.com/cmusphinx/cmudict)

## Limitations

- Only covers words present in the CMU Pronouncing Dictionary (~126k words)
- Based on American English pronunciation
- Does not include phonetic variations or connected-speech phenomena

## Citation

If you use this dataset, please cite LibriSpeech:

```bibtex
@inproceedings{panayotov2015librispeech,
  title={Librispeech: an ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={ICASSP},
  year={2015}
}
```

## License

This processed dataset is released under Apache 2.0. Note that the underlying LibriSpeech corpus is distributed under CC BY 4.0.
|