---
license: apache-2.0
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- phoneme-recognition
- arpabet
- pronunciation
- wav2vec2
- ctc
- speech
pretty_name: LibriSpeech ARPAbet Phonemes
size_categories:
- 10K<n<100K
configs:
- config_name: full
default: true
data_files:
- split: train
path: data/train/*.parquet
- split: test
path: data/test/*.parquet
- config_name: mini
data_files:
- split: train
path: data/mini/train.parquet
- split: test
path: data/mini/test.parquet
dataset_info:
- config_name: full
features:
- name: audio
dtype: audio
- name: input_values
sequence: float32
- name: labels
sequence: int32
- name: text
dtype: string
- name: duration
dtype: float32
splits:
- name: train
num_examples: 15928
- name: test
num_examples: 1770
- config_name: mini
features:
- name: audio
dtype: audio
- name: input_values
sequence: float32
- name: labels
sequence: int32
- name: text
dtype: string
- name: duration
dtype: float32
splits:
- name: train
num_examples: 900
- name: test
num_examples: 100
---
# LibriSpeech ARPAbet Processed Dataset
Pre-processed dataset for training ARPAbet phoneme recognition models using CTC loss.
## Dataset Description
This dataset is derived from [LibriSpeech](https://huggingface.co/datasets/openslr/librispeech_asr) (train-clean-100 split) with the following preprocessing:
- **Audio**: Resampled to 16kHz, normalized using Wav2Vec2 feature extractor
- **Labels**: Text transcriptions converted to ARPAbet phoneme sequences using CMU Pronouncing Dictionary
- **Filtering**: Samples with out-of-vocabulary words (not in CMU Dict) are excluded
- Original LibriSpeech train-clean-100: 28,539 samples
- After filtering: 17,698 samples (62% retained)
- Skipped: 10,841 samples (38% had at least one OOV word)
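The conversion and filtering step can be sketched as follows. This is a minimal illustration, not the actual preprocessing script: the three-word dictionary stands in for the full CMU Pronouncing Dictionary, and the `|` word-boundary convention follows the vocabulary described below.

```python
# Toy stand-in for the CMU Pronouncing Dictionary (~126k words in reality)
CMU_DICT = {
    "the": ["DH", "AH0"],
    "cat": ["K", "AE1", "T"],
    "sat": ["S", "AE1", "T"],
}

def text_to_phonemes(text, cmu_dict):
    """Return the ARPAbet sequence for `text`, with `|` between words,
    or None if any word is out of vocabulary (such samples are skipped)."""
    phonemes = []
    for word in text.lower().split():
        if word not in cmu_dict:
            return None  # OOV word: drop the whole sample
        if phonemes:
            phonemes.append("|")  # word boundary token
        phonemes.extend(cmu_dict[word])
    return phonemes

print(text_to_phonemes("the cat sat", CMU_DICT))
print(text_to_phonemes("the dog sat", CMU_DICT))  # None: "dog" is OOV here
```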
### Features
| Feature | Type | Description |
|---------|------|-------------|
| `audio` | `Audio` | Original audio (FLAC, 16kHz) - playable in HF viewer |
| `input_values` | `Sequence[float]` | Normalized audio waveform (float32) - ready for training |
| `labels` | `Sequence[int]` | ARPAbet phoneme token IDs |
| `text` | `string` | Original text transcription |
| `duration` | `float` | Audio duration in seconds |
### ARPAbet Vocabulary (72 tokens)
The vocabulary includes:
- **Special tokens (3)**: `<pad>`, `<unk>`, `|` (word boundary)
- **Consonants (24)**: B, CH, D, DH, F, G, HH, JH, K, L, M, N, NG, P, R, S, SH, T, TH, V, W, Y, Z, ZH
- **Vowels with stress markers (45)**: 15 base vowels × 3 stress levels (0, 1, 2)
  - Example: AA0 (no stress), AA1 (primary), AA2 (secondary)
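The 72-token vocabulary can be reconstructed programmatically. A sketch (token names follow the card; the exact ID assignment here is an assumption, and the authoritative mapping lives in the repo's `vocab.json`):

```python
# 3 special tokens + 24 consonants + 15 base vowels x 3 stress levels = 72
SPECIAL = ["<pad>", "<unk>", "|"]
CONSONANTS = ["B", "CH", "D", "DH", "F", "G", "HH", "JH", "K", "L", "M", "N",
              "NG", "P", "R", "S", "SH", "T", "TH", "V", "W", "Y", "Z", "ZH"]
BASE_VOWELS = ["AA", "AE", "AH", "AO", "AW", "AY", "EH", "ER", "EY", "IH",
               "IY", "OW", "OY", "UH", "UW"]
VOWELS = [f"{v}{stress}" for v in BASE_VOWELS for stress in (0, 1, 2)]

vocab = SPECIAL + CONSONANTS + VOWELS
token_to_id = {tok: i for i, tok in enumerate(vocab)}
assert len(vocab) == 72
```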
### Splits
| Split | Samples | Description |
|-------|---------|-------------|
| train | 15,928 | Training data (90%) |
| test | 1,770 | Evaluation data (10%) |
## Usage
```python
from datasets import load_dataset
from huggingface_hub import hf_hub_download
import json
# Load full dataset (~17k samples, ~15GB)
dataset = load_dataset("davidggphy/librispeech-arpabet-processed", "full")
# Load mini dataset (1000 samples, ~1GB) - great for quick testing!
dataset = load_dataset("davidggphy/librispeech-arpabet-processed", "mini")
# Streaming mode (download samples on-demand)
stream = load_dataset("davidggphy/librispeech-arpabet-processed", "full", split="train", streaming=True)
for sample in stream.take(100):
print(sample["text"])
# Load vocabulary mapping
vocab_path = hf_hub_download(
repo_id="davidggphy/librispeech-arpabet-processed",
filename="vocab.json",
repo_type="dataset"
)
with open(vocab_path) as f:
vocab_data = json.load(f)
token_to_id = vocab_data["token_to_id"]
id_to_token = {int(k): v for k, v in vocab_data["id_to_token"].items()}
# Access samples (load_dataset returns a DatasetDict keyed by split)
sample = dataset["train"][0]
print(f"Text: {sample['text']}")
print(f"Duration: {sample['duration']:.2f}s")
print(f"Labels: {[id_to_token[i] for i in sample['labels']]}")
```
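### Decoding Model Outputs
At inference time, per-frame argmax IDs from a CTC model need greedy decoding: collapse consecutive repeats, then drop the blank token. A minimal sketch, assuming the blank is `<pad>` with ID 0 and using a toy `id_to_token` mapping in place of the one loaded from `vocab.json`:

```python
def ctc_greedy_decode(ids, id_to_token, blank_id=0):
    """Collapse repeated IDs, then remove blanks (standard greedy CTC)."""
    out, prev = [], None
    for i in ids:
        if i != prev and i != blank_id:
            out.append(id_to_token[i])
        prev = i
    return out

# Toy mapping for illustration only
id_to_token = {0: "<pad>", 1: "<unk>", 2: "|", 3: "K", 4: "AE1", 5: "T"}
print(ctc_greedy_decode([3, 3, 0, 4, 4, 0, 0, 5], id_to_token))  # ['K', 'AE1', 'T']
```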
### Listening to Audio
The `input_values` column contains the normalized 16kHz waveform used for training, while the `audio` column holds the original recording. To play or save the normalized waveform:
```python
import numpy as np
import soundfile as sf
from IPython.display import Audio
sample = dataset["train"][0]
# Convert to numpy array
audio = np.array(sample["input_values"], dtype=np.float32)
# Play in Jupyter/Colab
Audio(audio, rate=16000)
# Or save to file
sf.write("sample.wav", audio, 16000)
# Check duration
print(f"Duration: {sample['duration']:.2f}s")
print(f"Text: {sample['text']}")
```
### Training with Wav2Vec2
```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor
# Load the base encoder with a fresh 72-token CTC head (the head is
# randomly initialized, since facebook/wav2vec2-base ships without one)
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base", vocab_size=72)
processor = Wav2Vec2Processor.from_pretrained("davidggphy/wav2vec2-arpabet-phoneme")
# The dataset is ready for CTC training
# input_values: normalized audio
# labels: phoneme token IDs
```
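`Wav2Vec2Processor` can handle padding, but a minimal hand-rolled collator makes the CTC batching conventions explicit: audio is zero-padded and labels are padded with `-100` so the loss ignores those positions. A pure-Python sketch operating on lists (in practice you would convert the results to tensors):

```python
def collate_ctc(batch, pad_label=-100):
    """Pad a list of samples to equal length for CTC training."""
    max_audio = max(len(s["input_values"]) for s in batch)
    max_label = max(len(s["labels"]) for s in batch)
    return {
        "input_values": [s["input_values"]
                         + [0.0] * (max_audio - len(s["input_values"]))
                         for s in batch],
        "labels": [s["labels"]
                   + [pad_label] * (max_label - len(s["labels"]))
                   for s in batch],
    }

# Tiny synthetic batch for illustration
batch = [
    {"input_values": [0.1, 0.2, 0.3], "labels": [3, 4]},
    {"input_values": [0.5], "labels": [5]},
]
padded = collate_ctc(batch)
print(padded["labels"])  # [[3, 4], [5, -100]]
```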
## Intended Use
This dataset is designed for:
- Training phoneme recognition models for English pronunciation assessment
- Fine-tuning Wav2Vec2 for ARPAbet output
- Research in automatic pronunciation evaluation
## Source Data
- **Base Dataset**: [LibriSpeech ASR Corpus](https://huggingface.co/datasets/openslr/librispeech_asr) (train-clean-100 split)
- **Phoneme Dictionary**: [CMU Pronouncing Dictionary](https://github.com/cmusphinx/cmudict)
## Limitations
- Only covers words present in CMU Dictionary (~126k words)
- Based on American English pronunciation
- Does not include phonetic variations or connected speech phenomena
## Citation
If you use this dataset, please cite LibriSpeech:
```bibtex
@inproceedings{panayotov2015librispeech,
title={Librispeech: an ASR corpus based on public domain audio books},
author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
booktitle={ICASSP},
year={2015}
}
```
## License
This processed dataset is released under Apache 2.0. The underlying LibriSpeech corpus is licensed under CC BY 4.0.