# LibriSpeech ARPAbet Processed Dataset
Pre-processed dataset for training ARPAbet phoneme recognition models using CTC loss.
## Dataset Description
This dataset is derived from LibriSpeech (train-clean-100 split) with the following preprocessing:
- Audio: Resampled to 16kHz, normalized using Wav2Vec2 feature extractor
- Labels: Text transcriptions converted to ARPAbet phoneme sequences using CMU Pronouncing Dictionary
- Filtering: Samples with out-of-vocabulary words (not in CMU Dict) are excluded
- Original LibriSpeech train-clean-100: 28,539 samples
- After filtering: 17,698 samples (62% retained)
- Skipped: 10,841 samples (38% had at least one OOV word)
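The conversion-and-filtering step can be illustrated with a minimal sketch. The `CMU_DICT` below is a tiny hypothetical stand-in for the full CMU Pronouncing Dictionary, and `text_to_arpabet` is an illustrative helper, not the actual preprocessing code:

```python
# Hypothetical miniature dictionary; the real pipeline uses the full CMU Dict.
CMU_DICT = {
    "THE": ["DH", "AH0"],
    "CAT": ["K", "AE1", "T"],
    "SAT": ["S", "AE1", "T"],
}

def text_to_arpabet(text):
    """Return the phoneme sequence for `text`, or None if any word is OOV."""
    phonemes = []
    for word in text.upper().split():
        if word not in CMU_DICT:
            return None  # sample is skipped (counts toward the 38% OOV)
        phonemes.extend(CMU_DICT[word])
        phonemes.append("|")  # word-boundary token
    return phonemes[:-1]  # drop the trailing boundary

print(text_to_arpabet("the cat sat"))  # kept: phoneme list with | boundaries
print(text_to_arpabet("the dog sat"))  # None -> sample filtered out
```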
## Features

| Feature | Type | Description |
|---|---|---|
| `audio` | Audio | Original audio (FLAC, 16kHz) - playable in HF viewer |
| `input_values` | Sequence[float] | Normalized audio waveform (float32) - ready for training |
| `labels` | Sequence[int] | ARPAbet phoneme token IDs |
| `text` | string | Original text transcription |
| `duration` | float | Audio duration in seconds |
## ARPAbet Vocabulary (72 tokens)

The vocabulary includes:
- Special tokens (3): `<pad>`, `<unk>`, `|` (word boundary)
- Consonants (24): B, CH, D, DH, F, G, HH, JH, K, L, M, N, NG, P, R, S, SH, T, TH, V, W, Y, Z, ZH
- Vowels with stress markers (45): 15 base vowels x 3 stress levels (0, 1, 2)
  - Example: AA0 (no stress), AA1 (primary stress), AA2 (secondary stress)
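The counts above can be sketched as a vocabulary builder. Token ordering here is illustrative only; for the real ID assignments, load the repo's `vocab.json`:

```python
# Assemble the 72-token vocabulary: 3 special + 24 consonants + 15 vowels x 3 stresses.
SPECIAL = ["<pad>", "<unk>", "|"]
CONSONANTS = ["B", "CH", "D", "DH", "F", "G", "HH", "JH", "K", "L", "M", "N",
              "NG", "P", "R", "S", "SH", "T", "TH", "V", "W", "Y", "Z", "ZH"]
BASE_VOWELS = ["AA", "AE", "AH", "AO", "AW", "AY", "EH", "ER", "EY", "IH",
               "IY", "OW", "OY", "UH", "UW"]
VOWELS = [v + str(stress) for v in BASE_VOWELS for stress in (0, 1, 2)]

vocab = SPECIAL + CONSONANTS + VOWELS
token_to_id = {tok: i for i, tok in enumerate(vocab)}
print(len(vocab))  # 72
```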
## Splits
| Split | Samples | Description |
|---|---|---|
| train | 15,928 | Training data (90%) |
| test | 1,770 | Evaluation data (10%) |
## Usage

```python
from datasets import load_dataset
from huggingface_hub import hf_hub_download
import json

# Load the full dataset (~17k samples, ~15 GB)
dataset = load_dataset("davidggphy/librispeech-arpabet-processed", "full")

# Load the mini dataset (1,000 samples, ~1 GB) - great for quick testing
dataset = load_dataset("davidggphy/librispeech-arpabet-processed", "mini")

# Streaming mode (download samples on demand)
stream = load_dataset("davidggphy/librispeech-arpabet-processed", "full", split="train", streaming=True)
for sample in stream.take(100):
    print(sample["text"])

# Load the vocabulary mapping
vocab_path = hf_hub_download(
    repo_id="davidggphy/librispeech-arpabet-processed",
    filename="vocab.json",
    repo_type="dataset",
)
with open(vocab_path) as f:
    vocab_data = json.load(f)
token_to_id = vocab_data["token_to_id"]
id_to_token = {int(k): v for k, v in vocab_data["id_to_token"].items()}

# Access a sample (load_dataset returns a DatasetDict, so index a split first)
sample = dataset["train"][0]
print(f"Text: {sample['text']}")
print(f"Duration: {sample['duration']:.2f}s")
print(f"Labels: {[id_to_token[i] for i in sample['labels']]}")
```
## Listening to Audio

The `input_values` column contains normalized audio waveforms at 16kHz. To play or save the audio:

```python
import numpy as np
import soundfile as sf
from IPython.display import Audio

sample = dataset["train"][0]

# Convert to a numpy array
audio = np.array(sample["input_values"], dtype=np.float32)

# Play in Jupyter/Colab
Audio(audio, rate=16000)

# Or save to a file
sf.write("sample.wav", audio, 16000)

# Check duration
print(f"Duration: {sample['duration']:.2f}s")
print(f"Text: {sample['text']}")
```
## Training with Wav2Vec2

```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load the model and processor
model = Wav2Vec2ForCTC.from_pretrained("facebook/wav2vec2-base", vocab_size=72)
processor = Wav2Vec2Processor.from_pretrained("davidggphy/wav2vec2-arpabet-phoneme")

# The dataset is ready for CTC training:
#   input_values: normalized audio
#   labels: phoneme token IDs
```
## Intended Use
This dataset is designed for:
- Training phoneme recognition models for English pronunciation assessment
- Fine-tuning Wav2Vec2 for ARPAbet output
- Research in automatic pronunciation evaluation
## Source Data
- Base Dataset: LibriSpeech ASR Corpus (train-clean-100 split)
- Phoneme Dictionary: CMU Pronouncing Dictionary
## Limitations
- Only covers words present in CMU Dictionary (~126k words)
- Based on American English pronunciation
- Does not include phonetic variations or connected speech phenomena
## Citation

If you use this dataset, please cite LibriSpeech:

```bibtex
@inproceedings{panayotov2015librispeech,
  title={Librispeech: an ASR corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={ICASSP},
  year={2015}
}
```
## License
Apache 2.0 (same as LibriSpeech)