---
license: apache-2.0
task_categories:
- automatic-speech-recognition
language:
- en
tags:
- phoneme-recognition
- arpabet
- pronunciation
- wav2vec2
- ctc
- speech
pretty_name: LibriSpeech ARPAbet Phonemes
size_categories:
- 10K<n<100K
---

# LibriSpeech ARPAbet Phonemes

LibriSpeech (train-clean-100) audio preprocessed for phoneme-level CTC training: transcripts are converted to ARPAbet phoneme sequences using the CMU Pronouncing Dictionary, and the audio is provided as normalized 16kHz waveforms ready for Wav2Vec2.

## Dataset Structure

### Vocabulary (72 tokens)

- **Special tokens (3)**: `<pad>`, `<unk>`, `|` (word boundary)
- **Consonants (24)**: B, CH, D, DH, F, G, HH, JH, K, L, M, N, NG, P, R, S, SH, T, TH, V, W, Y, Z, ZH
- **Vowels with stress markers (45)**: 15 base vowels × 3 stress levels (0, 1, 2)
  - Example: AA0 (no stress), AA1 (primary), AA2 (secondary)

### Splits

| Split | Samples | Description |
|-------|---------|-------------|
| train | 15,928 | Training data (90%) |
| test | 1,770 | Evaluation data (10%) |

## Usage

```python
from datasets import load_dataset
from huggingface_hub import hf_hub_download
import json

# Load full dataset (~17k samples, ~15GB)
dataset = load_dataset("davidggphy/librispeech-arpabet-processed", "full")

# Load mini dataset (1000 samples, ~1GB) - great for quick testing!
dataset = load_dataset("davidggphy/librispeech-arpabet-processed", "mini")

# Streaming mode (download samples on-demand)
stream = load_dataset(
    "davidggphy/librispeech-arpabet-processed", "full",
    split="train", streaming=True,
)
for sample in stream.take(100):
    print(sample["text"])

# Load vocabulary mapping
vocab_path = hf_hub_download(
    repo_id="davidggphy/librispeech-arpabet-processed",
    filename="vocab.json",
    repo_type="dataset",
)
with open(vocab_path) as f:
    vocab_data = json.load(f)
token_to_id = vocab_data["token_to_id"]
id_to_token = {int(k): v for k, v in vocab_data["id_to_token"].items()}

# Access samples (load_dataset returns a DatasetDict, so index the split first)
sample = dataset["train"][0]
print(f"Text: {sample['text']}")
print(f"Duration: {sample['duration']:.2f}s")
print(f"Labels: {[id_to_token[i] for i in sample['labels']]}")
```

### Listening to Audio

The `input_values` column contains normalized audio waveforms at 16kHz.
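The normalization applied to `input_values` is the standard per-utterance zero-mean, unit-variance scaling used by Wav2Vec2's feature extractor. A minimal sketch of that transform on a synthetic waveform (the `1e-7` epsilon mirrors the `transformers` implementation; this does not load the dataset itself):

```python
import numpy as np

# Synthetic 1-second waveform at 16kHz (stand-in for a raw audio array)
rng = np.random.default_rng(0)
raw = rng.uniform(-0.5, 0.5, size=16000).astype(np.float32)

# Per-utterance zero-mean, unit-variance normalization
normalized = (raw - raw.mean()) / np.sqrt(raw.var() + 1e-7)

print(f"mean={normalized.mean():.4f}, std={normalized.std():.4f}")
```

Because of this scaling, the stored waveforms are not bounded to [-1, 1]; rescale (or just pass them as-is at `rate=16000`) before playback.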
To play or save the audio:

```python
import numpy as np
import soundfile as sf
from IPython.display import Audio

sample = dataset["train"][0]

# Convert to numpy array
audio = np.array(sample["input_values"], dtype=np.float32)

# Play in Jupyter/Colab
Audio(audio, rate=16000)

# Or save to file
sf.write("sample.wav", audio, 16000)

# Check duration
print(f"Duration: {sample['duration']:.2f}s")
print(f"Text: {sample['text']}")
```

### Training with Wav2Vec2

```python
from transformers import Wav2Vec2ForCTC, Wav2Vec2Processor

# Load the processor first so its pad token can configure the model
processor = Wav2Vec2Processor.from_pretrained("davidggphy/wav2vec2-arpabet-phoneme")

# ignore_mismatched_sizes lets the CTC head be re-initialized for the
# 72-token phoneme vocabulary (the base checkpoint has a different head size)
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base",
    vocab_size=72,
    pad_token_id=processor.tokenizer.pad_token_id,
    ignore_mismatched_sizes=True,
)

# The dataset is ready for CTC training:
# input_values: normalized audio
# labels: phoneme token IDs
```

## Intended Use

This dataset is designed for:

- Training phoneme recognition models for English pronunciation assessment
- Fine-tuning Wav2Vec2 for ARPAbet output
- Research in automatic pronunciation evaluation

## Source Data

- **Base Dataset**: [LibriSpeech ASR Corpus](https://huggingface.co/datasets/openslr/librispeech_asr) (train-clean-100 split)
- **Phoneme Dictionary**: [CMU Pronouncing Dictionary](https://github.com/cmusphinx/cmudict)

## Limitations

- Only covers words present in the CMU Dictionary (~126k words)
- Based on American English pronunciation
- Does not include phonetic variations or connected-speech phenomena

## Citation

If you use this dataset, please cite LibriSpeech:

```bibtex
@inproceedings{panayotov2015librispeech,
  title={Librispeech: an {ASR} corpus based on public domain audio books},
  author={Panayotov, Vassil and Chen, Guoguo and Povey, Daniel and Khudanpur, Sanjeev},
  booktitle={ICASSP},
  year={2015}
}
```

## License

Apache 2.0 (same as LibriSpeech)
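## Appendix: Decoding Model Predictions

A model trained on this vocabulary emits per-frame token IDs that need standard CTC greedy decoding (collapse repeats, then drop blanks) to become a phoneme sequence. A minimal sketch with a hypothetical vocabulary slice and hand-made frame predictions (in practice `id_to_token` comes from `vocab.json` and the frame IDs from `model(...).logits.argmax(-1)`):

```python
# Hypothetical vocabulary slice; Wav2Vec2 uses the pad token as the CTC blank
id_to_token = {0: "<pad>", 1: "HH", 2: "AH0", 3: "L", 4: "OW1", 5: "|"}
blank_id = 0

# Fake per-frame argmax IDs: raw CTC output contains repeats and blanks
frame_ids = [1, 1, 0, 2, 0, 3, 3, 0, 4, 4, 5, 0]

def ctc_greedy_decode(ids, blank_id):
    """Collapse consecutive repeats, then drop blanks (CTC greedy decoding)."""
    out, prev = [], None
    for i in ids:
        if i != prev and i != blank_id:
            out.append(i)
        prev = i
    return out

phonemes = [id_to_token[i] for i in ctc_greedy_decode(frame_ids, blank_id)]
print(" ".join(phonemes))  # HH AH0 L OW1 |
```

Note that the blank between the two runs of repeated IDs is what allows a genuinely doubled phoneme to survive decoding; without it, consecutive duplicates collapse to one.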