---
license: bsd-2-clause
language:
- en
task: grapheme-to-phoneme-conversion
size: "~130,000 word-pronunciation pairs"
format: text-phoneme pairs
---

# CMUdict - Carnegie Mellon Pronouncing Dictionary

This dataset contains grapheme-to-phoneme (G2P) mappings derived from the CMU Pronouncing Dictionary, version 0.7b.

## Dataset Description

- **License:** BSD 2-Clause License (CMU license)
- **Task:** Grapheme-to-Phoneme Conversion
- **Language:** English
- **Size:** ~130,000 word-pronunciation pairs
- **Format:** Text-phoneme pairs (see the example below)
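
Pronunciations use the ARPAbet phoneme set, with digits on vowels marking lexical stress (0 = no stress, 1 = primary, 2 = secondary). A couple of illustrative entries, shown in the source dictionary's plain-text layout:

```
HELLO  HH AH0 L OW1
WORLD  W ER1 L D
```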

## Usage

This dataset is structured for training and evaluating grapheme-to-phoneme (G2P) conversion models, which map written words to their phonetic pronunciations (for example, "CAT" to "K AE1 T").

### Loading the Dataset

```python
from datasets import load_dataset

# Load from the Hugging Face Hub
dataset = load_dataset("jc4p/CMUdict")

# Access splits
train_data = dataset["train"]
validation_data = dataset["validation"]
test_data = dataset["test"]

# Inspect an example
sample = train_data[0]
print(f"Word: {sample['text']}")
print(f"Phonemes: {sample['phonemes']}")
```
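
If you only need dictionary-style lookups rather than model training, you can build an in-memory table from a split. This is a minimal sketch; it assumes the fields are named `text` and `phonemes` as shown above, and that headwords are stored in CMUdict's usual uppercase form:

```python
# Build a word -> phonemes lookup table from the training split.
pronunciations = {row["text"]: row["phonemes"] for row in train_data}

# Headwords in CMUdict are conventionally uppercase.
print(pronunciations.get("HELLO"))  # e.g. "HH AH0 L OW1", if present in this split
```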

### Use Cases

1. **Text-to-Speech (TTS) Systems**
   - Train models to predict pronunciations for out-of-vocabulary words
   - Improve the naturalness of synthetic speech by providing accurate pronunciations

2. **Automatic Speech Recognition (ASR)**
   - Create pronunciation dictionaries for language models
   - Improve recognition accuracy for rare or unusual words

3. **Language Learning Applications**
   - Help language learners with correct pronunciation
   - Create interactive pronunciation guides

4. **Linguistic Research**
   - Study phonological patterns in English
   - Compare dialectal variations in pronunciation

5. **Spelling Correction**
   - Use phonetic similarity to detect and correct spelling errors (see the sketch after this list)
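
As a rough illustration of the spelling-correction idea, phoneme sequences can be compared with a generic sequence-similarity measure. This is a hedged sketch, not part of the dataset: the misspelling's pronunciation and the candidate pronunciations are hard-coded for illustration rather than looked up from the dictionary.

```python
from difflib import SequenceMatcher

def phoneme_similarity(a: str, b: str) -> float:
    """Similarity ratio between two space-separated ARPAbet phoneme strings."""
    return SequenceMatcher(None, a.split(), b.split()).ratio()

# Hypothetical pronunciation of the misspelling "fonetik",
# compared against two candidate corrections.
misspelled = "F OW1 N EH2 T IH0 K"
candidates = {
    "PHONETIC": "F AH0 N EH1 T IH0 K",
    "FRENETIC": "F R AH0 N EH1 T IH0 K",
}
best = max(candidates, key=lambda w: phoneme_similarity(misspelled, candidates[w]))
print(best)  # PHONETIC
```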

### Training a G2P Model

Here's a basic example of how to train a sequence-to-sequence model:

```python
from transformers import (
    AutoTokenizer,
    AutoModelForSeq2SeqLM,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainingArguments,
    Seq2SeqTrainer,
)
from datasets import load_dataset

# Load dataset
dataset = load_dataset("jc4p/CMUdict")

# Prepare tokenizer
tokenizer = AutoTokenizer.from_pretrained("t5-small")

# Tokenize words (inputs) and phoneme strings (targets)
def tokenize_function(examples):
    model_inputs = tokenizer(examples["text"], max_length=128, truncation=True)
    labels = tokenizer(examples["phonemes"], max_length=128, truncation=True)
    model_inputs["labels"] = labels["input_ids"]
    return model_inputs

# Apply tokenization
tokenized_datasets = dataset.map(tokenize_function, batched=True)

# Load model
model = AutoModelForSeq2SeqLM.from_pretrained("t5-small")

# Pad inputs and labels dynamically per batch
data_collator = DataCollatorForSeq2Seq(tokenizer, model=model)

# Training arguments
training_args = Seq2SeqTrainingArguments(
    output_dir="./results",
    evaluation_strategy="epoch",
    learning_rate=5e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    weight_decay=0.01,
    save_total_limit=3,
    num_train_epochs=3,
    predict_with_generate=True,
)

# Initialize trainer
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    train_dataset=tokenized_datasets["train"],
    eval_dataset=tokenized_datasets["validation"],
    tokenizer=tokenizer,
    data_collator=data_collator,
)

# Train the model
trainer.train()
```
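
After training, the fine-tuned model can generate a pronunciation for an unseen word. A minimal inference sketch (output quality depends entirely on how well the model trained):

```python
# Generate phonemes for a new word with the fine-tuned model.
inputs = tokenizer("PYTORCH", return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_length=32)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```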

## License Information

```
Copyright (C) 1993-2015 Carnegie Mellon University. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.
   The contents of this file are deemed to be source code.

2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in
   the documentation and/or other materials provided with the
   distribution.

This work was supported in part by funding from the Defense Advanced
Research Projects Agency, the Office of Naval Research and the National
Science Foundation of the United States of America, and by member
companies of the Carnegie Mellon Sphinx Speech Consortium. We acknowledge
the contributions of many volunteers to the expansion and improvement of
this dictionary.

THIS SOFTWARE IS PROVIDED BY CARNEGIE MELLON UNIVERSITY ``AS IS'' AND
ANY EXPRESSED OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO,
THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL CARNEGIE MELLON UNIVERSITY
NOR ITS EMPLOYEES BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
```

## Citation

```bibtex
@misc{cmu_pronouncing_dictionary,
  title={The {CMU} Pronouncing Dictionary},
  author={{Carnegie Mellon University}},
  howpublished={\url{http://www.speech.cs.cmu.edu/cgi-bin/cmudict}},
  year={2015},
  note={Version 0.7b}
}
```