Upload README.md with huggingface_hub

README.md CHANGED
@@ -20,6 +20,12 @@ size_categories:
 
 Pre-processed dataset for training ARPAbet phoneme recognition models using CTC loss.
 
+## Inspiration
+
+This project was inspired by Simon Edwards' blog post on pronunciation training using CTC:
+- [Ear Pronunciation via CTC](https://simedw.com/2026/01/31/ear-pronunication-via-ctc/) - Blog post
+- [Hacker News Discussion](https://news.ycombinator.com/item?id=46832074)
+
 ## Dataset Description
 
 This dataset is derived from [LibriSpeech](https://huggingface.co/datasets/librispeech_asr) (train-clean-100 split) with the following preprocessing:
@@ -54,14 +60,28 @@ The vocabulary includes:
 
 ```python
 from datasets import load_dataset
+from huggingface_hub import hf_hub_download
+import json
 
 # Load the dataset
 dataset = load_dataset("davidggphy/librispeech-arpabet-processed")
 
+# Load vocabulary mapping
+vocab_path = hf_hub_download(
+    repo_id="davidggphy/librispeech-arpabet-processed",
+    filename="vocab.json",
+    repo_type="dataset"
+)
+with open(vocab_path) as f:
+    vocab_data = json.load(f)
+
+token_to_id = vocab_data["token_to_id"]
+id_to_token = {int(k): v for k, v in vocab_data["id_to_token"].items()}
+
 # Access samples
 sample = dataset["train"][0]
-print(f"Audio
-print(f"Labels: {sample['labels']}")
+print(f"Audio length: {len(sample['input_values'])} samples")
+print(f"Labels: {[id_to_token[i] for i in sample['labels']]}")
 ```
 
 ### Training with Wav2Vec2
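
The `### Training with Wav2Vec2` heading closes this hunk, so the README's actual training recipe is outside the diff context. As a rough sketch of how the columns above could feed CTC fine-tuning, assuming a stock `transformers` `Wav2Vec2ForCTC` setup (the base checkpoint, the `<pad>` token name, and the collator below are illustrative assumptions, not the README's code):

```python
# Sketch only: assumes a standard transformers CTC setup, not the README's recipe.
import torch
from transformers import Wav2Vec2ForCTC

# token_to_id comes from the vocab.json snippet above; "<pad>" is an assumed token name.
model = Wav2Vec2ForCTC.from_pretrained(
    "facebook/wav2vec2-base",           # assumed base checkpoint
    vocab_size=len(token_to_id),        # fresh CTC head sized to the ARPAbet vocabulary
    pad_token_id=token_to_id["<pad>"],
    ctc_loss_reduction="mean",
)

def collate(batch):
    # Zero-pad raw audio; pad labels with -100 so the CTC loss ignores those positions.
    audio = [torch.tensor(ex["input_values"]) for ex in batch]
    labels = [torch.tensor(ex["labels"]) for ex in batch]
    return {
        "input_values": torch.nn.utils.rnn.pad_sequence(audio, batch_first=True),
        "labels": torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=-100),
    }

loader = torch.utils.data.DataLoader(dataset["train"], batch_size=8, collate_fn=collate)
loss = model(**next(iter(loader))).loss  # one forward pass returns the CTC loss
```

Padding labels with -100 matters here: `Wav2Vec2ForCTC` masks those positions out of the CTC loss, so variable-length phoneme sequences batch cleanly.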
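In the other direction, `id_to_token` turns frame-level CTC predictions back into ARPAbet symbols. A minimal greedy decode under the usual CTC rule (collapse repeated ids, then drop blanks); the blank id of 0 is an assumption, and the real blank token should be read from `vocab.json`:

```python
# Greedy CTC decode sketch: collapse repeated frame ids, then drop blanks.
import itertools
import torch

BLANK_ID = 0  # assumption; check vocab.json for the actual blank token id

def greedy_decode(logits: torch.Tensor) -> list[str]:
    # logits: (time, vocab) frame-level scores from the CTC head
    ids = logits.argmax(dim=-1).tolist()
    collapsed = [i for i, _ in itertools.groupby(ids)]  # merge runs of repeats
    return [id_to_token[i] for i in collapsed if i != BLANK_ID]

# e.g. frame ids [0, 12, 12, 0, 7, 7] decode to [id_to_token[12], id_to_token[7]]
```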