Commit 7a04089 (verified) by AAdonis · Parent(s): af6bf32

Update README.md

Files changed (1): README.md (+149, −3)
README.md CHANGED
@@ -1,3 +1,149 @@
- ---
- license: mit
- ---
---
license: cc-by-4.0
task_categories:
- automatic-speech-recognition
- text-to-speech
language:
- en
- de
- fr
- es
- ru
- ja
- ko
- pt
- tr
- th
- wo
tags:
- audio
- speech
- phoneme-alignment
- mfa
- forced-alignment
pretty_name: Multilingual MFA-Aligned Speech Dataset
---

# Multilingual MFA-Aligned Speech Dataset

A large-scale multilingual speech dataset with **word-level and phoneme-level alignments** produced using the Montreal Forced Aligner (MFA).

## Dataset Description

This dataset consolidates multiple speech corpora across various languages, all processed through MFA to provide precise phoneme and word alignments. Each sample includes the original audio, transcript, and detailed timing information for both words and phonemes.

### Features

| Column | Type | Description |
|--------|------|-------------|
| `audio` | Audio | Audio waveform at 16 kHz |
| `transcript` | string | Text transcription |
| `phoneme_sequence` | string | Phoneme sequence with spaces between words |
| `words` | list | Word-level alignments: `[{word, start, end}, ...]` |
| `phonemes` | list | Phoneme-level alignments: `[{phoneme, start, end}, ...]` |
| `source` | string | Original dataset source (e.g., `voxpopuli`, `common_voice`) |
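
To make the schema concrete, here is a minimal sketch of what one decoded sample looks like, together with a small helper that derives per-word durations from the alignments. All timing values below are invented for illustration and are not taken from the dataset:

```python
# Illustrative sample matching the schema above; every value is made up.
sample = {
    "transcript": "hello world",
    "phoneme_sequence": "h ɛ l oʊ w ɜː l d",
    "words": [
        {"word": "hello", "start": 0.10, "end": 0.55},
        {"word": "world", "start": 0.60, "end": 1.05},
    ],
    "phonemes": [
        {"phoneme": "h", "start": 0.10, "end": 0.20},
        {"phoneme": "ɛ", "start": 0.20, "end": 0.30},
        # ... remaining phonemes omitted for brevity
    ],
    "source": "voxpopuli",
}

def word_durations(sample):
    """Return {word: duration_in_seconds} from the word-level alignments."""
    return {w["word"]: round(w["end"] - w["start"], 3) for w in sample["words"]}

print(word_durations(sample))  # {'hello': 0.45, 'world': 0.45}
```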

### Languages & Statistics

| Language | Config | Hours | Samples | Sources |
|----------|--------|-------|---------|---------|
| English | `english` | TBD | TBD | Common Voice, VoxPopuli, GigaSpeech, Emilia, Genshin Voice, Gemini Speech |
| German | `german` | TBD | TBD | Multilingual LibriSpeech, Emilia |
| French | `french` | TBD | TBD | French Game Voice, Multilingual LibriSpeech |
| Spanish | `spanish` | TBD | TBD | CML TTS, LibriVox, TEDx Spanish |
| Russian | `russian` | TBD | TBD | Russian Audio Data, Multilingual LibriSpeech |
| Japanese | `japanese` | TBD | TBD | Combined Japanese Dataset, Japanese Anime Speech |
| Korean | `korean` | TBD | TBD | Zeroth STT Korean, Korea Speech |
| Portuguese | `portuguese` | TBD | TBD | Portuguese TTS, Multilingual LibriSpeech |
| Turkish | `turkish` | TBD | TBD | Turkish Merge Audio, Khan Academy Turkish |
| Thai | `thai` | TBD | TBD | Porjai Thai Voice Dataset |
| Wolof | `wolof` | TBD | TBD | Wolof French ASR |

**Total: ~20,000+ hours** (estimated)

## Usage

### Load a specific language

```python
from datasets import load_dataset

# Load English data
dataset = load_dataset("AAdonis/merged_mfa_alignments", "english", split="train")

# Load German data
dataset = load_dataset("AAdonis/merged_mfa_alignments", "german", split="train")
```

### Access alignments

```python
sample = dataset[0]

# Get audio
audio = sample["audio"]["array"]
sample_rate = sample["audio"]["sampling_rate"]

# Get transcript, phonemes, and source
transcript = sample["transcript"]
phonemes = sample["phoneme_sequence"]  # "h ɛ l oʊ w ɜː l d"
source = sample["source"]  # e.g., "voxpopuli"

# Word-level alignments
for word_info in sample["words"]:
    print(f"{word_info['word']}: {word_info['start']:.2f}s - {word_info['end']:.2f}s")

# Phoneme-level alignments
for phon_info in sample["phonemes"]:
    print(f"{phon_info['phoneme']}: {phon_info['start']:.3f}s - {phon_info['end']:.3f}s")
```

### Filter by source

```python
# Get only VoxPopuli samples
voxpopuli_samples = dataset.filter(lambda x: x["source"] == "voxpopuli")

# Get only Common Voice samples
cv_samples = dataset.filter(lambda x: x["source"] == "common_voice")
```

## Processing Details

### MFA Alignment

All samples were aligned using the Montreal Forced Aligner (MFA) with language-specific acoustic models and pronunciation dictionaries.

### Quality Filtering

During processing, samples were filtered and split based on:

- **`<unk>` words**: samples containing unknown words are split at those boundaries
- **`spn` phonemes**: spoken-noise markers likewise cause sample splits
- **Duration**: samples are filtered by minimum/maximum duration thresholds
- **Word count**: each segment must meet a minimum word count
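
The duration and word-count criteria can be sketched as a simple predicate over a segment's word alignments. The thresholds below are assumptions for illustration only; the actual values used to build the dataset are not published in this card:

```python
# Hypothetical re-implementation of the duration / word-count filter.
MIN_DURATION = 1.0   # seconds (assumed threshold)
MAX_DURATION = 20.0  # seconds (assumed threshold)
MIN_WORDS = 2        # words per segment (assumed threshold)

def keep_segment(words):
    """Decide whether a segment's word alignments pass the quality filter."""
    if len(words) < MIN_WORDS:
        return False
    duration = words[-1]["end"] - words[0]["start"]
    return MIN_DURATION <= duration <= MAX_DURATION

ok = keep_segment([
    {"word": "hello", "start": 0.1, "end": 0.5},
    {"word": "world", "start": 0.6, "end": 1.2},
])
print(ok)  # True: 2 words, 1.1 s total duration
```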

### Phoneme Sequence Format

The `phoneme_sequence` column contains IPA phonemes:

- phonemes within a word are concatenated directly
- words are separated by spaces
- example: `"h ɛ l oʊ"` for "hello" (4 phonemes, 1 word)
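
Because the `words` and `phonemes` columns carry explicit timestamps, word-level phoneme groupings can also be recovered without parsing the string at all, by assigning each phoneme to the word whose time span contains it. A minimal sketch with synthetic timings (not dataset values):

```python
def group_phonemes_by_word(words, phonemes):
    """Assign each aligned phoneme to the word whose span contains its midpoint.

    Sketch only: assumes word labels are unique within the segment.
    """
    grouped = {w["word"]: [] for w in words}
    for p in phonemes:
        mid = (p["start"] + p["end"]) / 2
        for w in words:
            if w["start"] <= mid <= w["end"]:
                grouped[w["word"]].append(p["phoneme"])
                break
    return grouped

# Synthetic alignments for "hello" (invented timings)
words = [{"word": "hello", "start": 0.0, "end": 0.6}]
phonemes = [
    {"phoneme": "h", "start": 0.00, "end": 0.15},
    {"phoneme": "ɛ", "start": 0.15, "end": 0.30},
    {"phoneme": "l", "start": 0.30, "end": 0.45},
    {"phoneme": "oʊ", "start": 0.45, "end": 0.60},
]
print(group_phonemes_by_word(words, phonemes))  # {'hello': ['h', 'ɛ', 'l', 'oʊ']}
```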

## Citation

If you use this dataset, please cite the original source datasets and the Montreal Forced Aligner:

```bibtex
@inproceedings{mcauliffe2017montreal,
  title={Montreal Forced Aligner: Trainable text-speech alignment using Kaldi},
  author={McAuliffe, Michael and Socolof, Michaela and Mihuc, Sarah and Wagner, Michael and Sonderegger, Morgan},
  booktitle={Proc. Interspeech 2017},
  year={2017}
}
```

## License

This dataset is released under CC-BY-4.0. Please also respect the licenses of the original source datasets.