Commit d748567 (verified) · 1 Parent(s): 96fc080
Committed by yuriyvnv

Update README.md

Files changed (1): README.md +250 -75

README.md CHANGED
@@ -1,83 +1,258 @@
  ---
- library_name: transformers
  license: apache-2.0
  base_model: openai/whisper-tiny
  tags:
- - generated_from_trainer
  model-index:
  - name: whisper-tiny-mixed-nl
-   results: []
  ---

- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # whisper-tiny-mixed-nl
-
- This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.3334
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 5e-05
- - train_batch_size: 256
- - eval_batch_size: 8
- - seed: 42
- - optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_ratio: 0.1
- - num_epochs: 5
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss |
- |:-------------:|:------:|:----:|:---------------:|
- | 0.5185 | 0.1961 | 50 | 0.6420 |
- | 0.3326 | 0.3922 | 100 | 0.5209 |
- | 0.2777 | 0.5882 | 150 | 0.4578 |
- | 0.2297 | 0.7843 | 200 | 0.4240 |
- | 0.206 | 0.9804 | 250 | 0.3979 |
- | 0.1444 | 1.1765 | 300 | 0.3879 |
- | 0.1445 | 1.3725 | 350 | 0.3699 |
- | 0.1388 | 1.5686 | 400 | 0.3627 |
- | 0.1289 | 1.7647 | 450 | 0.3518 |
- | 0.1302 | 1.9608 | 500 | 0.3454 |
- | 0.0809 | 2.1569 | 550 | 0.3433 |
- | 0.079 | 2.3529 | 600 | 0.3409 |
- | 0.08 | 2.5490 | 650 | 0.3363 |
- | 0.0804 | 2.7451 | 700 | 0.3321 |
- | 0.0783 | 2.9412 | 750 | 0.3300 |
- | 0.0487 | 3.1373 | 800 | 0.3327 |
- | 0.0493 | 3.3333 | 850 | 0.3324 |
- | 0.0483 | 3.5294 | 900 | 0.3334 |
- | 0.048 | 3.7255 | 950 | 0.3317 |
- | 0.0475 | 3.9216 | 1000 | 0.3292 |
- | 0.0323 | 4.1176 | 1050 | 0.3335 |
- | 0.0317 | 4.3137 | 1100 | 0.3335 |
- | 0.031 | 4.5098 | 1150 | 0.3340 |
- | 0.0325 | 4.7059 | 1200 | 0.3333 |
- | 0.0327 | 4.9020 | 1250 | 0.3334 |
-
-
- ### Framework versions
-
- - Transformers 4.50.2
- - Pytorch 2.5.1+cu124
- - Datasets 3.6.0
- - Tokenizers 0.21.2
  ---
  license: apache-2.0
+ language:
+ - nl
  base_model: openai/whisper-tiny
  tags:
+ - automatic-speech-recognition
+ - whisper
+ - dutch
+ - speech
+ - audio
+ - synthetic-data
+ - asr
+ - hf-asr-leaderboard
+ datasets:
+ - mozilla-foundation/common_voice_17_0
+ - yuriyvnv/synthetic_transcript_nl
  model-index:
  - name: whisper-tiny-mixed-nl
+   results:
+   - task:
+       type: automatic-speech-recognition
+       name: Automatic Speech Recognition
+     dataset:
+       name: Common Voice 17.0 (Dutch)
+       type: mozilla-foundation/common_voice_17_0
+       config: nl
+       split: test
+     metrics:
+     - type: wer
+       value: 25.05
+       name: Test WER
+   - task:
+       type: automatic-speech-recognition
+       name: Automatic Speech Recognition
+     dataset:
+       name: Multilingual LibriSpeech (Dutch)
+       type: facebook/multilingual_librispeech
+       config: dutch
+       split: test
+     metrics:
+     - type: wer
+       value: 43.11
+       name: Test WER (MLS)
+ pipeline_tag: automatic-speech-recognition
+ library_name: transformers
  ---

+ # Whisper-Tiny Dutch - Mixed Synthetic Data (Mid-High Quality Filtered)
+
+ This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) for Dutch automatic speech recognition (ASR). It was trained on Common Voice 17.0 Dutch combined with **WAVe-filtered synthetic speech data** (quality threshold q ≥ 0.5).
+
+ ## Introduction
+
+ ### How the Data Was Created
+
+ The training data combines real speech from Common Voice 17.0 with synthetic speech that was generated and quality-filtered through a three-stage pipeline:
+
+ 1. **Transcript Generation**: We used GPT-4o-mini to generate Dutch transcripts that match the word count distribution observed in Common Voice, ensuring realistic utterance lengths and diverse linguistic content.
+
+ 2. **Speech Synthesis**: Each transcript was converted to audio using OpenAI's TTS-1 model with 9 different voice variants (alloy, ash, coral, echo, fable, nova, onyx, sage, shimmer), producing 34,898 synthetic samples (a minimal sketch of this step follows the list).
+
+ 3. **Quality Filtering with WAVe**: Raw synthetic speech often contains defects such as mispronunciations, omitted words, or prosodic anomalies. To address this, we applied **WAVe (Word-Aligned Verification)**, a model that assesses audio-text alignment at the word level rather than the sentence level. WAVe uses multi-head attention to align each word to its corresponding audio frames and assigns per-word confidence scores via a GLU-based scorer. Samples scoring below the threshold (q < 0.5) were removed, retaining 30,182 high-quality synthetic samples.
+
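The original generation scripts are not part of this card, but the synthesis step can be sketched with the OpenAI Python SDK. Everything below is illustrative: the `transcripts` list, the output directory, and the WAV output format are placeholders rather than the authors' actual setup.

```python
# Illustrative sketch of the TTS step (not the authors' original script).
# Assumes OPENAI_API_KEY is set in the environment.
from pathlib import Path
from openai import OpenAI

client = OpenAI()
VOICES = ["alloy", "ash", "coral", "echo", "fable", "nova", "onyx", "sage", "shimmer"]

transcripts = ["Dit is een voorbeeldzin.", "Nog een korte Nederlandse zin."]  # placeholder transcripts
out_dir = Path("synthetic_audio")
out_dir.mkdir(exist_ok=True)

for i, text in enumerate(transcripts):
    voice = VOICES[i % len(VOICES)]  # cycle through the nine voice variants
    speech = client.audio.speech.create(model="tts-1", voice=voice, input=text, response_format="wav")
    (out_dir / f"sample_{i:06d}_{voice}.wav").write_bytes(speech.content)
```
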
+ ### How the Model Was Created
+
+ The model was fine-tuned from `openai/whisper-tiny` using the Hugging Face Transformers library with the following approach:
+
+ 1. **Mixed Training**: Combined 34,952 real speech samples from Common Voice 17.0 Dutch with 30,182 WAVe-filtered synthetic samples (65,134 total; a data-mixing sketch follows the list).
+
+ 2. **Optimization**: Trained for 5 epochs with a learning rate of 5e-5, global batch size of 256, and BF16 precision on an NVIDIA H200 GPU.
+
+ 3. **Checkpoint Selection**: The best checkpoint was selected based on validation loss, occurring at step 1000 with a validation loss of 0.3292.
+
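A minimal sketch of the data-mixing step with Hugging Face `datasets` follows. The column names (`sentence`, `text`, `audio`), the splits, and the shuffling seed are assumptions for illustration; the original preprocessing code is not published in this card.

```python
from datasets import Audio, concatenate_datasets, load_dataset

# Real Dutch speech (gated dataset: requires accepting the terms on the Hub).
real = load_dataset("mozilla-foundation/common_voice_17_0", "nl", split="train")
real = real.rename_column("sentence", "text")

# WAVe-filtered synthetic speech; assumed here to expose "audio" and "text" columns.
synthetic = load_dataset("yuriyvnv/synthetic_transcript_nl", split="train")

# Keep only the columns needed for Whisper fine-tuning and resample to 16 kHz.
keep = ("audio", "text")
real = real.remove_columns([c for c in real.column_names if c not in keep])
synthetic = synthetic.remove_columns([c for c in synthetic.column_names if c not in keep])
real = real.cast_column("audio", Audio(sampling_rate=16_000))
synthetic = synthetic.cast_column("audio", Audio(sampling_rate=16_000))

train_data = concatenate_datasets([real, synthetic]).shuffle(seed=42)
print(train_data)
```
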
+ This approach achieves **3.7% relative improvement** on the Common Voice test set compared to training on real data alone, while also improving cross-domain generalization on the Multilingual LibriSpeech benchmark.
+
+ ## Model Details
+
+ | Property | Value |
+ |----------|-------|
+ | **Base Model** | openai/whisper-tiny |
+ | **Language** | Dutch (nl) |
+ | **Task** | Automatic Speech Recognition (transcribe) |
+ | **Parameters** | 39M |
+ | **Training Data** | Common Voice 17.0 + Mid-High Quality Synthetic |
+ | **Total Training Samples** | 65,134 |
+ | **Sampling Rate** | 16kHz |
+
+ ## Evaluation Results
+
+ ### This Model (whisper-tiny-mixed-nl)
+
+ | Metric | Value |
+ |--------|-------|
+ | **Validation Loss** | 0.3292 |
+ | **Validation WER** | 19.36% |
+ | **Test WER (Common Voice)** | 25.05% |
+ | **Test WER (MLS)** | 43.11% |
+ | **Best Checkpoint** | Step 1000 |
+ | **Max Training Steps** | 1,270 |
+
+ ### Comparison with Other Training Configurations
+
+ | Training Data | Max Steps | Val Loss | Val WER | Test WER (CV) | Test WER (MLS) |
+ |---------------|-----------|----------|---------|---------------|----------------|
+ | Common Voice Only | 680 | 0.3382 | 19.77% | 26.00% | 44.85% |
+ | High-Quality Filtered + CV | 890 | 0.3323 | 19.59% | 25.51% | 43.76% |
+ | **Mid-High Quality Filtered + CV** | **1,270** | **0.3292** | **19.36%** | **25.05%** | **43.11%** |
+ | All Synthetic + CV (Unfiltered) | 1,365 | 0.3207 | 19.61% | 24.93% | 43.12% |
+
+ ### Key Performance Highlights
+
+ - **Best Validation WER** (19.36%) among all Whisper-Tiny Dutch configurations
+ - **Best cross-domain generalization** on MLS benchmark (43.11% WER)
+ - **3.7% relative improvement** on Common Voice test set vs baseline (25.05% vs 26.00%)
+ - **7% fewer training steps** than unfiltered synthetic data while achieving better generalization
+
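The WER figures above can in principle be reproduced along the following lines. This is an approximation: the text normalization behind the reported numbers is not specified here, Common Voice 17.0 is gated (it requires accepting its terms on the Hub), and the snippet only scores a small subset for speed.

```python
import evaluate
from datasets import Audio, load_dataset
from transformers import pipeline

wer_metric = evaluate.load("wer")
asr = pipeline(
    "automatic-speech-recognition",
    model="yuriyvnv/whisper-tiny-mixed-nl",
    device="cuda",
    generate_kwargs={"language": "nl", "task": "transcribe"},
)

test = load_dataset("mozilla-foundation/common_voice_17_0", "nl", split="test")
test = test.cast_column("audio", Audio(sampling_rate=16_000))

predictions, references = [], []
for sample in test.select(range(100)):  # small subset for a quick sanity check
    predictions.append(asr(sample["audio"])["text"])
    references.append(sample["sentence"])

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER on the subset: {wer:.2f}%")
```
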
+ ## Training Data
+
+ ### Dataset Composition
+
+ | Source | Samples | Description |
+ |--------|---------|-------------|
+ | [Common Voice 17.0 Dutch](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0) | 34,952 | Real speech from Mozilla's crowdsourced dataset |
+ | [Synthetic Transcript NL](https://huggingface.co/datasets/yuriyvnv/synthetic_transcript_nl) (q ≥ 0.5) | 30,182 | WAVe-filtered TTS audio from GPT-4o-mini transcripts |
+ | **Total** | **65,134** | |
+
+ ### Synthetic Data Generation Pipeline
+
+ The synthetic dataset ([yuriyvnv/synthetic_transcript_nl](https://huggingface.co/datasets/yuriyvnv/synthetic_transcript_nl)) was generated using:
+
+ 1. **Transcript Generation**: GPT-4o-mini, matching Common Voice word count distribution
+ 2. **Speech Synthesis**: OpenAI TTS-1 model with 9 voice variants (alloy, ash, coral, echo, fable, nova, onyx, sage, shimmer)
+ 3. **Quality Filtering**: WAVe model filtering at threshold q ≥ 0.5
+
+ ### WAVe Quality Distribution (Dutch Synthetic Data)
+
+ | Quality Level | Samples | Percentage |
+ |--------------|---------|------------|
+ | High (q ≥ 0.8) | 10,555 | 30.2% |
+ | Medium (0.5 ≤ q < 0.8) | 19,627 | 56.2% |
+ | Low (q < 0.5) - Removed | 4,716 | 13.5% |
+
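Applying the q ≥ 0.5 cut is straightforward once per-sample WAVe scores are available. The sketch below assumes a hypothetical `wave_score` column on the synthetic dataset; the actual column name (and whether scores ship with the dataset at all) should be checked on the dataset card.

```python
from datasets import load_dataset

synthetic = load_dataset("yuriyvnv/synthetic_transcript_nl", split="train")

QUALITY_THRESHOLD = 0.5  # the "mid-high" setting used for this model
# "wave_score" is a hypothetical column name used only for illustration.
filtered = synthetic.filter(lambda ex: ex["wave_score"] >= QUALITY_THRESHOLD)

print(f"Kept {len(filtered)} of {len(synthetic)} samples")
# With the distribution above, this keeps roughly 30,182 of 34,898 samples.
```
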
+ ## Training Procedure
+
+ ### Hyperparameters
+
+ | Parameter | Value |
+ |-----------|-------|
+ | Learning Rate | 5e-5 |
+ | Batch Size (Global) | 256 |
+ | Warmup Steps | 200 |
+ | Max Epochs | 5 |
+ | Precision | BF16 |
+ | Optimizer | AdamW (fused) |
+ | Eval Steps | 50 |
+ | Metric for Best Model | eval_loss |
+
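A `Seq2SeqTrainingArguments` configuration matching this table might look as follows. The per-device batch size / gradient-accumulation split (64 × 4 = 256 global) is an assumption; only the values listed in the table (and the eval batch size from the earlier auto-generated card) are taken from this page.

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-mixed-nl",
    learning_rate=5e-5,
    per_device_train_batch_size=64,   # assumption: 64 x 4 accumulation = 256 global
    gradient_accumulation_steps=4,
    per_device_eval_batch_size=8,     # value from the earlier auto-generated card
    num_train_epochs=5,
    warmup_steps=200,
    bf16=True,
    optim="adamw_torch_fused",
    eval_strategy="steps",
    eval_steps=50,
    save_steps=50,
    load_best_model_at_end=True,
    metric_for_best_model="eval_loss",
    greater_is_better=False,
    predict_with_generate=True,
)
```

With `load_best_model_at_end=True` and `metric_for_best_model="eval_loss"`, the trainer restores the checkpoint with the lowest validation loss (step 1000 for this model) when training finishes.
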
+ ### Training Infrastructure
+
+ - **GPU**: NVIDIA H200 (140GB VRAM)
+ - **Operating System**: Ubuntu 22.04
+ - **Framework**: Hugging Face Transformers
+
+ ### Training Curve
+
+ ```
+ Step  100: val_loss = 0.5209
+ Step  250: val_loss = 0.3979
+ Step  500: val_loss = 0.3454
+ Step  750: val_loss = 0.3300
+ Step 1000: val_loss = 0.3292  ← Best checkpoint
+ Step 1250: val_loss = 0.3334
+ ```
+
+ ## Usage
+
+ ### Transcription Pipeline
+
+ ```python
+ from transformers import pipeline
+
+ transcriber = pipeline(
+     "automatic-speech-recognition",
+     model="yuriyvnv/whisper-tiny-mixed-nl",
+     device="cuda"
+ )
+
+ result = transcriber("path/to/dutch_audio.wav")
+ print(result["text"])
+ ```
+
+ ### Direct Model Usage
+
+ ```python
+ from transformers import WhisperProcessor, WhisperForConditionalGeneration
+ import librosa
+
+ processor = WhisperProcessor.from_pretrained("yuriyvnv/whisper-tiny-mixed-nl")
+ model = WhisperForConditionalGeneration.from_pretrained("yuriyvnv/whisper-tiny-mixed-nl")
+ model.to("cuda")
+
+ audio, sr = librosa.load("path/to/dutch_audio.wav", sr=16000)
+ input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features.to("cuda")
+
+ predicted_ids = model.generate(input_features)
+ transcription = processor.batch_decode(predicted_ids, skip_special_tokens=True)[0]
+ print(transcription)
+ ```
+
+ ### Specifying Language
+
+ ```python
+ model.generation_config.language = "nl"
+ model.generation_config.task = "transcribe"
+ ```
+
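When using the pipeline API instead of calling `generate` directly, the same language and task constraint can be passed per call through `generate_kwargs` (using the `transcriber` defined above):

```python
# Force Dutch transcription for a single pipeline call.
result = transcriber(
    "path/to/dutch_audio.wav",
    generate_kwargs={"language": "nl", "task": "transcribe"},
)
print(result["text"])
```
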
+ ## Methodology
+
+ This model leverages **WAVe (Word-Aligned Verification)**, a word-level quality assessment method for filtering synthetic speech data. Unlike sentence-level filtering approaches, WAVe:
+
+ - Aligns each word to its corresponding audio frames using multi-head attention
+ - Assigns per-word confidence scores via a GLU-based scorer
+ - Detects localized synthesis errors (mispronunciations, omitted words, prosodic anomalies)
+ - Achieves **6.5% improvement** over sentence-level filtering methods
+
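The WAVe architecture itself is described in the referenced paper; purely as an illustration of the idea (word representations cross-attending over audio frames, followed by a GLU-gated per-word scorer), a schematic PyTorch sketch is shown below. Dimensions, layer choices, and the aggregation into an utterance-level score q are illustrative assumptions, not the authors' implementation.

```python
# Schematic sketch of the idea behind WAVe, NOT the authors' implementation:
# each word representation attends over audio frames (cross-attention) and a
# GLU-gated head turns the attended vector into a per-word confidence in [0, 1].
import torch
import torch.nn as nn

class WordAlignmentScorer(nn.Module):
    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.proj = nn.Linear(dim, 2 * dim)  # produces (value, gate) for the GLU
        self.glu = nn.GLU(dim=-1)            # gated linear unit
        self.score = nn.Linear(dim, 1)

    def forward(self, word_emb: torch.Tensor, audio_frames: torch.Tensor) -> torch.Tensor:
        # word_emb: (batch, n_words, dim); audio_frames: (batch, n_frames, dim)
        aligned, _ = self.cross_attn(query=word_emb, key=audio_frames, value=audio_frames)
        gated = self.glu(self.proj(aligned))                          # (batch, n_words, dim)
        return torch.sigmoid(self.score(gated)).squeeze(-1)          # (batch, n_words)

words = torch.randn(1, 7, 256)     # 7 word embeddings
frames = torch.randn(1, 300, 256)  # 300 audio frames
print(WordAlignmentScorer()(words, frames).shape)  # torch.Size([1, 7])
```

An utterance-level q could then be an aggregate (for example the mean or minimum) of the per-word scores; samples whose q falls below 0.5 are the ones discarded in the quality distribution above.
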
+ For full methodology details, see the references below.
+
+ ## Limitations
+
+ - **Model capacity**: Whisper-Tiny (39M params) has limited representational power
+ - **Domain specificity**: Optimized for general Dutch; may underperform on technical domains
+ - **Acoustic conditions**: Trained on clean speech; noise robustness not guaranteed
+ - **Dialect coverage**: Performance may vary across Dutch regional variants
+
+ ## Citation
+
+ ```bibtex
+ @article{perezhohin2024enhancing,
+   title={Enhancing Automatic Speech Recognition: Effects of Semantic Audio Filtering on Models Performance},
+   author={Perezhohin, Yuriy and Santos, Tiago and Costa, Victor and Peres, Fernando and Castelli, Mauro},
+   journal={IEEE Access},
+   year={2024},
+   publisher={IEEE}
+ }
+ ```
+
+ ## References
+
+ - **Base Model**: [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny)
+ - **Training Data (Real)**: [mozilla-foundation/common_voice_17_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0)
+ - **Training Data (Synthetic)**: [yuriyvnv/synthetic_transcript_nl](https://huggingface.co/datasets/yuriyvnv/synthetic_transcript_nl)
+ - **Whisper Paper**: [Robust Speech Recognition via Large-Scale Weak Supervision](https://arxiv.org/abs/2212.04356)
+ - **IEEE Access Paper**: [Enhancing ASR with Semantic Audio Filtering](https://ieeexplore.ieee.org/document/10720758)
+
+ ## License
+
+ Apache 2.0