asierhv committed
Commit 0bc9349 · verified · 1 Parent(s): bd9ab28

added description and "how to use" example

Files changed (1): README.md (+126 -37)
README.md CHANGED
 
  value: 10.025150042869392
---

# Whisper Small Catalan

## Model summary

**Whisper Small Catalan** is an automatic speech recognition (ASR) model for **Catalan (ca)** speech. It is fine-tuned from [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the **Catalan subset of Mozilla Common Voice 13.0**, achieving a **Word Error Rate (WER) of 10.03%** on the evaluation split.

This model offers higher transcription accuracy than the tiny and base variants while being smaller and faster than the medium and large variants.

---

## Model description

* **Architecture:** Transformer-based encoder–decoder (Whisper)
* **Base model:** openai/whisper-small
* **Language:** Catalan (ca)
* **Task:** Automatic Speech Recognition (ASR)
* **Output:** Text transcription in Catalan
* **Decoding:** Autoregressive sequence-to-sequence decoding

Fine-tuned to improve transcription quality on Catalan audio.

---

## Intended use

### Primary use cases

* Accurate transcription of Catalan audio
* Research and development in Catalan ASR
* Media, educational, or accessibility applications

### Out-of-scope use

* Real-time transcription without optimization
* Speech translation
* Safety-critical applications without further validation

---

## Limitations and known issues

* Performance may degrade on:
  * Noisy or low-quality recordings
  * Conversational or spontaneous speech
  * Regional dialects not well represented in Common Voice
* Occasional transcription errors on difficult audio

---

## Training and evaluation data

* **Dataset:** Mozilla Common Voice 13.0 (Catalan subset)
* **Data type:** Crowd-sourced, read speech
* **Preprocessing** (see the sketch after this list):
  * Audio resampled to 16 kHz
  * Text normalized using the Whisper tokenizer
  * Filtering of invalid or problematic samples
* **Evaluation metric:** Word Error Rate (WER) on a held-out evaluation set

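The exact preprocessing script is not included in this card, so the sketch below only illustrates how such a pipeline is typically assembled with 🤗 Datasets and the Whisper processor. The dataset ID and the `sentence` column follow the Common Voice layout; everything else (including the filtering step, omitted here) is an assumption.

```python
from datasets import Audio, load_dataset
from transformers import WhisperProcessor

# Processor bundles the feature extractor (log-Mel spectrograms) and tokenizer.
processor = WhisperProcessor.from_pretrained(
    "openai/whisper-small", language="catalan", task="transcribe"
)

# Catalan subset of Common Voice 13.0 (gated; requires accepting the terms).
cv = load_dataset("mozilla-foundation/common_voice_13_0", "ca", split="train")

# Resample every clip to the 16 kHz rate Whisper expects.
cv = cv.cast_column("audio", Audio(sampling_rate=16_000))

def prepare(batch):
    audio = batch["audio"]
    # Log-Mel input features from the raw waveform.
    batch["input_features"] = processor(
        audio["array"], sampling_rate=audio["sampling_rate"]
    ).input_features[0]
    # Tokenized reference transcription used as decoder labels.
    batch["labels"] = processor.tokenizer(batch["sentence"]).input_ids
    return batch

cv = cv.map(prepare, remove_columns=cv.column_names)
```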

---

## Evaluation results

| Metric     | Value      |
|:-----------|:-----------|
| WER (eval) | **10.03%** |

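WER counts word-level substitutions, insertions, and deletions against the reference transcript. A figure like the one above can be computed with the `evaluate` library, as in this minimal, hypothetical example:

```python
import evaluate

# WER = (substitutions + insertions + deletions) / words in the reference.
wer_metric = evaluate.load("wer")

references = ["bon dia a tothom"]   # ground-truth transcripts (toy example)
predictions = ["bon dia tothom"]    # model outputs (one deletion here)

wer = wer_metric.compute(references=references, predictions=predictions)
print(f"WER: {100 * wer:.2f}%")     # -> WER: 25.00%
```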

---

## Training procedure

### Training hyperparameters

* Learning rate: 1e-5
* Optimizer: Adam (β1=0.9, β2=0.999, ε=1e-8)
* LR scheduler: Linear
* Warmup steps: 500
* Training steps: 5,000
* Train batch size: 64
* Eval batch size: 32
* Seed: 42
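
These values map onto 🤗 Transformers training arguments roughly as follows. This is a reconstruction for illustration, not the original training script, and the output directory is a placeholder.

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the hyperparameters listed above.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-ca",  # placeholder path
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    evaluation_strategy="steps",
    eval_steps=1000,              # matches the eval cadence in the table below
    predict_with_generate=True,   # generate text so WER can be computed at eval
)
```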
 
### Training results (summary)

| Training Loss | Epoch | Step | Validation Loss | WER     |
|:-------------:|:-----:|:----:|:---------------:|:-------:|
| 0.1708        | 1.1   | 1000 | 0.2494          | 12.1846 |
| 0.0421        | 3.09  | 2000 | 0.2458          | 11.2689 |
| 0.0928        | 7.08  | 4000 | 0.2150          | 10.0394 |
| 0.0504        | 9.08  | 5000 | 0.2169          | 10.0252 |

---

## Framework versions

* Transformers 4.33.0.dev0
* PyTorch 2.0.1+cu117
* Datasets 2.14.4
* Tokenizers 0.13.3

---

## How to use

```python
from transformers import pipeline

hf_model = "HiTZ/whisper-small-ca"  # replace with the actual repo ID
device = 0  # GPU index; set to -1 for CPU

# Build an ASR pipeline from the fine-tuned checkpoint.
pipe = pipeline(
    task="automatic-speech-recognition",
    model=hf_model,
    device=device,
)

# Transcribe a local audio file and print the Catalan text.
result = pipe("audio.wav")
print(result["text"])
```
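
Whisper processes audio in 30-second windows, so for longer recordings the pipeline's built-in chunking can be enabled. This continuation reuses `hf_model` and `device` from the snippet above; the chunk length is only a suggested value.

```python
# Chunked transcription for recordings longer than Whisper's 30 s window.
pipe_long = pipeline(
    task="automatic-speech-recognition",
    model=hf_model,
    device=device,
    chunk_length_s=30,  # split long audio into 30 s chunks
)

print(pipe_long("long_audio.wav")["text"])  # "long_audio.wav" is a placeholder
```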

---

## Ethical considerations and risks

* This model transcribes speech and may process personal data.
* Users should ensure compliance with applicable data protection laws (e.g., GDPR).
* The model should not be used for surveillance or non-consensual audio processing.

---

## Citation

If you use this model in your research, please cite:

```bibtex
@misc{dezuazo2025whisperlmimprovingasrmodels,
  title={Whisper-LM: Improving ASR Models with Language Models for Low-Resource Languages},
  author={Xabier de Zuazo and Eva Navas and Ibon Saratxaga and Inma Hernáez Rioja},
  year={2025},
  eprint={2503.23542},
  archivePrefix={arXiv},
  primaryClass={cs.CL},
  url={https://arxiv.org/abs/2503.23542},
}
```

Please check the related paper preprint at
[arXiv:2503.23542](https://arxiv.org/abs/2503.23542)
for more details.

---

## License

This model is available under the
[Apache-2.0 License](https://www.apache.org/licenses/LICENSE-2.0).
You are free to use, modify, and distribute this model as long as you credit
the original creators.

---

## Contact and attribution

* Fine-tuning and evaluation: HiTZ/Aholab - Basque Center for Language Technology
* Base model: OpenAI Whisper
* Dataset: Mozilla Common Voice

For questions or issues, please open an issue in the model repository.