Update README.md
README.md
model_type: speech-to-text
widget:
- src: path_to_sample_audio_file.wav
---

This model still outputs gibberish and is not good enough yet. I will keep adding to this model for a while and see whether it improves.

# Whisper Tiny Fine-Tuned on Kalaallisut (Greenlandic)
This is a fine-tuned version of OpenAI's [Whisper Tiny](https://huggingface.co/openai/whisper-tiny) model, adapted to the **Kalaallisut** (Greenlandic) language. It has been trained and optimized specifically for transcribing this language, which has historically been underrepresented in speech recognition models.
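
A minimal sketch of transcribing audio with this checkpoint via the `transformers` ASR pipeline; the repository id `your-username/whisper-tiny-kalaallisut` and the audio path `sample.wav` below are placeholders, not values from this card:

```python
# Minimal transcription sketch using the Hugging Face ASR pipeline.
# NOTE: the model id and the audio path are placeholders; substitute the
# real repository id of this checkpoint and a real audio file.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-tiny-kalaallisut",  # placeholder id
)

# The pipeline accepts a path to a local audio file (decoding needs
# ffmpeg installed) and returns a dict with the text under "text".
result = asr("sample.wav")
print(result["text"])
```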
### Training Process
This model was carefully trained on a dataset of **Kalaallisut** audio files paired with transcriptions. Special care was taken to avoid the overfitting that occurred in earlier versions of this fine-tuning process. After reworking the training approach, including tuning hyperparameters and employing early stopping based on validation performance, the final **Word Error Rate (WER)** was reduced significantly to:
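As a companion to the description above, here is a rough sketch of early stopping monitored on WER with `Seq2SeqTrainer`. All of it is illustrative: the object names (`model`, `processor`, `train_ds`, `eval_ds`, `data_collator`), the output path, and the hyperparameter values are assumptions, since this card does not include the actual training script.

```python
# Sketch of early stopping monitored on WER; names and values are
# illustrative placeholders, not the settings actually used.
import evaluate
from transformers import (
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    EarlyStoppingCallback,
)

wer_metric = evaluate.load("wer")

def compute_metrics(eval_pred):
    # `processor` (a WhisperProcessor) is assumed to exist elsewhere.
    pred_ids = eval_pred.predictions
    label_ids = eval_pred.label_ids
    # -100 is the usual ignore index in seq2seq label padding.
    label_ids[label_ids == -100] = processor.tokenizer.pad_token_id
    preds = processor.batch_decode(pred_ids, skip_special_tokens=True)
    refs = processor.batch_decode(label_ids, skip_special_tokens=True)
    return {"wer": wer_metric.compute(predictions=preds, references=refs)}

args = Seq2SeqTrainingArguments(
    output_dir="whisper-tiny-kl",    # placeholder output path
    per_device_train_batch_size=16,  # illustrative value
    learning_rate=1e-5,              # illustrative value
    eval_strategy="steps",           # `evaluation_strategy` on older versions
    eval_steps=500,
    save_strategy="steps",
    save_steps=500,                  # align saves with evals for best-model loading
    load_best_model_at_end=True,     # required by EarlyStoppingCallback
    metric_for_best_model="wer",
    greater_is_better=False,         # lower WER is better
    predict_with_generate=True,      # decode to text so WER can be computed
)

trainer = Seq2SeqTrainer(
    model=model,                     # assumed: loaded WhisperForConditionalGeneration
    args=args,
    train_dataset=train_ds,          # assumed: prepared Kalaallisut splits
    eval_dataset=eval_ds,
    data_collator=data_collator,     # assumed: a padding collator for speech seq2seq
    compute_metrics=compute_metrics,
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
trainer.train()
```

With this setup, training stops once WER on the held-out set fails to improve for three consecutive evaluations, and the best checkpoint by WER is restored at the end.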