VoiceLessQ committed (verified)
Commit 29d6635 · 1 parent: ab4cc1d

Update README.md

Files changed (1)
1. README.md (+2, -1)
README.md CHANGED
@@ -12,13 +12,14 @@ model_type: speech-to-text
 widget:
 - src: path_to_sample_audio_file.wav
 ---
+This model still spits gibberish and not good enough. Still gonna add more to this model for a while and see if its improving.
+
 
 # Whisper Tiny Fine-Tuned on Kalaallisut (Greenlandic) 🌍
 
 This is a fine-tuned version of the [Whisper Tiny](https://huggingface.co/openai/whisper-tiny) model by OpenAI, adapted to the **Kalaallisut** (Greenlandic) language. The model has been trained and optimized to handle transcriptions specifically for this language, which is historically underrepresented in speech recognition models.
 
 ### 📚 Training Process
-This model still spits gibberish and not good enough. Still gonna add more to this model for a while and see if its improving.
 
 This model was carefully trained on a dataset of **Kalaallisut** audio files paired with transcriptions. Special care was taken to avoid overfitting, which occurred in earlier versions of this fine-tuning process. After reworking the training approach, including tweaking hyperparameters and employing early stopping to monitor model performance, the final **Word Error Rate (WER)** was reduced significantly to:
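
The README content in this diff describes an automatic-speech-recognition checkpoint, so a minimal, hedged usage sketch may help. The repo id below is a hypothetical placeholder (the commit page does not show the model's actual Hub id), and the wav path simply mirrors the placeholder from the README's widget config.

```python
# Minimal transcription sketch using the transformers ASR pipeline.
# NOTE: "VoiceLessQ/whisper-tiny-kalaallisut" is a hypothetical repo id,
# not confirmed anywhere on this commit page.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="VoiceLessQ/whisper-tiny-kalaallisut",  # hypothetical placeholder
)

# Decoding a file path relies on ffmpeg being installed; the path below is
# the README's own placeholder, not a real file.
result = asr("path_to_sample_audio_file.wav")
print(result["text"])
```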
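The training paragraph mentions early stopping keyed to model performance without showing a setup. Below is a minimal sketch of how that combination is typically wired up with the transformers Seq2SeqTrainer; every path, step count, and patience value is an illustrative assumption, not the author's actual configuration.

```python
# Sketch of early stopping monitored on WER, in the spirit of the README's
# training notes. All values here are illustrative assumptions.
from transformers import (
    EarlyStoppingCallback,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
    WhisperForConditionalGeneration,
)

model = WhisperForConditionalGeneration.from_pretrained("openai/whisper-tiny")

args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-kal",  # hypothetical output path
    eval_strategy="steps",            # older transformers: evaluation_strategy
    eval_steps=500,
    save_steps=500,
    load_best_model_at_end=True,      # required by EarlyStoppingCallback
    metric_for_best_model="wer",      # compute_metrics must return a "wer" key
    greater_is_better=False,          # lower WER is better
)

trainer = Seq2SeqTrainer(
    model=model,
    args=args,
    # train_dataset=..., eval_dataset=..., compute_metrics=... (omitted here)
    callbacks=[EarlyStoppingCallback(early_stopping_patience=3)],
)
# trainer.train() would stop once eval WER fails to improve for 3 evaluations.
```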