nocturneFlow committed
Commit b0fadbe · verified · 1 Parent(s): ca613b9

Update README.md

Files changed (1)
  1. README.md +63 -71
README.md CHANGED
@@ -1,71 +1,63 @@
- ---
- library_name: transformers
- language:
- - kk
- license: apache-2.0
- base_model: nocturneFlow/whisper-medium-ft
- tags:
- - generated_from_trainer
- metrics:
- - wer
- model-index:
- - name: Whisper Medium KK - Kazakh - Fleurs - Common Voice
-   results: []
- ---
-
- <!-- This model card has been generated automatically according to the information the Trainer had access to. You
- should probably proofread and complete it, then remove this comment. -->
-
- # Whisper Medium KK - Kazakh - Fleurs - Common Voice
-
- This model is a fine-tuned version of [nocturneFlow/whisper-medium-ft](https://huggingface.co/nocturneFlow/whisper-medium-ft) on an unknown dataset.
- It achieves the following results on the evaluation set:
- - Loss: 0.0778
- - Wer: 10.2803
-
- ## Model description
-
- More information needed
-
- ## Intended uses & limitations
-
- More information needed
-
- ## Training and evaluation data
-
- More information needed
-
- ## Training procedure
-
- ### Training hyperparameters
-
- The following hyperparameters were used during training:
- - learning_rate: 1e-05
- - train_batch_size: 8
- - eval_batch_size: 8
- - seed: 42
- - gradient_accumulation_steps: 4
- - total_train_batch_size: 32
- - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
- - lr_scheduler_type: linear
- - lr_scheduler_warmup_steps: 500
- - training_steps: 5000
- - mixed_precision_training: Native AMP
-
- ### Training results
-
- | Training Loss | Epoch | Step | Validation Loss | Wer |
- |:-------------:|:------:|:----:|:---------------:|:-------:|
- | 0.1487 | 0.2173 | 1000 | 0.1408 | 17.6987 |
- | 0.1137 | 0.4347 | 2000 | 0.1059 | 13.7526 |
- | 0.092 | 0.6520 | 3000 | 0.0924 | 12.0276 |
- | 0.0814 | 0.8693 | 4000 | 0.0814 | 10.6566 |
- | 0.047 | 1.0865 | 5000 | 0.0778 | 10.2803 |
-
-
- ### Framework versions
-
- - Transformers 4.51.3
- - Pytorch 2.6.0+cu118
- - Datasets 3.6.0
- - Tokenizers 0.21.0
 
+ ---
+ library_name: transformers
+ language:
+ - kk
+ license: apache-2.0
+ base_model: nocturneFlow/whisper-medium-ft
+ tags:
+ - generated_from_trainer
+ metrics:
+ - wer
+ model-index:
+ - name: Whisper Medium KK - Kazakh - Fleurs - Common Voice
+   results: []
+ datasets:
+ - Shirali/ISSAI_KSC_335RS_v_1_1
+ - mozilla-foundation/common_voice_17_0
+ - google/fleurs
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # Whisper Medium KK - Kazakh - Fleurs - Common Voice
+
+ This model is a fine-tuned version of [nocturneFlow/whisper-medium-ft](https://huggingface.co/nocturneFlow/whisper-medium-ft) on the Kazakh datasets listed in the metadata above (Shirali/ISSAI_KSC_335RS_v_1_1, mozilla-foundation/common_voice_17_0, and google/fleurs).
+ It achieves the following results on the evaluation set:
+ - Loss: 0.0778
+ - Wer: 10.2803
+
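A minimal inference sketch follows, for readers who want to try the checkpoint described above. It is not part of the committed card: the model id `your-namespace/whisper-medium-kk` and the audio path are placeholders, since the repository id is not stated in this diff, and only the standard `transformers` ASR pipeline API is assumed.

```python
# Sketch: transcribing Kazakh speech with this fine-tuned Whisper checkpoint.
# "your-namespace/whisper-medium-kk" is a placeholder for this repository's id.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-namespace/whisper-medium-kk",  # placeholder model id
    chunk_length_s=30,  # Whisper operates on 30-second windows
)

result = asr(
    "sample_kk.wav",  # illustrative path to a 16 kHz mono audio file
    generate_kwargs={"language": "kk", "task": "transcribe"},
)
print(result["text"])
```

Passing `language` and `task` through `generate_kwargs` pins decoding to Kazakh transcription instead of letting Whisper auto-detect the language on short or noisy clips.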
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training:
+ - learning_rate: 1e-05
+ - train_batch_size: 8
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 32
+ - optimizer: AdamW (torch) with betas=(0.9, 0.999), epsilon=1e-08, and no additional optimizer arguments
+ - lr_scheduler_type: linear
+ - lr_scheduler_warmup_steps: 500
+ - training_steps: 5000
+ - mixed_precision_training: Native AMP
+
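The hyperparameters listed above correspond closely to the standard `transformers` `Seq2SeqTrainingArguments`. A sketch of that mapping is shown below; `output_dir` and anything not listed in the card (evaluation or save cadence, for example) are assumptions, not values taken from the commit.

```python
# Sketch: expressing the listed hyperparameters with the Hugging Face Trainer.
# output_dir and other unlisted settings are assumptions, not from the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-kk",  # assumed; not stated in the card
    per_device_train_batch_size=8,     # train_batch_size: 8
    per_device_eval_batch_size=8,      # eval_batch_size: 8
    gradient_accumulation_steps=4,     # gives total_train_batch_size: 32
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,                    # training_steps: 5000
    seed=42,
    optim="adamw_torch",               # AdamW, betas=(0.9, 0.999), eps=1e-08
    fp16=True,                         # mixed_precision_training: Native AMP
)
```

The effective batch size of 32 comes from 8 examples per device times 4 gradient-accumulation steps, which is what the total_train_batch_size entry records.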
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:------:|:----:|:---------------:|:-------:|
+ | 0.1487 | 0.2173 | 1000 | 0.1408 | 17.6987 |
+ | 0.1137 | 0.4347 | 2000 | 0.1059 | 13.7526 |
+ | 0.092 | 0.6520 | 3000 | 0.0924 | 12.0276 |
+ | 0.0814 | 0.8693 | 4000 | 0.0814 | 10.6566 |
+ | 0.047 | 1.0865 | 5000 | 0.0778 | 10.2803 |
+
+
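The Wer column above is the word error rate, reported as a percentage. A small sketch of how such a score is typically computed with the `evaluate` library is given below; the Kazakh strings are invented examples, and the card does not state that this exact code produced its numbers.

```python
# Sketch: computing word error rate (WER) with the `evaluate` library.
# The strings below are made-up examples, not taken from the evaluation set.
import evaluate

wer_metric = evaluate.load("wer")

references = ["бүгін ауа райы жақсы"]
predictions = ["бүгін ауа райы жақсы"]

wer = wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {100 * wer:.4f}%")  # the card reports WER scaled to percent
```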
+ ### Framework versions
+
+ - Transformers 4.51.3
+ - Pytorch 2.6.0+cu118
+ - Datasets 3.6.0
+ - Tokenizers 0.21.0