Commit 951c104 (verified) by nocturneFlow · Parent: 3b6024e

Update README.md

Files changed (1): README.md (+74, -71)

README.md CHANGED

---
library_name: transformers
language:
- kk
license: apache-2.0
base_model: openai/whisper-medium
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: Whisper Medium KK - Kazakh - Fleurs - Common Voice
  results: []
datasets:
- google/fleurs
- mozilla-foundation/common_voice_17_0
---

# Whisper Medium KK - Kazakh - Fleurs - Common Voice

This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on Kazakh speech from [google/fleurs](https://huggingface.co/datasets/google/fleurs) and [mozilla-foundation/common_voice_17_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0).
It achieves the following results on the evaluation set:
- Loss: 0.3910
- WER: 21.2101
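
As a quick way to try the checkpoint, here is a minimal transcription sketch using the 🤗 Transformers ASR pipeline. The repo ID and audio path are placeholders (the card does not state the published model ID), so substitute your own values:

```python
# Minimal sketch: Kazakh transcription with the fine-tuned Whisper checkpoint.
# "<this-repo-id>" and "audio.wav" are placeholders, not values from the card.
import torch
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="<this-repo-id>",   # placeholder: Hub ID of this fine-tuned model
    torch_dtype=torch.float16,
    device="cuda:0",          # use "cpu" if no GPU is available
)

result = asr(
    "audio.wav",              # placeholder: path to a Kazakh speech recording
    generate_kwargs={"language": "kazakh", "task": "transcribe"},
)
print(result["text"])
```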

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

Per the card metadata and title, training used the Kazakh subsets of [google/fleurs](https://huggingface.co/datasets/google/fleurs) and [mozilla-foundation/common_voice_17_0](https://huggingface.co/datasets/mozilla-foundation/common_voice_17_0); the exact train/evaluation split and preprocessing are not documented.
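
A hedged sketch of loading the Kazakh portions of both datasets follows. The config names ("kk_kz" for FLEURS, "kk" for Common Voice) are assumptions based on those datasets' usual naming, and Common Voice additionally requires accepting its terms on the Hub with an authenticated session:

```python
# Sketch: load the Kazakh portions of the two datasets named in the metadata.
# Config names "kk_kz" and "kk" are assumptions, not taken from the card.
from datasets import Audio, load_dataset

fleurs_kk = load_dataset("google/fleurs", "kk_kz", split="train")
cv_kk = load_dataset("mozilla-foundation/common_voice_17_0", "kk", split="train")

# Whisper is trained on 16 kHz audio, so resample both audio columns.
fleurs_kk = fleurs_kk.cast_column("audio", Audio(sampling_rate=16000))
cv_kk = cv_kk.cast_column("audio", Audio(sampling_rate=16000))
```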

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a sketch of the equivalent training configuration follows the list):
- learning_rate: 1e-05
- train_batch_size: 8
- eval_batch_size: 8
- seed: 42
- gradient_accumulation_steps: 4
- total_train_batch_size: 32
- optimizer: AdamW (torch implementation) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 5000
- mixed_precision_training: Native AMP
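
For reference, a hedged sketch of how these values map onto `transformers.Seq2SeqTrainingArguments`. The output directory and the evaluation cadence are assumptions (the 1000-step cadence is inferred from the results table below), since the original training script is not part of the card:

```python
# Sketch: the reported hyperparameters expressed as Seq2SeqTrainingArguments.
# output_dir, eval_strategy/eval_steps, and predict_with_generate are assumed.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-kk",  # assumed output path
    per_device_train_batch_size=8,     # train_batch_size
    per_device_eval_batch_size=8,      # eval_batch_size
    gradient_accumulation_steps=4,     # total train batch: 8 * 4 = 32
    learning_rate=1e-5,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=5000,                    # training_steps
    seed=42,
    optim="adamw_torch",
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    fp16=True,                         # Native AMP mixed precision
    eval_strategy="steps",             # assumed from the 1000-step result rows
    eval_steps=1000,
    predict_with_generate=True,        # assumed; needed to compute WER
)
```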

### Training results

| Training Loss | Epoch   | Step | Validation Loss | WER     |
|:-------------:|:-------:|:----:|:---------------:|:-------:|
| 0.0045        | 7.5725  | 1000 | 0.3121          | 23.2826 |
| 0.0003        | 15.1507 | 2000 | 0.3523          | 21.3939 |
| 0.0001        | 22.7232 | 3000 | 0.3738          | 21.3661 |
| 0.0001        | 30.3013 | 4000 | 0.3863          | 21.3772 |
| 0.0001        | 37.8738 | 5000 | 0.3910          | 21.2101 |
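
The WER column is presumably on the usual 0-100 percentage scale. A minimal sketch of how such a score is computed with the `evaluate` library (an assumption; the card does not include the original metric code):

```python
# Sketch: computing a WER percentage like those in the table above.
# The prediction/reference strings are hypothetical examples.
import evaluate

wer_metric = evaluate.load("wer")

predictions = ["бұл мысал транскрипция"]    # hypothetical model output
references = ["бұл мысал транскрипциясы"]   # hypothetical ground truth

wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # same 0-100 scale as the table
```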

### Framework versions

- Transformers 4.51.3
- PyTorch 2.6.0+cu118
- Datasets 3.6.0
- Tokenizers 0.21.0