personalizedrefrigerator committed
Commit ae4f787 · verified · 1 Parent(s): f43ad77

End of training
README.md CHANGED
@@ -1,44 +1,27 @@
 ---
 library_name: transformers
-language:
-- fr
 license: apache-2.0
 base_model: openai/whisper-tiny
 tags:
 - generated_from_trainer
-datasets:
-- mozilla-foundation/common_voice_11_0
 metrics:
 - wer
 model-index:
-- name: Whisper Tiny (Finetuned on French)
-  results:
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: Common Voice 11.0
-      type: mozilla-foundation/common_voice_11_0
-      config: fr
-      split: test
-      args: fr
-    metrics:
-    - name: Wer
-      type: wer
-      value: 0.4218038891187422
+- name: whisper-tiny-fr
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Whisper Tiny (Finetuned on French)
+# whisper-tiny-fr
 
-This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the Common Voice 11.0 dataset.
+This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the None dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.7793
-- Model Preparation Time: 0.0027
-- Wer: 0.4218
-- Cer: 0.1940
+- Loss: 0.6871
+- Model Preparation Time: 0.0056
+- Wer: 0.5022
+- Cer: 0.3447
 
 ## Model description
 
@@ -57,36 +40,40 @@ More information needed
 ### Training hyperparameters
 
 The following hyperparameters were used during training:
-- learning_rate: 1e-05
-- train_batch_size: 16
+- learning_rate: 0.0001
+- train_batch_size: 8
 - eval_batch_size: 8
 - seed: 42
-- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
-- training_steps: 12000
+- training_steps: 16000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch  | Step  | Validation Loss | Model Preparation Time | Wer    | Cer    |
 |:-------------:|:------:|:-----:|:---------------:|:----------------------:|:------:|:------:|
-| 0.516         | 0.0833 | 1000  | 0.9995          | 0.0027                 | 0.4487 | 0.2213 |
-| 0.5283        | 0.1667 | 2000  | 0.9679          | 0.0027                 | 0.4511 | 0.2207 |
-| 0.5421        | 0.25   | 3000  | 0.9532          | 0.0027                 | 0.4462 | 0.2172 |
-| 0.5735        | 0.3333 | 4000  | 0.9474          | 0.0027                 | 0.4365 | 0.2110 |
-| 0.5774        | 0.4167 | 5000  | 0.9119          | 0.0027                 | 0.4794 | 0.2390 |
-| 0.591         | 0.5    | 6000  | 0.8834          | 0.0027                 | 0.4171 | 0.2024 |
-| 0.5218        | 0.5833 | 7000  | 0.8777          | 0.0027                 | 0.4293 | 0.2096 |
-| 0.4328        | 0.6667 | 8000  | 0.8750          | 0.0027                 | 0.4139 | 0.2017 |
-| 0.5392        | 0.75   | 9000  | 0.8736          | 0.0027                 | 0.5618 | 0.3050 |
-| 0.4311        | 0.8333 | 10000 | 0.8587          | 0.0027                 | 0.5618 | 0.3030 |
-| 0.4728        | 0.9167 | 11000 | 0.8514          | 0.0027                 | 0.4293 | 0.2034 |
-| 0.4521        | 1.0    | 12000 | 0.8516          | 0.0027                 | 0.4220 | 0.2054 |
+| 1.0566        | 0.0625 | 1000  | 1.3246          | 0.0056                 | 0.7494 | 0.4923 |
+| 0.8712        | 0.125  | 2000  | 1.1335          | 0.0056                 | 0.6508 | 0.4880 |
+| 0.7638        | 0.1875 | 3000  | 1.0380          | 0.0056                 | 0.5891 | 0.4386 |
+| 0.8262        | 0.25   | 4000  | 0.9789          | 0.0056                 | 0.5435 | 0.3623 |
+| 0.669         | 0.3125 | 5000  | 0.9403          | 0.0056                 | 0.5613 | 0.3955 |
+| 0.6105        | 0.375  | 6000  | 0.9065          | 0.0056                 | 0.5876 | 0.3812 |
+| 0.5432        | 0.4375 | 7000  | 0.8885          | 0.0056                 | 0.5350 | 0.3625 |
+| 0.5188        | 0.5    | 8000  | 0.8876          | 0.0056                 | 0.5612 | 0.3821 |
+| 0.6963        | 0.5625 | 9000  | 0.8451          | 0.0056                 | 0.5926 | 0.3922 |
+| 0.6387        | 0.625  | 10000 | 0.7571          | 0.0056                 | 0.4981 | 0.3410 |
+| 0.5572        | 0.6875 | 11000 | 0.7194          | 0.0056                 | 0.5045 | 0.3508 |
+| 0.5207        | 0.75   | 12000 | 0.7124          | 0.0056                 | 0.4742 | 0.3312 |
+| 0.4515        | 0.8125 | 13000 | 0.7004          | 0.0056                 | 0.4817 | 0.3345 |
+| 0.4858        | 0.875  | 14000 | 0.7089          | 0.0056                 | 0.4602 | 0.3271 |
+| 0.4601        | 0.9375 | 15000 | 0.6796          | 0.0056                 | 0.5509 | 0.3672 |
+| 0.4808        | 1.0    | 16000 | 0.6719          | 0.0056                 | 0.4443 | 0.2966 |
 
 
 ### Framework versions
 
-- Transformers 4.46.3
-- Pytorch 2.5.1+cu121
-- Datasets 3.1.0
-- Tokenizers 0.20.3
+- Transformers 4.49.0
+- Pytorch 2.6.0+cu124
+- Datasets 3.4.1
+- Tokenizers 0.21.1
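The card's headline metrics are word error rate (WER) and character error rate (CER). Both are edit distances normalized by the reference length; WER counts word-level edits, CER character-level ones. As a rough illustration of what the reported numbers mean (not the exact implementation the Trainer uses, which typically comes from the `evaluate`/`jiwer` libraries), a minimal sketch:

```python
def edit_distance(ref, hyp):
    """Levenshtein distance between two sequences, single-row DP."""
    dp = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, 1):
        prev, dp[0] = dp[0], i
        for j, h in enumerate(hyp, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1,        # deletion
                                     dp[j - 1] + 1,    # insertion
                                     prev + (r != h))  # substitution
    return dp[-1]

def wer(reference, hypothesis):
    """Word error rate: word-level edit distance / reference word count."""
    ref_words = reference.split()
    return edit_distance(ref_words, hypothesis.split()) / len(ref_words)

def cer(reference, hypothesis):
    """Character error rate: char-level edit distance / reference length."""
    return edit_distance(list(reference), list(hypothesis)) / len(reference)
```

Under this reading, the final Wer of 0.4443 on the evaluation set means that, on average, roughly 44% of reference words required an insertion, deletion, or substitution to match the hypothesis.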
config.json CHANGED
@@ -54,7 +54,7 @@
   "pad_token_id": 50257,
   "scale_embedding": false,
   "torch_dtype": "float32",
-  "transformers_version": "4.46.3",
+  "transformers_version": "4.49.0",
   "use_cache": true,
   "use_weighted_layer_sum": false,
   "vocab_size": 51865
generation_config.json CHANGED
@@ -236,5 +236,5 @@
     "transcribe": 50359,
     "translate": 50358
   },
-  "transformers_version": "4.46.3"
+  "transformers_version": "4.49.0"
 }
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:e5bedfa33ed7973b7eec6cb1db0e4306013e0a76cf157ac259b1ecd0fc7bb48b
+oid sha256:9cefe71e77c49e46e78f5e38f5f1fc250a329f0f46766a4a751dd90496e7ef52
 size 151061672
runs/Mar17_21-30-31_b2a92614898a/events.out.tfevents.1742247247.b2a92614898a.765.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:67c09115fa4b6193bf257a8cb20efbbf9d3500c4b75548391f6e25a4e19633ed
+size 17590
runs/Mar17_22-01-24_b2a92614898a/events.out.tfevents.1742249066.b2a92614898a.8333.0 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b38013ae3a641fc46e0dd0ce3918774da8a16503b63b2a96b9092311ef6a0301
+size 149743
runs/Mar17_22-01-24_b2a92614898a/events.out.tfevents.1742260399.b2a92614898a.8333.1 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f77f7e494b3128425dd5223ca1df3b20d1325c7d792d5fc299dab6655a310e79
+size 519
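The event files and model weights above are stored as Git LFS pointers rather than raw bytes: the repository tracks only a three-line text stub (`version`, `oid`, `size`), and the actual blob lives in LFS storage addressed by its SHA-256. A small illustrative parser for that stub format (the helper name is ours, not part of any library):

```python
def parse_lfs_pointer(text):
    """Parse a git-lfs pointer file into a dict of its key/value fields.

    Each line of a pointer file is "<key> <value>"; the value may itself
    contain no spaces, so a single partition per line is sufficient.
    """
    fields = {}
    for line in text.strip().splitlines():
        key, _, value = line.partition(" ")
        fields[key] = value
    return fields

# The pointer committed for the third event file above:
pointer = """version https://git-lfs.github.com/spec/v1
oid sha256:f77f7e494b3128425dd5223ca1df3b20d1325c7d792d5fc299dab6655a310e79
size 519"""
```

Note that a diff of an LFS-tracked file (as in `model.safetensors` above) therefore only ever shows an `oid`/`size` change, never the binary content.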
training_args.bin CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
-oid sha256:45658c1cc60c2da0b7d59182fce10b2b6e19016fc491eacdf8458053c2178f63
-size 5432
+oid sha256:82dc1bd201f593001f8f33ed0e96717f04f96861409b704e895d5861edaeefb4
+size 5560
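The serialized arguments in `training_args.bin` include the schedule the card lists: `lr_scheduler_type: linear` with `training_steps: 16000` and a peak `learning_rate` of 1e-4. Assuming no warmup (the card does not list a `warmup_steps` value, so the parameter below is an assumption for illustration), a linear schedule simply decays the rate from its peak to zero over the run:

```python
def linear_lr(step, base_lr=1e-4, total_steps=16000, warmup_steps=0):
    """Linear schedule: optional linear warmup, then linear decay to zero.

    warmup_steps is a hypothetical parameter here; the card does not
    state what warmup, if any, was used for this run.
    """
    if step < warmup_steps:
        return base_lr * step / warmup_steps
    remaining = total_steps - step
    return base_lr * max(0.0, remaining / max(1, total_steps - warmup_steps))
```

This explains why the epoch column in the results table advances in increments of 0.0625: evaluation ran every 1000 steps, and 1000 / 16000 = 0.0625 of the single training epoch.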