MathildeB3 committed · verified
Commit 68120d6 · Parent: a110f6b

Model save

+ ---
+ library_name: transformers
+ license: apache-2.0
+ base_model: openai/whisper-small
+ tags:
+ - generated_from_trainer
+ metrics:
+ - wer
+ model-index:
+ - name: 52Hz-small-fr-v2b
+   results: []
+ ---
+
+ <!-- This model card has been generated automatically according to the information the Trainer had access to. You
+ should probably proofread and complete it, then remove this comment. -->
+
+ # 52Hz-small-fr-v2b
+
+ This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on an unspecified dataset.
+ It achieves the following results on the evaluation set:
+ - Loss: 0.4081
+ - Wer: 17.4837
+
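+ A minimal usage sketch with the `transformers` ASR pipeline. The repo id `MathildeB3/52Hz-small-fr-v2b` and the French language setting are assumptions, inferred from the commit author and the `fr` in the model name; the card itself does not state them.
+
+ ```python
+ # Hedged sketch: transcribe a local audio file with this fine-tuned Whisper checkpoint.
+ from transformers import pipeline
+
+ asr = pipeline(
+     "automatic-speech-recognition",
+     model="MathildeB3/52Hz-small-fr-v2b",  # hypothetical repo id
+     generate_kwargs={"language": "french", "task": "transcribe"},  # assumes a French ASR model
+ )
+
+ print(asr("sample.wav")["text"])  # "sample.wav" is a placeholder path
+ ```
+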
+ ## Model description
+
+ More information needed
+
+ ## Intended uses & limitations
+
+ More information needed
+
+ ## Training and evaluation data
+
+ More information needed
+
+ ## Training procedure
+
+ ### Training hyperparameters
+
+ The following hyperparameters were used during training (a sketch reconstructing them in code follows the list):
+ - learning_rate: 2e-05
+ - train_batch_size: 4
+ - eval_batch_size: 8
+ - seed: 42
+ - gradient_accumulation_steps: 4
+ - total_train_batch_size: 16
+ - optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
+ - lr_scheduler_type: cosine
+ - lr_scheduler_warmup_ratio: 0.1
+ - num_epochs: 15
+ - mixed_precision_training: Native AMP
+
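+ A hedged reconstruction of these settings as `Seq2SeqTrainingArguments` (only the values listed above come from the card; `output_dir` and the fp16-vs-bf16 choice for "Native AMP" are assumptions):
+
+ ```python
+ from transformers import Seq2SeqTrainingArguments
+
+ training_args = Seq2SeqTrainingArguments(
+     output_dir="52Hz-small-fr-v2b",   # assumed output directory
+     learning_rate=2e-5,
+     per_device_train_batch_size=4,
+     per_device_eval_batch_size=8,
+     seed=42,
+     gradient_accumulation_steps=4,    # 4 x 4 = total train batch size of 16
+     optim="adamw_torch_fused",
+     adam_beta1=0.9,
+     adam_beta2=0.999,
+     adam_epsilon=1e-8,
+     lr_scheduler_type="cosine",
+     warmup_ratio=0.1,
+     num_train_epochs=15,
+     fp16=True,                        # "Native AMP"; fp16 rather than bf16 is an assumption
+ )
+ ```
+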
+ ### Training results
+
+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:-----:|:----:|:---------------:|:-------:|
+ | 1.4611 | 1.0 | 22 | 0.8951 | 40.3595 |
+ | 0.6363 | 2.0 | 44 | 0.5120 | 25.1634 |
+ | 0.2988 | 3.0 | 66 | 0.4660 | 20.4248 |
+ | 0.2174 | 4.0 | 88 | 0.4379 | 36.7647 |
+ | 0.1602 | 5.0 | 110 | 0.4227 | 44.9346 |
+ | 0.1294 | 6.0 | 132 | 0.4130 | 18.6275 |
+ | 0.0972 | 7.0 | 154 | 0.4207 | 18.1373 |
+ | 0.0591 | 8.0 | 176 | 0.4045 | 19.4444 |
+ | 0.0377 | 9.0 | 198 | 0.4233 | 15.6863 |
+ | 0.0337 | 10.0 | 220 | 0.4078 | 17.1569 |
+ | 0.0292 | 11.0 | 242 | 0.4063 | 17.6471 |
+ | 0.0225 | 12.0 | 264 | 0.4063 | 16.9935 |
+ | 0.0313 | 13.0 | 286 | 0.4073 | 17.4837 |
+ | 0.0185 | 14.0 | 308 | 0.4080 | 17.4837 |
+ | 0.0198 | 15.0 | 330 | 0.4081 | 17.4837 |
+
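+ The Wer column is word error rate reported as a percentage. A sketch of how such a score can be computed, assuming the `evaluate` library (the card does not say which tooling produced these numbers):
+
+ ```python
+ import evaluate
+
+ # evaluate's "wer" metric returns a fraction; the card reports percentages.
+ wer_metric = evaluate.load("wer")
+ predictions = ["bonjour tout le monde"]   # model transcripts (toy example)
+ references = ["bonjour à tout le monde"]  # ground-truth transcripts
+ wer = 100 * wer_metric.compute(predictions=predictions, references=references)
+ print(f"WER: {wer:.4f}")
+ ```
+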
+ ### Framework versions
+
+ - Transformers 4.57.3
+ - Pytorch 2.9.1+cu130
+ - Datasets 4.4.2
+ - Tokenizers 0.22.2
model.safetensors CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:9637af3c4cb50d3a2ae96b499d8e909ce1d15faa2fd2cc7d568e05636e3547a8
+ oid sha256:dbbacf38f729abc74ae72300363ef85313faad1bd537f12cbd20404677dde971
 size 966995080
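This diff touches only the Git LFS pointer: the sha256 oid changes while the payload size stays 966995080 bytes, so the weights were replaced by a same-sized file. A sketch for checking a downloaded copy against the new oid, assuming `model.safetensors` sits in the working directory:

```python
# Compare a local model.safetensors against the LFS pointer's sha256 oid.
import hashlib

def sha256_of(path: str) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):  # hash in 1 MiB chunks
            h.update(chunk)
    return h.hexdigest()

new_oid = "dbbacf38f729abc74ae72300363ef85313faad1bd537f12cbd20404677dde971"
print(sha256_of("model.safetensors") == new_oid)  # assumed download location
```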
runs/Feb18_20-50-35_sl-tp-br-517/events.out.tfevents.1771444236.sl-tp-br-517.2626834.0 CHANGED
@@ -1,3 +1,3 @@
 version https://git-lfs.github.com/spec/v1
- oid sha256:0d8b2aa98712955ae741444697320466c738ea30230255ce76a554ed125dbafe
- size 25327
+ oid sha256:495cdac65f1b1f22bc34440eec644c25d5fc83b2737a81fb3e35dd3ffbb9bfc3
+ size 25681