Danieljava committed · verified
Commit bc0bd6f · 1 Parent(s): ed88a6c

End of training

README.md ADDED
---
library_name: transformers
language:
- yo
license: apache-2.0
base_model: openai/whisper-small
tags:
- generated_from_trainer
datasets:
- hf-internal-testing/librispeech_asr_dummy
metrics:
- wer
model-index:
- name: Whisper Small yo - fine_tune
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: librispeech_asr_dataset
      type: hf-internal-testing/librispeech_asr_dummy
    metrics:
    - name: Wer
      type: wer
      value: 6.587473002159827
---

<!-- This model card has been generated automatically according to the information the Trainer had access to. You
should probably proofread and complete it, then remove this comment. -->

# Whisper Small yo - fine_tune

This model is a fine-tuned version of [openai/whisper-small](https://huggingface.co/openai/whisper-small) on the librispeech_asr_dataset dataset.
It achieves the following results on the evaluation set:
- Loss: 0.1471
- Wer Ortho: 6.6134
- Wer: 6.5875
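
As a quick usage sketch, the checkpoint can be loaded with the `transformers` automatic-speech-recognition pipeline. The repository id below is a placeholder, not one stated in this card:

```python
from transformers import pipeline

# Placeholder repository id -- substitute the actual Hub id of this checkpoint.
asr = pipeline(
    "automatic-speech-recognition",
    model="Danieljava/whisper-small-yo",
)

# Transcribe a local audio file; Whisper models expect 16 kHz mono audio.
result = asr("sample.wav")
print(result["text"])
```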

## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a rough configuration sketch follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 16
- seed: 42
- optimizer: AdamW (torch fused) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
- lr_scheduler_type: constant_with_warmup
- lr_scheduler_warmup_steps: 50
- training_steps: 500
- mixed_precision_training: Native AMP
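
For reference, the hyperparameters above correspond roughly to the following `Seq2SeqTrainingArguments`; the output directory and the `predict_with_generate` flag are assumptions, not values recorded in this card:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-small-yo-finetune",  # assumption: not recorded in this card
    per_device_train_batch_size=16,
    per_device_eval_batch_size=16,
    learning_rate=1e-5,
    optim="adamw_torch_fused",
    lr_scheduler_type="constant_with_warmup",
    warmup_steps=50,
    max_steps=500,
    seed=42,
    fp16=True,                                 # "Native AMP" mixed precision
    predict_with_generate=True,                # assumption: typical for Whisper WER evaluation
)
```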

### Training results

| Training Loss | Epoch  | Step | Validation Loss | Wer Ortho | Wer    |
|:-------------:|:------:|:----:|:---------------:|:---------:|:------:|
| 0.0123        | 3.2895 | 500  | 0.1471          | 6.6134    | 6.5875 |
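
Wer Ortho is presumably the word error rate on raw (orthographic) transcripts, while Wer uses text normalized with Whisper's basic normalizer, as in the common Whisper fine-tuning recipe; this is an assumption rather than something recorded in this card. A minimal sketch of that pairing with the `evaluate` library:

```python
import evaluate
from transformers.models.whisper.english_normalizer import BasicTextNormalizer

wer_metric = evaluate.load("wer")
normalizer = BasicTextNormalizer()

# Placeholder transcripts; during evaluation these come from model generation and the dataset.
predictions = ["hello world"]
references = ["Hello, world!"]

# Orthographic WER: raw strings, so casing and punctuation count as errors.
wer_ortho = 100 * wer_metric.compute(predictions=predictions, references=references)

# Normalized WER: both sides pass through Whisper's basic text normalizer first.
wer = 100 * wer_metric.compute(
    predictions=[normalizer(p) for p in predictions],
    references=[normalizer(r) for r in references],
)
print(f"wer_ortho={wer_ortho:.4f}, wer={wer:.4f}")
```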

### Framework versions

- Transformers 5.0.0
- Pytorch 2.10.0+cu128
- Datasets 4.0.0
- Tokenizers 0.22.2
runs/Feb25_10-22-39_c8d0ced26559/events.out.tfevents.1772014959.c8d0ced26559.216.0 CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:b92f4498609d3d0f81f94a8b31d5cc5c66cc72d82c419653e7825eeec9d538d8
- size 9680
+ oid sha256:a731a04591d93ba4516cb37061273987e596127420c753131198329791be10ed
+ size 10034