Baselhany committed on
Commit 5ad050c · verified · 1 Parent(s): baeeee9

Model save

README.md CHANGED
@@ -1,27 +1,25 @@
  ---
  library_name: transformers
- language:
- - ar
  license: apache-2.0
- base_model: openai/whisper-base
  tags:
  - generated_from_trainer
  metrics:
  - wer
  model-index:
- - name: Whisper base AR - BA
    results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

- # Whisper base AR - BA

- This model is a fine-tuned version of [openai/whisper-base](https://huggingface.co/openai/whisper-base) on the quran-ayat-speech-to-text dataset.
  It achieves the following results on the evaluation set:
- - Loss: 0.1026
- - Wer: 0.2760

  ## Model description
 
@@ -49,28 +47,18 @@ The following hyperparameters were used during training:
  - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
- - num_epochs: 15
  - mixed_precision_training: Native AMP

  ### Training results

- | Training Loss | Epoch | Step | Validation Loss | Wer |
- |:-------------:|:-------:|:----:|:---------------:|:------:|
- | 1.2205 | 0.9987 | 595 | 0.0773 | 0.2691 |
- | 0.7481 | 1.9987 | 1190 | 0.0850 | 0.3144 |
- | 0.4781 | 2.9987 | 1785 | 0.0928 | 0.2558 |
- | 0.338 | 3.9987 | 2380 | 0.0905 | 0.2638 |
- | 0.2998 | 4.9987 | 2975 | 0.0881 | 0.3087 |
- | 0.1988 | 5.9987 | 3570 | 0.0934 | 0.2822 |
- | 0.155 | 6.9987 | 4165 | 0.0865 | 0.2979 |
- | 0.1536 | 7.9987 | 4760 | 0.0894 | 0.2806 |
- | 0.1456 | 8.9987 | 5355 | 0.0867 | 0.3210 |
- | 0.1205 | 9.9987 | 5950 | 0.0868 | 0.3017 |
- | 0.087 | 10.9987 | 6545 | 0.0880 | 0.2926 |
- | 0.0673 | 11.9987 | 7140 | 0.0859 | 0.2944 |
- | 0.0683 | 12.9987 | 7735 | 0.0887 | 0.2996 |
- | 0.0474 | 13.9987 | 8330 | 0.0876 | 0.2974 |
- | 0.0406 | 14.9987 | 8925 | 0.0886 | 0.2991 |


  ### Framework versions
 
  ---
  library_name: transformers
  license: apache-2.0
+ base_model: Baselhany/Graduation_Project_distillation_Whisper_base2
  tags:
  - generated_from_trainer
  metrics:
  - wer
  model-index:
+ - name: Graduation_Project_distillation_Whisper_base2
    results: []
  ---

  <!-- This model card has been generated automatically according to the information the Trainer had access to. You
  should probably proofread and complete it, then remove this comment. -->

+ # Graduation_Project_distillation_Whisper_base2

+ This model is a fine-tuned version of [Baselhany/Graduation_Project_distillation_Whisper_base2](https://huggingface.co/Baselhany/Graduation_Project_distillation_Whisper_base2) on an unknown dataset.
  It achieves the following results on the evaluation set:
+ - Loss: 0.1097
+ - Wer: 0.2742

  ## Model description
 
 
  - optimizer: Use OptimizerNames.ADAMW_TORCH with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
  - lr_scheduler_type: linear
  - lr_scheduler_warmup_steps: 500
+ - num_epochs: 5
  - mixed_precision_training: Native AMP
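The linear scheduler with 500 warmup steps listed above ramps the learning rate up linearly and then decays it linearly to zero over the remaining steps. A minimal sketch of that shape (not the Trainer's own code; `base_lr` and `total_steps` are illustrative values, with 2975 matching 5 epochs × 595 steps from the results table):

```python
def linear_warmup_lr(step: int, base_lr: float = 1e-5,
                     warmup_steps: int = 500, total_steps: int = 2975) -> float:
    """Linear warmup to base_lr, then linear decay to 0 (illustrative values)."""
    if step < warmup_steps:
        return base_lr * step / warmup_steps  # ramp up during warmup
    remaining = total_steps - step
    # decay linearly from base_lr (at end of warmup) down to 0 (at total_steps)
    return base_lr * max(0.0, remaining / (total_steps - warmup_steps))
```

The peak learning rate is reached exactly at step 500 and the schedule hits zero at the final step.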

  ### Training results

+ | Training Loss | Epoch | Step | Validation Loss | Wer |
+ |:-------------:|:------:|:----:|:---------------:|:------:|
+ | 0.3035 | 0.9987 | 595 | 0.0981 | 0.2460 |
+ | 0.2664 | 1.9987 | 1190 | 0.0935 | 0.2819 |
+ | 0.2242 | 2.9987 | 1785 | 0.0878 | 0.2786 |
+ | 0.1442 | 3.9987 | 2380 | 0.0873 | 0.2561 |
+ | 0.1145 | 4.9987 | 2975 | 0.0848 | 0.2558 |
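The Wer column above is word error rate: the word-level edit distance between hypothesis and reference, divided by the number of reference words. In Trainer-generated cards it is typically computed with the `evaluate`/`jiwer` stack; the following is a standalone sketch for illustration only:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level Levenshtein distance / reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words
    # into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```

For example, `wer("a b c d", "a x c")` is 0.5: one substitution plus one deletion over four reference words. A Wer of 0.2558 therefore means roughly one word error per four reference words.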

  ### Framework versions
model.safetensors CHANGED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:c44cfa7b426aff9c6f55efd8c8db8807dea2f6f78a7cf19371d209d9bed81362
  size 223144592

  version https://git-lfs.github.com/spec/v1
+ oid sha256:c52bd2321d3a6d973bd18d2f7edb573ec2184d670ffeddd333d500a3daf46653
  size 223144592
runs/Jun20_20-34-49_83d86f13a4a6/events.out.tfevents.1750464270.83d86f13a4a6.19.1 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:7adab8c8a523247b3c2a5f2599d0e336f2e28619763fa1794cb16d2d4a14846e
+ size 406