rbcurzon committed on
Commit 76194b9 · verified · 1 Parent(s): 1e2b755

Model save

Files changed (2):
  1. README.md +19 -27
  2. generation_config.json +1 -1
README.md CHANGED
@@ -4,24 +4,11 @@ license: apache-2.0
 base_model: openai/whisper-medium
 tags:
 - generated_from_trainer
-datasets:
-- rbcurzon/ph_dialect_asr
 metrics:
 - wer
 model-index:
 - name: whisper-medium-ph
-  results:
-  - task:
-      name: Automatic Speech Recognition
-      type: automatic-speech-recognition
-    dataset:
-      name: rbcurzon/ph_dialect_asr all
-      type: rbcurzon/ph_dialect_asr
-      args: all
-    metrics:
-    - name: Wer
-      type: wer
-      value: 0.12829864835872132
+  results: []
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
@@ -29,10 +16,10 @@ should probably proofread and complete it, then remove this comment. -->
 
 # whisper-medium-ph
 
-This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on the rbcurzon/ph_dialect_asr all dataset.
+This model is a fine-tuned version of [openai/whisper-medium](https://huggingface.co/openai/whisper-medium) on an unknown dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.3113
-- Wer: 0.1283
+- Loss: 0.2901
+- Wer: 0.1147
 
 ## Model description
 
@@ -52,26 +39,31 @@ More information needed
 
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
-- train_batch_size: 16
-- eval_batch_size: 16
+- train_batch_size: 8
+- eval_batch_size: 8
 - seed: 42
-- optimizer: Use adamw_torch with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
+- gradient_accumulation_steps: 2
+- total_train_batch_size: 16
+- optimizer: Use OptimizerNames.ADAMW_TORCH_FUSED with betas=(0.9,0.999) and epsilon=1e-08 and optimizer_args=No additional optimizer arguments
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 500
-- training_steps: 2000
+- training_steps: 5000
 - mixed_precision_training: Native AMP
 
 ### Training results
 
 | Training Loss | Epoch  | Step | Validation Loss | Wer    |
 |:-------------:|:------:|:----:|:---------------:|:------:|
-| 0.1001        | 1.2330 | 1000 | 0.3040          | 0.1433 |
-| 0.0125        | 2.4661 | 2000 | 0.3113          | 0.1283 |
+| 0.1822        | 1.4818 | 1000 | 0.2656          | 0.1445 |
+| 0.0706        | 2.9637 | 2000 | 0.2491          | 0.1270 |
+| 0.0072        | 4.4448 | 3000 | 0.2729          | 0.1191 |
+| 0.005         | 5.9266 | 4000 | 0.2810          | 0.1157 |
+| 0.0009        | 7.4077 | 5000 | 0.2901          | 0.1147 |
 
 
 ### Framework versions
 
-- Transformers 4.53.0.dev0
-- Pytorch 2.7.0+cu126
-- Datasets 3.6.0
-- Tokenizers 0.21.1
+- Transformers 4.56.0.dev0
+- Pytorch 2.8.0+cu128
+- Datasets 4.0.0
+- Tokenizers 0.21.4
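One way to read the hyperparameter change in this commit: the per-device batch size dropped from 16 to 8, but gradient accumulation over 2 steps keeps the effective batch size at 16, matching the old run. A minimal sketch of that arithmetic (assuming a single device, which the card does not state):

```python
# Effective (total) train batch size under gradient accumulation.
# Values taken from the updated hyperparameter list in this diff;
# n_devices = 1 is an assumption, not stated in the card.
train_batch_size = 8             # per-device micro-batch
gradient_accumulation_steps = 2  # optimizer steps once per 2 micro-batches
n_devices = 1

total_train_batch_size = train_batch_size * gradient_accumulation_steps * n_devices
print(total_train_batch_size)  # matches total_train_batch_size: 16 in the card
```

Note that with the same effective batch size, 5000 optimizer steps at accumulation 2 consume twice as many micro-batches as the old run's 2000 steps did, which is why the new run covers roughly 7.4 epochs instead of 2.5.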
generation_config.json CHANGED
@@ -237,5 +237,5 @@
     "transcribe": 50359,
     "translate": 50358
   },
-  "transformers_version": "4.53.0.dev0"
+  "transformers_version": "4.56.0.dev0"
 }