maag committed
Commit 91b8bba · 1 Parent(s): a7b9f06

update model card README.md

Files changed (1)
  1. README.md +17 -20

README.md CHANGED
@@ -1,42 +1,39 @@
 ---
-language:
-- cs
 license: apache-2.0
 base_model: openai/whisper-tiny
 tags:
-- whisper-event
 - generated_from_trainer
 datasets:
-- mozilla-foundation/common_voice_13_0
+- common_voice_13_0
 metrics:
 - wer
 model-index:
-- name: Whisper tiny Czech CV13 v1
+- name: whisper_tiny_cs
   results:
   - task:
       name: Automatic Speech Recognition
       type: automatic-speech-recognition
     dataset:
-      name: mozilla-foundation/common_voice_13_0 cs
-      type: mozilla-foundation/common_voice_13_0
+      name: common_voice_13_0
+      type: common_voice_13_0
       config: cs
       split: test
       args: cs
     metrics:
     - name: Wer
       type: wer
-      value: 53.0400387724153
+      value: 45.51693948063724
 ---
 
 <!-- This model card has been generated automatically according to the information the Trainer had access to. You
 should probably proofread and complete it, then remove this comment. -->
 
-# Whisper tiny Czech CV13 v1
+# whisper_tiny_cs
 
-This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the mozilla-foundation/common_voice_13_0 cs dataset.
+This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the common_voice_13_0 dataset.
 It achieves the following results on the evaluation set:
-- Loss: 0.6426
-- Wer: 53.0400
+- Loss: 0.6782
+- Wer: 45.5169
 
 ## Model description
 
@@ -57,25 +54,25 @@ More information needed
 The following hyperparameters were used during training:
 - learning_rate: 1e-05
 - train_batch_size: 32
-- eval_batch_size: 8
+- eval_batch_size: 16
 - seed: 42
 - distributed_type: multi-GPU
-- num_devices: 2
+- num_devices: 3
 - gradient_accumulation_steps: 4
-- total_train_batch_size: 256
-- total_eval_batch_size: 16
+- total_train_batch_size: 384
+- total_eval_batch_size: 48
 - optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
 - lr_scheduler_type: linear
 - lr_scheduler_warmup_steps: 1000
-- training_steps: 300
+- training_steps: 3000
 
 ### Training results
 
 | Training Loss | Epoch | Step | Validation Loss | Wer |
 |:-------------:|:-----:|:----:|:---------------:|:-------:|
-| 0.8297 | 1.45 | 100 | 0.8730 | 66.3524 |
-| 0.62 | 2.91 | 200 | 0.7188 | 57.8663 |
-| 0.4986 | 4.36 | 300 | 0.6426 | 53.0400 |
+| 0.1007 | 21.86 | 1000 | 0.5430 | 44.0802 |
+| 0.013 | 43.72 | 2000 | 0.6489 | 44.9182 |
+| 0.0079 | 65.57 | 3000 | 0.6782 | 45.5169 |
 
 
 ### Framework versions
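The `total_train_batch_size` and `total_eval_batch_size` values that change in this diff are derived, not set directly. A minimal sketch of the derivation, assuming the usual Hugging Face `Trainer` formula (per-device batch size × number of devices, with gradient accumulation applied to training only):

```python
# Hypothetical check of the derived batch sizes in the updated card.
# These are the per-device values listed in the hyperparameter section.
train_batch_size = 32              # per-device train batch size
eval_batch_size = 16               # per-device eval batch size
num_devices = 3                    # multi-GPU run
gradient_accumulation_steps = 4

# Effective batch sizes as the Trainer reports them.
total_train_batch_size = train_batch_size * num_devices * gradient_accumulation_steps
total_eval_batch_size = eval_batch_size * num_devices

print(total_train_batch_size)  # 384, matching the card
print(total_eval_batch_size)   # 48, matching the card
```

This also explains why the totals changed between revisions: the old run used 2 devices and a per-device eval batch of 8 (32 × 2 × 4 = 256 and 8 × 2 = 16), while the new run uses 3 devices and a per-device eval batch of 16.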