---
library_name: transformers
language:
- am
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
metrics:
- wer
model-index:
- name: 'tiny Amharic - Biniyam Daniel'
  results: []
---

# tiny Amharic - Biniyam Daniel

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) for Amharic speech recognition (the training dataset is not recorded in this card).
It achieves the following results on the evaluation set:
- Loss: 0.0803
- WER: 24.4366

## Model description

This model is [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny), the smallest (~39M parameter) Whisper checkpoint, fine-tuned for Amharic (`am`) automatic speech recognition. Like the base model, it takes 16 kHz audio as input and produces text transcriptions.

## Intended uses & limitations

The model is intended for transcribing Amharic speech; a minimal inference sketch follows. As a tiny-sized checkpoint it is fast and lightweight, but at roughly 24.4% WER on its evaluation set, transcripts should be reviewed before use in downstream applications. Behavior on other languages, noisy audio, or out-of-domain speech has not been evaluated here.
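
A minimal inference sketch using the `transformers` pipeline, assuming the checkpoint has been pushed to the Hub; the repo id below is a placeholder, not the actual published name:

```python
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-tiny-amharic",  # hypothetical repo id
    chunk_length_s=30,  # Whisper works on 30 s windows; chunk longer audio
)

# Pin the language and task so the model does not auto-detect the
# language or translate to English.
result = asr(
    "sample.wav",  # audio file; the pipeline resamples to 16 kHz
    generate_kwargs={"language": "amharic", "task": "transcribe"},
)
print(result["text"])
```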

## Training and evaluation data

More information needed

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a hedged reconstruction in code follows the list):
- learning_rate: 1e-05
- train_batch_size: 16
- eval_batch_size: 8
- seed: 42
- optimizer: adamw_torch (betas=(0.9, 0.999), epsilon=1e-08; no additional optimizer arguments)
- lr_scheduler_type: cosine
- lr_scheduler_warmup_ratio: 0.1
- num_epochs: 2
- mixed_precision_training: Native AMP
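
For reference, a hedged sketch of this configuration as `Seq2SeqTrainingArguments`; the `output_dir` and the evaluation/logging cadence are assumptions (the results table suggests evaluation every 100 steps), and the remaining values mirror the list above:

```python
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-amharic",  # assumed output path
    learning_rate=1e-5,
    per_device_train_batch_size=16,
    per_device_eval_batch_size=8,
    seed=42,
    optim="adamw_torch",          # AdamW defaults: betas=(0.9, 0.999), eps=1e-8
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=2,
    fp16=True,                    # "Native AMP" mixed precision
    eval_strategy="steps",        # assumed: eval every 100 steps per the table
    eval_steps=100,
    logging_steps=100,
    predict_with_generate=True,   # generate transcripts at eval time for WER
)
```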

### Training results

| Training Loss | Epoch  | Step | Validation Loss | WER      |
|:-------------:|:------:|:----:|:---------------:|:--------:|
| 1.5795        | 0.0665 | 100  | 1.5182          | 150.6407 |
| 1.2873        | 0.1330 | 200  | 1.2244          | 109.2797 |
| 0.3664        | 0.1995 | 300  | 0.3282          | 70.2607  |
| 0.2133        | 0.2660 | 400  | 0.2056          | 52.2095  |
| 0.171         | 0.3324 | 500  | 0.1731          | 46.8184  |
| 0.1422        | 0.3989 | 600  | 0.1467          | 41.7587  |
| 0.1392        | 0.4654 | 700  | 0.1344          | 38.8643  |
| 0.1264        | 0.5319 | 800  | 0.1247          | 36.7654  |
| 0.1113        | 0.5984 | 900  | 0.1186          | 34.3570  |
| 0.1038        | 0.6649 | 1000 | 0.1125          | 33.0977  |
| 0.0978        | 0.7314 | 1100 | 0.1091          | 33.0535  |
| 0.0959        | 0.7979 | 1200 | 0.1033          | 30.3137  |
| 0.0876        | 0.8644 | 1300 | 0.1003          | 29.6730  |
| 0.0975        | 0.9309 | 1400 | 0.0966          | 29.6730  |
| 0.0863        | 0.9973 | 1500 | 0.0968          | 28.9660  |
| 0.0633        | 1.0638 | 1600 | 0.0934          | 28.0601  |
| 0.0612        | 1.1303 | 1700 | 0.0913          | 28.3031  |
| 0.0576        | 1.1968 | 1800 | 0.0905          | 27.1542  |
| 0.0652        | 1.2633 | 1900 | 0.0886          | 27.1984  |
| 0.0636        | 1.3298 | 2000 | 0.0857          | 26.5135  |
| 0.0623        | 1.3963 | 2100 | 0.0852          | 25.9611  |
| 0.0556        | 1.4628 | 2200 | 0.0839          | 25.3646  |
| 0.0569        | 1.5293 | 2300 | 0.0827          | 25.6739  |
| 0.0574        | 1.5957 | 2400 | 0.0822          | 25.2762  |
| 0.0649        | 1.6622 | 2500 | 0.0813          | 24.9448  |
| 0.0744        | 1.7287 | 2600 | 0.0808          | 24.7238  |
| 0.0686        | 1.7952 | 2700 | 0.0805          | 24.7017  |
| 0.0587        | 1.8617 | 2800 | 0.0803          | 24.4145  |
| 0.0615        | 1.9282 | 2900 | 0.0803          | 24.4366  |
| 0.0637        | 1.9947 | 3000 | 0.0803          | 24.4366  |
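
The WER column is the word error rate in percent. A minimal sketch of how it is typically computed with the `evaluate` library; the strings below are illustrative only, not real model output:

```python
import evaluate

wer_metric = evaluate.load("wer")  # requires the jiwer package

predictions = ["ሰላም ለዓለም"]  # hypothetical model transcript
references = ["ሰላም ለአለም"]   # hypothetical ground-truth transcript

# evaluate returns WER as a fraction; the table above reports percent.
wer = 100 * wer_metric.compute(predictions=predictions, references=references)
print(f"WER: {wer:.4f}")  # 50.0000 here: one of two reference words differs
```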


### Framework versions

- Transformers 4.57.1
- Pytorch 2.6.0+cu124
- Datasets 3.6.0
- Tokenizers 0.22.1