---
language:
- zh
license: apache-2.0
base_model: openai/whisper-tiny
tags:
- generated_from_trainer
datasets:
- formospeech/tat_asr_aligned
model-index:
- name: Whisper Tiny Taiwanese Android
  results: []
---

# Whisper Tiny Taiwanese Android

This model is a fine-tuned version of [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny) on the TAT ASR Aligned dataset.
It achieves the following results on the evaluation set:
- Loss: 0.6536
- CER: 10.3016
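
CER is the character error rate, reported here as a percentage. As a reference point, this is a minimal sketch of computing CER with the `evaluate` library, which Whisper fine-tuning scripts commonly use; the prediction/reference pair is illustrative only, not drawn from the evaluation set:

```python
# Minimal CER computation sketch using the `evaluate` library.
# The strings below are illustrative, not from this model's eval set.
import evaluate

cer_metric = evaluate.load("cer")

predictions = ["hello world"]  # hypothetical model output
references = ["hello word"]    # hypothetical ground truth

# `compute` returns a fraction; multiply by 100 to match the numbers above.
# Here: 1 edit over 10 reference characters -> prints 10.0.
print(100 * cer_metric.compute(predictions=predictions, references=references))
```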

## Model description

This is [openai/whisper-tiny](https://huggingface.co/openai/whisper-tiny), the smallest Whisper variant (roughly 39M parameters), fine-tuned for Taiwanese automatic speech recognition on the TAT ASR Aligned dataset. The small footprint makes it a candidate for resource-constrained deployment, at the cost of accuracy relative to larger Whisper variants.

## Intended uses & limitations

The model is intended for transcribing Taiwanese speech. It reaches a CER of about 10.3% on the TAT ASR Aligned evaluation set; performance on audio that differs from that corpus (other speakers, domains, or noisy recording conditions) has not been measured and should be validated before use. A usage sketch follows.
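
A minimal inference sketch using the `transformers` pipeline API; the repo id below is a placeholder for wherever this checkpoint is hosted, and decoding an audio file path requires `ffmpeg` to be installed:

```python
# Minimal transcription sketch. The model id is a placeholder; substitute the
# actual repo id or a local path to this fine-tuned checkpoint.
from transformers import pipeline

asr = pipeline(
    "automatic-speech-recognition",
    model="your-username/whisper-tiny-taiwanese-android",  # placeholder
)

# The Whisper feature extractor resamples input audio to 16 kHz internally.
result = asr("taiwanese_clip.wav")  # placeholder audio file
print(result["text"])
```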

## Training and evaluation data

Training and evaluation used the [formospeech/tat_asr_aligned](https://huggingface.co/datasets/formospeech/tat_asr_aligned) dataset (TAT ASR Aligned), as listed in the metadata above. Split sizes and preprocessing details were not recorded in this card. A loading sketch follows.
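
A sketch of loading the corpus named in this card's metadata with the `datasets` library; split and column names are assumptions to verify against the dataset card before writing any preprocessing:

```python
# Load the dataset listed in this card's metadata and inspect its structure.
# Split/column names are not verified here.
from datasets import load_dataset

ds = load_dataset("formospeech/tat_asr_aligned")
print(ds)  # shows available splits, columns, and row counts
```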

## Training procedure

### Training hyperparameters

The following hyperparameters were used during training (a training-arguments sketch follows the list):
- learning_rate: 0.0001
- train_batch_size: 64
- eval_batch_size: 32
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 1362
- training_steps: 13620
- mixed_precision_training: Native AMP
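
For reproducibility, this is a sketch of `Seq2SeqTrainingArguments` mirroring the list above; `output_dir` is an illustrative placeholder, and the evaluation/save cadence was not recorded in this card:

```python
# Training-argument sketch matching the hyperparameters listed above.
# output_dir is a placeholder; eval/save strategy is not recorded in the card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-tiny-taiwanese-android",  # placeholder
    learning_rate=1e-4,
    per_device_train_batch_size=64,
    per_device_eval_batch_size=32,
    seed=42,
    adam_beta1=0.9,
    adam_beta2=0.999,
    adam_epsilon=1e-8,
    lr_scheduler_type="linear",
    warmup_steps=1362,
    max_steps=13620,
    fp16=True,  # "Native AMP" mixed precision
)
```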

### Training results

| Training Loss | Epoch   | Step  | Validation Loss | CER     |
|:-------------:|:-------:|:-----:|:---------------:|:-------:|
| 0.371         | 0.9985  | 681   | 0.4334          | 14.4492 |
| 0.2637        | 1.9971  | 1362  | 0.3950          | 13.0672 |
| 0.1725        | 2.9956  | 2043  | 0.3962          | 12.1858 |
| 0.1102        | 3.9941  | 2724  | 0.4102          | 11.8710 |
| 0.0715        | 4.9927  | 3405  | 0.4442          | 11.9113 |
| 0.0467        | 5.9912  | 4086  | 0.4830          | 12.2436 |
| 0.0322        | 6.9897  | 4767  | 0.5100          | 11.6466 |
| 0.0234        | 7.9883  | 5448  | 0.5315          | 11.5878 |
| 0.0182        | 8.9868  | 6129  | 0.5542          | 11.8786 |
| 0.012         | 9.9853  | 6810  | 0.5834          | 11.5762 |
| 0.0083        | 10.9839 | 7491  | 0.5833          | 11.4945 |
| 0.0061        | 11.9824 | 8172  | 0.6000          | 11.1774 |
| 0.0045        | 12.9809 | 8853  | 0.6136          | 11.0700 |
| 0.0027        | 13.9795 | 9534  | 0.6144          | 10.8808 |
| 0.0008        | 14.9780 | 10215 | 0.6320          | 10.6295 |
| 0.0006        | 15.9765 | 10896 | 0.6380          | 10.6150 |
| 0.0003        | 16.9751 | 11577 | 0.6385          | 10.4755 |
| 0.0003        | 17.9736 | 12258 | 0.6498          | 10.4047 |
| 0.0001        | 18.9721 | 12939 | 0.6537          | 10.3546 |
| 0.0001        | 19.9707 | 13620 | 0.6536          | 10.3016 |

### Framework versions

- Transformers 4.42.3
- Pytorch 2.3.0+cu121
- Datasets 2.20.0
- Tokenizers 0.19.1