# Merged Whisper Large V3 - Tigrinya

This model is a permanent merge of the LoRA adapter weights from `Aregay01/whisper-large-v3-tigrinya-cosine-3nd-40k-5e-5-r-32` into the base model `openai/whisper-large-v3`. Because the adapter has been folded into the base weights, the model loads directly, with no `peft` adapter required at inference time.
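
What "permanent merge" means can be illustrated on a toy weight matrix: LoRA stores a low-rank update `B @ A` scaled by `alpha / r`, and merging adds that update into the frozen base weight once, so the adapter matrices are no longer needed. This is a minimal sketch with small random matrices, not the actual Whisper weights:

```python
import numpy as np

# Toy illustration of a LoRA merge (the real adapter uses r=32, alpha=128
# on Whisper's attention projections; here d, r, alpha are arbitrary).
rng = np.random.default_rng(0)
d, r, alpha = 8, 4, 16
W = rng.standard_normal((d, d))   # frozen base weight
A = rng.standard_normal((r, d))   # LoRA down-projection
B = rng.standard_normal((d, r))   # LoRA up-projection

# Merging folds the scaled low-rank update into the base weight.
W_merged = W + (alpha / r) * (B @ A)

# Running the adapter at inference time and using the merged weight
# produce the same output.
x = rng.standard_normal(d)
y_adapter = W @ x + (alpha / r) * (B @ (A @ x))
y_merged = W_merged @ x
assert np.allclose(y_adapter, y_merged)
```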

## Model Overview

- **Language:** Tigrinya (`ti`)
- **Dataset:** ~40,000 samples (merged from multiple sources)
- **Architecture:** Whisper Large V3
- **Fine-tuning method:** LoRA (rank 32, alpha 128)
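
The adapter configuration implied by the overview could be expressed with `peft` roughly as follows. Only the rank and alpha come from this card; the target modules and dropout are assumptions for illustration:

```python
from peft import LoraConfig

# Hypothetical reconstruction of the adapter config; r and lora_alpha
# match the model card, everything else is an assumed placeholder.
lora_config = LoraConfig(
    r=32,                                  # rank, from the card
    lora_alpha=128,                        # alpha, from the card
    target_modules=["q_proj", "v_proj"],   # assumed; common choice for Whisper
    lora_dropout=0.05,                     # assumed
    bias="none",
)
```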

## Training Logs (extracted from `last-checkpoint`)

| Step | Training Loss | Validation Loss |
|------|---------------|-----------------|
| 50   | 148.1558      | N/A             |
| 100  | 37.7685       | N/A             |
| 150  | 30.9806       | N/A             |
| 200  | 30.6713       | N/A             |
| 250  | 26.6567       | N/A             |
| 300  | 26.1101       | N/A             |
| 350  | 25.0399       | N/A             |
| 400  | 23.2749       | N/A             |
| 450  | 21.7290       | N/A             |
| 500  | 21.3158       | N/A             |
| 550  | 20.4182       | N/A             |
| 600  | 20.6106       | N/A             |
| 650  | 20.3457       | N/A             |
| 700  | 18.6889       | N/A             |
| 750  | 18.8613       | N/A             |
| 800  | 17.4691       | N/A             |
| 850  | 16.9671       | N/A             |
| 900  | 16.9742      | N/A             |
| 950  | 16.5389       | N/A             |
| 1000 | 15.9641       | N/A             |
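
A quick sanity check on the log above: the loss is not strictly monotone (small upticks at steps 600, 750, and 900), but the overall trend is a large reduction from the first to the last logged step:

```python
# Training losses from the table above, logged every 50 steps (50..1000).
losses = [148.1558, 37.7685, 30.9806, 30.6713, 26.6567, 26.1101, 25.0399,
          23.2749, 21.7290, 21.3158, 20.4182, 20.6106, 20.3457, 18.6889,
          18.8613, 17.4691, 16.9671, 16.9742, 16.5389, 15.9641]

# Relative reduction between the first and last logged values.
reduction = 1 - losses[-1] / losses[0]
print(f"loss fell by {reduction:.0%} from step 50 to step 1000")  # → 89%
```
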
## Model Details

- **Format:** Safetensors
- **Model size:** ~2B parameters
- **Tensor type:** F16
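
Since the LoRA weights are already merged, the checkpoint can be used directly with the `transformers` ASR pipeline. This is a sketch, not a tested recipe: `sample.wav` is a placeholder path, the first run downloads several GB of weights, and decoding options may need tuning for Tigrinya:

```python
import torch
from transformers import pipeline

# Load the merged checkpoint directly; no peft adapter is needed.
asr = pipeline(
    "automatic-speech-recognition",
    model="Aregay01/whisper-large-v3-tigrinya-experiment",
    torch_dtype=torch.float16,
    device="cuda" if torch.cuda.is_available() else "cpu",
)

# "sample.wav" is a placeholder for your own Tigrinya audio file.
result = asr("sample.wav")
print(result["text"])
```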