# Whisper Tiny Slo Artur - Full FT
This model is a Slovenian fine-tuned version of openai/whisper-tiny, trained on the Artur 1.0 Full dataset on the Vega supercomputer.
The best checkpoint (step 32000) achieves the following results on the test set:
- Loss: 0.1765
- WER: 15.19
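Word error rate (WER) is the word-level edit distance (substitutions, insertions, deletions) between the reference transcript and the model's hypothesis, divided by the number of reference words. A minimal standard-library sketch (the function name is illustrative; libraries such as `jiwer` or `evaluate` are normally used in practice):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """Word-level Levenshtein distance divided by reference length."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution / match
    return dp[len(ref)][len(hyp)] / len(ref)

# one substitution ("sat" -> "sit") and one deletion ("the") over 6 words
print(word_error_rate("the cat sat on the mat", "the cat sit on mat"))
```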
## Model description
Example usage:

```python
import librosa
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("blko/whisper-tiny-sl-artur-full-ft")

# load the tiny model checkpoint from step 32000
model = WhisperForConditionalGeneration.from_pretrained(
    "blko/whisper-tiny-sl-artur-full-ft",
    revision="e822c3fcbf7c47c966309fcd47aaf46036bcf558",
)

# Whisper expects 16 kHz mono audio; librosa resamples on load
audio, _ = librosa.load("./test_recording.wav", sr=16000)

# convert the waveform to log-mel input features and transcribe
input_features = processor(audio, sampling_rate=16000, return_tensors="pt").input_features
predicted_ids = model.generate(input_features)
print(processor.batch_decode(predicted_ids, skip_special_tokens=True))
```
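If your recordings are not at 16 kHz, they must be resampled before feature extraction. In practice a library resampler (e.g. `librosa.load(path, sr=16000)` or `torchaudio.transforms.Resample`) should be used; the pure-Python linear-interpolation sketch below only illustrates the idea, and its function name is hypothetical:

```python
def resample_linear(samples, src_rate, dst_rate=16000):
    """Naive linear-interpolation resampler (illustration only)."""
    ratio = src_rate / dst_rate
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        # map each output sample time onto the source timeline
        pos = i * ratio
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# halving a 32 kHz signal keeps every second sample
print(resample_linear([0, 1, 2, 3], 32000))
```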
## Training and evaluation data
The pre-processed dataset used in this work is available on Hugging Face as Artur 1.0.
## Training procedure
For details, including the training and testing code, refer to the GitHub repository of the project UHO: Slovenian speech recognition in an application for people with hearing impairment.
### Training hyperparameters
The following hyperparameters were used during training:
- learning_rate: 3.75e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 51000
- mixed_precision_training: Native AMP
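With the `linear` scheduler, the learning rate ramps from 0 to the peak (3.75e-05) over the 500 warmup steps, then decays linearly back to 0 at the final training step. A minimal sketch of the resulting schedule (the function name is illustrative, not part of the training code):

```python
def linear_warmup_linear_decay(step, peak_lr=3.75e-5, warmup=500, total=51000):
    """Learning rate at a given step under linear warmup + linear decay."""
    if step < warmup:
        # ramp up proportionally during warmup
        return peak_lr * step / warmup
    # then decay linearly to 0 at the last step
    return peak_lr * max(0.0, (total - step) / (total - warmup))

# peak is reached exactly at the end of warmup
print(linear_warmup_linear_decay(500))
```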
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1