Whisper Base Slo Artur - Full FT

This model is a Slovenian fine-tuned version of openai/whisper-base, trained on the Artur 1.0 Full dataset.
Fine-tuning was performed on the Vega supercomputer.
The best checkpoint (step 32000) achieves the following results on the test set:

  • Loss: 0.1458
  • WER: 11.38
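WER (word error rate) is the word-level edit distance between the reference transcript and the model output, divided by the number of reference words, here reported in percent. As a self-contained sketch of the metric (in practice a library such as jiwer or evaluate is typically used):

```python
# Word error rate (WER): minimum number of word substitutions, insertions,
# and deletions to turn the hypothesis into the reference, divided by the
# number of reference words, in percent.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits to turn the first i reference words into the first j hypothesis words
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return 100.0 * dp[len(ref)][len(hyp)] / len(ref)

print(round(wer("to je test", "to je en test"), 2))  # → 33.33 (one insertion over 3 words)
```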

Model description

Example usage:

from transformers import WhisperForConditionalGeneration, WhisperProcessor
import whisper

processor = WhisperProcessor.from_pretrained("blko/whisper-base-sl-artur-full-ft")

# load the best checkpoint (step 32000)
model = WhisperForConditionalGeneration.from_pretrained(
    "blko/whisper-base-sl-artur-full-ft",
    revision="772cbcea0383a8f4359d3bd8457aa63ca881c47b",
)

audio_path = "./test_recording.wav"    # 16 kHz sample rate required
audio = whisper.pad_or_trim(whisper.load_audio(audio_path))  # Whisper expects fixed 30 s windows
mel_from_file = whisper.log_mel_spectrogram(audio).unsqueeze(dim=0)
output = model.generate(mel_from_file)
print(processor.batch_decode(output, skip_special_tokens=True))

Training and evaluation data

The pre-processed dataset used in this work is available on Hugging Face: Artur 1.0.

Training procedure

For details, including the training and testing code, refer to the GitHub repository of the UHO project: Slovenian speech recognition in an application for people with hearing impairment.

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 2.5e-05
  • train_batch_size: 32
  • eval_batch_size: 16
  • seed: 42
  • optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
  • lr_scheduler_type: linear
  • lr_scheduler_warmup_steps: 500
  • training_steps: 51000
  • mixed_precision_training: Native AMP
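Assuming the usual Hugging Face Seq2SeqTrainer setup for Whisper fine-tuning (an assumption; the exact training script is in the repository linked above), the hyperparameters listed here map roughly to the following configuration sketch:

```python
from transformers import Seq2SeqTrainingArguments

# Hypothetical reconstruction of the training configuration from the
# hyperparameters listed above; output_dir is a placeholder.
training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-sl-artur-full-ft",
    learning_rate=2.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=51000,
    fp16=True,  # Native AMP mixed precision
)
```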

Framework versions

  • Transformers 4.44.2
  • Pytorch 2.1.0+cu121
  • Datasets 2.21.0
  • Tokenizers 0.19.1