---
library_name: transformers
language:
- sl
license: apache-2.0
base_model: openai/whisper-base
tags:
- hf-asr-leaderboard
- generated_from_trainer
datasets:
- Artur-1-0-full
metrics:
- wer
model-index:
- name: Whisper Base Slo Artur - Full FT
  results:
  - task:
      name: Automatic Speech Recognition
      type: automatic-speech-recognition
    dataset:
      name: Artur 1.0 Full Dataset
      type: Artur-1-0-full
      args: 'config: sl, split: test'
    metrics:
    - name: Wer
      type: wer
      value: 11.38
---

# Whisper Base Slo Artur - Full FT
This model is a Slovenian fine-tuned version of openai/whisper-base, trained on the Artur 1.0 Full Dataset on the Vega supercomputer.

The best checkpoint (step 32000) achieves the following results on the test set:
- Loss: 0.1458
- Wer: 11.38
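WER (word error rate) is the word-level edit distance between the hypothesis and the reference, normalized by the number of reference words. As a rough illustration of what the 11.38 figure above measures, here is a minimal self-contained sketch (not the exact scorer used during evaluation):

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: (substitutions + deletions + insertions) / reference words."""
    ref, hyp = reference.split(), hypothesis.split()
    # Dynamic-programming table of word-level edit distances
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # delete all remaining reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # insert all remaining hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = d[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])
            d[i][j] = min(sub, d[i - 1][j] + 1, d[i][j - 1] + 1)
    return d[len(ref)][len(hyp)] / len(ref)
```

A WER of 11.38 means roughly 11 word errors per 100 reference words.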
## Model description

Example usage:

```python
import whisper
from transformers import WhisperForConditionalGeneration, WhisperProcessor

processor = WhisperProcessor.from_pretrained("blko/whisper-base-sl-artur-full-ft")
# load the best checkpoint (step 32000)
model = WhisperForConditionalGeneration.from_pretrained(
    "blko/whisper-base-sl-artur-full-ft",
    revision="772cbcea0383a8f4359d3bd8457aa63ca881c47b",
)

audio = whisper.load_audio("./test_recording.wav")  # resampled to the required 16 kHz
audio = whisper.pad_or_trim(audio)                  # pad/trim to the 30 s window Whisper expects
mel = whisper.log_mel_spectrogram(audio).unsqueeze(dim=0)
output = model.generate(mel)
print(processor.batch_decode(output, skip_special_tokens=True))
```
## Training and evaluation data

The pre-processed dataset used in this work can be found on Hugging Face: Artur 1.0.
## Training procedure

For details, including the training and testing code, refer to the GitHub repository of UHO: Slovenian speech recognition in an application for people with hearing impairment.
### Training hyperparameters

The following hyperparameters were used during training:
- learning_rate: 2.5e-05
- train_batch_size: 32
- eval_batch_size: 16
- seed: 42
- optimizer: Adam with betas=(0.9,0.999) and epsilon=1e-08
- lr_scheduler_type: linear
- lr_scheduler_warmup_steps: 500
- training_steps: 51000
- mixed_precision_training: Native AMP
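For reference, the hyperparameters above map roughly onto a `Seq2SeqTrainingArguments` configuration like the following sketch (the `output_dir` is hypothetical, and the Adam settings listed are the `Trainer` defaults):

```python
from transformers import Seq2SeqTrainingArguments

# Sketch only: mirrors the listed hyperparameters, not the authors' exact script
args = Seq2SeqTrainingArguments(
    output_dir="./whisper-base-sl-artur-full-ft",  # hypothetical path
    learning_rate=2.5e-5,
    per_device_train_batch_size=32,
    per_device_eval_batch_size=16,
    seed=42,
    lr_scheduler_type="linear",
    warmup_steps=500,
    max_steps=51000,
    fp16=True,  # Native AMP mixed precision
)
```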
### Framework versions
- Transformers 4.44.2
- Pytorch 2.1.0+cu121
- Datasets 2.21.0
- Tokenizers 0.19.1