Instructions to use mitchelldehaven/whisper-large-v2-uk with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use mitchelldehaven/whisper-large-v2-uk with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="mitchelldehaven/whisper-large-v2-uk")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("mitchelldehaven/whisper-large-v2-uk")
model = AutoModelForSpeechSeq2Seq.from_pretrained("mitchelldehaven/whisper-large-v2-uk")
```

- Notebooks
- Google Colab
- Kaggle
Whisper model fine-tuned on audio data from the CommonVoice Ukrainian v10 train and dev sets, with additional semi-supervised data.
There is a difference in the tokenization of the source data: in our data normalization process, we replace punctuation with "" rather than Whisper's " ". This mismatch leads to a slight degradation in WER on CommonVoice.
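A minimal sketch of the mismatch described above (the `normalize_*` function names are hypothetical, not from the model's actual preprocessing code): deleting punctuation outright can fuse or split tokens differently than replacing it with a space, e.g. around the Ukrainian apostrophe.

```python
import re

def normalize_drop(text: str) -> str:
    # Replace punctuation with "" (the normalization described for this model's data)
    return re.sub(r"[^\w\s]", "", text)

def normalize_space(text: str) -> str:
    # Replace punctuation with " " (Whisper-style), then collapse repeated spaces
    return re.sub(r"\s+", " ", re.sub(r"[^\w\s]", " ", text)).strip()

# The apostrophe inside a word shows how the two schemes diverge:
print(normalize_drop("п'ять хвилин"))   # "пять хвилин"  (one token)
print(normalize_space("п'ять хвилин"))  # "п ять хвилин" (split in two)
```

When the model is trained on one convention and scored against references normalized with the other, such tokens count as errors even if the transcription is correct.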
Evaluation results
- WER on mozilla-foundation/common_voice_11_0 test set (self-reported): 13.010
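For reference, WER (word error rate) is the word-level edit distance between hypothesis and reference, divided by the number of reference words; 13.010 means roughly 13 word errors per 100 reference words. A minimal sketch of the standard definition (not necessarily the exact scoring script used here):

```python
def wer(reference: str, hypothesis: str) -> float:
    # Word error rate: word-level edit distance / number of reference words.
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = edit distance between ref[:i] and hyp[:j]
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i  # deleting i reference words
    for j in range(len(hyp) + 1):
        d[0][j] = j  # inserting j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,        # deletion
                          d[i][j - 1] + 1,        # insertion
                          d[i - 1][j - 1] + cost) # substitution / match
    return d[len(ref)][len(hyp)] / len(ref)

# One inserted word against a 3-word reference -> WER of 1/3
print(wer("кіт сидів тут", "кіт сидів тут вчора"))
```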