# stdbug/whisper-tiny-ba

Tags: Automatic Speech Recognition · Transformers · TensorBoard · Safetensors · Bashkir · whisper · Eval Results (legacy)
How to use stdbug/whisper-tiny-ba with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="stdbug/whisper-tiny-ba")

# Or load the processor and model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("stdbug/whisper-tiny-ba")
model = AutoModelForSpeechSeq2Seq.from_pretrained("stdbug/whisper-tiny-ba")
```
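The ASR pipeline accepts a file path or a dict containing a raw waveform and its sampling rate; Whisper feature extractors expect 16 kHz mono audio. A minimal sketch of preparing an in-memory sample (the sine wave is a stand-in for real Bashkir speech, and `pipe` refers to the pipeline created above):

```python
import numpy as np

# Whisper feature extractors expect 16 kHz mono audio.
SAMPLING_RATE = 16_000

# Stand-in waveform: one second of a 440 Hz tone (real speech goes here).
t = np.linspace(0, 1, SAMPLING_RATE, endpoint=False)
waveform = (0.1 * np.sin(2 * np.pi * 440 * t)).astype(np.float32)

# Dict input form accepted by the automatic-speech-recognition pipeline:
sample = {"raw": waveform, "sampling_rate": SAMPLING_RATE}

# text = pipe(sample)["text"]  # transcribe with the pipeline created above
print(sample["raw"].shape, sample["sampling_rate"])
```

Audio at other sampling rates should be resampled to 16 kHz first (e.g. with librosa or torchaudio) before being passed to the model.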
Fine-tuned from openai/whisper-tiny on 133,675 Bashkir training audio samples from mozilla-foundation/common_voice_17_0.

This model was created with the Mozilla.ai Blueprint speech-to-text-finetune.
Evaluation results on 14,513 Bashkir audio samples:

| Metric | Baseline (before finetuning) | Finetuned |
|---|---|---|
| Word Error Rate (Normalized) | 150.765 | 102.544 |
| Word Error Rate (Orthographic) | 127.801 | 103.049 |
| Character Error Rate (Normalized) | 116.224 | 89.277 |
| Character Error Rate (Orthographic) | 115.431 | 89.293 |
| Loss | 5.831 | 1.441 |
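Error rates above 100 are possible because WER is the word-level edit distance (substitutions + deletions + insertions) divided by the number of reference words, so insertions can push the count past the reference length. A self-contained sketch of the computation (illustrative only, not the exact normalization used in the evaluation above):

```python
def word_error_rate(reference: str, hypothesis: str) -> float:
    """WER = (S + D + I) / N via word-level Levenshtein distance."""
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i  # delete all i reference words
    for j in range(len(hyp) + 1):
        dp[0][j] = j  # insert all j hypothesis words
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(
                dp[i - 1][j] + 1,         # deletion
                dp[i][j - 1] + 1,         # insertion
                dp[i - 1][j - 1] + cost,  # substitution or match
            )
    return dp[len(ref)][len(hyp)] / len(ref)

# Perfect hypothesis: WER of 0.0
assert word_error_rate("сәләм донъя", "сәләм донъя") == 0.0
# Three inserted words against a one-word reference: WER of 3.0 (300%)
assert word_error_rate("сәләм", "сәләм бик оҙон гипотеза") == 3.0
```

In practice, libraries such as jiwer compute this metric; the "Normalized" rows above typically apply text normalization (lowercasing, punctuation removal) before scoring, while "Orthographic" scores the raw transcripts.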
Model tree for stdbug/whisper-tiny-ba
- Base model: openai/whisper-tiny

Evaluation results
- WER on Common Voice (Bashkir): 102.544 (self-reported)