How to use zuazo/whisper-tiny-pt with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="zuazo/whisper-tiny-pt")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("zuazo/whisper-tiny-pt")
model = AutoModelForSpeechSeq2Seq.from_pretrained("zuazo/whisper-tiny-pt")
```

This model is a fine-tuned version of openai/whisper-tiny on the Portuguese (pt) subset of the mozilla-foundation/common_voice_13_0 dataset. It achieves the following results on the evaluation set:
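Whisper checkpoints expect 16 kHz mono audio, so input recorded at another rate should be resampled before it reaches the processor. Real pipelines typically use librosa or torchaudio for this; the sketch below uses naive linear interpolation with NumPy only, and the 8 kHz input rate is just an illustrative assumption:

```python
import numpy as np

def resample_linear(audio: np.ndarray, orig_sr: int, target_sr: int = 16000) -> np.ndarray:
    """Naively resample a mono signal to the 16 kHz rate Whisper expects."""
    duration = len(audio) / orig_sr
    n_target = int(round(duration * target_sr))
    # Time stamps of the input samples and of the desired output samples.
    old_t = np.arange(len(audio)) / orig_sr
    new_t = np.arange(n_target) / target_sr
    # Linear interpolation of the waveform onto the new time grid.
    return np.interp(new_t, old_t, audio).astype(np.float32)

one_second_8k = np.zeros(8000, dtype=np.float32)  # 1 s of silence at 8 kHz
upsampled = resample_linear(one_second_8k, orig_sr=8000)
print(upsampled.shape)  # (16000,)
```

The resampled array can then be passed to the pipeline or processor as `{"array": upsampled, "sampling_rate": 16000}`.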
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
### Training hyperparameters

The following hyperparameters were used during training:

### Training results

| Training Loss | Epoch | Step | Validation Loss | Wer |
|---|---|---|---|---|
| 0.4763 | 14.08 | 1000 | 0.5686 | 31.3114 |
| 0.3784 | 28.17 | 2000 | 0.5350 | 30.0693 |
| 0.3286 | 42.25 | 3000 | 0.5239 | 29.2413 |
| 0.3073 | 56.34 | 4000 | 0.5200 | 29.4138 |
| 0.2971 | 70.42 | 5000 | 0.5191 | 28.9653 |