How to use JulioCastro/whisper-medium-ca with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="JulioCastro/whisper-medium-ca")
```

```python
# Load the processor and model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("JulioCastro/whisper-medium-ca")
model = AutoModelForSpeechSeq2Seq.from_pretrained("JulioCastro/whisper-medium-ca")
```
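The pipeline accepts an audio file path or raw samples with a sampling rate; Whisper checkpoints expect 16 kHz mono input, so audio recorded at another rate has to be resampled first. As a minimal sketch of what that step does (`resample_linear` is a hypothetical helper, not part of `transformers` — in practice `librosa` or `torchaudio` handle this):

```python
# Hypothetical helper: linearly resample mono audio to a target rate
# (Whisper models expect 16 kHz input). Pure Python, for illustration only.
def resample_linear(samples, src_rate, dst_rate=16000):
    if src_rate == dst_rate:
        return list(samples)
    n_out = int(len(samples) * dst_rate / src_rate)
    out = []
    for i in range(n_out):
        pos = i * src_rate / dst_rate       # position in the source signal
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)  # clamp at the last sample
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Doubling the rate doubles the number of samples.
audio_16k = resample_linear([0.0, 1.0, 0.0, -1.0], src_rate=8000)
```

The resampled samples can then be handed to the pipeline, roughly as `pipe({"raw": audio_array, "sampling_rate": 16000})`; when given a file path, the pipeline can also decode and resample on its own if ffmpeg is available.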
This model is a fine-tuned version of openai/whisper-medium on the Common Voice 11.0, Fleurs, SLR69, tb3_parla, and parlament_parla datasets. It achieves the following results on the evaluation set:
## Model description

More information needed

## Intended uses & limitations

More information needed

## Training and evaluation data

More information needed
## Training procedure

### Training hyperparameters

The following hyperparameters were used during training:
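The actual values are not recorded above. Purely as an illustration of how such a configuration is expressed — these are generic defaults, not this model's settings — Whisper fine-tuning with `transformers` is commonly driven by `Seq2SeqTrainingArguments`:

```python
# Illustrative only: NOT the hyperparameters used for
# JulioCastro/whisper-medium-ca, which are not listed in this card.
from transformers import Seq2SeqTrainingArguments

training_args = Seq2SeqTrainingArguments(
    output_dir="./whisper-medium-ca",  # hypothetical output path
    per_device_train_batch_size=16,
    learning_rate=1e-5,
    warmup_steps=500,
    max_steps=5000,
    fp16=True,
    evaluation_strategy="steps",
    predict_with_generate=True,        # decode with generate() during eval
)
```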