Tags:
- Automatic Speech Recognition
- Transformers
- TensorBoard
- Safetensors
- Yoruba
- whisper
- Generated from Trainer
- Eval Results (legacy)
Instructions for using Danieljava/whisper-small-dv with libraries, inference providers, notebooks, and local apps.
- Libraries
- Transformers
How to use Danieljava/whisper-small-dv with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="Danieljava/whisper-small-dv")
```

```python
# Load the model and processor directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("Danieljava/whisper-small-dv")
model = AutoModelForSpeechSeq2Seq.from_pretrained("Danieljava/whisper-small-dv")
```

- Notebooks
- Google Colab
- Kaggle
- Xet hash: 68a22220b73f11b50bc78b07c95c8f8c11e1ca8aeec98577bcc1d8fbf2d3a443
- Size of remote file: 967 MB
- SHA256: 1d52beffc29d155436f1be2f57b14ab2370629e5130e24bdd41f217385b64413
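A downloaded copy of the file can be checked against the published SHA256 above. The sketch below uses Python's standard `hashlib`; the local filename is an assumption, not something specified by this page.

```python
import hashlib

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Compute the SHA-256 hex digest of a file, streaming in chunks
    so large model files do not have to fit in memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Published digest from the file listing above
EXPECTED = "1d52beffc29d155436f1be2f57b14ab2370629e5130e24bdd41f217385b64413"

# "model.safetensors" is a hypothetical local filename:
# assert sha256_of_file("model.safetensors") == EXPECTED
```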
Xet efficiently stores large files inside Git by intelligently splitting them into unique chunks, which accelerates uploads and downloads.
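The chunk-based deduplication idea can be illustrated with a toy content-defined chunker: split at positions where a rolling fingerprint hits a boundary pattern, then store chunks keyed by hash so identical content is kept once. This is a simplified sketch of the general technique; Xet's actual algorithm, fingerprint, and parameters differ.

```python
import hashlib

def chunk_bytes(data: bytes, mask: int = 0xFFF, window: int = 16) -> list[bytes]:
    """Split data where a toy rolling fingerprint matches a boundary
    pattern, so identical content tends to yield identical chunks
    regardless of its byte offset in the file."""
    chunks, start, fp = [], 0, 0
    for i, b in enumerate(data):
        fp = ((fp << 1) ^ b) & 0xFFFFFFFF  # toy rolling fingerprint
        if i - start >= window and (fp & mask) == mask:
            chunks.append(data[start : i + 1])
            start, fp = i + 1, 0
    if start < len(data):
        chunks.append(data[start:])
    return chunks

def dedup_store(chunks: list[bytes]) -> dict[str, bytes]:
    """Store chunks keyed by SHA-256, so repeated chunks are kept once."""
    return {hashlib.sha256(c).hexdigest(): c for c in chunks}
```

Because boundaries depend on content rather than fixed offsets, inserting bytes near the start of a file only changes the chunks around the edit, and unchanged chunks deduplicate against the copy already stored.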