How to use mohitsha/whisper-tiny-static-shape-quantized-SL-448 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("automatic-speech-recognition", model="mohitsha/whisper-tiny-static-shape-quantized-SL-448")
```

```python
# Load model directly
from transformers import AutoProcessor, AutoModelForSpeechSeq2Seq

processor = AutoProcessor.from_pretrained("mohitsha/whisper-tiny-static-shape-quantized-SL-448")
model = AutoModelForSpeechSeq2Seq.from_pretrained("mohitsha/whisper-tiny-static-shape-quantized-SL-448")
```
Whisper Model Quantized
This repository contains the Whisper model quantized with SmoothQuant using ONNX Runtime.
- Only the Whisper decoder is quantized.
- The model has been modified to accept fixed-shape inputs: (1, 80, 3000) for the encoder and (1, 448) for the decoder.
- Inference uses the un-quantized encoder model together with the quantized decoder model.
- This model is intended for testing and may be replaced with improved versions in the future.
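Because the graph is exported with static shapes, every input must be padded (or truncated) to the fixed sizes above before it is fed to the model. The sketch below is illustrative only and not part of this repository; the helper names and the `pad_token_id` default are assumptions, not values taken from the model's config.

```python
import numpy as np

# Illustrative sketch (assumed helpers, not from this repo): static-shape
# models require every input to match its fixed size exactly.
ENCODER_SHAPE = (1, 80, 3000)   # (batch, mel bins, frames)
DECODER_LEN = 448               # fixed decoder sequence length (the "SL-448" in the name)

def pad_mel_features(mel: np.ndarray) -> np.ndarray:
    """Right-pad (or truncate) a (80, n_frames) mel spectrogram to (1, 80, 3000)."""
    out = np.zeros(ENCODER_SHAPE, dtype=mel.dtype)
    n = min(mel.shape[1], ENCODER_SHAPE[2])
    out[0, :, :n] = mel[:, :n]
    return out

def pad_decoder_ids(ids, pad_token_id=50257):
    """Right-pad a list of decoder token ids to the fixed (1, 448) shape.

    pad_token_id=50257 is an illustrative placeholder; use the id from the
    model's tokenizer/config in practice.
    """
    out = np.full((1, DECODER_LEN), pad_token_id, dtype=np.int64)
    n = min(len(ids), DECODER_LEN)
    out[0, :n] = ids[:n]
    return out
```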
Evaluation:
The model achieves a WER of 6.02% on the librispeech_asr (clean) test set.
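For reference, word error rate (WER) is the word-level edit distance between the reference transcript and the model's hypothesis, divided by the reference length. A minimal self-contained sketch is below; in practice a library such as `jiwer` or Hugging Face `evaluate` would be used to score the librispeech_asr (clean) test set.

```python
# Minimal WER sketch: word-level Levenshtein distance / reference length.
def wer(reference: str, hypothesis: str) -> float:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edits needed to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)] / len(ref)
```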