How to use CheshireCC/faster-whisper-large-v3-float32 with Transformers:

```python
# Load model directly
from transformers import AutoModel

model = AutoModel.from_pretrained("CheshireCC/faster-whisper-large-v3-float32", dtype="auto")
```
OpenAI Whisper large-v3 model in CTranslate2 format, converted from the Flax model files.

Download the float32 `flax_model.msgpack` and the other configuration files from https://huggingface.co/openai/whisper-large-v3, then convert them with CTranslate2:
```python
from ctranslate2.converters import TransformersConverter

model_name_or_path = "<your folder with model files>"
output_dir = "<target folder to save model files>"

converter = TransformersConverter(model_name_or_path=model_name_or_path)
converter.convert(output_dir=output_dir, quantization="float32", force=True)
```
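After the conversion step above, the resulting directory can be loaded with the `faster-whisper` package. The sketch below is an assumption, not part of the model card: the model directory and `audio.wav` path are placeholders, and the small `compute_type_for` helper is hypothetical, included only to make the point that the `compute_type` requested at load time should match the quantization used during conversion (`"float32"` here).

```python
import os

def compute_type_for(quantization: str) -> str:
    # Hypothetical helper: the compute_type passed to faster-whisper should
    # match the quantization chosen at conversion time ("float32" above).
    supported = {"float32", "float16", "int8", "int8_float16", "int8_float32"}
    if quantization not in supported:
        raise ValueError(f"unsupported quantization: {quantization}")
    return quantization

model_dir = "<target folder to save model files>"  # the output_dir from the conversion step

if os.path.isdir(model_dir):
    # Deferred import so the sketch runs even without faster-whisper installed.
    from faster_whisper import WhisperModel

    model = WhisperModel(model_dir, device="cpu",
                         compute_type=compute_type_for("float32"))
    segments, info = model.transcribe("audio.wav")  # placeholder audio path
    for segment in segments:
        print(f"[{segment.start:.2f} -> {segment.end:.2f}] {segment.text}")
```

Running at float32 keeps the full precision of the converted weights; smaller `compute_type` values trade accuracy for speed and memory.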