Instructions for using lmz/candle-quantized-t5 with libraries, inference providers, notebooks, and local apps. Follow the links below to get started.
- Libraries
  - Transformers

How to use lmz/candle-quantized-t5 with Transformers:

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("lmz/candle-quantized-t5")
model = AutoModelForSeq2SeqLM.from_pretrained("lmz/candle-quantized-t5")
```

- Notebooks
  - Google Colab
  - Kaggle
Discussion #2: Flan-t5-xl.gguf missing
opened by bayang

Sure thing!
Which integer quantization format did you use to generate flan-t5-large.gguf? (q6k?)
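Questions like this can often be answered by inspecting the GGUF file itself, since the format records the quantization type of each tensor in its metadata. Below is a minimal sketch of parsing just the fixed GGUF header, assuming the standard layout (4-byte `GGUF` magic, little-endian `uint32` version, then `uint64` tensor count and metadata key-value count); the sample bytes are fabricated for illustration, not taken from any file in this repo:

```python
import struct

def parse_gguf_header(data: bytes) -> dict:
    # GGUF files begin with the 4-byte magic "GGUF", a uint32 version,
    # then uint64 tensor count and uint64 metadata KV count (little-endian).
    magic, version, n_tensors, n_kv = struct.unpack_from("<4sIQQ", data, 0)
    if magic != b"GGUF":
        raise ValueError("not a GGUF file")
    return {"version": version, "tensors": n_tensors, "kv_pairs": n_kv}

# Fabricated header bytes for illustration (not from a real model file).
sample = struct.pack("<4sIQQ", b"GGUF", 3, 170, 24)
print(parse_gguf_header(sample))  # {'version': 3, 'tensors': 170, 'kv_pairs': 24}
```

Reading the first 24 bytes of a downloaded `.gguf` file and passing them to this function is enough to confirm it is a GGUF file; the per-tensor quantization types (such as q6k) follow later in the tensor-info section.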