How to use jbochi/candle-coedit-quantized with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("jbochi/candle-coedit-quantized")
model = AutoModelForSeq2SeqLM.from_pretrained("jbochi/candle-coedit-quantized")
```
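Once the tokenizer and model are loaded, text editing works like any other seq2seq generation call. Below is a minimal sketch of end-to-end usage. Note the assumptions: this repo primarily hosts quantized weights for Candle, so loading via Transformers may instead require an original CoEdIT checkpoint; the `edit_text` helper and the instruction-prefix prompt format are illustrative, based on CoEdIT-style prompting, not an API of this repo.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM


def edit_text(text: str, instruction: str,
              model_id: str = "jbochi/candle-coedit-quantized") -> str:
    """Apply a CoEdIT-style editing instruction to `text` (illustrative helper)."""
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForSeq2SeqLM.from_pretrained(model_id)
    # Assumption: the model expects the instruction prepended to the input text.
    prompt = f"{instruction}: {text}"
    inputs = tokenizer(prompt, return_tensors="pt")
    outputs = model.generate(**inputs, max_new_tokens=256)
    return tokenizer.decode(outputs[0], skip_special_tokens=True)


# Example call (downloads the model on first use):
# edit_text("She no went to the market.", "Fix the grammar")
```

The helper keeps loading and generation in one place for clarity; in a real application you would load the tokenizer and model once and reuse them across calls.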