How to use wisdominanutshell/coedit-xxl-8bit with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("wisdominanutshell/coedit-xxl-8bit")
model = AutoModelForSeq2SeqLM.from_pretrained("wisdominanutshell/coedit-xxl-8bit")
```
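Once loaded, the model can be queried like any other seq2seq checkpoint. The sketch below is a minimal inference example, assuming this checkpoint keeps the instruction-prompt style of the upstream CoEdIT models (an edit instruction prefixed to the input sentence); the prompt wording and generation settings are illustrative, not prescribed by this repository.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("wisdominanutshell/coedit-xxl-8bit")
model = AutoModelForSeq2SeqLM.from_pretrained("wisdominanutshell/coedit-xxl-8bit")

# CoEdIT-style instruction prompt (assumed format, borrowed from the upstream coedit-xxl card)
text = "Fix grammatical errors in this sentence: When I grow up, I start to understand what he said is quite right."

# Tokenize, generate the edited sentence, and decode it back to text
input_ids = tokenizer(text, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=256)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

Since this is an 8-bit checkpoint, loading may additionally depend on the bitsandbytes package being installed; if `from_pretrained` raises an error, check the repository files for the expected quantization setup.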
This checkpoint is borrowed and modified from coedit-xxl.