How to use mrm8488/switch-base-16-finetuned-xsum with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/switch-base-16-finetuned-xsum")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/switch-base-16-finetuned-xsum")
```
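Once loaded, the model can be used for abstractive summarization in the XSum style (a one-sentence summary of a document). A minimal sketch follows; the example article text is made up, and the `"summarize: "` task prefix is an assumption carried over from the T5 family on which Switch Transformers are based — check the model card if the fine-tuning used a different input format.

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("mrm8488/switch-base-16-finetuned-xsum")
model = AutoModelForSeq2SeqLM.from_pretrained("mrm8488/switch-base-16-finetuned-xsum")

# Made-up example document; XSum models are trained to produce a single-sentence summary.
article = (
    "The tower is 324 metres tall, about the same height as an 81-storey "
    "building, and is the tallest structure in Paris. During its construction, "
    "it surpassed the Washington Monument to become the tallest man-made "
    "structure in the world."
)

# "summarize: " prefix is assumed (T5-style); truncation keeps long inputs within limits.
inputs = tokenizer("summarize: " + article, return_tensors="pt", truncation=True)
summary_ids = model.generate(**inputs, max_new_tokens=60)
summary = tokenizer.decode(summary_ids[0], skip_special_tokens=True)
print(summary)
```

Note that Switch Transformers are mixture-of-experts models, so the checkpoint is larger than a dense model of comparable quality; loading it requires a correspondingly larger download and memory footprint.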