How to use google/t5-efficient-tiny with Transformers:
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-tiny")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-tiny")
```
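Once loaded, the model can be run through the usual seq2seq API. A minimal sketch (note: the T5-efficient checkpoints are pretrained with span corruption only, so raw generations are not meaningful until the model is fine-tuned on a downstream task; the input sentence is just an illustration):

```python
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("google/t5-efficient-tiny")
model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-tiny")

# T5's pretraining objective fills in masked spans marked by sentinel tokens
# such as <extra_id_0>, so we feed an input in that format.
inputs = tokenizer("The <extra_id_0> walks in <extra_id_1> park", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=False))
```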
Hi all,
Does anyone know exactly which model parameters are shared between the encoder and the decoder?
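In the Transformers T5 implementation, the parameter reused across encoder and decoder is the input token embedding matrix (`model.shared`); the attention and feed-forward layers of the two stacks are separate. A quick way to verify this on the checkpoint itself (a sketch; the identity checks below reflect how the current Transformers code wires up `T5Stack`, which could change between versions):

```python
from transformers import AutoModelForSeq2SeqLM

model = AutoModelForSeq2SeqLM.from_pretrained("google/t5-efficient-tiny")

# Both stacks point at the same embedding weight tensor, model.shared.
print(model.encoder.embed_tokens.weight is model.shared.weight)
print(model.decoder.embed_tokens.weight is model.shared.weight)
```

Both checks should print `True`, confirming that only the token embeddings are tied between the two stacks.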