## Use with the Transformers library
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("chenxran/orion-instance-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("chenxran/orion-instance-generator")
```
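Once the checkpoint is loaded, it can be used like any other seq2seq model in Transformers: tokenize an input string, call `generate`, and decode the result. A minimal sketch follows; the prompt text is a hypothetical example (not taken from the model card), and the call assumes the checkpoint downloads successfully from the Hub.

```python
# Minimal generation sketch. Assumptions: the checkpoint is reachable on the
# Hub, and the prompt below is an illustrative placeholder, not an official
# example input for this model.
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("chenxran/orion-instance-generator")
model = AutoModelForSeq2SeqLM.from_pretrained("chenxran/orion-instance-generator")

# Tokenize a single input string into PyTorch tensors.
inputs = tokenizer("PersonX eats breakfast.", return_tensors="pt")

# Generate output token ids and decode them back to text.
output_ids = model.generate(**inputs, max_new_tokens=32)
text = tokenizer.decode(output_ids[0], skip_special_tokens=True)
print(text)
```

Sampling parameters such as `num_beams` or `do_sample` can be passed to `generate` in the usual way to control decoding.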