## How to use from the Transformers library
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="nutorbit/bart-xllm")
```
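
Once constructed, the pipeline can be called on a prompt and returns a list of generated completions. A minimal sketch, where the prompt text and `max_new_tokens` value are illustrative:

```python
# Generate a continuation of a prompt (prompt and length are illustrative).
result = pipe("Once upon a time", max_new_tokens=50)
print(result[0]["generated_text"])
```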
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("nutorbit/bart-xllm")
model = AutoModelForCausalLM.from_pretrained("nutorbit/bart-xllm")
```
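
When loading the model directly, generation goes through `model.generate` on tokenized input. A minimal sketch, assuming an illustrative prompt:

```python
# Tokenize an illustrative prompt and generate a continuation.
inputs = tokenizer("Once upon a time", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=50)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```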
- Format: Safetensors
- Model size: 0.3B params
- Tensor types: F32, F16, I8