# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM
tokenizer = AutoTokenizer.from_pretrained("Studyard/Context2Question")
model = AutoModelForSeq2SeqLM.from_pretrained("Studyard/Context2Question")
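The snippet above only loads the model and tokenizer. A minimal sketch of the full generation round trip follows; `generate_question` is a hypothetical helper name, and plain-text context input is an assumption, since the model card does not document the expected input format.

```python
# Hypothetical helper: wraps the encode / generate / decode round trip for a
# seq2seq question-generation model. `model` and `tokenizer` are the objects
# loaded above with from_pretrained.
def generate_question(context: str, model, tokenizer, max_new_tokens: int = 64) -> str:
    # Encode the context into model inputs (PyTorch tensors, truncated to the
    # model's maximum length).
    inputs = tokenizer(context, return_tensors="pt", truncation=True)
    # Generate the question token ids with beam search.
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, num_beams=4)
    # Decode the first (best) beam back into a string, dropping special tokens.
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```

Usage would then look like `question = generate_question("The Eiffel Tower was completed in 1889.", model, tokenizer)`.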
# Gated model: log in with a Hugging Face token that has gated-access permission:
#   hf auth login
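For non-interactive use (e.g. CI), the token can also be passed directly to `from_pretrained` via its `token` parameter instead of logging in with the CLI. Reading it from an `HF_TOKEN` environment variable is an assumption about how the script is configured; no assertions are possible here without access to the gated repository.

```python
import os
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

# Assumption: a token granted access to the gated repo is exported as HF_TOKEN.
token = os.environ.get("HF_TOKEN")

tokenizer = AutoTokenizer.from_pretrained("Studyard/Context2Question", token=token)
model = AutoModelForSeq2SeqLM.from_pretrained("Studyard/Context2Question", token=token)
```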