Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting
Paper: arXiv:2101.00416
SSR-base model as in the EMNLP 2021 paper "Improving Sequence-to-Sequence Pre-training via Sequence Span Rewriting".

```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSeq2SeqLM

tokenizer = AutoTokenizer.from_pretrained("microsoft/ssr-base")
model = AutoModelForSeq2SeqLM.from_pretrained("microsoft/ssr-base")
```
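Once loaded this way, the model can be run through the standard seq2seq `generate` API. The snippet below is a minimal sketch; the input text and generation parameters (`max_length`, `num_beams`) are illustrative and not values recommended by the paper or the model card.

```python
# Minimal inference sketch (assumed usage; the input text and generation
# parameters are illustrative, not taken from the paper or model card).
inputs = tokenizer(
    "The tower is 324 metres tall, about the same height as an 81-storey building.",
    return_tensors="pt",
)
outputs = model.generate(**inputs, max_length=64, num_beams=4)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```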
```python
# Use a pipeline as a high-level helper
# Warning: Pipeline type "summarization" is no longer supported in transformers v5.
# You must load the model directly (see above) or downgrade to v4.x with:
# pip install "transformers<5.0.0"
from transformers import pipeline

pipe = pipeline("summarization", model="microsoft/ssr-base")
```
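If the pipeline route is used (on transformers v4.x), calling it looks like the sketch below; the input text and `max_length` are example values, not recommendations from the model card.

```python
# Illustrative pipeline call on transformers v4.x; input text and max_length
# are example values only.
summary = pipe(
    "The tower is 324 metres tall, about the same height as an 81-storey building.",
    max_length=32,
)
print(summary[0]["summary_text"])
```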