Paper: Generating Self-Contained and Summary-Centric Question Answer Pairs via Differentiable Reward Imitation Learning (arXiv:2109.04689)
```python
# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("sc2qa/msmarco_qa_classifier")
model = AutoModelForSequenceClassification.from_pretrained("sc2qa/msmarco_qa_classifier")
```
For details, see the GitHub repo: https://github.com/amazon-research/SC2QA-DRIL
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="sc2qa/msmarco_qa_classifier")
```
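As a sketch of how the pipeline might be called at inference time (the example question is illustrative, and the exact label names depend on the model's config, which is not documented here), a `text-classification` pipeline returns a list with one label/score dict per input:

```python
from transformers import pipeline

# Load the QA classifier; weights are downloaded from the Hub on first use.
pipe = pipeline("text-classification", model="sc2qa/msmarco_qa_classifier")

# Score a candidate question. The label names come from the model's
# config, so we only inspect the generic structure of the output here.
result = pipe("What is the capital of France?")
print(result)  # a list of dicts, each with 'label' and 'score' keys
```

The same pipeline also accepts a list of strings, returning one result dict per input, which is convenient for scoring a batch of candidate questions.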