Training dataset: rajpurkar/squad
How to use guo1006/bert-finetuned-squad-accelerate with Transformers:
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("question-answering", model="guo1006/bert-finetuned-squad-accelerate")

# Or load the model directly
from transformers import AutoTokenizer, AutoModelForQuestionAnswering

tokenizer = AutoTokenizer.from_pretrained("guo1006/bert-finetuned-squad-accelerate")
model = AutoModelForQuestionAnswering.from_pretrained("guo1006/bert-finetuned-squad-accelerate")
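Under the hood, a question-answering head like this one scores every token of the (question, context) input with a start logit and an end logit, and the answer is the span whose start and end score best together. A minimal sketch of that span selection, using hand-made tokens and logits rather than a real model forward pass:

```python
# Illustrative tokens and logits (made up for demonstration, not model output).
tokens = ["[CLS]", "which", "libraries", "?", "[SEP]",
          "jax", ",", "pytorch", "and", "tensorflow", "[SEP]"]
start_logits = [0.1, 0.0, 0.0, 0.0, 0.0, 3.2, 0.0, 0.5, 0.0, 0.4, 0.0]
end_logits   = [0.1, 0.0, 0.0, 0.0, 0.0, 0.2, 0.0, 0.6, 0.0, 3.5, 0.0]

# Pick the (start, end) pair with the highest combined score, requiring start <= end.
best = max(
    ((s, e) for s in range(len(tokens)) for e in range(s, len(tokens))),
    key=lambda pair: start_logits[pair[0]] + end_logits[pair[1]],
)
answer_tokens = tokens[best[0]:best[1] + 1]
print(" ".join(answer_tokens))  # jax , pytorch and tensorflow
```

The real pipeline adds more machinery (subword detokenization, softmax scores, filtering spans that fall inside the question), but the argmax-over-spans idea is the core of it.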
# Initialize the pipeline
from transformers import pipeline

question_answerer = pipeline(
    "question-answering",
    model="guo1006/bert-finetuned-squad-accelerate"
)

# Context to search for the answer
context = """
🤗 Transformers is backed by the three most popular deep learning libraries —
Jax, PyTorch and TensorFlow — with a seamless integration between them.
It's straightforward to train your models with one before loading them
for inference with the other.
"""

# Question to ask
question = "Which deep learning libraries back 🤗 Transformers?"

# Run inference
result = question_answerer(question=question, context=context)

# Print the result
print(result)
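The pipeline returns a dict with the keys `answer`, `score`, `start`, and `end`, where `start`/`end` are character offsets into the context. A small sketch of working with that shape (the values below are illustrative, not actual output of this model):

```python
# Result dict in the shape the question-answering pipeline returns;
# the score and offsets here are made up for demonstration.
context = "Jax, PyTorch and TensorFlow back the library."
result = {"answer": "Jax, PyTorch and TensorFlow", "score": 0.98,
          "start": 0, "end": 27}

# Because start/end are character offsets, slicing the context recovers the answer.
assert context[result["start"]:result["end"]] == result["answer"]
print(f"{result['answer']} (confidence {result['score']:.0%})")
```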
Base model
google-bert/bert-base-cased