Language model: bert-base-uncased
Language: English
Downstream-task: Extractive QA
Training data: SQuAD v1
Eval data: SQuAD v1
Code: see the usage example below
Infrastructure: 8x DGX A100
batch_size = 32
n_epochs = 3
base_LM_model = "bert-base-uncased"
max_seq_len = 386
learning_rate = 3e-5
doc_stride = 128
max_query_length = 64
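These hyperparameters match a standard SQuAD v1 fine-tuning run. As a hedged sketch, they could be passed to the `run_qa.py` example script from the transformers repository; the card does not state which training script the authors actually used, so the script name and flags below are assumptions, only the hyperparameter values come from this card.

```shell
# Fine-tuning sketch using the transformers question-answering example script.
# Note: max_query_length was a flag of the legacy run_squad.py script;
# run_qa.py has no direct equivalent, so it is omitted here.
python run_qa.py \
  --model_name_or_path bert-base-uncased \
  --dataset_name squad \
  --do_train --do_eval \
  --per_device_train_batch_size 32 \
  --learning_rate 3e-5 \
  --num_train_epochs 3 \
  --max_seq_length 386 \
  --doc_stride 128 \
  --output_dir ./bert-base-uncased-squad-v1
```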
Evaluated on the SQuAD v1.0 dev set.
"exact": 80.93,
"f1": 88.20,
Evaluated on the battery device dataset.
"precision": 62.19,
"recall": 75.00,
from transformers import AutoModelForQuestionAnswering, AutoTokenizer, pipeline
model_name = "batterydata/bert-base-uncased-squad-v1"
# a) Get predictions
nlp = pipeline('question-answering', model=model_name, tokenizer=model_name)
QA_input = {
'question': 'What is the electrolyte?',
'context': 'The typical non-aqueous electrolyte for commercial Li-ion cells is a solution of LiPF6 in linear and cyclic carbonates.'
}
res = nlp(QA_input)
# b) Load model & tokenizer
model = AutoModelForQuestionAnswering.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)
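The pipeline in a) hides the post-processing step that turns the model's start/end logits into an answer span. A minimal sketch of that selection logic, with invented tokens and scores (a real model emits one start score and one end score per input token):

```python
# Sketch of extractive-QA span selection. Tokens and scores are made up
# for illustration; a real model returns start_logits/end_logits with the
# same length as the tokenized input.
tokens = ["[CLS]", "what", "is", "the", "electrolyte", "?", "[SEP]",
          "a", "solution", "of", "lipf6", "in", "carbonates", "[SEP]"]
start_scores = [0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
                5.0, 1.0, 0.0, 0.5, 0.0, 0.0, 0.0]
end_scores   = [0.1, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0,
                0.2, 0.5, 0.0, 6.0, 0.0, 1.0, 0.0]

MAX_ANSWER_LEN = 5  # discard candidate spans longer than this

# Score every valid (start, end) pair and keep the highest-scoring one.
best = max(
    ((s, e) for s in range(len(tokens))
            for e in range(s, min(s + MAX_ANSWER_LEN, len(tokens)))),
    key=lambda p: start_scores[p[0]] + end_scores[p[1]],
)
answer = " ".join(tokens[best[0]:best[1] + 1])
print(answer)  # -> a solution of lipf6
```

The real pipeline additionally masks out spans that fall in the question or special tokens and maps the chosen token span back to character offsets in the original context.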
Shu Huang: sh2009 [at] cam.ac.uk
Jacqueline Cole: jmc61 [at] cam.ac.uk
BatteryBERT: A Pre-trained Language Model for Battery Database Enhancement