This model is a fine-tuned version of roberta-large on the boolq dataset; its validation loss and accuracy are shown in the training results table below.

How to use nfliu/roberta-large_boolq with Transformers:

# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-classification", model="nfliu/roberta-large_boolq")

# Load model directly
from transformers import AutoTokenizer, AutoModelForSequenceClassification

tokenizer = AutoTokenizer.from_pretrained("nfliu/roberta-large_boolq")
model = AutoModelForSequenceClassification.from_pretrained("nfliu/roberta-large_boolq")
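The text-classification pipeline accepts a single (question, context) pair as a dictionary with "text" and "text_pair" keys. A minimal sketch of calling the pipeline defined above; the label names in the output depend on the checkpoint's id2label config:

# Hypothetical single-pair call through the pipeline.
result = pipe({
    "text": "Lake Tahoe is in California",  # question
    "text_pair": "Lake Tahoe is a popular tourist spot in California.",  # context
})
print(result)  # top label and score; exact label names depend on the model config

For direct access to both class probabilities, run the tokenizer and model yourself: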
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("nfliu/roberta-large_boolq")
tokenizer = AutoTokenizer.from_pretrained("nfliu/roberta-large_boolq")

# Each example is a (question, context) pair.
examples = [
    ("Lake Tahoe is in California", "Lake Tahoe is a popular tourist spot in California."),
    ("Water is wet", "Contrary to popular belief, water is not wet."),
]

encoded_input = tokenizer(examples, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    model_output = model(**encoded_input)

probabilities = torch.softmax(model_output.logits, dim=-1).cpu().tolist()
probability_no = [round(prob[0], 2) for prob in probabilities]
probability_yes = [round(prob[1], 2) for prob in probabilities]

for example, p_no, p_yes in zip(examples, probability_no, probability_yes):
    print(f"Question: {example[0]}")
    print(f"Context: {example[1]}")
    print(f"p(No | question, context): {p_no}")
    print(f"p(Yes | question, context): {p_yes}")
    print()
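The snippet above hard-codes index 0 as "no" and index 1 as "yes". If you prefer not to assume that ordering, the mapping can be read from the model config instead; a short sketch, assuming the checkpoint populates id2label (the exact label names vary by checkpoint):

# Read the logit-index-to-label mapping from the config instead of hard-coding it.
id2label = model.config.id2label  # e.g. {0: 'LABEL_0', 1: 'LABEL_1'}; names vary
for (question, _), logits in zip(examples, model_output.logits):
    predicted = id2label[int(torch.argmax(logits))]
    print(f"{question!r} -> {predicted}")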
Model description: More information needed.

Intended uses & limitations: More information needed.

Training and evaluation data: More information needed.
Training results:
| Training Loss | Epoch | Step | Validation Loss | Accuracy |
|---|---|---|---|---|
| No log | 0.85 | 250 | 0.4508 | 0.8024 |
| 0.5086 | 1.69 | 500 | 0.3660 | 0.8502 |
| 0.5086 | 2.54 | 750 | 0.4092 | 0.8508 |
| 0.2387 | 3.39 | 1000 | 0.4975 | 0.8554 |
| 0.2387 | 4.24 | 1250 | 0.5577 | 0.8526 |
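As a sanity check, validation accuracy can be recomputed from the BoolQ validation split. A minimal sketch, assuming the datasets library, the google/boolq fields question, passage, and answer, and the same index-1-means-yes convention as in the example above:

import torch
from datasets import load_dataset
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("nfliu/roberta-large_boolq")
tokenizer = AutoTokenizer.from_pretrained("nfliu/roberta-large_boolq")
model.eval()

validation = load_dataset("google/boolq", split="validation")

correct = 0
for batch_start in range(0, len(validation), 32):
    batch = validation[batch_start:batch_start + 32]  # dict of column lists
    pairs = list(zip(batch["question"], batch["passage"]))
    encoded = tokenizer(pairs, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**encoded).logits
    predicted_yes = logits.argmax(dim=-1) == 1  # assumed: index 1 = "yes"
    answers = torch.tensor(batch["answer"], dtype=torch.bool)
    correct += (predicted_yes == answers).sum().item()

print(f"Validation accuracy: {correct / len(validation):.4f}")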
Base model: FacebookAI/roberta-large