# ModernBert Base LFQA
## Usage

```python
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("nlpatunt/modernbert-base-lfqa")
tokenizer = AutoTokenizer.from_pretrained("nlpatunt/modernbert-base-lfqa")
```
This is a modernbert-base model trained on the nlpatunt/lfqa_processed dataset. Given a question and a pair of answers (a1, a2), the task is to predict whether humans will prefer a1 over a2. The dataset also contains tie annotations, but those are discarded during training.
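The exact label convention of the classification head is not documented here, so the sketch below makes an assumption: index 1 means a1 is preferred and index 0 means a2 is preferred. It shows how the model's logits (obtained from `model(**inputs).logits`) could be turned into a preference decision; the dummy tensor stands in for a real forward pass.

```python
import torch

def a1_preferred(logits: torch.Tensor) -> bool:
    """Decide whether a1 is preferred over a2 from the classifier logits.

    Assumed (undocumented) label order: index 0 -> a2 preferred,
    index 1 -> a1 preferred.
    """
    probs = torch.softmax(logits, dim=-1)
    return bool(probs[0, 1] > probs[0, 0])

# Dummy logits standing in for model(**inputs).logits on one example:
logits = torch.tensor([[0.2, 1.3]])
print(a1_preferred(logits))  # -> True
```

If the label order turns out to be reversed, swap the two indices in the comparison.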
Results on the test set:

```
{'eval_loss': 0.6075420379638672, 'eval_accuracy': 0.6529160739687055, 'eval_macro_f1': 0.6526623576485072, 'eval_runtime': 5.9397, 'eval_samples_per_second': 118.355, 'eval_steps_per_second': 14.815, 'epoch': 8.0}
```