Dataset: allenai/scifact
LoRA adapter fine-tuned on the SciFact dataset for biomedical claim verification. Given a scientific claim and retrieved evidence, the model produces structured verdicts (SUPPORTED / REFUTED / INSUFFICIENT) with citations.
This adapter sits on top of a RAG pipeline that retrieves biomedical evidence. The fine-tuned model generates grounded answers with explicit document citations, suitable for fact-checking applications.
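As a sketch of how retrieved evidence can be fed to the model, here is a minimal prompt builder. The `[D0]`, `[D1]` labels match the citation format described above, but the function name and prompt wording are illustrative assumptions, not the exact training template:

```python
def build_prompt(claim: str, docs: list[str]) -> str:
    """Assemble a claim-verification prompt from retrieved evidence.

    Hypothetical helper: the exact training prompt is not published,
    so this only illustrates the [D0], [D1] citation labeling.
    """
    evidence = "\n".join(f"[D{i}] {doc}" for i, doc in enumerate(docs))
    return (
        "Claim: " + claim + "\n\n"
        "Evidence:\n" + evidence + "\n\n"
        "Verdict (SUPPORTED / REFUTED / INSUFFICIENT) with citations:"
    )

prompt = build_prompt(
    "Aspirin reduces the risk of myocardial infarction.",
    ["RCT shows aspirin lowers MI incidence.", "Meta-analysis finds mixed effects."],
)
print(prompt.splitlines()[3])  # → [D0] RCT shows aspirin lowers MI incidence.
```

The resulting string can then be passed to the loaded model for generation.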
| Epoch | Training Loss | Validation Loss |
|---|---|---|
| 1 | 0.527 | 0.687 |
| 2 | 0.406 | 0.671 |
| 3 | 0.487 | 0.662 |
| 4 | 0.415 | 0.658 |
| 5 | 0.487 | 0.659 |
Validation loss falls steadily from 0.687 to 0.659 over five epochs, with a final train/validation gap of about 0.17 (healthy, no sign of overfitting).
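The gap is simply the difference between the final-epoch losses from the table above:

```python
# Epoch 5 values from the training table above
train_loss, val_loss = 0.487, 0.659
gap = round(val_loss - train_loss, 2)
print(gap)  # → 0.17
```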
Load with PEFT on top of the base model:

```python
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import PeftModel
import torch

BASE = "microsoft/Phi-3-mini-4k-instruct"
ADAPTER = "swarajsonawane4/scifact-phi3-lora"

tokenizer = AutoTokenizer.from_pretrained(BASE)
base_model = AutoModelForCausalLM.from_pretrained(
    BASE, torch_dtype=torch.bfloat16, device_map="auto"
)
model = PeftModel.from_pretrained(base_model, ADAPTER)  # attach the LoRA adapter
model.eval()
```
The model is prompted to answer in a fixed output format:

```
Verdict: SUPPORTED | REFUTED | INSUFFICIENT
Brief justification citing [D0], [D1], etc.
```
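Since the model emits free text in this verdict format, downstream code typically parses it. A minimal sketch; the regex and helper name are assumptions, not part of the released pipeline:

```python
import re

def parse_verdict(text: str) -> tuple[str, list[str]]:
    """Extract the verdict label and cited document IDs from model output.

    Hypothetical parser for the 'Verdict: ...' format shown above;
    falls back to INSUFFICIENT when no verdict line is found.
    """
    m = re.search(r"Verdict:\s*(SUPPORTED|REFUTED|INSUFFICIENT)", text)
    verdict = m.group(1) if m else "INSUFFICIENT"
    citations = re.findall(r"\[D\d+\]", text)
    return verdict, citations

out = "Verdict: SUPPORTED\nThe claim is backed by [D0] and [D2]."
print(parse_verdict(out))  # → ('SUPPORTED', ['[D0]', '[D2]'])
```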
Swaraj Sudhakar Sonawane - MSc. Digital Engineering, Bauhaus University Weimar
Base model: microsoft/Phi-3-mini-4k-instruct