
DeBERTa-v3 Fine-tuned on MedNLI for Medical Fact Verification

This model is a fine-tuned version of MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli on the MedNLI dataset for medical natural language inference.

Model Description

  • Base Model: MoritzLaurer/DeBERTa-v3-base-mnli-fever-anli
  • Task: Natural Language Inference (3-class: entailment, neutral, contradiction)
  • Domain: Medical/Clinical text
  • Fine-tuned on: MedNLI dataset

Performance

| Metric | Baseline | Fine-tuned | Improvement |
|---|---|---|---|
| MedNLI Accuracy | 72.4% | 84.7% | +12.3 pp |
| Contradiction Recall | 79.5% | 92.6% | +13.1 pp |

Per-Class Metrics (Test Set)

| Class | Precision | Recall | F1-Score |
|---|---|---|---|
| Entailment | 0.833 | 0.819 | 0.826 |
| Neutral | 0.802 | 0.795 | 0.799 |
| Contradiction | 0.903 | 0.926 | 0.915 |
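As a quick sanity check, the macro-averaged F1 implied by the per-class table can be computed directly (values copied from the table above):

```python
# Per-class test-set metrics, copied from the table above.
per_class = {
    "entailment":    {"precision": 0.833, "recall": 0.819, "f1": 0.826},
    "neutral":       {"precision": 0.802, "recall": 0.795, "f1": 0.799},
    "contradiction": {"precision": 0.903, "recall": 0.926, "f1": 0.915},
}

# Macro average: unweighted mean over the three classes.
macro_f1 = sum(m["f1"] for m in per_class.values()) / len(per_class)
print(f"Macro F1: {macro_f1:.3f}")  # 0.847, in line with the reported 84.7% accuracy
```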

Training Details

Training hyperparameters (see config.json → custom_training_info for full details):

  • Learning Rate: 3e-5
  • Weight Decay: 0.01
  • Warmup Ratio: 0.1
  • Epochs: 5
  • Batch Size: 16 (effective: 32)
  • K-Fold CV: 5 folds (best fold: 2)

Usage

from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

# Load model and tokenizer
tokenizer = AutoTokenizer.from_pretrained("shidey/deberta-v3-mednli-nli")
model = AutoModelForSequenceClassification.from_pretrained("shidey/deberta-v3-mednli-nli")
model.eval()

# Example: check whether a claim is supported by a piece of evidence
premise = "The patient was diagnosed with type 2 diabetes."
hypothesis = "The patient has a metabolic disorder."

inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True, max_length=256)
with torch.no_grad():
    outputs = model(**inputs)
probs = torch.softmax(outputs.logits, dim=-1)

labels = ["entailment", "neutral", "contradiction"]
prediction = labels[probs.argmax().item()]
confidence = probs.max().item()

print(f"Prediction: {prediction} (confidence: {confidence:.2%})")
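The final logits-to-label step in the snippet above can be illustrated without loading the model, using made-up logits in place of a real forward pass:

```python
import math

labels = ["entailment", "neutral", "contradiction"]
logits = [2.0, 0.1, -1.0]  # hypothetical model output, not real scores

# Softmax: exponentiate, then normalize to a probability distribution.
exps = [math.exp(x) for x in logits]
total = sum(exps)
probs = [e / total for e in exps]

# The predicted label is the class with the highest probability.
prediction = labels[probs.index(max(probs))]
confidence = max(probs)
print(prediction, f"{confidence:.2%}")
```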

Intended Use

This model is designed for:

  • Medical fact verification systems
  • Clinical NLI tasks
  • Healthcare RAG pipelines requiring claim validation
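In a RAG pipeline, one common pattern is to check each generated claim against every retrieved passage and flag the claim if any passage contradicts it. A minimal, model-agnostic sketch follows; the function name, threshold, and stub scorer are illustrative, not part of this model's API:

```python
def validate_claim(claim, passages, nli_scorer, contra_threshold=0.5):
    """Flag a claim as contradicted if any passage contradicts it,
    supported if at least one passage entails it, else unverified.

    nli_scorer(premise, hypothesis) must return a dict of
    {"entailment": p, "neutral": p, "contradiction": p}.
    """
    entailed = False
    for passage in passages:
        scores = nli_scorer(passage, claim)
        if scores["contradiction"] >= contra_threshold:
            return "contradicted"
        if scores["entailment"] >= max(scores["neutral"], scores["contradiction"]):
            entailed = True
    return "supported" if entailed else "unverified"


# Stub scorer standing in for the fine-tuned model, for demonstration only.
def stub_scorer(premise, hypothesis):
    if "diabetes" in premise:
        return {"entailment": 0.9, "neutral": 0.08, "contradiction": 0.02}
    return {"entailment": 0.1, "neutral": 0.85, "contradiction": 0.05}


result = validate_claim(
    "The patient has a metabolic disorder.",
    ["The patient was diagnosed with type 2 diabetes.", "Vitals were stable."],
    stub_scorer,
)
print(result)  # "supported"
```

In production the stub would be replaced by a function that runs the premise/hypothesis pair through the model as shown in the Usage section.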

Limitations

  • Trained on English clinical text only
  • Performance may vary on non-clinical medical text
  • Should not be used as sole source for medical decisions

Citation

If you use this model, please cite:

@misc{deberta-v3-mednli,
  title={DeBERTa-v3 Fine-tuned on MedNLI},
  author={Your Name},
  year={2026},
  publisher={Hugging Face},
  url={https://huggingface.co/shidey/deberta-v3-mednli-nli}
}