Update README.md
README.md
CHANGED
````diff
@@ -18,7 +18,7 @@ metrics:
   - type: loss
     value: 0.2673
 model-index:
-- name: PubMedBERT
+- name: PubMedBERT BioNLI LoRA
   results:
   - task:
       type: natural-language-inference
@@ -35,14 +35,14 @@ model-index:
       value: 0.2673
 ---
 
-# PubMedBERT
+# PubMedBERT BioNLI LoRA
 
-[](https://huggingface.co/Bam3752/
+[](https://huggingface.co/Bam3752/PubMedBERT-BioNLI-LoRA)
 [](https://huggingface.co/Bam3752)
 [](https://opensource.org/licenses/MIT)
 
-
-It
+**PubMedBERT BioNLI LoRA** is a biomedical **Natural Language Inference (NLI)** model fine-tuned with **LoRA adapters**.
+It classifies **entailment, contradiction, and neutrality** between biomedical text pairs, optimized for **chain-of-thought reasoning validation**.
 
 ---
 
@@ -50,8 +50,8 @@ It is designed to evaluate **entailment, contradiction, and neutrality** between
 
 - **Base model:** [pritamdeka/PubMedBERT-MNLI-MedNLI](https://huggingface.co/pritamdeka/PubMedBERT-MNLI-MedNLI)
 - **Fine-tuning datasets:** BioASQ + MedNLI
-- **Objective:** 3-class NLI (entailment
-- **Method:** LoRA
+- **Objective:** 3-class NLI (entailment / neutral / contradiction)
+- **Method:** LoRA parameter-efficient fine-tuning
 - **Hardware:** Apple MPS (Metal backend)
 - **Hyperparameters:**
   - Epochs: 4
@@ -72,19 +72,19 @@ It is designed to evaluate **entailment, contradiction, and neutrality** between
 | Macro F1 | **0.9036** |
 | Eval Loss | **0.2673** |
 
-Calibrated with isotonic regression (`calibration/isotonic.pkl`) for
+👉 Calibrated with isotonic regression (`calibration/isotonic.pkl`) for reliable probabilities.
 
 ---
 
 ## 🚀 Usage
 
-###
+### Transformers
 
 ```python
 from transformers import AutoModelForSequenceClassification, AutoTokenizer
 
-model = AutoModelForSequenceClassification.from_pretrained("Bam3752/
-tokenizer = AutoTokenizer.from_pretrained("Bam3752/
+model = AutoModelForSequenceClassification.from_pretrained("Bam3752/PubMedBERT-BioNLI-LoRA")
+tokenizer = AutoTokenizer.from_pretrained("Bam3752/PubMedBERT-BioNLI-LoRA")
 
 premise = "Aspirin reduces the risk of myocardial infarction."
 hypothesis = "Aspirin prevents heart attacks."
````