---
license: mit
tags:
  - roberta
  - text-classification
  - healthcare
  - biomedical
  - adverse-drug-reaction
  - nlp
datasets:
  - custom
language:
  - en
model-index:
  - name: RoBERTa ADR Severity Classifier
    results:
      - task:
          name: Text Classification
          type: text-classification
        metrics:
          - type: accuracy
            value: 0.891
          - type: f1
            value: 0.891
          - type: auc
            value: 0.956
---

# 🤖 RoBERTa ADR Severity Classifier

This is a fine-tuned [RoBERTa](https://huggingface.co/roberta-base) model that detects **Adverse Drug Reactions (ADRs)** and classifies them as either **severe** (`1`) or **not severe** (`0`). It was trained on annotated ADR text data and is part of a broader NLP pipeline that extracts symptoms, diseases, and medications from biomedical reports.

---

## 🧠 Model Details

- **Base Model:** `roberta-base`
- **Task:** Binary Text Classification (`Severe` vs `Not Severe`)
- **Training Data:** 3,000+ annotated ADR descriptions
- **Framework:** Hugging Face Transformers + PyTorch

---

## 🔬 Intended Use

This model is intended for **research and educational purposes** in biomedical NLP. It can be used to:

- Flag potentially dangerous side effects in user-reported ADRs
- Prioritize ADR cases based on severity
- Serve as a backend for medical QA systems or healthcare apps
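
As a sketch of the prioritization use case: given per-report logits from the classifier, reports can be triaged by the softmax probability of the severe class. The logits below are hypothetical, hard-coded values standing in for real model outputs, so the snippet runs without downloading the model:

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of logits."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical logits per report (index 1 = "severe"), as would come
# from model(**inputs).logits in the usage example below.
reports = {
    "Mild rash after amoxicillin.": [2.1, -1.3],
    "Anaphylaxis requiring epinephrine.": [-2.0, 3.4],
    "Dizziness for an hour after dosing.": [0.4, 0.1],
}

# Triage order: sort reports by predicted probability of the severe class.
ranked = sorted(reports, key=lambda r: softmax(reports[r])[1], reverse=True)
for r in ranked:
    print(f"{softmax(reports[r])[1]:.3f}  {r}")
```

In a real pipeline the logits dictionary would be replaced by batched model outputs; the ranking logic stays the same.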

---

## 📈 Performance

Evaluated on a balanced test set of 1,623 samples:

| Metric     | Class 0 (Not Severe) | Class 1 (Severe) |
|------------|----------------------|------------------|
| Precision  | 0.904                | 0.880            |
| Recall     | 0.865                | 0.915            |
| F1-Score   | 0.884                | 0.897            |

Overall accuracy: **0.891** · AUC: **0.956**
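
For reference, the per-class precision, recall, and F1 above follow the standard definitions. A minimal pure-Python sketch on toy labels (not the actual test set) shows how each cell is computed:

```python
def per_class_metrics(y_true, y_pred, cls):
    """Precision, recall, and F1 for a single class label."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p == cls)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != cls and p == cls)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == cls and p != cls)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

# Toy predictions: 0 = not severe, 1 = severe
y_true = [0, 0, 1, 1, 1, 0]
y_pred = [0, 1, 1, 1, 0, 0]
p, r, f1 = per_class_metrics(y_true, y_pred, 1)
print(f"precision={p:.3f} recall={r:.3f} f1={f1:.3f}")
```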

---

## 🚀 Example Usage

```python
from transformers import AutoTokenizer, AutoModelForSequenceClassification
import torch

model = AutoModelForSequenceClassification.from_pretrained("calerio-uva/roberta-adr-model")
tokenizer = AutoTokenizer.from_pretrained("calerio-uva/roberta-adr-model")
model.eval()  # disable dropout for inference

text = "Severe migraine with vision loss and vomiting after taking ibuprofen."
inputs = tokenizer(text, return_tensors="pt", truncation=True, padding=True, max_length=512)

# Class 0 = not severe, class 1 = severe
with torch.no_grad():
    logits = model(**inputs).logits
    probs = torch.softmax(logits, dim=1)

print(f"Not Severe: {probs[0][0]:.3f}, Severe: {probs[0][1]:.3f}")
```