---
language:
- en
license: other
pipeline_tag: text-generation
library_name: transformers
tags:
- clinical-nlp
- medical-coding
- icd10
- icd-10-cm
- reasoning
- reinforcement-learning
- grpo
- healthcare
base_model:
- Qwen/Qwen2.5-7B-Instruct
---

# DeepICD-R1-7B

## Model Summary

**DeepICD-R1-7B** is a clinical reasoning language model for **ICD-10-CM diagnosis outcome prediction from admission notes**.  
It is derived from **Qwen2.5-7B-Instruct** and trained using the **DeepICD-R1 framework**, which combines structured reasoning traces with reinforcement learning and hierarchical reward signals.

The model is designed to predict a **single ICD-10-CM diagnosis code** from clinical text while producing an interpretable reasoning trace explaining the decision.

The training methodology follows the approach described in the paper:

**DeepICD-R1: Medical Reasoning through Hierarchical Rewards and Unsupervised Distillation**

This work frames clinical diagnosis prediction as a **reasoning task optimized through reinforcement learning**.

---

## Model Details

- **Model name:** DeepICD-R1-7B  
- **Organization:** DATEXIS  
- **Base model:** Qwen2.5-7B-Instruct  
- **Parameters:** ~7B  
- **Task:** Single ICD-10-CM diagnosis prediction from admission notes  
- **Training paradigm:** Supervised reasoning + reinforcement learning  
- **Framework:** VERL RL trainer  
- **Domain:** Clinical NLP / healthcare reasoning  

Qwen2.5-7B-Instruct is a **7-billion-parameter instruction-tuned language model** suited to instruction following and long-form generation.

---

## Intended Use

This model is intended for **research purposes**, including:

- clinical reasoning research
- ICD-10-CM coding prediction
- reinforcement learning for language models
- reasoning trace generation
- structured prediction from clinical text

### Out-of-Scope Use

This model **must not be used for**:

- medical diagnosis
- clinical decision support
- patient triage
- automated medical coding without expert supervision
- billing or compliance workflows

---

## Training Methodology

The **DeepICD-R1 framework** treats diagnosis prediction as a reasoning problem.

Training combines:

### 1. Supervised reasoning traces
A dataset of reasoning chains explaining diagnosis predictions.

### 2. Reinforcement learning optimization

Training uses **Group Relative Policy Optimization (GRPO)** to improve reasoning and prediction accuracy.

### 3. Hierarchical reward signals

Rewards are aligned with the hierarchical structure of ICD codes.

The reward function combines:

- **format reward** – correct reasoning + diagnosis structure
- **outcome reward** – correct diagnosis prediction
- **hierarchical reward** – partial credit for correct ICD prefixes

This design encourages models to produce both **accurate diagnoses and structured reasoning**.
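The hierarchical component can be sketched as a prefix-match score. This is an illustrative implementation only: the paper's exact scoring and weighting are not reproduced here, and `hierarchical_reward` is a hypothetical helper name.

```python
def hierarchical_reward(pred: str, gold: str) -> float:
    """Partial credit for matching the leading characters of an ICD-10-CM code.

    Compares the predicted and gold codes position by position from the left
    and returns the fraction of the gold code matched as a prefix, so a
    prediction in the right category but with the wrong final digits still
    earns a nonzero reward.
    """
    if not gold:
        return 0.0
    matched = 0
    for p, g in zip(pred, gold):
        if p != g:
            break
        matched += 1
    return matched / len(gold)
```

For example, predicting `M5126` against gold `M5116` shares the three-character prefix `M51` and would score 0.6 under this sketch, rather than the zero an exact-match reward would give.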

---

## Training Data

The training task uses **clinical admission notes paired with ICD-10-CM diagnosis codes**, derived from de-identified electronic health record datasets such as **MIMIC-IV**.

Task formulation:

**Input**

Clinical admission note describing patient presentation.

**Output**

Structured reasoning trace and predicted ICD-10-CM code.

---

## Output Format

The model is trained to produce structured outputs separating reasoning from the final diagnosis.

### Example

```text
<think>
The patient presents with ...
Symptoms and clinical history suggest ...
...
</think>

<diagnosis>
M5116
</diagnosis>
```
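
Downstream code typically needs to split these two fields apart. A minimal parser for this output format (illustrative; `parse_output` is a hypothetical helper, and the tag names follow the example above):

```python
import re

def parse_output(text: str):
    """Extract the reasoning trace and the predicted code from a generation.

    Returns (reasoning, diagnosis); either element is None when the
    corresponding tag pair is missing from the text.
    """
    think = re.search(r"<think>(.*?)</think>", text, re.DOTALL)
    diag = re.search(r"<diagnosis>(.*?)</diagnosis>", text, re.DOTALL)
    reasoning = think.group(1).strip() if think else None
    diagnosis = diag.group(1).strip() if diag else None
    return reasoning, diagnosis
```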

## Training Configuration

The model was trained using the **VERL reinforcement learning trainer** with **Group Relative Policy Optimization (GRPO)**, following the DeepICD-R1 training framework.

### Core Training Parameters

| Parameter | Value |
|-----------|------|
| Algorithm | GRPO |
| Training framework | VERL (`verl.trainer.main_ppo`) |
| Base model | Qwen2.5-7B-Instruct |
| Training batch size | 64 |
| PPO mini batch size | 64 |
| PPO micro batch size per GPU | 16 |
| Learning rate | 1e-6 |
| LR warmup steps | 80 |
| Total epochs | 1 |
| Max prompt length | 2048 tokens |
| Max response length | 1024 tokens |

### Rollout / Generation Settings

| Parameter | Value |
|-----------|------|
| Rollout engine | vLLM |
| Samples per prompt (`n`) | 8 |
| Temperature | 0.9 |
| Top-k | disabled |
| dtype | bfloat16 |
| Tensor parallel size | 1 |
| GPU memory utilization | 0.4 |

### Optimization Details

| Parameter | Value |
|-----------|------|
| Entropy coefficient | 0.001 |
| KL controller coefficient | 0.001 |
| KL loss | disabled |
| Gradient checkpointing | enabled |
| Torch compile | enabled |
| FSDP param offload | disabled |
| FSDP optimizer offload | disabled |

### Hardware

| Component | Value |
|-----------|------|
| GPUs | 4 |
| Nodes | 1 |
| Precision | bfloat16 |
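
The tables above map onto VERL's Hydra-style override syntax roughly as follows. This is an illustrative sketch, not the actual launch script: the override key names are assumptions based on VERL's documented config layout and may differ across VERL versions, and the data/reward paths are omitted.

```shell
python3 -m verl.trainer.main_ppo \
  algorithm.adv_estimator=grpo \
  data.train_batch_size=64 \
  data.max_prompt_length=2048 \
  data.max_response_length=1024 \
  actor_rollout_ref.model.path=Qwen/Qwen2.5-7B-Instruct \
  actor_rollout_ref.actor.optim.lr=1e-6 \
  actor_rollout_ref.actor.ppo_mini_batch_size=64 \
  actor_rollout_ref.rollout.name=vllm \
  actor_rollout_ref.rollout.n=8 \
  actor_rollout_ref.rollout.temperature=0.9 \
  actor_rollout_ref.rollout.gpu_memory_utilization=0.4 \
  trainer.n_gpus_per_node=4 \
  trainer.nnodes=1
```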

### Reward Function

Training uses a **custom batched reward function** combining several reward signals:

- **Outcome reward** – correct ICD-10 prediction
- **Format reward** – correct `<think>` and `<diagnosis>` structure
- **Hierarchical reward** – partial credit for ICD prefix matches
- **Reasoning reward** – encourages meaningful reasoning traces
- **LLM-based reward** – optional external judge scoring

These rewards align the model toward producing **both accurate diagnoses and structured reasoning traces**.

The reasoning trace provides transparency into how the diagnosis was derived from the clinical note.

---

## Evaluation

Evaluation follows the methodology described in the **DeepICD-R1 paper**.

Performance is measured using **macro-averaged F1 scores** at multiple levels of the ICD hierarchy.

| Level | Description |
|------|-------------|
| Chapter | Broad ICD category |
| Category | First three digits |
| Full code | Complete ICD-10 code |

Hierarchical evaluation allows partial credit when the model predicts the correct high-level diagnostic category even if the full code is incorrect.
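
Under the level definitions above, the three views can be derived from each full code by truncation. The sketch below is illustrative: it approximates the chapter by the first character, whereas official ICD-10-CM chapters actually span letter-and-number ranges (e.g. A00–B99), and the function names are hypothetical.

```python
from collections import defaultdict

def macro_f1(pairs):
    """Macro-averaged F1 over (gold, predicted) single-label pairs.

    Accumulates per-label true positives, false positives, and false
    negatives, then averages F1 over every label seen in gold or predictions.
    """
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for gold, pred in pairs:
        if gold == pred:
            tp[gold] += 1
        else:
            fp[pred] += 1
            fn[gold] += 1
    labels = set(tp) | set(fp) | set(fn)
    f1s = []
    for label in labels:
        p = tp[label] / (tp[label] + fp[label]) if tp[label] + fp[label] else 0.0
        r = tp[label] / (tp[label] + fn[label]) if tp[label] + fn[label] else 0.0
        f1s.append(2 * p * r / (p + r) if p + r else 0.0)
    return sum(f1s) / len(f1s) if f1s else 0.0

def evaluate_levels(pairs):
    """Macro-F1 at three truncation levels of the ICD-10-CM code."""
    return {
        "chapter":  macro_f1([(g[:1], p[:1]) for g, p in pairs]),
        "category": macro_f1([(g[:3], p[:3]) for g, p in pairs]),
        "full":     macro_f1(list(pairs)),
    }
```

A prediction of `M5126` for gold `M5116` would then count as correct at the chapter and category levels but wrong at the full-code level, which is exactly the partial credit the hierarchical evaluation is meant to capture.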

---

## Limitations

Models following the **DeepICD-R1 framework** share several limitations.

### Dataset limitations

- Training data consists primarily of **English clinical notes**
- Distribution reflects **hospital-specific patient populations**
- ICD labels are **highly imbalanced**, affecting rare diagnoses

### Model limitations

- Reasoning traces may appear convincing while being incorrect
- Predictions may fail for rare or long-tail diagnoses
- Models may demonstrate **premature diagnostic closure**
- Reinforcement learning rewards are only proxies for expert feedback

---

## Ethical Considerations

This model is trained on **de-identified clinical data** and intended strictly for research.

### Potential risks

- propagation of dataset biases  
- overconfidence in generated reasoning  
- misuse in clinical decision making  

### Appropriate safeguards

- expert oversight  
- dataset bias evaluation  
- fairness audits  
- controlled deployment environments  

---

## Hardware and Training Setup

Typical training configuration for models in this family includes:

- **GPUs:** multi-GPU training (4–8 GPUs)  
- **Precision:** bfloat16  
- **Rollout engine:** vLLM  
- **Training framework:** VERL PPO / GRPO trainer  
- **Sampling:** multiple rollouts per prompt  

---

## Usage

### Transformers Example

```python
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "DATEXIS/DeepICD-R1-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    device_map="auto",
    torch_dtype="auto"
)

prompt = """
You are a clinical reasoning model.

Given the following admission note,
produce reasoning in <think> tags
and a final ICD-10 diagnosis in <diagnosis> tags.

[ADMISSION NOTE]
"""

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(
    **inputs,
    max_new_tokens=512
)

print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```

## Recommended Inference Practices

- Use prompts consistent with the training format.
- Validate predicted ICD-10 codes against official code formats.
- Always review predictions with medical experts.
- Avoid exposing reasoning traces in safety-critical settings without verification.
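
A basic structural check on a predicted code can be done with a regular expression. This is a shape check only, under the assumption that codes are written without the decimal point (as in `M5116`): it verifies the letter/digit/alphanumeric pattern of 3–7 characters, not whether the code exists in the official ICD-10-CM code set, which requires a current code table.

```python
import re

# Shape only: one letter, one digit, one alphanumeric, then up to four more
# alphanumerics (3-7 characters, no dot). Does NOT confirm the code exists
# in the official ICD-10-CM code set.
ICD10CM_SHAPE = re.compile(r"^[A-Z][0-9][0-9A-Z][0-9A-Z]{0,4}$")

def looks_like_icd10cm(code: str) -> bool:
    """Return True if the string has the surface shape of an ICD-10-CM code."""
    return bool(ICD10CM_SHAPE.fullmatch(code.strip().upper()))
```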

---

## Citation

If you use this model, please cite:

```bibtex
@inproceedings{roehr2026deepicdr1,
  title={DeepICD-R1: Medical Reasoning through Hierarchical Rewards and Unsupervised Distillation},
  author={R{\"o}hr, Tom and Steffek, Thomas and Teucher, Roman and Bressem, Keno and others},
  booktitle={Proceedings of LREC-COLING},
  year={2026}
}
```