---
tags:
- medical
license: other
license_name: research-only-rail-m
model-index:
- name: Curia-2
  results: []
extra_gated_prompt: >-
  Please confirm that you have read and agree to the following disclaimer.

  The model in this repository is provided for
  research use only (Research-only RAIL-M license).
  The model(s) and/or software are not
  intended for use in clinical decision-making or for any other clinical use,
  and performance for clinical use has not been established.

---

<div align="center">
  <img src="https://cdn-uploads.huggingface.co/production/uploads/62cdea59a9be5c195561c2b8/lgz6FefJZr9nMkqQ4_Y5T.png" width="40%" alt="Raidium" />
</div>
<hr>


<p align="center">
  <a href="https://www.raidium.eu/blog/curia-2-foundation-model/"><b>🌐 Blog Post</b></a> |
  <a href="https://huggingface.co/raidium/curia"><b> 🤗 Original Curia</b></a> |
  <a href="https://arxiv.org/abs/2509.06830"><b>📄 Curia Paper Link</b></a>
</p>
<h2>
<p align="center">
  <h1 align="center">Curia-2: Scaling Self-Supervised Learning for Radiology Foundation Models</h1>
</p>
</h2>


We introduce Curia-2, a follow-up to Curia that significantly improves the original pre-training strategy and representation quality to better capture the specificities of radiological data. Curia-2 excels on vision-focused tasks and fares competitively with vision-language models on clinically complex tasks such as finding detection.

Research paper coming soon.



## Loading the model

To load the model, use the `AutoModel` class from the Hugging Face `transformers` library.

```python
from transformers import AutoModel
model = AutoModel.from_pretrained("raidium/curia-2")
```

You can also load the image pre-processor:

```python
from transformers import AutoImageProcessor
processor = AutoImageProcessor.from_pretrained("raidium/curia-2", trust_remote_code=True)
```


Then to forward an image:


```python
import numpy as np

img = 2048 * np.random.rand(256, 256) - 1024  # single axial slice, in PL orientation, Hounsfield-like range
model_input = processor(img)
features = model(**model_input)
```

The input image must follow this format:
```
input: numpy array of shape (H, W)
  Images need to be in:
  - PL orientation for axial slices
  - IL orientation for coronal slices
  - IP orientation for sagittal slices
  For CT: no windowing, just Hounsfield units or a normalized image.
  For MRI: likewise, no windowing, just raw values or a normalized image.
```
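Since the model expects raw Hounsfield units for CT (no windowing), a slice read from DICOM usually needs the standard rescale transform first (HU = stored pixel value × RescaleSlope + RescaleIntercept). Below is a minimal sketch of that conversion; `to_hounsfield` is a hypothetical helper, not part of this repository.

```python
import numpy as np

def to_hounsfield(pixel_array: np.ndarray, slope: float, intercept: float) -> np.ndarray:
    """Convert a stored DICOM pixel array to Hounsfield units.

    Hypothetical helper applying the standard DICOM rescale transform:
    HU = stored_value * RescaleSlope + RescaleIntercept.
    No windowing is applied, as expected by the model for CT inputs.
    """
    return pixel_array.astype(np.float32) * slope + intercept

# Typical CT DICOMs store 12-bit unsigned data with slope=1 and intercept=-1024.
stored = np.array([[0, 1024], [2048, 4095]], dtype=np.uint16)
hu = to_hounsfield(stored, slope=1.0, intercept=-1024.0)
# hu now spans -1024 (air) to 3071 (dense bone) and can be passed to the processor.
```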

## License

The model is released under the RESEARCH-ONLY RAIL-M license.
https://huggingface.co/raidium/curia/blob/main/LICENSE

## Cite our paper

```
@article{dancette2025curia,
  title={Curia: A Multi-Modal Foundation Model for Radiology},
  author={Dancette, Corentin and Khlaut, Julien and Saporta, Antoine and Philippe, Helene and Ferreres, Elodie and Callard, Baptiste and Danielou, Th{\'e}o and Alberge, L{\'e}o and Machado, L{\'e}o and Tordjman, Daniel and others},
  journal={arXiv preprint arXiv:2509.06830},
  year={2025}
}
```