---
license: cc-by-nc-4.0
---

<div align="center">
    
# RadZero: Similarity-Based Cross-Attention for Explainable Vision-Language Alignment in Chest X-ray with Zero-Shot Multi-Task Capability [NeurIPS 2025]



<p align="center">
📝 <a href="https://arxiv.org/abs/2504.07416" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/Deepnoid/RadZero" target="_blank">Model</a> • 🧩 <a href="https://github.com/deepnoid-ai/RadZero" target="_blank">Codes</a>
</p>

</div>

## Introduction 

<p align="center">
  <img src="misc/introduction.png" alt="Key Differences vs. Existing Methods" width="80%" />
</p>
<p align="center">
  <em>Figure 1. Comparison of attention maps and the proposed VL similarity map for visualizing VL alignment. (a) While traditional attention maps inevitably exhibit high values at certain points due to softmax activation, the proposed VL similarity maps yield low values for unrelated image-text pairs. (b) Their fixed scale, originating from cosine similarity, enables open-vocabulary semantic segmentation through simple thresholding.</em>
</p>

<p align="center">
  <img src="misc/method.png" alt="RadZero Method Overview" width="80%" />

</p>
<p align="center">
  <em>Figure 2. Overview of the RadZero framework. Finding-sentences are extracted from reports and aligned with local image patch features through similarity-based cross-attention (VL-CABS), enabling zero-shot classification, grounding, and segmentation.</em>
</p>


## Abstract

> Recent advancements in multimodal models have significantly improved vision-language (VL) alignment in radiology. However, existing approaches struggle to effectively utilize complex radiology reports for learning and offer limited interpretability through attention probability visualizations. To address these challenges, we introduce RadZero, a novel framework for VL alignment in chest X-ray with zero-shot multi-task capability. A key component of our approach is VL-CABS (Vision-Language Cross-Attention Based on Similarity), which aligns text embeddings with local image features for interpretable, fine-grained VL reasoning. RadZero leverages large language models to extract concise semantic sentences from radiology reports and employs multi-positive contrastive training to effectively capture relationships between images and multiple relevant textual descriptions. It uses a pre-trained vision encoder with additional trainable Transformer layers, allowing efficient high-resolution image processing. By computing similarity between text embeddings and local image patch features, VL-CABS enables zero-shot inference with similarity probability for classification, and pixel-level VL similarity maps for grounding and segmentation. Experimental results on public chest radiograph benchmarks show that RadZero outperforms state-of-the-art methods in zero-shot classification, grounding, and segmentation. Furthermore, VL similarity map analysis highlights the potential of VL-CABS for improving explainability in VL alignment. Additionally, qualitative evaluation demonstrates RadZero's capability for open-vocabulary semantic segmentation, further validating its effectiveness in medical imaging. 
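The abstract's core mechanism can be sketched in a few lines: VL-CABS scores each image patch by cosine similarity against a text embedding, producing a bounded similarity map that is pooled into a classification score. The sketch below is an illustrative reimplementation under assumed shapes; `vl_similarity`, the 768-dim embeddings, and the max-pooling aggregation are assumptions, not the released model internals.

```python
# Illustrative sketch of the VL-CABS idea: cosine similarity between one
# text embedding and per-patch image features yields a similarity map in
# [-1, 1]; pooling the map gives a score for zero-shot classification.
import torch
import torch.nn.functional as F


def vl_similarity(text_emb: torch.Tensor, patch_feats: torch.Tensor):
    """text_emb: (d,); patch_feats: (h, w, d). Returns (h, w) map and a pooled score."""
    text_emb = F.normalize(text_emb, dim=-1)        # unit-norm text embedding
    patch_feats = F.normalize(patch_feats, dim=-1)  # unit-norm patch features
    sim_map = torch.einsum("hwd,d->hw", patch_feats, text_emb)  # cosine similarities
    score = sim_map.max()  # simple pooling; the paper's aggregation may differ
    return sim_map, score


sim_map, score = vl_similarity(torch.randn(768), torch.randn(16, 16, 768))
print(sim_map.shape, float(score))
```

Because each value is a cosine similarity on a fixed scale, the map can be read directly as per-patch relevance rather than a softmax-normalized attention distribution.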


## RadZero Model Inference

### Install dependencies 

```shell
pip install -r requirements.txt
```

### Model Inference Code

RadZero performs **zero-shot classification, grounding, and segmentation for chest X-rays**
using the **RadZero model** hosted on 🤗 <a href="https://huggingface.co/Deepnoid/RadZero" target="_blank">Hugging Face</a>.


```python
# Deepnoid/RadZero/inference.py
import warnings

import torch
from transformers import AutoImageProcessor, AutoModel, AutoTokenizer

from utils import model_inference

# Suppress specific warnings for cleaner logs
warnings.filterwarnings("ignore", category=UserWarning)


def load_model(device, dtype):
    tokenizer = AutoTokenizer.from_pretrained("Deepnoid/RadZero")
    image_processor = AutoImageProcessor.from_pretrained("Deepnoid/RadZero")

    model = AutoModel.from_pretrained(
        "Deepnoid/RadZero",
        trust_remote_code=True,
        torch_dtype=dtype,
        device_map=device,
    )

    models = {
        "tokenizer": tokenizer,
        "image_processor": image_processor,
        "model": model,
    }
    return models


if __name__ == "__main__":
    # Set up device and dtype
    device = torch.device("cuda")
    dtype = torch.float32

    # load models
    models = load_model(device, dtype)

    # load image
    image_path = "cxr_image.jpg"

    # inference
    similarity_prob, similarity_map = model_inference(
        image_path, "There is fibrosis", **models
    )

    print(similarity_prob)
    print(similarity_map.min())
    print(similarity_map.max())
    print(similarity_map.shape)
```
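The returned `similarity_map` can be turned into an open-vocabulary segmentation mask by the simple thresholding that Figure 1 describes. A minimal sketch; the threshold value, the NumPy array layout, and the `similarity_to_mask` helper are illustrative assumptions, not values from the paper:

```python
import numpy as np


def similarity_to_mask(similarity_map: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Binarize a VL similarity map into a segmentation mask (1 = relevant region)."""
    return (similarity_map >= threshold).astype(np.uint8)


mask = similarity_to_mask(np.array([[0.2, 0.7], [0.6, 0.1]]), threshold=0.5)
# mask == [[0, 1], [1, 0]]
```

This works only because the map's scale is fixed by cosine similarity; a softmax attention map, whose values depend on the normalization over all patches, would not support a single global threshold in the same way.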




## References

- **Pretrained models**
  - **Vision encoder**: [XrayDINOv2](https://huggingface.co/StanfordAIMI/dinov2-base-xray-224)
  - **Text encoder**: [all-mpnet-base-v2](https://huggingface.co/sentence-transformers/all-mpnet-base-v2)


## Acknowledgments
This work was supported by the Technology Innovation Program (RS-2025-02221011, Development
of Medical-Specialized Multimodal Hyperscale Generative AI Technology for Global Integration)
funded by the Ministry of Trade Industry & Energy (MOTIE, South Korea).

## License
[![License: CC BY-NC 4.0](https://img.shields.io/badge/License-CC%20BY--NC%204.0-lightgrey.svg)](https://creativecommons.org/licenses/by-nc/4.0/)