---
license: cc-by-nc-4.0
language:
- en
base_model:
- Qwen/Qwen2.5-VL-3B-Instruct
pipeline_tag: image-text-to-text
tags:
- Chest-Xray
- CXR
- Reasoning
- VQA
- Report
- Grounding
---
<div align="center">
<img src="https://github.com/YBZh/CheXOne/raw/main/asset/chexone_logo1.png" width="600" alt="CheXOne Logo">
<p align="center">
📝 <a href="https://huggingface.co/StanfordAIMI/CheXOne" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/StanfordAIMI/CheXOne" target="_blank">Hugging Face</a> • 🧩 <a href="https://github.com/YBZh/CheXOne" target="_blank">Github</a> • 🪄 <a href="https://github.com/YBZh/CheXOne" target="_blank">Project</a>
</p>
</div>
<!-- <div align="center">
</div> -->
## ✨ Key Features:
* **Reasoning Capability**: Produces explicit reasoning traces alongside final answers.
* **Multi-Task Support**: Supports Visual Question Answering (VQA), Report Generation, and Visual Grounding.
* **Resident-Level Report Drafting**: Matches or outperforms resident-drafted reports in 50% of cases.
* **Two Inference Modes**:
  * **Reasoning Mode**: Higher performance with explicit reasoning traces.
  * **Instruct Mode**: Faster inference without reasoning traces.
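The two modes are selected by the prompt alone, with no change to the model or config: Reasoning Mode appends the step-by-step instruction used in the usage example below, while Instruct Mode sends the bare task. A minimal sketch (the `build_prompt` helper is hypothetical; the suffix string is the one from the example):

```python
# The two inference modes differ only in the user prompt.
BASE_PROMPT = "Write an example findings section for the CXR."
REASONING_SUFFIX = (
    " Please reason step by step, and put your final answer within \\boxed{{}}."
)

def build_prompt(task: str, reasoning: bool = True) -> str:
    """Return the user prompt for Reasoning or Instruct mode."""
    return task + REASONING_SUFFIX if reasoning else task

print(build_prompt(BASE_PROMPT, reasoning=True))   # Reasoning Mode prompt
print(build_prompt(BASE_PROMPT, reasoning=False))  # Instruct Mode prompt
```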
## 🎬 Get Started
CheXOne is post-trained from Qwen2.5-VL-3B-Instruct, which is supported in the latest Hugging Face `transformers`. We advise building `transformers` from source:
```shell
pip install git+https://github.com/huggingface/transformers accelerate
```
```python
import torch
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info

# Default: load the model on the available device(s)
model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
"StanfordAIMI/CheXOne", torch_dtype="auto", device_map="auto"
)
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image scenarios.
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained(
# "StanfordAIMI/CheXOne",
# torch_dtype=torch.bfloat16,
# attn_implementation="flash_attention_2",
# device_map="auto",
# )
# Default processor
processor = AutoProcessor.from_pretrained("StanfordAIMI/CheXOne")
# The default range for the number of visual tokens per image is 4-16384.
# We recommend setting max_pixels=512*512 to match the training setting.
# min_pixels = 256*28*28
# max_pixels = 512*512
# processor = AutoProcessor.from_pretrained("StanfordAIMI/CheXOne", min_pixels=min_pixels, max_pixels=max_pixels)
# Inference Mode: Reasoning
messages = [
{
"role": "user",
"content": [
{
"type": "image",
"image": "https://github.com/YBZh/CheXOne/raw/main/asset/cxr.jpg",
},
{"type": "text", "text": "Write an example findings section for the CXR. Please reason step by step, and put your final answer within \\boxed{{}}."},
],
}
]
# Inference Mode: Instruct
# messages = [
# {
# "role": "user",
# "content": [
# {
# "type": "image",
#                 "image": "https://github.com/YBZh/CheXOne/raw/main/asset/cxr.jpg",
# },
# {"type": "text", "text": "Write an example findings section for the CXR."},
# ],
# }
# ]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference: Generation of the output
generated_ids = model.generate(**inputs, max_new_tokens=1024)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
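In Reasoning Mode the model emits its reasoning trace followed by a final answer wrapped in `\boxed{...}`. A minimal sketch for pulling out the boxed answer (the `extract_boxed_answer` helper is hypothetical, and assumes the answer itself contains no nested braces):

```python
import re

def extract_boxed_answer(text: str) -> str:
    """Return the contents of the last \\boxed{...} span, or the raw text
    if no box is found (e.g. Instruct-mode outputs)."""
    matches = re.findall(r"\\boxed\{(.*?)\}", text, flags=re.DOTALL)
    return matches[-1].strip() if matches else text.strip()

sample = "Step 1: ... Step 2: ... \\boxed{No acute cardiopulmonary abnormality.}"
print(extract_boxed_answer(sample))
# → No acute cardiopulmonary abnormality.
```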
<details>
<summary>Multi image inference</summary>

```python
# Messages containing multiple images and a text query
messages = [
{
"role": "user",
"content": [
            {"type": "image", "image": "https://github.com/YBZh/CheXOne/raw/main/asset/cxr.jpg"},
            {"type": "image", "image": "https://github.com/YBZh/CheXOne/raw/main/asset/cxr_lateral.jpg"},
{"type": "text", "text": "Write an example findings section for the CXR. Please reason step by step, and put your final answer within \\boxed{{}}."},
],
}
]
# Preparation for inference
text = processor.apply_chat_template(
messages, tokenize=False, add_generation_prompt=True
)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
text=[text],
images=image_inputs,
videos=video_inputs,
padding=True,
return_tensors="pt",
)
inputs = inputs.to("cuda")
# Inference
generated_ids = model.generate(**inputs, max_new_tokens=1024)
generated_ids_trimmed = [
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
output_text = processor.batch_decode(
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False
)
print(output_text)
```
</details>
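Besides URLs, `qwen_vl_utils` also accepts local image paths (via the `file://` scheme) and base64-encoded images, which is convenient for sensitive data that should not leave the machine. A hypothetical local-file message (the path is a placeholder; replace it with your own image):

```python
# Message referencing a local chest X-ray instead of a remote URL.
# The file:// scheme is resolved by qwen_vl_utils.process_vision_info.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "file:///path/to/your/cxr.jpg"},
            {"type": "text", "text": "Write an example findings section for the CXR."},
        ],
    }
]
```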
## ✏️ Citation
```bibtex
@article{xx,
title={xx},
author={Cxxx},
journal={xx},
url={xx},
year={xx}
}
```