|
|
--- |
|
|
license: cc-by-nc-4.0 |
|
|
language: |
|
|
- en |
|
|
base_model: |
|
|
- Qwen/Qwen2.5-VL-3B-Instruct
|
|
pipeline_tag: image-text-to-text |
|
|
tags: |
|
|
- Chest-Xray |
|
|
- CXR |
|
|
- Reasoning |
|
|
- VQA |
|
|
- Report |
|
|
- Grounding |
|
|
--- |
|
|
|
|
|
<div align="center"> |
|
|
<img src="https://github.com/YBZh/CheXOne/raw/main/asset/chexone_logo1.png" width="600" alt="CheXOne Logo"> |
|
|
|
|
|
|
|
|
<p align="center"> |
|
|
📝 <a href="https://huggingface.co/StanfordAIMI/CheXOne" target="_blank">Paper</a> • 🤗 <a href="https://huggingface.co/StanfordAIMI/CheXOne" target="_blank">Hugging Face</a> • 🧩 <a href="https://github.com/YBZh/CheXOne" target="_blank">GitHub</a> • 🪄 <a href="https://github.com/YBZh/CheXOne" target="_blank">Project</a>
|
|
</p> |
|
|
</div> |
|
|
|
|
|
|
|
|
|
|
## ✨ Key Features
|
|
* **Reasoning Capability**: Produces explicit reasoning traces alongside final answers. |
|
|
|
|
|
* **Multi-Task Support**: Supports Visual Question Answering (VQA), Report Generation, and Visual Grounding. |
|
|
|
|
|
* **Resident-Level Report Drafting**: Drafted reports match or outperform resident-written reports in 50% of cases.
|
|
|
|
|
* **Two Inference Modes** (see the sketch after this list):

  * **Reasoning Mode**: Higher performance with explicit reasoning traces.

  * **Instruct Mode**: Faster inference without reasoning traces.
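
The two modes are selected purely through the text prompt, mirroring the "Get Started" examples below: Reasoning mode appends a step-by-step suffix and asks for the final answer inside `\boxed{}`, while Instruct mode sends the question as-is. A minimal sketch (the `build_prompt` helper is our illustration, not part of the model API):

```python
# Sketch: the two inference modes differ only in the text prompt.
REASONING_SUFFIX = " Please reason step by step, and put your final answer within \\boxed{}."

def build_prompt(question: str, reasoning: bool = True) -> str:
    # Reasoning mode appends the suffix; Instruct mode uses the bare question.
    return question + REASONING_SUFFIX if reasoning else question

print(build_prompt("Write an example findings section for the CXR."))
```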
|
|
|
|
|
|
|
|
## 🎬 Get Started |
|
|
CheXOne is post-trained from Qwen2.5-VL-3B-Instruct, which is supported in the latest Hugging Face Transformers; we advise building Transformers from source:
|
|
```bash
pip install git+https://github.com/huggingface/transformers accelerate qwen-vl-utils
```
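
If you later enable `flash_attention_2` (shown as a commented option in the snippet below), you will also need the `flash-attn` package, typically installed with `pip install flash-attn --no-build-isolation`.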
|
|
|
|
|
```python |
|
|
import torch  # needed for torch.bfloat16 in the flash_attention_2 variant below
from transformers import Qwen2_5_VLForConditionalGeneration, AutoProcessor
|
|
from qwen_vl_utils import process_vision_info |
|
|
|
|
|
# default: Load the model on the available device(s) |
|
|
model = Qwen2_5_VLForConditionalGeneration.from_pretrained( |
|
|
"StanfordAIMI/CheXOne", torch_dtype="auto", device_map="auto" |
|
|
) |
|
|
|
|
|
# We recommend enabling flash_attention_2 for better acceleration and memory saving, especially in multi-image scenarios. |
|
|
# model = Qwen2_5_VLForConditionalGeneration.from_pretrained( |
|
|
# "StanfordAIMI/CheXOne", |
|
|
# torch_dtype=torch.bfloat16, |
|
|
# attn_implementation="flash_attention_2", |
|
|
# device_map="auto", |
|
|
# ) |
|
|
|
|
|
# Default processor
|
|
processor = AutoProcessor.from_pretrained("StanfordAIMI/CheXOne") |
|
|
|
|
|
# The default range for the number of visual tokens per image in the model is 4-16384. |
|
|
# We recommend setting max_pixels=512*512 to match the training setting.
|
|
# min_pixels = 256*28*28 |
|
|
# max_pixels = 512*512 |
|
|
# processor = AutoProcessor.from_pretrained("StanfordAIMI/CheXOne", min_pixels=min_pixels, max_pixels=max_pixels) |
|
|
|
|
|
# Inference Mode: Reasoning |
|
|
messages = [ |
|
|
{ |
|
|
"role": "user", |
|
|
"content": [ |
|
|
{ |
|
|
"type": "image", |
|
|
"image": "https://github.com/YBZh/CheXOne/blob/main/asset/cxr.jpg", |
|
|
}, |
|
|
{"type": "text", "text": "Write an example findings section for the CXR. Please reason step by step, and put your final answer within \\boxed{{}}."}, |
|
|
], |
|
|
} |
|
|
] |
|
|
|
|
|
# Inference Mode: Instruct |
|
|
# messages = [ |
|
|
# { |
|
|
# "role": "user", |
|
|
# "content": [ |
|
|
# { |
|
|
# "type": "image", |
|
|
# "image": "https://github.com/YBZh/CheXOne/blob/main/asset/cxr.jpg", |
|
|
# }, |
|
|
# {"type": "text", "text": "Write an example findings section for the CXR."}, |
|
|
# ], |
|
|
# } |
|
|
# ] |
|
|
|
|
|
# Preparation for inference |
|
|
text = processor.apply_chat_template( |
|
|
messages, tokenize=False, add_generation_prompt=True |
|
|
) |
|
|
image_inputs, video_inputs = process_vision_info(messages) |
|
|
inputs = processor( |
|
|
text=[text], |
|
|
images=image_inputs, |
|
|
videos=video_inputs, |
|
|
padding=True, |
|
|
return_tensors="pt", |
|
|
) |
|
|
inputs = inputs.to("cuda") |
|
|
|
|
|
# Inference: Generation of the output |
|
|
generated_ids = model.generate(**inputs, max_new_tokens=1024) |
|
|
generated_ids_trimmed = [ |
|
|
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) |
|
|
] |
|
|
output_text = processor.batch_decode( |
|
|
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False |
|
|
) |
|
|
print(output_text) |
|
|
``` |
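
In Reasoning mode the prompt asks the model to place its final answer inside `\boxed{}`, so you may want to split the answer from the reasoning trace. A minimal sketch under that assumption (the `extract_boxed` helper is our illustration, not part of the model API, and it does not handle nested braces):

```python
import re

def extract_boxed(text: str):
    # Return the content of the last \boxed{...} span, or None if absent.
    matches = re.findall(r"\\boxed\{(.*?)\}", text, flags=re.DOTALL)
    return matches[-1] if matches else None

print(extract_boxed(output_text[0]))
```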
|
|
<details> |
|
|
<summary>Multi image inference</summary> |
|
|
|
|
|
```python |
|
|
# Messages containing multiple images and a text query |
|
|
messages = [ |
|
|
{ |
|
|
"role": "user", |
|
|
"content": [ |
|
|
{"type": "image", "image": "https://github.com/YBZh/CheXOne/blob/main/asset/cxr.jpg"}, |
|
|
{"type": "image", "image": "https://github.com/YBZh/CheXOne/blob/main/asset/cxr_lateral.jpg"}, |
|
|
{"type": "text", "text": "Write an example findings section for the CXR. Please reason step by step, and put your final answer within \\boxed{{}}."}, |
|
|
], |
|
|
} |
|
|
] |
|
|
|
|
|
# Preparation for inference |
|
|
text = processor.apply_chat_template( |
|
|
messages, tokenize=False, add_generation_prompt=True |
|
|
) |
|
|
image_inputs, video_inputs = process_vision_info(messages) |
|
|
inputs = processor( |
|
|
text=[text], |
|
|
images=image_inputs, |
|
|
videos=video_inputs, |
|
|
padding=True, |
|
|
return_tensors="pt", |
|
|
) |
|
|
inputs = inputs.to("cuda") |
|
|
|
|
|
# Inference |
|
|
generated_ids = model.generate(**inputs, max_new_tokens=1024)
|
|
generated_ids_trimmed = [ |
|
|
out_ids[len(in_ids) :] for in_ids, out_ids in zip(inputs.input_ids, generated_ids) |
|
|
] |
|
|
output_text = processor.batch_decode( |
|
|
generated_ids_trimmed, skip_special_tokens=True, clean_up_tokenization_spaces=False |
|
|
) |
|
|
print(output_text) |
|
|
``` |
|
|
</details> |
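
The same pipeline covers the other supported tasks; only the text prompt changes. For example, a VQA query in Reasoning mode (the question below is our illustration, not an official prompt template):

```python
# VQA: swap in a question instead of the report-writing instruction,
# then reuse the same preparation and generation steps as above.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "https://github.com/YBZh/CheXOne/raw/main/asset/cxr.jpg"},
            {"type": "text", "text": "Is there a pleural effusion? Please reason step by step, and put your final answer within \\boxed{}."},
        ],
    }
]
```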
|
|
|
|
|
## ✏️ Citation |
|
|
|
|
|
```bibtex
|
|
@article{xx, |
|
|
title={xx}, |
|
|
author={Cxxx}, |
|
|
journal={xx}, |
|
|
url={xx}, |
|
|
year={xx} |
|
|
} |
|
|
``` |