Qwen2-VL-2B-Instruct – Stage 3 (Multi-turn)
1. Model Overview
This model is part of a Vision-Language AI system designed for chest X-ray analysis in Vietnamese clinical settings.
The full pipeline consists of 3 stages:
- Stage 1: Findings generation (image → radiology findings)
- Stage 2: Impression generation (image → clinical impression)
- Stage 3: Multi-turn conversation (findings + impression + dialogue)
This repository corresponds to:
- Stage: 3 (Multi-turn)
- Task: Multi-turn reasoning with findings and impression
- Domain: Vietnamese medical imaging (Chest X-ray)
The model supports multi-turn dialogue, where:
- Turn 1: Generate findings
- Turn 2: Generate clinical impression based on previous context
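The two turns above share one growing conversation history. A minimal sketch of how that history is structured (the image path and prompt strings here are placeholders, not the model's actual prompts):

```python
# Minimal sketch of how the multi-turn history grows (placeholder prompts).
# Turn 1: the user sends the X-ray image plus a findings request.
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "chest_xray.jpg"},  # placeholder path
            {"type": "text", "text": "Describe the findings."},
        ],
    }
]

# After the model answers, its reply is appended as an assistant turn,
# followed by the Turn 2 impression request.
messages.append(
    {"role": "assistant", "content": [{"type": "text", "text": "<findings from Turn 1>"}]}
)
messages.append(
    {"role": "user", "content": [{"type": "text", "text": "What is the clinical impression?"}]}
)

print([m["role"] for m in messages])  # → ['user', 'assistant', 'user']
```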
2. Installation
```bash
pip install torch torchvision transformers qwen-vl-utils pillow
```
3. Inference
A GPU is recommended for inference.
```python
from transformers import Qwen2VLForConditionalGeneration, AutoProcessor
from qwen_vl_utils import process_vision_info
import torch

# Load the Stage 3 checkpoint; device_map="auto" places the weights on a GPU if one is available.
model = Qwen2VLForConditionalGeneration.from_pretrained(
    "THP2903/weight_qwen2-2b_instruct_multi",
    torch_dtype="auto",
    device_map="auto",
)
processor = AutoProcessor.from_pretrained("THP2903/weight_qwen2-2b_instruct_multi")

# Turn 1: Findings
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": "your_image.jpg"},
            # "X-ray of a 48-year-old male patient, PA view. Describe the patient's findings."
            {"type": "text", "text": "Ảnh chụp xray bệnh nhân nam, 48 tuổi PA. Mô tả thông tin bệnh nhân."},
        ],
    }
]
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)  # follow the model's device instead of hard-coding "cuda"

generated_ids = model.generate(**inputs, max_new_tokens=512)
# Strip the prompt tokens so only the newly generated text is decoded.
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
response1 = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)[0]
print("Turn 1:", response1)

# Turn 2: Impression (reuse the Turn 1 response as conversation history)
messages.append(
    {"role": "assistant", "content": [{"type": "text", "text": response1}]}
)
messages.append(
    {
        "role": "user",
        "content": [
            # "What is the diagnostic conclusion?"
            {"type": "text", "text": "Kết luận bệnh gì?"}
        ],
    }
)
text = processor.apply_chat_template(messages, tokenize=False, add_generation_prompt=True)
image_inputs, video_inputs = process_vision_info(messages)
inputs = processor(
    text=[text],
    images=image_inputs,
    videos=video_inputs,
    padding=True,
    return_tensors="pt",
).to(model.device)

generated_ids = model.generate(**inputs, max_new_tokens=512)
generated_ids_trimmed = [
    out_ids[len(in_ids):] for in_ids, out_ids in zip(inputs.input_ids, generated_ids)
]
response2 = processor.batch_decode(
    generated_ids_trimmed,
    skip_special_tokens=True,
    clean_up_tokenization_spaces=False,
)[0]
print("Turn 2:", response2)
```
4. Notes
- The input must be a chest X-ray image.
- Turn 1 generates the findings; Turn 2 generates the clinical impression using the earlier conversation context.
- Conversation history is maintained via the `messages` list.
- The model follows the original Qwen2-VL multi-turn inference pipeline.
- For best performance, consider the larger Qwen2-VL-7B variant.