---
license: apache-2.0
datasets:
  - foreverbeliever/OmniMedVQA
language:
  - en
metrics:
  - accuracy
base_model:
  - Qwen/Qwen2-VL-2B-Instruct
pipeline_tag: visual-question-answering
---

# Med-R1

Med-R1 is a reinforcement learning (RL)-enhanced vision-language model (VLM) designed for medical reasoning across 8 imaging modalities (CT, MRI, Ultrasound, Dermoscopy, Fundus Photography, Optical Coherence Tomography (OCT), Microscopy, and X-ray) and 5 key tasks (modality recognition, anatomy identification, disease diagnosis, lesion grading, and biological attribute analysis). Using Group Relative Policy Optimization (GRPO), Med-R1 improves generalization and trustworthiness, surpassing Qwen2-VL-2B by 29.94% and even outperforming the much larger Qwen2-VL-72B. Our model checkpoints provide researchers with a powerful tool for advancing medical AI with RL-driven enhancements.
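GRPO dispenses with a learned value critic: for each question, a group of candidate answers is sampled, and each answer's reward is normalized against the statistics of its group. The snippet below is a minimal sketch of that group-relative advantage step, assuming a simple binary correctness reward; it illustrates the general technique, not Med-R1's actual training code.

```python
import torch

def grpo_advantages(rewards: torch.Tensor) -> torch.Tensor:
    """Group-relative advantages (GRPO core idea).

    rewards: shape (num_prompts, group_size), one reward per sampled
    answer. Each answer's advantage is its reward normalized by the
    mean/std of its own group, so no value critic is needed.
    """
    mean = rewards.mean(dim=1, keepdim=True)
    std = rewards.std(dim=1, keepdim=True)
    return (rewards - mean) / (std + 1e-8)

# Hypothetical example: 2 questions, 4 sampled answers each,
# reward = 1.0 if the answer is correct, else 0.0.
rewards = torch.tensor([[1.0, 0.0, 0.0, 1.0],
                        [0.0, 0.0, 1.0, 0.0]])
print(grpo_advantages(rewards))
```

In the full algorithm, these advantages weight a PPO-style clipped policy update with a KL penalty toward the reference model; the sketch only shows the group normalization that gives GRPO its name.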

## Description of Models

- **Cross-Modality:** We provide checkpoints trained separately on each of the following modalities: CT, MRI, X-Ray, Fundus (FP), Dermoscopy (Der), Microscopy (Micro), OCT, and Ultrasound (US).
- **Cross-Task Learning:** We provide checkpoints trained separately on each of the following tasks: Anatomy Identification, Disease Diagnosis, Lesion Grading, Modality Recognition, and Biological Attribute Analysis. (A loading sketch follows this list.)
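
Because Med-R1 is fine-tuned from Qwen2-VL-2B-Instruct, the standard Qwen2-VL inference path in 🤗 Transformers should apply to any of these checkpoints. In the sketch below, the repository id, image file, and question are placeholders; substitute the specific modality or task checkpoint you want to evaluate.

```python
import torch
from PIL import Image
from transformers import AutoProcessor, Qwen2VLForConditionalGeneration

REPO = "yuxianglai117/Med-R1"  # assumed repo id; swap in the desired checkpoint

model = Qwen2VLForConditionalGeneration.from_pretrained(
    REPO, torch_dtype=torch.bfloat16, device_map="auto"
)
processor = AutoProcessor.from_pretrained(REPO)

image = Image.open("example_ct.png")  # hypothetical local image
messages = [
    {
        "role": "user",
        "content": [
            {"type": "image"},
            {"type": "text", "text": "Which imaging modality does this scan use?"},
        ],
    }
]

# Build the chat prompt, then bundle text + image into model inputs.
prompt = processor.apply_chat_template(messages, add_generation_prompt=True)
inputs = processor(text=[prompt], images=[image], return_tensors="pt").to(model.device)

with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=128)

# Strip the prompt tokens before decoding the generated answer.
answer = processor.batch_decode(
    output[:, inputs.input_ids.shape[1]:], skip_special_tokens=True
)[0]
print(answer)
```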

## Citation