---
license: cc-by-4.0
tags:
- medical
- vision-language
- visual-grounding
- multi-modal
- pre-trained
---

# Model Card: Med-GLIP

## Model Details

* **Model Name:** Med-GLIP
* **Paper Title:** Med-GLIP: Advancing Medical Language-Image Pre-training with Large-scale Grounded Dataset
* **Authors:** Ziye Deng, Ruihan He, Jiaxiang Liu, Yuan Wang, Zijie Meng, Songtao Jiang, Yong Xie, Zuozhu Liu
* **Affiliations:** (Not listed in the abstract; add the authors' affiliations here.)
* **Version:** v1
* **Date:** (Presumed August 2025, from the arXiv identifier 2508.10528.)
* **Model Type:** Medical language-image pre-training model with visual grounding capabilities.
* **Relevant Links:**
  * arXiv page: [https://arxiv.org/abs/2508.10528v1](https://arxiv.org/abs/2508.10528v1)
  * DOI: [https://doi.org/10.48550/arXiv.2508.10528](https://doi.org/10.48550/arXiv.2508.10528)
  * Code repository: (Add link if available)
  * Model weights: (Add link if available)
* **License:** Creative Commons Attribution 4.0 International (CC BY 4.0)
* **Citation:**

```bibtex
@misc{deng2025medglip,
      title={Med-GLIP: Advancing Medical Language-Image Pre-training with Large-scale Grounded Dataset},
      author={Ziye Deng and Ruihan He and Jiaxiang Liu and Yuan Wang and Zijie Meng and Songtao Jiang and Yong Xie and Zuozhu Liu},
      year={2025},
      eprint={2508.10528},
      archivePrefix={arXiv},
      primaryClass={cs.CV}
}
```

## Model Description

Med-GLIP is a medical-domain language-image pre-training model designed to capture fine-grained correspondences between medical images and text. In contrast to existing medical multi-modal models (e.g., MedKLIP, LLaVA-Med), Med-GLIP specifically emphasizes **visual grounding**: the ability to localize medical entities or findings mentioned in text to their corresponding regions in the image. The model's development is coupled with a large-scale grounded medical language-image dataset, **Med-GLIP-5M**.

The model aims to overcome the limitations of existing methods in **fine-grained understanding and localization**, which is crucial for applications that require precise links between report findings and image regions. By pre-training on the large-scale grounding dataset, Med-GLIP learns stronger cross-modal alignment.
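
To make the grounding interface concrete, here is a minimal sketch of the kind of phrase-to-region output such a model produces; the schema, field names, and values are illustrative, not Med-GLIP's actual API:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class GroundedPhrase:
    """One text phrase localized to an image region (illustrative schema)."""
    phrase: str                                  # e.g. "right lower lobe consolidation"
    box_xyxy: Tuple[float, float, float, float]  # pixel coordinates (x1, y1, x2, y2)
    score: float                                 # model confidence in [0, 1]

# A grounding model maps (image, report text) to a list of such records:
predictions: List[GroundedPhrase] = [
    GroundedPhrase("right lower lobe consolidation", (412.0, 518.0, 690.0, 760.0), 0.91),
    GroundedPhrase("small right pleural effusion", (380.0, 700.0, 720.0, 860.0), 0.78),
]
```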

## Intended Use

* **Primary Intended Uses:**
  * Medical Visual Question Answering (VQA)
  * Medical Report Generation (MRG)
  * Phrase grounding: localizing text phrases (e.g., diseases, anatomical structures) to image regions.
  * Serving as a foundational pre-trained model for downstream medical multi-modal tasks (e.g., interactive segmentation, diagnostic assistance).
* **Primary Intended Users:**
  * Medical AI researchers
  * Engineers developing medical image analysis and reporting tools
  * Researchers interested in multi-modal learning and visual grounding
* **Out-of-Scope Uses:**
  * Direct use in clinical diagnostic decision-making without rigorous validation and regulatory approval.
  * Non-medical image-text tasks.

## Training Data

* **Dataset:** **Med-GLIP-5M**
  * A custom-built, large-scale medical language-image dataset created specifically for Med-GLIP, featuring extensive **grounding annotations** (correspondences between image regions and text phrases).
* **Dataset Construction:** The paper details the pipeline, including data source analysis, data collection, data preprocessing, quality control, and the generation of grounding annotations (possibly utilizing tools such as SAM).
* **Composition:** (Specific details depend on the full paper.) Expected to include various medical imaging modalities (e.g., X-rays, CTs, MRIs) paired with corresponding radiological reports or descriptive texts, with a focus on high-quality phrase-region bounding-box annotations.

## Model Architecture

* Med-GLIP follows the architectural principles of **GLIP (Grounded Language-Image Pre-training)**, adapted to the medical domain. Key components are expected to include:
  * **Image Encoder:** Likely a Transformer architecture (e.g., ViT or Swin Transformer) for visual feature extraction.
  * **Text Encoder:** Likely a BERT variant for encoding text inputs (reports and query phrases).
  * **Cross-Modal Fusion Module:** Enables deep interaction between image and text features.
  * **Grounding Head:** Predicts bounding boxes corresponding to the input text phrases from the fused features.
* **Training Objectives:** (a code sketch follows this list)
  * **Grounding Loss:** Minimizes the difference between predicted and ground-truth bounding boxes (e.g., using L1 and GIoU losses).
  * **Image-Text Contrastive (ITC) Loss:** Aligns matched image-text pairs in the shared feature space, providing global alignment. The formula is likely of the standard symmetric form $L_{ITC} = -\log \frac{\exp(\text{sim}(I, T)/\tau)}{\sum_{T'} \exp(\text{sim}(I, T')/\tau)} - \log \frac{\exp(\text{sim}(I, T)/\tau)}{\sum_{I'} \exp(\text{sim}(I', T)/\tau)}$, where $\text{sim}(\cdot,\cdot)$ is a similarity function (typically cosine) and $\tau$ a temperature.
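
For concreteness, here is a minimal PyTorch sketch of the two objectives named above. The loss weights and the in-batch-negatives convention are common defaults (DETR- and CLIP-style, respectively), not values confirmed by the paper:

```python
import torch
import torch.nn.functional as F
from torchvision.ops import generalized_box_iou_loss

def grounding_loss(pred_boxes: torch.Tensor, gt_boxes: torch.Tensor,
                   l1_weight: float = 5.0, giou_weight: float = 2.0) -> torch.Tensor:
    """L1 + GIoU regression loss; boxes are (N, 4) in (x1, y1, x2, y2) format.
    The 5.0/2.0 weights are DETR-style conventions, not the paper's values."""
    l1 = F.l1_loss(pred_boxes, gt_boxes)
    giou = generalized_box_iou_loss(pred_boxes, gt_boxes, reduction="mean")
    return l1_weight * l1 + giou_weight * giou

def itc_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor, tau: float = 0.07) -> torch.Tensor:
    """Symmetric ITC loss over B matched (image, text) pairs of shape (B, D),
    using the other in-batch samples as negatives."""
    img_emb = F.normalize(img_emb, dim=-1)           # cosine similarity via dot product
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / tau             # (B, B) similarity matrix
    targets = torch.arange(img_emb.size(0), device=logits.device)  # pair i <-> i
    return F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)
```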

## Evaluation

* **Evaluation Tasks:**
  * **Phrase Grounding:** Evaluated on dedicated medical grounding datasets.
  * **Visual Question Answering (VQA):** Evaluated on standard medical VQA datasets (e.g., VQA-RAD, SLAKE, PathVQA).
  * **Medical Report Generation (MRG):** Evaluated on datasets such as MIMIC-CXR for report quality.
* **Metrics:** (a grounding-metric sketch follows this list)
  * Grounding: IoU (Intersection over Union), Recall@k.
  * VQA: Accuracy, AUC.
  * MRG: Text-generation metrics such as BLEU, ROUGE, and CIDEr.
* **Results:** The paper reports significant performance gains over previous state-of-the-art methods on several downstream tasks, particularly those requiring strong grounding capabilities. Figures and tables (e.g., Figure 7, Table 6) provide qualitative and quantitative comparisons.
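
As a reference for the grounding metrics, here is a minimal sketch of IoU-based Recall@k for a single image, using `torchvision`'s box utilities; the 0.5 IoU threshold is a common convention, not taken from the paper:

```python
import torch
from torchvision.ops import box_iou

def recall_at_k(pred_boxes: torch.Tensor, pred_scores: torch.Tensor,
                gt_boxes: torch.Tensor, k: int = 1, iou_thresh: float = 0.5) -> float:
    """Fraction of ground-truth boxes hit (IoU >= iou_thresh) by a top-k prediction.

    Boxes are (N, 4) tensors in (x1, y1, x2, y2) format.
    """
    topk = pred_scores.topk(min(k, pred_scores.numel())).indices
    iou = box_iou(gt_boxes, pred_boxes[topk])   # (num_gt, k) pairwise IoU
    return (iou.max(dim=1).values >= iou_thresh).float().mean().item()
```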

## Limitations

* Model performance is highly dependent on the quality and coverage of the Med-GLIP-5M dataset.
* Generalization to rare diseases or unseen imaging modalities and styles may be limited.
* Noise or inaccuracies introduced during the automated grounding-annotation process could affect the model's precision.
* Computational requirements may be high for both training and inference.
* (Refer to the full paper for a comprehensive discussion of limitations.)

## Bias, Risks, and Ethical Considerations

* **Data Bias:** The Med-GLIP-5M dataset may carry demographic biases (e.g., in age, gender, or race representation) from its source institutions, which can be reflected in the model's performance on underrepresented groups.
* **Clinical Risk:** The model is an AI research tool and **must not** be used for primary clinical diagnosis or patient care without explicit, strict clinical validation and regulatory approval. Misinterpretation of results could lead to patient harm.
* **Interpretability:** While the grounding feature aids interpretability, the overall decision-making process is complex, and failures should be treated with caution.
* (Refer to the full paper for a detailed discussion of ethical and societal implications.)

## How to Get Started with the Model

(If the model code and weights are released, this section will provide usage instructions. The sketch below is hypothetical: the repository id is a placeholder, no official Med-GLIP class exists yet, and the real `AutoModelForZeroShotObjectDetection` interface from Hugging Face `transformers` is used as a stand-in, on the assumption that a released checkpoint would follow a GLIP-style grounding API.)

```python
# Hypothetical usage sketch -- repo id, model class, and post-processing are
# placeholders until official code/weights are released.
import torch
from PIL import Image
from transformers import AutoProcessor, AutoModelForZeroShotObjectDetection

# Load processor and model (placeholder repository id)
processor = AutoProcessor.from_pretrained("your-org/med-glip")
model = AutoModelForZeroShotObjectDetection.from_pretrained("your-org/med-glip")

# Prepare input: a medical image and the phrase to ground
image = Image.open("path/to/medical_image.jpg")
text_query = "evidence of right lower lobe consolidation"
inputs = processor(images=image, text=text_query, return_tensors="pt")

# Run inference
with torch.no_grad():
    outputs = model(**inputs)

# Convert raw outputs to bounding boxes; the exact call depends on the
# released model class (shown here: the Grounding DINO-style interface)
results = processor.post_process_grounded_object_detection(
    outputs,
    inputs.input_ids,
    target_sizes=[image.size[::-1]],  # (height, width)
)
print(results[0]["boxes"], results[0]["scores"])
```