---
license: cc-by-nc-4.0
task_categories:
  - image-segmentation
language:
  - en
tags:
  - medical-imaging
  - image-segmentation
  - vision-language-models
  - clip
  - unimedclip
  - biomedical
  - healthcare
---

# MedCLIPSeg: Probabilistic Vision–Language Adaptation for Data-Efficient and Generalizable Medical Image Segmentation

Health-X Lab | IMPACT Lab

Taha Koleilat, Hojat Asgariandehkordi, Omid Nejati Manzari, Berardino Barile, Yiming Xiao, Hassan Rivaz


## Overview

Medical image segmentation remains challenging due to limited annotations for training, ambiguous anatomical features, and domain shifts. While vision–language models such as CLIP offer strong cross-modal representations, their potential for dense, text-guided medical image segmentation remains underexplored. We present MedCLIPSeg, a novel framework that adapts CLIP for robust, data-efficient, and uncertainty-aware medical image segmentation. Our approach leverages patch-level CLIP embeddings through probabilistic cross-modal attention, enabling bidirectional interaction between image and text tokens and explicit modeling of predictive uncertainty. Together with a soft patch-level contrastive loss that encourages nuanced semantic learning across diverse textual prompts, MedCLIPSeg improves data efficiency and domain generalizability. Extensive experiments across 16 datasets, spanning five imaging modalities and six organs, demonstrate that MedCLIPSeg outperforms prior methods in accuracy, efficiency, and robustness, while providing interpretable uncertainty maps that highlight the local reliability of segmentation results. This work demonstrates the potential of probabilistic vision–language modeling for text-driven medical image segmentation.

## How to install datasets

Our study includes 16 biomedical image segmentation datasets. Place all of them in a single `data/` directory to ease management. The file structure looks like this:

```
data/
├── <DATASET_NAME>/
│   ├── Prompts_Folder/
│   │   └── <prompt_files>        # text prompts (*.xlsx)
│   │
│   ├── Train_Folder/
│   │   ├── img/
│   │   │   └── <image_files>
│   │   └── label/
│   │       └── <mask_files>
│   │
│   ├── Val_Folder/
│   │   ├── img/
│   │   │   └── <image_files>
│   │   └── label/
│   │       └── <mask_files>
│   │
│   └── Test_Folder/
│       ├── img/
│       │   └── <image_files>
│       └── label/
│           └── <mask_files>
│
└── <DATASET_NAME_2>/
    └── (same structure as above)
```

## Dataset Organization

Each dataset is split into training, validation, and test sets.

The `Prompts_Folder` contains the text prompt files associated with each dataset. These include:

- Prompt definitions used for the data-efficiency experiments (e.g., the 10%, 25%, and 50% training and validation subsets)
- Additional prompt variants explored in the study, such as alternative phrasings and semantic formulations

These prompt files enable flexible evaluation under different supervision regimes.
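Since the prompt files are plain `.xlsx` spreadsheets, they can be loaded with pandas. A sketch under two assumptions that are mine, not the repo's: the prompt column is named `prompt` (the actual header may differ), and `openpyxl` is installed for Excel support:

```python
import pandas as pd  # reading .xlsx files requires the openpyxl engine

def load_prompts(xlsx_path, column="prompt"):
    """Read one prompt file and return its prompts as a list of strings.

    The column name "prompt" is an assumption; if it is absent, fall back
    to the first column of the sheet.
    """
    df = pd.read_excel(xlsx_path)
    if column not in df.columns:
        column = df.columns[0]
    return df[column].dropna().astype(str).tolist()
```

Adjust the `column` argument to match the headers the dataset's prompt files actually use.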

## Dataset Summary

For the first six datasets, the triplets give the training and validation subset sizes used in the 10%, 25%, and 50% data-efficiency experiments.

| Dataset | Train | Validation | Test | Modality | Organ |
|---|---|---|---|---|---|
| BUSI | (62, 156, 312) | (7, 19, 39) | 78 | Ultrasound | Breast |
| BTMRI | (273, 684, 1,369) | (132, 330, 660) | 1,005 | MRI | Brain |
| ISIC | (80, 202, 404) | (9, 22, 45) | 379 | Dermatoscopy | Skin |
| Kvasir-SEG | (80, 200, 400) | (10, 25, 50) | 100 | Endoscopy | Colon |
| QaTa-COV19 | (571, 1,429, 2,858) | (142, 357, 714) | 2,113 | X-ray | Chest |
| EUS | (2,631, 6,579, 13,159) | (175, 439, 879) | 10,090 | Ultrasound | Pancreas |
| BUSUC | 567 | 122 | 122 | Ultrasound | Breast |
| BUSBRA | 1,311 | 282 | 282 | Ultrasound | Breast |
| BUID | 162 | 35 | 35 | Ultrasound | Breast |
| UDIAT | 113 | 25 | 25 | Ultrasound | Breast |
| BRISC | 4,000 | 1,000 | 1,000 | MRI | Brain |
| UWaterlooSkinCancer | 132 | 0 | 41 | Dermatoscopy | Skin |
| CVC-ColonDB | 20 | 0 | 360 | Endoscopy | Colon |
| CVC-ClinicDB | 490 | 61 | 61 | Endoscopy | Colon |
| CVC-300 | 6 | 0 | 60 | Endoscopy | Colon |
| BKAI | 799 | 100 | 100 | Endoscopy | Colon |

## Download the datasets

All the datasets can be found on Hugging Face here. Download each dataset separately.

After downloading each dataset, unzip it and place it under `data/` like the following:

```
data/
├── BTMRI/
│   ├── Prompts_Folder/
│   │   └── <prompt_files>        # text prompts (*.xlsx)
│   │
│   ├── Train_Folder/
│   │   ├── img/
│   │   │   └── <image_files>
│   │   └── label/
│   │       └── <mask_files>
│   │
│   ├── Val_Folder/
│   │   ├── img/
│   │   │   └── <image_files>
│   │   └── label/
│   │       └── <mask_files>
│   │
│   └── Test_Folder/
│       ├── img/
│       │   └── <image_files>
│       └── label/
│           └── <mask_files>
```
## Preprocessing the EUS dataset

- Download the EUS Healthy subset, place it in `data/EUS`, unzip it, and rename the extracted folder to `EUS_healthy`
- Download the EUS Cancer subset, place it in `data/EUS`, unzip it, and rename the extracted folder to `EUS_cancer`
- Run the preprocessing script:

  ```shell
  python utils/preprocess_EUS.py
  ```

- Delete `EUS_healthy` and `EUS_cancer`
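The rename/run/cleanup sequence above can be wrapped in one helper. Everything here is a sketch: the extracted folder names vary with how the archives unpack, so they are parameters, and `utils/preprocess_EUS.py` is the repo script from the step above.

```python
import shutil
import subprocess
from pathlib import Path

def prepare_eus(eus_dir, healthy_src, cancer_src,
                script="utils/preprocess_EUS.py", run=True):
    """Rename the two extracted EUS subsets to the names the preprocessing
    script expects, optionally run the script, then delete the raw folders.

    healthy_src / cancer_src are the folder names the unzipped archives
    actually produced inside eus_dir (they are placeholders here).
    """
    eus_dir = Path(eus_dir)
    healthy = eus_dir / "EUS_healthy"
    cancer = eus_dir / "EUS_cancer"
    (eus_dir / healthy_src).rename(healthy)
    (eus_dir / cancer_src).rename(cancer)
    if run:
        # Invoke the repository's preprocessing script.
        subprocess.run(["python", script], check=True)
    # Raw subsets are no longer needed once preprocessing has produced
    # the Train/Val/Test folders.
    shutil.rmtree(healthy)
    shutil.rmtree(cancer)
```

Only run the cleanup once you have confirmed the script produced the expected `Train_Folder`/`Val_Folder`/`Test_Folder` output.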

## Citation

If you use our work, please consider citing:

```bibtex
@article{koleilat2026medclipseg,
  title={MedCLIPSeg: Probabilistic Vision-Language Adaptation for Data-Efficient and Generalizable Medical Image Segmentation},
  author={Koleilat, Taha and Asgariandehkordi, Hojat and Manzari, Omid Nejati and Barile, Berardino and Xiao, Yiming and Rivaz, Hassan},
  journal={arXiv preprint arXiv:2602.20423},
  year={2026}
}
```