Add pipeline_tag and improve model card metadata
Hi! I'm Niels from the community science team at Hugging Face.
This PR improves the model card for MedCLIPSeg by:
- Adding `pipeline_tag: image-segmentation` to the YAML metadata for better discoverability.
- Refining the tags to follow Hub standards.
- Adding the author list and a concise summary of the model's purpose.
The technical details and reproduction steps remain unchanged. Thanks for sharing this research with the community!
README.md (changed):

```diff
@@ -1,12 +1,10 @@
 ---
-license: cc-by-nc-4.0
-task_categories:
-- image-segmentation
 language:
 - en
+license: cc-by-nc-4.0
+pipeline_tag: image-segmentation
 tags:
 - medical-imaging
-- image-segmentation
 - vision-language-models
 - clip
 - unimedclip
@@ -23,7 +21,11 @@ tags:
 <a href="https://huggingface.co/TahaKoleilat/MedCLIPSeg" target="_blank"><img alt="HuggingFace Models" src="https://img.shields.io/badge/Models-Reproduce-2ea44f?logo=huggingface&logoColor=white" height="25"/></a>
 <a href="#citation"><img alt="Citation" src="https://img.shields.io/badge/Citation-BibTeX-6C63FF?logo=bookstack&logoColor=white" height="25"/></a>
 
-This repository hosts the **official trained model checkpoints** for **MedCLIPSeg**,
+This repository hosts the **official trained model checkpoints** for **MedCLIPSeg**, presented in the paper [MedCLIPSeg: Probabilistic Vision-Language Adaptation for Data-Efficient and Generalizable Medical Image Segmentation](https://huggingface.co/papers/2602.20423).
+
+**Authors:** Taha Koleilat, Hojat Asgariandehkordi, Omid Nejati Manzari, Berardino Barile, Yiming Xiao, Hassan Rivaz.
+
+MedCLIPSeg is a vision–language framework for **medical image segmentation** built on top of **CLIP**. It adapts CLIP for robust, data-efficient, and uncertainty-aware segmentation through probabilistic cross-modal attention and bidirectional interaction between image and text tokens.
 
 The released checkpoints correspond exactly to the experiments reported in our paper and are provided **for evaluation and reproducibility purposes only**.
 
```
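After the change, `pipeline_tag` should appear as a top-level key in the README's YAML front matter. A minimal sketch of a sanity check (plain string parsing rather than a full YAML parser, with the front matter reconstructed from the diff above; `README_HEAD` and `front_matter_keys` are illustrative names, not part of the repository):

```python
# Front matter as it should appear after this PR (reconstructed from the diff).
README_HEAD = """\
---
language:
- en
license: cc-by-nc-4.0
pipeline_tag: image-segmentation
tags:
- medical-imaging
- vision-language-models
- clip
- unimedclip
---
"""

def front_matter_keys(text: str) -> list[str]:
    # Take the block between the first pair of '---' delimiters and collect
    # its top-level keys. Good enough for this flat metadata; use a real
    # YAML parser (e.g. PyYAML) for anything more complex.
    block = text.split("---")[1]
    return [line.split(":", 1)[0] for line in block.strip().splitlines()
            if ":" in line and not line.startswith("- ")]

print(front_matter_keys(README_HEAD))  # → ['language', 'license', 'pipeline_tag', 'tags']
```

On the Hub, the `pipeline_tag` key (unlike a plain entry under `tags:`) is what places the model under the Image Segmentation task filter, which is the discoverability improvement this PR targets.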