---
## Abstract
Multi-turn reasoning segmentation is essential for mimicking real-world clinical workflows, where anatomical structures are identified through step-by-step dialogue based on spatial, functional, or pathological descriptions. However, the lack of a dedicated benchmark in this area has limited progress. To address this gap, we introduce the first bilingual benchmark for multi-turn medical image segmentation, supporting both Chinese and English dialogues. The benchmark consists of 28,904 images, 113,963 segmentation masks, and 232,188 question–answer pairs, covering major organs and anatomical systems across CT and MRI modalities. Each dialogue requires the model to infer the segmentation target based on prior conversational turns and previously segmented regions. We evaluate several state-of-the-art models, including MedCLIP-SAM, LISA, and LISA++, and report three key findings: (1) existing models perform poorly on our benchmark, far below clinical usability standards; (2) performance degrades as dialogue turns increase, reflecting limited multi-turn reasoning capabilities; and (3) general-purpose models such as LISA can outperform medical-specific models, suggesting that further integration of domain knowledge is needed for specialized medical applications.
---
## Highlights
* **New Task — Multi-Turn Reasoning Segmentation (MTRS):** At each turn, the model consumes the **current instruction + interaction history** (prior prompts and masks) to produce the **next segmentation**.
* **Three Reasoning Facets:** (i) **Clinical/Anatomical** (e.g., “segment the solid organ in the right upper abdomen involved in glucose metabolism”), (ii) **Spatial** (e.g., “segment the elliptical structure adjacent to the right side of the abdominal aorta”), (iii) **History-based References** (e.g., “segment the necrotic region surrounding the previously segmented tumor”).
* **Bilingual Benchmark (ZH/EN):** First dataset supporting **multi-turn medical dialogues** in **Chinese and English**.
* **Scale & Coverage:** **28,904 images**, **113,963 masks**, **232,188 QA pairs** across **CT & MRI**; covers major organs and anatomical systems.
* **What It Measures:** Cross-turn **memory**, **history-conditioned mask refinement**, and **language-to-image alignment** over multiple rounds.
* **SOTA Evaluation:** Benchmarked **MedCLIP-SAM**, **LISA**, and **LISA++** under multi-turn settings.
* **Key Findings:**
  1. Current models fall **well below clinical usability** on this benchmark.
  2. **Performance degrades** as dialogue turns increase.
  3. **General-purpose models** can outperform **medical-specific** models, indicating a need to infuse stronger domain knowledge.
* **Intended Impact:** Establishes the **first large-scale yardstick** for MTRS, enabling fair, reproducible comparison and catalyzing progress on **multi-turn reasoning in medical imaging**.
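To make the MTRS protocol concrete, here is a minimal sketch of a history-conditioned evaluation loop: each turn's prediction consumes the current instruction plus all prior instructions and predicted masks, and is scored per turn with the Dice coefficient. The names below (`DialogueState`, `run_dialogue`, the toy `segment_fn`) are hypothetical illustrations, not the benchmark's actual API.

```python
from dataclasses import dataclass, field

import numpy as np


@dataclass
class Turn:
    instruction: str     # natural-language segmentation request for this turn
    gt_mask: np.ndarray  # ground-truth binary mask for this turn


@dataclass
class DialogueState:
    """Interaction history carried across turns: prior instructions and predicted masks."""
    instructions: list = field(default_factory=list)
    masks: list = field(default_factory=list)


def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    inter = np.logical_and(pred, gt).sum()
    total = pred.sum() + gt.sum()
    return 1.0 if total == 0 else 2.0 * inter / total


def run_dialogue(image, turns, segment_fn):
    """Per-turn evaluation: every prediction conditions on the full history."""
    state = DialogueState()
    scores = []
    for turn in turns:
        pred = segment_fn(image, turn.instruction, state)  # history-conditioned call
        scores.append(dice(pred, turn.gt_mask))
        state.instructions.append(turn.instruction)        # update the history
        state.masks.append(pred)
    return scores


# Toy stand-in "model": segments the left or right half of the image
# depending on the instruction text.
def toy_segment(image, instruction, state):
    h, w = image.shape
    mask = np.zeros((h, w), dtype=bool)
    if "left" in instruction:
        mask[:, : w // 2] = True
    else:
        mask[:, w // 2:] = True
    return mask


img = np.zeros((4, 4))
left_half = np.pad(np.ones((4, 2), dtype=bool), ((0, 0), (0, 2)))
turns = [
    Turn("segment the left kidney", left_half),
    Turn("segment the region to the right of the previous mask", ~left_half),
]
print(run_dialogue(img, turns, toy_segment))  # per-turn Dice scores
```

The second turn references "the previous mask", which is why `DialogueState` must persist predicted masks across turns; a real MTRS model would resolve that reference against the stored history rather than the raw instruction keywords used in this toy.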
---
## 📁 Dataset Directory Structure
```