# PRISM
[PRISM](https://arxiv.org/abs/2404.15028): A **P**romptable and **R**obust **I**nteractive **S**egmentation **M**odel with Visual Prompts
Placenta application:
[PRISM Lite](https://arxiv.org/abs/2408.05372): A lightweight model for interactive 3D placenta segmentation in ultrasound
Interactive Segmentation Model for Placenta Segmentation from 3D Ultrasound Images ([arXiv version](https://arxiv.org/abs/2407.08020))
## News
[07/07/24] Check out the strong performance of [PRISM on placenta segmentation in ultrasound images](https://github.com/MedICL-VU/PRISM-placenta).
[05/13/24] Our work was early-accepted to MICCAI 2024.
[03/07/24] The [pretrained PRISM](https://drive.google.com/drive/u/1/folders/1B6Df44Gd9PEBGPkE1FwC8Ds4jefCekUB) models and [preprocessed datasets](https://drive.google.com/drive/folders/13uGNb2WQhSQcBQIUhnvYJere1LBYGDsW?usp=sharing) are uploaded.
## TODO
- Gradio demo
## Introduction of PRISM
<img src='figs/framework_v1.png' width='600'>
PRISM is a robust model for interactive segmentation in medical imaging. We strive for human-level performance: a human-in-the-loop interactive segmentation model should gradually refine its outcomes with prompts until they closely match inter-rater variability.
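As an illustration of this human-in-the-loop idea, here is a minimal sketch of an iterative refinement loop (hypothetical helper names, not the actual PRISM API): prompts are placed on the current error regions and fed back to the model until the prediction matches the reference.
```
import numpy as np

def refine_interactively(model, image, reference, num_iters=11):
    # Hypothetical loop: `model.predict` stands in for a promptable segmenter
    # such as PRISM; it is NOT the exact interface of this repository.
    mask = np.zeros(reference.shape, dtype=bool)   # start from an empty prediction
    prompts = []                                   # accumulated clicks/boxes/scribbles
    for _ in range(num_iters):
        errors = np.logical_xor(mask, reference)   # voxels where the prediction is wrong
        if not errors.any():
            break                                  # matches the reference; stop early
        voxel = tuple(np.argwhere(errors)[0])      # place the next click on an error voxel
        prompts.append((voxel, bool(reference[voxel])))         # positive or negative click
        mask = model.predict(image, prompts, prev_mask=mask)    # refine with all prompts so far
    return mask
```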
## PRISM tumor segmentation examples
Briefly, PRISM produces tumor segmentation with mean Dice values of **93.79 (colon), 94.48 (pancreas), 94.18 (liver), and 96.58 (kidney)**.
Qualitative examples:
- Iterative correction for a colon tumor
- Iterative correction for multiple tumors
- Qualitative comparison with competing methods
The quantitative results can be viewed in our [paper](https://arxiv.org/abs/2404.15028).
## Datasets
These datasets contain large anatomical differences among individuals as well as ambiguous boundaries.
- Our preprocessed data
We used four public [datasets](https://drive.google.com/drive/folders/13uGNb2WQhSQcBQIUhnvYJere1LBYGDsW?usp=sharing) for 3D tumor segmentation in [colon](https://drive.google.com/drive/u/1/folders/1bt17794HCZfmJ2MLh5w0Y_IAJyUj6ti2), [pancreas](https://drive.google.com/drive/u/1/folders/1NncGDG5Cu795WJTmBse-Lm0GrJmtvTdc), [liver](https://drive.google.com/drive/u/1/folders/1vDM2VkNAT5dvFX5XTRhPe6b7zwYWqU_U) and [kidney](https://drive.google.com/drive/u/1/folders/12UDho-JEZHfK1c1laD5dBFNxvJumcoDF).
- Original
Here are the links for the datasets: [MSD-colon](http://medicaldecathlon.com/), [MSD-pancreas](http://medicaldecathlon.com/), [LiTS2017](https://competitions.codalab.org/competitions/17094) and [KiTS2021](https://kits-challenge.org/kits21/).
## Models
| colon | pancreas | liver | kidney |
|------------------------------|------------------------------|------------------------------|------------------------------|
| [Download](https://drive.google.com/drive/u/1/folders/1nPUC0cCsyA_w-tKkhL_Bw7lesBorGzCl) |[Download](https://drive.google.com/drive/u/1/folders/1JPiF7wtSnbFdl0ZLmFQt1b4H-XH4FDrM)| [Download](https://drive.google.com/drive/u/1/folders/1JAFOca1FxWebzZjRa1lKo1OAv0HXqeh6) |[Download](https://drive.google.com/drive/u/1/folders/1sN0HQLM-LfWB5Kp119YwMsZIfv3VJj7S)|
## Get Started
**Installation**
```
conda create -n prism python=3.9
conda activate prism
sudo apt-get install git # or install git via your system's package manager
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113 # install pytorch
pip install git+https://github.com/facebookresearch/segment-anything.git # install segment anything packages
pip install git+https://github.com/deepmind/surface-distance.git # for normalized surface dice (NSD) evaluation
pip install -r requirements.txt
```
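After installation, a quick sanity check (a minimal sketch, assuming a CUDA-capable machine) can confirm that PyTorch sees the GPU and that the extra packages are importable:
```
import torch
import surface_distance                            # NSD evaluation package
from segment_anything import sam_model_registry    # Segment Anything package

print("torch:", torch.__version__)
print("CUDA available:", torch.cuda.is_available())
print("SAM model types:", list(sam_model_registry.keys()))
```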
**Train**
```
python train.py --data colon --data_dir your_data_directory --save_name your_save_name --multiple_outputs --dynamic --use_box --refine
```
add "--use_scribble" and "--efficient_scribble" if you want to train with scribbles.
**Train (Distributed Data Parallel)**
The only difference from the training command above is the addition of `--ddp`.
```
python train.py --data colon --data_dir your_data_directory --save_name your_save_name --multiple_outputs --dynamic --use_box --refine --ddp
```
**Test**
Put the downloaded pretrained model under the implementation directory.
```
python test.py --data colon --data_dir your_data_directory --split test --checkpoint best --save_name prism_pretrain --num_clicks 1 --iter_nums 11 --multiple_outputs --use_box --use_scribble --efficient_scribble --refine --refine_test
```
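The `surface-distance` package installed above is used for normalized surface Dice (NSD) evaluation. For reference, here is a minimal sketch of computing Dice and NSD with that package (the masks and spacing below are placeholders; in practice load your saved segmentations):
```
import numpy as np
import surface_distance

# Placeholder boolean 3D masks for ground truth and prediction.
mask_gt = np.zeros((64, 64, 64), dtype=bool)
mask_gt[20:40, 20:40, 20:40] = True
mask_pred = np.zeros((64, 64, 64), dtype=bool)
mask_pred[22:40, 20:40, 20:40] = True
spacing_mm = (1.0, 1.0, 1.0)  # voxel spacing of the image, in mm

dice = surface_distance.compute_dice_coefficient(mask_gt, mask_pred)
distances = surface_distance.compute_surface_distances(mask_gt, mask_pred, spacing_mm)
nsd = surface_distance.compute_surface_dice_at_tolerance(distances, tolerance_mm=1.0)
print(f"Dice: {dice:.4f}  NSD@1mm: {nsd:.4f}")
```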
**FAQ**
- If you get the error `AttributeError: module 'cv2' has no attribute 'ximgproc'`, please check [this](https://stackoverflow.com/questions/57427233/module-cv2-cv2-has-no-attribute-ximgproc) out; installing `opencv-contrib-python` typically resolves it.
- DDP mode yields lower Dice scores; training for more epochs may resolve this.
- On our end, combining `trainer` and `trainer_basic` speeds up training.
- Training the model without the refine module (as reported in the paper) gives better accuracy than training with the refine module but not using it at inference.
## License
The model is licensed under the [Apache 2.0 license](LICENSE).
## Acknowledgements
We thank the authors of the following repositories for their code: [SAM](https://github.com/facebookresearch/segment-anything), [SAM-Med3D](https://github.com/uni-medical/SAM-Med3D), [ProMISe](https://github.com/MedICL-VU/ProMISe), [ScribblePrompt](https://github.com/halleewong/ScribblePrompt), [nnU-Net](https://github.com/MIC-DKFZ/nnUNet)
If you find this repository useful, please consider citing:
```
@inproceedings{li2024prism,
title={Prism: A promptable and robust interactive segmentation model with visual prompts},
author={Li, Hao and Liu, Han and Hu, Dewei and Wang, Jiacheng and Oguz, Ipek},
booktitle={International Conference on Medical Image Computing and Computer-Assisted Intervention},
pages={389--399},
year={2024},
organization={Springer}
}
```
```
@inproceedings{li2024interactive,
title={Interactive Segmentation Model for Placenta Segmentation from 3D Ultrasound Images},
author={Li, Hao and Oguz, Baris and Arenas, Gabriel and Yao, Xing and Wang, Jiacheng and Pouch, Alison and Byram, Brett and Schwartz, Nadav and Oguz, Ipek},
booktitle={International Workshop on Advances in Simplifying Medical Ultrasound},
pages={132--142},
year={2024},
organization={Springer}
}
```
Please send an email to hao.li.1@vanderbilt.edu with any questions; we are always happy to help! :)