Updating README.md with metadata and consistency improvements
README.md (CHANGED)
@@ -1,20 +1,36 @@
-

## Description:
-NV-Segment-CT
-This model is a hugging face refactored version of the [MONAI VISTA3D](https://github.com/Project-MONAI/model-zoo/tree/dev/models/vista3d) bundle. A pipeline with transformer library interfaces is provided by this model. For more details about the original model, please visit the [MONAI model zoo](https://github.com/Project-MONAI/model-zoo).

This model is for research purposes and not for clinical usage.


-Core to

- **Segment everything**: Enables whole body exploration, crucial for understanding complex diseases affecting multiple organs and for holistic treatment planning.
- **Segment using class**: Provides detailed sectional views based on specific classes, essential for targeted disease analysis or organ mapping, such as tumor identification in critical organs.
- **Segment point prompts**: Enhances segmentation precision through user-directed, click-based selection. This interactive approach accelerates the creation of accurate ground-truth data, essential in medical imaging analysis.

## Run pipeline:
-For running the pipeline,

Here is a code snippet to showcase how to execute inference with this model.
```python
@@ -66,8 +82,6 @@ list(set([i+1 for i in range(132)]) - set([2,16,18,20,21,23,24,25,26,27,128,129,
- To specify a new class for zero-shot segmentation, set the `label_prompt` to a value between 133 and 254. Ensure that `points` and `point_labels` are also provided; otherwise, the inference result will be a tensor of zeros.


-
-
## Model Architecture:
**Architecture Type:** Transformer <br>
**Network Architecture:** SAM-like<br>
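The hunk header above excerpts the snippet's "segment everything" class-list expression, truncated mid-way through its exclusion set. A minimal sketch of the pattern, using only the exclusion indices visible in the header (the real exclusion list in the snippet contains more entries):

```python
# Rebuild the "segment everything" label-prompt pattern from the hunk header.
# Only the exclusion indices visible in the truncated header are used here;
# the full exclusion set in the actual snippet is longer.
visible_exclusions = {2, 16, 18, 20, 21, 23, 24, 25, 26, 27, 128, 129}

# Classes are indexed 1..132; the prompt drops the excluded indices.
everything_prompt = sorted(set(range(1, 133)) - visible_exclusions)

print(len(everything_prompt))  # 120 classes remain with this partial exclusion set
```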
@@ -94,45 +108,6 @@ MONAI Core v.1.3 <br>
**[Preferred/Supported] Operating System(s):** <br>
* Linux <br>

-## Model Version(s):
-Internal ONLY
-0.1.9 <br>
-Version changelog: https://gitlab-master.nvidia.com/dlmed/vista3d_bundle/-/blob/main/configs/metadata.json
-
-# Training & Evaluation:
-## Training Dataset:
-Internal ONLY
-15 Datasets
-Name, JIRA/SWIPAT, Commercial, and # of Data Tracked
-"VISTA" Sheet: https://docs.google.com/spreadsheets/d/14frhzELquSF_-tF7yGFDBHmSdnp-9-5pmbONQx8iQWk/edit?usp=sharing
-
-## Evaluation Dataset:
-Internal ONLY
-15 Datasets
-Name, JIRA/SWIPAT, Commercial, and # of Data Tracked
-"VISTA" Sheet: https://docs.google.com/spreadsheets/d/14frhzELquSF_-tF7yGFDBHmSdnp-9-5pmbONQx8iQWk/edit?usp=sharing
-https://docs.google.com/spreadsheets/d/1hmv-O-f6tdgndsRnoqCgcunR2uQ9IySDhZWmjsXwgbM/edit?usp=sharing
-
-** Data Collection Method by dataset <br>
-* [Hybrid: Human, Automatic/Sensors] <br>
-
-** Labeling Method by dataset <br>
-* [Hybrid: Human, Automatic/Sensors] <br>
-
-**Properties:** Custom internal and public dataset of organs from multiple scanner types. <br>
-
-
-## Evaluation Dataset:
-
-** Data Collection Method by dataset <br>
-* [Hybrid: Human, Automatic/Sensors] <br>
-
-** Labeling Method by dataset <br>
-* [Hybrid: Human, Automatic/Sensors] <br>
-
-**Properties:** Custom internal and public dataset of organs from multiple scanner types. <br>
-
-
## Inference:
**Engine:** Triton <br>
**Test Hardware:**

@@ -141,10 +116,10 @@ H100<br>
L40<br>

## Ethical Considerations:
-NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse.

## Additional Information:
-The current list of classes available

"0": "background",
"1": "liver",

@@ -280,6 +255,12 @@ The current list of classes available within VISTA-3D:
"131": "vertebrae L6",
"132": "airway"
# License
## Code License

+---
+license: other
+license_name: nvidia-open-model-license-agreement
+license_link: https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license/
+pipeline_tag: image-segmentation
+library_name: monai
+tags:
+- nvidia
+- medical-imaging
+- ct
+- segmentation
+- vista3d
+---
+
+# NV-Segment-CT
+
+

## Description:
+NV-Segment-CT is a specialized interactive foundation model for 3D medical imaging. It excels in providing accurate and adaptable segmentation analysis across anatomies and modalities. Utilizing a multi-head architecture, NV-Segment-CT adapts to varying conditions and anatomical areas, helping guide users' annotation workflow.

This model is for research purposes and not for clinical usage.

+**Training & Fine-tuning**: Visit [GitHub](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR) for training code, fine-tuning guides, continual learning examples, and comprehensive development documentation.

+Core to NV-Segment-CT are three workflows:

- **Segment everything**: Enables whole body exploration, crucial for understanding complex diseases affecting multiple organs and for holistic treatment planning.
- **Segment using class**: Provides detailed sectional views based on specific classes, essential for targeted disease analysis or organ mapping, such as tumor identification in critical organs.
- **Segment point prompts**: Enhances segmentation precision through user-directed, click-based selection. This interactive approach accelerates the creation of accurate ground-truth data, essential in medical imaging analysis.

## Run pipeline:
+For running the pipeline, NV-Segment-CT requires at least one prompt for segmentation. It supports a label prompt, which is the index of the class for automatic segmentation, and point-click prompts for binary interactive segmentation. Users can provide both prompts at the same time.

Here is a code snippet to showcase how to execute inference with this model.
```python
- To specify a new class for zero-shot segmentation, set the `label_prompt` to a value between 133 and 254. Ensure that `points` and `point_labels` are also provided; otherwise, the inference result will be a tensor of zeros.
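The zero-shot note above can be made concrete with a small sketch. The list/nesting shapes below are illustrative assumptions (one prompt, one click), not the pipeline's documented signature:

```python
# Zero-shot request: label_prompt must fall in 133-254 and, per the note above,
# must be accompanied by points/point_labels or the output is all zeros.
# Structures here are illustrative assumptions, not the documented API.
label_prompt = [133]             # hypothetical new (zero-shot) class index
points = [[[64.0, 64.0, 32.0]]]  # (batch, num_points, xyz) voxel clicks
point_labels = [[1]]             # 1 = foreground click, 0 = background

# Guard against the all-zeros failure mode described above.
assert all(133 <= c <= 254 for c in label_prompt), "zero-shot classes are 133-254"
assert len(points[0]) == len(point_labels[0]), "each point needs a label"
```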
## Model Architecture:
**Architecture Type:** Transformer <br>
**Network Architecture:** SAM-like<br>

**[Preferred/Supported] Operating System(s):** <br>
* Linux <br>
## Inference:
**Engine:** Triton <br>
**Test Hardware:**
L40<br>
## Ethical Considerations:
+NVIDIA believes Trustworthy AI is a shared responsibility and we have established policies and practices to enable development for a wide array of AI applications. When downloaded or used in accordance with our terms of service, developers should work with their internal model team to ensure this model meets requirements for the relevant industry and use case and addresses unforeseen product misuse. Please report security vulnerabilities or NVIDIA AI Concerns [here](https://www.nvidia.com/en-us/support/submit-security-vulnerability/).
## Additional Information:
+The current list of classes available:
"0": "background",
"1": "liver",
"131": "vertebrae L6",
"132": "airway"
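The id-to-name listing above is an ordinary mapping, so it can be loaded as a dict and inverted to find the label-prompt index for a class name. The entries below reproduce only the ids visible in this diff excerpt:

```python
# Partial id -> name mapping, reproducing only entries visible in this diff.
labels = {
    0: "background",
    1: "liver",
    131: "vertebrae L6",
    132: "airway",
}

# Invert it to look up the label-prompt index for a class name.
name_to_id = {name: idx for idx, name in labels.items()}

print(name_to_id["airway"])  # 132
```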

+## Resources
+
+- **Training & Fine-tuning**: [GitHub Repository](https://github.com/NVIDIA-Medtech/NV-Segment-CTMR) - Comprehensive training guides, fine-tuning examples, and development documentation
+- **Sister Model**: [NV-Segment-CTMR](https://huggingface.co/nvidia/NV-Segment-CTMR) - Non-commercial model with CT+MRI support (345+ classes)
+- **Clara Medical Collection**: [View all NVIDIA medical AI models](https://huggingface.co/collections/nvidia/clara-medical)
+
# License
## Code License
|