Commit c78c28e (verified) · Parent: b8220e3
project-monai committed: Upload spleen_deepedit_annotation version 0.5.7
.gitattributes CHANGED
@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+models/model.ts filter=lfs diff=lfs merge=lfs -text
LICENSE ADDED
@@ -0,0 +1,201 @@
+                                 Apache License
+                           Version 2.0, January 2004
+                        http://www.apache.org/licenses/
+
+   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+   1. Definitions.
+
+      "License" shall mean the terms and conditions for use, reproduction,
+      and distribution as defined by Sections 1 through 9 of this document.
+
+      "Licensor" shall mean the copyright owner or entity authorized by
+      the copyright owner that is granting the License.
+
+      "Legal Entity" shall mean the union of the acting entity and all
+      other entities that control, are controlled by, or are under common
+      control with that entity. For the purposes of this definition,
+      "control" means (i) the power, direct or indirect, to cause the
+      direction or management of such entity, whether by contract or
+      otherwise, or (ii) ownership of fifty percent (50%) or more of the
+      outstanding shares, or (iii) beneficial ownership of such entity.
+
+      "You" (or "Your") shall mean an individual or Legal Entity
+      exercising permissions granted by this License.
+
+      "Source" form shall mean the preferred form for making modifications,
+      including but not limited to software source code, documentation
+      source, and configuration files.
+
+      "Object" form shall mean any form resulting from mechanical
+      transformation or translation of a Source form, including but
+      not limited to compiled object code, generated documentation,
+      and conversions to other media types.
+
+      "Work" shall mean the work of authorship, whether in Source or
+      Object form, made available under the License, as indicated by a
+      copyright notice that is included in or attached to the work
+      (an example is provided in the Appendix below).
+
+      "Derivative Works" shall mean any work, whether in Source or Object
+      form, that is based on (or derived from) the Work and for which the
+      editorial revisions, annotations, elaborations, or other modifications
+      represent, as a whole, an original work of authorship. For the purposes
+      of this License, Derivative Works shall not include works that remain
+      separable from, or merely link (or bind by name) to the interfaces of,
+      the Work and Derivative Works thereof.
+
+      "Contribution" shall mean any work of authorship, including
+      the original version of the Work and any modifications or additions
+      to that Work or Derivative Works thereof, that is intentionally
+      submitted to Licensor for inclusion in the Work by the copyright owner
+      or by an individual or Legal Entity authorized to submit on behalf of
+      the copyright owner. For the purposes of this definition, "submitted"
+      means any form of electronic, verbal, or written communication sent
+      to the Licensor or its representatives, including but not limited to
+      communication on electronic mailing lists, source code control systems,
+      and issue tracking systems that are managed by, or on behalf of, the
+      Licensor for the purpose of discussing and improving the Work, but
+      excluding communication that is conspicuously marked or otherwise
+      designated in writing by the copyright owner as "Not a Contribution."
+
+      "Contributor" shall mean Licensor and any individual or Legal Entity
+      on behalf of whom a Contribution has been received by Licensor and
+      subsequently incorporated within the Work.
+
+   2. Grant of Copyright License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      copyright license to reproduce, prepare Derivative Works of,
+      publicly display, publicly perform, sublicense, and distribute the
+      Work and such Derivative Works in Source or Object form.
+
+   3. Grant of Patent License. Subject to the terms and conditions of
+      this License, each Contributor hereby grants to You a perpetual,
+      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+      (except as stated in this section) patent license to make, have made,
+      use, offer to sell, sell, import, and otherwise transfer the Work,
+      where such license applies only to those patent claims licensable
+      by such Contributor that are necessarily infringed by their
+      Contribution(s) alone or by combination of their Contribution(s)
+      with the Work to which such Contribution(s) was submitted. If You
+      institute patent litigation against any entity (including a
+      cross-claim or counterclaim in a lawsuit) alleging that the Work
+      or a Contribution incorporated within the Work constitutes direct
+      or contributory patent infringement, then any patent licenses
+      granted to You under this License for that Work shall terminate
+      as of the date such litigation is filed.
+
+   4. Redistribution. You may reproduce and distribute copies of the
+      Work or Derivative Works thereof in any medium, with or without
+      modifications, and in Source or Object form, provided that You
+      meet the following conditions:
+
+      (a) You must give any other recipients of the Work or
+          Derivative Works a copy of this License; and
+
+      (b) You must cause any modified files to carry prominent notices
+          stating that You changed the files; and
+
+      (c) You must retain, in the Source form of any Derivative Works
+          that You distribute, all copyright, patent, trademark, and
+          attribution notices from the Source form of the Work,
+          excluding those notices that do not pertain to any part of
+          the Derivative Works; and
+
+      (d) If the Work includes a "NOTICE" text file as part of its
+          distribution, then any Derivative Works that You distribute must
+          include a readable copy of the attribution notices contained
+          within such NOTICE file, excluding those notices that do not
+          pertain to any part of the Derivative Works, in at least one
+          of the following places: within a NOTICE text file distributed
+          as part of the Derivative Works; within the Source form or
+          documentation, if provided along with the Derivative Works; or,
+          within a display generated by the Derivative Works, if and
+          wherever such third-party notices normally appear. The contents
+          of the NOTICE file are for informational purposes only and
+          do not modify the License. You may add Your own attribution
+          notices within Derivative Works that You distribute, alongside
+          or as an addendum to the NOTICE text from the Work, provided
+          that such additional attribution notices cannot be construed
+          as modifying the License.
+
+      You may add Your own copyright statement to Your modifications and
+      may provide additional or different license terms and conditions
+      for use, reproduction, or distribution of Your modifications, or
+      for any such Derivative Works as a whole, provided Your use,
+      reproduction, and distribution of the Work otherwise complies with
+      the conditions stated in this License.
+
+   5. Submission of Contributions. Unless You explicitly state otherwise,
+      any Contribution intentionally submitted for inclusion in the Work
+      by You to the Licensor shall be under the terms and conditions of
+      this License, without any additional terms or conditions.
+      Notwithstanding the above, nothing herein shall supersede or modify
+      the terms of any separate license agreement you may have executed
+      with Licensor regarding such Contributions.
+
+   6. Trademarks. This License does not grant permission to use the trade
+      names, trademarks, service marks, or product names of the Licensor,
+      except as required for reasonable and customary use in describing the
+      origin of the Work and reproducing the content of the NOTICE file.
+
+   7. Disclaimer of Warranty. Unless required by applicable law or
+      agreed to in writing, Licensor provides the Work (and each
+      Contributor provides its Contributions) on an "AS IS" BASIS,
+      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
+      implied, including, without limitation, any warranties or conditions
+      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
+      PARTICULAR PURPOSE. You are solely responsible for determining the
+      appropriateness of using or redistributing the Work and assume any
+      risks associated with Your exercise of permissions under this License.
+
+   8. Limitation of Liability. In no event and under no legal theory,
+      whether in tort (including negligence), contract, or otherwise,
+      unless required by applicable law (such as deliberate and grossly
+      negligent acts) or agreed to in writing, shall any Contributor be
+      liable to You for damages, including any direct, indirect, special,
+      incidental, or consequential damages of any character arising as a
+      result of this License or out of the use or inability to use the
+      Work (including but not limited to damages for loss of goodwill,
+      work stoppage, computer failure or malfunction, or any and all
+      other commercial damages or losses), even if such Contributor
+      has been advised of the possibility of such damages.
+
+   9. Accepting Warranty or Additional Liability. While redistributing
+      the Work or Derivative Works thereof, You may choose to offer,
+      and charge a fee for, acceptance of support, warranty, indemnity,
+      or other liability obligations and/or rights consistent with this
+      License. However, in accepting such obligations, You may act only
+      on Your own behalf and on Your sole responsibility, not on behalf
+      of any other Contributor, and only if You agree to indemnify,
+      defend, and hold each Contributor harmless for any liability
+      incurred by, or claims asserted against, such Contributor by reason
+      of your accepting any such warranty or additional liability.
+
+   END OF TERMS AND CONDITIONS
+
+   APPENDIX: How to apply the Apache License to your work.
+
+      To apply the Apache License to your work, attach the following
+      boilerplate notice, with the fields enclosed by brackets "[]"
+      replaced with your own identifying information. (Don't include
+      the brackets!) The text should be enclosed in the appropriate
+      comment syntax for the file format. We also recommend that a
+      file or class name and description of purpose be included on the
+      same "printed page" as the copyright notice for easier
+      identification within third-party archives.
+
+   Copyright [yyyy] [name of copyright owner]
+
+   Licensed under the Apache License, Version 2.0 (the "License");
+   you may not use this file except in compliance with the License.
+   You may obtain a copy of the License at
+
+       http://www.apache.org/licenses/LICENSE-2.0
+
+   Unless required by applicable law or agreed to in writing, software
+   distributed under the License is distributed on an "AS IS" BASIS,
+   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+   See the License for the specific language governing permissions and
+   limitations under the License.
configs/evaluate.json ADDED
@@ -0,0 +1,62 @@
+{
+    "validate#dataset#cache_rate": 0,
+    "validate#postprocessing": {
+        "_target_": "Compose",
+        "transforms": [
+            {
+                "_target_": "Activationsd",
+                "keys": "pred",
+                "softmax": true
+            },
+            {
+                "_target_": "AsDiscreted",
+                "keys": [
+                    "pred",
+                    "label"
+                ],
+                "argmax": [
+                    true,
+                    false
+                ],
+                "to_onehot": "$len(@label_names)+1"
+            },
+            {
+                "_target_": "SaveImaged",
+                "_disabled_": true,
+                "keys": "pred",
+                "output_dir": "@output_dir",
+                "resample": false,
+                "squeeze_end_dims": true
+            }
+        ]
+    },
+    "validate#handlers": [
+        {
+            "_target_": "CheckpointLoader",
+            "load_path": "$@ckpt_dir + '/model.pt'",
+            "load_dict": {
+                "model": "@network"
+            }
+        },
+        {
+            "_target_": "StatsHandler",
+            "iteration_log": false
+        },
+        {
+            "_target_": "MetricsSaver",
+            "save_dir": "@output_dir",
+            "metrics": [
+                "val_mean_dice",
+                "val_acc"
+            ],
+            "metric_details": [
+                "val_mean_dice"
+            ],
+            "batch_transform": "$lambda x: [xx['image'].meta for xx in x]",
+            "summary_ops": "*"
+        }
+    ],
+    "run": [
+        "$@validate#evaluator.run()"
+    ]
+}
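
Note: `configs/evaluate.json` is an override file; when several configs are passed to `monai.bundle`, later files override entries with matching ids in earlier ones, which is why this file only redefines `validate#...` keys and `run`. A minimal sketch of the same composition from Python (assuming the bundle root is the working directory):

```python
from monai.bundle import ConfigParser

parser = ConfigParser()
# later files override matching ids from earlier ones
parser.read_config(["configs/train.json", "configs/evaluate.json"])
print(parser["validate#dataset#cache_rate"])  # 0, overridden by evaluate.json
```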
configs/inference.json ADDED
@@ -0,0 +1,216 @@
+{
+    "imports": [
+        "$import glob",
+        "$import numpy",
+        "$import os",
+        "$import ignite"
+    ],
+    "bundle_root": ".",
+    "image_key": "image",
+    "output_dir": "$@bundle_root + '/eval'",
+    "output_ext": ".nii.gz",
+    "output_dtype": "$numpy.float32",
+    "output_postfix": "trans",
+    "separate_folder": true,
+    "load_pretrain": true,
+    "dataset_dir": "/workspace/Datasets/MSD_datasets/Task09_Spleen",
+    "datalist": "$list(sorted(glob.glob(@dataset_dir + '/imagesTs/*.nii.gz')))",
+    "label_names": {
+        "spleen": 1,
+        "background": 0
+    },
+    "spatial_size": [
+        128,
+        128,
+        128
+    ],
+    "number_intensity_ch": 1,
+    "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
+    "network_def": {
+        "_target_": "DynUNet",
+        "spatial_dims": 3,
+        "in_channels": "$len(@label_names) + @number_intensity_ch",
+        "out_channels": "$len(@label_names)",
+        "kernel_size": [
+            3,
+            3,
+            3,
+            3,
+            3,
+            3
+        ],
+        "strides": [
+            1,
+            2,
+            2,
+            2,
+            2,
+            [
+                2,
+                2,
+                1
+            ]
+        ],
+        "upsample_kernel_size": [
+            2,
+            2,
+            2,
+            2,
+            [
+                2,
+                2,
+                1
+            ]
+        ],
+        "norm_name": "instance",
+        "deep_supervision": false,
+        "res_block": true
+    },
+    "network": "$@network_def.to(@device)",
+    "preprocessing_transforms": [
+        {
+            "_target_": "LoadImaged",
+            "keys": "@image_key",
+            "reader": "ITKReader"
+        },
+        {
+            "_target_": "EnsureChannelFirstd",
+            "keys": "@image_key"
+        },
+        {
+            "_target_": "Orientationd",
+            "keys": "@image_key",
+            "axcodes": "RAS"
+        },
+        {
+            "_target_": "ScaleIntensityRanged",
+            "keys": "@image_key",
+            "a_min": -175,
+            "a_max": 250,
+            "b_min": 0.0,
+            "b_max": 1.0,
+            "clip": true
+        }
+    ],
+    "deepedit_transforms": [
+        {
+            "_target_": "scripts.transforms.OrientationGuidanceMultipleLabelDeepEditd",
+            "ref_image": "@image_key",
+            "label_names": "@label_names"
+        },
+        {
+            "_target_": "AddGuidanceFromPointsDeepEditd",
+            "ref_image": "@image_key",
+            "guidance": "guidance",
+            "label_names": "@label_names"
+        },
+        {
+            "_target_": "Resized",
+            "keys": "@image_key",
+            "spatial_size": "@spatial_size",
+            "mode": "area"
+        },
+        {
+            "_target_": "ResizeGuidanceMultipleLabelDeepEditd",
+            "guidance": "guidance",
+            "ref_image": "@image_key"
+        },
+        {
+            "_target_": "AddGuidanceSignalDeepEditd",
+            "keys": "@image_key",
+            "guidance": "guidance",
+            "number_intensity_ch": "@number_intensity_ch"
+        }
+    ],
+    "extra_transforms": [
+        {
+            "_target_": "EnsureTyped",
+            "keys": "@image_key"
+        }
+    ],
+    "preprocessing": {
+        "_target_": "Compose",
+        "transforms": "$@preprocessing_transforms + @deepedit_transforms + @extra_transforms"
+    },
+    "dataset": {
+        "_target_": "Dataset",
+        "data": "$[{'image': i} for i in @datalist]",
+        "transform": "@preprocessing"
+    },
+    "dataloader": {
+        "_target_": "DataLoader",
+        "dataset": "@dataset",
+        "batch_size": 1,
+        "shuffle": false,
+        "num_workers": 2
+    },
+    "inferer": {
+        "_target_": "SimpleInferer"
+    },
+    "postprocessing": {
+        "_target_": "Compose",
+        "transforms": [
+            {
+                "_target_": "EnsureTyped",
+                "keys": "pred"
+            },
+            {
+                "_target_": "Activationsd",
+                "keys": "pred",
+                "softmax": true
+            },
+            {
+                "_target_": "Invertd",
+                "keys": "pred",
+                "transform": "@preprocessing",
+                "orig_keys": "@image_key",
+                "nearest_interp": false,
+                "to_tensor": true
+            },
+            {
+                "_target_": "AsDiscreted",
+                "keys": "pred",
+                "argmax": true
+            },
+            {
+                "_target_": "SaveImaged",
+                "keys": "pred",
+                "output_dir": "@output_dir",
+                "output_ext": "@output_ext",
+                "output_dtype": "@output_dtype",
+                "output_postfix": "@output_postfix",
+                "separate_folder": "@separate_folder"
+            }
+        ]
+    },
+    "handlers": [
+        {
+            "_target_": "StatsHandler",
+            "iteration_log": false
+        }
+    ],
+    "evaluator": {
+        "_target_": "SupervisedEvaluator",
+        "device": "@device",
+        "val_data_loader": "@dataloader",
+        "network": "@network",
+        "inferer": "@inferer",
+        "postprocessing": "@postprocessing",
+        "val_handlers": "@handlers",
+        "amp": true
+    },
+    "checkpointloader": {
+        "_target_": "CheckpointLoader",
+        "load_path": "$@bundle_root + '/models/model.pt'",
+        "load_dict": {
+            "model": "@network"
+        }
+    },
+    "initialize": [
+        "$monai.utils.set_determinism(seed=123)",
+        "$@checkpointloader(@evaluator) if @load_pretrain else None"
+    ],
+    "run": [
+        "$@evaluator.run()"
+    ]
+}
configs/inference_trt.json ADDED
@@ -0,0 +1,13 @@
+{
+    "imports": [
+        "$import glob",
+        "$import os",
+        "$import ignite",
+        "$import torch_tensorrt"
+    ],
+    "network_def": "$torch.jit.load(@bundle_root + '/models/model_trt.ts')",
+    "evaluator#amp": false,
+    "initialize": [
+        "$monai.utils.set_determinism(seed=123)"
+    ]
+}
configs/logging.conf ADDED
@@ -0,0 +1,21 @@
+[loggers]
+keys=root
+
+[handlers]
+keys=consoleHandler
+
+[formatters]
+keys=fullFormatter
+
+[logger_root]
+level=INFO
+handlers=consoleHandler
+
+[handler_consoleHandler]
+class=StreamHandler
+level=INFO
+formatter=fullFormatter
+args=(sys.stdout,)
+
+[formatter_fullFormatter]
+format=%(asctime)s - %(name)s - %(levelname)s - %(message)s
configs/metadata.json ADDED
@@ -0,0 +1,112 @@
+{
+    "schema": "https://github.com/Project-MONAI/MONAI-extra-test-data/releases/download/0.8.1/meta_schema_20240725.json",
+    "version": "0.5.7",
+    "changelog": {
+        "0.5.7": "update to huggingface hosting",
+        "0.5.6": "use monai 1.4 and update large files",
+        "0.5.5": "update to use monai 1.3.1",
+        "0.5.4": "add load_pretrain flag for infer",
+        "0.5.3": "update to use monai 1.3.0",
+        "0.5.2": "update the checkpoint loader logic for inference",
+        "0.5.1": "add option to validate at training start, and I/O param entries",
+        "0.5.0": "enable finetune and early stop",
+        "0.4.9": "fix orientation issue on clicks",
+        "0.4.8": "Add infer transforms to manage clicks from viewer",
+        "0.4.7": "fix the wrong GPU index issue of multi-node",
+        "0.4.6": "update to use rc7 which solves dynunet issue",
+        "0.4.5": "remove error dollar symbol in readme",
+        "0.4.4": "add RAM comsumption with Cachedataset",
+        "0.4.3": "update ONNX-TensorRT descriptions",
+        "0.4.2": "deterministic retrain benchmark, update fig links",
+        "0.4.1": "add the ONNX-TensorRT way of model conversion",
+        "0.4.0": "fix mgpu finalize issue",
+        "0.3.9": "enable deterministic training",
+        "0.3.8": "adapt to BundleWorkflow interface",
+        "0.3.7": "add name tag",
+        "0.3.6": "restructure readme to match updated template",
+        "0.3.5": "update metric in metadata",
+        "0.3.4": "add validate.json file and dice score in readme",
+        "0.3.3": "update to use monai 1.0.1",
+        "0.3.2": "enhance readme on commands example",
+        "0.3.1": "fix license Copyright error",
+        "0.3.0": "update license files",
+        "0.2.0": "unify naming",
+        "0.1.0": "complete the model package",
+        "0.0.1": "initialize the model package structure"
+    },
+    "monai_version": "1.4.0",
+    "pytorch_version": "2.4.0",
+    "numpy_version": "1.24.4",
+    "required_packages_version": {
+        "itk": "5.4.0",
+        "pytorch-ignite": "0.4.11",
+        "scikit-image": "0.23.2",
+        "einops": "0.7.0",
+        "tensorboard": "2.17.0",
+        "nibabel": "5.2.1"
+    },
+    "supported_apps": {},
+    "name": "Spleen DeepEdit annotation",
+    "task": "Decathlon spleen segmentation",
+    "description": "This is a pre-trained model for 3D segmentation of the spleen organ from CT images using DeepEdit.",
+    "authors": "MONAI team",
+    "copyright": "Copyright (c) MONAI Consortium",
+    "data_source": "Task09_Spleen.tar from http://medicaldecathlon.com/",
+    "data_type": "nibabel",
+    "image_classes": "single channel data, intensity scaled to [0, 1]",
+    "label_classes": "single channel data, 1 is spleen, 0 is background",
+    "pred_classes": "2 channels OneHot data, channel 1 is spleen, channel 0 is background",
+    "eval_metrics": {
+        "mean_dice": 0.97
+    },
+    "intended_use": "This is an example, not to be used for diagnostic purposes",
+    "references": [
+        "Sakinis, Tomas, et al. 'Interactive segmentation of medical images through fully convolutional neural networks.' arXiv preprint arXiv:1903.08205 (2019)"
+    ],
+    "network_data_format": {
+        "inputs": {
+            "image": {
+                "type": "image",
+                "format": "hounsfield",
+                "modality": "CT",
+                "num_channels": 3,
+                "spatial_shape": [
+                    128,
+                    128,
+                    128
+                ],
+                "dtype": "float32",
+                "value_range": [
+                    0,
+                    1
+                ],
+                "is_patch_data": false,
+                "channel_def": {
+                    "0": "image"
+                }
+            }
+        },
+        "outputs": {
+            "pred": {
+                "type": "image",
+                "format": "segmentation",
+                "num_channels": 2,
+                "spatial_shape": [
+                    128,
+                    128,
+                    128
+                ],
+                "dtype": "float32",
+                "value_range": [
+                    0,
+                    1
+                ],
+                "is_patch_data": false,
+                "channel_def": {
+                    "0": "background",
+                    "1": "spleen"
+                }
+            }
+        }
+    }
+}
configs/multi_gpu_train.json ADDED
@@ -0,0 +1,40 @@
+{
+    "device": "$torch.device('cuda:' + os.environ['LOCAL_RANK'])",
+    "network": {
+        "_target_": "torch.nn.parallel.DistributedDataParallel",
+        "module": "$@network_def.to(@device)",
+        "device_ids": [
+            "@device"
+        ]
+    },
+    "train#sampler": {
+        "_target_": "DistributedSampler",
+        "dataset": "@train#dataset",
+        "even_divisible": true,
+        "shuffle": true
+    },
+    "train#dataloader#sampler": "@train#sampler",
+    "train#dataloader#shuffle": false,
+    "train#trainer#train_handlers": "$@train#handlers[: -2 if dist.get_rank() > 0 else None]",
+    "validate#sampler": {
+        "_target_": "DistributedSampler",
+        "dataset": "@validate#dataset",
+        "even_divisible": false,
+        "shuffle": false
+    },
+    "validate#dataloader#sampler": "@validate#sampler",
+    "validate#evaluator#val_handlers": "$@validate#handlers[: -3 if dist.get_rank() > 0 else None]",
+    "initialize": [
+        "$import torch.distributed as dist",
+        "$dist.is_initialized() or dist.init_process_group(backend='nccl')",
+        "$torch.cuda.set_device(@device)",
+        "$monai.utils.set_determinism(seed=123)"
+    ],
+    "run": [
+        "$@validate#handlers#0.set_trainer(trainer=@train#trainer) if @early_stop else None",
+        "$@train#trainer.run()"
+    ],
+    "finalize": [
+        "$dist.is_initialized() and dist.destroy_process_group()"
+    ]
+}
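
One detail worth noting in this file: the handler lists are sliced per rank so that logging and checkpointing only happen on rank 0. For instance, `$@train#handlers[: -2 if dist.get_rank() > 0 else None]` keeps the full list on rank 0 and drops the last two entries (the stats/TensorBoard handlers) elsewhere. A plain-Python illustration of the slice semantics:

```python
# handler order as defined in configs/train.json
handlers = ["CheckpointLoader", "LrScheduleHandler", "ValidationHandler",
            "StatsHandler", "TensorBoardStatsHandler"]

rank = 1
print(handlers[: -2 if rank > 0 else None])  # non-zero rank: logging handlers dropped

rank = 0
print(handlers[: -2 if rank > 0 else None])  # rank 0: full list (slice end is None)
```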
configs/train.json ADDED
@@ -0,0 +1,458 @@
+{
+    "imports": [
+        "$import glob",
+        "$import os",
+        "$import ignite",
+        "$import scripts"
+    ],
+    "bundle_root": ".",
+    "ckpt_dir": "$@bundle_root + '/models'",
+    "output_dir": "$@bundle_root + '/eval'",
+    "dataset_dir": "/workspace/Datasets/MSD_datasets/Task09_Spleen",
+    "images": "$list(sorted(glob.glob(@dataset_dir + '/imagesTr/*.nii.gz')))",
+    "labels": "$list(sorted(glob.glob(@dataset_dir + '/labelsTr/*.nii.gz')))",
+    "label_names": {
+        "spleen": 1,
+        "background": 0
+    },
+    "finetune": false,
+    "finetune_model_path": "$@bundle_root + '/models/model.pt'",
+    "early_stop": false,
+    "epochs": 500,
+    "spatial_size": [
+        128,
+        128,
+        128
+    ],
+    "number_intensity_ch": 1,
+    "deepgrow_probability_train": 0.4,
+    "deepgrow_probability_val": 1.0,
+    "val_interval": 1,
+    "val_at_start": false,
+    "device": "$torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')",
+    "network_def": {
+        "_target_": "DynUNet",
+        "spatial_dims": 3,
+        "in_channels": "$len(@label_names) + @number_intensity_ch",
+        "out_channels": "$len(@label_names)",
+        "kernel_size": [
+            3,
+            3,
+            3,
+            3,
+            3,
+            3
+        ],
+        "strides": [
+            1,
+            2,
+            2,
+            2,
+            2,
+            [
+                2,
+                2,
+                1
+            ]
+        ],
+        "upsample_kernel_size": [
+            2,
+            2,
+            2,
+            2,
+            [
+                2,
+                2,
+                1
+            ]
+        ],
+        "norm_name": "instance",
+        "deep_supervision": false,
+        "res_block": true
+    },
+    "network": "$@network_def.to(@device)",
+    "loss": {
+        "_target_": "DiceCELoss",
+        "to_onehot_y": true,
+        "softmax": true
+    },
+    "optimizer": {
+        "_target_": "torch.optim.Adam",
+        "params": "$@network.parameters()",
+        "lr": 0.0001
+    },
+    "lr_scheduler": {
+        "_target_": "torch.optim.lr_scheduler.StepLR",
+        "optimizer": "@optimizer",
+        "step_size": 1000,
+        "gamma": 0.1
+    },
+    "train": {
+        "preprocessing_transforms": [
+            {
+                "_target_": "LoadImaged",
+                "keys": [
+                    "image",
+                    "label"
+                ],
+                "reader": "ITKReader"
+            },
+            {
+                "_target_": "NormalizeLabelsInDatasetd",
+                "keys": "label",
+                "label_names": "@label_names"
+            },
+            {
+                "_target_": "EnsureChannelFirstd",
+                "keys": [
+                    "image",
+                    "label"
+                ]
+            },
+            {
+                "_target_": "Orientationd",
+                "keys": [
+                    "image",
+                    "label"
+                ],
+                "axcodes": "RAS"
+            },
+            {
+                "_target_": "ScaleIntensityRanged",
+                "keys": "image",
+                "a_min": -175,
+                "a_max": 250,
+                "b_min": 0.0,
+                "b_max": 1.0,
+                "clip": true
+            }
+        ],
+        "random_transforms": [
+            {
+                "_target_": "RandFlipd",
+                "keys": [
+                    "image",
+                    "label"
+                ],
+                "spatial_axis": [
+                    0
+                ],
+                "prob": 0.1
+            },
+            {
+                "_target_": "RandFlipd",
+                "keys": [
+                    "image",
+                    "label"
+                ],
+                "spatial_axis": [
+                    1
+                ],
+                "prob": 0.1
+            },
+            {
+                "_target_": "RandFlipd",
+                "keys": [
+                    "image",
+                    "label"
+                ],
+                "spatial_axis": [
+                    2
+                ],
+                "prob": 0.1
+            },
+            {
+                "_target_": "RandRotate90d",
+                "keys": [
+                    "image",
+                    "label"
+                ],
+                "prob": 0.1,
+                "max_k": 3
+            },
+            {
+                "_target_": "RandShiftIntensityd",
+                "keys": "image",
+                "offsets": 0.1,
+                "prob": 0.5
+            }
+        ],
+        "deepedit_transforms": [
+            {
+                "_target_": "Resized",
+                "keys": [
+                    "image",
+                    "label"
+                ],
+                "spatial_size": "@spatial_size",
+                "mode": [
+                    "area",
+                    "nearest"
+                ]
+            },
+            {
+                "_target_": "FindAllValidSlicesMissingLabelsd",
+                "keys": "label",
+                "sids": "sids"
+            },
+            {
+                "_target_": "AddInitialSeedPointMissingLabelsd",
+                "keys": "label",
+                "guidance": "guidance",
+                "sids": "sids"
+            },
+            {
+                "_target_": "AddGuidanceSignalDeepEditd",
+                "keys": "image",
+                "guidance": "guidance",
+                "number_intensity_ch": "@number_intensity_ch"
+            },
+            {
+                "_target_": "ToTensord",
+                "keys": [
+                    "image",
+                    "label"
+                ]
+            }
+        ],
+        "preprocessing": {
+            "_target_": "Compose",
+            "transforms": "$@train#preprocessing_transforms + @train#random_transforms + @train#deepedit_transforms"
+        },
+        "click_transforms": {
+            "_target_": "Compose",
+            "transforms": [
+                {
+                    "_target_": "Activationsd",
+                    "keys": "pred",
+                    "softmax": true
+                },
+                {
+                    "_target_": "AsDiscreted",
+                    "keys": "pred",
+                    "argmax": true
+                },
+                {
+                    "_target_": "ToNumpyd",
+                    "keys": [
+                        "image",
+                        "label",
+                        "pred"
+                    ]
+                },
+                {
+                    "_target_": "FindDiscrepancyRegionsDeepEditd",
+                    "keys": "label",
+                    "pred": "pred",
+                    "discrepancy": "discrepancy"
+                },
+                {
+                    "_target_": "AddRandomGuidanceDeepEditd",
+                    "keys": "NA",
+                    "guidance": "guidance",
+                    "discrepancy": "discrepancy",
+                    "probability": "probability"
+                },
+                {
+                    "_target_": "AddGuidanceSignalDeepEditd",
+                    "keys": "image",
+                    "guidance": "guidance",
+                    "number_intensity_ch": "@number_intensity_ch"
+                },
+                {
+                    "_target_": "ToTensord",
+                    "keys": [
+                        "image",
+                        "label"
+                    ]
+                }
+            ]
+        },
+        "dataset": {
+            "_target_": "CacheDataset",
+            "data": "$[{'image': i, 'label': l} for i, l in zip(@images[:-9], @labels[:-9])]",
+            "transform": "@train#preprocessing",
+            "cache_rate": 1.0,
+            "num_workers": 4
+        },
+        "dataloader": {
+            "_target_": "DataLoader",
+            "dataset": "@train#dataset",
+            "batch_size": 1,
+            "shuffle": true,
+            "num_workers": 0
+        },
+        "inferer": {
+            "_target_": "SimpleInferer"
+        },
+        "postprocessing": {
+            "_target_": "Compose",
+            "transforms": [
+                {
+                    "_target_": "Activationsd",
+                    "keys": "pred",
+                    "softmax": true
+                },
+                {
+                    "_target_": "AsDiscreted",
+                    "keys": [
+                        "pred",
+                        "label"
+                    ],
+                    "argmax": [
+                        true,
+                        false
+                    ],
+                    "to_onehot": "$len(@label_names)+1"
+                }
+            ]
+        },
+        "handlers": [
+            {
+                "_target_": "CheckpointLoader",
+                "_disabled_": "$not @finetune",
+                "load_path": "@finetune_model_path",
+                "load_dict": {
+                    "model": "@network"
+                }
+            },
+            {
+                "_target_": "LrScheduleHandler",
+                "lr_scheduler": "@lr_scheduler",
+                "print_lr": true
+            },
+            {
+                "_target_": "ValidationHandler",
+                "validator": "@validate#evaluator",
+                "epoch_level": true,
+                "exec_at_start": "@val_at_start",
+                "interval": "@val_interval"
+            },
+            {
+                "_target_": "StatsHandler",
+                "tag_name": "train_loss",
+                "output_transform": "$monai.handlers.from_engine(['loss'], first=True)"
+            },
+            {
+                "_target_": "TensorBoardStatsHandler",
+                "log_dir": "@output_dir",
+                "tag_name": "train_loss",
+                "output_transform": "$monai.handlers.from_engine(['loss'], first=True)"
+            }
+        ],
+        "key_metric": {
+            "train_dice": {
+                "_target_": "MeanDice",
+                "output_transform": "$monai.handlers.from_engine(['pred', 'label'])"
+            }
+        },
+        "train_iteration_update": {
+            "_target_": "Interaction",
+            "deepgrow_probability": "@deepgrow_probability_train",
+            "transforms": "@train#click_transforms",
+            "click_probability_key": "probability",
+            "train": true,
+            "label_names": "@label_names"
+        },
+        "trainer": {
+            "_target_": "SupervisedTrainer",
+            "device": "@device",
+            "max_epochs": "@epochs",
+            "train_data_loader": "@train#dataloader",
+            "network": "@network",
+            "optimizer": "@optimizer",
+            "loss_function": "@loss",
+            "inferer": "@train#inferer",
+            "amp": true,
+            "postprocessing": "@train#postprocessing",
+            "key_train_metric": "@train#key_metric",
+            "train_handlers": "@train#handlers",
+            "iteration_update": "@train#train_iteration_update"
+        }
+    },
+    "validate": {
+        "preprocessing": {
+            "_target_": "Compose",
+            "transforms": "$@train#preprocessing_transforms + @train#deepedit_transforms"
+        },
+        "dataset": {
+            "_target_": "CacheDataset",
+            "data": "$[{'image': i, 'label': l} for i, l in zip(@images[-9:], @labels[-9:])]",
+            "transform": "@validate#preprocessing",
+            "cache_rate": 1.0,
+            "num_workers": 4
+        },
+        "dataloader": {
+            "_target_": "DataLoader",
+            "dataset": "@validate#dataset",
+            "batch_size": 1,
+            "shuffle": false,
+            "num_workers": 0
+        },
+        "inferer": {
+            "_target_": "SimpleInferer"
+        },
+        "postprocessing": "%train#postprocessing",
+        "handlers": [
+            {
+                "_target_": "EarlyStopHandler",
+                "_disabled_": "$not @early_stop",
+                "trainer": null,
+                "patience": 1,
+                "score_function": "$scripts.score_function",
+                "min_delta": 0.01
+            },
+            {
+                "_target_": "StatsHandler",
+                "iteration_log": false
+            },
+            {
+                "_target_": "TensorBoardStatsHandler",
+                "log_dir": "@output_dir",
+                "iteration_log": false
+            },
+            {
+                "_target_": "CheckpointSaver",
+                "save_dir": "@ckpt_dir",
+                "save_dict": {
+                    "model": "@network"
+                },
+                "save_key_metric": true,
+                "key_metric_filename": "model.pt"
+            }
+        ],
+        "key_metric": {
+            "val_mean_dice": {
+                "_target_": "MeanDice",
+                "output_transform": "$monai.handlers.from_engine(['pred', 'label'])"
+            }
+        },
+        "val_iteration_update": {
+            "_target_": "Interaction",
+            "deepgrow_probability": "@deepgrow_probability_val",
+            "transforms": "@train#click_transforms",
+            "click_probability_key": "probability",
+            "train": false,
+            "label_names": "@label_names"
+        },
+        "evaluator": {
+            "_target_": "SupervisedEvaluator",
+            "device": "@device",
+            "val_data_loader": "@validate#dataloader",
+            "network": "@network",
+            "inferer": "@validate#inferer",
+            "postprocessing": "@validate#postprocessing",
+            "key_val_metric": "@validate#key_metric",
+            "val_handlers": "@validate#handlers",
+            "iteration_update": "@validate#val_iteration_update",
+            "amp": true
+        }
+    },
+    "initialize": [
+        "$monai.utils.set_determinism(seed=123)"
+    ],
+    "run": [
+        "$@validate#handlers#0.set_trainer(trainer=@train#trainer) if @early_stop else None",
+        "$@train#trainer.run()"
+    ]
+}
docs/README.md ADDED
@@ -0,0 +1,167 @@
+# Model Overview
+A pre-trained model for 3D segmentation of the spleen organ from CT images using DeepEdit.
+
+DeepEdit is an algorithm that combines the power of two models in a single architecture. It allows the user to perform inference as a standard segmentation method (i.e., UNet) and to interactively segment parts of an image using clicks [2]. DeepEdit aims to facilitate the user experience and, at the same time, enable the development of new active learning techniques.
+
+The model was trained on 32 images and validated on 9 images.
+
+## Data
+The training dataset is the Spleen Task from the Medical Segmentation Decathlon. Users can find more details on the datasets at http://medicaldecathlon.com/.
+
+- Target: Spleen
+- Modality: CT
+- Size: 61 3D volumes (41 Training + 20 Testing)
+- Source: Memorial Sloan Kettering Cancer Center
+- Challenge: Large-ranging foreground size
+
+## Training configuration
+The training was performed with the following:
+- GPU: at least 12GB of GPU memory
+- Actual Model Input: 128 x 128 x 128
+- AMP: True
+- Optimizer: Adam
+- Learning Rate: 1e-4
+- Loss: DiceCELoss
+
+### Input
+Three channels
+- CT image
+- Spleen Segment
+- Background Segment
+
+### Output
+Two channels
+- Label 1: spleen
+- Label 0: everything else
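+
+For concreteness, these channel counts match the `network_def` in the bundle configs: a `DynUNet` with 3 input channels (1 CT channel plus 2 guidance channels) and 2 output channels. A minimal sketch instantiating it and checking the expected shapes (the zero-filled input is illustrative only):
+
+```python
+import torch
+from monai.networks.nets import DynUNet
+
+# mirrors "network_def" in configs/inference.json and configs/train.json
+net = DynUNet(
+    spatial_dims=3,
+    in_channels=3,
+    out_channels=2,
+    kernel_size=[3, 3, 3, 3, 3, 3],
+    strides=[1, 2, 2, 2, 2, [2, 2, 1]],
+    upsample_kernel_size=[2, 2, 2, 2, [2, 2, 1]],
+    norm_name="instance",
+    deep_supervision=False,
+    res_block=True,
+)
+x = torch.zeros(1, 3, 128, 128, 128)  # [batch, channels, H, W, D]
+print(net(x).shape)  # torch.Size([1, 2, 128, 128, 128])
+```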
+
+## Performance
+
+Dice score is used for evaluating the performance of the model. This model achieves a Dice score of 0.97, depending on the number of simulated clicks.
+
+#### Training Dice
+![A graph showing the train dice over 90 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_spleen_deepedit_annotation_train_dice_v2.png)
+
+#### Training Loss
+![A graph showing the training loss over 90 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_spleen_deepedit_annotation_train_loss_v2.png)
+
+#### Validation Dice
+![A graph showing the validation dice over 90 epochs.](https://developer.download.nvidia.com/assets/Clara/Images/monai_spleen_deepedit_annotation_val_dice_v2.png)
+
+#### TensorRT speedup
+The `spleen_deepedit_annotation` bundle supports acceleration with TensorRT through the ONNX-TensorRT method. The table below displays the speedup ratios observed on an A100 80G GPU.
+
+| method | torch_fp32(ms) | torch_amp(ms) | trt_fp32(ms) | trt_fp16(ms) | speedup amp | speedup fp32 | speedup fp16 | amp vs fp16 |
+| :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: | :---: |
+| model computation | 147.52 | 40.32 | 28.87 | 11.94 | 3.66 | 5.11 | 12.36 | 3.38 |
+| end2end | 1292.39 | 1204.62 | 1168.09 | 1149.88 | 1.07 | 1.11 | 1.12 | 1.05 |
+
+Where:
+- `model computation` measures the model's inference with a random input, excluding preprocessing and postprocessing.
+- `end2end` means running the bundle end-to-end with the TensorRT-based model.
+- `torch_fp32` and `torch_amp` refer to the PyTorch model without and with `amp` mode, respectively.
+- `trt_fp32` and `trt_fp16` refer to the TensorRT-based models converted at the corresponding precision.
+- `speedup amp`, `speedup fp32`, and `speedup fp16` are the speedup ratios of the corresponding models versus the PyTorch float32 model.
+- `amp vs fp16` is the speedup ratio between the PyTorch amp model and the TensorRT float16-based model.
+
+Currently, the only available method to accelerate this model is through ONNX-TensorRT. However, the Torch-TensorRT method is under development and will be available in the near future.
+
+This result is benchmarked under:
+- TensorRT: 8.5.3+cuda11.8
+- Torch-TensorRT Version: 1.4.0
+- CPU Architecture: x86-64
+- OS: ubuntu 20.04
+- Python version: 3.8.10
+- CUDA version: 12.0
+- GPU models and configuration: A100 80G
+
+### Memory Consumption
+
+- Dataset Manager: CacheDataset
+- Data Size: 61 3D Volumes
+- Cache Rate: 1.0
+- Single GPU - System RAM Usage: 8.2G
+
+### Memory Consumption Warning
+
+If you face memory issues with CacheDataset, you can either switch to a regular Dataset class or lower the caching rate (`cache_rate`, within the range [0, 1]) in the configurations to reduce the system RAM requirements; an example override is shown below.
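+
+For example, `cache_rate` can be lowered at runtime without editing the config file, using the standard bundle override syntax (the value 0.5 here is illustrative only):
+
+```
+python -m monai.bundle run --config_file configs/train.json --train#dataset#cache_rate 0.5
+```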
+
+## MONAI Bundle Commands
+In addition to the Pythonic APIs, a few command line interfaces (CLI) are provided to interact with the bundle. The CLI supports flexible use cases, such as overriding configs at runtime and predefining arguments in a file.
+
+For more detailed usage instructions, visit the [MONAI Bundle Configuration Page](https://docs.monai.io/en/latest/config_syntax.html).
+
+#### Execute training:
+
+```
+python -m monai.bundle run --config_file configs/train.json
+```
+
+Please note that if the default dataset path has not been replaced with the actual path in the bundle config files, you can override it with `--dataset_dir`:
+
+```
+python -m monai.bundle run --config_file configs/train.json --dataset_dir <actual dataset path>
+```
+
+#### Override the `train` config to execute multi-GPU training:
+
+```
+torchrun --standalone --nnodes=1 --nproc_per_node=2 -m monai.bundle run --config_file "['configs/train.json','configs/multi_gpu_train.json']"
+```
+
+Please note that the distributed training-related options depend on the actual running environment; thus, users may need to remove `--standalone`, modify `--nnodes`, or make other necessary changes according to the machine used. For more details, please refer to [pytorch's official tutorial](https://pytorch.org/tutorials/intermediate/ddp_tutorial.html).
+
+#### Override the `train` config to execute evaluation with the trained model:
+
+```
+python -m monai.bundle run --config_file "['configs/train.json','configs/evaluate.json']"
+```
+
+#### Execute inference:
+
+```
+python -m monai.bundle run --config_file configs/inference.json
+```
+
+Optionally, clicks can be added to the data dictionary that is passed to the preprocessing transforms. The added keys are defined in `label_names` in `configs/inference.json`, and the corresponding values are the point coordinates. The following is an example of a data dictionary:
+
+```
+{"image": "example.nii.gz", "background": [], "spleen": [[I1, J1, K1], [I2, J2, K2]]}
+```
+where **[I1,J1,K1]** and **[I2,J2,K2]** are the point coordinates.
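+
+As a minimal sketch, such a dictionary can also be fed to the bundle's preprocessing directly from Python (assuming the bundle root is the working directory so that `scripts` is importable; the file name and click coordinates below are placeholders):
+
+```python
+from monai.bundle import ConfigParser
+
+parser = ConfigParser()
+parser.read_config("configs/inference.json")
+
+# build the "preprocessing" Compose defined in the config
+preprocess = parser.get_parsed_content("preprocessing")
+
+sample = {"image": "example.nii.gz", "background": [], "spleen": [[100, 120, 60]]}
+data = preprocess(sample)
+print(data["image"].shape)  # 3 channels: CT image + spleen/background guidance signals
+```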
+
+#### Export checkpoint to TensorRT based models with fp32 or fp16 precision:
+
+```bash
+python -m monai.bundle trt_export --net_id network_def \
+--filepath models/model_trt.ts --ckpt_file models/model.pt \
+--meta_file configs/metadata.json --config_file configs/inference.json \
+--precision <fp32/fp16> --use_onnx "True" --use_trace "True"
+```
+
+#### Execute inference with the TensorRT model:
+
+```
+python -m monai.bundle run --config_file "['configs/inference.json', 'configs/inference_trt.json']"
+```
+
+# References
+[1] Diaz-Pinto, Andres, et al. "DeepEdit: Deep Editable Learning for Interactive Segmentation of 3D Medical Images." MICCAI Workshop on Data Augmentation, Labelling, and Imperfections. MICCAI 2022.
+
+[2] Diaz-Pinto, Andres, et al. "MONAI Label: A framework for AI-assisted Interactive Labeling of 3D Medical Images." arXiv preprint arXiv:2203.12362 (2022).
+
+[3] Sakinis, Tomas, et al. "Interactive segmentation of medical images through fully convolutional neural networks." arXiv preprint arXiv:1903.08205 (2019).
+
+# License
+Copyright (c) MONAI Consortium
+
+Licensed under the Apache License, Version 2.0 (the "License");
+you may not use this file except in compliance with the License.
+You may obtain a copy of the License at
+
+http://www.apache.org/licenses/LICENSE-2.0
+
+Unless required by applicable law or agreed to in writing, software
+distributed under the License is distributed on an "AS IS" BASIS,
+WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+See the License for the specific language governing permissions and
+limitations under the License.
docs/data_license.txt ADDED
@@ -0,0 +1,6 @@
+Third Party Licenses
+-----------------------------------------------------------------------
+
+/*********************************************************************/
+i. Medical Segmentation Decathlon
+   http://medicaldecathlon.com/
models/model.pt ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b4f139fd4f94b1b2c616d6fc423cd3fef3291ca5e4c7262fd8ae292792c0a7b
+size 124036018
models/model.ts ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e31509a99ab6f04bbe063e7a7b2acf4570c912a4f0619ab7020bd8b7aa5ef9d5
+size 124167408
scripts/__init__.py ADDED
@@ -0,0 +1 @@
+from .early_stop_score_function import score_function
scripts/early_stop_score_function.py ADDED
@@ -0,0 +1,15 @@
+import os
+
+import torch
+import torch.distributed as dist
+
+
+def score_function(engine):
+    val_metric = engine.state.metrics["val_mean_dice"]
+    if dist.is_initialized():
+        device = torch.device("cuda:" + os.environ["LOCAL_RANK"])
+        val_metric = torch.tensor([val_metric]).to(device)
+        dist.all_reduce(val_metric, op=dist.ReduceOp.SUM)
+        val_metric /= dist.get_world_size()
+        return val_metric.item()
+    return val_metric
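
This score is what `EarlyStopHandler` in `configs/train.json` maximizes; in distributed runs it averages the mean Dice across ranks. A quick single-process sanity check (the engine below is a hypothetical stand-in; in the bundle, ignite passes the real evaluator engine):

```python
from scripts import score_function  # assumes the bundle root is the working directory

class _State:  # stand-in for ignite.engine.Engine.state
    metrics = {"val_mean_dice": 0.91}

class _Engine:
    state = _State()

print(score_function(_Engine()))  # 0.91 when torch.distributed is not initialized
```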
scripts/transforms.py ADDED
@@ -0,0 +1,38 @@
+from typing import Dict
+
+import numpy as np
+from einops import rearrange
+from monai.transforms.transform import Transform
+
+
+class OrientationGuidanceMultipleLabelDeepEditd(Transform):
+    def __init__(self, ref_image="image", label_names=None):
+        """
+        Convert the guidance to the RAS orientation
+        """
+        self.ref_image = ref_image
+        self.label_names = label_names
+
+    def transform_points(self, point, affine):
+        """transform point to the coordinates of the transformed image
+        point: numpy array [bs, N, 3]
+        """
+        bs, n = point.shape[:2]
+        point = np.concatenate((point, np.ones((bs, n, 1))), axis=-1)
+        point = rearrange(point, "b n d -> d (b n)")
+        point = affine @ point
+        point = rearrange(point, "d (b n)-> b n d", b=bs)[:, :, :3]
+        return point
+
+    def __call__(self, data):
+        d: Dict = dict(data)
+        for key_label in self.label_names.keys():
+            points = d.get(key_label, [])
+            if len(points) < 1:
+                continue
+            reoriented_points = self.transform_points(
+                np.array(points)[None],
+                np.linalg.inv(d[self.ref_image].meta["affine"].numpy()) @ d[self.ref_image].meta["original_affine"],
+            )
+            d[key_label] = reoriented_points[0]
+        return d
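
`transform_points` appends a homogeneous coordinate so the 4x4 affine can be applied as a single matrix product, then drops it again. A small self-contained check under an identity affine (illustrative coordinates only): with `np.eye(4)` the round trip must return the input points unchanged.

```python
import numpy as np
from scripts.transforms import OrientationGuidanceMultipleLabelDeepEditd

t = OrientationGuidanceMultipleLabelDeepEditd(label_names={"spleen": 1, "background": 0})
pts = np.array([[[10.0, 20.0, 30.0], [40.0, 50.0, 60.0]]])  # [bs=1, N=2, 3]
out = t.transform_points(pts, np.eye(4))
assert np.allclose(out, pts)  # identity affine leaves the clicks unchanged
```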