JasonXU-1998 committed
Commit 71cf8b5 · verified · 1 Parent(s): 89326e0

Upload folder using huggingface_hub

This view is limited to 50 files because it contains too many changes. See raw diff
Files changed (50)
  1. .gitattributes +0 -0
  2. Code/PENGWIN_Challenge/Inference/CT/Dockerfile +33 -0
  3. Code/PENGWIN_Challenge/Inference/CT/inference.py +111 -0
  4. Code/PENGWIN_Challenge/Inference/CT/models/Dataset198_PelvicLowres/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset.json +14 -0
  5. Code/PENGWIN_Challenge/Inference/CT/models/Dataset198_PelvicLowres/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset_fingerprint.json +1018 -0
  6. Code/PENGWIN_Challenge/Inference/CT/models/Dataset198_PelvicLowres/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/plans.json +345 -0
  7. Code/PENGWIN_Challenge/Inference/CT/models/Dataset199_Pelvic/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset.json +14 -0
  8. Code/PENGWIN_Challenge/Inference/CT/models/Dataset199_Pelvic/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset_fingerprint.json +1018 -0
  9. Code/PENGWIN_Challenge/Inference/CT/models/Dataset199_Pelvic/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/plans.json +532 -0
  10. Code/PENGWIN_Challenge/Inference/CT/models/Dataset202_newSacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset.json +13 -0
  11. Code/PENGWIN_Challenge/Inference/CT/models/Dataset202_newSacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset_fingerprint.json +1018 -0
  12. Code/PENGWIN_Challenge/Inference/CT/models/Dataset202_newSacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/fold_0/debug.json +53 -0
  13. Code/PENGWIN_Challenge/Inference/CT/models/Dataset202_newSacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/plans.json +345 -0
  14. Code/PENGWIN_Challenge/Inference/CT/models/Dataset203_newHips/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset.json +13 -0
  15. Code/PENGWIN_Challenge/Inference/CT/models/Dataset203_newHips/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset_fingerprint.json +2018 -0
  16. Code/PENGWIN_Challenge/Inference/CT/models/Dataset203_newHips/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/fold_0/debug.json +53 -0
  17. Code/PENGWIN_Challenge/Inference/CT/models/Dataset203_newHips/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/plans.json +345 -0
  18. Code/PENGWIN_Challenge/Inference/CT/requirements.txt +5 -0
  19. Code/PENGWIN_Challenge/Inference/CT/save.sh +33 -0
  20. Code/PENGWIN_Challenge/Inference/CT/test/input/images/pelvic-fracture-ct/replace_to_your_image.mha +3 -0
  21. Code/PENGWIN_Challenge/Inference/CT/test/output/Readme.md +1 -0
  22. Code/PENGWIN_Challenge/Inference/CT/test_run.sh +62 -0
  23. Code/PENGWIN_Challenge/Inference/CT/two_stage_inference.py +284 -0
  24. Code/PENGWIN_Challenge/Inference/CT/utils/__pycache__/utils.cpython-310.pyc +0 -0
  25. Code/PENGWIN_Challenge/Inference/CT/utils/utils.py +308 -0
  26. Code/PENGWIN_Challenge/Inference/X-ray/Dockerfile +25 -0
  27. Code/PENGWIN_Challenge/Inference/X-ray/inference.py +104 -0
  28. Code/PENGWIN_Challenge/Inference/X-ray/requirements.txt +2 -0
  29. Code/PENGWIN_Challenge/Inference/X-ray/save.sh +33 -0
  30. Code/PENGWIN_Challenge/Inference/X-ray/test_run.sh +62 -0
  31. Code/PENGWIN_Challenge/Inference/X-ray/two_stage_inference.py +243 -0
  32. Code/PENGWIN_Challenge/Inference/X-ray/utils/utils.py +134 -0
  33. Code/PENGWIN_Challenge/README.MD +25 -0
  34. Code/PENGWIN_Challenge/nnUNet/.gitignore +116 -0
  35. Code/PENGWIN_Challenge/nnUNet/documentation/__init__.py +0 -0
  36. Code/PENGWIN_Challenge/nnUNet/documentation/assets/scribble_example.png +3 -0
  37. Code/PENGWIN_Challenge/nnUNet/documentation/benchmarking.md +115 -0
  38. Code/PENGWIN_Challenge/nnUNet/documentation/changelog.md +51 -0
  39. Code/PENGWIN_Challenge/nnUNet/documentation/competitions/AutoPETII.md +129 -0
  40. Code/PENGWIN_Challenge/nnUNet/documentation/convert_msd_dataset.md +3 -0
  41. Code/PENGWIN_Challenge/nnUNet/documentation/dataset_format.md +254 -0
  42. Code/PENGWIN_Challenge/nnUNet/documentation/dataset_format_inference.md +39 -0
  43. Code/PENGWIN_Challenge/nnUNet/documentation/explanation_normalization.md +45 -0
  44. Code/PENGWIN_Challenge/nnUNet/documentation/explanation_plans_files.md +185 -0
  45. Code/PENGWIN_Challenge/nnUNet/documentation/extending_nnunet.md +37 -0
  46. Code/PENGWIN_Challenge/nnUNet/documentation/how_to_use_nnunet.md +310 -0
  47. Code/PENGWIN_Challenge/nnUNet/documentation/ignore_label.md +104 -0
  48. Code/PENGWIN_Challenge/nnUNet/documentation/installation_instructions.md +87 -0
  49. Code/PENGWIN_Challenge/nnUNet/documentation/manual_data_splits.md +46 -0
  50. Code/PENGWIN_Challenge/nnUNet/documentation/pretraining_and_finetuning.md +82 -0
.gitattributes CHANGED
The diff for this file is too large to render. See raw diff
 
Code/PENGWIN_Challenge/Inference/CT/Dockerfile ADDED
@@ -0,0 +1,33 @@
+ FROM --platform=linux/amd64 pytorch/pytorch:2.3.1-cuda12.1-cudnn8-runtime
+ # Use a 'large' base container to show-case how to load pytorch and use the GPU (when enabled)
+
+ # Ensures that Python output to stdout/stderr is not buffered: prevents missing information when terminating
+ ENV PYTHONUNBUFFERED=1
+
+ # Ensures that NVIDIA runtime is used
+ ENV NVIDIA_VISIBLE_DEVICES=all
+ ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
+
+ RUN groupadd -r user && useradd -m --no-log-init -r -g user user
+ USER user
+
+ WORKDIR /opt/app
+
+ COPY --chown=user:user models /opt/app/models
+ COPY --chown=user:user packages /opt/app/packages
+
+ COPY --chown=user:user requirements.txt /opt/app/
+ COPY --chown=user:user utils/utils.py /opt/app/utils/
+ COPY --chown=user:user inference.py /opt/app/
+ COPY --chown=user:user two_stage_inference.py /opt/app/
+
+
+ # You can add any Python dependencies to requirements.txt
+ #RUN python -m pip install -i https://pypi.tuna.tsinghua.edu.cn/simple\
+ #    -i https://mirrors.aliyun.com/pypi/simple/ \
+ #    --user \
+ #    --no-cache-dir \
+ #    --no-color \
+ #    --requirement /opt/app/requirements.txt
+ RUN pip install --no-index --find-links=/opt/app/packages --user --no-cache-dir --no-color --requirement /opt/app/requirements.txt
+ ENTRYPOINT ["python", "inference.py"]
Code/PENGWIN_Challenge/Inference/CT/inference.py ADDED
@@ -0,0 +1,111 @@
+ """
+ The following is a simple example algorithm.
+
+ It is meant to run within a container.
+
+ To run it locally, you can call the following bash script:
+
+     ./test_run.sh
+
+ This will start the inference, reading from ./test/input and writing to ./test/output
+
+ To save the container and prep it for upload to Grand-Challenge.org you can call:
+
+     ./save.sh
+
+ Any container that shows the same behavior will do; this is purely an example of how one COULD do it.
+
+ Happy programming!
+ """
+ from pathlib import Path
+
+ from glob import glob
+ from two_stage_inference import inference_one_image
+ import SimpleITK
+ import numpy
+
+ INPUT_PATH = Path("/input")
+ OUTPUT_PATH = Path("/output")
+ RESOURCE_PATH = Path("resources")
+
+
+ def run():
+     _show_torch_cuda_info()
+     output_path = OUTPUT_PATH / "images/pelvic-fracture-ct-segmentation"
+     inference_one_image(input_dir=load_image_dir(location=INPUT_PATH / "images/pelvic-fracture-ct"),
+                         output_dir=output_path)
+
+     # # Read the input
+     # pelvic_fracture_ct = load_image_file_as_array(
+     #     location=INPUT_PATH / "images/pelvic-fracture-ct",
+     # )
+     #
+     # # Process the inputs: any way you'd like
+     #
+     # with open(RESOURCE_PATH / "some_resource.txt", "r") as f:
+     #     print(f.read())
+     #
+     # # For now, let us make bogus predictions
+     # pelvic_fracture_segmentation = numpy.eye(4, 2)
+     #
+     # # Save your output
+     # write_array_as_image_file(
+     #     location=OUTPUT_PATH / "images/pelvic-fracture-ct-segmentation",
+     #     array=pelvic_fracture_segmentation,
+     # )
+
+     return 0
+
+
+ # def load_image_file_as_array(*, location):
+ #     # Use SimpleITK to read a file
+ #     input_files = glob(str(location / "*.mha"))
+ #     result = SimpleITK.ReadImage(input_files[0])
+ #
+ #     # Convert it to a Numpy array
+ #     return SimpleITK.GetArrayFromImage(result)
+
+
+ def load_image_dir(*, location):
+     # Return the path of the first .mha file found in the input directory
+     input_files = glob(str(location / "*.mha"))
+     result = input_files[0]
+     return result
+
+
+ def write_array_as_image_file(*, location, array):
+     location.mkdir(parents=True, exist_ok=True)
+
+     # You may need to change the suffix to .tiff to match the expected output
+     suffix = ".mha"
+
+     image = SimpleITK.GetImageFromArray(array)
+     SimpleITK.WriteImage(
+         image,
+         location / f"output{suffix}",
+         useCompression=True,
+     )
+
+
+ def _show_torch_cuda_info():
+     import torch
+
+     print("=+=" * 10)
+     print("Collecting Torch CUDA information")
+     print(f"Torch CUDA is available: {(available := torch.cuda.is_available())}")
+     if available:
+         print(f"\tcuda version: {torch.version.cuda}")
+         print(f"\tnumber of devices: {torch.cuda.device_count()}")
+         print(f"\tcurrent device: {(current_device := torch.cuda.current_device())}")
+         print(f"\tproperties: {torch.cuda.get_device_properties(current_device)}")
+
+     print("=+=" * 10)
+
+
+ if __name__ == "__main__":
+     raise SystemExit(run())
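The `load_image_dir` helper above just returns the path of the first `.mha` file in the mounted input directory. A minimal stand-alone sketch of that contract, using a temporary directory and hypothetical file names (with a `sorted()` added here for deterministic order, which the original glob call does not guarantee):

```python
import tempfile
from glob import glob
from pathlib import Path


def load_image_dir(*, location):
    # Return the path of the first .mha file found in the given directory,
    # mirroring the helper in inference.py above (sorted for determinism).
    input_files = sorted(glob(str(location / "*.mha")))
    return input_files[0]


with tempfile.TemporaryDirectory() as tmp:
    root = Path(tmp)
    # Hypothetical input files standing in for the Grand Challenge mount
    (root / "case_001.mha").touch()
    (root / "case_002.mha").touch()
    first = load_image_dir(location=root)
    print(Path(first).name)  # case_001.mha
```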
Code/PENGWIN_Challenge/Inference/CT/models/Dataset198_PelvicLowres/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset.json ADDED
@@ -0,0 +1,14 @@
+ {
+     "channel_names": {
+         "0": "CT"
+     },
+     "labels": {
+         "background": 0,
+         "sacrum": 1,
+         "left_hip": 2,
+         "right_hip": 3
+     },
+     "numTraining": 184,
+     "file_ending": ".mha",
+     "overwrite_image_reader_writer": "SimpleITKIO"
+ }
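A `dataset.json` like the one above is easy to consume downstream, for example by inverting the `labels` map to translate predicted integer ids back into structure names. A minimal sketch with the values embedded inline (not reading the actual file):

```python
import json

# Label section of the dataset.json shown above, embedded for illustration
dataset_json = json.loads("""
{
  "channel_names": {"0": "CT"},
  "labels": {"background": 0, "sacrum": 1, "left_hip": 2, "right_hip": 3},
  "numTraining": 184,
  "file_ending": ".mha"
}
""")

# Invert the labels map: integer id -> anatomical name
id_to_name = {v: k for k, v in dataset_json["labels"].items()}
print(id_to_name[1])  # sacrum
```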
Code/PENGWIN_Challenge/Inference/CT/models/Dataset198_PelvicLowres/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset_fingerprint.json ADDED
@@ -0,0 +1,1018 @@
+ {
+     "foreground_intensity_properties_per_channel": {
+         "0": {
+             "max": 2739.0,
+             "mean": 348.4314270019531,
+             "median": 282.0,
+             "min": -643.0,
+             "percentile_00_5": -60.0,
+             "percentile_99_5": 1239.0,
+             "std": 267.4811706542969
+         }
+     },
+     "median_relative_size_after_cropping": 1.0,
+     "shapes_after_crop": [
+         [128, 160, 160], [108, 91, 136], [114, 116, 156], [90, 86, 130], [125, 184, 184],
+         [113, 91, 138], [127, 171, 171], [107, 152, 152], [97, 80, 135], [112, 194, 194],
+         [100, 200, 200], [106, 180, 180], [120, 160, 160], [119, 104, 145], [104, 92, 156],
+         [94, 160, 160], [112, 160, 160], [120, 160, 160], [112, 142, 142], [112, 160, 160],
+         [112, 160, 160], [136, 174, 174], [104, 119, 123], [112, 160, 160], [103, 110, 145],
+         [112, 167, 167], [100, 170, 170], [122, 80, 148], [121, 160, 160], [114, 158, 158],
+         [125, 160, 160], [112, 160, 160], [117, 84, 129], [116, 174, 174], [119, 90, 165],
+         [123, 102, 160], [134, 160, 160], [112, 160, 160], [125, 75, 157], [112, 160, 160],
+         [120, 104, 160], [112, 174, 174], [110, 177, 177], [102, 64, 129], [110, 85, 139],
+         [110, 160, 160], [97, 153, 153], [132, 181, 181], [112, 157, 157], [104, 94, 140],
+         [112, 176, 176], [112, 177, 177], [102, 160, 160], [120, 171, 171], [96, 170, 170],
+         [114, 95, 159], [136, 123, 167], [124, 108, 165], [106, 179, 179], [107, 91, 143],
+         [112, 161, 161], [121, 88, 160], [108, 77, 165], [112, 160, 160], [112, 160, 160],
+         [102, 160, 160], [118, 84, 185], [112, 166, 166], [112, 185, 185], [112, 136, 136],
+         [102, 78, 131], [107, 99, 132], [105, 160, 160], [104, 200, 200], [118, 175, 175],
+         [112, 160, 160], [112, 148, 148], [124, 84, 154], [107, 76, 131], [141, 159, 159],
+         [109, 85, 133], [125, 99, 166], [112, 153, 153], [107, 174, 174], [114, 86, 151],
+         [106, 75, 135], [100, 95, 149], [130, 89, 178], [116, 159, 159], [96, 200, 200],
+         [110, 142, 142], [117, 180, 180], [130, 156, 156], [100, 200, 200], [103, 139, 139],
+         [112, 160, 160], [120, 180, 180], [112, 160, 160], [113, 74, 143], [112, 177, 177]
+     ],
+     "spacings": [
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5],
+         [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5], [2.5, 2.5, 2.5]
+     ]
+ }
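The `percentile_00_5`, `percentile_99_5`, `mean`, and `std` values in this fingerprint are exactly what nnU-Net's `CTNormalization` scheme consumes: intensities are clipped to the 0.5th/99.5th percentile window and then z-scored with the global foreground mean and std. A minimal pure-Python sketch using the numbers from this file (an illustration, not the nnU-Net implementation itself):

```python
# Foreground intensity statistics copied from the fingerprint above
P005, P995 = -60.0, 1239.0
MEAN, STD = 348.4314270019531, 267.4811706542969


def ct_normalize(values):
    # Clip each HU value into the [P005, P995] window, then z-score it,
    # mirroring nnU-Net's CTNormalization scheme.
    return [(min(max(v, P005), P995) - MEAN) / STD for v in values]


normalized = ct_normalize([-1000.0, 282.0, 3000.0])
# Out-of-window values (-1000, 3000) are clipped before z-scoring;
# the median HU (282.0) lands near a z-score of -0.25
```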
Code/PENGWIN_Challenge/Inference/CT/models/Dataset198_PelvicLowres/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/plans.json ADDED
@@ -0,0 +1,345 @@
+ {
+     "dataset_name": "Dataset198_PelvicLowres",
+     "plans_name": "nnUNetResEncUNetMPlans",
+     "original_median_spacing_after_transp": [2.5, 2.5, 2.5],
+     "original_median_shape_after_transp": [112, 160, 160],
+     "image_reader_writer": "SimpleITKIO",
+     "transpose_forward": [0, 1, 2],
+     "transpose_backward": [0, 1, 2],
+     "configurations": {
+         "2d": {
+             "data_identifier": "nnUNetPlans_2d",
+             "preprocessor_name": "DefaultPreprocessor",
+             "batch_size": 128,
+             "patch_size": [160, 160],
+             "median_image_size_in_voxels": [160.0, 160.0],
+             "spacing": [2.5, 2.5],
+             "normalization_schemes": ["CTNormalization"],
+             "use_mask_for_norm": [false],
+             "resampling_fn_data": "resample_data_or_seg_to_shape",
+             "resampling_fn_seg": "resample_data_or_seg_to_shape",
+             "resampling_fn_data_kwargs": {"is_seg": false, "order": 3, "order_z": 0, "force_separate_z": null},
+             "resampling_fn_seg_kwargs": {"is_seg": true, "order": 1, "order_z": 0, "force_separate_z": null},
+             "resampling_fn_probabilities": "resample_data_or_seg_to_shape",
+             "resampling_fn_probabilities_kwargs": {"is_seg": false, "order": 1, "order_z": 0, "force_separate_z": null},
+             "architecture": {
+                 "network_class_name": "dynamic_network_architectures.architectures.unet.ResidualEncoderUNet",
+                 "arch_kwargs": {
+                     "n_stages": 6,
+                     "features_per_stage": [32, 64, 128, 256, 512, 512],
+                     "conv_op": "torch.nn.modules.conv.Conv2d",
+                     "kernel_sizes": [[3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3]],
+                     "strides": [[1, 1], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2]],
+                     "n_blocks_per_stage": [1, 3, 4, 6, 6, 6],
+                     "n_conv_per_stage_decoder": [1, 1, 1, 1, 1],
+                     "conv_bias": true,
+                     "norm_op": "torch.nn.modules.instancenorm.InstanceNorm2d",
+                     "norm_op_kwargs": {"eps": 1e-05, "affine": true},
+                     "dropout_op": null,
+                     "dropout_op_kwargs": null,
+                     "nonlin": "torch.nn.LeakyReLU",
+                     "nonlin_kwargs": {"inplace": true}
+                 },
+                 "_kw_requires_import": ["conv_op", "norm_op", "dropout_op", "nonlin"]
+             },
+             "batch_dice": true
+         },
+         "3d_fullres": {
+             "data_identifier": "nnUNetPlans_3d_fullres",
+             "preprocessor_name": "DefaultPreprocessor",
+             "batch_size": 2,
+             "patch_size": [112, 160, 128],
+             "median_image_size_in_voxels": [112.0, 160.0, 160.0],
+             "spacing": [2.5, 2.5, 2.5],
+             "normalization_schemes": ["CTNormalization"],
+             "use_mask_for_norm": [false],
+             "resampling_fn_data": "resample_data_or_seg_to_shape",
+             "resampling_fn_seg": "resample_data_or_seg_to_shape",
+             "resampling_fn_data_kwargs": {"is_seg": false, "order": 3, "order_z": 0, "force_separate_z": null},
+             "resampling_fn_seg_kwargs": {"is_seg": true, "order": 1, "order_z": 0, "force_separate_z": null},
+             "resampling_fn_probabilities": "resample_data_or_seg_to_shape",
+             "resampling_fn_probabilities_kwargs": {"is_seg": false, "order": 1, "order_z": 0, "force_separate_z": null},
+             "architecture": {
+                 "network_class_name": "dynamic_network_architectures.architectures.unet.ResidualEncoderUNet",
+                 "arch_kwargs": {
+                     "n_stages": 6,
+                     "features_per_stage": [32, 64, 128, 256, 320, 320],
+                     "conv_op": "torch.nn.modules.conv.Conv3d",
+                     "kernel_sizes": [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]],
+                     "strides": [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]],
+                     "n_blocks_per_stage": [1, 3, 4, 6, 6, 6],
+                     "n_conv_per_stage_decoder": [1, 1, 1, 1, 1],
+                     "conv_bias": true,
+                     "norm_op": "torch.nn.modules.instancenorm.InstanceNorm3d",
+                     "norm_op_kwargs": {"eps": 1e-05, "affine": true},
+                     "dropout_op": null,
+                     "dropout_op_kwargs": null,
+                     "nonlin": "torch.nn.LeakyReLU",
+                     "nonlin_kwargs": {"inplace": true}
+                 },
+                 "_kw_requires_import": ["conv_op", "norm_op", "dropout_op", "nonlin"]
+             },
+             "batch_dice": false
+         }
+     },
+     "experiment_planner_used": "nnUNetPlannerResEncM",
+     "label_manager": "LabelManager",
+     "foreground_intensity_properties_per_channel": {
+         "0": {
+             "max": 2739.0,
+             "mean": 348.4314270019531,
+             "median": 282.0,
+             "min": -643.0,
+             "percentile_00_5": -60.0,
+             "percentile_99_5": 1239.0,
+             "std": 267.4811706542969
+         }
+     }
+ }
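The `strides` list in the `3d_fullres` architecture above determines the total downsampling factor per axis, and the `patch_size` must be divisible by that factor on every axis. A short check with the values from this plans.json:

```python
from math import prod

# Per-stage strides and patch size of the 3d_fullres configuration above
strides = [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]]
patch_size = [112, 160, 128]

# Total downsampling per axis is the product of the per-stage strides
downsampling = [prod(s[axis] for s in strides) for axis in range(3)]
print(downsampling)  # [16, 32, 32]

# Every patch-size axis must be divisible by its total downsampling
assert all(p % d == 0 for p, d in zip(patch_size, downsampling))
```

Note how the final stage keeps stride 1 on the first axis, so the through-plane dimension is only downsampled 16x while the in-plane dimensions are downsampled 32x.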
Code/PENGWIN_Challenge/Inference/CT/models/Dataset199_Pelvic/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset.json ADDED
@@ -0,0 +1,14 @@
+ {
+     "channel_names": {
+         "0": "CT"
+     },
+     "labels": {
+         "background": 0,
+         "sacrum": 1,
+         "left_hip": 2,
+         "right_hip": 3
+     },
+     "numTraining": 100,
+     "file_ending": ".nii.gz",
+     "overwrite_image_reader_writer": "SimpleITKIO"
+ }
Code/PENGWIN_Challenge/Inference/CT/models/Dataset199_Pelvic/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset_fingerprint.json ADDED
@@ -0,0 +1,1018 @@
{
  "foreground_intensity_properties_per_channel": {
    "0": {
      "max": 3504.0,
      "mean": 355.5995178222656,
      "median": 286.0,
      "min": -686.0,
      "percentile_00_5": -72.0,
      "percentile_99_5": 1285.0,
      "std": 280.5046691894531
    }
  },
  "median_relative_size_after_cropping": 1.0,
  "shapes_after_crop": [
    [401, 512, 512], [337, 276, 413], [285, 381, 512], [224, 254, 386],
    [312, 512, 512], [283, 279, 426], [317, 512, 512], [335, 512, 512],
    [303, 245, 415], [350, 512, 512], [313, 512, 512], [265, 512, 512],
    [375, 512, 512], [298, 329, 458], [260, 281, 479], [235, 512, 512],
    [280, 512, 512], [375, 512, 512], [350, 512, 512], [350, 512, 512],
    [350, 512, 512], [341, 512, 512], [325, 380, 395], [350, 512, 512],
    [257, 355, 467], [279, 512, 512], [313, 512, 512], [304, 241, 445],
    [379, 512, 512], [285, 512, 512], [312, 512, 512], [350, 512, 512],
    [365, 263, 403], [363, 512, 512], [373, 279, 512], [307, 326, 512],
    [333, 512, 512], [350, 512, 512], [392, 154, 322], [350, 512, 512],
    [374, 331, 512], [350, 512, 512], [274, 512, 512], [409, 183, 370],
    [274, 248, 406], [275, 512, 512], [193, 512, 512], [414, 512, 512],
    [350, 512, 512], [326, 345, 512], [350, 512, 512], [350, 512, 512],
    [255, 512, 512], [301, 512, 512], [241, 512, 512], [285, 304, 512],
    [341, 318, 429], [311, 287, 437], [332, 512, 512], [333, 266, 418],
    [350, 512, 512], [303, 280, 511], [271, 240, 512], [350, 512, 512],
    [350, 512, 512], [254, 512, 512], [369, 231, 512], [350, 512, 512],
    [350, 512, 512], [350, 512, 512], [319, 209, 351], [267, 301, 403],
    [329, 512, 512], [326, 512, 512], [294, 512, 512], [350, 512, 512],
    [350, 512, 512], [388, 249, 458], [268, 275, 471], [353, 512, 512],
    [342, 268, 416], [312, 305, 512], [350, 512, 512], [268, 512, 512],
    [356, 235, 413], [265, 283, 512], [312, 295, 462], [406, 251, 499],
    [291, 512, 512], [301, 512, 512], [274, 512, 512], [293, 512, 512],
    [325, 512, 512], [313, 512, 512], [257, 512, 512], [350, 512, 512],
    [300, 512, 512], [350, 512, 512], [285, 238, 459], [350, 512, 512]
  ],
  "spacings": [
    [0.800000011920929, 0.78125, 0.78125],
    [0.800000011920929, 0.82421875, 0.82421875],
    [1.0, 0.759765625, 0.759765625],
    [1.0, 0.84375, 0.84375],
    [1.0, 0.896484375, 0.896484375],
    [1.0, 0.8125, 0.8125],
    [1.0, 0.8359375, 0.8359375],
    [0.800000011920929, 0.7421875, 0.7421875],
    [0.8000000715255737, 0.8125, 0.8125],
    [0.7989501953125, 0.9459999799728394, 0.9459999799728394],
    [0.7989501953125, 0.9760000109672546, 0.9760000109672546],
    [1.0, 0.87890625, 0.87890625],
    [0.800000011920929, 0.78125, 0.78125],
    [1.0, 0.7890625, 0.7890625],
    [1.0, 0.81640625, 0.81640625],
    [1.0, 0.78125, 0.78125],
    [1.0, 0.78125, 0.78125],
    [0.800000011920929, 0.78125, 0.78125],
    [0.800048828125, 0.6919999718666077, 0.6919999718666077],
    [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
    [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
    [1.0, 0.84765625, 0.84765625],
    [0.800000011920929, 0.77734375, 0.77734375],
    [0.7989501953125, 0.781000018119812, 0.781000018119812],
    [1.0, 0.77734375, 0.77734375],
    [1.0, 0.814453125, 0.814453125],
    [0.7999998927116394, 0.8309999704360962, 0.8309999704360962],
    [1.0, 0.83203125, 0.83203125],
    [0.7999998927116394, 0.783203125, 0.783203125],
    [1.0, 0.771484375, 0.771484375],
    [1.0, 0.78125, 0.78125],
    [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
    [0.800000011920929, 0.798828125, 0.798828125],
    [0.7999999523162842, 0.8500000238418579, 0.8500000238418579],
    [0.800000011920929, 0.8080000281333923, 0.8080000281333923],
    [1.0, 0.78125, 0.78125],
    [1.0060241222381592, 0.78125, 0.78125],
    [0.801025390625, 0.7820000052452087, 0.7820000052452087],
    [0.800000011920929, 1.2200000286102295, 1.2200000286102295],
    [0.801025390625, 0.7820000052452087, 0.7820000052452087],
    [0.800000011920929, 0.7820000052452087, 0.7820000052452087],
    [0.7989501953125, 0.8500000238418579, 0.8500000238418579],
    [1.0, 0.86328125, 0.86328125],
    [0.625, 0.8730469942092896, 0.8730469942092896],
    [1.0, 0.859375, 0.859375],
    [1.0, 0.78125, 0.78125],
    [1.2500052452087402, 0.7480469942092896, 0.7480469942092896],
    [0.800000011920929, 0.8828125, 0.8828125],
    [0.79998779296875, 0.767578125, 0.767578125],
    [0.8000000715255737, 0.68359375, 0.68359375],
    [0.800048828125, 0.8579999804496765, 0.8579999804496765],
    [0.7989501953125, 0.8659999966621399, 0.8659999966621399],
    [1.0, 0.78125, 0.78125],
    [1.0, 0.833984375, 0.833984375],
    [1.0, 0.828125, 0.828125],
    [1.0, 0.77734375, 0.77734375],
    [1.0, 0.970703125, 0.970703125],
    [1.0, 0.94140625, 0.94140625],
    [0.7999801635742188, 0.873046875, 0.873046875],
    [0.800000011920929, 0.8579999804496765, 0.8579999804496765],
    [0.7989501953125, 0.7850000262260437, 0.7850000262260437],
    [1.0, 0.783203125, 0.783203125],
    [1.0, 0.806640625, 0.806640625],
    [0.801025390625, 0.7820000052452087, 0.7820000052452087],
    [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
    [1.0, 0.783203125, 0.783203125],
    [0.7999989986419678, 0.9039999842643738, 0.9039999842643738],
    [0.7989501953125, 0.8119999766349792, 0.8119999766349792],
    [0.800048828125, 0.9039999842643738, 0.9039999842643738],
    [0.800048828125, 0.6620000004768372, 0.6620000004768372],
    [0.800000011920929, 0.9296875, 0.9296875],
    [1.0, 0.8203125, 0.8203125],
    [0.800048828125, 0.78125, 0.78125],
    [0.801025390625, 0.9760000109672546, 0.9760000109672546],
    [1.0, 0.853515625, 0.853515625],
    [0.801025390625, 0.7820000052452087, 0.7820000052452087],
    [0.800048828125, 0.7239999771118164, 0.7239999771118164],
    [0.7999267578125, 0.8429999947547913, 0.8429999947547913],
    [1.0, 0.693359375, 0.693359375],
    [1.0, 0.775390625, 0.775390625],
    [0.800000011920929, 0.796999990940094, 0.796999990940094],
    [1.0, 0.8125, 0.8125],
    [0.79998779296875, 0.748046875, 0.748046875],
    [1.0, 0.849609375, 0.849609375],
    [0.8000000715255737, 0.9140625, 0.9140625],
    [1.0, 0.658203125, 0.658203125],
    [0.800000011920929, 0.80859375, 0.80859375],
    [0.800000011920929, 0.890625, 0.890625],
    [1.0, 0.775390625, 0.775390625],
    [0.801025390625, 0.9760000109672546, 0.9760000109672546],
    [1.0, 0.6953125, 0.6953125],
    [1.0, 0.880859375, 0.880859375],
    [1.0, 0.763671875, 0.763671875],
    [0.7989501953125, 0.9760000109672546, 0.9760000109672546],
    [1.0, 0.6796875, 0.6796875],
    [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
    [1.0, 0.87890625, 0.87890625],
    [0.801025390625, 0.7820000052452087, 0.7820000052452087],
    [1.0, 0.779296875, 0.779296875],
    [0.801025390625, 0.8659999966621399, 0.8659999966621399]
  ]
}
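The `foreground_intensity_properties_per_channel` block above is what nnU-Net's `CTNormalization` scheme consumes at preprocessing time: intensities are clipped to the 0.5/99.5 foreground percentiles and then z-scored with the global foreground mean and std. A minimal NumPy sketch of that transform using the statistics from this fingerprint (a stand-in for illustration, not the nnU-Net implementation itself):

```python
import numpy as np

# Statistics for channel 0, copied from the dataset fingerprint above.
CLIP_LOW, CLIP_HIGH = -72.0, 1285.0       # percentile_00_5 / percentile_99_5
MEAN, STD = 355.5995178222656, 280.5046691894531

def ct_normalize(image: np.ndarray) -> np.ndarray:
    """Clip to the foreground percentile range, then z-score with the
    global foreground mean/std (the CTNormalization recipe)."""
    image = np.clip(image.astype(np.float32), CLIP_LOW, CLIP_HIGH)
    return (image - MEAN) / STD

# A voxel at the foreground median (286 HU) lands slightly below zero.
print(ct_normalize(np.array([286.0, 3000.0])))
```

Because the mean and std are global dataset statistics, the same constants are applied to every case at inference, which is why they are also embedded in `plans.json` below the configurations.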
Code/PENGWIN_Challenge/Inference/CT/models/Dataset199_Pelvic/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/plans.json ADDED
@@ -0,0 +1,532 @@
{
  "dataset_name": "Dataset199_Pelvic",
  "plans_name": "nnUNetResEncUNetMPlans",
  "original_median_spacing_after_transp": [0.801025390625, 0.797914057970047, 0.797914057970047],
  "original_median_shape_after_transp": [326, 512, 512],
  "image_reader_writer": "SimpleITKIO",
  "transpose_forward": [0, 1, 2],
  "transpose_backward": [0, 1, 2],
  "configurations": {
    "2d": {
      "data_identifier": "nnUNetPlans_2d",
      "preprocessor_name": "DefaultPreprocessor",
      "batch_size": 12,
      "patch_size": [512, 512],
      "median_image_size_in_voxels": [501.0, 501.0],
      "spacing": [0.797914057970047, 0.797914057970047],
      "normalization_schemes": ["CTNormalization"],
      "use_mask_for_norm": [false],
      "resampling_fn_data": "resample_data_or_seg_to_shape",
      "resampling_fn_seg": "resample_data_or_seg_to_shape",
      "resampling_fn_data_kwargs": {"is_seg": false, "order": 3, "order_z": 0, "force_separate_z": null},
      "resampling_fn_seg_kwargs": {"is_seg": true, "order": 1, "order_z": 0, "force_separate_z": null},
      "resampling_fn_probabilities": "resample_data_or_seg_to_shape",
      "resampling_fn_probabilities_kwargs": {"is_seg": false, "order": 1, "order_z": 0, "force_separate_z": null},
      "architecture": {
        "network_class_name": "dynamic_network_architectures.architectures.unet.ResidualEncoderUNet",
        "arch_kwargs": {
          "n_stages": 8,
          "features_per_stage": [32, 64, 128, 256, 512, 512, 512, 512],
          "conv_op": "torch.nn.modules.conv.Conv2d",
          "kernel_sizes": [[3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3]],
          "strides": [[1, 1], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2]],
          "n_blocks_per_stage": [1, 3, 4, 6, 6, 6, 6, 6],
          "n_conv_per_stage_decoder": [1, 1, 1, 1, 1, 1, 1],
          "conv_bias": true,
          "norm_op": "torch.nn.modules.instancenorm.InstanceNorm2d",
          "norm_op_kwargs": {"eps": 1e-05, "affine": true},
          "dropout_op": null,
          "dropout_op_kwargs": null,
          "nonlin": "torch.nn.LeakyReLU",
          "nonlin_kwargs": {"inplace": true}
        },
        "_kw_requires_import": ["conv_op", "norm_op", "dropout_op", "nonlin"]
      },
      "batch_dice": true
    },
    "3d_lowres": {
      "data_identifier": "nnUNetResEncUNetMPlans_3d_lowres",
      "preprocessor_name": "DefaultPreprocessor",
      "batch_size": 2,
      "patch_size": [96, 160, 160],
      "median_image_size_in_voxels": [167, 239, 239],
      "spacing": [1.6771692839832721, 1.670654844338519, 1.670654844338519],
      "normalization_schemes": ["CTNormalization"],
      "use_mask_for_norm": [false],
      "resampling_fn_data": "resample_data_or_seg_to_shape",
      "resampling_fn_seg": "resample_data_or_seg_to_shape",
      "resampling_fn_data_kwargs": {"is_seg": false, "order": 3, "order_z": 0, "force_separate_z": null},
      "resampling_fn_seg_kwargs": {"is_seg": true, "order": 1, "order_z": 0, "force_separate_z": null},
      "resampling_fn_probabilities": "resample_data_or_seg_to_shape",
      "resampling_fn_probabilities_kwargs": {"is_seg": false, "order": 1, "order_z": 0, "force_separate_z": null},
      "architecture": {
        "network_class_name": "dynamic_network_architectures.architectures.unet.ResidualEncoderUNet",
        "arch_kwargs": {
          "n_stages": 6,
          "features_per_stage": [32, 64, 128, 256, 320, 320],
          "conv_op": "torch.nn.modules.conv.Conv3d",
          "kernel_sizes": [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]],
          "strides": [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]],
          "n_blocks_per_stage": [1, 3, 4, 6, 6, 6],
          "n_conv_per_stage_decoder": [1, 1, 1, 1, 1],
          "conv_bias": true,
          "norm_op": "torch.nn.modules.instancenorm.InstanceNorm3d",
          "norm_op_kwargs": {"eps": 1e-05, "affine": true},
          "dropout_op": null,
          "dropout_op_kwargs": null,
          "nonlin": "torch.nn.LeakyReLU",
          "nonlin_kwargs": {"inplace": true}
        },
        "_kw_requires_import": ["conv_op", "norm_op", "dropout_op", "nonlin"]
      },
      "batch_dice": false,
      "next_stage": "3d_cascade_fullres"
    },
    "3d_fullres": {
      "data_identifier": "nnUNetPlans_3d_fullres",
      "preprocessor_name": "DefaultPreprocessor",
      "batch_size": 2,
      "patch_size": [96, 160, 160],
      "median_image_size_in_voxels": [350.0, 501.0, 501.0],
      "spacing": [0.801025390625, 0.797914057970047, 0.797914057970047],
      "normalization_schemes": ["CTNormalization"],
      "use_mask_for_norm": [false],
      "resampling_fn_data": "resample_data_or_seg_to_shape",
      "resampling_fn_seg": "resample_data_or_seg_to_shape",
      "resampling_fn_data_kwargs": {"is_seg": false, "order": 3, "order_z": 0, "force_separate_z": null},
      "resampling_fn_seg_kwargs": {"is_seg": true, "order": 1, "order_z": 0, "force_separate_z": null},
      "resampling_fn_probabilities": "resample_data_or_seg_to_shape",
      "resampling_fn_probabilities_kwargs": {"is_seg": false, "order": 1, "order_z": 0, "force_separate_z": null},
      "architecture": {
        "network_class_name": "dynamic_network_architectures.architectures.unet.ResidualEncoderUNet",
        "arch_kwargs": {
          "n_stages": 6,
          "features_per_stage": [32, 64, 128, 256, 320, 320],
          "conv_op": "torch.nn.modules.conv.Conv3d",
          "kernel_sizes": [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]],
          "strides": [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [1, 2, 2]],
          "n_blocks_per_stage": [1, 3, 4, 6, 6, 6],
          "n_conv_per_stage_decoder": [1, 1, 1, 1, 1],
          "conv_bias": true,
          "norm_op": "torch.nn.modules.instancenorm.InstanceNorm3d",
          "norm_op_kwargs": {"eps": 1e-05, "affine": true},
          "dropout_op": null,
          "dropout_op_kwargs": null,
          "nonlin": "torch.nn.LeakyReLU",
          "nonlin_kwargs": {"inplace": true}
        },
        "_kw_requires_import": ["conv_op", "norm_op", "dropout_op", "nonlin"]
      },
      "batch_dice": true
    },
    "3d_cascade_fullres": {
      "inherits_from": "3d_fullres",
      "previous_stage": "3d_lowres"
    }
  },
  "experiment_planner_used": "nnUNetPlannerResEncM",
  "label_manager": "LabelManager",
  "foreground_intensity_properties_per_channel": {
    "0": {
      "max": 3504.0,
      "mean": 355.5995178222656,
      "median": 286.0,
      "min": -686.0,
      "percentile_00_5": -72.0,
      "percentile_99_5": 1285.0,
      "std": 280.5046691894531
    }
  }
}
Code/PENGWIN_Challenge/Inference/CT/models/Dataset202_newSacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset.json ADDED
@@ -0,0 +1,13 @@
{
  "channel_names": {
    "0": "CT"
  },
  "labels": {
    "background": 0,
    "main_fragment": 1,
    "minor_fragment": 2
  },
  "numTraining": 100,
  "file_ending": ".mha",
  "overwrite_image_reader_writer": "SimpleITKIO"
}
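In nnU-Net v2 the `labels` map in dataset.json goes from class name to integer; at inference time you usually need the inverse to decode a predicted label map. A small self-contained sketch using the file above:

```python
import json

# Parse the Dataset202_newSacrum dataset.json contents (inlined here so the
# sketch runs without the file on disk).
dataset_json = json.loads("""
{
  "channel_names": {"0": "CT"},
  "labels": {"background": 0, "main_fragment": 1, "minor_fragment": 2},
  "numTraining": 100,
  "file_ending": ".mha",
  "overwrite_image_reader_writer": "SimpleITKIO"
}
""")

# Invert name -> id into id -> name for decoding predictions.
id_to_name = {v: k for k, v in dataset_json["labels"].items()}
print(id_to_name[2])  # -> minor_fragment
```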
Code/PENGWIN_Challenge/Inference/CT/models/Dataset202_newSacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset_fingerprint.json ADDED
@@ -0,0 +1,1018 @@
{
  "foreground_intensity_properties_per_channel": {
    "0": {
      "max": 2226.0,
      "mean": 262.4073486328125,
      "median": 220.0,
      "min": -557.0,
      "percentile_00_5": -94.0,
      "percentile_99_5": 1088.0,
      "std": 223.74400329589844
    }
  },
  "median_relative_size_after_cropping": 1.0,
  "shapes_after_crop": [
    [190, 156, 173], [187, 122, 166], [154, 152, 190], [157, 108, 159],
    [143, 113, 149], [162, 125, 174], [146, 129, 147], [178, 136, 180],
    [195, 137, 163], [202, 135, 146], [183, 117, 146], [163, 128, 155],
    [186, 124, 179], [157, 129, 171], [152, 138, 173], [156, 141, 175],
    [152, 131, 164], [213, 151, 175], [217, 155, 202], [189, 119, 185],
    [205, 138, 181], [178, 120, 146], [209, 138, 154], [190, 152, 170],
    [152, 121, 173], [156, 140, 176], [189, 121, 166], [157, 143, 165],
    [174, 141, 168], [163, 140, 175], [155, 145, 158], [189, 132, 167],
    [215, 139, 175], [186, 121, 168], [219, 128, 168], [167, 154, 178],
    [168, 142, 167], [194, 134, 159], [184, 104, 130], [187, 137, 169],
    [181, 151, 167], [203, 132, 147], [161, 131, 169], [237, 123, 152],
    [164, 128, 159], [164, 160, 167], [155, 143, 180], [183, 121, 149],
    [190, 132, 170], [199, 145, 191], [212, 133, 159], [205, 148, 154],
    [148, 133, 180], [164, 156, 158], [167, 141, 153], [149, 139, 167],
    [186, 123, 150], [151, 118, 155], [196, 129, 166], [223, 138, 165],
    [191, 139, 164], [161, 145, 164], [154, 153, 192], [185, 135, 187],
    [160, 157, 180], [186, 165, 184], [224, 118, 151], [212, 141, 177],
    [197, 132, 156], [225, 166, 179], [179, 116, 141], [156, 139, 164],
    [186, 159, 178], [224, 114, 146], [189, 122, 181], [182, 136, 168],
    [207, 161, 181], [204, 140, 158], [167, 146, 174], [168, 147, 185],
    [180, 148, 160], [167, 132, 160], [172, 137, 178], [164, 132, 182],
    [201, 115, 152], [159, 147, 194], [190, 119, 173], [208, 138, 155],
    [180, 162, 169], [200, 127, 141], [142, 158, 179], [157, 134, 153],
    [185, 147, 183], [191, 120, 139], [151, 150, 192], [193, 128, 172],
    [169, 124, 156], [218, 149, 171], [150, 138, 169], [172, 116, 148]
  ],
  "spacings": [
    [0.800000011920929, 0.78125, 0.78125],
    [0.800000011920929, 0.82421875, 0.82421875],
    [1.0, 0.759765625, 0.759765625],
    [1.0, 0.84375, 0.84375],
    [1.0, 0.896484375, 0.896484375],
    [1.0, 0.8125, 0.8125],
    [1.0, 0.8359375, 0.8359375],
    [0.800000011920929, 0.7421875, 0.7421875],
    [0.8000000715255737, 0.8125, 0.8125],
    [0.7989501953125, 0.9459999799728394, 0.9459999799728394],
    [0.7989501953125, 0.9760000109672546, 0.9760000109672546],
    [1.0, 0.87890625, 0.87890625],
    [0.800000011920929, 0.78125, 0.78125],
    [1.0, 0.7890625, 0.7890625],
    [1.0, 0.81640625, 0.81640625],
    [1.0, 0.78125, 0.78125],
    [1.0, 0.78125, 0.78125],
    [0.800000011920929, 0.78125, 0.78125],
    [0.800048828125, 0.6919999718666077, 0.6919999718666077],
    [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
    [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
    [1.0, 0.84765625, 0.84765625],
    [0.800000011920929, 0.77734375, 0.77734375],
    [0.7989501953125, 0.781000018119812, 0.781000018119812],
    [1.0, 0.77734375, 0.77734375],
    [1.0, 0.814453125, 0.814453125],
    [0.7999998927116394, 0.8309999704360962, 0.8309999704360962],
    [1.0, 0.83203125, 0.83203125],
    [0.7999998927116394, 0.783203125, 0.783203125],
    [1.0, 0.771484375, 0.771484375],
    [1.0, 0.78125, 0.78125],
    [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
    [0.800000011920929, 0.798828125, 0.798828125],
    [0.7999999523162842, 0.8500000238418579, 0.8500000238418579],
    [0.800000011920929, 0.8080000281333923, 0.8080000281333923],
    [1.0, 0.78125, 0.78125],
    [1.0060241222381592, 0.78125, 0.78125],
    [0.801025390625, 0.7820000052452087, 0.7820000052452087],
    [0.800000011920929, 1.2200000286102295, 1.2200000286102295],
    [0.801025390625, 0.7820000052452087, 0.7820000052452087],
    [0.800000011920929, 0.7820000052452087, 0.7820000052452087],
    [0.7989501953125, 0.8500000238418579, 0.8500000238418579
726
+ ],
727
+ [
728
+ 1.0,
729
+ 0.86328125,
730
+ 0.86328125
731
+ ],
732
+ [
733
+ 0.625,
734
+ 0.8730469942092896,
735
+ 0.8730469942092896
736
+ ],
737
+ [
738
+ 1.0,
739
+ 0.859375,
740
+ 0.859375
741
+ ],
742
+ [
743
+ 1.0,
744
+ 0.78125,
745
+ 0.78125
746
+ ],
747
+ [
748
+ 1.2500052452087402,
749
+ 0.7480469942092896,
750
+ 0.7480469942092896
751
+ ],
752
+ [
753
+ 0.800000011920929,
754
+ 0.8828125,
755
+ 0.8828125
756
+ ],
757
+ [
758
+ 0.79998779296875,
759
+ 0.767578125,
760
+ 0.767578125
761
+ ],
762
+ [
763
+ 0.8000000715255737,
764
+ 0.68359375,
765
+ 0.68359375
766
+ ],
767
+ [
768
+ 0.800048828125,
769
+ 0.8579999804496765,
770
+ 0.8579999804496765
771
+ ],
772
+ [
773
+ 0.7989501953125,
774
+ 0.8659999966621399,
775
+ 0.8659999966621399
776
+ ],
777
+ [
778
+ 1.0,
779
+ 0.78125,
780
+ 0.78125
781
+ ],
782
+ [
783
+ 1.0,
784
+ 0.833984375,
785
+ 0.833984375
786
+ ],
787
+ [
788
+ 1.0,
789
+ 0.828125,
790
+ 0.828125
791
+ ],
792
+ [
793
+ 1.0,
794
+ 0.77734375,
795
+ 0.77734375
796
+ ],
797
+ [
798
+ 1.0,
799
+ 0.970703125,
800
+ 0.970703125
801
+ ],
802
+ [
803
+ 1.0,
804
+ 0.94140625,
805
+ 0.94140625
806
+ ],
807
+ [
808
+ 0.7999801635742188,
809
+ 0.873046875,
810
+ 0.873046875
811
+ ],
812
+ [
813
+ 0.800000011920929,
814
+ 0.8579999804496765,
815
+ 0.8579999804496765
816
+ ],
817
+ [
818
+ 0.7989501953125,
819
+ 0.7850000262260437,
820
+ 0.7850000262260437
821
+ ],
822
+ [
823
+ 1.0,
824
+ 0.783203125,
825
+ 0.783203125
826
+ ],
827
+ [
828
+ 1.0,
829
+ 0.806640625,
830
+ 0.806640625
831
+ ],
832
+ [
833
+ 0.801025390625,
834
+ 0.7820000052452087,
835
+ 0.7820000052452087
836
+ ],
837
+ [
838
+ 0.7989501953125,
839
+ 0.7820000052452087,
840
+ 0.7820000052452087
841
+ ],
842
+ [
843
+ 1.0,
844
+ 0.783203125,
845
+ 0.783203125
846
+ ],
847
+ [
848
+ 0.7999989986419678,
849
+ 0.9039999842643738,
850
+ 0.9039999842643738
851
+ ],
852
+ [
853
+ 0.7989501953125,
854
+ 0.8119999766349792,
855
+ 0.8119999766349792
856
+ ],
857
+ [
858
+ 0.800048828125,
859
+ 0.9039999842643738,
860
+ 0.9039999842643738
861
+ ],
862
+ [
863
+ 0.800048828125,
864
+ 0.6620000004768372,
865
+ 0.6620000004768372
866
+ ],
867
+ [
868
+ 0.800000011920929,
869
+ 0.9296875,
870
+ 0.9296875
871
+ ],
872
+ [
873
+ 1.0,
874
+ 0.8203125,
875
+ 0.8203125
876
+ ],
877
+ [
878
+ 0.800048828125,
879
+ 0.78125,
880
+ 0.78125
881
+ ],
882
+ [
883
+ 0.801025390625,
884
+ 0.9760000109672546,
885
+ 0.9760000109672546
886
+ ],
887
+ [
888
+ 1.0,
889
+ 0.853515625,
890
+ 0.853515625
891
+ ],
892
+ [
893
+ 0.801025390625,
894
+ 0.7820000052452087,
895
+ 0.7820000052452087
896
+ ],
897
+ [
898
+ 0.800048828125,
899
+ 0.7239999771118164,
900
+ 0.7239999771118164
901
+ ],
902
+ [
903
+ 0.7999267578125,
904
+ 0.8429999947547913,
905
+ 0.8429999947547913
906
+ ],
907
+ [
908
+ 1.0,
909
+ 0.693359375,
910
+ 0.693359375
911
+ ],
912
+ [
913
+ 1.0,
914
+ 0.775390625,
915
+ 0.775390625
916
+ ],
917
+ [
918
+ 0.800000011920929,
919
+ 0.796999990940094,
920
+ 0.796999990940094
921
+ ],
922
+ [
923
+ 1.0,
924
+ 0.8125,
925
+ 0.8125
926
+ ],
927
+ [
928
+ 0.79998779296875,
929
+ 0.748046875,
930
+ 0.748046875
931
+ ],
932
+ [
933
+ 1.0,
934
+ 0.849609375,
935
+ 0.849609375
936
+ ],
937
+ [
938
+ 0.8000000715255737,
939
+ 0.9140625,
940
+ 0.9140625
941
+ ],
942
+ [
943
+ 1.0,
944
+ 0.658203125,
945
+ 0.658203125
946
+ ],
947
+ [
948
+ 0.800000011920929,
949
+ 0.80859375,
950
+ 0.80859375
951
+ ],
952
+ [
953
+ 0.800000011920929,
954
+ 0.890625,
955
+ 0.890625
956
+ ],
957
+ [
958
+ 1.0,
959
+ 0.775390625,
960
+ 0.775390625
961
+ ],
962
+ [
963
+ 0.801025390625,
964
+ 0.9760000109672546,
965
+ 0.9760000109672546
966
+ ],
967
+ [
968
+ 1.0,
969
+ 0.6953125,
970
+ 0.6953125
971
+ ],
972
+ [
973
+ 1.0,
974
+ 0.880859375,
975
+ 0.880859375
976
+ ],
977
+ [
978
+ 1.0,
979
+ 0.763671875,
980
+ 0.763671875
981
+ ],
982
+ [
983
+ 0.7989501953125,
984
+ 0.9760000109672546,
985
+ 0.9760000109672546
986
+ ],
987
+ [
988
+ 1.0,
989
+ 0.6796875,
990
+ 0.6796875
991
+ ],
992
+ [
993
+ 0.7989501953125,
994
+ 0.7820000052452087,
995
+ 0.7820000052452087
996
+ ],
997
+ [
998
+ 1.0,
999
+ 0.87890625,
1000
+ 0.87890625
1001
+ ],
1002
+ [
1003
+ 0.801025390625,
1004
+ 0.7820000052452087,
1005
+ 0.7820000052452087
1006
+ ],
1007
+ [
1008
+ 1.0,
1009
+ 0.779296875,
1010
+ 0.779296875
1011
+ ],
1012
+ [
1013
+ 0.801025390625,
1014
+ 0.8659999966621399,
1015
+ 0.8659999966621399
1016
+ ]
1017
+ ]
1018
+ }
Code/PENGWIN_Challenge/Inference/CT/models/Dataset202_newSacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/fold_0/debug.json ADDED
@@ -0,0 +1,53 @@
+ {
+ "_best_ema": "None",
+ "batch_size": "2",
+ "configuration_manager": "{'data_identifier': 'nnUNetPlans_3d_fullres', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 2, 'patch_size': [160, 112, 128], 'median_image_size_in_voxels': [196.0, 138.0, 169.0], 'spacing': [0.801025390625, 0.797914057970047, 0.797914057970047], 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.ResidualEncoderUNet', 'arch_kwargs': {'n_stages': 6, 'features_per_stage': [32, 64, 128, 256, 320, 320], 'conv_op': 'torch.nn.modules.conv.Conv3d', 'kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]], 'strides': [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 1, 2]], 'n_blocks_per_stage': [1, 3, 4, 6, 6, 6], 'n_conv_per_stage_decoder': [1, 1, 1, 1, 1], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm3d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}, 'deep_supervision': True}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': False}",
+ "configuration_name": "3d_fullres",
+ "cudnn_version": 8700,
+ "current_epoch": "0",
+ "dataloader_train": "<batchgenerators.dataloading.nondet_multi_threaded_augmenter.NonDetMultiThreadedAugmenter object at 0x7fe42307d6a0>",
+ "dataloader_train.generator": "<nnunetv2.training.dataloading.data_loader_3d.nnUNetDataLoader3D object at 0x7fe42307dd00>",
+ "dataloader_train.num_processes": "12",
+ "dataloader_train.transform": "None",
+ "dataloader_val": "<batchgenerators.dataloading.nondet_multi_threaded_augmenter.NonDetMultiThreadedAugmenter object at 0x7fe42307d730>",
+ "dataloader_val.generator": "<nnunetv2.training.dataloading.data_loader_3d.nnUNetDataLoader3D object at 0x7fe42307dcd0>",
+ "dataloader_val.num_processes": "6",
+ "dataloader_val.transform": "None",
+ "dataset_json": "{'channel_names': {'0': 'CT'}, 'labels': {'background': 0, 'main_fragment': 1, 'minor_fragment': 2}, 'numTraining': 100, 'file_ending': '.mha', 'overwrite_image_reader_writer': 'SimpleITKIO'}",
+ "device": "cuda:0",
+ "disable_checkpointing": "False",
+ "enable_deep_supervision": "True",
+ "fold": "0",
+ "folder_with_segs_from_previous_stage": "None",
+ "gpu_name": "NVIDIA GeForce RTX 2080 Ti",
+ "grad_scaler": "<torch.cuda.amp.grad_scaler.GradScaler object at 0x7fe42308d490>",
+ "hostname": "wh-Super-Server",
+ "inference_allowed_mirroring_axes": "(0, 1, 2)",
+ "initial_lr": "0.01",
+ "is_cascaded": "False",
+ "is_ddp": "False",
+ "label_manager": "<nnunetv2.utilities.label_handling.label_handling.LabelManager object at 0x7fe42308d4f0>",
+ "local_rank": "0",
+ "log_file": "/data/ypy/dataset/nnUNet_datasets/nnUNet_results/Dataset202_newSacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/fold_0/training_log_2024_7_19_19_43_14.txt",
+ "logger": "<nnunetv2.training.logging.nnunet_logger.nnUNetLogger object at 0x7fe42308d370>",
+ "loss": "DeepSupervisionWrapper(\n (loss): DC_and_CE_loss(\n (ce): RobustCrossEntropyLoss()\n (dc): OptimizedModule(\n (_orig_mod): MemoryEfficientSoftDiceLoss()\n )\n )\n)",
+ "lr_scheduler": "<nnunetv2.training.lr_scheduler.polylr.PolyLRScheduler object at 0x7fe413ef3460>",
+ "my_init_kwargs": "{'plans': {'dataset_name': 'Dataset202_newSacrum', 'plans_name': 'nnUNetResEncUNetMPlans', 'original_median_spacing_after_transp': [0.801025390625, 0.797914057970047, 0.797914057970047], 'original_median_shape_after_transp': [182, 137, 168], 'image_reader_writer': 'SimpleITKIO', 'transpose_forward': [0, 1, 2], 'transpose_backward': [0, 1, 2], 'configurations': {'2d': {'data_identifier': 'nnUNetPlans_2d', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 107, 'patch_size': [160, 192], 'median_image_size_in_voxels': [138.0, 169.0], 'spacing': [0.797914057970047, 0.797914057970047], 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.ResidualEncoderUNet', 'arch_kwargs': {'n_stages': 6, 'features_per_stage': [32, 64, 128, 256, 512, 512], 'conv_op': 'torch.nn.modules.conv.Conv2d', 'kernel_sizes': [[3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3]], 'strides': [[1, 1], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2]], 'n_blocks_per_stage': [1, 3, 4, 6, 6, 6], 'n_conv_per_stage_decoder': [1, 1, 1, 1, 1], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm2d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': True}, '3d_fullres': {'data_identifier': 'nnUNetPlans_3d_fullres', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 2, 'patch_size': [160, 112, 128], 'median_image_size_in_voxels': [196.0, 138.0, 169.0], 'spacing': [0.801025390625, 0.797914057970047, 0.797914057970047], 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.ResidualEncoderUNet', 'arch_kwargs': {'n_stages': 6, 'features_per_stage': [32, 64, 128, 256, 320, 320], 'conv_op': 'torch.nn.modules.conv.Conv3d', 'kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]], 'strides': [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 1, 2]], 'n_blocks_per_stage': [1, 3, 4, 6, 6, 6], 'n_conv_per_stage_decoder': [1, 1, 1, 1, 1], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm3d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': False}}, 'experiment_planner_used': 'nnUNetPlannerResEncM', 'label_manager': 'LabelManager', 'foreground_intensity_properties_per_channel': {'0': {'max': 2226.0, 'mean': 262.4073486328125, 'median': 220.0, 'min': -557.0, 'percentile_00_5': -94.0, 'percentile_99_5': 1088.0, 'std': 223.74400329589844}}}, 'configuration': '3d_fullres', 'fold': 0, 'dataset_json': {'channel_names': {'0': 'CT'}, 'labels': {'background': 0, 'main_fragment': 1, 'minor_fragment': 2}, 'numTraining': 100, 'file_ending': '.mha', 'overwrite_image_reader_writer': 'SimpleITKIO'}, 'unpack_dataset': True, 'device': device(type='cuda')}",
+ "network": "OptimizedModule",
+ "num_epochs": "1000",
+ "num_input_channels": "1",
+ "num_iterations_per_epoch": "250",
+ "num_val_iterations_per_epoch": "50",
+ "optimizer": "SGD (\nParameter Group 0\n dampening: 0\n differentiable: False\n foreach: None\n fused: None\n initial_lr: 0.01\n lr: 0.01\n maximize: False\n momentum: 0.99\n nesterov: True\n weight_decay: 3e-05\n)",
+ "output_folder": "/data/ypy/dataset/nnUNet_datasets/nnUNet_results/Dataset202_newSacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/fold_0",
+ "output_folder_base": "/data/ypy/dataset/nnUNet_datasets/nnUNet_results/Dataset202_newSacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres",
+ "oversample_foreground_percent": "0.33",
+ "plans_manager": "{'dataset_name': 'Dataset202_newSacrum', 'plans_name': 'nnUNetResEncUNetMPlans', 'original_median_spacing_after_transp': [0.801025390625, 0.797914057970047, 0.797914057970047], 'original_median_shape_after_transp': [182, 137, 168], 'image_reader_writer': 'SimpleITKIO', 'transpose_forward': [0, 1, 2], 'transpose_backward': [0, 1, 2], 'configurations': {'2d': {'data_identifier': 'nnUNetPlans_2d', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 107, 'patch_size': [160, 192], 'median_image_size_in_voxels': [138.0, 169.0], 'spacing': [0.797914057970047, 0.797914057970047], 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.ResidualEncoderUNet', 'arch_kwargs': {'n_stages': 6, 'features_per_stage': [32, 64, 128, 256, 512, 512], 'conv_op': 'torch.nn.modules.conv.Conv2d', 'kernel_sizes': [[3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3]], 'strides': [[1, 1], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2]], 'n_blocks_per_stage': [1, 3, 4, 6, 6, 6], 'n_conv_per_stage_decoder': [1, 1, 1, 1, 1], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm2d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': True}, '3d_fullres': {'data_identifier': 'nnUNetPlans_3d_fullres', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 2, 'patch_size': [160, 112, 128], 'median_image_size_in_voxels': [196.0, 138.0, 169.0], 'spacing': [0.801025390625, 0.797914057970047, 0.797914057970047], 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.ResidualEncoderUNet', 'arch_kwargs': {'n_stages': 6, 'features_per_stage': [32, 64, 128, 256, 320, 320], 'conv_op': 'torch.nn.modules.conv.Conv3d', 'kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]], 'strides': [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 1, 2]], 'n_blocks_per_stage': [1, 3, 4, 6, 6, 6], 'n_conv_per_stage_decoder': [1, 1, 1, 1, 1], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm3d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': False}}, 'experiment_planner_used': 'nnUNetPlannerResEncM', 'label_manager': 'LabelManager', 'foreground_intensity_properties_per_channel': {'0': {'max': 2226.0, 'mean': 262.4073486328125, 'median': 220.0, 'min': -557.0, 'percentile_00_5': -94.0, 'percentile_99_5': 1088.0, 'std': 223.74400329589844}}}",
+ "preprocessed_dataset_folder": "/data/ypy/dataset/nnUNet_datasets/nnUNet_preprocessed/Dataset202_newSacrum/nnUNetPlans_3d_fullres",
+ "preprocessed_dataset_folder_base": "/data/ypy/dataset/nnUNet_datasets/nnUNet_preprocessed/Dataset202_newSacrum",
+ "save_every": "50",
+ "torch_version": "2.3.1",
+ "unpack_dataset": "True",
+ "was_initialized": "True",
+ "weight_decay": "3e-05"
+ }
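The 3d_fullres configuration recorded in debug.json above pairs a patch size of [160, 112, 128] voxels with a target spacing of roughly 0.8 mm per axis. As a quick sanity check (a minimal sketch using values copied from the config, not part of the repository code), the physical extent of one network patch is just patch size times spacing:

```python
# Physical extent of one nnU-Net patch in mm: patch_size (voxels) * spacing (mm/voxel).
# Values copied from the 3d_fullres configuration in debug.json above.
patch_size = [160, 112, 128]
spacing = [0.801025390625, 0.797914057970047, 0.797914057970047]

extent_mm = [p * s for p, s in zip(patch_size, spacing)]
print([round(e, 2) for e in extent_mm])  # → [128.16, 89.37, 102.13]
```

This extent comfortably covers the median image size of [196, 138, 169] voxels along the first axis only with sliding-window tiling, which is how nnU-Net runs inference.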
Code/PENGWIN_Challenge/Inference/CT/models/Dataset202_newSacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/plans.json ADDED
@@ -0,0 +1,345 @@
+ {
+ "dataset_name": "Dataset202_newSacrum",
+ "plans_name": "nnUNetResEncUNetMPlans",
+ "original_median_spacing_after_transp": [
+ 0.801025390625,
+ 0.797914057970047,
+ 0.797914057970047
+ ],
+ "original_median_shape_after_transp": [
+ 182,
+ 137,
+ 168
+ ],
+ "image_reader_writer": "SimpleITKIO",
+ "transpose_forward": [
+ 0,
+ 1,
+ 2
+ ],
+ "transpose_backward": [
+ 0,
+ 1,
+ 2
+ ],
+ "configurations": {
+ "2d": {
+ "data_identifier": "nnUNetPlans_2d",
+ "preprocessor_name": "DefaultPreprocessor",
+ "batch_size": 107,
+ "patch_size": [
+ 160,
+ 192
+ ],
+ "median_image_size_in_voxels": [
+ 138.0,
+ 169.0
+ ],
+ "spacing": [
+ 0.797914057970047,
+ 0.797914057970047
+ ],
+ "normalization_schemes": [
+ "CTNormalization"
+ ],
+ "use_mask_for_norm": [
+ false
+ ],
+ "resampling_fn_data": "resample_data_or_seg_to_shape",
+ "resampling_fn_seg": "resample_data_or_seg_to_shape",
+ "resampling_fn_data_kwargs": {
+ "is_seg": false,
+ "order": 3,
+ "order_z": 0,
+ "force_separate_z": null
+ },
+ "resampling_fn_seg_kwargs": {
+ "is_seg": true,
+ "order": 1,
+ "order_z": 0,
+ "force_separate_z": null
+ },
+ "resampling_fn_probabilities": "resample_data_or_seg_to_shape",
+ "resampling_fn_probabilities_kwargs": {
+ "is_seg": false,
+ "order": 1,
+ "order_z": 0,
+ "force_separate_z": null
+ },
+ "architecture": {
+ "network_class_name": "dynamic_network_architectures.architectures.unet.ResidualEncoderUNet",
+ "arch_kwargs": {
+ "n_stages": 6,
+ "features_per_stage": [
+ 32,
+ 64,
+ 128,
+ 256,
+ 512,
+ 512
+ ],
+ "conv_op": "torch.nn.modules.conv.Conv2d",
+ "kernel_sizes": [
+ [
+ 3,
+ 3
+ ],
+ [
+ 3,
+ 3
+ ],
+ [
+ 3,
+ 3
+ ],
+ [
+ 3,
+ 3
+ ],
+ [
+ 3,
+ 3
+ ],
+ [
+ 3,
+ 3
+ ]
+ ],
+ "strides": [
+ [
+ 1,
+ 1
+ ],
+ [
+ 2,
+ 2
+ ],
+ [
+ 2,
+ 2
+ ],
+ [
+ 2,
+ 2
+ ],
+ [
+ 2,
+ 2
+ ],
+ [
+ 2,
+ 2
+ ]
+ ],
+ "n_blocks_per_stage": [
+ 1,
+ 3,
+ 4,
+ 6,
+ 6,
+ 6
+ ],
+ "n_conv_per_stage_decoder": [
+ 1,
+ 1,
+ 1,
+ 1,
+ 1
+ ],
+ "conv_bias": true,
+ "norm_op": "torch.nn.modules.instancenorm.InstanceNorm2d",
+ "norm_op_kwargs": {
+ "eps": 1e-05,
+ "affine": true
+ },
+ "dropout_op": null,
+ "dropout_op_kwargs": null,
+ "nonlin": "torch.nn.LeakyReLU",
+ "nonlin_kwargs": {
+ "inplace": true
+ }
+ },
+ "_kw_requires_import": [
+ "conv_op",
+ "norm_op",
+ "dropout_op",
+ "nonlin"
+ ]
+ },
+ "batch_dice": true
+ },
+ "3d_fullres": {
+ "data_identifier": "nnUNetPlans_3d_fullres",
+ "preprocessor_name": "DefaultPreprocessor",
+ "batch_size": 2,
+ "patch_size": [
+ 160,
+ 112,
+ 128
+ ],
+ "median_image_size_in_voxels": [
+ 196.0,
+ 138.0,
+ 169.0
+ ],
+ "spacing": [
+ 0.801025390625,
+ 0.797914057970047,
+ 0.797914057970047
+ ],
+ "normalization_schemes": [
+ "CTNormalization"
+ ],
+ "use_mask_for_norm": [
+ false
+ ],
+ "resampling_fn_data": "resample_data_or_seg_to_shape",
+ "resampling_fn_seg": "resample_data_or_seg_to_shape",
+ "resampling_fn_data_kwargs": {
+ "is_seg": false,
+ "order": 3,
+ "order_z": 0,
+ "force_separate_z": null
+ },
+ "resampling_fn_seg_kwargs": {
+ "is_seg": true,
+ "order": 1,
+ "order_z": 0,
+ "force_separate_z": null
+ },
+ "resampling_fn_probabilities": "resample_data_or_seg_to_shape",
+ "resampling_fn_probabilities_kwargs": {
+ "is_seg": false,
+ "order": 1,
+ "order_z": 0,
+ "force_separate_z": null
+ },
+ "architecture": {
+ "network_class_name": "dynamic_network_architectures.architectures.unet.ResidualEncoderUNet",
+ "arch_kwargs": {
+ "n_stages": 6,
+ "features_per_stage": [
+ 32,
+ 64,
+ 128,
+ 256,
+ 320,
+ 320
+ ],
+ "conv_op": "torch.nn.modules.conv.Conv3d",
+ "kernel_sizes": [
+ [
+ 3,
+ 3,
+ 3
+ ],
+ [
+ 3,
+ 3,
+ 3
+ ],
+ [
+ 3,
+ 3,
+ 3
+ ],
+ [
+ 3,
+ 3,
+ 3
+ ],
+ [
+ 3,
+ 3,
+ 3
+ ],
+ [
+ 3,
+ 3,
+ 3
+ ]
+ ],
+ "strides": [
+ [
+ 1,
+ 1,
+ 1
+ ],
+ [
+ 2,
+ 2,
+ 2
+ ],
+ [
+ 2,
+ 2,
+ 2
+ ],
+ [
+ 2,
+ 2,
+ 2
+ ],
+ [
+ 2,
+ 2,
+ 2
+ ],
+ [
+ 2,
+ 1,
+ 2
+ ]
+ ],
+ "n_blocks_per_stage": [
+ 1,
+ 3,
+ 4,
+ 6,
+ 6,
+ 6
+ ],
+ "n_conv_per_stage_decoder": [
+ 1,
+ 1,
+ 1,
+ 1,
+ 1
+ ],
+ "conv_bias": true,
+ "norm_op": "torch.nn.modules.instancenorm.InstanceNorm3d",
+ "norm_op_kwargs": {
+ "eps": 1e-05,
+ "affine": true
+ },
+ "dropout_op": null,
+ "dropout_op_kwargs": null,
+ "nonlin": "torch.nn.LeakyReLU",
+ "nonlin_kwargs": {
+ "inplace": true
+ }
+ },
+ "_kw_requires_import": [
+ "conv_op",
+ "norm_op",
+ "dropout_op",
+ "nonlin"
+ ]
+ },
+ "batch_dice": false
+ }
+ },
+ "experiment_planner_used": "nnUNetPlannerResEncM",
+ "label_manager": "LabelManager",
+ "foreground_intensity_properties_per_channel": {
+ "0": {
+ "max": 2226.0,
+ "mean": 262.4073486328125,
+ "median": 220.0,
+ "min": -557.0,
+ "percentile_00_5": -94.0,
+ "percentile_99_5": 1088.0,
+ "std": 223.74400329589844
+ }
+ }
+ }
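The plans.json above selects `CTNormalization` and stores the foreground intensity statistics it relies on. In outline, nnU-Net's CT normalization clips each voxel to the 0.5/99.5 foreground percentiles and then z-scores with the stored mean and standard deviation; a minimal per-voxel sketch using the numbers from this file (illustrative only, not the repository's preprocessing code):

```python
# Sketch of nnU-Net-style CT normalization using the statistics stored in
# plans.json above: clip to the 0.5/99.5 foreground percentiles, then z-score.
props = {
    "mean": 262.4073486328125,
    "std": 223.74400329589844,
    "percentile_00_5": -94.0,
    "percentile_99_5": 1088.0,
}

def ct_normalize(value_hu, p=props):
    # Clip to the foreground percentile range, then standardize.
    clipped = min(max(value_hu, p["percentile_00_5"]), p["percentile_99_5"])
    return (clipped - p["mean"]) / p["std"]

print(round(ct_normalize(220.0), 4))  # median foreground intensity maps near 0
```

Because the statistics come from foreground voxels only, out-of-range values such as air (-1000 HU) or metal artifacts are clipped to the same extremes before standardization.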
Code/PENGWIN_Challenge/Inference/CT/models/Dataset203_newHips/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset.json ADDED
@@ -0,0 +1,13 @@
+ {
+ "channel_names": {
+ "0": "CT"
+ },
+ "labels": {
+ "background": 0,
+ "main_fragment": 1,
+ "minor_fragment": 2
+ },
+ "numTraining": 200,
+ "file_ending": ".mha",
+ "overwrite_image_reader_writer": "SimpleITKIO"
+ }
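dataset.json files like the one above drive nnU-Net's dataset discovery: label values must be consecutive integers starting at 0 with `background` mapped to 0, and `file_ending` must match the training images. A minimal validation sketch (a hypothetical check, not part of the challenge code) over the exact content shown:

```python
import json

# dataset.json content copied from the Dataset203_newHips file above.
dataset_json = json.loads("""
{
    "channel_names": {"0": "CT"},
    "labels": {"background": 0, "main_fragment": 1, "minor_fragment": 2},
    "numTraining": 200,
    "file_ending": ".mha",
    "overwrite_image_reader_writer": "SimpleITKIO"
}
""")

labels = sorted(dataset_json["labels"].values())
# nnU-Net expects label values 0..N with background == 0.
assert labels == list(range(len(labels)))
assert dataset_json["labels"]["background"] == 0
print("labels OK:", labels)  # → labels OK: [0, 1, 2]
```

The `overwrite_image_reader_writer` entry pins `SimpleITKIO` so the `.mha` volumes are read with SimpleITK rather than an auto-detected reader.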
Code/PENGWIN_Challenge/Inference/CT/models/Dataset203_newHips/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/dataset_fingerprint.json ADDED
@@ -0,0 +1,2018 @@
+ {
+ "foreground_intensity_properties_per_channel": {
+ "0": {
+ "max": 3467.0,
+ "mean": 386.2413635253906,
+ "median": 311.0,
+ "min": -706.0,
+ "percentile_00_5": -53.0,
+ "percentile_99_5": 1311.0,
+ "std": 290.2442626953125
+ }
+ },
+ "median_relative_size_after_cropping": 1.0,
+ "shapes_after_crop": [
+ [294, 210, 216],
+ [303, 207, 202],
+ [266, 182, 173],
+ [265, 177, 203],
+ [226, 224, 211],
+ [230, 213, 203],
+ [194, 202, 198],
+ [202, 207, 162],
+ [212, 155, 177],
+ [213, 164, 191],
+ [205, 180, 156],
+ [206, 175, 191],
+ [214, 189, 182],
+ [218, 187, 185],
+ [265, 228, 214],
+ [280, 225, 188],
+ [272, 203, 191],
+ [280, 203, 178],
+ [265, 157, 173],
+ [270, 164, 182],
+ [290, 146, 179],
+ [280, 161, 160],
+ [213, 192, 158],
+ [203, 185, 177],
+ [281, 194, 206],
+ [282, 194, 209],
+ [237, 211, 189],
+ [227, 202, 212],
+ [233, 163, 196],
+ [226, 177, 191],
+ [229, 183, 216],
+ [210, 199, 207],
+ [201, 181, 195],
+ [209, 171, 185],
+ [282, 212, 186],
+ [282, 220, 215],
+ [267, 205, 190],
+ [257, 209, 202],
+ [275, 202, 193],
+ [261, 208, 213],
+ [300, 199, 206],
+ [289, 190, 215],
+ [234, 202, 173],
+ [237, 185, 167],
+ [289, 201, 179],
+ [280, 210, 203],
+ [298, 190, 196],
+ [294, 193, 187],
+ [212, 202, 185],
+ [202, 188, 201],
+ [223, 200, 206],
+ [234, 206, 183],
+ [265, 168, 182],
+ [262, 172, 188],
+ [219, 177, 189],
+ [231, 176, 227],
+ [259, 186, 202],
+ [271, 191, 185],
+ [242, 193, 194],
+ [225, 205, 244],
+ [242, 192, 175],
+ [239, 196, 190],
+ [264, 191, 182],
+ [255, 193, 204],
+ [290, 196, 183],
+ [298, 187, 198],
+ [275, 192, 185],
+ [283, 198, 191],
+ [315, 199, 184],
+ [292, 201, 203],
+ [229, 191, 195],
+ [233, 192, 205],
+ [221, 197, 204],
+ [223, 203, 180],
+ [271, 183, 185],
+ [271, 192, 207],
+ [263, 128, 156],
+ [272, 125, 134],
+ [265, 178, 175],
+ [255, 185, 199],
+ [279, 191, 192],
+ [276, 209, 202],
+ [278, 185, 163],
+ [272, 180, 182],
+ [224, 177, 181],
+ [222, 167, 199],
+ [360, 171, 177],
+ [365, 170, 176],
+ [233, 195, 181],
+ [235, 173, 181],
+ [233, 205, 164],
+ [224, 204, 206],
+ [184, 212, 198],
+ [174, 208, 195],
+ [274, 154, 164],
+ [270, 150, 195],
+ [283, 215, 200],
+ [291, 226, 181],
+ [287, 247, 234],
+ [296, 250, 235],
+ [281, 185, 184],
+ [289, 173, 170],
+ [256, 147, 166],
+ [261, 162, 172],
+ [217, 208, 185],
+ [210, 206, 193],
+ [240, 141, 208],
+ [219, 173, 192],
+ [216, 182, 173],
+ [210, 182, 179],
+ [217, 196, 190],
+ [208, 203, 202],
+ [251, 170, 190],
+ [254, 183, 160],
+ [245, 158, 173],
+ [226, 165, 180],
+ [271, 175, 187],
+ [276, 172, 184],
+ [289, 189, 188],
+ [295, 186, 176],
+ [283, 187, 160],
+ [269, 200, 182],
+ [237, 196, 188],
+ [238, 199, 197],
+ [234, 203, 185],
+ [232, 185, 204],
+ [267, 205, 193],
+ [265, 207, 217],
+ [266, 219, 188],
+ [272, 194, 214],
+ [221, 212, 252],
+ [225, 212, 209],
+ [268, 173, 171],
+ [268, 180, 171],
+ [292, 204, 203],
+ [302, 198, 202],
+ [267, 165, 159],
+ [266, 164, 166],
+ [273, 216, 221],
+ [277, 231, 201],
+ [270, 166, 158],
+ [264, 162, 176],
+ [240, 200, 155],
+ [234, 204, 171],
+ [274, 186, 197],
+ [273, 177, 200],
+ [278, 175, 169],
+ [271, 173, 165],
+ [235, 167, 212],
+ [230, 191, 173],
+ [265, 187, 201],
+ [273, 191, 170],
+ [255, 214, 193],
+ [274, 201, 171],
+ [282, 184, 174],
+ [275, 174, 193],
+ [236, 246, 177],
+ [233, 219, 190],
+ [237, 222, 224],
+ [242, 213, 207],
+ [280, 193, 176],
+ [276, 187, 209],
+ [236, 192, 188],
+ [236, 196, 180],
+ [270, 216, 222],
+ [270, 207, 207],
+ [241, 191, 180],
+ [219, 191, 196],
+ [277, 169, 166],
+ [272, 159, 181],
+ [239, 212, 223],
+ [236, 226, 224],
+ [284, 191, 191],
+ [281, 194, 199],
+ [287, 182, 167],
+ [279, 185, 188],
+ [222, 193, 193],
+ [217, 197, 198],
+ [266, 161, 154],
+ [272, 163, 159],
+ [205, 188, 194],
+ [200, 218, 224],
+ [246, 178, 169],
+ [234, 171, 193],
+ [229, 215, 222],
+ [236, 216, 203],
+ [283, 170, 159],
+ [287, 172, 172],
+ [211, 236, 220],
+ [222, 233, 232],
+ [264, 198, 203],
+ [263, 188, 237],
+ [234, 192, 190],
+ [230, 187, 179],
+ [270, 209, 198],
+ [263, 203, 202],
+ [204, 179, 212],
+ [211, 176, 172],
+ [259, 176, 156],
+ [255, 173, 172]
+ ],
+ "spacings": [
+ [0.800000011920929, 0.78125, 0.78125],
+ [0.800000011920929, 0.78125, 0.78125],
+ [0.800000011920929, 0.82421875, 0.82421875],
+ [0.800000011920929, 0.82421875, 0.82421875],
+ [1.0, 0.759765625, 0.759765625],
+ [1.0, 0.759765625, 0.759765625],
+ [1.0, 0.84375, 0.84375],
+ [1.0, 0.84375, 0.84375],
+ [1.0, 0.896484375, 0.896484375],
+ [1.0, 0.896484375, 0.896484375],
+ [1.0, 0.8125, 0.8125],
+ [1.0, 0.8125, 0.8125],
+ [1.0, 0.8359375, 0.8359375],
+ [1.0, 0.8359375, 0.8359375],
+ [0.800000011920929, 0.7421875, 0.7421875],
+ [0.800000011920929, 0.7421875, 0.7421875],
+ [0.8000000715255737, 0.8125, 0.8125],
+ [0.8000000715255737, 0.8125, 0.8125],
+ [0.7989501953125, 0.9459999799728394, 0.9459999799728394],
+ [0.7989501953125, 0.9459999799728394, 0.9459999799728394],
+ [0.7989501953125, 0.9760000109672546, 0.9760000109672546],
+ [0.7989501953125, 0.9760000109672546, 0.9760000109672546],
+ [1.0, 0.87890625, 0.87890625],
+ [1.0, 0.87890625, 0.87890625],
+ [0.800000011920929, 0.78125, 0.78125],
+ [0.800000011920929, 0.78125, 0.78125],
+ [1.0, 0.7890625, 0.7890625],
+ [1.0, 0.7890625, 0.7890625],
+ [1.0, 0.81640625, 0.81640625],
+ [1.0, 0.81640625, 0.81640625],
+ [1.0, 0.78125, 0.78125],
+ [1.0, 0.78125, 0.78125],
+ [1.0, 0.78125, 0.78125],
+ [1.0, 0.78125, 0.78125],
+ [0.800000011920929, 0.78125, 0.78125],
+ [0.800000011920929, 0.78125, 0.78125],
+ [0.800048828125, 0.6919999718666077, 0.6919999718666077],
+ [0.800048828125, 0.6919999718666077, 0.6919999718666077],
+ [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
+ [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
+ [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
+ [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
+ [1.0, 0.84765625, 0.84765625],
+ [1.0, 0.84765625, 0.84765625],
+ [0.800000011920929, 0.77734375, 0.77734375],
+ [0.800000011920929, 0.77734375, 0.77734375],
+ [0.7989501953125, 0.781000018119812, 0.781000018119812],
+ [0.7989501953125, 0.781000018119812, 0.781000018119812],
+ [1.0, 0.77734375, 0.77734375],
+ [1.0, 0.77734375, 0.77734375],
+ [1.0, 0.814453125, 0.814453125],
+ [1.0, 0.814453125, 0.814453125],
+ [0.7999998927116394, 0.8309999704360962, 0.8309999704360962],
+ [0.7999998927116394, 0.8309999704360962, 0.8309999704360962],
+ [1.0, 0.83203125, 0.83203125],
+ [1.0, 0.83203125, 0.83203125],
+ [0.7999998927116394, 0.783203125, 0.783203125],
+ [0.7999998927116394, 0.783203125, 0.783203125],
+ [1.0, 0.771484375, 0.771484375],
+ [1.0, 0.771484375, 0.771484375],
+ [1.0, 0.78125, 0.78125],
+ [1.0, 0.78125, 0.78125],
+ [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
+ [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
+ [0.800000011920929, 0.798828125, 0.798828125],
+ [0.800000011920929, 0.798828125, 0.798828125],
+ [0.7999999523162842, 0.8500000238418579, 0.8500000238418579],
+ [0.7999999523162842, 0.8500000238418579, 0.8500000238418579],
+ [0.800000011920929, 0.8080000281333923, 0.8080000281333923],
+ [0.800000011920929, 0.8080000281333923, 0.8080000281333923],
+ [1.0, 0.78125, 0.78125],
+ [1.0, 0.78125, 0.78125],
+ [1.0060241222381592, 0.78125, 0.78125],
+ [1.0060241222381592, 0.78125, 0.78125],
+ [0.801025390625, 0.7820000052452087, 0.7820000052452087],
+ [0.801025390625, 0.7820000052452087, 0.7820000052452087],
+ [0.800000011920929, 1.2200000286102295, 1.2200000286102295],
+ [0.800000011920929, 1.2200000286102295, 1.2200000286102295],
+ [0.801025390625, 0.7820000052452087, 0.7820000052452087],
+ [0.801025390625, 0.7820000052452087, 0.7820000052452087],
+ [0.800000011920929, 0.7820000052452087, 0.7820000052452087],
+ [0.800000011920929, 0.7820000052452087, 0.7820000052452087],
+ [0.7989501953125, 0.8500000238418579, 0.8500000238418579],
+ [0.7989501953125, 0.8500000238418579, 0.8500000238418579],
+ [1.0, 0.86328125, 0.86328125],
+ [1.0, 0.86328125, 0.86328125],
+ [0.625, 0.8730469942092896, 0.8730469942092896],
+ [0.625, 0.8730469942092896, 0.8730469942092896],
+ [1.0, 0.859375, 0.859375],
+ [1.0, 0.859375, 0.859375],
+ [1.0, 0.78125, 0.78125],
+ [1.0, 0.78125, 0.78125],
+ [1.2500052452087402, 0.7480469942092896, 0.7480469942092896],
+ [1.2500052452087402, 0.7480469942092896, 0.7480469942092896],
+ [0.800000011920929, 0.8828125, 0.8828125],
+ [0.800000011920929, 0.8828125, 0.8828125],
+ [0.79998779296875, 0.767578125, 0.767578125],
+ [0.79998779296875, 0.767578125, 0.767578125],
+ [0.8000000715255737, 0.68359375, 0.68359375],
+ [0.8000000715255737, 0.68359375, 0.68359375],
+ [0.800048828125, 0.8579999804496765, 0.8579999804496765],
+ [0.800048828125, 0.8579999804496765, 0.8579999804496765],
+ [0.7989501953125, 0.8659999966621399, 0.8659999966621399],
+ [0.7989501953125, 0.8659999966621399, 0.8659999966621399],
+ [1.0, 0.78125, 0.78125],
+ [1.0, 0.78125, 0.78125],
+ [1.0, 0.833984375, 0.833984375],
+ [1.0, 0.833984375, 0.833984375],
+ [1.0, 0.828125, 0.828125],
+ [1.0, 0.828125, 0.828125],
+ [1.0, 0.77734375, 0.77734375],
+ [1.0, 0.77734375, 0.77734375],
+ [1.0, 0.970703125, 0.970703125],
+ [1.0, 0.970703125, 0.970703125],
+ [1.0, 0.94140625, 0.94140625],
+ [1.0, 0.94140625, 0.94140625],
+ [0.7999801635742188, 0.873046875, 0.873046875],
+ [0.7999801635742188, 0.873046875, 0.873046875],
+ [0.800000011920929, 0.8579999804496765, 0.8579999804496765],
+ [0.800000011920929, 0.8579999804496765, 0.8579999804496765],
+ [0.7989501953125, 0.7850000262260437, 0.7850000262260437],
+ [0.7989501953125, 0.7850000262260437, 0.7850000262260437],
+ [1.0, 0.783203125, 0.783203125],
+ [1.0, 0.783203125, 0.783203125],
+ [1.0, 0.806640625, 0.806640625],
+ [1.0, 0.806640625, 0.806640625],
+ [0.801025390625, 0.7820000052452087, 0.7820000052452087],
+ [0.801025390625, 0.7820000052452087, 0.7820000052452087],
+ [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
+ [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
+ [1.0, 0.783203125, 0.783203125],
+ [1.0, 0.783203125, 0.783203125],
+ [0.7999989986419678, 0.9039999842643738, 0.9039999842643738],
+ [0.7999989986419678, 0.9039999842643738, 0.9039999842643738],
+ [0.7989501953125, 0.8119999766349792, 0.8119999766349792],
+ [0.7989501953125, 0.8119999766349792, 0.8119999766349792],
+ [0.800048828125, 0.9039999842643738, 0.9039999842643738],
+ [0.800048828125, 0.9039999842643738, 0.9039999842643738],
+ [0.800048828125, 0.6620000004768372, 0.6620000004768372],
+ [0.800048828125, 0.6620000004768372, 0.6620000004768372],
+ [0.800000011920929, 0.9296875, 0.9296875],
+ [0.800000011920929, 0.9296875, 0.9296875],
+ [1.0, 0.8203125, 0.8203125],
+ [1.0, 0.8203125, 0.8203125],
+ [0.800048828125, 0.78125, 0.78125],
+ [0.800048828125, 0.78125, 0.78125],
+ [0.801025390625, 0.9760000109672546, 0.9760000109672546],
+ [0.801025390625, 0.9760000109672546, 0.9760000109672546],
+ [1.0, 0.853515625, 0.853515625],
+ [1.0, 0.853515625, 0.853515625],
+ [0.801025390625, 0.7820000052452087, 0.7820000052452087],
+ [0.801025390625, 0.7820000052452087, 0.7820000052452087],
+ [0.800048828125, 0.7239999771118164, 0.7239999771118164],
+ [0.800048828125, 0.7239999771118164, 0.7239999771118164],
+ [0.7999267578125, 0.8429999947547913, 0.8429999947547913],
+ [0.7999267578125, 0.8429999947547913, 0.8429999947547913],
+ [1.0, 0.693359375, 0.693359375],
+ [1.0, 0.693359375, 0.693359375],
+ [1.0, 0.775390625, 0.775390625],
+ [1.0, 0.775390625, 0.775390625],
+ [0.800000011920929, 0.796999990940094, 0.796999990940094],
+ [0.800000011920929, 0.796999990940094, 0.796999990940094],
+ [1.0, 0.8125, 0.8125],
+ [1.0, 0.8125, 0.8125],
+ [0.79998779296875, 0.748046875, 0.748046875],
+ [0.79998779296875, 0.748046875, 0.748046875],
+ [1.0, 0.849609375, 0.849609375],
+ [1.0, 0.849609375, 0.849609375],
+ [0.8000000715255737, 0.9140625, 0.9140625],
+ [0.8000000715255737, 0.9140625, 0.9140625],
+ [1.0, 0.658203125, 0.658203125],
+ [1.0, 0.658203125, 0.658203125],
+ [0.800000011920929, 0.80859375, 0.80859375],
+ [0.800000011920929, 0.80859375, 0.80859375],
+ [0.800000011920929, 0.890625, 0.890625],
+ [0.800000011920929, 0.890625, 0.890625],
+ [1.0, 0.775390625, 0.775390625],
+ [1.0, 0.775390625, 0.775390625],
+ [0.801025390625, 0.9760000109672546, 0.9760000109672546],
+ [0.801025390625, 0.9760000109672546, 0.9760000109672546],
+ [1.0, 0.6953125, 0.6953125],
+ [1.0, 0.6953125, 0.6953125],
+ [1.0, 0.880859375, 0.880859375],
+ [1.0, 0.880859375, 0.880859375],
+ [1.0, 0.763671875, 0.763671875],
+ [1.0, 0.763671875, 0.763671875],
+ [0.7989501953125, 0.9760000109672546, 0.9760000109672546],
+ [0.7989501953125, 0.9760000109672546, 0.9760000109672546],
+ [1.0, 0.6796875, 0.6796875],
+ [1.0, 0.6796875, 0.6796875],
+ [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
+ [0.7989501953125, 0.7820000052452087, 0.7820000052452087],
+ [1.0, 0.87890625, 0.87890625],
+ [1.0, 0.87890625, 0.87890625],
+ [0.801025390625, 0.7820000052452087, 0.7820000052452087],
+ [0.801025390625, 0.7820000052452087, 0.7820000052452087],
+ [1.0, 0.779296875, 0.779296875],
+ [1.0, 0.779296875, 0.779296875],
+ [0.801025390625, 0.8659999966621399, 0.8659999966621399],
+ [0.801025390625, 0.8659999966621399, 0.8659999966621399]
+ ]
+ }
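The foreground intensity statistics in this fingerprint are what the `CTNormalization` scheme (named in the plans below) consumes at preprocessing time: intensities are clipped to the dataset's 0.5/99.5 foreground percentiles and then z-scored with the dataset-wide mean and std. A minimal sketch of that step using the values above — the function name is illustrative, not nnU-Net's actual API:

```python
import numpy as np

# Values from "foreground_intensity_properties_per_channel" -> "0" above
P00_5, P99_5 = -53.0, 1311.0
MEAN, STD = 386.2413635253906, 290.2442626953125

def ct_normalize(volume: np.ndarray) -> np.ndarray:
    """Clip to the dataset's 0.5/99.5 foreground percentiles, then z-score.

    Mirrors the behavior of nnU-Net's CTNormalization, which uses global
    dataset statistics rather than per-image statistics.
    """
    clipped = np.clip(volume.astype(np.float32), P00_5, P99_5)
    return (clipped - MEAN) / STD

# Example: a tiny synthetic patch in Hounsfield-like units.
# -1000 is clipped up to -53 and 3000 is clipped down to 1311 before z-scoring.
patch = np.array([[-1000.0, 0.0], [311.0, 3000.0]])
out = ct_normalize(patch)
```

Because the statistics are global, the same voxel value always maps to the same normalized value across all cases, which is why the fingerprint stores them once per channel.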
Code/PENGWIN_Challenge/Inference/CT/models/Dataset203_newHips/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/fold_0/debug.json ADDED
@@ -0,0 +1,53 @@
+ {
+ "_best_ema": "None",
+ "batch_size": "2",
+ "configuration_manager": "{'data_identifier': 'nnUNetPlans_3d_fullres', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 2, 'patch_size': [160, 128, 112], 'median_image_size_in_voxels': [277.0, 194.0, 194.0], 'spacing': [0.801025390625, 0.797914057970047, 0.797914057970047], 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.ResidualEncoderUNet', 'arch_kwargs': {'n_stages': 6, 'features_per_stage': [32, 64, 128, 256, 320, 320], 'conv_op': 'torch.nn.modules.conv.Conv3d', 'kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]], 'strides': [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 1]], 'n_blocks_per_stage': [1, 3, 4, 6, 6, 6], 'n_conv_per_stage_decoder': [1, 1, 1, 1, 1], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm3d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}, 'deep_supervision': True}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': False}",
+ "configuration_name": "3d_fullres",
+ "cudnn_version": 8700,
+ "current_epoch": "0",
+ "dataloader_train": "<batchgenerators.dataloading.nondet_multi_threaded_augmenter.NonDetMultiThreadedAugmenter object at 0x7fc6813f3c40>",
+ "dataloader_train.generator": "<nnunetv2.training.dataloading.data_loader_3d.nnUNetDataLoader3D object at 0x7fc6813f3af0>",
+ "dataloader_train.num_processes": "12",
+ "dataloader_train.transform": "None",
+ "dataloader_val": "<batchgenerators.dataloading.nondet_multi_threaded_augmenter.NonDetMultiThreadedAugmenter object at 0x7fc6813f3e50>",
+ "dataloader_val.generator": "<nnunetv2.training.dataloading.data_loader_3d.nnUNetDataLoader3D object at 0x7fc6813f3bb0>",
+ "dataloader_val.num_processes": "6",
+ "dataloader_val.transform": "None",
+ "dataset_json": "{'channel_names': {'0': 'CT'}, 'labels': {'background': 0, 'main_fragment': 1, 'minor_fragment': 2}, 'numTraining': 200, 'file_ending': '.mha', 'overwrite_image_reader_writer': 'SimpleITKIO'}",
+ "device": "cuda:0",
+ "disable_checkpointing": "False",
+ "enable_deep_supervision": "True",
+ "fold": "0",
+ "folder_with_segs_from_previous_stage": "None",
+ "gpu_name": "NVIDIA GeForce RTX 2080 Ti",
+ "grad_scaler": "<torch.cuda.amp.grad_scaler.GradScaler object at 0x7fc6814a8e80>",
+ "hostname": "wh-Super-Server",
+ "inference_allowed_mirroring_axes": "(0, 1, 2)",
+ "initial_lr": "0.01",
+ "is_cascaded": "False",
+ "is_ddp": "False",
+ "label_manager": "<nnunetv2.utilities.label_handling.label_handling.LabelManager object at 0x7fc6814b6460>",
+ "local_rank": "0",
+ "log_file": "/data/ypy/dataset/nnUNet_datasets/nnUNet_results/Dataset203_newHips/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/fold_0/training_log_2024_7_29_21_01_11.txt",
+ "logger": "<nnunetv2.training.logging.nnunet_logger.nnUNetLogger object at 0x7fc6814b6400>",
+ "loss": "DeepSupervisionWrapper(\n (loss): DC_and_CE_loss(\n (ce): RobustCrossEntropyLoss()\n (dc): OptimizedModule(\n (_orig_mod): MemoryEfficientSoftDiceLoss()\n )\n )\n)",
+ "lr_scheduler": "<nnunetv2.training.lr_scheduler.polylr.PolyLRScheduler object at 0x7fc6728d72e0>",
+ "my_init_kwargs": "{'plans': {'dataset_name': 'Dataset203_newHips', 'plans_name': 'nnUNetResEncUNetMPlans', 'original_median_spacing_after_transp': [0.801025390625, 0.797914057970047, 0.797914057970047], 'original_median_shape_after_transp': [262, 191, 190], 'image_reader_writer': 'SimpleITKIO', 'transpose_forward': [0, 1, 2], 'transpose_backward': [0, 1, 2], 'configurations': {'2d': {'data_identifier': 'nnUNetPlans_2d', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 66, 'patch_size': [224, 224], 'median_image_size_in_voxels': [194.0, 194.0], 'spacing': [0.797914057970047, 0.797914057970047], 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.ResidualEncoderUNet', 'arch_kwargs': {'n_stages': 6, 'features_per_stage': [32, 64, 128, 256, 512, 512], 'conv_op': 'torch.nn.modules.conv.Conv2d', 'kernel_sizes': [[3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3]], 'strides': [[1, 1], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2]], 'n_blocks_per_stage': [1, 3, 4, 6, 6, 6], 'n_conv_per_stage_decoder': [1, 1, 1, 1, 1], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm2d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': True}, '3d_fullres': {'data_identifier': 
'nnUNetPlans_3d_fullres', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 2, 'patch_size': [160, 128, 112], 'median_image_size_in_voxels': [277.0, 194.0, 194.0], 'spacing': [0.801025390625, 0.797914057970047, 0.797914057970047], 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.ResidualEncoderUNet', 'arch_kwargs': {'n_stages': 6, 'features_per_stage': [32, 64, 128, 256, 320, 320], 'conv_op': 'torch.nn.modules.conv.Conv3d', 'kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]], 'strides': [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 1]], 'n_blocks_per_stage': [1, 3, 4, 6, 6, 6], 'n_conv_per_stage_decoder': [1, 1, 1, 1, 1], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm3d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': False}}, 'experiment_planner_used': 'nnUNetPlannerResEncM', 'label_manager': 'LabelManager', 'foreground_intensity_properties_per_channel': {'0': {'max': 3467.0, 'mean': 386.2413635253906, 'median': 311.0, 'min': -706.0, 'percentile_00_5': -53.0, 'percentile_99_5': 1311.0, 'std': 290.2442626953125}}}, 'configuration': '3d_fullres', 'fold': 0, 'dataset_json': {'channel_names': {'0': 
'CT'}, 'labels': {'background': 0, 'main_fragment': 1, 'minor_fragment': 2}, 'numTraining': 200, 'file_ending': '.mha', 'overwrite_image_reader_writer': 'SimpleITKIO'}, 'unpack_dataset': True, 'device': device(type='cuda')}",
36
+ "network": "OptimizedModule",
37
+ "num_epochs": "1000",
38
+ "num_input_channels": "1",
39
+ "num_iterations_per_epoch": "250",
40
+ "num_val_iterations_per_epoch": "50",
41
+ "optimizer": "SGD (\nParameter Group 0\n dampening: 0\n differentiable: False\n foreach: None\n fused: None\n initial_lr: 0.01\n lr: 0.01\n maximize: False\n momentum: 0.99\n nesterov: True\n weight_decay: 3e-05\n)",
42
+ "output_folder": "/data/ypy/dataset/nnUNet_datasets/nnUNet_results/Dataset203_newHips/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/fold_0",
43
+ "output_folder_base": "/data/ypy/dataset/nnUNet_datasets/nnUNet_results/Dataset203_newHips/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres",
44
+ "oversample_foreground_percent": "0.33",
45
+ "plans_manager": "{'dataset_name': 'Dataset203_newHips', 'plans_name': 'nnUNetResEncUNetMPlans', 'original_median_spacing_after_transp': [0.801025390625, 0.797914057970047, 0.797914057970047], 'original_median_shape_after_transp': [262, 191, 190], 'image_reader_writer': 'SimpleITKIO', 'transpose_forward': [0, 1, 2], 'transpose_backward': [0, 1, 2], 'configurations': {'2d': {'data_identifier': 'nnUNetPlans_2d', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 66, 'patch_size': [224, 224], 'median_image_size_in_voxels': [194.0, 194.0], 'spacing': [0.797914057970047, 0.797914057970047], 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.ResidualEncoderUNet', 'arch_kwargs': {'n_stages': 6, 'features_per_stage': [32, 64, 128, 256, 512, 512], 'conv_op': 'torch.nn.modules.conv.Conv2d', 'kernel_sizes': [[3, 3], [3, 3], [3, 3], [3, 3], [3, 3], [3, 3]], 'strides': [[1, 1], [2, 2], [2, 2], [2, 2], [2, 2], [2, 2]], 'n_blocks_per_stage': [1, 3, 4, 6, 6, 6], 'n_conv_per_stage_decoder': [1, 1, 1, 1, 1], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm2d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': True}, '3d_fullres': {'data_identifier': 
'nnUNetPlans_3d_fullres', 'preprocessor_name': 'DefaultPreprocessor', 'batch_size': 2, 'patch_size': [160, 128, 112], 'median_image_size_in_voxels': [277.0, 194.0, 194.0], 'spacing': [0.801025390625, 0.797914057970047, 0.797914057970047], 'normalization_schemes': ['CTNormalization'], 'use_mask_for_norm': [False], 'resampling_fn_data': 'resample_data_or_seg_to_shape', 'resampling_fn_seg': 'resample_data_or_seg_to_shape', 'resampling_fn_data_kwargs': {'is_seg': False, 'order': 3, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_seg_kwargs': {'is_seg': True, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'resampling_fn_probabilities': 'resample_data_or_seg_to_shape', 'resampling_fn_probabilities_kwargs': {'is_seg': False, 'order': 1, 'order_z': 0, 'force_separate_z': None}, 'architecture': {'network_class_name': 'dynamic_network_architectures.architectures.unet.ResidualEncoderUNet', 'arch_kwargs': {'n_stages': 6, 'features_per_stage': [32, 64, 128, 256, 320, 320], 'conv_op': 'torch.nn.modules.conv.Conv3d', 'kernel_sizes': [[3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3], [3, 3, 3]], 'strides': [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 1]], 'n_blocks_per_stage': [1, 3, 4, 6, 6, 6], 'n_conv_per_stage_decoder': [1, 1, 1, 1, 1], 'conv_bias': True, 'norm_op': 'torch.nn.modules.instancenorm.InstanceNorm3d', 'norm_op_kwargs': {'eps': 1e-05, 'affine': True}, 'dropout_op': None, 'dropout_op_kwargs': None, 'nonlin': 'torch.nn.LeakyReLU', 'nonlin_kwargs': {'inplace': True}}, '_kw_requires_import': ['conv_op', 'norm_op', 'dropout_op', 'nonlin']}, 'batch_dice': False}}, 'experiment_planner_used': 'nnUNetPlannerResEncM', 'label_manager': 'LabelManager', 'foreground_intensity_properties_per_channel': {'0': {'max': 3467.0, 'mean': 386.2413635253906, 'median': 311.0, 'min': -706.0, 'percentile_00_5': -53.0, 'percentile_99_5': 1311.0, 'std': 290.2442626953125}}}",
46
+ "preprocessed_dataset_folder": "/data/ypy/dataset/nnUNet_datasets/nnUNet_preprocessed/Dataset203_newHips/nnUNetPlans_3d_fullres",
47
+ "preprocessed_dataset_folder_base": "/data/ypy/dataset/nnUNet_datasets/nnUNet_preprocessed/Dataset203_newHips",
48
+ "save_every": "50",
49
+ "torch_version": "2.3.1",
50
+ "unpack_dataset": "True",
51
+ "was_initialized": "True",
52
+ "weight_decay": "3e-05"
53
+ }
Code/PENGWIN_Challenge/Inference/CT/models/Dataset203_newHips/nnUNetTrainer__nnUNetResEncUNetMPlans__3d_fullres/plans.json ADDED
@@ -0,0 +1,345 @@
1
+ {
2
+ "dataset_name": "Dataset203_newHips",
3
+ "plans_name": "nnUNetResEncUNetMPlans",
4
+ "original_median_spacing_after_transp": [
5
+ 0.801025390625,
6
+ 0.797914057970047,
7
+ 0.797914057970047
8
+ ],
9
+ "original_median_shape_after_transp": [
10
+ 262,
11
+ 191,
12
+ 190
13
+ ],
14
+ "image_reader_writer": "SimpleITKIO",
15
+ "transpose_forward": [
16
+ 0,
17
+ 1,
18
+ 2
19
+ ],
20
+ "transpose_backward": [
21
+ 0,
22
+ 1,
23
+ 2
24
+ ],
25
+ "configurations": {
26
+ "2d": {
27
+ "data_identifier": "nnUNetPlans_2d",
28
+ "preprocessor_name": "DefaultPreprocessor",
29
+ "batch_size": 66,
30
+ "patch_size": [
31
+ 224,
32
+ 224
33
+ ],
34
+ "median_image_size_in_voxels": [
35
+ 194.0,
36
+ 194.0
37
+ ],
38
+ "spacing": [
39
+ 0.797914057970047,
40
+ 0.797914057970047
41
+ ],
42
+ "normalization_schemes": [
43
+ "CTNormalization"
44
+ ],
45
+ "use_mask_for_norm": [
46
+ false
47
+ ],
48
+ "resampling_fn_data": "resample_data_or_seg_to_shape",
49
+ "resampling_fn_seg": "resample_data_or_seg_to_shape",
50
+ "resampling_fn_data_kwargs": {
51
+ "is_seg": false,
52
+ "order": 3,
53
+ "order_z": 0,
54
+ "force_separate_z": null
55
+ },
56
+ "resampling_fn_seg_kwargs": {
57
+ "is_seg": true,
58
+ "order": 1,
59
+ "order_z": 0,
60
+ "force_separate_z": null
61
+ },
62
+ "resampling_fn_probabilities": "resample_data_or_seg_to_shape",
63
+ "resampling_fn_probabilities_kwargs": {
64
+ "is_seg": false,
65
+ "order": 1,
66
+ "order_z": 0,
67
+ "force_separate_z": null
68
+ },
69
+ "architecture": {
70
+ "network_class_name": "dynamic_network_architectures.architectures.unet.ResidualEncoderUNet",
71
+ "arch_kwargs": {
72
+ "n_stages": 6,
73
+ "features_per_stage": [
74
+ 32,
75
+ 64,
76
+ 128,
77
+ 256,
78
+ 512,
79
+ 512
80
+ ],
81
+ "conv_op": "torch.nn.modules.conv.Conv2d",
82
+ "kernel_sizes": [
83
+ [
84
+ 3,
85
+ 3
86
+ ],
87
+ [
88
+ 3,
89
+ 3
90
+ ],
91
+ [
92
+ 3,
93
+ 3
94
+ ],
95
+ [
96
+ 3,
97
+ 3
98
+ ],
99
+ [
100
+ 3,
101
+ 3
102
+ ],
103
+ [
104
+ 3,
105
+ 3
106
+ ]
107
+ ],
108
+ "strides": [
109
+ [
110
+ 1,
111
+ 1
112
+ ],
113
+ [
114
+ 2,
115
+ 2
116
+ ],
117
+ [
118
+ 2,
119
+ 2
120
+ ],
121
+ [
122
+ 2,
123
+ 2
124
+ ],
125
+ [
126
+ 2,
127
+ 2
128
+ ],
129
+ [
130
+ 2,
131
+ 2
132
+ ]
133
+ ],
134
+ "n_blocks_per_stage": [
135
+ 1,
136
+ 3,
137
+ 4,
138
+ 6,
139
+ 6,
140
+ 6
141
+ ],
142
+ "n_conv_per_stage_decoder": [
143
+ 1,
144
+ 1,
145
+ 1,
146
+ 1,
147
+ 1
148
+ ],
149
+ "conv_bias": true,
150
+ "norm_op": "torch.nn.modules.instancenorm.InstanceNorm2d",
151
+ "norm_op_kwargs": {
152
+ "eps": 1e-05,
153
+ "affine": true
154
+ },
155
+ "dropout_op": null,
156
+ "dropout_op_kwargs": null,
157
+ "nonlin": "torch.nn.LeakyReLU",
158
+ "nonlin_kwargs": {
159
+ "inplace": true
160
+ }
161
+ },
162
+ "_kw_requires_import": [
163
+ "conv_op",
164
+ "norm_op",
165
+ "dropout_op",
166
+ "nonlin"
167
+ ]
168
+ },
169
+ "batch_dice": true
170
+ },
171
+ "3d_fullres": {
172
+ "data_identifier": "nnUNetPlans_3d_fullres",
173
+ "preprocessor_name": "FDMPreprocessor",
174
+ "batch_size": 2,
175
+ "patch_size": [
176
+ 160,
177
+ 128,
178
+ 112
179
+ ],
180
+ "median_image_size_in_voxels": [
181
+ 277.0,
182
+ 194.0,
183
+ 194.0
184
+ ],
185
+ "spacing": [
186
+ 0.801025390625,
187
+ 0.797914057970047,
188
+ 0.797914057970047
189
+ ],
190
+ "normalization_schemes": [
191
+ "CTNormalization"
192
+ ],
193
+ "use_mask_for_norm": [
194
+ false
195
+ ],
196
+ "resampling_fn_data": "resample_data_or_seg_to_shape",
197
+ "resampling_fn_seg": "resample_data_or_seg_to_shape",
198
+ "resampling_fn_data_kwargs": {
199
+ "is_seg": false,
200
+ "order": 3,
201
+ "order_z": 0,
202
+ "force_separate_z": null
203
+ },
204
+ "resampling_fn_seg_kwargs": {
205
+ "is_seg": true,
206
+ "order": 1,
207
+ "order_z": 0,
208
+ "force_separate_z": null
209
+ },
210
+ "resampling_fn_probabilities": "resample_data_or_seg_to_shape",
211
+ "resampling_fn_probabilities_kwargs": {
212
+ "is_seg": false,
213
+ "order": 1,
214
+ "order_z": 0,
215
+ "force_separate_z": null
216
+ },
217
+ "architecture": {
218
+ "network_class_name": "dynamic_network_architectures.architectures.unet.ResidualEncoderUNet",
219
+ "arch_kwargs": {
220
+ "n_stages": 6,
221
+ "features_per_stage": [
222
+ 32,
223
+ 64,
224
+ 128,
225
+ 256,
226
+ 320,
227
+ 320
228
+ ],
229
+ "conv_op": "torch.nn.modules.conv.Conv3d",
230
+ "kernel_sizes": [
231
+ [
232
+ 3,
233
+ 3,
234
+ 3
235
+ ],
236
+ [
237
+ 3,
238
+ 3,
239
+ 3
240
+ ],
241
+ [
242
+ 3,
243
+ 3,
244
+ 3
245
+ ],
246
+ [
247
+ 3,
248
+ 3,
249
+ 3
250
+ ],
251
+ [
252
+ 3,
253
+ 3,
254
+ 3
255
+ ],
256
+ [
257
+ 3,
258
+ 3,
259
+ 3
260
+ ]
261
+ ],
262
+ "strides": [
263
+ [
264
+ 1,
265
+ 1,
266
+ 1
267
+ ],
268
+ [
269
+ 2,
270
+ 2,
271
+ 2
272
+ ],
273
+ [
274
+ 2,
275
+ 2,
276
+ 2
277
+ ],
278
+ [
279
+ 2,
280
+ 2,
281
+ 2
282
+ ],
283
+ [
284
+ 2,
285
+ 2,
286
+ 2
287
+ ],
288
+ [
289
+ 2,
290
+ 2,
291
+ 1
292
+ ]
293
+ ],
294
+ "n_blocks_per_stage": [
295
+ 1,
296
+ 3,
297
+ 4,
298
+ 6,
299
+ 6,
300
+ 6
301
+ ],
302
+ "n_conv_per_stage_decoder": [
303
+ 1,
304
+ 1,
305
+ 1,
306
+ 1,
307
+ 1
308
+ ],
309
+ "conv_bias": true,
310
+ "norm_op": "torch.nn.modules.instancenorm.InstanceNorm3d",
311
+ "norm_op_kwargs": {
312
+ "eps": 1e-05,
313
+ "affine": true
314
+ },
315
+ "dropout_op": null,
316
+ "dropout_op_kwargs": null,
317
+ "nonlin": "torch.nn.LeakyReLU",
318
+ "nonlin_kwargs": {
319
+ "inplace": true
320
+ }
321
+ },
322
+ "_kw_requires_import": [
323
+ "conv_op",
324
+ "norm_op",
325
+ "dropout_op",
326
+ "nonlin"
327
+ ]
328
+ },
329
+ "batch_dice": false
330
+ }
331
+ },
332
+ "experiment_planner_used": "nnUNetPlannerResEncM",
333
+ "label_manager": "LabelManager",
334
+ "foreground_intensity_properties_per_channel": {
335
+ "0": {
336
+ "max": 3467.0,
337
+ "mean": 386.2413635253906,
338
+ "median": 311.0,
339
+ "min": -706.0,
340
+ "percentile_00_5": -53.0,
341
+ "percentile_99_5": 1311.0,
342
+ "std": 290.2442626953125
343
+ }
344
+ }
345
+ }
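A quick sanity check on the `3d_fullres` configuration above: the patch size must be divisible by the cumulative stride product per axis, since each stage downsamples by its stride. A minimal sketch using the strides and patch size listed in this plans file:

```python
import math

# Per-stage strides and patch size from the 3d_fullres configuration above
strides = [[1, 1, 1], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 2], [2, 2, 1]]
patch_size = [160, 128, 112]

# Cumulative downsampling factor per axis = product of strides over all stages
downsampling = [math.prod(s[axis] for s in strides) for axis in range(3)]
print(downsampling)  # [32, 32, 16]

# Each patch dimension must be divisible by the corresponding factor
assert all(p % d == 0 for p, d in zip(patch_size, downsampling))
```

Note the last stage strides only `[2, 2, 1]`, which is why the third axis tolerates the smaller patch dimension of 112.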
Code/PENGWIN_Challenge/Inference/CT/requirements.txt ADDED
@@ -0,0 +1,5 @@
1
+ nnunetv2
2
+ SimpleITK
3
+
4
+
5
+
Code/PENGWIN_Challenge/Inference/CT/save.sh ADDED
@@ -0,0 +1,33 @@
1
+ #!/usr/bin/env bash
2
+
3
+ # Stop at first error
4
+ set -e
5
+
6
+ SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
7
+
8
+ # Set default container name
9
+ container_tag="preliminary-development-phase-ct-1st-test"
10
+
11
+ # Check if an argument is provided
12
+ if [ "$#" -eq 1 ]; then
13
+ container_tag="$1"
14
+ fi
15
+
16
+ # Get the build information from the Docker image tag
17
+ build_timestamp=$( docker inspect --format='{{ .Created }}' "$container_tag")
18
+
19
+ if [ -z "$build_timestamp" ]; then
20
+ echo "Error: Failed to retrieve build information for container $container_tag"
21
+ exit 1
22
+ fi
23
+
24
+ # Format the build information to remove special characters
25
+ formatted_build_info=$(date -d "$build_timestamp" +"%Y%m%d_%H%M%S")
26
+
27
+ # Set the output filename with timestamp and build information
28
+ output_filename="${SCRIPT_DIR}/${container_tag}_${formatted_build_info}.tar.gz"
29
+
30
+ # Save the Docker container and gzip it
31
+ docker save "$container_tag" | gzip -c > "$output_filename"
32
+
33
+ echo "Container saved as ${output_filename}"
Code/PENGWIN_Challenge/Inference/CT/test/input/images/pelvic-fracture-ct/replace_to_your_image.mha ADDED
@@ -0,0 +1,3 @@
1
+ version https://git-lfs.github.com/spec/v1
2
+ oid sha256:7057858d374336631f8f24ada75065a2e04a48a0f08ebcaef590fca703076ea7
3
+ size 803232
Code/PENGWIN_Challenge/Inference/CT/test/output/Readme.md ADDED
@@ -0,0 +1 @@
1
+ Your output files will be found here.
Code/PENGWIN_Challenge/Inference/CT/test_run.sh ADDED
@@ -0,0 +1,62 @@
1
+ #!/usr/bin/env bash
2
+
3
+ # Stop at first error
4
+ set -e
5
+
6
+ SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
7
+ DOCKER_TAG="ct-final_submit_0820_2"
8
+ DOCKER_NOOP_VOLUME="${DOCKER_TAG}-volume"
9
+
10
+ INPUT_DIR="${SCRIPT_DIR}/test/input"
11
+ OUTPUT_DIR="${SCRIPT_DIR}/test/output"
12
+
13
+
14
+ echo "=+= Cleaning up any earlier output"
15
+ if [ -d "$OUTPUT_DIR" ]; then
16
+ # Ensure permissions are setup correctly
17
+ # This allows for the Docker user to write to this location
18
+ rm -rf "${OUTPUT_DIR}"/*
19
+ chmod -f o+rwx "$OUTPUT_DIR"
20
+ else
21
+ mkdir --mode=o+rwx "$OUTPUT_DIR"
22
+ fi
23
+
24
+
25
+ echo "=+= (Re)build the container"
26
+ docker build "$SCRIPT_DIR" \
27
+ --platform=linux/amd64 \
28
+ --tag $DOCKER_TAG 2>&1
29
+
30
+
31
+ echo "=+= Doing a forward pass"
32
+ ## Note the extra arguments that are passed here:
33
+ # '--network none'
34
+ # means the container has no internet connection
35
+ # '--gpus all'
36
+ # enables access to any GPUs present
37
+ # '--volume <NAME>:/tmp'
38
+ # is added because on Grand Challenge this directory cannot be used to store permanent files
39
+ docker volume create "$DOCKER_NOOP_VOLUME"
40
+ docker run --rm \
41
+ --platform=linux/amd64 \
42
+ --network none \
43
+ --gpus all \
44
+ --volume "$INPUT_DIR":/input \
45
+ --volume "$OUTPUT_DIR":/output \
46
+ --volume "$DOCKER_NOOP_VOLUME":/tmp \
47
+ $DOCKER_TAG
48
+ docker volume rm "$DOCKER_NOOP_VOLUME"
49
+
50
+ # Ensure permissions are set correctly on the output
51
+ # This allows the host user (e.g. you) to access and handle these files
52
+ docker run --rm \
53
+ --quiet \
54
+ --env HOST_UID=`id --user` \
55
+ --env HOST_GID=`id --group` \
56
+ --volume "$OUTPUT_DIR":/output \
57
+ alpine:latest \
58
+ /bin/sh -c 'chown -R ${HOST_UID}:${HOST_GID} /output'
59
+
60
+ echo "=+= Wrote results to ${OUTPUT_DIR}"
61
+
62
+ echo "=+= Save this image for uploading via save.sh \"${DOCKER_TAG}\""
Code/PENGWIN_Challenge/Inference/CT/two_stage_inference.py ADDED
@@ -0,0 +1,284 @@
1
+ import os
2
+ import time
3
+
4
+ import SimpleITK as sitk
5
+ import numpy as np
6
+ import torch
7
+ from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor
8
+
9
+ from utils.utils import split_connected_components, remove_small_connected_components, relabel_connected_components, \
10
+ process_final_label, refine_labels
11
+
12
+
13
+ def create_and_apply_mask(volume, prediction, label, padding=10, use_mask: bool = True):
14
+ """
15
+ Create a mask for a specific label and apply it to the volume, keeping only the regions with the specified label.
16
+ Also crop the volume to the bounding box of the mask.
17
+ """
18
+ mask = (prediction == label).astype(np.float32)
19
+ if use_mask:
20
+ volume = volume * mask
21
+
22
+ # Find the bounding box of the mask
23
+ coords = np.array(np.nonzero(mask))
24
+ top_left = np.min(coords, axis=1)
25
+ bottom_right = np.max(coords, axis=1) + 1
26
+
27
+ # Apply padding to the bounding box
28
+ top_left = np.maximum(top_left - padding, 0)
29
+ bottom_right = np.minimum(bottom_right + padding, np.array(volume.shape))
30
+
31
+ # Crop the volume to the bounding box of the mask
32
+ cropped_volume = volume[top_left[0]:bottom_right[0], top_left[1]:bottom_right[1],
33
+ top_left[2]:bottom_right[2]]
34
+ return cropped_volume, top_left, bottom_right
35
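The bounding-box crop in `create_and_apply_mask` can be illustrated on a toy volume. This is a minimal sketch of the same indexing logic (toy shapes and a padding of 2 chosen for illustration), showing how the padded box is clamped to the volume bounds:

```python
import numpy as np

# Toy 3D volume with one labelled block
volume = np.zeros((10, 10, 10), dtype=np.float32)
prediction = np.zeros(volume.shape, dtype=np.int32)
prediction[4:7, 3:6, 5:8] = 1

mask = (prediction == 1)
coords = np.array(np.nonzero(mask))
top_left = np.maximum(coords.min(axis=1) - 2, 0)                  # padding=2, clamped at 0
bottom_right = np.minimum(coords.max(axis=1) + 1 + 2, volume.shape)  # clamped at volume size

cropped = volume[top_left[0]:bottom_right[0],
                 top_left[1]:bottom_right[1],
                 top_left[2]:bottom_right[2]]
print(cropped.shape)  # (7, 7, 7)
```

The `+ 1` converts the inclusive max index into an exclusive slice bound before padding is applied.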
+
36
+
37
+ def resample_image(image, new_spacing, interpolator=sitk.sitkLinear):
38
+ original_spacing = image.GetSpacing()
39
+ original_size = image.GetSize()
40
+
41
+ new_size = [
42
+ int(round(original_size[0] * (original_spacing[0] / new_spacing[0]))),
43
+ int(round(original_size[1] * (original_spacing[1] / new_spacing[1]))),
44
+ int(round(original_size[2] * (original_spacing[2] / new_spacing[2])))
45
+ ]
46
+
47
+ resample = sitk.ResampleImageFilter()
48
+ resample.SetOutputSpacing(new_spacing)
49
+ resample.SetSize(new_size)
50
+ resample.SetOutputDirection(image.GetDirection())
51
+ resample.SetOutputOrigin(image.GetOrigin())
52
+ resample.SetInterpolator(interpolator)
53
+
54
+ resampled_image = resample.Execute(image)
55
+ return resampled_image
56
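The new grid size in `resample_image` follows from keeping the physical extent (size × spacing) approximately constant. A sketch of that arithmetic alone, with hypothetical spacings and sizes (not taken from the repository data):

```python
# New grid size = old size scaled by the spacing ratio, so that the
# physical extent (voxels * mm-per-voxel) stays approximately constant.
original_spacing = (0.8, 0.8, 0.8)   # hypothetical input spacing (mm)
original_size = (190, 190, 262)      # hypothetical input size (voxels)
new_spacing = (2.5, 2.5, 2.5)        # target spacing used for the lowres stage

new_size = [int(round(sz * (osp / nsp)))
            for sz, osp, nsp in zip(original_size, original_spacing, new_spacing)]
print(new_size)  # [61, 61, 84]
```

Rounding means the extent is preserved only to within one target voxel per axis, which is why `adjust_shape` below is needed after resampling predictions back.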
+
57
+
58
+ def adjust_shape(a, b):
59
+ """
60
+ Adjust the shape of 3D matrix a to match the shape of 3D matrix b by padding or cropping.
61
+
62
+ Parameters:
63
+ a (np.ndarray): The input 3D matrix to be adjusted.
64
+ b (np.ndarray): The reference 3D matrix whose shape we want to match.
65
+
66
+ Returns:
67
+ np.ndarray: The adjusted 3D matrix a with the same shape as b.
68
+ """
69
+ # Get the shapes of a and b
70
+ a_shape = a.shape
71
+ b_shape = b.shape
72
+
73
+ # Initialize the adjusted array
74
+ adjusted_a = a.copy()
75
+
76
+ # Pad or crop each dimension to match the shape of b
77
+ for dim in range(3):
78
+ if a_shape[dim] < b_shape[dim]:
79
+ # Padding
80
+ pad_width = b_shape[dim] - a_shape[dim]
81
+ pad_before = pad_width // 2
82
+ pad_after = pad_width - pad_before
83
+ pad_tuple = [(0, 0), (0, 0), (0, 0)]
84
+ pad_tuple[dim] = (pad_before, pad_after)
85
+ adjusted_a = np.pad(adjusted_a, pad_tuple, mode='constant', constant_values=0)
86
+ elif a_shape[dim] > b_shape[dim]:
87
+ # Cropping
88
+ crop_before = (a_shape[dim] - b_shape[dim]) // 2
89
+ crop_after = crop_before + b_shape[dim]
90
+ if dim == 0:
91
+ adjusted_a = adjusted_a[crop_before:crop_after, :, :]
92
+ elif dim == 1:
93
+ adjusted_a = adjusted_a[:, crop_before:crop_after, :]
94
+ else:
95
+ adjusted_a = adjusted_a[:, :, crop_before:crop_after]
96
+
97
+ return adjusted_a
98
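The pad-or-crop idea in `adjust_shape` can be sketched compactly: centre-pad dimensions that are too small and centre-crop ones that are too large. A minimal numpy illustration of the same behaviour (toy shapes):

```python
import numpy as np

a = np.ones((5, 8, 3))
target = (7, 6, 3)

out = a
for dim in range(3):
    diff = target[dim] - out.shape[dim]
    if diff > 0:
        # Pad symmetrically (extra voxel goes to the trailing side)
        pad = [(0, 0)] * 3
        pad[dim] = (diff // 2, diff - diff // 2)
        out = np.pad(out, pad, mode='constant')
    elif diff < 0:
        # Crop symmetrically around the centre
        start = (-diff) // 2
        out = np.take(out, range(start, start + target[dim]), axis=dim)

print(out.shape)  # (7, 6, 3)
```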
+
99
+
100
+ def restore_cropped_volume(cropped_volume, original_shape, top_left, bottom_right):
101
+ """
102
+ Restore the cropped volume to its original shape using the recorded bounding box.
103
+ """
104
+ restored_volume = np.zeros(original_shape, dtype=cropped_volume.dtype)
105
+ restored_volume[top_left[0]:bottom_right[0], top_left[1]:bottom_right[1],
106
+ top_left[2]:bottom_right[2]] = cropped_volume
107
+ return restored_volume
108
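Cropping with a recorded bounding box and pasting back with `restore_cropped_volume`-style indexing is an exact round trip. A small sketch (toy array):

```python
import numpy as np

original = np.zeros((6, 6, 6), dtype=np.int32)
original[2:4, 1:5, 3:6] = 7
top_left, bottom_right = (2, 1, 3), (4, 5, 6)

# Crop, then paste back into a zero volume of the original shape
cropped = original[top_left[0]:bottom_right[0],
                   top_left[1]:bottom_right[1],
                   top_left[2]:bottom_right[2]]
restored = np.zeros_like(original)
restored[top_left[0]:bottom_right[0],
         top_left[1]:bottom_right[1],
         top_left[2]:bottom_right[2]] = cropped

assert np.array_equal(restored, original)
```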
+
109
+
110
+ def save_nifti(data, spacing, direction, origin, output_path):
111
+ """
112
+ Save the numpy array as an image file (e.g. NIfTI or MHA, inferred from the output extension) via SimpleITK.
113
+ """
114
+ img = sitk.GetImageFromArray(data.astype(np.int8))
115
+ img.SetSpacing(spacing)
116
+ img.SetDirection(direction)
117
+ img.SetOrigin(origin)
118
+ sitk.WriteImage(img, output_path, useCompression=True)
119
+
120
+
121
+ def inference_one_image(input_dir, output_dir):
122
+ start_time = time.time()
123
+ model_stage0 = nnUNetPredictor(tile_step_size=0.5, use_gaussian=True, use_mirroring=False,
124
+ perform_everything_on_device=True, device=torch.device('cuda'), verbose=False,
125
+ verbose_preprocessing=False, allow_tqdm=False
126
+ )
127
+ model_stage0.initialize_from_trained_model_folder(
128
+ '/research/phd_y3/pelvic_project/Code/for_nnUNet/nnUNet_results/Dataset001_pelvic_three_parts/nnUNetTrainer__nnUNetPlans__3d_fullres',
129
+ use_folds=(0,), checkpoint_name='checkpoint_best.pth',
130
+ )
131
+ volume = sitk.ReadImage(input_dir)
132
+ # Record raw image information
133
+ old_spacing = volume.GetSpacing()
134
+ old_direction = volume.GetDirection()
135
+ old_origin = volume.GetOrigin()
136
+ old_props = {
137
+ "sitk_stuff":
138
+ {
139
+ 'spacing': old_spacing,
140
+ 'origin': old_origin,
141
+ 'direction': old_direction
142
+ },
143
+ 'spacing': [old_spacing[2], old_spacing[1], old_spacing[0]]
144
+ }
145
+ # Resampling to spacing used in training stage0 model
146
+ lowres_volume = resample_image(volume, [2.5, 2.5, 2.5], sitk.sitkLinear)
147
+ # Record the image information after resampling
148
+ new_spacing = lowres_volume.GetSpacing()
149
+ new_direction = lowres_volume.GetDirection()
150
+ new_origin = lowres_volume.GetOrigin()
151
+ new_props = {
152
+ "sitk_stuff":
153
+ {
154
+ 'spacing': new_spacing,
155
+ 'origin': new_origin,
156
+ 'direction': new_direction
157
+ },
158
+ 'spacing': [new_spacing[2], new_spacing[1], new_spacing[0]]
159
+ }
160
+ lowres_volume = sitk.GetArrayFromImage(lowres_volume)
161
+
162
+ # start stage 0 (lowres localization) inference
163
+ stage0_start_time = time.time()
164
+
165
+ prediction_stage1 = model_stage0.predict_single_npy_array(np.expand_dims(lowres_volume, axis=0), new_props,
166
+ None, None, False)
167
+ prediction_stage1 = relabel_connected_components(prediction_stage1)
168
+ print(f"Stage 0 inference completed in {time.time() - stage0_start_time:.2f} seconds.")
169
+
170
+ # Sample the label back to its original size
171
+ prediction_stage1 = sitk.GetImageFromArray(prediction_stage1)
172
+ prediction_stage1.SetOrigin(new_origin)
173
+ prediction_stage1.SetDirection(new_direction)
174
+ prediction_stage1.SetSpacing(new_spacing)
175
+ prediction_stage1 = resample_image(prediction_stage1, old_spacing, sitk.sitkNearestNeighbor)
176
+ prediction_stage1 = sitk.GetArrayFromImage(prediction_stage1)
177
+
178
+ # Remove small segmentations
179
+ prediction_stage1 = remove_small_connected_components(prediction_stage1, [400, 200, 200], [1, 2, 3])
180
+
181
+ model_stage1 = nnUNetPredictor(tile_step_size=0.5, use_gaussian=True, use_mirroring=False,
182
+ perform_everything_on_device=True, device=torch.device('cuda'), verbose=False,
183
+ verbose_preprocessing=False, allow_tqdm=False
184
+ )
185
+ model_stage1.initialize_from_trained_model_folder(
186
+ '/research/phd_y3/pelvic_project/Code/for_nnUNet/nnUNet_results/Dataset001_pelvic_three_parts/nnUNetTrainer__nnUNetPlans__3d_fullres',
187
+ use_folds=(0,), checkpoint_name='checkpoint_best.pth',
188
+ )
189
+
190
+
191
+ volume = sitk.GetArrayFromImage(volume)
192
+ # Adjust the interpolated prediction so that its shape is consistent with the original image
193
+ prediction_stage1 = adjust_shape(prediction_stage1, volume)
194
+
195
+ # create mask for stage 2 inference
196
+ sacrum_input, sacrum_top_left, sacrum_bottom_right = create_and_apply_mask(volume, prediction_stage1, 1, padding=5, use_mask=False)
197
+ pred_sacrum_stage1 = model_stage1.predict_single_npy_array(np.expand_dims(sacrum_input, axis=0), old_props, None, None, False)
198
+ pred_sacrum_stage1 = np.where(pred_sacrum_stage1 == 1, 1, 0)
199
+
200
+
201
+ left_input, left_top_left, left_bottom_right = create_and_apply_mask(volume, prediction_stage1, 2, padding=5, use_mask=False)
202
+ pred_left_stage1 = model_stage1.predict_single_npy_array(np.expand_dims(left_input, axis=0), old_props, None, None, False)
203
+ pred_left_stage1 = np.where(pred_left_stage1 == 2, 2, 0)
204
+
205
+ right_input, right_top_left, right_bottom_right = create_and_apply_mask(volume, prediction_stage1, 3, padding=5, use_mask=False)
206
+ pred_right_stage1 = model_stage1.predict_single_npy_array(np.expand_dims(right_input, axis=0), old_props, None, None, False)
207
+ pred_right_stage1 = np.where(pred_right_stage1 == 3, 3, 0)
208
+
209
+ # start stage 2 inference
210
+ step_size = 0.5
211
+ gaussian_flag = True
212
+ mirror_flag = True
213
+ print("Starting stage2 inference...")
214
+ print(f"step size: {step_size}, gaussian flag: {gaussian_flag}, mirror flag: {mirror_flag}")
215
+ model_sacrum = nnUNetPredictor(tile_step_size=step_size, use_gaussian=gaussian_flag, use_mirroring=mirror_flag,
216
+ perform_everything_on_device=True, device=torch.device('cuda'), verbose=False,
217
+ verbose_preprocessing=False, allow_tqdm=False
218
+ )
219
+ model_sacrum.initialize_from_trained_model_folder(
220
+ '/research/phd_y3/pelvic_project/Code/for_nnUNet/nnUNet_results/Dataset002_mid_sacrum/nnUNetTrainer__nnUNetPlans__3d_fullres',
221
+ use_folds=(0,), checkpoint_name='checkpoint_best.pth',
222
+ )
223
+
224
+ model_hips = nnUNetPredictor(tile_step_size=step_size, use_gaussian=gaussian_flag, use_mirroring=mirror_flag,
225
+ perform_everything_on_device=True, device=torch.device('cuda'), verbose=False,
226
+ verbose_preprocessing=False, allow_tqdm=False
227
+ )
228
+ model_hips.initialize_from_trained_model_folder(
229
+ '/research/phd_y3/pelvic_project/Code/for_nnUNet/nnUNet_results/Dataset003_right_hip/nnUNetTrainer__nnUNetPlans__3d_fullres',
230
+ use_folds=(0,), checkpoint_name='checkpoint_best.pth',
231
+ )
232
+
233
+ sacrum_start_time = time.time()
234
+ pred_s_1, prob_s_1 = model_sacrum.predict_single_npy_array(np.expand_dims(sacrum_input, axis=0), old_props, None, None, True)
235
+ pred_s_1 = split_connected_components(pred_s_1, 2, 2, min_volume=400)
236
+ prediction_sacrum = refine_labels(pred_sacrum_stage1, pred_s_1)
237
+ prediction_sacrum = restore_cropped_volume(prediction_sacrum, volume.shape, sacrum_top_left, sacrum_bottom_right)
238
+ print(f"Sacrum inference completed in {time.time() - sacrum_start_time:.2f} seconds.")
239
+
240
+ left_start_time = time.time()
241
+ pred_l_1, prob_l_1 = model_hips.predict_single_npy_array(np.expand_dims(left_input, axis=0), old_props, None, None, True)
242
+ pred_l_1 = split_connected_components(pred_l_1, 2, 12)
243
+ pred_l_1 = np.where(pred_l_1 == 2, 0, pred_l_1)
244
+ pred_l_1 = np.where(pred_l_1 == 1, 11, pred_l_1)
245
+ prediction_left = refine_labels(pred_left_stage1, pred_l_1)
246
+ prediction_left = restore_cropped_volume(prediction_left, volume.shape, left_top_left, left_bottom_right)
247
+ print(f"Left hip inference completed in {time.time() - left_start_time:.2f} seconds.")
248
+
249
+ right_start_time = time.time()
250
+ pred_r_1, prob_r_1 = model_hips.predict_single_npy_array(np.expand_dims(right_input, axis=0), old_props, None, None, True)
251
+ pred_r_1 = split_connected_components(pred_r_1, 2, 22)
252
+ pred_r_1 = np.where(pred_r_1 == 2, 0, pred_r_1)
253
+ pred_r_1 = np.where(pred_r_1 == 1, 21, pred_r_1)
254
+ prediction_right = refine_labels(pred_right_stage1, pred_r_1)
255
+ prediction_right = restore_cropped_volume(prediction_right, volume.shape, right_top_left, right_bottom_right)
256
+ print(f"Right hip inference completed in {time.time() - right_start_time:.2f} seconds.")
257
+
258
+ # merge and save labels
259
+ combined_prediction = np.maximum.reduce([prediction_sacrum, prediction_left, prediction_right])
260
+ combined_prediction = process_final_label(combined_prediction)
261
+ combined_prediction = remove_small_connected_components(combined_prediction, [1000, 400, 400], [1, 11, 21])
262
+ assert {1, 11, 21}.issubset(np.unique(combined_prediction)), "something wrong in label processing!"
263
+ input_filename = os.path.basename(input_dir)
264
+ if not os.path.exists(output_dir):
265
+ os.makedirs(output_dir)
266
+ output_path = os.path.join(output_dir, input_filename)
267
+ save_nifti(combined_prediction, old_spacing, old_direction, old_origin, output_path)
268
+ total_time = time.time() - start_time
269
+ print(f"Total inference time: {total_time:.2f} seconds.")
270
+
271
+
272
+ if __name__ == "__main__":
273
+ input_dir = "/research/phd_y3/pelvic_project/Data/test_image_cases"
274
+ output_dir = "/research/phd_y3/pelvic_project/Code/for_nnUNet/inference_results/two_stage_results"
275
+
276
+ if not os.path.exists(output_dir):
277
+ os.makedirs(output_dir)
278
+
279
+ for filename in os.listdir(input_dir):
280
+ if filename.endswith(".nii.gz") or filename.endswith(".mha"):
281
+ print("*********************************Processing {}**********************************".format(filename))
282
+ input_path = os.path.join(input_dir, filename)
283
+ # output_path = os.path.join(output_dir, filename)
284
+ inference_one_image(input_path, output_dir)
Code/PENGWIN_Challenge/Inference/CT/utils/__pycache__/utils.cpython-310.pyc ADDED
Binary file (7.62 kB). View file
 
Code/PENGWIN_Challenge/Inference/CT/utils/utils.py ADDED
@@ -0,0 +1,308 @@
+ import SimpleITK as sitk
+ import numpy as np
+ import scipy.ndimage as ndimage
+ from scipy.ndimage import distance_transform_edt
+ from scipy.ndimage import label, find_objects
+
+
+ def split_connected_components(labels, label_value, offset, min_volume=400, top_n=6):
+     """
+     Split the region with the specified label_value into multiple connected components and reassign labels.
+
+     Parameters:
+         labels (np.ndarray): Input label array
+         label_value (int): The label value to split
+         offset (int): Offset used to generate new label values
+         min_volume (int): Minimum volume of a connected component to retain
+         top_n (int): Retain only the top-n connected components by volume
+
+     Returns:
+         np.ndarray: Relabeled array
+     """
+     # Binary mask of the voxels carrying label_value
+     binary_mask = (labels == label_value)
+
+     # 6-connectivity structuring element (face neighbours only)
+     structure = np.array([[[0, 0, 0],
+                            [0, 1, 0],
+                            [0, 0, 0]],
+                           [[0, 1, 0],
+                            [1, 1, 1],
+                            [0, 1, 0]],
+                           [[0, 0, 0],
+                            [0, 1, 0],
+                            [0, 0, 0]]], dtype=int)
+
+     # Use scipy.ndimage.label to mark connected components
+     labeled_array, num_features = label(binary_mask, structure=structure)
+
+     # Work on a copy of the input labels
+     new_labels = labels.copy()
+
+     # Volume of every connected component
+     volumes = [np.sum(labeled_array == i) for i in range(1, num_features + 1)]
+
+     # Indices of the top-n connected components by volume
+     top_n_indices = np.argsort(volumes)[-top_n:][::-1]
+     top_n_volumes_labels = [(volumes[i], i + 1) for i in top_n_indices]  # component indices start from 1
+
+     # Reassign labels in descending order of volume to avoid conflicts
+     current_label = offset
+     for volume, i in top_n_volumes_labels:
+         region_mask = (labeled_array == i)
+         if volume >= min_volume:
+             new_labels[region_mask] = current_label
+             current_label += 1
+         else:
+             new_labels[region_mask] = 0
+
+     return new_labels
+
+
+ def remove_small_connected_components(prediction, min_volume, label_values):
+     """
+     Remove small connected components and set them to background.
+
+     Parameters:
+         prediction (np.ndarray): Model output predictions
+         min_volume (list): Per-label minimum volume of a connected component to retain
+         label_values (list): List of label values to process
+
+     Returns:
+         np.ndarray: Processed prediction array
+     """
+     new_prediction = prediction.copy()
+
+     # 6-connectivity structuring element used to identify connected components
+     structure = np.array([[[0, 0, 0],
+                            [0, 1, 0],
+                            [0, 0, 0]],
+                           [[0, 1, 0],
+                            [1, 1, 1],
+                            [0, 1, 0]],
+                           [[0, 0, 0],
+                            [0, 1, 0],
+                            [0, 0, 0]]], dtype=int)
+
+     for index, label_value in enumerate(label_values):
+         print(f"Processing label {label_value}:")
+         # Binary mask for the current label
+         binary_mask = (prediction == label_value)
+         minimum = min_volume[index]
+
+         labeled_array, num_features = label(binary_mask, structure=structure)
+
+         # Bounding slices of each connected component
+         slices = find_objects(labeled_array)
+
+         retained_sizes = []
+         removed_sizes = []
+
+         # Remove every connected component smaller than the minimum volume
+         for i, slice_ in enumerate(slices):
+             region_size = np.sum(labeled_array[slice_] == (i + 1))
+             if region_size <= minimum:
+                 removed_sizes.append(region_size)
+                 new_prediction[labeled_array == (i + 1)] = 0
+             else:
+                 retained_sizes.append(region_size)
+
+         # Report the sizes of retained and removed regions
+         if retained_sizes:
+             print(f"  Retained regions sizes: {retained_sizes}")
+         if removed_sizes:
+             print(f"  Removed regions sizes: {removed_sizes}")
+
+     return new_prediction
+
+
+ def calculate_iou(label1, label2):
+     intersection = np.logical_and(label1, label2).sum()
+     union = np.logical_or(label1, label2).sum()
+     return intersection / union
+
+
+ def relabel_connected_components(segmentation):
+     """
+     Handle partial confusion between left and right hip bones in stage0 segmentation.
+
+     Parameters:
+         segmentation (np.ndarray): The segmentation labels from stage0.
+
+     Returns:
+         np.ndarray: Relabeled segmentation with the confusion resolved.
+     """
+     # Detect connected components for labels 2 and 3
+     label_2, num_features_2 = ndimage.label(segmentation == 2)
+     label_3, num_features_3 = ndimage.label(segmentation == 3)
+
+     # Sizes of the connected components
+     size_2 = np.bincount(label_2.ravel())
+     size_3 = np.bincount(label_3.ravel())
+
+     # Work on a copy for relabeling
+     new_segmentation = np.copy(segmentation)
+
+     # 3D structuring element used to detect touching boundaries
+     struct = ndimage.generate_binary_structure(3, 1)
+
+     # Iterate over the connected components of label 2
+     # (loop variable named lbl_2 so the scipy.ndimage.label import is not shadowed)
+     for lbl_2 in range(1, num_features_2 + 1):
+         current_region = (label_2 == lbl_2)
+         neighbors = ndimage.binary_dilation(current_region, structure=struct) & (segmentation == 3)
+
+         if neighbors.any():
+             touching_labels_3 = np.unique(label_3[neighbors])
+             for lbl_3 in touching_labels_3:
+                 if lbl_3 > 0:
+                     if 5 * size_2[lbl_2] < size_3[lbl_3]:
+                         print(f"Change: class_2 (size: {size_2[lbl_2]}) -> class_3 (size: {size_3[lbl_3]})")
+                         new_segmentation[current_region] = 3
+                     elif 5 * size_3[lbl_3] < size_2[lbl_2]:
+                         print(f"Change: class_3 (size: {size_3[lbl_3]}) -> class_2 (size: {size_2[lbl_2]})")
+                         new_segmentation[label_3 == lbl_3] = 2
+
+     return new_segmentation
+
+
+ def refine_labels(label1, label2, threshold=0.99):
+     """
+     Refine label2 against the reference label1 if their IoU is below the threshold.
+
+     Parameters:
+         label1 (np.ndarray): The reference label.
+         label2 (np.ndarray): The label to be refined.
+         threshold (float): IoU threshold for refinement. Default is 0.99.
+
+     Returns:
+         np.ndarray: Refined label.
+     """
+     iou = calculate_iou(label1 > 0, label2 > 0)  # IoU of foreground vs background only
+     if iou >= threshold:
+         return label2
+
+     print('Refining label...')
+     fixed_label2 = label2.copy()
+
+     # Label the connected components in label2 with 6-connectivity
+     structure = np.array([[[0, 0, 0],
+                            [0, 1, 0],
+                            [0, 0, 0]],
+                           [[0, 1, 0],
+                            [1, 1, 1],
+                            [0, 1, 0]],
+                           [[0, 0, 0],
+                            [0, 1, 0],
+                            [0, 0, 0]]], dtype=int)
+     labeled_a, num_features_a = label(label2, structure=structure)
+
+     # Drop components of label2 that have no intersection with the reference foreground
+     for component_id in range(1, num_features_a + 1):
+         component_mask = (labeled_a == component_id)
+         if not np.any(component_mask & (label1 > 0)):
+             fixed_label2[component_mask] = 0
+
+     # Foreground areas in label1 that are background in label2
+     fg_to_bg_mask = (label1 > 0) & (label2 == 0)
+
+     # Fill them with the label of the nearest foreground pixel
+     if fg_to_bg_mask.any():
+         distance, indices = distance_transform_edt(fixed_label2 == 0, return_indices=True)
+         nearest_foreground = label2[tuple(indices)]
+         fixed_label2[fg_to_bg_mask] = nearest_foreground[fg_to_bg_mask]
+
+     return fixed_label2
+
+
+ def process_final_label(segmentation):
+     """
+     Process the final segmentation labels by refining the connected components of specific labels.
+
+     Parameters:
+         segmentation (np.ndarray): The final segmentation labels.
+
+     Returns:
+         np.ndarray: Refined segmentation with certain connected components removed or relabeled.
+     """
+     # Work on a copy for relabeling
+     new_segmentation = np.copy(segmentation)
+
+     # Mask out sacrum labels (set to background)
+     segmentation = np.where((segmentation >= 1) & (segmentation <= 10), 0, segmentation)
+
+     # Detect connected components for labels 11 and 21
+     label_11, num_features_11 = ndimage.label(segmentation == 11)
+     label_21, num_features_21 = ndimage.label(segmentation == 21)
+
+     assert num_features_11 > 0 and num_features_21 > 0, "label 11 and label 21 have no connected components!"
+
+     # Size of each connected component
+     size_11 = np.bincount(label_11.ravel())
+     size_21 = np.bincount(label_21.ravel())
+
+     # Index of the largest connected component for labels 11 and 21 (skip index 0, the background)
+     largest_label_11_index = np.argmax(size_11[1:]) + 1
+     largest_label_21_index = np.argmax(size_21[1:]) + 1
+
+     # Remove the largest connected component of each label (mark it as background).
+     # The remaining component ids keep their original numbering, so the loop below
+     # iterates over the full range and simply skips emptied regions.
+     label_11[label_11 == largest_label_11_index] = 0
+     label_21[label_21 == largest_label_21_index] = 0
+
+     # 3D structuring element for boundary detection
+     struct = ndimage.generate_binary_structure(3, 1)
+
+     # Process the remaining connected components of a given label
+     def process_label(labeled, segment_label, num_features):
+         if num_features <= 1:
+             return  # nothing remains after removing the largest component
+
+         for lbl in range(1, num_features + 1):
+             current_region = (labeled == lbl)
+             if not current_region.any():
+                 continue  # this was the largest component, already cleared above
+             neighbors = ndimage.binary_dilation(current_region, structure=struct) & (segmentation != segment_label)
+
+             if neighbors.any():
+                 # Touching labels, excluding background and the current label itself
+                 touching_labels = np.unique(segmentation[neighbors])
+                 touching_labels = touching_labels[touching_labels != 0]
+                 touching_labels = touching_labels[touching_labels != segment_label]
+
+                 if touching_labels.size > 0:
+                     # Volume of each touching label
+                     touching_label_sizes = {t_lbl: np.sum(segmentation == t_lbl) for t_lbl in touching_labels}
+
+                     # Reassign the region to the touching label with the largest volume
+                     max_touching_label = max(touching_label_sizes, key=touching_label_sizes.get)
+                     print(f"Changing segment {lbl} from {segment_label} to {max_touching_label}")
+                     new_segmentation[current_region] = max_touching_label
+
+     # Process connected components for labels 11 and 21
+     process_label(label_11, 11, num_features_11)
+     process_label(label_21, 21, num_features_21)
+
+     return new_segmentation
+
+
+ if __name__ == "__main__":
+     labels = sitk.ReadImage(
+         "/home/ypy/Code/PENGWIN-example-algorithm-main/PENGWIN-challenge-packages/preliminary-development-phase-ct/stage1_label_after_remove_101_1.nii.gz")
+     spacing = labels.GetSpacing()
+     direction = labels.GetDirection()
+     origin = labels.GetOrigin()
+     labels = sitk.GetArrayFromImage(labels)
+     # label_value = 2
+     # offset = 22
+     # new_labels = split_connected_components(labels, label_value, offset)
+     # new_labels = np.where(new_labels == 1, 21, new_labels)
+     # new_labels = remove_small_connected_components(labels, [20000, 20000, 20000], [1, 2, 3])
+     new_labels = relabel_connected_components(labels)
+     save_label = sitk.GetImageFromArray(new_labels.astype(np.int8))
+     save_label.SetSpacing(spacing)
+     save_label.SetDirection(direction)
+     save_label.SetOrigin(origin)
+     sitk.WriteImage(save_label, "stage1_label_after.nii.gz", useCompression=True)
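The hand-written 3×3×3 `structure` array repeated throughout these helpers is plain 6-connectivity (face neighbours only). A small sketch, confirming it matches `scipy.ndimage.generate_binary_structure(3, 1)` and that voxels touching only at a corner are kept as separate components:

```python
import numpy as np
from scipy.ndimage import generate_binary_structure, label

# The same 6-connectivity element the utils define by hand
structure = np.array([[[0, 0, 0], [0, 1, 0], [0, 0, 0]],
                      [[0, 1, 0], [1, 1, 1], [0, 1, 0]],
                      [[0, 0, 0], [0, 1, 0], [0, 0, 0]]], dtype=int)
assert np.array_equal(structure.astype(bool), generate_binary_structure(3, 1))

# Two voxels touching only at a corner stay separate under 6-connectivity
vol = np.zeros((2, 2, 2), dtype=int)
vol[0, 0, 0] = 1
vol[1, 1, 1] = 1
_, num_features = label(vol, structure=structure)
print(num_features)  # 2
```

With the default 26-connectivity (`generate_binary_structure(3, 3)`) the two corner voxels would merge into one component, which is why the stricter element is used to split bone fragments.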
Code/PENGWIN_Challenge/Inference/X-ray/Dockerfile ADDED
@@ -0,0 +1,25 @@
+ FROM --platform=linux/amd64 pytorch/pytorch:2.3.1-cuda12.1-cudnn8-runtime
+ # Use a 'large' base container to show-case how to load pytorch and use the GPU (when enabled)
+
+ # Ensures that Python output to stdout/stderr is not buffered: prevents missing information when terminating
+ ENV PYTHONUNBUFFERED=1
+
+ # Ensures that the NVIDIA runtime is used
+ ENV NVIDIA_VISIBLE_DEVICES=all
+ ENV NVIDIA_DRIVER_CAPABILITIES=compute,utility
+
+ RUN groupadd -r user && useradd -m --no-log-init -r -g user user
+ USER user
+
+ WORKDIR /opt/app
+
+ COPY --chown=user:user models /opt/app/models
+ COPY --chown=user:user packages /opt/app/packages
+
+ COPY --chown=user:user requirements.txt /opt/app/
+ COPY --chown=user:user utils/utils.py /opt/app/utils/
+ COPY --chown=user:user inference.py /opt/app/
+ COPY --chown=user:user Xray_inference_nnunet.py /opt/app/
+
+ RUN pip install --no-index --find-links=/opt/app/packages --user --no-cache-dir --no-color --requirement /opt/app/requirements.txt
+ ENTRYPOINT ["python", "inference.py"]
Code/PENGWIN_Challenge/Inference/X-ray/inference.py ADDED
@@ -0,0 +1,104 @@
+ """
+ The following is a simple example algorithm.
+
+ It is meant to run within a container.
+
+ To run it locally, you can call the following bash script:
+
+     ./test_run.sh
+
+ This will start the inference, reading from ./test/input and writing to ./test/output.
+
+ To save the container and prep it for upload to Grand-Challenge.org you can call:
+
+     ./save.sh
+
+ Any container that shows the same behavior will do; this is purely an example of how one COULD do it.
+
+ Happy programming!
+ """
+ from pathlib import Path
+
+ from glob import glob
+ import SimpleITK
+ import numpy
+ from Xray_inference_nnunet import inference_one_image
+
+ INPUT_PATH = Path("/input")
+ OUTPUT_PATH = Path("/output")
+ RESOURCE_PATH = Path("resources")
+
+
+ def run():
+     _show_torch_cuda_info()
+     output_path = OUTPUT_PATH / "images/pelvic-fracture-x-ray-segmentation"
+     inference_one_image(input_dir=load_image_dir(location=INPUT_PATH / "images/pelvic-fracture-x-ray"),
+                         output_dir=output_path)
+     # Read the input
+     # pelvic_fracture_x_ray = load_image_file_as_array(
+     #     location=INPUT_PATH / "images/pelvic-fracture-x-ray",
+     # )
+     #
+     # # Process the inputs: any way you'd like
+     # _show_torch_cuda_info()
+     #
+     # with open(RESOURCE_PATH / "some_resource.txt", "r") as f:
+     #     print(f.read())
+     #
+     # # For now, let us make bogus predictions
+     # pelvic_fracture_x_ray_segmentation = numpy.eye(4, 2)
+     #
+     # # Save your output
+     # write_array_as_image_file(
+     #     location=OUTPUT_PATH / "images/pelvic-fracture-x-ray-segmentation",
+     #     array=pelvic_fracture_x_ray_segmentation,
+     # )
+     return 0
+
+
+ def load_image_dir(*, location):
+     # Find the input TIFF files and return the path of the first one
+     input_files = glob(str(location / "*.tif"))
+     return input_files[0]
+
+
+ def load_image_file_as_array(*, location):
+     # Use SimpleITK to read a file
+     input_files = glob(str(location / "*.tiff")) + glob(str(location / "*.tif"))
+     result = SimpleITK.ReadImage(input_files[0])
+
+     # Convert it to a Numpy array
+     return SimpleITK.GetArrayFromImage(result)
+
+
+ def write_array_as_image_file(*, location, array):
+     location.mkdir(parents=True, exist_ok=True)
+
+     # You may need to change the suffix to .tiff to match the expected output
+     suffix = ".mha"
+
+     image = SimpleITK.GetImageFromArray(array)
+     SimpleITK.WriteImage(
+         image,
+         location / f"output{suffix}",
+         useCompression=True,
+     )
+
+
+ def _show_torch_cuda_info():
+     import torch
+
+     print("=+=" * 10)
+     print("Collecting Torch CUDA information")
+     print(f"Torch CUDA is available: {(available := torch.cuda.is_available())}")
+     if available:
+         print(f"\tnumber of devices: {torch.cuda.device_count()}")
+         print(f"\tcurrent device: {(current_device := torch.cuda.current_device())}")
+         print(f"\tproperties: {torch.cuda.get_device_properties(current_device)}")
+     print("=+=" * 10)
+
+
+ if __name__ == "__main__":
+     raise SystemExit(run())
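The segmentation this entrypoint ultimately writes is a multi-label TIFF: per the `merge_labels` docstring in `two_stage_inference.py`, each fragment id (values 1..30) is packed into one bit of a `uint32`, so overlapping fragments can share a pixel. A minimal encode/decode sketch of that scheme, with toy label maps:

```python
import numpy as np

# Two single-channel label maps that overlap at pixel (0, 0)
sacrum = np.array([[1, 0], [0, 0]], dtype=np.uint8)      # fragment id 1
left_hip = np.array([[11, 11], [0, 0]], dtype=np.uint8)  # fragment id 11

# Encode: label value v sets bit v of the combined uint32 pixel
combined = np.zeros((2, 2), dtype=np.uint32)
for lab in (sacrum, left_hip):
    fg = lab > 0
    combined[fg] |= np.left_shift(np.uint32(1), lab[fg].astype(np.uint32))

# Decode: pixel (0, 0) carries both bit 1 and bit 11
ids_at_00 = [k for k in range(1, 31) if (int(combined[0, 0]) >> k) & 1]
print(ids_at_00)  # [1, 11]
```

This is why the output dtype must stay `uint32` when writing the TIFF; a narrower integer type would silently drop the high fragment bits.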
Code/PENGWIN_Challenge/Inference/X-ray/requirements.txt ADDED
@@ -0,0 +1,2 @@
+ nnunetv2
+ tifffile
Code/PENGWIN_Challenge/Inference/X-ray/save.sh ADDED
@@ -0,0 +1,33 @@
+ #!/usr/bin/env bash
+
+ # Stop at first error
+ set -e
+
+ SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+
+ # Set default container name
+ container_tag="example-algorithm-preliminary-development-phase-x-ray"
+
+ # Check if an argument is provided
+ if [ "$#" -eq 1 ]; then
+     container_tag="$1"
+ fi
+
+ # Get the build information from the Docker image tag
+ # ('|| true' keeps 'set -e' from aborting before the friendly error below)
+ build_timestamp=$( docker inspect --format='{{ .Created }}' "$container_tag" 2> /dev/null || true )
+
+ if [ -z "$build_timestamp" ]; then
+     echo "Error: Failed to retrieve build information for container $container_tag"
+     exit 1
+ fi
+
+ # Format the build information to remove special characters
+ formatted_build_info=$(date -d "$build_timestamp" +"%Y%m%d_%H%M%S")
+
+ # Set the output filename with timestamp and build information
+ output_filename="${SCRIPT_DIR}/${container_tag}_${formatted_build_info}.tar.gz"
+
+ # Save the Docker container and gzip it
+ docker save "$container_tag" | gzip -c > "$output_filename"
+
+ echo "Container saved as ${output_filename}"
Code/PENGWIN_Challenge/Inference/X-ray/test_run.sh ADDED
@@ -0,0 +1,62 @@
+ #!/usr/bin/env bash
+
+ # Stop at first error
+ set -e
+
+ SCRIPT_DIR=$( cd -- "$( dirname -- "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )
+ DOCKER_TAG="preliminary-development-phase-x-ray_0818"
+ DOCKER_NOOP_VOLUME="${DOCKER_TAG}-volume"
+
+ INPUT_DIR="${SCRIPT_DIR}/test/input"
+ OUTPUT_DIR="${SCRIPT_DIR}/test/output"
+
+
+ echo "=+= Cleaning up any earlier output"
+ if [ -d "$OUTPUT_DIR" ]; then
+     # Ensure permissions are set up correctly
+     # This allows the Docker user to write to this location
+     rm -rf "${OUTPUT_DIR}"/*
+     chmod -f o+rwx "$OUTPUT_DIR"
+ else
+     mkdir --mode=o+rwx "$OUTPUT_DIR"
+ fi
+
+
+ echo "=+= (Re)build the container"
+ docker build "$SCRIPT_DIR" \
+     --platform=linux/amd64 \
+     --tag $DOCKER_TAG 2>&1
+
+
+ echo "=+= Doing a forward pass"
+ ## Note the extra arguments that are passed here:
+ # '--network none'
+ #     entails there is no internet connection
+ # '--gpus all'
+ #     enables access to any GPUs present
+ # '--volume <NAME>:/tmp'
+ #     is added because on Grand Challenge this directory cannot be used to store permanent files
+ docker volume create "$DOCKER_NOOP_VOLUME"
+ docker run --rm \
+     --platform=linux/amd64 \
+     --network none \
+     --gpus all \
+     --volume "$INPUT_DIR":/input \
+     --volume "$OUTPUT_DIR":/output \
+     --volume "$DOCKER_NOOP_VOLUME":/tmp \
+     $DOCKER_TAG
+ docker volume rm "$DOCKER_NOOP_VOLUME"
+
+ # Ensure permissions are set correctly on the output
+ # This allows the host user (e.g. you) to access and handle these files
+ docker run --rm \
+     --quiet \
+     --env HOST_UID=`id --user` \
+     --env HOST_GID=`id --group` \
+     --volume "$OUTPUT_DIR":/output \
+     alpine:latest \
+     /bin/sh -c 'chown -R ${HOST_UID}:${HOST_GID} /output'
+
+ echo "=+= Wrote results to ${OUTPUT_DIR}"
+
+ echo "=+= Save this image for uploading via save.sh \"${DOCKER_TAG}\""
Code/PENGWIN_Challenge/Inference/X-ray/two_stage_inference.py ADDED
@@ -0,0 +1,243 @@
+ import os
+ import time
+
+ import numpy as np
+ import torch
+ import tifffile as tiff
+ from nnunetv2.inference.predict_from_raw_data import nnUNetPredictor
+ from utils.utils import split_connected_components, remove_small_connected_components, refine_labels
+ import warnings
+ warnings.filterwarnings("ignore", category=DeprecationWarning)
+
+
+ def create_and_apply_mask(volume, prediction, label, padding=10, use_mask: bool = True):
+     """
+     Create a mask for a specific label and apply it to the volume, keeping only the regions with the specified label.
+     Also crop the volume to the bounding box of the mask.
+     """
+     mask = (prediction == label).astype(np.float32)
+     if use_mask:
+         volume = volume * mask
+
+     # Find the bounding box of the mask
+     coords = np.array(np.nonzero(mask))
+     top_left = np.min(coords, axis=1)
+     bottom_right = np.max(coords, axis=1) + 1
+
+     # Apply padding to the bounding box, clamped to the volume bounds
+     top_left = np.maximum(top_left - padding, 0)
+     bottom_right = np.minimum(bottom_right + padding, np.array(volume.shape))
+
+     # Crop the volume to the bounding box of the mask
+     cropped_volume = volume[top_left[0]:bottom_right[0], top_left[1]:bottom_right[1],
+                             top_left[2]:bottom_right[2]]
+     return cropped_volume, top_left, bottom_right
+
+
+ def restore_cropped_volume(cropped_volume, original_shape, top_left, bottom_right):
+     """
+     Restore the cropped volume to its original shape using the recorded bounding box.
+     """
+     restored_volume = np.zeros(original_shape, dtype=cropped_volume.dtype)
+     restored_volume[top_left[0]:bottom_right[0], top_left[1]:bottom_right[1],
+                     top_left[2]:bottom_right[2]] = cropped_volume
+     return restored_volume
+
+
+ def merge_labels(label_list):
+     """
+     Merge multiple 2D segmentation labels into a single uint32 label, where each pixel can belong to multiple categories.
+
+     Parameters:
+         label_list (list of np.ndarray): A list of labels; each element is a 2D numpy array with values between 0 and 30.
+
+     Returns:
+         np.ndarray: The merged label array, with data type uint32.
+     """
+     # Assume all labels have the same shape
+     shape = label_list[0].shape
+
+     # Each label value v sets bit v of the combined uint32 pixel
+     # (vectorized equivalent of per-pixel `combined |= 1 << value`)
+     combined_label = np.zeros(shape, dtype=np.uint32)
+     for label in label_list:
+         foreground = label > 0
+         combined_label[foreground] |= np.left_shift(np.uint32(1), label[foreground].astype(np.uint32))
+
+     return combined_label
+
+
+ def inference_one_image(input_dir, output_dir):
+     start_time = time.time()
+     # Load the image to be predicted
+     image = tiff.imread(input_dir)
+     # Add a channel dimension to fit the nnU-Net input convention
+     image = np.expand_dims(image, axis=0)
+     props = {
+         'spacing': [999.0, 1.0, 1.0]
+     }
+     # Load the three stage-1 models
+     model_stage1_s = nnUNetPredictor(tile_step_size=0.5, use_gaussian=True, use_mirroring=True,
+                                      perform_everything_on_device=True, device=torch.device('cuda'), verbose=False,
+                                      verbose_preprocessing=False, allow_tqdm=False
+                                      )
+     model_stage1_s.initialize_from_trained_model_folder(
+         'models/Dataset197_xray_sacrum/nnUNetTrainer__nnUNetResEncUNetMPlans__2d',
+         use_folds=(0,), checkpoint_name='checkpoint_final.pth',
+     )
+     model_stage1_l = nnUNetPredictor(tile_step_size=0.5, use_gaussian=True, use_mirroring=False,
+                                      perform_everything_on_device=True, device=torch.device('cuda'), verbose=False,
+                                      verbose_preprocessing=False, allow_tqdm=False
+                                      )
+     model_stage1_l.initialize_from_trained_model_folder(
+         'models/Dataset195_xray_left/nnUNetTrainerPelvic__nnUNetResEncUNetMPlans__2d',
+         use_folds=(0,), checkpoint_name='checkpoint_best.pth',
+     )
+     model_stage1_r = nnUNetPredictor(tile_step_size=0.5, use_gaussian=True, use_mirroring=False,
+                                      perform_everything_on_device=True, device=torch.device('cuda'), verbose=False,
+                                      verbose_preprocessing=False, allow_tqdm=False
+                                      )
+     model_stage1_r.initialize_from_trained_model_folder(
+         'models/Dataset196_xray_right/nnUNetTrainerPelvic__nnUNetResEncUNetMPlans__2d',
+         use_folds=(0,), checkpoint_name='checkpoint_best.pth',
+     )
+
+     # Start stage-1 inference
+     stage1_start_time = time.time()
+     pred_stage1_s = model_stage1_s.predict_single_npy_array(np.expand_dims(image, axis=0), props, None, None, False)
+     pred_stage1_l = model_stage1_l.predict_single_npy_array(np.expand_dims(image, axis=0), props, None, None, False)
+     pred_stage1_r = model_stage1_r.predict_single_npy_array(np.expand_dims(image, axis=0), props, None, None, False)
+
+     print(f"Stage 1 inference completed in {time.time() - stage1_start_time:.2f} seconds.")
+
+     # Start stage-2 inference
+     step_size = 0.5
+     gaussian_flag = True
+     mirror_flag = True
+     # Load the stage-2 models
+     model_stage2_s_1 = nnUNetPredictor(tile_step_size=step_size, use_gaussian=gaussian_flag,
+                                        use_mirroring=mirror_flag,
+                                        perform_everything_on_device=True, device=torch.device('cuda'),
+                                        verbose=False,
+                                        verbose_preprocessing=False, allow_tqdm=False
+                                        )
+     model_stage2_s_1.initialize_from_trained_model_folder(
+         'models/Dataset193_xray_sacrum1/nnUNetTrainer__nnUNetResEncUNetMPlans__2d',
+         use_folds=(0,), checkpoint_name='checkpoint_final.pth',
+     )
+     model_stage2_s_2 = nnUNetPredictor(tile_step_size=step_size, use_gaussian=gaussian_flag,
+                                        use_mirroring=mirror_flag,
+                                        perform_everything_on_device=True, device=torch.device('cuda'),
+                                        verbose=False,
+                                        verbose_preprocessing=False, allow_tqdm=False
+                                        )
+     model_stage2_s_2.initialize_from_trained_model_folder(
+         'models/Dataset194_xray_sacrum2/nnUNetTrainer__nnUNetResEncUNetMPlans__2d',
+         use_folds=(0,), checkpoint_name='checkpoint_final.pth',
+     )
+
+     model_stage2_hip_1 = nnUNetPredictor(tile_step_size=step_size, use_gaussian=gaussian_flag,
+                                          use_mirroring=mirror_flag,
+                                          perform_everything_on_device=True, device=torch.device('cuda'),
+                                          verbose=False,
+                                          verbose_preprocessing=False, allow_tqdm=False
+                                          )
+     model_stage2_hip_1.initialize_from_trained_model_folder(
+         'models/Dataset187_xray_hips1/nnUNetTrainer__nnUNetResEncUNetMPlans__2d',
+         use_folds=(0,), checkpoint_name='checkpoint_final.pth',
+     )
+
+     model_stage2_hip_2 = nnUNetPredictor(tile_step_size=step_size, use_gaussian=gaussian_flag,
+                                          use_mirroring=mirror_flag,
+                                          perform_everything_on_device=True, device=torch.device('cuda'),
+                                          verbose=False,
+                                          verbose_preprocessing=False, allow_tqdm=False
+                                          )
+     model_stage2_hip_2.initialize_from_trained_model_folder(
+         'models/Dataset188_xray_hips2/nnUNetTrainer__nnUNetResEncUNetMPlans__2d',
+         use_folds=(0,), checkpoint_name='checkpoint_best.pth',
+     )
+
+     label_list = []
+     if pred_stage1_s.any():
+         sacrum_start_time = time.time()
+         sacrum_input, sacrum_top_left, sacrum_bottom_right = create_and_apply_mask(image, pred_stage1_s, 1, padding=5, use_mask=False)
+
+         pred_s_1, prob_s_1 = model_stage2_s_1.predict_single_npy_array(np.expand_dims(sacrum_input, axis=0), props, None, None, True)
+         pred_s_1 = remove_small_connected_components(pred_s_1, [10], [1])
+         pred_s_1 = restore_cropped_volume(pred_s_1, image.shape, sacrum_top_left, sacrum_bottom_right)
+         pred_s_1 = refine_labels(pred_stage1_s, pred_s_1)
+         label_list.append(pred_s_1)
+
+         pred_s_2, prob_s_2 = model_stage2_s_2.predict_single_npy_array(np.expand_dims(sacrum_input, axis=0), props, None, None, True)
+         pred_s_2 = restore_cropped_volume(pred_s_2, image.shape, sacrum_top_left, sacrum_bottom_right)
+         pred_s_2 = refine_labels(pred_stage1_s, pred_s_2)
+         pred_s_2 = remove_small_connected_components(pred_s_2, [10], [1])
+         pred_s_2 = split_connected_components(pred_s_2, 1, 2)
+         label_list.append(pred_s_2)
+         print(f"Sacrum inference completed in {time.time() - sacrum_start_time:.2f} seconds.")
+
+     if pred_stage1_l.any():
+         left_start_time = time.time()
+         left_input, left_top_left, left_bottom_right = create_and_apply_mask(image, pred_stage1_l, 1, padding=5, use_mask=False)
+
+         pred_l_1, prob_l_1 = model_stage2_hip_1.predict_single_npy_array(np.expand_dims(left_input, axis=0), props, None, None, True)
+         pred_l_1 = remove_small_connected_components(pred_l_1, [10], [1])
+         pred_l_1 = np.where(pred_l_1 == 1, 11, pred_l_1)
+         pred_l_1 = restore_cropped_volume(pred_l_1, image.shape, left_top_left, left_bottom_right)
+         label_list.append(pred_l_1)
+
+         pred_l_2, prob_l_2 = model_stage2_hip_2.predict_single_npy_array(np.expand_dims(left_input, axis=0), props, None, None, True)
+         pred_l_2 = remove_small_connected_components(pred_l_2, [10], [1])
+         pred_l_2 = split_connected_components(pred_l_2, 1, 12)
+         pred_l_2 = restore_cropped_volume(pred_l_2, image.shape, left_top_left, left_bottom_right)
+         label_list.append(pred_l_2)
+         print(f"Left hip inference completed in {time.time() - left_start_time:.2f} seconds.")
+
+     if pred_stage1_r.any():
+         right_start_time = time.time()
+         right_input, right_top_left, right_bottom_right = create_and_apply_mask(image, pred_stage1_r, 1, padding=5, use_mask=False)
+
+         pred_r_1, prob_r_1 = model_stage2_hip_1.predict_single_npy_array(np.expand_dims(right_input, axis=0), props, None, None, True)
+         pred_r_1 = remove_small_connected_components(pred_r_1, [10], [1])
+         pred_r_1 = np.where(pred_r_1 == 1, 21, pred_r_1)
+         pred_r_1 = restore_cropped_volume(pred_r_1, image.shape, right_top_left, right_bottom_right)
+         label_list.append(pred_r_1)
+
+         pred_r_2, prob_r_2 = model_stage2_hip_2.predict_single_npy_array(np.expand_dims(right_input, axis=0), props, None, None, True)
+         pred_r_2 = remove_small_connected_components(pred_r_2, [10], [1])
+         pred_r_2 = split_connected_components(pred_r_2, 1, 22)
+         pred_r_2 = restore_cropped_volume(pred_r_2, image.shape, right_top_left, right_bottom_right)
+         label_list.append(pred_r_2)
+         print(f"Right hip inference completed in {time.time() - right_start_time:.2f} seconds.")
+
+     label_list = [np.squeeze(label, axis=0) for label in label_list]
+     combined_prediction = merge_labels(label_list)
+     input_filename = os.path.basename(input_dir)
+     if not os.path.exists(output_dir):
+         os.makedirs(output_dir)
+     output_path = os.path.join(output_dir, input_filename)
+     with tiff.TiffWriter(output_path, bigtiff=True) as tif:
+         tif.write(combined_prediction, photometric='minisblack', metadata={'spacing': 1, 'unit': 'um'}, resolution=(1, 1, 'CENTIMETER'))
+     total_time = time.time() - start_time
+     print(f"Total inference time: {total_time:.2f} seconds.")
+
+
+ if __name__ == "__main__":
+     input_dir = r"/data/ypy/dataset/miccai_challenge_2024/miccai_challenge_2024_nii/xray_10case"
+     output_dir = r"/data/ypy/dataset/miccai_challenge_2024/miccai_challenge_2024_nii/xray_10case/mask0817"
+
+     if not os.path.exists(output_dir):
+         os.makedirs(output_dir)
+
+     for filename in os.listdir(input_dir):
+         if filename.endswith(".tif"):
+             print("*********************************Processing {}**********************************".format(filename))
+             input_path = os.path.join(input_dir, filename)
+             inference_one_image(input_path, output_dir)
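`create_and_apply_mask` and `restore_cropped_volume` form a crop/uncrop pair around a label's bounding box. A pure-NumPy round-trip sketch of the same idea, on hypothetical toy data (here with `padding=0`; the script clamps padded bounds to the volume shape the same way):

```python
import numpy as np

volume = np.arange(4 * 4 * 4).reshape(4, 4, 4)
mask = np.zeros_like(volume)
mask[1:3, 1:3, 1:3] = 1
padding = 0

# Bounding box of the mask, padded and clamped to the volume bounds
coords = np.array(np.nonzero(mask))
top_left = np.maximum(coords.min(axis=1) - padding, 0)
bottom_right = np.minimum(coords.max(axis=1) + 1 + padding, np.array(volume.shape))

cropped = volume[top_left[0]:bottom_right[0],
                 top_left[1]:bottom_right[1],
                 top_left[2]:bottom_right[2]]

# Paste the crop back into a zero volume of the original shape
restored = np.zeros_like(volume)
restored[top_left[0]:bottom_right[0],
         top_left[1]:bottom_right[1],
         top_left[2]:bottom_right[2]] = cropped

print(cropped.shape)  # (2, 2, 2)
```

Inside the bounding box the round trip is lossless; everything outside it comes back as background, which is exactly the behavior the stage-2 predictions rely on before merging.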
Code/PENGWIN_Challenge/Inference/X-ray/utils/utils.py ADDED
@@ -0,0 +1,134 @@
1
+ import numpy as np
2
+ from scipy.ndimage import label, find_objects
3
+
4
+
5
+ def split_connected_components(labels, label_value, offset, min_volume=400, top_n=6):
6
+     """
7
+     Split the region with the specified label_value into multiple connected components and reassign labels.
8
+
9
+     Parameters:
10
+         labels (np.ndarray): Input label array
11
+         label_value (int): The label value to split
12
+         offset (int): Offset used to generate new label values
13
+         min_volume (int): Minimum volume to retain connected components
14
+         top_n (int): Retain the top-n connected components by volume
15
+
16
+     Returns:
17
+         np.ndarray: Relabeled array
18
+     """
19
+     # Get a binary mask where the label is equal to label_value
20
+     binary_mask = (labels == label_value)
21
+
22
+     structure = np.array([[[0, 0, 0],
23
+                            [0, 1, 0],
24
+                            [0, 0, 0]],
25
+                           [[0, 1, 0],
26
+                            [1, 1, 1],
27
+                            [0, 1, 0]],
28
+                           [[0, 0, 0],
29
+                            [0, 1, 0],
30
+                            [0, 0, 0]]], dtype=int)
31
+
32
+     # Use scipy.ndimage.label to mark connected components
33
+     labeled_array, num_features = label(binary_mask, structure=structure)
34
+
35
+     # Create new_labels as a copy of the input labels
36
+     new_labels = labels.copy()
37
+
38
+     # Get the volume of all connected components
39
+     volumes = [np.sum(labeled_array == i) for i in range(1, num_features + 1)]
40
+
41
+     # Get indices of the top-n connected components by volume
42
+     top_n_indices = np.argsort(volumes)[-top_n:][::-1]
43
+     top_n_volumes_labels = [(volumes[i], i + 1) for i in top_n_indices]  # Note that component indices start from 1
44
+
45
+     # Iterate through all connected components in descending order of volume and reassign labels to avoid conflicts
46
+     current_label = offset
47
+     for volume, i in top_n_volumes_labels:
48
+         region_mask = (labeled_array == i)
49
+         if volume >= min_volume:
50
+             new_labels[region_mask] = current_label
51
+             current_label += 1
52
+         else:
53
+             new_labels[region_mask] = 0
54
+
55
+     return new_labels
56
+
57
+
58
+ def remove_small_connected_components(prediction, min_volume, label_values):
59
+     """
60
+     Remove small connected components and set them as background.
61
+
62
+     Parameters:
63
+         prediction (np.ndarray): Model output predictions
64
+         min_volume (int): Minimum volume to retain connected components
65
+         label_values (list): List of label values to process
66
+
67
+     Returns:
68
+         np.ndarray: Processed prediction array
69
+     """
70
+     new_prediction = prediction.copy()
71
+
72
+     # Define the connectivity structure for identifying connected components
73
+     structure = np.array([[[0, 0, 0],
74
+                            [0, 1, 0],
75
+                            [0, 0, 0]],
76
+                           [[0, 1, 0],
77
+                            [1, 1, 1],
78
+                            [0, 1, 0]],
79
+                           [[0, 0, 0],
80
+                            [0, 1, 0],
81
+                            [0, 0, 0]]], dtype=int)
82
+
83
+     for index, label_value in enumerate(label_values):
84
+         print(f"Processing label {label_value}:")
85
+         # Get binary mask for the specified label
86
+         binary_mask = (prediction == label_value)
87
+         minimum = min_volume[index]
88
+
89
+         labeled_array, num_features = label(binary_mask, structure=structure)
90
+
91
+         # Get slices of each connected component
92
+         slices = find_objects(labeled_array)
93
+
94
+         retained_sizes = []
95
+         removed_sizes = []
96
+
97
+         # Iterate through each connected component and remove those smaller than the minimum volume
98
+         for i, slice_ in enumerate(slices):
99
+             region_size = np.sum(labeled_array[slice_] == (i + 1))
100
+             if region_size <= minimum:
101
+                 removed_sizes.append(region_size)
102
+                 new_prediction[labeled_array == (i + 1)] = 0
103
+             else:
104
+                 retained_sizes.append(region_size)
105
+
106
+         # Print the sizes of retained and removed regions
107
+         if retained_sizes:
108
+             print(f" Retained regions sizes: {retained_sizes}")
109
+         if removed_sizes:
110
+             print(f" Removed regions sizes: {removed_sizes}")
111
+
112
+     return new_prediction
113
+
114
+
115
+ def refine_labels(label1, label2):
116
+     """
117
+     Refine label2 based on label1 by adjusting foreground and background regions.
118
+
119
+     Parameters:
120
+         label1 (np.ndarray): The reference label.
121
+         label2 (np.ndarray): The label to be refined.
122
+
123
+     Returns:
124
+         np.ndarray: Refined label.
125
+     """
126
+     fixed_label2 = label2.copy()
127
+
128
+     # Regions that are background in label1 but foreground in label2
129
+     bg_to_fg_mask = (label1 == 0) & (label2 > 0)
130
+     fixed_label2[bg_to_fg_mask] = 0
131
+
132
+     return fixed_label2
133
+
134
+
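The helpers in `utils.py` rely on explicit 6-connectivity (face neighbors only) when labeling components. As a quick sanity check of that convention, here is a toy example; the array shape and blob sizes are invented for illustration, and scipy's default 3D structure is the same 6-connectivity the helpers build explicitly:

```python
import numpy as np
from scipy.ndimage import label

# Toy 3D volume with two disjoint blobs of label 1.
vol = np.zeros((3, 10, 10), dtype=np.int32)
vol[0, 1:3, 1:3] = 1    # small blob: 1 slice * 2 * 2 = 4 voxels
vol[0:3, 6:9, 6:9] = 1  # large blob: 3 slices * 3 * 3 = 27 voxels

# scipy's default structure for 3D is 6-connectivity, matching the explicit
# `structure` arrays used in utils.py.
labeled_array, num_features = label(vol == 1)
volumes = sorted(int((labeled_array == i).sum()) for i in range(1, num_features + 1))
print(num_features, volumes)  # 2 [4, 27]
```

With `min_volume` between 4 and 27, `split_connected_components` would keep only the large blob and discard the small one.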
Code/PENGWIN_Challenge/README.MD ADDED
@@ -0,0 +1,25 @@
1
+ # Two-Stage Segmentation of Pelvic Bone Fragments with Injuries in CT/X-ray Images Using nnUNet
2
+
3
+ Here is the code for the solution from the SMILE team, which ranked 2nd in Task 1 and 1st in Task 2 of the PENGWIN Challenge.
4
+ (https://pengwin.grand-challenge.org/result/)
5
+
6
+ The models used in both tasks were trained with nnUNetv2 in its default configuration.
7
+
8
+
9
+ ## Installation
10
+ 1. Clone the repository:
11
+ ```bash
12
+ git clone https://github.com/yuepeiyan/PENGWIN_Challenge.git
+ ```
13
+ 2. Install nnunetv2:
14
+ ```bash
15
+ cd nnUNet
16
+ pip install -e .
+ ```
17
+ 3. Download the CT weights and place the `models` folder into the `Inference/CT` directory. (Task 1 is used as an example here; the steps for Task 2 are identical.)
18
+
19
+ ## Usage
20
+ 1. Copy an image to the `test/input/images/plevic-fracture-ct` directory.
21
+ 2. Run `test_run.sh` to build and run a docker container that performs inference on the image placed in the folder mentioned above.
22
+ 3. The output will be saved to the `test/output/images/plevic-fracture-ct-segmentation` directory.
23
+
24
+ If you prefer not to run a Docker container, you can simply use the `inference_one_image` function in `two_stage_inference.py`.
25
+ This function allows you to perform inference on a single image and save the output to a specified path.
Code/PENGWIN_Challenge/nnUNet/.gitignore ADDED
@@ -0,0 +1,116 @@
1
+ # Byte-compiled / optimized / DLL files
2
+ __pycache__/
3
+ *.py[cod]
4
+ *$py.class
5
+
6
+ # C extensions
7
+ *.so
8
+
9
+ # Distribution / packaging
10
+ .Python
11
+ env/
12
+ build/
13
+ develop-eggs/
14
+ dist/
15
+ downloads/
16
+ eggs/
17
+ .eggs/
18
+ lib/
19
+ lib64/
20
+ parts/
21
+ sdist/
22
+ var/
23
+ *.egg-info/
24
+ .installed.cfg
25
+ *.egg
26
+
27
+ # PyInstaller
28
+ # Usually these files are written by a python script from a template
29
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
30
+ *.manifest
31
+ *.spec
32
+
33
+ # Installer logs
34
+ pip-log.txt
35
+ pip-delete-this-directory.txt
36
+
37
+ # Unit test / coverage reports
38
+ htmlcov/
39
+ .tox/
40
+ .coverage
41
+ .coverage.*
42
+ .cache
43
+ nosetests.xml
44
+ coverage.xml
45
+ *,cover
46
+ .hypothesis/
47
+
48
+ # Translations
49
+ *.mo
50
+ *.pot
51
+
52
+ # Django stuff:
53
+ *.log
54
+ local_settings.py
55
+
56
+ # Flask stuff:
57
+ instance/
58
+ .webassets-cache
59
+
60
+ # Scrapy stuff:
61
+ .scrapy
62
+
63
+ # Sphinx documentation
64
+ docs/_build/
65
+
66
+ # PyBuilder
67
+ target/
68
+
69
+ # IPython Notebook
70
+ .ipynb_checkpoints
71
+
72
+ # pyenv
73
+ .python-version
74
+
75
+ # celery beat schedule file
76
+ celerybeat-schedule
77
+
78
+ # dotenv
79
+ .env
80
+
81
+ # virtualenv
82
+ venv/
83
+ ENV/
84
+
85
+ # Spyder project settings
86
+ .spyderproject
87
+
88
+ # Rope project settings
89
+ .ropeproject
90
+
91
+ *.memmap
92
+ *.png
93
+ *.zip
94
+ *.npz
95
+ *.npy
96
+ *.jpg
97
+ *.jpeg
98
+ .idea
99
+ *.txt
100
+ .idea/*
101
+ *.png
102
+ *.nii.gz
103
+ *.nii
104
+ *.tif
105
+ *.bmp
106
+ *.pkl
107
+ *.xml
108
+ *.pkl
109
+ *.pdf
110
+ *.png
111
+ *.jpg
112
+ *.jpeg
113
+
114
+ *.model
115
+
116
+ !documentation/assets/scribble_example.png
Code/PENGWIN_Challenge/nnUNet/documentation/__init__.py ADDED
File without changes
Code/PENGWIN_Challenge/nnUNet/documentation/assets/scribble_example.png ADDED

Git LFS Details

  • SHA256: 9b75d9db30953164f58395d4962d8be003e431627c5293ebdb65202072c230d5
  • Pointer size: 132 Bytes
  • Size of remote file: 2.28 MB
Code/PENGWIN_Challenge/nnUNet/documentation/benchmarking.md ADDED
@@ -0,0 +1,115 @@
1
+ # nnU-Netv2 benchmarks
2
+
3
+ Does your system run like it should? Is your epoch time longer than expected? What epoch times should you expect?
4
+
5
+ Look no further for we have the solution here!
6
+
7
+ ## What does the nnU-Netv2 benchmark do?
8
+
9
+ nnU-Net's benchmark trains models for 5 epochs. At the end, the fastest epoch will
10
+ be noted down, along with the GPU name, torch version and cudnn version. You can find the benchmark output in the
11
+ corresponding nnUNet_results subfolder (see example below). Don't worry, we also provide scripts to collect your
12
+ results. Or you just start a benchmark and look at the console output. Everything is possible. Nothing is forbidden.
13
+
14
+ The benchmark implementation revolves around two trainers:
15
+ - `nnUNetTrainerBenchmark_5epochs` runs a regular training for 5 epochs. When completed, writes a .json file with the fastest
16
+ epoch time as well as the GPU used and the torch and cudnn versions. Useful for speed testing the entire pipeline
17
+ (data loading, augmentation, GPU training)
18
+ - `nnUNetTrainerBenchmark_5epochs_noDataLoading` is the same, but it doesn't do any data loading or augmentation. It
19
+ just presents dummy arrays to the GPU. Useful for checking pure GPU speed.
20
+
21
+ ## How to run the nnU-Netv2 benchmark?
22
+ It's quite simple, actually. It looks just like a regular nnU-Net training.
23
+
24
+ We provide reference numbers for some of the Medical Segmentation Decathlon datasets because they are easily
25
+ accessible: [download here](https://drive.google.com/drive/folders/1HqEgzS8BV2c7xYNrZdEAnrHk7osJJ--2). If it needs to be
26
+ quick and dirty, focus on Tasks 2 and 4. Download and extract the data and convert them to the nnU-Net format with
27
+ `nnUNetv2_convert_MSD_dataset`.
28
+ Run `nnUNetv2_plan_and_preprocess` for them.
29
+
30
+ Then, for each dataset, run the following commands (only one per GPU! Or one after the other):
31
+
32
+ ```bash
33
+ nnUNetv2_train DATASET_ID 2d 0 -tr nnUNetTrainerBenchmark_5epochs
34
+ nnUNetv2_train DATASET_ID 3d_fullres 0 -tr nnUNetTrainerBenchmark_5epochs
35
+ nnUNetv2_train DATASET_ID 2d 0 -tr nnUNetTrainerBenchmark_5epochs_noDataLoading
36
+ nnUNetv2_train DATASET_ID 3d_fullres 0 -tr nnUNetTrainerBenchmark_5epochs_noDataLoading
37
+ ```
38
+
39
+ If you want to inspect the outcome manually, check (for example!) your
40
+ `nnUNet_results/DATASET_NAME/nnUNetTrainerBenchmark_5epochs__nnUNetPlans__3d_fullres/fold_0/` folder for the `benchmark_result.json` file.
41
+
42
+ Note that there can be multiple entries in this file if the benchmark was run on different GPU types, torch versions or cudnn versions!
43
+
44
+ If you want to summarize your results like we did in our [results](#results), check the
45
+ [summary script](../nnunetv2/batch_running/benchmarking/summarize_benchmark_results.py). Here you need to change the
46
+ torch version, cudnn version and dataset you want to summarize, then execute the script. You can find the exact
47
+ values you need to put there in one of your `benchmark_result.json` files.
48
+
49
+ ## Results
50
+ We have tested a variety of GPUs and summarized the results in a
51
+ [spreadsheet](https://docs.google.com/spreadsheets/d/12Cvt_gr8XU2qWaE0XJk5jJlxMEESPxyqW0CWbQhTNNY/edit?usp=sharing).
52
+ Note that you can select the torch and cudnn versions at the bottom! There may be comments in this spreadsheet. Read them!
53
+
54
+ ## Result interpretation
55
+
56
+ Results are shown as epoch time in seconds. Lower is better (duh). Epoch times can fluctuate between runs, so as
57
+ long as you are within like 5-10% of the numbers we report, everything should be dandy.
58
+
59
+ If not, here is how you can try to find the culprit!
60
+
61
+ The first thing to do is to compare the performance between the `nnUNetTrainerBenchmark_5epochs_noDataLoading` and
62
+ `nnUNetTrainerBenchmark_5epochs` trainers. If the difference is about the same as we report in our spreadsheet, but
63
+ both your numbers are worse, the problem is with your GPU:
64
+
65
+ - Are you certain you compare the correct GPU? (duh)
66
+ - If yes, then you might want to install PyTorch in a different way. Never `pip install torch`! Go to the
67
+ [PyTorch installation](https://pytorch.org/get-started/locally/) page, select the most recent cuda version your
68
+ system supports and only then copy and execute the correct command! Either pip or conda should work
69
+ - If the problem is still not fixed, we recommend you try
70
+ [compiling pytorch from source](https://github.com/pytorch/pytorch#from-source). It's more difficult but that's
71
+ how we roll here at the DKFZ (at least the cool kids here).
72
+ - Another thing to consider is to try exactly the same torch + cudnn version as we did in our spreadsheet.
73
+ Sometimes newer versions can actually degrade performance and there might be bugs from time to time. Older versions
74
+ are also often a lot slower!
75
+ - Finally, some very basic things that could impact your GPU performance:
76
+ - Is the GPU cooled adequately? Check the temperature with `nvidia-smi`. Hot GPUs throttle performance in order to not self-destruct
77
+ - Is your OS using the GPU for displaying your desktop at the same time? If so then you can expect a performance
78
+ penalty (I dunno like 10% !?). That's expected and OK.
79
+ - Are other users using the GPU as well?
80
+
81
+
82
+ If you see a large performance difference between `nnUNetTrainerBenchmark_5epochs_noDataLoading` (fast) and
83
+ `nnUNetTrainerBenchmark_5epochs` (slow) then the problem might be related to data loading and augmentation. As a
84
+ reminder, nnU-net does not use pre-augmented images (offline augmentation) but instead generates augmented training
85
+ samples on the fly during training (no, you cannot switch it to offline). This requires that your system can do partial
86
+ reads of the image files fast enough (SSD storage required!) and that your CPU is powerful enough to run the augmentations.
87
+
88
+ Check the following:
89
+
90
+ - [CPU bottleneck] How many CPU threads are running during the training? nnU-Net uses 12 processes for data augmentation by default.
91
+ If you see those 12 running constantly during training, consider increasing the number of processes used for data
92
+ augmentation (provided there is headroom on your CPU!). Increase the number until you see less active workers than
93
+ you configured (or just set the number to 32 and forget about it). You can do so by setting the `nnUNet_n_proc_DA`
94
+ environment variable (Linux: `export nnUNet_n_proc_DA=24`). Read [here](set_environment_variables.md) on how to do this.
95
+ If your CPU does not support more processes (setting more processes than your CPU has threads makes
96
+ no sense!) you are out of luck and in desperate need of a system upgrade!
97
+ - [I/O bottleneck] If you don't see 12 (or nnUNet_n_proc_DA if you set it) processes running but your training times
98
+ are still slow then open up `top` (sorry, Windows users. I don't know how to do this on Windows) and look at the value
99
+ left of 'wa' in the row that begins
100
+ with '%Cpu (s)'. If this is >1.0 (arbitrarily set threshold here, essentially look for unusually high 'wa'. In a
101
+ healthy training 'wa' will be almost 0) then your storage cannot keep up with data loading. Make sure to set
102
+ nnUNet_preprocessed to a folder that is located on an SSD. nvme is preferred over SATA. PCIe3 is enough. 3000MB/s
103
+ sequential read recommended.
104
+ - [funky stuff] Sometimes there is funky stuff going on, especially when batch sizes are large, files are small and
105
+ patch sizes are small as well. As part of the data loading process, nnU-Net needs to open and close a file for each
106
+ training sample. Now imagine a dataset like Dataset004_Hippocampus where for the 2d config we have a batch size of
107
+ 366 and we run 250 iterations in <10s on an A100. That's a lotta files per second (366 * 250 / 10 = 9150 files per second).
108
+ Oof. If the files are on some network drive (even if it's nvme) then (probably) good night. The good news: nnU-Net
109
+ has got you covered: add `export nnUNet_keep_files_open=True` to your .bashrc and the problem goes away. The neat
110
+ part: it causes new problems if you are not allowed to have enough open files. You may have to increase the number
111
+ of allowed open files. `ulimit -n` gives your current limit (Linux only). It should not be something like 1024.
112
+ Increasing that to 65535 works well for me. See here for how to change these limits:
113
+ [Link](https://kupczynski.info/posts/ubuntu-18-10-ulimits/)
114
+ (works for Ubuntu 18, google for your OS!).
115
+
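The "files per second" figure quoted above for Dataset004_Hippocampus is just batch size × iterations ÷ epoch time, which you can reproduce as a one-liner:

```shell
# files opened per second ~= batch_size * iterations / epoch_seconds
# Hippocampus 2d config: batch 366, 250 iterations, ~10 s per epoch
echo $(( 366 * 250 / 10 ))
```

Running the same arithmetic against your own dataset's batch size and epoch time tells you how hard your storage is being hammered.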
Code/PENGWIN_Challenge/nnUNet/documentation/changelog.md ADDED
@@ -0,0 +1,51 @@
1
+ # What is different in v2?
2
+
3
+ - We now support **hierarchical labels** (named regions in nnU-Net). For example, instead of training BraTS with the
4
+ 'edema', 'necrosis' and 'enhancing tumor' labels you can directly train it on the target areas 'whole tumor',
5
+ 'tumor core' and 'enhancing tumor'. See [here](region_based_training.md) for a detailed description + also have a look at the
6
+ [BraTS 2021 conversion script](../nnunetv2/dataset_conversion/Dataset137_BraTS21.py).
7
+ - Cross-platform support. Cuda, mps (Apple M1/M2) and of course CPU support! Simply select the device with
8
+ `-device` in `nnUNetv2_train` and `nnUNetv2_predict`.
9
+ - Unified trainer class: nnUNetTrainer. No messing around with cascaded trainer, DDP trainer, region-based trainer,
10
+ ignore trainer etc. All default functionality is in there!
11
+ - Supports more input/output data formats through ImageIO classes.
12
+ - I/O formats can be extended by implementing new Adapters based on `BaseReaderWriter`.
13
+ - The nnUNet_raw_cropped folder no longer exists -> saves disk space at no performance penalty. magic! (no jk the
14
+ saving of cropped npz files was really slow, so it's actually faster to crop on the fly).
15
+ - Preprocessed data and segmentation are stored in different files when unpacked. Seg is stored as int8 and thus
16
+ takes 1/4 of the disk space per pixel (and I/O throughput) as in v1.
17
+ - Native support for multi-GPU (DDP) TRAINING.
18
+ Multi-GPU INFERENCE should still be run with `CUDA_VISIBLE_DEVICES=X nnUNetv2_predict [...] -num_parts Y -part_id X`.
19
+ There is no cross-GPU communication in inference, so it doesn't make sense to add additional complexity with DDP.
20
+ - All nnU-Net functionality is now also accessible via API. Check the corresponding entry point in `setup.py` to see
21
+ what functions you need to call.
22
+ - Dataset fingerprint is now explicitly created and saved in a json file (see nnUNet_preprocessed).
23
+
24
+ - Complete overhaul of plans files (read also [this](explanation_plans_files.md)):
25
+ - Plans are now .json and can be opened and read more easily
26
+ - Configurations are explicitly named ("3d_fullres" , ...)
27
+ - Configurations can inherit from each other to make manual experimentation easier
28
+ - A ton of additional functionality is now included in and can be changed through the plans, for example normalization strategy, resampling etc.
29
+ - Stages of the cascade are now explicitly listed in the plans. 3d_lowres has 'next_stage' (which can also be a
30
+ list of configurations!). 3d_cascade_fullres has a 'previous_stage' entry. By manually editing plans files you can
31
+ now connect anything you want, for example 2d with 3d_fullres or whatever. Be wild! (But don't create cycles!)
32
+ - Multiple configurations can point to the same preprocessed data folder to save disk space. Careful! Only
33
+ configurations that use the same spacing, resampling, normalization etc. should share a data source! By default,
34
+ 3d_fullres and 3d_cascade_fullres share the same data
35
+ - Any number of configurations can be added to the plans (remember to give them a unique "data_identifier"!)
36
+
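As a sketch of the inheritance mechanism, a derived configuration only lists the keys it overrides (the configuration name and batch size here are illustrative, not shipped defaults):

```json
"3d_fullres_bs4": {
    "inherits_from": "3d_fullres",
    "batch_size": 4
}
```

All other settings (patch size, normalization, resampling, ...) are taken from `3d_fullres`, and since the spacing is unchanged the two configurations can share the same preprocessed data.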
37
+ Folder structures are different and more user-friendly:
38
+ - nnUNet_preprocessed
39
+ - By default, preprocessed data is now saved as: `nnUNet_preprocessed/DATASET_NAME/PLANS_IDENTIFIER_CONFIGURATION` to clearly link them to their corresponding plans and configuration
40
+ - Name of the folder containing the preprocessed images can be adapted with the `data_identifier` key.
41
+ - nnUNet_results
42
+ - Results are now sorted as follows: DATASET_NAME/TRAINERCLASS__PLANSIDENTIFIER__CONFIGURATION/FOLD
43
+
44
+ ## What other changes are planned and not yet implemented?
45
+ - Integration into MONAI (together with our friends at Nvidia)
46
+ - New pretrained weights for a large number of datasets (coming very soon)
47
+
48
+
49
+ [//]: # (- nnU-Net now also natively supports an **ignore label**. Pixels with this label will not contribute to the loss. )
50
+
51
+ [//]: # (Use this to learn from sparsely annotated data, or excluding irrelevant areas from training. Read more [here]&#40;ignore_label.md&#41;.)
Code/PENGWIN_Challenge/nnUNet/documentation/competitions/AutoPETII.md ADDED
@@ -0,0 +1,129 @@
1
+ # Look Ma, no code: fine tuning nnU-Net for the AutoPET II challenge by only adjusting its JSON plans
2
+
3
+ Please cite our paper :-*
4
+
5
+ ```text
6
+ COMING SOON
7
+ ```
8
+
9
+ ## Intro
10
+
11
+ See the [Challenge Website](https://autopet-ii.grand-challenge.org/) for details on the challenge.
12
+
13
+ Our solution to this challenge requires no code changes at all. All we do is optimize nnU-Net's hyperparameters
14
+ (architecture, batch size, patch size) by modifying the nnUNetPlans.json file.
15
+
16
+ ## Prerequisites
17
+ Use the latest pytorch version!
18
+
19
+ We recommend you use the latest nnU-Net version as well! We ran our trainings with commit 913705f which you can try in case something doesn't work as expected:
20
+ `pip install git+https://github.com/MIC-DKFZ/nnUNet.git@913705f`
21
+
22
+ ## How to reproduce our trainings
23
+
24
+ ### Download and convert the data
25
+ 1. Download and extract the AutoPET II dataset
26
+ 2. Convert it to nnU-Net format by running `python nnunetv2/dataset_conversion/Dataset221_AutoPETII_2023.py FOLDER` where folder is the extracted AutoPET II dataset.
27
+
28
+ ### Experiment planning and preprocessing
29
+ We deviate a little from the standard nnU-Net procedure because all our experiments are based on just the 3d_fullres configuration
30
+
31
+ Run the following commands:
32
+ - `nnUNetv2_extract_fingerprint -d 221` extracts the dataset fingerprint
33
+ - `nnUNetv2_plan_experiment -d 221` does the planning for the plain unet
34
+ - `nnUNetv2_plan_experiment -d 221 -pl ResEncUNetPlanner` does the planning for the residual encoder unet
35
+ - `nnUNetv2_preprocess -d 221 -c 3d_fullres` runs all the preprocessing we need
36
+
37
+ ### Modification of plans files
38
+ Please read the [information on how to modify plans files](../explanation_plans_files.md) first!!!
39
+
40
+
41
+ It is easier to have everything in one plans file, so the first thing we do is transfer the ResEnc UNet to the
42
+ default plans file. We use the configuration inheritance feature of nnU-Net to make it use the same data as the
43
+ 3d_fullres configuration.
44
+ Add the following to the 'configurations' dict in 'nnUNetPlans.json':
45
+
46
+ ```json
47
+ "3d_fullres_resenc": {
48
+ "inherits_from": "3d_fullres",
49
+ "network_arch_class_name": "ResidualEncoderUNet",
50
+ "n_conv_per_stage_encoder": [
51
+ 1,
52
+ 3,
53
+ 4,
54
+ 6,
55
+ 6,
56
+ 6
57
+ ],
58
+ "n_conv_per_stage_decoder": [
59
+ 1,
60
+ 1,
61
+ 1,
62
+ 1,
63
+ 1
64
+ ]
65
+ },
66
+ ```
67
+
68
+ (these values are basically just copied from the 'nnUNetResEncUNetPlans.json' file! With everything redundant being omitted thanks to inheritance from 3d_fullres)
69
+
70
+ Now we crank up the patch and batch sizes. Add the following configurations:
71
+ ```json
72
+ "3d_fullres_resenc_bs80": {
73
+ "inherits_from": "3d_fullres_resenc",
74
+ "batch_size": 80
75
+ },
76
+ "3d_fullres_resenc_192x192x192_b24": {
77
+ "inherits_from": "3d_fullres_resenc",
78
+ "patch_size": [
79
+ 192,
80
+ 192,
81
+ 192
82
+ ],
83
+ "batch_size": 24
84
+ }
85
+ ```
86
+
87
+ Save the file (and check for potential Syntax Errors!)
88
+
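A stray comma in the edited plans file will only surface the next time nnU-Net parses it, so a small helper can catch syntax errors and missing configuration names early. This is our own sketch (the function name and the usage path are not part of nnU-Net):

```python
import json


def check_plans(path, required_configs):
    """Parse a nnUNetPlans.json-style file and verify the given configurations exist."""
    with open(path) as f:
        plans = json.load(f)  # raises json.JSONDecodeError on syntax errors
    configs = plans.get("configurations", {})
    missing = [c for c in required_configs if c not in configs]
    if missing:
        raise KeyError(f"missing configurations: {missing}")
    return configs


# Hypothetical usage after editing the plans file:
# check_plans("nnUNetPlans.json",
#             ["3d_fullres_resenc_bs80", "3d_fullres_resenc_192x192x192_b24"])
```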
89
+ ### Run trainings
90
+ Training each model requires 8 Nvidia A100 40GB GPUs. Expect training to run for 5-7 days. You'll need a really good
91
+ CPU to handle the data augmentation! 128C/256T are a must! If you have less threads available, scale down nnUNet_n_proc_DA accordingly.
92
+
93
+ ```bash
94
+ nnUNet_compile=T nnUNet_n_proc_DA=28 nnUNetv2_train 221 3d_fullres_resenc_bs80 0 -num_gpus 8
95
+ nnUNet_compile=T nnUNet_n_proc_DA=28 nnUNetv2_train 221 3d_fullres_resenc_bs80 1 -num_gpus 8
96
+ nnUNet_compile=T nnUNet_n_proc_DA=28 nnUNetv2_train 221 3d_fullres_resenc_bs80 2 -num_gpus 8
97
+ nnUNet_compile=T nnUNet_n_proc_DA=28 nnUNetv2_train 221 3d_fullres_resenc_bs80 3 -num_gpus 8
98
+ nnUNet_compile=T nnUNet_n_proc_DA=28 nnUNetv2_train 221 3d_fullres_resenc_bs80 4 -num_gpus 8
99
+
100
+ nnUNet_compile=T nnUNet_n_proc_DA=28 nnUNetv2_train 221 3d_fullres_resenc_192x192x192_b24 0 -num_gpus 8
101
+ nnUNet_compile=T nnUNet_n_proc_DA=28 nnUNetv2_train 221 3d_fullres_resenc_192x192x192_b24 1 -num_gpus 8
102
+ nnUNet_compile=T nnUNet_n_proc_DA=28 nnUNetv2_train 221 3d_fullres_resenc_192x192x192_b24 2 -num_gpus 8
103
+ nnUNet_compile=T nnUNet_n_proc_DA=28 nnUNetv2_train 221 3d_fullres_resenc_192x192x192_b24 3 -num_gpus 8
104
+ nnUNet_compile=T nnUNet_n_proc_DA=28 nnUNetv2_train 221 3d_fullres_resenc_192x192x192_b24 4 -num_gpus 8
105
+ ```
106
+
107
+ Done!
108
+
109
+ (We also provide pretrained weights in case you don't want to invest the GPU resources, see below)
110
+
111
+ ## How to make predictions with pretrained weights
112
+ Our final model is an ensemble of two configurations:
113
+ - ResEnc UNet with batch size 80
114
+ - ResEnc UNet with patch size 192x192x192 and batch size 24
115
+
116
+ To run inference with these models, do the following:
117
+
118
+ 1. Download the pretrained model weights from [Zenodo](https://zenodo.org/record/8362371)
119
+ 2. Install both .zip files using `nnUNetv2_install_pretrained_model_from_zip`
120
+ 3. Make sure
121
+ 4. Now you can run inference on new cases with `nnUNetv2_predict`:
122
+ - `nnUNetv2_predict -i INPUT -o OUTPUT1 -d 221 -c 3d_fullres_resenc_bs80 -f 0 1 2 3 4 -step_size 0.6 --save_probabilities`
123
+ - `nnUNetv2_predict -i INPUT -o OUTPUT2 -d 221 -c 3d_fullres_resenc_192x192x192_b24 -f 0 1 2 3 4 --save_probabilities`
124
+ - `nnUNetv2_ensemble -i OUTPUT1 OUTPUT2 -o OUTPUT_ENSEMBLE`
125
+
126
+ Note that our inference Docker omitted TTA via mirroring along the axial direction during prediction (only sagittal +
127
+ coronal mirroring). This was
128
+ done to keep the inference time below 10 minutes per image on a T4 GPU (we actually never tested whether we could
129
+ have left this enabled). Just leave it on! You can also leave the step_size at default for the 3d_fullres_resenc_bs80.
Code/PENGWIN_Challenge/nnUNet/documentation/convert_msd_dataset.md ADDED
@@ -0,0 +1,3 @@
 
1
+ Use `nnUNetv2_convert_MSD_dataset`.
2
+
3
+ Read `nnUNetv2_convert_MSD_dataset -h` for usage instructions.
Code/PENGWIN_Challenge/nnUNet/documentation/dataset_format.md ADDED
@@ -0,0 +1,254 @@
1
+ # nnU-Net dataset format
2
+ The only way to bring your data into nnU-Net is by storing it in a specific format. Due to nnU-Net's roots in the
3
+ [Medical Segmentation Decathlon](http://medicaldecathlon.com/) (MSD), its dataset format is heavily inspired by, but has since
3
+ diverged from (see also [here](#how-to-use-decathlon-datasets)), the format used in the MSD.
5
+
6
+ Datasets consist of three components: raw images, corresponding segmentation maps and a dataset.json file specifying
7
+ some metadata.
8
+
9
+ If you are migrating from nnU-Net v1, read [this](#how-to-use-nnu-net-v1-tasks) to convert your existing Tasks.
10
+
11
+
12
+ ## What do training cases look like?
13
+ Each training case is associated with an identifier = a unique name for that case. This identifier is used by nnU-Net to
14
+ connect images with the correct segmentation.
15
+
16
+ A training case consists of images and their corresponding segmentation.
17
+
18
+ **Images** is plural because nnU-Net supports arbitrarily many input channels. In order to be as flexible as possible,
19
+ nnU-net requires each input channel to be stored in a separate image (with the sole exception being RGB natural
20
+ images). So these images could for example be a T1 and a T2 MRI (or whatever else you want). The different input
21
+ channels MUST have the same geometry (same shape, spacing (if applicable) etc.) and
22
+ must be co-registered (if applicable). Input channels are identified by nnU-Net by their suffix: a four-digit integer at the end
23
+ of the filename. Image files must therefore follow the following naming convention: {CASE_IDENTIFIER}_{XXXX}.{FILE_ENDING}.
24
+ Hereby, XXXX is the 4-digit modality/channel identifier (should be unique for each modality/channel, e.g., “0000” for T1, “0001” for
25
+ T2 MRI, …) and FILE_ENDING is the file extension used by your image format (.png, .nii.gz, ...). See below for concrete examples.
26
+ The dataset.json file connects channel names with the channel identifiers in the 'channel_names' key (see below for details).
27
+
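The naming convention can be sketched as a small helper (our own illustration, not part of nnU-Net; the case identifier is a made-up example):

```python
def image_filename(case_identifier: str, channel: int, file_ending: str = ".nii.gz") -> str:
    """Build an nnU-Net image filename: {CASE_IDENTIFIER}_{XXXX}{FILE_ENDING}."""
    return f"{case_identifier}_{channel:04d}{file_ending}"


def seg_filename(case_identifier: str, file_ending: str = ".nii.gz") -> str:
    """Build the matching segmentation filename: {CASE_IDENTIFIER}{FILE_ENDING}."""
    return f"{case_identifier}{file_ending}"


print(image_filename("case_001", 0))  # case_001_0000.nii.gz (e.g. T1 channel)
print(image_filename("case_001", 1))  # case_001_0001.nii.gz (e.g. T2 channel)
print(seg_filename("case_001"))       # case_001.nii.gz
```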
28
+ Side note: Typically, each channel/modality needs to be stored in a separate file and is accessed with the XXXX channel identifier.
29
+ Exception are natural images (RGB; .png) where the three color channels can all be stored in one file (see the
30
+ [road segmentation](../nnunetv2/dataset_conversion/Dataset120_RoadSegmentation.py) dataset as an example).
31
+
32
+ **Segmentations** must share the same geometry with their corresponding images (same shape etc.). Segmentations are
33
+ integer maps with each value representing a semantic class. The background must be 0. If there is no background, then
34
+ do not use the label 0 for something else! Integer values of your semantic classes must be consecutive (0, 1, 2, 3,
35
+ ...). Of course, not all labels have to be present in each training case. Segmentations are saved as {CASE_IDENTIFER}.{FILE_ENDING} .
36
+
37
+ Within a training case, all image geometries (input channels, corresponding segmentation) must match. Between training
38
+ cases, they can of course differ. nnU-Net takes care of that.
39
+
40
+ Important: The input channels must be consistent! Concretely, **all images need the same input channels in the same
41
+ order and all input channels have to be present every time**. This is also true for inference!
42
+
43
+
+ ## Supported file formats
+ nnU-Net expects the same file format for images and segmentations! These will also be used for inference. For now, it
+ is thus not possible to train on .png and then run inference on .jpg.
+
+ One big change in nnU-Net V2 is the support of multiple input file types. Gone are the days of converting everything to .nii.gz!
+ This is implemented by abstracting the input and output of images + segmentations through `BaseReaderWriter`. nnU-Net
+ comes with a broad collection of Readers+Writers and you can even add your own to support your data format!
+ See [here](../nnunetv2/imageio/readme.md).
+
+ As a nice bonus, nnU-Net now also natively supports 2D input images and you no longer have to mess around with
+ conversions to pseudo 3D niftis. Yuck. That was disgusting.
+
+ Note that internally (for storing and accessing preprocessed images) nnU-Net will use its own file format, irrespective
+ of what the raw data was provided in! This is for performance reasons.
+
+
+ By default, the following file formats are supported:
+
+ - NaturalImage2DIO: .png, .bmp, .tif
+ - NibabelIO: .nii.gz, .nrrd, .mha
+ - NibabelIOWithReorient: .nii.gz, .nrrd, .mha. This reader will reorient images to RAS!
+ - SimpleITKIO: .nii.gz, .nrrd, .mha
+ - Tiff3DIO: .tif, .tiff. 3D tif images! Since TIF does not have a standardized way of storing spacing information,
+ nnU-Net expects each TIF file to be accompanied by an identically named .json file that contains this information (see
+ [here](#datasetjson)).
+
+ The file extension lists are not exhaustive and depend on what the backend supports. For example, nibabel and SimpleITK
+ support more than the three given here. The file endings given here are just the ones we tested!
+
+ IMPORTANT: nnU-Net can only be used with file formats that use lossless (or no) compression! Because the file
+ format is defined for an entire dataset (and not separately for images and segmentations; this could be a todo for
+ the future), we must ensure that there are no compression artifacts that destroy the segmentation maps. So no .jpg and
+ the likes!
+
+ ## Dataset folder structure
+ Datasets must be located in the `nnUNet_raw` folder (which you either define when installing nnU-Net or export/set every
+ time you intend to run nnU-Net commands!).
+ Each segmentation dataset is stored as a separate 'Dataset'. Datasets are associated with a dataset ID, a three-digit
+ integer, and a dataset name (which you can freely choose): For example, Dataset005_Prostate has 'Prostate' as dataset name and
+ the dataset id is 5. Datasets are stored in the `nnUNet_raw` folder like this:
+
+ nnUNet_raw/
+ ├── Dataset001_BrainTumour
+ ├── Dataset002_Heart
+ ├── Dataset003_Liver
+ ├── Dataset004_Hippocampus
+ ├── Dataset005_Prostate
+ ├── ...
+
+ Within each dataset folder, the following structure is expected:
+
+ Dataset001_BrainTumour/
+ ├── dataset.json
+ ├── imagesTr
+ ├── imagesTs # optional
+ └── labelsTr
+
+
+ When adding your custom dataset, take a look at the [dataset_conversion](../nnunetv2/dataset_conversion) folder and
+ pick an id that is not already taken. IDs 001-010 are for the Medical Segmentation Decathlon.
+
+ - **imagesTr** contains the images belonging to the training cases. nnU-Net will perform pipeline configuration, training with
+ cross-validation, as well as finding postprocessing and the best ensemble using this data.
+ - **imagesTs** (optional) contains the images that belong to the test cases. nnU-Net does not use them! This could just
+ be a convenient location for you to store these images. Remnant of the Medical Segmentation Decathlon folder structure.
+ - **labelsTr** contains the images with the ground truth segmentation maps for the training cases.
+ - **dataset.json** contains metadata of the dataset.
+
+ The scheme introduced [above](#what-do-training-cases-look-like) results in the following folder structure. Given
+ is an example for the first Dataset of the MSD: BrainTumour. This dataset has four input channels: FLAIR (0000),
+ T1w (0001), T1gd (0002) and T2w (0003). Note that the imagesTs folder is optional and does not have to be present.
+
+ nnUNet_raw/Dataset001_BrainTumour/
+ ├── dataset.json
+ ├── imagesTr
+ │   ├── BRATS_001_0000.nii.gz
+ │   ├── BRATS_001_0001.nii.gz
+ │   ├── BRATS_001_0002.nii.gz
+ │   ├── BRATS_001_0003.nii.gz
+ │   ├── BRATS_002_0000.nii.gz
+ │   ├── BRATS_002_0001.nii.gz
+ │   ├── BRATS_002_0002.nii.gz
+ │   ├── BRATS_002_0003.nii.gz
+ │   ├── ...
+ ├── imagesTs
+ │   ├── BRATS_485_0000.nii.gz
+ │   ├── BRATS_485_0001.nii.gz
+ │   ├── BRATS_485_0002.nii.gz
+ │   ├── BRATS_485_0003.nii.gz
+ │   ├── BRATS_486_0000.nii.gz
+ │   ├── BRATS_486_0001.nii.gz
+ │   ├── BRATS_486_0002.nii.gz
+ │   ├── BRATS_486_0003.nii.gz
+ │   ├── ...
+ └── labelsTr
+     ├── BRATS_001.nii.gz
+     ├── BRATS_002.nii.gz
+     ├── ...
+
+ Here is another example of the second dataset of the MSD, which has only one input channel:
+
+ nnUNet_raw/Dataset002_Heart/
+ ├── dataset.json
+ ├── imagesTr
+ │   ├── la_003_0000.nii.gz
+ │   ├── la_004_0000.nii.gz
+ │   ├── ...
+ ├── imagesTs
+ │   ├── la_001_0000.nii.gz
+ │   ├── la_002_0000.nii.gz
+ │   ├── ...
+ └── labelsTr
+     ├── la_003.nii.gz
+     ├── la_004.nii.gz
+     ├── ...
+
+ Remember: For each training case, all images must have the same geometry to ensure that their pixel arrays are aligned. Also
+ make sure that all your data is co-registered!
+
+ See also [dataset format inference](dataset_format_inference.md)!!
+
+ ## dataset.json
+ The dataset.json contains metadata that nnU-Net needs for training. We have greatly reduced the number of required
+ fields since version 1!
+
+ Here is what the dataset.json should look like at the example of the Dataset005_Prostate from the MSD:
+
+ {
+     "channel_names": {  # formerly modalities
+         "0": "T2",
+         "1": "ADC"
+     },
+     "labels": {  # THIS IS DIFFERENT NOW!
+         "background": 0,
+         "PZ": 1,
+         "TZ": 2
+     },
+     "numTraining": 32,
+     "file_ending": ".nii.gz",
+     "overwrite_image_reader_writer": "SimpleITKIO"  # optional! If not provided nnU-Net will automatically determine the ReaderWriter
+ }
+
+ The channel_names determine the normalization used by nnU-Net. If a channel is marked as 'CT', then a global
+ normalization based on the intensities in the foreground pixels will be used. If it is something else, per-channel
+ z-scoring will be used. Refer to the methods section in [our paper](https://www.nature.com/articles/s41592-020-01008-z)
+ for more details. nnU-Net v2 introduces a few more normalization schemes to
+ choose from and allows you to define your own, see [here](explanation_normalization.md) for more information.
+
+ Important changes relative to nnU-Net v1:
+ - "modality" is now called "channel_names" to remove the strong bias toward medical images
+ - labels are structured differently (name -> int instead of int -> name). This was needed to support [region-based training](region_based_training.md)
+ - "file_ending" is added to support different input file types
+ - "overwrite_image_reader_writer" is optional! Can be used to specify a certain (custom) ReaderWriter class that should
+ be used with this dataset. If not provided, nnU-Net will automatically determine the ReaderWriter
+ - "regions_class_order" is only used in [region-based training](region_based_training.md)
+
+ There is a utility with which you can generate the dataset.json automatically. You can find it
+ [here](../nnunetv2/dataset_conversion/generate_dataset_json.py).
+ See our examples in [dataset_conversion](../nnunetv2/dataset_conversion) for how to use it. And read its documentation!
+
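If you prefer not to use that utility, writing the file by hand is straightforward. A minimal plain-Python sketch reproducing the Dataset005_Prostate example from above:

```python
import json

dataset_json = {
    "channel_names": {"0": "T2", "1": "ADC"},
    "labels": {"background": 0, "PZ": 1, "TZ": 2},
    "numTraining": 32,
    "file_ending": ".nii.gz",
}

# dataset.json lives at the top level of the dataset folder,
# e.g. nnUNet_raw/Dataset005_Prostate/dataset.json
with open("dataset.json", "w") as f:
    json.dump(dataset_json, f, indent=4)
```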
+ As described above, a json file that contains spacing information is required for TIFF files.
+ An example for a 3D TIFF stack with a spacing of 7.6 in x and y and 80 in z is:
+
+ ```
+ {
+     "spacing": [7.6, 7.6, 80.0]
+ }
+ ```
+
+ Within the dataset folder, this file (named `cell6.json` in this example) would be placed in the following folders:
+
+ nnUNet_raw/Dataset123_Foo/
+ ├── dataset.json
+ ├── imagesTr
+ │   ├── cell6.json
+ │   └── cell6_0000.tif
+ └── labelsTr
+     ├── cell6.json
+     └── cell6.tif
+
+
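Such a sidecar file can be generated with a few lines. A minimal sketch using the example values from above (same basename as the TIFF, .json extension):

```python
import json

# One spacing file per TIFF, placed next to it in imagesTr (and labelsTr).
with open("cell6.json", "w") as f:
    json.dump({"spacing": [7.6, 7.6, 80.0]}, f)
```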
+ ## How to use nnU-Net v1 Tasks
+ If you are migrating from the old nnU-Net, convert your existing datasets with `nnUNetv2_convert_old_nnUNet_dataset`!
+
+ Example for migrating a nnU-Net v1 Task:
+ ```bash
+ nnUNetv2_convert_old_nnUNet_dataset /media/isensee/raw_data/nnUNet_raw_data_base/nnUNet_raw_data/Task027_ACDC Dataset027_ACDC
+ ```
+ Use `nnUNetv2_convert_old_nnUNet_dataset -h` for detailed usage instructions.
+
+
+ ## How to use decathlon datasets
+ See [convert_msd_dataset.md](convert_msd_dataset.md)
+
+ ## How to use 2D data with nnU-Net
+ 2D is now natively supported (yay!). See [here](#supported-file-formats) as well as the example dataset in this
+ [script](../nnunetv2/dataset_conversion/Dataset120_RoadSegmentation.py).
+
+
+ ## How to update an existing dataset
+ When updating a dataset, it is best practice to remove the preprocessed data in `nnUNet_preprocessed/DatasetXXX_NAME`
+ to ensure a fresh start. Then replace the data in `nnUNet_raw` and rerun `nnUNetv2_plan_and_preprocess`. Optionally,
+ also remove the results from old trainings.
+
+ # Example dataset conversion scripts
+ In the `dataset_conversion` folder (see [here](../nnunetv2/dataset_conversion)) are multiple example scripts for
+ converting datasets into nnU-Net format. These scripts cannot be run as they are (you need to open them and change
+ some paths), but they are excellent examples for you to learn how to convert your own datasets into nnU-Net format.
+ Just pick the dataset that is closest to yours as a starting point.
+ The list of dataset conversion scripts is continually updated. If you find that some publicly available dataset is
+ missing, feel free to open a PR to add it!
Code/PENGWIN_Challenge/nnUNet/documentation/dataset_format_inference.md ADDED
@@ -0,0 +1,39 @@
+ # Data format for Inference
+ Read the documentation on the overall [data format](dataset_format.md) first!
+
+ The data format for inference must match the one used for the raw data (**specifically, the images must be in exactly
+ the same format as in the imagesTr folder**). As before, the filenames must start with a
+ unique identifier, followed by a 4-digit modality identifier. Here is an example for two different datasets:
+
+ 1) Task005_Prostate:
+
+ This task has 2 modalities, so the files in the input folder must look like this:
+
+ input_folder
+ ├── prostate_03_0000.nii.gz
+ ├── prostate_03_0001.nii.gz
+ ├── prostate_05_0000.nii.gz
+ ├── prostate_05_0001.nii.gz
+ ├── prostate_08_0000.nii.gz
+ ├── prostate_08_0001.nii.gz
+ ├── ...
+
+ _0000 has to be the T2 image and _0001 has to be the ADC image (as specified by 'channel_names' in the
+ dataset.json), exactly the same as was used for training.
+
+ 2) Task002_Heart:
+
+ imagesTs
+ ├── la_001_0000.nii.gz
+ ├── la_002_0000.nii.gz
+ ├── la_006_0000.nii.gz
+ ├── ...
+
+ Task002 only has one modality, so each case only has one _0000.nii.gz file.
+
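A quick pre-flight check of an input folder can catch missing channels before inference starts. This is a hypothetical helper (not part of nnU-Net; the function name and the .nii.gz assumption are ours):

```python
import re
from collections import defaultdict

def check_input_folder(filenames, expected_channels=frozenset({"0000", "0001"})):
    """Group files by case identifier and flag cases with missing channels."""
    cases = defaultdict(set)
    for name in filenames:
        match = re.match(r"^(.+)_(\d{4})\.nii\.gz$", name)
        if match:
            cases[match.group(1)].add(match.group(2))
    return {case: channels == set(expected_channels) for case, channels in cases.items()}

result = check_input_folder([
    "prostate_03_0000.nii.gz", "prostate_03_0001.nii.gz",
    "prostate_05_0000.nii.gz",  # _0001 is missing for this case
])
print(result)  # {'prostate_03': True, 'prostate_05': False}
```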
+
+ The segmentations in the output folder will be named {CASE_IDENTIFIER}.nii.gz (omitting the modality identifier).
+
+ Remember that the file format used for inference (.nii.gz in this example) must be the same as was used for training
+ (and as was specified in 'file_ending' in the dataset.json)!
+
Code/PENGWIN_Challenge/nnUNet/documentation/explanation_normalization.md ADDED
@@ -0,0 +1,45 @@
+ # Intensity normalization in nnU-Net
+
+ The type of intensity normalization applied in nnU-Net can be controlled via the `channel_names` (former `modalities`)
+ entry in the dataset.json. Just like the old nnU-Net, per-channel z-scoring as well as dataset-wide z-scoring based on
+ foreground intensities are supported. However, there have been a few additions as well.
+
+ Reminder: The `channel_names` entry typically looks like this:
+
+     "channel_names": {
+         "0": "T2",
+         "1": "ADC"
+     },
+
+ It has as many entries as there are input channels for the given dataset.
+
+ To tell you a secret, nnU-Net does not really care what your channels are called. We just use this to determine what normalization
+ scheme will be used for the given dataset. nnU-Net requires you to specify a normalization strategy for each of your input channels!
+ If you enter a channel name that is not in the following list, the default (`zscore`) will be used.
+
+ Here is a list of currently available normalization schemes:
+
+ - `CT`: Perform CT normalization. Specifically, collect intensity values from the foreground classes (all but the
+ background and ignore) from all training cases, compute the mean, standard deviation as well as the 0.5 and
+ 99.5 percentiles of the values. Then clip to the percentiles, followed by subtraction of the mean and division by the
+ standard deviation. The normalization that is applied is the same for each training case (for this input channel).
+ The values used by nnU-Net for normalization are stored in the `foreground_intensity_properties_per_channel` entry in the
+ corresponding plans file. This normalization is suitable for modalities presenting physical quantities such as CT
+ images and ADC maps.
+ - `noNorm`: do not perform any normalization at all
+ - `rescale_to_0_1`: rescale the intensities to [0, 1]
+ - `rgb_to_0_1`: assumes uint8 inputs. Divides by 255 to rescale uint8 to [0, 1]
+ - `zscore`/anything else: perform z-scoring (subtract mean, divide by standard deviation) separately for each train case
+
+ **Important:** The nnU-Net default is to perform 'CT' normalization for CT images and 'zscore' for everything else! If
+ you deviate from that path, make sure to benchmark whether that actually improves results!
+
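For illustration, the CT scheme boils down to a clip followed by a fixed z-score. A minimal sketch with made-up statistics (real values come from the plans file, not from the image at hand):

```python
def ct_normalize(values, mean, std, percentile_00_5, percentile_99_5):
    # Clip to the dataset-wide percentiles, then z-score with the
    # dataset-wide mean/std (identical for every training case).
    clipped = [min(max(v, percentile_00_5), percentile_99_5) for v in values]
    return [(v - mean) / std for v in clipped]

normalized = ct_normalize([-1000.0, 40.0, 3000.0], mean=100.0, std=50.0,
                          percentile_00_5=-58.0, percentile_99_5=300.0)
# -1000 and 3000 are clipped to -58 and 300 before z-scoring
```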
+ # How to implement custom normalization strategies?
+ - Head over to nnunetv2/preprocessing/normalization
+ - Implement a new image normalization class by deriving from ImageNormalization
+ - Register it in nnunetv2/preprocessing/normalization/map_channel_name_to_normalization.py:channel_name_to_normalization_mapping.
+ This is where you specify a channel name that should be associated with it
+ - Use it by specifying the correct channel_name
+
+ Normalization can only be applied to one channel at a time. There is currently no way of implementing a normalization scheme
+ that gets multiple channels as input to be used jointly!
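To illustrate the pattern, here is a simplified stand-in for such a class. The base class and `run` signature below are our own simplification, not the actual nnU-Net API; the real interface is `ImageNormalization` in nnunetv2/preprocessing/normalization and operates on arrays:

```python
import abc

class ImageNormalizationBase(abc.ABC):
    """Illustrative stand-in for nnU-Net's ImageNormalization base class."""
    @abc.abstractmethod
    def run(self, image):
        ...

class RescaleTo01Normalization(ImageNormalizationBase):
    def run(self, image):
        lo, hi = min(image), max(image)
        if hi == lo:
            return [0.0 for _ in image]
        return [(v - lo) / (hi - lo) for v in image]

print(RescaleTo01Normalization().run([10.0, 15.0, 20.0]))  # [0.0, 0.5, 1.0]
```

After implementing such a class in nnU-Net itself, it would be registered under a channel name in map_channel_name_to_normalization.py and selected via that name in dataset.json.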
Code/PENGWIN_Challenge/nnUNet/documentation/explanation_plans_files.md ADDED
@@ -0,0 +1,185 @@
+ # Modifying the nnU-Net Configurations
+
+ nnU-Net provides unprecedented out-of-the-box segmentation performance for essentially any dataset we have evaluated
+ it on. That said, there is always room for improvements. A fool-proof strategy for squeezing out the last bit of
+ performance is to start with the default nnU-Net, and then further tune it manually to a concrete dataset at hand.
+ **This guide is about changes to the nnU-Net configuration you can make via the plans files. It does not cover code
+ extensions of nnU-Net. For that, take a look [here](extending_nnunet.md)**
+
+ In nnU-Net V2, plans files are SO MUCH MORE powerful than they were in v1. There are a lot more knobs that you can
+ turn without resorting to hacky solutions or even having to touch the nnU-Net code at all! And as an added bonus:
+ plans files are now also .json files and no longer require users to fiddle with pickle. Just open them in your text
+ editor of choice!
+
+ If overwhelmed, look at our [Examples](#examples)!
+
+ # plans.json structure
+
+ Plans have global and local settings. Global settings are applied to all configurations in that plans file while
+ local settings are attached to a specific configuration.
+
+ ## Global settings
+
+ - `foreground_intensity_properties_by_modality`: Intensity statistics of the foreground regions (all labels except
+ background and ignore label), computed over all training cases. Used by the [CT normalization scheme](explanation_normalization.md).
+ - `image_reader_writer`: Name of the image reader/writer class that should be used with this dataset. You might want
+ to change this if, for example, you would like to run inference with files that have a different file format. The
+ class that is named here must be located in nnunetv2.imageio!
+ - `label_manager`: The name of the class that does label handling. Take a look at
+ nnunetv2.utilities.label_handling.LabelManager to see what it does. If you decide to change it, place your version
+ in nnunetv2.utilities.label_handling!
+ - `transpose_forward`: nnU-Net transposes the input data so that the axes with the highest resolution (lowest spacing)
+ come last. This is because the 2D U-Net operates on the trailing dimensions (more efficient slicing due to internal
+ memory layout of arrays). Future work might move this setting to affect only individual configurations.
+   - `transpose_forward` is the new axis ordering handed to numpy.transpose.
+ - `transpose_backward`: the axis ordering that inverts `transpose_forward`
+ - \[`original_median_shape_after_transp`\]: just here for your information
+ - \[`original_median_spacing_after_transp`\]: just here for your information
+ - \[`plans_name`\]: do not change. Used internally
+ - \[`experiment_planner_used`\]: just here as metadata so that we know what planner originally generated this file
+ - \[`dataset_name`\]: do not change. This is the dataset these plans are intended for
+
+ ## Local settings
+ Plans also have a `configurations` key in which the actual configurations are stored. `configurations` are again a
+ dictionary, where the keys are the configuration names and the values are the local settings for each configuration.
+
+ To better understand the components describing the network topology in our plans files, please read section 6.2
+ in the [supplementary information](https://static-content.springer.com/esm/art%3A10.1038%2Fs41592-020-01008-z/MediaObjects/41592_2020_1008_MOESM1_ESM.pdf)
+ (page 13) of our paper!
+
+ Local settings:
+ - `spacing`: the target spacing used in this configuration
+ - `patch_size`: the patch size used for training this configuration
+ - `data_identifier`: the preprocessed data for this configuration will be saved in
+ nnUNet_preprocessed/DATASET_NAME/_data_identifier_. If you add a new configuration, remember to set a unique
+ data_identifier in order to not create conflicts with other configurations (unless you plan to reuse the data from
+ another configuration, for example as is done in the cascade)
+ - `batch_size`: batch size used for training
+ - `batch_dice`: whether to use batch dice (pretend all samples in the batch are one image, compute dice loss over that)
+ or not (each sample in the batch is a separate image, compute dice loss for each sample and average over samples)
+ - `preprocessor_name`: Name of the preprocessor class used for running preprocessing. Class must be located in
+ nnunetv2.preprocessing.preprocessors
+ - `use_mask_for_norm`: whether to use the nonzero mask for normalization or not (relevant for BraTS and the like,
+ probably False for all other datasets). Interacts with the ImageNormalization class
+ - `normalization_schemes`: mapping of channel identifier to ImageNormalization class name. ImageNormalization
+ classes must be located in nnunetv2.preprocessing.normalization. Also see [here](explanation_normalization.md)
+ - `resampling_fn_data`: name of the resampling function to be used for resizing image data. The resampling function must be
+ callable(data, current_spacing, new_spacing, **kwargs) and located in nnunetv2.preprocessing.resampling
+ - `resampling_fn_data_kwargs`: kwargs for resampling_fn_data
+ - `resampling_fn_probabilities`: name of the resampling function to be used for resizing predicted class probabilities/logits.
+ The resampling function must be `callable(data: Union[np.ndarray, torch.Tensor], current_spacing, new_spacing, **kwargs)` and located in
+ nnunetv2.preprocessing.resampling
+ - `resampling_fn_probabilities_kwargs`: kwargs for resampling_fn_probabilities
+ - `resampling_fn_seg`: name of the resampling function to be used for resizing segmentation maps (integer: 0, 1, 2, 3, etc.).
+ The resampling function must be callable(data, current_spacing, new_spacing, **kwargs) and located in
+ nnunetv2.preprocessing.resampling
+ - `resampling_fn_seg_kwargs`: kwargs for resampling_fn_seg
+ - `network_arch_class_name`: UNet class name, can be used to integrate custom dynamic architectures
+ - `UNet_base_num_features`: The number of starting features for the UNet architecture. Default is 32. Features
+ are doubled with each downsampling
+ - `unet_max_num_features`: Maximum number of features (default: capped at 320 for 3D and 512 for 2D). The purpose is to
+ prevent parameters from exploding too much.
+ - `conv_kernel_sizes`: the convolutional kernel sizes used by nnU-Net in each stage of the encoder. The decoder
+ mirrors the encoder and is therefore not explicitly listed here! The list is as long as `n_conv_per_stage_encoder` has
+ entries
+ - `n_conv_per_stage_encoder`: number of convolutions used per stage (= at a feature map resolution) in the encoder.
+ Default is 2. The list has as many entries as the encoder has stages
+ - `n_conv_per_stage_decoder`: number of convolutions used per stage in the decoder. Also see `n_conv_per_stage_encoder`
+ - `num_pool_per_axis`: number of times each of the spatial axes is pooled in the network. Needed to know how to pad
+ image sizes during inference (num_pool = 5 means the input must be divisible by 2**5=32)
+ - `pool_op_kernel_sizes`: the pooling kernel sizes (and at the same time strides) for each stage of the encoder
+ - \[`median_image_size_in_voxels`\]: the median size of the images of the training set at the current target spacing.
+ Do not modify this as it is not used. It is just here for your information.
+
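The divisibility constraint mentioned for `num_pool_per_axis` can be sketched in a few lines (hypothetical helper, not nnU-Net code):

```python
def patch_size_is_valid(patch_size, num_pool_per_axis):
    # Each axis of the patch must be divisible by 2**num_pool for that axis.
    return all(size % (2 ** n) == 0 for size, n in zip(patch_size, num_pool_per_axis))

print(patch_size_is_valid([40, 56, 40], [3, 3, 3]))  # True: all divisible by 8
print(patch_size_is_valid([20, 28, 20], [3, 3, 3]))  # False: 20 and 28 are not
```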
+ Special local settings:
+ - `inherits_from`: configurations can inherit from each other. This makes it easy to add new configurations that only
+ differ in a few local settings from another. If using this, remember to set a new `data_identifier` (if needed)!
+ - `previous_stage`: if this configuration is part of a cascade, we need to know what the previous stage (for example
+ the low resolution configuration) was. This needs to be specified here.
+ - `next_stage`: if this configuration is part of a cascade, we need to know what possible subsequent stages are! This
+ is because we need to export predictions in the correct spacing when running the validation. `next_stage` can either
+ be a string or a list of strings
+
+ # Examples
+
+ ## Increasing the batch size for large datasets
+ If your dataset is large, the training can benefit from larger batch sizes. To do this, simply create a new
+ configuration in the `configurations` dict:
+
+     "configurations": {
+         "3d_fullres_bs40": {
+             "inherits_from": "3d_fullres",
+             "batch_size": 40
+         }
+     }
+
+ No need to change the data_identifier. `3d_fullres_bs40` will just use the preprocessed data from `3d_fullres`.
+ No need to rerun `nnUNetv2_preprocess` because we can use already existing data (if available) from `3d_fullres`.
+
+ ## Using custom preprocessors
+ If you would like to use a different preprocessor class, then this can be specified as follows:
+
+     "configurations": {
+         "3d_fullres_my_preprocessor": {
+             "inherits_from": "3d_fullres",
+             "preprocessor_name": "MY_PREPROCESSOR",
+             "data_identifier": "3d_fullres_my_preprocessor"
+         }
+     }
+
+ You need to run preprocessing for this new configuration:
+ `nnUNetv2_preprocess -d DATASET_ID -c 3d_fullres_my_preprocessor` because it changes the preprocessing. Remember to
+ set a unique `data_identifier` whenever you make modifications to the preprocessed data!
+
+ ## Change target spacing
+
+     "configurations": {
+         "3d_fullres_my_spacing": {
+             "inherits_from": "3d_fullres",
+             "spacing": [X, Y, Z],
+             "data_identifier": "3d_fullres_my_spacing"
+         }
+     }
+
+ You need to run preprocessing for this new configuration:
+ `nnUNetv2_preprocess -d DATASET_ID -c 3d_fullres_my_spacing` because it changes the preprocessing. Remember to
+ set a unique `data_identifier` whenever you make modifications to the preprocessed data!
+
+ ## Adding a cascade to a dataset where it does not exist
+ Hippocampus is small. It doesn't have a cascade. It also doesn't really make sense to add a cascade here, but hey, for
+ the sake of demonstration we can do that.
+ We change the following things here:
+
+ - `spacing`: The lowres stage should operate at a lower resolution
+ - we modify the `median_image_size_in_voxels` entry as a guide for what original image sizes we deal with
+ - we set some patch size that is inspired by `median_image_size_in_voxels`
+ - we need to remember that the patch size must be divisible by 2**num_pool in each axis!
+ - network parameters such as kernel sizes and pooling operations are changed accordingly
+ - we need to specify the name of the next stage
+ - we need to add the highres stage
+
+ This is what this would look like (comparisons with 3d_fullres given as reference):
+
+     "configurations": {
+         "3d_lowres": {
+             "inherits_from": "3d_fullres",
+             "data_identifier": "3d_lowres",
+             "spacing": [2.0, 2.0, 2.0], # from [1.0, 1.0, 1.0] in 3d_fullres
+             "median_image_size_in_voxels": [18, 25, 18], # from [36, 50, 35]
+             "patch_size": [20, 28, 20], # from [40, 56, 40]
+             "n_conv_per_stage_encoder": [2, 2, 2], # one less entry than 3d_fullres ([2, 2, 2, 2])
+             "n_conv_per_stage_decoder": [2, 2], # one less entry than 3d_fullres
+             "num_pool_per_axis": [2, 2, 2], # one less pooling than 3d_fullres in each dimension (3d_fullres: [3, 3, 3])
+             "pool_op_kernel_sizes": [[1, 1, 1], [2, 2, 2], [2, 2, 2]], # one less [2, 2, 2]
+             "conv_kernel_sizes": [[3, 3, 3], [3, 3, 3], [3, 3, 3]], # one less [3, 3, 3]
+             "next_stage": "3d_cascade_fullres" # name of the next stage in the cascade
+         },
+         "3d_cascade_fullres": { # does not need a data_identifier because we can use the data of 3d_fullres
+             "inherits_from": "3d_fullres",
+             "previous_stage": "3d_lowres" # name of the previous stage
+         }
+     }
+
+ To better understand the components describing the network topology in our plans files, please read section 6.2
+ in the [supplementary information](https://static-content.springer.com/esm/art%3A10.1038%2Fs41592-020-01008-z/MediaObjects/41592_2020_1008_MOESM1_ESM.pdf)
+ (page 13) of our paper!
Code/PENGWIN_Challenge/nnUNet/documentation/extending_nnunet.md ADDED
@@ -0,0 +1,37 @@
+ # Extending nnU-Net
+ We hope that the new structure of nnU-Net v2 makes it much more intuitive to modify! We cannot give an
+ extensive tutorial on how each and every bit of it can be modified. It is better for you to search for the position
+ in the repository where the thing you intend to change is implemented and start working your way through the code from
+ there. Setting breakpoints and debugging into nnU-Net really helps in understanding it and thus will help you make the
+ necessary modifications!
+
+ Here are some things you might want to read before you start:
+ - Editing nnU-Net configurations through plans files is really powerful now and allows you to change a lot of things regarding
+ preprocessing, resampling, network topology etc. Read [this](explanation_plans_files.md)!
+ - [Image normalization](explanation_normalization.md) and [i/o formats](dataset_format.md#supported-file-formats) are easy to extend!
+ - Manual data splits can be defined as described [here](manual_data_splits.md)
+ - You can chain arbitrary configurations together into cascades, see [this again](explanation_plans_files.md)
+ - Read about our support for [region-based training](region_based_training.md)
+ - If you intend to modify the training procedure (loss, sampling, data augmentation, lr scheduler, etc.) then you need
+ to implement your own trainer class. Best practice is to create a class that inherits from nnUNetTrainer and
+ implements the necessary changes. Head over to our [trainer classes folder](../nnunetv2/training/nnUNetTrainer) for
+ inspiration! There will be similar trainers for what you intend to change and you can take them as a guide. nnUNetTrainers
+ are structured similarly to PyTorch Lightning trainers; this should also make things easier!
+ - Integrating new network architectures can be done in two ways:
+   - Quick and dirty: implement a new nnUNetTrainer class and overwrite its `build_network_architecture` function.
+ Make sure your architecture is compatible with deep supervision (if not, use `nnUNetTrainerNoDeepSupervision`
+ as basis!) and that it can handle the patch sizes that are thrown at it! Your architecture should NOT apply any
+ nonlinearities at the end (softmax, sigmoid etc.). nnU-Net does that!
+   - The 'proper' (but difficult) way: Build a dynamically configurable architecture such as the `PlainConvUNet` class
+ used by default. It needs to have some sort of GPU memory estimation method that can be used to evaluate whether
+ certain patch sizes and topologies fit into a specified GPU memory target. Build a new `ExperimentPlanner` that can configure your new
+ class and communicate with its memory budget estimation. Run `nnUNetv2_plan_and_preprocess` while specifying your
+ custom `ExperimentPlanner` and a custom `plans_name`. Implement a nnUNetTrainer that can use the plans generated by
+ your `ExperimentPlanner` to instantiate the network architecture. Specify your plans and trainer when running `nnUNetv2_train`.
+ It always pays off to first read and understand the corresponding nnU-Net code and use it as a template for your implementation!
+ - Remember that multi-GPU training, region-based training, ignore label and cascaded training are now simply integrated
+ into one unified nnUNetTrainer class. No separate classes are needed (remember this when implementing your own trainer
+ classes and ensure support for all of these features, or raise `NotImplementedError`)
+
+ [//]: # (- Read about our support for [ignore label]&#40;ignore_label.md&#41; and [region-based training]&#40;region_based_training.md&#41;)
Code/PENGWIN_Challenge/nnUNet/documentation/how_to_use_nnunet.md ADDED
@@ -0,0 +1,310 @@
+ ## **2024-04-18 UPDATE: New residual encoder UNet presets available!**
+ The recommended nnU-Net presets have changed! See [here](resenc_presets.md) how to unlock them!
+
+
+ ## How to run nnU-Net on a new dataset
+
+
+ Given some dataset, nnU-Net fully automatically configures an entire segmentation pipeline that matches its properties.
+ nnU-Net covers the entire pipeline, from preprocessing to model configuration, model training and postprocessing,
+ all the way to ensembling. After running nnU-Net, the trained model(s) can be applied to the test cases for inference.
+
+ ### Dataset Format
+ nnU-Net expects datasets in a structured format. This format is inspired by the data structure of
+ the [Medical Segmentation Decathlon](http://medicaldecathlon.com/). Please read
+ [this](dataset_format.md) for information on how to set up datasets to be compatible with nnU-Net.
+
+ **Since version 2 we support multiple image file formats (.nii.gz, .png, .tif, ...)! Read the dataset_format
+ documentation to learn more!**
+
+ **Datasets from nnU-Net v1 can be converted to v2 by running `nnUNetv2_convert_old_nnUNet_dataset INPUT_FOLDER
+ OUTPUT_DATASET_NAME`.** Remember that v2 calls datasets DatasetXXX_Name (not Task), where XXX is a 3-digit number.
+ Please provide the **path** to the old task, not just the Task name. nnU-Net v2 doesn't know where v1 tasks were!
+
+ ### Experiment planning and preprocessing
+ Given a new dataset, nnU-Net will extract a dataset fingerprint (a set of dataset-specific properties such as
+ image sizes, voxel spacings, intensity information etc.). This information is used to design three U-Net configurations.
+ Each of these pipelines operates on its own preprocessed version of the dataset.
+
+ The easiest way to run fingerprint extraction, experiment planning and preprocessing is to use:
+
+ ```bash
+ nnUNetv2_plan_and_preprocess -d DATASET_ID --verify_dataset_integrity
+ ```
+
+ Where `DATASET_ID` is the dataset id. We recommend using `--verify_dataset_integrity` the first time
+ you run this command. This will check for some of the most common error sources!
+
+ You can also process several datasets at once by giving `-d 1 2 3 [...]`. If you already know which U-Net configuration
+ you need, you can also specify that with `-c 3d_fullres` (make sure to adapt `-np` in this case!). For more information
+ about all the options available to you, please run `nnUNetv2_plan_and_preprocess -h`.
+
+ `nnUNetv2_plan_and_preprocess` will create a new subfolder in your nnUNet_preprocessed folder named after the dataset.
+ Once the command is completed there will be a dataset_fingerprint.json file as well as a nnUNetPlans.json file for you to look at
+ (in case you are interested!). There will also be subfolders containing the preprocessed data for your U-Net configurations.
+
+ [Optional]
+ If you prefer to keep things separate, you can also use `nnUNetv2_extract_fingerprint`, `nnUNetv2_plan_experiment`
+ and `nnUNetv2_preprocess` (in that order).
+ ### Model training
+ #### Overview
+ You pick which configurations (2d, 3d_fullres, 3d_lowres, 3d_cascade_fullres) should be trained! If you have no idea
+ what performs best on your data, just run all of them and let nnU-Net identify the best one. It's up to you!
+
+ nnU-Net trains all configurations in a 5-fold cross-validation over the training cases. This is 1) needed so that
+ nnU-Net can estimate the performance of each configuration and tell you which one should be used for your
+ segmentation problem and 2) a natural way of obtaining a good model ensemble (average the output of these 5 models
+ for prediction) to boost performance.
+
+ You can influence the splits nnU-Net uses for the 5-fold cross-validation (see [here](manual_data_splits.md)). If you
+ prefer to train a single model on all training cases, this is also possible (see below).
+
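+ To make the split mechanics concrete, here is a minimal sketch (with made-up case identifiers, not nnU-Net code) of how such a 5-fold cross-validation split over training cases can be constructed, with each case serving as validation exactly once:
+
```python
import random

# Hypothetical training case identifiers (nnU-Net derives the real ones from your dataset)
cases = [f"case_{i:03d}" for i in range(10)]
random.Random(12345).shuffle(cases)  # fixed seed for reproducibility

# Partition the shuffled cases into 5 folds; fold i is the validation set of split i
n_folds = 5
folds = [cases[i::n_folds] for i in range(n_folds)]
splits = [
    {"val": folds[i],
     "train": [c for j, fold in enumerate(folds) if j != i for c in fold]}
    for i in range(n_folds)
]
```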
+ **Note that not all U-Net configurations are created for all datasets. In datasets with small image sizes, the U-Net
+ cascade (and with it the 3d_lowres configuration) is omitted because the patch size of the full resolution U-Net
+ already covers a large part of the input images.**
+
+ Training models is done with the `nnUNetv2_train` command. The general structure of the command is:
+ ```bash
+ nnUNetv2_train DATASET_NAME_OR_ID UNET_CONFIGURATION FOLD [additional options, see -h]
+ ```
+
+ UNET_CONFIGURATION is a string that identifies the requested U-Net configuration (defaults: 2d, 3d_fullres, 3d_lowres,
+ 3d_cascade_fullres). DATASET_NAME_OR_ID specifies the dataset that should be trained on and FOLD specifies which fold of
+ the 5-fold cross-validation is trained.
+
+ nnU-Net stores a checkpoint every 50 epochs. If you need to continue a previous training, just add `--c` to the
+ training command.
+
+ IMPORTANT: If you plan to use `nnUNetv2_find_best_configuration` (see below), add the `--npz` flag. This makes
+ nnU-Net save the softmax outputs during the final validation. They are needed for that. Exported softmax
+ predictions are very large and can therefore take up a lot of disk space, which is why this is not enabled by default.
+ If you ran initially without the `--npz` flag but now require the softmax predictions, simply rerun the validation with:
+ ```bash
+ nnUNetv2_train DATASET_NAME_OR_ID UNET_CONFIGURATION FOLD --val --npz
+ ```
+
+ You can specify the device nnU-Net should use with `-device DEVICE`. DEVICE can only be cpu, cuda or mps. If
+ you have multiple GPUs, please select the GPU id using `CUDA_VISIBLE_DEVICES=X nnUNetv2_train [...]` (requires the device to be cuda).
+
+ See `nnUNetv2_train -h` for additional options.
+
+ ### 2D U-Net
+ For FOLD in [0, 1, 2, 3, 4], run:
+ ```bash
+ nnUNetv2_train DATASET_NAME_OR_ID 2d FOLD [--npz]
+ ```
+
+ ### 3D full resolution U-Net
+ For FOLD in [0, 1, 2, 3, 4], run:
+ ```bash
+ nnUNetv2_train DATASET_NAME_OR_ID 3d_fullres FOLD [--npz]
+ ```
+
+ ### 3D U-Net cascade
+ #### 3D low resolution U-Net
+ For FOLD in [0, 1, 2, 3, 4], run:
+ ```bash
+ nnUNetv2_train DATASET_NAME_OR_ID 3d_lowres FOLD [--npz]
+ ```
+
+ #### 3D full resolution U-Net
+ For FOLD in [0, 1, 2, 3, 4], run:
+ ```bash
+ nnUNetv2_train DATASET_NAME_OR_ID 3d_cascade_fullres FOLD [--npz]
+ ```
+ **Note that the 3D full resolution U-Net of the cascade requires the five folds of the low resolution U-Net to be
+ completed!**
+
+ The trained models will be written to the nnUNet_results folder. Each training obtains an automatically generated
+ output folder name:
+
+     nnUNet_results/DatasetXXX_MYNAME/TRAINER_CLASS_NAME__PLANS_NAME__CONFIGURATION/FOLD
+
+ For Dataset002_Heart (from the MSD), for example, this looks like this:
+
+     nnUNet_results/
+     ├── Dataset002_Heart
+     │   ├── nnUNetTrainer__nnUNetPlans__2d
+     │   │   ├── fold_0
+     │   │   ├── fold_1
+     │   │   ├── fold_2
+     │   │   ├── fold_3
+     │   │   ├── fold_4
+     │   │   ├── dataset.json
+     │   │   ├── dataset_fingerprint.json
+     │   │   └── plans.json
+     │   └── nnUNetTrainer__nnUNetPlans__3d_fullres
+     │       ├── fold_0
+     │       ├── fold_1
+     │       ├── fold_2
+     │       ├── fold_3
+     │       ├── fold_4
+     │       ├── dataset.json
+     │       ├── dataset_fingerprint.json
+     │       └── plans.json
+
+ Note that 3d_lowres and 3d_cascade_fullres do not exist here because this dataset did not trigger the cascade. In each
+ model training output folder (each of the fold_x folders), the following files will be created:
+ - debug.json: Contains a summary of blueprint and inferred parameters used for training this model as well as a
+ bunch of additional stuff. Not easy to read, but very useful for debugging ;-)
+ - checkpoint_best.pth: checkpoint file of the best model identified during training. Not used right now unless you
+ explicitly tell nnU-Net to use it.
+ - checkpoint_final.pth: checkpoint file of the final model (after training has ended). This is what is used for both
+ validation and inference.
+ - network_architecture.pdf (only if hiddenlayer is installed!): a pdf document with a figure of the network architecture in it.
+ - progress.png: Shows losses, pseudo Dice, learning rate and epoch times over the course of the training. At the top is
+ a plot of the training (blue) and validation (red) loss during training. Also shows an approximation of
+ the Dice (green) as well as a moving average of it (dotted green line). This approximation is the average Dice score
+ of the foreground classes. **It needs to be taken with a big (!)
+ grain of salt** because it is computed on randomly drawn patches from the validation
+ data at the end of each epoch, and the aggregation of TP, FP and FN for the Dice computation treats the patches as if
+ they all originate from the same volume ('global Dice'; we do not compute a Dice for each validation case and then
+ average over all cases but pretend that there is only one validation case from which we sample patches). The reason for
+ this is that the 'global Dice' is easy to compute during training and is still quite useful to evaluate whether a model
+ is training at all or not. A proper validation takes way too long to be done each epoch. It is run at the end of the training.
+ - validation: this folder contains the predicted validation cases after the training has finished. The summary.json file in here
+ contains the validation metrics (a mean over all cases is provided at the start of the file). If `--npz` was set, then
+ the compressed softmax outputs (saved as .npz files) are in here as well.
+
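+ The difference between this 'global Dice' and a proper per-case Dice can be illustrated with a toy example (made-up binary patches, not nnU-Net internals):
+
```python
import numpy as np

# Two toy "validation patches": binary foreground predictions and references
pred = [np.array([1, 1, 0, 0]), np.array([1, 0, 0, 0])]
ref  = [np.array([1, 0, 0, 0]), np.array([1, 1, 1, 0])]

# 'global' Dice: pool TP/FP/FN over all patches as if they came from one volume
tp = sum(int(((p == 1) & (r == 1)).sum()) for p, r in zip(pred, ref))
fp = sum(int(((p == 1) & (r == 0)).sum()) for p, r in zip(pred, ref))
fn = sum(int(((p == 0) & (r == 1)).sum()) for p, r in zip(pred, ref))
global_dice = 2 * tp / (2 * tp + fp + fn)  # 4/7 here

# per-case Dice, then averaged (what a proper validation computes)
def dice(p, r):
    inter = int(((p == 1) & (r == 1)).sum())
    return 2 * inter / (int((p == 1).sum()) + int((r == 1).sum()))

mean_dice = float(np.mean([dice(p, r) for p, r in zip(pred, ref)]))  # 7/12 here
```

The two values differ because pooling TP/FP/FN weights every patch by its foreground size, while averaging treats every case equally.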
+ During training it is often useful to watch the progress. We therefore recommend that you have a look at the generated
+ progress.png when running the first training. It will be updated after each epoch.
+
+ Training times largely depend on the GPU. The smallest GPU we recommend for training is the Nvidia RTX 2080 Ti. With
+ that, all network trainings take less than 2 days. Refer to our [benchmarks](benchmarking.md) to see if your system is
+ performing as expected.
+
+ ### Using multiple GPUs for training
+
+ If multiple GPUs are at your disposal, the best way of using them is to run multiple nnU-Net trainings at once, one
+ on each GPU. This is because data parallelism never scales perfectly linearly, especially not with small networks such
+ as the ones used by nnU-Net.
+
+ Example:
+
+ ```bash
+ CUDA_VISIBLE_DEVICES=0 nnUNetv2_train DATASET_NAME_OR_ID 2d 0 [--npz] & # train on GPU 0
+ CUDA_VISIBLE_DEVICES=1 nnUNetv2_train DATASET_NAME_OR_ID 2d 1 [--npz] & # train on GPU 1
+ CUDA_VISIBLE_DEVICES=2 nnUNetv2_train DATASET_NAME_OR_ID 2d 2 [--npz] & # train on GPU 2
+ CUDA_VISIBLE_DEVICES=3 nnUNetv2_train DATASET_NAME_OR_ID 2d 3 [--npz] & # train on GPU 3
+ CUDA_VISIBLE_DEVICES=4 nnUNetv2_train DATASET_NAME_OR_ID 2d 4 [--npz] & # train on GPU 4
+ ...
+ wait
+ ```
+
+ **Important: The first time a training is run, nnU-Net will extract the preprocessed data into uncompressed numpy
+ arrays for speed reasons! This operation must be completed before starting more than one training of the same
+ configuration! Wait with starting subsequent folds until the first training is using the GPU! Depending on the
+ dataset size and your system, this should only take a couple of minutes at most.**
+
+ If you insist on running DDP multi-GPU training, we got you covered:
+
+ `nnUNetv2_train DATASET_NAME_OR_ID 2d 0 [--npz] -num_gpus X`
+
+ Again, note that this will be slower than running separate trainings on separate GPUs. DDP only makes sense if you have
+ manually interfered with the nnU-Net configuration and are training larger models with larger patch and/or batch sizes!
+
+ Important when using `-num_gpus`:
+ 1) If you train using, say, 2 GPUs but have more GPUs in the system, you need to specify which GPUs should be used via
+ CUDA_VISIBLE_DEVICES=0,1 (or whatever your ids are).
+ 2) You cannot specify more GPUs than you have samples in your minibatches. If the batch size is 2, 2 GPUs is the maximum!
+ 3) Make sure your batch size is divisible by the number of GPUs you use or you will not make good use of your hardware.
+
+ In contrast to the old nnU-Net, DDP is now completely hassle-free. Enjoy!
+
+ ### Automatically determine the best configuration
+ Once the desired configurations have been trained (full cross-validation), you can tell nnU-Net to automatically identify
+ the best combination for you:
+
+ ```commandline
+ nnUNetv2_find_best_configuration DATASET_NAME_OR_ID -c CONFIGURATIONS
+ ```
+
+ `CONFIGURATIONS` is the list of configurations you would like to explore. By default, ensembling is enabled,
+ meaning that nnU-Net will generate all possible combinations of ensembles (2 configurations per ensemble). This requires
+ the .npz files containing the predicted probabilities of the validation set to be present (use `nnUNetv2_train` with the
+ `--npz` flag, see above). You can disable ensembling by setting the `--disable_ensembling` flag.
+
+ See `nnUNetv2_find_best_configuration -h` for more options.
+
+ `nnUNetv2_find_best_configuration` will also automatically determine the postprocessing that should be used.
+ Postprocessing in nnU-Net only considers the removal of all but the largest component in the prediction (once for
+ foreground vs background and once for each label/region).
+
+ Once completed, the command will print to your console exactly what commands you need to run to make predictions. It
+ will also create two files in the `nnUNet_results/DATASET_NAME` folder for you to inspect:
+ - `inference_instructions.txt` again contains the exact commands you need to use for predictions
+ - `inference_information.json` can be inspected to see the performance of all configurations and ensembles, as well
+ as the effect of the postprocessing plus some debug information.
+
+ ### Run inference
+ Remember that the data located in the input folder must have the same file endings as the dataset you trained the model on
+ and must adhere to the nnU-Net naming scheme for image files (see [dataset format](dataset_format.md) and
+ [inference data format](dataset_format_inference.md)!)
+
+ `nnUNetv2_find_best_configuration` (see above) will print a string to the terminal with the inference commands you need to use.
+ The easiest way to run inference is to simply use these commands.
+
+ If you wish to manually specify the configuration(s) used for inference, use the following commands:
+
+ #### Run prediction
+ For each of the desired configurations, run:
+ ```
+ nnUNetv2_predict -i INPUT_FOLDER -o OUTPUT_FOLDER -d DATASET_NAME_OR_ID -c CONFIGURATION --save_probabilities
+ ```
+
+ Only specify `--save_probabilities` if you intend to use ensembling. `--save_probabilities` will make the command save the predicted
+ probabilities alongside the predicted segmentation masks, which requires a lot of disk space.
+
+ Please select a separate `OUTPUT_FOLDER` for each configuration!
+
+ Note that by default, inference will be done with all 5 folds from the cross-validation as an ensemble. We very
+ strongly recommend you use all 5 folds. Thus, all 5 folds must have been trained prior to running inference.
+
+ If you wish to make predictions with a single model, train the `all` fold and specify it in `nnUNetv2_predict`
+ with `-f all`.
+
+ #### Ensembling multiple configurations
+ If you wish to ensemble multiple predictions (typically from different configurations), you can do so with the following command:
+ ```bash
+ nnUNetv2_ensemble -i FOLDER1 FOLDER2 ... -o OUTPUT_FOLDER -np NUM_PROCESSES
+ ```
+
+ You can specify an arbitrary number of folders, but remember that each folder needs to contain the .npz files that were
+ generated by `nnUNetv2_predict`. Again, `nnUNetv2_ensemble -h` will tell you more about additional options.
+
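+ Conceptually, ensembling averages the per-class probability maps of the individual predictions before taking the argmax. A toy sketch (the arrays below stand in for the stored probabilities; the actual .npz layout is nnU-Net-internal):
+
```python
import numpy as np

# Toy probability maps from two configurations, shape (num_classes, num_voxels)
probs_a = np.array([[0.9, 0.4],    # class 0
                    [0.1, 0.6]])   # class 1
probs_b = np.array([[0.7, 0.2],
                    [0.3, 0.8]])

ensemble = (probs_a + probs_b) / 2       # average the probabilities
segmentation = ensemble.argmax(axis=0)   # most likely class per voxel
```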
+ #### Apply postprocessing
+ Finally, apply the previously determined postprocessing to the (ensembled) predictions:
+
+ ```commandline
+ nnUNetv2_apply_postprocessing -i FOLDER_WITH_PREDICTIONS -o OUTPUT_FOLDER --pp_pkl_file POSTPROCESSING_FILE -plans_json PLANS_FILE -dataset_json DATASET_JSON_FILE
+ ```
+
+ `nnUNetv2_find_best_configuration` (or its generated `inference_instructions.txt` file) will tell you where to find
+ the postprocessing file. If not, you can just look for it in your results folder (it's creatively named
+ `postprocessing.pkl`). If your source folder is from an ensemble, you also need to specify a `-plans_json` file and
+ a `-dataset_json` file that should be used (for single configuration predictions these are automatically copied
+ from the respective training). You can pick these files from any of the ensemble members.
+
+
+ ## How to run inference with pretrained models
+ See [here](run_inference_with_pretrained_models.md)
+
+ ## How to Deploy and Run Inference with YOUR Pretrained Models
+ To facilitate the use of pretrained models on a different computer for inference purposes, follow these streamlined steps:
+ 1. Exporting the model: Use `nnUNetv2_export_model_to_zip` to package your trained model into a .zip file. This file will contain all necessary model files.
+ 2. Transferring the model: Transfer the .zip file to the target computer where inference will be performed.
+ 3. Importing the model: On the new PC, use `nnUNetv2_install_pretrained_model_from_zip` to load the pretrained model from the .zip file.
+
+ Please note that both computers must have nnU-Net installed along with all its dependencies to ensure compatibility and functionality of the model.
+
+ [//]: # (## Examples)
+
+ [//]: # ()
+ [//]: # (To get you started we compiled two simple to follow examples:)
+
+ [//]: # (- run a training with the 3d full resolution U-Net on the Hippocampus dataset. See [here]&#40;documentation/training_example_Hippocampus.md&#41;.)
+
+ [//]: # (- run inference with nnU-Net's pretrained models on the Prostate dataset. See [here]&#40;documentation/inference_example_Prostate.md&#41;.)
+
+ [//]: # ()
+ [//]: # (Usability not good enough? Let us know!)
Code/PENGWIN_Challenge/nnUNet/documentation/ignore_label.md ADDED
@@ -0,0 +1,104 @@
+ # Ignore Label
+
+ The _ignore label_ can be used to mark regions that should be ignored by nnU-Net. This can be used to
+ learn from images where only sparse annotations are available, for example in the form of scribbles or a limited
+ amount of annotated slices. Internally, this is accomplished by using partial losses, i.e. losses that are only
+ computed on annotated pixels while ignoring the rest. Take a look at our
+ [`DC_and_BCE_loss` loss](../nnunetv2/training/loss/compound_losses.py) to see how this is done.
+ During inference (validation and prediction), nnU-Net will always predict dense segmentations. Metric computation in
+ validation is of course only done on annotated pixels.
+
+ Sparse annotations can be used to train a model for application to new, unseen images or to autocomplete the
+ provided training cases given the sparse labels.
+
+ (See our [paper](https://arxiv.org/abs/2403.12834) for more information.)
+
+ Typical use cases for the ignore label are:
+ - Saving annotation time through sparse annotation schemes
+   - Annotation of all or a subset of slices with scribbles (scribble supervision)
+   - Dense annotation of a subset of slices
+   - Dense annotation of chosen patches/cubes within an image
+ - Coarsely masking out faulty segmentations in the reference segmentations
+ - Masking areas for other reasons
+
+ If you are using nnU-Net's ignore label, please cite the following paper in addition to the original nnU-Net paper:
+
+ ```
+ Gotkowski, K., Lüth, C., Jäger, P. F., Ziegler, S., Krämer, L., Denner, S., Xiao, S., Disch, N., Maier-Hein, K. H., & Isensee, F.
+ (2024). Embarrassingly Simple Scribble Supervision for 3D Medical Segmentation. arXiv:2403.12834
+ ```
+
+ ## Use cases
+
+ ### Scribble Supervision
+
+ Scribbles are free-form drawings used to coarsely annotate an image. As we have demonstrated in our recent [paper](https://arxiv.org/abs/2403.12834), nnU-Net's partial loss implementation enables state-of-the-art learning from partially annotated data and even surpasses many purpose-built methods for learning from scribbles. As a starting point, for each image slice and each class (including background), an interior and a border scribble should be generated:
+
+ - Interior scribble: A scribble placed randomly within the interior of a class instance
+ - Border scribble: A scribble roughly delineating a small part of the class border of a class instance
+
+ An example of such scribble annotations is depicted in Figure 1 and an animation in Animation 1.
+ Depending on the availability of data and its variability, it is also possible to only annotate a subset of selected slices.
+
+ <p align="center">
+ <img src="assets/scribble_example.png" width="1024px" />
+ <figcaption>Figure 1: Examples of segmentation types with (A) depicting a dense segmentation and (B) a scribble segmentation.</figcaption>
+ </p>
+
+ <p align="center">
+ <img width="512px" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExbmdndHQwMG96M3FqZWtwbHR2enUwZXhwNHVsbndzNmNpZnVlbHJ6OSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/KRJ48evmroDlIgcqcO/giphy.gif">
+ <img width="512px" src="https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExem10Z3ZqZHQ2MWNsMjdibG1zc3M2NzNqbG9mazdudG5raTk4d3h4MSZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/ifVxQQfco5ro1gH6bQ/giphy.gif">
+ <figcaption>Animation 1: Depiction of a dense segmentation and a scribble annotation. Background scribbles have been excluded for better visualization.</figcaption>
+ </p>
+
+ ### Dense annotation of a subset of slices
+
+ Another form of sparse annotation is the dense annotation of a subset of slices. These slices should be selected by the user either randomly, based on visual class variation between slices, or in an active learning setting. An example with only 10% of slices annotated is depicted in Figure 2.
+
+ <p align="center">
+ <img src="assets/amos2022_sparseseg10_2d.png" width="512px" />
+ <img src="assets/amos2022_sparseseg10.png" width="512px" />
+ <figcaption>Figure 2: Examples of a dense annotation of a subset of slices. The ignored areas are shown in red.</figcaption>
+ </p>
+
+
+ ## Usage within nnU-Net
+
+ Usage of the ignore label in nnU-Net is straightforward and only requires the definition of an _ignore_ label in the _dataset.json_.
+ This ignore label MUST be the highest integer label value in the segmentation. For example, given a background class and two foreground classes, the ignore label must have the integer value 3. The ignore label must be named _ignore_ in the _dataset.json_. Taking the BraTS dataset as an example, the labels dict of the _dataset.json_ must look like this:
+
+ ```python
+ ...
+ "labels": {
+     "background": 0,
+     "edema": 1,
+     "non_enhancing_and_necrosis": 2,
+     "enhancing_tumor": 3,
+     "ignore": 4
+ },
+ ...
+ ```
+
+ Of course, the ignore label is compatible with [region-based training](region_based_training.md):
+
+ ```python
+ ...
+ "labels": {
+     "background": 0,
+     "whole_tumor": (1, 2, 3),
+     "tumor_core": (2, 3),
+     "enhancing_tumor": 3,  # or (3, )
+     "ignore": 4
+ },
+ "regions_class_order": (1, 2, 3),  # don't declare the ignore label here! It is not predicted
+ ...
+ ```
+
+ Then use the dataset as you would any other.
+
+ Remember that nnU-Net runs a cross-validation. Thus, it will also evaluate on your partially annotated data. This
+ will of course work! If you wish to compare different sparse annotation strategies (through simulations, for example),
+ we recommend evaluating on densely annotated images by running inference and then using `nnUNetv2_evaluate_folder` or
+ `nnUNetv2_evaluate_simple`.
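+
+ As a toy illustration of what a sparsely annotated reference segmentation with ignore label looks like as an array (a made-up 2D example; nnU-Net reads real label images):
+
```python
import numpy as np

# 2 foreground classes (1, 2) plus background (0) -> ignore label is 3 (highest value)
IGNORE = 3

seg = np.full((4, 4), IGNORE, dtype=np.uint8)  # start fully unannotated
seg[0, :] = 0        # one densely annotated background row
seg[1, 1:3] = 1      # a small scribble of class 1
seg[2, 2] = 2        # a single annotated pixel of class 2

# the partial loss is only computed where seg != IGNORE
annotated_fraction = float((seg != IGNORE).mean())  # 7 of 16 pixels
```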
Code/PENGWIN_Challenge/nnUNet/documentation/installation_instructions.md ADDED
@@ -0,0 +1,87 @@
+ # System requirements
+
+ ## Operating System
+ nnU-Net has been tested on Linux (Ubuntu 18.04, 20.04, 22.04; CentOS, RHEL), Windows and macOS! It should work out of the box!
+
+ ## Hardware requirements
+ We support GPU (recommended), CPU and Apple M1/M2 as devices (currently Apple mps does not implement 3D
+ convolutions, so you might have to use the CPU on those devices).
+
+ ### Hardware requirements for training
+ We recommend you use a GPU for training, as this will take a really long time on CPU or MPS (Apple M1/M2).
+ For training, a GPU with at least 10 GB (popular non-datacenter options are the RTX 2080 Ti, RTX 3080/3090 or RTX 4080/4090) is
+ required. We also recommend a strong CPU to go along with the GPU. 6 cores (12 threads)
+ are the bare minimum! CPU requirements are mostly related to data augmentation and scale with the number of
+ input channels and target structures. Plus, the faster the GPU, the better the CPU should be!
+
+ ### Hardware requirements for inference
+ Again, we recommend a GPU to make predictions, as this will be substantially faster than the other options. However,
+ inference times are typically still manageable on CPU and MPS (Apple M1/M2). If using a GPU, it should have at least
+ 4 GB of available (unused) VRAM.
+
+ ### Example hardware configurations
+ Example workstation configurations for training:
+ - CPU: Ryzen 5800X - a 5900X or 7900X would be even better! We have not yet tested Intel Alder/Raptor Lake but they will likely work as well.
+ - GPU: RTX 3090 or RTX 4090
+ - RAM: 64 GB
+ - Storage: SSD (M.2 PCIe Gen 3 or better!)
+
+ Example server configuration for training:
+ - CPU: 2x AMD EPYC 7763 for a total of 128C/256T. 16C/GPU are highly recommended for fast GPUs such as the A100!
+ - GPU: 8x A100 PCIe (price/performance superior to the SXM variant, plus they use less power)
+ - RAM: 1 TB
+ - Storage: local SSD storage (PCIe Gen 3 or better) or ultra fast network storage
+
+ (nnU-Net by default uses one GPU per training. The server configuration can run up to 8 model trainings simultaneously.)
+
+ ### Setting the correct number of workers for data augmentation (training only)
+ Note that you will need to manually set the number of processes nnU-Net uses for data augmentation according to your
+ CPU/GPU ratio. For the server above (256 threads for 8 GPUs), a good value would be 24-30. You can do this by
+ setting the `nnUNet_n_proc_DA` environment variable (`export nnUNet_n_proc_DA=XX`).
+ Recommended values (assuming a recent CPU with good IPC) are 10-12 for an RTX 2080 Ti, 12 for an RTX 3090, 16-18 for an
+ RTX 4090, and 28-32 for an A100. Optimal values may vary depending on the number of input channels/modalities and number of classes.
+
+ # Installation instructions
+ We strongly recommend that you install nnU-Net in a virtual environment! Pip or anaconda are both fine. If you choose to
+ compile PyTorch from source (see below), you will need to use conda instead of pip.
+
+ Use a recent version of Python! 3.9 or newer is guaranteed to work!
+
+ **nnU-Net v2 can coexist with nnU-Net v1! Both can be installed at the same time.**
+
+ 1) Install [PyTorch](https://pytorch.org/get-started/locally/) as described on their website (conda/pip). Please
+    install the latest version with support for your hardware (cuda, mps, cpu).
+    **DO NOT JUST `pip install nnunetv2` WITHOUT PROPERLY INSTALLING PYTORCH FIRST**. For maximum speed, consider
+    [compiling pytorch yourself](https://github.com/pytorch/pytorch#from-source) (experienced users only!).
+ 2) Install nnU-Net depending on your use case:
+     1) For use as a **standardized baseline**, an **out-of-the-box segmentation algorithm** or for running
+        **inference with pretrained models**:
+
+        ```pip install nnunetv2```
+
+     2) For use as an integrative **framework** (this will create a copy of the nnU-Net code on your computer so that you
+        can modify it as needed):
+        ```bash
+        git clone https://github.com/MIC-DKFZ/nnUNet.git
+        cd nnUNet
+        pip install -e .
+        ```
+ 3) nnU-Net needs to know where you intend to save raw data, preprocessed data and trained models. For this you need to
+    set a few environment variables. Please follow the instructions [here](setting_up_paths.md).
+ 4) (OPTIONAL) Install [hiddenlayer](https://github.com/waleedka/hiddenlayer). hiddenlayer enables nnU-Net to generate
+    plots of the network topologies it generates (see [Model training](how_to_use_nnunet.md#model-training)).
+    To install hiddenlayer, run the following command:
+    ```bash
+    pip install --upgrade git+https://github.com/FabianIsensee/hiddenlayer.git
+    ```
+
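+ For reference, step 3 typically amounts to exporting three path variables (the paths below are placeholders; see [setting_up_paths.md](setting_up_paths.md) for details):
+
```shell
# Example paths - adjust these to your system (e.g. in your ~/.bashrc)
export nnUNet_raw="/home/user/nnUNet_raw"
export nnUNet_preprocessed="/home/user/nnUNet_preprocessed"
export nnUNet_results="/home/user/nnUNet_results"
```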
+ Installing nnU-Net will add several new commands to your terminal. These commands are used to run the entire nnU-Net
+ pipeline. You can execute them from any location on your system. All nnU-Net commands have the prefix `nnUNetv2_` for
+ easy identification.
+
+ Note that these commands simply execute Python scripts. If you installed nnU-Net in a virtual environment, this
+ environment must be activated when executing the commands. You can see what scripts/functions are executed by
+ checking the project.scripts in the [pyproject.toml](../pyproject.toml) file.
+
+ All nnU-Net commands have a `-h` option which gives information on how to use them.
Code/PENGWIN_Challenge/nnUNet/documentation/manual_data_splits.md ADDED
@@ -0,0 +1,46 @@
# How to generate custom splits in nnU-Net

Sometimes, the default 5-fold cross-validation split by nnU-Net does not fit a project. Maybe you want to run 3-fold
cross-validation instead? Or maybe your training cases cannot be split randomly and require careful stratification.
Fear not, for nnU-Net has got you covered (it really can do anything <3).

The splits nnU-Net uses are generated in the `do_split` function of nnUNetTrainer. This function will first look for
existing splits, stored as a file, and if no split exists it will create one. So if you wish to influence the split,
manually creating a split file that will then be recognized and used is the way to go!

The split file is located in the `nnUNet_preprocessed/DATASETXXX_NAME` folder. So it is best practice to first
populate this folder by running `nnUNetv2_plan_and_preprocess`.

Splits are stored as a .json file. They are a simple python list. The length of that list is the number of splits it
contains (so it's 5 in the default nnU-Net). Each list entry is a dictionary with keys 'train' and 'val'. Values are
again simply lists with the training case identifiers in each set. To illustrate this, I am just messing with the Dataset002
file as an example:

```commandline
In [1]: from batchgenerators.utilities.file_and_folder_operations import load_json

In [2]: splits = load_json('splits_final.json')

In [3]: len(splits)
Out[3]: 5

In [4]: splits[0].keys()
Out[4]: dict_keys(['train', 'val'])

In [5]: len(splits[0]['train'])
Out[5]: 16

In [6]: len(splits[0]['val'])
Out[6]: 4

In [7]: print(splits[0])
{'train': ['la_003', 'la_004', 'la_005', 'la_009', 'la_010', 'la_011', 'la_014', 'la_017', 'la_018', 'la_019', 'la_020', 'la_022', 'la_023', 'la_026', 'la_029', 'la_030'],
 'val': ['la_007', 'la_016', 'la_021', 'la_024']}
```

If you are still not sure what splits are supposed to look like, simply download some reference dataset from the
[Medical Decathlon](http://medicaldecathlon.com/), start some training (to generate the splits) and manually inspect
the .json file with your text editor of choice!

In order to generate your custom splits, all you need to do is reproduce the data structure explained above and save it as
`splits_final.json` in the `nnUNet_preprocessed/DATASETXXX_NAME` folder. Then use `nnUNetv2_train` etc. as usual.
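As a concrete example, the following sketch writes a 3-fold `splits_final.json` using only the standard library. The `la_xxx` case identifiers are placeholders; substitute your own training case identifiers, and replace the round-robin assignment with whatever stratification your data requires:

```python
import json

# Placeholder case identifiers -- use the identifiers of your own training cases.
cases = [f"la_{i:03d}" for i in range(1, 13)]

n_splits = 3  # e.g. 3-fold cross-validation instead of the default 5
splits = []
for fold in range(n_splits):
    val = cases[fold::n_splits]                 # every n_splits-th case validates
    train = [c for c in cases if c not in val]  # the remaining cases train
    splits.append({"train": train, "val": val})

# Save as nnUNet_preprocessed/DATASETXXX_NAME/splits_final.json
with open("splits_final.json", "w") as f:
    json.dump(splits, f, indent=4)
```

Each case ends up in exactly one validation set, matching the structure shown above.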
Code/PENGWIN_Challenge/nnUNet/documentation/pretraining_and_finetuning.md ADDED
@@ -0,0 +1,82 @@
# Pretraining with nnU-Net

## Intro

So far nnU-Net only supports supervised pre-training, meaning that you train a regular nnU-Net on some pretraining dataset
and then use the final network weights as initialization for your target dataset.

As a reminder, many training hyperparameters such as patch size and network topology differ between datasets as a
result of the automated dataset analysis and experiment planning nnU-Net is known for. So, out of the box, it is not
possible to simply take the network weights from some dataset and then reuse them for another.

Consequently, the plans need to be aligned between the two tasks. In this README we show how this can be achieved and
how the resulting weights can then be used for initialization.

### Terminology

Throughout this README we use the following terminology:

- `pretraining dataset` is the dataset you intend to run the pretraining on (former: source dataset)
- `target dataset` is the dataset you are interested in; the one you wish to fine-tune on


## Training on the pretraining dataset

In order to obtain matching network topologies we need to transfer the plans from one dataset to another. Since we are
only interested in the target dataset, we first need to run experiment planning (and preprocessing) for it:

```bash
nnUNetv2_plan_and_preprocess -d TARGET_DATASET
```

Then we need to extract the dataset fingerprint of the pretraining dataset, if not yet available:

```bash
nnUNetv2_extract_fingerprint -d PRETRAINING_DATASET
```

Now we can take the plans from the target dataset and transfer them to the pretraining dataset:

```bash
nnUNetv2_move_plans_between_datasets -s TARGET_DATASET -t PRETRAINING_DATASET -sp TARGET_PLANS_IDENTIFIER -tp PRETRAINING_PLANS_IDENTIFIER
```

`TARGET_PLANS_IDENTIFIER` is hereby probably nnUNetPlans unless you changed the experiment planner in
nnUNetv2_plan_and_preprocess. For `PRETRAINING_PLANS_IDENTIFIER` we recommend you set something custom in order to not
overwrite default plans.

Note that EVERYTHING is transferred between the datasets: not just the network topology, batch size and patch size but
also the normalization scheme! Therefore, a transfer between datasets that use different normalization schemes may not
work well (but it could, depending on the schemes!).

Note on CT normalization: yes, the clip values, mean and std are transferred as well!

Now you can run the preprocessing on the pretraining dataset:

```bash
nnUNetv2_preprocess -d PRETRAINING_DATASET -plans_name PRETRAINING_PLANS_IDENTIFIER
```

And run the training as usual:

```bash
nnUNetv2_train PRETRAINING_DATASET CONFIG all -p PRETRAINING_PLANS_IDENTIFIER
```

Note how we use the 'all' fold to train on all available data. For pretraining it does not make sense to split the data.

## Using pretrained weights

Once pretraining is completed (or you obtain compatible weights by other means) you can use them to initialize your model:

```bash
nnUNetv2_train TARGET_DATASET CONFIG FOLD -pretrained_weights PATH_TO_CHECKPOINT
```

Specify the path to the pretrained checkpoint in PATH_TO_CHECKPOINT.

When loading pretrained weights, all layers except the segmentation layers will be used!

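The rule above (everything except the segmentation heads is transferred) can be sketched as follows. This is a simplified illustration, not nnU-Net's actual loading code: the "state dicts" are plain dicts mapping parameter names to shape tuples, and the `seg_layers` name fragment is an assumption about how the heads are named:

```python
def filter_pretrained(pretrained, target, seg_key="seg_layers"):
    """Keep every pretrained parameter whose name exists in the target model
    with an identical shape, except the segmentation output layers."""
    return {
        name: shape
        for name, shape in pretrained.items()
        if seg_key not in name and target.get(name) == shape
    }

# Toy "state dicts": parameter name -> shape tuple (hypothetical names)
pretrained = {
    "encoder.stages.0.conv.weight": (32, 1, 3, 3, 3),
    "decoder.seg_layers.0.weight": (4, 32, 1, 1, 1),  # 4 pretraining classes
}
target = {
    "encoder.stages.0.conv.weight": (32, 1, 3, 3, 3),
    "decoder.seg_layers.0.weight": (2, 32, 1, 1, 1),  # 2 target classes
}

kept = filter_pretrained(pretrained, target)
# the encoder weight is transferred; the segmentation head is trained from scratch
```

In the real implementation the same matching happens on PyTorch tensors; the segmentation heads are re-initialized because the number of output classes generally differs between datasets.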
So far there are no specific nnU-Net trainers for fine-tuning, so the current recommendation is to just use
nnUNetTrainer. You can, however, easily write your own trainers with learning rate ramp-up, fine-tuning of only the
segmentation heads, or shorter training time.