# Extending nnU-Net
We hope that the new structure of nnU-Net v2 makes it much more intuitive to modify! We cannot give an
extensive tutorial on how each and every bit of it can be changed. It is better for you to search for the place
in the repository where the thing you intend to change is implemented and work your way through the code from
there. Setting breakpoints and debugging into nnU-Net really helps in understanding it and will thus make the
necessary modifications easier!
Here are some things you might want to read before you start:
- Editing nnU-Net configurations through plans files is really powerful now and allows you to change a lot of things regarding
preprocessing, resampling, network topology, etc. Read [this](explanation_plans_files.md)! A small worked example follows after this list.
- [Image normalization](explanation_normalization.md) and [i/o formats](dataset_format.md#supported-file-formats) are easy to extend!
- Manual data splits can be defined as described [here](manual_data_splits.md)
- You can chain arbitrary configurations together into cascades, see [this again](explanation_plans_files.md)
- Read about our support for [region-based training](region_based_training.md)
- If you intend to modify the training procedure (loss, sampling, data augmentation, lr scheduler, etc.), you need
to implement your own trainer class. Best practice is to create a class that inherits from nnUNetTrainer and
implements the necessary changes. Head over to our [trainer classes folder](../nnunetv2/training/nnUNetTrainer) for
inspiration! There will be trainers similar to what you intend to change, and you can take them as a guide (a minimal
sketch also follows after this list). nnUNetTrainers are structured similarly to PyTorch Lightning trainers, which
should make things easier!
- Integrating new network architectures can be done in two ways:
  - Quick and dirty: implement a new nnUNetTrainer class and overwrite its `build_network_architecture` function
    (see the sketch after this list). Make sure your architecture is compatible with deep supervision (if not, use
    `nnUNetTrainerNoDeepSupervision` as a basis!) and that it can handle the patch sizes that are thrown at it! Your
    architecture should NOT apply any nonlinearities at the end (softmax, sigmoid, etc.). nnU-Net does that!
  - The 'proper' (but difficult) way: build a dynamically configurable architecture such as the `PlainConvUNet` class
    used by default. It needs to have some sort of GPU memory estimation method that can be used to evaluate whether
    certain patch sizes and topologies fit into a specified GPU memory target. Build a new `ExperimentPlanner` that can
    configure your new class and communicate with its memory budget estimation (again, a sketch follows after this
    list). Run `nnUNetv2_plan_and_preprocess` while specifying your custom `ExperimentPlanner` and a custom
    `plans_name`. Implement an nnUNetTrainer that can use the plans generated by your `ExperimentPlanner` to
    instantiate the network architecture. Specify your plans and trainer when running `nnUNetv2_train`. It always pays
    off to first read and understand the corresponding nnU-Net code and use it as a template for your implementation!
- Remember that multi-GPU training, region-based training, ignore label and cascaded training are now simply integrated
into one unified nnUNetTrainer class; no separate classes are needed. Keep this in mind when implementing your own
trainer classes: either support all of these features or raise `NotImplementedError` for those you do not!
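To make the plans-file item above concrete, here is a minimal sketch of deriving a custom configuration from an
existing plans file. The new plans name, the configuration name `3d_fullres_bigpatch` and the patch size are made up
for illustration; [explanation_plans_files.md](explanation_plans_files.md) lists the keys that are actually supported
(including `inherits_from`).

```python
import json

# Sketch only: file locations, the configuration name and the patch size are
# illustrative. The plans file lives in the nnUNet_preprocessed folder of your
# dataset; the modified copy must be placed there as well.
with open('nnUNetPlans.json') as f:
    plans = json.load(f)

plans['plans_name'] = 'nnUNetPlansBigPatch'  # modified plans need their own name
plans['configurations']['3d_fullres_bigpatch'] = {
    'inherits_from': '3d_fullres',  # take everything from 3d_fullres ...
    'patch_size': [160, 192, 192],  # ... and only overwrite the patch size
}

with open('nnUNetPlansBigPatch.json', 'w') as f:
    json.dump(plans, f, indent=4)
```

You can then train the new configuration with `nnUNetv2_train DATASET_ID 3d_fullres_bigpatch FOLD -p nnUNetPlansBigPatch`.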
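For the custom trainer item, here is the promised minimal sketch. The class name and the hyperparameter values are
hypothetical; the `__init__` signature and the `initial_lr`/`num_epochs` attributes are those of nnUNetTrainer at the
time of writing, so copy them from your local `nnUNetTrainer.py` if they have changed.

```python
import torch

from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer


class nnUNetTrainer_lowLR_250epochs(nnUNetTrainer):
    """Hypothetical variant: lower initial learning rate, shorter training."""

    def __init__(self, plans: dict, configuration: str, fold: int, dataset_json: dict,
                 unpack_dataset: bool = True, device: torch.device = torch.device('cuda')):
        super().__init__(plans, configuration, fold, dataset_json, unpack_dataset, device)
        self.initial_lr = 1e-3  # default is 1e-2
        self.num_epochs = 250   # default is 1000
```

Place the class anywhere inside the trainer classes folder so nnU-Net can discover it, then select it with
`nnUNetv2_train DATASET_ID CONFIG FOLD -tr nnUNetTrainer_lowLR_250epochs`.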
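For the 'quick and dirty' architecture route, here is a sketch of overwriting `build_network_architecture`. `MyNet`
is a placeholder for your own model, and the static method's signature is the one used at the time of writing; copy
the current one from nnUNetTrainer if it has changed.

```python
from torch import nn

from my_package.my_net import MyNet  # your own architecture (placeholder import)
from nnunetv2.training.nnUNetTrainer.nnUNetTrainer import nnUNetTrainer
from nnunetv2.utilities.plans_handling.plans_handler import ConfigurationManager, PlansManager


class nnUNetTrainerMyNet(nnUNetTrainer):
    @staticmethod
    def build_network_architecture(plans_manager: PlansManager,
                                   dataset_json: dict,
                                   configuration_manager: ConfigurationManager,
                                   num_input_channels: int,
                                   enable_deep_supervision: bool = True) -> nn.Module:
        label_manager = plans_manager.get_label_manager(dataset_json)
        # MyNet must return raw logits (no softmax/sigmoid!) and, with deep
        # supervision enabled, one output per resolution.
        return MyNet(input_channels=num_input_channels,
                     num_classes=label_manager.num_segmentation_heads,
                     deep_supervision=enable_deep_supervision)
```

Training then works as usual via `nnUNetv2_train DATASET_ID CONFIG FOLD -tr nnUNetTrainerMyNet`.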
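And for the 'proper' route, a sketch of a custom `ExperimentPlanner`, modeled on how the planner variants in the
repository do it. The `__init__` signature and the attribute names (`UNet_class`, `UNet_reference_val_3d/2d`) are
those of the default `ExperimentPlanner` at the time of writing and may have changed; `MyNetPlanner`, `MyNet` and
the reference values are placeholders.

```python
from my_package.my_net import MyNet  # your own architecture (placeholder import)
from nnunetv2.experiment_planning.experiment_planners.default_experiment_planner import ExperimentPlanner


class MyNetPlanner(ExperimentPlanner):
    """Hypothetical planner that configures MyNet instead of PlainConvUNet."""

    def __init__(self, dataset_name_or_id, gpu_memory_target_in_gb: float = 8,
                 preprocessor_name: str = 'DefaultPreprocessor', plans_name: str = 'MyNetPlans',
                 overwrite_target_spacing=None, suppress_transpose: bool = False):
        super().__init__(dataset_name_or_id, gpu_memory_target_in_gb, preprocessor_name,
                         plans_name, overwrite_target_spacing, suppress_transpose)
        # the planner queries this class for its GPU memory footprint
        # (compute_conv_feature_map_size in the default setup)
        self.UNet_class = MyNet
        # the reference values calibrate the memory budget and must be
        # re-measured for a new architecture (these are the PlainConvUNet defaults)
        self.UNet_reference_val_3d = 560000000
        self.UNet_reference_val_2d = 85000000
```

Then run `nnUNetv2_plan_and_preprocess -d DATASET_ID -pl MyNetPlanner` and point `nnUNetv2_train` at the resulting
plans with `-p MyNetPlans`.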