Update model card with Primus paper link and image-segmentation tag
#1
by nielsr (HF Staff), opened

README.md (CHANGED)
---
datasets:
- AnonRes/OpenMind
license: cc-by-4.0
pipeline_tag: image-segmentation
tags:
- medical
---

# OpenMind Benchmark 3D SSL Models

This repository hosts pre-trained checkpoints from the **OpenMind** benchmark, including the **Primus** architecture.

> **Models from the papers**:
> - [Primus: Enforcing Attention Usage for 3D Medical Image Segmentation](https://huggingface.co/papers/2503.01835)
> - [An OpenMind for 3D medical vision self-supervised learning](https://arxiv.org/abs/2412.17041)
>
> **Pre-training codebase**: [MIC-DKFZ/nnssl](https://github.com/MIC-DKFZ/nnssl)
>
> **Downstream (segmentation) framework**: [TaWald/nnUNet](https://github.com/TaWald/nnUNet) / [MIC-DKFZ/nnUNet](https://github.com/MIC-DKFZ/nnUNet)
>
> **Dataset**: [AnonRes/OpenMind](https://huggingface.co/datasets/AnonRes/OpenMind)

---

## Overview

This repository provides self-supervised pre-trained weights for 3D medical image analysis, from the first extensive benchmark study of **self-supervised learning (SSL)** on **3D medical imaging** data ([arXiv:2412.17041](https://arxiv.org/abs/2412.17041)). The models were pre-trained on the [OpenMind Dataset](https://huggingface.co/datasets/AnonRes/OpenMind), a large-scale collection of brain MRI data.

**Primus** and **PrimusV2** are Transformer-centric segmentation architectures designed to maximize the effectiveness of attention mechanisms in 3D medical imaging. By moving away from heavy convolutional reliance, Primus achieves state-of-the-art results on several benchmarks.

**These models are not recommended for use as-is for feature extraction.** Instead, we recommend fine-tuning them for **segmentation** with the [adaptation repository](https://github.com/TaWald/nnUNet). *While manual download is possible, we recommend using the auto-download feature of the fine-tuning repository by providing the Hugging Face repository URL instead of a local checkpoint path.*

---

## Model Variants

We release SSL checkpoints for two primary backbone architectures:

- **ResEnc-L**: A CNN-based encoder [[a](https://arxiv.org/abs/2410.23132), [b](https://arxiv.org/abs/2404.09556)]
- **Primus-M**: A transformer-based encoder introduced in the [Primus paper](https://huggingface.co/papers/2503.01835)

Each encoder has been pre-trained using one of the following SSL techniques:

| Method | Description |
|---------------|-------------|
| [VolumeFusion (VF)](https://arxiv.org/abs/2306.16925) | Spatial volume-fusion-based segmentation SSL method |
| [Models Genesis (MG)](https://www.sciencedirect.com/science/article/pii/S1361841520302048) | Reconstruction- and denoising-based pretraining method |
| [Masked Autoencoders (MAE)](https://openaccess.thecvf.com/content/CVPR2022/html/He_Masked_Autoencoders_Are_Scalable_Vision_Learners_CVPR_2022_paper) | Default reconstruction-based pretraining method |
| [Spark 3D (S3D)](https://arxiv.org/abs/2410.23132) | Sparse reconstruction-based pretraining method (CNN only) |
| [SimMIM](https://openaccess.thecvf.com/content/CVPR2022/html/Xie_SimMIM_A_Simple_Framework_for_Masked_Image_Modeling_CVPR_2022_paper.html) | Simple masked-reconstruction pretraining method (transformer only) |
| [SwinUNETR SSL](https://arxiv.org/abs/2111.14791) | Rotation-, contrastive-, and reconstruction-based pre-training method |
| [SimCLR](https://arxiv.org/abs/2002.05709) | Transfer of the 2D contrastive-learning baseline method to 3D |

## Usage

To use these models for segmentation, please refer to the [nnU-Net documentation for Primus](https://github.com/MIC-DKFZ/nnUNet/blob/master/documentation/primus.md).

```bash
pip install nnunetv2
```

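If you nevertheless want the raw checkpoint files locally (e.g. to inspect them), a minimal sketch using `huggingface_hub` follows. The repository ID is a placeholder you must replace with this model repository's ID as shown on its Hugging Face page; the fine-tuning framework's own download path remains the recommended route.

```python
from huggingface_hub import snapshot_download


def fetch_checkpoints(repo_id: str, local_dir: str = "./ssl_checkpoints") -> str:
    """Mirror every file of a Hugging Face model repo into local_dir.

    repo_id is a placeholder here, e.g. "<org>/<this-model-repo>"; pass the
    ID of this repository as displayed on the model page.
    """
    # snapshot_download fetches all files in the repo (checkpoints included)
    # and returns the local directory containing them.
    return snapshot_download(repo_id=repo_id, local_dir=local_dir)
```

Calling `fetch_checkpoints("<org>/<this-model-repo>")` then yields a directory you can point a local tool at, instead of a remote URL.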

## Citation

If you use these models, please cite the following papers:

```bibtex
@article{wald2025primus,
  title={Primus: Enforcing Attention Usage for 3D Medical Image Segmentation},
  author={Wald, Tassilo and Roy, Saikat and Isensee, Fabian and Ulrich, Constantin and Ziegler, Sebastian and Trofimova, Dasha and Stock, Raphael and Baumgartner, Michael and K{\"o}hler, Gregor and Maier-Hein, Klaus},
  journal={arXiv preprint arXiv:2503.01835},
  year={2025}
}

@article{wald2024openmind,
  title={An OpenMind for 3D medical vision self-supervised learning},
  author={Wald, Tassilo and Ulrich, Constantin and Suprijadi, Jonathan and Ziegler, Sebastian and Nohel, Michal and Peretzke, Robin and K{\"{o}}hler, Gregor and Maier-Hein, Klaus H},
  journal={arXiv preprint arXiv:2412.17041},
  year={2024}
}
```
|