Update pipeline tag and add Primus paper link

#1 opened by nielsr (HF Staff)

Files changed (1): README.md (+42, -12)

--- a/README.md
+++ b/README.md
@@ -1,18 +1,21 @@
  ---
- license: cc-by-4.0
  datasets:
  - AnonRes/OpenMind
- pipeline_tag: image-feature-extraction
+ license: cc-by-4.0
+ pipeline_tag: image-segmentation
  tags:
  - medical
  ---

  # OpenMind Benchmark 3D SSL Models

- > **Model from the paper**: [An OpenMind for 3D medical vision self-supervised learning](https://arxiv.org/abs/2412.17041)
- > **Pre-training codebase used to create checkpoint**: [MIC-DKFZ/nnssl](https://github.com/MIC-DKFZ/nnssl)
+ This repository hosts pre-trained checkpoints from the **OpenMind** benchmark and the **Primus** architecture series.
+
+ > **Model from the paper**: [Primus: Enforcing Attention Usage for 3D Medical Image Segmentation](https://huggingface.co/papers/2503.01835)
+ > **Benchmark paper**: [An OpenMind for 3D medical vision self-supervised learning](https://arxiv.org/abs/2412.17041)
+ > **Official Code**: [MIC-DKFZ/nnUNet](https://github.com/MIC-DKFZ/nnUNet)
+ > **Pre-training codebase**: [MIC-DKFZ/nnssl](https://github.com/MIC-DKFZ/nnssl)
  > **Dataset**: [AnonRes/OpenMind](https://huggingface.co/datasets/AnonRes/OpenMind)
- > **Downstream (segmentation) fine-tuning**: [TaWald/nnUNet](https://github.com/TaWald/nnUNet)

  ---

@@ -20,14 +23,11 @@ tags:

  ## Overview

- This repository hosts pre-trained checkpoints from the **OpenMind** benchmark:
- 📄 **An OpenMind for 3D medical vision self-supervised learning** (Wald, T., Ulrich, C., Suprijadi, J., Ziegler, S., Nohel, M., Peretzke, R., ... & Maier-Hein, K. H. (2024).)
- ([arXiv:2412.17041](https://arxiv.org/abs/2412.17041)) — the first extensive benchmark study for **self-supervised learning (SSL)** on **3D medical imaging** data.
+ This repository hosts pre-trained checkpoints from the **OpenMind** benchmark. It includes the **Primus** architecture, a Transformer-centric segmentation model designed to maximally leverage self-attention for 3D medical images, matching or exceeding state-of-the-art CNNs across multiple datasets.

- Each model was pre-trained using a particular SSL method on the [OpenMind Dataset](https://huggingface.co/datasets/AnonRes/OpenMind), a large-scale, standardized collection of public brain MRI datasets.
+ Each model was pre-trained using a particular self-supervised learning (SSL) method on the [OpenMind Dataset](https://huggingface.co/datasets/AnonRes/OpenMind), a large-scale collection of public brain MRI datasets.

- **These models are not recommended to be used as-is for feature extraction.** Instead we recommend using the downstream fine-tuning frameworks for **segmentation** and **classification** adaptation, available in the [adaptation repository](https://github.com/TaWald/nnUNet).
- *While manual download is possible, we recommend using the auto-download feature of the fine-tuning repository by providing the repository URL on Hugging Face instead of a local checkpoint path.*
+ **Note**: These models are intended for downstream fine-tuning. We recommend using the [nnU-Net](https://github.com/MIC-DKFZ/nnUNet) framework for segmentation adaptation.

  ---

@@ -36,7 +36,7 @@ Each model was pre-trained using a particular SSL method on the [OpenMind Datase
  We release SSL checkpoints for two backbone architectures:

  - **ResEnc-L**: A CNN-based encoder [[a](https://arxiv.org/abs/2410.23132), [b](https://arxiv.org/abs/2404.09556)]
- - **Primus-M**: A transformer-based encoder [[Primus paper](https://arxiv.org/abs/2503.01835)]
+ - **Primus-M**: A transformer-centric encoder introduced in the [Primus paper](https://huggingface.co/papers/2503.01835).

  Each encoder has been pre-trained using one of the following SSL techniques:

@@ -50,3 +50,33 @@ Each encoder has been pre-trained using one of the following SSL techniques:
  | [SimMIM](https://openaccess.thecvf.com/content/CVPR2022/html/Xie_SimMIM_A_Simple_Framework_for_Masked_Image_Modeling_CVPR_2022_paper.html) | Simple masked reconstruction based pretraining method (TR only) |
  | [SwinUNETR SSL](https://arxiv.org/abs/2111.14791) | Rotation, Contrastive and Reconstruction based pre-training method. |
  | [SimCLR](https://arxiv.org/abs/2002.05709) | Transfer of 2D Contrastive learning baseline method to 3D |
+
+ ## Usage
+
+ These models are designed to be used within the **nnU-Net** framework. You can install it via:
+
+ ```bash
+ pip install nnunetv2
+ ```
+
+ For detailed instructions on adaptation and inference, please refer to the [official documentation](https://github.com/MIC-DKFZ/nnUNet).
+
+ ## Citation
+
+ If you use these models, please cite the following:
+
+ ```bibtex
+ @article{wald2025primus,
+ title={Primus: Enforcing Attention Usage for 3D Medical Image Segmentation},
+ author={Wald, Tassilo and Roy, Saikat and Isensee, Fabian and Ulrich, Constantin and Ziegler, Sebastian and Trofimova, Dasha and Stock, Raphael and Baumgartner, Michael and K{\"o}hler, Gregor and Maier-Hein, Klaus},
+ journal={arXiv preprint arXiv:2503.01835},
+ year={2025}
+ }
+
+ @article{wald2024openmind,
+ title={An OpenMind for 3D medical vision self-supervised learning},
+ author={Wald, Tassilo and Ulrich, Constantin and Suprijadi, J and Ziegler, Sebastian and Nohel, M and Peretzke, R and others},
+ journal={arXiv preprint arXiv:2412.17041},
+ year={2024}
+ }
+ ```
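The new Usage section stops at installation. As a minimal sketch of the intended adaptation workflow, the commands below fetch a checkpoint and fine-tune from it with the nnU-Net v2 CLI. The repo id, checkpoint filename, dataset ID, fold, and paths are placeholders rather than names taken from this repository:

```bash
# All identifiers below are placeholders; check this repo's file list and the
# nnU-Net documentation for the real names.

# 1. Manually fetch a pre-trained checkpoint from the Hub (alternative to the
#    fine-tuning framework's auto-download via a Hugging Face repo URL).
huggingface-cli download AnonRes/OpenMind-SSL checkpoint_final.pth --local-dir ./weights

# 2. Plan and preprocess the target segmentation dataset with nnU-Net v2.
nnUNetv2_plan_and_preprocess -d 4 --verify_dataset_integrity

# 3. Fine-tune from the SSL checkpoint instead of training from scratch.
nnUNetv2_train 4 3d_fullres 0 -pretrained_weights ./weights/checkpoint_final.pth
```

If these SSL checkpoints require the adapted fine-tuning fork rather than stock nnU-Net, the same three steps apply through that fork's entry points; see the linked documentation.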