Update pipeline tag and add Primus paper information

#1
by nielsr (HF Staff) - opened
Files changed (1)
  1. README.md +33 -24
README.md CHANGED
@@ -1,15 +1,16 @@
  ---
- license: cc-by-4.0
  datasets:
  - AnonRes/OpenMind
- pipeline_tag: image-feature-extraction
+ license: cc-by-4.0
+ pipeline_tag: image-segmentation
  tags:
  - medical
  ---

- # OpenMind Benchmark 3D SSL Models
+ # Primus-M (SimCLR Pre-trained) OpenMind Benchmark

- > **Model from the paper**: [An OpenMind for 3D medical vision self-supervised learning](https://arxiv.org/abs/2412.17041)
+ > **Model from the paper**: [Primus: Enforcing Attention Usage for 3D Medical Image Segmentation](https://huggingface.co/papers/2503.01835)
+ > **Benchmark paper**: [An OpenMind for 3D medical vision self-supervised learning](https://arxiv.org/abs/2412.17041)
  > **Pre-training codebase used to create checkpoint**: [MIC-DKFZ/nnssl](https://github.com/MIC-DKFZ/nnssl)
  > **Dataset**: [AnonRes/OpenMind](https://huggingface.co/datasets/AnonRes/OpenMind)
  > **Downstream (segmentation) fine-tuning**: [TaWald/nnUNet](https://github.com/TaWald/nnUNet)
@@ -20,33 +21,41 @@ tags:

  ## Overview

- This repository hosts pre-trained checkpoints from the **OpenMind** benchmark:
- 📄 **An OpenMind for 3D medical vision self-supervised learning** (Wald, T., Ulrich, C., Suprijadi, J., Ziegler, S., Nohel, M., Peretzke, R., ... & Maier-Hein, K. H. (2024).)
- ([arXiv:2412.17041](https://arxiv.org/abs/2412.17041)) — the first extensive benchmark study for **self-supervised learning (SSL)** on **3D medical imaging** data.
+ This repository hosts a pre-trained checkpoint from the **OpenMind** benchmark using the **Primus-M** backbone. Primus is a Transformer-centric segmentation architecture designed specifically to maximize the effectiveness of attention mechanisms in 3D medical imaging, surpassing hybrid architectures and competing with state-of-the-art CNNs.

- Each model was pre-trained using a particular SSL method on the [OpenMind Dataset](https://huggingface.co/datasets/AnonRes/OpenMind), a large-scale, standardized collection of public brain MRI datasets.
+ Each model in this benchmark was pre-trained using a particular self-supervised learning (SSL) method on the [OpenMind Dataset](https://huggingface.co/datasets/AnonRes/OpenMind), a large-scale, standardized collection of public brain MRI datasets.
+
+ **Note**: These models are primarily intended for downstream fine-tuning. We recommend using the adaptation frameworks for segmentation and classification provided in the [nnU-Net adaptation repository](https://github.com/TaWald/nnUNet).

- **These models are not recommended to be used as-is for feature extraction.** Instead we recommend using the downstream fine-tuning frameworks for **segmentation** and **classification** adaptation, available in the [adaptation repository](https://github.com/TaWald/nnUNet).
  *While manual download is possible, we recommend using the auto-download feature of the fine-tuning repository by providing the repository URL on Hugging Face instead of a local checkpoint path.*
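+
+ For manual use, a minimal download sketch with the `huggingface_hub` library is shown below. The `repo_id` is a placeholder, not this repository's confirmed id; substitute the actual id shown on the model page.
+
+ ```python
+ from huggingface_hub import snapshot_download
+
+ # Hypothetical repo id; replace with the actual id of this model repository.
+ checkpoint_dir = snapshot_download(repo_id="AnonRes/Primus-M-SimCLR")
+ print(checkpoint_dir)  # local directory containing the checkpoint files
+ ```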

  ---

  ## Model Variants

- We release SSL checkpoints for two backbone architectures:
+ In the OpenMind benchmark, SSL checkpoints are released for two backbone architectures:

  - **ResEnc-L**: A CNN-based encoder [[a](https://arxiv.org/abs/2410.23132), [b](https://arxiv.org/abs/2404.09556)]
- - **Primus-M**: A transformer-based encoder [[Primus paper](https://arxiv.org/abs/2503.01835)]
-
- Each encoder has been pre-trained using one of the following SSL techniques:
-
- | Method | Description |
- |---------------|-------------|
- | [Volume Contrastive (VoCo)](https://arxiv.org/abs/2402.17300) | Contrastive pretraining method for 3D volumes |
- | [VolumeFusion (VF)](https://arxiv.org/abs/2306.16925) | Spatial volume fusion-based segmentation SSL method |
- | [Models Genesis (MG)](https://www.sciencedirect.com/science/article/pii/S1361841520302048) | Reconstruction and denoising based pretraining method |
- | [Masked Autoencoders (MAE)](https://openaccess.thecvf.com/content/CVPR2022/html/He_Masked_Autoencoders_Are_Scalable_Vision_Learners_CVPR_2022_paper.html) | Default reconstruction based pretraining method |
- | [Spark 3D (S3D)](https://arxiv.org/abs/2410.23132) | Sparse reconstruction based pretraining method (CNN only) |
- | [SimMIM](https://openaccess.thecvf.com/content/CVPR2022/html/Xie_SimMIM_A_Simple_Framework_for_Masked_Image_Modeling_CVPR_2022_paper.html) | Simple masked reconstruction based pretraining method (TR only) |
- | [SwinUNETR SSL](https://arxiv.org/abs/2111.14791) | Rotation, contrastive and reconstruction based pre-training method |
- | [SimCLR](https://arxiv.org/abs/2002.05709) | Transfer of the 2D contrastive learning baseline method to 3D |
+ - **Primus-M**: A transformer-based encoder introduced in the [Primus paper](https://huggingface.co/papers/2503.01835).
+
+ Each encoder has been pre-trained using various SSL techniques such as SimCLR, Volume Contrastive (VoCo), VolumeFusion (VF), and Masked Autoencoders (MAE). This specific checkpoint uses **SimCLR**; a minimal sketch of its objective follows.
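+
+ The following is an illustrative sketch of the NT-Xent objective that SimCLR optimizes, not the benchmark's actual pre-training code; the function name and batch layout are assumptions.
+
+ ```python
+ import torch
+ import torch.nn.functional as F
+
+ def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
+     """NT-Xent loss for SimCLR. z1, z2: (N, D) embeddings of two augmented views."""
+     z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-length rows
+     sim = z @ z.t() / temperature                       # (2N, 2N) scaled cosine similarities
+     sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity
+     n = z1.shape[0]
+     # Row i (view 1) is positive with row i + n (view 2), and vice versa.
+     targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
+     return F.cross_entropy(sim, targets)
+ ```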
+
+ ## Citation
+
+ If you use this model, please cite:
+
+ ```bibtex
+ @article{wald2025primus,
+   title={Primus: Enforcing Attention Usage for 3D Medical Image Segmentation},
+   author={Wald, Tassilo and Roy, Saikat and Isensee, Fabian and Ulrich, Constantin and Ziegler, Sebastian and Trofimova, Dasha and Stock, Raphael and Baumgartner, Michael and Köhler, Gregor and Maier-Hein, Klaus},
+   journal={arXiv preprint arXiv:2503.01835},
+   year={2025}
+ }
+
+ @article{wald2024openmind,
+   title={An OpenMind for 3D medical vision self-supervised learning},
+   author={Wald, Tassilo and Ulrich, Constantin and Suprijadi, J. and Ziegler, Sebastian and Nohel, M. and Peretzke, R. and others},
+   journal={arXiv preprint arXiv:2412.17041},
+   year={2024}
+ }
+ ```