---
language: en
license: apache-2.0
source_code: https://github.com/pwesp/sail
tags:
- sparse-autoencoder
- matryoshka
- ct
- mri
---

# SAIL — Pretrained SAE Weights

Pretrained Matryoshka Sparse Autoencoder (SAE) weights for the [SAIL](https://github.com/pwesp/sail) repository. See the project page for the full pipeline and usage instructions.

Two checkpoints are provided, one for each foundation model (FM) embedding space:

| File | Foundation model | Input dim | Dictionary sizes | k values |
|------|------------------|-----------|------------------|----------|
| `biomedparse_sae.ckpt` | BiomedParse | 1536 | 128, 512, 2048, 8192 | 20, 40, 80, 160 |
| `dinov3_sae.ckpt` | DINOv3 | 1024 | 128, 512, 2048, 8192 | 5, 10, 20, 40 |

Both SAEs were trained on CT and MRI embeddings from the [TotalSegmentator](https://github.com/wasserth/TotalSegmentator) dataset.
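To illustrate what the "dictionary sizes" and "k values" columns mean, here is a minimal, self-contained sketch of Matryoshka-style TopK sparse encoding. This is not the SAIL implementation (see the repository for the actual model code); the function name `topk_encode`, the random weights, and the batch size are illustrative assumptions. The key idea shown: each smaller dictionary is a prefix of the largest one, so a single encoder weight matrix yields a code at every granularity, with at most k active units per sample.

```python
import numpy as np

def topk_encode(x, W_enc, b_enc, k):
    """TopK SAE encoding: keep only the k largest pre-activations per sample."""
    pre = x @ W_enc + b_enc
    codes = np.zeros_like(pre)
    # Indices of the k largest pre-activations in each row
    idx = np.argpartition(pre, -k, axis=-1)[:, -k:]
    rows = np.arange(pre.shape[0])[:, None]
    # ReLU on the kept units; everything else stays zero
    codes[rows, idx] = np.maximum(pre[rows, idx], 0.0)
    return codes

rng = np.random.default_rng(0)
# Dimensions mirroring the biomedparse_sae.ckpt row of the table above
d_in, dict_sizes, ks = 1536, [128, 512, 2048, 8192], [20, 40, 80, 160]
W_enc = rng.standard_normal((d_in, dict_sizes[-1])) / np.sqrt(d_in)
b_enc = np.zeros(dict_sizes[-1])
x = rng.standard_normal((4, d_in))  # a toy batch of 4 embeddings

# Matryoshka nesting: each dictionary is the prefix of the next larger one,
# so slicing the same weight matrix gives a code at every granularity.
for m, k in zip(dict_sizes, ks):
    z = topk_encode(x, W_enc[:, :m], b_enc[:m], k)
    assert ((z != 0).sum(axis=-1) <= k).all()  # at most k active units
```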

## Usage

To download these weights and place them in the expected directory structure, run from the repo root:

```bash
bash pretrained/download_weights.sh
```

## Citation

If you find this work useful, please cite our paper:

```bibtex
@misc{sail2026,
  title = {Sparse Autoencoders for Interpretable Medical Image Representation Learning},
  author = {Wesp, Philipp and Holland, Robbie and Sideri-Lampretsa, Vasiliki and Gatidis, Sergios},
  year = {2026},
  eprint = {2603.23794},
  archivePrefix = {arXiv},
  url = {https://arxiv.org/abs/2603.23794v1}
}
```