---
license: other
license_name: mass-general-brigham-non-commercial
license_link: LICENSE
tags:
- brain
- mri
- neuroimaging
- vit
- foundation-model
- medical-imaging
library_name: brainiac
pipeline_tag: feature-extraction
---

# BrainIAC — Brain Imaging Adaptive Core

**A generalizable foundation model for analysis of human brain MRI**

BrainIAC is a Vision Transformer (ViT-B/16) pretrained with SimCLR on structural brain MRI scans.
Published in [Nature Neuroscience](https://www.nature.com/articles/s41593-026-02202-6) (2026).

## Model Details

| Property | Value |
|----------|-------|
| Architecture | MONAI ViT-B/16 (3D) |
| Parameters | 88.4M |
| Input | 96×96×96 single-channel brain MRI |
| Patches | 216 (6×6×6 grid, 16³ voxel patches) |
| Hidden dim | 768 |
| Layers | 12 transformer blocks |
| Heads | 12 attention heads |
| MLP dim | 3072 |
| Pretraining | SimCLR contrastive learning |
| Output | 768-dim feature vector (first patch token) |
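
The patch count in the table follows directly from the input and patch sizes; a quick arithmetic check:

```python
# Patch geometry for a 96x96x96 input split into 16x16x16 patches
# (values taken from the table above).
img_size, patch_size = 96, 16

grid = img_size // patch_size        # 6 patches along each axis
num_patches = grid ** 3              # 6 * 6 * 6 = 216 tokens
voxels_per_patch = patch_size ** 3   # 4096 voxels, embedded into 768 dims

print(grid, num_patches, voxels_per_patch)  # 6 216 4096
```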

## Files

- `backbone.safetensors` — Pretrained ViT backbone weights
- `config.json` — Model configuration
- `LICENSE` — Non-commercial academic research license

## Downstream Tasks

The backbone can be fine-tuned for:

- **Brain age prediction** (regression)
- **IDH mutation classification** (binary, dual-scan FLAIR+T1CE)
- **MCI classification** (binary)
- **Glioma overall survival** (binary, quad-scan T1+T1CE+T2+FLAIR)
- **MR sequence classification** (4-class: T1/T2/FLAIR/T1CE)
- **Time-to-stroke prediction** (regression)
- **Tumor segmentation** (UNETR decoder)
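
As a sketch of how frozen backbone features are typically used for a regression task such as brain age prediction, a linear probe can be fit on the 768-dim vectors. The snippet below uses random stand-in features and ages (numpy only, not from the paper); in practice the features come from the encoder and the head would usually be trained in a deep-learning framework.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in data: 100 subjects, each a 768-dim feature vector
# plus a scalar target (here, a fake chronological age).
features = rng.normal(size=(100, 768))
ages = rng.uniform(40, 80, size=100)

# Linear probe: least-squares fit of one linear head on frozen features.
X = np.hstack([features, np.ones((100, 1))])  # append a bias column
coef, *_ = np.linalg.lstsq(X, ages, rcond=None)

predicted = X @ coef
mae = float(np.abs(predicted - ages).mean())
```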

## Usage with brainiac (Rust)

```bash
cargo run --release --bin infer -- \
    --weights backbone.safetensors \
    --input brain_t1.nii.gz
```

```rust
use std::path::Path;

use brainiac::{BrainiacEncoder, TaskType};

let (encoder, _) = BrainiacEncoder::<B>::load(
    "backbone.safetensors", None,
    TaskType::FeatureExtraction, 1, device,
)?;
let features = encoder.encode_nifti(Path::new("brain.nii.gz"))?;
// features: Vec<f32> with 768 dimensions
```


## Usage with Python

```python
import torch
from monai.networks.nets import ViT
from safetensors.torch import load_file

model = ViT(in_channels=1, img_size=(96, 96, 96), patch_size=(16, 16, 16),
            hidden_size=768, mlp_dim=3072, num_layers=12, num_heads=12)

weights = load_file("backbone.safetensors")
model.load_state_dict(weights, strict=False)
model.eval()

# MONAI's ViT returns (patch_tokens, hidden_states_out); the 768-dim
# feature vector is the first patch token.
with torch.no_grad():
    tokens, _ = model(preprocessed_mri)  # preprocessed_mri: (B, 1, 96, 96, 96)
features = tokens[:, 0]                  # shape (B, 768)
```
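
Since the pipeline tag is feature extraction, a common downstream pattern is comparing two scans by the cosine similarity of their 768-dim vectors. A minimal sketch with random stand-in vectors (in practice these are the encoder outputs):

```python
import numpy as np

rng = np.random.default_rng(1)
f1 = rng.normal(size=768)  # stand-in for one scan's feature vector
f2 = rng.normal(size=768)  # stand-in for another scan's

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine of the angle between two feature vectors, in [-1, 1]."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

sim = cosine_similarity(f1, f2)
```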

## Preprocessing

Input MRI volumes must be:

1. Skull-stripped (HD-BET recommended)
2. Registered to standard space (MNI152)
3. Bias field corrected (N4)
4. Resized to 96×96×96 voxels (trilinear)
5. Z-score normalized (nonzero voxels only)
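
Steps 4 and 5 can be sketched in a few lines of numpy/scipy; steps 1 to 3 need dedicated tools (HD-BET for stripping, a registration package, N4 from e.g. SimpleITK or ANTs). The input shape and background threshold below are illustrative stand-ins, not values from the paper.

```python
import numpy as np
from scipy.ndimage import zoom  # order=1 gives trilinear interpolation

rng = np.random.default_rng(2)
vol = rng.uniform(0.0, 1000.0, size=(120, 144, 120))  # fake stripped scan
vol[vol < 200] = 0.0  # stand-in for the zeroed skull-stripped background

# Step 4: trilinear resize to 96x96x96.
factors = [t / s for t, s in zip((96, 96, 96), vol.shape)]
vol96 = zoom(vol, factors, order=1)

# Step 5: z-score over nonzero (brain) voxels only; background stays 0.
mask = vol96 > 0
mean, std = vol96[mask].mean(), vol96[mask].std()
vol96[mask] = (vol96[mask] - mean) / std
```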

## Citation

```bibtex
@article{tak2026generalizable,
  title={A generalizable foundation model for analysis of human brain MRI},
  author={Tak, Divyanshu and Gormosa, B.A. and Zapaishchykova, A. and others},
  journal={Nature Neuroscience},
  year={2026},
  publisher={Springer Nature},
  doi={10.1038/s41593-026-02202-6}
}
```

## License

This model is licensed for **non-commercial academic research use only**.
Commercial use requires a separate license from Mass General Brigham.
See [LICENSE](LICENSE) for details.