---
license: other
license_name: mass-general-brigham-non-commercial
license_link: LICENSE
tags:
- brain
- mri
- neuroimaging
- vit
- foundation-model
- medical-imaging
library_name: brainiac
pipeline_tag: feature-extraction
---
# BrainIAC — Brain Imaging Adaptive Core
**A generalizable foundation model for analysis of human brain MRI**
BrainIAC is a Vision Transformer (ViT-B/16) pretrained with SimCLR on structural brain MRI scans.
Published in [Nature Neuroscience](https://www.nature.com/articles/s41593-026-02202-6) (2026).
## Model Details
| Property | Value |
|----------|-------|
| Architecture | MONAI ViT-B/16³ (3D) |
| Parameters | 88.4M |
| Input | 96×96×96 single-channel brain MRI |
| Patches | 216 (6×6×6 grid, 16³ voxel patches) |
| Hidden dim | 768 |
| Layers | 12 transformer blocks |
| Heads | 12 attention heads |
| MLP dim | 3072 |
| Pretraining | SimCLR contrastive learning |
| Output | 768-dim feature vector (first patch token) |
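The 88.4M figure follows directly from the dimensions in the table. A back-of-the-envelope check (a sketch, assuming a learned positional embedding per patch and no class token, consistent with the first-patch-token output above):

```python
# Rough parameter count for the ViT-B/16 (3D) backbone from the table above.
# Assumes no class token and one learned positional embedding per patch.
hidden, mlp, layers = 768, 3072, 12
patches = (96 // 16) ** 3                        # 6×6×6 grid of 16³ voxel patches

patch_embed = 16 ** 3 * hidden + hidden          # linear projection of each 16³ patch
pos_embed = patches * hidden                     # positional embeddings
per_layer = (
    3 * (hidden * hidden + hidden)               # Q, K, V projections
    + hidden * hidden + hidden                   # attention output projection
    + hidden * mlp + mlp                         # MLP up-projection
    + mlp * hidden + hidden                      # MLP down-projection
    + 2 * 2 * hidden                             # two LayerNorms (scale + bias)
)
final_norm = 2 * hidden
total = patch_embed + pos_embed + layers * per_layer + final_norm
print(patches, total)  # 216 patches, 88,368,384 parameters (~88.4M)
```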
## Files
- `backbone.safetensors` — Pretrained ViT backbone weights
- `config.json` — Model configuration
- `LICENSE` — Non-commercial academic research license
## Downstream Tasks
The backbone can be fine-tuned for:
- **Brain age prediction** (regression)
- **IDH mutation classification** (binary, dual-scan FLAIR+T1CE)
- **MCI classification** (binary)
- **Glioma overall survival** (binary, quad-scan T1+T1CE+T2+FLAIR)
- **MR sequence classification** (4-class: T1/T2/FLAIR/T1CE)
- **Time-to-stroke prediction** (regression)
- **Tumor segmentation** (UNETR decoder)
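For the regression and binary tasks, the simplest head is a single linear layer over the 768-dim embedding. A minimal linear-probe sketch in NumPy (the feature array here is random stand-in data, not real BrainIAC output):

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for BrainIAC embeddings of a batch of scans: (batch, 768).
features = rng.standard_normal((32, 768)).astype(np.float32)
ages = rng.uniform(20, 80, size=(32, 1)).astype(np.float32)  # synthetic targets

# Linear regression head for brain-age prediction: y = xW + b.
W = np.zeros((768, 1), dtype=np.float32)
b = np.zeros(1, dtype=np.float32)

lr = 1e-3
for _ in range(200):  # a few gradient steps on mean-squared error
    pred = features @ W + b
    grad = pred - ages                           # gradient of 0.5 * MSE w.r.t. pred
    W -= lr * features.T @ grad / len(ages)
    b -= lr * grad.mean(axis=0)

print(pred.shape)  # (32, 1)
```

In practice the backbone is fine-tuned end-to-end rather than linearly probed, but the head shape is the same.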
## Usage with brainiac (Rust)
```bash
cargo run --release --bin infer -- \
--weights backbone.safetensors \
--input brain_t1.nii.gz
```
```rust
use std::path::Path;

use brainiac::{BrainiacEncoder, TaskType};

// Load the pretrained backbone for feature extraction (batch size 1).
let (encoder, _) = BrainiacEncoder::<B>::load(
    "backbone.safetensors", None,
    TaskType::FeatureExtraction, 1, device,
)?;

// Encode a preprocessed NIfTI volume into a 768-dim embedding.
let features = encoder.encode_nifti(Path::new("brain.nii.gz"))?;
// features: Vec<f32> with 768 dimensions
```
## Usage with Python
```python
import torch
from monai.networks.nets import ViT
from safetensors.torch import load_file

# Rebuild the backbone with the configuration from the model card.
model = ViT(in_channels=1, img_size=(96, 96, 96), patch_size=(16, 16, 16),
            hidden_size=768, mlp_dim=3072, num_layers=12, num_heads=12)
model.load_state_dict(load_file("backbone.safetensors"), strict=False)
model.eval()

# MONAI's ViT returns (patch_tokens, hidden_states); the first patch token
# is the 768-dim feature vector.
with torch.no_grad():
    tokens, _ = model(preprocessed_mri)  # preprocessed_mri: (B, 1, 96, 96, 96)
features = tokens[:, 0]
```
## Preprocessing
Input MRI volumes must be:
1. Skull-stripped (HD-BET recommended)
2. Registered to standard space (MNI152)
3. Bias field corrected (N4)
4. Resized to 96×96×96 voxels (trilinear)
5. Z-score normalized (nonzero voxels only)
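Steps 4–5 can be sketched as follows (a minimal example on a synthetic volume, using `scipy.ndimage.zoom` with `order=1` for the trilinear resize; skull stripping, registration, and N4 correction require dedicated tools such as HD-BET and ANTs/SimpleITK):

```python
import numpy as np
from scipy.ndimage import zoom

# Synthetic stand-in for a skull-stripped, registered, bias-corrected volume.
rng = np.random.default_rng(0)
vol = rng.uniform(0.0, 1.0, size=(64, 64, 64)).astype(np.float32)
vol[:8] = 0.0  # pretend part of the volume is stripped background

# 4. Trilinear resize to 96×96×96 (order=1 → trilinear interpolation).
target = (96, 96, 96)
vol = zoom(vol, [t / s for t, s in zip(target, vol.shape)], order=1)

# 5. Z-score normalization over nonzero (brain) voxels only;
#    background zeros are left untouched.
mask = vol > 0
vol[mask] = (vol[mask] - vol[mask].mean()) / vol[mask].std()

print(vol.shape)  # (96, 96, 96)
```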
## Citation
```bibtex
@article{tak2026generalizable,
title={A generalizable foundation model for analysis of human brain MRI},
author={Tak, Divyanshu and Gormosa, B.A. and Zapaishchykova, A. and others},
journal={Nature Neuroscience},
year={2026},
publisher={Springer Nature},
doi={10.1038/s41593-026-02202-6}
}
```
## License
This model is licensed for **non-commercial academic research use only**.
Commercial use requires a separate license from Mass General Brigham.
See [LICENSE](LICENSE) for details.