---
tags:
- fMRI
- foundation_model
- neuroscience
---
# SLIM-BRAIN: A DATA- AND TRAINING-EFFICIENT FOUNDATION MODEL FOR FMRI DATA ANALYSIS
<div align="center">
[Paper](https://www.arxiv.org/abs/2512.21881)
[Code](https://github.com/OneMore1/SLIM-Brain2026)
[Model](https://huggingface.co/OneMore1/Slim-Brain)
</div>
This repository contains the official implementation of SLIM-Brain, a two-stage, selective-compute pipeline for voxel-level fMRI representation learning. A lightweight global branch ranks informative temporal windows, and a high-capacity 4D Hiera-JEPA encoder processes only those windows, concentrating compute on brain voxels and drastically reducing memory use.
<p align="center">
<img src="pipeline.png" width="800" alt="framework">
</p>
---
## Installation
Setting up the environment requires Python 3.13 and CUDA-compatible PyTorch for GPU acceleration:
```bash
conda create -n hiera-jepa python=3.13.5
conda activate hiera-jepa
# Install dependencies
pip install -r requirements.txt
```
## Project Structure
The codebase is organized into modular components for easy navigation and extension:
```
hiera-jepa/
├── configs/          # YAML configuration files for training and model parameters
├── checkpoints/      # Saved model weights and training checkpoints
├── hiera/            # Hierarchical Vision Transformer backbone implementation
├── scripts/          # Bash scripts (e.g. finetune.sh)
├── finetune.py       # Downstream task training and feature extraction script
└── requirements.txt  # Python package dependencies
```
## Downstream Evaluation
1. Ensure your pre-training data is structured as follows:
```
data_root/
├── ABIDE_train/
├── ABIDE_val/
├── HCP_val/
└── HCP_train/
    ├── 0010001/                           # Subject ID
    └── 0010002/
        ├── 0010002_run-1_0000-0199_1.npz  # Data chunk 1
        └── 0010002_run-1_0000-0199_2.npz  # Data chunk 2
```
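Before training, individual `.npz` chunks can be inspected with NumPy. The sketch below builds a toy chunk so it runs anywhere; the array key `data` and the `(time, x, y, z)` shape are assumptions, so check `np.load(path).files` on a real chunk to see what it actually stores:

```python
import os
import tempfile

import numpy as np

# Toy chunk mimicking the naming scheme above; the key "data" and the
# (200, 8, 8, 8) shape are illustrative assumptions, not the repo's format.
chunk_path = os.path.join(tempfile.mkdtemp(), "0010002_run-1_0000-0199_1.npz")
np.savez_compressed(chunk_path, data=np.random.rand(200, 8, 8, 8).astype(np.float32))

with np.load(chunk_path) as chunk:
    print(chunk.files)           # stored array keys
    vol = chunk["data"]
    print(vol.shape, vol.dtype)  # (200, 8, 8, 8) float32
```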
2. Point the downstream loader at your data with a config like:
```yaml
task:
  csv: "/path/to/data_csv"
data:
  data_root: /path/to/data_root
  datasets: ["HCP"]
  mode: "directory"
```
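A config like this can be parsed and sanity-checked with PyYAML before launching a run; a minimal sketch (the key names mirror the snippet above, and the `/path/to/...` values are placeholders to replace with real paths):

```python
import yaml  # PyYAML

# Same structure as the config above; the paths are placeholders.
config_text = """
task:
  csv: "/path/to/data_csv"
data:
  data_root: /path/to/data_root
  datasets: ["HCP"]
  mode: "directory"
"""

cfg = yaml.safe_load(config_text)
assert cfg["data"]["mode"] == "directory"
assert isinstance(cfg["data"]["datasets"], list)
print(cfg["task"]["csv"], cfg["data"]["datasets"])  # /path/to/data_csv ['HCP']
```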
3. Start downstream training:
```bash
# running downstream training
sh scripts/finetune.sh
```
## Model Checkpoints
Our pre-trained model weights are provided in the checkpoints directory: `./checkpoints/best_model.pth`
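A minimal sketch of loading these weights with PyTorch. It writes a toy stand-in checkpoint so it runs anywhere; with the real repository, point `ckpt_path` at `./checkpoints/best_model.pth` instead. Whether the file stores a bare state dict or wraps it under a `"state_dict"` key is an assumption, so inspect the loaded object first:

```python
import os
import tempfile

import torch
import torch.nn as nn

# Toy stand-in checkpoint; in practice use ./checkpoints/best_model.pth.
ckpt_path = os.path.join(tempfile.mkdtemp(), "best_model.pth")
torch.save({"state_dict": nn.Linear(4, 2).state_dict()}, ckpt_path)

# map_location="cpu" keeps the load working on machines without a GPU.
ckpt = torch.load(ckpt_path, map_location="cpu")

# The "state_dict" wrapper key is an assumption -- check ckpt.keys() on the
# real checkpoint before relying on it.
state_dict = ckpt.get("state_dict", ckpt)
print(sorted(state_dict.keys()))  # ['bias', 'weight'] for this toy model
# model.load_state_dict(state_dict, strict=False)  # hypothetical model instance
```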