MnemoDyn: Learning Resting State Dynamics from 40K fMRI Sequences
Sourav Pal, Viet Luong, Hoseok Lee, Tingting Dan, Guorong Wu, Richard Davidson, Won Hwa Kim, Vikas Singh
MnemoDyn is an operator-learning foundation model for resting-state fMRI, combining multi-resolution wavelet dynamics with CDE-style temporal modeling.
Update
MnemoDyn is now published on Hugging Face: https://huggingface.co/vhluong/MnemoDyn
You can also publish your own trained checkpoint directly from this repo.
Tutorial
A usage walkthrough is available as a Google Colab notebook.
At A Glance
- Pretraining backbones: coe/light/model/main.py, coe/light/model/main_masked_autoencode.py, coe/light/model/main_masked_autoencode_jepa.py, coe/light/model/main_denoise.py, coe/light/model/orion.py
- Core model modules: coe/light/model/conv1d_optimize.py, coe/light/model/dense_layer.py, coe/light/model/ema.py, coe/light/model/normalizer.py
- Downstream tasks: HBN, ADHD200, ADNI, ABIDE, NKIR, UK Biobank, HCP Aging under coe/light/*.py
- Launch scripts: coe/light/script/*.sh
Repository Layout
.
├── highdim_req.txt
├── pyproject.toml
├── coe/
│   ├── parcellation/
│   └── light/
│       ├── model/
│       ├── script/
│       ├── *_dataset.py
│       └── *classification*.py, *regress*.py
└── README.md
Environment Setup
Python 3.10+ is recommended.
Option A (recommended): uv
uv venv
source .venv/bin/activate
uv sync
Option B: pip
python -m venv .venv
source .venv/bin/activate
pip install -r highdim_req.txt
Ensure your PyTorch build matches your CUDA stack.
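A quick way to confirm the PyTorch/CUDA pairing is a short check script. This is a generic sketch (not part of the repo):

```python
# Environment sanity check: report the installed PyTorch build and its CUDA pairing.
# Generic sketch, not part of the MnemoDyn codebase.
def torch_cuda_report():
    """Return a small dict describing the local PyTorch/CUDA setup."""
    try:
        import torch
    except ImportError:
        return {"torch_installed": False}
    return {
        "torch_installed": True,
        "torch_version": torch.__version__,
        "cuda_build": torch.version.cuda,  # CUDA version torch was compiled against
        "cuda_available": torch.cuda.is_available(),
    }

if __name__ == "__main__":
    print(torch_cuda_report())
```

If `cuda_build` disagrees with the driver reported by `nvidia-smi`, reinstall PyTorch from the matching wheel index before training.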
Preprocessing Pipeline (NIfTI to Parcellated CIFTI)
We provide a unified, Python-based CLI pipeline to automate mapping volumetric NIfTI images to fs_LR surfaces and parcellating the resulting dense time series. The pipeline dynamically extracts the Repetition Time (TR) from your NIfTI files to ensure downstream models learn accurate temporal dynamics.
Requirements
- Connectome Workbench (wb_command) installed and on your system PATH.
- The nibabel and tqdm Python packages.
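A small preflight check can confirm these prerequisites before launching the pipeline. This is a generic sketch, not part of the repo:

```python
# Quick preflight: confirm wb_command and the required Python packages are
# available. Generic sketch, not part of the MnemoDyn codebase.
import importlib.util
import shutil

def preflight():
    """Return a dict of missing prerequisites (empty means ready to run)."""
    missing = {}
    if shutil.which("wb_command") is None:
        missing["wb_command"] = "Connectome Workbench not on PATH"
    for pkg in ("nibabel", "tqdm"):
        if importlib.util.find_spec(pkg) is None:
            missing[pkg] = "Python package not installed"
    return missing
```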
Usage
Run the pipeline from the repository root:
python -m coe.preprocess.pipeline \
--input-dir /path/to/niftis \
--output-dir /path/to/output_dir \
--atlas /path/to/atlas.dlabel.nii \
--pattern "*_task-rest_space-MNI305_preproc.nii.gz"
The script orchestrates wb_command for left/right hemisphere mapping and resampling, writes an intermediate .dtseries.nii, and finally parcellates it with the provided atlas, injecting the correct native TR throughout.
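The pipeline reads the TR through nibabel (for a 4D image, `header.get_zooms()[3]`); in the NIfTI-1 format itself, the value lives in `pixdim[4]` of the 348-byte header. A stdlib-only sketch of that lookup, for readers curious where the number comes from:

```python
# Sketch: read the repetition time (TR) straight from a NIfTI-1 header.
# The pipeline uses nibabel; this stdlib-only version shows where the value
# lives: pixdim[4], stored as float[8] at byte offset 76 of the header.
# Assumes xyzt_units encodes seconds, the common case for fMRI.
import gzip
import struct

def read_tr(path):
    """Return TR from a .nii or .nii.gz file (NIfTI-1 only)."""
    opener = gzip.open if str(path).endswith(".gz") else open
    with opener(path, "rb") as f:
        header = f.read(348)
    # sizeof_hdr must equal 348; use it to detect byte order.
    for order in ("<", ">"):
        if struct.unpack(order + "i", header[:4])[0] == 348:
            pixdim = struct.unpack(order + "8f", header[76:108])
            return pixdim[4]  # TR when dim[4] is the time axis
    raise ValueError(f"{path}: not a NIfTI-1 file")
```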
Quick Start
1) Inspect pretraining CLIs
cd coe/light/model
python main.py --help
python main_masked_autoencode.py --help
python main_masked_autoencode_jepa.py --help
python main_denoise.py --help
2) Pretraining
bash orion.sh
3) Run downstream examples
cd coe/light
bash script/hbn_classification.sh
bash script/adhd_200_diagnose.sh
Typical Workflow
- Pretrain a foundation checkpoint (coe/light/model/main*.py).
- Save Lightning checkpoints under a versioned results directory.
- Fine-tune a downstream head using a task script in coe/light/.
- Track outputs and metrics under Result/<ExperimentName>/....
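A hypothetical helper for the versioned-results step, mirroring Lightning's `version_<k>` convention under a `Result/<ExperimentName>/` root (the helper name and layout details are illustrative, not the repo's actual code):

```python
# Hypothetical helper mirroring Lightning's version_<k> directory convention
# for a Result/<ExperimentName>/ layout. Illustrative only.
from pathlib import Path

def next_version_dir(result_root, experiment):
    """Create and return Result/<experiment>/version_<k> with the next free k."""
    exp_dir = Path(result_root) / experiment
    exp_dir.mkdir(parents=True, exist_ok=True)
    existing = [
        int(p.name.split("_", 1)[1])
        for p in exp_dir.glob("version_*")
        if p.name.split("_", 1)[1].isdigit()
    ]
    k = max(existing, default=-1) + 1
    out = exp_dir / f"version_{k}"
    out.mkdir()
    return out
```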
Notes and Caveats
- This is a research codebase and is still being consolidated.
- Some scripts may require branch-specific import/path adjustments.
- Normalization and dataset utilities are partially duplicated across modules.
- Reproducibility depends on matching preprocessing, atlas/parcellation, and dataset splits.
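One way to keep dataset splits reproducible across runs and machines is to derive each subject's split from a hash of its ID rather than a random shuffle. This is a generic sketch of that idea, not the repo's actual split logic:

```python
# Generic sketch of a hash-based subject split: the assignment depends only on
# the subject ID and a seed string, so it is stable across runs and machines.
# Not the repo's actual split logic.
import hashlib

def assign_split(subject_id, seed="split-v1", val_frac=0.1, test_frac=0.1):
    """Map a subject ID deterministically to 'train', 'val', or 'test'."""
    digest = hashlib.sha256(f"{seed}:{subject_id}".encode()).digest()
    u = int.from_bytes(digest[:8], "big") / 2**64  # uniform in [0, 1)
    if u < test_frac:
        return "test"
    if u < test_frac + val_frac:
        return "val"
    return "train"
```

Because the mapping is per-subject, adding new subjects never reshuffles existing assignments.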
Citation
If this work helps your research, please cite:
@inproceedings{
pal2026mnemodyn,
title={MnemoDyn: Learning Resting State Dynamics from $40$K {FMRI} sequences},
author={Sourav Pal and Viet Luong and Hoseok Lee and Tingting Dan and Guorong Wu and Richard Davidson and Won Hwa Kim and Vikas Singh},
booktitle={The Fourteenth International Conference on Learning Representations},
year={2026},
url={https://openreview.net/forum?id=zexMILcQOV}
}
