---
license: cc-by-nd-4.0
language:
- en
tags:
- eeg
- time-series
- cross-attention
- foundation-model
- neuroscience
library_name: pytorch
---

<div align="center">
<img src="https://raw.githubusercontent.com/pulp-bio/BioFoundation/refs/heads/main/docs/model/logo/LUNA_logo.png" alt="LUNA Logo" width="800"/>
<h1>LUNA: Efficient and Topology-Agnostic Foundation Model for EEG</h1>
</div>

<p align="center">
  <a href="https://github.com/pulp-bio/BioFoundation">
    <img src="https://img.shields.io/github/stars/pulp-bio/BioFoundation?color=ccf" alt="GitHub">
  </a>
  <a href="https://creativecommons.org/licenses/by-nd/4.0/">
    <img src="https://img.shields.io/badge/License-CC_BY--ND_4.0-lightgrey.svg" alt="License">
  </a>
  <a href="https://arxiv.org/abs/2510.22257">
    <img src="https://img.shields.io/badge/arXiv-2510.22257-b31b1b.svg" alt="Paper">
  </a>
</p>

**LUNA** (Latent Unified Network Architecture) is a **self-supervised foundation model for EEG** that is **agnostic to electrode topology**. LUNA projects arbitrary channel layouts into a **fixed-size latent space** using **learned queries and cross-attention**, then runs **patch-wise temporal self-attention** only on this compact latent. This **decouples compute from channel count**, yielding **linear-in-channels scaling**, large FLOP and memory savings, and strong transfer across datasets and montages.

---

## 🔒 License & Usage Policy (Weights)

**Weights license:** The released model weights are licensed under **Creative Commons Attribution–NoDerivatives 4.0 (CC BY-ND 4.0)**. This section summarizes the practical implications for users. *This is not legal advice; please read the full license text.*

### ✅ You may
- **Use** and **redistribute** the **unmodified** LUNA weights (including in commercial settings) **with proper attribution** to the LUNA authors.
- **Fine-tune / adapt** the weights **for your internal use** (research or production) **without redistributing** the modified weights.
- **Publish your code, configs, logs, and papers** describing experiments with LUNA (please cite the paper).

### 🚫 You may not
- **Share, host, or redistribute any modified weights** (including LoRA/adapter/delta checkpoints or pruned/quantized variants). Any parameter set that encodes an adaptation is considered a derivative and cannot be shared under CC BY-ND 4.0.
- **Imply endorsement** by the LUNA authors of any derivative or evaluation without our written permission.
- **Use the LUNA name** in a way that suggests your modified model is an official LUNA release.

### 🤝 How to contribute improvements (PR-gated releases)
We welcome community improvements via a **pull-request (PR)** workflow. If you believe your improvements should become an **official LUNA release**:
1. **Open a PR** in the [BioFoundation repository](https://github.com/pulp-bio/BioFoundation) describing the change (architecture/head/training recipe, datasets, preprocessing, compute).
2. Include **reproducibility artifacts**: configs, seeds, scripts, environment details, training/validation logs, and the **evaluation protocol** (e.g., TUAB/TUAR/TUSL) with exact splits.
3. Provide **comprehensive results** (AUROC/AUPR/BA, FLOPs, memory) vs. the baselines reported in the LUNA paper.
4. After **maintainer review**, approved changes will be **retrained/validated** and, if accepted, **released by the maintainers** as a new **official LUNA** checkpoint under **CC BY-ND 4.0**.

> Rationale: CC BY-ND protects users from fragmented, lower-quality “LUNA variants,” while still enabling internal fine-tuning and a path for the community to upstream improvements through review.

---

## 🔎 Model Summary

- **Goal:** Topology-agnostic EEG modeling with **linear-in-channels** compute/memory.
- **Core idea:** The **Channel-Unification Module** uses **learned queries** (Q) with **cross-attention** to map any set of channels to a fixed latent; a **temporal Transformer** then operates on that latent sequence.
- **Pre-training data:** TUEG + Siena, **>21,000 hours** of raw EEG; downstream subjects removed to avoid leakage.
- **Downstream tasks:** TUAB (abnormal), **TUAR** (artifacts), **TUSL** (slowing), **SEED-V** (emotion; unseen 62-channel montage).

---

## 🚀 Model Variants

| Variant | Parameters |
| :--- | ---: |
| **LUNA-Base** | **7M** |
| **LUNA-Large** | **43M** |
| **LUNA-Huge** | **311M** |

*Scaling increases the depth/width of the temporal encoder and the query/embedding sizes in the unification module.*

### ⚙️ Model size configs (ready-made YAMLs)

Pick a LUNA size by selecting one of the provided model configs:

- `config/model/LUNA_base.yaml` — Base (≈7M)
- `config/model/LUNA_large.yaml` — Large (≈43M)
- `config/model/LUNA_huge.yaml` — Huge (≈311M)

**Use one via the experiment defaults override** (recommended):

```yaml
# inside config/experiment/LUNA_finetune.yaml
defaults:
  - override /data_module: finetune_data_module # or subject_independent_data_module
  - override /model: LUNA_base # change to LUNA_large or LUNA_huge
  - override /scheduler: cosine
  - override /task: finetune_task_LUNA
  - override /criterion: finetune_criterion
```

**Or from the CLI** (no file edits):

```bash
python -u run_train.py +experiment=LUNA_finetune /model=LUNA_large
```

---

## 📊 Results (Highlights)

- **TUAR (artifact detection):** **AUROC 0.921** (LUNA-Huge).
- **TUSL (slowing, 4-class):** **AUROC 0.802** (LUNA-Huge).
- **TUAB (abnormal vs. normal):** **Bal. Acc. 81.57%**, **AUROC 0.8957** (LUNA-Huge).

**Efficiency:** Up to **300× fewer FLOPs** and **≈10× lower GPU memory** than quadratic spatio-temporal attention on dense caps / long windows, thanks to unifying channels **before** temporal attention.
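
The efficiency gap follows from a simple token-count argument, sketched below as back-of-the-envelope arithmetic. The channel, patch, and query counts are illustrative assumptions, not the paper's exact configuration:

```python
# Joint spatio-temporal self-attention attends over channels x patches tokens,
# so its cost grows with (C * P)^2. LUNA instead cross-attends Q learned queries
# to C channels per patch (linear in C), then self-attends over P patches only.

def attention_pairs_joint(channels: int, patches: int) -> int:
    """Token-pair count for full spatio-temporal self-attention."""
    tokens = channels * patches
    return tokens * tokens

def attention_pairs_luna(channels: int, patches: int, queries: int) -> int:
    """Token-pair count for LUNA: per-patch cross-attention + temporal attention."""
    unification = patches * queries * channels  # cross-attention, linear in channels
    temporal = patches * patches                # latent sequence length = #patches
    return unification + temporal

# Assumed example: a 64-channel cap, 60 patches, 8 latent queries.
C, P, Q = 64, 60, 8
ratio = attention_pairs_joint(C, P) / attention_pairs_luna(C, P, Q)
print(f"joint/LUNA attention-pair ratio: ~{ratio:.0f}x")
```

Doubling the channel count doubles LUNA's unification term but quadruples the joint-attention term; that linear-vs-quadratic gap is what the efficiency numbers above reflect.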

---

## 🧠 Intended Use & Limitations

**Intended use.** Research on EEG representation learning and classification (abnormality, artifacts, slowing, emotion), especially when **montages vary** or **channel counts are high**.

**Limitations.**
- **Not a medical device.** Do **not** use for clinical decisions without proper validation and regulatory clearance.
- **Unseen topologies:** Zero-shot transfer to **very different/dense** layouts (e.g., SEED-V) can underperform the state of the art despite positive scaling; consider augmenting pre-training montage diversity and spatial encodings.
- **Distribution shifts:** Performance varies across cohorts, devices, and label protocols; validate locally and consider domain adaptation.

---

## 🏗️ Architecture & Training

**Tokenizer & features.** EEG is segmented into patches; temporal features come from a 1D convolution with GroupNorm and GELU; **frequency features** (FFT magnitude/phase → MLP) are added; 3D electrode coordinates are encoded via **NeRF-style sinusoids → MLP** (positional encoding).
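
The coordinate encoding can be sketched as sin/cos features at octave frequencies; the function name and frequency count below are illustrative assumptions (the actual model feeds the result through an MLP):

```python
import torch

def nerf_encode(coords: torch.Tensor, num_freqs: int = 4) -> torch.Tensor:
    """Encode (C, 3) electrode coordinates as (C, 3 * 2 * num_freqs) sinusoidal features."""
    freqs = 2.0 ** torch.arange(num_freqs)                   # octave frequencies 1, 2, 4, 8
    angles = coords.unsqueeze(-1) * freqs                    # (C, 3, F)
    feats = torch.cat([angles.sin(), angles.cos()], dim=-1)  # (C, 3, 2F)
    return feats.flatten(start_dim=1)                        # one fixed-width row per electrode

# Any montage works: 19 or 62 electrodes both yield rows of the same width.
pos = nerf_encode(torch.rand(19, 3))
```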

**Channel-Unification Module.** **Q learned queries** cross-attend to **channel-wise patch features** to produce a **fixed Q×E latent** per patch; FFN and Transformer layers refine the query tokens. Complexity is **O(Q·C)**, i.e., linear in the number of channels.
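
A minimal sketch of this idea using `torch.nn.MultiheadAttention`, with assumed names and sizes (the real module also applies FFN and Transformer refinement layers):

```python
import torch
import torch.nn as nn

class ChannelUnification(nn.Module):
    """Q learned queries cross-attend over C channel tokens -> fixed (Q, E) latent."""

    def __init__(self, embed_dim: int = 64, num_queries: int = 8, num_heads: int = 4):
        super().__init__()
        self.queries = nn.Parameter(torch.randn(num_queries, embed_dim))
        self.cross_attn = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, channel_feats: torch.Tensor) -> torch.Tensor:
        # channel_feats: (batch, C, E) patch features; C may differ between datasets
        q = self.queries.unsqueeze(0).expand(channel_feats.shape[0], -1, -1)
        latent, _ = self.cross_attn(q, channel_feats, channel_feats)
        return latent  # (batch, Q, E): shape independent of the channel count C

# The same module handles a 19-channel and a 62-channel montage without changes.
unify = ChannelUnification()
```

Because the query count Q is fixed, the attention score matrix is Q×C rather than (C×P)×(C×P), which is where the linear-in-channels complexity comes from.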

**Temporal encoder.** A **patch-wise Transformer** with **RoPE** operates on the latent sequence (length = number of patches), **not** on channels × patches, substantially reducing sequence length and cost.

**Pre-training objective.** **Masked-patch reconstruction** with a Smooth-L1 loss; the decoder uses **channel-indexed queries** to reconstruct masked tokens. A **query-specialization loss** encourages diverse query–channel affinities.
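
The reconstruction term can be sketched as Smooth-L1 averaged over the masked patches only; names and shapes are assumptions for illustration, and the query-specialization term is omitted:

```python
import torch
import torch.nn.functional as F

def masked_reconstruction_loss(pred: torch.Tensor, target: torch.Tensor,
                               mask: torch.Tensor) -> torch.Tensor:
    """Smooth-L1 over masked patches; mask is (B, P) bool, True = patch was masked."""
    per_elem = F.smooth_l1_loss(pred, target, reduction="none")  # (B, P, D)
    per_patch = per_elem.mean(dim=-1)                            # (B, P)
    # Average only over the patches the model actually had to reconstruct
    return (per_patch * mask).sum() / mask.sum().clamp(min=1)
```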

---

## 🔧 Fine-tuning — General Checklist

0. **Install & read data prep**: clone the [BioFoundation repo](https://github.com/pulp-bio/BioFoundation), set up the environment as described there, then open `make_datasets/README.md` for dataset-specific notes (naming, expected folder layout, and common pitfalls).
1. **Choose model size**: set `- override /model: {LUNA_base|LUNA_large|LUNA_huge}` in your experiment YAML (or `/model=...` via CLI).
2. **Point to weights**: set `pretrained_safetensors_path: /path/to/LUNA_*.safetensors` in the experiment YAML.
3. **Pick data module**:
   - **TUH datasets (TUAB/TUSL/TUAR)** → `- override /data_module: finetune_data_module`, optionally overriding the `data_module.train/val/test.hdf5_file` paths.
   - **Non-TUH (e.g., SEED-V)** → `- override /data_module: subject_independent_data_module`, removing the TUH-specific `data_module` block.
4. **Task settings**: set `classification_type` (`bc`, `mc`, `mmc`, `mcc`) and `model.num_classes` to match your downstream task.
5. **Env vars**: export `DATA_PATH` (dataset root) and `CHECKPOINT_DIR` (artifacts).
6. **Trainer/optimizer**: adjust `gpus/devices`, `batch_size`, `max_epochs`, and the LR/scheduler if needed.
7. **I/O**: set `io.base_output_path` and confirm `io.checkpoint_dirpath` exists.
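
When `pretrained_safetensors_path` points at a checkpoint, fine-tuning code typically loads the backbone while leaving task-specific tensors (e.g., a head resized for a new `num_classes`) at their fresh initialization. A minimal, hypothetical helper illustrating that pattern (the repo's actual loading logic may differ; a safetensors checkpoint loads into an ordinary name→tensor dict via `safetensors.torch.load_file`):

```python
import torch

def load_pretrained_backbone(model: torch.nn.Module, state: dict) -> list:
    """Copy tensors whose name and shape match; return the keys left untouched
    (e.g., a freshly initialized classification head with a new num_classes)."""
    own = model.state_dict()
    loadable = {k: v for k, v in state.items()
                if k in own and own[k].shape == v.shape}
    own.update(loadable)
    model.load_state_dict(own)
    return [k for k in own if k not in loadable]
```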

---

## 🧪 Example: Fine-tune on TUSL (end-to-end)

**0) Install & acquire data**
- Follow the installation instructions in the [BioFoundation repository](https://github.com/pulp-bio/BioFoundation).
- Read `make_datasets/README.md` for exact dataset preparation details.
- Download the **raw TUSL** dataset from the official [TUH EEG corpus source](https://isip.piconepress.com/projects/nedc/html/tuh_eeg/index.shtml) and place it locally, e.g. under `/eeg_data/TUSL/`.

**1) Prepare data**

```bash
# Preprocess the raw TUSL recordings
python make_datasets/process_raw_eeg.py tusl --root_dir /eeg_data/TUSL/edf --output_dir /processed_eeg

# Pack the processed data into train/val/test HDF5 files
python make_datasets/make_hdf5.py --prepath /processed_eeg --dataset TUSL --remove_pkl
```

**2) Set environment variables**

```python
# run_train.py (example)
import os

os.environ["DATA_PATH"] = "/processed_eeg"   # contains TUSL_data/{train,val,test}.h5
os.environ["CHECKPOINT_DIR"] = "/LUNA_runs"  # directory for checkpoints & logs
```

**3) Edit the experiment file: `config/experiment/LUNA_finetune.yaml`**

```yaml
defaults:
  - override /data_module: finetune_data_module # finetune_data_module for TUH datasets, subject_independent_data_module for non-TUH
  - override /model: LUNA_base # model size: LUNA_base, LUNA_large, or LUNA_huge

pretrained_safetensors_path: /path/to/LUNA_base.safetensors

classification_type: "mcc" # classification task type, e.g. bc (binary) or mcc (multiclass)
model:
  num_classes: 4 # number of classes in your dataset

# Paths to the preprocessed TUSL .h5 files
data_module:
  train:
    _target_: datasets.tuh_dataset.TUH_Dataset
    hdf5_file: ${env:DATA_PATH}/TUSL_data/train.h5
    finetune: true
  val:
    _target_: datasets.tuh_dataset.TUH_Dataset
    hdf5_file: ${env:DATA_PATH}/TUSL_data/val.h5
    finetune: true
  test:
    _target_: datasets.tuh_dataset.TUH_Dataset
    hdf5_file: ${env:DATA_PATH}/TUSL_data/test.h5
    finetune: true
```

**4) Launch**

```bash
python -u run_train.py +experiment=LUNA_finetune
```

*Tip*: to switch sizes without editing the file:

```bash
python -u run_train.py +experiment=LUNA_finetune /model=LUNA_large pretrained_safetensors_path=/path/to/LUNA_large.safetensors
```

---

## ⚖️ Responsible AI, Risks & Biases

- **Clinical safety:** research-only; human oversight required.
- **Bias & drift:** montage/device/population differences can induce shifts; validate and monitor.
- **Artifacts & rare events:** robustness varies; use QC and task-appropriate preprocessing.

---

## 🔗 Sources

- **Code:** https://github.com/pulp-bio/BioFoundation
- **Paper:** LUNA: Efficient and Topology-Agnostic Foundation Model for EEG Signal Analysis (arXiv:2510.22257).

---

## 📜 Citation

If you use LUNA, please cite:

```bibtex
@inproceedings{doner2025luna,
  title={{LUNA}: Efficient and Topology-Agnostic Foundation Model for {EEG} Signal Analysis},
  author={Berkay D{\"o}ner and Thorir Mar Ingolfsson and Luca Benini and Yawei Li},
  booktitle={The Thirty-ninth Annual Conference on Neural Information Processing Systems},
  year={2025},
  url={https://openreview.net/forum?id=uazfjnFL0G}
}
```

---

## 🛠️ Maintenance & Contact

- **Issues & support:** please open a GitHub issue in the [BioFoundation repository](https://github.com/pulp-bio/BioFoundation).

---

## 🗒️ Changelog

- **v1.0:** Initial release of the LUNA model card with task-specific checkpoints and instructions.