# L-FAME: Longitudinal Focused Attention Meditation EEG Dataset and Benchmark
A longitudinal 64-channel EEG dataset and benchmark for studying focused attention meditation (FAM) and how its neural signatures evolve across a six-week training period. 74 healthy adults were recorded at a pre-intervention baseline; 44 of them returned for a post-intervention follow-up. Three FAM techniques are systematically compared: Hare Krishna mantra (HK), SA-TA-NA-MA mantra (SA), and Breath Focus (BF).
The dataset accompanies the NeurIPS 2026 Datasets & Benchmarks Track submission "L-FAME: Longitudinal Focused Attention Meditation EEG Dataset and Benchmark." It defines three benchmark tasks: (1) cognitive-state decoding (rest vs. meditation), (2) fine-grained technique classification (HK / SA / BF), and (3) cross-session adaptation across the longitudinal gap.
## Highlights
- 64-channel scalp EEG recorded at 250 Hz, BIDS-EEG 1.9.0 compliant
- Two sessions per participant: pre- and post-intervention (6-week gap)
- Five fixed task segments per session: `restOE`, `restCE01`, `Medita`, `restCE02`, `slMedita`
- Three meditation paradigms compared head-to-head (HK, SA, BF)
- Three preprocessing tiers shipped: raw BIDS, EEGLAB-cleaned, and ML-ready tensors
- Three benchmark tasks with reference baselines (PSD+SVM, FBCSP+SVM, ShallowConvNet, DeepConvNet, EEGNet, EEG-Conformer)
## Cohort and demographics
| Group | Focus object | N (pre) | N (post) | Female / Male | Age (mean ± SD) |
|---|---|---|---|---|---|
| Breath Focus (BF) | Respiration | 16 | 9 | 11 / 5 | 22.2 ± 3.9 |
| Hare Krishna (HK) | Long mantra | 31 | 19 | 18 / 13 | 22.2 ± 4.2 |
| SA-TA-NA-MA (SA) | Short mantra | 27 | 16 | 17 / 10 | 21.7 ± 2.7 |
| Total |  | 74 | 44 | 46 / 28 | 22.0 ± 3.6 |
Attrition (30 / 74 = 40.5 %) was consistent with a Missing-Completely-At-Random (MCAR) mechanism with respect to baseline demographics and pre-intervention EEG features (see paper Appendix A.2).
## Experimental paradigm
Each EEG session (≈ 32 min) contains five task segments recorded in fixed order:
| # | Segment | Duration | Instruction |
|---|---|---|---|
| 1 | `restOE` (eyes-open rest) | 120 s | Eyes open, relaxed |
| 2 | `restCE01` (eyes-closed rest, mind-wandering proxy) | 240 s | "Close your eyes and let your mind wander" |
| 3 | `Medita` (active meditation) | 480 s | Group-specific overt practice |
| 4 | `restCE02` (eyes-closed rest, post-active) | 240 s | Same as `restCE01` |
| 5 | `slMedita` (silent / sustained meditation) | 480 s | Group-specific covert practice (eyes closed, no movement) |
The artefact-free silent block `slMedita` is the primary input for benchmark Tasks 1 and 3. Event codes embedded in the BrainVision marker files: `Medita=1`, `restCE01=2`, `restCE02=3`, `restOE=4`, `slMedita=5`.
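The marker mapping above can be captured as a small lookup table (`label_events` is a hypothetical helper for illustration, not part of the dataset API):

```python
# Marker-code mapping embedded in the BrainVision .vmrk files (from the docs above)
EVENT_CODES = {1: "Medita", 2: "restCE01", 3: "restCE02", 4: "restOE", 5: "slMedita"}

def label_events(codes):
    """Map integer event codes to segment names (hypothetical helper)."""
    return [EVENT_CODES[int(c)] for c in codes]

# The fixed recording order, recovered from its event codes:
print(label_events([4, 2, 1, 3, 5]))
# ['restOE', 'restCE01', 'Medita', 'restCE02', 'slMedita']
```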
**Longitudinal design.** Pre-intervention EEG → 6-week daily home practice (5 min/day in week 1, 10 min/day in week 2, 15 min/day in weeks 3–6) with a mid-point check-in → post-intervention EEG.
## Recording setup
- Amplifier: mBrainTrain Smarting Pro X
- Cap: 64-channel EASYCAP, Ag/AgCl, extended international 10-10 layout
- Reference: FCz | Ground: FPz | Impedance: < 20 kΩ
- Sampling rate: 250 Hz | Power-line: 60 Hz
- Electrolyte: abralyt HiCl gel
- Electrode digitisation: CapTrak coordinate system (per-session `*_electrodes.tsv` and `*_coordsystem.json`)
## Repository structure
| Tier | Path | Format | Approx. size | Intended use |
|---|---|---|---|---|
| Raw BIDS | `sub-XX/ses-{premedita,posmedita}/eeg/*.{eeg,vhdr,vmrk,json,tsv}` | BrainVision | ~23 GB | Bring-your-own preprocessing |
| EEGLAB cleaned | `derivatives/eeglab_preproc/sub-XXX/ses-*/.../*_preproc_{preica,icrm}.{set,fdt}` | EEGLAB | ~15 GB | MATLAB / MNE-Python pipelines; both pre-ICA and IC-removed snapshots |
| ML segmented | `derivatives/ml_preproc_data/sub-XXX/.../*_eeg_preproc.npy` | NumPy `float32` (channels × timepoints, per task segment) | ~11 GB | Drop-in tensors used by the official benchmark code |
| ML continuous (ICA-cleaned) | `derivatives/ml_continuous_tensors/sub-XXX/.../*_desc-icacleaned_continuous.npy` | NumPy `float32` (channels × timepoints, full continuous recording) | ~11 GB | Custom segmentation, sliding windows, or self-supervised pre-training on the full ICA-cleaned signal stream |
Subject IDs are 2-digit (`sub-01` … `sub-74`) under the raw BIDS root and 3-digit (`sub-001` … `sub-074`) under all `derivatives/` tiers, following the BIDS-Derivatives convention.

`participants.tsv` and `participants.json` at the root document each subject's group, session-completion status, age, sex, and handedness.
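The two ID conventions can be bridged with a one-line helper (`raw_to_deriv_id` is a hypothetical convenience function, not part of the dataset API):

```python
def raw_to_deriv_id(sub_id: str) -> str:
    """Convert a 2-digit raw-BIDS subject ID (e.g. 'sub-01') to the
    3-digit zero-padded form used under derivatives/ (e.g. 'sub-001')."""
    num = int(sub_id.split("-")[1])
    return f"sub-{num:03d}"

print(raw_to_deriv_id("sub-07"))  # sub-007
print(raw_to_deriv_id("sub-74"))  # sub-074
```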
## Preprocessing details
**EEGLAB tier** (`eeglab_preproc/`):
- Zero-phase 1 Hz Butterworth high-pass
- Zapline-plus 60 Hz line-noise removal (spectral integration, no notch artefact)
- Artifact Subspace Reconstruction (ASR; burst SD = 25, max-bad-channel fraction = 0.2; bad channels spherically interpolated)
- Common-average re-reference including FCz (65 channels)
- Infomax ICA → ICLabel (artefact threshold ≥ 0.9) → component removal

Both pre-ICA (`_preproc_preica`) and post-ICA (`_preproc_icrm`) versions are kept.
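The first step (a zero-phase 1 Hz Butterworth high-pass) can be sketched in Python with SciPy; the filter order (4) and the test signal below are assumptions for illustration, not documented pipeline parameters:

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 250.0  # sampling rate (Hz)

# 1 Hz Butterworth high-pass; zero-phase via forward-backward filtering
b, a = butter(4, 1.0, btype="highpass", fs=FS)

rng = np.random.default_rng(0)
eeg = rng.standard_normal((64, 5 * int(FS))) + 50.0  # 64 ch, 5 s, large DC offset
eeg_hp = filtfilt(b, a, eeg, axis=-1)                # filtfilt = zero phase shift

print(eeg_hp.shape)                 # (64, 1250)
print(abs(eeg_hp.mean()) < 1.0)     # DC offset removed
```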
**ML tier** (`ml_preproc_data/`):
- 0.5 Hz FIR high-pass + EEGLAB `clean_rawdata` (spatial-corr 0.9, line-noise 4 SD)
- Spherical-spline interpolation of bad channels for a uniform 64-channel topography
- Per-segment epochs as `float32` (C × T) NumPy arrays
**ML continuous tier** (`ml_continuous_tensors/`):
- ICA-cleaned, full continuous signal stream (no segmentation), saved as `*_desc-icacleaned_continuous.npy`. Use this for sliding-window pipelines or self-supervised pre-training that needs the unsegmented stream.
Per-channel z-score normalisation is applied at run time by the dataset loader, not at storage time. Full parameter list: paper Appendix C.1.
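The run-time normalisation can be sketched as follows (a minimal sketch; the loader's exact epsilon handling is an assumption):

```python
import numpy as np

def zscore_per_channel(x, eps=1e-8):
    """Z-score each channel over time: subtract the per-channel mean,
    divide by the per-channel std (eps guards against flat channels)."""
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

rng = np.random.default_rng(0)
x = rng.standard_normal((64, 1000)) * 12.5 + 3.0  # fake (C, T) segment in µV
z = zscore_per_channel(x)
print(np.allclose(z.mean(axis=-1), 0, atol=1e-6))  # True: zero mean per channel
print(np.allclose(z.std(axis=-1), 1, atol=1e-3))   # True: unit std per channel
```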
## Benchmark tasks
| Task | Goal | Sessions | N | Evaluation |
|---|---|---|---|---|
| Task 1 — Cognitive State Decoding | Rest (`restCE01`) vs. focused meditation (`slMedita`) | pre-intervention only | 74 | intra-subject (block-wise / chronological), inter-subject 5-fold, LOSO |
| Task 2 — Technique Classification | HK vs. SA vs. BF from `slMedita` (and/or `Medita`) | pre and post separately | 74 (pre) / 44 (post) | inter-subject 5-fold |
| Task 3 — Cross-Session Adaptation | Pre-trained Task-1 models applied to post-intervention `slMedita` vs. `restCE01` | post-intervention | 44 | zero-shot and N-shot (10/30) calibration; intra-subject |
Reference benchmark code, configs, and baselines: https://anonymous.4open.science/r/L-FAME-Benchmark (anonymous repository hosted for the NeurIPS 2026 review period; will be replaced with a permanent GitHub URL upon acceptance).
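The inter-subject protocols split at the subject level, so no subject's windows leak across train and test. A minimal sketch of such a split (`subject_kfold` is a hypothetical illustration; the official splits ship with the benchmark code):

```python
import random

def subject_kfold(subjects, k=5, seed=0):
    """Subject-level k-fold: shuffle deterministically, then deal subjects
    round-robin into k disjoint folds."""
    subjects = sorted(subjects)
    random.Random(seed).shuffle(subjects)
    return [subjects[i::k] for i in range(k)]

folds = subject_kfold([f"sub-{i:02d}" for i in range(1, 75)], k=5)
print([len(f) for f in folds])  # [15, 15, 15, 15, 14] — all 74 subjects, no overlap
```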
## Quick start
The repository ships a one-call wrapper, `load_benchmark`, that handles dataset download, windowing, label assignment, and cross-validation splitting. Per-task auto-download keeps it to ~2–3 GB instead of the full 11 GB.
1. Install dependencies

```bash
pip install -U huggingface_hub torch numpy pandas scikit-learn
```
2. Pull the wrapper + dataset API

Both files live at the repo root and are downloaded once with `hf_hub_download`:

```python
from huggingface_hub import hf_hub_download

for fn in ["dataset_api.py", "lfame.py"]:
    hf_hub_download("L-FAME-Dataset-Benchmark/L-FAME",
                    filename=fn, repo_type="dataset", local_dir=".")
```
3. Run a benchmark task in one call

```python
import sys; sys.path.insert(0, ".")
from lfame import load_benchmark

# Task 1 — Cognitive State Decoding (rest vs. focused meditation)
train, val, test = load_benchmark(task_id=1, cv_strategy="inter")

# or LOSO holding out sub-05
train, val, test = load_benchmark(task_id=1, cv_strategy="loso", test_subject="05")

# Task 2 — Technique Classification (HK vs. SA vs. BF)
train, val, test = load_benchmark(task_id=2, cv_strategy="inter", task2_session="pre")

# Task 3 — Cross-Session Adaptation, 30-shot calibration on sub-12
train, val, test = load_benchmark(task_id=3, cv_strategy="intra_30_shot",
                                  test_subject="12")

# Standard PyTorch training loop
for X, y in train:
    print(X.shape, y.shape)  # torch.Size([64, 64, 1000]) torch.Size([64])
    break
```
The first call auto-downloads only the segments needed for that `task_id` into `~/.cache/huggingface/`; subsequent runs are cache hits.
## Cross-validation strategies
| `cv_strategy` | Used for | Description |
|---|---|---|
| `loso` | Tasks 1, 2 | Leave-one-subject-out; pass `test_subject="XX"` |
| `inter` | Tasks 1, 2 | 5-fold inter-subject (subject-stratified for Task 2) |
| `intra-block` | Tasks 1, 3 | 20-second blocks alternating train/test |
| `intra-chrono` | Tasks 1, 3 | First 80% of session for train, last 20% for test |
| `intra-zero` | Task 3 | Zero-shot: pre-trained model evaluated on post-session data |
| `intra_{N}_shot` | Task 3 | N-shot calibration; paper uses N = 10 and N = 30 |
Window length defaults to 4 s @ 250 Hz; overlap defaults are paper-faithful (50% for cross-subject, 87.5% for all `intra-*` strategies). Override with `window_sec=...`, `overlap_sec=...`.
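The windowing arithmetic can be sketched as follows (`make_windows` is a hypothetical illustration, not the loader's API; defaults mirror the cross-subject setting of 4 s windows with 50% overlap):

```python
import numpy as np

def make_windows(x, fs=250, window_sec=4.0, overlap_sec=2.0):
    """Slice a (C, T) array into overlapping windows of shape (N, C, W).
    Hop = window minus overlap; for the intra-* setting use overlap_sec=3.5
    (87.5% overlap, hop of 125 samples)."""
    win = int(window_sec * fs)
    hop = int((window_sec - overlap_sec) * fs)
    starts = range(0, x.shape[-1] - win + 1, hop)
    return np.stack([x[:, s:s + win] for s in starts])

x = np.zeros((64, 120_000))  # e.g. a 480 s slMedita segment at 250 Hz
w = make_windows(x)
print(w.shape)  # (239, 64, 1000)
```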
### Dev mode (small subset for pipeline debugging)
```python
train, val, test = load_benchmark(
    task_id=1, cv_strategy="inter",
    download_subjects=["01", "02", "03"],  # only download these subjects (~150 MB)
    subject_filter=["01", "02", "03"],     # build the index from only these
)
```
## Lower-level data access
If you need a different preprocessing tier, custom windowing, or just want the raw files, skip the wrapper and use `huggingface_hub.snapshot_download` directly:
```python
from huggingface_hub import snapshot_download
import numpy as np

# All ML-ready segmented tensors (≈ 11 GB)
root = snapshot_download(
    repo_id="L-FAME-Dataset-Benchmark/L-FAME", repo_type="dataset",
    allow_patterns=["derivatives/ml_preproc_data/**", "participants.tsv"],
)
arr = np.load(f"{root}/derivatives/ml_preproc_data/sub-001/"
              "sub-001_ses-premedita_task-slMedita_eeg_preproc.npy")
print(arr.shape, arr.dtype)  # (64, ~120000), float32
```
**Note on the dataset viewer.** The viewer panel on this page shows only `participants.tsv`. The raw BIDS files (`.eeg`/`.vhdr`/`.vmrk`/`.set`/`.fdt`/`.npy`) are not tabular and are intentionally excluded from automatic loading; download them via `snapshot_download` (above) or browse the Files and versions tab.
## License
This dataset is released under CC BY-NC 4.0. You may copy, redistribute, remix, and build upon the material for non-commercial purposes with appropriate attribution. Commercial use requires explicit written permission from the authors. By downloading the dataset you agree not to attempt re-identification of any participant, and to comply with applicable data-protection regulations (HIPAA / GDPR where relevant).
## Citation
NeurIPS 2026 Datasets & Benchmarks Track submission — currently under double-blind review. The full citation block will be filled in upon acceptance.
```bibtex
@inproceedings{lfame2026,
  title     = {L-FAME: Longitudinal Focused Attention Meditation EEG Dataset and Benchmark},
  author    = {<TODO: fill in author list at acceptance>},
  booktitle = {Advances in Neural Information Processing Systems (NeurIPS) 39 — Datasets and Benchmarks Track},
  year      = {2026},
  note      = {Under review}
}
```
If you use the dataset before the paper is published, please cite this Hugging Face repository directly (L-FAME-Dataset-Benchmark/L-FAME).
## Authors and acknowledgments
Author and affiliation block redacted during the NeurIPS double-blind review period. Real names and affiliations will be added at acceptance.
We thank Devin O'Rourke and Sidharth Chhabra at The Harmony Collective (Ypsilanti, MI) for expert guidance on the three meditation training protocols. We are grateful to the undergraduate and graduate student research assistants who collected and curated the EEG sessions: Ab Basit Rafi Syed, Pratham Pradhan, Annie Wozniak, Vu Song Thuy Nguyen, Genevieve Orlewicz, and Alisia Coipel. Most of all, we thank the 74 participants who completed the demanding longitudinal protocol.
## Changelog
- v1.0 — 2026-05 — Initial public release: 74-subject pre-intervention cohort, 44-subject post-intervention cohort, three derivative tiers, three benchmark task definitions.