---
pretty_name: Alljoined-1.6M
license: other
tags:
  - eeg
  - neuroscience
  - eegdash
  - brain-computer-interface
  - pytorch
size_categories:
  - 1K<n<10K
task_categories:
  - other
---

# Alljoined-1.6M

Dataset ID: nm000134

Reference: Xu2025

Canonical aliases: Alljoined16M · Alljoined_16M · Alljoined1p6M

At a glance: EEG · 20 subjects · 1525 recordings · CC-BY-NC-ND-4.0

## Load this dataset

This repo is a pointer. The raw EEG data lives at its canonical source (OpenNeuro / NEMAR); EEGDash streams it on demand and returns a PyTorch / braindecode dataset.

```python
# pip install eegdash
from eegdash import EEGDashDataset

ds = EEGDashDataset(dataset="nm000134", cache_dir="./cache")
print(len(ds), "recordings")
```

You can also load it via any of its canonical aliases, which are registered classes in eegdash.dataset:

```python
from eegdash.dataset import Alljoined16M

ds = Alljoined16M(cache_dir="./cache")
```

If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout, you can also pull it directly:

```python
from braindecode.datasets import BaseConcatDataset

ds = BaseConcatDataset.pull_from_hub("EEGDash/nm000134")
```
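Whichever loading path you use, the result is a dataset of continuous multichannel recordings, which are typically cut into fixed-length windows before PyTorch training (braindecode provides `create_windows_from_events` for this). A minimal sketch of that windowing step, using a synthetic 32-channel, 256 Hz array as a stand-in for a real recording:

```python
import numpy as np

SFREQ = 256   # sampling rate from the metadata below
N_CH = 32     # channel count from the metadata below

# Synthetic stand-in for 10 seconds of continuous EEG.
recording = np.random.randn(N_CH, 10 * SFREQ)

# Cut into non-overlapping 1-second windows,
# shaped (n_windows, n_channels, n_times) as PyTorch models expect.
n_windows = recording.shape[1] // SFREQ
windows = recording[:, : n_windows * SFREQ].reshape(N_CH, n_windows, SFREQ)
windows = windows.transpose(1, 0, 2)

print(windows.shape)  # (10, 32, 256)
```

For real recordings, prefer braindecode's windowing utilities, which preserve event annotations and metadata.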

## Dataset metadata

| Field | Value |
|---|---|
| Subjects | 20 |
| Recordings | 1525 |
| Tasks (count) | 1 |
| Channels | 32 (×1525 recordings) |
| Sampling rate (Hz) | 256 (×1525 recordings) |
| Total duration (h) | 129.5 |
| Size on disk | 8.2 GB |
| Recording type | EEG |
| Source | NEMAR |
| License | CC-BY-NC-ND-4.0 |
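As a back-of-the-envelope sanity check (an assumption for illustration, not documented by the source), the figures above are mutually consistent if raw samples are stored as 16-bit integers:

```python
# Check that total duration, sampling rate, and channel count roughly
# account for the reported size on disk, assuming 2 bytes per sample
# (the actual on-disk format is an assumption, not documented here).
hours = 129.5
sfreq = 256        # Hz
n_channels = 32

total_seconds = hours * 3600
total_samples = total_seconds * sfreq * n_channels
raw_gb = total_samples * 2 / 1e9  # 2 bytes per sample

print(f"{raw_gb:.1f} GB")  # ~7.6 GB, in the same ballpark as 8.2 GB on disk
```

The small gap to 8.2 GB is plausibly file-format overhead and sidecar metadata.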

## Links


Auto-generated from dataset_summary.csv and the EEGDash API. Do not edit this file by hand — update the upstream source and re-run scripts/push_metadata_stubs.py.