dataset_id: nm000133
title: Alljoined1
source: nemar
source_url: https://openneuro.org/datasets/nm000133
doi: 10.82901/nemar.nm000133
license: CC-BY-NC-ND-4.0
loader: { "library": "eegdash", "class": "EEGDashDataset", "kwargs": { "dataset": "nm000133" } }
catalog: https://huggingface.co/spaces/EEGDash/catalog
generated_by: huggingface-space/scripts/push_metadata_stubs.py
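The loader record above is machine-readable. A minimal sketch (standard library only, assuming a generic tool wants the dotted import path and keyword arguments without importing eegdash itself):

```python
import json

# Loader record exactly as published in the catalog row above.
loader = json.loads(
    '{ "library": "eegdash", "class": "EEGDashDataset",'
    ' "kwargs": { "dataset": "nm000133" } }'
)

# Dotted path a generic tool could hand to importlib, plus the kwargs
# it would pass to the resolved class.
dotted = f'{loader["library"]}.{loader["class"]}'
print(dotted, loader["kwargs"])
```

A real consumer would then import the class (for example with `importlib.import_module`) and call it with `loader["kwargs"]`.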

Alljoined1

Dataset ID: nm000133

Reference: Xu2024

Canonical aliases: Alljoined1 · Alljoined

At a glance: EEG · 8 subjects · 13 recordings · CC-BY-NC-ND-4.0

Load this dataset

This repo is a pointer. The raw EEG data lives at its canonical source (OpenNeuro / NEMAR); EEGDash streams it on demand and returns a PyTorch / braindecode dataset.

# pip install eegdash
from eegdash import EEGDashDataset

ds = EEGDashDataset(dataset="nm000133", cache_dir="./cache")
print(len(ds), "recordings")

You can also load it by canonical alias — these are registered classes in eegdash.dataset:

from eegdash.dataset import Alljoined1
ds = Alljoined1(cache_dir="./cache")
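Conceptually, each canonical alias is just a registered name bound to a dataset ID. A hypothetical sketch of such a registry (names invented for illustration; this is not eegdash's actual internal code):

```python
# Hypothetical registry mapping canonical aliases to their NEMAR dataset ids.
ALIASES = {
    "Alljoined1": "nm000133",
    "Alljoined": "nm000133",
}

def resolve(name: str) -> str:
    """Return the dataset id for a known alias, or the name unchanged."""
    return ALIASES.get(name, name)

print(resolve("Alljoined"))   # nm000133
print(resolve("nm000133"))    # nm000133
```

This is why `Alljoined1(cache_dir=...)` and `EEGDashDataset(dataset="nm000133", ...)` load the same data: the alias class simply fixes the dataset ID.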

If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout, you can also pull it directly:

from braindecode.datasets import BaseConcatDataset
ds = BaseConcatDataset.pull_from_hub("EEGDash/nm000133")

Dataset metadata

Subjects 8
Recordings 13
Tasks (count) 1
Channels 64 (×13)
Sampling rate (Hz) 512 (×13)
Size on disk 7.6 GB
Recording type EEG
Source nemar
License CC-BY-NC-ND-4.0
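As a sanity check on the table, the nominal uncompressed data rate per recording follows from the channel count and sampling rate (assuming float32 samples, which is an assumption; the on-disk format is not stated by the source):

```python
channels = 64          # from the metadata table
sfreq_hz = 512         # from the metadata table
bytes_per_sample = 4   # assumption: float32 storage

# Nominal uncompressed data rate for one recording.
rate_bytes_per_s = channels * sfreq_hz * bytes_per_sample
print(rate_bytes_per_s)          # 131072 bytes/s
print(rate_bytes_per_s / 2**20)  # 0.125 MiB/s
```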

Links

Source: https://openneuro.org/datasets/nm000133
DOI: 10.82901/nemar.nm000133
Catalog: https://huggingface.co/spaces/EEGDash/catalog
Auto-generated from dataset_summary.csv and the EEGDash API. Do not edit this file by hand — update the upstream source and re-run scripts/push_metadata_stubs.py.
