
CHB-MIT

Dataset ID: nm000110

Citation key: Connolly2010

Canonical aliases: CHBMIT · CHB_MIT

At a glance: EEG · 24 subjects · 686 recordings · ODC-By-1.0

Load this dataset

This repository is only a pointer: the raw EEG data lives at its canonical source (OpenNeuro / NEMAR). EEGDash streams it on demand and returns a PyTorch / braindecode-compatible dataset.

# pip install eegdash
from eegdash import EEGDashDataset

ds = EEGDashDataset(dataset="nm000110", cache_dir="./cache")
print(len(ds), "recordings")
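Each catalog entry also publishes a machine-readable loader stub of the form {"library": "eegdash", "class": "EEGDashDataset", "kwargs": {"dataset": "nm000110"}}. A minimal, stdlib-only sketch of rendering such a stub as the equivalent constructor call (build_call is an illustrative helper, not part of the eegdash API):

```python
import json

# Loader stub as published in the catalog entry for this dataset.
stub = json.loads(
    '{"library": "eegdash", "class": "EEGDashDataset",'
    ' "kwargs": {"dataset": "nm000110"}}'
)

def build_call(stub: dict) -> str:
    """Render a loader stub as the equivalent Python constructor call."""
    args = ", ".join(f"{k}={v!r}" for k, v in stub["kwargs"].items())
    return f'{stub["class"]}({args})'

print(build_call(stub))  # EEGDashDataset(dataset='nm000110')
```

In a tool that consumes the catalog, you would dispatch on the "library" field before constructing anything.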

You can also load it by canonical alias — these are registered classes in eegdash.dataset:

from eegdash.dataset import CHBMIT
ds = CHBMIT(cache_dir="./cache")

If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout, you can also pull it directly:

from braindecode.datasets import BaseConcatDataset
ds = BaseConcatDataset.pull_from_hub("EEGDash/nm000110")

Dataset metadata

Subjects 24
Recordings 686
Tasks (count) 1
Channels 23 (×306), 28 (×259), 38 (×39), 22 (×36), 24 (×30), 29 (×14), 25 (×1), 31 (×1)
Sampling rate (Hz) 256 (×686)
Total duration (h) 982.9
Size on disk 42.6 GB
Recording type EEG
Source nemar
License ODC-By-1.0
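The table above is internally consistent, which is easy to verify: the per-channel-count recording tallies sum to the recording total, and the total duration implies an average recording length. A quick sketch (all numbers copied from the card):

```python
# Consistency checks on the metadata table above.
# Keys are channel counts, values are how many recordings have that count.
channel_counts = {23: 306, 28: 259, 38: 39, 22: 36, 24: 30, 29: 14, 25: 1, 31: 1}
recordings = 686
total_hours = 982.9

# Every recording is accounted for in the channel breakdown.
assert sum(channel_counts.values()) == recordings

# Average recording length in minutes (~86 min).
avg_minutes = total_hours * 60 / recordings
print(f"avg duration: {avg_minutes:.1f} min per recording")
```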

Links

Source: https://openneuro.org/datasets/nm000110
DOI: 10.82901/nemar.nm000110
Catalog: https://huggingface.co/spaces/EEGDash/catalog

Auto-generated from dataset_summary.csv and the EEGDash API. Do not edit this file by hand — update the upstream source and re-run scripts/push_metadata_stubs.py.
