
Kojima2024A_P300

P300 dataset from the Kojima2024A study.

Dataset ID: nm000193

At a glance: EEG · Auditory attention · Healthy · 11 subjects · 66 recordings · CC0-1.0

Load this dataset

This repo is a pointer. The raw EEG data lives at its canonical source (OpenNeuro / NEMAR); EEGDash streams it on demand and returns a PyTorch / braindecode dataset.

# pip install eegdash
from eegdash import EEGDashDataset

ds = EEGDashDataset(dataset="nm000193", cache_dir="./cache")
print(len(ds), "recordings")

If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout, you can also pull it directly:

from braindecode.datasets import BaseConcatDataset
ds = BaseConcatDataset.pull_from_hub("EEGDash/nm000193")
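The EEGDash catalog publishes a small loader stub for each dataset (for this one: {"library": "eegdash", "class": "EEGDashDataset", "kwargs": {"dataset": "nm000193"}}), so a generic consumer can dispatch on the stub instead of hard-coding the class. A minimal sketch — the stub contents come from the catalog, but the dispatch helper itself is illustrative, not part of the EEGDash API:

```python
import importlib

# Loader stub as published in the EEGDash catalog for this dataset.
LOADER = {
    "library": "eegdash",
    "class": "EEGDashDataset",
    "kwargs": {"dataset": "nm000193"},
}

def load_from_stub(stub, **overrides):
    """Import the stub's library, look up the named class, and
    instantiate it with the stub's kwargs merged with any caller
    overrides (e.g. cache_dir)."""
    module = importlib.import_module(stub["library"])
    cls = getattr(module, stub["class"])
    return cls(**{**stub["kwargs"], **overrides})

# ds = load_from_stub(LOADER, cache_dir="./cache")  # requires eegdash installed
```

The same helper works for any catalog entry, since the stub fully names the library, class, and keyword arguments.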

Dataset metadata

Subjects: 11
Recordings: 66
Tasks: 1
Channels: 64 (×66 recordings)
Sampling rate: 1000 Hz (×66 recordings)
Total duration: 5.8 h
Size on disk: 3.7 GB
Recording type: EEG
Experimental modality: Auditory
Paradigm type: Attention
Population: Healthy
Source: NEMAR
License: CC0-1.0
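The figures above can be cross-checked against each other: 5.8 h of 64-channel data at 1000 Hz is about 1.34 billion samples, i.e. roughly 2.7 GB at int16 or 5.3 GB at float32, which brackets the reported 3.7 GB on disk. A quick back-of-the-envelope check (assuming the stated totals; sample precision on disk is not given):

```python
# Cross-check the metadata: total sample count and uncompressed size estimates.
HOURS = 5.8
CHANNELS = 64
SFREQ = 1000  # Hz

total_samples = HOURS * 3600 * SFREQ * CHANNELS
gb_int16 = total_samples * 2 / 1e9    # 2 bytes per sample
gb_float32 = total_samples * 4 / 1e9  # 4 bytes per sample

print(f"{total_samples:.3e} samples")
print(f"~{gb_int16:.1f} GB as int16, ~{gb_float32:.1f} GB as float32")
```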

Links

- Source data (OpenNeuro / NEMAR): https://openneuro.org/datasets/nm000193
- EEGDash catalog: https://huggingface.co/spaces/EEGDash/catalog
Auto-generated from dataset_summary.csv and the EEGDash API. Do not edit this file by hand — update the upstream source and re-run scripts/push_metadata_stubs.py.
