
Registered Report of ERN During Three Versions of a Flanker Task

Dataset ID: ds004883

Clayson2023_Registerd

At a glance: EEG · Visual decision-making · healthy · 172 subjects · 516 recordings · CC0

Load this dataset

This repo is a pointer. The raw EEG data lives at its canonical source (OpenNeuro / NEMAR); EEGDash streams it on demand and returns a PyTorch / braindecode dataset.

# pip install eegdash
from eegdash import EEGDashDataset

# Streams the raw files from the canonical source on first access,
# caching them locally under ./cache.
ds = EEGDashDataset(dataset="ds004883", cache_dir="./cache")
print(len(ds), "recordings")
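
Each element of ds corresponds to one recording. A minimal sketch of inspecting the first one, assuming the returned object follows braindecode's BaseConcatDataset conventions (sub-datasets exposing the underlying MNE Raw as .raw and their metadata as .description); check the EEGDash docs if your version differs:

# A sketch assuming braindecode BaseConcatDataset conventions.
first = ds.datasets[0]           # one recording
raw = first.raw                  # underlying mne.io.Raw; triggers the download
print(raw.info["sfreq"], "Hz,", len(raw.ch_names), "channels")
print(first.description)         # per-recording metadata (subject, task, run, ...)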

If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout, you can also pull it directly:

from braindecode.datasets import BaseConcatDataset
ds = BaseConcatDataset.pull_from_hub("EEGDash/ds004883")
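
Either way, the result is a braindecode-style concat dataset, so the usual windowing utilities apply. A minimal sketch that cuts the continuous recordings into fixed-length windows with braindecode's create_fixed_length_windows; the window length and stride are illustrative, and event-locked windowing would first require inspecting the dataset's actual annotation names:

from braindecode.preprocessing import create_fixed_length_windows

# Non-overlapping 2 s windows: 500 Hz x 2 s = 1000 samples.
windows = create_fixed_length_windows(
    ds,
    window_size_samples=1000,
    window_stride_samples=1000,
    drop_last_window=True,
)
print(len(windows), "windows;", windows[0][0].shape)  # (n_channels, n_times)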

Dataset metadata

Subjects 172
Age range 18–58 yrs, mean 20.2
Recordings 516
Tasks (count) 3
Channels 129 (×516)
Sampling rate (Hz) 500 (×516)
Total duration (h) 140.0
Size on disk 122.8 GiB
Recording type EEG
Experimental modality Visual
Paradigm type Decision-making
Population Healthy
BIDS version v1.8.0
Source openneuro
License CC0
NEMAR citations 3
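
The headline numbers above are mutually consistent, which is a quick way to sanity-check a mirror. A back-of-envelope check, assuming 32-bit samples (an assumption; the on-disk size also includes BIDS sidecars and format overhead):

hours, channels, sfreq, bytes_per_sample = 140.0, 129, 500, 4
approx = hours * 3600 * sfreq * channels * bytes_per_sample
print(f"{approx / 1e9:.1f} GB")  # ~130.0 GB, close to the exact 131,858,855,109 bytes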

Tasks

  • ffa
  • ffb
  • ffc
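
ffa, ffb, and ffc are the three flanker variants compared in the study. If you only need one, a per-task filter at load time may work; the task= kwarg below is an assumption modeled on the dataset= kwarg above, so verify the exact filter fields against the EEGDash documentation:

# Hypothetical per-task filter; task= is an assumption, not a confirmed kwarg.
ds_ffa = EEGDashDataset(dataset="ds004883", task="ffa", cache_dir="./cache")
print(len(ds_ffa), "recordings for task ffa")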

Upstream README

Verbatim from the dataset's authors — the canonical description.

This study is described at https://osf.io/qt2zh/. Scripts used for data processing are posted there. Here is the script from the manuscript that describes these data.

Error-related negativity is a widely used measure of error monitoring, and many projects are independently moving ERN recorded during a flanker task towards standardization, optimization, and eventual clinical application. However, each project uses a different version of the flanker task and tacitly assumes ERN is functionally equivalent across each version. The routine neglect of a rigorous test of this assumption undermines efforts to integrate ERN findings across tasks, optimize and standardize ERN assessment, and widely apply ERN in clinical trials. The purpose of this registered report was to determine whether ERN shows similar experimental effects (correct vs. error trials) and data quality (intraindividual variability) during three commonly-used versions of a flanker task. ERN was recorded from 172 participants during three versions of a flanker task across two study sites. ERN scores showed numerical differences between tasks, raising questions about the comparability of ERN findings across studies and tasks. Although ERN scores from all three versions of the flanker task yielded high data quality and internal consistency, one version did outperform the other two in terms of the size of experimental effects and the data quality. Exploratory analyses of the error positivity (Pe) provided tentative support for the other two versions of the task over the paradigm that appeared optimal for ERN. The present study provides a roadmap for how to statistically compare psychometric characteristics of ERP scores across paradigms and gives preliminary recommendations for flanker tasks to use for ERN- and Pe-focused studies.
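
To make the abstract's core contrast concrete (response-locked error minus correct ERPs), here is a minimal MNE-based sketch. It is not the authors' pipeline (their scripts are at the OSF link above), and the annotation names "error" and "correct" plus the channel "FCz" are placeholders; inspect raw.annotations and the montage before running:

import mne

# raw: one recording's mne.io.Raw, e.g. from the loading example above.
events, event_id = mne.events_from_annotations(raw)
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.2, tmax=0.6,
                    baseline=(-0.2, 0.0), preload=True)

# Error-minus-correct difference wave; "error"/"correct" are placeholder labels.
ern = mne.combine_evoked(
    [epochs["error"].average(), epochs["correct"].average()], weights=[1, -1]
)

# Mean amplitude 0-100 ms post-response at a fronto-central placeholder site.
fcz = ern.copy().pick("FCz").crop(tmin=0.0, tmax=0.1)
print(fcz.data.mean() * 1e6, "µV (mean 0-100 ms at FCz)")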

People

Authors

  • Peter E. Clayson
  • Michael J. Larson (senior)

Contact

  • Peter Clayson

Links

  • Source: https://openneuro.org/datasets/ds004883
  • DOI: 10.18112/openneuro.ds004883.v1.0.0
  • Catalog: https://huggingface.co/spaces/EEGDash/catalog

Provenance

  • Backend: s3 (s3://openneuro.org/ds004883)
  • Exact size: 131,858,855,109 bytes (122.8 GiB)
  • Ingested: 2026-04-06
  • Stats computed: 2026-04-04
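
The bucket is publicly readable as part of the AWS Open Data program, so the provenance can be verified without EEGDash. A sketch using anonymous boto3 access:

import boto3
from botocore import UNSIGNED
from botocore.config import Config

# Anonymous (unsigned) access to OpenNeuro's public bucket.
s3 = boto3.client("s3", config=Config(signature_version=UNSIGNED))
resp = s3.list_objects_v2(Bucket="openneuro.org", Prefix="ds004883/", MaxKeys=10)
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])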

Auto-generated from dataset_summary.csv and the EEGDash API. Do not edit this file by hand — update the upstream source and re-run scripts/push_metadata_stubs.py.
