
T22

Dataset ID: ds004844

Metcalfe2023_T22

At a glance: EEG · Visual decision-making · Healthy · 17 subjects · 68 recordings · CC0

Load this dataset

This repo is a pointer. The raw EEG data lives at its canonical source (OpenNeuro / NEMAR); EEGDash streams it on demand and returns a PyTorch / braindecode dataset.

# pip install eegdash
from eegdash import EEGDashDataset

ds = EEGDashDataset(dataset="ds004844", cache_dir="./cache")
print(len(ds), "recordings")

If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout, you can also pull it directly:

from braindecode.datasets import BaseConcatDataset
ds = BaseConcatDataset.pull_from_hub("EEGDash/ds004844")
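
EEGDash resolves remote paths for you, but for orientation, here is a minimal sketch (not part of the EEGDash API) of how one recording's location at the canonical S3 backend can be composed under standard BIDS conventions. The task name "Drive" comes from the card below; the zero-padding, entity order, and absence of a file extension are assumptions, and the real layout may differ.

```python
# Illustrative only: compose a BIDS-style key prefix for one recording
# in the dataset's S3 backend. Entity formatting is an assumption here.
BACKEND = "s3://openneuro.org/ds004844"

def recording_prefix(subject: int, session: int, task: str = "Drive") -> str:
    """BIDS-style key prefix for one EEG recording (no extension)."""
    sub, ses = f"sub-{subject:02d}", f"ses-{session:02d}"
    return f"{BACKEND}/{sub}/{ses}/eeg/{sub}_{ses}_task-{task}_eeg"

print(recording_prefix(1, 2))
```

With 17 subjects and 4 sessions each, iterating this prefix over `subject in range(1, 18)` and `session in range(1, 5)` enumerates the 68 recordings the card reports.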

Dataset metadata

Subjects 17
Recordings 68
Tasks (count) 1
Sessions 4
Channels 72 (×68)
Sampling rate (Hz) 1024 (×68)
Total duration (h) 21.3
Size on disk 22.3 GB
Recording type EEG
Experimental modality Visual
Paradigm type Decision-making
Population Healthy
BIDS version 1.8.0
Source openneuro
License CC0
NEMAR citations 0
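
The aggregate numbers above can be cross-checked with a little arithmetic, assuming 4-byte samples (the actual on-disk bit depth and container overhead may differ):

```python
# Sanity-check the metadata table: average recording length and a
# rough raw-size estimate at an assumed 4 bytes per sample.
hours, recordings, channels, fs = 21.3, 68, 72, 1024

minutes_per_recording = hours * 60 / recordings      # ~18.8 min each
est_bytes = hours * 3600 * fs * channels * 4         # ~2.26e10 bytes

print(f"~{minutes_per_recording:.1f} min per recording")
print(f"estimated raw size: {est_bytes / 1e9:.1f} GB")
```

The estimate lands within roughly 10% of the reported size on disk, so the duration, channel count, and sampling rate in the table are mutually consistent.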

Tasks

  • Drive

Upstream README

Verbatim from the dataset's authors — the canonical description.

TX22 dataset: Predicting and influencing trust-based decisions about control authority hand-off and take-over during simulated, semi-automated driving in a leader-follower paradigm.

Vehicle survivability is critically important in today's military. Significant DoD investments have focused on developing and integrating autonomous vehicle technologies to mitigate the effects of human error and thus enhance survivability and mission effectiveness. In a previous experiment (SANDR designation: ARL_TX20), we explored how a human operator's acceptance and use of advanced technology is influenced by their trust and related factors, like subjective workload and automation reliability. Nevertheless, more critical than measuring and achieving a certain level of trust is the need for a capability to resolve observed (or predicted) discrepancies between trust and trustworthiness that will undermine effective joint system performance. Using the same paradigm as we developed for our previous experiment (ARL_TX20), here we explore our ability to (a) make accurate real-time predictions of instances where intervention is necessary and (b) use those predictions to provide feedback to the driver that is intended to support active "trust management" by influencing the trust-based decisions of the driver.

People

Authors

  • Jason S. Metcalfe
  • Victor Paul
  • Benjamin Haynes
  • Corey Atwater
  • Amar Marathe
  • Gregory Gremillion
  • Kim Drnec
  • William Nothwang
  • Justin R. Estepp
  • Margaret Bowers
  • Jamie Lukos
  • Tony Johnson
  • Mike Dunkel
  • Stephen Gordon
  • Jon Touryan
  • Kevin King (senior)

Contact

  • Kevin King

Links

  • Source: https://openneuro.org/datasets/ds004844
  • DOI: 10.18112/openneuro.ds004844.v1.0.0
  • Catalog: https://huggingface.co/spaces/EEGDash/catalog

Provenance

  • Backend: s3 (s3://openneuro.org/ds004844)
  • Exact size: 23,976,121,518 bytes (22.3 GB)
  • Ingested: 2026-04-06
  • Stats computed: 2026-04-04
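
A note on units: the "22.3 GB" above is the exact byte count expressed in binary gigabytes (GiB), as a quick conversion shows:

```python
# Convert the exact size from Provenance into both unit conventions.
exact_bytes = 23_976_121_518

gib = exact_bytes / 2**30   # binary: ~22.3 GiB (what the card reports)
gb = exact_bytes / 1e9      # decimal: ~24.0 GB

print(f"{gib:.1f} GiB / {gb:.1f} GB")
```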

Auto-generated from dataset_summary.csv and the EEGDash API. Do not edit this file by hand — update the upstream source and re-run scripts/push_metadata_stubs.py.
