---
pretty_name: "FACED - Finer-grained Affective Computing EEG Dataset"
license: cc-by-4.0
tags:
- eeg
- neuroscience
- eegdash
- brain-computer-interface
- pytorch
size_categories:
- n<1K
task_categories:
- other
---

# FACED - Finer-grained Affective Computing EEG Dataset

**Dataset ID:** `nm000112` (`Liu2024_112`)

**Canonical aliases:** `FACED`

> **At a glance:** EEG · 123 subjects · 123 recordings · CC-BY-4.0

## Load this dataset

This repo is a **pointer**. The raw EEG data lives at its canonical source (OpenNeuro / NEMAR); [EEGDash](https://github.com/eegdash/EEGDash) streams it on demand and returns a PyTorch / braindecode dataset.

```python
# pip install eegdash
from eegdash import EEGDashDataset

ds = EEGDashDataset(dataset="nm000112", cache_dir="./cache")
print(len(ds), "recordings")
```

You can also load it by canonical alias — these are registered classes in `eegdash.dataset`:

```python
from eegdash.dataset import FACED

ds = FACED(cache_dir="./cache")
```

If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout, you can also pull it directly:

```python
from braindecode.datasets import BaseConcatDataset

ds = BaseConcatDataset.pull_from_hub("EEGDash/nm000112")
```

## Dataset metadata

| | |
|---|---|
| **Subjects** | 123 |
| **Recordings** | 123 |
| **Tasks (count)** | 1 |
| **Channels** | 32 (×123) |
| **Sampling rate (Hz)** | 1000 (×68), 250 (×55) |
| **Total duration (h)** | 155.3 |
| **Size on disk** | 31.4 GB |
| **Recording type** | EEG |
| **Source** | NEMAR |
| **License** | CC-BY-4.0 |

## Links

- **DOI:** [10.82901/nemar.nm000112](https://doi.org/10.82901/nemar.nm000112)
- **NEMAR:** [nm000112](https://nemar.org/dataexplorer/detail?dataset_id=nm000112)
- **Browse 700+ datasets:** [EEGDash catalog](https://huggingface.co/spaces/EEGDash/catalog)
- **Docs:**
- **Code:**

---

_Auto-generated from [dataset_summary.csv](https://github.com/eegdash/EEGDash/blob/main/eegdash/dataset/dataset_summary.csv) and the [EEGDash API](https://data.eegdash.org/api/eegdash/datasets/summary/nm000112). Do not edit this file by hand — update the upstream source and re-run `scripts/push_metadata_stubs.py`._
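As a quick back-of-the-envelope check on the metadata table, the total duration and recording count imply the average recording length. This is a rough sketch only: the figures are copied from the table, and actual recording lengths vary across subjects.

```python
# Figures taken from the metadata table above.
total_hours = 155.3    # total duration (h)
n_recordings = 123     # one recording per subject

# Average recording length in minutes (approximate; individual
# recordings differ, and two sampling rates are present in the set).
avg_minutes = total_hours * 60 / n_recordings
print(f"~{avg_minutes:.0f} minutes per recording on average")
```

This suggests each recording runs a bit over an hour, which is worth keeping in mind when budgeting the ~31 GB download and local cache space.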