---
pretty_name: "M3CV: Multi-subject, Multi-session, Multi-task EEG Database"
license: cc-by-4.0
tags:
- eeg
- neuroscience
- eegdash
- brain-computer-interface
- pytorch
size_categories:
- 1K<n<10K
task_categories:
- other
---
# M3CV: Multi-subject, Multi-session, Multi-task EEG Database
**Dataset ID:** `nm000166`
_Huang2018_
> **At a glance:** EEG · 95 subjects · 2469 recordings · CC BY 4.0
## Load this dataset
This repo is a **pointer**. The raw EEG data lives at its canonical source
(OpenNeuro / NEMAR); [EEGDash](https://github.com/eegdash/EEGDash) streams it
on demand and returns a PyTorch / braindecode dataset.
```python
# pip install eegdash
from eegdash import EEGDashDataset
ds = EEGDashDataset(dataset="nm000166", cache_dir="./cache")
print(len(ds), "recordings")
```
If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout,
you can also pull it directly:
```python
from braindecode.datasets import BaseConcatDataset
ds = BaseConcatDataset.pull_from_hub("EEGDash/nm000166")
```
## Dataset metadata
| Field | Value |
|---|---|
| **Subjects** | 95 |
| **Recordings** | 2469 |
| **Tasks (count)** | 13 |
| **Channels** | 64 (same for all 2469 recordings) |
| **Sampling rate (Hz)** | 250 (same for all 2469 recordings) |
| **Total duration (h)** | 100.5 |
| **Size on disk** | 21.6 GB |
| **Recording type** | EEG |
| **Source** | NEMAR |
| **License** | CC BY 4.0 |
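As a quick sanity check on the figures above, the totals imply an average recording of roughly 2.4 minutes. The sketch below is plain arithmetic on the table's values (no EEGDash or downloaded data required):

```python
# Figures taken from the metadata table above.
total_hours = 100.5       # total duration across all recordings
n_recordings = 2469
sampling_rate_hz = 250

avg_seconds = total_hours * 3600 / n_recordings
avg_samples = avg_seconds * sampling_rate_hz

print(f"average recording length: {avg_seconds:.1f} s")
print(f"samples per recording (64 channels each): {avg_samples:,.0f}")
```

Useful for estimating memory and windowing parameters before streaming the full 21.6 GB.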
## Links
- **DOI:** [10.1016/j.neuroimage.2022.119666](https://doi.org/10.1016/j.neuroimage.2022.119666)
- **NEMAR:** [nm000166](https://nemar.org/dataexplorer/detail?dataset_id=nm000166)
- **Browse 700+ datasets:** [EEGDash catalog](https://huggingface.co/spaces/EEGDash/catalog)
- **Docs:** <https://eegdash.org>
- **Code:** <https://github.com/eegdash/EEGDash>
---
_Auto-generated from [dataset_summary.csv](https://github.com/eegdash/EEGDash/blob/main/eegdash/dataset/dataset_summary.csv) and the [EEGDash API](https://data.eegdash.org/api/eegdash/datasets/summary/nm000166). Do not edit this file by hand — update the upstream source and re-run `scripts/push_metadata_stubs.py`._