---
pretty_name: "MEG: Major Depression & Probabilistic Learning Task"
license: cc0-1.0
tags:
- meg
- neuroscience
- eegdash
- brain-computer-interface
- pytorch
- visual
- learning
- depression
size_categories:
- n<1K
task_categories:
- other
---

# MEG: Major Depression & Probabilistic Learning Task

**Dataset ID:** `ds005356` _DS5356_MajorDepression_

> **At a glance:** MEG · Visual learning · depression · 85 subjects · 116 recordings · CC0

## Load this dataset

This repo is a **pointer**. The raw MEG data lives at its canonical source (OpenNeuro / NEMAR); [EEGDash](https://github.com/eegdash/EEGDash) streams it on demand and returns a PyTorch / braindecode dataset.

```python
# pip install eegdash
from eegdash import EEGDashDataset

ds = EEGDashDataset(dataset="ds005356", cache_dir="./cache")
print(len(ds), "recordings")
```

If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout, you can also pull it directly:

```python
from braindecode.datasets import BaseConcatDataset

ds = BaseConcatDataset.pull_from_hub("EEGDash/ds005356")
```

## Dataset metadata

| | |
|---|---|
| **Subjects** | 85 |
| **Recordings** | 116 |
| **Tasks (count)** | 1 |
| **Channels** | 396 (×113), 450 (×2) |
| **Sampling rate (Hz)** | 1000 (×55) |
| **Total duration (h)** | 18.2 |
| **Size on disk** | 161.6 GB |
| **Recording type** | MEG |
| **Experimental modality** | Visual |
| **Paradigm type** | Learning |
| **Population** | Depression |
| **Source** | openneuro |
| **License** | CC0 |

## Links

- **DOI:** [10.18112/openneuro.ds005356.v1.5.0](https://doi.org/10.18112/openneuro.ds005356.v1.5.0)
- **OpenNeuro:** [ds005356](https://openneuro.org/datasets/ds005356)
- **Browse 700+ datasets:** [EEGDash catalog](https://huggingface.co/spaces/EEGDash/catalog)
- **Docs:**
- **Code:**

---

_Auto-generated from [dataset_summary.csv](https://github.com/eegdash/EEGDash/blob/main/eegdash/dataset/dataset_summary.csv) and the [EEGDash API](https://data.eegdash.org/api/eegdash/datasets/summary/ds005356). Do not edit this file by hand — update the upstream source and re-run `scripts/push_metadata_stubs.py`._
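As a quick sanity check before committing 161.6 GB of cache space, the summary numbers above let you estimate the average recording length; this is a minimal sketch using only the card's stated totals (18.2 hours across 116 recordings), not values read from the data itself:

```python
# Average recording duration implied by the metadata table
# (18.2 total hours over 116 recordings).
total_hours = 18.2
n_recordings = 116

avg_minutes = total_hours * 60 / n_recordings
print(f"~{avg_minutes:.1f} min per recording on average")
```

Individual runs will vary around this mean, so treat it only as a rough guide when budgeting download time and disk space.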