---
pretty_name: "ArEEG: Arabic Inner Speech EEG dataset"
license: cc0-1.0
tags:
- eeg
- neuroscience
- eegdash
- brain-computer-interface
- pytorch
size_categories:
- n<1K
task_categories:
- other
---

# ArEEG: Arabic Inner Speech EEG dataset

**Dataset ID:** `ds005262` _Metwalli2024_
**Canonical aliases:** `ArEEG`

> **At a glance:** EEG · 12 subjects · 186 recordings · CC0

## Load this dataset

This repo is a **pointer**: the raw EEG data lives at its canonical source (OpenNeuro / NEMAR), and [EEGDash](https://github.com/eegdash/EEGDash) streams it on demand, returning a PyTorch / braindecode dataset.

```python
# pip install eegdash
from eegdash import EEGDashDataset

ds = EEGDashDataset(dataset="ds005262", cache_dir="./cache")
print(len(ds), "recordings")
```

You can also load it by its canonical alias; aliases are registered classes in `eegdash.dataset`:

```python
from eegdash.dataset import ArEEG

ds = ArEEG(cache_dir="./cache")
```

If the dataset has been mirrored to the HF Hub in braindecode's Zarr layout, you can also pull it directly:

```python
from braindecode.datasets import BaseConcatDataset

ds = BaseConcatDataset.pull_from_hub("EEGDash/ds005262")
```

## Dataset metadata

| | |
|---|---|
| **Subjects** | 12 |
| **Recordings** | 186 |
| **Tasks (count)** | 1 |
| **Channels** | 8 (×186) |
| **Sampling rate (Hz)** | 250 (×186) |
| **Total duration (h)** | 25.0 |
| **Size on disk** | 688.8 MB |
| **Recording type** | EEG |
| **Source** | OpenNeuro |
| **License** | CC0 |
| **NEMAR citations** | 0 |

## Links

- **DOI:** [10.18112/openneuro.ds005262.v1.0.1](https://doi.org/10.18112/openneuro.ds005262.v1.0.1)
- **OpenNeuro:** [ds005262](https://openneuro.org/datasets/ds005262)
- **Browse 700+ datasets:** [EEGDash catalog](https://huggingface.co/spaces/EEGDash/catalog)
- **Docs:**
- **Code:**

---

_Auto-generated from [dataset_summary.csv](https://github.com/eegdash/EEGDash/blob/main/eegdash/dataset/dataset_summary.csv) and the [EEGDash API](https://data.eegdash.org/api/eegdash/datasets/summary/ds005262). Do not edit this file by hand; update the upstream source and re-run `scripts/push_metadata_stubs.py`._
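As a sanity check, the metadata above hangs together: 25 hours of 8-channel recordings at 250 Hz is roughly the reported ~689 MB if samples are stored as 4-byte floats. The float32 assumption and the arithmetic below are ours, not part of the upstream metadata:

```python
# Back-of-envelope size estimate from the card's metadata table.
# Values copied from the table; float32 (4 bytes/sample) is an assumption.
hours = 25.0       # total duration (h)
sfreq = 250        # sampling rate (Hz)
n_channels = 8     # channels per recording

samples_per_channel = int(hours * 3600 * sfreq)   # 22,500,000
total_values = samples_per_channel * n_channels   # 180,000,000
size_mib = total_values * 4 / 1024**2             # ~687 MiB as float32

print(f"{total_values:,} values -> ~{size_mib:.0f} MiB")
```

The estimate (~687 MiB) lands close to the 688.8 MB listed under "Size on disk", so the streamed dataset should fit comfortably in the local `cache_dir`.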