Datasets: Add subsets for all supported metadata sets and docs for using EEG data #7
by nicholascarr - opened

README.md CHANGED
```diff
@@ -1,10 +1,15 @@
 ---
 license: cc-by-nc-sa-4.0
 configs:
-- config_name:
+- config_name: stim_metadata # appears as the subset name
   data_files:
   - split: train # what the viewer dropdown shows
     path: "preprocessed_eeg/experiment_metadata_categories.parquet"
+- config_name: stimuli
+  data_files: "stimuli.zip"
+- config_name: stim_order
+  data_files: "preprocessed_eeg/*/experiment_metadata_categories.parquet"
+  default: true
 pretty_name: Alljoined-1.6M
 extra_gated_prompt: >-
   By clicking on "Access repository" below, you also agree to Alljoined Dataset Terms of Access:
@@ -39,6 +44,23 @@ tags:
 
 We present a new large-scale electroencephalography (EEG) dataset as part of the THINGS initiative, comprising over 1.6 million visual stimulus trials collected from 20 participants, and totaling more than twice the size of the most popular current benchmark dataset, THINGS-EEG2. Crucially, our data was recorded using a 32-channel consumer-grade wet electrode system costing ~$2.2k, around 27x cheaper than research-grade EEG systems typically used in cognitive neuroscience labs. Our work is one of the first open-source, large-scale EEG resource designed to closely reflect the quality of hardware that is practical to deploy in real-world, downstream applications of brain-computer interfaces (BCIs). We aim to explore the specific question of whether deep neural network-based BCI research and semantic decoding methods can be effectively conducted with such affordable systems, filling an important gap in current literature that is extremely relevant for future research. In our analysis, we not only demonstrate that decoding of high-level semantic information from EEG of visualized images is possible at consumer-grade hardware, but also that our data can facilitate effective EEG-to-Image reconstruction even despite significantly lower signal-to-noise ratios. In addition to traditional benchmarks, we also conduct analyses of EEG-to-Image models that demonstrate log-linear decoding performance with increasing data volume on our data, and discuss the trade-offs between hardware cost, signal fidelity, and the scale of data collection efforts in increasing the size and utility of currently available datasets. Our contributions aim to pave the way for large-scale, cost-effective EEG research with widely accessible equipment, and position our dataset as a unique resource for the democratization and development of effective deep neural models of visual cognition.
 
+### How to Use
+
+To download the full dataset including EEG data, use the huggingface_hub Python package [(docs)](https://huggingface.co/docs/huggingface_hub/en/guides/download#download-an-entire-repository), or clone the dataset with Git + LFS + Xet [(docs)](https://huggingface.co/docs/hub/xet/using-xet-storage#git).
+
+```py
+from huggingface_hub import snapshot_download
+snapshot_download(repo_id="Alljoined/Alljoined-1.6M", repo_type="dataset")
+```
+
+For convenience, some metadata that are already in a format compatible with the Hugging Face `datasets` Python library are exposed as subsets:
+
+- Image stimuli in stimuli.zip are exposed under the `stimuli` subset
+- Category metadata for these stimuli from preprocessed_eeg/experiment_metadata_categories.parquet are exposed under the `stim_metadata` subset
+- The order stimuli were presented in each session for each subject, with some repeated per-stimulus metadata, from preprocessed_eeg/sub-*/experiment_metadata_categories.parquet, is exposed under the `stim_order` subset (default subset)
+
+Raw and preprocessed EEG data are not in a format compatible with the Hugging Face `datasets` library and must be downloaded using one of the other methods listed above.
+
 ### Licensing Information
 In exchange for permission to use the Alljoined-1.6M database (the "Database") at Alljoined, Researcher hereby agrees to the following terms and conditions:
 
```
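One subtlety in the new `configs` entries worth noting: `stim_metadata` points at the single top-level parquet file, while the `stim_order` glob matches only the per-subject copies one directory deeper. A minimal, self-contained sketch of that distinction using Python's `fnmatch` (the subject directory names like `sub-01` are hypothetical, and the Hub's actual `data_files` glob resolution may differ in edge cases — this only illustrates why the top-level file is excluded):

```python
from fnmatch import fnmatch

# Illustrative repository listing based on the paths named in this PR;
# the "sub-01"/"sub-02" directory names are assumptions for the sketch.
paths = [
    "preprocessed_eeg/experiment_metadata_categories.parquet",         # top-level (stim_metadata)
    "preprocessed_eeg/sub-01/experiment_metadata_categories.parquet",  # per-subject (stim_order)
    "preprocessed_eeg/sub-02/experiment_metadata_categories.parquet",
    "stimuli.zip",                                                     # stimuli subset
]

stim_order_glob = "preprocessed_eeg/*/experiment_metadata_categories.parquet"
matched = [p for p in paths if fnmatch(p, stim_order_glob)]

# Only the per-subject files match: the literal "/" after "*" requires an
# extra path component, so the top-level parquet is not picked up.
print(matched)
```

This is why `stim_order` and `stim_metadata` can both exist without the default subset double-counting the top-level categories file.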