|
|
--- |
|
|
tags: |
|
|
- time series |
|
|
- time series classification |
|
|
- monster |
|
|
- EEG |
|
|
license: other |
|
|
pretty_name: DreamerA |
|
|
size_categories: |
|
|
- 100K<n<1M |
|
|
--- |
|
|
Part of MONSTER: <https://arxiv.org/abs/2502.15122>. |
|
|
|
|
|
|DreamerA|| |
|
|
|-|-:| |
|
|
|Category|EEG| |
|
|
|Num. Examples|170,246| |
|
|
|Num. Channels|14| |
|
|
|Length|256| |
|
|
|Sampling Freq.|128 Hz| |
|
|
|Num. Classes|2| |
|
|
|License|Other| |
|
|
|Citations|[1] [2] [3]| |
|
|
|
|
|
***Dreamer*** is a multimodal dataset that includes electroencephalogram (EEG) and electrocardiogram (ECG) signals recorded during affect elicitation using audio-visual stimuli [1], captured with a 14-channel Emotiv EPOC headset at a sampling rate of 128 Hz. It consists of data recorded from 23 participants, along with their self-assessments of affective states (valence, arousal, and dominance) after each stimulus. For our classification task, we focus on the arousal and valence labels, referred to as ***DreamerA*** and ***DreamerV*** respectively. The processed datasets each consist of 170,246 multivariate time series of length 256 (i.e., 2 seconds of data per time series at a sampling rate of 128 Hz).
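As a rough illustration of how a continuous recording maps to fixed-length series of 256 samples (2 seconds at 128 Hz), the sketch below segments a multichannel signal into non-overlapping windows. The function name and the use of non-overlapping windows are assumptions for illustration, not the exact preprocessing pipeline used for the dataset.

```python
import numpy as np

SAMPLING_FREQ = 128             # Hz, as stated in the dataset description
WINDOW_LEN = 2 * SAMPLING_FREQ  # 256 samples = 2 seconds per series

def segment_recording(signal: np.ndarray, window_len: int = WINDOW_LEN) -> np.ndarray:
    """Split a (channels, samples) recording into (num_windows, channels, window_len).

    Trailing samples that do not fill a complete window are discarded.
    """
    channels, samples = signal.shape
    num_windows = samples // window_len
    trimmed = signal[:, : num_windows * window_len]
    return trimmed.reshape(channels, num_windows, window_len).transpose(1, 0, 2)

# 10 seconds of synthetic 14-channel data -> five 2-second windows
recording = np.random.randn(14, 10 * SAMPLING_FREQ)
windows = segment_recording(recording)
print(windows.shape)  # (5, 14, 256)
```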
|
|
|
|
|
The dataset is publicly available [2], and we use the TorchEEG toolkit for preprocessing, including signal cropping and low-pass and high-pass filtering [3]. Note that only EEG data are analyzed in this study; ECG signals are excluded. Labels for arousal and valence are binarized, with ratings below 3 assigned to class 1 and ratings of 3 or higher assigned to class 2, and the data are split into cross-validation folds by participant.
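The binarization rule above can be sketched as a simple threshold; this is a minimal illustration of the labeling step, not the TorchEEG implementation, and the function name is hypothetical.

```python
import numpy as np

def binarize_labels(ratings: np.ndarray, threshold: float = 3.0) -> np.ndarray:
    """Map self-assessment ratings to two classes: below threshold -> 1, otherwise -> 2."""
    return np.where(ratings < threshold, 1, 2)

# DREAMER self-assessments are on a 1-5 scale
ratings = np.array([1, 2, 3, 4, 5])
print(binarize_labels(ratings))  # [1 1 2 2 2]
```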
|
|
|
|
|
[1] Stamos Katsigiannis and Naeem Ramzan. (2017). Dreamer: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices. *IEEE Journal of Biomedical and Health Informatics*, 22(1):98–107.
|
|
|
|
|
[2] Stamos Katsigiannis and Naeem Ramzan. (2017). Dreamer: A database for emotion recognition through EEG and ECG signals from wireless low-cost off-the-shelf devices. <https://zenodo.org/records/546113>. |
|
|
|
|
|
[3] Zhi Zhang, Sheng-Hua Zhong, and Yan Liu. (2024). TorchEEGEMO: A deep learning toolbox towards EEG-based emotion recognition. *Expert Systems with Applications*. |