---
tags:
- time series
- time series classification
- monster
- EEG
license: other
pretty_name: DreamerA
size_categories:
- 100K<n<1M
---

Part of MONSTER: <https://arxiv.org/abs/2502.15122>.

|DreamerA||
|-|-:|
|Category|EEG|
|Num. Examples|170,246|
|Num. Channels|14|
|Length|256|
|Sampling Freq.|128 Hz|
|Num. Classes|2|
|License|Other|
|Citations|[1] [2] [3]|

***Dreamer*** is a multimodal dataset that includes electroencephalogram (EEG) and electrocardiogram (ECG) signals recorded during affect elicitation using audio-visual stimuli [1], captured with a 14-channel Emotiv EPOC headset. It consists of data recorded from 23 participants, along with their self-assessments of affective states (valence, arousal, and dominance) after each stimulus. For our classification task, we focus on the arousal and valence labels, referred to as ***DreamerA*** and ***DreamerV*** respectively.

The dataset is publicly available [2], and we use the Torcheeg toolkit for preprocessing, including signal cropping and low-pass and high-pass filtering [3]. Note that only EEG data are analyzed in this study; ECG signals are excluded. Labels for arousal and valence are binarized, with values below 3 assigned to class 1 and values of 3 or higher assigned to class 2, and the data are split into cross-validation folds by participant.
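
As a rough illustration of the preprocessing described above, the sketch below band-pass filters 128 Hz EEG, binarizes the self-assessment ratings at the 3 threshold, and assigns folds per participant. The filter cutoffs, helper names, and round-robin fold scheme are assumptions for demonstration only, not the exact Torcheeg/MONSTER pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt

FS = 128.0  # sampling frequency of the Emotiv EPOC recordings

def bandpass(eeg, low_hz=4.0, high_hz=45.0, order=4):
    """Zero-phase high-pass + low-pass filtering (cutoffs are assumed values)."""
    nyq = FS / 2.0
    b, a = butter(order, [low_hz / nyq, high_hz / nyq], btype="band")
    return filtfilt(b, a, eeg, axis=-1)

def binarize_labels(ratings):
    """Ratings below 3 -> class 1; ratings of 3 or higher -> class 2."""
    return np.where(np.asarray(ratings) < 3, 1, 2)

def participant_folds(participant_ids, n_folds):
    """Round-robin fold assignment per participant, so that no participant's
    examples are split across folds (an assumed scheme for illustration)."""
    unique = np.unique(participant_ids)
    fold_of = {p: i % n_folds for i, p in enumerate(unique)}
    return np.array([fold_of[p] for p in participant_ids])

eeg = np.random.randn(14, 256)             # 14 channels x 256 samples (2 s)
filtered = bandpass(eeg)                   # shape preserved: (14, 256)
labels = binarize_labels([1, 2, 3, 4, 5])  # -> [1 1 2 2 2]
folds = participant_folds(np.repeat(np.arange(23), 2), n_folds=5)
```

The key point is the grouping step: every example from a given participant lands in the same fold, so evaluation measures generalization to unseen subjects rather than unseen trials.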