---
license: mit
---

# EIT-1M: One Million EEG-Image-Text Pairs for Human Visual-Textural Recognition and More

Recently, emerging multi-modal models integrate electroencephalography (EEG) signals to better understand and mimic human cognition in neuroscience and bio-inspired artificial intelligence (AI). Existing EEG-based recognition datasets often use single-modality stimuli, either visual or textual, and typically cover many categories while offering only a few EEG epochs per category. The complex semantics of the stimuli and the short duration for which they are presented to participants compromise dataset quality and utility for capturing precise brain activity, making such datasets unsuitable for data-driven AI scenarios.

Recognizing the importance of extensive and unbiased sampling of neural responses to both visual and textual stimuli for research in neuroscience and bio-inspired multi-modal AI, we present EIT-1M, the first multi-modal dataset comprising EEG-image-text pairs. Our data collection method differs from prior work: we recorded data pairs while participants viewed alternating sequences of visual and textual stimuli drawn from 60K natural images and 10 category-specific texts. Ten common semantic categories were included to elicit stronger responses from participants' brains while allowing us to record more epochs per category. In total, we gathered over 1 million epochs of brain responses. EIT-1M features response-based stimulus timing and repetition across blocks and sessions.

To show the effectiveness of the proposed dataset, we provide an in-depth analysis of the EEG data captured from multi-modal stimuli across different categories and participants, along with data quality scores for transparency. Our dataset can be applied to many tasks, and we demonstrate its value by conducting both recognition and generation experiments.
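Since the abstract describes EEG-image-text pairs organized by category, participant, block, and session, the following minimal Python sketch illustrates one plausible per-sample record. All field names, shapes, and values here are illustrative assumptions for orientation only, not the dataset's actual schema.

```python
# Hypothetical sketch of a single EIT-1M sample record.
# Field names, array shapes, and values are illustrative assumptions,
# not the dataset's actual schema.
from dataclasses import dataclass

import numpy as np


@dataclass
class EITSample:
    eeg_epoch: np.ndarray  # assumed shape (n_channels, n_timepoints)
    image_path: str        # path to the natural-image stimulus
    text: str              # category-specific textual stimulus
    category: str          # one of the 10 semantic categories
    participant: str       # participant identifier
    block: int             # block index (stimuli repeat across blocks)
    session: int           # session index (stimuli repeat across sessions)


# Example with placeholder values:
sample = EITSample(
    eeg_epoch=np.zeros((64, 512), dtype=np.float32),
    image_path="images/category_00/000001.png",
    text="a category-specific text prompt",
    category="category_00",
    participant="sub-01",
    block=1,
    session=1,
)
print(sample.eeg_epoch.shape, sample.category)
```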