eit-1m committed · verified
Commit 36f942c · 1 Parent(s): 7bd3a35

Update README.md

Files changed (1): README.md (+2 −1)
README.md CHANGED
@@ -4,4 +4,5 @@ license: mit
  # EIT-1M: One Million EEG-Image-Text Pairs for Human Visual-Textural Recognition and More
  **( * This version is only for anonymous review; we only release part of the dataset. All data will be publicly available upon acceptance.)**

- Recently, emerging multi-modal models integrate electroencephalography (EEG) signals to better understand and mimic human cognition in neuroscience and bio-inspired artificial intelligence (AI). Existing EEG-based recognition datasets often use single-modal stimuli, either visual or textual. These datasets typically encompass numerous categories but offer limited EEG epochs per category. The complex semantics of stimuli and the short duration presented to participants compromise the dataset's quality and utility in capturing precise brain activity. This limitation makes them unsuitable for data-driven AI scenarios. Recognizing the importance of extensive and unbiased sampling of neural responses to both visual and textual stimuli for research in neuroscience and bio-inspired multi-modal AI, we present the first multi-modal dataset, EIT-1M, comprising EEG-image-text pairs. Our dataset collection method differs from prior works. Specifically, we collected data pairs while participants viewed alternating sequences of visual-textural stimuli from 60K natural images and 10 category-specific texts. Ten common semantic categories were included to elicit better reactions from participants' brains, meanwhile allowing us to record more epochs per category. Overall, we have gathered over 1 million epochs of brain responses. Our EIT-1M dataset features response-based stimulus timing, and repetition across blocks and sessions. Importantly, to show the effectiveness of the proposed dataset, we provide an in-depth analysis of EEG data captured from multi-modal stimuli across different categories and participants, along with data quality scores for transparency. Our dataset can be applied to many tasks, and we demonstrate the value of our dataset by conducting both recognition and generation experiments.
+ Recently, electroencephalography (EEG) signals have been actively incorporated to decode brain activity in response to visual or textual stimuli and achieve object recognition in multi-modal AI. Accordingly, endeavors have focused on building EEG-based datasets from visual or textual single-modal stimuli. However, these datasets offer limited EEG epochs per category, and the complex semantics of the stimuli presented to participants compromise their quality and fidelity in capturing precise brain activity. Studies in neuroscience reveal that the relationship between visual and textual stimuli in EEG recordings provides valuable insights into the brain's ability to process and integrate multi-modal information simultaneously. Inspired by this, we propose a novel large-scale multi-modal dataset, named **EIT-1M**, with over 1 million EEG-image-text pairs. Our dataset is superior in its capacity to reflect brain activity while simultaneously processing multi-modal information. To achieve this, we collected data pairs while participants viewed alternating sequences of visual-textual stimuli from 60K natural images and category-specific texts. Common semantic categories are also included to elicit better reactions from participants' brains. Meanwhile, response-based stimulus timing and repetition across blocks and sessions are included to ensure data diversity. To verify the effectiveness of EIT-1M, we provide an in-depth analysis of EEG data captured from multi-modal stimuli across different categories and participants, along with data quality scores for transparency. We demonstrate its validity on two tasks: 1) EEG recognition from visual or textual stimuli or both, and 2) EEG-to-visual generation.
+ We release part of the dataset and code at https://eit-1m.github.io/EIT-1M/ for anonymous review.
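The first benchmark task above, EEG recognition, can be sketched in miniature. The code below is a hypothetical illustration only: the array shapes, class structure, and decoder are assumptions for demonstration (EEG epochs are commonly stored as `(n_epochs, n_channels, n_timepoints)` arrays), not the dataset's actual file format or the authors' method, which are defined by the released code at the project page.

```python
import numpy as np

# Hypothetical sketch of per-category EEG epoch recognition on synthetic data.
# Shapes and class structure are illustrative assumptions, not EIT-1M's format.
rng = np.random.default_rng(0)

n_channels, n_times, n_classes, per_class = 8, 32, 3, 40
# Give each synthetic class a distinct mean channel pattern so the toy task
# is learnable; real EEG class differences are far subtler.
class_means = rng.normal(0.0, 1.0, size=(n_classes, n_channels, 1))
X = np.concatenate(
    [class_means[c] + 0.5 * rng.normal(size=(per_class, n_channels, n_times))
     for c in range(n_classes)]
)  # shape: (n_classes * per_class, n_channels, n_times)
y = np.repeat(np.arange(n_classes), per_class)

# Nearest-centroid decoding on flattened epochs: a minimal stand-in for
# "EEG recognition from visual or textual stimuli".
flat = X.reshape(len(X), -1)
centroids = np.stack([flat[y == c].mean(axis=0) for c in range(n_classes)])
dists = ((flat[:, None, :] - centroids[None, :, :]) ** 2).sum(axis=-1)
pred = np.argmin(dists, axis=1)
accuracy = (pred == y).mean()
```

With well-separated synthetic class means, this toy decoder classifies nearly perfectly; on real multi-channel EEG, per-epoch decoding is much harder, which is why epoch counts per category matter.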