---
license: mit
---
# Calcium-Bridged Temporal EEG Decoder

This project explores the idea of decoding EEG brain signals by modeling perception as a sequential process. Instead of treating the brain's response as a single event, this system breaks it down into distinct temporal windows, attempting to model the "chain of thought" as a visual concept crystallizes in the mind.
The project consists of two main components:

- **A trainer** (`pkas_cal_trainer_gemini.py`) that builds a novel neural network model using the Alljoined1 dataset.
- **A viewer** (`pkas_cal_viewer_gemini2.py`) that loads a trained model and provides an interactive visualization of its "thought process" on new EEG samples.
## Core Concept: The "Vibecoded" System

The central idea of this project is a system inspired by neuromorphic computing and constraint satisfaction, which we've nicknamed the "vibecoded" system.
Here's how it works:

1. **Thinking in Moments:** The brain's response to an image (e.g., from 0 to 600 ms) is not analyzed all at once. It's sliced into four distinct "thinking moments," or time windows, based on known ERP components.
2. **A Solver for Each Moment:** Each time window is processed by a special `CalciumAttentionModule`. This module's job is to look at the EEG clues in its slice and find the best explanation that satisfies all the "constraints" in the signal.
3. **The Calcium Bridge:** This is the key. The "hunch" or "focus" (the `Calcium` state) from one thinking moment is passed to the next. This creates a causal chain of thought, allowing the model to refine its predictions over time from a general gist to a more specific concept. A minimal sketch of this bridging pattern follows the list.
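To make the bridge concrete, here is a minimal PyTorch sketch of the pattern, not the project's actual implementation: the module name `CalciumAttentionModule` comes from the scripts, but every internal detail (dimensions, the attention mechanism, the state projections) is an illustrative assumption.

```python
import torch
import torch.nn as nn

class CalciumAttentionModule(nn.Module):
    """Illustrative stand-in: attends over one EEG time window,
    conditioned on the calcium state carried in from the previous window."""

    def __init__(self, feat_dim: int, state_dim: int):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads=4, batch_first=True)
        self.state_in = nn.Linear(state_dim, feat_dim)   # state -> attention query
        self.state_out = nn.Linear(feat_dim, state_dim)  # attended features -> next state

    def forward(self, window_feats, calcium_state):
        # window_feats: (batch, time, feat_dim); calcium_state: (batch, state_dim)
        query = self.state_in(calcium_state).unsqueeze(1)          # (batch, 1, feat_dim)
        focused, _ = self.attn(query, window_feats, window_feats)  # "solve" this window's constraints
        new_state = self.state_out(focused.squeeze(1))             # refined "hunch" for the next moment
        return focused.squeeze(1), new_state

# Four "thinking moments" chained by the calcium bridge.
modules = nn.ModuleList(CalciumAttentionModule(64, 32) for _ in range(4))
windows = [torch.randn(8, 20, 64) for _ in range(4)]  # four fake EEG slices
state = torch.zeros(8, 32)                            # initial, uncommitted "focus"
for module, window in zip(modules, windows):
    feats, state = module(window, state)              # state flows window -> window
```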
## Requirements

- Python 3.x
- PyTorch
- `datasets` (from Hugging Face)
- `tkinter` (usually included with Python)
- `matplotlib`
- `pillow`

You can install the main dependencies with pip:

```bash
pip install torch datasets matplotlib pillow
```
## Setup and Usage

### 1. Download Data and Model

**Data:**

- **COCO Images:** Download the 2017 training/validation images from the official COCO dataset site. You will need `train2017.zip` and/or `val2017.zip`. Unzip them into a known directory.
- **COCO Annotations:** On the same site, download the "2017 Train/Val annotations". You only need the `instances_train2017.json` file.
- **Alljoined1 EEG Data:** This will be downloaded automatically by the scripts on their first run (see the sketch below).
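For reference, the automatic download likely amounts to a single `datasets` call along these lines; the dataset identifier below is an assumption, so check the Alljoined1 page on Hugging Face for the exact name:

```python
from datasets import load_dataset

# Dataset ID is an assumption -- verify against the Alljoined1 page on Hugging Face.
eeg = load_dataset("Alljoined/05_125")
print(eeg)  # downloads and caches on first call; served from cache afterwards
```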
**Pre-trained Model (Recommended):**

- You can download the pre-trained V2 model directly from its Hugging Face repository: click on `calcium_bridge_eeg_model_v2.pth`, then click the "download" button. A programmatic alternative is sketched below.
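If you prefer to script the download, `huggingface_hub` can fetch the checkpoint directly. The `repo_id` below is a placeholder; substitute the actual repository hosting the model:

```python
from huggingface_hub import hf_hub_download

checkpoint_path = hf_hub_download(
    repo_id="your-username/calcium-bridge-eeg",  # placeholder -- use the real repo ID
    filename="calcium_bridge_eeg_model_v2.pth",
)
print(checkpoint_path)  # local cache path of the downloaded file
```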
### 2. Viewing the Results (Using the Pre-trained Model)

- Run the V2 viewer script: `python pkas_cal_viewer_gemini2.py`
- In the GUI:
  - Select the COCO image and annotation paths you downloaded.
  - Click "Load V2 Model" and select the `calcium_bridge_eeg_model_v2.pth` file you downloaded from Hugging Face.
- Once the model is loaded, click "Test Random Sample" to see the model's analysis of a new brain signal.
### 3. Training Your Own Model (Optional)

- Run the V2 training script: `python pkas_cal_trainer_gemini.py`
- In the GUI, select your COCO image and annotation paths.
- Click "Train Extended Model (V2)".
- A new file named `calcium_bridge_eeg_model_v2.pth` will be saved with the best-performing model from your training run. You can then load this file into the viewer; the checkpoint round-trip is sketched below.
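Under the hood this is presumably a standard PyTorch checkpoint round-trip, sketched here with a dummy model standing in for the real architecture (the actual scripts may store extra metadata alongside the weights):

```python
import torch

model = torch.nn.Linear(4, 2)  # dummy stand-in for the real V2 architecture

# Trainer side: persist the best-performing weights.
torch.save(model.state_dict(), "calcium_bridge_eeg_model_v2.pth")

# Viewer side: rebuild the model, then restore the saved weights.
model.load_state_dict(torch.load("calcium_bridge_eeg_model_v2.pth", map_location="cpu"))
model.eval()  # inference mode for the interactive visualization
```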
## A Note on Interpretation

This is an exploratory research tool. The model's predictions should not be interpreted as literal "mind-reading."
Instead, the results reflect the complex statistical associations learned from the multi-subject Alljoined1 dataset. When the model associates a "horse trailer" with "horse," it is because this is a strong, common conceptual link found in the aggregate brain data. The viewer is a window into the "cognitive gestalt" of an "average mind" as represented by the dataset.