---
language: en
license: mit
library_name: pytorch
tags:
- eeg
- brain-decoding
- object-recognition
- neuroscience
- computer-vision
datasets:
- Alljoined/05_125
- cocodataset/coco
metrics:
- accuracy
- f1
---
# EEG Weak Signal Category Detection
[Python](https://www.python.org/)
[MIT License](https://opensource.org/licenses/MIT)
[Hugging Face](https://huggingface.co/Aluode/WeakEEGSignalCategoryDetection)
[GitHub](https://github.com/anttiluode/WeakEEGSignalCategoryDetection)
Research code for detecting weak category-specific signals in EEG data during visual object perception. Vibecoded after months of brain-decoding experiments, it detects subtle probability shifts (Δ ≈ 0.3-1.7%) for 38 COCO categories while excluding lab-environment confounds.
## Overview
This project explores whether EEG contains subtle info about viewed objects, controlling for lab artifacts (e.g., people/chairs). Using multi-label classification on the Alljoined dataset, we find statistically significant signals for foods/vehicles, with source localization revealing ventral temporal hotspots.
**Key Insight**: EEG weakly encodes object semantics (~200-350 ms post-stimulus) in object-recognition networks, but the effects are small (d < 1.0). This is exploratory research, not a production-ready BCI.
## Intended Use
- **Research**: Test weak semantic decoding; localize category signals with MNE.
- **Education**: Demo EEG-AI integration for neuro classes.
- **Not For**: Real-time BCI or clinical use (signals too noisy).
## Limitations
- Weak effects: Δ = 0.3-1.7%; N < 20 for rare categories limits statistical power.
- Confounds: lab-visible objects (e.g., people, chairs) inflate signals 3-4x, so they are excluded.
- EEG limits: ~1-2 cm spatial resolution; poor sensitivity for deep structures (e.g., amygdala).
- Data: requires COCO annotations plus the Alljoined HF dataset.
## Training Data
- **Dataset**: Alljoined EEG-Image (Gifford et al., 2022): 6k+ COCO images + 64-ch BioSemi EEG.
- HF: [Alljoined/05_125](https://huggingface.co/datasets/Alljoined/05_125) (split='test').
- COCO Annotations: [Download 2017 Train/Val](https://cocodataset.org/#download) (instances_train2017.json).
- **Preprocess**: 50-350ms window, z-score normalize per channel.
- **Categories**: 38 non-lab (animals, vehicles, food, outdoor/sports).
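The preprocessing step above can be sketched as follows. This is a minimal illustration, not the released code: the 512 Hz sampling rate and the `preprocess_epoch` helper name are assumptions.

```python
import numpy as np

def preprocess_epoch(eeg, sfreq=512.0, tmin=0.050, tmax=0.350):
    """Crop one epoch to the 50-350 ms post-stimulus window and
    z-score each channel independently.

    eeg   : array of shape (n_channels, n_times), time 0 = stimulus onset.
    sfreq : sampling rate in Hz (512 Hz is an assumption; check the dataset).
    """
    start = int(round(tmin * sfreq))
    stop = int(round(tmax * sfreq))
    win = eeg[:, start:stop]
    mean = win.mean(axis=1, keepdims=True)
    std = win.std(axis=1, keepdims=True) + 1e-8  # guard against flat channels
    return (win - mean) / std
```

After this step each channel has zero mean and unit variance within the analysis window, so amplitude differences between electrodes do not dominate the classifier.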
## Model Details
- **Architecture**: 1-D CNN (3 conv blocks: 128→256→512 channels) + fully connected classifier. BCE loss, AdamW optimizer.
- **Params**: ~6.7M. Input: (64 channels × 154 time points). Output: 38 sigmoid probabilities.
- **Training**: 30 epochs, 80/20 train/val split, cosine LR schedule. Final val loss: ~0.15.
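A minimal sketch of this architecture, assuming the shapes stated above. Kernel sizes, pooling, and normalization layers are guesses to make the example runnable; they are not the released weights.

```python
import torch
import torch.nn as nn

class EEGCategoryDetector(nn.Module):
    """Sketch: 3 conv blocks (128 -> 256 -> 512 channels) over the time
    axis, global average pooling, then a 38-way multi-label head."""
    def __init__(self, n_channels=64, n_classes=38):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, 128, kernel_size=7, padding=3),
            nn.BatchNorm1d(128), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(128, 256, kernel_size=5, padding=2),
            nn.BatchNorm1d(256), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(256, 512, kernel_size=3, padding=1),
            nn.BatchNorm1d(512), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(512, n_classes)

    def forward(self, x):                  # x: (batch, 64, 154)
        h = self.encoder(x).squeeze(-1)    # (batch, 512)
        return self.classifier(h)          # logits; sigmoid applied outside

model = EEGCategoryDetector()
logits = model(torch.randn(2, 64, 154))
probs = torch.sigmoid(logits)              # (2, 38) per-category probabilities
```

Because categories co-occur in COCO images, the head is multi-label (independent sigmoids with BCE loss) rather than a softmax over classes.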
## Evaluation Results
Test on ~700 samples (N≥10/category). Metrics: Mean prob shift (Δ), Cohen's d, t-test p.
| Category | Δ (Present - Absent) | d (Effect Size) | p-value | N (Present) | Notes |
|-----------|----------------------|-----------------|---------|-------------|-------|
| Broccoli | +0.0169 | 1.027 (large) | 0.001** | 10 | Strongest; uniform dist. |
| Cake | +0.0045 | 0.652 (medium) | 0.006* | 18 | Food reward? |
| Train | +0.0065 | 0.397 (small) | 0.056 | 24 | Marginal; motion bias. |
| Fire Hydrant | +0.0043 | 0.414 (small) | 0.084 | 18 | Bimodal (saliency?). |
\* p < 0.05, \*\* p < 0.01. 2 of 35 tested categories reach significance; 70% show a positive Δ.
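The three metrics in the table can be computed per category roughly as follows. This is a sketch: `category_effect` is a hypothetical helper, and the released analysis code may differ in detail (e.g., pooled vs. unpooled variance for d).

```python
import numpy as np
from scipy import stats

def category_effect(probs_present, probs_absent):
    """Mean probability shift (delta), Cohen's d (pooled SD), and Welch
    t-test p-value for one category's predicted probabilities, split by
    whether the category was present in the viewed image."""
    delta = probs_present.mean() - probs_absent.mean()
    n1, n2 = len(probs_present), len(probs_absent)
    pooled = np.sqrt(((n1 - 1) * probs_present.var(ddof=1)
                      + (n2 - 1) * probs_absent.var(ddof=1)) / (n1 + n2 - 2))
    d = delta / pooled
    _, p = stats.ttest_ind(probs_present, probs_absent, equal_var=False)
    return delta, d, p
```

Usage: collect the model's sigmoid output for one category across all test epochs, split by COCO presence, and pass the two arrays in.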

*Ventral temporal hotspots at 250ms (d=1.027).*
## Source Localization
- **Method**: MNE sLORETA on fsaverage (ico4 src, 3-layer BEM).
- **Patterns**: Early occipital (visual), late temporal (semantics). Food: Ventral bias; animals: Parietal spread.
## Usage
### Inference
```python
import torch

# Load the trained detector (see App.py in the repo). If the checkpoint
# stores a full model object, torch.load is enough; if it stores a
# state_dict, instantiate the model class first and call load_state_dict.
model = torch.load("clean_signal_detector.pth", map_location="cpu")
model.eval()

# eeg_tensor: (batch, 64 channels, 154 time points), preprocessed as described above
with torch.no_grad():
    probs = torch.sigmoid(model(eeg_tensor))  # (batch, 38) category probabilities
```