Aluode committed · verified · commit df77177 · 1 parent: 475c048

Update README.md

Files changed (1): README.md (+82 -3)
---
language: en
license: mit
library_name: pytorch
tags:
- eeg
- brain-decoding
- object-recognition
- neuroscience
- computer-vision
datasets:
- Alljoined/05_125
- cocodataset/coco
metrics:
- accuracy
- f1
---

# EEG Weak Signal Category Detection

[![Python](https://img.shields.io/badge/Python-3.10%2B-blue)](https://www.python.org/)
[![License: MIT](https://img.shields.io/badge/License-MIT-yellow.svg)](https://opensource.org/licenses/MIT)
[![Hugging Face](https://img.shields.io/badge/%F0%9F%A4%97-Hugging%20Face-orange)](https://huggingface.co/Aluode/WeakEEGSignalCategoryDetection)
[![GitHub Repo](https://img.shields.io/badge/GitHub-Repo-black?logo=github)](https://github.com/anttiluode/WeakEEGSignalCategoryDetection)

Research code for detecting weak category-specific signals in EEG recorded during visual object perception. Vibecoded after months of brain-decoding experiments, it detects subtle probability shifts (Δ ≈ 0.3-1.7%) for 38 COCO categories while excluding lab confounds.

## Overview
This project explores whether EEG contains subtle information about viewed objects, controlling for lab artifacts (e.g., people and chairs physically present in the recording room). Using multi-label classification on the Alljoined dataset, we find statistically significant signals for foods and vehicles, with source localization revealing ventral temporal hotspots.

**Key Insight**: EEG weakly encodes object semantics (~200-350 ms post-stimulus) in object-recognition networks, but the effects are small (d < 1.0): exploratory findings, not a production-ready BCI.

## Intended Use
- **Research**: Test weak semantic decoding; localize category signals with MNE.
- **Education**: Demonstrate EEG-AI integration for neuroscience classes.
- **Not For**: Real-time BCI or clinical use (the signals are too noisy).

## Limitations
- Weak effects: Δ = 0.3-1.7%; N < 20 for rare categories limits statistical power.
- Confounds: Lab objects inflate apparent signals 3-4x.
- EEG limits: ~1-2 cm spatial resolution; poor sensitivity for deep structures (e.g., amygdala).
- Data: Assumes COCO annotations and the Alljoined HF dataset are available.

## Training Data
- **Dataset**: Alljoined EEG-Image (Gifford et al., 2022): 6k+ COCO images with 64-channel BioSemi EEG.
  - HF: [Alljoined/05_125](https://huggingface.co/datasets/Alljoined/05_125) (split='test').
  - COCO annotations: [Download 2017 Train/Val](https://cocodataset.org/#download) (instances_train2017.json).
- **Preprocessing**: 50-350 ms post-stimulus window, z-score normalized per channel.
- **Categories**: 38 non-lab categories (animals, vehicles, food, outdoor/sports).
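
The windowing and per-channel z-scoring above can be sketched as follows (a minimal sketch; the 512 Hz sampling rate is an assumption that yields roughly the ~154 time points the model expects):

```python
import numpy as np

def preprocess_epoch(eeg, sfreq=512.0, tmin=0.050, tmax=0.350):
    """Crop a stimulus-locked epoch to the 50-350 ms window and z-score
    each channel within the window.

    eeg: (n_channels, n_times) array with stimulus onset at t = 0.
    sfreq: sampling rate (assumed here, not stated in the repo docs).
    """
    start = int(round(tmin * sfreq))
    stop = int(round(tmax * sfreq))
    win = eeg[:, start:stop]
    mean = win.mean(axis=1, keepdims=True)
    std = win.std(axis=1, keepdims=True) + 1e-8  # guard against flat channels
    return (win - mean) / std
```

Each channel then has zero mean and unit variance within the window, so amplitude differences between electrodes do not dominate the classifier input.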

## Model Details
- **Architecture**: CNN (3 conv blocks: 128→256→512 channels) + fully connected classifier. BCE loss, AdamW optimizer.
- **Params**: ~6.7M. Input: (64 channels × 154 time points). Output: 38 sigmoid probabilities.
- **Training**: 30 epochs, 80/20 train/val split, cosine LR schedule. Val loss: ~0.15.
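
A minimal PyTorch sketch consistent with this description (kernel sizes, pooling, and normalization are assumptions; the exact layers live in the repo code):

```python
import torch
import torch.nn as nn

class EEGCategoryCNN(nn.Module):
    """Sketch: three 1-D conv blocks over time (128 -> 256 -> 512 channels)
    feeding a fully connected classifier over 38 categories."""

    def __init__(self, n_channels=64, n_times=154, n_classes=38):
        super().__init__()

        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=5, padding=2),
                nn.BatchNorm1d(c_out),
                nn.ReLU(),
                nn.MaxPool1d(2),  # halve the time axis per block
            )

        self.features = nn.Sequential(
            block(n_channels, 128), block(128, 256), block(256, 512)
        )
        # Three pooling layers shrink 154 time points to 154 // 2 // 2 // 2 = 19
        self.classifier = nn.Linear(512 * 19, n_classes)

    def forward(self, x):  # x: (batch, 64 channels, 154 time points)
        return self.classifier(self.features(x).flatten(1))  # raw logits


model = EEGCategoryCNN()
logits = model(torch.randn(2, 64, 154))  # (batch, 38) logits
```

Training would pair these logits with `nn.BCEWithLogitsLoss()` (the BCE objective above) and `torch.optim.AdamW`; at inference, `torch.sigmoid(logits)` gives the 38 per-category probabilities.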

## Evaluation Results
Tested on ~700 samples (categories with N ≥ 10). Metrics: mean probability shift (Δ), Cohen's d, t-test p-value.

| Category | Δ (Present - Absent) | d (Effect Size) | p-value | N (Present) | Notes |
|--------------|----------------------|-----------------|---------|-------------|-------|
| Broccoli | +0.0169 | 1.027 (large) | 0.001** | 10 | Strongest; uniform distribution. |
| Cake | +0.0045 | 0.652 (medium) | 0.006* | 18 | Food reward? |
| Train | +0.0065 | 0.397 (small) | 0.056 | 24 | Marginal; motion bias. |
| Fire Hydrant | +0.0043 | 0.414 (small) | 0.084 | 18 | Bimodal (saliency?). |

\* p < 0.05, ** p < 0.01. 2 of 35 tested categories significant; 70% show positive Δ.
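
The Δ, Cohen's d, and p-value columns can be computed per category along these lines (a sketch; variable names are illustrative, and Welch's unequal-variance t-test is an assumption):

```python
import numpy as np
from scipy import stats

def category_effect(probs_present, probs_absent):
    """Per-category statistics as reported in the table above.

    probs_present / probs_absent: model probabilities for the category on
    trials where it was present vs. absent in the viewed image.
    Returns (delta, cohens_d, p_value).
    """
    probs_present = np.asarray(probs_present)
    probs_absent = np.asarray(probs_absent)
    delta = probs_present.mean() - probs_absent.mean()
    # Cohen's d with pooled standard deviation
    n1, n2 = len(probs_present), len(probs_absent)
    pooled_sd = np.sqrt(
        ((n1 - 1) * probs_present.var(ddof=1)
         + (n2 - 1) * probs_absent.var(ddof=1)) / (n1 + n2 - 2)
    )
    cohens_d = delta / pooled_sd
    # Welch's t-test (does not assume equal variances)
    _, p_value = stats.ttest_ind(probs_present, probs_absent, equal_var=False)
    return delta, cohens_d, p_value
```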

![Broccoli Activation](screenshots/broccoli_t025s.png)
*Ventral temporal hotspots at 250 ms (d = 1.027).*

## Source Localization
- **Method**: MNE sLORETA on fsaverage (ico4 source space, 3-layer BEM).
- **Patterns**: Early occipital (visual), late temporal (semantic). Food: ventral bias; animals: parietal spread.

## Usage
### Inference
```python
import torch

# Load the trained detector (as in App.py)
model = torch.load("clean_signal_detector.pth", map_location="cpu")
model.eval()

# eeg_tensor: (batch, 64 channels, 154 time points), z-scored per channel
with torch.no_grad():
    probs = torch.sigmoid(model(eeg_tensor))  # (batch, 38) category probabilities
```