Hardware?
Very interesting.
I'm looking for hardware that can collect EEG, available on AliExpress, and the choice is quite narrow - OpenBCI exists but is horrifically expensive, no less than $500 for a very minuscule board (sensors not included).
The only affordable solution, at ~$70 (excluding bare sensor hats that have sensors but no acquisition hardware), is the Brainlink/Mindlink, a headband with "dry" sensors. I see two versions - Lite and the ordinary one - differing in sensors: the Lite has 3 forehead sensors, while the ordinary one has 2 forehead sensors plus an ear clip (EEG, ground, and reference). The Lite doesn't provide the HD raw signal or eSense data. The only caveat: they don't support Linux, but they do offer an SDK (www.macrotellect.com/brainlink/sdk). They mostly sell it for iOS/Android, to control various games by mind. Connection protocol: Bluetooth 3.0; battery life: 180 minutes.
Which to choose? Lite or Ordinary?
Kimi K2.5 (Q5_K_XL, 760Gb RAM) has already answered this:
1. Sensor Count for "Perfect" Thought-to-Text
The Hard Truth: Perfect recording is impossible with scalp EEG. The skull acts as a low-pass spatial filter, smearing signals from ~10⁴ neurons into a blurred potential field. For high-fidelity thought-to-text, you need:
- Minimum viable: 16-32 channels covering left fronto-temporal regions (dominant hemisphere for most)
- Research standard: 64-128 channels (high-density EEG)
- Clinical/invasive alternative: ECoG (electrocorticography) with subdural grids (64-256 contacts) - this is what the "first LLM" papers likely used if they achieved high accuracy
Critical Locations (10-20 System):
- F7, F5, F3 (Broca’s area - speech production)
- T7, C5, C3 (Wernicke’s comprehension + auditory cortex)
- P3, P5, T5 (Angular gyrus - semantic integration)
- Cz, FCz (Supplementary motor area for subvocalization)
- Bipolar references: Linked mastoids (A1, A2) or nose reference
Why 3 Sensors Fail for Language:
The Brainlink device (Fp1, Fp2, reference) only samples prefrontal cortex. This region detects:
- ✓ Eye blinks, jaw clenches (EMG artifacts)
- ✓ Attention/arousal (beta power)
- ✓ Blink-based "Yes/No" (2-3 class control)
But language networks reside in temporal/parietal lobes, invisible to forehead electrodes. You cannot decode phonemes from prefrontal beta waves - it's like trying to hear a symphony by touching the exterior wall of the concert hall with your fingers.
2. Technical Requirements for DIY Construction
Yes, you can build a research-grade EEG, but 3 electrodes won't work. Here's the specification stack:
Acquisition Hardware:
- OpenBCI Cyton (8 channels, ~$500) or Ganglion (4 channels, too few)
- TI ADS1299 chips (24-bit ADC, 16kSPS) - the industry standard for biopotential measurement
- Active electrodes: Buffered inputs (TI OPA211 or similar) reduce motion artifacts vs. passive wet electrodes
Electrode Array:
- Type: Ag/AgCl sintered ring electrodes with conductive gel (NuPrep + Ten20 paste)
- Impedance: Must maintain <5kΩ per channel (high impedance = noise)
- Shielding: Driven-right-leg (DRL) circuit for common-mode rejection
- Isolation: CRITICAL - Device must be battery-powered with optical isolation (no USB ground connection to mains) to prevent microshock/lethal currents
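The &lt;5kΩ impedance requirement above is something you check in software before every session; a minimal Python sketch (channel names and readings here are made-up examples, not from the post):

```python
# Flag channels whose electrode impedance exceeds the 5 kOhm target.
IMPEDANCE_LIMIT_OHMS = 5_000

def check_impedances(readings_ohms):
    """Return channel names that need re-gelling or re-seating."""
    return [ch for ch, z in readings_ohms.items() if z > IMPEDANCE_LIMIT_OHMS]

# Hypothetical pre-session measurement (ohms per channel):
readings = {"F3": 3_200, "F7": 4_800, "T7": 12_500, "C3": 2_100}
print(check_impedances(readings))  # -> ['T7']
```

In practice the readings would come from the acquisition board (the ADS1299 supports lead-off/impedance measurement), but the pass/fail logic stays this simple.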
Sampling Specifications:
- Rate: ≥512 Hz (to capture gamma band >100Hz per Nyquist)
- Resolution: 24-bit minimum (EEG signals are ±100μV, noise floor ~1μV)
- CMRR: >80dB (to reject 50/60Hz line noise)
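These specs can be sanity-checked with a little arithmetic. The sketch below assumes the ADS1299's full-scale range of ±VREF/gain with VREF = 4.5 V and gain = 24 (typical values from the TI datasheet, not stated in this post):

```python
# Nyquist check: a 512 Hz sampling rate resolves content up to fs/2.
fs = 512
nyquist = fs / 2
assert nyquist > 100  # gamma band (>100 Hz) is representable

# Resolution check: 24-bit ADC over the assumed ADS1299 full-scale range.
vref, gain, bits = 4.5, 24, 24
full_scale_v = 2 * vref / gain          # +/-0.1875 V span -> 0.375 V total
lsb_uv = full_scale_v / 2**bits * 1e6   # one code step, in microvolts
print(f"LSB = {lsb_uv:.3f} uV")         # ~0.022 uV, well below the ~1 uV noise floor
```

So 24 bits leaves roughly 45 ADC steps per microvolt of noise floor, which is why 16-bit converters are marginal for EEG.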
3. The Spatial Resolution Mathematics
EEG faces the inverse problem: Given surface potentials, solve for 3D cortical sources. This is mathematically ill-posed without sufficient spatial sampling.
With 3 sensors spaced 10cm apart, you have ~314cm² of cortex represented by 3 data points. With 64 sensors, you get 2.5cm spacing, approaching the theoretical minimum to resolve dipole sources in temporal language areas.
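The spacing figures above follow from dividing scalp area by electrode count; a rough sketch, using the ~314 cm² disc from the text and estimating spacing as the square root of area per electrode:

```python
import math

AREA_CM2 = math.pi * 10**2  # ~314 cm^2 disc, as in the text

def spacing_cm(n_electrodes, area=AREA_CM2):
    """Average inter-electrode distance if n sensors tile the area evenly."""
    return math.sqrt(area / n_electrodes)

for n in (3, 8, 16, 64):
    print(f"{n:3d} channels -> ~{spacing_cm(n):.1f} cm spacing")
```

This square-root model gives ~10.2 cm for 3 channels and ~2.2 cm for 64, matching the numbers quoted above to within rounding.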
Decoding Hierarchy:
- 3 channels: Binary state detection (relaxed/concentrated)
- 8 channels: 4-class motor imagery (left/right hand/foot)
- 16-32 channels: Word-level classification (~100-1000 word vocabulary, ~70-85% accuracy)
- 64+ channels: Phoneme-level or continuous decoding (requires ML + linguistic priors)
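The lowest rung of that hierarchy - relaxed vs. concentrated from a single channel - reduces to comparing alpha (8-12 Hz) and beta (13-30 Hz) band power. A minimal pure-Python sketch; a real pipeline would use NumPy/MNE FFTs, and the synthetic 10 Hz sine stands in for a "relaxed" recording:

```python
import math

def band_power(signal, fs, f_lo, f_hi):
    """Naive DFT band power; fine for a demo, use an FFT in practice."""
    n = len(signal)
    total = 0.0
    for k in range(1, n // 2):
        f = k * fs / n
        if f_lo <= f <= f_hi:
            re = sum(x * math.cos(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            im = sum(x * math.sin(2 * math.pi * k * i / n) for i, x in enumerate(signal))
            total += (re * re + im * im) / (n * n)
    return total

fs = 256
t = [i / fs for i in range(fs)]                          # 1 second of data
relaxed = [math.sin(2 * math.pi * 10 * ti) for ti in t]  # dominant 10 Hz alpha

alpha = band_power(relaxed, fs, 8, 12)
beta = band_power(relaxed, fs, 13, 30)
print("relaxed" if alpha > beta else "concentrated")     # prints "relaxed"
```

This alpha/beta comparison is essentially what the Brainlink's eSense "meditation/attention" values expose, which is why 3 forehead channels are enough for it and nothing more.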
4. Practical DIY Pathway
If you want to experiment with language decoding:
Budget Build (~$500-950):
- OpenBCI Cyton + Daisy module (16 channels) - $949 (or used 8-channel for $500)
- Electrode cap: Electro-Cap International or DIY 3D-printed holder with snap electrodes
- Gel: Ten20 conductive paste ($20/tube)
- Software: Python + MNE-Python + TensorFlow/PyTorch for decoding models
The "Language Cap" Configuration:
Place 8 electrodes at: F3, F7, T7, C3, P3, T5, O1, Cz (monopolar with right mastoid reference). This covers the left hemisphere language arc.
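That montage is worth pinning down as a small config that analysis code can consume later; a sketch, where the channel list and reference come from the paragraph above and everything else is an illustrative assumption:

```python
# "Language cap" montage: left-hemisphere language arc, 10-20 names.
LANGUAGE_CAP = {
    "channels": ["F3", "F7", "T7", "C3", "P3", "T5", "O1", "Cz"],
    "reference": "A2",      # right mastoid, monopolar (from the text)
    "ground": "Fpz",        # illustrative choice, not specified in the text
    "sample_rate_hz": 512,  # per the sampling spec above
}

assert len(LANGUAGE_CAP["channels"]) == 8
```

With MNE-Python, these 10-20 names can be mapped straight onto `mne.channels.make_standard_montage("standard_1020")` when building the recording's `Info` object.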
Safety Protocol:
- Galvanic isolation: Use Bluetooth transmission, not wired USB during recording
- Current limiting: <10μA DC leakage (medical standard IEC 60601-1)
- No mains powered devices near head during recording (laptops on battery only)
Verdict
The $70 Brainlink is a toy for attention training, not a thought decoder. To translate unspoken text from EEG, you need at minimum 16-32 strategically placed sensors covering temporal and frontal language areas, with proper shielding and isolation.
Building this yourself is feasible (OpenBCI route) but requires electronics skill for noise reduction. The hard part isn't the hardware—it's the decoding algorithm: you'll need weeks of training data per user to calibrate a personal "thought dictionary," as EEG patterns are idiosyncratic due to skull thickness variations and cortical folding.
Start with 8 channels if budget-constrained, but manage expectations: you'll get "mental typing" speeds of ~5-10 words/minute with high error rates, not seamless stream-of-consciousness transcription.
My Hardware:
- Intel Xeon E5-2699v4, LGA2011-3, 22 cores / 44 threads (2016) - $110
- Gigabyte motherboard, C612 chipset, 12 RAM slots, VGA, year 2016 - $150
- Samsung/Hynix ECC RAM, 12x64Gb = 768Gb - ~$900
- VGA monitor
- IKEA chair
- NO GPU
- Runs: trillion-parameter DeepSeeks and Kimis in Q5-Q6, 400-500-billion models in BF16, super high quality, 1 token/sec