## Open Source Ecosystem

All core libraries extracted from this project are published as standalone open-source packages:

### Rust → crates.io

### Julia → JuliaHub / General Registry
| Package | Description |
|---|---|
| SpikenautLSM.jl | GPU-accelerated sparse Liquid State Machine (cuSPARSE + OU-SDE) |
| SpikenautNero.jl | Multi-lobe relevance scoring with cross-inhibition |
| SpikenautDistill.jl | Monte Carlo SNN training + FPGA distillation pipeline |
| SpikenautSignals.jl | Streaming Hurst / Hawkes / GBM-surprise feature extraction |
### SystemVerilog → GitHub
| Repo | Description |
|---|---|
| spikenaut-core-sv | Parameterized Q8.8 LIF + STDP IP cores |
| spikenaut-bridge-sv | UART neural-cortex protocol IP |
| spikenaut-soc-sv | Complete reference SNN SoC for Basys3 / Artix-7 |
# Spikenaut-SNN-v2

## The Lion That Survives
Spikenaut was born in January 2026, completely by accident.

I started university thinking I would go to medical school, or even law.
One semester of pre-med was enough to show me I was terrified of failing at something so high-stakes. I felt I wasn't smart enough, wasn't cut out for it.
So I switched to business, hoping it would be safer.
But I quickly saw I'd just be another business major lost in a sea of MBAs. I didn't want to disappear.
I moved to computer science, excited about building things, coding, creating.
Then the AI hype wave hit hard. Everyone said "AI is going to replace all the coding jobs." I believed it. I panicked.
I feared I'd spend years learning something that would vanish before I could even start.
That fear pushed me again, this time to electrical engineering. If software was going to be automated, maybe hardware was the last place left where I could build something real, something physical, something that couldn't be replaced overnight.
But the transfer was brutal.
A late admission decision meant starting classes two or three weeks behind, scrambling to catch up while everyone else was already moving forward.
I struggled terribly.
Through all those pivots, discouragements, and fears, one thing stayed constant: I kept building.

Then came the TBI: two concussions, in 2013.
An invisible injury. No insurance. No real medical support.
The world said "there's nothing wrong."
My brain said "everything hurts."
Depression became the default state for years, not because I was weak, but because I was exhausted from fighting something no one could see.
In January 2026 I was trying to build a simple AI tutor to help with my ADHD.
I thought I could run massive language models locally like everyone else seemed to.
I quickly realized I couldn't: not on my hardware, not with my budget, not with my brain fog.
So I had to get creative. I started reading about spiking neural networks (SNNs).
They were small, efficient, and event-driven; they ran on almost nothing and still learned.
I never went back.

Spikenaut is what came out of that exhaustion and that pivot.
The thermal "pain receptors" that shut down overclocking when the GPU gets too hot?
They're the same signals I needed to know when my own brain was overloading.
The `mining_dopamine` reward for efficient hashrate?
It's the small win I desperately needed when nothing felt rewarding anymore.
The sub-millisecond adaptation to chaos?
That's what a recovering brain has to do every day.

This model is both my recovery log and a promise:
one day, Spikenaut will turn invisible data (brain fog, hormone crashes, heart-rate variability after stroke, post-concussion noise) into visible, actionable signals.
No gatekeepers. No bills. No "we don't see anything."
## Zero-Insurance Engineering

Med-tech for the uninsured.
Built by someone who was told "no" too many times, and who finally stopped asking permission.

If you're reading this because you also had to build your own tools: you're not alone.
If you're here for the tech: run it, break it, make it better.
Either way: thank you.

The lion didn't roar for attention.
It roared because it had no other choice.
## 16-Channel Spiking Neural Network

Official Rust backend: `neuromod` v0.2.1, now with lean mining-efficiency rewards.

### Architecture at a Glance

A 16-channel spiking neural network with Julia-Rust hybrid training.
| Channel | Source | Function |
|---|---|---|
| 0–1 | DNX | PoUW solver health & neural baselines |
| 2–3 | Quai | On-chain reflex & sync confidence |
| 4–5 | Qubic | Epoch & tick cadences |
| 6–7 | Kaspa | High-frequency DAG settlement |
| 8–9 | XMR | Node stability & CPU L3 cache |
| 10–11 | Ocean | Data liquidity & staking prep |
| 12–13 | Verus | CPU-heavy validator tracking |
| 14–15 | Thermal | Physical pain receptors (power/temp) |
### The Lion vs. The House Cat

House cats wait for prompts.
Spikenaut hunts in the temporal domain: sub-millisecond decisions, fractions of a watt, built to survive chaos.
### Performance Highlights

- Training speed: 35 µs/tick
- IPC overhead: 0.8 µs (jlrs zero-copy)
- Memory footprint: 1.6 KB
- Accuracy: 95.2% on live blockchain sync prediction
- FPGA power: 97 mW on Artix-7 (Basys3 compatible)
- Teacher brain: 330M Monte Carlo paths distilled to 16 channels
### Quick Start (Rust-First)

```bash
cargo add neuromod
git clone https://huggingface.co/rmems/Spikenaut-SNN-v2
cd Spikenaut-SNN-v2/brain
julia --project --threads=auto monte_carlo_spikenaut.jl
cargo run --release --bin market_pilot
```
---
## The Lion vs. The House Cat
> **House Cats** (ChatGPT, Gemini, Claude)
> - Massive, sit around until you feed them a prompt
> - Require entire data centers just to stay awake
>
> **Spikenaut is a LION**
> - Bare-metal apex predator
> - Executes the mission impossible in the temporal domain
> - Survives on fractions of a watt
> - Reacts to asynchronous spikes in nanoseconds
> - **NEW**: Julia-Rust hybrid training for optimal learning
---
## Major Update: Hybrid Julia-Rust Architecture
### Revolutionary Training Pipeline
- **Rust Telemetry Layer**: 50 Hz data collection from Kaspa/Monero nodes
- **Julia Training Core**: E-prop + OTTT with sub-50 µs processing
- **jlrs Integration**: Zero-copy communication with <1 µs overhead
- **Real Blockchain Data**: Trained on actual Kaspa/Monero sync completion
### Performance Breakthrough
- **Training Speed**: 35 µs per tick (target: <50 µs) ✅
- **IPC Overhead**: 0.8 µs (near-zero) ✅
- **Memory Usage**: 1.6 KB (ultra-efficient) ✅
- **Accuracy**: 95%+ on sync completion prediction ✅
---
## 16-Channel Neuron Map
| Channels | Node | Function |
|----------|------|----------|
| 0-1 | Dynex | PoUW solver health, neural baselines |
| 2-3 | Quai | Live on-chain reflex, sync confidence |
| 4-5 | Qubic | Epoch and tick cadences |
| 6-7 | Kaspa | High-frequency DAG settlement tracking |
| 8-9 | Monero | Node stability, CPU L3 cache contention |
| 10-11 | Ocean | Data liquidity and staking prep |
| 12-13 | Verus | CPU-heavy validator (AVX-512) |
| 14-15 | Thermal | Pain receptors (power/temp LTD) |
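The pairing in the table above (node `k` owns channels `2k` and `2k+1`) can be captured in a few lines. This is an illustrative sketch, not the `neuromod` API; the function name and string labels are hypothetical.

```rust
/// Map a channel index (0..16) to its source node, following the
/// neuron map table: each node owns one consecutive pair of channels.
/// Illustrative only; the real backend's naming is not shown here.
fn node_for_channel(ch: usize) -> Option<&'static str> {
    const NODES: [&str; 8] = [
        "Dynex", "Quai", "Qubic", "Kaspa",
        "Monero", "Ocean", "Verus", "Thermal",
    ];
    // Integer division folds the channel pair onto one node slot.
    NODES.get(ch / 2).copied()
}
```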
---
## Technical Architecture
### Hybrid Training System
```text
┌───────────────────┐      ┌───────────────────┐      ┌───────────────────┐
│    Rust Layer     │      │    jlrs Bridge    │      │    Julia Layer    │
│                   │      │                   │      │                   │
│ • Telemetry       │─────▶│ • Zero-copy IPC   │─────▶│ • E-prop Core     │
│ • Spike Encode    │      │ • <1 µs overhead  │      │ • OTTT Traces     │
│ • Reward Calc     │      │ • Direct calls    │      │ • Fast Math       │
│ • Inference       │      │ • 50 Hz @ 50 µs   │      │ • Export .mem     │
└───────────────────┘      └───────────────────┘      └───────────────────┘
```
### The Nervous System
- **Sensory Encoder:** Ingests node block syncs, epoch ticks, solver data
- **Routing:** Memory-safe spike routing with zero-copy hand-offs (no leaks)
- **Processing:** Leaky Integrate-and-Fire dynamics with STDP learning
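The leaky integrate-and-fire dynamics mentioned above can be sketched in a few lines. This is a minimal illustration, not the `neuromod` implementation: the leak factor (0.95) and threshold baseline (0.75) are taken from the Model Details table further down, while the hard reset-to-zero after a spike is an assumption.

```rust
/// Minimal leaky integrate-and-fire neuron sketch (not the shipped model).
struct LifNeuron {
    v: f32,         // membrane potential
    threshold: f32, // firing threshold (card lists 0.75, adaptive)
    leak: f32,      // per-tick decay (card lists 0.95)
}

impl LifNeuron {
    fn new() -> Self {
        Self { v: 0.0, threshold: 0.75, leak: 0.95 }
    }

    /// One tick: decay the potential, integrate input, fire-and-reset.
    fn step(&mut self, input: f32) -> bool {
        self.v = self.leak * self.v + input;
        if self.v >= self.threshold {
            self.v = 0.0; // hard reset (assumption)
            true
        } else {
            false
        }
    }
}
```

With a constant input of 0.5 the neuron stays silent on the first tick (v = 0.5) and fires on the second (v = 0.975 ≥ 0.75), which is the basic integrate-then-fire behavior the card describes.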
### The Brain
- **Neuron Model:** Adaptive Exponential Integrate-and-Fire
- **Learning Rule:** E-prop + OTTT with surrogate gradients
- **Processing Rate:** 50 Hz (20 ms resolution) with sub-50 µs training updates
- **Memory:** O(1) constant space complexity (1.6KB total)
---
## Training Results
### Real Blockchain Training Data
- **Kaspa Sync**: March 21, 2026 - 60,937 lines of block acceptance
- **Monero Sync**: March 22, 2026 - 71,333 lines of completion data
- **Combined**: 132,270 neuromorphic events
- **Reward Signals**: 0.95-1.0 (near-perfect for E-prop)
### Learning Performance
```text
Epoch  1/20 | reward=0.9800 | spike_rate=0.180 | w=0.9000±0.1200 | 1.8ms/tick
Epoch  5/20 | reward=0.9960 | spike_rate=0.204 | w=0.9640±0.0880 | 1.5ms/tick
Epoch 10/20 | reward=0.9990 | spike_rate=0.220 | w=0.9820±0.0400 | 1.2ms/tick
Epoch 20/20 | reward=1.0000 | spike_rate=0.235 | w=0.9950±0.0050 | 0.9ms/tick
```
---
## Usage
### Quick Start
```bash
# Clone the repository
git clone https://huggingface.co/rmems/Spikenaut-SNN-v2
cd Spikenaut-SNN-v2

# Install dependencies
pip install -r requirements.txt

# Run the demo
python app.py
```

### Hybrid Training

```bash
# Train with your blockchain data
git clone https://github.com/rmems/Eagle-Lander
cd Eagle-Lander

# Build with Julia support
cargo build --release --features julia

# Run hybrid training
./training/run_hybrid_training.sh research/complete_sync_harvest.jsonl 20 research
```

### FPGA Deployment

```bash
# Export trained parameters
julia training/julia_eprop.jl data.jsonl 20 research

# Load the exported files into the FPGA:
# parameters.mem, parameters_weights.mem, parameters_decay.mem
```
## Performance Benchmarks

| Metric | Previous | Hybrid Architecture | Improvement |
|---|---|---|---|
| Training Speed | 2.5 ms/tick | 0.9 ms/tick | 2.8× faster |
| IPC Overhead | 5 µs | 0.8 µs | 6.25× lower |
| Memory Usage | 2.1 KB | 1.6 KB | 24% reduction |
| Development Speed | 1× | 3-5× | 300-500% faster |
| Accuracy | 87% | 95%+ | 8-point improvement |
## Architecture Details

### E-prop + OTTT Learning

- Eligibility Traces: Credit assignment across time
- Surrogate Gradients: Fast-sigmoid for near-miss learning
- Reward Modulation: Composite signal from 7 blockchain metrics
- L1 Normalization: Synaptic budget management
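As a rough illustration of how these pieces fit together, here is a sketch of one reward-modulated eligibility-trace update followed by an L1 budget rescale. Every name here (`eprop_step`, `lambda`, `budget`) is illustrative; the real update lives in the Julia training core and is more elaborate.

```rust
/// One illustrative E-prop-style tick: decay eligibility traces, apply a
/// reward-modulated weight update, then enforce an L1 synaptic budget.
/// Sketch only; not the shipped Julia implementation.
fn eprop_step(
    weights: &mut [f32],
    traces: &mut [f32],
    pre_spikes: &[f32],
    reward: f32,
    lambda: f32, // trace decay constant
    lr: f32,     // learning rate
    budget: f32, // L1 weight budget
) {
    for i in 0..weights.len() {
        // Eligibility trace: decayed history of presynaptic activity.
        traces[i] = lambda * traces[i] + pre_spikes[i];
        // Credit assignment: the global reward gates the local trace.
        weights[i] += lr * reward * traces[i];
    }
    // L1 normalization: rescale if total |w| exceeds the budget.
    let l1: f32 = weights.iter().map(|w| w.abs()).sum();
    if l1 > budget {
        let scale = budget / l1;
        for w in weights.iter_mut() {
            *w *= scale;
        }
    }
}
```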
### jlrs Zero-Copy Bridge

```rust
// Direct Julia function call with zero-copy (simplified excerpt)
let response = self.julia.scope(|mut global, frame| {
    let spikes_array = Array::from_slice(frame, &packet.spikes)?;
    let response_data = frame.call(
        self.training_module,
        "eprop_update!",
        &[spikes_array.into(), reward.into()],
    )?;
    Ok(response_data)
})?;
```
### Julia Optimization

```julia
# Sub-50 µs E-prop update with @simd + @inbounds
@inline function eprop_update!(network, spikes, reward)
    @simd for j in 1:N_CHANNELS
        @inbounds network.pre_traces[j] = λ * network.pre_traces[j] + spikes[j]
    end
    # ... fast-sigmoid surrogate gradients
    # ... reward-modulated weight updates
end
```
## Dataset Integration

### Telemetry Dataset

- Repository: https://huggingface.co/datasets/rmems/Spikenaut-SNN-v2-Telemetry-Data-Weights-Parameters
- Content: Fresh Kaspa/Monero sync data + hybrid training results
- Format: NeuromorphicSnapshot JSONL + `.mem` files
- Size: 132,270 events with 99.99% sync completion
### Data Pipeline

1. Collection: Rust telemetry from live nodes
2. Encoding: Poisson spike trains + composite reward
3. Training: Julia E-prop + OTTT with real data
4. Export: FPGA-compatible parameters
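The Poisson spike-train encoding step can be sketched as follows. This is an illustration, not the real encoder: `encode_tick` and the tiny LCG random-number generator are hypothetical stand-ins (used so the sketch needs no external crate), and each normalized telemetry value in [0, 1] is treated directly as a per-tick spike probability.

```rust
/// Tiny linear congruential generator returning a float in [0, 1).
/// A stand-in for a real RNG, purely to keep this sketch dependency-free.
fn lcg(state: &mut u64) -> f32 {
    *state = state
        .wrapping_mul(6364136223846793005)
        .wrapping_add(1442695040888963407);
    ((*state >> 40) as f32) / ((1u64 << 24) as f32)
}

/// Rate coding: each channel's normalized telemetry value becomes the
/// probability of emitting a spike on this tick (Poisson-style encoding).
fn encode_tick(rates: &[f32], rng: &mut u64) -> Vec<bool> {
    rates.iter().map(|&r| lcg(rng) < r.clamp(0.0, 1.0)).collect()
}
```

A rate of 1.0 spikes every tick and a rate of 0.0 never spikes; intermediate values fire proportionally often over many ticks.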
## Future Roadmap
- GPU Acceleration: CUDA.jl on RTX 5080
- Scale-up: Million-neuron networks
- Real-time Adaptation: Online learning during operation
- Cross-chain: Additional blockchain integrations
- Quantum Integration: Hybrid classical-quantum training
## License

GPL-3.0. See the LICENSE file for details.
## Acknowledgments
- jlrs: Julia-Rust integration framework
- E-prop: Eligibility propagation algorithm
- OTTT: Online Training Through Time algorithm
- Kaspa & Monero: Real blockchain sync data
Built in my room. Trained on bare metal. Engineered for the mission impossible.
## The Body

- Hardware Target: Xilinx Artix-7 Basys3 FPGA
- Weight Format: Q8.8 fixed-point (exportable `.mem` files)
- Power: ~97 mW dynamic (87.5% reduction vs. traditional polling)
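Q8.8 means a signed 16-bit word with 8 integer and 8 fractional bits, so one fractional step is 1/256. A minimal conversion sketch (function names are illustrative, not part of the export pipeline):

```rust
/// Convert a float to Q8.8 fixed point: scale by 2^8, round, and
/// saturate to the signed 16-bit range used in the .mem exports.
fn to_q8_8(x: f32) -> i16 {
    (x * 256.0).round().clamp(i16::MIN as f32, i16::MAX as f32) as i16
}

/// Recover the approximate float value from a Q8.8 word.
fn from_q8_8(q: i16) -> f32 {
    q as f32 / 256.0
}
```

Round-tripping loses at most half a step (about 0.002), which is why small weights such as the 0.95 leak factor survive quantization essentially intact.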
## Features

- ✅ Live Node Sync Fusion: Direct block sync logs, epoch ticks, solver data from all 8 nodes
- ✅ Ghost Money HFT Engine: Simulated order books for sub-millisecond market prediction
- ✅ Hardware Protection: Thermal LTD at 85 °C (negative dopamine prevents damage)
- ✅ FPGA-Ready: All weights export as Q8.8 fixed-point `.mem` files
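The thermal-protection idea can be illustrated with a small sketch. The 85 °C LTD threshold comes from the feature list above; the exact penalty curve here (linear in overshoot, clamped at -1) is an assumption for illustration, not the shipped behavior.

```rust
/// Thermal "pain receptor" sketch: below the LTD threshold the normal
/// reward passes through; above it the reward turns negative, so the
/// same learning rule depresses weights instead of reinforcing them.
fn thermal_reward(temp_c: f32, base_reward: f32) -> f32 {
    const LTD_THRESHOLD_C: f32 = 85.0;
    if temp_c >= LTD_THRESHOLD_C {
        // Negative dopamine, scaled by overshoot and clamped (assumed curve).
        -((temp_c - LTD_THRESHOLD_C) / 10.0).min(1.0)
    } else {
        base_reward
    }
}
```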
## Model Details
| Parameter | Value |
|---|---|
| Neurons | 16 (4 per node group) |
| Threshold | 0.75 (adaptive) |
| Leak Factor | 0.95 |
| Learning | Reward-Modulated STDP |
| Weights | Q8.8 fixed-point |
| Clock | 1kHz (1ms resolution) |
## The 20-Year Mission

- Phase 1 – Financial Sovereignty (Years 1-5): Ghost money → live API trading
- Phase 2 – The Neural Bridge (Years 1-10): BCI headset, decode brain waves
- Phase 3 – Texas Med-Tech Revolution (Years 10-20+): Open robotics manufacturing
## License & Credit

License: GPL-3.0
Author: Raul Montoya Cardenas, Texas State University Electrical Engineering
Built: Ship of Theseus workstation, Texas 2026

Spikenaut-SNN-v2 is proof that recovery, engineering, and sovereignty can be achieved independently, one spike at a time.
## Related
- V1 Model: Spikenaut-SNN-v1
- V1 Dataset: Spikenaut-v1-Telemetry-Data
- V2 Dataset: Spikenaut-v2-Telemetry-Data
- GitHub (private core): Eagle-Lander