Evolved Qwen Hybrid v3

An experimental hybrid AI that integrates a Large Language Model with biologically inspired neuromorphic memory. The model combines structural SNN neurogenesis with Hebbian fast-weights to achieve rapid, on-the-fly learning from user interactions.

🧬 Neuromorphic Architecture

  • SNN Reservoir (SELSC Engine): A 10,000-neuron liquid-state machine that captures structural context.
  • Dopamine-Modulated STDP: Synaptic plasticity in the SNN is gated by a "Dopamine" signal derived from sentiment analysis and novelty (Surprise Factor).
  • Hebbian Fast-Weights: Dynamic shadow matrices attached to the LLM's FFN layers, allowing for near-instantaneous association learning without backpropagation.
  • Tiny Reward Expert: A prioritized linear expert that internalizes human feedback into a predictive reward model.
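The fast-weight idea above can be sketched as a decaying outer-product update alongside a frozen FFN weight matrix. This is a minimal illustrative sketch, not the repository's actual implementation; the class name, hyperparameters, and update rule are assumptions.

```python
import numpy as np

class HebbianFastWeights:
    """Illustrative fast-weight "shadow matrix" for an FFN layer.

    Hebbian rule with decay: F <- decay * F + eta * outer(post, pre).
    All names and constants here are hypothetical, chosen only to
    show the mechanism described in the bullet above.
    """

    def __init__(self, d_in, d_out, eta=0.1, decay=0.95):
        self.F = np.zeros((d_out, d_in))  # fast-weight shadow matrix
        self.eta = eta                    # Hebbian learning rate
        self.decay = decay                # per-step forgetting factor

    def forward(self, W, x):
        # Base (frozen) FFN projection plus the fast-weight contribution.
        return W @ x + self.F @ x

    def update(self, pre, post):
        # Outer-product Hebbian association: no backpropagation needed,
        # so the association is usable on the very next forward pass.
        self.F = self.decay * self.F + self.eta * np.outer(post, pre)

# Usage: associate an input with the activation it produced.
rng = np.random.default_rng(0)
W = rng.normal(size=(4, 8))     # frozen base weights
fw = HebbianFastWeights(d_in=8, d_out=4)
x = rng.normal(size=8)
y = fw.forward(W, x)            # initially identical to W @ x
fw.update(pre=x, post=y)        # store the association in F
```

Because the update is a rank-one outer product with decay, recent associations dominate and old ones fade, which is what makes the memory "fast" relative to gradient training.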

πŸš€ Quickstart

  1. Clone the repository:

    git clone https://huggingface.co/Specialgfhdhdh/learning-qwen
    cd learning-qwen
    
  2. Install dependencies:

    pip install -r requirements.txt
    
  3. Run the interactive chat:

    python run_hybrid.py
    

🧠 How it Learns

As you converse with the AI, the SELSC Engine builds a biological representation of the context. If you praise the AI, its Dopamine level rises, strengthening current synaptic patterns. If you correct it, Dopamine drops, suppressing incorrect associations. The Hebbian Layer ensures that these associations are immediately accessible in future turns of the same session.
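The feedback loop above amounts to a three-factor plasticity rule: a Hebbian term gated by how far the dopamine signal deviates from baseline. The following sketch is an assumption about how such gating could look; the function names, the sentiment/surprise mixing weights, and the baseline value are all hypothetical.

```python
import numpy as np

def dopamine_signal(sentiment, surprise, baseline=0.5):
    # Hypothetical mixing: sentiment in [-1, 1] (praise vs. correction)
    # and surprise in [0, 1] (novelty) shift dopamine around a baseline,
    # clipped to [0, 1].
    return float(np.clip(baseline + 0.4 * sentiment + 0.1 * surprise, 0.0, 1.0))

def gated_update(weights, hebbian_term, dopamine, baseline=0.5, lr=0.05):
    # Three-factor rule: plasticity is scaled and signed by the deviation
    # of dopamine from baseline. Praise (above baseline) strengthens the
    # currently active pattern; correction (below baseline) suppresses it.
    return weights + lr * (dopamine - baseline) * hebbian_term

# Praise strengthens, correction weakens, the same co-activation pattern.
w = np.zeros((2, 2))
hebb = np.ones((2, 2))          # stand-in for the current Hebbian co-activation
w_praised = gated_update(w, hebb, dopamine_signal(sentiment=+1.0, surprise=0.2))
w_corrected = gated_update(w, hebb, dopamine_signal(sentiment=-1.0, surprise=0.2))
```

The sign flip around the baseline is the key design point: a single scalar feedback channel can both reinforce and suppress, without a separate "punishment" pathway.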

Credits

Built using Qwen-2.5-0.5B-Instruct as the cognitive core and a custom JAX-powered neuromorphic engine for structural memory.
