---
tags:
  - brain-inspired
  - spiking-neural-network
  - biologically-plausible
  - modular-architecture
  - reinforcement-learning
  - vision-language
  - pytorch
  - curriculum-learning
  - cognitive-architecture
  - artificial-general-intelligence
license: mit
datasets:
  - mnist
  - imdb
  - synthetic-environment
language:
  - en
library_name: transformers
widget:
  - text: The weather is nice today.
  - text: I feel curious about the stars.
model-index:
  - name: ModularBrainAgent
    results:
      - task:
          type: image-classification
          name: Vision-based Classification
        dataset:
          type: mnist
          name: MNIST
        metrics:
          - type: accuracy
            value: 0.98
      - task:
          type: text-classification
          name: Language Sentiment Analysis
        dataset:
          type: imdb
          name: IMDb
        metrics:
          - type: accuracy
            value: 0.91
      - task:
          type: reinforcement-learning
          name: Curiosity-driven Exploration
        dataset:
          type: synthetic-environment
          name: Synthetic Environment
        metrics:
          - type: cumulative_reward
            value: 112.5
---

# 🧠 ModularBrainAgent: A Brain-Inspired Cognitive AI Model

ModularBrainAgent is a biologically plausible, spiking neural agent that combines vision, language, and reinforcement learning in a single architecture. Inspired by human neurobiology, it implements eight neuron types and complex synaptic pathways, including excitatory, inhibitory, modulatory, bidirectional, feedback, lateral, and plastic connections.

It’s designed for researchers, neuroscientists, and AI developers exploring the frontier between brain science and general intelligence.


## 🧩 Model Architecture

- **Total Neurons:** 576
- **Neuron Types:** Interneurons, Excitatory, Inhibitory, Cholinergic, Dopaminergic, Serotonergic, Feedback, Plastic
- **Core Modules:**
  - **SharedEncoder:** multidimensional feature compressor
  - **CNNVision:** convolutional module for visual inputs
  - **GRULanguage:** recurrent module for sentence understanding
  - **ReplayMemory + Curiosity + Entropy:** RL exploration engine
  - **Task Heads:** classifiers + reinforcement actor-critic
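To make the modular layout above concrete, here is a minimal PyTorch sketch of how these pieces could fit together. It is an illustrative assumption, not the repository's actual code: the class name `ModularBrainAgentSketch`, the dimensions, and the layer choices are all hypothetical.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the modular layout described above; all names and
# dimensions are illustrative, not taken from the repository.
class ModularBrainAgentSketch(nn.Module):
    def __init__(self, vocab_size=10_000, n_classes=10, n_actions=4, d=128):
        super().__init__()
        # CNNVision: convolutional module for image inputs (e.g. 1x28x28 MNIST)
        self.vision = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
            nn.Flatten(), nn.Linear(32, d),
        )
        # GRULanguage: recurrent module for token sequences
        self.embed = nn.Embedding(vocab_size, d)
        self.language = nn.GRU(d, d, batch_first=True)
        # SharedEncoder: compresses either modality into a shared feature space
        self.shared = nn.Sequential(nn.Linear(d, d), nn.ReLU())
        # Task heads: classifier plus an actor-critic pair for RL
        self.cls_head = nn.Linear(d, n_classes)
        self.actor = nn.Linear(d, n_actions)
        self.critic = nn.Linear(d, 1)

    def forward(self, image=None, tokens=None):
        if image is not None:
            feat = self.vision(image)              # visual pathway
        else:
            _, h = self.language(self.embed(tokens))
            feat = h[-1]                           # language pathway
        z = self.shared(feat)                      # shared feature space
        return self.cls_head(z), self.actor(z), self.critic(z)

agent = ModularBrainAgentSketch()
logits, policy, value = agent(image=torch.randn(2, 1, 28, 28))
```

All task heads read from the same shared representation, which is what makes the architecture plug-and-play: a new modality only needs an encoder that maps into the shared space.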

## 🧠 Features

  • πŸͺ Multi-modal input support (Images, Language, Environment signals)
  • πŸ” Hebbian + gradient learning
  • ⚑ Spiking simulation for dynamic activity
  • 🧠 Biologically-inspired synaptic dynamics
  • 🧬 Curriculum learning and memory consolidation
  • πŸ” Fully modular: plug-and-play layers

## 📊 Performance Summary

| Task | Dataset | Metric | Result |
|---|---|---|---|
| Digit Recognition | MNIST | Accuracy | 98% |
| Sentiment Analysis | IMDb | Accuracy | 91% |
| Exploration Task | Synthetic GridWorld | Cumulative Reward | 112.5 |

## 💻 Training Data

- **MNIST:** handwritten digit classification
- **IMDb:** sentiment classification from text
- **Synthetic Grid Environment:** exploration and navigation

## 🧪 Intended Uses

| Use Case | Description |
|---|---|
| Neuroscience AI Research | Brain-inspired network modeling |
| Cognitive Simulation | Testing artificial memory, attention, and curiosity |
| Multi-task Agent Prototyping | Language + vision + decision-making models |
| Educational Tool | Learning the principles of bio-AI, spiking neurons, and RL |

## ⚠️ Limitations

- Currently trained only on small-scale datasets
- Requires a GPU/TPU for efficient inference
- Cognitive feedback is not yet implemented for all pathways
- Limited real-world generalization until scaled up

## 📂 Repository Structure