🧠 Cognitive AI Agent

A human-like AI system with persistent memory, multi-step reasoning, and self-reflection.

Features

  • 4-Layer Memory System: Short-term, episodic, semantic, and procedural memory
  • Chain-of-Thought Reasoning: Explains its thinking step-by-step
  • Self-Reflection: Evaluates and improves its own outputs
  • Goal Management: Tracks and pursues objectives
  • No API Keys Required: Runs completely locally (uses stub mode)
  • Confidence Scores: Returns confidence levels with responses

How It Works

User Input
    ↓
Memory Retrieval (Find relevant context)
    ↓
Reasoning (Plan step-by-step approach)
    ↓
Response Generation (LLM or stub)
    ↓
Self-Reflection (Evaluate quality)
    ↓
Learning (Store new knowledge)
    ↓
Output (Response + Confidence)
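The pipeline above can be sketched as a minimal, self-contained loop. This is an illustration only: the function names, the keyword-overlap "retrieval", and the fixed confidence values are simplified stand-ins, not the package's real API (which lives in cognitive_agent/core_agent.py).

```python
# Toy sketch of the cognitive loop shown above. Every name here is
# illustrative; the real implementation differs in detail.

def retrieve(memory, query):
    # Naive "relevance": keep stored items sharing a word with the query.
    words = set(query.lower().split())
    return [m for m in memory if words & set(m.lower().split())]

def think(memory, user_input):
    context = retrieve(memory, user_input)              # 1. memory retrieval
    steps = [f"analyze: {user_input}", "draft answer"]  # 2. reasoning plan
    response = f"[stub] {user_input} (context: {len(context)} items)"  # 3. generation
    confidence = 0.8 if context else 0.6                # 4. toy self-reflection score
    memory.append(user_input)                           # 5. learning
    return response, {"confidence": confidence, "steps": steps}

memory = []
response, meta = think(memory, "Tell me about Python")
print(response, meta["confidence"])
# → [stub] Tell me about Python (context: 0 items) 0.6
```

A second call with a related query would find the stored first input and return the higher confidence, mirroring the retrieval-then-learn cycle of the real agent.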

Running on Hugging Face Spaces

  1. Create a new Space on Hugging Face
  2. Clone this repo or upload the files
  3. Set the runtime to "CPU basic" or higher
  4. app.py auto-launches the Gradio interface

Running Locally

# Install requirements
pip install -r requirements.txt

# Option 1: Web interface (Gradio)
python app.py

# Option 2: Interactive CLI
python -m examples.examples_interactive

# Option 3: Run demos
python -m examples.examples_simple_agent

Project Structure

cognitive-agent/
├── app.py                      # Hugging Face Spaces app
├── requirements.txt            # Dependencies
├── cognitive_agent/            # Main package
│   ├── __init__.py
│   ├── core_agent.py          # Main orchestrator
│   ├── core_brain.py          # LLM integration
│   ├── core_memory.py         # Memory systems
│   ├── core_reasoning.py      # Reasoning engine
│   ├── core_reflection.py     # Self-reflection
│   ├── core_retrieval.py      # Context retrieval
│   ├── utils_config.py        # Configuration
│   └── utils_embeddings.py    # Vector embeddings
└── examples/                   # Examples
    ├── examples_interactive.py  # CLI demo
    └── examples_simple_agent.py # 6 code examples

Quick Start

Web Interface (Gradio)

python app.py
# Opens at http://localhost:7860

Programmatic Usage

from cognitive_agent import CognitiveAgent, Config

# Create agent
agent = CognitiveAgent(mode='stub', provider='local')

# Ask a question
response, metadata = agent.think("What is artificial intelligence?")

print(response)
print(f"Confidence: {metadata['confidence']:.1%}")
print(f"Memory items: {metadata['memory_stats']['total_items']}")

Memory Across Conversations

The agent learns and retains information across conversations:

  1. First interaction: "Tell me about Python"

    • Agent responds and stores knowledge about Python
  2. Second interaction: "What programming languages did we discuss?"

    • Agent retrieves from memory and responds with Python
  3. Third interaction: "Compare Python to other languages"

    • Agent uses previous context to make comparisons
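The cross-turn retention described above can be illustrated with a toy episodic store. The `ToyMemory` class below is hypothetical, not the package's real core_memory.py API; it only shows the store-then-recall pattern that lets the second interaction find the first.

```python
# Toy illustration of memory across conversations: each interaction is
# stored, and later turns can retrieve earlier topics by keyword.

class ToyMemory:
    def __init__(self):
        self.episodes = []          # past user inputs, in order

    def store(self, text):
        self.episodes.append(text)

    def recall(self, keyword):
        # Return every stored episode mentioning the keyword.
        return [e for e in self.episodes if keyword.lower() in e.lower()]

mem = ToyMemory()
mem.store("Tell me about Python")       # interaction 1 is stored
hits = mem.recall("python")             # interaction 2 retrieves it
print(hits)
# → ['Tell me about Python']
```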

Customization

Change memory sizes

from cognitive_agent import Config

config = Config()
config.memory.short_term_max_items = 100
config.memory.episodic_memory_max = 2000

agent = CognitiveAgent(mode='stub', config=config)

Use real LLM (Anthropic Claude)

import os

agent = CognitiveAgent(
    mode='api',
    provider='anthropic',
    api_key=os.environ.get('ANTHROPIC_API_KEY')
)

Disable self-reflection (faster)

config = Config()
config.reflection.enable_self_reflection = False
agent = CognitiveAgent(config=config)

Performance

  • Response time: ~500ms (stub mode)
  • Memory usage: ~5MB after 100 interactions
  • Confidence range: 0.75-0.85 typical
  • Reasoning depth: 3-5 steps per query

Architecture Details

7-Step Cognitive Loop

  1. Context Retrieval - Find relevant memories using vector similarity
  2. Reasoning - Break down problem into steps with confidence
  3. Brain (LLM) - Generate response using language model
  4. Reflection - Evaluate response quality on 5 dimensions
  5. Improvement - Auto-improve if quality score < 0.7
  6. Learning - Store interaction as new memories
  7. Output - Return response with confidence score

Memory Types

  • Short-term (50 items, 1 hour): Conversation context
  • Episodic (1000 items): Specific events and experiences
  • Semantic (5000 items): General knowledge and facts
  • Procedural (expandable): Learned patterns and strategies

Reflection Checks

The system evaluates responses on:

  • ✓ Coherence - Does it flow logically?
  • ✓ Completeness - Does it address the query?
  • ✓ Accuracy - Are claims supported?
  • ✓ Clarity - Is it understandable?
  • ✓ Consistency - Does it match memory?

If overall score < 0.7, the system regenerates an improved version.
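The threshold gate can be sketched as below. This assumes a simple mean over the five dimensions; the real scorer in core_reflection.py is LLM-based and may weight the checks differently.

```python
# Illustrative reflection gate: average the five checks above and flag
# for regeneration below the 0.7 threshold (a simplifying assumption).

CHECKS = ("coherence", "completeness", "accuracy", "clarity", "consistency")

def overall_score(scores):
    """Mean of the five 0-1 dimension scores."""
    return sum(scores[c] for c in CHECKS) / len(CHECKS)

def needs_improvement(scores, threshold=0.7):
    """True if the response should be regenerated."""
    return overall_score(scores) < threshold

scores = {"coherence": 0.9, "completeness": 0.8, "accuracy": 0.7,
          "clarity": 0.6, "consistency": 0.4}
print(needs_improvement(scores))  # mean is 0.68, below 0.7
```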

Examples

See examples/ directory for:

  • 6 programmatic demonstrations (examples_simple_agent.py)
  • Interactive CLI interface (examples_interactive.py)

API Reference

CognitiveAgent

# Initialize
agent = CognitiveAgent(
    mode='stub',           # or 'api'
    provider='local',      # or 'anthropic', 'openai'
    api_key=None,          # str; required when mode='api'
    config=None            # optional Config instance
)

# Main method
response, metadata = agent.think(
    user_input: str,
    enable_reflection: bool = True
)

# Introspection
agent.get_memory_summary() -> Dict
agent.get_cognitive_trace(limit: int = 5) -> List[Dict]

# Persistence
agent.save_state(filepath: str)
agent.load_state(filepath: str)
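Conceptually, save_state/load_state is a serialize-then-restore round trip. The JSON format below is a guess for illustration only; the package's actual persistence format may differ.

```python
# Toy state round trip mirroring save_state/load_state. The dict keys
# ("memory", "goals") are assumptions, not the real on-disk schema.
import json
import os
import tempfile

state = {"memory": ["Tell me about Python"], "goals": []}

path = os.path.join(tempfile.gettempdir(), "agent_state.json")
with open(path, "w") as f:          # save_state analogue
    json.dump(state, f)

with open(path) as f:               # load_state analogue
    restored = json.load(f)

print(restored["memory"])
# → ['Tell me about Python']
```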

Limitations

  • Stub mode uses deterministic responses (good for testing)
  • No fine-tuning on user data (future enhancement)
  • Memory limited to in-process storage (add persistent DB for production)
  • No multi-agent reasoning (future enhancement)

Future Enhancements

  • Persistent database (SQLite)
  • Tool integration (search, calculator, etc.)
  • Multi-agent communication
  • Fine-tuning on domain data
  • Streaming responses
  • Vision integration

License

MIT License - Feel free to use and modify

Citation

If you use this system, cite as:

Cognitive AI Agent - A human-like AI prototype with memory, reasoning, and self-reflection.
(2024)

Questions & Support

  • Check the documentation in cognitive_agent/ directory
  • Look at examples in examples/ directory
  • Read code comments for implementation details

Ready to start? Run python app.py for the web interface or choose a script in the examples directory!