---
tags:
- AGI
- SentientAI
---

# DreamStack - Symbolic Cognition Simulator

A Python script that simulates symbolic cognition with the `llama-cpp-python` library, using a stacked equation structure.

## Overview

DreamStack simulates symbolic cognition using this stacked equation structure:

```
D₀ = A₀ · f₀(x) + B₀ · g₀(x)
D₁ = A₁ · f₁(g₀) + B₁ · g₁(g₀)
D₂ = A₂ · f₂(g₁) + B₂ · g₂(g₁)
... up to a user-defined recursion depth
```

At each level:

- `fₙ(x)`: the first, "real" output
- `gₙ(x)`: three alternate "imaginary" outputs
- Each imaginary output is recursively processed as a new prompt
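
The recursion above can be sketched in plain Python. This is an illustrative model, not the script's actual API: the `generate` callback, the helper name `build_dreamstack`, and the linear chain that feeds only the first imaginary output forward (as the `"Based on: g₀₁"` prompts in the output format suggest) are all assumptions.

```python
def build_dreamstack(generate, prompt, depth):
    """Sketch of the DreamStack recursion (hypothetical helper).

    `generate(prompt, variant)` stands in for an LLM call: variant 0
    produces the "real" output f_n, variants 1-3 the "imaginary" g_n.
    """
    results = {}
    current_prompt = prompt
    for level in range(depth + 1):
        real = generate(current_prompt, 0)                           # f_n
        imaginary = [generate(current_prompt, v) for v in (1, 2, 3)]  # g_n
        results[f"D_{level}"] = {
            "prompt": current_prompt,
            "real": real,
            "imaginary": imaginary,
        }
        # Feed the first imaginary output forward as the next prompt.
        current_prompt = f"Based on: {imaginary[0]}"
    return results
```

With a real `llama-cpp-python` backend, `generate` might wrap a call to a loaded `Llama` object, varying the sampling temperature or seed per variant.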

## Prerequisites

1. **Python 3.8+** installed on your system
2. **GGUF model file** (e.g., `mistral-7b-instruct-v0.2.Q4_K_M.gguf`)

## Installation

1. **Install dependencies:**

   ```bash
   pip install -r requirements.txt
   ```

2. **Download a GGUF model:**

   - Download a GGUF-format model (e.g., from Hugging Face)
   - Place it in a `models/` directory relative to the script
   - Default expected path: `./models/mistral-7b-instruct-v0.2.Q4_K_M.gguf`

## Usage

1. **Run the script:**

   ```bash
   python dreamstack.py
   ```

2. **Follow the prompts:**

   - Enter your initial prompt
   - Specify the recursion depth (default: 3)

3. **View results:**

   - Results are displayed in the terminal
   - JSON results are saved to `dreamstack_results.json`

## Configuration

You can modify the model configuration in the `DreamStack` class:

```python
dreamstack = DreamStack(
    model_path="./models/your-model.gguf",
    n_ctx=2048,   # Context window size
    n_threads=4   # Number of threads
)
```

## Output Format

The script generates a JSON structure like:

```json
{
  "D_0": {
    "prompt": "user prompt",
    "real": "f₀ output",
    "imaginary": ["g₀₁", "g₀₂", "g₀₃"]
  },
  "D_1": {
    "prompt": "Based on: g₀₁",
    "real": "f₁ output",
    "imaginary": ["g₁₁", "g₁₂", "g₁₃"]
  },
  ...
}
```
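
Because the saved file is a flat `D_0`, `D_1`, ... mapping, it can be post-processed with the standard library alone. A minimal sketch, assuming the layout shown above (the helper name `load_dreamstack` is illustrative):

```python
import json

def load_dreamstack(path="dreamstack_results.json"):
    """Load saved DreamStack results as a list of (name, level) pairs.

    Levels are returned in numeric order, so "D_10" does not sort
    before "D_2" the way plain string sorting would place it.
    """
    with open(path, "r", encoding="utf-8") as fh:
        results = json.load(fh)
    return [(name, results[name])
            for name in sorted(results, key=lambda k: int(k.split("_")[1]))]
```

For example, `for name, level in load_dreamstack(): print(name, level["real"])` walks the chain from `D_0` upward.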

## Example

```bash
$ python dreamstack.py

=== DreamStack - Symbolic Cognition Simulator ===

Enter your prompt: What is the meaning of life?
Enter recursion depth (default: 3): 2

Loading model from ./models/mistral-7b-instruct-v0.2.Q4_K_M.gguf...
Model loaded successfully!

=== Starting DreamStack Processing ===
Initial prompt: What is the meaning of life?
Depth: 2

--- Generating D_0 ---
Prompt: What is the meaning of life?
Generating real output...
Real output: The meaning of life is to find purpose and fulfillment...
Generating imaginary output 1/3...
Imaginary output 1: From a philosophical perspective...
Generating imaginary output 2/3...
Imaginary output 2: In terms of biological evolution...
Generating imaginary output 3/3...
Imaginary output 3: From a spiritual viewpoint...

=== Processing D_1 ===
...
```

## Requirements

- `llama-cpp-python>=0.2.0`
- `typing-extensions>=4.0.0`

## Notes

- The script requires a GGUF-format model file
- Processing time depends on model size and recursion depth
- Results are automatically saved in JSON format
- Each layer generates 1 real and 3 imaginary outputs
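
Since each layer makes 1 real plus 3 imaginary generations, the cost of a run is easy to estimate. The sketch below assumes the linear chain shown in the example output, where only one imaginary output per level is recursed; `total_llm_calls` is a hypothetical helper, not part of the script:

```python
def total_llm_calls(depth):
    """Estimated LLM calls for a linear-chain run: D_0 .. D_depth."""
    calls_per_level = 1 + 3   # one real output + three imaginary outputs
    return (depth + 1) * calls_per_level
```

For instance, the depth-2 example run above performs 3 levels × 4 generations = 12 calls, and the default depth of 3 performs 16.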