# Felix Framework - Quick Start Guide

Welcome to the Felix Framework! This guide will get you up and running with LLM-powered geometric orchestration in minutes.

## Prerequisites

1. **LM Studio** - Local LLM inference server
   - Download from [https://lmstudio.ai/](https://lmstudio.ai/)
   - Install and load a model (any chat model works)
   - Start the server (default: http://localhost:1234)

2. **Python 3.8+** and **Git** (Python 3.12+ recommended)

## Step-by-Step Setup

### 1. Clone and Setup Environment

```bash
# Clone the repository
git clone <your-repo-url>
cd thefelix

# Create virtual environment
python3 -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -r requirements.txt
```

### 2. Verify LM Studio Connection

```bash
# Test if LM Studio is running
curl http://localhost:1234/v1/models

# Test from Python
python -c "from src.llm.lm_studio_client import LMStudioClient; print('✓ Connected' if LMStudioClient().test_connection() else '✗ Failed')"
```

### 3. Run Your First Demo

```bash
# Simple blog writer demo
python examples/blog_writer.py "Write about renewable energy" --complexity simple

# Code reviewer demo
python examples/code_reviewer.py --code-string "def factorial(n): return 1 if n <= 1 else n * factorial(n-1)"

# Performance benchmark
python examples/benchmark_comparison.py --task "Research AI safety" --runs 3

# Real-time visualization
python visualization/helix_monitor.py --mode terminal --demo
```

## What You'll See

### Blog Writer
- **3-6 agents** spawn at different times
- **Research agents** (top of helix): Broad exploration, high creativity
- **Analysis agents** (middle): Focused processing 
- **Synthesis agents** (bottom): Final integration, low temperature
- **Natural convergence** through geometric constraints

### Code Reviewer
- **Multi-perspective analysis**: Structure, performance, security, style
- **Quality assurance**: Bug detection, best practices
- **Comprehensive report**: Final synthesis of all reviews

### Benchmark Comparison
- **Felix vs Linear**: Statistical comparison of approaches
- **Performance metrics**: Time, tokens, quality scores
- **Geometric advantages**: Natural bottlenecking, memory efficiency
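
The benchmark's statistical comparison boils down to aggregating metrics over repeated runs. A minimal sketch of that aggregation, using invented illustrative timings (not real benchmark output):

```python
# Illustrative only: compare mean +/- stdev of per-run wall-clock times.
# The numbers below are made up; the real script collects them from actual runs.
import statistics

felix_times = [12.4, 11.8, 12.9]   # seconds per run (invented)
linear_times = [18.2, 17.5, 19.1]  # seconds per run (invented)

print(f"Felix:  {statistics.mean(felix_times):.1f}s ± {statistics.stdev(felix_times):.1f}s")
print(f"Linear: {statistics.mean(linear_times):.1f}s ± {statistics.stdev(linear_times):.1f}s")
```

The same pattern applies to token counts and quality scores: collect per-run values, then report mean and spread so the two approaches can be compared fairly.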

## Key Concepts

### Geometric Orchestration
Instead of explicit graphs (like LangGraph), Felix uses **3D helix geometry**:

```python
# Traditional approach
graph.add_node("research", research_function)
graph.add_edge("research", "analysis")

# Felix approach  
helix = HelixGeometry(33.0, 0.001, 33.0, 33)
agents = create_specialized_team(helix, llm_client, "medium")
# Agents naturally converge through geometry
```

### Position-Aware Behavior
Agent behavior adapts based on helix position:
- **Top (wide)**: Temperature 0.9, broad exploration
- **Middle**: Temperature 0.5, focused analysis
- **Bottom (narrow)**: Temperature 0.1, precise synthesis
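
The mapping above can be sketched as a simple interpolation from depth to temperature. This is an illustrative function, not the framework's actual API; only the endpoint values (0.9 at the top, 0.1 at the bottom) come from this guide:

```python
# Hypothetical sketch: map a normalized helix depth (0.0 = top, 1.0 = bottom)
# to a sampling temperature, interpolating between the endpoints listed above.
def temperature_for_depth(depth: float) -> float:
    """Linearly interpolate temperature from 0.9 (top) down to 0.1 (bottom)."""
    depth = max(0.0, min(1.0, depth))  # clamp to the valid range
    return 0.9 - 0.8 * depth

print(temperature_for_depth(0.0))  # top: broad exploration
print(temperature_for_depth(0.5))  # middle: focused analysis
print(temperature_for_depth(1.0))  # bottom: precise synthesis
```

The real framework derives this from the agent's 3D position on the helix; the sketch just shows the shape of the relationship.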

### Spoke Communication
- **O(N) complexity** vs O(N²) mesh systems
- **Central coordination** with distributed processing
- **Natural bottlenecking** for quality control
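
The O(N) claim follows from the topology: each agent keeps one link to a central hub instead of N-1 links to every peer. A minimal hub-and-spoke sketch (class and method names here are illustrative, not the framework's actual classes):

```python
# Illustrative hub-and-spoke messaging: N agents, N links, one hub.
# A full mesh would need N*(N-1)/2 links instead.
class Hub:
    def __init__(self):
        self.inbox = []

    def send(self, sender: str, message: str) -> None:
        # Every agent's single link terminates here.
        self.inbox.append((sender, message))

    def drain(self):
        # Central coordination point: collect and clear pending messages.
        messages, self.inbox = self.inbox, []
        return messages

hub = Hub()
for agent_id in ("research-1", "analysis-1", "synthesis-1"):
    hub.send(agent_id, f"status update from {agent_id}")

print(len(hub.drain()))  # 3 messages, each over a single agent-to-hub link
```

Because everything funnels through the hub, the hub can also act as the quality-control bottleneck the bullet above describes.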

## Configuration

### Adjust Token Limits
```python
# In examples or your code
llm_client = LMStudioClient(timeout=120.0)  # 2 minute timeout
agent = LLMAgent(..., max_tokens=300)       # Shorter responses
```

### Team Complexity
```python
# Simple: 3 agents (1 research, 1 analysis, 1 synthesis)
agents = create_specialized_team(helix, llm_client, "simple")

# Medium: 6 agents (2 research, 2 analysis, 1 critic, 1 synthesis)  
agents = create_specialized_team(helix, llm_client, "medium")

# Complex: 9 agents (3 research, 3 analysis, 2 critics, 1 synthesis)
agents = create_specialized_team(helix, llm_client, "complex")
```

## Troubleshooting

### "Connection Failed"
```bash
# Check LM Studio is running
curl http://localhost:1234/v1/models

# Restart LM Studio and ensure model is loaded
# Check firewall isn't blocking port 1234
```

### "Import Errors"
```bash
# Ensure you're in the right directory
cd /path/to/thefelix

# Activate virtual environment
source venv/bin/activate

# Reinstall dependencies
pip install --force-reinstall openai httpx numpy scipy
```

### "Request Timeout"
- **Reduce complexity**: Use `--complexity simple`
- **Smaller model**: Use a faster model in LM Studio
- **Reduce tokens**: Lower max_tokens in agent creation

### "No Output Generated"
- **Check LM Studio console** for activity
- **Try simpler prompts** first
- **Verify model responses** work in LM Studio UI

## Next Steps

1. **Experiment with prompts**: Try different topics and complexity levels
2. **Review your code**: Use the code reviewer on your own files  
3. **Run benchmarks**: Compare Felix vs traditional approaches
4. **Customize agents**: Create your own specialized agent types
5. **Monitor in real-time**: Use the visualization tools

## Getting Help

- **Documentation**: Check `/docs/` folder for detailed explanations
  - **Navigation Guide**: See `docs/getting-started/README.md` for documentation structure
  - **Architecture**: Review `docs/architecture/PROJECT_OVERVIEW.md` for high-level overview
- **Research Log**: See `RESEARCH_LOG.md` for development insights
- **Mathematical Model**: Review `docs/architecture/core/mathematical_model.md` for theory
- **LLM Integration**: Full details in `docs/guides/llm-integration/LLM_INTEGRATION.md`
- **Development**: See `docs/guides/development/DEVELOPMENT_RULES.md` for contribution guidelines

## Examples to Try

```bash
# Creative writing
python examples/blog_writer.py "The future of space exploration" --complexity medium

# Technical analysis  
python examples/code_reviewer.py examples/blog_writer.py

# Research comparison
python examples/benchmark_comparison.py --task "Analyze climate change solutions" --runs 5

# Watch agents work
python visualization/helix_monitor.py --mode terminal --demo
```

Welcome to geometric orchestration! 🌀

---

*Felix Framework: Where geometry meets intelligence*