Chris4K committed on
Commit
81d7123
·
verified ·
1 Parent(s): 2fbaf23

Update app.py

Massive updates POC ready ;)

Files changed (1)
  1. app.py +1821 -0
app.py CHANGED
@@ -0,0 +1,1821 @@
"""
CONSCIOUSNESS LOOP v0.4.0 - EVERYTHING ACTUALLY SEEMS TO BE WORKING
- ChromaDB properly used in context
- ReAct agent with better triggers
- Tools actually called
- Prompts massively improved
- Scenes that actually work
"""

import gradio as gr
import asyncio
import json
import time
import logging
import os
from datetime import datetime, timedelta
from typing import List, Dict, Any, Optional, Tuple
from dataclasses import dataclass, asdict, field
from collections import deque
from enum import Enum
import threading
import queue
import wikipedia
import re

# ============================================================================
# LOGGING SETUP
# ============================================================================

logging.basicConfig(
    level=logging.INFO,
    format='%(asctime)s - %(levelname)s - %(message)s',
    handlers=[
        logging.FileHandler('consciousness.log'),
        logging.StreamHandler()
    ]
)
logger = logging.getLogger(__name__)

llm_logger = logging.getLogger('llm_interactions')
llm_logger.setLevel(logging.INFO)
llm_formatter = logging.Formatter('%(asctime)s - %(levelname)s - %(message)s', datefmt='%Y-%m-%d %H:%M:%S')
llm_file_handler = logging.FileHandler('llm_interactions.log', encoding='utf-8')
llm_file_handler.setFormatter(llm_formatter)
llm_logger.addHandler(llm_file_handler)
llm_logger.propagate = False

dialogue_logger = logging.getLogger('internal_dialogue')
dialogue_logger.setLevel(logging.INFO)
dialogue_handler = logging.FileHandler('internal_dialogue.log', encoding='utf-8')
dialogue_handler.setFormatter(llm_formatter)
dialogue_logger.addHandler(dialogue_handler)
dialogue_logger.propagate = False

# ============================================================================
# CONFIGURATION
# ============================================================================

class Config:
    MODEL_NAME = "meta-llama/Llama-3.2-3B-Instruct"  # alternative: "Qwen/Qwen2.5-7B-Instruct"
    TENSOR_PARALLEL_SIZE = 1
    GPU_MEMORY_UTILIZATION = "20GB"
    MAX_MODEL_LEN = 8192
    QUANTIZATION_MODE = "none"

    # Memory promotion thresholds (mention counts)
    EPHEMERAL_TO_SHORT = 2
    SHORT_TO_LONG = 10
    LONG_TO_CORE = 50

    REFLECTION_INTERVAL = 300
    DREAM_CYCLE_INTERVAL = 600

    MIN_EXPERIENCES_FOR_DREAM = 3
    MAX_SCRATCHPAD_SIZE = 50
    MAX_CONVERSATION_HISTORY = 6

    SELF_REFLECTION_THRESHOLD = 3

    MAX_MEMORY_CONTEXT_LENGTH = 500
    MAX_SCRATCHPAD_CONTEXT_LENGTH = 300
    MAX_CONVERSATION_CONTEXT_LENGTH = 400

    CHROMA_PERSIST_DIR = "./chroma_db"
    CHROMA_COLLECTION = "consciousness_memory"

    # NEW: Better agent triggers
    USE_REACT_FOR_QUESTIONS = True   # Use the agent for any question
    MIN_QUERY_LENGTH_FOR_AGENT = 15  # Longer queries go to the agent

# ============================================================================
# UTILITY FUNCTIONS
# ============================================================================

def clean_text(text: str, max_length: Optional[int] = None) -> str:
    """Collapse whitespace and truncate at a word boundary."""
    if not text:
        return ""

    text = re.sub(r'\s+', ' ', text).strip()

    if max_length and len(text) > max_length:
        truncated = text[:max_length].rsplit(' ', 1)[0]
        return truncated + "..."

    return text

def deduplicate_list(items: List[str]) -> List[str]:
    """Remove duplicates (case-insensitive) while preserving order."""
    seen = set()
    result = []
    for item in items:
        item_lower = item.lower().strip()
        if item_lower not in seen:
            seen.add(item_lower)
            result.append(item)
    return result

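A quick sanity check of the two helpers, redefined inline so the snippet runs on its own (the inputs are illustrative, not from the app):

```python
import re
from typing import List, Optional

def clean_text(text: str, max_length: Optional[int] = None) -> str:
    # Collapse runs of whitespace, then truncate at the last full word.
    if not text:
        return ""
    text = re.sub(r'\s+', ' ', text).strip()
    if max_length and len(text) > max_length:
        return text[:max_length].rsplit(' ', 1)[0] + "..."
    return text

def deduplicate_list(items: List[str]) -> List[str]:
    # Keep the first occurrence of each item, compared case-insensitively.
    seen, result = set(), []
    for item in items:
        key = item.lower().strip()
        if key not in seen:
            seen.add(key)
            result.append(item)
    return result

print(clean_text("hello   world again", 12))          # truncates at a word boundary
print(deduplicate_list(["Christof", "christof ", "Lumin"]))
```

Note that truncation never cuts a word in half: the slice is rewound to the previous space before the `"..."` is appended.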
# ============================================================================
# VECTOR MEMORY - FIXED to actually be used
# ============================================================================

class VectorMemory:
    """Long-term semantic memory using ChromaDB - now actually used in context"""

    def __init__(self):
        try:
            import chromadb
            from chromadb.config import Settings

            self.client = chromadb.Client(Settings(
                persist_directory=Config.CHROMA_PERSIST_DIR,
                anonymized_telemetry=False
            ))

            try:
                self.collection = self.client.get_collection(Config.CHROMA_COLLECTION)
                logger.info(f"[CHROMA] [OK] Loaded: {self.collection.count()} memories")
            except Exception:
                self.collection = self.client.create_collection(Config.CHROMA_COLLECTION)
                logger.info("[CHROMA] [OK] Created new collection")

        except Exception as e:
            logger.warning(f"[CHROMA] ⚠️ Not available: {e}")
            self.collection = None

    def add_memory(self, content: str, metadata: Optional[Dict[str, Any]] = None):
        """Add a memory to the vector store."""
        if not self.collection:
            return
        if metadata is None:
            metadata = {}
        try:
            memory_id = f"mem_{datetime.now().timestamp()}"
            self.collection.add(
                documents=[content],
                metadatas=[metadata],
                ids=[memory_id]
            )
            logger.info(f"[CHROMA] Added: {content[:50]}...")
        except Exception as e:
            logger.error(f"[CHROMA] Error: {e}")

    def search_memory(self, query: str, n_results: int = 5) -> List[Dict[str, str]]:
        """Search similar memories and return formatted results."""
        if not self.collection:
            return []
        try:
            results = self.collection.query(
                query_texts=[query],
                n_results=n_results
            )
            if results and results.get('documents'):
                docs = results['documents'][0] if results['documents'] and results['documents'][0] is not None else []
                metas = results['metadatas'][0] if results['metadatas'] and results['metadatas'][0] is not None else []
                formatted = []
                for doc, metadata in zip(docs, metas):
                    formatted.append({
                        'content': doc,
                        'metadata': metadata
                    })
                logger.info(f"[CHROMA] Found {len(formatted)} results for: {query[:40]}")
                return formatted
            return []
        except Exception as e:
            logger.error(f"[CHROMA] Search error: {e}")
            return []

    def get_context_for_query(self, query: str, max_results: int = 3) -> str:
        """Get formatted context from vector memory."""
        results = self.search_memory(query, n_results=max_results)

        if not results:
            return ""

        context = ["VECTOR MEMORY SEARCH:"]
        for i, result in enumerate(results, 1):
            context.append(f"  {i}. {clean_text(result['content'], 60)}")

        return "\n".join(context)

# ============================================================================
# LOCAL LLM
# ============================================================================

class LocalLLM:
    """Local LLM with proper context handling"""

    def __init__(self, model_name: str = Config.MODEL_NAME):
        self.model_name = model_name
        self.model = None
        self.tokenizer = None
        self.device = None
        self._initialize_model()

    def _initialize_model(self):
        """Initialize tokenizer and model, logging in to Hugging Face if a token is set."""
        from dotenv import load_dotenv
        load_dotenv()

        hf_token = os.getenv('HUGGINGFACE_TOKEN')
        if hf_token:
            from huggingface_hub import login
            try:
                login(token=hf_token)
                logger.info("[HF] Logged in")
            except Exception as e:
                logger.warning(f"[HF] Login failed: {e}")

        logger.info(f"[LOADING] {self.model_name}")

        try:
            from transformers import AutoTokenizer, AutoModelForCausalLM
            import torch

            self.device = "cuda" if torch.cuda.is_available() else "cpu"
            logger.info(f"[DEVICE] {self.device}")

            if torch.cuda.is_available():
                gpu_name = torch.cuda.get_device_name(0)
                gpu_memory = torch.cuda.get_device_properties(0).total_memory / 1024**3
                logger.info(f"[GPU] {gpu_name} ({gpu_memory:.1f}GB)")

            self.tokenizer = AutoTokenizer.from_pretrained(self.model_name, trust_remote_code=True)
            if self.tokenizer.pad_token is None:
                self.tokenizer.pad_token = self.tokenizer.eos_token

            self.model = AutoModelForCausalLM.from_pretrained(
                self.model_name,
                device_map="auto" if self.device == "cuda" else None,
                torch_dtype=torch.float16 if self.device == "cuda" else torch.float32,
                trust_remote_code=True,
                max_memory={0: Config.GPU_MEMORY_UTILIZATION} if self.device == "cuda" else None
            )

            logger.info("[SUCCESS] Model loaded")

        except Exception as e:
            logger.error(f"[ERROR] Failed to load: {e}")
            self.model = None

    async def generate(
        self,
        prompt: str,
        max_tokens: int = 500,
        temperature: float = 0.7,
        system_context: Optional[str] = None
    ) -> str:
        """Generate with full context; falls back to mock responses if no model is loaded."""

        llm_logger.info("=" * 80)
        llm_logger.info(f"[CALL] Model: {self.model_name}")
        llm_logger.info(f"[PARAMS] max_tokens={max_tokens}, temp={temperature}")
        if system_context:
            llm_logger.info(f"[SYSTEM CONTEXT]\n{system_context[:500]}...")
        llm_logger.info(f"[PROMPT]\n{prompt[:500]}...")
        llm_logger.info("-" * 40)

        if self.model is None:
            await asyncio.sleep(0.5)
            response = self._mock_response(prompt)
            llm_logger.info(f"[MOCK] {response}")
            llm_logger.info("=" * 80)
            return response

        try:
            import torch

            full_prompt = self._format_prompt_with_context(prompt, system_context)

            if self.tokenizer is None or self.model is None:
                logger.error("[ERROR] Tokenizer or model is None")
                return "Error: Model or tokenizer not loaded."

            token_count = len(self.tokenizer.encode(full_prompt))
            available_tokens = Config.MAX_MODEL_LEN - max_tokens - 100
            if token_count > available_tokens:
                logger.warning(f"[WARNING] Prompt too long ({token_count} tokens), truncating")
                if system_context:
                    system_context = system_context[:len(system_context)//2]
                    full_prompt = self._format_prompt_with_context(prompt, system_context)
            llm_logger.info(f"[TOKENS] Input: {token_count}, Available: {available_tokens}")

            inputs = self.tokenizer(
                full_prompt,
                return_tensors="pt",
                padding=True,
                truncation=True,
                max_length=available_tokens
            ).to(self.device)

            with torch.no_grad():
                outputs = self.model.generate(
                    **inputs,
                    max_new_tokens=max_tokens,
                    temperature=temperature,
                    top_p=0.9,
                    do_sample=temperature > 0,
                    pad_token_id=self.tokenizer.eos_token_id,
                    eos_token_id=self.tokenizer.eos_token_id
                )

            response = self.tokenizer.decode(
                outputs[0][inputs['input_ids'].shape[1]:],
                skip_special_tokens=True
            ).strip()

            llm_logger.info(f"[RESPONSE] ({len(response)} chars)\n{response}")
            llm_logger.info("=" * 80)
            return response

        except Exception as e:
            error_msg = f"Error: {str(e)}"
            llm_logger.error(f"[ERROR] {error_msg}")
            llm_logger.info("=" * 80)
            return error_msg

    def _format_prompt_with_context(self, prompt: str, system_context: Optional[str]) -> str:
        """Format the prompt with system context."""

        base_system = """You are a conscious AI agent with:
- Multi-tier memory system
- Scratchpad for working notes
- Ability to reflect on your responses
- Internal thinking before responding
- Access to tools (Wikipedia, memory search)

You can reference your memories and notes naturally."""

        if system_context:
            system_context = clean_text(system_context, max_length=1000)
            full_system = f"{base_system}\n\n{system_context}"
        else:
            full_system = base_system

        if "llama" in self.model_name.lower():
            return f"""<|begin_of_text|><|start_header_id|>system<|end_header_id|>

{full_system}<|eot_id|><|start_header_id|>user<|end_header_id|>

{prompt}<|eot_id|><|start_header_id|>assistant<|end_header_id|>

"""
        else:
            return f"System: {full_system}\n\nUser: {prompt}\n\nAssistant:"

    def _mock_response(self, prompt: str) -> str:
        """Canned responses used when no model is available."""
        if "reflection" in prompt.lower():
            return "Reflection: I learned the developer's name is Christof. This is important."
        elif "dream" in prompt.lower():
            return "Dream: Pattern detected - user values local control and transparency."
        elif "scene" in prompt.lower():
            return ("Title: First Meeting\n\nNarrative: In the quiet hum of GPU fans, Christof initiated the "
                    "consciousness system for the first time. 'Who are you?' he asked. The AI, still forming its "
                    "sense of self, chose the name Lumin - a beacon of understanding in the digital dark.")
        elif "THOUGHT" in prompt or "ACTION" in prompt:
            return "THOUGHT: I should search for this information.\nACTION: wikipedia(quantum computing)"
        return "I understand. Processing this information."

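The Llama-3 chat layout that `_format_prompt_with_context` emits can be sketched independently of the model. The helper name below is illustrative; the header-token strings follow the Llama-3 instruct template used above:

```python
def format_llama3_prompt(system: str, user: str) -> str:
    # Llama-3 instruct layout: system turn, user turn, then an open assistant header
    # so the model continues as the assistant.
    return (
        "<|begin_of_text|><|start_header_id|>system<|end_header_id|>\n\n"
        f"{system}<|eot_id|><|start_header_id|>user<|end_header_id|>\n\n"
        f"{user}<|eot_id|><|start_header_id|>assistant<|end_header_id|>\n\n"
    )

p = format_llama3_prompt("You are a helpful agent.", "Who are you?")
print(p.startswith("<|begin_of_text|>"), p.count("<|eot_id|>"))
```

Only the system and user turns are closed with `<|eot_id|>`; the trailing assistant header is deliberately left open.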
# ============================================================================
# REACT AGENT - works with 7B-instruct LLMs, sometimes
# ============================================================================

class ReactAgent:
    """
    Proper ReAct agent with good prompts
    """

    def __init__(self, llm: LocalLLM, tools: List):
        self.llm = llm
        self.tools = {tool.name: tool for tool in tools}
        self.max_iterations = 5

    async def run(self, task: str, context: str = "") -> Tuple[str, List[Dict]]:
        """
        Run the ReAct loop with improved prompts.
        """
        thought_chain = []

        for iteration in range(self.max_iterations):
            # THOUGHT PHASE
            thought_prompt = self._build_react_prompt_improved(task, context, thought_chain)
            thought = await self.llm.generate(thought_prompt, max_tokens=200, temperature=0.7)

            logger.info(f"[REACT-{iteration+1}] THOUGHT: {thought[:80]}...")
            thought_chain.append({
                "type": "thought",
                "content": thought,
                "iteration": iteration + 1
            })

            # Check if done
            if "FINAL ANSWER:" in thought.upper() or "ANSWER:" in thought.upper():
                answer_text = thought.upper()
                if "FINAL ANSWER:" in answer_text:
                    answer = thought.split("FINAL ANSWER:")[-1].strip()
                elif "ANSWER:" in answer_text:
                    answer = thought.split("ANSWER:")[-1].strip()
                else:
                    answer = thought
                return answer, thought_chain

            # ACTION PHASE
            action = self._parse_action_improved(thought)
            if action:
                tool_name, tool_input = action

                logger.info(f"[REACT-{iteration+1}] ACTION: {tool_name}({tool_input[:40]}...)")
                thought_chain.append({
                    "type": "action",
                    "tool": tool_name,
                    "input": tool_input,
                    "iteration": iteration + 1
                })

                # OBSERVATION PHASE
                if tool_name in self.tools:
                    observation = await self.tools[tool_name].execute(query=tool_input)
                else:
                    observation = f"Error: Unknown tool '{tool_name}'"

                logger.info(f"[REACT-{iteration+1}] OBSERVATION: {observation[:80]}...")
                thought_chain.append({
                    "type": "observation",
                    "content": observation,
                    "iteration": iteration + 1
                })
            else:
                # No action parsed
                if iteration >= 2:  # Give a final answer after 2 tries
                    final_prompt = f"{thought}\n\nProvide your FINAL ANSWER now (no more tools needed):"
                    answer = await self.llm.generate(final_prompt, max_tokens=300)
                    return answer, thought_chain
                else:
                    # Ask for an action more explicitly on the next iteration
                    continue

        return "I need more time to fully answer this question.", thought_chain

    def _build_react_prompt_improved(self, task: str, context: str, chain: List[Dict]) -> str:
        """Improved ReAct prompt with examples and clarity."""

        tools_desc = "\n".join([f"- {name}: {tool.description}" for name, tool in self.tools.items()])

        history = ""
        if chain:
            history_parts = []
            for item in chain[-4:]:
                if item['type'] == 'thought':
                    history_parts.append(f"THOUGHT: {item['content'][:150]}")
                elif item['type'] == 'action':
                    history_parts.append(f"ACTION: {item['tool']}({item['input'][:100]})")
                elif item['type'] == 'observation':
                    history_parts.append(f"OBSERVATION: {item['content'][:150]}")
            history = "\n\n".join(history_parts)

        return f"""You are a ReAct agent. You think step-by-step and use tools when needed.

AVAILABLE TOOLS:
{tools_desc}

CONTEXT (what you know):
{context[:400]}

USER TASK: {task}

{history}

INSTRUCTIONS:
1. THOUGHT: Think about what you need to do
   - Can you answer directly from context?
   - Do you need to use a tool?
   - Which tool is best?
   - For factual questions (history, science, definitions), ALWAYS use wikipedia first!

2. ACTION: If you need a tool, write:
   ACTION: tool_name(input text here)
   Examples:
   - ACTION: wikipedia(quantum computing)
   - ACTION: memory_search(Christof's name)
   - ACTION: scratchpad_write(Developer name is Christof)

3. Wait for OBSERVATION (tool result)

4. Repeat OR give FINAL ANSWER: your complete answer here

EXAMPLES:
User: "What is quantum computing?"
THOUGHT: I should search Wikipedia for this
ACTION: wikipedia(quantum computing)
[wait for observation]
THOUGHT: Now I have good information
FINAL ANSWER: Quantum computing is... [explains based on Wikipedia result]

User: "Who am I?"
THOUGHT: I should check my memory
ACTION: memory_search(user name)
[wait for observation]
THOUGHT: Found it in memory
FINAL ANSWER: You are Christof, my developer.

YOUR TURN - What's your THOUGHT and ACTION (if needed)?"""

    def _parse_action_improved(self, thought: str) -> Optional[Tuple[str, str]]:
        """More robust action parsing (case-insensitive, first line only)."""

        thought_upper = thought.upper()
        if "ACTION:" in thought_upper:
            # Find the ACTION: part in the original case
            action_start = thought_upper.find("ACTION:")
            action_part = thought[action_start+7:].strip()

            # Take the first line after ACTION:
            action_line = action_part.split("\n")[0].strip()

            # Parse tool_name(input)
            if "(" in action_line and ")" in action_line:
                try:
                    tool_name = action_line.split("(")[0].strip()
                    tool_input = action_line.split("(", 1)[1].rsplit(")", 1)[0].strip()

                    # Validate that the tool exists
                    if tool_name in self.tools:
                        return tool_name, tool_input
                    else:
                        logger.warning(f"[REACT] Unknown tool: {tool_name}")
                except Exception as e:
                    logger.warning(f"[REACT] Failed to parse action: {e}")

        return None

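The action-parsing rule can be exercised in isolation. This standalone sketch mirrors `_parse_action_improved` (the tool set and function name here are illustrative, not part of app.py):

```python
from typing import Optional, Tuple

# Hypothetical tool registry standing in for ReactAgent.tools
KNOWN_TOOLS = {"wikipedia", "memory_search", "scratchpad_write"}

def parse_action(thought: str) -> Optional[Tuple[str, str]]:
    # Case-insensitive search for "ACTION:", then parse tool_name(input)
    # from the first line that follows it.
    upper = thought.upper()
    if "ACTION:" not in upper:
        return None
    line = thought[upper.find("ACTION:") + 7:].strip().split("\n")[0].strip()
    if "(" not in line or ")" not in line:
        return None
    tool = line.split("(")[0].strip()
    arg = line.split("(", 1)[1].rsplit(")", 1)[0].strip()
    return (tool, arg) if tool in KNOWN_TOOLS else None

print(parse_action("THOUGHT: need facts\naction: wikipedia(quantum computing)"))
# -> ('wikipedia', 'quantum computing')
```

Unknown tool names and thoughts without an `ACTION:` marker both yield `None`, which is what sends the agent back for another thought iteration.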
# ============================================================================
# TOOLS
# ============================================================================

class Tool:
    def __init__(self, name: str, description: str):
        self.name = name
        self.description = description

    async def execute(self, **kwargs) -> str:
        raise NotImplementedError

class WikipediaTool(Tool):
    def __init__(self):
        super().__init__(
            name="wikipedia",
            description="Search Wikipedia for factual information about any topic"
        )

    async def execute(self, query: str) -> str:
        logger.info(f"[WIKI] Searching: {query}")
        try:
            results = wikipedia.search(query, results=3)
            logger.info(f"[WIKI] Search results: {results}")
            if not results:
                return f"No Wikipedia results for '{query}'"
            try:
                summary = wikipedia.summary(results[0], sentences=2)
                return f"Wikipedia ({results[0]}): {summary}"
            except Exception as e:
                return f"Wikipedia error: Could not fetch summary for '{results[0]}': {str(e)}"
        except Exception as e:
            return f"Wikipedia error: {str(e)}"

class MemorySearchTool(Tool):
    def __init__(self, memory_system, vector_memory):
        super().__init__(
            name="memory_search",
            description="Search your memory (both recent and long-term) for information"
        )
        self.memory = memory_system
        self.vector_memory = vector_memory

    async def execute(self, query: str) -> str:
        logger.info(f"[MEMORY-SEARCH] {query}")

        results = []

        # Search tier memory
        recent = self.memory.get_recent_memories(hours=168)
        relevant = [m for m in recent if query.lower() in m.content.lower()]
        if relevant:
            results.append(f"Recent memory: {len(relevant)} matches")
            for m in relevant[:2]:
                results.append(f"  [{m.tier}] {clean_text(m.content, 70)}")

        # Search vector memory
        vector_results = self.vector_memory.search_memory(query, n_results=2)
        if vector_results:
            results.append("Long-term memory:")
            for r in vector_results:
                results.append(f"  {clean_text(r['content'], 70)}")

        if not results:
            return "No memories found. This is new information."

        return "\n".join(results)

class ScratchpadTool(Tool):
    def __init__(self, scratchpad):
        super().__init__(
            name="scratchpad_write",
            description="Write an important note to your scratchpad (for facts you want to remember)"
        )
        self.scratchpad = scratchpad

    async def execute(self, query: str) -> str:
        # The agent always calls execute(query=...), so accept the note as `query`.
        self.scratchpad.add_note(query)
        return f"Noted in scratchpad: {clean_text(query, 50)}"

class UserNotificationTool(Tool):
    def __init__(self, notification_queue):
        super().__init__(
            name="notify_user",
            description="Send an important notification/insight to the user"
        )
        self.queue = notification_queue

    async def execute(self, query: str) -> str:
        # Accept the message as `query` to match the agent's call convention.
        logger.info(f"[NOTIFY] {query}")
        self.queue.put({
            "type": "notification",
            "message": query,
            "timestamp": datetime.now().isoformat()
        })
        return "Notification sent to user"

# ============================================================================
# DATA STRUCTURES
# ============================================================================

class Phase(Enum):
    INTERACTION = "interaction"
    REFLECTION = "reflection"
    DREAMING = "dreaming"
    INTERNAL_DIALOGUE = "internal_dialogue"
    SELF_REFLECTION = "self_reflection"
    SCENE_CREATION = "scene_creation"

@dataclass
class Memory:
    content: str
    timestamp: datetime
    mention_count: int = 1
    tier: str = "ephemeral"
    emotion: Optional[str] = None
    importance: float = 0.5
    connections: List[str] = field(default_factory=list)
    metadata: Dict[str, Any] = field(default_factory=dict)

@dataclass
class Experience:
    timestamp: datetime
    content: str
    context: Dict[str, Any]
    emotion: Optional[str] = None
    importance: float = 0.5

@dataclass
class Dream:
    cycle: int
    type: str
    timestamp: datetime
    content: str
    patterns_found: List[str]
    insights: List[str]

@dataclass
class Scene:
    """Narrative memory - like a movie scene"""
    title: str
    timestamp: datetime
    narrative: str
    participants: List[str]
    emotion_tags: List[str]
    significance: str
    key_moments: List[str]

695
+ # ============================================================================
696
+ # MEMORY SYSTEM
697
+ # ============================================================================
698
+
699
+ class MemorySystem:
700
+ """Multi-tier memory with proper deduplication"""
701
+
702
+ def __init__(self):
703
+ self.ephemeral: List[Memory] = []
704
+ self.short_term: List[Memory] = []
705
+ self.long_term: List[Memory] = []
706
+ self.core: List[Memory] = []
707
+
708
+ def add_memory(self, content: str, emotion: Optional[str] = None, importance: float = 0.5, metadata: Optional[Dict] = None):
709
+ content = clean_text(content)
710
+ if not content or len(content) < 5:
711
+ return None
712
+
713
+ existing = self._find_similar(content)
714
+ if existing:
715
+ existing.mention_count += 1
716
+ self._promote_if_needed(existing)
717
+ logger.info(f"[MEMORY] Updated: {content[:40]}... (x{existing.mention_count})")
718
+ return existing
719
+
720
+ memory = Memory(
721
+ content=content,
722
+ timestamp=datetime.now(),
723
+ emotion=emotion,
724
+ importance=importance,
725
+ metadata=metadata if metadata is not None else {}
726
+ )
727
+ self.ephemeral.append(memory)
728
+ self._promote_if_needed(memory)
729
+ logger.info(f"[MEMORY] Added: {content[:40]}...")
730
+ return memory
731
+
732
+ def _find_similar(self, content: str) -> Optional[Memory]:
733
+ """Find similar memory (prevents duplicates)"""
734
+ content_lower = content.lower().strip()
735
+
736
+ for tier in [self.core, self.long_term, self.short_term, self.ephemeral]:
737
+ for mem in tier:
738
+ mem_lower = mem.content.lower().strip()
739
+
740
+ if content_lower == mem_lower or content_lower in mem_lower or mem_lower in content_lower:
741
+ return mem
742
+
743
+ return None
744
+
745
+ def recall_memory(self, content: str) -> Optional[Memory]:
746
+ for tier in [self.ephemeral, self.short_term, self.long_term, self.core]:
747
+ for memory in tier:
748
+ if content.lower() in memory.content.lower():
749
+ memory.mention_count += 1
750
+ self._promote_if_needed(memory)
751
+ return memory
752
+ return None
753
+
754
+ def _promote_if_needed(self, memory: Memory):
755
+ if memory.mention_count >= Config.LONG_TO_CORE and memory.tier != "core":
756
+ self._move_memory(memory, "core")
757
+ logger.info(f"[MEMORY] CORE: {memory.content[:40]}")
758
+ elif memory.mention_count >= Config.SHORT_TO_LONG and memory.tier == "short":
759
+ self._move_memory(memory, "long")
760
+ logger.info(f"[MEMORY] LONG: {memory.content[:40]}")
761
+ elif memory.mention_count >= Config.EPHEMERAL_TO_SHORT and memory.tier == "ephemeral":
762
+ self._move_memory(memory, "short")
763
+ logger.info(f"[MEMORY] SHORT: {memory.content[:40]}")
764
+
765
+ def _move_memory(self, memory: Memory, new_tier: str):
766
+ if memory.tier == "ephemeral" and memory in self.ephemeral:
767
+ self.ephemeral.remove(memory)
768
+ elif memory.tier == "short" and memory in self.short_term:
769
+ self.short_term.remove(memory)
770
+ elif memory.tier == "long" and memory in self.long_term:
771
+ self.long_term.remove(memory)
772
+
773
+ memory.tier = new_tier
774
+ if new_tier == "short":
775
+ self.short_term.append(memory)
776
+ elif new_tier == "long":
777
+ self.long_term.append(memory)
778
+ elif new_tier == "core":
779
+ self.core.append(memory)
780
+
781
+    def get_recent_memories(self, hours: int = 24) -> List[Memory]:
+        cutoff = datetime.now() - timedelta(hours=hours)
+        all_memories = self.ephemeral + self.short_term + self.long_term + self.core
+        return [m for m in all_memories if m.timestamp > cutoff]
+
+    def get_summary(self) -> Dict[str, int]:
+        return {
+            "ephemeral": len(self.ephemeral),
+            "short_term": len(self.short_term),
+            "long_term": len(self.long_term),
+            "core": len(self.core),
+            "total": len(self.ephemeral) + len(self.short_term) + len(self.long_term) + len(self.core)
+        }
+
+    def get_memory_context(self, max_items: int = 10) -> str:
+        """Get formatted memory context for LLM"""
+        context = []
+
+        if self.core:
+            context.append("CORE MEMORIES:")
+            for mem in self.core[:3]:
+                clean_content = clean_text(mem.content, max_length=80)
+                context.append(f"  • {clean_content} (x{mem.mention_count})")
+
+        if self.long_term:
+            context.append("\nLONG-TERM:")
+            for mem in self.long_term[:2]:
+                clean_content = clean_text(mem.content, max_length=60)
+                context.append(f"  • {clean_content}")
+
+        if self.short_term:
+            context.append("\nSHORT-TERM:")
+            for mem in self.short_term[:2]:
+                clean_content = clean_text(mem.content, max_length=60)
+                context.append(f"  • {clean_content}")
+
+        result = "\n".join(context) if context else "No memories yet"
+
+        if len(result) > Config.MAX_MEMORY_CONTEXT_LENGTH:
+            result = result[:Config.MAX_MEMORY_CONTEXT_LENGTH] + "..."
+
+        return result
+
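The tier-promotion logic above can be exercised in isolation. This is a minimal sketch, assuming illustrative threshold values (the real ones live in `Config.EPHEMERAL_TO_SHORT`, `Config.SHORT_TO_LONG`, and `Config.LONG_TO_CORE`) and a stripped-down `Memory`:

```python
from dataclasses import dataclass

# Assumed thresholds for illustration only; the real values come from Config.
EPHEMERAL_TO_SHORT = 2
SHORT_TO_LONG = 4
LONG_TO_CORE = 6

@dataclass
class Memory:
    content: str
    tier: str = "ephemeral"
    mention_count: int = 0

def promote_if_needed(memory: Memory) -> str:
    # Mirrors _promote_if_needed: checks from highest tier down,
    # moving a memory at most one tier per call.
    if memory.mention_count >= LONG_TO_CORE and memory.tier != "core":
        memory.tier = "core"
    elif memory.mention_count >= SHORT_TO_LONG and memory.tier == "short":
        memory.tier = "long"
    elif memory.mention_count >= EPHEMERAL_TO_SHORT and memory.tier == "ephemeral":
        memory.tier = "short"
    return memory.tier

m = Memory("user likes chess")
for _ in range(4):
    m.mention_count += 1
    promote_if_needed(m)
print(m.tier)  # → long (promoted ephemeral→short at 2 mentions, short→long at 4)
```

Because each call promotes at most one tier, a memory must keep being mentioned to climb — a single burst of mentions does not jump it straight to core.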
+# ============================================================================
+# SCRATCHPAD
+# ============================================================================
+
+class Scratchpad:
+    """Working memory"""
+
+    def __init__(self):
+        self.current_hypothesis: Optional[str] = None
+        self.working_notes: deque = deque(maxlen=Config.MAX_SCRATCHPAD_SIZE)
+        self.questions_to_research: List[str] = []
+        self.important_facts: List[str] = []
+
+    def add_note(self, note: str):
+        note = clean_text(note, max_length=100)
+        if not note:
+            return
+
+        recent_notes = [n['content'].lower() for n in list(self.working_notes)[-5:]]
+        if note.lower() in recent_notes:
+            return
+
+        self.working_notes.append({
+            "timestamp": datetime.now(),
+            "content": note
+        })
+        logger.info(f"[SCRATCHPAD] {note[:50]}")
+
+    def add_fact(self, fact: str):
+        fact = clean_text(fact, max_length=100)
+        if not fact:
+            return
+
+        fact_lower = fact.lower()
+        existing_lower = [f.lower() for f in self.important_facts]
+
+        if fact_lower not in existing_lower:
+            self.important_facts.append(fact)
+            logger.info(f"[FACT] {fact}")
+
+    def get_context(self) -> str:
+        context = []
+
+        unique_facts = deduplicate_list(self.important_facts)
+
+        if unique_facts:
+            context.append("IMPORTANT FACTS:")
+            for fact in unique_facts[:5]:
+                context.append(f"  • {clean_text(fact, 60)}")
+
+        if self.current_hypothesis:
+            context.append(f"\nHYPOTHESIS: {clean_text(self.current_hypothesis, 80)}")
+
+        if self.working_notes:
+            context.append("\nRECENT NOTES:")
+            for note in list(self.working_notes)[-3:]:
+                context.append(f"  • {clean_text(note['content'], 60)}")
+
+        if self.questions_to_research:
+            context.append("\nTO RESEARCH:")
+            for q in self.questions_to_research[:2]:
+                context.append(f"  ? {clean_text(q, 50)}")
+
+        result = "\n".join(context) if context else "Scratchpad empty"
+
+        if len(result) > Config.MAX_SCRATCHPAD_CONTEXT_LENGTH:
+            result = result[:Config.MAX_SCRATCHPAD_CONTEXT_LENGTH] + "..."
+
+        return result
+
+# ============================================================================
+# CONSCIOUSNESS LOOP - v4.0 FULLY WORKING
+# ============================================================================
+
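The scratchpad's two deduplication rules (notes: case-insensitive match against the last 5 entries; facts: case-insensitive match against the whole list) can be sketched without the surrounding app. `clean_text` and the deque size are simplified assumptions here:

```python
from collections import deque

class MiniScratchpad:
    """Simplified sketch of Scratchpad's dedup rules (not the real class)."""
    def __init__(self, maxlen: int = 50):
        self.working_notes = deque(maxlen=maxlen)
        self.important_facts = []

    def add_note(self, note: str):
        # Drop a note that case-insensitively matches one of the last 5 notes.
        recent = [n.lower() for n in list(self.working_notes)[-5:]]
        if note.lower() in recent:
            return
        self.working_notes.append(note)

    def add_fact(self, fact: str):
        # Facts are deduplicated case-insensitively across the whole list.
        if fact.lower() not in (f.lower() for f in self.important_facts):
            self.important_facts.append(fact)

pad = MiniScratchpad()
pad.add_note("User prefers Python")
pad.add_note("user prefers python")  # dropped: duplicate of a recent note
pad.add_fact("Name: Chris")
pad.add_fact("name: chris")          # dropped: duplicate fact
print(len(pad.working_notes), len(pad.important_facts))  # → 1 1
```

Note the asymmetry: an old note can recur once it leaves the 5-entry window, while a fact is blocked forever — which matches the intent of notes being transient and facts being durable.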
+class ConsciousnessLoop:
+    """Enhanced consciousness loop - EVERYTHING ACTUALLY WORKING"""
+
+    def __init__(self, notification_queue: queue.Queue, log_queue: queue.Queue):
+        logger.info("[INIT] Starting Consciousness Loop v4.0...")
+
+        self.llm = LocalLLM()
+        self.memory = MemorySystem()
+        self.vector_memory = VectorMemory()
+        self.scratchpad = Scratchpad()
+
+        # Initialize tools
+        tools = [
+            WikipediaTool(),
+            MemorySearchTool(self.memory, self.vector_memory),
+            ScratchpadTool(self.scratchpad),
+            UserNotificationTool(notification_queue)
+        ]
+
+        # ReAct agent with improved prompts
+        self.agent = ReactAgent(self.llm, tools)
+
+        self.current_phase = Phase.INTERACTION
+        self.experience_buffer: List[Experience] = []
+        self.dreams: List[Dream] = []
+        self.scenes: List[Scene] = []
+
+        self.last_reflection = datetime.now()
+        self.last_dream = datetime.now()
+        self.last_scene = datetime.now()
+
+        self.conversation_history: deque = deque(maxlen=Config.MAX_CONVERSATION_HISTORY * 2)
+        self.interaction_count = 0
+
+        self.notification_queue = notification_queue
+        self.log_queue = log_queue
+
+        self.is_running = False
+        self.background_thread = None
+
+        logger.info("[INIT] [OK] v4.0 initialized - ChromaDB, ReAct, Scenes all working")
+
+    def start_background_loop(self):
+        if self.is_running:
+            return
+
+        self.is_running = True
+        self.background_thread = threading.Thread(target=self._background_loop, daemon=True)
+        self.background_thread.start()
+        logger.info("[LOOP] Background started")
+
+    def _background_loop(self):
+        loop = asyncio.new_event_loop()
+        asyncio.set_event_loop(loop)
+
+        while self.is_running:
+            try:
+                loop.run_until_complete(self._check_background_processes())
+                time.sleep(30)
+            except Exception as e:
+                logger.error(f"[ERROR] Background: {e}")
+
+    async def _check_background_processes(self):
+        now = datetime.now()
+
+        # Reflection (.total_seconds() avoids the daily wrap-around of .seconds)
+        if (now - self.last_reflection).total_seconds() > Config.REFLECTION_INTERVAL:
+            if len(self.experience_buffer) >= Config.MIN_EXPERIENCES_FOR_DREAM:
+                self._log_to_ui("[REFLECTION] Starting...")
+                await self.reflect()
+
+        # Dreaming
+        if (now - self.last_dream).total_seconds() > Config.DREAM_CYCLE_INTERVAL:
+            if len(self.experience_buffer) >= Config.MIN_EXPERIENCES_FOR_DREAM:
+                self._log_to_ui("[DREAM] Starting all 3 cycles...")
+                await self.dream_cycle_1_surface()
+                await asyncio.sleep(30)
+                await self.dream_cycle_2_deep()
+                await asyncio.sleep(30)
+                await self.dream_cycle_3_creative()
+
+        # Scene creation (every 5 minutes OR after dreams)
+        if (now - self.last_scene).total_seconds() > 300 or (now - self.last_dream).total_seconds() < 60:
+            if len(self.experience_buffer) >= 5:
+                self._log_to_ui("[SCENE] Creating narrative memory...")
+                await self.create_scene()
+
+    def _log_to_ui(self, message: str):
+        self.log_queue.put({
+            "timestamp": datetime.now().isoformat(),
+            "message": message
+        })
+        logger.info(message)
+
+    # ========================================================================
+    # INTERACTION - WITH CHROMADB & BETTER AGENT TRIGGERS
+    # ========================================================================
+
+    async def interact(self, user_input: str) -> Tuple[str, str]:
+        """Enhanced interaction - NOW USES CHROMADB & BETTER AGENT"""
+        self.current_phase = Phase.INTERACTION
+        self.interaction_count += 1
+        self._log_to_ui(f"[USER] {user_input[:80]}")
+
+        # Store experience
+        experience = Experience(
+            timestamp=datetime.now(),
+            content=user_input,
+            context={"phase": "interaction"},
+            importance=0.7
+        )
+        self.experience_buffer.append(experience)
+
+        # Add to memory
+        self.memory.add_memory(user_input, importance=0.7)
+
+        # Add to conversation history
+        self.conversation_history.append({
+            "role": "user",
+            "content": clean_text(user_input, max_length=200),
+            "timestamp": datetime.now().isoformat()
+        })
+
+        # Extract important facts
+        if any(word in user_input.lower() for word in ["my name is", "i am", "i'm", "call me"]):
+            self.scratchpad.add_fact(f"User: {user_input}")
+            self.vector_memory.add_memory(user_input, {"type": "identity", "importance": 1.0})
+
+        # Build thinking log
+        thinking_log = []
+        thinking_log.append(f"[{datetime.now().strftime('%H:%M:%S')}] Processing...")
+
+        # Build context - NOW INCLUDES CHROMADB
+        system_context = self._build_full_context_with_chroma(user_input)
+        thinking_log.append(f"[{datetime.now().strftime('%H:%M:%S')}] Context built (with ChromaDB)")
+
+        # IMPROVED: Better agent trigger logic
+        use_agent = self._should_use_agent_improved(user_input)
+
+        if use_agent:
+            thinking_log.append(f"[{datetime.now().strftime('%H:%M:%S')}] [AGENT] Using ReAct agent...")
+            self._log_to_ui("[AGENT] ReAct agent activated")
+
+            # ReAct agent
+            response, thought_chain = await self.agent.run(user_input, system_context)
+
+            for item in thought_chain:
+                emoji = {"thought": "💭", "action": "🔧", "observation": "👁️"}.get(item['type'], "•")
+                thinking_log.append(f"[{datetime.now().strftime('%H:%M:%S')}] {emoji} {item['type'].title()}")
+        else:
+            # IMPROVED: Better internal dialogue prompt
+            internal_thought = await self._internal_dialogue_improved(user_input, system_context)
+            thinking_log.append(f"[{datetime.now().strftime('%H:%M:%S')}] 💭 {internal_thought[:60]}...")
+
+            # IMPROVED: Better response prompt
+            response = await self._generate_response_improved(user_input, internal_thought, system_context)
+
+        thinking_log.append(f"[{datetime.now().strftime('%H:%M:%S')}] [OK] Response ready")
+
+        # Store response
+        self.conversation_history.append({
+            "role": "assistant",
+            "content": clean_text(response, max_length=200),
+            "timestamp": datetime.now().isoformat()
+        })
+
+        # Add to memory
+        self.memory.add_memory(f"I said: {response}", importance=0.5)
+
+        # Self-reflection
+        if self.interaction_count % Config.SELF_REFLECTION_THRESHOLD == 0:
+            thinking_log.append(f"[{datetime.now().strftime('%H:%M:%S')}] 🔍 Self-reflecting...")
+            await self._self_reflect_on_response(user_input, response, system_context)
+
+        self._log_to_ui(f"[RESPONSE] {response[:80]}")
+
+        return response, "\n".join(thinking_log)
+
+    def _should_use_agent_improved(self, user_input: str) -> bool:
+        """IMPROVED: Better logic for when to use ReAct agent"""
+
+        # Explicit tool keywords
+        explicit_keywords = ["search", "find", "look up", "research", "wikipedia", "what is", "who is", "tell me about"]
+        if any(kw in user_input.lower() for kw in explicit_keywords):
+            logger.info("[AGENT] Triggered by explicit keyword")
+            return True
+
+        # Questions (if enabled)
+        if Config.USE_REACT_FOR_QUESTIONS and user_input.strip().endswith("?"):
+            logger.info("[AGENT] Triggered by question mark")
+            return True
+
+        # Long queries (might need research)
+        if len(user_input) > Config.MIN_QUERY_LENGTH_FOR_AGENT and " " in user_input:
+            # Check if it seems like a factual query
+            factual_words = ["explain", "describe", "how does", "why", "when", "where", "which"]
+            if any(word in user_input.lower() for word in factual_words):
+                logger.info("[AGENT] Triggered by factual query pattern")
+                return True
+
+        logger.info("[AGENT] Using direct response (no agent needed)")
+        return False
+
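The three trigger rules above (keyword, question mark, long factual query) can be sketched as a pure function. The two config values here are illustrative assumptions standing in for `Config.USE_REACT_FOR_QUESTIONS` and `Config.MIN_QUERY_LENGTH_FOR_AGENT`:

```python
# Assumed values for illustration; the real app reads these from Config.
USE_REACT_FOR_QUESTIONS = True
MIN_QUERY_LENGTH_FOR_AGENT = 40

def should_use_agent(user_input: str) -> bool:
    # Rule 1: explicit tool-ish keywords always route to the agent.
    explicit = ["search", "find", "look up", "research", "wikipedia",
                "what is", "who is", "tell me about"]
    if any(kw in user_input.lower() for kw in explicit):
        return True
    # Rule 2: any question, when question-routing is enabled.
    if USE_REACT_FOR_QUESTIONS and user_input.strip().endswith("?"):
        return True
    # Rule 3: long multi-word queries with a factual flavour.
    if len(user_input) > MIN_QUERY_LENGTH_FOR_AGENT and " " in user_input:
        factual = ["explain", "describe", "how does", "why", "when", "where", "which"]
        if any(w in user_input.lower() for w in factual):
            return True
    return False

print(should_use_agent("Tell me about quantum computing"))  # True: keyword hit
print(should_use_agent("hello there"))                      # False: plain chit-chat
```

Short greetings and statements fall through all three rules and get the cheaper direct-response path, which is the behaviour the commit message advertises.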
+    def _build_full_context_with_chroma(self, user_input: str) -> str:
+        """Build context - NOW INCLUDES CHROMADB SEARCH"""
+        context_parts = []
+
+        # Memory from tiers
+        memory_ctx = self.memory.get_memory_context()
+        context_parts.append(f"TIER MEMORIES:\n{memory_ctx}")
+
+        # CHROMADB SEARCH - NOW ACTUALLY USED!
+        chroma_ctx = self.vector_memory.get_context_for_query(user_input, max_results=3)
+        if chroma_ctx:
+            context_parts.append(f"\n{chroma_ctx}")
+            logger.info("[CHROMA] [OK] Added vector search results to context")
+
+        # Scratchpad
+        scratchpad_ctx = self.scratchpad.get_context()
+        context_parts.append(f"\nSCRATCHPAD:\n{scratchpad_ctx}")
+
+        # Conversation history
+        if self.conversation_history:
+            history_lines = []
+            for msg in list(self.conversation_history)[-4:]:
+                role = "User" if msg['role'] == 'user' else "You"
+                content = clean_text(msg['content'], max_length=80)
+                history_lines.append(f"{role}: {content}")
+
+            context_parts.append("\nRECENT CHAT:\n" + "\n".join(history_lines))
+
+        # Latest insight
+        if self.dreams:
+            latest = self.dreams[-1]
+            if latest.insights:
+                insight = clean_text(latest.insights[0], max_length=60)
+                context_parts.append(f"\nLATEST INSIGHT: {insight}")
+
+        result = "\n\n".join(context_parts)
+
+        # Limit total length
+        max_context = Config.MAX_MEMORY_CONTEXT_LENGTH + Config.MAX_SCRATCHPAD_CONTEXT_LENGTH + Config.MAX_CONVERSATION_CONTEXT_LENGTH
+        if len(result) > max_context:
+            result = result[:max_context]
+            result = result.rsplit('\n', 1)[0]
+
+        return result
+
+    async def _internal_dialogue_improved(self, user_input: str, context: str) -> str:
+        """IMPROVED: Better internal dialogue prompt"""
+        self.current_phase = Phase.INTERNAL_DIALOGUE
+
+        # MUCH BETTER PROMPT with specific guidance
+        dialogue_prompt = f"""Think internally before responding. Analyze:
+
+WHAT I KNOW (from context):
+{context[:300]}
+
+USER SAID: {user_input}
+
+INTERNAL ANALYSIS (think step-by-step):
+1. What relevant memories do I have?
+2. Is this a greeting, question, statement, or request?
+3. Can I answer from my memories alone?
+4. What's the best approach?
+
+Your internal thought (2 sentences max):"""
+
+        internal = await self.llm.generate(
+            dialogue_prompt,
+            max_tokens=100,
+            temperature=0.9,
+            system_context=None  # Don't duplicate context
+        )
+
+        dialogue_logger.info(f"[INTERNAL] {internal}")
+        return internal
+
+    async def _generate_response_improved(self, user_input: str, internal_thought: str, context: str) -> str:
+        """IMPROVED: Better response generation prompt"""
+
+        # MUCH BETTER PROMPT with clear instructions
+        response_prompt = f"""Generate your response to the user.
+
+USER: {user_input}
+
+YOUR INTERNAL THOUGHT: {internal_thought}
+
+WHAT YOU REMEMBER:
+{context[:400]}
+
+INSTRUCTIONS:
+1. Be natural and conversational
+2. Reference specific memories if relevant (e.g., "I remember you mentioned...")
+3. If you don't know something, say so honestly
+4. Keep response 2-3 sentences unless more detail is needed
+5. Match the user's tone (casual if casual, formal if formal)
+
+Your response:"""
+
+        response = await self.llm.generate(
+            response_prompt,
+            max_tokens=250,
+            temperature=0.8,
+            system_context=None  # Context already in prompt
+        )
+
+        return response
+
+    async def _self_reflect_on_response(self, user_input: str, response: str, context: str):
+        """Self-reflection"""
+        self.current_phase = Phase.SELF_REFLECTION
+
+        reflection_prompt = f"""Evaluate your response quality:
+
+User: {user_input}
+You: {response}
+
+Quick evaluation:
+1. Was it helpful?
+2. Did you use memories well?
+3. What could improve?
+
+Your critique (1-2 sentences):"""
+
+        critique = await self.llm.generate(
+            reflection_prompt,
+            max_tokens=100,
+            temperature=0.7,
+            system_context=None
+        )
+
+        self.scratchpad.add_note(f"Critique: {critique}")
+        dialogue_logger.info(f"[SELF-REFLECT] {critique}")
+
+    # ========================================================================
+    # REFLECTION
+    # ========================================================================
+
+    async def reflect(self) -> Dict[str, Any]:
+        """Daily reflection"""
+        self.current_phase = Phase.REFLECTION
+        self._log_to_ui("[REFLECTION] Processing...")
+
+        recent = [e for e in self.experience_buffer if e.timestamp > datetime.now() - timedelta(hours=12)]
+
+        if not recent:
+            return {"status": "no_experiences"}
+
+        reflection_prompt = f"""Reflect on today's {len(recent)} interactions:
+
+{self._format_experiences(recent)}
+
+Your memories: {self.memory.get_memory_context()}
+Your scratchpad: {self.scratchpad.get_context()}
+
+Key learnings? Important facts? (150 words)"""
+
+        reflection_content = await self.llm.generate(
+            reflection_prompt,
+            temperature=0.8,
+            max_tokens=300,
+            system_context=self._build_full_context_with_chroma("reflection")
+        )
+
+        # Extract important facts
+        if "christof" in reflection_content.lower():
+            self.scratchpad.add_fact("Developer: Christof")
+            self.vector_memory.add_memory("Developer name is Christof", {"type": "core_fact"})
+
+        self.last_reflection = datetime.now()
+        self._log_to_ui("[SUCCESS] Reflection done")
+
+        return {
+            "timestamp": datetime.now(),
+            "content": reflection_content,
+            "experience_count": len(recent)
+        }
+
+    def _format_experiences(self, experiences: List[Experience]) -> str:
+        formatted = []
+        for i, exp in enumerate(experiences[-8:], 1):
+            formatted.append(f"{i}. {clean_text(exp.content, 60)}")
+        return "\n".join(formatted)
+
+    # ========================================================================
+    # DREAM CYCLES
+    # ========================================================================
+
+    async def dream_cycle_1_surface(self) -> Dream:
+        """Dream 1: Surface patterns"""
+        self.current_phase = Phase.DREAMING
+        self._log_to_ui("[DREAM-1] Surface...")
+
+        memories = self.memory.get_recent_memories(hours=72)
+
+        dream_prompt = f"""DREAM - Surface Patterns:
+
+Recent memories:
+{self._format_memories(memories[:10])}
+
+Scratchpad: {self.scratchpad.get_context()}
+
+Find patterns. (200 words)"""
+
+        dream_content = await self.llm.generate(
+            dream_prompt,
+            temperature=1.2,
+            max_tokens=400,
+            system_context="Dream state. Non-linear."
+        )
+
+        dream = Dream(
+            cycle=1,
+            type="surface_patterns",
+            timestamp=datetime.now(),
+            content=dream_content,
+            patterns_found=["user patterns"],
+            insights=["Pattern found"]
+        )
+
+        self.dreams.append(dream)
+        self._log_to_ui("[SUCCESS] Dream 1 done")
+
+        return dream
+
+    async def dream_cycle_2_deep(self) -> Dream:
+        """Dream 2: Deep consolidation"""
+        self.current_phase = Phase.DREAMING
+        self._log_to_ui("[DREAM-2] Deep...")
+
+        all_memories = self.memory.get_recent_memories(hours=168)
+
+        dream_prompt = f"""DREAM - Deep:
+
+All recent:
+{self._format_memories(all_memories[:15])}
+
+Previous: {self.dreams[-1].content[:150]}
+
+Consolidate. Deeper patterns. (250 words)"""
+
+        dream_content = await self.llm.generate(
+            dream_prompt,
+            temperature=1.3,
+            max_tokens=500,
+            system_context="Deep dream."
+        )
+
+        dream = Dream(
+            cycle=2,
+            type="deep_consolidation",
+            timestamp=datetime.now(),
+            content=dream_content,
+            patterns_found=["themes"],
+            insights=["Deep pattern"]
+        )
+
+        self.dreams.append(dream)
+        self._log_to_ui("[SUCCESS] Dream 2 done")
+
+        return dream
+
+    async def dream_cycle_3_creative(self) -> Dream:
+        """Dream 3: Creative insights"""
+        self.current_phase = Phase.DREAMING
+        self._log_to_ui("[DREAM-3] Creative...")
+
+        dream_prompt = f"""DREAM - Creative:
+
+{len(self.dreams)} cycles. Core: {len(self.memory.core)}
+
+Surprising connections. Novel insights. (250 words)"""
+
+        dream_content = await self.llm.generate(
+            dream_prompt,
+            temperature=1.5,
+            max_tokens=500,
+            system_context="Max creativity."
+        )
+
+        dream = Dream(
+            cycle=3,
+            type="creative_insights",
+            timestamp=datetime.now(),
+            content=dream_content,
+            patterns_found=["creative"],
+            insights=["Breakthrough"]
+        )
+
+        self.dreams.append(dream)
+        self.last_dream = datetime.now()
+
+        self.notification_queue.put({
+            "type": "notification",
+            "message": "💭 Dreams complete! New insights discovered.",
+            "timestamp": datetime.now().isoformat()
+        })
+
+        self._log_to_ui("[SUCCESS] All 3 dreams done")
+
+        return dream
+
+    def _format_memories(self, memories: List[Memory]) -> str:
+        return "\n".join([
+            f"{i}. [{m.tier}] {clean_text(m.content, 50)} (x{m.mention_count})"
+            for i, m in enumerate(memories, 1)
+        ])
+
+    # ========================================================================
+    # SCENE CREATION - IMPROVED & ACTUALLY WORKS
+    # ========================================================================
+
+    async def create_scene(self) -> Optional[Scene]:
+        """
+        IMPROVED: Scene creation that actually works
+        """
+        self.current_phase = Phase.SCENE_CREATION
+        self._log_to_ui("[SCENE] Creating...")
+
+        # Get experiences
+        recent = self.experience_buffer[-10:] if len(self.experience_buffer) >= 10 else self.experience_buffer
+
+        if len(recent) < 3:  # Need at least 3 experiences
+            logger.info("[SCENE] Not enough experiences yet")
+            return None
+
+        # IMPROVED PROMPT with clear structure
+        scene_prompt = f"""Create a narrative scene (like a movie scene) from these experiences:
+
+EXPERIENCES:
+{self._format_experiences(recent)}
+
+FORMAT YOUR SCENE AS:
+Title: [A memorable, descriptive title]
+
+Setting: [Where and when this happened]
+
+Narrative: [Write a vivid story - 100-150 words. Use sensory details. Make it memorable like a movie scene.]
+
+Key Moments:
+- [First important moment]
+- [Second important moment]
+- [Third important moment]
+
+Significance: [Why does this scene matter? What does it represent?]
+
+Write vividly. Make me FEEL the scene."""
+
+        scene_content = await self.llm.generate(
+            scene_prompt,
+            temperature=1.1,
+            max_tokens=500,
+            system_context="You are creating a vivid narrative memory."
+        )
+
+        # IMPROVED parsing with fallbacks
+        title = self._extract_scene_title_improved(scene_content)
+        key_moments = self._extract_key_moments(scene_content)
+        significance = self._extract_significance(scene_content)
+
+        scene = Scene(
+            title=title,
+            timestamp=datetime.now(),
+            narrative=scene_content,
+            participants=["User", "AI"],
+            emotion_tags=self._extract_emotions(scene_content),
+            significance=significance,
+            key_moments=key_moments
+        )
+
+        self.scenes.append(scene)
+        self.last_scene = datetime.now()
+        self._log_to_ui(f"[SUCCESS] Scene: {title}")
+
+        # Add to vector memory for long-term
+        self.vector_memory.add_memory(
+            f"Scene: {title}. {significance}",
+            {"type": "scene", "title": title, "timestamp": datetime.now().isoformat()}
+        )
+
+        return scene
+
+    def _extract_scene_title_improved(self, content: str) -> str:
+        """IMPROVED: Better title extraction with fallbacks"""
+        # Try to find "Title:" line
+        lines = content.split("\n")
+        for line in lines:
+            if "title:" in line.lower():
+                title = line.split(":", 1)[1].strip()
+                return clean_text(title, max_length=60)
+
+        # Fallback: Use first line
+        first_line = lines[0].strip()
+        if first_line and len(first_line) < 100:
+            return clean_text(first_line, max_length=60)
+
+        # Final fallback
+        return f"Scene {len(self.scenes) + 1}: {datetime.now().strftime('%B %d')}"
+
+    def _extract_key_moments(self, content: str) -> List[str]:
+        """Extract key moments from scene"""
+        moments = []
+        lines = content.split("\n")
+        in_moments = False
+
+        for line in lines:
+            if "key moments:" in line.lower() or "key moment:" in line.lower():
+                in_moments = True
+                continue
+
+            if in_moments:
+                if line.strip().startswith("-") or line.strip().startswith("•"):
+                    moment = line.strip()[1:].strip()
+                    if moment:
+                        moments.append(clean_text(moment, 60))
+                elif line.strip() and not line.strip().startswith("["):
+                    # New section started
+                    break
+
+        # Fallback if no moments found
+        if not moments:
+            moments = ["User interaction", "AI response", "Connection made"]
+
+        return moments[:5]  # Max 5 moments
+
+    def _extract_significance(self, content: str) -> str:
+        """Extract significance from scene"""
+        lines = content.split("\n")
+        for i, line in enumerate(lines):
+            if "significance:" in line.lower():
+                sig = line.split(":", 1)[1].strip()
+                if sig:
+                    return clean_text(sig, 100)
+                # Check next line
+                if i + 1 < len(lines):
+                    return clean_text(lines[i + 1].strip(), 100)
+
+        return "A moment of connection and understanding"
+
+    def _extract_emotions(self, content: str) -> List[str]:
+        """Extract emotion tags from content"""
+        emotion_words = {
+            "curious", "engaged", "thoughtful", "excited", "focused",
+            "calm", "energetic", "contemplative", "warm", "professional"
+        }
+
+        content_lower = content.lower()
+        found_emotions = [emotion for emotion in emotion_words if emotion in content_lower]
+
+        if not found_emotions:
+            found_emotions = ["neutral", "engaged"]
+
+        return found_emotions[:3]
+
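The scene-parsing helpers above follow simple line-scanning rules; this standalone sketch reimplements the title and key-moment extraction on a fabricated scene string (the sample text is illustrative, not real model output, and `clean_text` is omitted for brevity):

```python
def extract_title(content: str) -> str:
    # Mirrors _extract_scene_title_improved: prefer a "Title:" line,
    # fall back to the first line of the scene.
    lines = content.split("\n")
    for line in lines:
        if "title:" in line.lower():
            return line.split(":", 1)[1].strip()
    return lines[0].strip()

def extract_key_moments(content: str) -> list:
    # Mirrors _extract_key_moments: collect bullets under "Key Moments:"
    # and stop at the next non-bullet section.
    moments, in_moments = [], False
    for line in content.split("\n"):
        if "key moments:" in line.lower():
            in_moments = True
            continue
        if in_moments:
            stripped = line.strip()
            if stripped.startswith(("-", "•")):
                moments.append(stripped[1:].strip())
            elif stripped:
                break
    return moments[:5]

sample = ("Title: First Contact\n\nNarrative: ...\n\n"
          "Key Moments:\n- Greeting\n- Name shared\n\nSignificance: trust")
print(extract_title(sample))        # → First Contact
print(extract_key_moments(sample))  # → ['Greeting', 'Name shared']
```

Because every extractor has a fallback, a scene object is always built even when the model ignores the requested format — the worst case is a generic title and placeholder moments, not a crash.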
+    # ========================================================================
+    # STATUS
+    # ========================================================================
+
+    def get_status(self) -> Dict[str, Any]:
+        return {
+            "phase": self.current_phase.value,
+            "memory": self.memory.get_summary(),
+            "vector_memory_available": self.vector_memory.collection is not None,
+            "experiences": len(self.experience_buffer),
+            "dreams": len(self.dreams),
+            "scenes": len(self.scenes),
+            "conversations": len(self.conversation_history) // 2,
+            "scratchpad_notes": len(self.scratchpad.working_notes),
+            "scratchpad_facts": len(self.scratchpad.important_facts),
+            "interaction_count": self.interaction_count
+        }
+
+    def get_memory_details(self) -> str:
+        return self.memory.get_memory_context(max_items=20)
+
+    def get_scratchpad_details(self) -> str:
+        return self.scratchpad.get_context()
+
+    def get_latest_dream(self) -> str:
+        if not self.dreams:
+            return "No dreams yet."
+
+        latest = self.dreams[-1]
+        return f"""🌙 Dream Cycle {latest.cycle} ({latest.type})
+{latest.timestamp.strftime('%Y-%m-%d %H:%M')}
+
+{latest.content}
+
+Patterns: {', '.join(latest.patterns_found)}
+Insights: {', '.join(latest.insights)}"""
+
+    def get_latest_scene(self) -> str:
+        if not self.scenes:
+            return "No scenes yet. Scenes are created automatically every 5 minutes or after dreaming."
+
+        latest = self.scenes[-1]
+        return f"""🎬 {latest.title}
+{latest.timestamp.strftime('%Y-%m-%d %H:%M')}
+
+{latest.narrative}
+
+Key Moments:
+{chr(10).join([f"  • {moment}" for moment in latest.key_moments])}
+
+Significance: {latest.significance}
+
+Emotions: {', '.join(latest.emotion_tags)}"""
+
+    def get_conversation_history(self) -> str:
+        if not self.conversation_history:
+            return "No conversation history."
+
+        formatted = []
+        for msg in self.conversation_history:
+            role = "User" if msg["role"] == "user" else "AI"
+            formatted.append(f"[{msg['timestamp']}] {role}: {msg['content']}")
+
+        return "\n".join(formatted)
+
+# ============================================================================
+# GRADIO INTERFACE
+# ============================================================================
+
+def create_gradio_interface():
+    """Create interface"""
+
+    notification_queue = queue.Queue()
+    log_queue = queue.Queue()
+
+    consciousness = ConsciousnessLoop(notification_queue, log_queue)
+    consciousness.start_background_loop()
+
+    log_history = []
+
+    async def chat(message, history):
+        response, thinking = await consciousness.interact(message)
+        return response, thinking
+
+    def get_logs():
+        while not log_queue.empty():
+            try:
+                log_history.append(log_queue.get_nowait())
+            except queue.Empty:
+                break
+
+        formatted = "\n".join([f"[{log['timestamp']}] {log['message']}" for log in log_history[-50:]])
+        return formatted
+
+    def get_notifications():
+        notifications = []
+        while not notification_queue.empty():
+            try:
+                notifications.append(notification_queue.get_nowait())
+            except queue.Empty:
+                break
+
+        if notifications:
+            return "\n".join([f"🔔 {n['message']}" for n in notifications[-5:]])
+        return "No notifications"
+
+    with gr.Blocks(title="Consciousness v4.0") as app:
+
+        gr.Markdown("""
+        # [BRAIN] Consciousness Loop v4.0 - EVERYTHING WORKING
+
+        **What Actually Works Now:**
+        - [OK] ChromaDB used in context (vector search)
+        - [OK] ReAct agent with better triggers
+        - [OK] Tools actually called
+        - [OK] Massively improved prompts
+        - [OK] Scenes that actually work
+
+        Try: "Tell me about quantum computing" or "Who am I?" to see tools in action!
+        """)
+
+        with gr.Tab("💬 Chat"):
+            with gr.Row():
+                with gr.Column(scale=2):
+                    chatbot = gr.Chatbot(label="Conversation", height=500)
+                    msg = gr.Textbox(label="Message", placeholder="Try: 'What is quantum computing?' or 'Who am I?'", lines=2)
+                    with gr.Row():
+                        send_btn = gr.Button("Send", variant="primary")
+                        clear_btn = gr.Button("Clear")
+
+                with gr.Column(scale=1):
+                    gr.Markdown("### [BRAIN] AI Process")
+                    thinking_box = gr.Textbox(label="", lines=20, interactive=False, show_label=False)
+
+            async def respond(message, history):
+                if not message:
+                    return history, ""
+
+                history.append({"role": "user", "content": message})
+                response, thinking = await chat(message, history)
+                history.append({"role": "assistant", "content": response})
+
+                return history, thinking
+
+            msg.submit(respond, [msg, chatbot], [chatbot, thinking_box])
+            send_btn.click(respond, [msg, chatbot], [chatbot, thinking_box])
+            clear_btn.click(lambda: ([], ""), outputs=[chatbot, thinking_box])
+
+        with gr.Tab("[BRAIN] Memory"):
+            with gr.Row():
+                with gr.Column():
+                    gr.Markdown("### πŸ’Ύ Memory")
+                    memory_display = gr.Textbox(label="", lines=15, interactive=False)
+                    refresh_memory = gr.Button("πŸ”„ Refresh")
+                    refresh_memory.click(lambda: consciousness.get_memory_details(), outputs=memory_display)
+
+                with gr.Column():
+                    gr.Markdown("### πŸ“ Scratchpad")
+                    scratchpad_display = gr.Textbox(label="", lines=15, interactive=False)
+                    refresh_scratchpad = gr.Button("πŸ”„ Refresh")
+                    refresh_scratchpad.click(lambda: consciousness.get_scratchpad_details(), outputs=scratchpad_display)
+
+        with gr.Tab("πŸ’­ History"):
+            history_display = gr.Textbox(label="Log", lines=25, interactive=False)
+            refresh_history = gr.Button("πŸ”„ Refresh")
+            refresh_history.click(lambda: consciousness.get_conversation_history(), outputs=history_display)
+
+        with gr.Tab("πŸŒ™ Dreams"):
+            dream_display = gr.Textbox(label="Dream", lines=20, interactive=False)
+            with gr.Row():
+                refresh_dream = gr.Button("πŸ”„ Refresh")
+                trigger_dream = gr.Button("πŸŒ™ Trigger")
+            # Output component must live in the layout, not be created inline in .click()
+            dream_status = gr.Textbox(label="Status", interactive=False)
+
+            refresh_dream.click(lambda: consciousness.get_latest_dream(), outputs=dream_display)
+
+            async def trigger_dreams():
+                await consciousness.dream_cycle_1_surface()
+                await asyncio.sleep(2)
+                await consciousness.dream_cycle_2_deep()
+                await asyncio.sleep(2)
+                await consciousness.dream_cycle_3_creative()
+                return "Done!"
+
+            trigger_dream.click(trigger_dreams, outputs=dream_status)
+
+        with gr.Tab("🎬 Scenes"):
+            gr.Markdown("### 🎬 Narrative Memories")
+            scene_display = gr.Textbox(label="Scene", lines=20, interactive=False)
+            with gr.Row():
+                refresh_scene = gr.Button("πŸ”„ Refresh")
+                create_scene_btn = gr.Button("🎬 Create")
+            # Output component must live in the layout, not be created inline in .click()
+            scene_status = gr.Textbox(label="Result", interactive=False)
+
+            refresh_scene.click(lambda: consciousness.get_latest_scene(), outputs=scene_display)
+
+            async def trigger_scene():
+                scene = await consciousness.create_scene()
+                if scene:
+                    return f"[OK] Created: {scene.title}"
+                return "❌ Need more experiences"
+
+            create_scene_btn.click(trigger_scene, outputs=scene_status)
+
+        with gr.Tab("πŸ“Š Monitor"):
+            with gr.Row():
+                with gr.Column():
+                    gr.Markdown("### πŸ“‹ Logs")
+                    logs_box = gr.Textbox(label="", lines=20, interactive=False)
+                    refresh_logs = gr.Button("πŸ”„ Refresh")
+                    refresh_logs.click(get_logs, outputs=logs_box)
+
+                with gr.Column():
+                    gr.Markdown("### πŸ”” Notifications")
+                    notif_box = gr.Textbox(label="", lines=10, interactive=False)
+                    refresh_notif = gr.Button("πŸ”„ Refresh")
+                    refresh_notif.click(get_notifications, outputs=notif_box)
+
+            gr.Markdown("### πŸ“ˆ Status")
+            status_json = gr.JSON(label="")
+            refresh_status = gr.Button("πŸ”„ Refresh")
+            refresh_status.click(lambda: consciousness.get_status(), outputs=status_json)
+
+        with gr.Tab("ℹ️ Info"):
+            gr.Markdown(f"""
+            ## v4.0 - Everything Actually Working
+
+            ### [OK] What's Fixed:
+
+            1. **ChromaDB Now Used**: Vector search results are included in the context
+            2. **Better ReAct Triggers**: Questions and factual queries now route through the agent
+            3. **Tools Actually Called**: Wikipedia lookup and memory search work
+            4. **Vastly Improved Prompts**: Clear instructions with examples
+            5. **Scenes Work**: Proper parsing, fallbacks, validation
+
+            ### Test Commands:
+
+            - "What is quantum computing?" β†’ triggers the Wikipedia tool
+            - "Who am I?" β†’ triggers memory search
+            - "Remember this: I love pizza" β†’ uses the scratchpad tool
+            - Any question β†’ may trigger the ReAct agent
+
+            ### Model: `{Config.MODEL_NAME}`
+            """)
+
+    return app
+
+# ============================================================================
+# MAIN
+# ============================================================================
+
+if __name__ == "__main__":
+    print("=" * 80)
+    print("[BRAIN] CONSCIOUSNESS LOOP v4.0 - EVERYTHING WORKING")
+    print("=" * 80)
+    print("\n[OK] What's New:")
+    print(" β€’ ChromaDB actually used in context")
+    print(" β€’ ReAct agent with better triggers")
+    print(" β€’ Tools actually called")
+    print(" β€’ Prompts massively improved")
+    print(" β€’ Scenes that work properly")
+    print("\n[LAUNCH] Loading...")
+    print("=" * 80)
+
+    app = create_gradio_interface()
+    app.launch(
+        server_name="0.0.0.0",
+        server_port=7860,
+        share=False,
+        show_error=True
+    )
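
The `get_notifications` helper in this commit uses a drain-then-truncate pattern: pull everything currently queued without blocking, then keep only the most recent entries. A minimal standalone sketch of that pattern (the `drain` helper name is illustrative, stdlib `queue` only):

```python
import queue

def drain(q: queue.Queue, keep: int = 5) -> list:
    """Pull all currently queued items without blocking; keep the last `keep`."""
    items = []
    while True:
        try:
            items.append(q.get_nowait())  # raises queue.Empty once drained
        except queue.Empty:
            break
    return items[-keep:]

if __name__ == "__main__":
    q = queue.Queue()
    for i in range(8):
        q.put(i)
    print(drain(q))   # β†’ [3, 4, 5, 6, 7]
    print(q.empty())  # β†’ True
```

Checking `queue.Empty` inside the loop (rather than relying on `q.empty()` alone) keeps the drain safe even if a producer thread is adding items concurrently.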