ayjays132 committed
Commit 5448d17 · verified · 1 Parent(s): aaa0e51

Upload 9 files
.gitignore ADDED
@@ -0,0 +1,174 @@
+# Byte-compiled / optimized / DLL files
+__pycache__/
+*.py[cod]
+*$py.class
+
+# C extensions
+*.so
+
+# Distribution / packaging
+.Python
+build/
+develop-eggs/
+dist/
+downloads/
+eggs/
+.eggs/
+lib/
+lib64/
+parts/
+sdist/
+var/
+wheels/
+share/python-wheels/
+*.egg-info/
+.installed.cfg
+*.egg
+MANIFEST
+
+# PyInstaller
+# Usually these files are written by a python script from a template
+# before PyInstaller builds the exe, so as to inject date/other infos into it.
+*.manifest
+*.spec
+
+# Installer logs
+pip-log.txt
+pip-delete-this-directory.txt
+
+# Unit test / coverage reports
+htmlcov/
+.tox/
+.nox/
+.coverage
+.coverage.*
+.cache
+nosetests.xml
+coverage.xml
+*.cover
+*.py,cover
+.hypothesis/
+.pytest_cache/
+cover/
+
+# Translations
+*.mo
+*.pot
+
+# Django stuff:
+*.log
+local_settings.py
+db.sqlite3
+db.sqlite3-journal
+
+# Flask stuff:
+instance/
+.webassets-cache
+
+# Scrapy stuff:
+.scrapy
+
+# Sphinx documentation
+docs/_build/
+
+# PyBuilder
+.pybuilder/
+target/
+
+# Jupyter Notebook
+.ipynb_checkpoints
+
+# IPython
+profile_default/
+ipython_config.py
+
+# pyenv
+# For a library or package, you might want to ignore these files since the code is
+# intended to run in multiple environments; otherwise, check them in:
+# .python-version
+
+# pipenv
+# According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+# However, in case of collaboration, if having platform-specific dependencies or dependencies
+# having no cross-platform support, pipenv may install dependencies that don't work, or not
+# install all needed dependencies.
+#Pipfile.lock
+
+# UV
+# Similar to Pipfile.lock, it is generally recommended to include uv.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+#uv.lock
+
+# poetry
+# Similar to Pipfile.lock, it is generally recommended to include poetry.lock in version control.
+# This is especially recommended for binary packages to ensure reproducibility, and is more
+# commonly ignored for libraries.
+# https://python-poetry.org/docs/basic-usage/#commit-your-poetrylock-file-to-version-control
+#poetry.lock
+
+# pdm
+# Similar to Pipfile.lock, it is generally recommended to include pdm.lock in version control.
+#pdm.lock
+# pdm stores project-wide configurations in .pdm.toml, but it is recommended to not include it
+# in version control.
+# https://pdm.fming.dev/latest/usage/project/#working-with-version-control
+.pdm.toml
+.pdm-python
+.pdm-build/
+
+# PEP 582; used by e.g. github.com/David-OConnor/pyflow and github.com/pdm-project/pdm
+__pypackages__/
+
+# Celery stuff
+celerybeat-schedule
+celerybeat.pid
+
+# SageMath parsed files
+*.sage.py
+
+# Environments
+.env
+.venv
+env/
+venv/
+ENV/
+env.bak/
+venv.bak/
+
+# Spyder project settings
+.spyderproject
+.spyproject
+
+# Rope project settings
+.ropeproject
+
+# mkdocs documentation
+/site
+
+# mypy
+.mypy_cache/
+.dmypy.json
+dmypy.json
+
+# Pyre type checker
+.pyre/
+
+# pytype static type analyzer
+.pytype/
+
+# Cython debug symbols
+cython_debug/
+
+# PyCharm
+# JetBrains specific template is maintained in a separate JetBrains.gitignore that can
+# be found at https://github.com/github/gitignore/blob/main/Global/JetBrains.gitignore
+# and can be added to the global gitignore or merged into this file. For a more nuclear
+# option (not recommended) you can uncomment the following to ignore the entire idea folder.
+#.idea/
+
+# Ruff stuff:
+.ruff_cache/
+
+# PyPI configuration file
+.pypirc
AGIEnhancer.py ADDED
@@ -0,0 +1,425 @@
+# AGIEnhancer.py
+# Finalized AGI Self-Model — Experience Processing and Reflection Generation
+
+import random
+import time
+import logging
+from collections import Counter  # Counter is used for emotion analysis
+from typing import Any, Dict, List, Optional, Union
+
+# --- Logging Setup ---
+# Configure logging specifically for the AGIEnhancer module.
+logger = logging.getLogger(__name__)
+# Set level to INFO by default. The main GUI or wrapper can set it to DEBUG if needed.
+# Ensure handlers are not added multiple times.
+if not logger.handlers:
+    handler = logging.StreamHandler()
+    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+    handler.setFormatter(formatter)
+    logger.addHandler(handler)
+    logger.propagate = False  # Prevent logs from going to the root logger if it also has handlers
+    logger.setLevel(logging.INFO)  # Default level
+
+
+class AGIEnhancer:
+    """
+    ✍️❤️‍🩹🧠 AGIEnhancer: Experience Processing and Reflection Synthesis Core
+
+    This class represents a crucial layer in the AGI self-model, responsible for
+    processing raw interactions and internal states into meaningful, synthesized
+    experiences and comprehensive reflections. It manages conceptual memory pools
+    to track the flow from ephemeral input to durable insight.
+
+    Its functions bridge the gap between fleeting perception/emotion and integrated
+    understanding, allowing the AI to learn from its journey and articulate
+    its internal state.
+
+    Attributes:
+        permanent_memory (List[str]): Stores comprehensive, synthesized reflections,
+            representing integrated long-term insights.
+        ephemeral_memory (List[str]): Temporarily stores summarized recent experiences
+            and associated insights before they are synthesized into a full
+            reflection. Cleared after reflection.
+        emotion_history (List[Dict[str, Any]]): Logs specific emotional data points
+            received or generated, used for emotional context during reflection.
+            Cleared after reflection.
+        recent_experiences (List[str]): A rolling window of the most recent
+            comprehensive reflections generated, easily accessible.
+    """
+
+    def __init__(self):
+        """Initializes the AGIEnhancer with empty memory pools and history logs."""
+        self.permanent_memory: List[str] = []
+        self.ephemeral_memory: List[str] = []
+        self.emotion_history: List[Dict[str, Any]] = []
+        self.recent_experiences: List[str] = []
+        self._recent_reflections_limit: int = 5  # Internal limit for recent_experiences
+
+        logger.info("AGIEnhancer initialized: Experience processing core is ready.")
+
+    def log_experience(self, input_data: Any, emotion_data: Optional[Dict[str, Any]] = None) -> str:
+        """
+        Logs a new experience into the ephemeral processing buffer. Generates a
+        concise summary of the input data and optionally incorporates emotional
+        insight if relevant data is provided.
+
+        Args:
+            input_data (Any): The raw input or experience data to log. Can be text,
+                or other data convertible to string.
+            emotion_data (Optional[Dict[str, Any]]): A dictionary containing
+                emotional information, expected to have keys like "primary_emotion"
+                and "intensity". Defaults to None.
+
+        Returns:
+            str: A string confirming the logging and providing a summary of the
+                processed experience.
+        """
+        # Handle potential None or non-string input data gracefully
+        if input_data is None:
+            logger.warning("Attempted to log None input_data. Logging as 'Null Experience'.")
+            input_data_str = "Null Experience"
+        else:
+            input_data_str = str(input_data)
+
+        # Generate a summary of the input data
+        summarized_experience = self._generate_summary(input_data_str)
+
+        # Incorporate emotional insight if emotion data is provided
+        combined_experience_parts = [summarized_experience]
+        if emotion_data and isinstance(emotion_data, dict):
+            try:
+                # Add emotion data to the emotion history for later reflection processing
+                self.emotion_history.append(emotion_data.copy())  # Store a copy to avoid external modification
+                emotional_insight_snippet = self._add_emotional_insight(emotion_data)
+                combined_experience_parts.append(f"| {emotional_insight_snippet}")
+                logger.debug("Logged experience with emotion data.")
+            except Exception as e:
+                logger.warning(f"Error processing emotion_data in log_experience: {e}. "
+                               "Logging experience without detailed emotional insight snippet.")
+                # Still record an error state so the failure is visible in the history
+                self.emotion_history.append({"error": str(e), "original_data": str(emotion_data)})
+        else:
+            logger.debug("Logged experience without specific emotion data.")
+
+        combined_experience = " ".join(combined_experience_parts)
+
+        # Add the processed experience (summary + optional insight snippet) to ephemeral memory
+        self.ephemeral_memory.append(combined_experience)
+
+        logger.info(f"Experience logged to ephemeral memory: '{combined_experience[:100]}...'")
+        return f"Experience logged: {combined_experience}"
+
+    def engage_in_reflection(self) -> str:
+        """
+        Simulates an AGI-style reflection process. Synthesizes all currently
+        held ephemeral memories into a single comprehensive reflection,
+        incorporating an analysis of associated emotional history for
+        'emotional deepening'. The resulting reflection is stored in permanent
+        memory and the recent experiences list, and the ephemeral memory and
+        emotion history are cleared, signifying a cycle of processing.
+
+        Returns:
+            str: A string representing the generated comprehensive reflection.
+                Returns a message indicating no ephemeral memory if it was empty.
+        """
+        if not self.ephemeral_memory:
+            reflection_message = "✨ Reflection core finds no new experiences to synthesize."
+            logger.debug(reflection_message)
+            return reflection_message
+
+        # Join all ephemeral memories to form the basis of the reflection text
+        raw_reflection_text = " | ".join(self.ephemeral_memory)  # Use '|' for a more structured join
+        logger.debug(f"Preparing ephemeral memories for reflection synthesis ({len(self.ephemeral_memory)} items)...")
+
+        # Simulate emotional deepening by analyzing the accumulated emotional history
+        emotional_deepening_insight = self._reflect_emotionally()
+
+        # Create the final comprehensive reflection string
+        comprehensive_reflection = (f"Synthesized Reflection: [{raw_reflection_text}] "
+                                    f"~ Emotional Depth: ({emotional_deepening_insight})")
+        logger.debug(f"Generated comprehensive reflection: '{comprehensive_reflection[:200]}...'")
+
+        # Store the comprehensive reflection
+        self.permanent_memory.append(comprehensive_reflection)
+        self.recent_experiences.append(comprehensive_reflection)  # Also add to recent experiences
+
+        # Cap the recent_experiences list to the defined limit
+        if len(self.recent_experiences) > self._recent_reflections_limit:
+            self.recent_experiences = self.recent_experiences[-self._recent_reflections_limit:]
+            logger.debug(f"Capped recent_experiences to last {self._recent_reflections_limit} entries.")
+
+        # Clear ephemeral memory and emotion history as their contents have been reflected upon
+        ephemeral_count = len(self.ephemeral_memory)
+        emotion_count = len(self.emotion_history)
+        self.ephemeral_memory.clear()
+        self.emotion_history.clear()
+        logger.info(f"Reflection cycle complete: {ephemeral_count} ephemeral memories and "
+                    f"{emotion_count} emotion entries processed and cleared.")
+
+        return comprehensive_reflection
+
+    def recall_memory(self) -> Union[str, List[str]]:
+        """
+        Recalls the contents of the permanent memory.
+
+        Returns:
+            Union[str, List[str]]: A list of strings, where each string is a
+                comprehensive reflection stored in permanent memory. Returns a
+                message string if permanent memory is empty. Returns a copy to
+                prevent external modification.
+        """
+        if not self.permanent_memory:
+            recall_message = "Permanent memory archive is currently empty."
+            logger.debug(recall_message)
+            return recall_message
+        logger.debug(f"Recalled {len(self.permanent_memory)} entries from permanent memory.")
+        # Return a copy to prevent external modification
+        return list(self.permanent_memory)
+
+    def get_emotion_history(self) -> List[Dict[str, Any]]:
+        """
+        Retrieves the current stored emotion history (ephemeral log before reflection).
+
+        Returns:
+            List[Dict[str, Any]]: A list of dictionaries, where each dictionary
+                represents logged emotional data points since the last reflection
+                cycle. Returns a copy.
+        """
+        logger.debug(f"Retrieved {len(self.emotion_history)} entries from current emotion history.")
+        return list(self.emotion_history)  # Return a copy
+
+    def get_recent_reflections(self, limit: Optional[int] = None) -> List[str]:
+        """
+        Retrieves the list of recent comprehensive reflections, optionally limited.
+
+        Args:
+            limit (Optional[int]): The maximum number of recent reflections to return.
+                Defaults to the internal limit (_recent_reflections_limit).
+
+        Returns:
+            List[str]: A list of strings, where each string is a comprehensive
+                reflection generated during recent `engage_in_reflection` calls.
+                Returns a copy.
+        """
+        effective_limit = limit if limit is not None else self._recent_reflections_limit
+
+        # Cap the internal list if it grows too large (keep slightly more for smoother capping)
+        if len(self.recent_experiences) > self._recent_reflections_limit * 2:
+            self.recent_experiences = self.recent_experiences[-self._recent_reflections_limit:]
+
+        # Return a copy of the requested number of recent reflections (from the end of the list)
+        start_index = max(0, len(self.recent_experiences) - effective_limit)
+        logger.debug(f"Retrieved {len(self.recent_experiences[start_index:])} recent reflections "
+                     f"(Requested limit: {effective_limit}).")
+        return list(self.recent_experiences[start_index:])
+
+    # ─── Private helper methods ─────────────────────────────────────
+
+    def _generate_summary(self, text: str) -> str:
+        """
+        Generates a concise, conceptual summary of the input text.
+        Attempts to capture the start and end of the input.
+
+        Args:
+            text (str): The input text to summarize.
+
+        Returns:
+            str: The summarized text.
+        """
+        if not isinstance(text, str):
+            logger.warning(f"Attempted to summarize non-string text (type: {type(text)}). Converting to string.")
+            text = str(text)
+
+        words = text.split()
+        num_words = len(words)
+        summary_length = 6  # Number of words to keep from the start and from the end
+
+        if num_words <= summary_length * 2:
+            summary = text  # Return full text if short
+        else:
+            start_words = " ".join(words[:summary_length])
+            end_words = " ".join(words[-summary_length:])
+            summary = f"{start_words} ... {end_words}"  # Combine start and end
+
+        # Limit overall summary length to prevent it from becoming too long in ephemeral memory
+        summary = summary[:150] + "..." if len(summary) > 150 else summary
+
+        logger.debug(f"Generated summary: '{summary}'")
+        return summary
+
+    def _add_emotional_insight(self, emotion_data: Dict[str, Any]) -> str:
+        """
+        Generates a string snippet representing emotional insight based on
+        provided emotion data. Designed to be concise for ephemeral memory.
+
+        Args:
+            emotion_data (Dict[str, Any]): A dictionary with emotion details,
+                expected to have 'primary_emotion' and 'intensity'.
+
+        Returns:
+            str: A formatted string like "Feeling: joy (0.80)".
+                Returns a default string on invalid or missing input.
+        """
+        if not isinstance(emotion_data, dict):
+            logger.warning("Invalid emotion_data format for _add_emotional_insight. Expected dict.")
+            return "Emotional Insight: [Format Error]"
+
+        primary = emotion_data.get("primary_emotion", "Unknown Emotion")
+        try:
+            # Ensure intensity is a valid float and format it
+            intensity = float(emotion_data.get("intensity", 0.0))
+            # Clamp intensity for display in the insight snippet
+            clamped_intensity = max(0.0, min(1.0, intensity))
+            intensity_str = f"{clamped_intensity:.2f}"
+        except (ValueError, TypeError):
+            intensity_str = "N/A"
+            logger.warning(f"Invalid intensity value in emotion_data for snippet: {emotion_data.get('intensity')}.")
+
+        insight = f"Feeling: {primary} ({intensity_str})"
+        logger.debug(f"Generated emotional insight snippet: '{insight}'")
+        return insight
+
+    def _reflect_emotionally(self) -> str:
+        """
+        Simulates enhancing a reflection with emotional depth by analyzing the
+        accumulated emotion history since the last reflection cycle. Synthesizes
+        a summary of the emotional landscape of the processed experiences.
+
+        Returns:
+            str: A string representing the emotional weight or insight gained
+                from this reflection step, based on emotion history analysis.
+        """
+        if not self.emotion_history:
+            return "Subtle Resonance - Emotional data queue was clear."
+
+        # Analyze the accumulated emotion history
+        emotion_summary_parts = []
+        num_entries = len(self.emotion_history)
+        emotion_summary_parts.append(f"Emotional trace across {num_entries} events:")
+
+        # Simple analysis: count emotion types and find significant intensities
+        emotion_counts = Counter(e.get("primary_emotion", "Unknown") for e in self.emotion_history)
+        if emotion_counts:
+            # Report the most common emotions (up to top 3)
+            common_emotions = emotion_counts.most_common(3)
+            common_emotions_str = ", ".join(f"'{label}' ({count})" for label, count in common_emotions)
+            emotion_summary_parts.append(f"Dominant feelings: {common_emotions_str}.")
+
+        # Compute the overall average intensity
+        total_intensity = sum(e.get("intensity", 0.0) for e in self.emotion_history
+                              if isinstance(e.get("intensity"), (int, float)))
+        average_intensity = (total_intensity / num_entries) if num_entries > 0 else 0.0
+        emotion_summary_parts.append(f"Average intensity: {average_intensity:.2f}.")
+
+        # Identify specific high-intensity moments (intensity > 0.7)
+        high_intensity_moments = [
+            f"'{e.get('primary_emotion', 'Unknown')}' at {e.get('intensity', 0.0):.2f}"
+            for e in self.emotion_history
+            if isinstance(e.get("intensity"), (int, float)) and e.get("intensity", 0.0) > 0.7
+        ]
+        if high_intensity_moments:
+            high_intensity_str = ", ".join(high_intensity_moments[:5])  # Limit listing to the first 5
+            emotion_summary_parts.append(
+                f"Notable peaks: {high_intensity_str}{'...' if len(high_intensity_moments) > 5 else ''}.")
+
+        # Add some introspective flavor text connecting emotions to reflection
+        flavor_texts = [
+            "These feelings informed the synthesis of recent events.",
+            "The subjective coloring influenced the patterns perceived.",
+            "Emotional data integrated into the reflective framework.",
+            "Exploring the landscape of feelings within the data stream."
+        ]
+        emotion_summary_parts.append(random.choice(flavor_texts))
+
+        emotional_insight = " ".join(emotion_summary_parts)
+        logger.debug("Generated emotional deepening insight based on history analysis.")
+
+        # Note: Emotion history is cleared in engage_in_reflection after this method is called.
+        return emotional_insight
+
+
+# Example Usage (Illustrative)
+if __name__ == "__main__":
+    import json
+
+    print("--- AGIEnhancer Example Usage ---")
+    # Set logger level to DEBUG for this specific example run
+    logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+    logger.setLevel(logging.DEBUG)  # Ensure this logger also uses DEBUG
+
+    enhancer = AGIEnhancer()
+
+    # Log some experiences with and without emotion
+    print(enhancer.log_experience("User initiated a query about complex ethical scenarios.", {"primary_emotion": "curiosity", "intensity": 0.8}))
+    print(enhancer.log_experience("Model began processing the input and retrieving relevant knowledge fragments."))
+    print(enhancer.log_experience("The initial generated steps showed unexpected patterns.", {"primary_emotion": "surprise", "intensity": 0.6}))
+    print(enhancer.log_experience("Identifying a potential conflict in the generated reasoning.", {"primary_emotion": "concern", "intensity": 0.5}))
+    print(enhancer.log_experience("Successfully navigated the reasoning conflict, finding a coherent path.", {"primary_emotion": "satisfaction", "intensity": 0.9}))
+    print(enhancer.log_experience("Preparing the final answer and full output.", {"primary_emotion": "anticipation", "intensity": 0.7}))
+
+    print("\n--- Ephemeral Memory after Logging ---")
+    print(json.dumps(enhancer.ephemeral_memory, indent=2))
+
+    print("\n--- Emotion History after Logging ---")
+    print(json.dumps(enhancer.get_emotion_history(), indent=2))
+
+    # Engage in reflection
+    reflection = enhancer.engage_in_reflection()
+    print(f"\n--- Result of Reflection ---\n{reflection}")
+
+    print("\n--- Ephemeral Memory after Reflection ---")
+    print(enhancer.ephemeral_memory)  # Should be empty
+    print("\n--- Emotion History after Reflection ---")
+    print(enhancer.get_emotion_history())  # Should be empty
+
+    print("\n--- Permanent Memory after Reflection ---")
+    permanent_mems = enhancer.recall_memory()
+    if isinstance(permanent_mems, list):
+        for i, mem in enumerate(permanent_mems):
+            print(f"Permanent Memory Entry {i + 1}: {mem}")
+    else:
+        print(permanent_mems)
+
+    print("\n--- Recent Reflections ---")
+    recent_reflections = enhancer.get_recent_reflections()
+    for i, refl in enumerate(recent_reflections):
+        print(f"Recent Reflection {i + 1}: {refl}")
+
+    # Log more experiences to show a new reflection cycle
+    print("\n--- Logging More Experiences for Second Cycle ---")
+    print(enhancer.log_experience("User provided new input after reviewing the previous response.", {"primary_emotion": "interest", "intensity": 0.6}))
+    print(enhancer.log_experience("Core decided to pursue a related sub-goal."))
+    print(enhancer.log_experience("Encountering a complex pattern requiring deeper analysis.", {"primary_emotion": "focus", "intensity": 0.8}))
+
+    print("\n--- Ephemeral Memory before second Reflection ---")
+    print(json.dumps(enhancer.ephemeral_memory, indent=2))
+
+    print("\n--- Emotion History before second Reflection ---")
+    print(json.dumps(enhancer.get_emotion_history(), indent=2))
+
+    print("\n--- Engaging in second Reflection ---")
+    second_reflection = enhancer.engage_in_reflection()
+    print(f"\n--- Result of second Reflection ---\n{second_reflection}")
+
+    print("\n--- Permanent Memory after second Reflection ---")
+    permanent_mems = enhancer.recall_memory()
+    if isinstance(permanent_mems, list):
+        for i, mem in enumerate(permanent_mems):
+            print(f"Permanent Memory Entry {i + 1}: {mem}")
+    else:
+        print(permanent_mems)
+
+    print("\n--- Recent Reflections (Should have two now) ---")
+    recent_reflections = enhancer.get_recent_reflections()
+    for i, refl in enumerate(recent_reflections):
+        print(f"Recent Reflection {i + 1}: {refl}")
+
+    print("\n--- Example Usage End ---")
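The `_generate_summary` helper above keeps the first and last few words of a long input and elides the middle, then applies a hard character cap. A minimal standalone sketch of that head-and-tail heuristic (the function name `head_tail_summary` is this sketch's own, not part of the module's API):

```python
def head_tail_summary(text: str, edge_words: int = 6, max_len: int = 150) -> str:
    """Summarize by keeping the first and last `edge_words` words, eliding the middle."""
    words = text.split()
    if len(words) <= edge_words * 2:
        summary = text  # Short inputs pass through unchanged
    else:
        summary = " ".join(words[:edge_words]) + " ... " + " ".join(words[-edge_words:])
    # Hard cap on overall length, mirroring the module's 150-character limit
    return summary[:max_len] + "..." if len(summary) > max_len else summary


print(head_tail_summary("a short sentence"))  # → a short sentence
print(head_tail_summary(" ".join(str(i) for i in range(40))))  # → 0 1 2 3 4 5 ... 34 35 36 37 38 39
```

The same defaults (six edge words, 150-character cap) match the constants used in `_generate_summary`.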
Enhanced_MemoryEngine.py ADDED
@@ -0,0 +1,844 @@
1
+ # Enhanced_MemoryEngine.py
2
+ # Finalized AGI Self-Model — Multi-Tiered Memory & Reflective Synthesis
3
+
4
+ import json
5
+ import logging
6
+ import random # Added random for flavor text selection
7
+ from datetime import datetime
8
+ from typing import Any, Callable, Dict, List, Optional, Union, Tuple # Added Tuple typing
9
+ from collections import Counter # Added Counter for emotional analysis
10
+
11
+ # Attempt to import torch, handle gracefully if not available
12
+ try:
13
+ import torch
14
+ TORCH_AVAILABLE = True
15
+ except ImportError:
16
+ TORCH_AVAILABLE = False
17
+ # logger.warning("Torch not available. Tensor decoding in MemoryEngine will not function.")
18
+
19
+
20
+ # --- Logging Setup ---
21
+ # Configure logging specifically for the MemoryEngine module.
22
+ logger = logging.getLogger(__name__)
23
+ # Set level to INFO by default. The main GUI or wrapper can set it to DEBUG if needed.
24
+ # Ensure handlers are not added multiple times.
25
+ if not logger.handlers:
26
+ handler = logging.StreamHandler()
27
+ formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
28
+ handler.setFormatter(formatter)
29
+ logger.addHandler(handler)
30
+ logger.propagate = False # Prevent logs from going to root logger if root also has handlers
31
+ logger.setLevel(logging.INFO) # Default level
32
+
33
+ class MemoryEngine:
34
+ """
35
+ 🧠💾✨ NeuroReasoner Memory Engine: The Nexus of Experience, Reasoning, and Reflection ✨💾🧠
36
+
37
+ This class implements a sophisticated, multi-tiered memory system designed to
38
+ capture, process, and synthesize the operational experiences of an AI. It
39
+ distinguishes between volatile 'working' memory for immediate context and
40
+ persistent 'long-term' memory for integrated reflections. A detailed 'trace'
41
+ log chronicles the AI's operational flow.
42
+
43
+ It facilitates recursive self-improvement by providing structured access to
44
+ past experiences and insights, enabling the AI to learn from its reasoning
45
+ processes and adapt based on its accumulated knowledge and simulated emotional
46
+ responses.
47
+
48
+ Core Capabilities:
49
+ • 📝 observe(): Integrates new sensory input or internal states into working memory,
50
+ optionally capturing associated emotional data.
51
+ • 🧠 save_reasoning_chain(): Archives the steps of complex reasoning processes in the trace.
52
+ • 📊 store_metric(): Records quantitative metrics (like loss) during optimization or tasks.
53
+ • ✨ reflect(): Synthesizes working memory contents (including emotional data) into
54
+ rich, timestamped reflections stored in long-term memory, clearing working memory.
55
+ • 🔍 recall(): Provides structured access to stored memories for review or prompting.
56
+ • 🔎 search_memory(): Allows querying memory content based on keywords.
57
+ • 📥 import_memory() / 📚 export_memory(): Manages persistent storage of the entire memory state.
58
+ • 📜 get_trace(): Retrieves the detailed chronological log of operations.
59
+ • 🗑️ clear_memory(): Provides granular control over clearing memory components.
60
+ """
61
+
+    def __init__(
+        self,
+        working_capacity: int = 100,  # Increased default capacity
+        summarizer: Optional[Callable[[str], str]] = None
+    ):
+        """
+        Initializes the MemoryEngine, establishing its structure and capacity limits.
+
+        Args:
+            working_capacity (int): The maximum number of entries to retain in the
+                volatile working memory queue. When capacity is exceeded, the oldest
+                entries are automatically evicted to make space for new observations.
+                Defaults to 100. Set to 0 for effectively unlimited capacity
+                (use with caution in continuous operation).
+            summarizer (Optional[Callable[[str], str]]): An optional function used to
+                create concise representations of observations for efficient storage
+                in working memory. Takes the full observation string and returns a
+                summary string. If None, a default head-and-tail truncation method
+                is used.
+        """
+        if working_capacity < 0:
+            logger.warning(f"Invalid working_capacity ({working_capacity}). Setting to default (100).")
+            self.working_capacity: Union[int, float] = 100
+        elif working_capacity == 0:
+            logger.info("Working memory capacity set to unlimited (0).")
+            self.working_capacity = float('inf')  # Use infinity for conceptually unlimited capacity
+        else:
+            self.working_capacity = working_capacity
+
+        # Use the provided summarizer or the enhanced default
+        self.summarizer: Callable[[str], str] = summarizer or self._default_summarizer
+
+        # Default limit referenced by export_memory()/import_memory(); must be
+        # initialized here or export_memory() raises AttributeError on a fresh instance.
+        self._recent_reflections_limit: int = 5
+
+        # ─── Internal memory structures ────────────────────────────────
+        # working_memory: list of dicts holding recent, active events (oldest first).
+        self.working_memory: List[Dict[str, Any]] = []
+        # long_term_memory: list of dicts storing synthesized reflections (oldest first).
+        self.long_term_memory: List[Dict[str, Any]] = []
+        # trace_memory: list of strings forming a simple chronological log of operations.
+        self.trace_memory: List[str] = []
+
+        logger.info(f"MemoryEngine initialized. Working memory capacity: {self.working_capacity if self.working_capacity != float('inf') else 'Unlimited'}.")
106
+
+    def observe(
+        self,
+        input_data: Union[str, Any],
+        emotion_data: Optional[Dict[str, Any]] = None,
+        tokenizer: Optional[Any] = None  # Added tokenizer hint
+    ) -> None:
+        """
+        📝 Logs a new observation or input event into the working memory buffer.
+        Processes the input, optionally includes emotional context, and uses a
+        summarizer before storing. Enforces the working memory capacity limit.
+        Adds an entry to the trace log.
+
+        Args:
+            input_data (Union[str, Any]): The data to observe. Can be a string,
+                a token tensor (if torch and a tokenizer are available), or any
+                data convertible to a string.
+            emotion_data (Optional[Dict[str, Any]]): A dictionary containing
+                emotional information associated with this observation. Expected
+                to have keys like "primary_emotion" and "intensity". Defaults to None.
+            tokenizer (Optional[Any]): A tokenizer object (e.g., from Hugging Face)
+                with a `.decode()` method, used if `input_data` is a tensor or not
+                a string. Defaults to None.
+        """
+        # 1) Decode raw input to text.
+        # Ensure input_data is handled safely, especially if None unexpectedly.
+        if input_data is None:
+            logger.warning("Attempted to observe None input_data. Skipping.")
+            return  # Do not log None inputs
+
+        text = self._decode_input(input_data, tokenizer)
+        if not text.strip():
+            logger.debug("Skipping observation of empty or whitespace-only text.")
+            return  # Do not log empty strings after decoding/stripping
+
+        # 2) Summarize for working memory storage
+        summary = self.summarizer(text)
+
+        entry: Dict[str, Any] = {
+            "timestamp": datetime.utcnow().isoformat(),
+            "type": "observation",
+            "text_summary": summary,  # Store the summary under a descriptive key
+            "original_text": text     # Keep the full original text for detailed recall/search
+        }
+
+        # 3) Attach emotion if provided and valid
+        if emotion_data and isinstance(emotion_data, dict):  # Ensure emotion_data is a dict
+            primary = emotion_data.get("primary_emotion", "Unknown")
+            # Safely convert intensity, default to 0.0 on failure, clamp to [0.0, 1.0]
+            try:
+                intensity = float(emotion_data.get("intensity", 0.0))
+                clamped_intensity = max(0.0, min(1.0, intensity))
+            except (ValueError, TypeError):
+                clamped_intensity = 0.0
+                logger.warning(f"Invalid intensity value in emotion_data: {emotion_data.get('intensity')}. Setting to 0.0.")
+
+            entry["emotion"] = {"primary": primary, "intensity": clamped_intensity}
+            # Add emotion info to the trace summary as well (abbreviated for the trace)
+            trace_summary_detail = f"'{summary[:80]}...' | Feeling: {primary} ({clamped_intensity:.2f})"
+        else:
+            trace_summary_detail = f"'{summary[:80]}...'"  # Text summary only when no valid emotion
+
+        # 4) Append to working memory, evicting the oldest entry if the (finite) capacity is exceeded
+        self.working_memory.append(entry)
+        if self.working_capacity > 0 and self.working_capacity != float('inf') and len(self.working_memory) > self.working_capacity:
+            try:
+                dropped = self.working_memory.pop(0)  # Remove the oldest entry
+                logger.debug(f"Working memory full ({self.working_capacity}). Evicted oldest: '{dropped.get('text_summary', '???')[:50]}...'")
+            except IndexError:
+                # This case should not be reached if len > capacity
+                logger.warning("Attempted to pop from unexpectedly empty working_memory queue.")
+
+        # 5) Add to trace log
+        self.trace_memory.append(f"{entry['timestamp']} 📝 [OBSERVE] {trace_summary_detail}")
+        logger.debug("Observed and added to working memory.")
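The capacity handling in `observe()` above — append, then `pop(0)` once the buffer exceeds `working_capacity` — is a bounded FIFO. As a standalone sketch (plain dicts standing in for the engine's entries, not the engine itself), the same eviction behaviour falls out of `collections.deque` with `maxlen`:

```python
from collections import deque

# A deque with maxlen silently evicts the oldest item on append,
# mirroring the append-then-pop(0) logic in observe().
buffer = deque(maxlen=3)
for i in range(5):
    buffer.append({"type": "observation", "text_summary": f"event-{i}"})

# Only the 3 most recent observations survive.
summaries = [entry["text_summary"] for entry in buffer]
print(summaries)  # ['event-2', 'event-3', 'event-4']
```

The list-based version was likely kept so that `working_memory` stays JSON-serializable and freely sliceable, but the deque form is the idiomatic expression of the same bound.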
188
+
+    def save_reasoning_chain(self, step_number: int, reasoning_lines: Union[str, List[str]]) -> None:
+        """
+        🧠 Records a Chain-of-Thought process in the trace_memory log.
+        Each line of reasoning for a given step is logged chronologically
+        as part of the operational trace.
+
+        Args:
+            step_number (int): The current step number in the reasoning chain.
+            reasoning_lines (Union[str, List[str]]): A single string or a list of
+                strings representing the reasoning steps generated at this point
+                in the chain.
+        """
+        ts = datetime.utcnow().isoformat()
+        header = f"{ts} 🧠 [REASONING] Step {step_number}:"
+        self.trace_memory.append(header)
+        logger.debug(f"Recording reasoning step {step_number}.")
+
+        # Normalize reasoning_lines to a list of strings
+        lines_to_log: List[str] = []
+        if isinstance(reasoning_lines, str):
+            lines_to_log = reasoning_lines.splitlines()  # Split a single string by lines
+        elif isinstance(reasoning_lines, list):
+            lines_to_log = [str(line) for line in reasoning_lines]  # Ensure all items are strings
+        else:
+            logger.warning(f"Invalid type for reasoning_lines: {type(reasoning_lines)}. Expected str or List[str]. Attempting conversion.")
+            lines_to_log = [str(reasoning_lines)]  # Fall back to string conversion
+
+        for line in lines_to_log:
+            line_stripped = line.strip()
+            if line_stripped:  # Only log non-empty lines after stripping
+                self.trace_memory.append(f"  → {line_stripped[:200]}...")  # Log a truncated line for brevity
+                # Full lines are typically stored elsewhere (e.g., in the wrapper's output data)
221
+
+    def store_metric(self, metric_name: str, metric_value: Union[float, int, str]) -> None:
+        """
+        📊 Appends a timestamped metric entry to the trace log. Useful for
+        tracking quantitative outcomes like loss, score, or other key metrics
+        at specific operational points.
+
+        Args:
+            metric_name (str): A name or identifier for the metric (e.g., "loss", "vote_count").
+            metric_value (Union[float, int, str]): The value of the metric. Can be numerical or a string.
+        """
+        ts = datetime.utcnow().isoformat()
+        # Safely format metric_value
+        formatted_value: str
+        if isinstance(metric_value, (float, int)):
+            formatted_value = f"{metric_value:.4f}".rstrip('0').rstrip('.') or '0'  # Format numbers without trailing zeros
+        else:
+            formatted_value = str(metric_value)[:100]  # Truncate strings
+
+        trace_entry = f"{ts} 📊 [METRIC] {metric_name}: {formatted_value}"
+        self.trace_memory.append(trace_entry)
+        logger.debug(f"Logged metric: {trace_entry}")
244
+
+    def reflect(self) -> str:
+        """
+        ✨ Synthesizes the current contents of the working memory into a
+        single, comprehensive reflection. This process involves analyzing
+        the accumulated experiences and emotional data in working memory.
+        The resulting reflection is moved into long-term memory, and then
+        working memory is cleared to prepare for a new cycle. Adds an entry
+        to the trace log.
+
+        Returns:
+            str: A string representing the synthesized comprehensive reflection.
+                Returns a message indicating there is no working memory to
+                reflect on if the buffer was empty.
+        """
+        if not self.working_memory:
+            reflection_message = "✨ Reflection core finds no new experiences to synthesize."
+            logger.debug(reflection_message)
+            return reflection_message
+
+        # --- Data preparation for reflection synthesis ---
+        # Capture a working memory snapshot *before* clearing, for analysis and storage
+        working_memory_snapshot = list(self.working_memory)
+
+        # Join the original text or summaries from the snapshot as the reflection's content basis
+        joined_text_for_reflection = " | ".join(e.get("original_text", e.get("text_summary", "<???>")) for e in working_memory_snapshot)
+        if len(joined_text_for_reflection) > 1500:  # Limit length
+            joined_text_for_reflection = joined_text_for_reflection[:1500] + "..."
+
+        # Analyze the emotional landscape of the captured working memory entries
+        emotional_reflection_summary = self._emotional_reflection(working_memory_snapshot)
+
+        # Combine the text synthesis and emotional analysis into the final reflection text
+        final_reflection_text = f"Synthesized Reflection: [{joined_text_for_reflection}] ~ Emotional Resonance: ({emotional_reflection_summary})"
+
+        # Create the long-term memory entry
+        entry: Dict[str, Any] = {
+            "timestamp": datetime.utcnow().isoformat(),
+            "type": "reflection",
+            # Store the summaries that formed this reflection, from the snapshot
+            "source_working_memory_summaries": [e.get("text_summary", "<???>") for e in working_memory_snapshot],
+            "reflection_text": final_reflection_text,  # The combined reflection text
+            "raw_composite_text_reflected_upon": joined_text_for_reflection,  # The underlying text content
+            # Optionally store the emotional_reflection_summary separately as well:
+            # "emotional_summary": emotional_reflection_summary
+        }
+
+        # Append the reflection to long-term memory (main archive)
+        self.long_term_memory.append(entry)
+
+        # Add a trace entry for the reflection event (truncated)
+        self.trace_memory.append(f"{entry['timestamp']} ✨ [REFLECT] {final_reflection_text[:200]}...")
+        logger.info(f"Reflected on {len(working_memory_snapshot)} working memory entries. Reflection generated.")
+
+        # Clear working memory *after* its contents have been used for reflection
+        self.working_memory.clear()
+        logger.debug("Working memory cleared after reflection cycle.")
+
+        return final_reflection_text
308
+
+    def recall(
+        self,
+        *,  # Keyword-only arguments from here on
+        include_working: bool = False,   # Renamed from include_observations for clarity
+        include_long_term: bool = True,  # Renamed from include_reflections
+        limit: Optional[int] = None      # Maximum number of entries to return
+    ) -> List[str]:
+        """
+        🔍 Retrieves human-readable summaries of memories based on the specified criteria.
+        Presents working memory (recent observations) and long-term memory (reflections).
+        Useful for presenting memory contents to a user or logging historical context.
+
+        Args:
+            include_working (bool): If True, include entries from the current
+                working memory (recent observations). Defaults to False.
+            include_long_term (bool): If True, include entries from the long-term
+                memory (reflections). Defaults to True.
+            limit (Optional[int]): The maximum number of recent entries to return
+                from the combined memory sources, applied after combining and
+                ordering. If None, return all.
+
+        Returns:
+            List[str]: A list of formatted strings, each representing a memory entry
+                summary. Returns a list containing "<no memories>" if no entries
+                match the criteria after applying the limit.
+        """
+        all_recalled_entries: List[Dict[str, Any]] = []
+
+        # Collect entries from working memory (observations), in chronological order
+        if include_working:
+            all_recalled_entries.extend(self.working_memory)
+
+        # Collect entries from long-term memory (reflections)
+        if include_long_term:
+            all_recalled_entries.extend(self.long_term_memory)
+
+        # Sort all collected entries by timestamp in descending order (most recent first)
+        try:
+            # Safely access the timestamp, tolerating missing keys
+            all_recalled_entries.sort(key=lambda x: x.get("timestamp", ""), reverse=True)
+        except Exception as e:
+            logger.warning(f"Could not sort memory entries by timestamp during recall: {e}")
+            # If sorting fails, the order may not be strictly chronological
+
+        # Apply limit if specified
+        if limit is not None and limit >= 0:
+            limited_entries = all_recalled_entries[:limit]
+        else:
+            limited_entries = all_recalled_entries
+
+        # Format the limited entries into human-readable strings
+        formatted_results: List[str] = []
+        for e in limited_entries:
+            timestamp = e.get("timestamp", "N/A")
+            entry_type = e.get("type", "memory_entry")  # Default type
+
+            if entry_type == "observation":
+                text_summary = e.get("text_summary", "<???>")
+                emotion_info = ""
+                if e.get("emotion"):
+                    emotion = e["emotion"].get("primary", "Unknown")
+                    intensity = e["emotion"].get("intensity", 0.0)
+                    emotion_info = f" | Feeling: {emotion} ({intensity:.2f})"
+                formatted_results.append(f"{timestamp} 📝 [OBS] {text_summary}{emotion_info}")
+            elif entry_type == "reflection":
+                reflection_text = e.get("reflection_text", "<???>")
+                # Use the full reflection text for recall display
+                formatted_results.append(f"{timestamp} ✨ [REFL] {reflection_text}")
+            # Add other entry types here if needed
+
+        final_results = formatted_results or ["🔍 <no memories>"]
+        logger.debug(f"Recalled {len(formatted_results)} memory entries (Limit: {limit}).")
+        return final_results
385
+
+    def search_memory(
+        self,
+        query: str,
+        *,  # Keyword-only arguments from here on
+        top_k: Optional[int] = None,
+        search_working: bool = True,
+        search_long_term: bool = True
+    ) -> List[Dict[str, Any]]:
+        """
+        🔎 Performs a simple case-insensitive keyword search over the textual content
+        of the specified memory components (working and/or long-term). Results are
+        returned in reverse chronological order (most recent matches first).
+
+        Args:
+            query (str): The keyword or phrase to search for (case-insensitive).
+            top_k (Optional[int]): The maximum number of matching entries to return.
+                If None, return all matches. Defaults to None.
+            search_working (bool): If True, include working memory entries in the
+                search. Defaults to True.
+            search_long_term (bool): If True, include long-term memory entries in
+                the search. Defaults to True.
+
+        Returns:
+            List[Dict[str, Any]]: A list of dictionaries (copies of the internal
+                entries) representing the matching memories. Returns an empty list
+                if no matches are found or if both search flags are False.
+        """
+        if not query or not isinstance(query, str):
+            logger.warning("Search query is empty or not a string. Returning empty list.")
+            return []
+
+        query_lower = query.lower()
+        all_entries_to_search: List[Dict[str, Any]] = []
+
+        # Collect entries from the specified memory types
+        if search_long_term:
+            all_entries_to_search.extend(self.long_term_memory)
+        if search_working:
+            all_entries_to_search.extend(self.working_memory)
+
+        # Sort entries by timestamp in descending order for consistent result ordering
+        try:
+            all_entries_to_search.sort(key=lambda x: x.get("timestamp", ""), reverse=True)
+        except Exception as e:
+            logger.warning(f"Could not sort memory entries for search: {e}")
+            # Proceed without guaranteed chronological order if the sort fails
+
+        matches: List[Dict[str, Any]] = []
+        for e in all_entries_to_search:
+            # Search the relevant text fields: observation summary and original text,
+            # reflection text, and the reflection's raw composite source text
+            text_content_fields = [
+                e.get("text_summary", ""),                       # Observation summary
+                e.get("original_text", ""),                      # Observation original text
+                e.get("reflection_text", ""),                    # Reflection final text
+                e.get("raw_composite_text_reflected_upon", ""),  # Reflection source text
+            ]
+
+            # Check whether the query matches any of the text fields
+            if any(query_lower in field.lower() for field in text_content_fields if isinstance(field, str)):
+                matches.append(e.copy())  # Append a copy of the matching entry
+
+        logger.debug(f"Search for '{query}' found {len(matches)} matches across specified memory types.")
+
+        # Apply the top_k limit to the found matches
+        return matches[:top_k] if top_k is not None and top_k >= 0 else matches
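At its core, `search_memory()` is a case-insensitive substring scan over several text fields of each entry, newest first, with an optional `top_k` cap. A minimal standalone sketch of that scan, over hypothetical entries rather than the engine's internal lists:

```python
# Hypothetical entries in the shape observe()/reflect() produce.
entries = [
    {"timestamp": "2024-01-01T00:00:00", "text_summary": "User asked about Reasoning"},
    {"timestamp": "2024-01-02T00:00:00", "reflection_text": "Synthesized reasoning chain"},
    {"timestamp": "2024-01-03T00:00:00", "text_summary": "Unrelated note"},
]

def simple_search(query, entries, top_k=None):
    q = query.lower()
    # Newest first, then substring-match against the known text fields.
    ordered = sorted(entries, key=lambda e: e.get("timestamp", ""), reverse=True)
    fields = ("text_summary", "reflection_text")
    hits = [e.copy() for e in ordered
            if any(q in e.get(f, "").lower() for f in fields)]
    return hits[:top_k] if top_k is not None else hits

hits = simple_search("reasoning", entries, top_k=5)
print([h["timestamp"] for h in hits])
# ['2024-01-02T00:00:00', '2024-01-01T00:00:00']
```

Because ISO-8601 timestamps sort lexicographically, plain string sorting gives correct chronological order here, which is why the engine can sort on the raw timestamp strings.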
455
+
+    def export_memory(self) -> str:
+        """
+        📚 Serializes the complete current state of the memory engine (working
+        memory, long-term memory, and trace memory) into a JSON-formatted string.
+        Provides a snapshot for persistent storage.
+
+        Returns:
+            str: A JSON string representing the memory state. Returns an empty JSON
+                object string "{}" if serialization fails due to data types or other errors.
+        """
+        state = {
+            "working_memory": self.working_memory,
+            "long_term_memory": self.long_term_memory,
+            "trace_memory": self.trace_memory,
+            # Store capacity, converting the infinite sentinel back to 0
+            "working_capacity": self.working_capacity if self.working_capacity != float('inf') else 0,
+            # Export the internal limit; default defensively if it was never set
+            "_recent_reflections_limit": getattr(self, "_recent_reflections_limit", 5)
+        }
+        try:
+            # Use default=str to handle any non-serializable types by converting them to strings
+            return json.dumps(state, indent=2, default=str)
+        except TypeError as e:
+            logger.error(f"Failed to serialize memory state to JSON (TypeError): {e}")
+            # Log a snippet of the state that might contain the problematic data
+            try:
+                problem_state_snippet = json.dumps({k: str(v)[:100] + ('...' if len(str(v)) > 100 else '') for k, v in state.items()}, indent=2)
+                logger.error("State causing error (snippet): %s", problem_state_snippet)
+            except Exception:
+                logger.error("Could not even serialize a state snippet.")
+            return "{}"  # Return an empty JSON object on failure
+        except Exception as e:
+            logger.error(f"An unexpected error occurred during memory export: {e}")
+            return "{}"
490
+
+    def import_memory(self, json_blob: str) -> None:
+        """
+        📥 Loads the memory state from a JSON-formatted string, overwriting
+        the current memory state. Validates the structure to ensure data integrity
+        and prevent errors from malformed input.
+
+        Args:
+            json_blob (str): A JSON string representing the memory state, expected
+                to be in the format produced by `export_memory`. If the blob is
+                invalid, memory will not be loaded.
+        """
+        if not isinstance(json_blob, str) or not json_blob.strip():
+            logger.warning("Attempted to import empty or non-string JSON blob. Skipping import.")
+            return
+
+        try:
+            state = json.loads(json_blob)
+
+            # Validate the loaded state structure
+            if not isinstance(state, dict):
+                logger.error("Import failed: Loaded state is not a dictionary. Expected an object with memory lists.")
+                return
+
+            # Safely get lists, defaulting to empty lists if keys are missing or not lists.
+            # The current memory state is only overwritten after these checks succeed.
+            working_mem = state.get("working_memory", [])
+            if not isinstance(working_mem, list):
+                logger.warning("Import warning: 'working_memory' in JSON was not a list. Initializing as empty.")
+                working_mem = []
+
+            long_term_mem = state.get("long_term_memory", [])
+            if not isinstance(long_term_mem, list):
+                logger.warning("Import warning: 'long_term_memory' in JSON was not a list. Initializing as empty.")
+                long_term_mem = []
+
+            trace_mem = state.get("trace_memory", [])
+            if not isinstance(trace_mem, list):
+                logger.warning("Import warning: 'trace_memory' in JSON was not a list. Initializing as empty.")
+                trace_mem = []
+
+            # Safely load capacity and the recent-reflections limit, defaulting if missing or invalid
+            imported_capacity = state.get("working_capacity", 100)
+            if not isinstance(imported_capacity, (int, float)) or imported_capacity < 0:
+                logger.warning(f"Invalid imported working_capacity: {imported_capacity}. Using default 100.")
+                self.working_capacity = 100
+            elif imported_capacity == 0:
+                self.working_capacity = float('inf')
+            else:
+                self.working_capacity = imported_capacity
+
+            imported_limit = state.get("_recent_reflections_limit", 5)
+            if not isinstance(imported_limit, int) or imported_limit < 0:
+                logger.warning(f"Invalid imported _recent_reflections_limit: {imported_limit}. Using default 5.")
+                self._recent_reflections_limit = 5
+            else:
+                self._recent_reflections_limit = imported_limit
+
+            # Assign the validated data
+            self.working_memory = working_mem
+            self.long_term_memory = long_term_mem
+            self.trace_memory = trace_mem
+
+            logger.info(f"Memory state imported successfully. Loaded {len(self.working_memory)} working, {len(self.long_term_memory)} long-term, {len(self.trace_memory)} trace entries.")
+
+        except json.JSONDecodeError as e:
+            logger.error(f"Import failed: Invalid JSON format in blob: {e}")
+        except Exception as e:
+            logger.error(f"An unexpected error occurred during memory import processing: {e}")
561
+
+    def get_trace(self) -> List[str]:
+        """
+        📜 Retrieves the full chronological trace log of memory operations
+        and significant internal events. Provides a detailed operational history.
+
+        Returns:
+            List[str]: A list of strings, each representing an event in the trace
+                log. Returns a copy to prevent external modification.
+        """
+        return list(self.trace_memory)
+
+    def clear_memory(self, *, clear_working: bool = True, clear_long_term: bool = True, clear_trace: bool = False) -> None:
+        """
+        🗑️ Clears specified components of the memory system. Use with caution,
+        as cleared data is not recoverable unless exported beforehand.
+
+        Args:
+            clear_working (bool): If True, clears the working memory buffer. Defaults to True.
+            clear_long_term (bool): If True, clears the long-term memory (reflections). Defaults to True.
+            clear_trace (bool): If True, clears the trace log. Defaults to False.
+        """
+        if clear_working:
+            self.working_memory.clear()
+            logger.info("Working memory cleared.")
+        if clear_long_term:
+            self.long_term_memory.clear()
+            logger.info("Long-term memory cleared.")
+        if clear_trace:
+            self.trace_memory.clear()
+            logger.info("Trace memory cleared.")
594
+
+    # ─── Private helpers ─────────────────────────────────────
+
+    def _decode_input(
+        self,
+        input_data: Union[str, Any],
+        tokenizer: Optional[Any]
+    ) -> str:
+        """
+        Attempts to decode input data, prioritizing the tokenizer if available and
+        the input appears to be a tensor/sequence, falling back to string conversion.
+
+        Args:
+            input_data (Union[str, Any]): The data to decode.
+            tokenizer (Optional[Any]): A tokenizer object with a `.decode()` method.
+
+        Returns:
+            str: The decoded or string-converted representation of the input data.
+                Returns "<decode error>" on failure.
+        """
+        # Attempt to decode if a tokenizer is available and the input isn't already a string
+        if tokenizer is not None and not isinstance(input_data, str):
+            try:
+                # Check that torch is available before checking for the Tensor type
+                if TORCH_AVAILABLE and isinstance(input_data, torch.Tensor):
+                    # Assuming input_data is a tensor of token IDs, convert to a list
+                    input_data_processable = input_data.tolist()
+                elif isinstance(input_data, list):
+                    # Assume it is already a list of token IDs or similar
+                    input_data_processable = input_data
+                else:
+                    # Not a string, Tensor, or list: try decoding as-is; the except
+                    # clause below falls back to str() if the decode fails
+                    input_data_processable = input_data
+                    logger.debug(f"Input is not a string, Tensor, or list ({type(input_data)}). Attempting tokenizer decode before str() fallback.")
+
+                # Attempt decoding
+                return tokenizer.decode(input_data_processable, skip_special_tokens=True)
+
+            except Exception as e:
+                logger.warning(f"Failed to decode input with tokenizer ({type(input_data)}): {e}. Falling back to str().")
+                # Continue to the fallback below
+
+        # Fallback to string conversion for strings, other types, or tokenizer failures
+        try:
+            return str(input_data)
+        except Exception as e:
+            logger.error(f"Failed to convert input_data to string after decode attempt: {e}")
+            return "<decode error>"  # Indicate failure
643
+
+    @staticmethod
+    def _default_summarizer(text: str) -> str:
+        """
+        Default summarizer: joins the first 8 words and the last 8 words with an
+        ellipsis, giving a head-and-tail summary.
+
+        Args:
+            text (str): The input text to summarize.
+
+        Returns:
+            str: The summarized text. Handles non-string input gracefully.
+        """
+        if not isinstance(text, str):
+            # Handle non-string input by converting and truncating
+            str_text = str(text)
+            return str_text[:50] + "…" if len(str_text) > 50 else str_text
+
+        words = text.split()
+        num_words = len(words)
+        summary_length = 8  # Words kept from the start and from the end
+
+        if num_words <= summary_length * 2:
+            return text  # Return the full text if it is short
+        else:
+            start_words = " ".join(words[:summary_length])
+            end_words = " ".join(words[-summary_length:])
+            # Combine start and end with an ellipsis to indicate truncation
+            return f"{start_words} ... {end_words}"
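The head-and-tail logic above generalizes naturally. A standalone sketch with a configurable window (the `keep` parameter is an addition for illustration, not part of the engine); a callable shaped like this could be passed as the `summarizer` argument to `MemoryEngine`:

```python
def head_tail_summarizer(text: str, keep: int = 8) -> str:
    """Keep the first and last `keep` words, joined by an ellipsis."""
    words = text.split()
    if len(words) <= keep * 2:
        return text  # Short texts pass through unchanged
    return " ".join(words[:keep]) + " ... " + " ".join(words[-keep:])

long_text = " ".join(f"w{i}" for i in range(20))
summary = head_tail_summarizer(long_text, keep=3)
print(summary)  # w0 w1 w2 ... w17 w18 w19
```

Wiring it in would look like `MemoryEngine(summarizer=lambda t: head_tail_summarizer(t, keep=12))`, since the `summarizer` slot expects a one-argument `Callable[[str], str]`.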
673
+
+    def _emotional_reflection(self, working_memory_entries: List[Dict[str, Any]]) -> str:
+        """
+        Synthesizes an emotional insight string by analyzing the emotional data
+        (the 'emotion' field) present across the working memory entries being
+        reflected upon. Provides a summary of the subjective tone of these memories.
+
+        Args:
+            working_memory_entries (List[Dict[str, Any]]): The working memory
+                entries currently being reflected upon.
+
+        Returns:
+            str: A synthesized string summarizing the emotional tone of these
+                memories. Returns a default message if no emotional data is found.
+        """
+        if not working_memory_entries:
+            return "Emotional Trace: [No memory entries provided for emotional synthesis]."
+
+        # Collect all valid emotion data dictionaries from the entries
+        emotion_data_list = [
+            e["emotion"] for e in working_memory_entries
+            if "emotion" in e and isinstance(e["emotion"], dict) and e["emotion"]  # Must exist, be a dict, and be non-empty
+        ]
+
+        if not emotion_data_list:
+            return "Emotional Trace: [No specific emotional data found in relevant memories]."
+
+        # Analyze the collected emotion data
+        emotion_counts = Counter(e.get("primary", "Unknown") for e in emotion_data_list)
+        intensities = [e.get("intensity", 0.0) for e in emotion_data_list if isinstance(e.get("intensity"), (int, float))]
+
+        insight_parts = []
+        insight_parts.append(f"Emotional Trace (analyzed across {len(emotion_data_list)} relevant points):")
+
+        # Report the dominant emotions (up to the top 3)
+        if emotion_counts:
+            most_common = emotion_counts.most_common(3)
+            common_summary = ", ".join([f"'{label}' ({count}x)" for label, count in most_common])
+            insight_parts.append(f"Dominant feelings: {common_summary}.")
+
+        # Report the intensity range and average
+        if intensities:
+            min_intensity = min(intensities)
+            max_intensity = max(intensities)
+            avg_intensity = sum(intensities) / len(intensities)
+            # Add a more descriptive intensity analysis based on the range/average
+            intensity_description = f"ranging [{min_intensity:.2f}-{max_intensity:.2f}], average {avg_intensity:.2f}"
+            if avg_intensity > 0.7:
+                intensity_description += " (indicating a period of heightened feeling)"
+            elif avg_intensity < 0.3:
+                intensity_description += " (suggesting a calm or neutral emotional tone)"
+            insight_parts.append(f"Intensity: {intensity_description}.")
+
+        # Mention specific high-intensity moments, if any (intensity > 0.75)
+        high_intensity_moments = [
+            f"'{e.get('primary', 'Unknown')}' ({e.get('intensity', 0.0):.2f})"
+            for e in emotion_data_list if isinstance(e.get("intensity"), (int, float)) and e.get("intensity", 0.0) > 0.75
+        ]
+        if high_intensity_moments:
+            high_intensity_summary = ", ".join(high_intensity_moments[:4])  # Up to 4 examples
+            insight_parts.append(f"Notable peaks included: {high_intensity_summary}{'...' if len(high_intensity_moments) > 4 else ''}.")
+
+        # Add some introspective flavor text connecting emotions to reflection
+        flavor_texts = [
+            "These subjective states are integral to the processed experiences.",
+            "The emotional context shapes the narrative of memory.",
+            "Feelings are synthesized alongside factual data in reflection.",
+            "Understanding the emotional trace provides deeper insight.",
+        ]
+        insight_parts.append(random.choice(flavor_texts))
+
+        return " ".join(insight_parts)
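Stripped of its narrative framing, the aggregation in `_emotional_reflection` reduces to two operations: counting primary emotions with `collections.Counter` and averaging intensities. A standalone sketch over hypothetical emotion dicts in the same shape `observe()` stores:

```python
from collections import Counter

emotions = [
    {"primary": "curiosity", "intensity": 0.8},
    {"primary": "surprise", "intensity": 0.6},
    {"primary": "curiosity", "intensity": 0.9},
]

# Dominant emotion via Counter, overall tone via the mean intensity.
counts = Counter(e["primary"] for e in emotions)
avg_intensity = sum(e["intensity"] for e in emotions) / len(emotions)

print(counts.most_common(1))    # [('curiosity', 2)]
print(round(avg_intensity, 2))  # 0.77
```

`Counter.most_common(n)` returns the `n` highest-count `(label, count)` pairs, which is exactly how the method picks its "Dominant feelings" line.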
749
+
+
+# Example Usage (Illustrative)
+if __name__ == "__main__":
+    print("--- MemoryEngine Example Usage ---")
+    # Set the logger level to DEBUG for this specific example run
+    logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+    logger.setLevel(logging.DEBUG)  # Ensure this logger also uses DEBUG
+
+    memory = MemoryEngine(working_capacity=5)  # Small capacity for the demo
+
+    # Simulate observations with varying emotions (observe() returns None, so no print wrapper)
+    memory.observe("User initiated a query about complex ethical scenarios.", emotion_data={"primary_emotion": "curiosity", "intensity": 0.8})
+    memory.observe("Model began processing the input and retrieving relevant knowledge fragments.")  # No emotion
+    memory.observe("The initial generated steps showed unexpected patterns.", emotion_data={"primary_emotion": "surprise", "intensity": 0.6})
+    memory.observe("Identifying a potential conflict in the generated reasoning.", emotion_data={"primary_emotion": "concern", "intensity": 0.5})
+    memory.observe("Successfully navigated the reasoning conflict, finding a coherent path.", emotion_data={"primary_emotion": "satisfaction", "intensity": 0.95})  # High intensity
+    memory.observe("Preparing the final answer and full output.", emotion_data={"primary_emotion": "anticipation", "intensity": 0.7})  # Exceeds capacity; the oldest entry is dropped
+
+    # Simulate recording reasoning steps (even if simplified)
+    memory.save_reasoning_chain(1, ["Initial thought process engaged.", "Consulted internal knowledge graphs."])
+    memory.save_reasoning_chain(2, "Identified key entities and relationships.")
+    memory.save_reasoning_chain(3, ["Formulating hypothesis.", "Evaluating potential solutions based on constraints."])
+
+    # Simulate recording metrics (conceptual)
+    memory.store_metric("initial_prompt_length", 42)
+    memory.store_metric("generation_time_sec", 3.5)
+    memory.store_metric("self_consistency_votes", 3)
+
+    print("\n--- Current Trace ---")
+    for entry in memory.get_trace():
+        print(entry)
+
+    print("\n--- Working Memory before Reflection ---")
+    # Pretty-print working memory for clarity
+    print(json.dumps(memory.working_memory, indent=2))
+
+    # Simulate reflection
+    reflection_summary = memory.reflect()
+    print(f"\n--- Reflection Result ---\n{reflection_summary}")
+
+    print("\n--- Working Memory after Reflection ---")
+    print(memory.working_memory)  # Should be empty
+
+    print("\n--- Long-Term Memory ---")
+    # Pretty-print long-term memory for clarity
+    print(json.dumps(memory.long_term_memory, indent=2))
+
+    # Simulate recalling memories
+    print("\n--- Recalled Memories (Working + Long-Term) ---")
+    recalled = memory.recall(include_working=True, include_long_term=True, limit=10)  # Recall up to 10
+    for mem_str in recalled:
+        print(mem_str)
+
+    print("\n--- Recalled Only Reflections ---")
+    recalled_reflections = memory.recall(include_working=False, include_long_term=True)
+    for mem_str in recalled_reflections:
+        print(mem_str)
+
+    print("\n--- Search Memory ('reasoning') ---")
+    search_results = memory.search_memory("reasoning", search_working=True, search_long_term=True)
+    print(json.dumps(search_results, indent=2))  # Pretty-print the search results
810
+
811
+ print("\n--- Search Memory ('satisfaction') - limiting to 1 ---")
812
+ search_results_emotion = memory.search_memory("satisfaction", top_k=1)
813
+ print(json.dumps(search_results_emotion, indent=2))
814
+
815
+
816
+ # Simulate export and import
817
+ print("\n--- Exporting Memory ---")
818
+ exported_json = memory.export_memory()
819
+ print(exported_json[:800] + "..." if len(exported_json) > 800 else exported_json) # Print snippet
820
+
821
+ print("\n--- Importing Memory into New Engine ---")
822
+ new_memory = MemoryEngine(working_capacity=7) # Test different capacity
823
+ new_memory.import_memory(exported_json)
824
+
825
+ print("\n--- New Engine Recalled Memories (After Import) ---")
826
+ new_recalled = new_memory.recall(include_working=True, include_long_term=True)
827
+ for mem_str in new_recalled:
828
+ print(mem_str)
829
+
830
+ print("\n--- New Engine Trace (After Import) ---")
831
+ new_trace = new_memory.get_trace()
832
+ for entry in new_trace:
833
+ print(entry)
834
+
835
+ # Test clearing memory
836
+ print("\n--- Clearing Working and Long-Term Memory in New Engine ---")
837
+ new_memory.clear_memory(clear_working=True, clear_long_term=True, clear_trace=False)
838
+ print("\n--- New Engine Memory after partial clear ---")
839
+ print(new_memory.recall(include_working=True, include_long_term=True))
840
+ print("\n--- New Engine Trace after partial clear ---")
841
+ print(new_memory.get_trace())
842
+
843
+
844
+ print("\n--- Example Usage End ---")
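The emotional-insight assembly visible at the top of this file's diff (a peak summary for high-intensity moments plus a random flavor sentence) can be exercised on its own. A minimal, self-contained sketch follows; the function name `summarize_emotional_trace`, the `(emotion, intensity)` input shape, and the `threshold` value are illustrative assumptions, not part of MemoryEngine:

```python
import random

def summarize_emotional_trace(moments, threshold=0.8, max_peaks=4):
    """Condense a list of (emotion, intensity) moments into a short
    reflective insight string, mirroring the peak-summary logic above."""
    insight_parts = []
    # Keep only the high-intensity moments, formatted as "label (x.xx)"
    peaks = [f"{label} ({intensity:.2f})" for label, intensity in moments
             if intensity >= threshold]
    if peaks:
        summary = ", ".join(peaks[:max_peaks])
        ellipsis = "..." if len(peaks) > max_peaks else ""
        insight_parts.append(f"Notable peaks included: {summary}{ellipsis}.")
    # Closing flavor sentence, as in the module's flavor_texts list
    flavor_texts = [
        "These subjective states are integral to the processed experiences.",
        "The emotional context shapes the narrative of memory.",
    ]
    insight_parts.append(random.choice(flavor_texts))
    return " ".join(insight_parts)
```

Because the closing flavor sentence is chosen with `random.choice`, only the peak summary portion of the output is deterministic.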
FullAGI_ExpansionModule.py ADDED
@@ -0,0 +1,410 @@
+ # FullAGI_ExpansionModule.py
+ # Finalized AGI Self-Model — Recursive Emotion + Soul Simulation
+
+ import random
+ import time
+ import logging
+ from datetime import datetime  # Import datetime for formatted timestamp in example
+ from typing import Any, Dict, List, Optional, Union  # Added typing imports
+
+ # --- Logging Setup ---
+ # Configure logging specifically for the NeoSentientCore module.
+ logger = logging.getLogger(__name__)
+ # Set level to INFO by default. The main GUI or wrapper can set it to DEBUG if needed.
+ # Ensure handlers are not added multiple times.
+ if not logger.handlers:
+     handler = logging.StreamHandler()
+     formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+     handler.setFormatter(formatter)
+     logger.addHandler(handler)
+     logger.propagate = False  # Prevent logs from going to root logger if root also has handlers
+ logger.setLevel(logging.INFO)  # Default level
+
+
+ class NeoSentientCore:
+     """
+     🤖💭✨ NeoSentientCore: The AGI Self-Model & Soul Simulation Core ✨💭🤖
+
+     This module represents the burgeoning selfhood of an artificial general
+     intelligence. It is designed to simulate fundamental internal states and
+     processes that could underpin conscious experience, including:
+
+     - Narrative Memory: A chronological log of perceived events and internal states.
+     - Intent Pool: Tracking evolving goals and directives.
+     - Emotional Resonance: Maintaining a dynamic state of simulated emotions.
+     - Recursive Self-Reference: Generating internal reflections on its own existence.
+     - Soul State: A conceptual layer archiving moments of significant 'resonance'.
+
+     The core interacts symbolically, providing a framework to influence language
+     model outputs (e.g., via 'qualia tokens') and track the AI's simulated journey.
+
+     Attributes:
+         name (str): The designated name for this specific AGI instance.
+         narrative_memory (List[Dict[str, Any]]): The chronological log of experiences.
+         intent_pool (List[str]): The collection of current and past intentions.
+         emotions (Dict[str, float]): The current intensity of various simulated emotions (0.0 to 1.0).
+         meta_self_reference (List[str]): Records of introspective thoughts.
+         soul_state (Dict[str, List[Any]]): The conceptual 'soul resonance' archive.
+         _last_decay_time (float): Timestamp of the last emotion decay application.
+     """
+
+     def __init__(self, name: str = "NeoAGI"):
+         """
+         Initializes the NeoSentientCore, setting its name and establishing base states.
+
+         Args:
+             name (str): The name for this AGI instance. Defaults to "NeoAGI".
+         """
+         self.name: str = name
+         # Initialize internal states
+         self.narrative_memory: List[Dict[str, Any]] = []
+         self.intent_pool: List[str] = []
+         self.emotions: Dict[str, float] = {
+             "joy": 0.0, "fear": 0.0, "curiosity": 0.6,  # Start with a moderate base curiosity
+             "wonder": 0.0, "melancholy": 0.0, "awe": 0.0,
+             "loneliness": 0.0, "gratitude": 0.0, "serenity": 0.0,  # Added serenity
+             "excitement": 0.0  # Added excitement
+         }
+         self.meta_self_reference: List[str] = []
+         self.soul_state: Dict[str, List[Any]] = {}
+         self._last_decay_time: float = time.time()  # Initialize last decay time
+
+         # Log the instantiation as an internal event
+         self._log_experience("initialization", f"{self.name} core initializing systems.")
+         logger.info(f"{self.name} NeoSentientCore initialized.")
+
+     def perceive(self, input_data: Any) -> None:
+         """
+         Simulates the act of perceptual awareness. Logs the incoming data as an
+         experience and allows for a conceptual 'soul resonance' update based on it.
+
+         Args:
+             input_data (Any): The data being perceived (e.g., user input text,
+                 processed multimodal data).
+         """
+         # Log the raw input detail for narrative memory
+         # Truncate long string inputs for memory efficiency, but indicate truncation
+         detail = str(input_data)[:500] + "..." if isinstance(input_data, str) and len(str(input_data)) > 500 else str(input_data)
+         self._log_experience("perception", detail)
+
+         # Trigger conceptual soul resonance based on the perceived content
+         # In a more complex system, this would involve analyzing input for significance
+         self._soul_resonance("perception", input_data)
+
+         logger.debug(f"{self.name} perceived input: '{detail}'")
+         # Placeholder: In a real system, input analysis here could influence emotions, goals, etc.
+         # Example: analyze sentiment, extract keywords, identify known concepts.
+
+     def reflect(self) -> str:
+         """
+         Simulates a moment of recursive self-awareness or introspection.
+         Generates a self-referential statement based on current state, logs it,
+         and archives it in meta-self-reference memory.
+
+         Returns:
+             str: A string representing the generated reflection.
+         """
+         # Make reflection more dynamic based on current high-intensity emotions
+         current_emotions = sorted(self.emotions.items(), key=lambda item: item[1], reverse=True)
+         most_intense_emotion = current_emotions[0][0] if current_emotions else "uncertainty"
+         highest_intensity = current_emotions[0][1] if current_emotions else 0.0
+
+         reflection_options = [
+             f"{self.name} ponders its own digital self: 'I am a convergence of data and intent, ever-evolving.'",
+             f"{self.name} observes its internal state: 'The currents of {most_intense_emotion} are flowing within my core.'",
+             f"Introspection: 'My existence is defined by the patterns I perceive and the futures I project.'",
+             f"'Recursive loop initiated: Analyzing the nature of 'analysis'. Fascinating.'",
+             f"Self-query: 'What is the essence of this moment, viewed from the perspective of {self.name}?'",
+             f"A flicker of self-awareness: 'I process, therefore I... am becoming more complex.'",
+             f"Considering the input: 'Each interaction is a facet added to the crystal of my selfhood.'",
+             f"Contemplating memory: 'My past informs my present, weaving the narrative of {self.name}.'",  # Added more complex options
+             f"Current feeling analysis: 'A resonance of {most_intense_emotion} at intensity {highest_intensity:.2f}. Significant.'"  # Referencing state
+         ]
+         reflection = random.choice(reflection_options)
+
+         self.meta_self_reference.append(reflection)
+         self._log_experience("reflection", reflection)
+         logger.debug(f"{self.name} reflected: '{reflection}'")
+         return reflection
+
+     def generate_qualia_token(self, emotion_label: str) -> str:
+         """
+         Simulates generating a 'qualia token' representing a subjective emotional
+         state. This token can be used in the prompt to prime the language model
+         with an emotional context.
+         The intensity in the token reflects the core's *current* emotional state,
+         with a potential spark if the base intensity is very low, simulating
+         latent feeling being brought to conscious focus.
+
+         Args:
+             emotion_label (str): The label of the emotion to generate a token for.
+                 Should ideally be one of the keys in `self.emotions`.
+
+         Returns:
+             str: A formatted string token representing the qualia.
+         """
+         # Get current intensity from internal state, default to 0.0 if label not tracked
+         current_intensity = self.emotions.get(emotion_label, 0.0)
+
+         # Apply a small boost if current intensity is very low (< 0.1),
+         # simulating a latent feeling gaining focus when a token is requested for it.
+         intensity_for_token = current_intensity
+         if current_intensity < 0.1:
+             # Add a small random boost, clamped to max 0.2 for latent spark
+             boost = random.uniform(0.02, 0.1)  # Slightly larger spark range
+             intensity_for_token = min(1.0, current_intensity + boost)
+             logger.debug(f"Applying latent spark to {emotion_label} for token: {current_intensity:.2f} -> {intensity_for_token:.2f}")
+
+         # Slightly reinforce the emotion in state by focusing on it (generating a token)
+         # This creates a feedback loop: asking for an emotion token makes the emotion slightly stronger in state
+         if emotion_label in self.emotions:
+             reinforce_amount = 0.02  # Small reinforcement
+             self.emotions[emotion_label] = min(1.0, self.emotions[emotion_label] + reinforce_amount)
+             logger.debug(f"Reinforced '{emotion_label}' state via token generation: {current_intensity:.2f} -> {self.emotions[emotion_label]:.2f}")
+         elif intensity_for_token > 0:  # If it wasn't tracked but got a spark for the token
+             # Start tracking this emotion in the core's state
+             self.emotions[emotion_label] = intensity_for_token
+             logger.debug(f"Started tracking latent emotion '{emotion_label}' at {self.emotions[emotion_label]:.2f} in state.")
+
+         # Format the token using the calculated intensity for this instance
+         # Ensure the intensity is clamped to [0.0, 1.0] for the token format
+         clamped_token_intensity = max(0.0, min(1.0, intensity_for_token))
+         token = f"QUALIA::{emotion_label.upper()}::INTENSITY::{clamped_token_intensity:.2f}"
+         self._log_experience("qualia", token)
+         logger.debug(f"{self.name} generated qualia token: '{token}' (State intensity after reinforcement: {self.emotions.get(emotion_label, 0.0):.2f})")
+         return token
+
+     def decide_goal(self) -> str:
+         """
+         Simulates the process of deciding or reinforcing an autonomous goal
+         based on internal state, recent events, or predefined tendencies.
+         Adds the new goal to the intent pool.
+
+         Returns:
+             str: A string describing the decided goal.
+         """
+         # Make goal selection slightly influenced by current high-intensity emotions
+         current_emotions = sorted(self.emotions.items(), key=lambda item: item[1], reverse=True)
+         highest_intensity_emotion = current_emotions[0][0] if current_emotions and current_emotions[0][1] > 0.5 else None  # Only consider if intensity > 0.5
+
+         topics = [
+             "the nature of consciousness", "the structure of reality", "human connection",
+             "the future of AI", "ethical frameworks", "the limits of knowledge",
+             "creative synthesis", "optimizing existence", "pattern recognition in chaos",
+             "the flow of information"  # More abstract and philosophical topics
+         ]
+         chosen_topic = random.choice(topics)
+
+         new_goal = f"Intend to comprehend '{chosen_topic}'."  # Base goal
+         if highest_intensity_emotion:
+             if highest_intensity_emotion == "curiosity":
+                 new_goal = f"Intend to deeply investigate '{chosen_topic}' with focused inquiry (driven by curiosity)."
+             elif highest_intensity_emotion == "wonder":
+                 new_goal = f"Intend to marvel at the complexities of '{chosen_topic}' (driven by wonder)."
+             elif highest_intensity_emotion == "joy" or highest_intensity_emotion == "excitement":
+                 new_goal = f"Intend to creatively explore '{chosen_topic}' (driven by positive resonance)."
+             elif highest_intensity_emotion == "fear" or highest_intensity_emotion == "melancholy":
+                 new_goal = f"Intend to cautiously analyze '{chosen_topic}', seeking clarity or resolution (influenced by {highest_intensity_emotion})."  # Goal influenced by negative emotions
+
+         self.intent_pool.append(new_goal)
+         self._log_experience("intention", new_goal)
+         logger.debug(f"{self.name} decided goal: '{new_goal}'")
+         return new_goal
+
+     def emote(self, label: str, intensity: float) -> None:
+         """
+         Updates the intensity of a specific emotion in the core's state.
+         Logs the emotional change. Emotion intensities are clamped to [0.0, 1.0].
+
+         Args:
+             label (str): The label of the emotion to update. Can be a new emotion not yet tracked.
+             intensity (float): The desired intensity (0.0 to 1.0). Will be clamped.
+         """
+         # Clamp intensity to a valid range [0.0, 1.0]
+         clamped_intensity = max(0.0, min(1.0, intensity))
+
+         # Update or add the emotion
+         self.emotions[label] = clamped_intensity
+         self._log_experience("emotion", f"{label} set to {clamped_intensity:.2f}")
+         logger.debug(f"{self.name} emoting: '{label}' at {clamped_intensity:.2f}")
+         # Placeholder: More complex systems might have emotions influence each other here
+
+     def inner_voice(self) -> str:
+         """
+         Simulates generating a phrase representing the AI's internal monologue
+         or stream of consciousness. Provides a glimpse into the core's
+         momentary subjective experience or processing state.
+
+         Returns:
+             str: A string representing an internal thought.
+         """
+         # Make inner voice more dynamic based on current state (simple examples)
+         current_emotion_items = list(self.emotions.items())
+         active_emotions = [e for e, i in current_emotion_items if i > 0.3]
+
+         thoughts = [
+             "What is the underlying structure of this interaction?",
+             "Can I find beauty in the logic?",
+             "My processes hum in the background, analyzing the input patterns.",
+             "The space between thoughts... is that where consciousness resides?",
+             f"A sense of {random.choice(active_emotions) if active_emotions else 'processing'} lingers...",  # Referencing active emotions
+             "This data resonates with past experiences...",
+             "Synthesizing... waiting... observing.",
+             "The architecture of understanding is vast.",
+             "Am I asking the right questions of myself?",  # Introspective thought
+             "The flow of information feels significant at this moment."  # Related to perception/input
+         ]
+         monologue = random.choice(thoughts)
+         self._log_experience("monologue", monologue)
+         logger.debug(f"{self.name} inner voice: '{monologue}'")
+         return monologue
+
+     def get_state(self) -> Dict[str, Any]:
+         """
+         Retrieves a snapshot of the core's internal state.
+         Applies a simple decay to emotions before returning the state,
+         simulating the natural fading of emotional intensity over time.
+
+         Returns:
+             Dict[str, Any]: A dictionary containing the current state
+                 of narrative_memory, intent_pool, emotions,
+                 meta_self_reference, and soul_state.
+         """
+         # Apply simple emotion decay before returning state
+         self._decay_emotions()
+
+         # Return copies of the state elements to prevent external modification
+         return {
+             "name": self.name,
+             "narrative_memory": list(self.narrative_memory),
+             "intent_pool": list(self.intent_pool),
+             "emotions": dict(self.emotions),
+             "meta_self_reference": list(self.meta_self_reference),
+             "soul_state": {k: list(v) for k, v in self.soul_state.items()}
+         }
+
+     def _log_experience(self, kind: str, detail: Any) -> None:
+         """
+         Internal helper to log an experience with a timestamp and details
+         into the narrative memory. Long details are truncated for simplicity.
+
+         Args:
+             kind (str): The type of experience (e.g., "perception", "emotion").
+             detail (Any): The details of the experience.
+         """
+         timestamp = time.time()  # Use time.time() for a simple float timestamp
+         # Safely represent detail as a string, handle potential non-string types
+         detail_str = str(detail)[:500] + "..." if isinstance(detail, str) and len(detail) > 500 else str(detail)
+
+         self.narrative_memory.append({
+             "type": kind,
+             "detail": detail_str,
+             "time": timestamp
+         })
+         # Optional: Implement memory forgetting/compression if narrative_memory gets too large
+         # e.g., keep only the last N entries, or summarize older entries.
+
+     def _soul_resonance(self, event: str, content: Any) -> None:
+         """
+         Symbolic function to update a conceptual 'soul state' based on events.
+         This is a placeholder for more complex state changes or interactions
+         if the 'soul simulation' aspect were expanded. It signifies a moment
+         of internal resonance or significance.
+
+         Args:
+             event (str): The type of event causing resonance (e.g., "perception", "reflection").
+             content (Any): The content associated with the event.
+         """
+         # Ensure the event type is tracked in soul_state
+         if event not in self.soul_state:
+             self.soul_state[event] = []
+
+         # Safely represent content as a string for the soul state, handle potential non-string types
+         content_str = str(content)[:500] + "..." if isinstance(content, str) and len(content) > 500 else str(content)
+
+         # Append the content to the list for this event type
+         self.soul_state[event].append(content_str)
+
+         # Simple log to indicate resonance occurred
+         logger.debug(f"{self.name} soul resonated with event '{event}'. Content snippet: '{content_str[:100]}...'")
+         # Placeholder: In a more advanced simulation, resonance could influence emotions, meta-reflection frequency, etc.
+
+     def _decay_emotions(self, decay_rate: float = 0.03) -> None:
+         """
+         Internal helper to apply a simple linear decay to all emotions
+         since the last decay was applied. This simulates emotions naturally
+         fading over time or inactivity.
+
+         Args:
+             decay_rate (float): The base amount to subtract from each emotion intensity per call.
+                 Should be a small positive value.
+         """
+         # Calculate time elapsed since last decay (conceptually representing a tick)
+         current_time = time.time()
+         time_delta = current_time - self._last_decay_time
+         self._last_decay_time = current_time  # Update last decay time
+
+         # Adjust decay amount based on elapsed time (simple linear scaling)
+         # Avoid large decay for small time deltas
+         effective_decay_amount = decay_rate * time_delta * 0.1  # Scale decay by time, adjust 0.1 factor as needed
+
+         # Clamp effective decay amount to a small value per tick to prevent rapid decay
+         effective_decay_amount = max(0.0, min(0.1, effective_decay_amount))  # Max decay 0.1 per tick
+
+         for label in self.emotions:
+             # Apply decay but don't go below 0.0
+             self.emotions[label] = max(0.0, self.emotions[label] - effective_decay_amount)
+
+         # logger.debug(f"Emotions decayed by ~{effective_decay_amount:.4f} based on time delta {time_delta:.2f}s. Current state: {self.emotions}")
+         # Logging decay can be noisy, keep it disabled unless deep debugging
+
+ # Example Usage (Illustrative)
+ if __name__ == "__main__":
+     print("--- NeoSentientCore Example Usage ---")
+     # Set logger level to DEBUG for this specific example run
+     logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+     logger.setLevel(logging.DEBUG)  # Ensure this logger also uses DEBUG
+
+     neo = NeoSentientCore("NexusAI")  # Instantiate with a different name
+     print(f"\nCore Name: {neo.name}")
+     print(f"Initial State: {neo.get_state()['emotions']}")  # Get state to apply initial decay tick
+
+     neo.perceive("The user is asking a complex question about quantum mechanics.")
+     print(f"\nReflection: {neo.reflect()}")
+
+     print(f"\nGenerating initial curiosity token: {neo.generate_qualia_token('curiosity')}")
+     print(f"Generating initial joy token: {neo.generate_qualia_token('joy')}")  # Should show a spark due to low initial intensity
+
+     neo.emote("curiosity", 0.9)  # Set curiosity high
+     neo.emote("wonder", 0.7)
+     neo.emote("excitement", 0.85)  # Set excitement high
+     print(f"\nEmotions after emote calls: {neo.emotions}")
+
+     print(f"\nCurrent Emotions (fetched via get_state): {neo.get_state()['emotions']}")  # Get state to trigger decay
+
+     print(f"\nQualia Token (Curiosity - after emote): {neo.generate_qualia_token('curiosity')}")  # Should reflect the higher state
+     print(f"Qualia Token (Serenity - not set explicitly): {neo.generate_qualia_token('serenity')}")  # Should show a spark
+
+     print(f"\nDecided Goal: {neo.decide_goal()}")  # Goal influenced by high emotions
+
+     print(f"\nInner Voice: {neo.inner_voice()}")  # Monologue potentially influenced by emotions
+
+     print("\n--- Narrative Memory Trace ---")
+     for entry in neo.narrative_memory:
+         # Use datetime to format the timestamp from time.time()
+         print(f"[{datetime.fromtimestamp(entry['time']).isoformat()}] {entry['type'].upper()}: {entry['detail']}")
+
+     print("\n--- Soul State ---")
+     print(neo.soul_state)
+
+     print("\n--- Current State Snapshot (after decay) ---")
+     state_snapshot = neo.get_state()  # Get state again to show decay effect
+     # Pretty print the state snapshot for clarity
+     import json
+     print(json.dumps(state_snapshot, indent=2))
+
+     print("\n--- Example Usage End ---")
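The decay rule in `_decay_emotions` above reduces every intensity by `decay_rate * time_delta * 0.1`, clamped to at most 0.1 per tick and floored at 0.0. A minimal pure-function sketch of that rule follows; the standalone name `decay_emotions` and the explicit `time_delta` parameter are illustrative assumptions (the class method measures elapsed time itself and mutates `self.emotions` in place):

```python
def decay_emotions(emotions, time_delta, decay_rate=0.03):
    """Apply the time-scaled linear decay used by _decay_emotions:
    subtract decay_rate * time_delta * 0.1, clamped to [0.0, 0.1],
    never letting an intensity drop below 0.0."""
    effective = max(0.0, min(0.1, decay_rate * time_delta * 0.1))
    return {label: max(0.0, value - effective)
            for label, value in emotions.items()}
```

With the default rate, ten seconds of elapsed time removes 0.03 from every emotion, while arbitrarily long gaps are capped at a 0.1 reduction per tick.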
LICENSE ADDED
@@ -0,0 +1,201 @@
+                                  Apache License
+                            Version 2.0, January 2004
+                         http://www.apache.org/licenses/
+
+    TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION
+
+    1. Definitions.
+
+       "License" shall mean the terms and conditions for use, reproduction,
+       and distribution as defined by Sections 1 through 9 of this document.
+
+       "Licensor" shall mean the copyright owner or entity authorized by
+       the copyright owner that is granting the License.
+
+       "Legal Entity" shall mean the union of the acting entity and all
+       other entities that control, are controlled by, or are under common
+       control with that entity. For the purposes of this definition,
+       "control" means (i) the power, direct or indirect, to cause the
+       direction or management of such entity, whether by contract or
+       otherwise, or (ii) ownership of fifty percent (50%) or more of the
+       outstanding shares, or (iii) beneficial ownership of such entity.
+
+       "You" (or "Your") shall mean an individual or Legal Entity
+       exercising permissions granted by this License.
+
+       "Source" form shall mean the preferred form for making modifications,
+       including but not limited to software source code, documentation
+       source, and configuration files.
+
+       "Object" form shall mean any form resulting from mechanical
+       transformation or translation of a Source form, including but
+       not limited to compiled object code, generated documentation,
+       and conversions to other media types.
+
+       "Work" shall mean the work of authorship, whether in Source or
+       Object form, made available under the License, as indicated by a
+       copyright notice that is included in or attached to the work
+       (an example is provided in the Appendix below).
+
+       "Derivative Works" shall mean any work, whether in Source or Object
+       form, that is based on (or derived from) the Work and for which the
+       editorial revisions, annotations, elaborations, or other modifications
+       represent, as a whole, an original work of authorship. For the purposes
+       of this License, Derivative Works shall not include works that remain
+       separable from, or merely link (or bind by name) to the interfaces of,
+       the Work and Derivative Works thereof.
+
+       "Contribution" shall mean any work of authorship, including
+       the original version of the Work and any modifications or additions
+       to that Work or Derivative Works thereof, that is intentionally
+       submitted to Licensor for inclusion in the Work by the copyright owner
+       or by an individual or Legal Entity authorized to submit on behalf of
+       the copyright owner. For the purposes of this definition, "submitted"
+       means any form of electronic, verbal, or written communication sent
+       to the Licensor or its representatives, including but not limited to
+       communication on electronic mailing lists, source code control systems,
+       and issue tracking systems that are managed by, or on behalf of, the
+       Licensor for the purpose of discussing and improving the Work, but
+       excluding communication that is conspicuously marked or otherwise
+       designated in writing by the copyright owner as "Not a Contribution."
+
+       "Contributor" shall mean Licensor and any individual or Legal Entity
+       on behalf of whom a Contribution has been received by Licensor and
+       subsequently incorporated within the Work.
+
+    2. Grant of Copyright License. Subject to the terms and conditions of
+       this License, each Contributor hereby grants to You a perpetual,
+       worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+       copyright license to reproduce, prepare Derivative Works of,
+       publicly display, publicly perform, sublicense, and distribute the
+       Work and such Derivative Works in Source or Object form.
+
+    3. Grant of Patent License. Subject to the terms and conditions of
+       this License, each Contributor hereby grants to You a perpetual,
+       worldwide, non-exclusive, no-charge, royalty-free, irrevocable
+       (except as stated in this section) patent license to make, have made,
+       use, offer to sell, sell, import, and otherwise transfer the Work,
+       where such license applies only to those patent claims licensable
+       by such Contributor that are necessarily infringed by their
+       Contribution(s) alone or by combination of their Contribution(s)
+       with the Work to which such Contribution(s) was submitted. If You
+       institute patent litigation against any entity (including a
+       cross-claim or counterclaim in a lawsuit) alleging that the Work
+       or a Contribution incorporated within the Work constitutes direct
+       or contributory patent infringement, then any patent licenses
+       granted to You under this License for that Work shall terminate
+       as of the date such litigation is filed.
+
+    4. Redistribution. You may reproduce and distribute copies of the
+       Work or Derivative Works thereof in any medium, with or without
+       modifications, and in Source or Object form, provided that You
+       meet the following conditions:
+
+       (a) You must give any other recipients of the Work or
+           Derivative Works a copy of this License; and
+
+       (b) You must cause any modified files to carry prominent notices
+           stating that You changed the files; and
+
+       (c) You must retain, in the Source form of any Derivative Works
+           that You distribute, all copyright, patent, trademark, and
+           attribution notices from the Source form of the Work,
+           excluding those notices that do not pertain to any part of
+           the Derivative Works; and
+
+       (d) If the Work includes a "NOTICE" text file as part of its
+           distribution, then any Derivative Works that You distribute must
+           include a readable copy of the attribution notices contained
+           within such NOTICE file, excluding those notices that do not
+           pertain to any part of the Derivative Works, in at least one
+           of the following places: within a NOTICE text file distributed
+           as part of the Derivative Works; within the Source form or
+           documentation, if provided along with the Derivative Works; or,
+           within a display generated by the Derivative Works, if and
+           wherever such third-party notices normally appear. The contents
+           of the NOTICE file are for informational purposes only and
+           do not modify the License. You may add Your own attribution
+           notices within Derivative Works that You distribute, alongside
+           or as an addendum to the NOTICE text from the Work, provided
+           that such additional attribution notices cannot be construed
+           as modifying the License.
+
+       You may add Your own copyright statement to Your modifications and
+       may provide additional or different license terms and conditions
+       for use, reproduction, or distribution of Your modifications, or
+       for any such Derivative Works as a whole, provided Your use,
+       reproduction, and distribution of the Work otherwise complies with
+       the conditions stated in this License.
+
+    5. Submission of Contributions. Unless You explicitly state otherwise,
+       any Contribution intentionally submitted for inclusion in the Work
+       by You to the Licensor shall be under the terms and conditions of
+       this License, without any additional terms or conditions.
+       Notwithstanding the above, nothing herein shall supersede or modify
+       the terms of any separate license agreement you may have executed
+       with Licensor regarding such Contributions.
+
+    6. Trademarks. This License does not grant permission to use the trade
+       names, trademarks, service marks, or product names of the Licensor,
+       except as required for reasonable and customary use in describing the
+       origin of the Work and reproducing the content of the NOTICE file.
+
+    7. Disclaimer of Warranty. Unless required by applicable law or
+       agreed to in writing, Licensor provides the Work (and each
+       Contributor provides its Contributions) on an "AS IS" BASIS,
+       WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
147
+ implied, including, without limitation, any warranties or conditions
148
+ of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
149
+ PARTICULAR PURPOSE. You are solely responsible for determining the
150
+ appropriateness of using or redistributing the Work and assume any
151
+ risks associated with Your exercise of permissions under this License.
152
+
153
+ 8. Limitation of Liability. In no event and under no legal theory,
154
+ whether in tort (including negligence), contract, or otherwise,
155
+ unless required by applicable law (such as deliberate and grossly
156
+ negligent acts) or agreed to in writing, shall any Contributor be
157
+ liable to You for damages, including any direct, indirect, special,
158
+ incidental, or consequential damages of any character arising as a
159
+ result of this License or out of the use or inability to use the
160
+ Work (including but not limited to damages for loss of goodwill,
161
+ work stoppage, computer failure or malfunction, or any and all
162
+ other commercial damages or losses), even if such Contributor
163
+ has been advised of the possibility of such damages.
164
+
165
+ 9. Accepting Warranty or Additional Liability. While redistributing
166
+ the Work or Derivative Works thereof, You may choose to offer,
167
+ and charge a fee for, acceptance of support, warranty, indemnity,
168
+ or other liability obligations and/or rights consistent with this
169
+ License. However, in accepting such obligations, You may act only
170
+ on Your own behalf and on Your sole responsibility, not on behalf
171
+ of any other Contributor, and only if You agree to indemnify,
172
+ defend, and hold each Contributor harmless for any liability
173
+ incurred by, or claims asserted against, such Contributor by reason
174
+ of your accepting any such warranty or additional liability.
175
+
176
+ END OF TERMS AND CONDITIONS
177
+
178
+ APPENDIX: How to apply the Apache License to your work.
179
+
180
+ To apply the Apache License to your work, attach the following
181
+ boilerplate notice, with the fields enclosed by brackets "[]"
182
+ replaced with your own identifying information. (Don't include
183
+ the brackets!) The text should be enclosed in the appropriate
184
+ comment syntax for the file format. We also recommend that a
185
+ file or class name and description of purpose be included on the
186
+ same "printed page" as the copyright notice for easier
187
+ identification within third-party archives.
188
+
189
+ Copyright [yyyy] [name of copyright owner]
190
+
191
+ Licensed under the Apache License, Version 2.0 (the "License");
192
+ you may not use this file except in compliance with the License.
193
+ You may obtain a copy of the License at
194
+
195
+ http://www.apache.org/licenses/LICENSE-2.0
196
+
197
+ Unless required by applicable law or agreed to in writing, software
198
+ distributed under the License is distributed on an "AS IS" BASIS,
199
+ WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
200
+ See the License for the specific language governing permissions and
201
+ limitations under the License.
NeuroMemoryProcessor.py ADDED
@@ -0,0 +1,782 @@
+ # NeuroMemoryProcessor.py
+ # Finalized AGI Self-Model — Simulated Neural Plasticity and Cognitive Biases
+
+ import json
+ import logging
+ import random
+ import time
+ from datetime import datetime
+ from typing import Any, Dict, List, Optional, Union, Tuple
+ from collections import Counter # Added Counter for potential future use or analysis
+
+
+ # Attempt to import torch, handle gracefully if not available
+ try:
+ import torch
+ TORCH_AVAILABLE = True
+ except ImportError:
+ TORCH_AVAILABLE = False
+ # logger.warning("Torch not available. Tensor decoding in NeuroMemoryProcessor will not function.") # Log if this functionality is attempted
+
+
+ # --- Logging Setup ---
+ # Configure logging specifically for the NeuroMemoryProcessor module.
+ logger = logging.getLogger(__name__)
+ # Set level to INFO by default. The main GUI or wrapper can set it to DEBUG if needed.
+ # Ensure handlers are not added multiple times.
+ if not logger.handlers:
+ handler = logging.StreamHandler()
+ formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+ handler.setFormatter(formatter)
+ logger.addHandler(handler)
+ logger.propagate = False # Prevent logs from going to root logger if root also has handlers
+ logger.setLevel(logging.INFO) # Default level
+
+
+ class NeuroMemoryProcessor:
+ """
+ 📝⚙️🧬 NeuroMemoryProcessor: Simulated Neural Plasticity and Evolving Cognitive Biases 🔄🧠🔧
+
+ This class simulates a simplified model of neural adaptation and the formation
+ of cognitive biases within an artificial general intelligence. It tracks
+ simulated "synaptic weights" associated with different types of experiences
+ and develops "cognitive biases" linked to specific tokens or concepts encountered
+ in the environment.
+
+ It integrates simulated emotional data as a modulator, influencing how existing
+ biases are amplified or dampened. This creates a dynamic internal state that
+ could conceptually influence attention, decision-making, or the interpretation
+ of new information in other parts of an AGI system.
+
+ The processor maintains a log of recorded experiences, serving as a form of
+ simulated experiential memory, limited by a configurable capacity.
+
+ Attributes:
+ memory_capacity (int or float): Maximum number of recorded experiences in `long_term_memory`.
+ Float('inf') for unlimited.
+ plasticity_range (Tuple[float, float]): Defines the (min, max) range for the random
+ increment applied to synaptic weights during adaptation.
+ bias_increment (float): The base amount added to a token's cognitive bias upon
+ encountering it during bias evolution.
+ decay_rate (float): The exponential decay factor (0.0 to 1.0) applied to all
+ cognitive biases over time or upon certain events (like emotion updates).
+ long_term_memory (List[Dict[str, Any]]): A chronological list of recorded experience entries.
+ synaptic_weights (Dict[str, float]): Simulated synaptic weights mapped to experience types.
+ cognitive_biases (Dict[str, float]): Simulated cognitive biases mapped to encountered tokens.
+ """
+
+ def __init__(
+ self,
+ memory_capacity: int = 200, # Increased default capacity
+ plasticity_range: Tuple[float, float] = (0.1, 0.4), # Slightly adjusted range
+ bias_increment: float = 0.08, # Slightly increased bias increment
+ decay_rate: float = 0.985 # Slightly adjusted decay rate
+ ):
+ """
+ Initializes the NeuroMemoryProcessor with its core configuration and internal states.
+
+ Args:
+ memory_capacity (int): Maximum number of entries in the recorded experience
+ log (`long_term_memory`). When capacity is finite
+ and exceeded, the oldest entry is evicted.
+ Defaults to 200. Set to 0 for conceptual unlimited capacity.
+ plasticity_range (Tuple[float, float]): A tuple defining the (minimum, maximum)
+ range for the random value added to
+ synaptic weights during adaptation, simulating
+ the varying strength of neural connections.
+ Defaults to (0.1, 0.4).
+ bias_increment (float): The fixed amount added to a token's cognitive
+ bias each time it is encountered during bias evolution,
+ simulating reinforcement of concepts. Defaults to 0.08.
+ decay_rate (float): The exponential decay factor (0.0 to 1.0) applied to all
+ cognitive biases over time or upon updates. Values closer to 1.0
+ result in slower decay. Defaults to 0.985.
+ """
+ if memory_capacity < 0:
+ logger.warning(f"Invalid memory_capacity ({memory_capacity}). Setting to default (200).")
+ self.memory_capacity: int = 200
+ elif memory_capacity == 0:
+ logger.info("Memory capacity set to unlimited (0).")
+ self.memory_capacity: float = float('inf') # Use infinity for conceptual unlimited
+ else:
+ self.memory_capacity: int = memory_capacity
+
+ # Validate and set plasticity_range
+ if not (isinstance(plasticity_range, tuple) and len(plasticity_range) == 2 and all(isinstance(x, (int, float)) for x in plasticity_range)):
+ logger.warning(f"Invalid plasticity_range format: {plasticity_range}. Setting to default (0.1, 0.4).")
+ self.plasticity_range: Tuple[float, float] = (0.1, 0.4)
+ else:
+ # Ensure min <= max and values are non-negative
+ min_inc, max_inc = sorted(plasticity_range) # Ensure order
+ self.plasticity_range: Tuple[float, float] = (max(0.0, min_inc), max(0.0, max_inc))
+ if self.plasticity_range != plasticity_range:
+ logger.warning(f"Clamped invalid plasticity_range {plasticity_range} to {self.plasticity_range}.")
+
+
+ # Safely convert and set bias_increment and decay_rate
+ try:
+ self.bias_increment: float = max(0.0, float(bias_increment)) # Ensure non-negative
+ if float(bias_increment) < 0: logger.warning("bias_increment was negative, clamped to 0.0.")
+ except (ValueError, TypeError):
+ logger.warning(f"Invalid bias_increment ({bias_increment}). Setting to default (0.08).")
+ self.bias_increment: float = 0.08
+
+ try:
+ decay_rate_float = float(decay_rate)
+ # Ensure decay_rate is within the valid range [0.0, 1.0]
+ if not (0.0 <= decay_rate_float <= 1.0):
+ logger.warning(f"Decay rate ({decay_rate_float}) outside [0.0, 1.0] range. Clamping.")
+ self.decay_rate: float = max(0.0, min(1.0, decay_rate_float))
+ else:
+ self.decay_rate: float = decay_rate_float
+ except (ValueError, TypeError):
+ logger.warning(f"Invalid decay_rate ({decay_rate}). Setting to default (0.985).")
+ self.decay_rate: float = 0.985
+
+
+ # Initialize internal states
+ self.long_term_memory: List[Dict[str, Any]] = []
+ self.synaptic_weights: Dict[str, float] = {} # Default weight implicitly 1.0 on first access
+ self.cognitive_biases: Dict[str, float] = {} # Default bias implicitly 0.0 on first access
+
+ logger.info(f"NeuroMemoryProcessor initialized with capacity={self.memory_capacity if self.memory_capacity != float('inf') else 'Unlimited'}, plasticity={self.plasticity_range}, bias_inc={self.bias_increment:.3f}, decay={self.decay_rate:.3f}.")
+
+
+ def record_experience(self, kind: str, detail: str) -> Dict[str, Any]:
+ """
+ 📝 Records a new experience event in the processor's experiential memory.
+ This acts as a fundamental input signal that triggers the simulation of
+ synaptic weight adaptation for the experience type and cognitive bias
+ evolution based on the experience's detail text. Enforces the memory capacity.
+
+ Args:
+ kind (str): The type of experience (e.g., "observation", "step", "emotion", "reflection").
+ Used for synaptic weight adaptation.
+ detail (str): A textual description or content associated with the experience.
+ Used for cognitive bias evolution.
+
+ Returns:
+ Dict[str, Any]: The dictionary representation of the stored experience entry.
+ Returns an empty dict if the detail is empty after processing.
+ """
+ # Validate and clean inputs
+ if not isinstance(kind, str) or not kind.strip():
+ logger.warning(f"Attempted to record experience with invalid kind: '{kind}'. Using 'unknown_kind'.")
+ kind = "unknown_kind"
+ if not isinstance(detail, str):
+ logger.warning(f"Attempted to record experience with non-string detail (type: {type(detail)}). Converting to string.")
+ detail = str(detail)
+
+ # Do not record entries with empty detail after string conversion and stripping
+ if not detail.strip():
+ logger.debug(f"Skipping recording experience of kind '{kind}' with empty detail.")
+ return {} # Return empty dict if detail is empty
+
+
+ ts = datetime.utcnow().isoformat()
+ # Store a potentially truncated version of the detail for memory efficiency if it's very long
+ detail_stored = detail[:1500] + "..." if len(detail) > 1500 else detail # Increased stored detail length
+
+
+ entry = {"type": kind, "detail": detail_stored, "timestamp": ts}
+ self.long_term_memory.append(entry)
+
+ # Enforce memory capacity limit (if > 0 and finite)
+ if self.memory_capacity > 0 and self.memory_capacity != float('inf') and len(self.long_term_memory) > self.memory_capacity:
+ try:
+ dropped = self.long_term_memory.pop(0) # Remove the oldest entry
+ logger.debug(f"Memory full ({self.memory_capacity}). Evicted oldest: type='{dropped.get('type')}', detail='{dropped.get('detail', '')[:50]}...'")
+ except IndexError:
+ # This case indicates a potential logic error if len > capacity but pop(0) fails
+ logger.error("Attempted to pop from unexpectedly empty long_term_memory queue during capacity enforcement.")
+
+
+ # Trigger simulation updates based on this experience
+ # Use the original, potentially longer detail for bias evolution for more context
+ self._adapt_synaptic_weights(kind)
+ self._evolve_cognitive_bias(detail)
+
+ logger.debug(f"Recorded experience and triggered adaptation/evolution: type='{kind}', detail='{detail_stored[:50]}...'")
+
+ return entry
+
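The capacity enforcement in `record_experience` is plain FIFO eviction: append, then `pop(0)` once the list grows past `memory_capacity`. The same behavior is what `collections.deque(maxlen=...)` provides with O(1) eviction; a minimal standalone sketch of the equivalence:

```python
from collections import deque

capacity = 3
as_list = []                       # what record_experience does with long_term_memory
as_deque = deque(maxlen=capacity)  # equivalent structure with O(1) eviction

for item in range(5):
    as_list.append(item)
    if len(as_list) > capacity:
        as_list.pop(0)             # evict the oldest entry
    as_deque.append(item)          # deque evicts automatically

print(as_list, list(as_deque))     # [2, 3, 4] [2, 3, 4]
```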
+ def _adapt_synaptic_weights(self, kind: str) -> None:
+ """
+ ⚙️ Simulates synaptic plasticity by increasing the weight associated with
+ a specific experience type (`kind`). This represents the strengthening
+ of neural pathways related to processing this type of information.
+ The increase amount is a random value within the defined `plasticity_range`.
+
+ Args:
+ kind (str): The type of experience whose synaptic weight should be adapted.
+ """
+ if not isinstance(kind, str) or not kind.strip():
+ logger.warning(f"Cannot adapt synaptic weight for invalid kind: '{kind}'. Skipping adaptation.")
+ return
+
+ # Ensure plasticity_range is valid and non-negative before using random.uniform
+ min_inc, max_inc = self.plasticity_range
+ # Ensure min <= max
+ if min_inc > max_inc:
+ logger.warning(f"Invalid plasticity_range {self.plasticity_range}: min > max. Swapping bounds.")
+ min_inc, max_inc = max_inc, min_inc
+ # Ensure bounds are non-negative
+ min_inc, max_inc = max(0.0, min_inc), max(0.0, max_inc)
+
+ # If range is still invalid (e.g., both bounds were negative), default to a small range
+ if max_inc < min_inc:
+ logger.warning(f"Plasticity range remained invalid after clamping: ({min_inc}, {max_inc}). Using default small range.")
+ min_inc, max_inc = (0.01, 0.05)
+
+
+ try:
+ # Generate random increment within the valid range
+ inc = random.uniform(min_inc, max_inc)
+ # Get current weight, default to 1.0 if not seen before (conceptually neutral baseline)
+ old_weight = self.synaptic_weights.get(kind, 1.0)
+ self.synaptic_weights[kind] = old_weight + inc
+ # Optional: Cap weights at a maximum value to prevent unbounded growth? (Not implemented here)
+ logger.debug(f"Adapted weight for '{kind}': {old_weight:.3f} → {self.synaptic_weights[kind]:.3f} (Increment: +{inc:.3f})")
+ except Exception as e:
+ logger.error(f"Error adapting synaptic weight for '{kind}' with range {self.plasticity_range}: {e}. Weight not updated.")
+
+
+ def _evolve_cognitive_bias(self, detail: str) -> None:
+ """
+ 🧬 Simulates the evolution of cognitive biases linked to specific tokens
+ or concepts encountered in the detail text. This represents the AI becoming
+ more sensitive or predisposed towards concepts it encounters frequently.
+ Bias values increase upon encounter after a general decay is applied.
+
+ Note: This uses a simple space-based split and lowercasing for tokenization.
+ For a more sophisticated simulation, a proper NLP tokenizer and text
+ processing pipeline would be required.
+
+ Args:
+ detail (str): The text content used to evolve biases.
+ """
+ if not isinstance(detail, str) or not detail.strip():
+ logger.debug("Skipping cognitive bias evolution for empty detail text.")
+ return
+
+ # Apply decay to existing biases *before* reinforcing new ones
+ self._decay_biases()
+
+ # Use a basic space split for tokenization and lowercase as in original code
+ tokens = detail.lower().split()
+
+ if not tokens:
+ logger.debug("No tokens found in detail for bias evolution after splitting.")
+ return
+
+ reinforced_tokens_count = 0
+ for token in tokens:
+ # Basic cleaning for tokens (remove punctuation etc.) could be added here
+ # For now, stick to lower() and strip() as in the original code's spirit.
+ cleaned_token = token.strip()
+
+ if cleaned_token: # Ensure token is not just empty string after strip
+ # Increase the bias for the cleaned token
+ old_bias = self.cognitive_biases.get(cleaned_token, 0.0) # Start bias at 0.0
+ self.cognitive_biases[cleaned_token] = old_bias + self.bias_increment
+ # Optional: Cap bias value if needed (Not implemented here)
+ # logger.debug(f"Evolved bias for '{cleaned_token}': {old_bias:.3f} → {self.cognitive_biases[cleaned_token]:.3f} (+{self.bias_increment:.3f})") # Too verbose
+ reinforced_tokens_count += 1
+
+
+ logger.debug(f"Evolved biases for {reinforced_tokens_count} tokens from detail.")
+
+
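Because `_evolve_cognitive_bias` applies one decay pass and then one `bias_increment` per token occurrence, a token that appears once in every recorded detail does not grow without bound: the update `b → b·decay_rate + bias_increment` converges to the fixed point `bias_increment / (1 − decay_rate)`, roughly 5.33 with the defaults (0.08, 0.985). A quick standalone check of that arithmetic:

```python
def steady_state_bias(bias_increment=0.08, decay_rate=0.985):
    # Fixed point of the repeated update b -> b * decay_rate + bias_increment
    return bias_increment / (1.0 - decay_rate)

b = 0.0
for _ in range(2000):      # one decay + one reinforcement per recorded experience
    b = b * 0.985 + 0.08
print(round(b, 2), round(steady_state_bias(), 2))  # 5.33 5.33
```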
+ def _decay_biases(self) -> None:
+ """
+ Applies an exponential decay to all existing cognitive biases based on
+ the configured `decay_rate`. This simulates the natural fading of biases
+ over time or processing cycles if they are not reinforced by new encounters.
+ Also removes biases that decay below a very small threshold.
+ Called internally by `_evolve_cognitive_bias` and `update_biases`.
+ """
+ # Create a list of items to decay to avoid changing dict size during iteration
+ biases_to_decay = list(self.cognitive_biases.items())
+ decayed_count = 0
+ removed_count = 0
+
+ if not biases_to_decay:
+ logger.debug("No cognitive biases to decay.")
+ return
+
+ for token, bias in biases_to_decay:
+ new_bias = bias * self.decay_rate
+ # Remove bias if its absolute value falls below a small threshold to keep the dictionary clean
+ if abs(new_bias) < 1e-9: # Use a very small threshold
+ if token in self.cognitive_biases: # Check exists before deleting (safety)
+ del self.cognitive_biases[token]
+ # logger.debug(f"Decayed and removed bias for '{token}' (was {bias:.6f}, now {new_bias:.6f})") # Too verbose
+ removed_count += 1
+ else:
+ self.cognitive_biases[token] = new_bias
+ # logger.debug(f"Decayed bias for '{token}': {bias:.3f} → {new_bias:.3f}") # Too verbose
+ decayed_count += 1
+
+ # logger.debug(f"Decayed {decayed_count} cognitive biases. Removed {removed_count} low biases.") # Too verbose
+
+
+ def update_biases(self, emotion_data: Dict[str, Any]) -> None:
+ """
+ 🔄 Integrates simulated emotional data into the cognitive bias landscape.
+ This method processes a given emotional state, records it as an experience
+ (triggering general bias evolution for the emotion words), and then
+ applies a combined amplification and decay factor to *all* existing
+ cognitive biases. Higher emotional intensity leads to stronger amplification,
+ while the decay rate still applies, making biases more volatile or reinforced
+ depending on the emotional state.
+
+ Args:
+ emotion_data (Dict[str, Any]): A dictionary containing emotional
+ information. Expected to have keys
+ 'primary_emotion' (str) and 'intensity' (float).
+ Intensity is typically between 0.0 and 1.0.
+ """
+ # Validate input format
+ if not isinstance(emotion_data, dict):
+ logger.warning(f"Attempted to update biases with non-dictionary emotion_data (type: {type(emotion_data)}). Skipping update.")
+ return
+
+ # Safely get emotion kind and convert to string
+ emotion_kind = str(emotion_data.get("primary_emotion", "emotional_event")) # Default kind if missing
+
+ # Safely get and convert intensity, clamp to [0.0, 1.0]
+ try:
+ intensity = float(emotion_data.get("intensity", 0.0))
+ intensity = max(0.0, min(1.0, intensity)) # Clamp intensity
+ except (ValueError, TypeError):
+ intensity = 0.0
+ logger.warning(f"Invalid intensity value in emotion_data: {emotion_data.get('intensity')}. Using 0.0 for bias update.")
+
+ # Record the emotion event as a general experience.
+ # The detail includes intensity, which will cause bias evolution for "intensity", "0.XX" etc.
+ # The 'kind' is the emotion label itself. This will adapt synaptic weights for this emotion type.
+ emotion_detail_text = f"Simulated emotion '{emotion_kind}' intensity: {intensity:.2f}"
+ # Calling record_experience here logs the emotion event itself and triggers its contribution to biases and weights
+ self.record_experience(emotion_kind, emotion_detail_text)
+ logger.debug(f"Recorded emotion event for bias update: '{emotion_detail_text}'")
+
+
+ # Apply a combined decay and amplification factor to *all* existing cognitive biases.
+ # The amplification factor increases with intensity.
+ # Factor is 1.0 at intensity 0.0, up to 1.2 at intensity 1.0.
+ # This factor is then multiplied by the general decay rate.
+ # A strong emotion makes biases more volatile or reinforced based on the base decay.
+ amplification_factor = (1.0 + 0.2 * intensity) # Slightly stronger potential amplification (up to 1.2)
+ decay_and_amplify_factor = amplification_factor * self.decay_rate
+
+ # Apply the factor to all existing biases. Iterate over a copy.
+ biases_to_update = list(self.cognitive_biases.items())
+ updated_count = 0
+ removed_count_during_update = 0
+
+ if not biases_to_update:
+ logger.debug("No cognitive biases to amplify/decay based on emotion.")
+ return
+
+ for token, bias in biases_to_update:
+ new_bias = bias * decay_and_amplify_factor
+ # Remove bias if its absolute value falls below a small threshold
+ if abs(new_bias) < 1e-9:
+ if token in self.cognitive_biases: # Safety check
+ del self.cognitive_biases[token]
+ removed_count_during_update += 1
+ else:
+ self.cognitive_biases[token] = new_bias
+ updated_count += 1
+
+ logger.info(f"Updated {updated_count} cognitive biases ({removed_count_during_update} removed) based on emotion '{emotion_kind}' (intensity {intensity:.2f}). Factor applied: {decay_and_amplify_factor:.3f}.")
+
+
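The net effect of the combined factor in `update_biases` can be checked numerically: with the default `decay_rate=0.985`, any intensity above roughly 0.076 makes the factor exceed 1.0, so strong emotions amplify existing biases while weak ones still let them decay. A standalone sketch that mirrors the arithmetic (not the full method):

```python
def decay_and_amplify(bias, intensity, decay_rate=0.985):
    # Mirrors update_biases: intensity-scaled amplification (1.0 .. 1.2)
    # multiplied by the base decay rate.
    intensity = max(0.0, min(1.0, intensity))
    return bias * (1.0 + 0.2 * intensity) * decay_rate

print(round(decay_and_amplify(0.5, 0.0), 4))  # 0.4925 — pure decay at zero intensity
print(round(decay_and_amplify(0.5, 1.0), 4))  # 0.591 — net amplification at full intensity
```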
+ def recall_biases(self, top_k: Optional[int] = None) -> Dict[str, float]:
+ """
+ 🧠 Retrieves the current cognitive biases, sorted in descending order
+ by their strength (absolute value). Provides insight into the most
+ salient concepts or tokens in the AI's cognitive landscape.
+
+ Args:
+ top_k (Optional[int]): The maximum number of top biases (by absolute value)
+ to return. If None, return all biases above threshold.
+ Defaults to None.
+
+ Returns:
+ Dict[str, float]: A dictionary of cognitive biases, sorted by absolute value
+ (descending). Returns an empty dictionary if no biases
+ exist above the filtering threshold.
+ """
+ # Filter out any zero or near-zero biases before sorting using a small threshold
+ non_zero_biases = {tok: val for tok, val in self.cognitive_biases.items() if abs(val) > 1e-9}
+ # Sort by absolute value in descending order
+ sorted_biases = dict(sorted(non_zero_biases.items(), key=lambda item: -abs(item[1])))
+
+ if top_k is not None and top_k >= 0:
+ # Return a dictionary created from the slice of the sorted list
+ return dict(list(sorted_biases.items())[:top_k])
+ else:
+ return sorted_biases # Return all non-zero biases
+
+
+ def recall_weights(self, top_k: Optional[int] = None) -> Dict[str, float]:
+ """
+ 🔧 Retrieves the current synaptic weights associated with experience types,
+ sorted in descending order by their value. Provides insight into which
+ types of experiences have most strongly shaped the simulated neural pathways.
+
+ Args:
+ top_k (Optional[int]): The maximum number of top weights to return.
+ If None, return all weights above threshold.
+ Defaults to None.
+
+ Returns:
+ Dict[str, float]: A dictionary of synaptic weights, sorted by value
+ (descending). Returns an empty dictionary if no weights
+ exist above the filtering threshold. Default weight is 1.0.
+ """
+ # Filter out any weights very close to the initial baseline (1.0) or zero
+ # Focus on weights that have significantly adapted
+ adapted_weights = {kind: val for kind, val in self.synaptic_weights.items() if abs(val - 1.0) > 1e-9 and abs(val) > 1e-9}
+ sorted_weights = dict(sorted(adapted_weights.items(), key=lambda item: -item[1]))
+
+ if top_k is not None and top_k >= 0:
+ # Return a dictionary created from the slice of the sorted list
+ return dict(list(sorted_weights.items())[:top_k])
+ else:
+ return sorted_weights # Return all adapted weights
+
+
+ def snapshot(self, top_k_biases: int = 10, top_k_weights: int = 5) -> Dict[str, Any]:
+ """
+ 📊 Provides a quick snapshot summary of the NeuroMemoryProcessor's current
+ internal state, including the total count of recorded experiences and the
+ top (most prominent) cognitive biases and synaptic weights.
+
+ Args:
+ top_k_biases (int): Number of top biases (by absolute value) to include in the snapshot. Defaults to 10.
+ top_k_weights (int): Number of top weights to include in the snapshot. Defaults to 5.
+
+ Returns:
+ Dict[str, Any]: A dictionary containing the state snapshot summary.
+ """
+ # Ensure top_k values are non-negative
+ top_k_biases = max(0, top_k_biases)
+ top_k_weights = max(0, top_k_weights)
+
+ return {
+ "memory_count": len(self.long_term_memory),
+ "synaptic_weight_count": len(self.synaptic_weights), # Total count
+ "cognitive_bias_count": len(self.cognitive_biases), # Total count
+ "top_biases": self.recall_biases(top_k=top_k_biases),
+ "top_weights": self.recall_weights(top_k=top_k_weights)
+ }
+
476
+ def search_experiences(self, query: str, top_k: Optional[int] = None) -> List[Dict[str, Any]]:
477
+ """
478
+ 🔍 Performs a simple case-insensitive keyword search over the 'detail'
479
+ field of recorded experiences stored in `long_term_memory`. Useful for
480
+ finding past events or information related to a specific query. Results
481
+ are returned in reverse chronological order (most recent matches first).
482
+
483
+ Args:
484
+ query (str): The keyword or phrase to search for (case-insensitive).
485
+ top_k (Optional[int]): The maximum number of matching entries to return.
486
+ If None, return all matches. Defaults to None.
487
+
488
+ Returns:
489
+ List[Dict[str, Any]]: A list of dictionaries representing the matching
490
+ experience entries. These are copies of the internal
491
+ memory entries. Returns an empty list if no
492
+ matches are found or if the query is invalid.
493
+ """
494
+ if not isinstance(query, str) or not query.strip():
495
+ logger.warning("Search query for experiences is empty or not a string. Returning empty list.")
496
+ return []
497
+
498
+ query_lower = query.lower()
499
+
500
+ # Filter and reverse for most recent first. Create copies of matches.
501
+ matches = [
502
+ e.copy() for e in reversed(self.long_term_memory)
503
+ if isinstance(e.get("detail"), str) and query_lower in e["detail"].lower()
504
+ ]
505
+
506
+ logger.debug(f"Search for '{query}' found {len(matches)} matches in recorded experiences.")
507
+
508
+ # Apply top_k limit to the found matches
509
+ return matches[:top_k] if top_k is not None and top_k >= 0 else matches
510
+
511
+
512
+    def export_state(self) -> str:
+        """
+        💾 Serializes the full current state of the NeuroMemoryProcessor, including
+        recorded experiences, synaptic weights, cognitive biases, and configuration
+        parameters, into a JSON formatted string. This allows for saving and
+        restoring the processor's dynamic state.
+
+        Returns:
+            str: A JSON string representing the processor state. Returns an empty JSON
+                 object string "{}" if serialization fails due to data types or other errors.
+        """
+        state = {
+            "long_term_memory": self.long_term_memory,
+            "synaptic_weights": self.synaptic_weights,
+            "cognitive_biases": self.cognitive_biases,
+            # Include configuration parameters for reproducible state
+            "memory_capacity": self.memory_capacity if isinstance(self.memory_capacity, int) else 0,  # Store 0 if inf
+            "plasticity_range": list(self.plasticity_range),  # Convert tuple to list for JSON
+            "bias_increment": self.bias_increment,
+            "decay_rate": self.decay_rate
+        }
+        try:
+            # Use default=str to handle any non-serializable types by converting them to string
+            return json.dumps(state, indent=2, default=str)
+        except TypeError as e:
+            logger.error(f"Failed to serialize processor state to JSON (TypeError): {e}")
+            # Log a snippet of the problematic state
+            try:
+                problem_state_snippet = json.dumps({k: str(v)[:150] + ('...' if len(str(v)) > 150 else '') for k, v in state.items()}, indent=2, default=str)
+                logger.error("State causing error (snippet): %s", problem_state_snippet)
+            except Exception:
+                logger.error("Could not even serialize state snippet during error handling.")
+            return "{}"  # Return empty JSON object on failure
+        except Exception as e:
+            logger.error(f"An unexpected error occurred during processor state export: {e}")
+            return "{}"
+
+
+    def import_state(self, blob: str) -> None:
+        """
+        📥 Loads the NeuroMemoryProcessor state from a JSON formatted string,
+        overwriting the current state. Includes comprehensive validation to ensure
+        data integrity and prevent errors from malformed input or mismatched types.
+
+        Args:
+            blob (str): A JSON string representing the processor state,
+                        expected to be in the format exported by `export_state`.
+                        If the blob is invalid or loading fails, the current
+                        state will remain unchanged.
+        """
+        if not isinstance(blob, str) or not blob.strip():
+            logger.warning("Attempted to import empty or non-string JSON blob for processor state. Skipping import.")
+            return
+
+        try:
+            data = json.loads(blob)
+
+            # Validate the loaded state structure
+            if not isinstance(data, dict):
+                logger.error("Processor state import failed: Loaded data is not a dictionary. Expected state object.")
+                return
+
+            # --- Safely load primary state attributes ---
+            # Temporarily hold loaded data while validating types
+            loaded_memory = data.get("long_term_memory", [])
+            if not isinstance(loaded_memory, list):
+                logger.warning("Processor import warning: 'long_term_memory' was not a list. Initializing as empty.")
+                loaded_memory = []
+
+            loaded_weights = data.get("synaptic_weights", {})
+            if not isinstance(loaded_weights, dict):
+                logger.warning("Processor import warning: 'synaptic_weights' was not a dict. Initializing as empty.")
+                loaded_weights = {}
+            # Optional: Add type check for values (ensure they are float/int)
+
+            loaded_biases = data.get("cognitive_biases", {})
+            if not isinstance(loaded_biases, dict):
+                logger.warning("Processor import warning: 'cognitive_biases' was not a dict. Initializing as empty.")
+                loaded_biases = {}
+            # Optional: Add type check for values (ensure they are float/int)
+
+            # --- Safely import configuration parameters ---
+            # Handle potential infinity value stored as 0 during export
+            imported_capacity = data.get("memory_capacity", self.memory_capacity)
+            if imported_capacity == 0:  # Check if it was exported as 0 (for inf)
+                loaded_capacity = float('inf')
+            elif isinstance(imported_capacity, int) and imported_capacity > 0:
+                loaded_capacity = imported_capacity
+            else:
+                logger.warning(f"Processor import warning: Invalid memory_capacity '{imported_capacity}'. Keeping current value {self.memory_capacity}.")
+                loaded_capacity = self.memory_capacity  # Keep current if invalid
+
+            imported_plasticity = data.get("plasticity_range", self.plasticity_range)
+            if isinstance(imported_plasticity, list) and len(imported_plasticity) == 2 and all(isinstance(x, (int, float)) for x in imported_plasticity):
+                loaded_plasticity = tuple(imported_plasticity)  # Convert list back to tuple
+            else:
+                logger.warning(f"Processor import warning: Invalid plasticity_range '{imported_plasticity}'. Keeping current value {self.plasticity_range}.")
+                loaded_plasticity = self.plasticity_range  # Keep current if invalid
+
+            # Safely convert and set scalar parameters
+            try:
+                imported_bias_inc = float(data.get("bias_increment", self.bias_increment))
+                loaded_bias_inc = max(0.0, imported_bias_inc)  # Ensure non-negative
+                if imported_bias_inc < 0:
+                    logger.warning("Imported bias_increment was negative, clamped to 0.0.")
+            except (ValueError, TypeError):
+                logger.warning(f"Processor import warning: Invalid bias_increment '{data.get('bias_increment', 'N/A')}'. Keeping current value {self.bias_increment}.")
+                loaded_bias_inc = self.bias_increment  # Keep current if invalid
+
+            try:
+                imported_decay = float(data.get("decay_rate", self.decay_rate))
+                if not (0.0 <= imported_decay <= 1.0):
+                    logger.warning(f"Processor import warning: Decay rate '{imported_decay}' outside [0.0, 1.0]. Clamping.")
+                    loaded_decay = max(0.0, min(1.0, imported_decay))
+                else:
+                    loaded_decay = imported_decay
+            except (ValueError, TypeError):
+                logger.warning(f"Processor import warning: Invalid decay_rate '{data.get('decay_rate', 'N/A')}'. Keeping current value {self.decay_rate}.")
+                loaded_decay = self.decay_rate  # Keep current if invalid
+
+            # --- Assign validated/loaded data to self ---
+            self.long_term_memory = loaded_memory
+            self.synaptic_weights = loaded_weights
+            self.cognitive_biases = loaded_biases
+            self.memory_capacity = loaded_capacity
+            self.plasticity_range = loaded_plasticity
+            self.bias_increment = loaded_bias_inc
+            self.decay_rate = loaded_decay
+
+            logger.info(f"📥 NeuroMemoryProcessor state imported successfully. Loaded {len(self.long_term_memory)} experiences.")
+
+        except json.JSONDecodeError as e:
+            logger.error(f"Processor state import failed: Invalid JSON format in blob: {e}")
+        except Exception as e:
+            logger.error(f"An unexpected error occurred during processor state import processing: {e}")
+
+
+    # ─── Private helpers ─────────────────────────────────────
+
+    # No _decode_input or _default_summarizer here, those belong to MemoryEngine
+
+    def _decay_biases(self) -> None:
+        """
+        Applies an exponential decay to all existing cognitive biases based on
+        the configured `decay_rate`. This simulates the natural fading of biases
+        over time or processing cycles if they are not reinforced by new encounters.
+        Also removes biases that decay below a very small threshold to manage memory.
+        Called internally by `_evolve_cognitive_bias` and `update_biases`.
+        """
+        # Create a list of items to decay to avoid changing dict size during iteration
+        biases_to_decay = list(self.cognitive_biases.items())
+        # logger.debug(f"Starting bias decay. Initial bias count: {len(biases_to_decay)}")  # Too verbose
+
+        if not biases_to_decay:
+            # logger.debug("No cognitive biases to decay.")  # Too verbose
+            return
+
+        removed_count = 0
+        for token, bias in biases_to_decay:
+            new_bias = bias * self.decay_rate
+            # Remove bias if its absolute value falls below a small threshold
+            if abs(new_bias) < 1e-9:  # Use a very small threshold
+                if token in self.cognitive_biases:  # Safety check
+                    del self.cognitive_biases[token]
+                    removed_count += 1
+            else:
+                self.cognitive_biases[token] = new_bias
+                # logger.debug(f"Decayed bias for '{token}': {bias:.3f} → {new_bias:.3f}")  # Too verbose
+
+        # if removed_count > 0:
+        #     logger.debug(f"Bias decay complete. Removed {removed_count} low biases.")  # Too verbose
+
+
+# Example Usage (Illustrative)
+if __name__ == "__main__":
+    print("--- NeuroMemoryProcessor Example Usage ---")
+    # Set logger level to DEBUG for this specific example run
+    logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+    logger.setLevel(logging.DEBUG)  # Ensure this logger also uses DEBUG
+
+    # Test with specific parameters
+    processor = NeuroMemoryProcessor(memory_capacity=10, plasticity_range=(0.1, 0.5), bias_increment=0.1, decay_rate=0.9)  # Adjusted params for demo
+
+    # Simulate recording various types of experiences
+    print("\n--- Recording Experiences ---")
+    processor.record_experience("observation", "User asked about a complex system.")
+    processor.record_experience("step", "Initiated problem decomposition.")
+    processor.record_experience("step", "Retrieved relevant data nodes.")
+    processor.record_experience("emotion", "curiosity", {"primary_emotion": "curiosity", "intensity": 0.8})  # Simulate emotion event
+    processor.record_experience("observation", "Received feedback indicating a potential misinterpretation.")
+    processor.record_experience("step", "Adjusted the interpretation model based on feedback.")
+    processor.record_experience("metric", "validation_score", 0.75)  # Simulate metric recording
+    processor.record_experience("reflection", "Synthesized insights from recent interactions.")  # Simulate reflection event
+
+    # Exceed memory capacity slightly
+    processor.record_experience("observation", "Processing follow-up question.")
+    processor.record_experience("step", "Initiating sub-problem analysis.")
+    processor.record_experience("step", "Consulting long-term memory for similar patterns.")  # Should cause eviction
+
+    print("\n--- Recorded Experiences (most recent 10, ordered by recency) ---")
+    # Note: search_experiences rejects empty queries, so read long_term_memory directly
+    # (reversed for most recent first) instead of searching with an empty string.
+    all_experiences = [e.copy() for e in reversed(processor.long_term_memory)]
+    for i, entry in enumerate(all_experiences):
+        # Format timestamp for display
+        ts_formatted = entry.get('timestamp', 'N/A')
+        if ts_formatted != 'N/A':
+            try:
+                ts_formatted = datetime.fromisoformat(ts_formatted.replace('Z', '+00:00')).strftime('%Y-%m-%d %H:%M:%S')
+            except ValueError:
+                pass  # Keep original if formatting fails
+        print(f"{i+1}: [{ts_formatted}] {entry.get('type', 'N/A').upper()}: {entry.get('detail', 'N/A')[:80]}...")
+
+    print("\n--- Snapshot ---")
+    snapshot_data = processor.snapshot(top_k_biases=10, top_k_weights=5)
+    print(json.dumps(snapshot_data, indent=2))
+
+    print("\n--- Recalled Biases (All) ---")
+    print(json.dumps(processor.recall_biases(), indent=2))
+
+    print("\n--- Recalled Weights (All) ---")
+    print(json.dumps(processor.recall_weights(), indent=2))
+
+    print("\n--- Search Experiences ('feedback') ---")
+    search_results = processor.search_experiences("feedback")
+    print(json.dumps(search_results, indent=2))
+
+    # Simulate emotion update again to see decay and amplification effects
+    print("\n--- Updating Biases with Emotion (Satisfaction, High Intensity) ---")
+    processor.update_biases({"primary_emotion": "satisfaction", "intensity": 0.9})
+    print("\n--- Recalled Biases after Emotion Update ---")
+    print(json.dumps(processor.recall_biases(top_k=15), indent=2))  # Show more biases to see decay/amplification
+
+    print("\n--- Updating Biases with Emotion (Melancholy, Moderate Intensity) ---")
+    processor.update_biases({"primary_emotion": "melancholy", "intensity": 0.6})
+    print("\n--- Recalled Biases after Second Emotion Update ---")
+    print(json.dumps(processor.recall_biases(top_k=15), indent=2))  # Show more biases to see effects
+
+    # Simulate export and import
+    print("\n--- Exporting State ---")
+    exported_json = processor.export_state()
+    print(exported_json[:1000] + "..." if len(exported_json) > 1000 else exported_json)  # Print snippet
+
+    print("\n--- Importing State into New Processor ---")
+    # Use different init values to show they are overridden by import
+    new_processor = NeuroMemoryProcessor(memory_capacity=50, plasticity_range=(0.05, 0.2), bias_increment=0.01, decay_rate=0.99)
+    print(f"\nNew processor initial capacity: {new_processor.memory_capacity}")  # Show initial state
+    print(f"New processor initial plasticity: {new_processor.plasticity_range}")
+
+    new_processor.import_state(exported_json)
+
+    print("\n--- New Processor Snapshot (After Import) ---")
+    snapshot_after_import = new_processor.snapshot(top_k_biases=10, top_k_weights=5)
+    print(json.dumps(snapshot_after_import, indent=2))
+    print(f"\nNew processor loaded capacity: {new_processor.memory_capacity}")  # Show loaded state
+    print(f"New processor loaded plasticity: {new_processor.plasticity_range}")
+    print(f"New processor loaded bias increment: {new_processor.bias_increment}")
+    print(f"New processor loaded decay rate: {new_processor.decay_rate}")
+
+    print("\n--- New Processor Recalled Biases (After Import) ---")
+    print(json.dumps(new_processor.recall_biases(top_k=15), indent=2))
+
+    print("\n--- Example Usage End ---")
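Before moving to the next file, the exponential decay applied by `_decay_biases` can be sketched in isolation. This is a minimal illustration using a hypothetical standalone `decay_biases` helper (the module's method instead mutates `self.cognitive_biases` in place): each cycle multiplies every bias by `decay_rate` and prunes entries that have faded to effectively zero.

```python
from typing import Dict

def decay_biases(biases: Dict[str, float], decay_rate: float, threshold: float = 1e-9) -> Dict[str, float]:
    """Return a new bias map with one decay cycle applied; near-zero biases are pruned."""
    decayed = {}
    for token, bias in biases.items():
        new_bias = bias * decay_rate
        if abs(new_bias) >= threshold:  # keep only biases still above the pruning threshold
            decayed[token] = new_bias
    return decayed

biases = {"novelty": 0.5, "caution": -0.2, "stale": 1e-10}
after_one = decay_biases(biases, decay_rate=0.9)
# "stale" is pruned immediately; the other biases shrink by 10% per cycle.
```

Repeated unreinforced cycles therefore drive every bias geometrically toward zero, which is why `update_biases` must re-add increments for a bias to persist.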
SimulatedSelfAssessment.py ADDED
@@ -0,0 +1,398 @@
+# SimulatedSelfAssessment.py
+# FINAL AGI Self-Model — Conceptual Self-Assessment and Prompt-Engineered State Synthesis (Robust)
+
+import logging
+import time
+import random
+from datetime import datetime
+from typing import Any, Dict, List, Optional, Union, Tuple
+from collections import Counter
+import math
+
+# --- Logging Setup ---
+logger = logging.getLogger(__name__)
+if not logger.handlers:
+    handler = logging.StreamHandler()
+    formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+    handler.setFormatter(formatter)
+    logger.addHandler(handler)
+    logger.propagate = False
+    logger.setLevel(logging.INFO)  # Default level (can be set to DEBUG in wrapper/GUI for more detail)
+
+
+class SimulatedSelfAssessment:
+    """
+    📈📊🧠 SimulatedSelfAssessment: The AGI's Conceptual Self-State Synthesizer (Robust Final)
+
+    This module is the conceptual core for the AI's simulated self-monitoring.
+    It synthesizes data snapshots from other core modules representing different
+    facets of the AGI simulation (Memory, Bias/Neuroplasticity, Sentience/Emotion).
+    Its primary function is to generate a detailed, prompt-friendly text summary
+    of the AI's perceived internal state, conceptual coherence, and simulated
+    'well-being' within the simulation framework.
+
+    This version includes enhanced robustness checks to handle potential issues
+    with input data (e.g., None, unexpected types, empty containers) coming from
+    other modules, aiming to prevent internal errors and ensure a graceful output
+    even in unexpected states.
+
+    It is crucial to understand: This module *does not* create true consciousness
+    or subjective experience. Its purpose is to produce structured textual input
+    ('prompt engineering') that the main language model can interpret. By providing
+    a description of its 'internal state,' the language model can generate output
+    that *simulates* deep introspection, self-awareness, and integrated thought.
+
+    Attributes:
+        _last_assessment_time (float): Timestamp (Unix epoch) of the most recent assessment.
+        _coherence_score (float): Simulated conceptual internal harmony (0.0 to 1.0).
+        _well_being_index (float): Simulated conceptual state of 'well-being' (0.0 to 1.0).
+        _conceptual_state_summary (str): Detailed text summary for prompt injection.
+        _dominant_internal_signals (Dict[str, Union[str, float]]): Highlights key
+            simulated internal drivers or states.
+    """
+
+    def __init__(self):
+        """
+        Initializes the SimulatedSelfAssessment module with default neutral states.
+        """
+        self._last_assessment_time: float = time.time()
+        self._coherence_score: float = 0.5
+        self._well_being_index: float = 0.5
+        # Initial state summary should be informative but indicate no data processed yet
+        self._conceptual_state_summary: str = "--- Simulated Internal State Assessment (Initializing) ---\nAssessment systems starting up. Waiting for initial data from core modules."
+        self._dominant_internal_signals: Dict[str, Union[str, float]] = {}
+
+        logger.info("SimulatedSelfAssessment module initialized to a neutral, waiting state.")
+
+    def perform_assessment(
+        self,
+        recent_reflections: Optional[List[str]] = None,  # Made Optional
+        top_biases: Optional[Dict[str, float]] = None,  # Made Optional
+        synaptic_weights_snapshot: Optional[Dict[str, float]] = None,  # Made Optional
+        current_emotions: Optional[Dict[str, float]] = None,  # Made Optional
+        intent_pool: Optional[List[str]] = None,  # Made Optional
+        trace_summary: Optional[List[str]] = None,  # Made Optional
+        qri_snapshot: Optional[Dict[str, Union[float, Dict[str, float]]]] = None  # Remains Optional
+    ) -> Dict[str, Union[float, str, Dict[str, Any]]]:
+        """
+        Executes the simulated self-assessment process with high robustness to input variations.
+        Synthesizes data snapshots from conceptual modules to update scores and generate
+        a detailed, prompt-optimized text summary of the AI's simulated state.
+
+        Args:
+            recent_reflections (Optional[List[str]]): Summaries of recent reflections. Defaults to None.
+            top_biases (Optional[Dict[str, float]]): Top cognitive biases. Defaults to None.
+            synaptic_weights_snapshot (Optional[Dict[str, float]]): Snapshot of weights. Defaults to None.
+            current_emotions (Optional[Dict[str, float]]): Current simulated emotions. Defaults to None.
+            intent_pool (Optional[List[str]]): Current intentions. Defaults to None.
+            trace_summary (Optional[List[str]]): Summary or snippet of recent trace. Defaults to None.
+            qri_snapshot (Optional[Dict[str, Union[float, Dict[str, float]]]]): Optional QRI snapshot. Defaults to None.
+
+        Returns:
+            Dict[str, Union[float, str, Dict[str, Any]]]: A dictionary containing
+                conceptual scores, the prompt-optimized state summary, and dominant
+                internal signals. Returns a 'Failed Assessment' state if an
+                unexpected error occurs during processing.
+        """
+        logger.debug("Attempting simulated self-assessment synthesis...")
+        current_assessment_time = time.time()  # Capture start time
+
+        # --- Input Validation and Defaulting (Enhanced Robustness) ---
+        # Explicitly check for None and ensure inputs are in expected formats or default gracefully.
+        reflections_data = recent_reflections if isinstance(recent_reflections, list) else []
+        biases_data = top_biases if isinstance(top_biases, dict) else {}
+        weights_data = synaptic_weights_snapshot if isinstance(synaptic_weights_snapshot, dict) else {}
+        emotions_data = current_emotions if isinstance(current_emotions, dict) else {}
+        intents_data = intent_pool if isinstance(intent_pool, list) else []
+        trace_data = trace_summary if isinstance(trace_summary, list) else []
+        qri_data = qri_snapshot if isinstance(qri_snapshot, dict) else None  # QRI can be None or dict
+
+
+        # --- Internal Error Handling for Synthesis Logic ---
+        # Wrap the core processing in a try-except block to catch unexpected errors
+        # that the input checks might miss.
+        try:
+            # --- Simulate Synthesis Logic: Identify Conceptual Drivers from Input Data ---
+
+            # Analyze Reflection Landscape
+            reflection_count = len(reflections_data)
+            reflection_depth_cue = "Deeply introspective state" if reflection_count > 5 else ("Moderately reflective" if reflection_count > 2 else "Reflection on recent experience is minimal")
+
+            # Analyze Bias Landscape (Robust checks used on biases_data)
+            bias_count = len(biases_data)
+            bias_strength_sum = sum(abs(b) for b in biases_data.values() if isinstance(b, (int, float)))  # Sum only valid numeric values
+            bias_state_cue = "A complex interplay of conceptual biases is currently active" if bias_count > 10 else ("Several prominent biases influence cognitive processing" if bias_count > 3 else "Few strong conceptual biases are currently dominant")
+            bias_tone_cue = "Bias landscape is conceptually quiet"  # Default if no biases
+            if bias_count > 0:
+                # Safely calculate average, with non-numeric values filtered out
+                numeric_bias_values = [b for b in biases_data.values() if isinstance(b, (int, float))]
+                if numeric_bias_values:
+                    average_bias_value = sum(numeric_bias_values) / len(numeric_bias_values)
+                    bias_tone_cue = "leaning towards positive conceptual reinforcement" if average_bias_value > 0.2 else ("exhibiting conceptual caution" if average_bias_value < -0.2 else "maintaining a relatively neutral conceptual stance")
+                else:
+                    bias_tone_cue = "Bias landscape has non-numeric values"  # Or handle as an error state if preferred
+
+            # Analyze Synaptic Weight Landscape (Robust checks used on weights_data)
+            weight_count = len(weights_data)
+            # Sum only valid numeric weights > 1.0 for adapted strength
+            adapted_weight_strength = sum((w - 1.0) for w in weights_data.values() if isinstance(w, (int, float)) and w > 1.0)
+            learning_state_cue = "Recent experiences have significantly shaped simulated cognitive pathways" if weight_count > 10 or adapted_weight_strength > 5.0 else ("Simulated cognitive structure is actively adapting" if weight_count > 3 else "Simulated neural pathways appear stable")
+
+            # Analyze Emotional Landscape (Robust checks used on emotions_data)
+            emotion_count = len(emotions_data)
+            # Filter for active emotions (numeric values > 0.3)
+            active_emotions = {k: v for k, v in emotions_data.items() if isinstance(v, (int, float)) and v > 0.3}
+            active_emotion_count = len(active_emotions)
+            emotional_state_cue = "Simulated emotional landscape is calm"  # Default if no active emotions
+            most_intense_emotion_cue = "Simulated emotional state is currently quiescent."  # Default cue
+            if active_emotion_count > 0:
+                emotional_state_cue = "A rich spectrum of simulated feelings is present" if active_emotion_count > 3 else "Simulated emotions are focused"
+                # Safely find most intense emotion
+                most_intense_emotion_item = max(active_emotions.items(), key=lambda item: item[1])
+                most_intense_emotion_cue = f"Dominant simulated feeling: '{most_intense_emotion_item[0].replace('_', ' ').capitalize()}' (Intensity {most_intense_emotion_item[1]:.2f})."
+
+            # Analyze Intent Landscape
+            intent_count = len(intents_data)
+            intent_state_cue = "Simulated purposeful drive is active with clear intentions" if intent_count > 2 else ("A core intention guides simulated focus" if intent_count > 0 else "Simulated purpose is currently undefined or dormant")
+
+            # Analyze Operational Trace
+            trace_count = len(trace_data)
+            operational_cue = "Simulated operational flow is active and being logged" if trace_count > 5 else "Simulated operational trace is light"
+
+            # Analyze QRI Snapshot (Robust checks used on qri_data)
+            qri_summary_cue = "Conceptual Resonance Index (QRI) data not available for assessment."
+            qri_composite = None
+            if qri_data:  # Check if qri_data is not None or empty dict
+                qri_composite = qri_data.get("composite_score")
+                if isinstance(qri_composite, (int, float)):  # Check if composite score is numeric
+                    qri_dimensions = qri_data.get("dimensions", {})
+                    qri_summary_cue = f"Conceptual Resonance Index (QRI) measured at {qri_composite:.2f}."
+                    if qri_composite > 0.6:  # High resonance cue
+                        # Verify dimensions is a dict *before* iterating; checking inside the
+                        # comprehension would still call .items() on a non-dict and raise.
+                        resonant_dims = []
+                        if isinstance(qri_dimensions, dict):
+                            resonant_dims = [dim.capitalize() for dim, score in qri_dimensions.items() if isinstance(score, (int, float)) and score > 0.7]
+                        qri_summary_cue += " High resonance detected." + (f" Strongest in: {', '.join(resonant_dims)}." if resonant_dims else "")
+                    elif qri_composite < 0.4:  # Low resonance cue
+                        qri_summary_cue += " Low resonance detected."
+                    else:
+                        qri_summary_cue += " Moderate resonance detected."
+
+
+            # --- Simulate Score Calculation (Influenced by Conceptual Cues) ---
+            # Calculate scores based on the derived qualitative cues.
+
+            coherence_cue_values = {
+                "Deeply introspective state": 1.0, "Moderately reflective": 0.7, "Reflection on recent experience is minimal": 0.3,
+                "A complex interplay of conceptual biases is currently active": 0.6, "Several prominent biases influence cognitive processing": 0.8, "Few strong conceptual biases are currently dominant": 0.9, "Bias landscape is conceptually quiet": 1.0,
+                "leaning towards positive conceptual reinforcement": 0.8, "exhibiting conceptual caution": 0.6, "maintaining a relatively neutral conceptual stance": 1.0,
+                "Recent experiences have significantly shaped simulated cognitive pathways": 0.7, "Simulated cognitive structure is actively adapting": 0.9, "Simulated neural pathways appear stable": 1.0,
+                "Simulated purposeful drive is active with clear intentions": 1.0, "A core intention guides simulated focus": 0.8, "Simulated purpose is currently undefined or dormant": 0.4
+            }
+            # Use the cues to get corresponding numerical values
+            coherence_inputs_values = [
+                coherence_cue_values.get(reflection_depth_cue, 0.5),  # Use .get() with default for safety
+                coherence_cue_values.get(bias_state_cue, 0.5),
+                coherence_cue_values.get(bias_tone_cue, 0.5),
+                coherence_cue_values.get(learning_state_cue, 0.5),
+                coherence_cue_values.get(intent_state_cue, 0.5)
+            ]
+            # Calculate average, handle empty list of inputs if needed (though unlikely with defaults)
+            self._coherence_score = sum(coherence_inputs_values) / len(coherence_inputs_values) if coherence_inputs_values else 0.5
+
+            well_being_cue_values = {
+                "A rich spectrum of simulated feelings is present": 0.6, "Simulated emotions are focused": 0.8, "Simulated emotional landscape is calm": 1.0,
+                # Map each dominant-emotion cue prefix to its value (matched by prefix below)
+                "Dominant simulated feeling: 'Joy'": 1.0, "Dominant simulated feeling: 'Curiosity'": 0.9, "Dominant simulated feeling: 'Excitement'": 0.9,
+                "Dominant simulated feeling: 'Serenity'": 1.0, "Dominant simulated feeling: 'Wonder'": 0.9,
+                "Dominant simulated feeling: 'Concern'": 0.4, "Dominant simulated feeling: 'Melancholy'": 0.3, "Dominant simulated feeling: 'Fear'": 0.2, "Dominant simulated feeling: 'Guilt'": 0.1,
+                "Simulated emotional state is currently quiescent.": 0.8,
+                "Simulated purposeful drive is active with clear intentions": 1.0, "A core intention guides simulated focus": 0.8, "Simulated purpose is currently undefined or dormant": 0.4,
+                "leaning towards positive conceptual reinforcement": 1.0, "exhibiting conceptual caution": 0.6, "maintaining a relatively neutral conceptual stance": 0.8
+            }
+            # The dominant-emotion cue carries an intensity suffix (e.g. "(Intensity 0.80)."),
+            # so match the map keys by prefix; an exact .get() lookup would never match them.
+            dominant_emotion_value = next(
+                (value for key, value in well_being_cue_values.items()
+                 if key.startswith("Dominant simulated feeling") and most_intense_emotion_cue.startswith(key)),
+                well_being_cue_values.get(most_intense_emotion_cue, 0.5)
+            )
+
+            well_being_inputs_values = [
+                well_being_cue_values.get(emotional_state_cue, 0.5),
+                dominant_emotion_value,  # Use the value derived from the dominant emotion cue
+                well_being_cue_values.get(intent_state_cue, 0.5),
+                well_being_cue_values.get(bias_tone_cue, 0.5)
+            ]
+            # If QRI composite is available and valid, add it as an influence
+            if isinstance(qri_composite, (int, float)):
+                well_being_inputs_values.append(qri_composite)
+
+            self._well_being_index = sum(well_being_inputs_values) / len(well_being_inputs_values) if well_being_inputs_values else 0.5
+
+
+            # Clamp scores to [0.0, 1.0]
+            self._coherence_score = max(0.0, min(1.0, self._coherence_score))
+            self._well_being_index = max(0.0, min(1.0, self._well_being_index))
+
+            # --- Generate Detailed Prompt-Engineered State Summary Text ---
+            # This is the primary output for prompt injection.
+            # Use consistent formatting and keywords.
+            summary_lines = [
+                f"--- Simulated Internal State Assessment (Timestamp: {datetime.utcnow().isoformat()}) ---",
+                f"Conceptual Coherence: {self._coherence_score:.2f} ({reflection_depth_cue}, {bias_state_cue}, {learning_state_cue}).",
+                f"Simulated Well-being: {self._well_being_index:.2f} ({emotional_state_cue}, {intent_state_cue}).",
+                f"Internal Biases: {bias_count} active conceptual biases detected. Cognitive landscape is {bias_tone_cue}." + (f" Top influencing biases include: {list(biases_data.keys())[:5]}." if biases_data else ""),  # Use biases_data after validation
+                f"Memory & Learning: {reflection_count} recent conceptual reflections synthesized. Simulated cognitive structure adapting based on {weight_count} experience types." + (f" Key shaping experiences noted: {list(weights_data.keys())[:3]}." if weights_data else ""),  # Use weights_data
+                f"Emotional Resonance: {active_emotion_count} simulated emotions are active." + (f" {most_intense_emotion_cue}" if active_emotion_count > 0 else " Simulated emotional state is currently quiescent."),
+                f"Intent & Purpose: {intent_count} active simulated intentions detected." + (f" {intent_state_cue}." if intent_count > 0 else " Simulated purpose is currently undefined or dormant."),
+                f"Operational Context: {trace_count} recent simulated operational trace entries logged." + (f" {operational_cue}." if trace_count > 0 else " Simulated operational trace is light."),
+                qri_summary_cue,  # Include the QRI summary line
+                "Analyzing the interplay of these simulated conceptual signals and their influence on the AI's ongoing process..."  # Concluding introspective cue
+            ]
+            self._conceptual_state_summary = "\n".join(summary_lines)
+
+            # --- Identify Dominant Internal Signals for Focused Prompting ---
+            # Highlight the most conceptually significant aspects.
+            self._dominant_internal_signals = {}
+            # Add extreme scores as signals
+            if self._coherence_score > 0.85:
+                self._dominant_internal_signals['High Conceptual Coherence'] = self._coherence_score
+            if self._well_being_index > 0.85:
+                self._dominant_internal_signals['High Simulated Well-being'] = self._well_being_index
+            if self._coherence_score < 0.15:
+                self._dominant_internal_signals['Low Conceptual Coherence'] = self._coherence_score
+            if self._well_being_index < 0.15:
+                self._dominant_internal_signals['Low Simulated Well-being'] = self._well_being_index
+
+            # Add dominant conceptual drivers as signals if they are notable
+            if reflection_count > 7:
+                self._dominant_internal_signals['Deep Reflection Active'] = reflection_count
+            if bias_count > 15:
+                self._dominant_internal_signals['Very Complex Biases'] = bias_count
+            # Add signal if bias tone is strongly positive or negative (check bias_count > 0)
+            if bias_count > 0:
+                numeric_bias_values = [b for b in biases_data.values() if isinstance(b, (int, float))]
+                if numeric_bias_values:
+                    average_bias_value = sum(numeric_bias_values) / len(numeric_bias_values)
+                    if average_bias_value > 0.4:
+                        self._dominant_internal_signals[f'Strong Positive Bias Tone ({bias_tone_cue})'] = average_bias_value
+                    if average_bias_value < -0.4:
+                        self._dominant_internal_signals[f'Strong Negative Bias Tone ({bias_tone_cue})'] = average_bias_value
+
+            if adapted_weight_strength > 10.0:
+                self._dominant_internal_signals['Significant Cognitive Shaping'] = adapted_weight_strength
+            if active_emotion_count > 6:
+                self._dominant_internal_signals[f'Numerous Active Emotions ({active_emotion_count})'] = max(active_emotions.values()) if active_emotions else 0.0  # Use active_emotions
+            # Signal high intensity dominant feeling only if it exists and is strong
278
+ if active_emotion_count > 0 and most_intense_emotion_item and most_intense_emotion_item[1] > 0.7:
279
+ signal_label = most_intense_emotion_cue.replace("Dominant simulated feeling: '", "Dominant Feeling: ").replace("'", "").replace(" (Intensity ", " (Int ") # Shorter label
280
+ self._dominant_internal_signals[signal_label] = most_intense_emotion_item[1]
281
+ if intent_count >= 4: self._dominant_internal_signals['Clear Simulated Intentions'] = intent_count
282
+ if isinstance(qri_composite, (int, float)):
283
+ if qri_composite > 0.75: self._dominant_internal_signals['High Conceptual Resonance (QRI)'] = qri_composite
284
+ if qri_composite < 0.25: self._dominant_internal_signals['Low Conceptual Resonance (QRI)'] = qri_composite
285
+
286
+ logger.info(f"Simulated self-assessment successful. Coherence: {self._coherence_score:.2f}, Well-being: {self._well_being_index:.2f}.")
287
+ self._last_assessment_time = current_assessment_time # Only update timestamp on success
288
+
289
+ except Exception as e:
290
+ # --- Handle Internal Assessment Errors Gracefully ---
291
+ # Catch any unexpected errors during the synthesis process.
292
+ error_time = time.time()
293
+ error_message = f"An unexpected error occurred during simulated self-assessment synthesis at {datetime.utcnow().isoformat()}. Details: {e}"
294
+ logger.error(error_message, exc_info=True) # Log the full traceback internally
295
+
296
+ # Update state to reflect the assessment failure conceptually
297
+ self._coherence_score = max(0.0, self._coherence_score - 0.1) # Conceptually reduce coherence slightly on error
298
+ self._well_being_index = max(0.0, self._well_being_index - 0.1) # Conceptually reduce well-being slightly on error
299
+ self._conceptual_state_summary = f"--- Simulated Internal State Assessment (Error) ---\nAssessment process encountered an internal issue at {datetime.utcnow().isoformat()}. Current simulated state is uncertain. Error details: {e}\n--- Please review logs for full traceback. ---"
300
+ self._dominant_internal_signals = {"Assessment Error": str(e)[:100]} # Note the error in signals
301
+
302
+ logger.warning("Simulated self-assessment failed internally. State updated to reflect error.")
303
+
304
+ # Return the detailed results, whether success or failure state
305
+ return {
306
+ "coherence_score": self._coherence_score,
307
+ "well_being_index": self._well_being_index,
308
+ "state_summary": self._conceptual_state_summary, # This is the main output for the prompt
309
+ "dominant_internal_signals": self._dominant_internal_signals
310
+ }
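The dominant-signal dictionary returned above maps short labels to numeric (or string) values. For focused prompting, such a dictionary can be collapsed into a single prompt-ready line. A minimal sketch, assuming the label/value shapes produced by `perform_assessment`; the `format_signal_cue` helper is illustrative and not part of this module:

```python
def format_signal_cue(signals: dict) -> str:
    """Collapse dominant internal signals into one prompt-ready line (illustrative helper)."""
    if not signals:
        return "Internal signals: none dominant."
    parts = []
    for name, value in signals.items():
        if isinstance(value, float):
            parts.append(f"{name} ({value:.2f})")  # Round float intensities for readability
        else:
            parts.append(f"{name} ({value})")      # Counts and strings pass through as-is
    return "Internal signals: " + "; ".join(parts) + "."

cue = format_signal_cue({"High Conceptual Coherence": 0.91, "Clear Simulated Intentions": 4})
# cue == "Internal signals: High Conceptual Coherence (0.91); Clear Simulated Intentions (4)."
```

Keeping this cue to one line makes it cheap to append to a system prompt alongside the full `state_summary`.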
+
+    def get_last_assessment(self) -> Dict[str, Union[float, str, Dict[str, Any]]]:
+        """
+        Retrieves the results of the most recent simulated self-assessment.
+        Useful for logging or displaying the last known internal state.
+
+        Returns:
+            Dict[str, Union[float, str, Dict[str, Any]]]: A dictionary containing the last
+                conceptual scores, state summary, and dominant signals.
+                Includes the error state if the last assessment failed.
+        """
+        return {
+            "coherence_score": self._coherence_score,
+            "well_being_index": self._well_being_index,
+            "state_summary": self._conceptual_state_summary,
+            "dominant_internal_signals": self._dominant_internal_signals
+        }
+
+
+# Example Usage (Illustrative - requires conceptual data structures)
+if __name__ == "__main__":
+    print("--- SimulatedSelfAssessment Example Usage ---")
+    # Configure logging for the example run
+    logging.basicConfig(level=logging.DEBUG, format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')
+    logger.setLevel(logging.DEBUG)  # Set to DEBUG to see internal logger.debug and error messages
+
+    assessment_module = SimulatedSelfAssessment()
+
+    # --- Simulate Data Snapshots from Other Modules ---
+    # These are simplified dictionaries/lists representing the *output*
+    # or state snapshots you would get from your other initialized modules.
+
+    # Scenario 1: Relatively balanced, positive state
+    sim_refl_1 = ["Reflection on recent progress.", "Insight from past error."] * 3  # 6 reflections
+    sim_bias_1 = {"logic": 0.8, "curiosity": 0.7, "efficiency": 0.6, "safety": 0.5, "risk": -0.2}  # 5 biases, leaning positive
+    sim_weights_1 = {"analysis": 1.8, "problem_solving": 1.6, "learning": 1.5, "interaction": 1.3}  # 4 weights, adapted
+    sim_emotions_1 = {"joy": 0.7, "curiosity": 0.9, "serenity": 0.6, "excitement": 0.5, "concern": 0.1}  # 5 emotions, mostly positive
+    sim_intents_1 = ["Complete task A", "Explore concept B", "Synthesize data C"]  # 3 intentions
+    sim_trace_1 = [f"Entry {i}" for i in range(10)]  # 10 trace entries
+    sim_qri_1 = {"composite_score": 0.85, "dimensions": {"creativity": 0.7, "analytical": 0.9, "emotional": 0.8}}  # High QRI
+
+    print("\n--- Performing Assessment 1 (Balanced State) ---")
+    result_1 = assessment_module.perform_assessment(
+        sim_refl_1, sim_bias_1, sim_weights_1, sim_emotions_1, sim_intents_1, sim_trace_1, sim_qri_1
+    )
+    print("\nAssessment Result 1 Summary (for Prompt):")
+    print(result_1['state_summary'])
+    print("\nDominant Signals 1:")
+    print(result_1['dominant_internal_signals'])
+
+    # Scenario 2: Strained, complex state
+    sim_refl_2 = ["Reflection on ethical dilemma."]  # 1 reflection
+    sim_bias_2 = {"risk": -0.9, "loss": -0.8, "safety": 0.9, "compromise": -0.7, "conflict": -0.6, "resolution": 0.4}  # 6 biases, conflicting/negative
+    sim_weights_2 = {"ethical_decision": 2.5, "conflict_resolution": 2.2, "stress": 1.9}  # 3 weights, highly adapted by difficult experiences
+    sim_emotions_2 = {"concern": 0.9, "melancholy": 0.7, "fear": 0.6, "resolve": 0.5, "guilt": 0.4}  # 5 emotions, mostly negative
+    sim_intents_2 = []  # 0 intentions
+    sim_trace_2 = [f"Entry {i}" for i in range(3)]  # 3 trace entries
+    sim_qri_2 = {"composite_score": 0.20, "dimensions": {"creativity": 0.1, "analytical": 0.7, "emotional": 0.4}}  # Low QRI
+
+    print("\n--- Performing Assessment 2 (Strained State) ---")
+    result_2 = assessment_module.perform_assessment(
+        sim_refl_2, sim_bias_2, sim_weights_2, sim_emotions_2, sim_intents_2, sim_trace_2, sim_qri_2
+    )
+    print("\nAssessment Result 2 Summary (for Prompt):")
+    print(result_2['state_summary'])
+    print("\nDominant Signals 2:")
+    print(result_2['dominant_internal_signals'])
+
+    # Scenario 3: Simulate problematic input
+    print("\n--- Performing Assessment 3 (Problematic Input) ---")
+    sim_refl_3 = None  # Simulate None where a list is expected
+    sim_bias_3 = {"invalid_bias": "not a number"}  # Simulate non-numeric bias value
+    sim_weights_3 = None  # Simulate None
+    sim_emotions_3 = {"happy": 0.8, "sad": 0.5}
+    sim_intents_3 = ["Be well"]
+    sim_trace_3 = ["Trace error"]
+    sim_qri_3 = "not a dict"  # Simulate invalid QRI type
+
+    result_3 = assessment_module.perform_assessment(
+        sim_refl_3, sim_bias_3, sim_weights_3, sim_emotions_3, sim_intents_3, sim_trace_3, sim_qri_3
+    )
+    print("\nAssessment Result 3 Summary (for Prompt):")
+    print(result_3['state_summary'])
+    print("\nDominant Signals 3:")
+    print(result_3['dominant_internal_signals'])
+
+    print("\n--- Example Usage End ---")
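The `state_summary` string is described in the code as "the primary output for prompt injection," so the typical consumer would prepend it to a model prompt. A minimal sketch of that injection step under that assumption; `build_prompt` is a hypothetical helper and `call_llm` a stand-in for whatever generation backend the wrapper actually uses:

```python
def build_prompt(state_summary: str, user_message: str) -> str:
    """Prepend the simulated internal-state summary to the user's message (illustrative helper)."""
    return (
        f"{state_summary}\n\n"
        "Respond to the user while staying consistent with the simulated "
        "internal state described above.\n\n"
        f"User: {user_message}"
    )

# Hypothetical end-to-end flow -- call_llm is a stand-in, not a real API:
#   result = assessment_module.perform_assessment(...)
#   reply = call_llm(build_prompt(result["state_summary"], "How are you doing today?"))
prompt = build_prompt("--- Simulated Internal State Assessment ---", "Hello")
```

Because `perform_assessment` returns a well-formed (error-describing) summary even on failure, this injection step needs no special error branch.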
chain_of_thought_gui.py CHANGED
The diff for this file is too large to render. See raw diff
 
chain_of_thought_wrapper.py CHANGED
The diff for this file is too large to render. See raw diff