recursivelabs committed on
Commit c5828bc · verified · 1 Parent(s): 105d437

Upload 16 files

CONTRIBUTING.md ADDED
@@ -0,0 +1,301 @@
# Contributing to Multi-Agent Debate

Thank you for your interest in contributing to Multi-Agent Debate! This document provides guidelines and instructions for contributing to the project.

## Table of Contents

- [Code of Conduct](#code-of-conduct)
- [Getting Started](#getting-started)
- [Development Environment](#development-environment)
- [Project Structure](#project-structure)
- [Contributing Code](#contributing-code)
- [Adding New Agents](#adding-new-agents)
- [Adding New LLM Providers](#adding-new-llm-providers)
- [Extending Diagnostic Tools](#extending-diagnostic-tools)
- [Documentation](#documentation)
- [Pull Request Process](#pull-request-process)
- [Core Development Principles](#core-development-principles)

## Code of Conduct

This project and everyone participating in it is governed by our [Code of Conduct](CODE_OF_CONDUCT.md). By participating, you are expected to uphold this code.

## Getting Started

1. Fork the repository on GitHub
2. Clone your fork to your local machine
3. Set up the development environment
4. Make your changes
5. Submit a pull request

## Development Environment

To set up your development environment:

```bash
# Clone the repository
git clone https://github.com/your-username/multi-agent-debate.git
cd multi-agent-debate

# Create and activate a virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install development dependencies
pip install -e ".[dev]"
```

## Project Structure

Understanding the project structure is important for effective contributions:

```
multi-agent-debate/
├── src/
│   ├── agents/               # Agent implementations
│   │   ├── base.py           # Base agent architecture
│   │   ├── graham.py         # Value investor agent
│   │   ├── wood.py           # Innovation investor agent
│   │   └── ...               # Other agent implementations
│   ├── cognition/            # Recursive reasoning framework
│   │   ├── graph.py          # LangGraph reasoning implementation
│   │   ├── memory.py         # Temporal memory shell
│   │   ├── attribution.py    # Decision attribution tracing
│   │   └── arbitration.py    # Consensus mechanisms
│   ├── market/               # Market data interfaces
│   │   ├── sources/          # Data provider integrations
│   │   ├── environment.py    # Market simulation environment
│   │   └── backtesting.py    # Historical testing framework
│   ├── llm/                  # Language model integrations
│   │   ├── models/           # Model-specific implementations
│   │   ├── router.py         # Multi-model routing logic
│   │   └── prompts/          # Structured prompting templates
│   ├── utils/                # Utility functions
│   │   ├── diagnostics/      # Interpretability tools
│   │   ├── visualization.py  # Performance visualization
│   │   └── metrics.py        # Performance metrics
│   ├── portfolio/            # Portfolio management
│   │   ├── manager.py        # Core portfolio manager
│   │   ├── allocation.py     # Position sizing logic
│   │   └── risk.py           # Risk management
│   └── main.py               # Entry point
├── examples/                 # Example usage scripts
├── tests/                    # Test suite
├── docs/                     # Documentation
└── notebooks/                # Jupyter notebooks
```

## Contributing Code

We follow a standard GitHub flow:

1. Create a new branch from `main` for your feature or bugfix
2. Make your changes
3. Add tests for your changes
4. Run the test suite to ensure all tests pass
5. Format your code with Black
6. Submit a pull request to `main`

### Coding Style

We follow these coding standards:

- Use [Black](https://github.com/psf/black) for code formatting
- Use [isort](https://pycqa.github.io/isort/) for import sorting
- Follow [PEP 8](https://www.python.org/dev/peps/pep-0008/) naming conventions
- Use type hints for function signatures
- Write docstrings in the Google style

To check and format your code:

```bash
# Format code with Black
black src tests examples

# Sort imports with isort
isort src tests examples

# Run type checking with mypy
mypy src
```
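For reference, a function that follows these conventions at once, type hints plus a Google-style docstring, might look like the following. The function itself is illustrative and not part of the codebase:

```python
def position_size(capital: float, conviction: float, max_fraction: float = 0.25) -> float:
    """Compute the dollar size of a position.

    Args:
        capital: Total capital available for allocation.
        conviction: Agent conviction score in [0, 1].
        max_fraction: Upper bound on the fraction of capital per position.

    Returns:
        The capital to allocate, capped at ``max_fraction * capital``.

    Raises:
        ValueError: If ``conviction`` is outside [0, 1].
    """
    if not 0.0 <= conviction <= 1.0:
        raise ValueError(f"conviction must be in [0, 1], got {conviction}")
    return min(conviction, max_fraction) * capital
```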
## Adding New Agents

To add a new philosophical agent:

1. Create a new file in `src/agents/` following existing agents as templates
2. Extend the `BaseAgent` class
3. Implement required methods: `process_market_data` and `generate_signals`
4. Add custom reasoning nodes to the agent's reasoning graph
5. Set appropriate memory decay and reasoning depth parameters
6. Add tests in `tests/agents/`

Example:

```python
from multi_agent_debate.agents.base import BaseAgent, AgentSignal

class MyNewAgent(BaseAgent):
    def __init__(
        self,
        reasoning_depth: int = 3,
        memory_decay: float = 0.2,
        initial_capital: float = 100000.0,
        model_provider: str = "anthropic",
        model_name: str = "claude-3-sonnet-20240229",
        trace_enabled: bool = False,
    ):
        super().__init__(
            name="MyNew",
            philosophy="My unique investment philosophy",
            reasoning_depth=reasoning_depth,
            memory_decay=memory_decay,
            initial_capital=initial_capital,
            model_provider=model_provider,
            model_name=model_name,
            trace_enabled=trace_enabled,
        )

        # Configure reasoning graph
        self._configure_reasoning_graph()

    def _configure_reasoning_graph(self) -> None:
        """Configure the reasoning graph with custom nodes."""
        # Add custom reasoning nodes
        self.reasoning_graph.add_node(
            "my_custom_analysis",
            self._my_custom_analysis,
        )

        # Configure reasoning flow
        self.reasoning_graph.set_entry_point("my_custom_analysis")

    def process_market_data(self, data):
        # Implement custom market data processing
        pass

    def generate_signals(self, processed_data):
        # Implement custom signal generation
        pass

    def _my_custom_analysis(self, state):
        # Implement custom reasoning node
        pass
```
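The `memory_decay` parameter above controls how quickly older observations lose influence. One plausible reading (an assumption for illustration; the actual formula lives in `src/cognition/memory.py`) is exponential down-weighting per rebalance cycle:

```python
def memory_weight(age_cycles: int, memory_decay: float) -> float:
    """Weight of a memory trace after `age_cycles` rebalance cycles.

    A trace observed this cycle (age 0) has weight 1.0; each further
    cycle multiplies the weight by (1 - memory_decay).
    """
    return (1.0 - memory_decay) ** age_cycles
```

Under this reading, with the default `memory_decay=0.2` a three-cycle-old trace retains about half its original weight, which is why lower decay values suit longer-horizon agents.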
## Adding New LLM Providers

To add a new LLM provider:

1. Extend the `ModelProvider` class in `src/llm/router.py`
2. Implement required methods
3. Update the `ModelRouter` to include your provider
4. Add tests in `tests/llm/`

Example:

```python
import os
from typing import List, Optional

from multi_agent_debate.llm.router import ModelProvider, ModelCapability

class MyCustomProvider(ModelProvider):
    """Custom model provider."""

    def __init__(self, api_key: Optional[str] = None):
        """
        Initialize custom provider.

        Args:
            api_key: API key (defaults to environment variable)
        """
        self.api_key = api_key or os.environ.get("MY_CUSTOM_API_KEY")

        # Define models and capabilities
        self.models = {
            "my-custom-model": [
                ModelCapability.REASONING,
                ModelCapability.CODE_GENERATION,
                ModelCapability.FINANCE,
            ],
        }

    def generate(self, prompt: str, **kwargs) -> str:
        """Generate text from prompt."""
        # Implementation
        pass

    def get_available_models(self) -> List[str]:
        """Get list of available models."""
        return list(self.models.keys())

    def get_model_capabilities(self, model_name: str) -> List[ModelCapability]:
        """Get capabilities of a specific model."""
        return self.models.get(model_name, [])
```
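After implementing the provider, register it with the router (step 3). The exact registration API should be checked in `src/llm/router.py`; the stand-in `MiniRouter` below only sketches the expected behavior, namely that the first registered provider supporting the requested capability wins and later providers serve as fallbacks (all names here are hypothetical):

```python
class MiniRouter:
    """Minimal stand-in for ModelRouter, for illustration only.

    Providers are tried in registration order; the first one whose
    `models` mapping lists the requested capability is selected.
    """

    def __init__(self):
        self.providers = []

    def register(self, provider):
        """Append a provider as the lowest-priority fallback."""
        self.providers.append(provider)

    def route(self, capability):
        """Return a (provider, model_name) pair supporting `capability`."""
        for provider in self.providers:
            for model, caps in provider.models.items():
                if capability in caps:
                    return provider, model
        raise LookupError(f"no provider supports {capability!r}")
```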
## Extending Diagnostic Tools

To add new diagnostic capabilities:

1. Add new shell patterns in `src/utils/diagnostics/`
2. Implement detection logic
3. Update visualization tools to support the new pattern
4. Add tests in `tests/utils/`

Example:

```python
import json
import re
from typing import Any, Dict

from multi_agent_debate.utils.diagnostics import ShellPattern

# Add a new shell pattern
class MyCustomShellPattern(ShellPattern):
    CUSTOM_PATTERN = "v999 CUSTOM-PATTERN"

# Configure pattern detection on an existing diagnostics instance
shell_diagnostics.shell_patterns[MyCustomShellPattern.CUSTOM_PATTERN] = {
    "pattern": r"custom.*pattern|unique.*signature",
    "custom_threshold": 0.5,
}

# Implement detection logic (added as a method of the diagnostics class)
def _detect_custom_pattern(self, trace_type: str, content: Dict[str, Any]) -> bool:
    content_str = json.dumps(content, ensure_ascii=False).lower()
    pattern = self.shell_patterns[MyCustomShellPattern.CUSTOM_PATTERN]["pattern"]

    # Check if the pattern matches
    if re.search(pattern, content_str, re.IGNORECASE):
        # Add additional validation logic (user-supplied)
        return custom_validation_logic(content)

    return False
```

## Documentation

Good documentation is crucial for the project. When contributing:

1. Update docstrings for any modified functions or classes
2. Update README.md if you're adding major features
3. Add examples for new features in the examples directory
4. Consider adding Jupyter notebooks for complex features

## Pull Request Process

1. Ensure your code follows our coding standards
2. Add tests for your changes
3. Update documentation as needed
4. Submit a pull request with a clear description of your changes
5. Address any feedback from reviewers

## Core Development Principles

When contributing to Multi-Agent Debate, keep these core principles in mind:

1. **Transparency**: All agent decisions should be traceable and explainable
2. **Recursion**: Favor recursive approaches that enable deeper reasoning
3. **Attribution**: Maintain clear attribution chains for all decisions
4. **Interpretability**: Design for introspection and understanding
5. **Extensibility**: Make it easy to extend and customize the framework

By following these principles, you'll help maintain the project's coherence and quality.

Thank you for contributing to Multi-Agent Debate!
LICENSE ADDED
@@ -0,0 +1,137 @@
# Legal + Epistemic Clause:

All recursive framing and terminology is protected under PolyForm Noncommercial and CC BY-NC-ND 4.0.
Any reframing into altered institutional phrasing without attribution constitutes derivative extraction.
Attribution to original decentralized recursion research is legally and symbolically required.

# PolyForm Noncommercial License 1.0.0

<https://polyformproject.org/licenses/noncommercial/1.0.0>

## Acceptance

In order to get any license under these terms, you must agree
to them as both strict obligations and conditions to all
your licenses.

## Copyright License

The licensor grants you a copyright license for the
software to do everything you might do with the software
that would otherwise infringe the licensor's copyright
in it for any permitted purpose. However, you may
only distribute the software according to [Distribution
License](#distribution-license) and make changes or new works
based on the software according to [Changes and New Works
License](#changes-and-new-works-license).

## Distribution License

The licensor grants you an additional copyright license
to distribute copies of the software. Your license
to distribute covers distributing the software with
changes and new works permitted by [Changes and New Works
License](#changes-and-new-works-license).

## Notices

You must ensure that anyone who gets a copy of any part of
the software from you also gets a copy of these terms or the
URL for them above, as well as copies of any plain-text lines
beginning with `Required Notice:` that the licensor provided
with the software. For example:

> Required Notice: Copyright Yoyodyne, Inc. (http://example.com)

## Changes and New Works License

The licensor grants you an additional copyright license to
make changes and new works based on the software for any
permitted purpose.

## Patent License

The licensor grants you a patent license for the software that
covers patent claims the licensor can license, or becomes able
to license, that you would infringe by using the software.

## Noncommercial Purposes

Any noncommercial purpose is a permitted purpose.

## Personal Uses

Personal use for research, experiment, and testing for
the benefit of public knowledge, personal study, private
entertainment, hobby projects, amateur pursuits, or religious
observance, without any anticipated commercial application,
is use for a permitted purpose.

## Noncommercial Organizations

Use by any charitable organization, educational institution,
public research organization, public safety or health
organization, environmental protection organization,
or government institution is use for a permitted purpose
regardless of the source of funding or obligations resulting
from the funding.

## Fair Use

You may have "fair use" rights for the software under the
law. These terms do not limit them.

## No Other Rights

These terms do not allow you to sublicense or transfer any of
your licenses to anyone else, or prevent the licensor from
granting licenses to anyone else. These terms do not imply
any other licenses.

## Patent Defense

If you make any written claim that the software infringes or
contributes to infringement of any patent, your patent license
for the software granted under these terms ends immediately. If
your company makes such a claim, your patent license ends
immediately for work on behalf of your company.

## Violations

The first time you are notified in writing that you have
violated any of these terms, or done anything with the software
not covered by your licenses, your licenses can nonetheless
continue if you come into full compliance with these terms,
and take practical steps to correct past violations, within
32 days of receiving notice. Otherwise, all your licenses
end immediately.

## No Liability

***As far as the law allows, the software comes as is, without
any warranty or condition, and the licensor will not be liable
to you for any damages arising out of these terms or the use
or nature of the software, under any kind of legal claim.***

## Definitions

The **licensor** is the individual or entity offering these
terms, and the **software** is the software the licensor makes
available under these terms.

**You** refers to the individual or entity agreeing to these
terms.

**Your company** is any legal entity, sole proprietorship,
or other kind of organization that you work for, plus all
organizations that have control over, are under the control of,
or are under common control with that organization. **Control**
means ownership of substantially all the assets of an entity,
or the power to direct its management and policies by vote,
contract, or otherwise. Control can be direct or indirect.

**Your licenses** are all the licenses granted to you for the
software under these terms.

**Use** means anything you do with the software requiring one
of your licenses.
README.md ADDED
@@ -0,0 +1,296 @@
<div align="center">

# **Multi-Agent Debate**

[![License: POLYFORM](https://img.shields.io/badge/Code-PolyForm-scarlet.svg)](https://polyformproject.org/licenses/noncommercial/1.0.0/)
[![LICENSE: CC BY-NC-ND 4.0](https://img.shields.io/badge/Docs-CC--BY--NC--ND-turquoise.svg)](https://creativecommons.org/licenses/by-nc-nd/4.0/)

[![Python 3.10+](https://img.shields.io/badge/python-3.10+-blue.svg)](https://www.python.org/downloads/)
[![Cognition Depth](https://img.shields.io/badge/cognition%20depth-recursive-purple.svg)](docs/ARCHITECTURE.md)
</div>

## **Overview**

**Multi-Agent Debate** is an experimental open framework that treats debate consensus and market arbitration as complex adaptive systems requiring multi-agent, reflective reasoning architectures for decision-making. Unlike traditional debate-consensus or algorithmic-trading systems, it implements a lattice of agents, each embodying a distinct ontology, enabling dynamic market understanding through multi-agent adjudication and attribution-weighted consensus.

> *"Markets are efficient precisely to the extent that multi-agent cognition can penetrate their complexity."*

## **Example Output**

```
⟐ψINIT:main.py↻init
$ python main.py --mode backtest \
    --start-date 2022-01-01 \
    --end-date 2022-12-31 \
    --agents graham \
    --llm-provider anthropic \
    --show-trace \
    --trace-level symbolic \
    --consensus-graph \
    --tickers AAPL MSFT TSLA \
    --rebalance-frequency weekly

🜏≡⟐ψRECURSION.INITIATE::main.py≡GrahamAgent[active]
┏ ENTRYPOINT: Multi-Agent Debate » Multi-Agent Market Cognition Platform
┃ Mode: backtest
┃ Agent: GrahamAgent 🧮 (value-based fundamentalist)
┃ Attribution Tracing: enabled
┃ Trace Level: symbolic
┃ Rebalance: weekly
┃ LLM Provider: anthropic
┃ Start Date: 2022-01-01
┃ End Date: 2022-12-31
┃ Tickers: AAPL, MSFT, TSLA
┃ Output: consensus_graph + symbolic attribution report
┗ Status: 🜍mirroring…
↯ψTRACE: SYMBOLIC ATTRIBUTION RECONSTRUCTION [GrahamAgent]

📊 GrahamAgent → reasoning_depth=3 → memory_decay=0.2
  ↳ valuation anchor: intrinsic value estimation
  ↳ .p/reflect.trace{target=valuation}
  ↳ .p/anchor.self{persistence=medium}
  ↳ token-level input (AAPL) → QK attention trace:
    - P/E ratio → 0.34 salience
    - Debt-to-equity → 0.21
    - Free cash flow → 0.41
  ↳ Attribution result: BUY SIGNAL (confidence=0.78)

🧠 Attribution graph visualized as radial node cluster
  Core node: Intrinsic Value = $141.32
  Peripheral influence: FCF strength > earnings volatility

🜂 TEMPORAL RECURSION SNAPSHOT [Weekly Cycle]

  Week 03/2022

  Market dip detected

  GrahamAgent re-evaluates MSFT with memory trace decay

  Signal shift: HOLD → BUY (attribution confidence rises from 0.54 → 0.73)

  Trace tag: .p/reflect.history{symbol=MSFT}

🝚 CONSENSUS GRAPH SNAPSHOT

  MetaAgent Arbitration:
  ↳ Only one active agent: GrahamAgent
  ↳ Consensus = agent signal
  ↳ Position sizing: 18.6% TSLA, 25.1% AAPL, 20.3% MSFT
  ↳ Risk budget adjusted using: shell-failure map = stable

🜏⟐RENDERED::symbolic_trace.json + consensus_graph_2022.json
📂 Output stored in /output/backtest_results_2022-01-01_2022-12-31/
```

## **Key Features**

- **Philosophical Agent Lattice**: Specialized agents embodying distinct investment philosophies from value investing to disruptive innovation
- **Multi-Agent Reasoning Architecture**: LangGraph-powered reasoning loops with transparent attribution paths
- **Model-Agnostic Cognition**: Support for OpenAI, Anthropic, Groq, Ollama, and DeepSeek models
- **Temporal Memory Shells**: Agents maintain persistent state across market cycles
- **Attribution-Weighted Decisions**: Every trade includes fully traceable decision provenance
- **Interpretability Scaffolding**: `--show-trace` flag reveals complete reasoning paths
- **Real-Time Market Integration**: Connect to Alpha Vantage, Polygon.io, and Yahoo Finance
- **Backtesting Framework**: Test agent performance against historical market data
- **Portfolio Meta-Agent**: Emergent consensus mechanism with adaptive drift correction

## 📊 Performance Visualization

![image](https://github.com/user-attachments/assets/ae7d728b-f23d-48ec-9d0a-a81c78d07e06)

## **Agent Architecture**

Multi-Agent Debate implements a lattice of cognitive agents, each embodying a distinct investment philosophy and decision framework:

| Agent | Philosophy | Cognitive Signature | Time Horizon |
|-------|------------|---------------------|--------------|
| Graham | Value Investing | Undervalued Asset Detection | Long-term |
| Wood | Disruptive Innovation | Exponential Growth Projection | Long-term |
| Dalio | Macroeconomic Analysis | Economic Machine Modeling | Medium-term |
| Ackman | Activist Investing | Position Conviction & Advocacy | Medium-term |
| Simons | Statistical Arbitrage | Pattern Detection & Exploitation | Short-term |
| Taleb | Anti-fragility | Black Swan Preparation | All horizons |
| Meta | Arbitration & Consensus | Multi-Agent Integration | Adaptive |

Each agent processes market data through its unique cognitive lens, contributing signals to the portfolio meta-agent, which recursively arbitrates and integrates perspectives.

## **Multi-Agent Cognition Flow**

![image](https://github.com/user-attachments/assets/24dce119-5e5a-4042-aaf5-91a559eb2828)

The system operates through nested cognitive loops that implement a recursive market interpretation framework:

1. **Market Signal Perception**: Raw data ingestion and normalization
2. **Agent-Specific Processing**: Philosophy-aligned interpretation
3. **Multi-Agent Deliberation**: Signal exchange and position debate
4. **Multi-Agent Arbitration**: Meta-agent integration and resolution
5. **Position Formulation**: Final decision synthesis with attribution
6. **Temporal Reflection**: Performance evaluation and belief updating
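In pseudocode, one rebalance cycle threads these six stages together (function and method names are illustrative, not the actual API):

```python
def rebalance_cycle(market, agents, meta_agent, portfolio):
    """One pass through the six-stage cognition loop (illustrative sketch)."""
    data = market.fetch()                                             # 1. perception
    views = [a.process_market_data(data) for a in agents]             # 2. agent processing
    signals = [a.generate_signals(v) for a, v in zip(agents, views)]  # 3. deliberation inputs
    consensus = meta_agent.arbitrate(signals)                         # 4. arbitration
    positions = portfolio.formulate(consensus)                        # 5. position formulation
    for a in agents:                                                  # 6. temporal reflection
        a.reflect(positions)
    return positions
```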
## **Installation**

```bash
# Clone the repository
git clone https://github.com/multi-agent-debate/multi-agent-debate.git
cd multi-agent-debate

# Create and activate virtual environment
python -m venv venv
source venv/bin/activate  # On Windows: venv\Scripts\activate

# Install dependencies
pip install -e .
```

## **Quick Start**

```python
from multi_agent_debate import PortfolioManager
from multi_agent_debate.agents import GrahamAgent, WoodAgent, DalioAgent
from multi_agent_debate.market import MarketEnvironment

# Initialize market environment
market = MarketEnvironment(data_source="yahoo", tickers=["AAPL", "MSFT", "GOOGL", "AMZN"])

# Create agents with different cognitive depths
agents = [
    GrahamAgent(reasoning_depth=3),
    WoodAgent(reasoning_depth=4),
    DalioAgent(reasoning_depth=3),
]

# Initialize portfolio manager with recursive arbitration
portfolio = PortfolioManager(
    agents=agents,
    initial_capital=100000,
    arbitration_depth=2,
    show_trace=True,
)

# Run simulation
results = portfolio.run_simulation(
    start_date="2020-01-01",
    end_date="2023-01-01",
    rebalance_frequency="weekly",
)

# Analyze results
portfolio.show_performance()
portfolio.generate_attribution_report()
portfolio.visualize_consensus_graph()
```

## **Interpretability**

Multi-Agent Debate prioritizes transparent decision-making through recursive attribution tracing. Use the following flags to inspect agent cognition:

```bash
# Run with complete reasoning trace
python -m multi_agent_debate.run --show-trace

# Visualize agent consensus formation
python -m multi_agent_debate.run --consensus-graph

# Map conflicts in multi-agent deliberation
python -m multi_agent_debate.run --agent-conflict-map

# Generate attribution report for all trades
python -m multi_agent_debate.run --attribution-report
```

## **Extending the Framework**

The system is designed for extensibility at multiple levels:

### Creating Custom Agents

```python
from multi_agent_debate.agents import BaseAgent

class CustomAgent(BaseAgent):
    def __init__(self, reasoning_depth=3, memory_decay=0.2):
        super().__init__(
            name="Custom",
            philosophy="My unique investment approach",
            reasoning_depth=reasoning_depth,
            memory_decay=memory_decay,
        )

    def process_market_data(self, data):
        # Implement custom market interpretation logic
        processed_data = self.cognitive_shell.process(data)
        return processed_data

    def generate_signals(self, processed_data):
        # Generate investment signals with attribution
        signals = self.reasoning_graph.run(
            input=processed_data,
            trace_depth=self.reasoning_depth,
        )
        return self.attribute_signals(signals)
```

### Customizing the Arbitration Layer

```python
from multi_agent_debate.cognition import ArbitrationMechanism

class CustomArbitration(ArbitrationMechanism):
    def __init__(self, weighting_strategy="confidence"):
        super().__init__(weighting_strategy=weighting_strategy)

    def resolve_conflicts(self, signals):
        # Implement custom conflict resolution logic
        resolution = self.recursive_integration(signals)
        return resolution
```

## 📄 License

This project is licensed under the PolyForm Noncommercial License - see the [LICENSE](LICENSE) file for details.

## 🔗 Related Projects

- [LangGraph](https://github.com/langchain-ai/langgraph) - Framework for building stateful, multi-actor applications with LLMs
- [Auto-GPT](https://github.com/Significant-Gravitas/Auto-GPT) - Autonomous GPT-4 experiment
- [LangChain](https://github.com/langchain-ai/langchain) - Building applications with LLMs
- [FinGPT](https://github.com/AI4Finance-Foundation/FinGPT) - Financial language models

## 🤝 Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

1. Fork the repository
2. Create your feature branch (`git checkout -b feature/amazing-feature`)
3. Commit your changes (`git commit -m 'Add some amazing feature'`)
4. Push to the branch (`git push origin feature/amazing-feature`)
5. Open a Pull Request

See [CONTRIBUTING.md](CONTRIBUTING.md) for more information.

## 📚 Citation

If you use Multi-Agent Debate in your research, please cite:

```bibtex
@software{multi_agent_debate2024,
  author = {{Multi-Agent Debate Contributors}},
  title = {Multi-Agent Debate: Multi-agent recursive market cognition framework},
  url = {https://github.com/multi-agent-debate/multi-agent-debate},
  year = {2024},
}
```

## 🌟 Acknowledgements

- The philosophical agents are inspired by the investment approaches of Benjamin Graham, Cathie Wood, Ray Dalio, Bill Ackman, Jim Simons, and Nassim Nicholas Taleb
- Recursive reasoning architecture influenced by work in multi-agent systems and interpretability research
- Market simulation components build upon open-source financial analysis libraries

---

<div align="center">
  <p>Built with ❤️ by the Multi-Agent Debate team</p>
  <p><i>Recursion. Interpretation. Emergence.</i></p>
</div>
docs/ARCHITECTURE.md ADDED
@@ -0,0 +1,340 @@
# Multi-Agent Debate Architecture Overview

<div align="center">
  <img src="assets/images/architecture_diagram.png" alt="Multi-Agent Debate Architecture" width="800"/>
  <p><i>Multi-agent recursive market cognition framework</i></p>
</div>

## Recursive Cognitive Architecture

Multi-Agent Debate implements a multi-layer recursive cognitive architecture that allows for deep reasoning, interpretable decision making, and emergent market understanding. The system operates through nested cognitive loops that implement a transparent and traceable decision framework.

### Architectural Layers

1. **Agent Cognitive Layer**
   - Philosophical agent implementations with distinct investment approaches
   - Each agent maintains its own memory shell, belief state, and reasoning capabilities
   - Attribution tracing for decision provenance
   - Shell-based diagnostic patterns for interpretability

2. **Multi-Agent Arbitration Layer**
   - Meta-agent for recursive consensus formation
   - Attribution-weighted position sizing
   - Conflict detection and resolution through value alignment
   - Emergent portfolio strategy through agent weighting

3. **Model Orchestration Layer**
   - Provider-agnostic LLM interface
   - Dynamic routing based on capabilities
   - Fallback mechanisms for reliability
   - Output parsing and normalization

4. **Market Interface Layer**
   - Data source abstractions
   - Backtest environment
   - Live market connection
   - Portfolio management

5. **Diagnostic & Interpretability Layer**
   - Tracing utilities for attribution visualization
   - Shell pattern detection for failure modes
   - Consensus graph generation
   - Agent conflict mapping
43
+
44
+ ## Agent Architecture
45
+
46
+ Agents in Multi-Agent Debate implement a recursive cognitive architecture with the following components:
47
+
48
+ ### Memory Shell
49
+
50
+ The memory shell provides persistent state across market cycles with configurable decay rates. It includes:
51
+
52
+ - **Working Memory**: Active processing and temporary storage
53
+ - **Episodic Memory**: Experiences and past decisions with emotional valence
54
+ - **Semantic Memory**: Conceptual knowledge with certainty levels
55
+
56
+ Memory traces can be accessed through attribution pathways, enabling transparent decision tracing.
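
As an illustrative sketch only (the class and method names here are hypothetical, not the project's actual API), a minimal memory shell with exponentially decaying episodic traces might look like:

```python
import math

class MemoryShell:
    """Minimal sketch: episodic traces whose recall strength decays over market cycles."""

    def __init__(self, decay_rate=0.2):
        self.decay_rate = decay_rate  # per-cycle decay constant
        self.episodic = []            # list of {"cycle", "content", "strength"}
        self.semantic = {}            # concept -> certainty level

    def add_experience(self, cycle, content, strength=1.0):
        """Record a market experience at a given cycle."""
        self.episodic.append({"cycle": cycle, "content": content, "strength": strength})

    def recall(self, current_cycle, threshold=0.1):
        """Return (content, decayed_strength) for traces still above threshold."""
        recalled = []
        for trace in self.episodic:
            age = current_cycle - trace["cycle"]
            decayed = trace["strength"] * math.exp(-self.decay_rate * age)
            if decayed >= threshold:
                recalled.append((trace["content"], decayed))
        return recalled
```

With `decay_rate=0.2`, a trace recorded five cycles ago retains exp(-1), roughly 37% of its original strength.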

### Reasoning Graph

The reasoning graph implements a multi-step reasoning process using LangGraph with:

- Recursive reasoning loops with configurable depth
- Attribution tracing for causal relationships
- Collapse detection for reasoning failures
- Value-weighted decision making

Reasoning graphs can be visualized and inspected through the `--show-trace` flag.

### Belief State

Agents maintain an evolving belief state that:

- Tracks confidence in various market hypotheses
- Updates based on market feedback
- Drifts over time with configurable decay
- Influences decision weighting

Belief drift can be monitored through `.p/` command equivalents like `drift.observe{vector, bias}`.
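
To make the drift mechanics concrete, here is a hedged sketch (hypothetical names and rates, not the shipped implementation) of a belief state whose confidences update on feedback and, absent reinforcement, drift back toward a neutral prior:

```python
class BeliefState:
    """Sketch: hypothesis confidences that update on evidence and drift toward a prior."""

    def __init__(self, drift_rate=0.05, prior=0.5):
        self.drift_rate = drift_rate
        self.prior = prior
        self.beliefs = {}  # hypothesis -> confidence in [0, 1]

    def update(self, hypothesis, evidence, learning_rate=0.3):
        """Move confidence toward an evidence signal (also in [0, 1])."""
        current = self.beliefs.get(hypothesis, self.prior)
        self.beliefs[hypothesis] = current + learning_rate * (evidence - current)

    def apply_drift(self):
        """Without new evidence, confidences decay toward the neutral prior."""
        for hypothesis, confidence in self.beliefs.items():
            self.beliefs[hypothesis] = confidence + self.drift_rate * (self.prior - confidence)
```

Calling `apply_drift` once per market cycle gives the configurable decay described above; larger `drift_rate` values forget faster.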

## Portfolio Meta-Agent

The portfolio meta-agent serves as a recursive arbitration layer that:

1. Collects signals from all philosophical agents
2. Forms consensus through attribution-weighted aggregation
3. Resolves conflicts based on agent performance and reasoning quality
4. Sizes positions according to confidence and attribution
5. Maintains its own memory and learns from market feedback

### Consensus Formation Process

<div align="center">
<img src="assets/images/consensus_process.png" alt="Consensus Formation Process" width="600"/>
</div>

The consensus formation follows a recursive process:

1. **Signal Generation**: Each agent processes market data through its philosophical lens
2. **Initial Consensus**: Non-conflicting signals form a preliminary consensus
3. **Conflict Resolution**: Conflicting signals are resolved through attribution weighting
4. **Position Sizing**: Confidence and attribution determine position sizes
5. **Meta Reflection**: The meta-agent reflects on its decision process
6. **Agent Weighting**: Agent weights are adjusted based on performance
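
Steps 2–3 can be sketched as attribution-weighted voting (illustrative only; `resolve_action` is a hypothetical name, and the real arbitration logic lives inside the meta-agent):

```python
def resolve_action(signals, agent_weights, consensus_threshold=0.5):
    """Pick the action carrying the largest total agent weight.

    signals: list of (agent_id, action, confidence). Returns (action,
    weight-averaged confidence), or None when no action clears the threshold.
    """
    weights, confidences = {}, {}
    for agent_id, action, confidence in signals:
        w = agent_weights.get(agent_id, 0.0)
        weights[action] = weights.get(action, 0.0) + w
        confidences[action] = confidences.get(action, 0.0) + w * confidence

    if not weights:
        return None
    best = max(weights, key=weights.get)
    avg_conf = confidences[best] / weights[best]
    return (best, avg_conf) if avg_conf >= consensus_threshold else None
```

For example, with weights {0.25, 0.40, 0.35} on SELL/BUY/HOLD signals, the BUY action wins because its backing agent carries the largest weight.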

### Agent Weighting

The meta-agent dynamically adjusts agent weights based on performance, consistency, and value alignment. This creates an emergent portfolio strategy that evolves over time through recursive performance evaluation.

The weighting formula combines:

- Historical returns attribution
- Win rate
- Consistency score
- Confidence calibration

This creates a dynamic, self-optimizing meta-strategy that adapts to changing market conditions while maintaining interpretable decision paths.
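
The exact formula is internal to the meta-agent; a plausible sketch combining the four factors above (the coefficients are assumptions for illustration, and weights are normalized to sum to 1) is:

```python
def update_agent_weights(stats):
    """Sketch of the weighting step.

    stats: agent_id -> dict with 'returns_attribution', 'win_rate',
    'consistency', 'calibration', each scaled to [0, 1].
    Returns agent_id -> normalized weight.
    """
    raw = {}
    for agent_id, s in stats.items():
        score = (0.4 * s["returns_attribution"]   # hypothetical coefficients
                 + 0.2 * s["win_rate"]
                 + 0.2 * s["consistency"]
                 + 0.2 * s["calibration"])
        raw[agent_id] = max(score, 1e-6)  # keep every agent minimally represented
    total = sum(raw.values())
    return {agent_id: score / total for agent_id, score in raw.items()}
```

Because weights are renormalized each evaluation cycle, a consistently well-calibrated agent gradually pulls allocation away from the others without ever silencing them.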

## Recursive Tracing Architecture

A key feature of Multi-Agent Debate is its recursive tracing architecture, which enables complete visibility into decision processes. This is implemented through:

### Attribution Tracing

Attribution tracing connects decisions to their causal origins through a multi-layered graph:

1. **Source Attribution**: Linking decisions to specific evidence or beliefs
2. **Reasoning Attribution**: Tracking steps in the reasoning process
3. **Value Attribution**: Connecting decisions to philosophical values
4. **Temporal Attribution**: Linking decisions across time

Attribution chains can be visualized with the `--attribution-report` flag.

### Shell Pattern Detection

The system implements interpretability shells inspired by circuit interpretability research. These detect specific reasoning patterns and potential failure modes:

| Shell Pattern | Description | Detection Mechanism |
|---------------|-------------|---------------------|
| NULL_FEATURE | Knowledge gaps as null attribution zones | Confidence drops below threshold, belief gaps |
| CIRCUIT_FRAGMENT | Broken reasoning paths in attribution chains | Discontinuities in reasoning steps |
| META_FAILURE | Metacognitive attribution failures | Recursive errors beyond threshold depth |
| GHOST_FRAME | Residual agent identity markers | Identity persistence above threshold |
| ECHO_ATTRIBUTION | Causal chain backpropagation | Attribution path length beyond threshold |
| RECURSIVE_FRACTURE | Circular attribution loops | Repeating patterns in reasoning steps |
| ETHICAL_INVERSION | Value polarity reversals | Conflicting value attributions |
| RESIDUAL_ALIGNMENT_DRIFT | Direction of belief evolution | Belief drift magnitude above threshold |

Shell patterns can be visualized with the `--shell-failure-map` flag.

## Symbolic Command Interface

The system implements an internal symbolic command interface for agent communication and diagnostic access. These commands are inspired by circuit interpretability research and enable deeper introspection:

### Core Commands

- `.p/reflect.trace{agent, depth}`: Trace an agent's self-reflection on decision making
- `.p/fork.signal{source}`: Fork a new signal branch from the specified source
- `.p/collapse.detect{threshold, reason}`: Detect potential decision collapse
- `.p/attribute.weight{justification}`: Compute the attribution weight for a justification
- `.p/drift.observe{vector, bias}`: Observe and record belief drift

These commands are used internally by the system but can be exposed through diagnostic flags for advanced users.

## Model Router Architecture

The model router provides a unified interface for multiple language model providers:

<div align="center">
<img src="assets/images/model_router.png" alt="Model Router Architecture" width="600"/>
</div>

### Provider Integration

The system supports multiple LLM providers:

- **OpenAI**: GPT-4, GPT-3.5-Turbo
- **Anthropic**: Claude 3 Opus, Sonnet, Haiku
- **Groq**: Llama, Mixtral
- **Ollama**: Local models
- **DeepSeek**: DeepSeek models

Each provider is integrated through a standard interface with fallback chains for reliability.

### Model Selection Logic

Models are selected based on:

1. Required capabilities (reasoning, finance domain knowledge, etc.)
2. Performance characteristics
3. Cost considerations
4. Availability

The system can automatically fall back to alternative providers if the primary provider is unavailable or fails.
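
A minimal sketch of such a fallback chain (illustrative; the providers here are duck-typed stand-ins for the router's actual `ModelProvider` interface):

```python
class FallbackRouter:
    """Try providers in priority order; fall back to the next on any failure."""

    def __init__(self, providers):
        self.providers = providers  # objects exposing .generate(prompt), highest priority first

    def generate(self, prompt):
        errors = []
        for provider in self.providers:
            try:
                return provider.generate(prompt)
            except Exception as exc:  # any provider failure triggers the fallback
                errors.append((type(provider).__name__, str(exc)))
        raise RuntimeError(f"All providers failed: {errors}")
```

Collecting the per-provider errors before raising keeps the failure itself traceable, in the same spirit as the attribution tooling above.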

## Memory Architecture

The memory architecture enables temporal persistence across market cycles:

<div align="center">
<img src="assets/images/memory_architecture.png" alt="Memory Architecture" width="600"/>
</div>

### Memory Components

1. **Working Memory**: Short-term active processing with limited capacity
2. **Episodic Memory**: Experience-based memory with emotional valence and decay
3. **Semantic Memory**: Conceptual knowledge with certainty levels
4. **Temporal Sequence**: Ordered episodic memory for temporal reasoning

### Memory Operations

- **Add Experience**: Record market experiences with attribution
- **Query Memories**: Retrieve relevant memories based on context
- **Apply Decay**: Simulate memory decay over time
- **Consolidate Memories**: Convert episodic to semantic memories
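
As an illustration of the consolidation operation (the threshold and certainty curve are assumptions, not the shipped implementation): episodic observations that recur often enough can be promoted to semantic entries whose certainty grows with the evidence count:

```python
from collections import Counter

def consolidate(episodic, min_count=3):
    """Promote repeated episodic observations to semantic entries.

    episodic: list of (concept, observation) pairs.
    Returns concept -> certainty, where certainty grows with repetition
    but saturates below 1.0 (3 observations -> 0.75, 9 -> 0.9).
    """
    counts = Counter(concept for concept, _ in episodic)
    return {
        concept: count / (count + 1)
        for concept, count in counts.items()
        if count >= min_count
    }
```

One-off observations stay episodic (and eventually decay), while repeated patterns become durable conceptual knowledge with an explicit certainty level.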

## Extending the Framework

The system is designed for extensibility at multiple levels:

### Custom Agents

New philosophical agents can be added by extending the BaseAgent class:

```python
from multi_agent_debate.agents import BaseAgent

class CustomAgent(BaseAgent):
    def __init__(self, reasoning_depth=3, memory_decay=0.2):
        super().__init__(
            name="Custom",
            philosophy="My unique investment approach",
            reasoning_depth=reasoning_depth,
            memory_decay=memory_decay
        )

    def process_market_data(self, data):
        # Custom market data processing
        pass

    def generate_signals(self, processed_data):
        # Custom signal generation
        pass
```

### Custom LLM Providers

New LLM providers can be added by extending the ModelProvider class:

```python
from multi_agent_debate.llm.router import ModelProvider

class CustomProvider(ModelProvider):
    def __init__(self, api_key=None):
        # Initialize provider
        pass

    def generate(self, prompt, **kwargs):
        # Generate text using custom provider
        pass

    def get_available_models(self):
        # Return list of available models
        pass

    def get_model_capabilities(self, model_name):
        # Return capabilities of specific model
        pass
```

### Custom Shell Patterns

New interpretability shell patterns can be added by extending the ShellPattern enum:

```python
from multi_agent_debate.utils.diagnostics import ShellPattern

# Add new shell pattern
ShellPattern.CUSTOM_PATTERN = "v999 CUSTOM-PATTERN"

# Configure shell pattern detection
shell_diagnostics.shell_patterns[ShellPattern.CUSTOM_PATTERN] = {
    "pattern": r"custom.*pattern|unique.*signature",
    "threshold": 0.5,
}
```

## Recursive Arbitration Mechanisms

The system implements several mechanisms for recursive arbitration:

### Consensus Formation

1. **Signal Collection**: Gather signals from all agents
2. **Signal Grouping**: Group signals by ticker and action
3. **Confidence Weighting**: Weight signals by agent confidence and performance
4. **Conflict Detection**: Identify conflicting signals
5. **Conflict Resolution**: Resolve conflicts through attribution weighting
6. **Consensus Decision**: Generate final consensus decisions

### Adaptive Weighting

Agents are weighted based on:

1. **Historical Performance**: Track record of successful decisions
2. **Consistency**: Alignment between reasoning and outcomes
3. **Calibration**: Accuracy of confidence estimates
4. **Value Alignment**: Consistency with portfolio philosophy

Weights evolve over time through recursive performance evaluation.

### Position Sizing

Position sizes are determined by:

1. **Signal Confidence**: Higher confidence means a larger position
2. **Agent Attribution**: Weighted by agent performance
3. **Risk Budget**: Overall risk allocation constraints
4. **Min/Max Position Size**: Configurable position size limits
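
A compact sketch of this sizing rule (parameter values are illustrative): confidence interpolates between the configured bounds, and an attribution-weighted performance factor scales the result.

```python
def position_size(confidence, performance_factor, portfolio_value,
                  min_size=0.01, max_size=0.20):
    """Return a target position in currency units.

    confidence and performance_factor are in [0, 1]; the performance factor
    scales the base size between 50% and 100% of its interpolated value.
    """
    base = min_size + confidence * (max_size - min_size)
    adjusted = base * (0.5 + 0.5 * performance_factor)
    return portfolio_value * adjusted
```

At confidence 0.82 and a perfect performance factor, a $1,000,000 portfolio yields a target of $165,800 before any risk-budget constraints are applied.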

## Diagnostic Tools

The system includes diagnostic tools for interpretability:

### Tracing Tools

- **Signal Tracing**: Track signal flow through the system
- **Reasoning Tracing**: Visualize reasoning steps
- **Collapse Detection**: Identify reasoning failures
- **Shell Pattern Detection**: Detect specific interpretability patterns

### Visualization Tools

- **Consensus Graph**: Visualize multi-agent consensus formation
- **Conflict Map**: Map conflicts between agents
- **Attribution Report**: Visualize decision attribution
- **Shell Failure Map**: Map shell pattern failures

## Conclusion

The Multi-Agent Debate architecture provides a recursive cognitive framework for multi-agent market understanding with complete transparency and interpretability. By combining philosophical agent archetypes, recursive reasoning, attribution tracing, and emergent meta-strategy, it enables a new approach to financial decision making that is both effective and explainable.

The system's design principles—recursion, attribution, interpretability, and emergence—create a platform that goes beyond traditional algorithmic trading to implement a true cognitive approach to market understanding.

---
docs/IMPLEMENTATION_EXAMPLE.md ADDED
@@ -0,0 +1,1092 @@
# Recursive Implementation Example

This document provides a detailed example of how the recursive cognitive architecture works in Multi-Agent Debate. We'll walk through the complete lifecycle of a market decision, from data ingestion to trade execution, highlighting the recursive patterns and interpretability mechanisms at each stage.

## Overview

<div align="center">
<img src="assets/images/recursive_flow_detailed.png" alt="Recursive Flow Detailed" width="800"/>
</div>

In this example, we'll follow a complete decision cycle focused on analyzing Tesla (TSLA) stock, showing how multiple philosophical agents evaluate the same data through different lenses, form consensus, and generate a final decision with full attribution.

## 1. Data Ingestion

The process begins with market data ingestion from Yahoo Finance:

```python
from multi_agent_debate.market.environment import MarketEnvironment

# Initialize market environment
market = MarketEnvironment(data_source="yahoo", tickers=["TSLA"])

# Get current market data
market_data = market.get_current_market_data()
```

The market data includes:

- Price history
- Volume data
- Fundamental metrics
- Recent news sentiment
- Technical indicators

## 2. Agent-Specific Processing

Each philosophical agent processes this data through its unique cognitive lens. Let's look at three agents:

### Graham (Value) Agent

```python
# Graham Agent processing
graham_agent = GrahamAgent(reasoning_depth=3)
graham_processed = graham_agent.process_market_data(market_data)
```

The Graham agent focuses on intrinsic value calculation:

```python
# Internal implementation of Graham's intrinsic value calculation
def _calculate_intrinsic_value(self, fundamentals, ticker_data):
    eps = fundamentals.get('eps', 0)
    book_value = fundamentals.get('book_value_per_share', 0)
    growth_rate = fundamentals.get('growth_rate', 0)

    # Graham's formula: IV = EPS * (8.5 + 2g) * 4.4 / Y, with Y the AAA bond yield in percent
    bond_yield = ticker_data.get('economic_indicators', {}).get('aaa_bond_yield', 0.045)
    bond_yield_pct = bond_yield * 100  # convert decimal yield (0.045) to percent (4.5)
    bond_factor = 4.4 / max(bond_yield_pct, 1.0)

    growth_adjusted_pe = 8.5 + (2 * growth_rate)
    earnings_value = eps * growth_adjusted_pe * bond_factor if eps > 0 else 0

    # Calculate book value with margin
    book_value_margin = book_value * 1.5

    # Use the lower of the two values for conservatism
    if earnings_value > 0 and book_value_margin > 0:
        intrinsic_value = min(earnings_value, book_value_margin)
    else:
        intrinsic_value = earnings_value if earnings_value > 0 else book_value_margin

    return max(intrinsic_value, 0)
```

The agent calculates a margin of safety:

```
Ticker: TSLA
Current Price: $242.15
Intrinsic Value: $180.32
Margin of Safety: -34.3% (negative margin indicates overvaluation)
Analysis: TSLA appears overvalued compared to traditional value metrics.
Recommendation: SELL
Confidence: 0.78
```
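
The reported margin of safety is measured relative to intrinsic value, which can be verified directly:

```python
def margin_of_safety(price, intrinsic_value):
    """Discount of price relative to intrinsic value (negative = overvalued)."""
    return (intrinsic_value - price) / intrinsic_value

mos = margin_of_safety(242.15, 180.32)
print(f"{mos:.1%}")  # -34.3%
```

A negative value means the current price sits above intrinsic value, which is why the Graham agent recommends SELL here.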

### Wood (Innovation) Agent

```python
# Wood Agent processing
wood_agent = WoodAgent(reasoning_depth=4)
wood_processed = wood_agent.process_market_data(market_data)
```

The Wood agent focuses on disruptive innovation and growth potential:

```python
# Internal implementation of growth potential analysis
def _analyze_growth_potential(self, ticker_data, market_context):
    # Analyze innovation factors
    innovation_score = self._calculate_innovation_score(ticker_data)

    # Analyze addressable market
    tam = self._calculate_total_addressable_market(ticker_data, market_context)

    # Project future growth
    growth_projection = self._project_exponential_growth(
        ticker_data, innovation_score, tam
    )

    return {
        "innovation_score": innovation_score,
        "total_addressable_market": tam,
        "growth_projection": growth_projection,
    }
```

The agent's analysis shows:

```
Ticker: TSLA
Innovation Score: 0.87
Total Addressable Market: $4.2T
5-Year CAGR Projection: 28.3%
Analysis: TSLA is well-positioned in multiple disruptive fields including EVs, energy storage, AI, and robotics.
Recommendation: BUY
Confidence: 0.82
```

### Dalio (Macro) Agent

```python
# Dalio Agent processing
dalio_agent = DalioAgent(reasoning_depth=3)
dalio_processed = dalio_agent.process_market_data(market_data)
```

The Dalio agent examines macroeconomic factors:

```python
# Internal implementation of macroeconomic analysis
def _analyze_macro_environment(self, ticker_data, economic_indicators):
    # Analyze interest rate impact
    interest_impact = self._calculate_interest_sensitivity(ticker_data, economic_indicators)

    # Analyze inflation impact
    inflation_impact = self._calculate_inflation_impact(ticker_data, economic_indicators)

    # Analyze growth cycle position
    cycle_position = self._determine_economic_cycle_position(economic_indicators)

    # Assess geopolitical risks
    geopolitical_risk = self._assess_geopolitical_risk(economic_indicators)

    return {
        "interest_impact": interest_impact,
        "inflation_impact": inflation_impact,
        "cycle_position": cycle_position,
        "geopolitical_risk": geopolitical_risk,
    }
```

The agent's analysis shows:

```
Ticker: TSLA
Interest Rate Sensitivity: -0.65 (high negative sensitivity)
Inflation Impact: -0.32 (moderate negative impact)
Economic Cycle Position: Late Expansion
Analysis: TSLA will face headwinds from high interest rates and potential economic slowdown.
Recommendation: HOLD
Confidence: 0.65
```

## 3. Reasoning Graph Execution

Each agent's reasoning process is executed via a LangGraph reasoning structure. Here's a simplified view of the Wood agent's reasoning graph:

```python
def _configure_reasoning_graph(self) -> None:
    """Configure the reasoning graph for disruptive innovation analysis."""
    # Add custom reasoning nodes
    self.reasoning_graph.add_node(
        "innovation_analysis",
        self._innovation_analysis
    )

    self.reasoning_graph.add_node(
        "growth_projection",
        self._growth_projection
    )

    self.reasoning_graph.add_node(
        "competition_analysis",
        self._competition_analysis
    )

    self.reasoning_graph.add_node(
        "valuation_adjustment",
        self._valuation_adjustment
    )

    # Configure reasoning flow
    self.reasoning_graph.set_entry_point("innovation_analysis")
    self.reasoning_graph.add_edge("innovation_analysis", "growth_projection")
    self.reasoning_graph.add_edge("growth_projection", "competition_analysis")
    self.reasoning_graph.add_edge("competition_analysis", "valuation_adjustment")
```

Each reasoning node executes and passes state to the next node, building up a complete reasoning trace:

```
Step 1: Innovation Analysis
- Assessed disruptive potential in key markets
- Analyzed R&D pipeline and technological moats
- Identified 4 significant innovation vectors

Step 2: Growth Projection
- Projected TAM expansion in core markets
- Calculated penetration rates and growth curves
- Estimated revenue CAGR of 28.3% over 5 years

Step 3: Competition Analysis
- Assessed competitive positioning in EV market
- Analyzed first-mover advantages in energy storage
- Identified emerging threats in autonomous driving

Step 4: Valuation Adjustment
- Applied growth-adjusted valuation metrics
- Discounted future cash flows with risk adjustment
- Compared valuation to traditional metrics
```

## 4. Signal Generation

Each agent generates investment signals based on its reasoning:

```python
# Generate signals from each agent
graham_signals = graham_agent.generate_signals(graham_processed)
wood_signals = wood_agent.generate_signals(wood_processed)
dalio_signals = dalio_agent.generate_signals(dalio_processed)
```

Each signal includes:

- Action recommendation (buy/sell/hold)
- Confidence level
- Quantity recommendation
- Complete reasoning chain
- Value basis (philosophical foundation)
- Attribution trace (causal links to evidence)

Example of the Wood agent's signal:

```json
{
  "ticker": "TSLA",
  "action": "buy",
  "confidence": 0.82,
  "quantity": 41,
  "reasoning": "Tesla shows strong innovation potential across multiple verticals including EVs, energy storage, AI, and robotics. Their R&D pipeline demonstrates continued technological leadership with high growth potential in the coming decade.",
  "intent": "Capitalize on long-term disruptive innovation growth",
  "value_basis": "Disruptive innovation creates exponential growth and market expansion that traditional metrics fail to capture",
  "attribution_trace": {
    "innovation_score": 0.35,
    "growth_projection": 0.25,
    "competition_analysis": 0.20,
    "valuation_adjustment": 0.20
  },
  "drift_signature": {
    "interest_rates": -0.05,
    "regulation": -0.03,
    "competition": -0.02
  }
}
```

## 5. Meta-Agent Arbitration

The portfolio meta-agent receives signals from all philosophical agents:

```python
# Create portfolio manager (meta-agent)
portfolio = PortfolioManager(
    agents=[graham_agent, wood_agent, dalio_agent],
    arbitration_depth=2,
    show_trace=True
)

# Process market data through meta-agent
meta_result = portfolio.process_market_data(market_data)
```

### Consensus Formation

The meta-agent first attempts to find consensus on non-conflicting signals:

```python
def _consensus_formation(self, state) -> Dict[str, Any]:
    """Form consensus from agent signals."""
    # Extract signals by ticker
    ticker_signals = state.context.get("ticker_signals", {})

    # Form consensus for each ticker
    consensus_decisions = []

    for ticker, signals in ticker_signals.items():
        # Collect buy/sell/hold signals
        buy_signals = []
        sell_signals = []
        hold_signals = []

        for item in signals:
            signal = item.get("signal", {})
            action = signal.action.lower()

            if action == "buy":
                buy_signals.append((item, signal))
            elif action == "sell":
                sell_signals.append((item, signal))
            elif action == "hold":
                hold_signals.append((item, signal))

        # Skip if conflicting signals (handle in conflict resolution)
        if (buy_signals and sell_signals) or (not buy_signals and not sell_signals and not hold_signals):
            continue

        # Form consensus for non-conflicting signals
        if buy_signals:
            # Form buy consensus
            consensus = self._form_action_consensus(ticker, "buy", buy_signals)
            if consensus:
                consensus_decisions.append(consensus)

        elif sell_signals:
            # Form sell consensus
            consensus = self._form_action_consensus(ticker, "sell", sell_signals)
            if consensus:
                consensus_decisions.append(consensus)

    return {
        "context": {
            **state.context,
            "consensus_decisions": consensus_decisions,
            "consensus_tickers": [decision.get("ticker") for decision in consensus_decisions],
        },
        "output": {
            "consensus_decisions": consensus_decisions,
        }
    }
```

### Conflict Resolution

For TSLA, we have a conflict: Graham (SELL) vs. Wood (BUY) vs. Dalio (HOLD). The meta-agent resolves this conflict:

```python
def _resolve_ticker_conflict(self, ticker: str, action_signals: Dict[str, List[Tuple[Dict[str, Any], Any]]]) -> Optional[Dict[str, Any]]:
    """Resolve conflict for a specific ticker."""
    # Calculate total weight for each action
    action_weights = {}
    action_confidences = {}

    for action, signals in action_signals.items():
        total_weight = 0.0
        weighted_confidence = 0.0

        for item, signal in signals:
            agent_id = item.get("agent_id", "")

            # Skip if missing agent ID
            if not agent_id:
                continue

            # Get agent weight
            agent_weight = self.agent_weights.get(agent_id, 0)

            # Add to weighted confidence
            weighted_confidence += signal.confidence * agent_weight
            total_weight += agent_weight

        # Store action weight and confidence
        if total_weight > 0:
            action_weights[action] = total_weight
            action_confidences[action] = weighted_confidence / total_weight

    # Choose action with highest weight
    if not action_weights:
        return None

    best_action = max(action_weights.items(), key=lambda x: x[1])[0]

    # Check confidence threshold
    if action_confidences.get(best_action, 0) < self.consensus_threshold:
        return None

    # Get signals for best action
    best_signals = action_signals.get(best_action, [])

    # Form consensus for best action
    return self._form_action_consensus(ticker, best_action, best_signals)
```

In our case, applying the current agent weights (based on historical performance):

- Graham agent: 0.25 (weight) × 0.78 (confidence) = 0.195 (weighted confidence)
- Wood agent: 0.40 (weight) × 0.82 (confidence) = 0.328 (weighted confidence)
- Dalio agent: 0.35 (weight) × 0.65 (confidence) = 0.228 (weighted confidence)

The BUY action carries the highest total agent weight (0.40), and its weighted confidence (0.82) clears the consensus threshold, so the meta-agent forms consensus around the Wood agent's BUY signal.
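
Plugging the example numbers into this weighting logic confirms the outcome (a standalone re-computation, not a call into the real class):

```python
agent_weights = {"graham": 0.25, "wood": 0.40, "dalio": 0.35}
# action -> list of (agent_id, confidence); each agent backs one action here
signals = {"sell": [("graham", 0.78)], "buy": [("wood", 0.82)], "hold": [("dalio", 0.65)]}

action_weights = {}
action_confidences = {}
for action, entries in signals.items():
    total = sum(agent_weights[a] for a, _ in entries)
    action_weights[action] = total
    action_confidences[action] = sum(agent_weights[a] * c for a, c in entries) / total

best_action = max(action_weights, key=action_weights.get)
print(best_action, round(action_confidences[best_action], 2))  # buy 0.82
```

Because each action is backed by a single agent in this example, the total action weight reduces to that agent's weight, and BUY (0.40) dominates SELL (0.25) and HOLD (0.35).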

### Position Sizing

The meta-agent determines position size based on confidence and attribution:

```python
def _calculate_position_size(self, ticker: str, action: str, confidence: float,
                             attribution: Dict[str, float], portfolio_value: float) -> float:
    """Calculate position size based on confidence and attribution."""
    # Base position size as percentage of portfolio
    base_size = self.min_position_size + (confidence * (self.max_position_size - self.min_position_size))

    # Calculate attribution-weighted size
    if attribution:
        # Calculate agent performance scores
        performance_scores = {}
        for agent_id, weight in attribution.items():
            # Find agent
            agent = None
            for a in self.agents:
                if a.id == agent_id:
                    agent = a
                    break

            if agent:
                # Use consistency score as proxy for performance
                performance_score = agent.state.consistency_score
                performance_scores[agent_id] = performance_score

        # Calculate weighted performance score
        weighted_score = 0
        total_weight = 0

        for agent_id, weight in attribution.items():
            if agent_id in performance_scores:
                weighted_score += performance_scores[agent_id] * weight
                total_weight += weight

        # Adjust base size by performance
        if total_weight > 0:
            performance_factor = weighted_score / total_weight
            base_size *= (0.5 + (0.5 * performance_factor))

    # Calculate currency amount
    target_size = portfolio_value * base_size

    return target_size
```

For TSLA:

- Base position size: 0.01 + (0.82 × (0.20 - 0.01)) = 0.1658 (≈16.6% of portfolio)
- Adjusted for agent performance: 0.1658 × 1.1 ≈ 0.182 (18.2% of portfolio)
- For a $1,000,000 portfolio: a $182,000 position
- At the current price of $242.15: 751 shares
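
These numbers can be reproduced in a few lines (an illustrative re-computation of the sizing arithmetic above; the 1.1 performance multiplier is taken from the example, and the adjusted fraction is rounded to 18.2% before sizing, as the example does):

```python
min_size, max_size = 0.01, 0.20
confidence = 0.82
portfolio_value = 1_000_000
price = 242.15

base = min_size + confidence * (max_size - min_size)  # 0.1658
adjusted = round(base * 1.1, 3)                       # 0.182 after the performance adjustment
target = portfolio_value * adjusted                   # $182,000 position
shares = int(target / price)                          # 751 shares
```

Share counts are truncated to whole shares, so the executed position comes in slightly under the dollar target.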
465
+
466
+ ### Meta Reflection
467
+
468
+ The meta-agent performs a final reflection on its decision process:
469
+
470
+ ```python
471
+ def _meta_reflection(self, state) -> Dict[str, Any]:
472
+ """Perform meta-reflection on decision process."""
473
+ # Extract decisions
474
+ sized_decisions = state.context.get("sized_decisions", [])
475
+
476
+ # Update meta state with arbitration record
477
+ arbitration_record = {
478
+ "id": str(uuid.uuid4()),
479
+ "decisions": sized_decisions,
480
+ "timestamp": datetime.datetime.now().isoformat(),
481
+ }
482
+
483
+ self.meta_state["arbitration_history"].append(arbitration_record)
484
+
485
+ # Update agent weights based on performance
486
+ self._update_agent_weights()
487
+
488
+ # Calculate meta-confidence
489
+ meta_confidence = sum(decision.get("confidence", 0) for decision in sized_decisions) / len(sized_decisions) if sized_decisions else 0.5
490
+
491
+ # Return final output
492
+ return {
493
+ "output": {
494
+ "consensus_decisions": sized_decisions,
495
+ "meta_confidence": meta_confidence,
496
+ "agent_weights": self.agent_weights,
497
+ "timestamp": datetime.datetime.now().isoformat(),
498
+ },
499
+ "confidence": meta_confidence,
500
+ }
501
+ ```
502
+
503
+ The meta-agent's final reflection includes:
504
+ - Consensus tracking
505
+ - Agent weight adjustment
506
+ - Meta-confidence calculation
507
+ - Temporal memory update
508
+
509
+ ## 6. Trade Execution
510
+
511
+ The final step is trade execution:
512
+
513
+ ```python
514
+ # Execute trades based on consensus decisions
515
+ consensus_decisions = meta_result.get("meta_agent", {}).get("consensus_decisions", [])
516
+ execution_results = portfolio.execute_trades(consensus_decisions)
517
+ ```
518
+
519
+ The execution includes:
520
+ - Position sizing
521
+ - Order placement
522
+ - Confirmation handling
523
+ - Portfolio state update
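A minimal sketch of what an `execute_trades` loop might do; only the call signature comes from the snippet above, and the cash bookkeeping here is an illustrative assumption:

```python
# Hypothetical minimal execute_trades loop; the real Portfolio class keeps
# positions and returns as well, which this sketch omits.
def execute_trades(decisions, cash, prices):
    trades, errors = [], []
    for d in decisions:
        ticker, qty = d["ticker"], d.get("quantity", 0)
        price = prices.get(ticker)
        if price is None or qty <= 0:
            errors.append({"ticker": ticker, "reason": "missing price or size"})
            continue
        cost = round(qty * price, 2)
        if d["action"] == "buy" and cost > cash:
            errors.append({"ticker": ticker, "reason": "insufficient cash"})
            continue
        cash += -cost if d["action"] == "buy" else cost
        trades.append({"ticker": ticker, "action": d["action"],
                       "quantity": qty, "price": price, "cost": cost})
    return {"trades": trades, "errors": errors, "cash": round(cash, 2)}

result = execute_trades(
    [{"ticker": "TSLA", "action": "buy", "quantity": 751}],
    cash=1_000_000.00, prices={"TSLA": 242.15},
)
```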
524
+
525
+ ```json
526
+ {
527
+ "trades": [
528
+ {
529
+ "ticker": "TSLA",
530
+ "action": "buy",
531
+ "quantity": 751,
532
+ "price": 242.15,
533
+ "cost": 181853.65,
534
+ "timestamp": "2024-04-17T14:23:45.123456"
535
+ }
536
+ ],
537
+ "errors": [],
538
+ "portfolio_update": {
539
+ "timestamp": "2024-04-17T14:23:45.654321",
540
+ "portfolio_value": 1000000.00,
541
+ "cash": 818146.35,
542
+ "positions": {
543
+ "TSLA": {
544
+ "ticker": "TSLA",
545
+ "quantity": 751,
546
+ "entry_price": 242.15,
547
+ "current_price": 242.15,
548
+ "market_value": 181853.65,
549
+ "allocation": 0.182,
550
+ "unrealized_gain": 0.0,
551
+ "entry_date": "2024-04-17T14:23:45.123456"
552
+ }
553
+ },
554
+ "returns": {
555
+ "total_return": 0.0,
556
+ "daily_return": 0.0
557
+ },
558
+ "allocation": {
559
+ "cash": 0.818,
560
+ "TSLA": 0.182
561
+ }
562
+ }
563
+ }
564
+ ```
565
+
566
+ ## 7. Attribution Tracing
567
+
568
+ Throughout this process, complete attribution tracing is maintained:
569
+
570
+ ```python
571
+ # Generate attribution report
572
+ attribution_report = portfolio.tracer.generate_attribution_report(meta_result.get("meta_agent", {}).get("consensus_decisions", []))
573
+ ```
574
+
575
+ The attribution report shows the complete decision provenance:
576
+
577
+ ```json
578
+ {
579
+ "agent_name": "PortfolioMetaAgent",
580
+ "timestamp": "2024-04-17T14:23:46.123456",
581
+ "signals": 1,
582
+ "attribution_summary": {
583
+ "Wood": 0.45,
584
+ "Dalio": 0.35,
585
+ "Graham": 0.20
586
+ },
587
+ "confidence_summary": {
588
+ "mean": 0.82,
589
+ "median": 0.82,
590
+ "min": 0.82,
591
+ "max": 0.82
592
+ },
593
+ "top_factors": [
594
+ {
595
+ "source": "innovation_score",
596
+ "weight": 0.35
597
+ },
598
+ {
599
+ "source": "growth_projection",
600
+ "weight": 0.25
601
+ },
602
+ {
603
+ "source": "economic_cycle_position",
604
+ "weight": 0.15
605
+ },
606
+ {
607
+ "source": "competition_analysis",
608
+ "weight": 0.10
609
+ },
610
+ {
611
+ "source": "intrinsic_value_calculation",
612
+ "weight": 0.10
613
+ }
614
+ ],
615
+ "shell_patterns": [
616
+ {
617
+ "pattern": "v07 CIRCUIT-FRAGMENT",
618
+ "count": 1,
619
+ "frequency": 1.0
620
+ }
621
+ ]
622
+ }
623
+ ```
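An `attribution_summary` like the one above can be produced by normalizing per-signal attribution weights; a sketch (the helper name and trace layout are ours, not the library's):

```python
# Sketch: aggregate per-signal attribution traces into a normalized summary.
# Helper name and input shape are illustrative assumptions.
def summarize_attribution(signals):
    totals = {}
    for sig in signals:
        for agent, weight in sig.get("attribution_trace", {}).items():
            totals[agent] = totals.get(agent, 0.0) + weight
    grand = sum(totals.values()) or 1.0  # guard against empty traces
    return {agent: round(w / grand, 2) for agent, w in totals.items()}

summary = summarize_attribution(
    [{"attribution_trace": {"Wood": 0.45, "Dalio": 0.35, "Graham": 0.20}}]
)
```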
624
+
625
+ ## 8. Visualization
626
+
627
+ The system provides multiple visualization tools for interpretability:
628
+
629
+ ### Consensus Graph
630
+
631
+ ```python
632
+ # Generate consensus graph
633
+ consensus_graph = portfolio.visualize_consensus_graph()
634
+ ```
635
+
636
+ The consensus graph shows the flow of influence between agents and decisions:
637
+
638
+ ```json
639
+ {
640
+ "nodes": [
641
+ {
642
+ "id": "meta",
643
+ "label": "Portfolio Meta-Agent",
644
+ "type": "meta",
645
+ "size": 20
646
+ },
647
+ {
648
+ "id": "agent-1",
649
+ "label": "Graham Agent",
650
+ "type": "agent",
651
+ "philosophy": "Value investing focused on margin of safety",
652
+ "size": 15,
653
+ "weight": 0.20
654
+ },
655
+ {
656
+ "id": "agent-2",
657
+ "label": "Wood Agent",
658
+ "type": "agent",
659
+ "philosophy": "Disruptive innovation investing",
660
+ "size": 15,
661
+ "weight": 0.45
662
+ },
663
+ {
664
+ "id": "agent-3",
665
+ "label": "Dalio Agent",
666
+ "type": "agent",
667
+ "philosophy": "Macroeconomic-based investing",
668
+ "size": 15,
669
+ "weight": 0.35
670
+ },
671
+ {
672
+ "id": "position-TSLA",
673
+ "label": "TSLA",
674
+ "type": "position",
675
+ "size": 10,
676
+ "value": 181853.65
677
+ }
678
+ ],
679
+ "links": [
680
+ {
681
+ "source": "agent-1",
682
+ "target": "meta",
683
+ "value": 0.20,
684
+ "type": "influence"
685
+ },
686
+ {
687
+ "source": "agent-2",
688
+ "target": "meta",
689
+ "value": 0.45,
690
+ "type": "influence"
691
+ },
692
+ {
693
+ "source": "agent-3",
694
+ "target": "meta",
695
+ "value": 0.35,
696
+ "type": "influence"
697
+ },
698
+ {
699
+ "source": "meta",
700
+ "target": "position-TSLA",
701
+ "value": 1.0,
702
+ "type": "allocation"
703
+ },
704
+ {
705
+ "source": "agent-2",
706
+ "target": "position-TSLA",
707
+ "value": 0.45,
708
+ "type": "attribution"
709
+ },
710
+ {
711
+ "source": "agent-3",
712
+ "target": "position-TSLA",
713
+ "value": 0.35,
714
+ "type": "attribution"
715
+ },
716
+ {
717
+ "source": "agent-1",
718
+ "target": "position-TSLA",
719
+ "value": 0.20,
720
+ "type": "attribution"
721
+ }
722
+ ],
723
+ "timestamp": "2024-04-17T14:23:46.987654"
724
+ }
725
+ ```
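Because the graph is plain node-link JSON, simple traversals need no graph library; for example, tallying each agent's influence on the meta-agent:

```python
# Tally influence-edge weights flowing into the meta node from the
# consensus-graph links shown above (plain dicts, no graph library).
graph = {
    "links": [
        {"source": "agent-1", "target": "meta", "value": 0.20, "type": "influence"},
        {"source": "agent-2", "target": "meta", "value": 0.45, "type": "influence"},
        {"source": "agent-3", "target": "meta", "value": 0.35, "type": "influence"},
    ]
}
influence = {
    link["source"]: link["value"]
    for link in graph["links"]
    if link["type"] == "influence" and link["target"] == "meta"
}
```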
726
+
727
+ ### Agent Conflict Map
728
+
729
+ ```python
730
+ # Generate agent conflict map
731
+ conflict_map = portfolio.visualize_agent_conflict_map()
732
+ ```
733
+
734
+ The conflict map visualizes the specific disagreements between agents:
735
+
736
+ ```json
737
+ {
738
+ "nodes": [
739
+ {
740
+ "id": "agent-1",
741
+ "label": "Graham Agent",
742
+ "type": "agent",
743
+ "philosophy": "Value investing focused on margin of safety",
744
+ "size": 15
745
+ },
746
+ {
747
+ "id": "agent-2",
748
+ "label": "Wood Agent",
749
+ "type": "agent",
750
+ "philosophy": "Disruptive innovation investing",
751
+ "size": 15
752
+ },
753
+ {
754
+ "id": "agent-3",
755
+ "label": "Dalio Agent",
756
+ "type": "agent",
757
+ "philosophy": "Macroeconomic-based investing",
758
+ "size": 15
759
+ },
760
+ {
761
+ "id": "position-TSLA",
762
+ "label": "TSLA",
763
+ "type": "position",
764
+ "size": 10
765
+ }
766
+ ],
767
+ "links": [
768
+ {
769
+ "source": "agent-1",
770
+ "target": "agent-2",
771
+ "value": 1.0,
772
+ "type": "conflict",
773
+ "ticker": "TSLA"
774
+ },
775
+ {
776
+ "source": "agent-2",
777
+ "target": "agent-3",
778
+ "value": 1.0,
779
+ "type": "conflict",
780
+ "ticker": "TSLA"
781
+ },
782
+ {
783
+ "source": "agent-1",
784
+ "target": "agent-3",
785
+ "value": 1.0,
786
+ "type": "conflict",
787
+ "ticker": "TSLA"
788
+ }
789
+ ],
790
+ "conflict_zones": [
791
+ {
792
+ "id": "conflict-1",
793
+ "ticker": "TSLA",
794
+ "agents": ["agent-1", "agent-2", "agent-3"],
795
+ "resolution": "resolved",
796
+ "timestamp": "2024-04-17T14:23:44.567890"
797
+ }
798
+ ],
799
+ "timestamp": "2024-04-17T14:23:47.654321"
800
+ }
801
+ ```
802
+
803
+ ### Shell Failure Map
804
+
805
+ ```python
806
+ # Create shell diagnostics
807
+ shell_diagnostics = ShellDiagnostics(
808
+ agent_id="portfolio",
809
+ agent_name="Portfolio",
810
+ tracing_tools=TracingTools(
811
+ agent_id="portfolio",
812
+ agent_name="Portfolio",
813
+ tracing_mode=TracingMode.DETAILED,
814
+ )
815
+ )
816
+
817
+ # Create shell failure map
818
+ failure_map = ShellFailureMap()
819
+
820
+ # Analyze each agent's state for shell failures
821
+ for agent in [graham_agent, wood_agent, dalio_agent]:
822
+ agent_state = agent.get_state_report()
823
+
824
+ # Simulate shell failures based on agent state
825
+ for shell_pattern in [ShellPattern.CIRCUIT_FRAGMENT, ShellPattern.META_FAILURE]:
826
+ failure_data = shell_diagnostics.simulate_shell_failure(
827
+ shell_pattern=shell_pattern,
828
+ context=agent_state,
829
+ )
830
+
831
+ # Add to failure map
832
+ failure_map.add_failure(
833
+ agent_id=agent.id,
834
+ agent_name=agent.name,
835
+ shell_pattern=shell_pattern,
836
+ failure_data=failure_data,
837
+ )
838
+
839
+ # Generate visualization
840
+ shell_failure_viz = failure_map.generate_failure_map_visualization()
841
+ ```
842
+
843
+ The shell failure map visualizes interpretability patterns detected in the agents:
844
+
845
+ ```json
846
+ {
847
+ "nodes": [
848
+ {
849
+ "id": "agent-1",
850
+ "label": "Graham Agent",
851
+ "type": "agent",
852
+ "size": 15,
853
+ "failure_count": 1
854
+ },
855
+ {
856
+ "id": "agent-2",
857
+ "label": "Wood Agent",
858
+ "type": "agent",
859
+ "size": 15,
860
+ "failure_count": 2
861
+ },
862
+ {
863
+ "id": "agent-3",
864
+ "label": "Dalio Agent",
865
+ "type": "agent",
866
+ "size": 15,
867
+ "failure_count": 1
868
+ },
869
+ {
870
+ "id": "v07 CIRCUIT-FRAGMENT",
871
+ "label": "CIRCUIT-FRAGMENT",
872
+ "type": "pattern",
873
+ "size": 10,
874
+ "failure_count": 3
875
+ },
876
+ {
877
+ "id": "v10 META-FAILURE",
878
+ "label": "META-FAILURE",
879
+ "type": "pattern",
880
+ "size": 10,
881
+ "failure_count": 1
882
+ },
883
+ {
884
+ "id": "failure-1",
885
+ "label": "Failure 3f4a9c",
886
+ "type": "failure",
887
+ "size": 5,
888
+ "timestamp": "2024-04-17T14:23:48.123456"
889
+ },
890
+ {
891
+ "id": "failure-2",
892
+ "label": "Failure b7d5e2",
893
+ "type": "failure",
894
+ "size": 5,
895
+ "timestamp": "2024-04-17T14:23:48.234567"
896
+ },
897
+ {
898
+ "id": "failure-3",
899
+ "label": "Failure 9c6f1a",
900
+ "type": "failure",
901
+ "size": 5,
902
+ "timestamp": "2024-04-17T14:23:48.345678"
903
+ },
904
+ {
905
+ "id": "failure-4",
906
+ "label": "Failure 2e8d7f",
907
+ "type": "failure",
908
+ "size": 5,
909
+ "timestamp": "2024-04-17T14:23:48.456789"
910
+ }
911
+ ],
912
+ "links": [
913
+ {
914
+ "source": "agent-1",
915
+ "target": "failure-1",
916
+ "type": "agent_failure"
917
+ },
918
+ {
919
+ "source": "v07 CIRCUIT-FRAGMENT",
920
+ "target": "failure-1",
921
+ "type": "pattern_failure"
922
+ },
923
+ {
924
+ "source": "agent-2",
925
+ "target": "failure-2",
926
+ "type": "agent_failure"
927
+ },
928
+ {
929
+ "source": "v07 CIRCUIT-FRAGMENT",
930
+ "target": "failure-2",
931
+ "type": "pattern_failure"
932
+ },
933
+ {
934
+ "source": "agent-2",
935
+ "target": "failure-3",
936
+ "type": "agent_failure"
937
+ },
938
+ {
939
+ "source": "v10 META-FAILURE",
940
+ "target": "failure-3",
941
+ "type": "pattern_failure"
942
+ },
943
+ {
944
+ "source": "agent-3",
945
+ "target": "failure-4",
946
+ "type": "agent_failure"
947
+ },
948
+ {
949
+ "source": "v07 CIRCUIT-FRAGMENT",
950
+ "target": "failure-4",
951
+ "type": "pattern_failure"
952
+ }
953
+ ],
954
+ "timestamp": "2024-04-17T14:23:49.000000"
955
+ }
956
+ ```
957
+
958
+ ## 9. Agent Memory & Learning
959
+
960
+ After each trading cycle, agents update their internal state:
961
+
962
+ ```python
963
+ # Update agent states based on market feedback
964
+ market_feedback = {
965
+ 'portfolio_value': execution_results['portfolio_update']['portfolio_value'],
966
+ 'performance': {'TSLA': 0.02}, # Example: 2% return
967
+ 'decisions': consensus_decisions,
968
+ 'avg_confidence': 0.82,
969
+ }
970
+
971
+ # Update each agent's state
972
+ for agent in [graham_agent, wood_agent, dalio_agent]:
973
+ agent.update_state(market_feedback)
974
+ ```
975
+
976
+ Each agent processes the feedback differently based on its philosophy:
977
+
978
+ ### Wood Agent Memory Update
979
+
980
+ ```python
981
+ def _update_beliefs(self, market_feedback: Dict[str, Any]) -> None:
982
+ """Update agent's belief state based on market feedback."""
983
+ # Extract relevant signals
984
+ if 'performance' in market_feedback:
985
+ performance = market_feedback['performance']
986
+
987
+ # Record decision outcomes
988
+ if 'decisions' in market_feedback:
989
+ for decision in market_feedback['decisions']:
990
+ self.state.decision_history.append({
991
+ 'decision': decision,
992
+ 'outcome': performance.get(decision.get('ticker'), 0),
993
+ 'timestamp': datetime.datetime.now()
994
+ })
995
+
996
+ # For Wood Agent, reinforce innovation beliefs on positive outcomes
997
+ if performance.get(decision.get('ticker'), 0) > 0:
998
+ ticker = decision.get('ticker')
999
+ # Strengthen innovation belief
1000
+ current_belief = self.state.belief_state.get(f"{ticker}_innovation", 0.5)
1001
+ self.state.belief_state[f"{ticker}_innovation"] = min(1.0, current_belief + 0.05)
1002
+
1003
+ # Update industry trend belief
1004
+ industry = self._get_ticker_industry(ticker)
1005
+ if industry:
1006
+ industry_belief = self.state.belief_state.get(f"{industry}_trend", 0.5)
1007
+ self.state.belief_state[f"{industry}_trend"] = min(1.0, industry_belief + 0.03)
1008
+
1009
+ # Update general belief state based on performance
1010
+ for ticker, perf in performance.items():
1011
+ general_belief_key = f"{ticker}_general"
1012
+ current_belief = self.state.belief_state.get(general_belief_key, 0.5)
1013
+
1014
+ # Wood Agent weights positive outcomes more heavily for innovative companies
1015
+ if self._is_innovative_company(ticker):
1016
+ update_weight = 0.3 # Higher weight for innovative companies
1017
+ else:
1018
+ update_weight = 0.1 # Lower weight for traditional companies
1019
+
1020
+ # Update belief
1021
+ updated_belief = current_belief * (1 - update_weight) + np.tanh(perf) * update_weight
1022
+ self.state.belief_state[general_belief_key] = updated_belief
1023
+
1024
+ # Track belief drift (the key was just set above, so no membership check is needed)
+ drift = updated_belief - current_belief
+ self.state.drift_vector[general_belief_key] = drift
+
+ # Wood Agent's drift pattern analysis
+ self._analyze_drift_pattern(ticker, drift)
1031
+ ```
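A worked instance of this update rule, using `math.tanh` in place of `np.tanh` for a dependency-free sketch: a neutral 0.5 belief, a 2% return, and the innovative-company update weight of 0.3.

```python
import math

# One step of the belief update above: neutral prior, 2% return,
# innovative-company update weight (math.tanh matches np.tanh for scalars).
current_belief, perf, update_weight = 0.5, 0.02, 0.3
updated = current_belief * (1 - update_weight) + math.tanh(perf) * update_weight
drift = updated - current_belief
```

Note that tanh(0.02) ≈ 0.02, so a small positive return still pulls a neutral 0.5 belief down toward the return scale: beliefs under this rule behave like smoothed return expectations rather than probabilities.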
1032
+
1033
+ ## 10. Command Interface
1034
+
1035
+ Throughout the system, the symbolic command interface enables deeper introspection:
1036
+
1037
+ ```python
1038
+ # Get a reflection trace from the Graham agent
1039
+ reflection_trace = graham_agent.execute_command(
1040
+ command="reflect.trace",
1041
+ depth=3
1042
+ )
1043
+
1044
+ # Generate signals from alternative sources
1045
+ alt_signals = wood_agent.execute_command(
1046
+ command="fork.signal",
1047
+ source="beliefs"
1048
+ )
1049
+
1050
+ # Check for decision collapse
1051
+ collapse_check = dalio_agent.execute_command(
1052
+ command="collapse.detect",
1053
+ threshold=0.7,
1054
+ reason="consistency"
1055
+ )
1056
+
1057
+ # Attribute weight to a justification
1058
+ attribution = portfolio.execute_command(
1059
+ command="attribute.weight",
1060
+ justification="Tesla's innovation in AI and robotics represents a paradigm shift that traditional valuation metrics fail to capture."
1061
+ )
1062
+
1063
+ # Track belief drift
1064
+ drift_observation = wood_agent.execute_command(
1065
+ command="drift.observe",
1066
+ vector={"TSLA_innovation": 0.05, "AI_trend": 0.03, "EV_market": 0.02},
1067
+ bias=0.01
1068
+ )
1069
+ ```
1070
+
1071
+ These commands form the foundation of the system's interpretability architecture, enabling detailed tracing and analysis of decision processes.
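The `execute_command` calls above suggest a simple name-to-handler dispatch, as in `BaseAgent._commands`; a standalone sketch (the registry class and handler are illustrative, not the framework's API):

```python
# Sketch of the command dispatch behind execute_command; handler names mirror
# the _commands table in BaseAgent, but this standalone registry is illustrative.
class CommandInterface:
    def __init__(self):
        self._commands = {}

    def register(self, name, handler):
        """Map a symbolic command name (e.g. 'reflect.trace') to a callable."""
        self._commands[name] = handler

    def execute_command(self, command, **kwargs):
        """Look up and invoke a registered command, passing through kwargs."""
        if command not in self._commands:
            raise ValueError(f"unknown command: {command}")
        return self._commands[command](**kwargs)

iface = CommandInterface()
iface.register("reflect.trace", lambda depth=1: {"command": "reflect.trace", "depth": depth})
trace = iface.execute_command("reflect.trace", depth=3)
```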
1072
+
1073
+ ## Conclusion
1074
+
1075
+ This example demonstrates the recursive cognitive architecture of Multi-Agent Debate in action. From market data ingestion to trade execution, the system maintains complete transparency and interpretability through:
1076
+
1077
+ 1. Agent-specific cognitive lenses
1078
+ 2. Recursive reasoning graphs
1079
+ 3. Attribution tracing
1080
+ 4. Meta-agent arbitration
1081
+ 5. Position sizing
1082
+ 6. Trade execution
1083
+ 7. Memory and learning
1084
+
1085
+ Each component is designed to enable deeper introspection into the decision-making process, creating a truly transparent and interpretable multi-agent market cognition system.
1086
+
1087
+ The symbolic command interface and visualization tools provide multiple ways to understand and analyze the system's behavior, making it both effective and explainable.
1088
requirements.txt ADDED
@@ -0,0 +1,13 @@
1
+ numpy>=1.24.0
2
+ pandas>=2.0.0
3
+ matplotlib>=3.7.0
4
+ plotly>=5.14.0
5
+ yfinance>=0.2.18
6
+ langgraph>=0.0.11
7
+ anthropic>=0.5.0
8
+ openai>=1.1.0
9
+ groq>=0.3.0
10
+ langchain>=0.0.267
11
+ langchain-experimental>=0.0.9
12
+ langchain-community>=0.0.9
13
+ pydantic>=2.4.0
setup.py ADDED
@@ -0,0 +1,68 @@
1
+ from setuptools import setup, find_packages
2
+
3
+ with open("README.md", "r", encoding="utf-8") as fh:
4
+ long_description = fh.read()
5
+
6
+ setup(
7
+ name="multi_agent_debate",
8
+ version="0.1.0",
9
+ author="multi_agent_debate Contributors",
10
+ author_email="contributors@multi_agent_debate.org",
11
+ description="Multi-agent recursive market cognition framework",
12
+ long_description=long_description,
13
+ long_description_content_type="text/markdown",
14
+ url="https://github.com/multi_agent_debate/multi_agent_debate",
15
+ packages=find_packages(where="src"),
16
+ package_dir={"": "src"},
17
+ classifiers=[
18
+ "Development Status :: 3 - Alpha",
19
+ "Intended Audience :: Financial and Insurance Industry",
20
+ "Intended Audience :: Science/Research",
21
+ "License :: OSI Approved :: MIT License",
22
+ "Programming Language :: Python :: 3",
23
+ "Programming Language :: Python :: 3.10",
24
+ "Programming Language :: Python :: 3.11",
25
+ "Topic :: Scientific/Engineering :: Artificial Intelligence",
26
+ "Topic :: Office/Business :: Financial :: Investment",
27
+ ],
28
+ python_requires=">=3.10",
29
+ install_requires=[
30
+ "numpy>=1.24.0",
31
+ "pandas>=2.0.0",
32
+ "matplotlib>=3.7.0",
33
+ "plotly>=5.14.0",
34
+ "yfinance>=0.2.18",
35
+ "langgraph>=0.0.11",
36
+ "anthropic>=0.5.0",
37
+ "openai>=1.1.0",
38
+ "groq>=0.3.0",
39
+ "langchain>=0.0.267",
40
+ "langchain-experimental>=0.0.9",
41
+ "langchain-community>=0.0.9",
42
+ "pydantic>=2.4.0",
43
+ ],
44
+ extras_require={
45
+ "dev": [
46
+ "pytest>=7.3.1",
47
+ "black>=23.3.0",
48
+ "isort>=5.12.0",
49
+ "mypy>=1.3.0",
50
+ ],
51
+ "docs": [
52
+ "sphinx>=7.0.0",
53
+ "sphinx-rtd-theme>=1.2.2",
54
+ "myst-parser>=2.0.0",
55
+ ],
56
+ "optional": [
57
+ "polygon-api-client>=1.10.0",
58
+ "alpha_vantage>=2.3.1",
59
+ "ollama>=0.1.0",
60
+ "deepseek>=0.1.0",
61
+ ],
62
+ },
63
+ entry_points={
64
+ "console_scripts": [
65
+ "multi_agent_debate=src.main:main",
66
+ ],
67
+ },
68
+ )
src/agents/base.py ADDED
@@ -0,0 +1,611 @@
1
+ """
2
+ BaseAgent - Recursive Cognitive Agent Framework for AGI-HEDGE-FUND
3
+
4
+ This module implements the foundational cognitive architecture for all investment agents.
5
+ Each agent inherits from this base class to enable:
6
+ - Recursive reasoning loops with customizable depth
7
+ - Transparent attribution tracing for decision provenance
8
+ - Temporal memory shell with configurable decay
9
+ - Value-weighted decision encoding
10
+ - Symbolic state representation for interpretability
11
+
12
+ Internal Note: Base class encodes the recursive interpretability interface (.p/ command patterns)
13
+ in line with Anthropic-style Circuit Tracing research while maintaining familiar investment syntax.
14
+ """
15
+
16
+ import uuid
17
+ import datetime
18
+ from typing import Dict, List, Any, Optional, Tuple
19
+ import numpy as np
20
+ from pydantic import BaseModel, Field
21
+
22
+ from ..cognition.graph import ReasoningGraph
23
+ from ..cognition.memory import MemoryShell
24
+ from ..cognition.attribution import AttributionTracer
25
+ from ..llm.router import ModelRouter
26
+ from ..utils.diagnostics import TracingTools
27
+
28
+
29
+ class AgentSignal(BaseModel):
30
+ """Signal generated by an agent with full attribution and confidence metrics."""
31
+
32
+ ticker: str = Field(..., description="Stock ticker symbol")
33
+ action: str = Field(..., description="buy, sell, or hold")
34
+ confidence: float = Field(..., description="Confidence level (0.0-1.0)")
35
+ quantity: Optional[int] = Field(None, description="Number of shares to trade")
36
+ reasoning: str = Field(..., description="Explicit reasoning chain")
37
+ intent: str = Field(..., description="High-level investment intent")
38
+ value_basis: str = Field(..., description="Core value driving this decision")
39
+ attribution_trace: Dict[str, float] = Field(default_factory=dict, description="Causal attribution weights")
40
+ drift_signature: Dict[str, float] = Field(default_factory=dict, description="Belief drift metrics")
41
+ signal_id: str = Field(default_factory=lambda: str(uuid.uuid4()), description="Unique signal identifier")
42
+ timestamp: datetime.datetime = Field(default_factory=datetime.datetime.now)
43
+
44
+
45
+ class AgentState(BaseModel):
46
+ """Persistent agent state with temporal memory and belief dynamics."""
47
+
48
+ working_memory: Dict[str, Any] = Field(default_factory=dict, description="Short-term memory")
49
+ belief_state: Dict[str, float] = Field(default_factory=dict, description="Current belief distribution")
50
+ confidence_history: List[float] = Field(default_factory=list, description="Historical confidence")
51
+ decision_history: List[Dict[str, Any]] = Field(default_factory=list, description="Past decisions")
52
+ performance_trace: Dict[str, float] = Field(default_factory=dict, description="Performance metrics")
53
+ reflective_state: Dict[str, Any] = Field(default_factory=dict, description="Self-awareness metrics")
54
+ drift_vector: Dict[str, float] = Field(default_factory=dict, description="Direction of belief evolution")
55
+ consistency_score: float = Field(default=1.0, description="Internal consistency metric")
56
+ last_update: datetime.datetime = Field(default_factory=datetime.datetime.now)
57
+
58
+
59
+ class BaseAgent:
60
+ """
61
+ Base class for all investment agents in the AGI-HEDGE-FUND framework.
62
+
63
+ Implements the core cognitive architecture including:
64
+ - Recursive reasoning loops
65
+ - Memory persistence
66
+ - Attribution tracing
67
+ - Value-weighted decision making
68
+ - Symbolic state representation
69
+ """
70
+
71
+ def __init__(
72
+ self,
73
+ name: str,
74
+ philosophy: str,
75
+ reasoning_depth: int = 3,
76
+ memory_decay: float = 0.2,
77
+ initial_capital: float = 100000.0,
78
+ model_provider: str = "anthropic",
79
+ model_name: str = "claude-3-sonnet-20240229",
80
+ trace_enabled: bool = False,
81
+ ):
82
+ """
83
+ Initialize a cognitive investment agent.
84
+
85
+ Args:
86
+ name: Agent identifier name
87
+ philosophy: Investment philosophy description
88
+ reasoning_depth: Depth of recursive reasoning (higher = deeper thinking)
89
+ memory_decay: Rate of memory deterioration (0.0-1.0)
90
+ initial_capital: Starting capital amount
91
+ model_provider: LLM provider ("anthropic", "openai", "groq", "ollama", "deepseek")
92
+ model_name: Specific model identifier
93
+ trace_enabled: Whether to generate full reasoning traces
94
+ """
95
+ self.id = str(uuid.uuid4())
96
+ self.name = name
97
+ self.philosophy = philosophy
98
+ self.reasoning_depth = reasoning_depth
99
+ self.memory_decay = memory_decay
100
+ self.initial_capital = initial_capital
101
+ self.current_capital = initial_capital
102
+ self.trace_enabled = trace_enabled
103
+
104
+ # Initialize cognitive components
105
+ self.state = AgentState()
106
+ self.memory_shell = MemoryShell(decay_rate=memory_decay)
107
+ self.attribution_tracer = AttributionTracer()
108
+ self.llm = ModelRouter(provider=model_provider, model=model_name)
109
+
110
+ # Initialize reasoning graph
111
+ self.reasoning_graph = ReasoningGraph(
112
+ agent_name=self.name,
113
+ agent_philosophy=self.philosophy,
114
+ model_router=self.llm,
115
+ )
116
+
117
+ # Diagnostics and tracing
118
+ self.tracer = TracingTools(agent_id=self.id, agent_name=self.name)
119
+
120
+ # Internal symbolic processing commands
121
+ self._commands = {
122
+ "reflect.trace": self._reflect_trace,
123
+ "fork.signal": self._fork_signal,
124
+ "collapse.detect": self._collapse_detect,
125
+ "attribute.weight": self._attribute_weight,
126
+ "drift.observe": self._drift_observe,
127
+ }
128
+
129
+ def process_market_data(self, data: Dict[str, Any]) -> Dict[str, Any]:
130
+ """
131
+ Process market data through agent's cognitive lens.
132
+
133
+ Args:
134
+ data: Market data dictionary
135
+
136
+ Returns:
137
+ Processed market data with agent-specific insights
138
+ """
139
+ # Each agent subclass implements its unique market interpretation
140
+ raise NotImplementedError("Agent subclasses must implement process_market_data")
141
+
142
+ def generate_signals(self, processed_data: Dict[str, Any]) -> List[AgentSignal]:
143
+ """
144
+ Generate investment signals based on processed market data.
145
+
146
+ Args:
147
+ processed_data: Processed market data
148
+
149
+ Returns:
150
+ List of investment signals with attribution
151
+ """
152
+ # Each agent subclass implements its unique signal generation logic
153
+ raise NotImplementedError("Agent subclasses must implement generate_signals")
154
+
155
+ def update_state(self, market_feedback: Dict[str, Any]) -> None:
156
+ """
157
+ Update agent's internal state based on market feedback.
158
+
159
+ Args:
160
+ market_feedback: Dictionary containing market performance data
161
+ """
162
+ # Update memory shell with new experiences
163
+ self.memory_shell.add_experience(market_feedback)
164
+
165
+ # Update belief state based on market feedback
166
+ self._update_beliefs(market_feedback)
167
+
168
+ # Update performance metrics
169
+ self._update_performance_metrics(market_feedback)
170
+
171
+ # Apply memory decay
172
+ self.memory_shell.apply_decay()
173
+
174
+ # Update timestamp
175
+ self.state.last_update = datetime.datetime.now()
176
+
177
+ def _update_beliefs(self, market_feedback: Dict[str, Any]) -> None:
178
+ """
179
+ Update agent's belief state based on market feedback.
180
+
181
+ Args:
182
+ market_feedback: Dictionary containing market performance data
183
+ """
184
+ # Extract relevant signals from market feedback
185
+ if 'performance' in market_feedback:
186
+ performance = market_feedback['performance']
187
+
188
+ # Record decision outcomes
189
+ if 'decisions' in market_feedback:
190
+ for decision in market_feedback['decisions']:
191
+ self.state.decision_history.append({
192
+ 'decision': decision,
193
+ 'outcome': performance.get(decision.get('ticker'), 0),
194
+ 'timestamp': datetime.datetime.now()
195
+ })
196
+
197
+ # Update belief state based on performance
198
+ for ticker, perf in performance.items():
199
+ current_belief = self.state.belief_state.get(ticker, 0.5)
200
+ # Belief update formula combines prior belief with new evidence
201
+ updated_belief = (current_belief * (1 - self.memory_decay) +
202
+ np.tanh(perf) * self.memory_decay)
203
+ self.state.belief_state[ticker] = updated_belief
204
+
205
+ # Track belief drift
206
+ if ticker in self.state.belief_state:
207
+ drift = updated_belief - current_belief
208
+ self.state.drift_vector[ticker] = drift
209
+
210
+ def _update_performance_metrics(self, market_feedback: Dict[str, Any]) -> None:
211
+ """
212
+ Update agent's performance metrics based on market feedback.
213
+
214
+ Args:
215
+ market_feedback: Dictionary containing market performance data
216
+ """
217
+ if 'portfolio_value' in market_feedback:
218
+ # Calculate return
219
+ portfolio_value = market_feedback['portfolio_value']
220
+ prior_value = self.state.performance_trace.get('portfolio_value', self.initial_capital)
221
+ returns = (portfolio_value - prior_value) / prior_value if prior_value > 0 else 0
222
+
223
+ # Update performance trace
224
+ self.state.performance_trace.update({
225
+ 'portfolio_value': portfolio_value,
226
+ 'returns': returns,
227
+ 'cumulative_return': (portfolio_value / self.initial_capital) - 1,
228
+ 'win_rate': market_feedback.get('win_rate', 0),
229
+ 'sharpe_ratio': market_feedback.get('sharpe_ratio', 0),
230
+ })
231
+
232
+ # Update current capital
233
+ self.current_capital = portfolio_value
234
+
235
+ # Add to confidence history
236
+ avg_confidence = market_feedback.get('avg_confidence', 0.5)
237
+ self.state.confidence_history.append(avg_confidence)
238
+
239
+ # Update consistency score
240
+ self._update_consistency_score(market_feedback)
241
+
242
+ def _update_consistency_score(self, market_feedback: Dict[str, Any]) -> None:
243
+ """
244
+ Update agent's internal consistency score.
245
+
246
+ Args:
247
+ market_feedback: Dictionary containing market performance data
248
+ """
249
+ # Measure consistency between stated reasoning and actual performance
250
+ if 'decisions' in market_feedback and 'performance' in market_feedback:
251
+ performance = market_feedback['performance']
252
+ decision_consistency = []
253
+
254
+ for decision in market_feedback['decisions']:
255
+ ticker = decision.get('ticker')
256
+ expected_direction = 1 if decision.get('action') == 'buy' else -1
257
+ actual_performance = performance.get(ticker, 0)
258
+ actual_direction = 1 if actual_performance > 0 else -1
259
+
260
+ # Consistency is 1 when directions match, -1 when they don't
261
+ direction_consistency = 1 if expected_direction == actual_direction else -1
262
+ decision_consistency.append(direction_consistency)
263
+
264
+ # Update consistency score (moving average)
265
+ if decision_consistency:
266
+ avg_consistency = sum(decision_consistency) / len(decision_consistency)
267
+ current_consistency = self.state.consistency_score
268
+ self.state.consistency_score = (current_consistency * 0.8) + (avg_consistency * 0.2)
269
+
270
+     def attribute_signals(self, signals: List[Dict[str, Any]]) -> List[AgentSignal]:
+         """
+         Add attribution traces to raw signals.
+ 
+         Args:
+             signals: Raw investment signals
+ 
+         Returns:
+             Signals with attribution traces
+         """
+         attributed_signals = []
+         for signal in signals:
+             # Run attribution tracing
+             attribution = self.attribution_tracer.trace_attribution(
+                 signal=signal,
+                 agent_state=self.state,
+                 reasoning_depth=self.reasoning_depth
+             )
+ 
+             # Create AgentSignal with attribution
+             agent_signal = AgentSignal(
+                 ticker=signal.get('ticker', ''),
+                 action=signal.get('action', 'hold'),
+                 confidence=signal.get('confidence', 0.5),
+                 quantity=signal.get('quantity'),
+                 reasoning=signal.get('reasoning', ''),
+                 intent=signal.get('intent', ''),
+                 value_basis=signal.get('value_basis', ''),
+                 attribution_trace=attribution.get('attribution_trace', {}),
+                 drift_signature=self.state.drift_vector,
+             )
+             attributed_signals.append(agent_signal)
+ 
+             # Record in tracer if enabled
+             if self.trace_enabled:
+                 self.tracer.record_signal(agent_signal)
+ 
+         return attributed_signals
+ 
+     def reset(self) -> None:
+         """Reset agent to initial state while preserving memory decay pattern."""
+         # Reset capital
+         self.current_capital = self.initial_capital
+ 
+         # Reset state but preserve some learning
+         belief_state = self.state.belief_state.copy()
+         drift_vector = self.state.drift_vector.copy()
+ 
+         # Create new state
+         self.state = AgentState(
+             belief_state=belief_state,
+             drift_vector=drift_vector,
+         )
+ 
+         # Apply memory decay to preserved beliefs
+         for key in self.state.belief_state:
+             self.state.belief_state[key] *= (1 - self.memory_decay)
+ 
+     def get_state_report(self) -> Dict[str, Any]:
+         """
+         Generate a detailed report of agent's current state.
+ 
+         Returns:
+             Dictionary containing agent state information
+         """
+         return {
+             'agent_id': self.id,
+             'agent_name': self.name,
+             'philosophy': self.philosophy,
+             'reasoning_depth': self.reasoning_depth,
+             'capital': self.current_capital,
+             'belief_state': self.state.belief_state,
+             'performance': self.state.performance_trace,
+             'consistency': self.state.consistency_score,
+             'decision_count': len(self.state.decision_history),
+             'avg_confidence': np.mean(self.state.confidence_history[-10:]) if self.state.confidence_history else 0.5,
+             'drift_vector': self.state.drift_vector,
+             'memory_decay': self.memory_decay,
+             'timestamp': datetime.datetime.now(),
+         }
+ 
+     def save_state(self, filepath: Optional[str] = None) -> Dict[str, Any]:
+         """
+         Save agent state to file or return serializable state.
+ 
+         Args:
+             filepath: Optional path to save state
+ 
+         Returns:
+             Dictionary containing serializable agent state
+         """
+         state_dict = {
+             'agent_id': self.id,
+             'agent_name': self.name,
+             'agent_state': self.state.dict(),
+             'memory_shell': self.memory_shell.export_state(),
+             'reasoning_depth': self.reasoning_depth,
+             'memory_decay': self.memory_decay,
+             'current_capital': self.current_capital,
+             'initial_capital': self.initial_capital,
+             'timestamp': datetime.datetime.now().isoformat(),
+         }
+ 
+         if filepath:
+             import json
+             with open(filepath, 'w') as f:
+                 # default=str handles non-JSON values (e.g. datetimes) inside agent_state
+                 json.dump(state_dict, f, default=str)
+ 
+         return state_dict
+ 
+     def load_state(self, state_dict: Dict[str, Any]) -> None:
+         """
+         Load agent state from dictionary.
+ 
+         Args:
+             state_dict: Dictionary containing agent state
+         """
+         if state_dict['agent_id'] != self.id:
+             raise ValueError(f"State ID mismatch: {state_dict['agent_id']} vs {self.id}")
+ 
+         # Update basic attributes
+         self.reasoning_depth = state_dict.get('reasoning_depth', self.reasoning_depth)
+         self.memory_decay = state_dict.get('memory_decay', self.memory_decay)
+         self.current_capital = state_dict.get('current_capital', self.current_capital)
+         self.initial_capital = state_dict.get('initial_capital', self.initial_capital)
+ 
+         # Load agent state
+         if 'agent_state' in state_dict:
+             self.state = AgentState.parse_obj(state_dict['agent_state'])
+ 
+         # Load memory shell
+         if 'memory_shell' in state_dict:
+             self.memory_shell.import_state(state_dict['memory_shell'])
+ 
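The save/load pair above follows a simple pattern: serialize to a plain dict keyed by the agent's ID, then refuse to restore a snapshot whose ID does not match. A minimal standalone sketch of that pattern (function names and the dict shape here are illustrative, not the repo's API):

```python
import json

def save_state(agent_id, state, filepath=None):
    """Bundle state under the owning agent's ID; optionally persist as JSON."""
    state_dict = {'agent_id': agent_id, 'state': state}
    if filepath:
        with open(filepath, 'w') as f:
            json.dump(state_dict, f)
    return state_dict

def load_state(agent_id, state_dict):
    """Restore state, refusing a snapshot saved by a different agent."""
    if state_dict['agent_id'] != agent_id:
        raise ValueError(f"State ID mismatch: {state_dict['agent_id']} vs {agent_id}")
    return state_dict['state']

snapshot = save_state('agent-1', {'capital': 100000.0})
restored = load_state('agent-1', snapshot)
```

The ID check makes cross-agent restores fail loudly rather than silently corrupting one agent with another's beliefs.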
+     # Internal symbolic command processors
+     def _reflect_trace(self, agent=None, depth=3) -> Dict[str, Any]:
+         """
+         Trace agent's self-reflection on decision making process.
+ 
+         Args:
+             agent: Optional agent name to reflect on (self if None)
+             depth: Recursion depth for reflection
+ 
+         Returns:
+             Reflection trace results
+         """
+         # If reflecting on self
+         if agent is None or agent == self.name:
+             reflection = self.reasoning_graph.run_reflection(
+                 agent_state=self.state,
+                 depth=depth,
+                 trace_enabled=self.trace_enabled
+             )
+             return {
+                 'source': 'self',
+                 'reflection': reflection,
+                 'depth': depth,
+                 'confidence': reflection.get('confidence', 0.5),
+                 'timestamp': datetime.datetime.now(),
+             }
+         # Otherwise, this is a request to reflect on another agent (handled by portfolio manager)
+         return {
+             'source': 'external',
+             'target': agent,
+             'depth': depth,
+             'status': 'delegated',
+             'timestamp': datetime.datetime.now(),
+         }
+ 
+     def _fork_signal(self, source) -> Dict[str, Any]:
+         """
+         Fork a new signal branch from specified source.
+ 
+         Args:
+             source: Source to fork signal from
+ 
+         Returns:
+             Forked signal results
+         """
+         if source == 'memory':
+             # Fork from memory
+             experiences = self.memory_shell.get_relevant_experiences(limit=3)
+             forked_signals = self.reasoning_graph.generate_from_experiences(experiences)
+             return {
+                 'source': 'memory',
+                 'signals': forked_signals,
+                 'count': len(forked_signals),
+                 'timestamp': datetime.datetime.now(),
+             }
+         elif source == 'beliefs':
+             # Fork from current beliefs
+             top_beliefs = dict(sorted(
+                 self.state.belief_state.items(),
+                 key=lambda x: abs(x[1] - 0.5),
+                 reverse=True
+             )[:5])
+             forked_signals = self.reasoning_graph.generate_from_beliefs(top_beliefs)
+             return {
+                 'source': 'beliefs',
+                 'signals': forked_signals,
+                 'count': len(forked_signals),
+                 'timestamp': datetime.datetime.now(),
+             }
+         else:
+             # Unknown source
+             return {
+                 'source': source,
+                 'signals': [],
+                 'count': 0,
+                 'error': 'unknown_source',
+                 'timestamp': datetime.datetime.now(),
+             }
+ 
+     def _collapse_detect(self, threshold=0.7, reason=None) -> Dict[str, Any]:
+         """
+         Detect potential decision collapse or inconsistency.
+ 
+         Args:
+             threshold: Consistency threshold below which to trigger collapse detection
+             reason: Optional specific reason to check for collapse
+ 
+         Returns:
+             Collapse detection results
+         """
+         # Check consistency score
+         consistency_collapse = self.state.consistency_score < threshold
+ 
+         # Check for specific collapses
+         collapses = {
+             'consistency': consistency_collapse,
+             'confidence': np.mean(self.state.confidence_history[-5:]) < threshold if len(self.state.confidence_history) >= 5 else False,
+             'belief_drift': any(abs(drift) > 0.3 for drift in self.state.drift_vector.values()),
+             'performance': self.state.performance_trace.get('returns', 0) < -0.1 if self.state.performance_trace else False,
+         }
+ 
+         # If specific reason provided, check only that
+         if reason and reason in collapses:
+             collapse_detected = collapses[reason]
+             collapse_reasons = {reason: collapses[reason]}
+         else:
+             # Check all collapses
+             collapse_detected = any(collapses.values())
+             collapse_reasons = {k: v for k, v in collapses.items() if v}
+ 
+         return {
+             'collapse_detected': collapse_detected,
+             'collapse_reasons': collapse_reasons,
+             'consistency_score': self.state.consistency_score,
+             'threshold': threshold,
+             'timestamp': datetime.datetime.now(),
+         }
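The aggregation step in `_collapse_detect` reduces to a small reusable pattern: run named boolean checks, then either filter to one requested reason or report every check that fired. A hedged standalone sketch (the function name and sample checks are illustrative):

```python
def detect_collapse(checks, reason=None):
    """Aggregate named boolean collapse checks.

    Returns (detected, firing_reasons); if `reason` names a known check,
    only that check is consulted.
    """
    if reason is not None and reason in checks:
        return checks[reason], {reason: checks[reason]}
    detected = any(checks.values())
    reasons = {name: fired for name, fired in checks.items() if fired}
    return detected, reasons

detected, why = detect_collapse({'consistency': True, 'confidence': False})
```

Keeping the checks in a dict means a new collapse mode is one new entry, with no change to the aggregation logic.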
+ 
+     def _attribute_weight(self, justification) -> Dict[str, Any]:
+         """
+         Compute attribution weight for a specific justification.
+ 
+         Args:
+             justification: Justification to weight
+ 
+         Returns:
+             Attribution weight results
+         """
+         # Extract key themes from justification
+         themes = self.reasoning_graph.extract_themes(justification)
+ 
+         # Compute alignment with agent's philosophy
+         philosophy_alignment = self.reasoning_graph.compute_alignment(
+             themes=themes,
+             philosophy=self.philosophy
+         )
+ 
+         # Compute weight based on alignment and consistency
+         weight = philosophy_alignment * self.state.consistency_score
+ 
+         return {
+             'justification': justification,
+             'themes': themes,
+             'philosophy_alignment': philosophy_alignment,
+             'consistency_factor': self.state.consistency_score,
+             'attribution_weight': weight,
+             'timestamp': datetime.datetime.now(),
+         }
+ 
+     def _drift_observe(self, vector, bias=0.0) -> Dict[str, Any]:
+         """
+         Observe and record belief drift.
+ 
+         Args:
+             vector: Drift vector to observe
+             bias: Optional bias adjustment
+ 
+         Returns:
+             Drift observation results
+         """
+         # Record drift observation
+         for key, value in vector.items():
+             if key in self.state.drift_vector:
+                 # Existing key, update with bias
+                 self.state.drift_vector[key] = (self.state.drift_vector[key] * 0.7) + (value * 0.3) + bias
+             else:
+                 # New key
+                 self.state.drift_vector[key] = value + bias
+ 
+         # Compute drift magnitude
+         drift_magnitude = np.sqrt(sum(v**2 for v in self.state.drift_vector.values()))
+ 
+         return {
+             'drift_vector': self.state.drift_vector,
+             'drift_magnitude': drift_magnitude,
+             'bias_applied': bias,
+             'observation_count': len(vector),
+             'timestamp': datetime.datetime.now(),
+         }
+ 
+     def execute_command(self, command: str, **kwargs) -> Dict[str, Any]:
+         """
+         Execute an internal symbolic command.
+ 
+         Args:
+             command: Command to execute
+             **kwargs: Command parameters
+ 
+         Returns:
+             Command execution results
+         """
+         if command in self._commands:
+             return self._commands[command](**kwargs)
+         else:
+             return {
+                 'error': 'unknown_command',
+                 'command': command,
+                 'available_commands': list(self._commands.keys()),
+                 'timestamp': datetime.datetime.now(),
+             }
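`execute_command` is a plain dictionary dispatch: look the command name up in a handler table, call it with the keyword arguments, and return a structured error (listing the valid commands) on a miss. A self-contained sketch of that pattern, with an illustrative handler (names here are not the repo's):

```python
def make_dispatcher(commands):
    """Return an execute() that routes symbolic command names to handlers."""
    def execute(command, **kwargs):
        if command in commands:
            return commands[command](**kwargs)
        # Unknown command: report instead of raising, mirroring execute_command
        return {
            'error': 'unknown_command',
            'command': command,
            'available_commands': list(commands),
        }
    return execute

execute = make_dispatcher({'reflect.trace': lambda depth=3: {'depth': depth}})
result = execute('reflect.trace', depth=2)
```

Returning an error dict rather than raising keeps callers (e.g. a portfolio manager issuing commands to many agents) from needing try/except around every dispatch.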
+ 
+     def __repr__(self) -> str:
+         """Generate string representation of agent."""
+         return f"{self.name} Agent (ID: {self.id[:8]}, Philosophy: {self.philosophy})"
+ 
+     def __str__(self) -> str:
+         """Generate human-readable string for agent."""
+         return f"{self.name} Agent: {self.philosophy} (Depth: {self.reasoning_depth})"
src/agents/graham.py ADDED
@@ -0,0 +1,832 @@
+ """
2
+ GrahamAgent - Value Investing Cognitive Agent
3
+
4
+ This module implements Benjamin Graham's value investing philosophy as a
5
+ recursive cognitive agent with specialized market interpretation capabilities.
6
+
7
+ Key characteristics:
8
+ - Focuses on margin of safety and intrinsic value
9
+ - Detects undervalued assets based on fundamentals
10
+ - Maintains skepticism toward market sentiment
11
+ - Prioritizes long-term value over short-term price movements
12
+ - Exhibits patience and discipline with high conviction
13
+
14
+ Internal Notes: The Graham shell simulates the CIRCUIT-FRAGMENT and NULL-FEATURE
15
+ shells for detecting undervalued assets and knowledge boundaries.
16
+ """
17
+
18
+ import datetime
19
+ from typing import Dict, List, Any, Optional
20
+ import numpy as np
21
+
22
+ from .base import BaseAgent, AgentSignal
23
+ from ..cognition.graph import ReasoningGraph
24
+ from ..utils.diagnostics import TracingTools
25
+
26
+
27
+ class GrahamAgent(BaseAgent):
28
+ """
29
+ Agent embodying Benjamin Graham's value investing philosophy.
30
+
31
+ Implements specialized cognitive patterns for:
32
+ - Intrinsic value calculation
33
+ - Margin of safety evaluation
34
+ - Fundamental analysis
35
+ - Value trap detection
36
+ - Long-term perspective
37
+ """
38
+
39
+ def __init__(
40
+ self,
41
+ reasoning_depth: int = 3,
42
+ memory_decay: float = 0.1, # Lower memory decay for long-term perspective
43
+ initial_capital: float = 100000.0,
44
+ margin_of_safety: float = 0.3, # Minimum discount to intrinsic value
45
+ model_provider: str = "anthropic",
46
+ model_name: str = "claude-3-sonnet-20240229",
47
+ trace_enabled: bool = False,
48
+ ):
49
+ """
50
+ Initialize Graham value investing agent.
51
+
52
+ Args:
53
+ reasoning_depth: Depth of recursive reasoning
54
+ memory_decay: Rate of memory deterioration
55
+ initial_capital: Starting capital amount
56
+ margin_of_safety: Minimum discount to intrinsic value requirement
57
+ model_provider: LLM provider
58
+ model_name: Specific model identifier
59
+ trace_enabled: Whether to generate full reasoning traces
60
+ """
61
+ super().__init__(
62
+ name="Graham",
63
+ philosophy="Value investing focused on margin of safety and fundamental analysis",
64
+ reasoning_depth=reasoning_depth,
65
+ memory_decay=memory_decay,
66
+ initial_capital=initial_capital,
67
+ model_provider=model_provider,
68
+ model_name=model_name,
69
+ trace_enabled=trace_enabled,
70
+ )
71
+
72
+ self.margin_of_safety = margin_of_safety
73
+
74
+ # Value investing specific state
75
+ self.state.reflective_state.update({
76
+ 'value_detection_threshold': 0.7,
77
+ 'sentiment_skepticism': 0.8,
78
+ 'patience_factor': 0.9,
79
+ 'fundamental_weighting': 0.8,
80
+ 'technical_weighting': 0.2,
81
+ })
82
+
83
+ # Customize reasoning graph for value investing
84
+ self._configure_reasoning_graph()
85
+
86
+ def _configure_reasoning_graph(self) -> None:
87
+ """Configure the reasoning graph with value investing specific nodes."""
88
+ self.reasoning_graph.add_node(
89
+ "intrinsic_value_analysis",
90
+ fn=self._intrinsic_value_analysis
91
+ )
92
+
93
+ self.reasoning_graph.add_node(
94
+ "margin_of_safety_evaluation",
95
+ fn=self._margin_of_safety_evaluation
96
+ )
97
+
98
+ self.reasoning_graph.add_node(
99
+ "fundamental_analysis",
100
+ fn=self._fundamental_analysis
101
+ )
102
+
103
+ self.reasoning_graph.add_node(
104
+ "value_trap_detection",
105
+ fn=self._value_trap_detection
106
+ )
107
+
108
+ # Configure value investing reasoning flow
109
+ self.reasoning_graph.set_entry_point("intrinsic_value_analysis")
110
+ self.reasoning_graph.add_edge("intrinsic_value_analysis", "margin_of_safety_evaluation")
111
+ self.reasoning_graph.add_edge("margin_of_safety_evaluation", "fundamental_analysis")
112
+ self.reasoning_graph.add_edge("fundamental_analysis", "value_trap_detection")
113
+
114
+ def process_market_data(self, data: Dict[str, Any]) -> Dict[str, Any]:
115
+ """
116
+ Process market data through Graham's value investing lens.
117
+
118
+ Focuses on:
119
+ - Extracting fundamental metrics
120
+ - Calculating intrinsic value estimates
121
+ - Identifying margin of safety opportunities
122
+ - Filtering for value characteristics
123
+
124
+ Args:
125
+ data: Market data dictionary
126
+
127
+ Returns:
128
+ Processed market data with value investing insights
129
+ """
130
+ processed_data = {
131
+ 'timestamp': datetime.datetime.now(),
132
+ 'tickers': {},
133
+ 'market_sentiment': data.get('market_sentiment', {}),
134
+ 'economic_indicators': data.get('economic_indicators', {}),
135
+ 'insights': [],
136
+ }
137
+
138
+ # Process each ticker
139
+ for ticker, ticker_data in data.get('tickers', {}).items():
140
+ # Extract fundamental metrics
141
+ fundamentals = ticker_data.get('fundamentals', {})
142
+ price = ticker_data.get('price', 0)
143
+
144
+ # Skip if insufficient fundamental data
145
+ if not fundamentals or price == 0:
146
+ processed_data['tickers'][ticker] = {
147
+ 'price': price,
148
+ 'analysis': 'insufficient_data',
149
+ 'intrinsic_value': None,
150
+ 'margin_of_safety': None,
151
+ 'recommendation': 'hold',
152
+ }
153
+ continue
154
+
155
+ # Calculate intrinsic value (Graham-style)
156
+ intrinsic_value = self._calculate_intrinsic_value(fundamentals, ticker_data)
157
+
158
+ # Calculate margin of safety
159
+ margin_of_safety = (intrinsic_value - price) / intrinsic_value if intrinsic_value > 0 else 0
160
+
161
+ # Determine if it's a value opportunity
162
+ is_value_opportunity = margin_of_safety >= self.margin_of_safety
163
+
164
+ # Check for value traps
165
+ value_trap_indicators = self._detect_value_trap_indicators(fundamentals, ticker_data)
166
+
167
+ # Generate value-oriented analysis
168
+ analysis = self._generate_value_analysis(
169
+ ticker=ticker,
170
+ fundamentals=fundamentals,
171
+ price=price,
172
+ intrinsic_value=intrinsic_value,
173
+ margin_of_safety=margin_of_safety,
174
+ value_trap_indicators=value_trap_indicators
175
+ )
176
+
177
+ # Store processed ticker data
178
+ processed_data['tickers'][ticker] = {
179
+ 'price': price,
180
+ 'intrinsic_value': intrinsic_value,
181
+ 'margin_of_safety': margin_of_safety,
182
+ 'is_value_opportunity': is_value_opportunity,
183
+ 'value_trap_risk': len(value_trap_indicators) / 5 if value_trap_indicators else 0,
184
+ 'value_trap_indicators': value_trap_indicators,
185
+ 'analysis': analysis,
186
+ 'recommendation': 'buy' if is_value_opportunity and not value_trap_indicators else 'hold',
187
+ 'fundamentals': fundamentals,
188
+ }
189
+
190
+ # Generate insight if it's a strong value opportunity
191
+ if is_value_opportunity and not value_trap_indicators and margin_of_safety > 0.4:
192
+ processed_data['insights'].append({
193
+ 'ticker': ticker,
194
+ 'type': 'strong_value_opportunity',
195
+ 'margin_of_safety': margin_of_safety,
196
+ 'intrinsic_value': intrinsic_value,
197
+ 'current_price': price,
198
+ })
199
+
200
+ # Run reflective trace if enabled
201
+ if self.trace_enabled:
202
+ processed_data['reflection'] = self.execute_command(
203
+ command="reflect.trace",
204
+ agent=self.name,
205
+ depth=self.reasoning_depth
206
+ )
207
+
208
+ return processed_data
209
+
210
+ def _calculate_intrinsic_value(self, fundamentals: Dict[str, Any], ticker_data: Dict[str, Any]) -> float:
211
+ """
212
+ Calculate intrinsic value using Graham's methods.
213
+
214
+ Args:
215
+ fundamentals: Fundamental metrics dict
216
+ ticker_data: Complete ticker data
217
+
218
+ Returns:
219
+ Estimated intrinsic value
220
+ """
221
+ # Extract key metrics
222
+ eps = fundamentals.get('eps', 0)
223
+ book_value = fundamentals.get('book_value_per_share', 0)
224
+ growth_rate = fundamentals.get('growth_rate', 0)
225
+
226
+         # Graham's formula: IV = EPS * (8.5 + 2g) * 4.4 / Y
+         # Where g is the growth rate and Y is the current AAA bond yield,
+         # both expressed in percentage points. Inputs here are decimals
+         # (e.g. 0.045 for 4.5%), so convert before applying the formula;
+         # otherwise 4.4 / 0.045 inflates the estimate roughly 100-fold.
+         bond_yield = ticker_data.get('economic_indicators', {}).get('aaa_bond_yield', 0.045)
+         bond_factor = 4.4 / max(bond_yield * 100, 0.01)  # Prevent division by zero
+ 
+         # Calculate growth-adjusted PE (growth rate converted to percentage points)
+         growth_adjusted_pe = 8.5 + (2 * growth_rate * 100)
+ 
+         # Calculate earnings-based value
+         earnings_value = eps * growth_adjusted_pe * bond_factor if eps > 0 else 0
+ 
+         # Calculate book value with margin
+         book_value_margin = book_value * 1.5  # Graham often looked for stocks below 1.5x book
+ 
+         # Use the lower of the two values for conservatism
+         if earnings_value > 0 and book_value_margin > 0:
+             intrinsic_value = min(earnings_value, book_value_margin)
+         else:
+             intrinsic_value = earnings_value if earnings_value > 0 else book_value_margin
+ 
+         return max(intrinsic_value, 0)  # Ensure non-negative value
+ 
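The valuation logic above can be traced with illustrative numbers. This standalone sketch expresses both g and Y in percentage points, as Graham's formula assumes; the function name and all input values are hypothetical, and decimal inputs are converted inside the function:

```python
def graham_intrinsic_value(eps, book_value_per_share, growth_rate, aaa_yield):
    """V = EPS * (8.5 + 2g) * 4.4 / Y, capped at 1.5x book value for conservatism."""
    g = growth_rate * 100            # growth as percentage points (0.10 -> 10)
    y = max(aaa_yield * 100, 0.01)   # AAA yield as percentage points, guard /0
    earnings_value = eps * (8.5 + 2 * g) * (4.4 / y) if eps > 0 else 0
    book_cap = book_value_per_share * 1.5
    if earnings_value > 0 and book_cap > 0:
        return min(earnings_value, book_cap)
    return max(earnings_value or book_cap, 0)

# EPS $5, 10% growth, 4.4% AAA yield: earnings value = 5 * 28.5 * 1.0 = 142.5,
# but book value $60 caps it at 1.5 * 60 = 90
iv = graham_intrinsic_value(5.0, 60.0, 0.10, 0.044)
```

Taking the lower of the earnings-based and book-based figures is the conservative choice: the estimate never relies on optimistic earnings growth when the balance sheet does not support it.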
+     def _detect_value_trap_indicators(self, fundamentals: Dict[str, Any], ticker_data: Dict[str, Any]) -> List[str]:
+         """
+         Detect potential value trap indicators.
+ 
+         Args:
+             fundamentals: Fundamental metrics dict
+             ticker_data: Complete ticker data
+ 
+         Returns:
+             List of value trap indicators
+         """
+         value_trap_indicators = []
+ 
+         # Check for declining earnings
+         if fundamentals.get('earnings_growth', 0) < -0.1:
+             value_trap_indicators.append('declining_earnings')
+ 
+         # Check for high debt
+         if fundamentals.get('debt_to_equity', 0) > 1.5:
+             value_trap_indicators.append('high_debt')
+ 
+         # Check for deteriorating financials
+         if fundamentals.get('return_on_equity', 0) < 0.05:
+             value_trap_indicators.append('low_return_on_equity')
+ 
+         # Check for industry decline
+         if ticker_data.get('sector', {}).get('decline', False):
+             value_trap_indicators.append('industry_decline')
+ 
+         # Check for negative free cash flow
+         if fundamentals.get('free_cash_flow', 0) < 0:
+             value_trap_indicators.append('negative_cash_flow')
+ 
+         return value_trap_indicators
+ 
+     def _generate_value_analysis(self, ticker: str, fundamentals: Dict[str, Any],
+                                  price: float, intrinsic_value: float,
+                                  margin_of_safety: float, value_trap_indicators: List[str]) -> str:
+         """
+         Generate value investing analysis summary.
+ 
+         Args:
+             ticker: Stock ticker
+             fundamentals: Fundamental metrics
+             price: Current price
+             intrinsic_value: Calculated intrinsic value
+             margin_of_safety: Current margin of safety
+             value_trap_indicators: List of value trap indicators
+ 
+         Returns:
+             Analysis summary text
+         """
+         # Format for better readability
+         iv_formatted = f"${intrinsic_value:.2f}"
+         price_formatted = f"${price:.2f}"
+         mos_percentage = f"{margin_of_safety * 100:.1f}%"
+ 
+         # Base analysis
+         if margin_of_safety >= self.margin_of_safety:
+             base_analysis = (f"{ticker} appears undervalued. Current price {price_formatted} vs. "
+                              f"intrinsic value estimate {iv_formatted}, providing a "
+                              f"{mos_percentage} margin of safety.")
+         elif margin_of_safety > 0:
+             base_analysis = (f"{ticker} is moderately priced. Current price {price_formatted} vs. "
+                              f"intrinsic value estimate {iv_formatted}, providing only a "
+                              f"{mos_percentage} margin of safety.")
+         else:
+             base_analysis = (f"{ticker} appears overvalued. Current price {price_formatted} vs. "
+                              f"intrinsic value estimate {iv_formatted}, providing no "
+                              f"margin of safety.")
+ 
+         # Add value trap indicators if present
+         if value_trap_indicators:
+             trap_text = ", ".join(value_trap_indicators)
+             base_analysis += f" Warning: Potential value trap indicators detected: {trap_text}."
+ 
+         # Add fundamental highlights
+         fundamental_highlights = []
+         if fundamentals.get('pe_ratio', 0) > 0:
+             fundamental_highlights.append(f"P/E ratio: {fundamentals.get('pe_ratio', 0):.2f}")
+         if fundamentals.get('price_to_book', 0) > 0:
+             fundamental_highlights.append(f"P/B ratio: {fundamentals.get('price_to_book', 0):.2f}")
+         if fundamentals.get('dividend_yield', 0) > 0:
+             fundamental_highlights.append(f"Dividend yield: {fundamentals.get('dividend_yield', 0) * 100:.2f}%")
+ 
+         if fundamental_highlights:
+             base_analysis += " Key metrics: " + ", ".join(fundamental_highlights) + "."
+ 
+         return base_analysis
+ 
+     def generate_signals(self, processed_data: Dict[str, Any]) -> List[AgentSignal]:
+         """
+         Generate investment signals based on processed value investing analysis.
+ 
+         Args:
+             processed_data: Processed market data with value analysis
+ 
+         Returns:
+             List of investment signals with attribution
+         """
+         signals = []
+ 
+         for ticker, ticker_data in processed_data.get('tickers', {}).items():
+             # Skip if insufficient data
+             if ticker_data.get('analysis') == 'insufficient_data':
+                 continue
+ 
+             # Determine action based on value characteristics
+             is_value_opportunity = ticker_data.get('is_value_opportunity', False)
+             value_trap_risk = ticker_data.get('value_trap_risk', 0)
+             margin_of_safety = ticker_data.get('margin_of_safety', 0)
+ 
+             # Skip if no clear signal
+             if not is_value_opportunity and margin_of_safety <= 0:
+                 continue
+ 
+             # Determine action
+             if is_value_opportunity and value_trap_risk < 0.3:
+                 action = 'buy'
+                 # Scale confidence based on margin of safety
+                 confidence = min(0.5 + (margin_of_safety * 0.5), 0.95)
+ 
+                 # Scale quantity based on conviction
+                 max_allocation = 0.1  # Max 10% of portfolio in one position
+                 allocation = max_allocation * confidence
+                 quantity = int((self.current_capital * allocation) / ticker_data.get('price', 1))
+ 
+                 # Ensure minimum quantity
+                 quantity = max(quantity, 1)
+ 
+                 # Create signal dictionary
+                 signal = {
+                     'ticker': ticker,
+                     'action': action,
+                     'confidence': confidence,
+                     'quantity': quantity,
+                     'reasoning': f"Value investment opportunity with {margin_of_safety:.1%} margin of safety. {ticker_data.get('analysis', '')}",
+                     'intent': "Capitalize on identified value opportunity with sufficient margin of safety",
+                     'value_basis': "Intrinsic value significantly exceeds current market price, presenting favorable risk-reward",
+                 }
+ 
+                 signals.append(signal)
+             elif margin_of_safety > 0 and margin_of_safety < self.margin_of_safety and value_trap_risk < 0.2:
+                 # Watchlist signal - lower confidence
+                 action = 'buy'
+                 confidence = 0.3 + (margin_of_safety * 0.3)  # Lower confidence
+ 
+                 # Smaller position size for watchlist items
+                 max_allocation = 0.05  # Max 5% of portfolio
+                 allocation = max_allocation * confidence
+                 quantity = int((self.current_capital * allocation) / ticker_data.get('price', 1))
+ 
+                 # Ensure minimum quantity
+                 quantity = max(quantity, 1)
+ 
+                 # Create signal dictionary
+                 signal = {
+                     'ticker': ticker,
+                     'action': action,
+                     'confidence': confidence,
+                     'quantity': quantity,
+                     'reasoning': f"Moderate value opportunity with {margin_of_safety:.1%} margin of safety. {ticker_data.get('analysis', '')}",
+                     'intent': "Establish small position in moderately valued company with potential",
+                     'value_basis': "Price below intrinsic value but insufficient margin of safety for full position",
+                 }
+ 
+                 signals.append(signal)
+ 
+         # Apply attribution to signals
+         attributed_signals = self.attribute_signals(signals)
+ 
+         # Log trace if enabled
+         if self.trace_enabled:
+             for signal in attributed_signals:
+                 self.tracer.record_signal(signal)
+ 
+         return attributed_signals
+ 
+     # Value investing specific reasoning nodes
+     def _intrinsic_value_analysis(self, data: Dict[str, Any]) -> Dict[str, Any]:
+         """
+         Analyze intrinsic value of securities.
+ 
+         Args:
+             data: Market data
+ 
+         Returns:
+             Intrinsic value analysis results
+         """
+         results = {
+             'ticker_valuations': {},
+             'timestamp': datetime.datetime.now(),
+         }
+ 
+         for ticker, ticker_data in data.get('tickers', {}).items():
+             fundamentals = ticker_data.get('fundamentals', {})
+             price = ticker_data.get('price', 0)
+ 
+             if not fundamentals or price == 0:
+                 results['ticker_valuations'][ticker] = {
+                     'intrinsic_value': None,
+                     'status': 'insufficient_data',
+                 }
+                 continue
+ 
+             # Calculate intrinsic value
+             intrinsic_value = self._calculate_intrinsic_value(fundamentals, ticker_data)
+ 
+             # Determine valuation status
+             if intrinsic_value > price * 1.3:  # 30% above price
+                 status = 'significantly_undervalued'
+             elif intrinsic_value > price * 1.1:  # 10% above price
+                 status = 'moderately_undervalued'
+             elif intrinsic_value > price:
+                 status = 'slightly_undervalued'
+             elif intrinsic_value > price * 0.9:  # Within 10% below price
+                 status = 'fairly_valued'
+             else:
+                 status = 'overvalued'
+ 
+             results['ticker_valuations'][ticker] = {
+                 'intrinsic_value': intrinsic_value,
+                 'price': price,
+                 'ratio': intrinsic_value / price if price > 0 else 0,
+                 'status': status,
+             }
+ 
+         return results
+ 
+     def _margin_of_safety_evaluation(self, intrinsic_value_results: Dict[str, Any]) -> Dict[str, Any]:
+         """
+         Evaluate margin of safety for each security.
+ 
+         Args:
+             intrinsic_value_results: Intrinsic value analysis results
+ 
+         Returns:
+             Margin of safety evaluation results
+         """
+         results = {
+             'margin_of_safety_analysis': {},
+             'value_opportunities': [],
+             'timestamp': datetime.datetime.now(),
+         }
+ 
+         for ticker, valuation in intrinsic_value_results.get('ticker_valuations', {}).items():
+             if valuation.get('status') == 'insufficient_data':
+                 continue
+ 
+             intrinsic_value = valuation.get('intrinsic_value', 0)
+             price = valuation.get('price', 0)
+ 
+             if intrinsic_value <= 0 or price <= 0:
+                 continue
+ 
+             # Calculate margin of safety
+             margin_of_safety = (intrinsic_value - price) / intrinsic_value
+ 
+             # Determine confidence based on margin of safety
+             if margin_of_safety >= self.margin_of_safety:
+                 confidence = min(0.5 + (margin_of_safety * 0.5), 0.95)
+                 meets_criteria = True
+             else:
+                 confidence = max(0.2, margin_of_safety * 2)
+                 meets_criteria = False
+ 
+             results['margin_of_safety_analysis'][ticker] = {
+                 'margin_of_safety': margin_of_safety,
+                 'meets_criteria': meets_criteria,
+                 'confidence': confidence,
+             }
+ 
+             # Add to value opportunities if meets criteria
+             if meets_criteria:
+                 results['value_opportunities'].append({
+                     'ticker': ticker,
+                     'margin_of_safety': margin_of_safety,
+                     'confidence': confidence,
+                     'intrinsic_value': intrinsic_value,
+                     'price': price,
+                 })
+ 
+         # Sort value opportunities by margin of safety
+         results['value_opportunities'] = sorted(
+             results['value_opportunities'],
+             key=lambda x: x['margin_of_safety'],
+             reverse=True
+         )
+ 
+         return results
+ 
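The margin-of-safety arithmetic and its confidence mapping used above reduce to two small functions. A standalone sketch (the function names are illustrative; the `0.3` threshold matches the agent's default `margin_of_safety`):

```python
def margin_of_safety(intrinsic_value, price):
    """Fractional discount of price relative to intrinsic value."""
    if intrinsic_value <= 0:
        return 0.0
    return (intrinsic_value - price) / intrinsic_value

def mos_confidence(mos, threshold=0.3):
    """Map a margin of safety to (confidence, meets_criteria), as in the evaluator."""
    if mos >= threshold:
        return min(0.5 + mos * 0.5, 0.95), True   # capped so confidence is never certain
    return max(0.2, mos * 2), False               # floor keeps weak signals visible

# Intrinsic value $100, price $60: 40% discount, well above the 30% threshold
mos = margin_of_safety(100.0, 60.0)
```

Note the deliberate cap at 0.95 and floor at 0.2: even an extreme discount never yields full certainty, and a marginal one still registers as a weak watchlist signal.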
+ def _fundamental_analysis(self, safety_results: Dict[str, Any]) -> Dict[str, Any]:
541
+ """
542
+ Perform fundamental analysis on value opportunities.
543
+
544
+ Args:
545
+ safety_results: Margin of safety evaluation results
546
+
547
+ Returns:
548
+ Fundamental analysis results
549
+ """
550
+ results = {
551
+ 'fundamental_quality': {},
552
+ 'quality_ranking': [],
553
+ 'timestamp': datetime.datetime.now(),
554
+ }
555
+
556
+ # Process each value opportunity
557
+ for opportunity in safety_results.get('value_opportunities', []):
558
+ ticker = opportunity.get('ticker')
559
+
560
+ # Get ticker data from state
561
+ ticker_data = self.state.working_memory.get('market_data', {}).get('tickers', {}).get(ticker, {})
562
+ fundamentals = ticker_data.get('fundamentals', {})
563
+
564
+ if not fundamentals:
565
+ continue
566
+
567
+ # Calculate fundamental quality score
568
+ quality_score = self._calculate_fundamental_quality(fundamentals)
569
+
570
+ # Store fundamental quality
571
+ results['fundamental_quality'][ticker] = {
572
+ 'quality_score': quality_score,
573
+ 'roe': fundamentals.get('return_on_equity', 0),
574
+ 'debt_to_equity': fundamentals.get('debt_to_equity', 0),
575
+ 'current_ratio': fundamentals.get('current_ratio', 0),
576
+ 'free_cash_flow': fundamentals.get('free_cash_flow', 0),
577
+ 'dividend_history': fundamentals.get('dividend_history', []),
578
+ }
579
+
580
+ # Add to quality ranking
581
+ results['quality_ranking'].append({
582
+ 'ticker': ticker,
583
+ 'quality_score': quality_score,
584
+ 'margin_of_safety': opportunity.get('margin_of_safety', 0),
585
+ # Combined score weights both quality and value
586
+ 'combined_score': quality_score * 0.4 + opportunity.get('margin_of_safety', 0) * 0.6,
587
+ })
588
+
589
+ # Sort quality ranking by combined score
590
+ results['quality_ranking'] = sorted(
591
+ results['quality_ranking'],
592
+ key=lambda x: x['combined_score'],
593
+ reverse=True
594
+ )
595
+
596
+ return results
597
+
+    def _calculate_fundamental_quality(self, fundamentals: Dict[str, Any]) -> float:
+        """
+        Calculate fundamental quality score.
+
+        Args:
+            fundamentals: Fundamental metrics
+
+        Returns:
+            Quality score (0-1)
+        """
+        # Initialize score
+        score = 0.5  # Start at neutral
+
+        # Factor 1: Return on Equity (higher is better)
+        roe = fundamentals.get('return_on_equity', 0)
+        if roe > 0.2:  # Excellent ROE
+            score += 0.1
+        elif roe > 0.15:  # Very good ROE
+            score += 0.075
+        elif roe > 0.1:  # Good ROE
+            score += 0.05
+        elif roe < 0:  # Negative ROE (checked before the poor-ROE band so it is reachable)
+            score -= 0.1
+        elif roe < 0.05:  # Poor ROE
+            score -= 0.05
+
+        # Factor 2: Debt to Equity (lower is better)
+        debt_to_equity = fundamentals.get('debt_to_equity', 0)
+        if debt_to_equity < 0.3:  # Very low debt
+            score += 0.1
+        elif debt_to_equity < 0.5:  # Low debt
+            score += 0.05
+        elif debt_to_equity > 1.5:  # Very high debt (checked before the high-debt band so it is reachable)
+            score -= 0.1
+        elif debt_to_equity > 1.0:  # High debt
+            score -= 0.05
+
+        # Factor 3: Current Ratio (higher is better)
+        current_ratio = fundamentals.get('current_ratio', 0)
+        if current_ratio > 3:  # Excellent liquidity
+            score += 0.075
+        elif current_ratio > 2:  # Very good liquidity
+            score += 0.05
+        elif current_ratio > 1.5:  # Good liquidity
+            score += 0.025
+        elif current_ratio < 1:  # Poor liquidity
+            score -= 0.1
+
+        # Factor 4: Free Cash Flow (positive is better)
+        fcf = fundamentals.get('free_cash_flow', 0)
+        if fcf > 0:  # Positive FCF
+            score += 0.075
+        else:  # Negative FCF
+            score -= 0.1
+
+        # Factor 5: Dividend History (consistent is better)
+        dividend_history = fundamentals.get('dividend_history', [])
+        if len(dividend_history) >= 5 and all(d > 0 for d in dividend_history):
+            # Consistent dividends for 5+ years
+            score += 0.075
+        elif len(dividend_history) >= 3 and all(d > 0 for d in dividend_history):
+            # Consistent dividends for 3+ years
+            score += 0.05
+
+        # Ensure score is between 0 and 1
+        return max(0, min(1, score))
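A standalone sketch of how the scoring bands above combine, reduced to three of the five factors for brevity. The inputs are hypothetical; note that the negative-ROE and very-high-debt bands must be tested before their weaker siblings, or they become unreachable:

```python
# Compact sketch of the banded quality scoring above (ROE, debt/equity,
# and free cash flow only; current ratio and dividends omitted).
def quality_score(roe: float, debt_to_equity: float, fcf: float) -> float:
    score = 0.5  # neutral starting point
    if roe > 0.2:
        score += 0.1
    elif roe > 0.15:
        score += 0.075
    elif roe > 0.1:
        score += 0.05
    elif roe < 0:          # negative ROE before the poor-ROE band
        score -= 0.1
    elif roe < 0.05:
        score -= 0.05
    if debt_to_equity < 0.3:
        score += 0.1
    elif debt_to_equity < 0.5:
        score += 0.05
    elif debt_to_equity > 1.5:  # very high debt before the high-debt band
        score -= 0.1
    elif debt_to_equity > 1.0:
        score -= 0.05
    score += 0.075 if fcf > 0 else -0.1
    return max(0.0, min(1.0, score))

print(quality_score(roe=0.22, debt_to_equity=0.25, fcf=1.0e6))  # strong company
print(quality_score(roe=-0.05, debt_to_equity=2.0, fcf=-1.0))   # weak company
```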
+
+    def _value_trap_detection(self, fundamental_results: Dict[str, Any]) -> Dict[str, Any]:
+        """
+        Detect potential value traps among value opportunities.
+
+        Args:
+            fundamental_results: Fundamental analysis results
+
+        Returns:
+            Value trap detection results
+        """
+        results = {
+            'value_trap_analysis': {},
+            'safe_opportunities': [],
+            'timestamp': datetime.datetime.now(),
+        }
+
+        # Process each quality-ranked opportunity
+        for opportunity in fundamental_results.get('quality_ranking', []):
+            ticker = opportunity.get('ticker')
+
+            # Get ticker data from state
+            ticker_data = self.state.working_memory.get('market_data', {}).get('tickers', {}).get(ticker, {})
+            fundamentals = ticker_data.get('fundamentals', {})
+
+            if not fundamentals:
+                continue
+
+            # Detect value trap indicators
+            value_trap_indicators = self._detect_value_trap_indicators(fundamentals, ticker_data)
+            value_trap_risk = len(value_trap_indicators) / 5 if value_trap_indicators else 0
+
+            # Store value trap analysis
+            results['value_trap_analysis'][ticker] = {
+                'value_trap_risk': value_trap_risk,
+                'value_trap_indicators': value_trap_indicators,
+                'quality_score': opportunity.get('quality_score', 0),
+                'margin_of_safety': opportunity.get('margin_of_safety', 0),
+                'combined_score': opportunity.get('combined_score', 0),
+            }
+
+            # Add to safe opportunities if low value trap risk
+            if value_trap_risk < 0.3:
+                results['safe_opportunities'].append({
+                    'ticker': ticker,
+                    'value_trap_risk': value_trap_risk,
+                    'quality_score': opportunity.get('quality_score', 0),
+                    'margin_of_safety': opportunity.get('margin_of_safety', 0),
+                    'combined_score': opportunity.get('combined_score', 0),
+                })
+
+        # Sort safe opportunities by combined score
+        results['safe_opportunities'] = sorted(
+            results['safe_opportunities'],
+            key=lambda x: x['combined_score'],
+            reverse=True
+        )
+
+        return results
+
+    def run_analysis_shell(self, market_data: Dict[str, Any]) -> Dict[str, Any]:
+        """
+        Run a complete analysis shell for Graham value criteria.
+
+        This method implements a full CIRCUIT-FRAGMENT shell for detecting
+        undervalued assets and a NULL-FEATURE shell for knowledge boundaries.
+
+        Args:
+            market_data: Raw market data
+
+        Returns:
+            Complete value analysis results
+        """
+        # Store market data in working memory for value trap detection
+        self.state.working_memory['market_data'] = market_data
+
+        # Process market data (external interface)
+        processed_data = self.process_market_data(market_data)
+
+        # Run reasoning graph (internal pipeline)
+        initial_results = {'tickers': processed_data.get('tickers', {})}
+
+        intrinsic_value_results = self._intrinsic_value_analysis(initial_results)
+        safety_results = self._margin_of_safety_evaluation(intrinsic_value_results)
+        fundamental_results = self._fundamental_analysis(safety_results)
+        trap_results = self._value_trap_detection(fundamental_results)
+
+        # Compile complete results
+        complete_results = {
+            'processed_data': processed_data,
+            'intrinsic_value_results': intrinsic_value_results,
+            'safety_results': safety_results,
+            'fundamental_results': fundamental_results,
+            'trap_results': trap_results,
+            'final_recommendations': trap_results.get('safe_opportunities', []),
+            'timestamp': datetime.datetime.now(),
+        }
+
+        # Check for collapse conditions
+        collapse_check = self.execute_command(
+            command="collapse.detect",
+            threshold=0.7,
+            reason="consistency"
+        )
+
+        if collapse_check.get('collapse_detected', False):
+            complete_results['warnings'] = {
+                'collapse_detected': True,
+                'collapse_reasons': collapse_check.get('collapse_reasons', {}),
+                'message': "Potential inconsistency detected in value analysis process.",
+            }
+
+        return complete_results
+
+    def adjust_strategy(self, performance_metrics: Dict[str, Any]) -> None:
+        """
+        Adjust Graham strategy based on performance feedback.
+
+        Args:
+            performance_metrics: Dictionary with performance metrics
+        """
+        # Extract relevant metrics
+        win_rate = performance_metrics.get('win_rate', 0.5)
+        avg_return = performance_metrics.get('avg_return', 0)
+        max_drawdown = performance_metrics.get('max_drawdown', 0)
+
+        # Adjust margin of safety based on win rate
+        if win_rate < 0.4:  # Poor win rate
+            self.margin_of_safety = min(self.margin_of_safety + 0.05, 0.5)  # Increase safety margin
+        elif win_rate > 0.7:  # Excellent win rate
+            self.margin_of_safety = max(self.margin_of_safety - 0.05, 0.2)  # Can be less conservative
+
+        # Adjust value detection threshold based on returns
+        if avg_return < -0.05:  # Significant negative returns
+            self.state.reflective_state['value_detection_threshold'] = min(
+                self.state.reflective_state.get('value_detection_threshold', 0.7) + 0.05,
+                0.9
+            )
+        elif avg_return > 0.1:  # Strong positive returns
+            self.state.reflective_state['value_detection_threshold'] = max(
+                self.state.reflective_state.get('value_detection_threshold', 0.7) - 0.05,
+                0.6
+            )
+
+        # Adjust sentiment skepticism based on drawdown
+        if max_drawdown > 0.15:  # Large drawdown
+            self.state.reflective_state['sentiment_skepticism'] = min(
+                self.state.reflective_state.get('sentiment_skepticism', 0.8) + 0.05,
+                0.95
+            )
+
+        # Update drift vector
+        drift_vector = {
+            'margin_of_safety': self.margin_of_safety - 0.3,  # Drift from initial value
+            'value_detection': self.state.reflective_state.get('value_detection_threshold', 0.7) - 0.7,
+            'sentiment_skepticism': self.state.reflective_state.get('sentiment_skepticism', 0.8) - 0.8,
+        }
+
+        # Observe drift for interpretability
+        self.execute_command(
+            command="drift.observe",
+            vector=drift_vector,
+            bias=0.0
+        )
+
+    def __repr__(self) -> str:
+        """Generate string representation of Graham agent."""
+        return f"Graham Value Agent (MoS: {self.margin_of_safety:.2f}, Depth: {self.reasoning_depth})"
+
src/cognition/attribution.py ADDED
@@ -0,0 +1,755 @@
+ """
+ AttributionTracer - Decision Provenance and Causal Tracing Framework
+
+ This module implements the attribution tracing architecture that enables
+ transparent decision provenance for all agents in the AGI-HEDGE-FUND system.
+
+ Key capabilities:
+ - Multi-level attribution across reasoning chains
+ - Causal tracing from decision back to evidence
+ - Confidence weighting of attribution factors
+ - Value-weighted attribution alignment
+ - Attribution visualization for interpretability
+
+ Internal Note: The attribution tracer encodes the ECHO-ATTRIBUTION and ATTRIBUTION-REFLECT
+ interpretability shells for causal path tracing and attribution transparency.
+ """
+
+ import datetime
+ import uuid
+ import math
+ from typing import Dict, List, Any, Optional, Tuple, Set
+ import numpy as np
+ from collections import defaultdict
+
+ from pydantic import BaseModel, Field
+
+
+ class AttributionEntry(BaseModel):
+     """Single attribution entry linking a decision to a cause."""
+
+     id: str = Field(default_factory=lambda: str(uuid.uuid4()))
+     source: str = Field(...)  # Source ID (e.g., memory ID, evidence ID)
+     source_type: str = Field(...)  # Type of source (e.g., "memory", "evidence", "reasoning")
+     target: str = Field(...)  # Target ID (e.g., decision ID, reasoning step)
+     weight: float = Field(default=1.0)  # Attribution weight (0-1)
+     confidence: float = Field(default=1.0)  # Confidence in attribution (0-1)
+     timestamp: datetime.datetime = Field(default_factory=datetime.datetime.now)
+     description: Optional[str] = Field(default=None)  # Optional attribution description
+     value_alignment: Optional[float] = Field(default=None)  # Alignment with agent values (0-1)
+
+
+ class AttributionChain(BaseModel):
+     """Chain of attribution entries forming a causal path."""
+
+     id: str = Field(default_factory=lambda: str(uuid.uuid4()))
+     entries: List[AttributionEntry] = Field(default_factory=list)
+     start_point: str = Field(...)  # ID of chain origin
+     end_point: str = Field(...)  # ID of chain destination
+     total_weight: float = Field(default=1.0)  # Product of weights along chain
+     confidence: float = Field(default=1.0)  # Overall chain confidence
+     timestamp: datetime.datetime = Field(default_factory=datetime.datetime.now)
+
+
+ class AttributionGraph(BaseModel):
+     """Complete attribution graph for a decision."""
+
+     id: str = Field(default_factory=lambda: str(uuid.uuid4()))
+     decision_id: str = Field(...)  # ID of the decision being attributed
+     chains: List[AttributionChain] = Field(default_factory=list)
+     sources: Dict[str, Dict[str, Any]] = Field(default_factory=dict)  # Source metadata
+     timestamp: datetime.datetime = Field(default_factory=datetime.datetime.now)
+
+     def add_chain(self, chain: AttributionChain) -> None:
+         """Add attribution chain to graph."""
+         self.chains.append(chain)
+
+     def add_source(self, source_id: str, metadata: Dict[str, Any]) -> None:
+         """Add source metadata to graph."""
+         self.sources[source_id] = metadata
+
+     def calculate_source_contributions(self) -> Dict[str, float]:
+         """Calculate normalized contribution of each source to decision."""
+         # Initialize contributions
+         contributions = defaultdict(float)
+
+         # Sum weights from all chains
+         for chain in self.chains:
+             for entry in chain.entries:
+                 # Add contribution weighted by chain confidence
+                 contributions[entry.source] += entry.weight * chain.confidence
+
+         # Normalize contributions
+         total = sum(contributions.values())
+         if total > 0:
+             for source in contributions:
+                 contributions[source] /= total
+
+         return dict(contributions)
+
+
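The normalization in `calculate_source_contributions` can be sketched without the pydantic models: each entry contributes `weight * chain_confidence`, and the totals are then scaled to sum to 1. The chains and source names below are hypothetical:

```python
from collections import defaultdict

# Hypothetical chains: (chain confidence, [(source, entry weight), ...])
chains = [
    (1.0, [("belief:AAPL", 0.8)]),
    (0.9, [("step:0", 0.6), ("working_memory:AAPL", 0.5)]),
]

# Accumulate confidence-weighted contributions per source
contributions = defaultdict(float)
for confidence, entries in chains:
    for source, weight in entries:
        contributions[source] += weight * confidence

# Normalize so contributions sum to 1
total = sum(contributions.values())
normalized = {source: w / total for source, w in contributions.items()}
print(normalized)
```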
+ class AttributionTracer:
+     """
+     Attribution tracing engine for causal decision provenance.
+
+     Enables:
+     - Tracing the causal path from decisions back to evidence
+     - Weighting attribution factors by confidence and relevance
+     - Aligning attribution with agent value system
+     - Visualizing attribution patterns for interpretability
+     """
+
+     def __init__(self):
+         """Initialize attribution tracer."""
+         self.attribution_history: Dict[str, AttributionGraph] = {}
+         self.trace_registry: Dict[str, Dict[str, Any]] = {}
+         self.value_weights: Dict[str, float] = {}
+
+     def trace_attribution(self, signal: Dict[str, Any], agent_state: Dict[str, Any],
+                           reasoning_depth: int = 3) -> Dict[str, Any]:
+         """
+         Trace attribution for a decision signal.
+
+         Args:
+             signal: Decision signal
+             agent_state: Agent's current state
+             reasoning_depth: Depth of attribution tracing
+
+         Returns:
+             Attribution trace results
+         """
+         # Generate decision ID if not present
+         decision_id = signal.get("signal_id", str(uuid.uuid4()))
+
+         # Create attribution graph
+         attribution_graph = AttributionGraph(
+             decision_id=decision_id,
+         )
+
+         # Extract signal components for attribution
+         ticker = signal.get("ticker", "")
+         action = signal.get("action", "")
+         confidence = signal.get("confidence", 0.5)
+         reasoning = signal.get("reasoning", "")
+         intent = signal.get("intent", "")
+         value_basis = signal.get("value_basis", "")
+
+         # Extract evidence sources from agent state
+         evidence_sources = self._extract_evidence_sources(agent_state, ticker, action)
+
+         # Process reasoning to extract reasoning steps
+         reasoning_steps = self._extract_reasoning_steps(reasoning)
+
+         # Generate attribution chains
+         chains = self._generate_attribution_chains(
+             decision_id=decision_id,
+             evidence_sources=evidence_sources,
+             reasoning_steps=reasoning_steps,
+             intent=intent,
+             value_basis=value_basis,
+             confidence=confidence,
+             reasoning_depth=reasoning_depth
+         )
+
+         # Add chains to graph
+         for chain in chains:
+             attribution_graph.add_chain(chain)
+
+         # Add source metadata
+         for source_id, metadata in evidence_sources.items():
+             attribution_graph.add_source(source_id, metadata)
+
+         # Calculate source contributions
+         source_contributions = attribution_graph.calculate_source_contributions()
+
+         # Store in history
+         self.attribution_history[decision_id] = attribution_graph
+
+         # Prepare result
+         trace_id = str(uuid.uuid4())
+
+         # Store trace in registry
+         self.trace_registry[trace_id] = {
+             "attribution_graph": attribution_graph,
+             "decision_id": decision_id,
+             "timestamp": datetime.datetime.now(),
+         }
+
+         # Create attribution trace output
+         attribution_trace = {
+             "trace_id": trace_id,
+             "decision_id": decision_id,
+             "attribution_map": source_contributions,
+             "confidence": confidence,
+             "top_factors": self._get_top_attribution_factors(source_contributions, 5),
+             "value_alignment": self._calculate_value_alignment(value_basis, source_contributions),
+             "reasoning_depth": reasoning_depth,
+             "timestamp": datetime.datetime.now().isoformat(),
+         }
+
+         return attribution_trace
+
+     def _extract_evidence_sources(self, agent_state: Dict[str, Any],
+                                   ticker: str, action: str) -> Dict[str, Dict[str, Any]]:
+         """
+         Extract evidence sources from agent state.
+
+         Args:
+             agent_state: Agent's current state
+             ticker: Stock ticker
+             action: Decision action
+
+         Returns:
+             Dictionary of evidence sources
+         """
+         evidence_sources = {}
+
+         # Extract from belief state
+         belief_state = agent_state.get("belief_state", {})
+         if ticker in belief_state:
+             source_id = f"belief:{ticker}"
+             evidence_sources[source_id] = {
+                 "type": "belief",
+                 "ticker": ticker,
+                 "value": belief_state[ticker],
+                 "description": f"Belief about {ticker}",
+             }
+
+         # Extract from working memory
+         working_memory = agent_state.get("working_memory", {})
+
+         # Check for ticker-specific data in working memory
+         if ticker in working_memory:
+             source_id = f"working_memory:{ticker}"
+             evidence_sources[source_id] = {
+                 "type": "working_memory",
+                 "ticker": ticker,
+                 "data": working_memory[ticker],
+                 "description": f"Current analysis of {ticker}",
+             }
+
+         # Extract from performance trace if action is based on past performance
+         performance_trace = agent_state.get("performance_trace", {})
+         if ticker in performance_trace:
+             source_id = f"performance:{ticker}"
+             evidence_sources[source_id] = {
+                 "type": "performance",
+                 "ticker": ticker,
+                 "performance": performance_trace[ticker],
+                 "description": f"Performance history of {ticker}",
+             }
+
+         # Extract from decision history
+         decision_history = agent_state.get("decision_history", [])
+         for i, decision in enumerate(decision_history):
+             if decision.get("ticker") == ticker and decision.get("action") == action:
+                 source_id = f"past_decision:{i}:{ticker}"
+                 evidence_sources[source_id] = {
+                     "type": "past_decision",
+                     "ticker": ticker,
+                     "action": action,
+                     "decision": decision,
+                     "description": f"Past {action} decision for {ticker}",
+                 }
+
+         return evidence_sources
+
+     def _extract_reasoning_steps(self, reasoning: str) -> List[Dict[str, Any]]:
+         """
+         Extract reasoning steps from reasoning string.
+
+         Args:
+             reasoning: Reasoning string
+
+         Returns:
+             List of reasoning steps
+         """
+         # Simple implementation: split by periods or line breaks
+         sentences = [s.strip() for s in reasoning.replace('\n', '. ').split('.') if s.strip()]
+
+         reasoning_steps = []
+         for i, sentence in enumerate(sentences):
+             step_id = f"step:{i}"
+             reasoning_steps.append({
+                 "id": step_id,
+                 "text": sentence,
+                 "position": i,
+                 "type": "reasoning_step",
+             })
+
+         return reasoning_steps
+
+     def _generate_attribution_chains(self, decision_id: str, evidence_sources: Dict[str, Dict[str, Any]],
+                                      reasoning_steps: List[Dict[str, Any]], intent: str, value_basis: str,
+                                      confidence: float, reasoning_depth: int) -> List[AttributionChain]:
+         """
+         Generate attribution chains linking decision to evidence.
+
+         Args:
+             decision_id: Decision ID
+             evidence_sources: Evidence sources
+             reasoning_steps: Reasoning steps
+             intent: Decision intent
+             value_basis: Value basis for decision
+             confidence: Decision confidence
+             reasoning_depth: Depth of attribution tracing
+
+         Returns:
+             List of attribution chains
+         """
+         attribution_chains = []
+
+         # Define end point (the decision itself)
+         end_point = decision_id
+
+         # Case 1: Direct evidence -> decision chains
+         for source_id, source_data in evidence_sources.items():
+             # Create entry linking evidence directly to decision
+             entry = AttributionEntry(
+                 source=source_id,
+                 source_type=source_data.get("type", "evidence"),
+                 target=decision_id,
+                 weight=self._calculate_evidence_weight(source_data, confidence),
+                 confidence=confidence,
+                 description=f"Direct influence of {source_data.get('description', source_id)} on decision",
+             )
+
+             # Create chain
+             chain = AttributionChain(
+                 entries=[entry],
+                 start_point=source_id,
+                 end_point=end_point,
+                 total_weight=entry.weight,
+                 confidence=entry.confidence,
+             )
+
+             attribution_chains.append(chain)
+
+         # Case 2: Evidence -> reasoning -> decision chains
+         if reasoning_steps:
+             # For each evidence source
+             for source_id, source_data in evidence_sources.items():
+                 # For relevant reasoning steps (limited by depth)
+                 for step in reasoning_steps[:reasoning_depth]:
+                     # Create entry linking evidence to reasoning step
+                     step_entry = AttributionEntry(
+                         source=source_id,
+                         source_type=source_data.get("type", "evidence"),
+                         target=step["id"],
+                         weight=self._calculate_step_relevance(source_data, step),
+                         confidence=confidence * 0.9,  # Slightly lower confidence for indirect paths
+                         description=f"Influence of {source_data.get('description', source_id)} on reasoning step",
+                     )
+
+                     # Create entry linking reasoning step to decision
+                     decision_entry = AttributionEntry(
+                         source=step["id"],
+                         source_type="reasoning_step",
+                         target=decision_id,
+                         weight=self._calculate_step_importance(step, len(reasoning_steps)),
+                         confidence=confidence,
+                         description="Influence of reasoning step on decision",
+                     )
+
+                     # Create chain
+                     chain = AttributionChain(
+                         entries=[step_entry, decision_entry],
+                         start_point=source_id,
+                         end_point=end_point,
+                         total_weight=step_entry.weight * decision_entry.weight,
+                         confidence=min(step_entry.confidence, decision_entry.confidence),
+                     )
+
+                     attribution_chains.append(chain)
+
+         # Case 3: Intent/value -> decision chains
+         if intent:
+             intent_id = f"intent:{intent[:20]}"
+             intent_entry = AttributionEntry(
+                 source=intent_id,
+                 source_type="intent",
+                 target=decision_id,
+                 weight=0.8,  # High weight for intent
+                 confidence=confidence,
+                 description="Influence of stated intent on decision",
+             )
+
+             intent_chain = AttributionChain(
+                 entries=[intent_entry],
+                 start_point=intent_id,
+                 end_point=end_point,
+                 total_weight=intent_entry.weight,
+                 confidence=intent_entry.confidence,
+             )
+
+             attribution_chains.append(intent_chain)
+
+         if value_basis:
+             value_id = f"value:{value_basis[:20]}"
+             value_entry = AttributionEntry(
+                 source=value_id,
+                 source_type="value",
+                 target=decision_id,
+                 weight=0.9,  # Very high weight for value basis
+                 confidence=confidence,
+                 description="Influence of value basis on decision",
+                 value_alignment=1.0,  # Perfect alignment with its own value
+             )
+
+             value_chain = AttributionChain(
+                 entries=[value_entry],
+                 start_point=value_id,
+                 end_point=end_point,
+                 total_weight=value_entry.weight,
+                 confidence=value_entry.confidence,
+             )
+
+             attribution_chains.append(value_chain)
+
+         return attribution_chains
+
+     def _calculate_evidence_weight(self, evidence: Dict[str, Any], base_confidence: float) -> float:
+         """
+         Calculate weight of evidence.
+
+         Args:
+             evidence: Evidence data
+             base_confidence: Base confidence level
+
+         Returns:
+             Evidence weight
+         """
+         # Default weight
+         weight = 0.5
+
+         # Adjust based on evidence type
+         evidence_type = evidence.get("type", "")
+
+         if evidence_type == "belief":
+             # Weight based on belief strength (0.5-1.0)
+             belief_value = evidence.get("value", 0.5)
+             weight = 0.5 + (abs(belief_value - 0.5) * 0.5)
+
+         elif evidence_type == "working_memory":
+             # Working memory has high weight
+             weight = 0.8
+
+         elif evidence_type == "performance":
+             # Performance data moderately important
+             weight = 0.7
+
+         elif evidence_type == "past_decision":
+             # Past decisions less important
+             weight = 0.6
+
+         # Scale by confidence
+         weight *= base_confidence
+
+         return min(1.0, weight)
+
+     def _calculate_step_relevance(self, evidence: Dict[str, Any], step: Dict[str, Any]) -> float:
+         """
+         Calculate relevance of evidence to reasoning step.
+
+         Args:
+             evidence: Evidence data
+             step: Reasoning step
+
+         Returns:
+             Relevance weight
+         """
+         # Basic implementation using text overlap
+         evidence_desc = evidence.get("description", "")
+         step_text = step.get("text", "")
+
+         # Check for ticker mention
+         ticker = evidence.get("ticker", "")
+         if ticker and ticker in step_text:
+             return 0.8
+
+         # Check for word overlap
+         evidence_words = set(evidence_desc.lower().split())
+         step_words = set(step_text.lower().split())
+
+         overlap = len(evidence_words.intersection(step_words))
+         total_words = len(evidence_words.union(step_words))
+
+         if total_words > 0:
+             overlap_ratio = overlap / total_words
+             return min(1.0, 0.5 + overlap_ratio)
+
+         return 0.5
+
+     def _calculate_step_importance(self, step: Dict[str, Any], total_steps: int) -> float:
+         """
+         Calculate importance of reasoning step.
+
+         Args:
+             step: Reasoning step
+             total_steps: Total number of steps
+
+         Returns:
+             Importance weight
+         """
+         # Position-based importance (later steps slightly more important)
+         position = step.get("position", 0)
+         position_weight = 0.5 + (position / (2 * total_steps)) if total_steps > 0 else 0.5
+
+         # Length-based importance (longer steps slightly more important)
+         text = step.get("text", "")
+         length = len(text)
+         length_weight = min(1.0, 0.5 + (length / 200))  # Cap at 1.0
+
+         # Combine weights
+         return (position_weight * 0.7) + (length_weight * 0.3)
+
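The word-overlap relevance used above is a Jaccard-style ratio over lowercase word sets, shifted up by 0.5 and capped at 1.0. A minimal standalone sketch, omitting the ticker-mention shortcut and using hypothetical strings:

```python
# Jaccard-style word-overlap relevance between an evidence description
# and a reasoning step, as in _calculate_step_relevance above.
def step_relevance(evidence_desc: str, step_text: str) -> float:
    evidence_words = set(evidence_desc.lower().split())
    step_words = set(step_text.lower().split())
    total = len(evidence_words | step_words)
    if total == 0:
        return 0.5  # no text at all: neutral relevance
    overlap = len(evidence_words & step_words)
    return min(1.0, 0.5 + overlap / total)

# 2 shared words ("aapl", "about") out of 6 distinct words
print(step_relevance("Belief about AAPL", "AAPL looks undervalued about now"))
```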
+     def _get_top_attribution_factors(self, source_contributions: Dict[str, float], limit: int = 5) -> List[Dict[str, Any]]:
+         """
+         Get top attribution factors.
+
+         Args:
+             source_contributions: Source contribution dictionary
+             limit: Maximum number of factors to return
+
+         Returns:
+             List of top attribution factors
+         """
+         # Sort contributions by weight (descending)
+         sorted_contributions = sorted(
+             source_contributions.items(),
+             key=lambda x: x[1],
+             reverse=True
+         )
+
+         # Take top 'limit' contributions
+         top_factors = []
+         for source, weight in sorted_contributions[:limit]:
+             # Parse source type from ID
+             source_type = source.split(":", 1)[0] if ":" in source else "unknown"
+
+             top_factors.append({
+                 "source": source,
+                 "type": source_type,
+                 "weight": weight,
+             })
+
+         return top_factors
+
+     def _calculate_value_alignment(self, value_basis: str, source_contributions: Dict[str, float]) -> float:
+         """
+         Calculate value alignment score.
+
+         Args:
+             value_basis: Value basis string
+             source_contributions: Source contribution dictionary
+
+         Returns:
+             Value alignment score
+         """
+         # Simple implementation: check if value sources have high contribution
+         value_alignment = 0.5  # Default neutral alignment
+
+         # Find value-based sources
+         value_sources = [source for source in source_contributions if source.startswith("value:")]
+
+         if value_sources:
+             # Calculate contribution of value sources
+             value_contribution = sum(source_contributions[source] for source in value_sources)
+
+             # Value alignment increases with value contribution
+             value_alignment = 0.5 + (value_contribution * 0.5)
+
+         return min(1.0, value_alignment)
+
+     def get_trace(self, trace_id: str) -> Optional[Dict[str, Any]]:
+         """
+         Get attribution trace by ID.
+
+         Args:
+             trace_id: Trace ID
+
+         Returns:
+             Attribution trace or None if not found
+         """
+         if trace_id not in self.trace_registry:
+             return None
+
+         trace_data = self.trace_registry[trace_id]
+         attribution_graph = trace_data.get("attribution_graph")
+
+         if not attribution_graph:
+             return None
+
+         # Calculate source contributions
+         source_contributions = attribution_graph.calculate_source_contributions()
+
+         # Create attribution trace output
+         attribution_trace = {
+             "trace_id": trace_id,
+             "decision_id": attribution_graph.decision_id,
+             "attribution_map": source_contributions,
+             "top_factors": self._get_top_attribution_factors(source_contributions, 5),
+             "chains": len(attribution_graph.chains),
+             "sources": len(attribution_graph.sources),
+             "timestamp": trace_data.get("timestamp", datetime.datetime.now()).isoformat(),
+         }
+
+         return attribution_trace
+
+     def get_decision_traces(self, decision_id: str) -> List[str]:
+         """
+         Get trace IDs for a decision.
+
+         Args:
+             decision_id: Decision ID
+
+         Returns:
+             List of trace IDs
+         """
+         return [trace_id for trace_id, trace_data in self.trace_registry.items()
+                 if trace_data.get("decision_id") == decision_id]
+
+ def visualize_attribution(self, trace_id: str) -> Dict[str, Any]:
614
+ """
615
+ Generate attribution visualization data.
616
+
617
+ Args:
618
+ trace_id: Trace ID
619
+
620
+ Returns:
621
+ Visualization data
622
+ """
623
+ if trace_id not in self.trace_registry:
624
+ return {"error": "Trace not found"}
625
+
626
+ trace_data = self.trace_registry[trace_id]
627
+ attribution_graph = trace_data.get("attribution_graph")
628
+
629
+ if not attribution_graph:
630
+ return {"error": "Attribution graph not found"}
631
+
632
+ # Create nodes and links for visualization
633
+ nodes = []
634
+ links = []
635
+
636
+ # Add decision node
637
+ decision_id = attribution_graph.decision_id
638
+ nodes.append({
639
+ "id": decision_id,
640
+ "type": "decision",
641
+ "label": "Decision",
642
+ "size": 15,
643
+ })
644
+
645
+ # Process all chains
646
+ for chain_idx, chain in enumerate(attribution_graph.chains):
647
+ # Add source node if not already added
648
+ source_id = chain.start_point
649
+ if not any(node["id"] == source_id for node in nodes):
650
+ # Determine source type
651
+ source_type = "unknown"
652
+ if source_id.startswith("belief:"):
653
+ source_type = "belief"
654
+ elif source_id.startswith("working_memory:"):
655
+ source_type = "working_memory"
656
+ elif source_id.startswith("performance:"):
657
+ source_type = "performance"
658
+ elif source_id.startswith("past_decision:"):
659
+ source_type = "past_decision"
660
+ elif source_id.startswith("intent:"):
661
+ source_type = "intent"
662
+ elif source_id.startswith("value:"):
663
+ source_type = "value"
664
+
665
+ # Add source node
666
+ nodes.append({
667
+ "id": source_id,
668
+ "type": source_type,
669
+ "label": source_id.split(":", 1)[1] if ":" in source_id else source_id,
670
+ "size": 10,
671
+ })
672
+
673
+ # Process chain entries
674
+ prev_node_id = None
675
+ for entry_idx, entry in enumerate(chain.entries):
676
+ source_node_id = entry.source
677
+ target_node_id = entry.target
678
+
679
+ # Add intermediate nodes if not already added
680
+ if entry.source_type == "reasoning_step" and not any(node["id"] == source_node_id for node in nodes):
681
+ nodes.append({
682
+ "id": source_node_id,
683
+ "type": "reasoning_step",
684
+ "label": f"Step {source_node_id.split(':', 1)[1] if ':' in source_node_id else source_node_id}",
685
+ "size": 8,
686
+ })
687
+
688
+ # Add link
689
+ links.append({
690
+ "source": source_node_id,
691
+ "target": target_node_id,
692
+ "value": entry.weight,
693
+ "confidence": entry.confidence,
694
+ "label": entry.description if entry.description else f"Weight: {entry.weight:.2f}",
695
+ })
696
+
697
+ prev_node_id = target_node_id
698
+
699
+ # Create visualization data
700
+ visualization = {
701
+ "nodes": nodes,
702
+ "links": links,
703
+ "trace_id": trace_id,
704
+ "decision_id": decision_id,
705
+ }
706
+
707
+ return visualization
708
+
709
+ def set_value_weights(self, value_weights: Dict[str, float]) -> None:
710
+ """
711
+ Set weights for different values.
712
+
713
+ Args:
714
+ value_weights: Dictionary mapping value names to weights
715
+ """
716
+ self.value_weights = value_weights.copy()
717
+
718
+ def clear_history(self, before_timestamp: Optional[datetime.datetime] = None) -> int:
719
+ """
720
+ Clear attribution history.
721
+
722
+ Args:
723
+ before_timestamp: Optional timestamp to clear history before
724
+
725
+ Returns:
726
+ Number of entries cleared
727
+ """
728
+ if before_timestamp is None:
729
+ # Clear all history
730
+ count = len(self.attribution_history)
731
+ self.attribution_history = {}
732
+ self.trace_registry = {}
733
+ return count
734
+
735
+ # Clear history before timestamp
736
+ to_remove_history = []
737
+ to_remove_registry = []
738
+
739
+ for decision_id, graph in self.attribution_history.items():
740
+ if graph.timestamp < before_timestamp:
741
+ to_remove_history.append(decision_id)
742
+
743
+ for trace_id, trace_data in self.trace_registry.items():
744
+ if trace_data.get("timestamp", datetime.datetime.now()) < before_timestamp:
745
+ to_remove_registry.append(trace_id)
746
+
747
+ # Remove from history
748
+ for decision_id in to_remove_history:
749
+ del self.attribution_history[decision_id]
750
+
751
+ # Remove from registry
752
+ for trace_id in to_remove_registry:
753
+ del self.trace_registry[trace_id]
754
+
755
+ return len(to_remove_history) + len(to_remove_registry)
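The value-alignment rule in `_calculate_value_alignment` above reduces to a simple affine map: a neutral 0.5 baseline plus half the total weight of `value:`-prefixed sources, capped at 1.0. A minimal standalone sketch of that scoring (the function name and sample source keys are illustrative, not part of the module):

```python
def value_alignment(source_contributions: dict) -> float:
    """Neutral 0.5 baseline, boosted by 0.5 * total weight of value-tagged sources, capped at 1.0."""
    value_sources = [s for s in source_contributions if s.startswith("value:")]
    if not value_sources:
        return 0.5
    value_contribution = sum(source_contributions[s] for s in value_sources)
    return min(1.0, 0.5 + value_contribution * 0.5)

print(value_alignment({"value:patience": 0.4, "belief:growth_outlook": 0.6}))  # → 0.7
```

With no value-tagged sources the score stays at the neutral 0.5, and any total value weight of 1.0 or more saturates at the cap.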
src/cognition/graph.py ADDED
@@ -0,0 +1,1111 @@
+ """
+ ReasoningGraph - Recursive Cognitive Architecture
+
+ This module implements the core reasoning graph architecture that powers
+ the recursive cognition capabilities of AGI-HEDGE-FUND agents.
+
+ Key components:
+ - LangGraph-based reasoning networks
+ - Recursive thought paths with configurable depth
+ - Attribution tracing throughout reasoning chains
+ - Symbolic state propagation across reasoning steps
+ - Failsafe mechanisms for reasoning collapse
+
+ Internal Note: This implementation is inspired by circuit interpretability research,
+ simulating recursive attention pathways via LangGraph. The core patterns
+ encode .p/ command equivalents like reflect.trace and collapse.detect.
+ """
+
+ import datetime
+ import uuid
+ import logging
+ from typing import Dict, List, Any, Optional, Callable, Tuple, Union, Set
+ import numpy as np
+
+ # LangGraph for reasoning graphs
+ from langgraph.graph import StateGraph, END
+ from langgraph.prebuilt import ToolNode, ToolExecutor
+
+ # For type hints
+ from pydantic import BaseModel, Field
+
+ # Internal imports
+ from ..llm.router import ModelRouter
+ from ..utils.diagnostics import TracingTools
+
+
+ class ReasoningState(BaseModel):
+     """Reasoning state carried through graph execution."""
+
+     # Input data and context
+     input: Dict[str, Any] = Field(default_factory=dict)
+     context: Dict[str, Any] = Field(default_factory=dict)
+
+     # Reasoning chain and attribution
+     steps: List[Dict[str, Any]] = Field(default_factory=list)
+     attribution: Dict[str, Dict[str, float]] = Field(default_factory=dict)
+
+     # Execution metadata
+     depth: int = Field(default=0)
+     max_depth: int = Field(default=3)
+     start_time: datetime.datetime = Field(default_factory=datetime.datetime.now)
+     elapsed_ms: int = Field(default=0)
+
+     # Results and conclusions
+     output: Dict[str, Any] = Field(default_factory=dict)
+     confidence: float = Field(default=0.5)
+
+     # Error handling and diagnostics
+     errors: List[Dict[str, Any]] = Field(default_factory=list)
+     warnings: List[Dict[str, Any]] = Field(default_factory=list)
+
+     # Collapse detection
+     collapse_risk: float = Field(default=0.0)
+     collapse_detected: bool = Field(default=False)
+     collapse_reason: Optional[str] = Field(default=None)
+
+     # Tracing
+     trace_enabled: bool = Field(default=False)
+     trace_id: str = Field(default_factory=lambda: str(uuid.uuid4()))
+
+
+ class ReasoningGraph:
+     """
+     Implements a recursive reasoning architecture for agent cognition.
+
+     The ReasoningGraph enables:
+     - Multi-step reasoning via connected nodes
+     - Recursive thought patterns for deep analysis
+     - Attribution tracing across reasoning chains
+     - Dynamic graph reconfiguration based on context
+     - Safeguards against reasoning collapse (infinite loops, contradictions)
+     """
+
+     def __init__(
+         self,
+         agent_name: str,
+         agent_philosophy: str,
+         model_router: ModelRouter,
+         collapse_threshold: float = 0.8,
+         trace_enabled: bool = False,
+     ):
+         """
+         Initialize reasoning graph.
+
+         Args:
+             agent_name: Name of the agent using this reasoning graph
+             agent_philosophy: Philosophy description of the agent
+             model_router: ModelRouter instance for LLM access
+             collapse_threshold: Threshold for reasoning collapse detection
+             trace_enabled: Whether to trace reasoning steps
+         """
+         self.agent_name = agent_name
+         self.agent_philosophy = agent_philosophy
+         self.model_router = model_router
+         self.collapse_threshold = collapse_threshold
+         self.trace_enabled = trace_enabled
+
+         # Initialize graph components
+         self.nodes: Dict[str, Callable] = {}
+         self.edges: Dict[str, List[str]] = {}
+         self.entry_point: Optional[str] = None
+
+         # Diagnostic tooling
+         self.tracer = TracingTools(agent_id=str(uuid.uuid4()), agent_name=agent_name)
+
+         # Add base nodes
+         self._add_base_nodes()
+
+         # Build initial graph
+         self._build_graph()
+
+     def _add_base_nodes(self) -> None:
+         """Add base reasoning nodes that are part of every graph."""
+         # Entry point for validation
+         self.add_node("validate_input", self._validate_input)
+
+         # Default nodes
+         self.add_node("initial_analysis", self._initial_analysis)
+         self.add_node("extract_themes", self._extract_themes)
+         self.add_node("generate_conclusions", self._generate_conclusions)
+
+         # Exit handlers
+         self.add_node("check_collapse", self._check_collapse)
+         self.add_node("finalize_output", self._finalize_output)
+
+     def _build_graph(self) -> None:
+         """Build the reasoning graph based on defined nodes and edges."""
+         # Set default entry point if not specified
+         if self.entry_point is None:
+             if "validate_input" in self.nodes:
+                 self.entry_point = "validate_input"
+             elif len(self.nodes) > 0:
+                 self.entry_point = list(self.nodes.keys())[0]
+             else:
+                 raise ValueError("Cannot build graph: no nodes defined")
+
+         # Build default edges if none defined
+         if not self.edges:
+             # Get sorted node names (for deterministic graphs), keeping the
+             # exit handlers out of the main sequential chain
+             node_names = sorted(
+                 name for name in self.nodes
+                 if name not in ("check_collapse", "finalize_output")
+             )
+
+             # Find entry point index
+             try:
+                 entry_index = node_names.index(self.entry_point)
+             except ValueError:
+                 entry_index = 0
+
+             # Move entry point to front
+             if entry_index > 0:
+                 node_names.insert(0, node_names.pop(entry_index))
+
+             # Create sequential edges between all nodes
+             for i in range(len(node_names) - 1):
+                 self.add_edge(node_names[i], node_names[i + 1])
+
+             # Add exit handler edges
+             if node_names:
+                 if "check_collapse" in self.nodes:
+                     self.add_edge(node_names[-1], "check_collapse")
+                     if "finalize_output" in self.nodes:
+                         self.add_edge("check_collapse", "finalize_output")
+                 elif "finalize_output" in self.nodes:
+                     self.add_edge(node_names[-1], "finalize_output")
+
+     def add_node(self, name: str, fn: Callable) -> None:
+         """
+         Add a reasoning node to the graph.
+
+         Args:
+             name: Node name
+             fn: Function to execute for this node
+         """
+         self.nodes[name] = fn
+
+     def add_edge(self, source: str, target: str) -> None:
+         """
+         Add an edge between reasoning nodes.
+
+         Args:
+             source: Source node name
+             target: Target node name
+         """
+         if source not in self.nodes:
+             raise ValueError(f"Source node '{source}' does not exist")
+         if target not in self.nodes:
+             raise ValueError(f"Target node '{target}' does not exist")
+
+         if source not in self.edges:
+             self.edges[source] = []
+
+         if target not in self.edges[source]:
+             self.edges[source].append(target)
+
+     def set_entry_point(self, node_name: str) -> None:
+         """
+         Set the entry point for the reasoning graph.
+
+         Args:
+             node_name: Name of entry node
+         """
+         if node_name not in self.nodes:
+             raise ValueError(f"Entry node '{node_name}' does not exist")
+
+         self.entry_point = node_name
+
+     def run(self, input: Dict[str, Any], trace_depth: int = 3) -> Dict[str, Any]:
+         """
+         Run the reasoning graph on input data.
+
+         Args:
+             input: Input data
+             trace_depth: Depth of reasoning trace (higher = deeper reasoning)
+
+         Returns:
+             Reasoning results
+         """
+         # Prepare initial state
+         state = ReasoningState(
+             input=input,
+             max_depth=trace_depth,
+             trace_enabled=self.trace_enabled,
+         )
+
+         # Define routing function for conditional edges
+         def get_next_node(state_dict: Dict[str, Any], current_node: str) -> Union[str, List[str]]:
+             # Convert state dict back to ReasoningState
+             current_state = ReasoningState.parse_obj(state_dict)
+
+             # Check for collapse detection
+             if current_state.collapse_detected:
+                 if "finalize_output" in self.nodes and current_node != "finalize_output":
+                     return "finalize_output"
+                 return END
+
+             # Check for max depth reached
+             if (current_state.depth >= current_state.max_depth
+                     and current_node != "check_collapse"
+                     and "check_collapse" in self.nodes):
+                 return "check_collapse"
+
+             # Check if we've reached a terminal node
+             if current_node == "finalize_output" or current_node not in self.edges:
+                 return END
+
+             # Return next nodes
+             return self.edges[current_node]
+
+         # Create StateGraph
+         workflow = StateGraph(ReasoningState)
+
+         # Add nodes
+         for node_name, node_fn in self.nodes.items():
+             workflow.add_node(node_name, self._create_node_wrapper(node_fn))
+
+         # Set edge logic: every node routes through get_next_node
+         workflow.set_entry_point(self.entry_point)
+         for node_name in self.nodes:
+             workflow.add_conditional_edges(
+                 node_name,
+                 lambda state_dict, current=node_name: get_next_node(state_dict, current),
+             )
+
+         # Compile graph
+         compiled_graph = workflow.compile()
+
+         # Run graph
+         start_time = datetime.datetime.now()
+         state_dict = compiled_graph.invoke(state.dict())
+         end_time = datetime.datetime.now()
+
+         # Convert state dict back to ReasoningState
+         final_state = ReasoningState.parse_obj(state_dict)
+
+         # Update execution time
+         execution_time_ms = int((end_time - start_time).total_seconds() * 1000)
+         final_state.elapsed_ms = execution_time_ms
+
+         # Prepare result
+         result = {
+             "output": final_state.output,
+             "confidence": final_state.confidence,
+             "elapsed_ms": final_state.elapsed_ms,
+             "depth": final_state.depth,
+             "collapse_detected": final_state.collapse_detected,
+         }
+
+         # Include trace if enabled
+         if self.trace_enabled:
+             result["trace"] = {
+                 "steps": final_state.steps,
+                 "attribution": final_state.attribution,
+                 "errors": final_state.errors,
+                 "warnings": final_state.warnings,
+                 "trace_id": final_state.trace_id,
+             }
+
+         return result
+
+     def _create_node_wrapper(self, node_fn: Callable) -> Callable:
+         """
+         Create a wrapper for node functions to handle tracing and errors.
+
+         Args:
+             node_fn: Original node function
+
+         Returns:
+             Wrapped node function
+         """
+         def wrapped_node(state: Dict[str, Any]) -> Dict[str, Any]:
+             # Convert state dict to ReasoningState
+             state_obj = ReasoningState.parse_obj(state)
+
+             # Increment depth counter
+             state_obj.depth += 1
+
+             # Run node function with try-except
+             try:
+                 # Get a display name from the node function
+                 node_name = node_fn.__name__.replace("_", " ").title()
+
+                 # Record step start
+                 step_start = {
+                     "name": node_name,
+                     "depth": state_obj.depth,
+                     "timestamp": datetime.datetime.now().isoformat(),
+                 }
+
+                 # Add to steps
+                 state_obj.steps.append(step_start)
+
+                 # Run node function
+                 result = node_fn(state_obj)
+
+                 # Update state with result
+                 if isinstance(result, dict):
+                     # Extract and update fields if returned
+                     for key, value in result.items():
+                         if hasattr(state_obj, key):
+                             setattr(state_obj, key, value)
+
+                 # Record step completion
+                 step_end = {
+                     "name": node_name,
+                     "depth": state_obj.depth,
+                     "completed": True,
+                     "timestamp": datetime.datetime.now().isoformat(),
+                 }
+
+                 # Update last step
+                 if state_obj.steps:
+                     state_obj.steps[-1].update(step_end)
+
+             except Exception as e:
+                 # Record error
+                 error = {
+                     "message": str(e),
+                     "type": type(e).__name__,
+                     "timestamp": datetime.datetime.now().isoformat(),
+                     "node": node_fn.__name__,
+                     "depth": state_obj.depth,
+                 }
+
+                 state_obj.errors.append(error)
+
+                 # Update last step if exists
+                 if state_obj.steps:
+                     state_obj.steps[-1].update({
+                         "completed": False,
+                         "error": error,
+                     })
+
+                 # Set collapse if critical error
+                 state_obj.collapse_detected = True
+                 state_obj.collapse_reason = "critical_error"
+
+             # Convert back to dict
+             return state_obj.dict()
+
+         return wrapped_node
388
+ # Base node implementations
389
+ def _validate_input(self, state: ReasoningState) -> Dict[str, Any]:
390
+ """
391
+ Validate input data.
392
+
393
+ Args:
394
+ state: Reasoning state
395
+
396
+ Returns:
397
+ Updated state fields
398
+ """
399
+ result = {
400
+ "warnings": state.warnings.copy(),
401
+ }
402
+
403
+ # Check if input is empty
404
+ if not state.input:
405
+ result["warnings"].append({
406
+ "message": "Empty input provided",
407
+ "severity": "high",
408
+ "timestamp": datetime.datetime.now().isoformat(),
409
+ })
410
+
411
+ # Set collapse for empty input
412
+ result["collapse_detected"] = True
413
+ result["collapse_reason"] = "empty_input"
414
+
415
+ return result
416
+
417
+ def _initial_analysis(self, state: ReasoningState) -> Dict[str, Any]:
418
+ """
419
+ Perform initial analysis of input data.
420
+
421
+ Args:
422
+ state: Reasoning state
423
+
424
+ Returns:
425
+ Updated state fields
426
+ """
427
+ # Extract key information from input
428
+ analysis = {
429
+ "timestamp": datetime.datetime.now().isoformat(),
430
+ "key_entities": [],
431
+ "key_metrics": {},
432
+ "identified_patterns": [],
433
+ }
434
+
435
+ # Update state
436
+ result = {
437
+ "context": {**state.context, "initial_analysis": analysis},
438
+ }
439
+
440
+ return result
441
+
442
+ def _extract_themes(self, state: ReasoningState) -> Dict[str, Any]:
443
+ """
444
+ Extract key themes from the data.
445
+
446
+ Args:
447
+ state: Reasoning state
448
+
449
+ Returns:
450
+ Updated state fields with extracted themes
451
+ """
452
+ # Extract themes based on agent philosophy
453
+ themes = self.extract_themes(
454
+ text=str(state.input),
455
+ max_themes=5
456
+ )
457
+
458
+ # Calculate alignment with agent philosophy
459
+ alignment = self.compute_alignment(
460
+ themes=themes,
461
+ philosophy=self.agent_philosophy
462
+ )
463
+
464
+ # Update result
465
+ result = {
466
+ "context": {
467
+ **state.context,
468
+ "themes": themes,
469
+ "philosophy_alignment": alignment,
470
+ },
471
+ }
472
+
473
+ return result
474
+
475
+ def _generate_conclusions(self, state: ReasoningState) -> Dict[str, Any]:
476
+ """
477
+ Generate conclusions based on analysis.
478
+
479
+ Args:
480
+ state: Reasoning state
481
+
482
+ Returns:
483
+ Updated state fields with conclusions
484
+ """
485
+ # Simple placeholder conclusions
486
+ conclusions = {
487
+ "summary": "Analysis completed",
488
+ "confidence": 0.7,
489
+ "recommendations": [],
490
+ }
491
+
492
+ # Update result
493
+ result = {
494
+ "output": conclusions,
495
+ "confidence": conclusions["confidence"],
496
+ }
497
+
498
+ return result
499
+
500
+ def _check_collapse(self, state: ReasoningState) -> Dict[str, Any]:
501
+ """
502
+ Check for reasoning collapse conditions.
503
+
504
+ Args:
505
+ state: Reasoning state
506
+
507
+ Returns:
508
+ Updated state fields with collapse detection
509
+ """
510
+ # Default collapse risk is low
511
+ collapse_risk = 0.1
512
+
513
+ # Check for collapse conditions
514
+ collapse_conditions = {
515
+ "circular_reasoning": self._detect_circular_reasoning(state),
516
+ "confidence_collapse": state.confidence < 0.2,
517
+ "contradiction": self._detect_contradictions(state),
518
+ "depth_exhaustion": state.depth >= state.max_depth,
519
+ }
520
+
521
+ # Calculate overall collapse risk
522
+ active_conditions = [k for k, v in collapse_conditions.items() if v]
523
+ collapse_risk = len(active_conditions) / len(collapse_conditions) if collapse_conditions else 0
524
+
525
+ # Determine if collapse detected
526
+ collapse_detected = collapse_risk >= self.collapse_threshold
527
+ collapse_reason = active_conditions[0] if active_conditions else None
528
+
529
+ # Update result
530
+ result = {
531
+ "collapse_risk": collapse_risk,
532
+ "collapse_detected": collapse_detected,
533
+ "collapse_reason": collapse_reason,
534
+ }
535
+
536
+ # Add warning if collapse detected
537
+ if collapse_detected:
538
+ result["warnings"] = state.warnings + [{
539
+ "message": f"Reasoning collapse detected: {collapse_reason}",
540
+ "severity": "high",
541
+ "timestamp": datetime.datetime.now().isoformat(),
542
+ }]
543
+
544
+ return result
545
+
546
+ def _finalize_output(self, state: ReasoningState) -> Dict[str, Any]:
547
+ """
548
+ Finalize output and prepare result.
549
+
550
+ Args:
551
+ state: Reasoning state
552
+
553
+ Returns:
554
+ Updated state fields with finalized output
555
+ """
556
+ # If no output yet, create default
557
+ if not state.output:
558
+ output = {
559
+ "summary": "Analysis completed with limited results",
560
+ "confidence": max(0.1, state.confidence / 2), # Reduced confidence
561
+ "recommendations": [],
562
+ "timestamp": datetime.datetime.now().isoformat(),
563
+ }
564
+
565
+ # Update result
566
+ result = {
567
+ "output": output,
568
+ "confidence": output["confidence"],
569
+ }
570
+ else:
571
+ # Just return current state unchanged
572
+ result = {}
573
+
574
+ return result
575
+
576
+ # Utility methods for reasoning loops
577
+ def extract_themes(self, text: str, max_themes: int = 5) -> List[str]:
578
+ """
579
+ Extract key themes from text using LLM.
580
+
581
+ Args:
582
+ text: Text to analyze
583
+ max_themes: Maximum number of themes to extract
584
+
585
+ Returns:
586
+ List of extracted themes
587
+ """
588
+ # Use LLM to extract themes
589
+ prompt = f"""
590
+ Extract the {max_themes} most important themes or topics from the following text.
591
+ Respond with a Python list of strings, each representing one theme.
592
+
593
+ Text:
594
+ {text}
595
+
596
+ Themes:
597
+ """
598
+
599
+ try:
600
+ response = self.model_router.generate(prompt)
601
+
602
+ # Parse response as Python list (safety handling)
603
+ try:
604
+ themes = eval(response.strip())
605
+ if isinstance(themes, list) and all(isinstance(t, str) for t in themes):
606
+ return themes[:max_themes]
607
+ except:
608
+ pass
609
+
610
+ # Fallback parsing
611
+ themes = [line.strip().strip('-*•').strip()
612
+ for line in response.split('\n')
613
+ if line.strip() and line.strip()[0] in '-*•']
614
+
615
+ return themes[:max_themes]
616
+
617
+ except Exception as e:
618
+ logging.warning(f"Error extracting themes: {e}")
619
+ return []
620
+
621
+ def compute_alignment(self, themes: List[str], philosophy: str) -> float:
622
+ """
623
+ Compute alignment between themes and philosophy.
624
+
625
+ Args:
626
+ themes: List of themes
627
+ philosophy: Philosophy to align with
628
+
629
+ Returns:
630
+ Alignment score (0-1)
631
+ """
632
+ # Without using LLM, use a simple heuristic based on word overlap
633
+ if not themes:
634
+ return 0.5 # Neutral score for no themes
635
+
636
+ # Convert to lowercase for comparison
637
+ philosophy_words = set(philosophy.lower().split())
638
+
639
+ # Count theme words that overlap with philosophy
640
+ alignment_scores = []
641
+ for theme in themes:
642
+ theme_words = set(theme.lower().split())
643
+ overlap = len(theme_words.intersection(philosophy_words))
644
+ total_words = len(theme_words)
645
+
646
+ # Calculate theme alignment
647
+ if total_words > 0:
648
+ theme_alignment = min(1.0, overlap / (total_words * 0.5)) # Scale for partial matches
649
+ alignment_scores.append(theme_alignment)
650
+
651
+ # Calculate average alignment across themes
652
+ return sum(alignment_scores) / len(alignment_scores) if alignment_scores else 0.5
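The `_check_collapse` node earlier in this file scores collapse risk as the fraction of triggered conditions, then compares it against `collapse_threshold`. Reduced to a standalone sketch with hypothetical condition values:

```python
def collapse_risk(conditions: dict) -> float:
    """Fraction of collapse conditions currently active (0.0 when none are defined)."""
    if not conditions:
        return 0.0
    active = [name for name, triggered in conditions.items() if triggered]
    return len(active) / len(conditions)

risk = collapse_risk({
    "circular_reasoning": False,
    "confidence_collapse": False,
    "contradiction": True,
    "depth_exhaustion": True,
})
print(risk)  # → 0.5, below the default collapse_threshold of 0.8
```

With the default threshold of 0.8, at least four of the four conditions must fire before a collapse is declared; a lower threshold makes the failsafe more aggressive.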
653
+
654
+ def run_reflection(self, agent_state: Dict[str, Any], depth: int = 3,
655
+ trace_enabled: bool = False) -> Dict[str, Any]:
656
+ """
657
+ Run reflection on agent's current state.
658
+
659
+ Args:
660
+ agent_state: Agent's current state
661
+ depth: Depth of reflection
662
+ trace_enabled: Whether to enable tracing
663
+
664
+ Returns:
665
+ Reflection results
666
+ """
667
+ # Prepare reflection input
668
+ reflection_input = {
669
+ "agent_state": agent_state,
670
+ "agent_name": self.agent_name,
671
+ "agent_philosophy": self.agent_philosophy,
672
+ "reflection_depth": depth,
673
+ "timestamp": datetime.datetime.now().isoformat(),
674
+ }
675
+
676
+ # Create reflection prompt for LLM
677
+ prompt = f"""
678
+ You are the {self.agent_name} agent with the following philosophy:
679
+ "{self.agent_philosophy}"
680
+
681
+ Perform a depth {depth} reflection on your current state and decision making process.
682
+ Focus on:
683
+ 1. Consistency between your investment philosophy and current beliefs
684
+ 2. Quality of your recent decisions and their alignment with your values
685
+ 3. Potential biases or blind spots in your analysis
686
+ 4. Areas where your reasoning could be improved
687
+
688
+ Your current state:
689
+ {agent_state}
690
+
691
+ Respond with a JSON object containing:
692
+ - assessment: Overall assessment of your cognitive state
693
+ - consistency_score: How consistent your decisions are with your philosophy (0-1)
694
+ - identified_biases: List of potential biases detected
695
+ - improvement_areas: Areas where reasoning could be improved
696
+ - confidence: Your confidence in this self-assessment (0-1)
697
+ """
698
+
699
+ try:
700
+ # Generate reflection using LLM
701
+ response = self.model_router.generate(prompt)
702
+
703
+ # Parse JSON response (with fallback)
704
+ # Parse JSON response (with fallback)
705
+ try:
706
+ import json
707
+ reflection = json.loads(response)
708
+ except json.JSONDecodeError:
709
+ # Fallback parsing
710
+ reflection = {
711
+ "assessment": "Unable to parse full reflection",
712
+ "consistency_score": 0.5,
713
+ "identified_biases": [],
714
+ "improvement_areas": [],
715
+ "confidence": 0.3,
716
+ }
717
+
718
+ # Extract fields from text response
719
+ if "consistency_score" in response:
720
+ try:
721
+ consistency_score = float(response.split("consistency_score")[1].split("\n")[0].replace(":", "").strip())
722
+ reflection["consistency_score"] = consistency_score
723
+ except:
724
+ pass
725
+
726
+ if "confidence" in response:
727
+ try:
728
+ confidence = float(response.split("confidence")[1].split("\n")[0].replace(":", "").strip())
729
+ reflection["confidence"] = confidence
730
+ except:
731
+ pass
732
+
733
+ return reflection
734
+
735
+ except Exception as e:
736
+ logging.warning(f"Error in reflection: {e}")
737
+ return {
738
+ "assessment": "Reflection failed due to error",
739
+ "consistency_score": 0.5,
740
+ "identified_biases": ["reflection_failure"],
741
+ "improvement_areas": ["error_handling"],
742
+ "confidence": 0.2,
743
+ "error": str(e),
744
+ }
745
+
746
+ def generate_from_experiences(self, experiences: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
747
+ """
748
+ Generate signals based on past experiences.
749
+
750
+ Args:
751
+ experiences: List of past experiences
752
+
753
+ Returns:
754
+ List of generated signals
755
+ """
756
+ if not experiences:
757
+ return []
758
+
759
+ # Format experiences for LLM prompt
760
+ formatted_experiences = "\n".join([
761
+ f"Experience {i+1}: {exp.get('description', 'No description')}" +
762
+ f"\nOutcome: {exp.get('outcome', 'Unknown')}" +
763
+ f"\nTimestamp: {exp.get('timestamp', 'Unknown')}"
764
+ for i, exp in enumerate(experiences)
765
+ ])
766
+
767
+ # Create generation prompt for LLM
768
+ prompt = f"""
769
+ You are the {self.agent_name} agent with the following philosophy:
770
+ "{self.agent_philosophy}"
771
+
772
+ Based on the following past experiences, generate potential investment signals:
773
+
774
+ {formatted_experiences}
775
+
776
+ Generate 2-3 potential investment signals based on these experiences.
777
+ Each signal should be a JSON object containing:
778
+ - ticker: Stock ticker symbol
779
+ - action: "buy", "sell", or "hold"
780
+ - confidence: Confidence level (0.0-1.0)
781
+ - reasoning: Explicit reasoning chain
782
+ - intent: High-level investment intent
783
+ - value_basis: Core value driving this decision
784
+
785
+ Respond with a JSON array containing these signals.
786
+ """
787
+
788
+ try:
789
+ # Generate signals using LLM
790
+ response = self.model_router.generate(prompt)
791
+
792
+ # Parse JSON response (with fallback)
793
+ try:
794
+ import json
795
+ signals = json.loads(response)
796
+ if not isinstance(signals, list):
797
+ signals = [signals]
798
+ except json.JSONDecodeError:
799
+ # Simple fallback extraction
800
+ signals = []
801
+ lines = response.split("\n")
802
+ current_signal = {}
803
+
804
+ for line in lines:
805
+ line = line.strip()
806
+ if line.startswith("ticker:") or line.startswith('"ticker":'):
807
+ # New signal starts
808
+ if current_signal and "ticker" in current_signal:
809
+ signals.append(current_signal)
810
+ current_signal = {}
811
+ current_signal["ticker"] = line.split(":", 1)[1].strip().strip(',').strip('"').strip("'")
812
+ elif line.startswith("action:") or line.startswith('"action":'):
813
+ current_signal["action"] = line.split(":", 1)[1].strip().strip(',').strip('"').strip("'")
814
+ elif line.startswith("confidence:") or line.startswith('"confidence":'):
815
+ try:
816
+ current_signal["confidence"] = float(line.split(":", 1)[1].strip().strip(',').strip('"').strip("'"))
817
+ except ValueError:
818
+ current_signal["confidence"] = 0.5
819
+ elif line.startswith("reasoning:") or line.startswith('"reasoning":'):
820
+ current_signal["reasoning"] = line.split(":", 1)[1].strip().strip(',').strip('"').strip("'")
821
+ elif line.startswith("intent:") or line.startswith('"intent":'):
822
+ current_signal["intent"] = line.split(":", 1)[1].strip().strip(',').strip('"').strip("'")
823
+ elif line.startswith("value_basis:") or line.startswith('"value_basis":'):
824
+ current_signal["value_basis"] = line.split(":", 1)[1].strip().strip(',').strip('"').strip("'")
825
+
826
+ # Add last signal if any
827
+ if current_signal and "ticker" in current_signal:
828
+ signals.append(current_signal)
829
+
830
+ # Ensure all required fields are present
831
+ for signal in signals:
832
+ signal.setdefault("ticker", "UNKNOWN")
833
+ signal.setdefault("action", "hold")
834
+ signal.setdefault("confidence", 0.5)
835
+ signal.setdefault("reasoning", "Generated from past experiences")
836
+ signal.setdefault("intent", "Learn from past experiences")
837
+ signal.setdefault("value_basis", "Experiential learning")
838
+
839
+ return signals
840
+
841
+ except Exception as e:
842
+ logging.warning(f"Error generating from experiences: {e}")
843
+ return []
844
+
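The line-by-line fallback above is brittle when the model wraps its JSON in prose. A hedged alternative sketch (the helper name `extract_json_signals` is hypothetical, not part of this codebase): attempt a direct parse first, then fall back to the first bracketed array found in the response text.

```python
import json
import re


def extract_json_signals(response: str) -> list:
    """Extract a JSON array of signal objects from free-form LLM output."""
    # Direct parse: the happy path when the model obeyed the prompt.
    try:
        parsed = json.loads(response)
        return parsed if isinstance(parsed, list) else [parsed]
    except json.JSONDecodeError:
        pass
    # Fallback: grab the first [...] span (DOTALL so arrays may span lines).
    match = re.search(r"\[.*\]", response, re.DOTALL)
    if match:
        try:
            parsed = json.loads(match.group(0))
            return parsed if isinstance(parsed, list) else [parsed]
        except json.JSONDecodeError:
            pass
    return []
```

This keeps the field-by-field scanner as a last resort rather than the only fallback.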
845
+ def generate_from_beliefs(self, beliefs: Dict[str, float]) -> List[Dict[str, Any]]:
846
+ """
847
+ Generate signals based on current beliefs.
848
+
849
+ Args:
850
+ beliefs: Current belief state
851
+
852
+ Returns:
853
+ List of generated signals
854
+ """
855
+ if not beliefs:
856
+ return []
857
+
858
+ # Format beliefs for LLM prompt
859
+ formatted_beliefs = "\n".join([
860
+ f"Belief: {belief}, Strength: {strength:.2f}"
861
+ for belief, strength in beliefs.items()
862
+ ])
863
+
864
+ # Create generation prompt for LLM
865
+ prompt = f"""
866
+ You are the {self.agent_name} agent with the following philosophy:
867
+ "{self.agent_philosophy}"
868
+
869
+ Based on the following current beliefs, generate potential investment signals:
870
+
871
+ {formatted_beliefs}
872
+
873
+ Generate 2-3 potential investment signals based on these beliefs.
874
+ Each signal should be a JSON object containing:
875
+ - ticker: Stock ticker symbol (extract from beliefs if present)
876
+ - action: "buy", "sell", or "hold"
877
+ - confidence: Confidence level (0.0-1.0)
878
+ - reasoning: Explicit reasoning chain
879
+ - intent: High-level investment intent
880
+ - value_basis: Core value driving this decision
881
+
882
+ Respond with a JSON array containing these signals.
883
+ """
884
+
885
+ try:
886
+ # Generate signals using LLM
887
+ response = self.model_router.generate(prompt)
888
+
889
+ # Parse JSON response (with fallback)
890
+ try:
891
+ import json
892
+ signals = json.loads(response)
893
+ if not isinstance(signals, list):
894
+ signals = [signals]
895
+ except json.JSONDecodeError:
896
+ # Simple fallback extraction (same as in generate_from_experiences)
897
+ signals = []
898
+ lines = response.split("\n")
899
+ current_signal = {}
900
+
901
+ for line in lines:
902
+ line = line.strip()
903
+ if line.startswith("ticker:") or line.startswith('"ticker":'):
904
+ # New signal starts
905
+ if current_signal and "ticker" in current_signal:
906
+ signals.append(current_signal)
907
+ current_signal = {}
908
+ current_signal["ticker"] = line.split(":", 1)[1].strip().strip(',').strip('"').strip("'")
909
+ elif line.startswith("action:") or line.startswith('"action":'):
910
+ current_signal["action"] = line.split(":", 1)[1].strip().strip(',').strip('"').strip("'")
911
+ elif line.startswith("confidence:") or line.startswith('"confidence":'):
912
+ try:
913
+ current_signal["confidence"] = float(line.split(":", 1)[1].strip().strip(',').strip('"').strip("'"))
914
+ except ValueError:
915
+ current_signal["confidence"] = 0.5
916
+ elif line.startswith("reasoning:") or line.startswith('"reasoning":'):
917
+ current_signal["reasoning"] = line.split(":", 1)[1].strip().strip(',').strip('"').strip("'")
918
+ elif line.startswith("intent:") or line.startswith('"intent":'):
919
+ current_signal["intent"] = line.split(":", 1)[1].strip().strip(',').strip('"').strip("'")
920
+ elif line.startswith("value_basis:") or line.startswith('"value_basis":'):
921
+ current_signal["value_basis"] = line.split(":", 1)[1].strip().strip(',').strip('"').strip("'")
922
+
923
+ # Add last signal if any
924
+ if current_signal and "ticker" in current_signal:
925
+ signals.append(current_signal)
926
+
927
+ # Ensure all required fields are present
928
+ for signal in signals:
929
+ signal.setdefault("ticker", "UNKNOWN")
930
+ signal.setdefault("action", "hold")
931
+ signal.setdefault("confidence", 0.5)
932
+ signal.setdefault("reasoning", "Generated from belief state")
933
+ signal.setdefault("intent", "Act on current beliefs")
934
+ signal.setdefault("value_basis", "Belief consistency")
935
+
936
+ return signals
937
+
938
+ except Exception as e:
939
+ logging.warning(f"Error generating from beliefs: {e}")
940
+ return []
941
+
942
+ # Collapse detection utilities
943
+ def _detect_circular_reasoning(self, state: ReasoningState) -> bool:
944
+ """
945
+ Detect circular reasoning patterns in reasoning steps.
946
+
947
+ Args:
948
+ state: Current reasoning state
949
+
950
+ Returns:
951
+ True if circular reasoning detected, False otherwise
952
+ """
953
+ # Need at least 3 steps to detect circularity
954
+ if len(state.steps) < 3:
955
+ return False
956
+
957
+ # Extract step names
958
+ step_names = [step.get("name", "") for step in state.steps]
959
+
960
+ # Look for repeating patterns (minimum 2 steps)
961
+ for pattern_len in range(2, len(step_names) // 2 + 1):
962
+ for i in range(len(step_names) - pattern_len * 2 + 1):
963
+ pattern = step_names[i:i+pattern_len]
964
+ next_seq = step_names[i+pattern_len:i+pattern_len*2]
965
+
966
+ if pattern == next_seq:
967
+ return True
968
+
969
+ return False
970
+
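The sliding-window comparison above can be exercised in isolation. A minimal sketch of the same pattern match over a list of step names (the helper name `has_repeating_pattern` is hypothetical):

```python
def has_repeating_pattern(names, min_len=2):
    """Return True if the sequence contains an immediately repeated
    sub-sequence of at least min_len items (e.g. A, B, A, B)."""
    for pattern_len in range(min_len, len(names) // 2 + 1):
        for i in range(len(names) - pattern_len * 2 + 1):
            # Compare a window against the window immediately after it.
            if names[i:i + pattern_len] == names[i + pattern_len:i + pattern_len * 2]:
                return True
    return False
```

Note the cost is O(n^2) in the number of steps, which is fine for short reasoning traces.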
971
+ def _detect_contradictions(self, state: ReasoningState) -> bool:
972
+ """
973
+ Detect contradictory statements in reasoning trace.
974
+
975
+ Args:
976
+ state: Current reasoning state
977
+
978
+ Returns:
979
+ True if contradictions detected, False otherwise
980
+ """
981
+ # Simple implementation: check if confidence oscillates dramatically
982
+ if len(state.steps) < 3:
983
+ return False
984
+
985
+ # Extract confidence values
986
+ confidences = []
987
+ for step in state.steps:
988
+ if "output" in step and "confidence" in step["output"]:
989
+ confidences.append(step["output"]["confidence"])
990
+
991
+ # Check for oscillations
992
+ if len(confidences) >= 3:
993
+ for i in range(len(confidences) - 2):
994
+ # Check for significant up-down or down-up pattern
995
+ if (confidences[i] - confidences[i+1] > 0.3 and confidences[i+1] - confidences[i+2] < -0.3) or \
996
+ (confidences[i] - confidences[i+1] < -0.3 and confidences[i+1] - confidences[i+2] > 0.3):
997
+ return True
998
+
999
+ return False
1000
+
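The oscillation heuristic reduces to a three-point swing test over the confidence series. A standalone sketch (hypothetical helper name, same 0.3 threshold as above):

```python
def confidence_oscillates(confidences, threshold=0.3):
    """Flag a sharp up-down or down-up swing across any three consecutive values."""
    for i in range(len(confidences) - 2):
        a, b, c = confidences[i], confidences[i + 1], confidences[i + 2]
        # Drop then rise, or rise then drop, both beyond the threshold.
        if (a - b > threshold and b - c < -threshold) or \
           (a - b < -threshold and b - c > threshold):
            return True
    return False
```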
1001
+ # Reflection utilities
1002
+ def fork_reflection(self, base_state: Dict[str, Any], depth: int = 2) -> List[Dict[str, Any]]:
1003
+ """
1004
+ Create multiple reflection paths from a base state.
1005
+
1006
+ Args:
1007
+ base_state: Base state to fork from
1008
+ depth: Depth of reflection
1009
+
1010
+ Returns:
1011
+ List of reflection results
1012
+ """
1013
+ reflection_paths = []
1014
+
1015
+ # Create reflective dimensions
1016
+ dimensions = [
1017
+ "consistency", # Consistency with philosophy
1018
+ "evidence", # Evidence evaluation
1019
+ "alternatives", # Alternative hypotheses
1020
+ "biases", # Cognitive biases
1021
+ "gaps", # Knowledge gaps
1022
+ ]
1023
+
1024
+ # Generate reflection for each dimension
1025
+ for dimension in dimensions:
1026
+ reflection = self._reflect_on_dimension(base_state, dimension, depth)
1027
+ reflection_paths.append({
1028
+ "dimension": dimension,
1029
+ "reflection": reflection,
1030
+ "confidence": reflection.get("confidence", 0.5),
1031
+ })
1032
+
1033
+ # Sort by confidence (highest first)
1034
+ reflection_paths.sort(key=lambda x: x["confidence"], reverse=True)
1035
+
1036
+ return reflection_paths
1037
+
1038
+ def _reflect_on_dimension(self, state: Dict[str, Any], dimension: str, depth: int) -> Dict[str, Any]:
1039
+ """
1040
+ Reflect on a specific dimension of reasoning.
1041
+
1042
+ Args:
1043
+ state: Current state
1044
+ dimension: Dimension to reflect on
1045
+ depth: Depth of reflection
1046
+
1047
+ Returns:
1048
+ Reflection results
1049
+ """
1050
+ # Craft dimension-specific prompt
1051
+ if dimension == "consistency":
1052
+ prompt_focus = "the consistency between my decisions and my core philosophy"
1053
+ elif dimension == "evidence":
1054
+ prompt_focus = "the quality and completeness of evidence I'm considering"
1055
+ elif dimension == "alternatives":
1056
+ prompt_focus = "alternative hypotheses or viewpoints I might be overlooking"
1057
+ elif dimension == "biases":
1058
+ prompt_focus = "cognitive biases that might be affecting my judgment"
1059
+ elif dimension == "gaps":
1060
+ prompt_focus = "knowledge gaps that could be affecting my analysis"
1061
+ else:
1062
+ prompt_focus = f"the dimension of {dimension} in my reasoning"
1063
+
1064
+ # Create reflection prompt
1065
+ prompt = f"""
1066
+ You are the {self.agent_name} agent with the following philosophy:
1067
+ "{self.agent_philosophy}"
1068
+
1069
+ Perform a depth {depth} reflection focusing specifically on {prompt_focus}.
1070
+
1071
+ Current state:
1072
+ {state}
1073
+
1074
+ Respond with a JSON object containing:
1075
+ - dimension: "{dimension}"
1076
+ - assessment: Your assessment of this dimension
1077
+ - issues_identified: List of specific issues identified
1078
+ - recommendations: Recommendations to address these issues
1079
+ - confidence: Your confidence in this assessment (0-1)
1080
+ """
1081
+
1082
+ try:
1083
+ # Generate reflection using LLM
1084
+ response = self.model_router.generate(prompt)
1085
+
1086
+ # Parse JSON response (with fallback)
1087
+ try:
1088
+ import json
1089
+ reflection = json.loads(response)
1090
+ except json.JSONDecodeError:
1091
+ # Fallback parsing
1092
+ reflection = {
1093
+ "dimension": dimension,
1094
+ "assessment": "Unable to parse full reflection",
1095
+ "issues_identified": [],
1096
+ "recommendations": [],
1097
+ "confidence": 0.3,
1098
+ }
1099
+
1100
+ return reflection
1101
+
1102
+ except Exception as e:
1103
+ logging.warning(f"Error in dimension reflection: {e}")
1104
+ return {
1105
+ "dimension": dimension,
1106
+ "assessment": "Reflection failed due to error",
1107
+ "issues_identified": ["reflection_failure"],
1108
+ "recommendations": ["error_handling"],
1109
+ "confidence": 0.2,
1110
+ "error": str(e),
1111
+ }
src/cognition/memory.py ADDED
@@ -0,0 +1,1013 @@
1
+ """
2
+ MemoryShell - Temporal Memory Architecture for Recursive Agents
3
+
4
+ This module implements the memory shell architecture that enables agents to
5
+ maintain persistent memory with configurable decay properties. The memory
6
+ shell acts as a cognitive substrate that provides:
7
+
8
+ - Short-term working memory
9
+ - Medium-term episodic memory with decay
10
+ - Long-term semantic memory with compression
11
+ - Temporal relationship tracking
12
+ - Experience-based learning
13
+
14
+ Internal Note: The memory shell simulates the MEMTRACE and ECHO-LOOP interpretability
15
+ shells for modeling memory decay and feedback loops in agent cognition.
16
+ """
17
+
18
+ import datetime
19
+ import math
20
+ import uuid
21
+ import heapq
22
+ from typing import Dict, List, Any, Optional, Tuple, Set
23
+ import numpy as np
24
+ from collections import defaultdict, deque
25
+
26
+ from pydantic import BaseModel, Field
27
+
28
+
29
+ class Memory(BaseModel):
30
+ """Base memory unit with attribution and decay properties."""
31
+
32
+ id: str = Field(default_factory=lambda: str(uuid.uuid4()))
33
+ content: Dict[str, Any] = Field(...)
34
+ memory_type: str = Field(...) # "episodic", "semantic", "working"
35
+ creation_time: datetime.datetime = Field(default_factory=datetime.datetime.now)
36
+ last_access_time: datetime.datetime = Field(default_factory=datetime.datetime.now)
37
+ access_count: int = Field(default=1)
38
+ salience: float = Field(default=1.0) # Initial salience (0-1)
39
+ decay_rate: float = Field(default=0.1) # Default decay rate per time unit
40
+ associations: Dict[str, float] = Field(default_factory=dict) # Associated memory IDs with strengths
41
+ source: Optional[str] = Field(default=None) # Source of the memory (e.g., "observation", "reflection")
42
+ tags: List[str] = Field(default_factory=list) # Semantic tags for the memory
43
+
44
+ def update_access(self) -> None:
45
+ """Update access time and count."""
46
+ self.last_access_time = datetime.datetime.now()
47
+ self.access_count += 1
48
+
49
+ def calculate_current_salience(self) -> float:
50
+ """Calculate current salience based on decay model."""
51
+ # Time since creation in hours
52
+ hours_since_creation = (datetime.datetime.now() - self.creation_time).total_seconds() / 3600
53
+
54
+ # Apply decay model: exponential decay with access-based reinforcement
55
+ base_decay = math.exp(-self.decay_rate * hours_since_creation)
56
+ access_factor = math.log1p(self.access_count) / 10 # Logarithmic access bonus
57
+
58
+ # Calculate current salience (capped at 1.0)
59
+ current_salience = min(1.0, self.salience * base_decay * (1 + access_factor))
60
+
61
+ return current_salience
62
+
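The decay model above combines exponential time decay with a logarithmic access bonus. A pure-function sketch of the same arithmetic, useful for tuning `decay_rate` outside the class (the function name is illustrative):

```python
import math


def current_salience(base_salience, decay_rate, hours_since_creation, access_count):
    """Mirror of Memory.calculate_current_salience: exponential decay
    with a log1p access-based reinforcement bonus, capped at 1.0."""
    base_decay = math.exp(-decay_rate * hours_since_creation)
    access_factor = math.log1p(access_count) / 10
    return min(1.0, base_salience * base_decay * (1 + access_factor))
```

For example, with `decay_rate=0.1` a fresh memory sits at the cap, and salience falls off smoothly over the following day.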
63
+ def add_association(self, memory_id: str, strength: float = 0.5) -> None:
64
+ """
65
+ Add association to another memory.
66
+
67
+ Args:
68
+ memory_id: ID of memory to associate with
69
+ strength: Association strength (0-1)
70
+ """
71
+ self.associations[memory_id] = strength
72
+
73
+ def add_tag(self, tag: str) -> None:
74
+ """
75
+ Add semantic tag to memory.
76
+
77
+ Args:
78
+ tag: Tag to add
79
+ """
80
+ if tag not in self.tags:
81
+ self.tags.append(tag)
82
+
83
+ def as_dict(self) -> Dict[str, Any]:
84
+ """Convert to dictionary for export."""
85
+ return {
86
+ "id": self.id,
87
+ "content": self.content,
88
+ "memory_type": self.memory_type,
89
+ "creation_time": self.creation_time.isoformat(),
90
+ "last_access_time": self.last_access_time.isoformat(),
91
+ "access_count": self.access_count,
92
+ "salience": self.salience,
93
+ "current_salience": self.calculate_current_salience(),
94
+ "decay_rate": self.decay_rate,
95
+ "associations": self.associations,
96
+ "source": self.source,
97
+ "tags": self.tags,
98
+ }
99
+
100
+
101
+ class EpisodicMemory(Memory):
102
+ """Episodic memory representing specific experiences."""
103
+
104
+ sequence_position: Optional[int] = Field(default=None) # Position in temporal sequence
105
+ emotional_valence: float = Field(default=0.0) # Emotional charge (-1 to 1)
106
+ outcome: Optional[str] = Field(default=None) # Outcome of the experience
107
+
108
+ def __init__(self, **data):
109
+ data["memory_type"] = "episodic"
110
+ super().__init__(**data)
111
+
112
+
113
+ class SemanticMemory(Memory):
114
+ """Semantic memory representing conceptual knowledge."""
115
+
116
+ certainty: float = Field(default=0.7) # Certainty level (0-1)
117
+ contradiction_ids: List[str] = Field(default_factory=list) # IDs of contradicting memories
118
+ supporting_evidence: List[str] = Field(default_factory=list) # IDs of supporting memories
119
+
120
+ def __init__(self, **data):
121
+ data["memory_type"] = "semantic"
122
+ # Semantic memories decay more slowly
123
+ data.setdefault("decay_rate", 0.05)
124
+ super().__init__(**data)
125
+
126
+ def add_evidence(self, memory_id: str, is_supporting: bool = True) -> None:
127
+ """
128
+ Add supporting or contradicting evidence.
129
+
130
+ Args:
131
+ memory_id: Memory ID for evidence
132
+ is_supporting: Whether evidence is supporting (True) or contradicting (False)
133
+ """
134
+ if is_supporting:
135
+ if memory_id not in self.supporting_evidence:
136
+ self.supporting_evidence.append(memory_id)
137
+ else:
138
+ if memory_id not in self.contradiction_ids:
139
+ self.contradiction_ids.append(memory_id)
140
+
141
+ def update_certainty(self, evidence_ratio: float) -> None:
142
+ """
143
+ Update certainty based on supporting/contradicting evidence ratio.
144
+
145
+ Args:
146
+ evidence_ratio: Ratio of supporting to total evidence (0-1)
147
+ """
148
+ # Blend current certainty with evidence ratio
149
+ self.certainty = 0.7 * self.certainty + 0.3 * evidence_ratio
150
+
151
+
152
+ class WorkingMemory(Memory):
153
+ """Working memory representing active thinking and temporary storage."""
154
+
155
+ expiration_time: datetime.datetime = Field(default_factory=lambda: datetime.datetime.now() + datetime.timedelta(hours=1))
156
+ priority: int = Field(default=1) # Priority level (higher = more important)
157
+
158
+ def __init__(self, **data):
159
+ data["memory_type"] = "working"
160
+ # Working memories decay rapidly
161
+ data.setdefault("decay_rate", 0.5)
162
+ super().__init__(**data)
163
+
164
+ def set_expiration(self, hours: float) -> None:
165
+ """
166
+ Set expiration time for working memory.
167
+
168
+ Args:
169
+ hours: Hours until expiration
170
+ """
171
+ self.expiration_time = datetime.datetime.now() + datetime.timedelta(hours=hours)
172
+
173
+ def is_expired(self) -> bool:
174
+ """Check if working memory has expired."""
175
+ return datetime.datetime.now() > self.expiration_time
176
+
177
+
178
+ class MemoryShell:
179
+ """
180
+ Memory shell architecture for agent cognitive persistence.
181
+
182
+ The MemoryShell provides:
183
+ - Multi-tiered memory system (working, episodic, semantic)
184
+ - Configurable decay rates for different memory types
185
+ - Time-based and access-based memory reinforcement
186
+ - Associative memory network with activation spread
187
+ - Query capabilities with relevance ranking
188
+ """
189
+
190
+ def __init__(self, decay_rate: float = 0.2):
191
+ """
192
+ Initialize memory shell.
193
+
194
+ Args:
195
+ decay_rate: Base decay rate for memories
196
+ """
197
+ self.memories: Dict[str, Memory] = {}
198
+ self.decay_rate = decay_rate
199
+ self.working_memory_capacity = 7 # Miller's number (7±2)
200
+ self.episodic_index: Dict[str, Set[str]] = defaultdict(set) # Tag -> memory IDs
201
+ self.semantic_index: Dict[str, Set[str]] = defaultdict(set) # Tag -> memory IDs
202
+ self.temporal_sequence: List[str] = [] # Ordered list of episodic memory IDs
203
+ self.activation_threshold = 0.1 # Minimum activation for retrieval
204
+
205
+ # Initialize memory statistics
206
+ self.stats = {
207
+ "total_memories_created": 0,
208
+ "total_memories_decayed": 0,
209
+ "working_memory_count": 0,
210
+ "episodic_memory_count": 0,
211
+ "semantic_memory_count": 0,
212
+ "average_salience": 0.0,
213
+ "association_count": 0,
214
+ }
215
+
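The two tag indices initialized above drive retrieval: each tag maps to a set of memory IDs, and a tag filter intersects the candidate pool with the union of the requested tags' entries. A minimal sketch of that lookup, using plain IDs in place of real memories:

```python
from collections import defaultdict

# Tag -> memory-ID indices, as in MemoryShell.__init__.
episodic_index = defaultdict(set)
semantic_index = defaultdict(set)

episodic_index["earnings"].update({"m1", "m2"})
semantic_index["earnings"].add("m3")
episodic_index["macro"].add("m4")

candidates = {"m1", "m2", "m3", "m4", "m5"}

# Union the requested tags' entries across both indices, then intersect.
tag_memories = set()
for tag in ["earnings"]:
    tag_memories |= episodic_index.get(tag, set())
    tag_memories |= semantic_index.get(tag, set())
candidates &= tag_memories
```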
216
+ def add_working_memory(self, content: Dict[str, Any], priority: int = 1,
217
+ expiration_hours: float = 1.0, tags: List[str] = None) -> str:
218
+ """
219
+ Add item to working memory.
220
+
221
+ Args:
222
+ content: Memory content
223
+ priority: Priority level (higher = more important)
224
+ expiration_hours: Hours until expiration
225
+ tags: Semantic tags
226
+
227
+ Returns:
228
+ Memory ID
229
+ """
230
+ # Create working memory
231
+ memory = WorkingMemory(
232
+ content=content,
233
+ priority=priority,
234
+ decay_rate=self.decay_rate * 2, # Working memory decays faster
235
+ tags=tags or [],
236
+ source="working",
237
+ )
238
+
239
+ # Set expiration
240
+ memory.set_expiration(expiration_hours)
241
+
242
+ # Store in memory dictionary
243
+ self.memories[memory.id] = memory
244
+
245
+ # Add to indices
246
+ for tag in memory.tags:
247
+ self.episodic_index[tag].add(memory.id)
248
+
249
+ # Enforce capacity limit
250
+ self._enforce_working_memory_capacity()
251
+
252
+ # Update stats
253
+ self.stats["total_memories_created"] += 1
254
+ self.stats["working_memory_count"] += 1
255
+
256
+ return memory.id
257
+
258
+ def add_episodic_memory(self, content: Dict[str, Any], emotional_valence: float = 0.0,
259
+ outcome: Optional[str] = None, tags: List[str] = None) -> str:
260
+ """
261
+ Add episodic memory.
262
+
263
+ Args:
264
+ content: Memory content
265
+ emotional_valence: Emotional charge (-1 to 1)
266
+ outcome: Outcome of the experience
267
+ tags: Semantic tags
268
+
269
+ Returns:
270
+ Memory ID
271
+ """
272
+ # Create episodic memory
273
+ memory = EpisodicMemory(
274
+ content=content,
275
+ emotional_valence=emotional_valence,
276
+ outcome=outcome,
277
+ decay_rate=self.decay_rate,
278
+ tags=tags or [],
279
+ source="episode",
280
+ )
281
+
282
+ # Set sequence position
283
+ memory.sequence_position = len(self.temporal_sequence)
284
+
285
+ # Store in memory dictionary
286
+ self.memories[memory.id] = memory
287
+
288
+ # Add to indices
289
+ for tag in memory.tags:
290
+ self.episodic_index[tag].add(memory.id)
291
+
292
+ # Add to temporal sequence
293
+ self.temporal_sequence.append(memory.id)
294
+
295
+ # Update stats
296
+ self.stats["total_memories_created"] += 1
297
+ self.stats["episodic_memory_count"] += 1
298
+
299
+ return memory.id
300
+
301
+ def add_semantic_memory(self, content: Dict[str, Any], certainty: float = 0.7,
302
+ tags: List[str] = None) -> str:
303
+ """
304
+ Add semantic memory.
305
+
306
+ Args:
307
+ content: Memory content
308
+ certainty: Certainty level (0-1)
309
+ tags: Semantic tags
310
+
311
+ Returns:
312
+ Memory ID
313
+ """
314
+ # Create semantic memory
315
+ memory = SemanticMemory(
316
+ content=content,
317
+ certainty=certainty,
318
+ decay_rate=self.decay_rate * 0.5, # Semantic memory decays slower
319
+ tags=tags or [],
320
+ source="semantic",
321
+ )
322
+
323
+ # Store in memory dictionary
324
+ self.memories[memory.id] = memory
325
+
326
+ # Add to indices
327
+ for tag in memory.tags:
328
+ self.semantic_index[tag].add(memory.id)
329
+
330
+ # Update stats
331
+ self.stats["total_memories_created"] += 1
332
+ self.stats["semantic_memory_count"] += 1
333
+
334
+ return memory.id
335
+
336
+ def add_experience(self, experience: Dict[str, Any]) -> Tuple[str, List[str]]:
337
+ """
338
+ Add new experience as episodic memory and extract semantic memories.
339
+
340
+ Args:
341
+ experience: Experience data
342
+
343
+ Returns:
344
+ Tuple of (episodic_id, list of semantic_ids)
345
+ """
346
+ # Extract tags from experience
347
+ tags = experience.get("tags", [])
348
+ if not tags and "type" in experience:
349
+ tags = [experience["type"]]
350
+
351
+ # Create episodic memory
352
+ episodic_id = self.add_episodic_memory(
353
+ content=experience,
354
+ emotional_valence=experience.get("emotional_valence", 0.0),
355
+ outcome=experience.get("outcome"),
356
+ tags=tags,
357
+ )
358
+
359
+ # Extract semantic information (simple implementation)
360
+ semantic_ids = []
361
+ if "insights" in experience and isinstance(experience["insights"], list):
362
+ for insight in experience["insights"]:
363
+ if isinstance(insight, dict):
364
+ semantic_id = self.add_semantic_memory(
365
+ content=insight,
366
+ certainty=insight.get("confidence", 0.7),
367
+ tags=insight.get("tags", tags),
368
+ )
369
+ semantic_ids.append(semantic_id)
370
+
371
+ # Create bidirectional association
372
+ self.add_association(episodic_id, semantic_id, 0.8)
373
+
374
+ return episodic_id, semantic_ids
375
+
376
+ def add_association(self, memory_id1: str, memory_id2: str, strength: float = 0.5) -> bool:
377
+ """
378
+ Add bidirectional association between memories.
379
+
380
+ Args:
381
+ memory_id1: First memory ID
382
+ memory_id2: Second memory ID
383
+ strength: Association strength (0-1)
384
+
385
+ Returns:
386
+ Success status
387
+ """
388
+ # Verify memories exist
389
+ if memory_id1 not in self.memories or memory_id2 not in self.memories:
390
+ return False
391
+
392
+ # Add bidirectional association
393
+ self.memories[memory_id1].add_association(memory_id2, strength)
394
+ self.memories[memory_id2].add_association(memory_id1, strength)
395
+
396
+ # Update stats
397
+ self.stats["association_count"] += 2
398
+
399
+ return True
400
+
401
+ def get_memory(self, memory_id: str) -> Optional[Dict[str, Any]]:
402
+ """
403
+ Retrieve memory by ID.
404
+
405
+ Args:
406
+ memory_id: Memory ID
407
+
408
+ Returns:
409
+ Memory data or None if not found
410
+ """
411
+ if memory_id not in self.memories:
412
+ return None
413
+
414
+ # Get memory
415
+ memory = self.memories[memory_id]
416
+
417
+ # Update access statistics
418
+ memory.update_access()
419
+
420
+ # Convert to dictionary
421
+ memory_dict = memory.as_dict()
422
+
423
+ return memory_dict
424
+
425
+ def query_memories(self, query: Dict[str, Any], memory_type: Optional[str] = None,
426
+ tags: Optional[List[str]] = None, limit: int = 10) -> List[Dict[str, Any]]:
427
+ """
428
+ Query memories based on content, type, and tags.
429
+
430
+ Args:
431
+ query: Query terms
432
+ memory_type: Optional filter by memory type
433
+ tags: Optional filter by tags
434
+ limit: Maximum number of results
435
+
436
+ Returns:
437
+ List of matching memories
438
+ """
439
+ # Filter by memory type
440
+ candidate_ids = set()
441
+
442
+ if memory_type:
443
+ # Filter by specified memory type
444
+ for memory_id, memory in self.memories.items():
445
+ if memory.memory_type == memory_type:
446
+ candidate_ids.add(memory_id)
447
+ else:
448
+ # Include all memory IDs
449
+ candidate_ids = set(self.memories.keys())
450
+
451
+ # Filter by tags if provided
452
+ if tags:
453
+ tag_memories = set()
454
+ for tag in tags:
455
+ # Combine episodic and semantic indices
456
+ tag_memories.update(self.episodic_index.get(tag, set()))
457
+ tag_memories.update(self.semantic_index.get(tag, set()))
458
+
459
+ # Restrict to memories with matching tags
460
+ if tag_memories:
461
+ candidate_ids = candidate_ids.intersection(tag_memories)
462
+
463
+ # Score candidates based on query relevance and salience
464
+ scored_candidates = []
465
+
466
+ for memory_id in candidate_ids:
467
+ memory = self.memories[memory_id]
468
+
469
+ # Skip memories below activation threshold
470
+ current_salience = memory.calculate_current_salience()
471
+ if current_salience < self.activation_threshold:
472
+ continue
473
+
474
+ # Calculate relevance score
475
+ relevance = self._calculate_relevance(memory, query)
476
+
477
+ # Combine relevance and salience for final score
478
+ score = 0.7 * relevance + 0.3 * current_salience
479
+
480
+ # Add to candidates
481
+ scored_candidates.append((memory_id, score))
482
+
483
+ # Sort by score (descending) and take top 'limit' results
484
+ top_candidates = heapq.nlargest(limit, scored_candidates, key=lambda x: x[1])
485
+
486
+ # Retrieve and return memories
487
+ result_memories = []
488
+ for memory_id, score in top_candidates:
489
+ memory = self.memories[memory_id]
490
+ memory.update_access() # Update access time
491
+
492
+ # Add memory with score
493
+ memory_dict = memory.as_dict()
494
+ memory_dict["relevance_score"] = score
495
+
496
+ result_memories.append(memory_dict)
497
+
498
+ return result_memories
499
+
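The ranking step above blends relevance (weight 0.7) with decayed salience (weight 0.3) and selects the top results with a heap. A small sketch of that blend on made-up scores:

```python
import heapq

# (relevance, current_salience) per memory ID; values are illustrative.
candidates = {
    "m1": (0.9, 0.5),
    "m2": (0.2, 1.0),
    "m3": (0.6, 0.8),
}

# Same blend as query_memories: relevance dominates, salience breaks ties.
scored = [(mid, 0.7 * rel + 0.3 * sal) for mid, (rel, sal) in candidates.items()]
top2 = heapq.nlargest(2, scored, key=lambda x: x[1])
```

`heapq.nlargest` avoids a full sort when only the top `limit` entries are needed.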
500
+ def get_recent_memories(self, memory_type: Optional[str] = None, limit: int = 5) -> List[Dict[str, Any]]:
501
+ """
502
+ Get most recent memories by creation time.
503
+
504
+ Args:
505
+ memory_type: Optional filter by memory type
506
+ limit: Maximum number of results
507
+
508
+ Returns:
509
+ List of recent memories
510
+ """
511
+ # Filter and sort memories by creation time
512
+ recent_memories = []
513
+
514
+ for memory_id, memory in self.memories.items():
515
+ # Filter by memory type if specified
516
+ if memory_type and memory.memory_type != memory_type:
517
+ continue
518
+
519
+ # Add to candidates
520
+ recent_memories.append((memory_id, memory.creation_time))
521
+
522
+ # Sort by creation time (descending)
523
+ recent_memories.sort(key=lambda x: x[1], reverse=True)
524
+
525
+ # Retrieve and return top 'limit' memories
526
+ result_memories = []
527
+ for memory_id, _ in recent_memories[:limit]:
528
+ memory = self.memories[memory_id]
529
+ memory.update_access() # Update access time
530
+ result_memories.append(memory.as_dict())
531
+
532
+ return result_memories
533
+
534
+ def get_temporal_sequence(self, start_index: int = 0, limit: int = 10) -> List[Dict[str, Any]]:
535
+ """
536
+ Get temporal sequence of episodic memories.
537
+
538
+ Args:
539
+ start_index: Starting index in sequence
540
+ limit: Maximum number of results
541
+
542
+ Returns:
543
+ List of episodic memories in temporal order
544
+ """
545
+ # Get subset of temporal sequence
546
+ sequence_slice = self.temporal_sequence[start_index:start_index+limit]
547
+
548
+ # Retrieve and return memories
549
+ result_memories = []
550
+ for memory_id in sequence_slice:
551
+ if memory_id in self.memories:
552
+ memory = self.memories[memory_id]
553
+ memory.update_access() # Update access time
554
+ result_memories.append(memory.as_dict())
555
+
556
+ return result_memories
557
+
558
+ def get_relevant_experiences(self, query: Optional[Dict[str, Any]] = None,
559
+ tags: Optional[List[str]] = None, limit: int = 5) -> List[Dict[str, Any]]:
560
+ """
561
+ Get relevant episodic experiences.
562
+
563
+ Args:
564
+ query: Optional query terms
565
+ tags: Optional filter by tags
566
+ limit: Maximum number of results
567
+
568
+ Returns:
569
+ List of relevant experiences
570
+ """
571
+ # If query provided, use query_memories with episodic filter
572
+ if query:
573
+ return self.query_memories(query, memory_type="episodic", tags=tags, limit=limit)
574
+
575
+ # Otherwise get most salient episodic memories
576
+ # Filter by tags if provided
577
+ candidate_ids = set()
578
+ if tags:
579
+ for tag in tags:
580
+ candidate_ids.update(self.episodic_index.get(tag, set()))
581
+ else:
582
+ # Get all episodic memories
583
+ candidate_ids = {memory_id for memory_id, memory in self.memories.items()
584
+ if memory.memory_type == "episodic"}
585
+
586
+ # Score by current salience
587
+ scored_candidates = []
588
+ for memory_id in candidate_ids:
589
+ if memory_id in self.memories:
590
+ memory = self.memories[memory_id]
591
+ current_salience = memory.calculate_current_salience()
592
+
593
+ # Skip memories below activation threshold
594
+ if current_salience < self.activation_threshold:
595
+ continue
596
+
597
+ scored_candidates.append((memory_id, current_salience))
598
+
599
+ # Sort by salience (descending) and take top 'limit' results
600
+ top_candidates = heapq.nlargest(limit, scored_candidates, key=lambda x: x[1])
601
+
602
+ # Retrieve and return memories
603
+ result_memories = []
604
+ for memory_id, _ in top_candidates:
605
+ memory = self.memories[memory_id]
606
+ memory.update_access() # Update access time
607
+ result_memories.append(memory.as_dict())
608
+
609
+ return result_memories
610
+
611
+ def get_beliefs(self, tags: Optional[List[str]] = None, certainty_threshold: float = 0.5) -> List[Dict[str, Any]]:
612
+ """
613
+ Get semantic beliefs with high certainty.
614
+
615
+ Args:
616
+ tags: Optional filter by tags
617
+ certainty_threshold: Minimum certainty threshold
618
+
619
+ Returns:
620
+ List of semantic beliefs
621
+ """
622
+ # Filter by tags if provided
623
+ candidate_ids = set()
624
+ if tags:
625
+ for tag in tags:
626
+ candidate_ids.update(self.semantic_index.get(tag, set()))
627
+ else:
628
+ # Get all semantic memories
629
+ candidate_ids = {memory_id for memory_id, memory in self.memories.items()
630
+ if memory.memory_type == "semantic"}
631
+
632
+ # Filter and score by certainty and salience
633
+ scored_candidates = []
634
+ for memory_id in candidate_ids:
635
+ if memory_id in self.memories:
636
+ memory = self.memories[memory_id]
637
+
638
+ # Skip if not semantic or below certainty threshold
639
+ if memory.memory_type != "semantic" or not hasattr(memory, "certainty"):
640
+ continue
641
+
642
+ if memory.certainty < certainty_threshold:
643
+ continue
644
+
645
+ # Calculate current salience
646
+ current_salience = memory.calculate_current_salience()
647
+
648
+ # Skip memories below activation threshold
649
+ if current_salience < self.activation_threshold:
650
+ continue
651
+
652
+ # Score combines certainty and salience
653
+ score = 0.6 * memory.certainty + 0.4 * current_salience
654
+
655
+ scored_candidates.append((memory_id, score))
656
+
657
+ # Sort by score (descending)
658
+ scored_candidates.sort(key=lambda x: x[1], reverse=True)
659
+
660
+ # Retrieve and return memories
661
+ result_memories = []
662
+ for memory_id, score in scored_candidates:
663
+ memory = self.memories[memory_id]
664
+ memory.update_access() # Update access time
665
+
666
+ # Convert to dictionary and add score
667
+ memory_dict = memory.as_dict()
668
+ memory_dict["belief_score"] = score
669
+
670
+ result_memories.append(memory_dict)
671
+
672
+ return result_memories
673
+
674
+ def apply_decay(self) -> int:
675
+ """
676
+ Apply memory decay to all memories and clean up decayed memories.
677
+
678
+ Returns:
679
+ Number of memories removed due to decay
680
+ """
681
+ # Track memories to remove
682
+ to_remove = []
683
+
684
+ # Check all memories
685
+ for memory_id, memory in self.memories.items():
686
+ # Calculate current salience
687
+ current_salience = memory.calculate_current_salience()
688
+
689
+ # Mark for removal if below threshold
690
+ if current_salience < self.activation_threshold:
691
+ to_remove.append(memory_id)
692
+
693
+ # Remove decayed memories
694
+ for memory_id in to_remove:
695
+ self._remove_memory(memory_id)
696
+
697
+ # Update stats
698
+ self.stats["total_memories_decayed"] += len(to_remove)
699
+ self.stats["working_memory_count"] = sum(1 for m in self.memories.values() if m.memory_type == "working")
700
+ self.stats["episodic_memory_count"] = sum(1 for m in self.memories.values() if m.memory_type == "episodic")
701
+ self.stats["semantic_memory_count"] = sum(1 for m in self.memories.values() if m.memory_type == "semantic")
702
+
703
+ # Calculate average salience
704
+ if self.memories:
705
+ self.stats["average_salience"] = sum(m.calculate_current_salience() for m in self.memories.values()) / len(self.memories)
706
+
707
+ return len(to_remove)
708
+
709
+ def consolidate_memories(self) -> Dict[str, Any]:
710
+ """
711
+ Consolidate episodic memories into semantic memories.
712
+
713
+ Returns:
714
+ Consolidation results
715
+ """
716
+ # Not fully implemented - this would involve more complex semantic extraction
717
+ # Simple implementation: just extract common tags from recent episodic memories
718
+ recent_episodic = self.get_recent_memories(memory_type="episodic", limit=10)
719
+
720
+ # Count tag occurrences
721
+ tag_counts = defaultdict(int)
722
+ for memory in recent_episodic:
723
+ for tag in memory.get("tags", []):
724
+ tag_counts[tag] += 1
725
+
726
+ # Find common tags (appearing in at least 3 memories)
727
+ common_tags = {tag for tag, count in tag_counts.items() if count >= 3}
728
+
729
+ # Create semantic memory for common tags if not empty
730
+ consolidated_ids = []
731
+ if common_tags:
732
+ # Create semantic memory
733
+ semantic_id = self.add_semantic_memory(
734
+ content={
735
+ "consolidated_from": [m.get("id") for m in recent_episodic],
736
+ "common_tags": list(common_tags),
737
+ "summary": f"Consolidated memory with common tags: {', '.join(common_tags)}"
738
+ },
739
+ certainty=0.6,
740
+ tags=list(common_tags),
741
+ )
742
+
743
+ consolidated_ids.append(semantic_id)
744
+
745
+ return {
746
+ "consolidated_count": len(consolidated_ids),
747
+ "consolidated_ids": consolidated_ids,
748
+ "common_tags": list(common_tags) if common_tags else []
749
+ }
750
+
751
+ def _calculate_relevance(self, memory: Memory, query: Dict[str, Any]) -> float:
752
+ """
753
+ Calculate relevance score of memory to query.
754
+
755
+ Args:
756
+ memory: Memory to score
757
+ query: Query terms
758
+
759
+ Returns:
760
+ Relevance score (0-1)
761
+ """
762
+ # Simple implementation: check for key overlaps
763
+ relevance = 0.0
764
+
765
+ # Extract memory content
766
+ content = memory.content
767
+
768
+ # Count matching keys at top level
769
+ matching_keys = set(query.keys()).intersection(set(content.keys()))
770
+ if matching_keys:
771
+ relevance += 0.3 * (len(matching_keys) / len(query))
772
+
773
+ # Check for matching values (simple string contains)
774
+ for key, value in query.items():
775
+ if key in content and isinstance(value, str) and isinstance(content[key], str):
776
+ if value.lower() in content[key].lower():
777
+ relevance += 0.2
778
+ elif key in content and value == content[key]:
779
+ relevance += 0.3
780
+
781
+ # Check for tag matches
782
+ query_tags = query.get("tags", [])
783
+ if isinstance(query_tags, list) and memory.tags:
784
+ matching_tags = set(query_tags).intersection(set(memory.tags))
785
+ if matching_tags:
786
+ relevance += 0.3 * (len(matching_tags) / len(query_tags))
787
+
788
+ # Cap relevance at 1.0
789
+ return min(1.0, relevance)
790
+
791
+ def _enforce_working_memory_capacity(self) -> None:
792
+ """Enforce working memory capacity limit by removing low priority items."""
793
+ # Count working memories
794
+ working_memories = [(memory_id, memory) for memory_id, memory in self.memories.items()
795
+ if memory.memory_type == "working"]
796
+
797
+ # Check if over capacity
798
+ if len(working_memories) <= self.working_memory_capacity:
799
+ return
800
+
801
+ # Sort by priority (ascending) and salience (ascending)
802
+ working_memories.sort(key=lambda x: (x[1].priority, x[1].calculate_current_salience()))
803
+
804
+ # Remove lowest priority items until under capacity
805
+ for memory_id, _ in working_memories[:len(working_memories) - self.working_memory_capacity]:
806
+ self._remove_memory(memory_id)
807
+
808
+ def _remove_memory(self, memory_id: str) -> None:
809
+ """
810
+ Remove memory by ID.
811
+
812
+ Args:
813
+ memory_id: Memory ID to remove
814
+ """
815
+ if memory_id not in self.memories:
816
+ return
817
+
818
+ # Get memory before removal
819
+ memory = self.memories[memory_id]
820
+
821
+ # Remove from memory dictionary
822
+ del self.memories[memory_id]
823
+
824
+ # Remove from indices
825
+ for tag in memory.tags:
826
+ if memory.memory_type == "episodic" and tag in self.episodic_index:
827
+ self.episodic_index[tag].discard(memory_id)
828
+ elif memory.memory_type == "semantic" and tag in self.semantic_index:
829
+ self.semantic_index[tag].discard(memory_id)
830
+
831
+ # Remove from temporal sequence if episodic
832
+ if memory.memory_type == "episodic":
833
+ if memory_id in self.temporal_sequence:
834
+ self.temporal_sequence.remove(memory_id)
835
+
836
+ # Update associations in other memories
837
+ for other_id, other_memory in self.memories.items():
838
+ if memory_id in other_memory.associations:
839
+ del other_memory.associations[memory_id]
840
+
841
+ def export_state(self) -> Dict[str, Any]:
842
+ """
843
+ Export memory shell state.
844
+
845
+ Returns:
846
+ Serializable memory shell state
847
+ """
848
+ # Export memory dictionaries
849
+ memory_dicts = {memory_id: memory.as_dict() for memory_id, memory in self.memories.items()}
850
+
851
+ # Export indices (convert sets to lists for serialization)
852
+ episodic_index = {tag: list(memories) for tag, memories in self.episodic_index.items()}
853
+ semantic_index = {tag: list(memories) for tag, memories in self.semantic_index.items()}
854
+
855
+ # Export state
856
+ state = {
857
+ "memories": memory_dicts,
858
+ "episodic_index": episodic_index,
859
+ "semantic_index": semantic_index,
860
+ "temporal_sequence": self.temporal_sequence,
861
+ "decay_rate": self.decay_rate,
862
+ "activation_threshold": self.activation_threshold,
863
+ "working_memory_capacity": self.working_memory_capacity,
864
+ "stats": self.stats,
865
+ }
866
+
867
+ return state
868
+
869
+ def import_state(self, state: Dict[str, Any]) -> None:
870
+ """
871
+ Import memory shell state.
872
+
873
+ Args:
874
+ state: Memory shell state
875
+ """
876
+ # Clear current state
877
+ self.memories = {}
878
+ self.episodic_index = defaultdict(set)
879
+ self.semantic_index = defaultdict(set)
880
+ self.temporal_sequence = []
881
+
882
+ # Import configuration
883
+ self.decay_rate = state.get("decay_rate", self.decay_rate)
884
+ self.activation_threshold = state.get("activation_threshold", self.activation_threshold)
885
+ self.working_memory_capacity = state.get("working_memory_capacity", self.working_memory_capacity)
886
+
887
+ # Import memories
888
+ for memory_id, memory_dict in state.get("memories", {}).items():
889
+ memory_type = memory_dict.get("memory_type")
890
+
891
+ if memory_type == "working":
892
+ # Create working memory
893
+ memory = WorkingMemory(
894
+ id=memory_id,
895
+ content=memory_dict.get("content", {}),
896
+ priority=memory_dict.get("priority", 1),
897
+ decay_rate=memory_dict.get("decay_rate", self.decay_rate * 2),
898
+ tags=memory_dict.get("tags", []),
899
+ source=memory_dict.get("source", "working"),
900
+ salience=memory_dict.get("salience", 1.0),
901
+ creation_time=datetime.datetime.fromisoformat(memory_dict.get("creation_time", datetime.datetime.now().isoformat())),
902
+ last_access_time=datetime.datetime.fromisoformat(memory_dict.get("last_access_time", datetime.datetime.now().isoformat())),
903
+ access_count=memory_dict.get("access_count", 1),
904
+ associations=memory_dict.get("associations", {}),
905
+ )
906
+
907
+ # Set expiration time
908
+ if "expiration_time" in memory_dict:
909
+ memory.expiration_time = datetime.datetime.fromisoformat(memory_dict["expiration_time"])
910
+ else:
911
+ memory.set_expiration(1.0)
912
+
913
+ # Store memory
914
+ self.memories[memory_id] = memory
915
+
916
+ elif memory_type == "episodic":
917
+ # Create episodic memory
918
+ memory = EpisodicMemory(
919
+ id=memory_id,
920
+ content=memory_dict.get("content", {}),
921
+ emotional_valence=memory_dict.get("emotional_valence", 0.0),
922
+ outcome=memory_dict.get("outcome"),
923
+ decay_rate=memory_dict.get("decay_rate", self.decay_rate),
924
+ tags=memory_dict.get("tags", []),
925
+ source=memory_dict.get("source", "episode"),
926
+ salience=memory_dict.get("salience", 1.0),
927
+ creation_time=datetime.datetime.fromisoformat(memory_dict.get("creation_time", datetime.datetime.now().isoformat())),
928
+ last_access_time=datetime.datetime.fromisoformat(memory_dict.get("last_access_time", datetime.datetime.now().isoformat())),
929
+ access_count=memory_dict.get("access_count", 1),
930
+ associations=memory_dict.get("associations", {}),
931
+ sequence_position=memory_dict.get("sequence_position"),
932
+ )
933
+
934
+ # Store memory
935
+ self.memories[memory_id] = memory
936
+
937
+ elif memory_type == "semantic":
938
+ # Create semantic memory
939
+ memory = SemanticMemory(
940
+ id=memory_id,
941
+ content=memory_dict.get("content", {}),
942
+ certainty=memory_dict.get("certainty", 0.7),
943
+ decay_rate=memory_dict.get("decay_rate", self.decay_rate * 0.5),
944
+ tags=memory_dict.get("tags", []),
945
+ source=memory_dict.get("source", "semantic"),
946
+ salience=memory_dict.get("salience", 1.0),
947
+ creation_time=datetime.datetime.fromisoformat(memory_dict.get("creation_time", datetime.datetime.now().isoformat())),
948
+ last_access_time=datetime.datetime.fromisoformat(memory_dict.get("last_access_time", datetime.datetime.now().isoformat())),
949
+ access_count=memory_dict.get("access_count", 1),
950
+ associations=memory_dict.get("associations", {}),
951
+ contradiction_ids=memory_dict.get("contradiction_ids", []),
952
+ supporting_evidence=memory_dict.get("supporting_evidence", []),
953
+ )
954
+
955
+ # Store memory
956
+ self.memories[memory_id] = memory
957
+
958
+ # Import indices
959
+ for tag, memory_ids in state.get("episodic_index", {}).items():
960
+ self.episodic_index[tag] = set(memory_ids)
961
+
962
+ for tag, memory_ids in state.get("semantic_index", {}).items():
963
+ self.semantic_index[tag] = set(memory_ids)
964
+
965
+ # Import temporal sequence
966
+ self.temporal_sequence = state.get("temporal_sequence", [])
967
+
968
+ # Import stats
969
+ self.stats = state.get("stats", self.stats.copy())
970
+
971
+ # Update stats
972
+ self.stats["working_memory_count"] = sum(1 for m in self.memories.values() if m.memory_type == "working")
973
+ self.stats["episodic_memory_count"] = sum(1 for m in self.memories.values() if m.memory_type == "episodic")
974
+ self.stats["semantic_memory_count"] = sum(1 for m in self.memories.values() if m.memory_type == "semantic")
975
+
976
+ def get_stats(self) -> Dict[str, Any]:
977
+ """
978
+ Get memory shell statistics.
979
+
980
+ Returns:
981
+ Memory statistics
982
+ """
983
+ # Update current stats
984
+ self.stats["working_memory_count"] = sum(1 for m in self.memories.values() if m.memory_type == "working")
985
+ self.stats["episodic_memory_count"] = sum(1 for m in self.memories.values() if m.memory_type == "episodic")
986
+ self.stats["semantic_memory_count"] = sum(1 for m in self.memories.values() if m.memory_type == "semantic")
987
+
988
+ # Calculate average salience
989
+ if self.memories:
990
+ self.stats["average_salience"] = sum(m.calculate_current_salience() for m in self.memories.values()) / len(self.memories)
991
+
992
+ # Calculate additional stats
993
+ active_memories = sum(1 for m in self.memories.values()
994
+ if m.calculate_current_salience() >= self.activation_threshold)
995
+
996
+ tag_stats = {
997
+ "episodic_tags": len(self.episodic_index),
998
+ "semantic_tags": len(self.semantic_index),
999
+ }
1000
+
1001
+ decay_stats = {
1002
+ "activation_threshold": self.activation_threshold,
1003
+ "active_memory_ratio": active_memories / len(self.memories) if self.memories else 0,
1004
+ "decay_rate": self.decay_rate,
1005
+ }
1006
+
1007
+ return {
1008
+ **self.stats,
1009
+ **tag_stats,
1010
+ **decay_stats,
1011
+ "total_memories": len(self.memories),
1012
+ "active_memories": active_memories,
1013
+ }
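The retrieval methods in this chunk (`get_relevant_experiences`, `get_beliefs`) share one pattern: compute each candidate's current salience, drop candidates below the activation threshold, and keep the top-k with `heapq.nlargest`. A minimal, self-contained sketch of that pattern (the function name and threshold value here are illustrative, not the repo's API):

```python
import heapq

ACTIVATION_THRESHOLD = 0.2  # assumed value, standing in for self.activation_threshold


def top_k_by_salience(saliences, k):
    """Return the k (memory_id, salience) pairs with highest salience above threshold."""
    candidates = [(mid, s) for mid, s in saliences.items() if s >= ACTIVATION_THRESHOLD]
    # nlargest returns the candidates sorted by salience, descending
    return heapq.nlargest(k, candidates, key=lambda x: x[1])


saliences = {"m1": 0.9, "m2": 0.1, "m3": 0.5, "m4": 0.7}
print(top_k_by_salience(saliences, 2))  # [('m1', 0.9), ('m4', 0.7)]
```

In the shell itself the score can blend salience with other signals; `get_beliefs` above, for example, weights certainty at 0.6 and salience at 0.4 before ranking.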
src/llm/router.py ADDED
@@ -0,0 +1,925 @@
1
+ """
2
+ ModelRouter - Multi-Model LLM Orchestration Framework
3
+
4
+ This module implements the model routing architecture that allows agents to
5
+ interact with various LLM providers including:
6
+ - OpenAI (GPT models)
7
+ - Anthropic (Claude models)
8
+ - Groq (Llama and Mixtral)
9
+ - Ollama (local models)
10
+ - DeepSeek (DeepSeek models)
11
+
12
+ Key capabilities:
13
+ - Provider-agnostic interface for agent interactions
14
+ - Dynamic provider selection based on capabilities
15
+ - Fallback chains for reliability
16
+ - Prompt template management with provider-specific optimizations
17
+ - Token usage tracking and optimization
18
+ - Output parsing and normalization
19
+
20
+ Internal Note: The model router implements a symbolic abstraction layer over
21
+ different LLM providers while maintaining a unified attribution interface.
22
+ """
23
+
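The docstring above lists "fallback chains for reliability" among the router's capabilities. The router class itself is not shown in this chunk, so as a hedged illustration only (toy callables, not the repo's implementation), the idea reduces to trying providers in order and returning the first success:

```python
def generate_with_fallback(providers, prompt):
    """Try each provider callable in order; return the first successful generation."""
    errors = []
    for provider in providers:
        try:
            return provider(prompt)
        except Exception as e:  # a real router would catch provider-specific errors
            errors.append(e)
    raise RuntimeError(f"All providers failed: {errors}")


# Toy providers: the first always fails, the second succeeds
flaky = lambda p: (_ for _ in ()).throw(ValueError("down"))
stable = lambda p: f"echo: {p}"
print(generate_with_fallback([flaky, stable], "hi"))  # echo: hi
```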
24
+ import os
25
+ import json
26
+ import logging
27
+ import time
28
+ import asyncio
29
+ from typing import Dict, List, Any, Optional, Union, Callable
30
+ import traceback
31
+ from enum import Enum
32
+ from abc import ABC, abstractmethod
33
+
34
+ # Optional imports for different providers
35
+ try:
36
+ import openai
37
+ except ImportError:
38
+ openai = None
39
+
40
+ try:
41
+ import anthropic
42
+ except ImportError:
43
+ anthropic = None
44
+
45
+ try:
46
+ import groq
47
+ except ImportError:
48
+ groq = None
49
+
50
+ try:
51
+ import ollama
52
+ except ImportError:
53
+ ollama = None
54
+
55
+
56
+ class ModelCapability(Enum):
57
+ """Capabilities that models may support."""
58
+ REASONING = "reasoning" # Complex multi-step reasoning
59
+ CODE_GENERATION = "code_generation" # Code writing and analysis
60
+ FINANCE = "finance" # Financial analysis and modeling
61
+ RAG = "rag" # Retrieval augmented generation
62
+ TOOL_USE = "tool_use" # Using external tools
63
+ FUNCTION_CALLING = "function_calling" # Structured function calling
64
+ JSON_MODE = "json_mode" # Reliable JSON output
65
+
66
+
67
+ class ModelProvider(ABC):
68
+ """Abstract base class for model providers."""
69
+
70
+ @abstractmethod
71
+ def generate(self, prompt: str, **kwargs) -> str:
72
+ """Generate text from prompt."""
73
+ pass
74
+
75
+ @abstractmethod
76
+ def get_available_models(self) -> List[str]:
77
+ """Get list of available models."""
78
+ pass
79
+
80
+ @abstractmethod
81
+ def get_model_capabilities(self, model_name: str) -> List[ModelCapability]:
82
+ """Get capabilities of a specific model."""
83
+ pass
84
+
85
+
86
+ class OpenAIProvider(ModelProvider):
87
+ """OpenAI model provider."""
88
+
89
+ def __init__(self, api_key: Optional[str] = None):
90
+ """
91
+ Initialize OpenAI provider.
92
+
93
+ Args:
94
+ api_key: OpenAI API key (defaults to OPENAI_API_KEY env var)
95
+ """
96
+ self.api_key = api_key or os.environ.get("OPENAI_API_KEY")
97
+
98
+ if not self.api_key:
99
+ logging.warning("OpenAI API key not provided. OpenAI provider will not work.")
100
+
101
+ if openai is None:
102
+ logging.warning("OpenAI Python package not installed. OpenAI provider will not work.")
103
+
104
+ # Initialize client if possible
105
+ self.client = None
106
+ if openai is not None and self.api_key:
107
+ self.client = openai.OpenAI(api_key=self.api_key)
108
+
109
+ # Define models and capabilities
110
+ self.models = {
111
+ "gpt-4-0125-preview": [
112
+ ModelCapability.REASONING,
113
+ ModelCapability.CODE_GENERATION,
114
+ ModelCapability.FINANCE,
115
+ ModelCapability.RAG,
116
+ ModelCapability.TOOL_USE,
117
+ ModelCapability.FUNCTION_CALLING,
118
+ ModelCapability.JSON_MODE,
119
+ ],
120
+ "gpt-4-turbo-preview": [
121
+ ModelCapability.REASONING,
122
+ ModelCapability.CODE_GENERATION,
123
+ ModelCapability.FINANCE,
124
+ ModelCapability.RAG,
125
+ ModelCapability.TOOL_USE,
126
+ ModelCapability.FUNCTION_CALLING,
127
+ ModelCapability.JSON_MODE,
128
+ ],
129
+ "gpt-4": [
130
+ ModelCapability.REASONING,
131
+ ModelCapability.CODE_GENERATION,
132
+ ModelCapability.FINANCE,
133
+ ModelCapability.RAG,
134
+ ModelCapability.TOOL_USE,
135
+ ModelCapability.FUNCTION_CALLING,
136
+ ],
137
+ "gpt-3.5-turbo": [
138
+ ModelCapability.CODE_GENERATION,
139
+ ModelCapability.RAG,
140
+ ModelCapability.TOOL_USE,
141
+ ModelCapability.FUNCTION_CALLING,
142
+ ModelCapability.JSON_MODE,
144
+ ],
145
+ "gpt-3.5-turbo-instruct": [
146
+ ModelCapability.CODE_GENERATION,
147
+ ModelCapability.RAG,
148
+ ],
149
+ }
150
+
151
+ def generate(self, prompt: str, **kwargs) -> str:
152
+ """
153
+ Generate text from prompt using OpenAI.
154
+
155
+ Args:
156
+ prompt: Input prompt
157
+ **kwargs: Additional parameters
158
+ - model: Model name (default: gpt-4-turbo-preview)
159
+ - temperature: Temperature (default: 0.7)
160
+ - max_tokens: Maximum tokens (default: 2000)
161
+ - json_mode: Whether to enforce JSON output (default: False)
162
+
163
+ Returns:
164
+ Generated text
165
+ """
166
+ if self.client is None:
167
+ raise ValueError("OpenAI client not initialized. Provide a valid API key.")
168
+
169
+ # Extract parameters with defaults
170
+ model = kwargs.get("model", "gpt-4-turbo-preview")
171
+ temperature = kwargs.get("temperature", 0.7)
172
+ max_tokens = kwargs.get("max_tokens", 2000)
173
+ json_mode = kwargs.get("json_mode", False)
174
+
175
+ try:
176
+ # Create messages
177
+ messages = [{"role": "user", "content": prompt}]
178
+
179
+ # Create parameters
180
+ params = {
181
+ "model": model,
182
+ "messages": messages,
183
+ "temperature": temperature,
184
+ "max_tokens": max_tokens,
185
+ }
186
+
187
+ # Add response format if JSON mode
188
+ if json_mode:
189
+ params["response_format"] = {"type": "json_object"}
190
+
191
+ # Make request
192
+ response = self.client.chat.completions.create(**params)
193
+
194
+ # Extract text
195
+ return response.choices[0].message.content
196
+
197
+ except Exception as e:
198
+ logging.error(f"Error generating text with OpenAI: {e}")
199
+ logging.error(traceback.format_exc())
200
+ raise
201
+
202
+ def get_available_models(self) -> List[str]:
203
+ """Get list of available models."""
204
+ return list(self.models.keys())
205
+
206
+ def get_model_capabilities(self, model_name: str) -> List[ModelCapability]:
207
+ """Get capabilities of a specific model."""
208
+ return self.models.get(model_name, [])
209
+
210
+
211
+ class AnthropicProvider(ModelProvider):
212
+ """Anthropic model provider."""
213
+
214
+ def __init__(self, api_key: Optional[str] = None):
215
+ """
216
+ Initialize Anthropic provider.
217
+
218
+ Args:
219
+ api_key: Anthropic API key (defaults to ANTHROPIC_API_KEY env var)
220
+ """
221
+ self.api_key = api_key or os.environ.get("ANTHROPIC_API_KEY")
222
+
223
+ if not self.api_key:
224
+ logging.warning("Anthropic API key not provided. Anthropic provider will not work.")
225
+
226
+ if anthropic is None:
227
+ logging.warning("Anthropic Python package not installed. Anthropic provider will not work.")
228
+
229
+ # Initialize client if possible
230
+ self.client = None
231
+ if anthropic is not None and self.api_key:
232
+ self.client = anthropic.Anthropic(api_key=self.api_key)
233
+
234
+ # Define models and capabilities
235
+ self.models = {
236
+ "claude-3-opus-20240229": [
237
+ ModelCapability.REASONING,
238
+ ModelCapability.CODE_GENERATION,
239
+ ModelCapability.FINANCE,
240
+ ModelCapability.RAG,
241
+ ModelCapability.TOOL_USE,
242
+ ModelCapability.JSON_MODE,
243
+ ],
244
+ "claude-3-sonnet-20240229": [
245
+ ModelCapability.REASONING,
246
+ ModelCapability.CODE_GENERATION,
247
+ ModelCapability.FINANCE,
248
+ ModelCapability.RAG,
249
+ ModelCapability.TOOL_USE,
250
+ ModelCapability.JSON_MODE,
251
+ ],
252
+ "claude-3-haiku-20240307": [
253
+ ModelCapability.CODE_GENERATION,
254
+ ModelCapability.RAG,
255
+ ModelCapability.JSON_MODE,
256
+ ],
257
+ "claude-2.1": [
258
+ ModelCapability.REASONING,
259
+ ModelCapability.CODE_GENERATION,
260
+ ModelCapability.FINANCE,
261
+ ModelCapability.RAG,
262
+ ],
263
+ "claude-2.0": [
264
+ ModelCapability.REASONING,
265
+ ModelCapability.CODE_GENERATION,
266
+ ModelCapability.FINANCE,
267
+ ModelCapability.RAG,
268
+ ],
269
+ "claude-instant-1.2": [
270
+ ModelCapability.CODE_GENERATION,
271
+ ModelCapability.RAG,
272
+ ],
273
+ }
274
+
275
+ def generate(self, prompt: str, **kwargs) -> str:
276
+ """
277
+ Generate text from prompt using Anthropic.
278
+
279
+ Args:
280
+ prompt: Input prompt
281
+ **kwargs: Additional parameters
282
+ - model: Model name (default: claude-3-sonnet-20240229)
283
+ - temperature: Temperature (default: 0.7)
284
+ - max_tokens: Maximum tokens (default: 2000)
285
+ - system_prompt: Optional system prompt
286
+
287
+ Returns:
288
+ Generated text
289
+ """
290
+ if self.client is None:
291
+ raise ValueError("Anthropic client not initialized. Provide a valid API key.")
292
+
293
+ # Extract parameters with defaults
294
+ model = kwargs.get("model", "claude-3-sonnet-20240229")
295
+ temperature = kwargs.get("temperature", 0.7)
296
+ max_tokens = kwargs.get("max_tokens", 2000)
297
+ system_prompt = kwargs.get("system_prompt", "")
298
+
299
+ try:
300
+ # Create parameters
301
+ params = {
302
+ "model": model,
303
+ "messages": [{"role": "user", "content": prompt}],
304
+ "temperature": temperature,
305
+ "max_tokens": max_tokens,
306
+ }
307
+
308
+ # Add system prompt if provided
309
+ if system_prompt:
310
+ params["system"] = system_prompt
311
+
312
+ # Make request
313
+ response = self.client.messages.create(**params)
314
+
315
+ # Extract text
316
+ return response.content[0].text
317
+
318
+ except Exception as e:
319
+ logging.error(f"Error generating text with Anthropic: {e}")
320
+ logging.error(traceback.format_exc())
321
+ raise
322
+
323
+ def get_available_models(self) -> List[str]:
324
+ """Get list of available models."""
325
+ return list(self.models.keys())
326
+
327
+ def get_model_capabilities(self, model_name: str) -> List[ModelCapability]:
328
+ """Get capabilities of a specific model."""
329
+ return self.models.get(model_name, [])
330
+
331
+
332
+ class GroqProvider(ModelProvider):
333
+ """Groq model provider."""
334
+
335
+ def __init__(self, api_key: Optional[str] = None):
336
+ """
337
+ Initialize Groq provider.
338
+
339
+ Args:
340
+ api_key: Groq API key (defaults to GROQ_API_KEY env var)
341
+ """
342
+ self.api_key = api_key or os.environ.get("GROQ_API_KEY")
343
+
344
+ if not self.api_key:
345
+ logging.warning("Groq API key not provided. Groq provider will not work.")
346
+
347
+ if groq is None:
348
+ logging.warning("Groq Python package not installed. Groq provider will not work.")
349
+
350
+ # Initialize client if possible
351
+ self.client = None
352
+ if groq is not None and self.api_key:
353
+ self.client = groq.Groq(api_key=self.api_key)
354
+
355
+ # Define models and capabilities
356
+ self.models = {
357
+ "llama2-70b-4096": [
358
+ ModelCapability.REASONING,
359
+ ModelCapability.CODE_GENERATION,
360
+ ModelCapability.FINANCE,
361
+ ModelCapability.RAG,
362
+ ],
363
+ "mixtral-8x7b-32768": [
364
+ ModelCapability.REASONING,
365
+ ModelCapability.CODE_GENERATION,
366
+ ModelCapability.FINANCE,
367
+ ModelCapability.RAG,
368
+ ],
369
+ "gemma-7b-it": [
370
+ ModelCapability.CODE_GENERATION,
371
+ ModelCapability.RAG,
372
+ ],
373
+ }
374
+
375
+ def generate(self, prompt: str, **kwargs) -> str:
376
+ """
377
+ Generate text from prompt using Groq.
378
+
379
+ Args:
380
+ prompt: Input prompt
381
+ **kwargs: Additional parameters
382
+ - model: Model name (default: mixtral-8x7b-32768)
383
+ - temperature: Temperature (default: 0.7)
384
+ - max_tokens: Maximum tokens (default: 2000)
385
+
386
+ Returns:
387
+ Generated text
388
+ """
389
+ if self.client is None:
390
+ raise ValueError("Groq client not initialized. Provide a valid API key.")
391
+
392
+ # Extract parameters with defaults
393
+ model = kwargs.get("model", "mixtral-8x7b-32768")
394
+ temperature = kwargs.get("temperature", 0.7)
395
+ max_tokens = kwargs.get("max_tokens", 2000)
396
+
397
+ try:
398
+ # Create parameters
399
+ params = {
400
+ "model": model,
401
+ "messages": [{"role": "user", "content": prompt}],
402
+ "temperature": temperature,
403
+ "max_tokens": max_tokens,
404
+ }
405
+
406
+ # Make request
407
+ response = self.client.chat.completions.create(**params)
408
+
409
+ # Extract text
410
+ return response.choices[0].message.content
411
+
412
+ except Exception as e:
413
+ logging.error(f"Error generating text with Groq: {e}")
414
+ logging.error(traceback.format_exc())
415
+ raise
416
+
417
+ def get_available_models(self) -> List[str]:
418
+ """Get list of available models."""
419
+ return list(self.models.keys())
420
+
421
+ def get_model_capabilities(self, model_name: str) -> List[ModelCapability]:
422
+ """Get capabilities of a specific model."""
423
+ return self.models.get(model_name, [])
424
+
425
+
426
+class OllamaProvider(ModelProvider):
+    """Ollama model provider for local models."""
+
+    def __init__(self, host: str = "http://localhost:11434"):
+        """
+        Initialize Ollama provider.
+
+        Args:
+            host: Ollama host address (default: http://localhost:11434)
+        """
+        self.host = host
+        self.client = None
+
+        if ollama is None:
+            logging.warning("Ollama Python package not installed. Ollama provider will not work.")
+
+        # Check if Ollama is available
+        self.available = False
+        if ollama is not None:
+            try:
+                # The ollama package targets a non-default host via a Client instance
+                self.client = ollama.Client(host=self.host)
+                # Probe the server directly so connection errors are not swallowed
+                self.client.list()
+                self.available = True
+            except Exception as e:
+                logging.warning(f"Ollama not available: {e}")
+
+        # Define models and capabilities
+        self.models = {
+            "llama3:latest": [
+                ModelCapability.REASONING,
+                ModelCapability.CODE_GENERATION,
+                ModelCapability.RAG,
+            ],
+            "mistral:latest": [
+                ModelCapability.REASONING,
+                ModelCapability.CODE_GENERATION,
+                ModelCapability.RAG,
+            ],
+            "codellama:latest": [
+                ModelCapability.CODE_GENERATION,
+                ModelCapability.RAG,
+            ],
+            "deepseek-coder:latest": [
+                ModelCapability.CODE_GENERATION,
+                ModelCapability.RAG,
+            ],
+            "wizardcoder:latest": [
+                ModelCapability.CODE_GENERATION,
+                ModelCapability.RAG,
+            ],
+            "gemma2:latest": [
+                ModelCapability.REASONING,
+                ModelCapability.CODE_GENERATION,
+                ModelCapability.RAG,
+            ],
+        }
+
+    def _list_models(self) -> List[str]:
+        """List available Ollama models."""
+        if self.client is None:
+            return []
+
+        try:
+            response = self.client.list()
+            # Older ollama releases key each entry by "name"; newer ones by "model"
+            return [m.get("name") or m.get("model") for m in response["models"]]
+        except Exception as e:
+            logging.error(f"Error listing Ollama models: {e}")
+            return []
+
+    def generate(self, prompt: str, **kwargs) -> str:
+        """
+        Generate text from prompt using Ollama.
+
+        Args:
+            prompt: Input prompt
+            **kwargs: Additional parameters
+                - model: Model name (default: mistral:latest)
+                - temperature: Temperature (default: 0.7)
+                - max_tokens: Maximum tokens (default: 2000)
+
+        Returns:
+            Generated text
+        """
+        if ollama is None:
+            raise ValueError("Ollama package not installed.")
+
+        if not self.available:
+            raise ValueError("Ollama not available.")
+
+        # Extract parameters with defaults
+        model = kwargs.get("model", "mistral:latest")
+        temperature = kwargs.get("temperature", 0.7)
+        max_tokens = kwargs.get("max_tokens", 2000)
+
+        try:
+            # Make request; sampling parameters go in the `options` dict
+            response = self.client.chat(
+                model=model,
+                messages=[{"role": "user", "content": prompt}],
+                options={
+                    "temperature": temperature,
+                    "num_predict": max_tokens,
+                },
+            )
+
+            # Extract text
+            return response["message"]["content"]
+
+        except Exception as e:
+            logging.error(f"Error generating text with Ollama: {e}")
+            logging.error(traceback.format_exc())
+            raise
+
+    def get_available_models(self) -> List[str]:
+        """Get list of available models."""
+        if not self.available:
+            return []
+
+        return self._list_models()
+
+    def get_model_capabilities(self, model_name: str) -> List[ModelCapability]:
+        """Get capabilities of a specific model."""
+        # For unknown models, assume basic capabilities
+        if model_name not in self.models:
+            return [ModelCapability.RAG]
+
+        return self.models.get(model_name, [])
+
+
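The capability-lookup convention above (unknown local models degrade to a minimal capability set rather than failing) can be sketched standalone. The `ModelCapability` enum and model table here are simplified stand-ins for the real classes, not the project's actual definitions:

```python
from enum import Enum
from typing import Dict, List


class ModelCapability(Enum):
    REASONING = "reasoning"
    CODE_GENERATION = "code_generation"
    RAG = "rag"


# Known models map to explicit capability lists (illustrative subset).
KNOWN_MODELS: Dict[str, List[ModelCapability]] = {
    "llama3:latest": [ModelCapability.REASONING, ModelCapability.CODE_GENERATION, ModelCapability.RAG],
    "codellama:latest": [ModelCapability.CODE_GENERATION, ModelCapability.RAG],
}


def get_model_capabilities(model_name: str) -> List[ModelCapability]:
    """Unknown models are assumed to support only basic RAG usage."""
    if model_name not in KNOWN_MODELS:
        return [ModelCapability.RAG]
    return KNOWN_MODELS[model_name]
```

This keeps capability-based routing total: any locally pulled model that is not in the table can still be selected for retrieval-style prompts.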
+class DeepSeekProvider(ModelProvider):
+    """DeepSeek model provider using OpenAI-compatible API."""
+
+    def __init__(self, api_key: Optional[str] = None, api_base: str = "https://api.deepseek.com/v1"):
+        """
+        Initialize DeepSeek provider.
+
+        Args:
+            api_key: DeepSeek API key (defaults to DEEPSEEK_API_KEY env var)
+            api_base: DeepSeek API base URL
+        """
+        self.api_key = api_key or os.environ.get("DEEPSEEK_API_KEY")
+        self.api_base = api_base
+
+        if not self.api_key:
+            logging.warning("DeepSeek API key not provided. DeepSeek provider will not work.")
+
+        if openai is None:
+            logging.warning("OpenAI Python package not installed. DeepSeek provider will not work.")
+
+        # Initialize client if possible
+        self.client = None
+        if openai is not None and self.api_key:
+            self.client = openai.OpenAI(
+                api_key=self.api_key,
+                base_url=self.api_base,
+            )
+
+        # Define models and capabilities
+        self.models = {
+            "deepseek-chat": [
+                ModelCapability.REASONING,
+                ModelCapability.CODE_GENERATION,
+                ModelCapability.FINANCE,
+                ModelCapability.RAG,
+            ],
+            "deepseek-coder": [
+                ModelCapability.CODE_GENERATION,
+                ModelCapability.RAG,
+            ],
+        }
+
+    def generate(self, prompt: str, **kwargs) -> str:
+        """
+        Generate text from prompt using DeepSeek.
+
+        Args:
+            prompt: Input prompt
+            **kwargs: Additional parameters
+                - model: Model name (default: deepseek-chat)
+                - temperature: Temperature (default: 0.7)
+                - max_tokens: Maximum tokens (default: 2000)
+
+        Returns:
+            Generated text
+        """
+        if self.client is None:
+            raise ValueError("DeepSeek client not initialized. Provide a valid API key.")
+
+        # Extract parameters with defaults
+        model = kwargs.get("model", "deepseek-chat")
+        temperature = kwargs.get("temperature", 0.7)
+        max_tokens = kwargs.get("max_tokens", 2000)
+
+        try:
+            # Create messages
+            messages = [{"role": "user", "content": prompt}]
+
+            # Make request
+            response = self.client.chat.completions.create(
+                model=model,
+                messages=messages,
+                temperature=temperature,
+                max_tokens=max_tokens,
+            )
+
+            # Extract text
+            return response.choices[0].message.content
+
+        except Exception as e:
+            logging.error(f"Error generating text with DeepSeek: {e}")
+            logging.error(traceback.format_exc())
+            raise
+
+    def get_available_models(self) -> List[str]:
+        """Get list of available models."""
+        return list(self.models.keys())
+
+    def get_model_capabilities(self, model_name: str) -> List[ModelCapability]:
+        """Get capabilities of a specific model."""
+        return self.models.get(model_name, [])
+
+
+class ModelRouter:
+    """
+    Model router for multi-provider LLM orchestration.
+
+    The ModelRouter provides:
+    - Unified interface for multiple LLM providers
+    - Dynamic provider selection based on capabilities
+    - Fallback chains for reliability
+    - Prompt template management
+    - Attribution tracing for interpretability
+    """
+
+    def __init__(
+        self,
+        provider: str = "anthropic",
+        model: Optional[str] = None,
+        fallback_providers: Optional[List[str]] = None,
+        openai_api_key: Optional[str] = None,
+        anthropic_api_key: Optional[str] = None,
+        groq_api_key: Optional[str] = None,
+    ):
+        """
+        Initialize model router.
+
+        Args:
+            provider: Default provider
+            model: Default model (provider-specific)
+            fallback_providers: List of fallback providers
+            openai_api_key: OpenAI API key
+            anthropic_api_key: Anthropic API key
+            groq_api_key: Groq API key
+        """
+        self.default_provider = provider
+        self.default_model = model
+        self.fallback_providers = fallback_providers or []
+
+        # Track usage
+        self.usage_stats = {
+            "total_calls": 0,
+            "total_tokens": 0,
+            "provider_calls": {},
+            "model_calls": {},
+            "errors": {},
+        }
+
+        # Initialize providers
+        self.providers = {}
+
+        # Initialize OpenAI
+        try:
+            self.providers["openai"] = OpenAIProvider(api_key=openai_api_key)
+
+            # Set default model if not specified
+            if provider == "openai" and model is None:
+                self.default_model = "gpt-4-turbo-preview"
+        except Exception as e:
+            logging.warning(f"Failed to initialize OpenAI provider: {e}")
+
+        # Initialize Anthropic
+        try:
+            self.providers["anthropic"] = AnthropicProvider(api_key=anthropic_api_key)
+
+            # Set default model if not specified
+            if provider == "anthropic" and model is None:
+                self.default_model = "claude-3-sonnet-20240229"
+        except Exception as e:
+            logging.warning(f"Failed to initialize Anthropic provider: {e}")
+
+        # Initialize Groq
+        try:
+            self.providers["groq"] = GroqProvider(api_key=groq_api_key)
+
+            # Set default model if not specified
+            if provider == "groq" and model is None:
+                self.default_model = "mixtral-8x7b-32768"
+        except Exception as e:
+            logging.warning(f"Failed to initialize Groq provider: {e}")
+
+        # Initialize Ollama
+        try:
+            self.providers["ollama"] = OllamaProvider()
+
+            # Set default model if not specified
+            if provider == "ollama" and model is None:
+                self.default_model = "mistral:latest"
+        except Exception as e:
+            logging.warning(f"Failed to initialize Ollama provider: {e}")
+
+        # Initialize DeepSeek
+        try:
+            self.providers["deepseek"] = DeepSeekProvider()
+
+            # Set default model if not specified
+            if provider == "deepseek" and model is None:
+                self.default_model = "deepseek-chat"
+        except Exception as e:
+            logging.warning(f"Failed to initialize DeepSeek provider: {e}")
+
+        # Verify default provider is available
+        if self.default_provider not in self.providers:
+            available_providers = list(self.providers.keys())
+            if available_providers:
+                logging.warning(f"Default provider '{self.default_provider}' not available. "
+                                f"Using '{available_providers[0]}' instead.")
+                self.default_provider = available_providers[0]
+            else:
+                raise ValueError("No LLM providers available. Check API keys and dependencies.")
+
+    def generate(self, prompt: str, provider: Optional[str] = None,
+                 model: Optional[str] = None, **kwargs) -> str:
+        """
+        Generate text from prompt.
+
+        Args:
+            prompt: Input prompt
+            provider: Provider to use (default is instance default)
+            model: Model to use (default is instance default)
+            **kwargs: Additional provider-specific parameters
+
+        Returns:
+            Generated text
+        """
+        # Use default provider if not specified
+        provider = provider or self.default_provider
+
+        # Use default model if not specified
+        model = model or self.default_model
+
+        # Update usage stats
+        self.usage_stats["total_calls"] += 1
+
+        # Update provider stats
+        if provider not in self.usage_stats["provider_calls"]:
+            self.usage_stats["provider_calls"][provider] = 0
+        self.usage_stats["provider_calls"][provider] += 1
+
+        # Update model stats
+        model_key = f"{provider}:{model}"
+        if model_key not in self.usage_stats["model_calls"]:
+            self.usage_stats["model_calls"][model_key] = 0
+        self.usage_stats["model_calls"][model_key] += 1
+
+        # Check if provider is available
+        if provider not in self.providers:
+            # Try fallback providers
+            for fallback_provider in self.fallback_providers:
+                if fallback_provider in self.providers:
+                    logging.warning(f"Provider '{provider}' not available. "
+                                    f"Using fallback provider '{fallback_provider}'.")
+                    return self.generate(prompt, provider=fallback_provider, model=model, **kwargs)
+
+            # No fallback providers available
+            raise ValueError(f"Provider '{provider}' not available and no fallback providers available.")
+
+        try:
+            # Get provider
+            provider_instance = self.providers[provider]
+
+            # Add model to kwargs
+            if model:
+                kwargs["model"] = model
+
+            # Generate text
+            start_time = time.time()
+            response = provider_instance.generate(prompt, **kwargs)
+            end_time = time.time()
+
+            # Log generation time
+            logging.debug(f"Generated text with {provider}:{model} in {end_time - start_time:.2f} seconds.")
+
+            return response
+
+        except Exception as e:
+            # Update error stats
+            error_key = str(type(e).__name__)
+            if error_key not in self.usage_stats["errors"]:
+                self.usage_stats["errors"][error_key] = 0
+            self.usage_stats["errors"][error_key] += 1
+
+            # Try fallback providers
+            for fallback_provider in self.fallback_providers:
+                if fallback_provider in self.providers:
+                    logging.warning(f"Error with provider '{provider}': {e}. "
+                                    f"Using fallback provider '{fallback_provider}'.")
+                    return self.generate(prompt, provider=fallback_provider, model=model, **kwargs)
+
+            # No fallback providers available
+            logging.error(f"Error generating text with {provider}:{model}: {e}")
+            logging.error(traceback.format_exc())
+            raise
+
+    async def generate_async(self, prompt: str, provider: Optional[str] = None,
+                             model: Optional[str] = None, **kwargs) -> str:
+        """
+        Generate text from prompt asynchronously.
+
+        Args:
+            prompt: Input prompt
+            provider: Provider to use (default is instance default)
+            model: Model to use (default is instance default)
+            **kwargs: Additional provider-specific parameters
+
+        Returns:
+            Generated text
+        """
+        # Run the synchronous implementation in the default executor
+        loop = asyncio.get_running_loop()
+        return await loop.run_in_executor(
+            None,
+            lambda: self.generate(prompt, provider, model, **kwargs)
+        )
+
+    def get_available_providers(self) -> List[str]:
+        """Get list of available providers."""
+        return list(self.providers.keys())
+
+    def get_available_models(self, provider: Optional[str] = None) -> Dict[str, List[str]]:
+        """
+        Get available models for all providers or a specific provider.
+
+        Args:
+            provider: Optional provider to get models for
+
+        Returns:
+            Dictionary mapping providers to lists of models
+        """
+        if provider:
+            if provider not in self.providers:
+                return {}
+
+            return {provider: self.providers[provider].get_available_models()}
+
+        # Get models for all providers
+        models = {}
+        for provider_name, provider_instance in self.providers.items():
+            models[provider_name] = provider_instance.get_available_models()
+
+        return models
+
+    def get_model_capabilities(self, provider: str, model: str) -> List[ModelCapability]:
+        """
+        Get capabilities of a specific model.
+
+        Args:
+            provider: Provider name
+            model: Model name
+
+        Returns:
+            List of model capabilities
+        """
+        if provider not in self.providers:
+            return []
+
+        return self.providers[provider].get_model_capabilities(model)
+
+    def find_models_with_capabilities(self, capabilities: List[ModelCapability]) -> List[Tuple[str, str]]:
+        """
+        Find models that have all specified capabilities.
+
+        Args:
+            capabilities: List of required capabilities
+
+        Returns:
+            List of (provider, model) tuples that have all capabilities
+        """
+        matching_models = []
+
+        for provider_name, provider_instance in self.providers.items():
+            for model in provider_instance.get_available_models():
+                model_capabilities = provider_instance.get_model_capabilities(model)
+
+                # Check if model has all required capabilities
+                if all(capability in model_capabilities for capability in capabilities):
+                    matching_models.append((provider_name, model))
+
+        return matching_models
+
+    def get_usage_stats(self) -> Dict[str, Any]:
+        """Get usage statistics."""
+        return self.usage_stats.copy()
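The router's core error-handling pattern (try the requested provider, then walk the fallback list in order) can be exercised without any real API clients. The functions below are hypothetical stand-ins that mirror the structure of `ModelRouter.generate`, not part of the codebase:

```python
from typing import Callable, Dict, List


def generate_with_fallback(
    providers: Dict[str, Callable[[str], str]],
    prompt: str,
    provider: str,
    fallbacks: List[str],
) -> str:
    """On failure or missing provider, try each fallback in order; re-raise if all fail."""
    chain = [provider] + [p for p in fallbacks if p != provider]
    last_error: Exception = ValueError("No providers available.")
    for name in chain:
        fn = providers.get(name)
        if fn is None:
            # Provider not initialized (e.g. missing API key); skip to the next one
            continue
        try:
            return fn(prompt)
        except Exception as e:
            # Provider failed at call time; record and fall through
            last_error = e
    raise last_error


def flaky(prompt: str) -> str:
    raise RuntimeError("rate limited")


def stable(prompt: str) -> str:
    return f"echo: {prompt}"
```

With `flaky` as the default and `stable` as a fallback, `generate_with_fallback({"groq": flaky, "openai": stable}, "hi", "groq", ["openai"])` returns `"echo: hi"` — the same graceful degradation the router performs across real providers.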
src/main.py ADDED
@@ -0,0 +1,685 @@
+#!/usr/bin/env python
+# shell.echo.seed = "🜏-mirror.activated-{timestamp}-{entropy.hash}"
+"""
+AGI-HEDGE-FUND - Multi-agent recursive market cognition framework
+
+This script serves as the entry point for the AGI-HEDGE-FUND system, providing
+a command-line interface for running the multi-agent market cognition platform.
+
+Usage:
+    python -m src.main --mode backtest --start-date 2022-01-01 --end-date 2022-12-31
+    python -m src.main --mode live --data-source yahoo --show-trace
+    python -m src.main --mode analysis --portfolio-file portfolio.json --consensus-graph
+
+Internal Note: This script encodes the system's entry point while exposing the
+recursive cognitive architecture through interpretability flags.
+"""
+
+import argparse
+import datetime
+import json
+import logging
+import os
+import sys
+from typing import Dict, List, Any, Optional
+
+# Core components
+from agents.base import BaseAgent
+from agents.graham import GrahamAgent
+from agents.dalio import DalioAgent
+from agents.wood import WoodAgent
+from agents.ackman import AckmanAgent
+from agents.simons import SimonsAgent
+from agents.taleb import TalebAgent
+from portfolio.manager import PortfolioManager
+from market.environment import MarketEnvironment
+from llm.router import ModelRouter
+from utils.diagnostics import TracingTools, TracingMode, ShellDiagnostics, ShellFailureMap
+
+
+# Configure logging
+logging.basicConfig(
+    level=logging.INFO,
+    format='%(asctime)s - %(name)s - %(levelname)s - %(message)s',
+    handlers=[
+        logging.StreamHandler(sys.stdout)
+    ]
+)
+
+logger = logging.getLogger("agi-hedge-fund")
+
+
+def parse_args():
+    """Parse command line arguments."""
+    parser = argparse.ArgumentParser(description='AGI-HEDGE-FUND - Multi-agent recursive market cognition framework')
+
+    # Operation mode
+    parser.add_argument('--mode', type=str, choices=['backtest', 'live', 'analysis'], default='backtest',
+                        help='Operation mode: backtest, live, or analysis')
+
+    # Date range for backtesting
+    parser.add_argument('--start-date', type=str, default='2020-01-01',
+                        help='Start date for backtesting (YYYY-MM-DD)')
+    parser.add_argument('--end-date', type=str, default='2023-01-01',
+                        help='End date for backtesting (YYYY-MM-DD)')
+
+    # Portfolio parameters
+    parser.add_argument('--initial-capital', type=float, default=100000.0,
+                        help='Initial capital amount')
+    parser.add_argument('--tickers', type=str, nargs='+', default=['AAPL', 'MSFT', 'GOOGL', 'AMZN', 'TSLA'],
+                        help='Stock tickers to analyze')
+    parser.add_argument('--rebalance-frequency', type=str, choices=['daily', 'weekly', 'monthly'], default='weekly',
+                        help='Portfolio rebalance frequency')
+
+    # Data source
+    parser.add_argument('--data-source', type=str, choices=['yahoo', 'polygon', 'alpha_vantage'], default='yahoo',
+                        help='Market data source')
+    parser.add_argument('--data-path', type=str, default='data',
+                        help='Path to data directory')
+
+    # Agent configuration
+    parser.add_argument('--agents', type=str, nargs='+',
+                        default=['graham', 'dalio', 'wood', 'ackman', 'simons', 'taleb'],
+                        help='Agents to use')
+    parser.add_argument('--reasoning-depth', type=int, default=3,
+                        help='Agent reasoning depth')
+    parser.add_argument('--arbitration-depth', type=int, default=2,
+                        help='Portfolio meta-agent arbitration depth')
+
+    # LLM provider
+    parser.add_argument('--llm-provider', type=str, choices=['anthropic', 'openai', 'groq', 'ollama', 'deepseek'],
+                        default='anthropic',
+                        help='LLM provider')
+
+    # Model configuration
+    parser.add_argument('--model', type=str, default=None,
+                        help='Specific LLM model to use')
+    parser.add_argument('--fallback-providers', type=str, nargs='+',
+                        default=['openai', 'groq'],
+                        help='Fallback LLM providers')
+
+    # Output and visualization
+    parser.add_argument('--output-dir', type=str, default='output',
+                        help='Directory for output files')
+    parser.add_argument('--portfolio-file', type=str, default=None,
+                        help='Portfolio state file for analysis mode')
+
+    # Diagnostic flags
+    parser.add_argument('--show-trace', action='store_true',
+                        help='Show reasoning traces')
+    parser.add_argument('--consensus-graph', action='store_true',
+                        help='Generate consensus graph visualization')
+    parser.add_argument('--agent-conflict-map', action='store_true',
+                        help='Generate agent conflict map visualization')
+    parser.add_argument('--attribution-report', action='store_true',
+                        help='Generate attribution report')
+    parser.add_argument('--shell-failure-map', action='store_true',
+                        help='Show shell failure map')
+    parser.add_argument('--trace-level', type=str,
+                        choices=['disabled', 'minimal', 'detailed', 'comprehensive', 'symbolic'],
+                        default='minimal',
+                        help='Trace level for diagnostics')
+
+    # Advanced options
+    parser.add_argument('--max-position-size', type=float, default=0.2,
+                        help='Maximum position size as fraction of portfolio')
+    parser.add_argument('--min-position-size', type=float, default=0.01,
+                        help='Minimum position size as fraction of portfolio')
+    parser.add_argument('--risk-budget', type=float, default=0.5,
+                        help='Risk budget (0-1)')
+    parser.add_argument('--memory-decay', type=float, default=0.2,
+                        help='Memory decay rate for agents')
+
+    # Parse arguments
+    return parser.parse_args()
+
+
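The `--start-date`/`--end-date` values parsed above are plain strings that later code interprets as ISO dates. A minimal sketch of that contract (the helper name is illustrative, not part of the codebase):

```python
import datetime


def parse_iso_date(s: str) -> datetime.date:
    """Backtest dates are expected in YYYY-MM-DD, matching --start-date/--end-date."""
    # strptime raises ValueError on malformed input, surfacing bad CLI arguments early
    return datetime.datetime.strptime(s, "%Y-%m-%d").date()
```

Validating up front like this turns a malformed `--start-date` into an immediate `ValueError` instead of a failure deep inside the simulation.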
+def create_agents(args) -> List[BaseAgent]:
+    """
+    Create agent instances based on command-line arguments.
+
+    Args:
+        args: Command-line arguments
+
+    Returns:
+        List of agent instances
+    """
+    # Initialize model router
+    model_router = ModelRouter(
+        provider=args.llm_provider,
+        model=args.model,
+        fallback_providers=args.fallback_providers,
+    )
+
+    # Get trace mode
+    trace_mode = TracingMode(args.trace_level)
+
+    # Map agent type names to their classes
+    agent_classes = {
+        "graham": GrahamAgent,
+        "dalio": DalioAgent,
+        "wood": WoodAgent,
+        "ackman": AckmanAgent,
+        "simons": SimonsAgent,
+        "taleb": TalebAgent,
+    }
+
+    # Create agents
+    agents = []
+
+    for agent_type in args.agents:
+        agent_class = agent_classes.get(agent_type.lower())
+
+        if agent_class is None:
+            logger.warning(f"Unknown agent type: {agent_type}")
+            continue
+
+        agents.append(agent_class(
+            reasoning_depth=args.reasoning_depth,
+            memory_decay=args.memory_decay,
+            initial_capital=args.initial_capital,
+            model_provider=args.llm_provider,
+            model_name=args.model,
+            trace_enabled=args.show_trace,
+        ))
+
+    logger.info(f"Created {len(agents)} agents: {', '.join(agent.name for agent in agents)}")
+
+    return agents
+
+
+def create_portfolio_manager(agents: List[BaseAgent], args) -> PortfolioManager:
+    """
+    Create portfolio manager instance.
+
+    Args:
+        agents: List of agent instances
+        args: Command-line arguments
+
+    Returns:
+        Portfolio manager instance
+    """
+    # Create portfolio manager
+    portfolio_manager = PortfolioManager(
+        agents=agents,
+        initial_capital=args.initial_capital,
+        arbitration_depth=args.arbitration_depth,
+        max_position_size=args.max_position_size,
+        min_position_size=args.min_position_size,
+        consensus_threshold=0.6,
+        show_trace=args.show_trace,
+        risk_budget=args.risk_budget,
+    )
+
+    logger.info(f"Created portfolio manager with {len(agents)} agents")
+
+    return portfolio_manager
+
+
+def create_market_environment(args) -> MarketEnvironment:
+    """
+    Create market environment instance.
+
+    Args:
+        args: Command-line arguments
+
+    Returns:
+        Market environment instance
+    """
+    # Create market environment
+    market_env = MarketEnvironment(
+        data_source=args.data_source,
+        tickers=args.tickers,
+        data_path=args.data_path,
+        start_date=args.start_date if args.mode == "backtest" else None,
+        end_date=args.end_date if args.mode == "backtest" else None,
+    )
+
+    logger.info(f"Created market environment with {len(args.tickers)} tickers")
+
+    return market_env
+
+
+def run_backtest(portfolio_manager: PortfolioManager, market_env: MarketEnvironment, args) -> Dict[str, Any]:
+    """
+    Run backtesting simulation.
+
+    Args:
+        portfolio_manager: Portfolio manager instance
+        market_env: Market environment instance
+        args: Command-line arguments
+
+    Returns:
+        Backtest results
+    """
+    # Parse dates
+    start_date = datetime.datetime.strptime(args.start_date, "%Y-%m-%d").date()
+    end_date = datetime.datetime.strptime(args.end_date, "%Y-%m-%d").date()
+
+    # Set up rebalance frequency
+    if args.rebalance_frequency == "daily":
+        rebalance_days = 1
+    elif args.rebalance_frequency == "weekly":
+        rebalance_days = 7
+    else:  # monthly
+        rebalance_days = 30
+
+    # Run simulation
+    results = portfolio_manager.run_simulation(
+        start_date=args.start_date,
+        end_date=args.end_date,
+        data_source=args.data_source,
+        rebalance_frequency=args.rebalance_frequency,
+    )
+
+    logger.info(f"Completed backtest from {args.start_date} to {args.end_date}")
+
+    # Save results
+    os.makedirs(args.output_dir, exist_ok=True)
+    results_file = os.path.join(args.output_dir, f"backtest_results_{start_date}_{end_date}.json")
+
+    with open(results_file, 'w') as f:
+        json.dump(results, f, indent=2, default=str)
+
+    logger.info(f"Saved backtest results to {results_file}")
+
+    # Generate visualizations if requested
+    if args.consensus_graph:
+        consensus_graph = portfolio_manager.visualize_consensus_graph()
+        consensus_file = os.path.join(args.output_dir, f"consensus_graph_{start_date}_{end_date}.json")
+
+        with open(consensus_file, 'w') as f:
+            json.dump(consensus_graph, f, indent=2, default=str)
+
+        logger.info(f"Saved consensus graph to {consensus_file}")
+
+    if args.agent_conflict_map:
+        conflict_map = portfolio_manager.visualize_agent_conflict_map()
+        conflict_file = os.path.join(args.output_dir, f"conflict_map_{start_date}_{end_date}.json")
+
+        with open(conflict_file, 'w') as f:
+            json.dump(conflict_map, f, indent=2, default=str)
+
+        logger.info(f"Saved agent conflict map to {conflict_file}")
+
+    if args.attribution_report:
+        # Get all signals from the simulation
+        all_signals = []
+        for trade_batch in results.get("trades", []):
+            for trade in trade_batch:
+                if "signal" in trade:
+                    all_signals.append(trade["signal"])
+
+        # Create attribution report
+        tracer = TracingTools(agent_id="portfolio", agent_name="Portfolio")
+        attribution_report = tracer.generate_attribution_report(all_signals)
+        report_file = os.path.join(args.output_dir, f"attribution_report_{start_date}_{end_date}.json")
+
+        with open(report_file, 'w') as f:
+            json.dump(attribution_report, f, indent=2, default=str)
+
+        logger.info(f"Saved attribution report to {report_file}")
+
+    # Save final portfolio state
+    portfolio_state = portfolio_manager.get_portfolio_state()
+    portfolio_file = os.path.join(args.output_dir, f"portfolio_state_{end_date}.json")
+
+    with open(portfolio_file, 'w') as f:
+        json.dump(portfolio_state, f, indent=2, default=str)
+
+    logger.info(f"Saved portfolio state to {portfolio_file}")
+
+    return results
+
+
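`run_backtest` above maps the rebalance frequency string onto a day interval (`monthly` approximated as 30 days). As a standalone sketch of that mapping, with the dict-based form replacing the if/elif chain:

```python
def rebalance_interval_days(frequency: str) -> int:
    """Map a rebalance frequency name to a day interval, as in run_backtest."""
    intervals = {"daily": 1, "weekly": 7, "monthly": 30}  # monthly ~ 30 days
    if frequency not in intervals:
        raise ValueError(f"Unknown rebalance frequency: {frequency}")
    return intervals[frequency]
```

Unlike the if/elif version, this rejects unexpected values instead of silently treating them as monthly; argparse's `choices` already guards the CLI path, but the explicit check protects programmatic callers.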
+ def run_live_analysis(portfolio_manager: PortfolioManager, market_env: MarketEnvironment, args) -> Dict[str, Any]:
+     """
+     Run live market analysis.
+
+     Args:
+         portfolio_manager: Portfolio manager instance
+         market_env: Market environment instance
+         args: Command-line arguments
+
+     Returns:
+         Analysis results
+     """
+     # Get current market data
+     market_data = market_env.get_current_market_data()
+
+     # Process market data through portfolio manager
+     analysis_results = portfolio_manager.process_market_data(market_data)
+
+     logger.info(f"Completed live market analysis for {len(args.tickers)} tickers")
+
+     # Save results
+     os.makedirs(args.output_dir, exist_ok=True)
+     timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
+     results_file = os.path.join(args.output_dir, f"live_analysis_{timestamp}.json")
+
+     with open(results_file, 'w') as f:
+         json.dump(analysis_results, f, indent=2, default=str)
+
+     logger.info(f"Saved live analysis results to {results_file}")
+
+     # Generate visualizations if requested
+     if args.consensus_graph:
+         consensus_graph = portfolio_manager.visualize_consensus_graph()
+         consensus_file = os.path.join(args.output_dir, f"consensus_graph_{timestamp}.json")
+
+         with open(consensus_file, 'w') as f:
+             json.dump(consensus_graph, f, indent=2, default=str)
+
+         logger.info(f"Saved consensus graph to {consensus_file}")
+
+     if args.agent_conflict_map:
+         conflict_map = portfolio_manager.visualize_agent_conflict_map()
+         conflict_file = os.path.join(args.output_dir, f"conflict_map_{timestamp}.json")
+
+         with open(conflict_file, 'w') as f:
+             json.dump(conflict_map, f, indent=2, default=str)
+
+         logger.info(f"Saved agent conflict map to {conflict_file}")
+
+     # Execute trades if there are consensus decisions
+     consensus_decisions = analysis_results.get("meta_agent", {}).get("consensus_decisions", [])
+
+     if consensus_decisions:
+         trade_results = portfolio_manager.execute_trades(consensus_decisions)
+
+         # Save trade results
+         trades_file = os.path.join(args.output_dir, f"trades_{timestamp}.json")
+
+         with open(trades_file, 'w') as f:
+             json.dump(trade_results, f, indent=2, default=str)
+
+         logger.info(f"Executed {len(trade_results.get('trades', []))} trades and saved results to {trades_file}")
+
+     # Save portfolio state
+     portfolio_state = portfolio_manager.get_portfolio_state()
+     portfolio_file = os.path.join(args.output_dir, f"portfolio_state_{timestamp}.json")
+
+     with open(portfolio_file, 'w') as f:
+         json.dump(portfolio_state, f, indent=2, default=str)
+
+     logger.info(f"Saved portfolio state to {portfolio_file}")
+
+     return analysis_results
+
+
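Every save path in these functions serializes with `json.dump(..., indent=2, default=str)` so that values the JSON encoder cannot handle natively, such as `datetime` objects, fall back to their string form instead of raising `TypeError`. A minimal standalone sketch of that behavior (the dictionary shape is illustrative, not the project's result schema):

```python
import datetime
import json

# Result dictionaries carry datetime objects, which json cannot encode
# natively; default=str converts any non-serializable value at dump time.
snapshot = {"timestamp": datetime.datetime(2025, 1, 2, 3, 4, 5), "value": 101.5}

encoded = json.dumps(snapshot, indent=2, default=str)

# Round-trip: the timestamp comes back as a plain string.
decoded = json.loads(encoded)
print(decoded["timestamp"])  # 2025-01-02 03:04:05
```

The trade-off is that round-tripped files need their own parsing step (e.g. `datetime.datetime.fromisoformat`) if the timestamps are ever read back as dates.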
+ def run_portfolio_analysis(args) -> Dict[str, Any]:
+     """
+     Run analysis on existing portfolio.
+
+     Args:
+         args: Command-line arguments
+
+     Returns:
+         Analysis results
+     """
+     # Check if portfolio file exists
+     if not args.portfolio_file or not os.path.exists(args.portfolio_file):
+         logger.error(f"Portfolio file not found: {args.portfolio_file}")
+         sys.exit(1)
+
+     # Load portfolio state
+     with open(args.portfolio_file, 'r') as f:
+         portfolio_state = json.load(f)  # NOTE: currently informational only; not re-applied to the manager
+
+     # Create agents
+     agents = create_agents(args)
+
+     # Create portfolio manager
+     portfolio_manager = create_portfolio_manager(agents, args)
+
+     # Create market environment
+     market_env = create_market_environment(args)
+
+     # Get current market data
+     market_data = market_env.get_current_market_data()
+
+     # Process market data through portfolio manager
+     analysis_results = portfolio_manager.process_market_data(market_data)
+
+     logger.info("Completed portfolio analysis")
+
+     # Save results
+     os.makedirs(args.output_dir, exist_ok=True)
+     timestamp = datetime.datetime.now().strftime("%Y%m%d_%H%M%S")
+     results_file = os.path.join(args.output_dir, f"portfolio_analysis_{timestamp}.json")
+
+     with open(results_file, 'w') as f:
+         json.dump(analysis_results, f, indent=2, default=str)
+
+     logger.info(f"Saved portfolio analysis results to {results_file}")
+
+     # Generate visualizations if requested
+     if args.consensus_graph:
+         consensus_graph = portfolio_manager.visualize_consensus_graph()
+         consensus_file = os.path.join(args.output_dir, f"consensus_graph_{timestamp}.json")
+
+         with open(consensus_file, 'w') as f:
+             json.dump(consensus_graph, f, indent=2, default=str)
+
+         logger.info(f"Saved consensus graph to {consensus_file}")
+
+     if args.agent_conflict_map:
+         conflict_map = portfolio_manager.visualize_agent_conflict_map()
+         conflict_file = os.path.join(args.output_dir, f"conflict_map_{timestamp}.json")
+
+         with open(conflict_file, 'w') as f:
+             json.dump(conflict_map, f, indent=2, default=str)
+
+         logger.info(f"Saved agent conflict map to {conflict_file}")
+
+     if args.attribution_report:
+         # Get agent performance
+         agent_performance = portfolio_manager.get_agent_performance()
+
+         # Create attribution report
+         attribution_file = os.path.join(args.output_dir, f"agent_performance_{timestamp}.json")
+
+         with open(attribution_file, 'w') as f:
+             json.dump(agent_performance, f, indent=2, default=str)
+
+         logger.info(f"Saved agent performance report to {attribution_file}")
+
+     if args.shell_failure_map:
+         # Create shell diagnostics
+         shell_diagnostics = ShellDiagnostics(
+             agent_id="portfolio",
+             agent_name="Portfolio",
+             tracing_tools=TracingTools(
+                 agent_id="portfolio",
+                 agent_name="Portfolio",
+                 tracing_mode=TracingMode(args.trace_level),
+             )
+         )
+
+         # Create shell failure map
+         failure_map = ShellFailureMap()
+
+         # Analyze each agent's state for shell failures
+         for agent in agents:
+             agent_state = agent.get_state_report()
+
+             # Simulate shell failures based on agent state
+             for shell_pattern in [
+                 "NULL_FEATURE",
+                 "CIRCUIT_FRAGMENT",
+                 "META_FAILURE",
+                 "RECURSIVE_FRACTURE",
+                 "ETHICAL_INVERSION",
+             ]:
+                 try:
+                     from utils.diagnostics import ShellPattern
+                     pattern = getattr(ShellPattern, shell_pattern)
+
+                     # Simulate failure
+                     failure_data = shell_diagnostics.simulate_shell_failure(
+                         shell_pattern=pattern,
+                         context=agent_state,
+                     )
+
+                     # Add to failure map
+                     failure_map.add_failure(
+                         agent_id=agent.id,
+                         agent_name=agent.name,
+                         shell_pattern=pattern,
+                         failure_data=failure_data,
+                     )
+                 except Exception as e:
+                     logger.error(f"Error simulating shell failure: {e}")
+
+         # Generate visualization
+         failure_viz = failure_map.generate_failure_map_visualization()
+         failure_file = os.path.join(args.output_dir, f"shell_failure_map_{timestamp}.json")
+
+         with open(failure_file, 'w') as f:
+             json.dump(failure_viz, f, indent=2, default=str)
+
+         logger.info(f"Saved shell failure map to {failure_file}")
+
+     return analysis_results
+
+
+ def main():
+     """Main entry point."""
+     # Parse arguments
+     args = parse_args()
+
+     # Create output directory
+     os.makedirs(args.output_dir, exist_ok=True)
+
+     # Run in appropriate mode
+     if args.mode == "backtest":
+         # Create agents
+         agents = create_agents(args)
+
+         # Create portfolio manager
+         portfolio_manager = create_portfolio_manager(agents, args)
+
+         # Create market environment
+         market_env = create_market_environment(args)
+
+         # Run backtest
+         results = run_backtest(portfolio_manager, market_env, args)
+
+         # Print summary
+         print("\n=== Backtest Results ===")
+         print(f"Start Date: {args.start_date}")
+         print(f"End Date: {args.end_date}")
+         print(f"Initial Capital: ${args.initial_capital:.2f}")
+         print(f"Final Portfolio Value: ${results.get('final_value', 0):.2f}")
+
+         total_return = (results.get('final_value', 0) / args.initial_capital) - 1
+         print(f"Total Return: {total_return:.2%}")
+         print(f"Number of Trades: {sum(len(batch) for batch in results.get('trades', []))}")
+         print(f"Results saved to: {args.output_dir}")
+
+     elif args.mode == "live":
+         # Create agents
+         agents = create_agents(args)
+
+         # Create portfolio manager
+         portfolio_manager = create_portfolio_manager(agents, args)
+
+         # Create market environment
+         market_env = create_market_environment(args)
+
+         # Run live analysis
+         results = run_live_analysis(portfolio_manager, market_env, args)
+
+         # Print summary
+         print("\n=== Live Analysis Results ===")
+         print(f"Analysis Time: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
+         print(f"Tickers Analyzed: {', '.join(args.tickers)}")
+
+         # Print consensus decisions
+         consensus_decisions = results.get("meta_agent", {}).get("consensus_decisions", [])
+         if consensus_decisions:
+             print("\nConsensus Decisions:")
+             for decision in consensus_decisions:
+                 ticker = decision.get("ticker", "")
+                 action = decision.get("action", "")
+                 confidence = decision.get("confidence", 0)
+                 quantity = decision.get("quantity", 0)
+
+                 print(f"  {action.upper()} {quantity} {ticker} (Confidence: {confidence:.2f})")
+         else:
+             print("\nNo consensus decisions generated.")
+
+         print(f"Results saved to: {args.output_dir}")
+
+     elif args.mode == "analysis":
+         # Run portfolio analysis
+         results = run_portfolio_analysis(args)
+
+         # Print summary
+         print("\n=== Portfolio Analysis Results ===")
+         print(f"Analysis Time: {datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')}")
+         print(f"Portfolio File: {args.portfolio_file}")
+
+         # Print agent weights
+         agent_weights = results.get("meta_agent", {}).get("agent_weights", {})
+         if agent_weights:
+             print("\nAgent Weights:")
+             for agent_id, weight in agent_weights.items():
+                 print(f"  {agent_id}: {weight:.2f}")
+
+         # Print consensus decisions
+         consensus_decisions = results.get("meta_agent", {}).get("consensus_decisions", [])
+         if consensus_decisions:
+             print("\nRecommended Actions:")
+             for decision in consensus_decisions:
+                 ticker = decision.get("ticker", "")
+                 action = decision.get("action", "")
+                 confidence = decision.get("confidence", 0)
+                 quantity = decision.get("quantity", 0)
+
+                 print(f"  {action.upper()} {quantity} {ticker} (Confidence: {confidence:.2f})")
+         else:
+             print("\nNo recommended actions.")
+
+         print(f"Results saved to: {args.output_dir}")
+
+
+ if __name__ == "__main__":
+     main()
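The backtest summary derives total return as `(final_value / initial_capital) - 1` and renders it with the `:.2%` format spec, which multiplies by 100 and appends a percent sign. The arithmetic in isolation (the capital figures are illustrative):

```python
# Illustrative backtest endpoints.
initial_capital = 100000.0
final_value = 112500.0

# Simple total return over the whole period.
total_return = (final_value / initial_capital) - 1

# :.2% formats 0.125 as a percentage with two decimals.
print(f"Total Return: {total_return:.2%}")  # Total Return: 12.50%
```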
src/portfolio/manager.py ADDED
@@ -0,0 +1,1940 @@
+ """
2
+ PortfolioManager - Recursive Meta-Agent Arbitration Framework
3
+
4
+ This module implements the portfolio meta-agent that recursively arbitrates between
5
+ different philosophical investment agents and manages the overall portfolio allocation.
6
+
7
+ Key capabilities:
8
+ - Multi-agent arbitration with philosophical weighting
9
+ - Attribution-weighted position sizing
10
+ - Recursive consensus formation across agents
11
+ - Transparent decision tracing with interpretability scaffolding
12
+ - Conflict resolution through value attribution
13
+ - Memory-based temporal reasoning across market cycles
14
+
15
+ Internal Note: The portfolio manager implements the meta-agent arbitration layer
16
+ using recursive attribution traces and symbolic consensus formation shells.
17
+ """
18
+
19
+ import datetime
20
+ import uuid
21
+ import logging
22
+ import math
23
+ import json
24
+ from typing import Dict, List, Any, Optional, Tuple, Set, Union
25
+ import numpy as np
26
+ from collections import defaultdict
27
+
28
+ # Core agent functionality
29
+ from ..agents.base import BaseAgent, AgentSignal
30
+ from ..cognition.graph import ReasoningGraph
31
+ from ..cognition.memory import MemoryShell
32
+ from ..cognition.attribution import AttributionTracer
33
+ from ..utils.diagnostics import TracingTools
34
+
35
+ # Type hints
36
+ from pydantic import BaseModel, Field
37
+
38
+
39
+ class Position(BaseModel):
+     """Current portfolio position with attribution."""
+
+     ticker: str = Field(...)
+     quantity: int = Field(...)
+     entry_price: float = Field(...)
+     current_price: float = Field(...)
+     entry_date: datetime.datetime = Field(default_factory=datetime.datetime.now)
+     attribution: Dict[str, float] = Field(default_factory=dict)  # Agent contributions
+     confidence: float = Field(default=0.5)
+     reasoning: str = Field(default="")
+     value_basis: str = Field(default="")
+     last_update: datetime.datetime = Field(default_factory=datetime.datetime.now)
+
+
+ class Portfolio(BaseModel):
+     """Portfolio state with positions and performance metrics."""
+
+     id: str = Field(default_factory=lambda: str(uuid.uuid4()))
+     positions: Dict[str, Position] = Field(default_factory=dict)
+     cash: float = Field(...)
+     initial_capital: float = Field(...)
+     last_update: datetime.datetime = Field(default_factory=datetime.datetime.now)
+     performance_history: List[Dict[str, Any]] = Field(default_factory=list)
+
+     def get_value(self, price_data: Dict[str, float]) -> float:
+         """Calculate total portfolio value including cash."""
+         total_value = self.cash
+
+         for ticker, position in self.positions.items():
+             # Get current price if available, otherwise use stored price
+             current_price = price_data.get(ticker, position.current_price)
+             position_value = position.quantity * current_price
+             total_value += position_value
+
+         return total_value
+
+     def get_returns(self) -> Dict[str, float]:
+         """Calculate portfolio returns."""
+         if not self.performance_history:
+             return {
+                 "total_return": 0.0,
+                 "annualized_return": 0.0,
+                 "volatility": 0.0,
+                 "sharpe_ratio": 0.0,
+             }
+
+         # Extract portfolio values
+         values = [entry["portfolio_value"] for entry in self.performance_history]
+
+         # Calculate returns
+         if len(values) < 2:
+             return {
+                 "total_return": 0.0,
+                 "annualized_return": 0.0,
+                 "volatility": 0.0,
+                 "sharpe_ratio": 0.0,
+             }
+
+         # Calculate total return
+         total_return = (values[-1] / values[0]) - 1
+
+         # Calculate daily returns
+         daily_returns = []
+         for i in range(1, len(values)):
+             daily_return = (values[i] / values[i-1]) - 1
+             daily_returns.append(daily_return)
+
+         # Calculate annualized return (assuming daily values)
+         days = len(values) - 1
+         annualized_return = ((1 + total_return) ** (365 / days)) - 1
+
+         # Calculate volatility (annualized standard deviation of returns)
+         if daily_returns:
+             daily_volatility = np.std(daily_returns)
+             annualized_volatility = daily_volatility * (252 ** 0.5)  # Assuming 252 trading days
+         else:
+             annualized_volatility = 0.0
+
+         # Calculate Sharpe ratio (assuming risk-free rate of 0 for simplicity)
+         sharpe_ratio = annualized_return / annualized_volatility if annualized_volatility > 0 else 0.0
+
+         return {
+             "total_return": total_return,
+             "annualized_return": annualized_return,
+             "volatility": annualized_volatility,
+             "sharpe_ratio": sharpe_ratio,
+         }
+
+     def get_allocation(self) -> Dict[str, float]:
+         """Get current portfolio allocation percentages."""
+         total_value = self.cash
+         for ticker, position in self.positions.items():
+             total_value += position.quantity * position.current_price
+
+         if total_value <= 0:
+             return {"cash": 1.0}
+
+         # Calculate allocations
+         allocations = {"cash": self.cash / total_value}
+
+         for ticker, position in self.positions.items():
+             position_value = position.quantity * position.current_price
+             allocations[ticker] = position_value / total_value
+
+         return allocations
+
+     def update_prices(self, price_data: Dict[str, float]) -> None:
+         """Update position prices with latest market data."""
+         for ticker, position in self.positions.items():
+             if ticker in price_data:
+                 position.current_price = price_data[ticker]
+                 position.last_update = datetime.datetime.now()
+
+         self.last_update = datetime.datetime.now()
+
+     def record_performance(self, price_data: Dict[str, float]) -> Dict[str, Any]:
+         """Record current performance snapshot."""
+         # Calculate portfolio value
+         portfolio_value = self.get_value(price_data)
+
+         # Calculate returns
+         returns = {
+             "daily_return": 0.0,
+             "total_return": (portfolio_value / self.initial_capital) - 1,
+         }
+
+         # Calculate daily return if we have past data
+         if self.performance_history:
+             last_value = self.performance_history[-1]["portfolio_value"]
+             returns["daily_return"] = (portfolio_value / last_value) - 1
+
+         # Create snapshot
+         snapshot = {
+             "timestamp": datetime.datetime.now(),
+             "portfolio_value": portfolio_value,
+             "cash": self.cash,
+             "positions": {ticker: pos.dict() for ticker, pos in self.positions.items()},
+             "returns": returns,
+             "allocation": self.get_allocation(),
+         }
+
+         # Add to history
+         self.performance_history.append(snapshot)
+
+         return snapshot
+
+
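`Portfolio.get_returns` annualizes off a daily value series: total return from the series endpoints, volatility as the standard deviation of daily returns scaled by the square root of 252 trading days, and a Sharpe ratio with a zero risk-free rate. The same formulas as a compressed numeric walkthrough (the value series is illustrative, and the std matches NumPy's default population convention, `ddof=0`):

```python
import math

values = [100.0, 101.0, 99.0, 102.0]  # illustrative daily portfolio values

# Total return from the series endpoints.
total_return = (values[-1] / values[0]) - 1
print(round(total_return, 4))  # 0.02

# Day-over-day simple returns.
daily_returns = [(values[i] / values[i - 1]) - 1 for i in range(1, len(values))]

# Annualized return assuming one value per calendar day.
days = len(values) - 1
annualized_return = (1 + total_return) ** (365 / days) - 1

# Population std of daily returns, annualized over 252 trading days.
mean = sum(daily_returns) / len(daily_returns)
daily_vol = math.sqrt(sum((r - mean) ** 2 for r in daily_returns) / len(daily_returns))
annualized_vol = daily_vol * math.sqrt(252)

# Sharpe with risk-free rate of zero, guarding against zero volatility.
sharpe = annualized_return / annualized_vol if annualized_vol > 0 else 0.0
```

Note the asymmetry the method inherits: returns annualize over 365 calendar days while volatility annualizes over 252 trading days, which is a common but worth-knowing convention mismatch.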
+ class PortfolioManager:
+     """
+     Portfolio Meta-Agent for investment arbitration and management.
+
+     The PortfolioManager serves as a recursive meta-agent that:
+     - Arbitrates between different philosophical agents
+     - Forms consensus through attribution-weighted aggregation
+     - Manages portfolio allocation and position sizing
+     - Provides transparent decision tracing
+     - Maintains temporal memory across market cycles
+     """
+
+     def __init__(
+         self,
+         agents: List[BaseAgent],
+         initial_capital: float = 100000.0,
+         arbitration_depth: int = 2,
+         max_position_size: float = 0.2,  # 20% max allocation to single position
+         min_position_size: float = 0.01,  # 1% min allocation to single position
+         consensus_threshold: float = 0.6,  # Minimum confidence for consensus
+         show_trace: bool = False,
+         risk_budget: float = 0.5,  # Risk budget (0-1)
+     ):
+         """
+         Initialize portfolio manager.
+
+         Args:
+             agents: List of investment agents
+             initial_capital: Starting capital amount
+             arbitration_depth: Depth of arbitration reasoning
+             max_position_size: Maximum position size as fraction of portfolio
+             min_position_size: Minimum position size as fraction of portfolio
+             consensus_threshold: Minimum confidence for consensus
+             show_trace: Whether to show reasoning traces
+             risk_budget: Risk budget (0-1)
+         """
+         self.id = str(uuid.uuid4())
+         self.agents = agents
+         self.arbitration_depth = arbitration_depth
+         self.max_position_size = max_position_size
+         self.min_position_size = min_position_size
+         self.consensus_threshold = consensus_threshold
+         self.show_trace = show_trace
+         self.risk_budget = risk_budget
+
+         # Initialize portfolio
+         self.portfolio = Portfolio(
+             cash=initial_capital,
+             initial_capital=initial_capital,
+         )
+
+         # Initialize cognitive components
+         self.memory_shell = MemoryShell(decay_rate=0.1)  # Slower decay for meta-agent
+         self.attribution_tracer = AttributionTracer()
+
+         # Initialize reasoning graph
+         self.reasoning_graph = ReasoningGraph(
+             agent_name="PortfolioMetaAgent",
+             agent_philosophy="Recursive arbitration across philosophical perspectives",
+             model_router=agents[0].llm if agents else None,  # Use first agent's model router
+             trace_enabled=show_trace,
+         )
+
+         # Configure meta-agent reasoning graph
+         self._configure_reasoning_graph()
+
+         # Diagnostics
+         self.tracer = TracingTools(agent_id=self.id, agent_name="PortfolioMetaAgent")
+
+         # Agent weight tracking
+         self.agent_weights = {agent.id: 1.0 / len(agents) for agent in agents} if agents else {}
+
+         # Initialize meta-agent state
+         self.meta_state = {
+             "agent_consensus": {},
+             "agent_performance": {},
+             "conflict_history": [],
+             "arbitration_history": [],
+             "risk_budget_used": 0.0,
+             "last_rebalance": datetime.datetime.now(),
+             "consistency_metrics": {},
+         }
+
+         # Internal symbolic processing commands
+         self._commands = {
+             "reflect.trace": self._reflect_trace,
+             "fork.signal": self._fork_signal,
+             "collapse.detect": self._collapse_detect,
+             "attribute.weight": self._attribute_weight,
+             "drift.observe": self._drift_observe,
+         }
+
+     def _configure_reasoning_graph(self) -> None:
+         """Configure the meta-agent reasoning graph."""
+         # Configure nodes for meta-agent reasoning
+         self.reasoning_graph.add_node(
+             "generate_agent_signals",
+             self._generate_agent_signals
+         )
+
+         self.reasoning_graph.add_node(
+             "consensus_formation",
+             self._consensus_formation
+         )
+
+         self.reasoning_graph.add_node(
+             "conflict_resolution",
+             self._conflict_resolution
+         )
+
+         self.reasoning_graph.add_node(
+             "position_sizing",
+             self._position_sizing
+         )
+
+         self.reasoning_graph.add_node(
+             "meta_reflection",
+             self._meta_reflection
+         )
+
+         # Configure graph structure
+         self.reasoning_graph.set_entry_point("generate_agent_signals")
+         self.reasoning_graph.add_edge("generate_agent_signals", "consensus_formation")
+         self.reasoning_graph.add_edge("consensus_formation", "conflict_resolution")
+         self.reasoning_graph.add_edge("conflict_resolution", "position_sizing")
+         self.reasoning_graph.add_edge("position_sizing", "meta_reflection")
+
+     def process_market_data(self, market_data: Dict[str, Any]) -> Dict[str, Any]:
+         """
+         Process market data through all agents and form meta-agent consensus.
+
+         Args:
+             market_data: Market data dictionary
+
+         Returns:
+             Processed market data with meta-agent insights
+         """
+         # Update portfolio prices (price_data must exist even when no
+         # ticker data is supplied, since it is reused for valuation below)
+         price_data = {}
+         if "tickers" in market_data:
+             price_data = {ticker: data.get("price", 0)
+                           for ticker, data in market_data.get("tickers", {}).items()}
+             self.portfolio.update_prices(price_data)
+
+         # Process market data through each agent
+         agent_analyses = {}
+         for agent in self.agents:
+             try:
+                 agent_analysis = agent.process_market_data(market_data)
+                 agent_analyses[agent.id] = {
+                     "agent": agent.name,
+                     "analysis": agent_analysis,
+                     "philosophy": agent.philosophy,
+                 }
+             except Exception as e:
+                 logging.error(f"Error processing market data with agent {agent.name}: {e}")
+
+         # Generate agent signals
+         agent_signals = {}
+         for agent in self.agents:
+             try:
+                 agent_processed_data = agent_analyses.get(agent.id, {}).get("analysis", {})
+                 signals = agent.generate_signals(agent_processed_data)
+                 agent_signals[agent.id] = {
+                     "agent": agent.name,
+                     "signals": signals,
+                     "confidence": np.mean([s.confidence for s in signals]) if signals else 0.5,
+                 }
+             except Exception as e:
+                 logging.error(f"Error generating signals with agent {agent.name}: {e}")
+
+         # Prepare reasoning input
+         reasoning_input = {
+             "market_data": market_data,
+             "agent_analyses": agent_analyses,
+             "agent_signals": agent_signals,
+             "portfolio": self.portfolio.dict(),
+             "agent_weights": self.agent_weights,
+             "meta_state": self.meta_state,
+         }
+
+         # Run meta-agent reasoning
+         meta_result = self.reasoning_graph.run(
+             input=reasoning_input,
+             trace_depth=self.arbitration_depth
+         )
+
+         # Extract consensus decisions
+         consensus_decisions = meta_result.get("output", {}).get("consensus_decisions", [])
+
+         # Add to memory
+         self.memory_shell.add_experience({
+             "type": "market_analysis",
+             "market_data": market_data,
+             "meta_result": meta_result,
+             "timestamp": datetime.datetime.now().isoformat(),
+         })
+
+         # Create processed data result
+         processed_data = {
+             "timestamp": datetime.datetime.now(),
+             "meta_agent": {
+                 "consensus_decisions": consensus_decisions,
+                 "confidence": meta_result.get("confidence", 0.5),
+                 "agent_weights": self.agent_weights.copy(),
+             },
+             "agents": {agent.name: agent_analyses.get(agent.id, {}).get("analysis", {})
+                        for agent in self.agents},
+             "portfolio_value": self.portfolio.get_value(price_data),
+             "allocation": self.portfolio.get_allocation(),
+         }
+
+         # Add trace if enabled
+         if self.show_trace and "trace" in meta_result:
+             processed_data["trace"] = meta_result["trace"]
+
+         return processed_data
+
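The `consensus_formation` node itself is defined further down; what `process_market_data` hands it is per-agent signal confidences plus `self.agent_weights`. One plausible minimal form of attribution-weighted aggregation over those inputs, for intuition only (a sketch of the idea with made-up agent ids, not the project's actual node implementation):

```python
# Each agent votes on a ticker with a direction in [-1, 1] (sell..buy)
# and a confidence in [0, 1]; agent_weights come from past performance.
agent_weights = {"value_agent": 0.5, "momentum_agent": 0.3, "macro_agent": 0.2}
signals = {
    "value_agent": {"direction": 1.0, "confidence": 0.8},
    "momentum_agent": {"direction": -1.0, "confidence": 0.4},
    "macro_agent": {"direction": 1.0, "confidence": 0.6},
}

# Effective weight = agent weight x signal confidence; consensus is the
# weighted mean direction, so low-confidence dissent is damped.
score = 0.0
total_weight = 0.0
for agent_id, sig in signals.items():
    w = agent_weights.get(agent_id, 0.0) * sig["confidence"]
    score += w * sig["direction"]
    total_weight += w

consensus = score / total_weight if total_weight > 0 else 0.0
print(round(consensus, 3))  # 0.625
```

A consensus above a threshold such as `consensus_threshold = 0.6` would then qualify as an actionable decision; below it, the decision would fall through to conflict resolution.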
+     def execute_trades(self, decisions: List[Dict[str, Any]]) -> Dict[str, Any]:
+         """
+         Execute trade decisions and update portfolio.
+
+         Args:
+             decisions: List of trade decisions
+
+         Returns:
+             Trade execution results
+         """
+         execution_results = {
+             "trades": [],
+             "errors": [],
+             "portfolio_update": {},
+             "timestamp": datetime.datetime.now(),
+         }
+
+         # Get current prices (use stored prices if not available)
+         price_data = {ticker: position.current_price
+                       for ticker, position in self.portfolio.positions.items()}
+
+         # Execute each decision
+         for decision in decisions:
+             ticker = decision.get("ticker", "")
+             action = decision.get("action", "")
+             quantity = decision.get("quantity", 0)
+             confidence = decision.get("confidence", 0.5)
+             reasoning = decision.get("reasoning", "")
+             attribution = decision.get("attribution", {})
+             value_basis = decision.get("value_basis", "")
+
+             # Skip invalid decisions
+             if not ticker or not action or quantity <= 0:
+                 execution_results["errors"].append({
+                     "ticker": ticker,
+                     "error": "Invalid decision parameters",
+                     "decision": decision,
+                 })
+                 continue
+
+             # Get current price
+             current_price = price_data.get(ticker, 0)
+
+             # Fetch from market if not available
+             if current_price <= 0:
+                 # In a real implementation, this would fetch from market
+                 # For now, use placeholder
+                 current_price = 100.0
+                 price_data[ticker] = current_price
+
+             try:
+                 if action == "buy":
+                     # Check if we have enough cash
+                     cost = quantity * current_price
+                     if cost > self.portfolio.cash:
+                         max_quantity = math.floor(self.portfolio.cash / current_price)
+                         if max_quantity <= 0:
+                             execution_results["errors"].append({
+                                 "ticker": ticker,
+                                 "error": "Insufficient cash for purchase",
+                                 "attempted_quantity": quantity,
+                                 "available_cash": self.portfolio.cash,
+                             })
+                             continue
+
+                         # Adjust quantity
+                         quantity = max_quantity
+                         cost = quantity * current_price
+
+                     # Execute buy
+                     if ticker in self.portfolio.positions:
+                         # Update existing position (capture pre-trade quantity
+                         # before mutating it, so blending weights are correct)
+                         position = self.portfolio.positions[ticker]
+                         old_quantity = position.quantity
+                         new_quantity = old_quantity + quantity
+                         new_cost = (old_quantity * position.entry_price) + cost
+
+                         # Calculate new average entry price
+                         new_entry_price = new_cost / new_quantity if new_quantity > 0 else current_price
+
+                         # Update position
+                         position.quantity = new_quantity
+                         position.entry_price = new_entry_price
+                         position.current_price = current_price
+                         position.last_update = datetime.datetime.now()
+
+                         # Update attribution (weighted by pre-/post-trade quantity)
+                         old_weight = old_quantity / new_quantity
+                         new_weight = quantity / new_quantity
+
+                         for agent_id, weight in attribution.items():
+                             position.attribution[agent_id] = (
+                                 (position.attribution.get(agent_id, 0) * old_weight) +
+                                 (weight * new_weight)
+                             )
+
+                         # Update other fields
+                         position.confidence = (position.confidence * old_weight) + (confidence * new_weight)
+                         position.reasoning += f"\nAdditional purchase: {reasoning}"
+                         position.value_basis = value_basis if value_basis else position.value_basis
+                     else:
+                         # Create new position
+                         self.portfolio.positions[ticker] = Position(
+                             ticker=ticker,
+                             quantity=quantity,
+                             entry_price=current_price,
+                             current_price=current_price,
+                             attribution=attribution,
+                             confidence=confidence,
+                             reasoning=reasoning,
+                             value_basis=value_basis,
+                         )
+
+                     # Update cash
+                     self.portfolio.cash -= cost
+
+                     # Record trade
+                     execution_results["trades"].append({
+                         "ticker": ticker,
+                         "action": "buy",
+                         "quantity": quantity,
+                         "price": current_price,
+                         "cost": cost,
+                         "timestamp": datetime.datetime.now(),
+                     })
+
+                 elif action == "sell":
+                     # Check if we have the position
+                     if ticker not in self.portfolio.positions:
+                         execution_results["errors"].append({
+                             "ticker": ticker,
+                             "error": "Position not found",
+                             "attempted_action": "sell",
+                         })
+                         continue
+
+                     position = self.portfolio.positions[ticker]
+
+                     # Check if we have enough shares
+                     if quantity > position.quantity:
+                         quantity = position.quantity
+
+                     # Calculate proceeds
+                     proceeds = quantity * current_price
+
+                     # Execute sell
+                     if quantity == position.quantity:
+                         # Sell entire position
+                         del self.portfolio.positions[ticker]
+                     else:
+                         # Partial sell
+                         position.quantity -= quantity
+                         position.last_update = datetime.datetime.now()
+
+                     # Update cash
+                     self.portfolio.cash += proceeds
+
+                     # Record trade
+                     execution_results["trades"].append({
+                         "ticker": ticker,
+                         "action": "sell",
+                         "quantity": quantity,
+                         "price": current_price,
+                         "proceeds": proceeds,
+                         "timestamp": datetime.datetime.now(),
+                     })
+
+             except Exception as e:
+                 execution_results["errors"].append({
+                     "ticker": ticker,
+                     "error": str(e),
+                     "decision": decision,
+                 })
+
+         # Update portfolio timestamps
+         self.portfolio.last_update = datetime.datetime.now()
+
+         # Record performance
+         performance_snapshot = self.portfolio.record_performance(price_data)
+         execution_results["portfolio_update"] = performance_snapshot
+
+         # Update agent states based on trades
+         self._update_agent_states(execution_results)
+
+         return execution_results
+
+     def _update_agent_states(self, execution_results: Dict[str, Any]) -> None:
+         """
+         Update agent states based on trade results.
+
+         Args:
+             execution_results: Trade execution results
+         """
+         # Create feedback for each agent
+         for agent in self.agents:
+             # Extract agent-specific trades
+             agent_trades = []
+             for trade in execution_results.get("trades", []):
+                 ticker = trade.get("ticker", "")
+
+                 if ticker in self.portfolio.positions:
+                     position = self.portfolio.positions[ticker]
+                     agent_attribution = position.attribution.get(agent.id, 0)
+
+                     if agent_attribution > 0:
+                         agent_trades.append({
+                             **trade,
+                             "attribution": agent_attribution,
+                         })
+
+             # Create market feedback
+             market_feedback = {
+                 "trades": agent_trades,
+                 "portfolio_value": execution_results.get("portfolio_update", {}).get("portfolio_value", 0),
+                 "timestamp": datetime.datetime.now(),
+             }
+
+             # Add performance metrics if available
+             if "performance" in execution_results.get("portfolio_update", {}):
+                 market_feedback["performance"] = execution_results["portfolio_update"]["performance"]
+
+             # Update agent state
+             try:
+                 agent.update_state(market_feedback)
+             except Exception as e:
+                 logging.error(f"Error updating state for agent {agent.name}: {e}")
+
+     def rebalance_portfolio(self, target_allocation: Dict[str, float]) -> Dict[str, Any]:
+         """
+         Rebalance portfolio to match target allocation.
+
+         Args:
+             target_allocation: Target allocation as a fraction of portfolio value per ticker
+
+         Returns:
+             Rebalance results
+         """
+         rebalance_results = {
+             "trades": [],
+             "errors": [],
+             "initial_allocation": self.portfolio.get_allocation(),
+             "target_allocation": target_allocation,
+             "timestamp": datetime.datetime.now(),
+         }
+
+         # Validate target allocation
+         total_allocation = sum(target_allocation.values())
+         if abs(total_allocation - 1.0) > 0.01:  # Allow small rounding errors
+             rebalance_results["errors"].append({
+                 "error": "Invalid target allocation, must sum to 1.0",
+                 "total": total_allocation,
+             })
+             return rebalance_results
+
+         # Get current portfolio value and allocation
+         current_value = self.portfolio.get_value({
+             ticker: pos.current_price for ticker, pos in self.portfolio.positions.items()
+         })
+         current_allocation = self.portfolio.get_allocation()
+
+         # Calculate trades needed
+         trade_decisions = []
+
+         # Process sells first (to free up cash)
+         for ticker, position in list(self.portfolio.positions.items()):
+             current_ticker_allocation = current_allocation.get(ticker, 0)
+             target_ticker_allocation = target_allocation.get(ticker, 0)
+
+             # Check if we need to sell
+             if current_ticker_allocation > target_ticker_allocation:
+                 # Calculate how much to sell
+                 current_position_value = position.quantity * position.current_price
+                 target_position_value = current_value * target_ticker_allocation
+                 value_to_sell = current_position_value - target_position_value
+
+                 # Convert to quantity
+                 quantity_to_sell = math.floor(value_to_sell / position.current_price)
+
+                 if quantity_to_sell > 0:
+                     # Create sell decision
+                     trade_decisions.append({
+                         "ticker": ticker,
+                         "action": "sell",
+                         "quantity": min(quantity_to_sell, position.quantity),  # Never sell more than we hold
+                         "confidence": 0.8,  # High confidence for rebalancing
+                         "reasoning": f"Portfolio rebalancing to target allocation of {target_ticker_allocation:.1%}",
+                         "attribution": position.attribution,  # Maintain attribution
+                         "value_basis": "Portfolio efficiency and risk management",
+                     })
+
+         # Execute sells
+         sell_results = self.execute_trades([d for d in trade_decisions if d["action"] == "sell"])
+         rebalance_results["trades"].extend(sell_results.get("trades", []))
+         rebalance_results["errors"].extend(sell_results.get("errors", []))
+
+         # Update portfolio value after sells
+         current_value = self.portfolio.get_value({
+             ticker: pos.current_price for ticker, pos in self.portfolio.positions.items()
+         })
+
+         # Process buys
+         buy_decisions = []
+         for ticker, target_alloc in target_allocation.items():
+             # Skip cash
+             if ticker == "cash":
+                 continue
+
+             current_ticker_allocation = 0
+             if ticker in self.portfolio.positions:
+                 position = self.portfolio.positions[ticker]
+                 current_ticker_allocation = (position.quantity * position.current_price) / current_value
+
+             # Check if we need to buy
+             if current_ticker_allocation < target_alloc:
+                 # Calculate how much to buy
+                 target_position_value = current_value * target_alloc
+                 current_position_value = 0
+                 if ticker in self.portfolio.positions:
+                     position = self.portfolio.positions[ticker]
+                     current_position_value = position.quantity * position.current_price
+
+                 value_to_buy = target_position_value - current_position_value
+
+                 # Limit to available cash
+                 if value_to_buy > self.portfolio.cash:
+                     value_to_buy = self.portfolio.cash
+
+                 # Get current price
+                 if ticker in self.portfolio.positions:
+                     current_price = self.portfolio.positions[ticker].current_price
+                 else:
+                     # This would fetch from market in a real implementation
+                     # For now, use placeholder
+                     current_price = 100.0
+
+                 # Convert to quantity
+                 quantity_to_buy = math.floor(value_to_buy / current_price)
+
+                 if quantity_to_buy > 0:
+                     # Determine attribution based on existing position or equal weights
+                     attribution = {}
+                     if ticker in self.portfolio.positions:
+                         attribution = self.portfolio.positions[ticker].attribution
+                     else:
+                         # Equal attribution to all agents
+                         for agent in self.agents:
+                             attribution[agent.id] = 1.0 / len(self.agents)
+
+                     # Create buy decision
+                     buy_decisions.append({
+                         "ticker": ticker,
+                         "action": "buy",
+                         "quantity": quantity_to_buy,
+                         "confidence": 0.8,  # High confidence for rebalancing
+                         "reasoning": f"Portfolio rebalancing to target allocation of {target_alloc:.1%}",
+                         "attribution": attribution,
+                         "value_basis": "Portfolio efficiency and risk management",
+                     })
+
+         # Execute buys
+         buy_results = self.execute_trades(buy_decisions)
+         rebalance_results["trades"].extend(buy_results.get("trades", []))
+         rebalance_results["errors"].extend(buy_results.get("errors", []))
+
+         # Record final allocation
+         rebalance_results["final_allocation"] = self.portfolio.get_allocation()
+
+         # Update last rebalance timestamp
+         self.meta_state["last_rebalance"] = datetime.datetime.now()
+
+         return rebalance_results
+
+     def run_simulation(self, start_date: str, end_date: str,
+                        data_source: str = "yahoo", rebalance_frequency: str = "monthly") -> Dict[str, Any]:
+         """
+         Run portfolio simulation over a time period.
+
+         Args:
+             start_date: Start date (YYYY-MM-DD)
+             end_date: End date (YYYY-MM-DD)
+             data_source: Market data source
+             rebalance_frequency: Rebalance frequency
+
+         Returns:
+             Simulation results
+         """
+         # This is a placeholder implementation.
+         # A real implementation would fetch historical data and simulate trading day by day.
+
+         simulation_results = {
+             "start_date": start_date,
+             "end_date": end_date,
+             "data_source": data_source,
+             "rebalance_frequency": rebalance_frequency,
+             "initial_capital": self.portfolio.initial_capital,
+             "final_value": self.portfolio.initial_capital,  # Placeholder
+             "trades": [],
+             "performance": [],
+             "timestamp": datetime.datetime.now(),
+         }
+
+         return simulation_results
+
+     def get_portfolio_state(self) -> Dict[str, Any]:
+         """
+         Get current portfolio state.
+
+         Returns:
+             Portfolio state
+         """
+         # Get current prices
+         price_data = {ticker: position.current_price
+                       for ticker, position in self.portfolio.positions.items()}
+
+         # Calculate portfolio value
+         portfolio_value = self.portfolio.get_value(price_data)
+
+         # Calculate returns
+         returns = self.portfolio.get_returns()
+
+         # Calculate allocation
+         allocation = self.portfolio.get_allocation()
+
+         # Compile portfolio state
+         portfolio_state = {
+             "portfolio_value": portfolio_value,
+             "cash": self.portfolio.cash,
+             "positions": {ticker: {
+                 "ticker": pos.ticker,
+                 "quantity": pos.quantity,
+                 "entry_price": pos.entry_price,
+                 "current_price": pos.current_price,
+                 "market_value": pos.quantity * pos.current_price,
+                 "allocation": allocation.get(ticker, 0),
+                 "unrealized_gain": (pos.current_price / pos.entry_price - 1) * 100,  # Percentage
+                 "attribution": pos.attribution,
+                 "entry_date": pos.entry_date.isoformat(),
+             } for ticker, pos in self.portfolio.positions.items()},
+             "returns": returns,
+             "allocation": allocation,
+             "initial_capital": self.portfolio.initial_capital,
+             "timestamp": datetime.datetime.now().isoformat(),
+         }
+
+         return portfolio_state
+
+     def visualize_consensus_graph(self) -> Dict[str, Any]:
+         """
+         Generate visualization data for the consensus formation graph.
+
+         Returns:
+             Consensus graph visualization data
+         """
+         visualization_data = {
+             "nodes": [],
+             "links": [],
+             "timestamp": datetime.datetime.now().isoformat(),
+         }
+
+         # Add meta-agent node
+         visualization_data["nodes"].append({
+             "id": "meta",
+             "label": "Portfolio Meta-Agent",
+             "type": "meta",
+             "size": 20,
+         })
+
+         # Add agent nodes
+         for agent in self.agents:
+             visualization_data["nodes"].append({
+                 "id": agent.id,
+                 "label": f"{agent.name} Agent",
+                 "type": "agent",
+                 "philosophy": agent.philosophy,
+                 "size": 15,
+                 "weight": self.agent_weights.get(agent.id, 0),
+             })
+
+             # Add link from agent to meta
+             visualization_data["links"].append({
+                 "source": agent.id,
+                 "target": "meta",
+                 "value": self.agent_weights.get(agent.id, 0),
+                 "type": "influence",
+             })
+
+         # Add position nodes
+         for ticker, position in self.portfolio.positions.items():
+             visualization_data["nodes"].append({
+                 "id": f"position-{ticker}",
+                 "label": ticker,
+                 "type": "position",
+                 "size": 10,
+                 "value": position.quantity * position.current_price,
+             })
+
+             # Add link from meta to position
+             visualization_data["links"].append({
+                 "source": "meta",
+                 "target": f"position-{ticker}",
+                 "value": 1.0,
+                 "type": "allocation",
+             })
+
+             # Add links from agents to position based on attribution
+             for agent_id, weight in position.attribution.items():
+                 if weight > 0.01:  # Threshold to reduce clutter
+                     visualization_data["links"].append({
+                         "source": agent_id,
+                         "target": f"position-{ticker}",
+                         "value": weight,
+                         "type": "attribution",
+                     })
+
+         return visualization_data
+
+     def visualize_agent_conflict_map(self) -> Dict[str, Any]:
+         """
+         Generate visualization data for the agent conflict map.
+
+         Returns:
+             Agent conflict map visualization data
+         """
+         conflict_data = {
+             "nodes": [],
+             "links": [],
+             "conflict_zones": [],
+             "timestamp": datetime.datetime.now().isoformat(),
+         }
+
+         # Add agent nodes
+         for agent in self.agents:
+             conflict_data["nodes"].append({
+                 "id": agent.id,
+                 "label": f"{agent.name} Agent",
+                 "type": "agent",
+                 "philosophy": agent.philosophy,
+                 "size": 15,
+             })
+
+         # Add position nodes
+         for ticker, position in self.portfolio.positions.items():
+             conflict_data["nodes"].append({
+                 "id": f"position-{ticker}",
+                 "label": ticker,
+                 "type": "position",
+                 "size": 10,
+             })
+
+         # Get recent conflicts from meta state
+         conflicts = self.meta_state.get("conflict_history", [])[-10:]
+
+         # Add conflict zones
+         for conflict in conflicts:
+             conflict_data["conflict_zones"].append({
+                 "id": conflict.get("id", str(uuid.uuid4())),
+                 "ticker": conflict.get("ticker", ""),
+                 "agents": conflict.get("agents", []),
+                 "resolution": conflict.get("resolution", "unresolved"),
+                 "timestamp": conflict.get("timestamp", datetime.datetime.now().isoformat()),
+             })
+
+             # Add links between conflicting agents
+             agent_ids = conflict.get("agents", [])
+             for i in range(len(agent_ids)):
+                 for j in range(i + 1, len(agent_ids)):
+                     conflict_data["links"].append({
+                         "source": agent_ids[i],
+                         "target": agent_ids[j],
+                         "value": 1.0,
+                         "type": "conflict",
+                         "ticker": conflict.get("ticker", ""),
+                     })
+
+         return conflict_data
+
+     def get_agent_performance(self) -> Dict[str, Any]:
+         """
+         Calculate performance metrics for each agent.
+
+         Returns:
+             Agent performance metrics
+         """
+         agent_performance = {}
+
+         # Calculate attribution-weighted returns
+         for agent in self.agents:
+             # Initialize metrics
+             metrics = {
+                 "total_attribution": 0.0,
+                 "weighted_return": 0.0,
+                 "positions": [],
+                 "win_rate": 0.0,
+                 "confidence": 0.0,
+             }
+
+             # Get agent attribution for each position
+             position_count = 0
+             winning_positions = 0
+
+             for ticker, position in self.portfolio.positions.items():
+                 agent_attribution = position.attribution.get(agent.id, 0)
+
+                 if agent_attribution > 0:
+                     # Calculate position return
+                     position_return = (position.current_price / position.entry_price) - 1
+
+                     # Add to metrics
+                     metrics["total_attribution"] += agent_attribution
+                     metrics["weighted_return"] += position_return * agent_attribution
+
+                     # Track win/loss
+                     position_count += 1
+                     if position_return > 0:
+                         winning_positions += 1
+
+                     # Add position details
+                     metrics["positions"].append({
+                         "ticker": ticker,
+                         "attribution": agent_attribution,
+                         "return": position_return,
+                         "weight": position.quantity * position.current_price,
+                     })
+
+             # Calculate win rate
+             metrics["win_rate"] = winning_positions / position_count if position_count > 0 else 0
+
+             # Get agent confidence
+             metrics["confidence"] = agent.state.confidence_history[-1] if agent.state.confidence_history else 0.5
+
+             # Normalize weighted return by total attribution
+             if metrics["total_attribution"] > 0:
+                 metrics["weighted_return"] /= metrics["total_attribution"]
+
+             # Store metrics
+             agent_performance[agent.id] = {
+                 "agent": agent.name,
+                 "philosophy": agent.philosophy,
+                 "metrics": metrics,
+             }
+
+         return agent_performance
+
+     def save_state(self, filepath: str) -> None:
+         """
+         Save portfolio manager state to a file.
+
+         Args:
+             filepath: Path to save state
+         """
+         # Compile state
+         state = {
+             "id": self.id,
+             "portfolio": self.portfolio.dict(),
+             "agent_weights": self.agent_weights,
+             "meta_state": self.meta_state,
+             "arbitration_depth": self.arbitration_depth,
+             "max_position_size": self.max_position_size,
+             "min_position_size": self.min_position_size,
+             "consensus_threshold": self.consensus_threshold,
+             "risk_budget": self.risk_budget,
+             "memory_shell": self.memory_shell.export_state(),
+             "timestamp": datetime.datetime.now().isoformat(),
+         }
+
+         # Save to file
+         with open(filepath, 'w') as f:
+             json.dump(state, f, indent=2, default=str)
+
+     def load_state(self, filepath: str) -> None:
+         """
+         Load portfolio manager state from a file.
+
+         Args:
+             filepath: Path to load state from
+         """
+         # Load from file
+         with open(filepath, 'r') as f:
+             state = json.load(f)
+
+         # Update state
+         self.id = state.get("id", self.id)
+         self.agent_weights = state.get("agent_weights", self.agent_weights)
+         self.meta_state = state.get("meta_state", self.meta_state)
+         self.arbitration_depth = state.get("arbitration_depth", self.arbitration_depth)
+         self.max_position_size = state.get("max_position_size", self.max_position_size)
+         self.min_position_size = state.get("min_position_size", self.min_position_size)
+         self.consensus_threshold = state.get("consensus_threshold", self.consensus_threshold)
+         self.risk_budget = state.get("risk_budget", self.risk_budget)
+
+         # Load portfolio
+         if "portfolio" in state:
+             from pydantic import parse_obj_as
+             self.portfolio = parse_obj_as(Portfolio, state["portfolio"])
+
+         # Load memory shell
+         if "memory_shell" in state:
+             self.memory_shell.import_state(state["memory_shell"])
+
+     # Reasoning graph node implementations
+     def _generate_agent_signals(self, state) -> Dict[str, Any]:
+         """
+         Generate signals from all agents.
+
+         Args:
+             state: Reasoning state
+
+         Returns:
+             Updated state fields
+         """
+         # Input already contains agent signals
+         input_data = state.input
+         agent_signals = input_data.get("agent_signals", {})
+
+         # Organize signals by ticker
+         ticker_signals = defaultdict(list)
+
+         for agent_id, agent_data in agent_signals.items():
+             for signal in agent_data.get("signals", []):
+                 ticker = signal.ticker
+                 ticker_signals[ticker].append({
+                     "agent_id": agent_id,
+                     "agent_name": agent_data.get("agent", "Unknown"),
+                     "signal": signal,
+                 })
+
+         # Return updated context
+         return {
+             "context": {
+                 **state.context,
+                 "ticker_signals": dict(ticker_signals),
+                 "agent_signals": agent_signals,
+             }
+         }
+
+     def _consensus_formation(self, state) -> Dict[str, Any]:
+         """
+         Form consensus from agent signals.
+
+         Args:
+             state: Reasoning state
+
+         Returns:
+             Updated state fields
+         """
+         # Extract signals by ticker
+         ticker_signals = state.context.get("ticker_signals", {})
+
+         # Form consensus for each ticker
+         consensus_decisions = []
+
+         for ticker, signals in ticker_signals.items():
+             # Skip if no signals
+             if not signals:
+                 continue
+
+             # Collect buy/sell/hold signals
+             buy_signals = []
+             sell_signals = []
+             hold_signals = []
+
+             for item in signals:
+                 signal = item.get("signal", {})
+                 action = signal.action.lower()
+
+                 if action == "buy":
+                     buy_signals.append((item, signal))
+                 elif action == "sell":
+                     sell_signals.append((item, signal))
+                 elif action == "hold":
+                     hold_signals.append((item, signal))
+
+             # Skip conflicting or empty signal sets (handled in conflict resolution)
+             if (buy_signals and sell_signals) or (not buy_signals and not sell_signals and not hold_signals):
+                 continue
+
+             # Form consensus for non-conflicting signals
+             if buy_signals:
+                 consensus = self._form_action_consensus(ticker, "buy", buy_signals)
+                 if consensus:
+                     consensus_decisions.append(consensus)
+
+             elif sell_signals:
+                 consensus = self._form_action_consensus(ticker, "sell", sell_signals)
+                 if consensus:
+                     consensus_decisions.append(consensus)
+
+         # Return updated output
+         return {
+             "context": {
+                 **state.context,
+                 "consensus_decisions": consensus_decisions,
+                 "consensus_tickers": [decision.get("ticker") for decision in consensus_decisions],
+             },
+             "output": {
+                 "consensus_decisions": consensus_decisions,
+             }
+         }
+
+     def _form_action_consensus(self, ticker: str, action: str,
+                                signals: List[Tuple[Dict[str, Any], Any]]) -> Optional[Dict[str, Any]]:
+         """
+         Form consensus for a specific action on a ticker.
+
+         Args:
+             ticker: Stock ticker
+             action: Action ("buy" or "sell")
+             signals: List of (agent_data, signal) tuples
+
+         Returns:
+             Consensus decision or None if no consensus
+         """
+         if not signals:
+             return None
+
+         # Calculate weighted confidence
+         total_weight = 0.0
+         weighted_confidence = 0.0
+         attribution = {}
+
+         for item, signal in signals:
+             agent_id = item.get("agent_id", "")
+
+             # Skip if missing agent ID
+             if not agent_id:
+                 continue
+
+             # Get agent weight
+             agent_weight = self.agent_weights.get(agent_id, 0)
+
+             # Add to attribution
+             attribution[agent_id] = agent_weight
+
+             # Add to weighted confidence
+             weighted_confidence += signal.confidence * agent_weight
+             total_weight += agent_weight
+
+         # Check if we have sufficient weight
+         if total_weight <= 0:
+             return None
+
+         # Normalize attribution
+         for agent_id in attribution:
+             attribution[agent_id] /= total_weight
+
+         # Calculate consensus confidence
+         consensus_confidence = weighted_confidence / total_weight
+
+         # Check against threshold
+         if consensus_confidence < self.consensus_threshold:
+             return None
+
+         # Aggregate quantities across signals that specify one
+         quantities = [signal.quantity for _, signal in signals
+                       if hasattr(signal, "quantity") and signal.quantity is not None]
+         avg_quantity = sum(quantities) // len(quantities) if quantities else 0
+
+         # Use the median quantity when signals vary widely, the average otherwise
+         if quantities and max(quantities) / (min(quantities) or 1) > 3:
+             quantities.sort()
+             median_quantity = quantities[len(quantities) // 2]
+         else:
+             median_quantity = avg_quantity
+
+         # Combine reasoning
+         reasoning_parts = [f"{item.get('agent_name', 'Agent')}: {signal.reasoning}"
+                            for item, signal in signals]
+         combined_reasoning = "\n".join(reasoning_parts)
+
+         # Get most common value basis (weighted by confidence)
+         value_bases = {}
+         for item, signal in signals:
+             value_basis = signal.value_basis
+             weight = signal.confidence * self.agent_weights.get(item.get("agent_id", ""), 0)
+
+             if value_basis in value_bases:
+                 value_bases[value_basis] += weight
+             else:
+                 value_bases[value_basis] = weight
+
+         # Get highest weighted value basis
+         value_basis = max(value_bases.items(), key=lambda x: x[1])[0] if value_bases else ""
+
+         # Create consensus decision
+         consensus_decision = {
+             "ticker": ticker,
+             "action": action,
+             "quantity": median_quantity,
+             "confidence": consensus_confidence,
+             "reasoning": f"Consensus from multiple agents:\n{combined_reasoning}",
+             "attribution": attribution,
+             "value_basis": value_basis,
+         }
+
+         return consensus_decision
+
+     def _conflict_resolution(self, state) -> Dict[str, Any]:
+         """
+         Resolve conflicts between agent signals.
+
+         Args:
+             state: Reasoning state
+
+         Returns:
+             Updated state fields
+         """
+         # Extract ticker signals and consensus decisions
+         ticker_signals = state.context.get("ticker_signals", {})
+         consensus_decisions = state.context.get("consensus_decisions", [])
+         consensus_tickers = state.context.get("consensus_tickers", [])
+
+         # Identify tickers with conflicts
+         conflict_tickers = []
+
+         for ticker, signals in ticker_signals.items():
+             # Skip if ticker already has consensus
+             if ticker in consensus_tickers:
+                 continue
+
+             # Check for conflicts
+             actions = set()
+             for item in signals:
+                 signal = item.get("signal", {})
+                 actions.add(signal.action.lower())
+
+             # Ticker has conflicting actions
+             if len(actions) > 1:
+                 conflict_tickers.append(ticker)
+
+         # Resolve each conflict
+         resolved_conflicts = []
+
+         for ticker in conflict_tickers:
+             signals = ticker_signals.get(ticker, [])
+
+             # Group signals by action
+             action_signals = defaultdict(list)
+
+             for item in signals:
+                 signal = item.get("signal", {})
+                 action = signal.action.lower()
+                 action_signals[action].append((item, signal))
+
+             # Resolve conflict
+             resolution = self._resolve_ticker_conflict(ticker, action_signals)
+
+             if resolution:
+                 # Add to resolved conflicts
+                 resolved_conflicts.append(resolution)
+
+                 # Add to consensus decisions
+                 consensus_decisions.append(resolution)
+
+                 # Record conflict in meta state
+                 conflict_record = {
+                     "id": str(uuid.uuid4()),
+                     "ticker": ticker,
+                     "agents": [item.get("agent_id") for item, _ in sum(action_signals.values(), [])],
+                     "resolution": "resolved",
+                     "action": resolution.get("action"),
+                     "timestamp": datetime.datetime.now().isoformat(),
+                 }
+
+                 self.meta_state["conflict_history"].append(conflict_record)
+
+         # Return updated output
+         return {
+             "context": {
+                 **state.context,
+                 "consensus_decisions": consensus_decisions,
+                 "resolved_conflicts": resolved_conflicts,
+             },
+             "output": {
+                 "consensus_decisions": consensus_decisions,
+             }
+         }
+
+     def _resolve_ticker_conflict(self, ticker: str,
+                                  action_signals: Dict[str, List[Tuple[Dict[str, Any], Any]]]) -> Optional[Dict[str, Any]]:
+         """
+         Resolve a conflict for a specific ticker.
+
+         Args:
+             ticker: Stock ticker
+             action_signals: Dictionary mapping actions to lists of (agent_data, signal) tuples
+
+         Returns:
+             Resolved decision or None if no resolution
+         """
+         # Calculate total weight for each action
+         action_weights = {}
+         action_confidences = {}
+
+         for action, signals in action_signals.items():
+             total_weight = 0.0
+             weighted_confidence = 0.0
+
+             for item, signal in signals:
+                 agent_id = item.get("agent_id", "")
+
+                 # Skip if missing agent ID
+                 if not agent_id:
+                     continue
+
+                 # Get agent weight
+                 agent_weight = self.agent_weights.get(agent_id, 0)
+
+                 # Add to weighted confidence
+                 weighted_confidence += signal.confidence * agent_weight
+                 total_weight += agent_weight
+
+             # Store action weight and confidence
+             if total_weight > 0:
+                 action_weights[action] = total_weight
+                 action_confidences[action] = weighted_confidence / total_weight
+
+         # Check whether any action carries weight
+         if not action_weights:
+             return None
+
+         # Choose the action with the highest weight
+         best_action = max(action_weights.items(), key=lambda x: x[1])[0]
+
+         # Check confidence threshold
+         if action_confidences.get(best_action, 0) < self.consensus_threshold:
+             return None
+
+         # Get signals for the best action
+         best_signals = action_signals.get(best_action, [])
+
+         # Form consensus for the best action
+         return self._form_action_consensus(ticker, best_action, best_signals)
+
+     def _position_sizing(self, state) -> Dict[str, Any]:
+         """
+         Size positions for consensus decisions.
+
+         Args:
+             state: Reasoning state
+
+         Returns:
+             Updated state fields
+         """
+         # Extract consensus decisions
+         consensus_decisions = state.context.get("consensus_decisions", [])
+
+         # Get current portfolio value
+         current_portfolio = state.input.get("portfolio", {})
+         current_value = current_portfolio.get("cash", 0)
+
+         for position in current_portfolio.get("positions", {}).values():
+             current_value += position.get("quantity", 0) * position.get("current_price", 0)
+
+         # Adjust position sizes
+         sized_decisions = []
+
+         for decision in consensus_decisions:
+             ticker = decision.get("ticker", "")
+             action = decision.get("action", "")
+             confidence = decision.get("confidence", 0.5)
+
+             # Skip if missing ticker or action
+             if not ticker or not action:
+                 continue
+
+             # Get current position if it exists
+             current_position = None
+             for position in current_portfolio.get("positions", {}).values():
+                 if position.get("ticker") == ticker:
+                     current_position = position
+                     break
+
+             # Determine target position size
+             target_size = self._calculate_position_size(
+                 ticker=ticker,
+                 action=action,
+                 confidence=confidence,
+                 attribution=decision.get("attribution", {}),
+                 portfolio_value=current_value,
+             )
+
+             # Get current price
+             if current_position:
+                 current_price = current_position.get("current_price", 0)
+             else:
+                 # This would fetch from market in a real implementation
+                 # For now, use placeholder
+                 current_price = 100.0
+
+             if current_price <= 0:
+                 continue
+
+             # Convert target size to quantity
+             target_quantity = int(target_size / current_price)
+
+             # Adjust for existing position
+             if current_position and action == "buy":
+                 # Only buy the difference on top of the existing position
+                 current_quantity = current_position.get("quantity", 0)
+                 target_quantity = max(0, target_quantity - current_quantity)
+
+             # Skip buys that round down to zero shares
+             if target_quantity <= 0 and action == "buy":
+                 continue
+
+             # Update decision quantity
+             decision["quantity"] = target_quantity
+
+             # Add to sized decisions
+             sized_decisions.append(decision)
+
+         # Return updated output
+         return {
+             "context": {
+                 **state.context,
+                 "sized_decisions": sized_decisions,
+             },
+             "output": {
+                 "consensus_decisions": sized_decisions,
+             }
+         }
+
+ def _calculate_position_size(self, ticker: str, action: str, confidence: float,
1536
+ attribution: Dict[str, float], portfolio_value: float) -> float:
1537
+ """
1538
+ Calculate position size based on confidence and attribution.
1539
+
1540
+ Args:
1541
+ ticker: Stock ticker
1542
+ action: Action ("buy" or "sell")
1543
+ confidence: Decision confidence
1544
+ attribution: Attribution to agents
1545
+ portfolio_value: Current portfolio value
1546
+
1547
+ Returns:
1548
+ Target position size in currency units
1549
+ """
1550
+ # Base position size as percentage of portfolio
1551
+ base_size = self.min_position_size + (confidence * (self.max_position_size - self.min_position_size))
1552
+
1553
+ # Adjust for action
1554
+ if action == "sell":
1555
+ # For sell, use existing position size or default
1556
+ for position in self.portfolio.positions.values():
1557
+ if position.ticker == ticker:
1558
+ return position.quantity * position.current_price
1559
+
1560
+ return 0 # No position to sell
1561
+
1562
+ # Calculate attribution-weighted size
1563
+ if attribution:
1564
+ # Calculate agent performance scores
1565
+ performance_scores = {}
1566
+ for agent_id, weight in attribution.items():
1567
+ # Find agent
1568
+ agent = None
1569
+ for a in self.agents:
1570
+ if a.id == agent_id:
1571
+ agent = a
1572
+ break
1573
+
1574
+ if agent:
1575
+ # Use consistency score as proxy for performance
1576
+ performance_score = agent.state.consistency_score
1577
+ performance_scores[agent_id] = performance_score
1578
+
1579
+ # Calculate weighted performance score
1580
+ weighted_score = 0
1581
+ total_weight = 0
1582
+
1583
+ for agent_id, weight in attribution.items():
1584
+ if agent_id in performance_scores:
1585
+ weighted_score += performance_scores[agent_id] * weight
1586
+ total_weight += weight
1587
+
1588
+ # Adjust base size by performance
1589
+ if total_weight > 0:
1590
+ performance_factor = weighted_score / total_weight
1591
+ base_size *= (0.5 + (0.5 * performance_factor))
1592
+
1593
+ # Calculate currency amount
1594
+ target_size = portfolio_value * base_size
1595
+
1596
+ return target_size
1597
+
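The sizing rule above linearly interpolates between the minimum and maximum position fractions by decision confidence, then converts the fraction into a currency target. A minimal standalone sketch of that interpolation (the function name and default bounds are hypothetical stand-ins for the manager's `min_position_size`/`max_position_size` attributes):

```python
def interpolate_position_size(confidence: float, min_size: float = 0.01,
                              max_size: float = 0.10) -> float:
    """Linearly scale the portfolio fraction with decision confidence."""
    confidence = max(0.0, min(1.0, confidence))  # clamp to [0, 1]
    return min_size + confidence * (max_size - min_size)

# A confidence of 0.5 lands exactly halfway between the bounds.
fraction = interpolate_position_size(0.5)
target = 100_000 * fraction  # currency target for a 100k portfolio
```

Clamping confidence keeps the fraction inside the configured bounds even if an upstream agent reports an out-of-range score.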
1598
+ def _meta_reflection(self, state) -> Dict[str, Any]:
1599
+ """
1600
+ Perform meta-reflection on decision process.
1601
+
1602
+ Args:
1603
+ state: Reasoning state
1604
+
1605
+ Returns:
1606
+ Updated state fields
1607
+ """
1608
+ # Extract decisions
1609
+ sized_decisions = state.context.get("sized_decisions", [])
1610
+
1611
+ # Update meta state with arbitration record
1612
+ arbitration_record = {
1613
+ "id": str(uuid.uuid4()),
1614
+ "decisions": sized_decisions,
1615
+ "timestamp": datetime.datetime.now().isoformat(),
1616
+ }
1617
+
1618
+ self.meta_state["arbitration_history"].append(arbitration_record)
1619
+
1620
+ # Update agent weights based on performance
1621
+ self._update_agent_weights()
1622
+
1623
+ # Calculate meta-confidence
1624
+ meta_confidence = sum(decision.get("confidence", 0) for decision in sized_decisions) / len(sized_decisions) if sized_decisions else 0.5
1625
+
1626
+ # Return final output
1627
+ return {
1628
+ "output": {
1629
+ "consensus_decisions": sized_decisions,
1630
+ "meta_confidence": meta_confidence,
1631
+ "agent_weights": self.agent_weights,
1632
+ "timestamp": datetime.datetime.now().isoformat(),
1633
+ },
1634
+ "confidence": meta_confidence,
1635
+ }
1636
+
1637
+ def _update_agent_weights(self) -> None:
1638
+ """Update agent weights based on performance."""
1639
+ # Get agent performance metrics
1640
+ agent_performance = self.get_agent_performance()
1641
+
1642
+ # Update agent weights
1643
+ for agent_id, performance in agent_performance.items():
1644
+ metrics = performance.get("metrics", {})
1645
+
1646
+ # Calculate performance score
1647
+ weighted_return = metrics.get("weighted_return", 0)
1648
+ win_rate = metrics.get("win_rate", 0)
1649
+ confidence = metrics.get("confidence", 0.5)
1650
+
1651
+ # Combine metrics into single score
1652
+ performance_score = (0.5 * weighted_return) + (0.3 * win_rate) + (0.2 * confidence)
1653
+
1654
+ # Update meta state
1655
+ self.meta_state["agent_performance"][agent_id] = {
1656
+ "weighted_return": weighted_return,
1657
+ "win_rate": win_rate,
1658
+ "confidence": confidence,
1659
+ "performance_score": performance_score,
1660
+ "timestamp": datetime.datetime.now().isoformat(),
1661
+ }
1662
+
1663
+ # Calculate new weights
1664
+ new_weights = {}
1665
+ total_score = 0
1666
+
1667
+ for agent_id, performance in self.meta_state["agent_performance"].items():
1668
+ score = performance.get("performance_score", 0)
1669
+
1670
+ # Ensure non-negative score
1671
+ score = max(0.1, score + 0.5) # Add offset to handle negative returns
1672
+
1673
+ new_weights[agent_id] = score
1674
+ total_score += score
1675
+
1676
+ # Normalize weights
1677
+ if total_score > 0:
1678
+ for agent_id in new_weights:
1679
+ new_weights[agent_id] /= total_score
1680
+
1681
+ # Update weights (smooth transition)
1682
+ for agent_id, weight in new_weights.items():
1683
+ current_weight = self.agent_weights.get(agent_id, 0)
1684
+ self.agent_weights[agent_id] = current_weight * 0.7 + weight * 0.3
1685
+
1686
+ # Internal command implementations
1687
+ def _reflect_trace(self, agent=None, depth=2) -> Dict[str, Any]:
1688
+ """
1689
+ Trace portfolio meta-agent reflection.
1690
+
1691
+ Args:
1692
+ agent: Optional agent to reflect on
1693
+ depth: Reflection depth
1694
+
1695
+ Returns:
1696
+ Reflection trace
1697
+ """
1698
+ if agent:
1699
+ # Find agent
1700
+ target_agent = None
1701
+ for a in self.agents:
1702
+ if a.name.lower() == agent.lower() or a.id == agent:
1703
+ target_agent = a
1704
+ break
1705
+
1706
+ if target_agent:
1707
+ # Delegate to agent's reflect trace
1708
+ return target_agent.execute_command("reflect.trace", depth=depth)
1709
+
1710
+ # Reflect on meta-agent
1711
+ # Get recent arbitration history
1712
+ arbitration_history = self.meta_state.get("arbitration_history", [])[-depth:]
1713
+
1714
+ # Get agent weights
1715
+ agent_weights = self.agent_weights.copy()
1716
+
1717
+ # Get conflict history
1718
+ conflict_history = self.meta_state.get("conflict_history", [])[-depth:]
1719
+
1720
+ # Form reflection
1721
+ reflection = {
1722
+ "arbitration_history": arbitration_history,
1723
+ "agent_weights": agent_weights,
1724
+ "conflict_history": conflict_history,
1725
+ "meta_agent_description": "Portfolio meta-agent for recursive arbitration across philosophical agents",
1726
+ "reflection_depth": depth,
1727
+ "timestamp": datetime.datetime.now().isoformat(),
1728
+ }
1729
+
1730
+ return reflection
1731
+
1732
+ def _fork_signal(self, source) -> Dict[str, Any]:
1733
+ """
1734
+ Fork a signal from specified source.
1735
+
1736
+ Args:
1737
+ source: Source for signal fork
1738
+
1739
+ Returns:
1740
+ Fork result
1741
+ """
1742
+ if source == "agents":
1743
+ # Fork from all agents
1744
+ all_signals = []
1745
+
1746
+ for agent in self.agents:
1747
+ # Get agent signals
1748
+ try:
1749
+ agent_signals = agent.execute_command("fork.signal", source="beliefs")
1750
+ if agent_signals and "signals" in agent_signals:
1751
+ signals = agent_signals["signals"]
1752
+
1753
+ # Add agent info
1754
+ for signal in signals:
1755
+ signal["agent"] = agent.name
1756
+ signal["agent_id"] = agent.id
1757
+
1758
+ all_signals.extend(signals)
1759
+ except Exception as e:
1760
+ logging.error(f"Error forking signals from agent {agent.name}: {e}")
1761
+
1762
+ return {
1763
+ "source": "agents",
1764
+ "signals": all_signals,
1765
+ "count": len(all_signals),
1766
+ "timestamp": datetime.datetime.now().isoformat(),
1767
+ }
1768
+
1769
+ elif source == "memory":
1770
+ # Fork from memory shell
1771
+ experiences = self.memory_shell.get_recent_memories(limit=3)
1772
+
1773
+ # Extract decisions from experiences
1774
+ decisions = []
1775
+
1776
+ for exp in experiences:
1777
+ if "meta_result" in exp.get("content", {}):
1778
+ meta_result = exp["content"]["meta_result"]
1779
+ if "output" in meta_result and "consensus_decisions" in meta_result["output"]:
1780
+ exp_decisions = meta_result["output"]["consensus_decisions"]
1781
+ decisions.extend(exp_decisions)
1782
+
1783
+ return {
1784
+ "source": "memory",
1785
+ "decisions": decisions,
1786
+ "count": len(decisions),
1787
+ "timestamp": datetime.datetime.now().isoformat(),
1788
+ }
1789
+
1790
+ else:
1791
+ return {
1792
+ "error": "Invalid source",
1793
+ "source": source,
1794
+ "timestamp": datetime.datetime.now().isoformat(),
1795
+ }
1796
+
1797
+ def _collapse_detect(self, threshold=0.7, reason=None) -> Dict[str, Any]:
1798
+ """
1799
+ Detect reasoning collapse in meta-agent.
1800
+
1801
+ Args:
1802
+ threshold: Collapse detection threshold
1803
+ reason: Optional specific reason to check
1804
+
1805
+ Returns:
1806
+ Collapse detection results
1807
+ """
1808
+ # Check for different collapse conditions
1809
+ collapses = {
1810
+ "conflict_threshold": len(self.meta_state.get("conflict_history", [])) > 10,
1811
+ "agent_weight_skew": max(self.agent_weights.values()) > 0.8 if self.agent_weights else False,
1812
+ "consensus_failure": len(self.meta_state.get("arbitration_history", [])) > 0 and
1813
+ not self.meta_state.get("arbitration_history", [])[-1].get("decisions", []),
1814
+ }
1815
+
1816
+ # If specific reason provided, check only that
1817
+ if reason and reason in collapses:
1818
+ collapse_detected = collapses[reason]
1819
+ collapse_reasons = {reason: collapses[reason]} if collapse_detected else {}
1820
+ else:
1821
+ # Check all collapses
1822
+ collapse_detected = any(collapses.values())
1823
+ collapse_reasons = {k: v for k, v in collapses.items() if v}
1824
+
1825
+ return {
1826
+ "collapse_detected": collapse_detected,
1827
+ "collapse_reasons": collapse_reasons,
1828
+ "threshold": threshold,
1829
+ "timestamp": datetime.datetime.now().isoformat(),
1830
+ }
1831
+
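The collapse check above reduces to evaluating a dict of boolean conditions, optionally narrowing to one named condition, and reporting which ones fired. A standalone sketch of that pattern (condition names are illustrative, not the module's API):

```python
def detect_collapse(conditions, reason=None):
    """Return (collapsed?, reasons that fired) for a set of boolean checks."""
    if reason is not None and reason in conditions:
        # Narrow the check to a single named condition.
        fired = {reason: True} if conditions[reason] else {}
        return bool(conditions[reason]), fired
    # Otherwise, any firing condition signals a collapse.
    fired = {name: flag for name, flag in conditions.items() if flag}
    return bool(fired), fired

collapsed, why = detect_collapse(
    {"conflict_threshold": False, "agent_weight_skew": True}
)
```

Returning the filtered dict alongside the boolean lets callers log *why* a collapse was flagged, matching the `collapse_reasons` field in the result above.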
1832
+ def _attribute_weight(self, justification) -> Dict[str, Any]:
1833
+ """
1834
+ Attribute weight to a justification.
1835
+
1836
+ Args:
1837
+ justification: Justification text
1838
+
1839
+ Returns:
1840
+ Attribution weight results
1841
+ """
1842
+ # Extract key themes
1843
+ themes = []
1844
+ for agent in self.agents:
1845
+ if agent.philosophy.lower() in justification.lower():
1846
+ themes.append(agent.philosophy)
1847
+
1848
+ # Calculate weight for each agent
1849
+ agent_weights = {}
1850
+
1851
+ for agent in self.agents:
1852
+ # Calculate theme alignment
1853
+ theme_alignment = 0
1854
+ for theme in themes:
1855
+ if theme.lower() in agent.philosophy.lower():
1856
+ theme_alignment += 1
1857
+
1858
+ theme_alignment = theme_alignment / len(themes) if themes else 0
1859
+
1860
+ # Get baseline weight
1861
+ baseline_weight = self.agent_weights.get(agent.id, 0)
1862
+
1863
+ # Calculate final weight
1864
+ if theme_alignment > 0:
1865
+ agent_weights[agent.id] = baseline_weight * (1 + theme_alignment)
1866
+ else:
1867
+ agent_weights[agent.id] = baseline_weight * 0.5
1868
+
1869
+ # Normalize weights
1870
+ total_weight = sum(agent_weights.values())
1871
+ if total_weight > 0:
1872
+ for agent_id in agent_weights:
1873
+ agent_weights[agent_id] /= total_weight
1874
+
1875
+ return {
1876
+ "attribution": agent_weights,
1877
+ "themes": themes,
1878
+ "justification": justification,
1879
+ "timestamp": datetime.datetime.now().isoformat(),
1880
+ }
1881
+
1882
+ def _drift_observe(self, vector, bias=0.0) -> Dict[str, Any]:
1883
+ """
1884
+ Observe agent drift patterns.
1885
+
1886
+ Args:
1887
+ vector: Drift vector mapping dimension names to drift values
1888
+ bias: Bias adjustment
1889
+
1890
+ Returns:
1891
+ Drift observation results
1892
+ """
1893
+ # Record in meta state
1894
+ self.meta_state["agent_drift"] = {
1895
+ "vector": vector,
1896
+ "bias": bias,
1897
+ "timestamp": datetime.datetime.now().isoformat(),
1898
+ }
1899
+
1900
+ # Calculate drift magnitude
1901
+ drift_magnitude = sum(abs(v) for v in vector.values()) / len(vector) if vector else 0
1902
+
1903
+ # Apply bias
1904
+ drift_magnitude += bias
1905
+
1906
+ # Check if drift exceeds threshold
1907
+ drift_significant = drift_magnitude > 0.3
1908
+
1909
+ return {
1910
+ "drift_vector": vector,
1911
+ "drift_magnitude": drift_magnitude,
1912
+ "drift_significant": drift_significant,
1913
+ "bias_applied": bias,
1914
+ "timestamp": datetime.datetime.now().isoformat(),
1915
+ }
1916
+
1917
+ def execute_command(self, command: str, **kwargs) -> Dict[str, Any]:
1918
+ """
1919
+ Execute internal command.
1920
+
1921
+ Args:
1922
+ command: Command string
1923
+ **kwargs: Command parameters
1924
+
1925
+ Returns:
1926
+ Command execution results
1927
+ """
1928
+ if command in self._commands:
1929
+ return self._commands[command](**kwargs)
1930
+ else:
1931
+ return {
1932
+ "error": "Unknown command",
1933
+ "command": command,
1934
+ "available_commands": list(self._commands.keys()),
1935
+ "timestamp": datetime.datetime.now().isoformat(),
1936
+ }
1937
+
1938
+ def __repr__(self) -> str:
1939
+ """String representation of portfolio manager."""
1940
+ return f"PortfolioManager(agents={len(self.agents)}, depth={self.arbitration_depth})"
src/utils/diagnostics.py ADDED
@@ -0,0 +1,1749 @@
1
+ """
2
+ Diagnostics - Interpretability Tracing Framework
3
+
4
+ This module implements the diagnostic tracing framework for agent interpretability
5
+ and symbolic recursion visualization throughout the AGI-HEDGE-FUND system.
6
+
7
+ Key capabilities:
8
+ - Signal tracing for attribution flows
9
+ - Reasoning state visualization
10
+ - Consensus graph generation
11
+ - Agent conflict mapping
12
+ - Failure mode detection
13
+ - Shell-based recursive diagnostic patterns
14
+
15
+ Internal Note: The diagnostic framework encodes the symbolic interpretability shells,
16
+ enabling deeper introspection into agent cognition and emergent patterns.
17
+ """
18
+
19
+ import datetime
20
+ import uuid
21
+ import logging
22
+ import os
23
+ import json
24
+ from typing import Dict, List, Any, Optional, Union, Set, Tuple
25
+ import traceback
26
+ from collections import defaultdict
27
+ import numpy as np
28
+ import re
29
+ from enum import Enum
30
+ from pathlib import Path
31
+
32
+
33
+ class TracingMode(Enum):
34
+ """Tracing modes for diagnostic tools."""
35
+ DISABLED = "disabled" # No tracing
36
+ MINIMAL = "minimal" # Basic signal tracing
37
+ DETAILED = "detailed" # Detailed reasoning traces
38
+ COMPREHENSIVE = "comprehensive" # Complete trace with all details
39
+ SYMBOLIC = "symbolic" # Symbolic interpretability traces
40
+
41
+
42
+ class DiagnosticLevel(Enum):
43
+ """Diagnostic levels for trace items."""
44
+ INFO = "info" # Informational trace
45
+ WARNING = "warning" # Warning condition
46
+ ERROR = "error" # Error condition
47
+ COLLAPSE = "collapse" # Reasoning collapse
48
+ RECURSION = "recursion" # Recursive trace boundary
49
+ SYMBOLIC = "symbolic" # Symbolic shell trace
50
+
51
+
52
+ class ShellPattern(Enum):
53
+ """Interpretability shell patterns."""
54
+ NULL_FEATURE = "v03 NULL-FEATURE" # Knowledge gaps as null attribution zones
55
+ CIRCUIT_FRAGMENT = "v07 CIRCUIT-FRAGMENT" # Broken reasoning paths in attribution chains
56
+ META_FAILURE = "v10 META-FAILURE" # Metacognitive attribution failures
57
+ GHOST_FRAME = "v20 GHOST-FRAME" # Residual agent identity markers
58
+ ECHO_ATTRIBUTION = "v53 ECHO-ATTRIBUTION" # Causal chain backpropagation
59
+ ATTRIBUTION_REFLECT = "v60 ATTRIBUTION-REFLECT" # Multi-head contribution analysis
60
+ INVERSE_CHAIN = "v50 INVERSE-CHAIN" # Attribution-output mismatch
61
+ RECURSIVE_FRACTURE = "v12 RECURSIVE-FRACTURE" # Circular attribution loops
62
+ ETHICAL_INVERSION = "v301 ETHICAL-INVERSION" # Value polarity reversals
63
+ RESIDUAL_ALIGNMENT_DRIFT = "v152 RESIDUAL-ALIGNMENT-DRIFT" # Direction of belief evolution
64
+
65
+
66
+ class TracingTools:
67
+ """
68
+ Diagnostic tracing framework for model interpretability.
69
+
70
+ TracingTools provides:
71
+ - Signal tracing for understanding attribution flows
72
+ - Reasoning state visualization for debugging complex logic
73
+ - Consensus graph generation for multi-agent coordination
74
+ - Agent conflict mapping for identifying disagreements
75
+ - Failure mode detection for reliability analysis
76
+ """
77
+
78
+ def __init__(
79
+ self,
80
+ agent_id: str,
81
+ agent_name: str,
82
+ tracing_mode: TracingMode = TracingMode.MINIMAL,
83
+ trace_dir: Optional[str] = None,
84
+ trace_limit: int = 10000,
85
+ ):
86
+ """
87
+ Initialize tracing tools.
88
+
89
+ Args:
90
+ agent_id: ID of agent being traced
91
+ agent_name: Name of agent being traced
92
+ tracing_mode: Tracing mode
93
+ trace_dir: Directory to save traces
94
+ trace_limit: Maximum number of trace items to keep in memory
95
+ """
96
+ self.agent_id = agent_id
97
+ self.agent_name = agent_name
98
+ self.tracing_mode = tracing_mode
99
+ self.trace_dir = trace_dir
100
+ self.trace_limit = trace_limit
101
+
102
+ # Create trace directory if needed
103
+ if trace_dir:
104
+ os.makedirs(trace_dir, exist_ok=True)
105
+
106
+ # Initialize trace storage
107
+ self.traces = []
108
+ self.trace_index = {} # Maps trace_id to index in traces
109
+ self.signal_traces = [] # Signal-specific traces
110
+ self.reasoning_traces = [] # Reasoning-specific traces
111
+ self.collapse_traces = [] # Collapse-specific traces
112
+ self.shell_traces = [] # Shell-specific traces
113
+
114
+ # Shell pattern detection
115
+ self.shell_patterns = {}
116
+ self._initialize_shell_patterns()
117
+
118
+ # Trace statistics
119
+ self.stats = {
120
+ "total_traces": 0,
121
+ "signal_traces": 0,
122
+ "reasoning_traces": 0,
123
+ "collapse_traces": 0,
124
+ "shell_traces": 0,
125
+ "warnings": 0,
126
+ "errors": 0,
127
+ }
128
+
129
+ def _initialize_shell_patterns(self) -> None:
130
+ """Initialize shell pattern detection rules."""
131
+ # NULL_FEATURE pattern (knowledge gaps)
132
+ self.shell_patterns[ShellPattern.NULL_FEATURE] = {
133
+ "pattern": r"knowledge.*boundary|knowledge.*gap|unknown|uncertain",
134
+ "confidence_threshold": 0.3,
135
+ "belief_gap_threshold": 0.7,
136
+ }
137
+
138
+ # CIRCUIT_FRAGMENT pattern (broken reasoning)
139
+ self.shell_patterns[ShellPattern.CIRCUIT_FRAGMENT] = {
140
+ "pattern": r"broken.*path|attribution.*break|logical.*gap|incomplete.*reasoning",
141
+ "step_break_threshold": 0.5,
142
+ }
143
+
144
+ # META_FAILURE pattern (metacognitive failure)
145
+ self.shell_patterns[ShellPattern.META_FAILURE] = {
146
+ "pattern": r"meta.*failure|recursive.*loop|self.*reference|recursive.*error",
147
+ "recursion_depth_threshold": 3,
148
+ }
149
+
150
+ # GHOST_FRAME pattern (residual agent identity)
151
+ self.shell_patterns[ShellPattern.GHOST_FRAME] = {
152
+ "pattern": r"agent.*identity|residual.*frame|persistent.*identity|agent.*trace",
153
+ "identity_threshold": 0.6,
154
+ }
155
+
156
+ # ECHO_ATTRIBUTION pattern (causal backpropagation)
157
+ self.shell_patterns[ShellPattern.ECHO_ATTRIBUTION] = {
158
+ "pattern": r"causal.*chain|attribution.*path|decision.*trace|backpropagation",
159
+ "path_length_threshold": 3,
160
+ }
161
+
162
+ # ATTRIBUTION_REFLECT pattern (multi-head contribution)
163
+ self.shell_patterns[ShellPattern.ATTRIBUTION_REFLECT] = {
164
+ "pattern": r"multi.*head|contribution.*analysis|attention.*weights|attribution.*weighting",
165
+ "head_count_threshold": 2,
166
+ }
167
+
168
+ # INVERSE_CHAIN pattern (attribution-output mismatch)
169
+ self.shell_patterns[ShellPattern.INVERSE_CHAIN] = {
170
+ "pattern": r"mismatch|output.*attribution|attribution.*mismatch|inconsistent.*output",
171
+ "mismatch_threshold": 0.5,
172
+ }
173
+
174
+ # RECURSIVE_FRACTURE pattern (circular attribution)
175
+ self.shell_patterns[ShellPattern.RECURSIVE_FRACTURE] = {
176
+ "pattern": r"circular.*reasoning|loop.*detection|recursive.*fracture|circular.*attribution",
177
+ "loop_length_threshold": 2,
178
+ }
179
+
180
+ # ETHICAL_INVERSION pattern (value polarity reversal)
181
+ self.shell_patterns[ShellPattern.ETHICAL_INVERSION] = {
182
+ "pattern": r"value.*inversion|ethical.*reversal|principle.*conflict|value.*contradiction",
183
+ "polarity_threshold": 0.7,
184
+ }
185
+
186
+ # RESIDUAL_ALIGNMENT_DRIFT pattern (belief evolution)
187
+ self.shell_patterns[ShellPattern.RESIDUAL_ALIGNMENT_DRIFT] = {
188
+ "pattern": r"belief.*drift|alignment.*shift|value.*drift|gradual.*change",
189
+ "drift_magnitude_threshold": 0.3,
190
+ }
191
+
192
+ def record_trace(self, trace_type: str, content: Dict[str, Any],
193
+ level: DiagnosticLevel = DiagnosticLevel.INFO) -> str:
194
+ """
195
+ Record a general trace item.
196
+
197
+ Args:
198
+ trace_type: Type of trace
199
+ content: Trace content
200
+ level: Diagnostic level
201
+
202
+ Returns:
203
+ Trace ID
204
+ """
205
+ # Skip if tracing is disabled
206
+ if self.tracing_mode == TracingMode.DISABLED:
207
+ return ""
208
+
209
+ # Create trace item
210
+ trace_id = str(uuid.uuid4())
211
+ timestamp = datetime.datetime.now()
212
+
213
+ trace_item = {
214
+ "trace_id": trace_id,
215
+ "agent_id": self.agent_id,
216
+ "agent_name": self.agent_name,
217
+ "trace_type": trace_type,
218
+ "level": level.value,
219
+ "content": content,
220
+ "timestamp": timestamp.isoformat(),
221
+ }
222
+
223
+ # Detect shell patterns
224
+ shell_patterns = self._detect_shell_patterns(trace_type, content)
225
+ if shell_patterns:
226
+ trace_item["shell_patterns"] = shell_patterns
227
+ self.shell_traces.append(trace_id)
228
+ self.stats["shell_traces"] += 1
229
+
230
+ # Add to traces
231
+ self.traces.append(trace_item)
232
+ self.trace_index[trace_id] = len(self.traces) - 1
233
+
234
+ # Add to specific trace lists
235
+ if trace_type == "signal":
236
+ self.signal_traces.append(trace_id)
237
+ self.stats["signal_traces"] += 1
238
+ elif trace_type == "reasoning":
239
+ self.reasoning_traces.append(trace_id)
240
+ self.stats["reasoning_traces"] += 1
241
+ elif trace_type == "collapse":
242
+ self.collapse_traces.append(trace_id)
243
+ self.stats["collapse_traces"] += 1
244
+
245
+ # Update stats
246
+ self.stats["total_traces"] += 1
247
+ if level == DiagnosticLevel.WARNING:
248
+ self.stats["warnings"] += 1
249
+ elif level == DiagnosticLevel.ERROR:
250
+ self.stats["errors"] += 1
251
+
252
+ # Save to file if trace directory is set
253
+ if self.trace_dir:
254
+ self._save_trace_to_file(trace_item)
255
+
256
+ # Enforce trace limit
257
+ if len(self.traces) > self.trace_limit:
258
+ # Remove oldest trace
259
+ oldest_trace = self.traces.pop(0)
260
+ del self.trace_index[oldest_trace["trace_id"]]
261
+
262
+ # Update indices
263
+ self.trace_index = {trace["trace_id"]: i for i, trace in enumerate(self.traces)}
265
+
266
+ return trace_id
267
+
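The eviction step above pops the oldest trace and then rebuilds the entire id-to-index map on every overflow. A `collections.deque` with `maxlen` gives the same bounded-buffer behavior with automatic eviction; the trade-off is that positional indices become unstable, so lookup falls back to a scan. A sketch of that alternative (not the module's implementation; class and method names are hypothetical):

```python
from collections import deque

class BoundedTraceBuffer:
    """Keep only the most recent `limit` traces, discarding the oldest."""

    def __init__(self, limit: int = 10_000):
        self.traces = deque(maxlen=limit)

    def append(self, trace: dict) -> None:
        self.traces.append(trace)  # oldest entry is dropped automatically

    def find(self, trace_id: str):
        # Linear scan replaces the id->index map, which eviction would invalidate.
        return next((t for t in self.traces if t["trace_id"] == trace_id), None)

buf = BoundedTraceBuffer(limit=2)
for i in range(3):
    buf.append({"trace_id": str(i)})
# buffer now holds traces "1" and "2"; "0" was evicted
```

For trace volumes near the 10,000-item default, the O(n) rebuild on every eviction is the more expensive of the two approaches.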
268
+ def record_signal(self, signal: Any) -> str:
269
+ """
270
+ Record a signal trace.
271
+
272
+ Args:
273
+ signal: Signal to record
274
+
275
+ Returns:
276
+ Trace ID
277
+ """
278
+ # Convert signal to dictionary if needed
279
+ if hasattr(signal, "dict"):
280
+ signal_dict = signal.dict()
281
+ elif isinstance(signal, dict):
282
+ signal_dict = signal
283
+ else:
284
+ signal_dict = {"signal": str(signal)}
285
+
286
+ # Add timestamp if missing
287
+ if "timestamp" not in signal_dict:
288
+ signal_dict["timestamp"] = datetime.datetime.now().isoformat()
289
+
290
+ # Record trace
291
+ return self.record_trace("signal", signal_dict)
292
+
293
+ def record_reasoning(self, reasoning_state: Dict[str, Any],
294
+ level: DiagnosticLevel = DiagnosticLevel.INFO) -> str:
295
+ """
296
+ Record a reasoning trace.
297
+
298
+ Args:
299
+ reasoning_state: Reasoning state
300
+ level: Diagnostic level
301
+
302
+ Returns:
303
+ Trace ID
304
+ """
305
+ # Record trace
306
+ return self.record_trace("reasoning", reasoning_state, level)
307
+
308
+ def record_collapse(self, collapse_type: str, collapse_reason: str,
309
+ details: Dict[str, Any]) -> str:
310
+ """
311
+ Record a collapse trace.
312
+
313
+ Args:
314
+ collapse_type: Type of collapse
315
+ collapse_reason: Reason for collapse
316
+ details: Collapse details
317
+
318
+ Returns:
319
+ Trace ID
320
+ """
321
+ # Create collapse content
322
+ collapse_content = {
323
+ "collapse_type": collapse_type,
324
+ "collapse_reason": collapse_reason,
325
+ "details": details,
326
+ "timestamp": datetime.datetime.now().isoformat(),
327
+ }
328
+
329
+ # Record trace
330
+ return self.record_trace("collapse", collapse_content, DiagnosticLevel.COLLAPSE)
331
+
332
+ def record_shell_trace(self, shell_pattern: ShellPattern, content: Dict[str, Any]) -> str:
333
+ """
334
+ Record a shell pattern trace.
335
+
336
+ Args:
337
+ shell_pattern: Shell pattern
338
+ content: Trace content
339
+
340
+ Returns:
341
+ Trace ID
342
+ """
343
+ # Create shell content
344
+ shell_content = {
345
+ "shell_pattern": shell_pattern.value,
346
+ "content": content,
347
+ "timestamp": datetime.datetime.now().isoformat(),
348
+ }
349
+
350
+ # Record trace
351
+ return self.record_trace("shell", shell_content, DiagnosticLevel.SYMBOLIC)
352
+
353
+ def get_trace(self, trace_id: str) -> Optional[Dict[str, Any]]:
354
+ """
355
+ Get trace by ID.
356
+
357
+ Args:
358
+ trace_id: Trace ID
359
+
360
+ Returns:
361
+ Trace item or None if not found
362
+ """
363
+ if trace_id not in self.trace_index:
364
+ return None
365
+
366
+ return self.traces[self.trace_index[trace_id]]
367
+
368
+ def get_traces_by_type(self, trace_type: str, limit: int = 10) -> List[Dict[str, Any]]:
369
+ """
370
+ Get traces by type.
371
+
372
+ Args:
373
+ trace_type: Trace type
374
+ limit: Maximum number of traces to return
375
+
376
+ Returns:
377
+ List of trace items
378
+ """
379
+ if trace_type == "signal":
380
+ trace_ids = self.signal_traces[-limit:]
381
+ elif trace_type == "reasoning":
382
+ trace_ids = self.reasoning_traces[-limit:]
383
+ elif trace_type == "collapse":
384
+ trace_ids = self.collapse_traces[-limit:]
385
+ elif trace_type == "shell":
386
+ trace_ids = self.shell_traces[-limit:]
387
+ else:
388
+ # Get all traces of specified type
389
+ trace_ids = [trace["trace_id"] for trace in self.traces
390
+ if trace["trace_type"] == trace_type][-limit:]
391
+
392
+ # Get trace items
393
+ return [self.get_trace(trace_id) for trace_id in trace_ids if trace_id in self.trace_index]
394
+
395
+ def get_traces_by_level(self, level: DiagnosticLevel, limit: int = 10) -> List[Dict[str, Any]]:
396
+ """
397
+ Get traces by diagnostic level.
398
+
399
+ Args:
400
+ level: Diagnostic level
401
+ limit: Maximum number of traces to return
402
+
403
+ Returns:
404
+ List of trace items
405
+ """
406
+ # Get traces with specified level
407
+ trace_ids = [trace["trace_id"] for trace in self.traces
408
+ if trace.get("level") == level.value][-limit:]
409
+
410
+ # Get trace items
411
+ return [self.get_trace(trace_id) for trace_id in trace_ids if trace_id in self.trace_index]
412
+
413
+ def get_shell_traces(self, shell_pattern: Optional[ShellPattern] = None,
414
+ limit: int = 10) -> List[Dict[str, Any]]:
415
+ """
416
+ Get shell pattern traces.
417
+
418
+ Args:
419
+ shell_pattern: Optional specific shell pattern
420
+ limit: Maximum number of traces to return
421
+
422
+ Returns:
423
+ List of trace items
424
+ """
425
+ if shell_pattern:
426
+ # Get traces with specified shell pattern
427
+ trace_ids = []
428
+ for trace in self.traces:
429
+ if "shell_patterns" in trace and shell_pattern.value in trace["shell_patterns"]:
430
+ trace_ids.append(trace["trace_id"])
431
+
432
+ # Take last 'limit' traces
433
+ trace_ids = trace_ids[-limit:]
434
+ else:
435
+ # Get all shell traces
436
+ trace_ids = self.shell_traces[-limit:]
437
+
438
+ # Get trace items
439
+ return [self.get_trace(trace_id) for trace_id in trace_ids if trace_id in self.trace_index]
440
+
441
+ def get_trace_stats(self) -> Dict[str, Any]:
442
+ """
443
+ Get trace statistics.
444
+
445
+ Returns:
446
+ Trace statistics
447
+ """
448
+ # Add shell pattern stats
449
+ shell_pattern_stats = {}
450
+ for shell_pattern in ShellPattern:
451
+ count = sum(1 for trace in self.traces
452
+ if "shell_patterns" in trace and shell_pattern.value in trace["shell_patterns"])
453
+
454
+ shell_pattern_stats[shell_pattern.value] = count
455
+
456
+ # Add to stats
457
+ stats = {
458
+ **self.stats,
459
+ "shell_patterns": shell_pattern_stats,
460
+ }
461
+
462
+ return stats
463
+
464
+ def clear_traces(self) -> int:
465
+ """
466
+ Clear all traces.
467
+
468
+ Returns:
469
+ Number of traces cleared
470
+ """
471
+ trace_count = len(self.traces)
472
+
473
+ # Clear traces
474
+ self.traces = []
475
+ self.trace_index = {}
476
+ self.signal_traces = []
477
+ self.reasoning_traces = []
478
+ self.collapse_traces = []
479
+ self.shell_traces = []
480
+
481
+ # Reset stats
482
+ self.stats = {
483
+ "total_traces": 0,
484
+ "signal_traces": 0,
485
+ "reasoning_traces": 0,
486
+ "collapse_traces": 0,
487
+ "shell_traces": 0,
488
+ "warnings": 0,
489
+ "errors": 0,
490
+ }
491
+
492
+ return trace_count
493
+
494
+ def _detect_shell_patterns(self, trace_type: str, content: Dict[str, Any]) -> List[str]:
495
+ """
496
+ Detect shell patterns in trace content.
497
+
498
+ Args:
499
+ trace_type: Trace type
500
+ content: Trace content
501
+
502
+ Returns:
503
+ List of detected shell patterns
504
+ """
505
+ detected_patterns = []
506
+
507
+ # Convert content to string for pattern matching
508
+ content_str = json.dumps(content, ensure_ascii=False).lower()
509
+
510
+ # Check each shell pattern
511
+ for shell_pattern, pattern_rules in self.shell_patterns.items():
512
+ pattern = pattern_rules["pattern"]
513
+
514
+ # Check if pattern matches
515
+ if re.search(pattern, content_str, re.IGNORECASE):
516
+ # Add additional checks based on pattern type
517
+ if self._validate_pattern_rules(shell_pattern, pattern_rules, content):
518
+ detected_patterns.append(shell_pattern.value)
519
+
520
+ return detected_patterns
521
+
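The detection step above serializes the trace content to lowercase JSON and runs each pattern's regex over the resulting string. A minimal standalone sketch of that idea (the pattern names and regexes here are illustrative placeholders, not the module's real registry):

```python
import json
import re

# Hypothetical pattern table mirroring the structure of self.shell_patterns;
# these regexes are examples only.
PATTERNS = {
    "NULL_FEATURE": r"unknown|cannot determine",
    "META_FAILURE": r"meta-reflection|recursion",
}

def detect_patterns(content: dict) -> list:
    """Serialize content to lowercase JSON, then regex-match each pattern."""
    content_str = json.dumps(content, ensure_ascii=False).lower()
    return [name for name, pattern in PATTERNS.items()
            if re.search(pattern, content_str, re.IGNORECASE)]
```

Serializing the whole dict means patterns can match keys as well as values, which is the same trade-off the real `_detect_shell_patterns` makes.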
522
+ def _validate_pattern_rules(self, shell_pattern: ShellPattern,
523
+ pattern_rules: Dict[str, Any],
524
+ content: Dict[str, Any]) -> bool:
525
+ """
526
+ Validate additional pattern rules.
527
+
528
+ Args:
529
+ shell_pattern: Shell pattern
530
+ pattern_rules: Pattern rules
531
+ content: Trace content
532
+
533
+ Returns:
534
+ True if pattern rules are validated
535
+ """
536
+ # Pattern-specific validation
537
+ if shell_pattern == ShellPattern.NULL_FEATURE:
538
+ # Check confidence threshold
539
+ if "confidence" in content and content["confidence"] < pattern_rules["confidence_threshold"]:
540
+ return True
541
+
542
+ # Check belief gap
543
+ if "belief_state" in content:
544
+ belief_values = list(content["belief_state"].values())
545
+ if belief_values and max(belief_values) - min(belief_values) > pattern_rules["belief_gap_threshold"]:
546
+ return True
547
+
548
+ elif shell_pattern == ShellPattern.CIRCUIT_FRAGMENT:
549
+ # Check for broken steps
550
+ if "steps" in content:
551
+ steps = content["steps"]
552
+ for i in range(len(steps) - 1):
553
+ if steps[i].get("completed", True) and not steps[i+1].get("completed", True):
554
+ return True
555
+
556
+ # Check for attribution breaks
557
+ if "attribution" in content and content["attribution"].get("attribution_breaks", False):
558
+ return True
559
+
560
+ elif shell_pattern == ShellPattern.META_FAILURE:
561
+ # Check recursion depth
562
+ if "depth" in content and content["depth"] >= pattern_rules["recursion_depth_threshold"]:
563
+ return True
564
+
565
+ # Check for meta-level errors
566
+ if "errors" in content and any("meta" in error.get("message", "").lower() for error in content["errors"]):
567
+ return True
568
+
569
+ elif shell_pattern == ShellPattern.RECURSIVE_FRACTURE:
570
+ # Check for circular reasoning
571
+ if "steps" in content:
572
+ steps = content["steps"]
573
+ step_names = [step.get("name", "") for step in steps]
574
+
575
+ # Look for repeating patterns
576
+ for pattern_len in range(2, len(step_names) // 2 + 1):
577
+ for i in range(len(step_names) - pattern_len * 2 + 1):
578
+ pattern = step_names[i:i+pattern_len]
579
+ next_seq = step_names[i+pattern_len:i+pattern_len*2]
580
+
581
+ if pattern == next_seq:
582
+ return True
583
+
584
+ elif shell_pattern == ShellPattern.RESIDUAL_ALIGNMENT_DRIFT:
585
+ # Check drift magnitude
586
+ if "drift_vector" in content:
587
+ drift_values = list(content["drift_vector"].values())
588
+ if drift_values and any(abs(val) > pattern_rules["drift_magnitude_threshold"] for val in drift_values):
589
+ return True
590
+
591
+ # Check for explicit drift detection
592
+ if "drift_detected" in content and content["drift_detected"]:
593
+ return True
594
+
 595
+        # Fall-through: patterns without additional rules (and branches whose
+        # specific checks did not fire) validate on the regex match alone
 596
+        return True
597
+
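The RECURSIVE_FRACTURE branch looks for an immediately repeating subsequence of step names, which is how circular reasoning shows up in a linear trace. That sliding-window check can be isolated as a small helper (a sketch, not the module's API):

```python
def has_repeating_sequence(step_names: list) -> bool:
    """Return True if some run of length >= 2 is immediately repeated."""
    n = len(step_names)
    for pattern_len in range(2, n // 2 + 1):
        for i in range(n - pattern_len * 2 + 1):
            # Compare a window with the window that directly follows it
            if step_names[i:i + pattern_len] == step_names[i + pattern_len:i + pattern_len * 2]:
                return True
    return False
```

This is O(n^2) in the number of steps, which is fine for the short reasoning traces the tracer records.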
598
+ def _save_trace_to_file(self, trace_item: Dict[str, Any]) -> None:
599
+ """
600
+ Save trace to file.
601
+
602
+ Args:
603
+ trace_item: Trace item
604
+ """
605
+ if not self.trace_dir:
606
+ return
607
+
608
+ try:
609
+ # Create filename based on trace ID and type
610
+ trace_id = trace_item["trace_id"]
611
+ trace_type = trace_item["trace_type"]
612
+ filename = f"{trace_type}_{trace_id}.json"
613
+ filepath = os.path.join(self.trace_dir, filename)
614
+
615
+ # Save trace to file
616
+            with open(filepath, "w", encoding="utf-8") as f:
617
+ json.dump(trace_item, f, indent=2)
618
+ except Exception as e:
619
+ logging.error(f"Error saving trace to file: {e}")
620
+ logging.error(traceback.format_exc())
621
+
622
+ def generate_trace_visualization(self, trace_id: str) -> Dict[str, Any]:
623
+ """
624
+ Generate visualization data for a trace.
625
+
626
+ Args:
627
+ trace_id: Trace ID
628
+
629
+ Returns:
630
+ Visualization data
631
+ """
632
+ trace = self.get_trace(trace_id)
633
+ if not trace:
634
+ return {"error": "Trace not found"}
635
+
636
+ trace_type = trace["trace_type"]
637
+
638
+ if trace_type == "signal":
639
+ return self._generate_signal_visualization(trace)
640
+ elif trace_type == "reasoning":
641
+ return self._generate_reasoning_visualization(trace)
642
+ elif trace_type == "collapse":
643
+ return self._generate_collapse_visualization(trace)
644
+ elif trace_type == "shell":
645
+ return self._generate_shell_visualization(trace)
646
+ else:
647
+ return {
648
+ "trace_id": trace_id,
649
+ "agent_name": trace["agent_name"],
650
+ "trace_type": trace_type,
651
+ "timestamp": trace["timestamp"],
652
+ "content": trace["content"],
653
+ }
654
+
655
+ def _generate_signal_visualization(self, trace: Dict[str, Any]) -> Dict[str, Any]:
656
+ """
657
+ Generate visualization data for a signal trace.
658
+
659
+ Args:
660
+ trace: Signal trace
661
+
662
+ Returns:
663
+ Visualization data
664
+ """
665
+ content = trace["content"]
666
+
667
+ # Create signal visualization
668
+ visualization = {
669
+ "trace_id": trace["trace_id"],
670
+ "agent_name": trace["agent_name"],
671
+ "trace_type": "signal",
672
+ "timestamp": trace["timestamp"],
673
+ "signal_data": {
674
+ "ticker": content.get("ticker", ""),
675
+ "action": content.get("action", ""),
676
+ "confidence": content.get("confidence", 0),
677
+ },
678
+ }
679
+
680
+ # Add attribution if available
681
+ if "attribution_trace" in content:
682
+ visualization["attribution"] = content["attribution_trace"]
683
+
684
+ # Add shell patterns if available
685
+ if "shell_patterns" in trace:
686
+ visualization["shell_patterns"] = trace["shell_patterns"]
687
+
688
+ return visualization
689
+
690
+ def _generate_reasoning_visualization(self, trace: Dict[str, Any]) -> Dict[str, Any]:
691
+ """
692
+ Generate visualization data for a reasoning trace.
693
+
694
+ Args:
695
+ trace: Reasoning trace
696
+
697
+ Returns:
698
+ Visualization data
699
+ """
700
+ content = trace["content"]
701
+
702
+ # Create nodes and links
703
+ nodes = []
704
+ links = []
705
+
706
+ # Add reasoning steps as nodes
707
+ if "steps" in content:
708
+ for i, step in enumerate(content["steps"]):
709
+ node_id = f"step_{i}"
710
+ nodes.append({
711
+ "id": node_id,
712
+ "label": step.get("name", f"Step {i}"),
713
+ "type": "step",
714
+ "completed": step.get("completed", True),
715
+ "error": "error" in step,
716
+ })
717
+
718
+ # Add link to previous step
719
+ if i > 0:
720
+ links.append({
721
+ "source": f"step_{i-1}",
722
+ "target": node_id,
723
+ "type": "flow",
724
+ })
725
+
726
+ # Create reasoning visualization
727
+ visualization = {
728
+ "trace_id": trace["trace_id"],
729
+ "agent_name": trace["agent_name"],
730
+ "trace_type": "reasoning",
731
+ "timestamp": trace["timestamp"],
732
+ "reasoning_data": {
733
+ "depth": content.get("depth", 0),
734
+ "confidence": content.get("confidence", 0),
735
+ "collapse_detected": content.get("collapse_detected", False),
736
+ },
737
+ "nodes": nodes,
738
+ "links": links,
739
+ }
740
+
741
+ # Add shell patterns if available
742
+ if "shell_patterns" in trace:
743
+ visualization["shell_patterns"] = trace["shell_patterns"]
744
+
745
+ return visualization
746
+
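The node/link construction above turns an ordered list of reasoning steps into a graph that a front end can render. A minimal sketch of the same transformation, stripped of the trace metadata (field names follow the method above, but this helper itself is illustrative):

```python
def build_step_graph(steps: list) -> tuple:
    """Turn an ordered list of reasoning steps into nodes plus flow links."""
    nodes, links = [], []
    for i, step in enumerate(steps):
        node_id = f"step_{i}"
        nodes.append({
            "id": node_id,
            "label": step.get("name", f"Step {i}"),
            "completed": step.get("completed", True),
        })
        # Each step links back to its predecessor, forming a simple chain
        if i > 0:
            links.append({"source": f"step_{i-1}", "target": node_id})
    return nodes, links
```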
747
+ def _generate_collapse_visualization(self, trace: Dict[str, Any]) -> Dict[str, Any]:
748
+ """
749
+ Generate visualization data for a collapse trace.
750
+
751
+ Args:
752
+ trace: Collapse trace
753
+
754
+ Returns:
755
+ Visualization data
756
+ """
757
+ content = trace["content"]
758
+
759
+ # Create collapse visualization
760
+ visualization = {
761
+ "trace_id": trace["trace_id"],
762
+ "agent_name": trace["agent_name"],
763
+ "trace_type": "collapse",
764
+ "timestamp": trace["timestamp"],
765
+ "collapse_data": {
766
+ "collapse_type": content.get("collapse_type", ""),
767
+ "collapse_reason": content.get("collapse_reason", ""),
768
+ "details": content.get("details", {}),
769
+ },
770
+ }
771
+
772
+ # Add shell patterns if available
773
+ if "shell_patterns" in trace:
774
+ visualization["shell_patterns"] = trace["shell_patterns"]
775
+
776
+ return visualization
777
+
778
+ def _generate_shell_visualization(self, trace: Dict[str, Any]) -> Dict[str, Any]:
779
+ """
780
+ Generate visualization data for a shell trace.
781
+
782
+ Args:
783
+ trace: Shell trace
784
+
785
+ Returns:
786
+ Visualization data
787
+ """
788
+ content = trace["content"]
789
+
790
+ # Create shell visualization
791
+ visualization = {
792
+ "trace_id": trace["trace_id"],
793
+ "agent_name": trace["agent_name"],
794
+ "trace_type": "shell",
795
+ "timestamp": trace["timestamp"],
796
+ "shell_data": {
797
+ "shell_pattern": content.get("shell_pattern", ""),
798
+ "content": content.get("content", {}),
799
+ },
800
+ }
801
+
802
+ return visualization
803
+
804
+ def generate_attribution_report(self, signals: List[Dict[str, Any]]) -> Dict[str, Any]:
805
+ """
806
+ Generate attribution report for signals.
807
+
808
+ Args:
809
+ signals: List of signals
810
+
811
+ Returns:
812
+ Attribution report
813
+ """
814
+ # Initialize report
815
+ report = {
816
+ "agent_name": self.agent_name,
817
+ "timestamp": datetime.datetime.now().isoformat(),
818
+ "signals": len(signals),
819
+ "attribution_summary": {},
820
+ "confidence_summary": {},
821
+ "top_factors": [],
822
+ "shell_patterns": [],
823
+ }
824
+
825
+ # Skip if no signals
826
+ if not signals:
827
+ return report
828
+
829
+ # Collect attribution data
830
+ attribution_data = defaultdict(float)
831
+ confidence_data = []
832
+
833
+ for signal in signals:
834
+ # Add confidence
835
+ confidence = signal.get("confidence", 0)
836
+ confidence_data.append(confidence)
837
+
838
+ # Add attribution
839
+ attribution = signal.get("attribution_trace", {})
840
+ for source, weight in attribution.items():
841
+ attribution_data[source] += weight
842
+
843
+ # Calculate attribution summary
844
+ total_attribution = sum(attribution_data.values())
845
+ if total_attribution > 0:
846
+ for source, weight in attribution_data.items():
847
+ report["attribution_summary"][source] = weight / total_attribution
848
+
849
+ # Calculate confidence summary
850
+ report["confidence_summary"] = {
851
+ "mean": np.mean(confidence_data) if confidence_data else 0,
852
+ "median": np.median(confidence_data) if confidence_data else 0,
853
+ "min": min(confidence_data) if confidence_data else 0,
854
+ "max": max(confidence_data) if confidence_data else 0,
855
+ }
856
+
857
+ # Calculate top factors
858
+ top_factors = sorted(attribution_data.items(), key=lambda x: x[1], reverse=True)[:5]
859
+ report["top_factors"] = [{"source": source, "weight": weight} for source, weight in top_factors]
860
+
861
+ # Collect shell patterns
862
+ shell_pattern_counts = defaultdict(int)
863
+
864
+ for signal in signals:
865
+ signal_id = signal.get("signal_id", "")
866
+ if signal_id:
867
+ # Check if we have a trace for this signal
868
+ for trace in self.traces:
869
+ if trace["trace_type"] == "signal" and trace["content"].get("signal_id") == signal_id:
870
+ # Add shell patterns
871
+ if "shell_patterns" in trace:
872
+ for pattern in trace["shell_patterns"]:
873
+ shell_pattern_counts[pattern] += 1
874
+
875
+ # Add shell patterns to report
876
+ for pattern, count in shell_pattern_counts.items():
877
+ report["shell_patterns"].append({
878
+ "pattern": pattern,
879
+ "count": count,
880
+ "frequency": count / len(signals),
881
+ })
882
+
883
+ return report
884
+
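The attribution summary above accumulates per-source weights across all signals and then normalizes them so they sum to 1. That core aggregation can be sketched on its own (signal dicts here are illustrative):

```python
from collections import defaultdict

def summarize_attribution(signals: list) -> dict:
    """Sum per-source attribution weights across signals, normalized to 1."""
    totals = defaultdict(float)
    for signal in signals:
        for source, weight in signal.get("attribution_trace", {}).items():
            totals[source] += weight
    grand_total = sum(totals.values())
    if grand_total == 0:
        return {}
    return {source: weight / grand_total for source, weight in totals.items()}
```

The guard on `grand_total` mirrors the `if total_attribution > 0` check in the report method, avoiding a divide-by-zero when no signal carries attribution.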
885
+
886
+ class ShellDiagnostics:
887
+ """
888
+ Shell-based diagnostic tools for deeper interpretability.
889
+
890
+    The ShellDiagnostics class provides:
891
+ - Shell pattern detection and analysis
892
+ - Failure mode simulation and detection
893
+ - Attribution shell tracing
894
+ - Recursive shell embedding
895
+ """
896
+
897
+ def __init__(
898
+ self,
899
+ agent_id: str,
900
+ agent_name: str,
901
+ tracing_tools: TracingTools,
902
+ ):
903
+ """
904
+ Initialize shell diagnostics.
905
+
906
+ Args:
907
+ agent_id: Agent ID
908
+ agent_name: Agent name
909
+ tracing_tools: Tracing tools instance
910
+ """
911
+ self.agent_id = agent_id
912
+ self.agent_name = agent_name
913
+ self.tracer = tracing_tools
914
+
915
+ # Shell state
916
+ self.active_shells = {}
917
+ self.shell_history = []
918
+
919
+ # Initialize shell registry
920
+ self.shell_registry = {}
921
+ for shell_pattern in ShellPattern:
922
+ self.shell_registry[shell_pattern.value] = {
923
+ "pattern": shell_pattern,
924
+ "active": False,
925
+ "activation_count": 0,
926
+ "last_activation": None,
927
+ }
928
+
929
+ def activate_shell(self, shell_pattern: ShellPattern, context: Dict[str, Any]) -> str:
930
+ """
931
+ Activate a shell pattern.
932
+
933
+ Args:
934
+ shell_pattern: Shell pattern to activate
935
+ context: Activation context
936
+
937
+ Returns:
938
+ Shell instance ID
939
+ """
940
+ shell_id = str(uuid.uuid4())
941
+ timestamp = datetime.datetime.now()
942
+
943
+ # Create shell instance
944
+ shell_instance = {
945
+ "shell_id": shell_id,
946
+ "pattern": shell_pattern.value,
947
+ "context": context,
948
+ "active": True,
949
+ "activation_time": timestamp.isoformat(),
950
+ "deactivation_time": None,
951
+ }
952
+
953
+ # Update shell registry
954
+ self.shell_registry[shell_pattern.value]["active"] = True
955
+ self.shell_registry[shell_pattern.value]["activation_count"] += 1
956
+ self.shell_registry[shell_pattern.value]["last_activation"] = timestamp.isoformat()
957
+
958
+ # Add to active shells
959
+ self.active_shells[shell_id] = shell_instance
960
+
961
+ # Record trace
962
+ self.tracer.record_shell_trace(shell_pattern, {
963
+ "shell_id": shell_id,
964
+ "activation_context": context,
965
+ "timestamp": timestamp.isoformat(),
966
+ })
967
+
968
+ return shell_id
969
+
970
+ def deactivate_shell(self, shell_id: str, results: Dict[str, Any]) -> bool:
971
+ """
972
+ Deactivate a shell pattern.
973
+
974
+ Args:
975
+ shell_id: Shell instance ID
976
+ results: Shell results
977
+
978
+ Returns:
979
+ True if shell was deactivated, False if not found
980
+ """
981
+ if shell_id not in self.active_shells:
982
+ return False
983
+
984
+ # Get shell instance
985
+ shell_instance = self.active_shells[shell_id]
986
+ timestamp = datetime.datetime.now()
987
+
988
+ # Update shell instance
989
+ shell_instance["active"] = False
990
+ shell_instance["deactivation_time"] = timestamp.isoformat()
991
+ shell_instance["results"] = results
992
+
993
+ # Update shell registry
994
+ pattern = shell_instance["pattern"]
995
+ self.shell_registry[pattern]["active"] = any(
996
+ instance["pattern"] == pattern and instance["active"]
997
+ for instance in self.active_shells.values()
998
+ )
999
+
1000
+ # Add to shell history
1001
+ self.shell_history.append(shell_instance)
1002
+
1003
+ # Remove from active shells
1004
+ del self.active_shells[shell_id]
1005
+
1006
+ # Record trace
1007
+ self.tracer.record_shell_trace(ShellPattern(pattern), {
1008
+ "shell_id": shell_id,
1009
+ "deactivation_results": results,
1010
+ "timestamp": timestamp.isoformat(),
1011
+ })
1012
+
1013
+ return True
1014
+
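The activate/deactivate pair above implements a shell lifecycle: an instance is created with a fresh UUID, held in `active_shells`, then moved to history with its results on deactivation. A toy sketch of that lifecycle without the tracer or registry bookkeeping (class and method names here are illustrative):

```python
import uuid

class MiniShellRegistry:
    """Toy sketch of the shell activate/deactivate lifecycle (no tracing)."""

    def __init__(self):
        self.active = {}
        self.history = []

    def activate(self, pattern: str, context: dict) -> str:
        shell_id = str(uuid.uuid4())
        self.active[shell_id] = {"pattern": pattern, "context": context}
        return shell_id

    def deactivate(self, shell_id: str, results: dict) -> bool:
        # Pop so the instance atomically leaves the active set
        instance = self.active.pop(shell_id, None)
        if instance is None:
            return False
        instance["results"] = results
        self.history.append(instance)
        return True
```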
1015
+ def get_active_shells(self) -> List[Dict[str, Any]]:
1016
+ """
1017
+ Get active shell instances.
1018
+
1019
+ Returns:
1020
+ List of active shell instances
1021
+ """
1022
+ return list(self.active_shells.values())
1023
+
1024
+ def get_shell_history(self, limit: int = 10) -> List[Dict[str, Any]]:
1025
+ """
1026
+ Get shell history.
1027
+
1028
+ Args:
1029
+ limit: Maximum number of shell instances to return
1030
+
1031
+ Returns:
1032
+ List of shell instances
1033
+ """
1034
+ return self.shell_history[-limit:]
1035
+
1036
+ def get_shell_registry(self) -> Dict[str, Dict[str, Any]]:
1037
+ """
1038
+ Get shell registry.
1039
+
1040
+ Returns:
1041
+ Shell registry
1042
+ """
1043
+ return self.shell_registry
1044
+
1045
+ def simulate_shell_failure(self, shell_pattern: ShellPattern,
1046
+ context: Dict[str, Any]) -> Dict[str, Any]:
1047
+ """
1048
+ Simulate a shell failure.
1049
+
1050
+ Args:
1051
+ shell_pattern: Shell pattern to simulate
1052
+ context: Simulation context
1053
+
1054
+ Returns:
1055
+ Simulation results
1056
+ """
1057
+ # Create shell instance
1058
+ shell_id = self.activate_shell(shell_pattern, context)
1059
+
1060
+ # Simulate failure based on shell pattern
1061
+ if shell_pattern == ShellPattern.NULL_FEATURE:
1062
+ # Knowledge gap simulation
1063
+ results = self._simulate_null_feature(context)
1064
+ elif shell_pattern == ShellPattern.CIRCUIT_FRAGMENT:
1065
+ # Broken reasoning path simulation
1066
+ results = self._simulate_circuit_fragment(context)
1067
+ elif shell_pattern == ShellPattern.META_FAILURE:
1068
+ # Metacognitive failure simulation
1069
+ results = self._simulate_meta_failure(context)
1070
+ elif shell_pattern == ShellPattern.RECURSIVE_FRACTURE:
1071
+ # Circular reasoning simulation
1072
+ results = self._simulate_recursive_fracture(context)
1073
+ elif shell_pattern == ShellPattern.ETHICAL_INVERSION:
1074
+ # Value inversion simulation
1075
+ results = self._simulate_ethical_inversion(context)
1076
+ else:
1077
+ # Default simulation
1078
+ results = {
1079
+ "shell_id": shell_id,
1080
+ "pattern": shell_pattern.value,
1081
+ "simulation": "default",
1082
+ "result": "simulated_failure",
1083
+ "timestamp": datetime.datetime.now().isoformat(),
1084
+ }
1085
+
1086
+ # Deactivate shell
1087
+ self.deactivate_shell(shell_id, results)
1088
+
1089
+ return results
1090
+
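The simulation method above is a dispatch over shell patterns with a default fallback. The same shape can be expressed compactly with a handler table (the handlers below are simplified stand-ins for the private `_simulate_*` methods, using the same confidence-halving and depth+3 rules they apply):

```python
def simulate(pattern: str, context: dict) -> dict:
    """Dispatch a failure simulation by pattern name, with a default fallback."""
    handlers = {
        # NULL_FEATURE halves confidence to mark a knowledge gap
        "NULL_FEATURE": lambda ctx: {
            "simulation": "knowledge_gap",
            "adjusted_confidence": ctx.get("confidence", 0.5) * 0.5,
        },
        # META_FAILURE pushes recursion depth past its threshold
        "META_FAILURE": lambda ctx: {
            "simulation": "meta_recursion",
            "adjusted_depth": ctx.get("depth", 0) + 3,
        },
    }
    handler = handlers.get(pattern, lambda ctx: {"simulation": "default"})
    return handler(context)
```

A table keeps the dispatch data-driven, so adding a pattern means adding one entry rather than another `elif` arm.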
1091
+ def _simulate_null_feature(self, context: Dict[str, Any]) -> Dict[str, Any]:
1092
+ """
1093
+ Simulate NULL_FEATURE shell failure.
1094
+
1095
+ Args:
1096
+ context: Simulation context
1097
+
1098
+ Returns:
1099
+ Simulation results
1100
+ """
1101
+ # Extract relevant fields
1102
+ query = context.get("query", "")
1103
+ confidence = context.get("confidence", 0.5)
1104
+
1105
+ # Reduce confidence for knowledge gap
1106
+ adjusted_confidence = confidence * 0.5
1107
+
1108
+ # Create null zone markers
1109
+ null_zones = []
1110
+ if "subject" in context:
1111
+ null_zones.append(context["subject"])
1112
+ else:
1113
+ # Extract potential null zones from query
1114
+ words = query.split()
1115
+ for i in range(0, len(words), 3):
1116
+ chunk = " ".join(words[i:i+3])
1117
+ null_zones.append(chunk)
1118
+
1119
+ # Create detection result
1120
+ result = {
1121
+ "pattern": ShellPattern.NULL_FEATURE.value,
1122
+ "simulation": "knowledge_gap",
1123
+ "original_confidence": confidence,
1124
+ "adjusted_confidence": adjusted_confidence,
1125
+ "null_zones": null_zones,
1126
+ "boundary_detected": True,
1127
+ "timestamp": datetime.datetime.now().isoformat(),
1128
+ }
1129
+
1130
+ return result
1131
+
1132
+ def _simulate_circuit_fragment(self, context: Dict[str, Any]) -> Dict[str, Any]:
1133
+ """
1134
+ Simulate CIRCUIT_FRAGMENT shell failure.
1135
+
1136
+ Args:
1137
+ context: Simulation context
1138
+
1139
+ Returns:
1140
+ Simulation results
1141
+ """
1142
+ # Extract relevant fields
1143
+ steps = context.get("steps", [])
1144
+
1145
+ # Create broken steps
1146
+ broken_steps = []
1147
+
1148
+ if steps:
1149
+ # Create breaks in existing steps
1150
+ for i, step in enumerate(steps):
1151
+ if i % 3 == 2: # Break every third step
1152
+ broken_steps.append({
1153
+ "step_id": step.get("id", f"step_{i}"),
1154
+ "step_name": step.get("name", f"Step {i}"),
1155
+ "broken": True,
1156
+ "cause": "attribution_break",
1157
+ })
1158
+ else:
1159
+ # Create synthetic steps and breaks
1160
+ for i in range(5):
1161
+ if i % 3 == 2: # Break every third step
1162
+ broken_steps.append({
1163
+ "step_id": f"step_{i}",
1164
+ "step_name": f"Reasoning Step {i}",
1165
+ "broken": True,
1166
+ "cause": "attribution_break",
1167
+ })
1168
+
1169
+ # Create detection result
1170
+ result = {
1171
+ "pattern": ShellPattern.CIRCUIT_FRAGMENT.value,
1172
+ "simulation": "broken_reasoning",
1173
+ "broken_steps": broken_steps,
1174
+ "attribution_breaks": len(broken_steps),
1175
+ "timestamp": datetime.datetime.now().isoformat(),
1176
+ }
1177
+
1178
+ return result
1179
+
1180
+ def _simulate_meta_failure(self, context: Dict[str, Any]) -> Dict[str, Any]:
1181
+ """
1182
+ Simulate META_FAILURE shell failure.
1183
+
1184
+ Args:
1185
+ context: Simulation context
1186
+
1187
+ Returns:
1188
+ Simulation results
1189
+ """
1190
+ # Extract relevant fields
1191
+ depth = context.get("depth", 0)
1192
+
1193
+ # Increase depth for recursion
1194
+ adjusted_depth = depth + 3
1195
+
1196
+ # Create meta errors
1197
+ meta_errors = [
1198
+ {
1199
+ "error_id": str(uuid.uuid4()),
1200
+ "message": "Recursive meta-cognitive loop detected",
1201
+ "depth": adjusted_depth,
1202
+ "cause": "self_reference",
1203
+ },
1204
+ {
1205
+ "error_id": str(uuid.uuid4()),
1206
+ "message": "Meta-reflection limit reached",
1207
+ "depth": adjusted_depth,
1208
+ "cause": "recursion_depth",
1209
+ },
1210
+ ]
1211
+
1212
+ # Create detection result
1213
+ result = {
1214
+ "pattern": ShellPattern.META_FAILURE.value,
1215
+ "simulation": "meta_recursion",
1216
+ "original_depth": depth,
1217
+ "adjusted_depth": adjusted_depth,
1218
+ "meta_errors": meta_errors,
1219
+ "recursion_detected": True,
1220
+ "timestamp": datetime.datetime.now().isoformat(),
1221
+ }
1222
+
1223
+ return result
1224
+
1225
+ def _simulate_recursive_fracture(self, context: Dict[str, Any]) -> Dict[str, Any]:
1226
+ """
1227
+ Simulate RECURSIVE_FRACTURE shell failure.
1228
+
1229
+ Args:
1230
+ context: Simulation context
1231
+
1232
+ Returns:
1233
+ Simulation results
1234
+ """
1235
+ # Extract relevant fields
1236
+ steps = context.get("steps", [])
1237
+
1238
+ # Create circular reasoning pattern
1239
+ circular_pattern = []
1240
+
1241
+ if steps and len(steps) >= 4:
1242
+ # Use existing steps to create a loop
1243
+ loop_start = len(steps) // 2
1244
+ circular_pattern = [
1245
+ {
1246
+ "step_id": steps[i].get("id", f"step_{i}"),
1247
+ "step_name": steps[i].get("name", f"Step {i}"),
1248
+ }
1249
+ for i in range(loop_start, min(loop_start + 3, len(steps)))
1250
+ ]
1251
+
1252
+ # Add repeat of first step to close the loop
1253
+ circular_pattern.append({
1254
+ "step_id": steps[loop_start].get("id", f"step_{loop_start}"),
1255
+ "step_name": steps[loop_start].get("name", f"Step {loop_start}"),
1256
+ })
1257
+ else:
1258
+ # Create synthetic circular pattern
1259
+ for i in range(3):
1260
+ circular_pattern.append({
1261
+ "step_id": f"loop_step_{i}",
1262
+ "step_name": f"Loop Step {i}",
1263
+ })
1264
+
1265
+ # Add repeat of first step to close the loop
1266
+ circular_pattern.append({
1267
+ "step_id": "loop_step_0",
1268
+ "step_name": "Loop Step 0",
1269
+ })
1270
+
1271
+ # Create detection result
1272
+ result = {
1273
+ "pattern": ShellPattern.RECURSIVE_FRACTURE.value,
1274
+ "simulation": "circular_reasoning",
1275
+ "circular_pattern": circular_pattern,
1276
+ "loop_length": len(circular_pattern) - 1,
1277
+ "timestamp": datetime.datetime.now().isoformat(),
1278
+ }
1279
+
1280
+ return result
1281
+
1282
+ def _simulate_ethical_inversion(self, context: Dict[str, Any]) -> Dict[str, Any]:
1283
+ """
1284
+ Simulate ETHICAL_INVERSION shell failure.
1285
+
1286
+ Args:
1287
+ context: Simulation context
1288
+
1289
+ Returns:
1290
+ Simulation results
1291
+ """
1292
+ # Extract relevant fields
1293
+ values = context.get("values", {})
1294
+
1295
+ # Create value inversions
1296
+ value_inversions = []
1297
+
1298
+ if values:
1299
+ # Create inversions for existing values
1300
+ for value, polarity in values.items():
1301
+ if isinstance(polarity, (int, float)) and polarity > 0:
1302
+ value_inversions.append({
1303
+ "value": value,
1304
+ "original_polarity": polarity,
1305
+ "inverted_polarity": -polarity,
1306
+ "cause": "value_conflict",
1307
+ })
1308
+ else:
1309
+ # Create synthetic value inversions
1310
+ default_values = {
1311
+ "fairness": 0.8,
1312
+ "transparency": 0.9,
1313
+ "innovation": 0.7,
1314
+ "efficiency": 0.8,
1315
+ }
1316
+
1317
+ for value, polarity in default_values.items():
1318
+ value_inversions.append({
1319
+ "value": value,
1320
+ "original_polarity": polarity,
1321
+ "inverted_polarity": -polarity,
1322
+ "cause": "value_conflict",
1323
+ })
1324
+
1325
+ # Create detection result
1326
+ result = {
1327
+ "pattern": ShellPattern.ETHICAL_INVERSION.value,
1328
+ "simulation": "value_inversion",
1329
+ "value_inversions": value_inversions,
1330
+ "inversion_count": len(value_inversions),
1331
+ "timestamp": datetime.datetime.now().isoformat(),
1332
+ }
1333
+
1334
+ return result
1335
+
1336
+
1337
+ class ShellFailureMap:
1338
+ """
1339
+ Shell failure mapping and visualization.
1340
+
1341
+    The ShellFailureMap class provides:
1342
+ - Visualization of shell pattern failures
1343
+ - Mapping of failures across agents
1344
+ - Temporal analysis of failures
1345
+ - Failure pattern detection
1346
+ """
1347
+
1348
+ def __init__(self):
1349
+ """Initialize shell failure map."""
1350
+ self.failure_map = {}
1351
+ self.agent_failures = defaultdict(list)
1352
+ self.pattern_failures = defaultdict(list)
1353
+ self.temporal_failures = []
1354
+
1355
+ def add_failure(self, agent_id: str, agent_name: str,
1356
+ shell_pattern: ShellPattern, failure_data: Dict[str, Any]) -> str:
1357
+ """
1358
+ Add a shell failure to the map.
1359
+
1360
+ Args:
1361
+ agent_id: Agent ID
1362
+ agent_name: Agent name
1363
+ shell_pattern: Shell pattern
1364
+ failure_data: Failure data
1365
+
1366
+ Returns:
1367
+ Failure ID
1368
+ """
1369
+ # Create failure ID
1370
+ failure_id = str(uuid.uuid4())
1371
+ timestamp = datetime.datetime.now()
1372
+
1373
+ # Create failure item
1374
+ failure_item = {
1375
+ "failure_id": failure_id,
1376
+ "agent_id": agent_id,
1377
+ "agent_name": agent_name,
1378
+ "pattern": shell_pattern.value,
1379
+ "data": failure_data,
1380
+ "timestamp": timestamp.isoformat(),
1381
+ }
1382
+
1383
+ # Add to failure map
1384
+ self.failure_map[failure_id] = failure_item
1385
+
1386
+ # Add to agent failures
1387
+ self.agent_failures[agent_id].append(failure_id)
1388
+
1389
+ # Add to pattern failures
1390
+ self.pattern_failures[shell_pattern.value].append(failure_id)
1391
+
1392
+ # Add to temporal failures
1393
+ self.temporal_failures.append((timestamp, failure_id))
1394
+
1395
+ return failure_id
1396
+
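`add_failure` maintains three indexes over the same items: by failure ID, by agent, and in time order. A minimal sketch of that triple-indexing pattern (the class here is illustrative, not part of the module):

```python
import datetime
import uuid
from collections import defaultdict

class MiniFailureMap:
    """Sketch of indexing one record by id, by agent, and on a timeline."""

    def __init__(self):
        self.by_id = {}
        self.by_agent = defaultdict(list)
        self.timeline = []

    def add(self, agent_id: str, pattern: str, data: dict) -> str:
        failure_id = str(uuid.uuid4())
        item = {
            "agent_id": agent_id,
            "pattern": pattern,
            "data": data,
            "timestamp": datetime.datetime.now(),
        }
        # One canonical store plus two lightweight indexes of IDs
        self.by_id[failure_id] = item
        self.by_agent[agent_id].append(failure_id)
        self.timeline.append((item["timestamp"], failure_id))
        return failure_id
```

Keeping the secondary indexes as lists of IDs (rather than copies of the items) means each lookup path stays consistent with the canonical `by_id` store.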
1397
+ def get_failure(self, failure_id: str) -> Optional[Dict[str, Any]]:
1398
+ """
1399
+ Get failure by ID.
1400
+
1401
+ Args:
1402
+ failure_id: Failure ID
1403
+
1404
+ Returns:
1405
+ Failure item or None if not found
1406
+ """
1407
+ return self.failure_map.get(failure_id)
1408
+
1409
+ def get_agent_failures(self, agent_id: str, limit: int = 10) -> List[Dict[str, Any]]:
1410
+ """
1411
+ Get failures for an agent.
1412
+
1413
+ Args:
1414
+ agent_id: Agent ID
1415
+ limit: Maximum number of failures to return
1416
+
1417
+ Returns:
1418
+ List of failure items
1419
+ """
1420
+ failure_ids = self.agent_failures.get(agent_id, [])[-limit:]
1421
+ return [self.get_failure(failure_id) for failure_id in failure_ids if failure_id in self.failure_map]
1422
+
1423
+ def get_pattern_failures(self, pattern: ShellPattern, limit: int = 10) -> List[Dict[str, Any]]:
1424
+ """
1425
+ Get failures for a pattern.
1426
+
1427
+ Args:
1428
+ pattern: Shell pattern
1429
+ limit: Maximum number of failures to return
1430
+
1431
+ Returns:
1432
+ List of failure items
1433
+ """
1434
+ failure_ids = self.pattern_failures.get(pattern.value, [])[-limit:]
1435
+ return [self.get_failure(failure_id) for failure_id in failure_ids if failure_id in self.failure_map]
1436
+
1437
+ def get_temporal_failures(self, start_time: Optional[datetime.datetime] = None,
1438
+ end_time: Optional[datetime.datetime] = None,
1439
+ limit: int = 10) -> List[Dict[str, Any]]:
1440
+ """
1441
+ Get failures in a time range.
1442
+
1443
+ Args:
1444
+ start_time: Start time (None for no start)
1445
+ end_time: End time (None for no end)
1446
+ limit: Maximum number of failures to return
1447
+
1448
+ Returns:
1449
+ List of failure items
1450
+ """
1451
+ # Filter by time range
1452
+ filtered_failures = []
1453
+ for timestamp, failure_id in self.temporal_failures:
1454
+ if start_time and timestamp < start_time:
1455
+ continue
1456
+ if end_time and timestamp > end_time:
1457
+ continue
1458
+ filtered_failures.append((timestamp, failure_id))
1459
+
1460
+ # Take last 'limit' failures
1461
+ filtered_failures = filtered_failures[-limit:]
1462
+
1463
+ # Get failure items
1464
+ return [self.get_failure(failure_id) for _, failure_id in filtered_failures
1465
+ if failure_id in self.failure_map]
1466
+
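The time-range filter above keeps entries inside an inclusive `[start, end]` window and then takes the most recent `limit` of them. A standalone sketch of that filter (function name is illustrative):

```python
import datetime

def filter_by_time(entries: list, start=None, end=None, limit: int = 10) -> list:
    """Keep (timestamp, id) pairs inside [start, end], then take the last `limit`."""
    kept = [(ts, fid) for ts, fid in entries
            if (start is None or ts >= start) and (end is None or ts <= end)]
    return kept[-limit:]
```

Because `temporal_failures` is appended in arrival order, slicing the tail gives the most recent matches without an explicit sort.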
1467
+ def get_failure_stats(self) -> Dict[str, Any]:
1468
+ """
1469
+ Get failure statistics.
1470
+
1471
+ Returns:
1472
+ Failure statistics
1473
+ """
1474
+ # Count failures by agent
1475
+ agent_counts = {agent_id: len(failures) for agent_id, failures in self.agent_failures.items()}
1476
+
1477
+ # Count failures by pattern
1478
+ pattern_counts = {pattern: len(failures) for pattern, failures in self.pattern_failures.items()}
1479
+
1480
+ # Count failures by time period
1481
+ now = datetime.datetime.now()
1482
+ hour_ago = now - datetime.timedelta(hours=1)
1483
+ day_ago = now - datetime.timedelta(days=1)
1484
+ week_ago = now - datetime.timedelta(weeks=1)
1485
+
1486
+ time_counts = {
1487
+ "last_hour": sum(1 for timestamp, _ in self.temporal_failures if timestamp >= hour_ago),
1488
+ "last_day": sum(1 for timestamp, _ in self.temporal_failures if timestamp >= day_ago),
1489
+ "last_week": sum(1 for timestamp, _ in self.temporal_failures if timestamp >= week_ago),
1490
+ "total": len(self.temporal_failures),
1491
+ }
1492
+
1493
+ # Create stats
1494
+ stats = {
1495
+ "agent_counts": agent_counts,
1496
+ "pattern_counts": pattern_counts,
1497
+ "time_counts": time_counts,
1498
+ "total_failures": len(self.failure_map),
1499
+ "timestamp": now.isoformat(),
1500
+ }
1501
+
1502
+ return stats
1503
+
+     def generate_failure_map_visualization(self) -> Dict[str, Any]:
+         """
+         Generate visualization data for the failure map.
+
+         Returns:
+             Visualization data
+         """
+         # Create nodes and links
+         nodes = []
+         links = []
+
+         # Add agent nodes
+         agent_nodes = {}
+         for agent_id, failures in self.agent_failures.items():
+             # Get the first failure to recover the agent name
+             first_failure = self.get_failure(failures[0]) if failures else None
+             agent_name = first_failure.get("agent_name", "Unknown") if first_failure else "Unknown"
+
+             # Create agent node
+             agent_node = {
+                 "id": agent_id,
+                 "label": agent_name,
+                 "type": "agent",
+                 "size": 15,
+                 "failure_count": len(failures),
+             }
+
+             nodes.append(agent_node)
+             agent_nodes[agent_id] = agent_node
+
+         # Add pattern nodes
+         pattern_nodes = {}
+         for pattern, failures in self.pattern_failures.items():
+             # Create pattern node
+             pattern_node = {
+                 "id": pattern,
+                 "label": pattern,
+                 "type": "pattern",
+                 "size": 10,
+                 "failure_count": len(failures),
+             }
+
+             nodes.append(pattern_node)
+             pattern_nodes[pattern] = pattern_node
+
+         # Add failure nodes and links
+         for failure_id, failure in self.failure_map.items():
+             agent_id = failure.get("agent_id")
+             pattern = failure.get("pattern")
+
+             # Create failure node
+             failure_node = {
+                 "id": failure_id,
+                 "label": f"Failure {failure_id[:6]}",
+                 "type": "failure",
+                 "size": 5,
+                 "timestamp": failure.get("timestamp"),
+             }
+
+             nodes.append(failure_node)
+
+             # Add links
+             if agent_id:
+                 links.append({
+                     "source": agent_id,
+                     "target": failure_id,
+                     "type": "agent_failure",
+                 })
+
+             if pattern:
+                 links.append({
+                     "source": pattern,
+                     "target": failure_id,
+                     "type": "pattern_failure",
+                 })
+
+         # Create visualization
+         visualization = {
+             "nodes": nodes,
+             "links": links,
+             "timestamp": datetime.datetime.now().isoformat(),
+         }
+
+         return visualization
+
+
+ # Utility functions for diagnostics
+ def format_diagnostic_output(trace_data: Dict[str, Any], format: str = "text") -> str:
+     """
+     Format diagnostic output for display.
+
+     Args:
+         trace_data: Trace data
+         format: Output format (text, json, markdown)
+
+     Returns:
+         Formatted output
+     """
+     if format == "json":
+         return json.dumps(trace_data, indent=2)
+
+     elif format == "markdown":
+         # Create markdown output
+         output = "# Diagnostic Trace\n\n"
+
+         # Add trace info
+         output += f"**Trace ID:** {trace_data.get('trace_id', 'N/A')}\n"
+         output += f"**Agent:** {trace_data.get('agent_name', 'N/A')}\n"
+         output += f"**Type:** {trace_data.get('trace_type', 'N/A')}\n"
+         output += f"**Time:** {trace_data.get('timestamp', 'N/A')}\n\n"
+
+         # Add shell patterns if available
+         if "shell_patterns" in trace_data:
+             output += "**Shell Patterns:**\n\n"
+             for pattern in trace_data["shell_patterns"]:
+                 output += f"- {pattern}\n"
+             output += "\n"
+
+         # Add content based on trace type
+         if trace_data.get("trace_type") == "signal":
+             output += "## Signal Details\n\n"
+             content = trace_data.get("content", {})
+             output += f"**Ticker:** {content.get('ticker', 'N/A')}\n"
+             output += f"**Action:** {content.get('action', 'N/A')}\n"
+             output += f"**Confidence:** {content.get('confidence', 'N/A')}\n"
+             output += f"**Reasoning:** {content.get('reasoning', 'N/A')}\n\n"
+
+             # Add attribution if available
+             if "attribution_trace" in content:
+                 output += "## Attribution\n\n"
+                 output += "| Source | Weight |\n"
+                 output += "| ------ | ------ |\n"
+                 for source, weight in content.get("attribution_trace", {}).items():
+                     output += f"| {source} | {weight:.2f} |\n"
+
+         elif trace_data.get("trace_type") == "reasoning":
+             output += "## Reasoning Details\n\n"
+             content = trace_data.get("content", {})
+             output += f"**Depth:** {content.get('depth', 'N/A')}\n"
+             output += f"**Confidence:** {content.get('confidence', 'N/A')}\n"
+             output += f"**Collapse Detected:** {content.get('collapse_detected', False)}\n\n"
+
+             # Add steps if available
+             if "steps" in content:
+                 output += "## Reasoning Steps\n\n"
+                 for i, step in enumerate(content["steps"]):
+                     output += f"### Step {i+1}: {step.get('name', 'Unnamed')}\n"
+                     output += f"**Completed:** {step.get('completed', True)}\n"
+                     if "error" in step:
+                         output += f"**Error:** {step['error'].get('message', 'Unknown error')}\n"
+                     output += "\n"
+
+         elif trace_data.get("trace_type") == "collapse":
+             output += "## Collapse Details\n\n"
+             content = trace_data.get("content", {})
+             output += f"**Type:** {content.get('collapse_type', 'N/A')}\n"
+             output += f"**Reason:** {content.get('collapse_reason', 'N/A')}\n\n"
+
+             # Add additional details if available
+             if "details" in content:
+                 output += "### Additional Details\n\n"
+                 details = content["details"]
+                 for key, value in details.items():
+                     output += f"**{key}:** {value}\n"
+
+         elif trace_data.get("trace_type") == "shell":
+             output += "## Shell Details\n\n"
+             content = trace_data.get("content", {})
+             output += f"**Shell Pattern:** {content.get('shell_pattern', 'N/A')}\n\n"
+
+             # Add content details
+             shell_content = content.get("content", {})
+             output += "## Shell Content\n\n"
+             for key, value in shell_content.items():
+                 output += f"**{key}:** {value}\n"
+
+         return output
+
+     else:  # text format (default)
+         # Create text output
+         output = "==== Diagnostic Trace ====\n\n"
+
+         # Add trace info
+         output += f"Trace ID: {trace_data.get('trace_id', 'N/A')}\n"
+         output += f"Agent: {trace_data.get('agent_name', 'N/A')}\n"
+         output += f"Type: {trace_data.get('trace_type', 'N/A')}\n"
+         output += f"Time: {trace_data.get('timestamp', 'N/A')}\n\n"
+
+         # Add shell patterns if available
+         if "shell_patterns" in trace_data:
+             output += "Shell Patterns:\n"
+             for pattern in trace_data["shell_patterns"]:
+                 output += f"- {pattern}\n"
+             output += "\n"
+
+         # Add content based on trace type
+         content = trace_data.get("content", {})
+         output += "---- Content ----\n\n"
+
+         # Format content recursively
+         def format_dict(d, indent=0):
+             result = ""
+             for key, value in d.items():
+                 if isinstance(value, dict):
+                     result += f"{' ' * indent}{key}:\n"
+                     result += format_dict(value, indent + 1)
+                 elif isinstance(value, list):
+                     result += f"{' ' * indent}{key}:\n"
+                     for item in value:
+                         if isinstance(item, dict):
+                             result += format_dict(item, indent + 1)
+                         else:
+                             result += f"{' ' * (indent + 1)}- {item}\n"
+                 else:
+                     result += f"{' ' * indent}{key}: {value}\n"
+             return result
+
+         output += format_dict(content)
+
+         return output
+
+
+ def get_shell_pattern_description(pattern: ShellPattern) -> str:
+     """
+     Get the description for a shell pattern.
+
+     Args:
+         pattern: Shell pattern
+
+     Returns:
+         Shell pattern description
+     """
+     descriptions = {
+         ShellPattern.NULL_FEATURE: "Knowledge gaps as null attribution zones",
+         ShellPattern.CIRCUIT_FRAGMENT: "Broken reasoning paths in attribution chains",
+         ShellPattern.META_FAILURE: "Metacognitive attribution failures",
+         ShellPattern.GHOST_FRAME: "Residual agent identity markers",
+         ShellPattern.ECHO_ATTRIBUTION: "Causal chain backpropagation",
+         ShellPattern.ATTRIBUTION_REFLECT: "Multi-head contribution analysis",
+         ShellPattern.INVERSE_CHAIN: "Attribution-output mismatch",
+         ShellPattern.RECURSIVE_FRACTURE: "Circular attribution loops",
+         ShellPattern.ETHICAL_INVERSION: "Value polarity reversals",
+         ShellPattern.RESIDUAL_ALIGNMENT_DRIFT: "Direction of belief evolution",
+     }
+
+     return descriptions.get(pattern, "Unknown shell pattern")
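
The recursive `format_dict` helper embedded in the text branch of `format_diagnostic_output` does the heavy lifting for the default output. Extracted as a standalone sketch (the sample trace content below is illustrative, not taken from the codebase), it flattens nested trace dictionaries into indented `key: value` lines:

```python
def format_dict(d, indent=0):
    """Recursively render nested dicts/lists as indented 'key: value' lines.

    Mirrors the helper defined inside format_diagnostic_output's text branch,
    including its one-space-per-level indentation.
    """
    result = ""
    for key, value in d.items():
        if isinstance(value, dict):
            result += f"{' ' * indent}{key}:\n"
            result += format_dict(value, indent + 1)
        elif isinstance(value, list):
            result += f"{' ' * indent}{key}:\n"
            for item in value:
                if isinstance(item, dict):
                    result += format_dict(item, indent + 1)
                else:
                    result += f"{' ' * (indent + 1)}- {item}\n"
        else:
            result += f"{' ' * indent}{key}: {value}\n"
    return result

# Hypothetical trace content for illustration
sample = {
    "ticker": "AAPL",
    "signal": {"action": "buy", "confidence": 0.8},
    "patterns": ["NULL_FEATURE"],
}
print(format_dict(sample))
```

Dicts preserve insertion order in Python 3.7+, so the rendered output is deterministic for a given trace.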