# Architecture Documentation
## System Overview
Instant SWOT Agent is an AI-powered strategic analysis system that generates comprehensive SWOT analyses for companies with automatic quality improvement through a self-correcting loop.
## High-Level Architecture
```
User Input (Company Name)
            │
            ▼
┌────────────────────────────────────────┐
│             USER INTERFACE             │
│      Streamlit (streamlit_app.py)      │
│     React (frontend/) via FastAPI      │
└────────────────────────────────────────┘
            │
            ▼
┌────────────────────────────────────────┐
│    FastAPI Backend (src/api/app.py)    │
│     Routes: analysis.py, stocks.py     │
└────────────────────────────────────────┘
            │
            ▼
  Workflow Engine (LangGraph)
      src/workflow/graph.py
            │
            ▼
Researcher → Analyzer → Critic → Editor
                          ▲         │
                          └─────────┘
     (loop until score ≥ 7 or 3 revisions)
            │
            ▼
     Final SWOT Analysis
```
## Directory Structure
```
src/
├── api/                    # FastAPI backend
│   ├── app.py              # Application factory
│   ├── schemas.py          # Pydantic models
│   └── routes/
│       ├── analysis.py     # Workflow endpoints
│       └── stocks.py       # Stock search endpoint
├── workflow/               # LangGraph workflow
│   ├── graph.py            # Workflow definition
│   └── runner.py           # Execution wrapper
├── nodes/                  # Workflow nodes
├── services/               # Shared services
│   ├── swot_parser.py      # SWOT text parsing
│   ├── confidence.py       # Confidence calculation
│   └── workflow_store.py   # Workflow state management
├── utils/                  # Utilities
└── main.py                 # CLI entry point
```
## Core Components
### 1. Workflow Engine
Located in `src/workflow/graph.py`, this module implements the self-correcting workflow:
- **Entry**: Researcher node
- **Flow**: Researcher → Analyzer → Critic → (conditional) Editor
- **Exit**: Score ≥ 7 OR revision_count ≥ 3
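The exit condition above amounts to a small routing function evaluated after the Critic node. A minimal sketch (the function name and the `"END"`/`"Editor"` labels are illustrative, not the project's actual identifiers):

```python
def route_after_critic(state: dict) -> str:
    """Decide whether to loop back to the Editor or finish."""
    # Exit once quality passes (score >= 7) or the revision
    # budget (3 attempts) is exhausted; otherwise revise again.
    if state["score"] >= 7 or state["revision_count"] >= 3:
        return "END"
    return "Editor"
```

In LangGraph this function would typically be wired in via `add_conditional_edges` on the Critic node.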
### 2. Workflow Nodes
Located in `src/nodes/`:
| Node | File | Responsibility |
|------|------|----------------|
| Researcher | `researcher.py` | Gathers data via MCP servers, summarizes with LLM |
| Analyzer | `analyzer.py` | Generates SWOT analysis draft |
| Critic | `critic.py` | Evaluates quality (1-10 score) using rubric |
| Editor | `editor.py` | Revises draft based on critique |
### 3. MCP Servers
Located in `mcp-servers/`, these servers aggregate data from external sources:
| Server | Data Source | Output |
|--------|-------------|--------|
| financials-basket | SEC EDGAR | Financial statements |
| volatility-basket | Yahoo Finance, FRED | VIX, Beta, IV |
| macro-basket | FRED | GDP, rates, CPI |
| valuation-basket | Yahoo Finance, SEC | P/E, P/B, EV/EBITDA |
| news-basket | Tavily | News articles |
| sentiment-basket | Finnhub | Sentiment scores |
### 4. State Management
Defined in `src/state.py`, the workflow state flows through each node:
```python
state = {
"company_name": str,
"strategy_focus": str,
"raw_data": str,
"draft_report": str,
"critique": str,
"score": int,
"revision_count": int,
"error": str | None
}
```
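For static type checking, the same schema can be written as a `TypedDict`. A sketch only; the class name is an assumption, and the actual definition in `src/state.py` may differ:

```python
from typing import Optional, TypedDict


class WorkflowState(TypedDict):
    """Shape of the state dict passed between nodes (illustrative name)."""
    company_name: str
    strategy_focus: str
    raw_data: str
    draft_report: str
    critique: str
    score: int
    revision_count: int
    error: Optional[str]
```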
## Data Flow
1. **Input**: User enters company name via Streamlit UI
2. **Research**: Researcher node queries MCP servers for financial data
3. **Analysis**: Analyzer generates initial SWOT draft
4. **Evaluation**: Critic scores draft (1-10) against rubric
5. **Improvement**: If score < 7 and revisions < 3, Editor revises
6. **Output**: Final SWOT displayed with quality metrics
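Stripped of the LangGraph machinery, steps 3–5 reduce to plain control flow. A sketch with stand-in callables (all names and signatures here are illustrative):

```python
def run_swot(company: str, analyze, critique, edit,
             pass_score: int = 7, max_revisions: int = 3) -> dict:
    """Draft, score, and revise until quality passes or budget runs out."""
    draft = analyze(company)                 # step 3: initial draft
    score, notes = critique(draft)           # step 4: rubric score + critique
    revisions = 0
    while score < pass_score and revisions < max_revisions:
        draft = edit(draft, notes)           # step 5: targeted revision
        revisions += 1
        score, notes = critique(draft)
    return {"draft": draft, "score": score, "revisions": revisions}
```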
## Quality Evaluation
The Critic node uses a rubric-based system:
- **Completeness** (25%): All SWOT sections populated
- **Specificity** (25%): Concrete, actionable insights with data
- **Relevance** (25%): Aligned with company context
- **Depth** (25%): Strategic sophistication
Threshold: Score ≥ 7/10 to pass without revision.
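With four equally weighted criteria, the overall score is a weighted average of the per-criterion scores. A sketch (the dictionary keys are assumptions derived from the rubric above, not the Critic's actual output format):

```python
WEIGHTS = {"completeness": 0.25, "specificity": 0.25,
           "relevance": 0.25, "depth": 0.25}


def overall_score(subscores: dict) -> float:
    """Combine per-criterion scores (each 1-10) into one 1-10 score."""
    return sum(WEIGHTS[name] * subscores[name] for name in WEIGHTS)
```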
## Extending the System
### Adding a New Node
1. Create `src/nodes/new_node.py`:
```python
def new_node(state: dict) -> dict:
    # Compute this node's output from the incoming state,
    # then record it before passing the state along
    state["new_field"] = compute_result(state)  # replace with real logic
    return state
```
2. Register in `src/workflow/graph.py`:
```python
workflow.add_node("NewNode", RunnableLambda(new_node))
workflow.add_edge("PreviousNode", "NewNode")
```
### Adding a New MCP Server
1. Create directory `mcp-servers/new-basket/`
2. Implement server with tool registration
3. Update Researcher node to call new server
## Observability
- **LangSmith**: End-to-end workflow tracing (configure via environment variables)
- **Logging**: Python standard logging at INFO/DEBUG levels
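LangSmith tracing is usually enabled through environment variables along these lines (variable names per the LangSmith documentation; the project value is a placeholder, so verify against the repository's own `.env` template):

```shell
# Enable LangSmith tracing for the workflow (illustrative values)
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="<your-langsmith-api-key>"
export LANGCHAIN_PROJECT="instant-swot-agent"   # assumed project name
```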
## Error Handling
- MCP server failures: Graceful degradation, continue with available data
- LLM failures: Retry with fallback providers
- Quality failures: Maximum 3 revision attempts before accepting result
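The first policy (continue with whatever data sources respond) can be sketched as follows; the function name and the shape of the `fetchers` mapping are illustrative:

```python
import logging


def gather_available(fetchers: dict) -> dict:
    """Query every data source; skip the ones that fail."""
    results = {}
    for name, fetch in fetchers.items():
        try:
            results[name] = fetch()
        except Exception:
            # Degrade gracefully: log the failure and continue
            # with the remaining baskets instead of aborting
            logging.warning("MCP server %s failed; continuing", name)
    return results
```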