Fix: Temporal data extraction and frontend date display
Backend:
- Extract dates for valuation/volatility/macro in analyzer.py
- Fix workflow_store.py to use regular_market_time for valuation
- Fix workflow_store.py to use generated_at for volatility
- Add Layer 2 (minimum citation count) to numeric_validator.py
- Add Layer 3 (uncited number detection) to numeric_validator.py
- Integrate Layers 2/3 in critic.py
Frontend:
- Add normalizeDate() for YYYY-MM-DD format (2025Q3→2025-09-30)
- Add "Spot" data type in inferDataType() for valuation metrics
- Add extractDate() with multiple field fallbacks for news
Fixes issues 2, 3, 5, 6, 8 from frontend data display audit.
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- CLAUDE.md +87 -0
- TEMPORAL_DATA_FIX.md +0 -179
- frontend/src/components/MCPDataPanel.tsx +96 -7
- src/nodes/analyzer.py +41 -15
- src/nodes/critic.py +74 -1
- src/services/workflow_store.py +12 -3
- src/utils/numeric_validator.py +168 -0
- static/assets/{index-Cb1-3_-g.js → index-juxGlBoI.js} +0 -0
- static/index.html +1 -1
CLAUDE.md (ADDED)
@@ -0,0 +1,87 @@
# CLAUDE.md

This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.

## Project Overview

Multi-agent AI system that generates SWOT analyses for publicly-traded companies using a self-correcting workflow. Python/FastAPI backend with TypeScript/React frontend.

**Live Demo:** https://huggingface.co/spaces/vn6295337/Instant-SWOT-Agent

## Common Commands

### Backend (Python)
```bash
make api                  # Run FastAPI server (port 7860)
make test                 # Run pytest tests
make lint                 # Run flake8 and pylint
make format               # Format with black
make coverage             # Run tests with coverage
make analyze TICKER=AAPL  # CLI analysis for a ticker
```

### Frontend (TypeScript/React)
```bash
cd frontend
npm run dev        # Vite dev server (port 5173)
npm run build      # Production build
npm run lint       # ESLint
npm test           # Vitest unit tests
npm run test:e2e   # Playwright E2E tests
npm run storybook  # Component docs (port 6006)
```

### Full Stack
```bash
make frontend  # Run both backend and React frontend
```

## Architecture

```
User Input → Researcher → [6 MCP Servers] → Raw Data
                              ↓
Raw Data → Analyst → SWOT Draft → Critic → Score
                              ↓
Score < 7 → Editor → Revised Draft → Critic
Score ≥ 7 or 3 revisions → Final Output
```

**Agent Workflow (LangGraph):**
1. **Researcher** - Gathers data from MCP servers (fundamentals, volatility, macro, valuation, news, sentiment)
2. **Analyst** - Generates SWOT draft based on strategy focus
3. **Critic** - Scores output 1-10 with rubric-based evaluation
4. **Editor** - Revises based on critique (loops until score ≥ 7 or 3 revisions)

**Key Files:**
- `src/workflow/graph.py` - LangGraph workflow definition
- `src/nodes/` - Agent implementations (researcher, analyzer, critic, editor)
- `src/api/app.py` - FastAPI application
- `src/state.py` - TypedDict for workflow state
- `frontend/src/App.tsx` - Main React component

## API Endpoints

- `POST /analyze` - Start SWOT analysis (returns workflow_id)
- `GET /workflow/{id}/status` - Progress and metrics
- `GET /workflow/{id}/result` - Final results
- `GET /api/stocks/search?q=` - Stock ticker search
- `GET /health` - Health check

## Environment Variables

Required (at least one LLM provider):
- `GROQ_API_KEY` - Primary LLM (Llama 3.1 8B)
- `GEMINI_API_KEY` - Fallback
- `OPENROUTER_API_KEY` - Fallback

Data sources:
- `TAVILY_API_KEY` - Web search
- `FRED_API_KEY` - Macro data
- `FINNHUB_API_KEY` - Sentiment/ratings

## Tech Stack

**Backend:** Python 3.11+, FastAPI, LangGraph, LangChain, Pydantic
**Frontend:** React 18, TypeScript, Vite, Tailwind CSS, Radix UI, React Query
**Testing:** pytest (backend), Vitest + Playwright (frontend)
TEMPORAL_DATA_FIX.md (DELETED)
@@ -1,179 +0,0 @@
# Temporal Data Display Issue - Root Cause Analysis and Solution

## Problem Description

Financial metrics in the SWOT analysis are not displaying temporal context (e.g., "FY 2024", "Q3 2024") next to the values. This affects the user's ability to understand when the financial data is from.

## Current Behavior

```plaintext
Financials
• revenue: $723.9M
• net_margin: -35.30
• debt_to_equity: 1.23
• EPS: $2.45
```

## Expected Behavior

```plaintext
Financials
• revenue: $723.9M (FY 2024)
• net_margin: -35.30 (FY 2024)
• debt_to_equity: 1.23 (FY 2024)
• EPS: $2.45 (Q3 2024)
```

## Root Cause Analysis

### Primary Issue: Calculated Metrics Lose Temporal Data

**Location:** `/home/vn6295337/Researcher-Agent/mcp-servers/financials-basket/server.py`

The financials MCP server calculates metrics like `net_margin` and `debt_to_equity` but loses temporal data in the process:

```python
# Problematic code (lines ~200-220)
net_margin = None
if revenue and net_income and revenue["value"] and net_income["value"]:
    net_margin = round((net_income["value"] / revenue["value"]) * 100, 2)  # ❌ Just a number!
```

- `revenue` from SEC: `{value: 723900000, end_date: "2024-09-30", fiscal_year: 2024, form: "10-K"}`
- `net_income` from SEC: `{value: -255600000, end_date: "2024-09-30", fiscal_year: 2024, form: "10-K"}`
- `net_margin` calculated: `-35.30` (plain number, temporal data lost) ❌

### Secondary Issue: MCP Client Handling

**Location:** `/home/vn6295337/Researcher-Agent/mcp_client.py`

The MCP client's `_extract_and_emit_metrics` function doesn't properly handle calculated metrics that should have temporal data.

## Solution

### 1. Fix Financials MCP Server

**File:** `/home/vn6295337/Researcher-Agent/mcp-servers/financials-basket/server.py`

Add helper function and modify margin calculations:

```python
def create_temporal_metric(value, source_metric):
    """Create a metric with temporal data inherited from source metric."""
    if source_metric and isinstance(source_metric, dict):
        return {
            "value": value,
            "end_date": source_metric.get("end_date"),
            "fiscal_year": source_metric.get("fiscal_year"),
            "form": source_metric.get("form")
        }
    return {"value": value}

# Replace margin calculations
net_margin = None
if revenue and net_income and revenue["value"] and net_income["value"]:
    net_margin = create_temporal_metric(
        round((net_income["value"] / revenue["value"]) * 100, 2),
        revenue  # Inherit temporal data from revenue
    )
```

### 2. Fix Debt Metrics

**File:** Same file, in `fetch_debt_metrics` function

```python
debt_to_equity = None
if total_debt and stockholders_equity:
    debt_val = total_debt.get("value", 0) or 0
    equity_val = stockholders_equity.get("value", 0) or 0
    if equity_val > 0:
        debt_to_equity = {
            "value": round(debt_val / equity_val, 2),
            "end_date": total_debt.get("end_date"),
            "fiscal_year": total_debt.get("fiscal_year"),
            "form": total_debt.get("form")
        }
```

### 3. Enhance MCP Client

**File:** `/home/vn6295337/Researcher-Agent/mcp_client.py`

```python
# In _extract_and_emit_metrics function, enhance financials section
elif source == "financials":
    financials = result.get("financials") or {}

    def get_temporal_data(metric_data):
        if isinstance(metric_data, dict):
            return {
                "end_date": metric_data.get("end_date"),
                "fiscal_year": metric_data.get("fiscal_year"),
                "form": metric_data.get("form")
            }
        return {"end_date": None, "fiscal_year": None, "form": None}

    # Handle net_margin with temporal data
    net_margin = financials.get("net_margin") or financials.get("net_margin_pct")
    if isinstance(net_margin, dict) and net_margin.get("value") is not None:
        temporal = get_temporal_data(net_margin)
        await emit_metric(
            progress_callback, source, "net_margin", net_margin["value"],
            end_date=temporal["end_date"],
            fiscal_year=temporal["fiscal_year"],
            form=temporal["form"]
        )
    elif isinstance(net_margin, (int, float)):
        # Fallback for old format
        await emit_metric(progress_callback, source, "net_margin", net_margin)
```

## Files to Modify

1. **Primary Fix:** `/home/vn6295337/Researcher-Agent/mcp-servers/financials-basket/server.py`
   - Add `create_temporal_metric` helper function
   - Modify margin calculations to preserve temporal data
   - Modify debt_to_equity calculation to preserve temporal data

2. **Secondary Fix:** `/home/vn6295337/Researcher-Agent/mcp_client.py`
   - Enhance `_extract_and_emit_metrics` function
   - Add proper handling for calculated metrics with temporal data
   - Maintain backward compatibility with fallback handling

## Expected Results

After implementing the fix:

```plaintext
Financials
• revenue: $723.9M (FY 2024) ✅
• net_margin: -35.30 (FY 2024) ✅ FIXED
• debt_to_equity: 1.23 (FY 2024) ✅ FIXED
• EPS: $2.45 (Q3 2024) ✅
• gross_margin: 45.20 (FY 2024) ✅ FIXED
• operating_margin: 12.80 (FY 2024) ✅ FIXED
```

## Testing Plan

1. **Unit Test:** Verify `create_temporal_metric` function works correctly
2. **Integration Test:** Run full workflow and verify temporal data flows through system
3. **UI Test:** Confirm frontend displays fiscal period labels correctly
4. **Regression Test:** Ensure existing functionality still works

## Backward Compatibility

The solution maintains full backward compatibility:
- Old format (plain numbers) still works via fallback handling
- New format (objects with temporal data) provides enhanced functionality
- No breaking changes to existing API contracts
- Frontend already supports temporal data display

## Impact

This fix will significantly improve the user experience by:
- Providing clear temporal context for all financial metrics
- Enabling better financial analysis with period-specific data
- Maintaining data consistency across the entire system
- Supporting historical comparisons and trend analysis
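The deleted document's backward-compatibility claim is easy to sanity-check with the `create_temporal_metric` helper it defines. The assertions below are illustrative only, not part of the repo's test suite:

```python
def create_temporal_metric(value, source_metric):
    """Create a metric with temporal data inherited from source metric."""
    if source_metric and isinstance(source_metric, dict):
        return {
            "value": value,
            "end_date": source_metric.get("end_date"),
            "fiscal_year": source_metric.get("fiscal_year"),
            "form": source_metric.get("form"),
        }
    return {"value": value}

# New format: temporal fields are inherited from the source metric
revenue = {"value": 723900000, "end_date": "2024-09-30", "fiscal_year": 2024, "form": "10-K"}
margin = create_temporal_metric(-35.30, revenue)
assert margin["end_date"] == "2024-09-30" and margin["form"] == "10-K"

# Old format: a missing/plain source degrades gracefully to {"value": ...}
assert create_temporal_metric(-35.30, None) == {"value": -35.30}
```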
frontend/src/components/MCPDataPanel.tsx (CHANGED)
```diff
@@ -167,11 +167,26 @@ function inferDataSource(category: string, metric: string, form?: string, dataSo
 }
 
 // Infer data type from form and metric
-function inferDataType(form?: string, metric?: string): string {
+function inferDataType(form?: string, metric?: string, source?: string): string {
   if (form === '10-K') return 'FY'
   if (form === '10-Q') return 'Q'
 
   const lowerMetric = (metric || '').toLowerCase()
+
+  // Valuation metrics are spot/current prices (not TTM)
+  const spotMetrics = [
+    'current_price', 'market_cap', 'enterprise_value',
+    'trailing_pe', 'forward_pe', 'pb_ratio', 'ps_ratio',
+    'trailing_peg', 'forward_peg', 'ev_ebitda', 'ev_revenue',
+    'price_to_fcf', 'dividend_yield'
+  ]
+  if (spotMetrics.includes(lowerMetric)) return 'Spot'
+
+  // Growth metrics are year-over-year
+  const yoyMetrics = ['revenue_growth', 'earnings_growth']
+  if (yoyMetrics.includes(lowerMetric)) return 'YoY'
+
+  // Volatility/macro metrics
   if (['vix', 'vxn'].includes(lowerMetric)) return 'Daily'
   if (['gdp_growth'].includes(lowerMetric)) return 'Quarterly'
   if (['interest_rate', 'cpi_inflation', 'unemployment'].includes(lowerMetric)) return 'Monthly'
@@ -182,6 +197,80 @@ function inferDataType(form?: string, metric?: string): string {
   return 'TTM'
 }
 
+// Extract date from multiple possible field names
+function extractDate(item: Record<string, unknown>): string | undefined {
+  // Check multiple possible date field names
+  const dateFields = ['datetime', 'published_date', 'date', 'publishedAt', 'timestamp', 'created_at']
+  for (const field of dateFields) {
+    if (item[field]) {
+      return String(item[field])
+    }
+  }
+  return undefined
+}
+
+// Normalize various date formats to YYYY-MM-DD
+function normalizeDate(dateStr: string | undefined | null): string {
+  if (!dateStr) return '-'
+
+  const str = String(dateStr).trim()
+
+  // Already a dash or empty
+  if (str === '-' || str === '') return '-'
+
+  // Quarter format: 2025Q3 -> 2025-09-30 (BEA quarters: Q1=Mar, Q2=Jun, Q3=Sep, Q4=Dec)
+  const quarterMatch = str.match(/^(\d{4})Q(\d)$/)
+  if (quarterMatch) {
+    const year = quarterMatch[1]
+    const quarter = parseInt(quarterMatch[2], 10)
+    // BEA quarter end dates: Q1=03-31, Q2=06-30, Q3=09-30, Q4=12-31
+    const quarterEndDates: Record<number, string> = {
+      1: '03-31',
+      2: '06-30',
+      3: '09-30',
+      4: '12-31'
+    }
+    return `${year}-${quarterEndDates[quarter] || '12-31'}`
+  }
+
+  // Month-year format: 2025-November -> 2025-11-30 (last day of month)
+  const monthYearMatch = str.match(/^(\d{4})-(\w+)$/)
+  if (monthYearMatch) {
+    const year = parseInt(monthYearMatch[1], 10)
+    const monthName = monthYearMatch[2].toLowerCase()
+    const monthMap: Record<string, number> = {
+      january: 1, february: 2, march: 3, april: 4, may: 5, june: 6,
+      july: 7, august: 8, september: 9, october: 10, november: 11, december: 12
+    }
+    const month = monthMap[monthName]
+    if (month) {
+      // Get last day of month
+      const lastDay = new Date(year, month, 0).getDate()
+      return `${year}-${String(month).padStart(2, '0')}-${String(lastDay).padStart(2, '0')}`
+    }
+  }
+
+  // Compact format: 20260108 -> 2026-01-08
+  const compactMatch = str.match(/^(\d{4})(\d{2})(\d{2})$/)
+  if (compactMatch) {
+    return `${compactMatch[1]}-${compactMatch[2]}-${compactMatch[3]}`
+  }
+
+  // ISO format already: YYYY-MM-DD - return as is
+  if (/^\d{4}-\d{2}-\d{2}$/.test(str)) {
+    return str
+  }
+
+  // ISO datetime: YYYY-MM-DDTHH:MM:SS -> YYYY-MM-DD
+  const isoMatch = str.match(/^(\d{4}-\d{2}-\d{2})T/)
+  if (isoMatch) {
+    return isoMatch[1]
+  }
+
+  // Return original if no pattern matches
+  return str
+}
+
 // Format fiscal period label (e.g., "FY 2023" or "Q3 2024")
 function formatFiscalPeriod(form?: string, fiscalYear?: number, endDate?: string): string | null {
   if (!fiscalYear) return null
@@ -257,7 +346,7 @@ export function MCPDataPanel({ metrics, rawData, companyName, ticker, exchange,
         metric: m.metric,
         value: formatValue(m.value, m.metric),
         dataType: inferDataType(m.form, m.metric),
-        asOf: m.endDate,
+        asOf: normalizeDate(m.endDate),
         source: inferDataSource(cat, m.metric, m.form, m.dataSource),
         category: cat.charAt(0).toUpperCase() + cat.slice(1)
       })
@@ -290,7 +379,7 @@ export function MCPDataPanel({ metrics, rawData, companyName, ticker, exchange,
         articles.push({
           title: String(a.title || a.content || 'News article'),
           url: String(a.url || '#'),
-          date:
+          date: extractDate(a),
           source: a.source ? String(a.source) : 'Tavily'
         })
       }
@@ -303,7 +392,7 @@ export function MCPDataPanel({ metrics, rawData, companyName, ticker, exchange,
         articles.push({
           title: a.title || 'News article',
           url: a.url || '#',
-          date: a
+          date: extractDate(a as Record<string, unknown>),
           source: a.source || 'Tavily'
         })
       }
@@ -339,7 +428,7 @@ export function MCPDataPanel({ metrics, rawData, companyName, ticker, exchange,
         results.push({
           title: String(item.title || item.content || `${source} item`),
           url: String(item.url || '#'),
-          date:
+          date: extractDate(item),
           source,
           subreddit: item.subreddit ? String(item.subreddit) : undefined
         })
@@ -363,7 +452,7 @@ export function MCPDataPanel({ metrics, rawData, companyName, ticker, exchange,
     for (const article of newsArticles) {
       rows.push({
         title: article.title,
-        date: article.date
+        date: normalizeDate(article.date),
         source: article.source || 'Tavily',
         subreddit: '-',
         url: article.url,
@@ -375,7 +464,7 @@ export function MCPDataPanel({ metrics, rawData, companyName, ticker, exchange,
     for (const item of sentimentItems) {
      rows.push({
         title: item.title,
-        date: item.date
+        date: normalizeDate(item.date),
         source: item.source,
         subreddit: item.subreddit ? `r/${item.subreddit}` : '-',
         url: item.url,
```
src/nodes/analyzer.py (CHANGED)
```diff
@@ -595,39 +595,65 @@ def _extract_key_metrics(raw_data: str) -> dict:
         "net_income": _extract_temporal_metric(fin_data.get("net_income", {})),
     }
 
-    # Extract valuation
+    # Extract valuation (with temporal data)
     val = metrics.get("valuation", {})
     if val and "error" not in val:
         val_metrics = val.get("metrics", {})
         pe = val_metrics.get("pe_ratio", {})
+        # Get valuation date from sources or response-level
+        val_date = (
+            val.get("sources", {}).get("yahoo_finance", {}).get("regular_market_time")
+            or val.get("as_of")
+            or val.get("generated_at", "")[:10] if val.get("generated_at") else None
+        )
         extracted["valuation"] = {
-            "pe_trailing": pe.get("trailing") if isinstance(pe, dict) else pe,
-            "pe_forward": pe.get("forward") if isinstance(pe, dict) else None,
-            "pb_ratio": val_metrics.get("pb_ratio"),
-            "ps_ratio": val_metrics.get("ps_ratio"),
-            "ev_ebitda": val_metrics.get("ev_ebitda"),
+            "pe_trailing": {"value": pe.get("trailing") if isinstance(pe, dict) else pe, "end_date": val_date},
+            "pe_forward": {"value": pe.get("forward") if isinstance(pe, dict) else None, "end_date": val_date},
+            "pb_ratio": {"value": val_metrics.get("pb_ratio"), "end_date": val_date},
+            "ps_ratio": {"value": val_metrics.get("ps_ratio"), "end_date": val_date},
+            "ev_ebitda": {"value": val_metrics.get("ev_ebitda"), "end_date": val_date},
             "valuation_signal": val.get("overall_signal"),
+            "as_of": val_date,
         }
 
-    # Extract volatility
+    # Extract volatility (with temporal data)
     vol = metrics.get("volatility", {})
     if vol and "error" not in vol:
         vol_metrics = vol.get("metrics", {})
+        # Get response-level date as fallback
+        vol_date = vol.get("generated_at", "")[:10] if vol.get("generated_at") else None
+        # Extract each metric with its own date (or fallback to response date)
+        vix_data = vol_metrics.get("vix", {})
+        beta_data = vol_metrics.get("beta", {})
+        hv_data = vol_metrics.get("historical_volatility", {})
         extracted["volatility"] = {
-            "beta":
-
-            "
+            "beta": {"value": beta_data.get("value") if isinstance(beta_data, dict) else beta_data,
+                     "end_date": beta_data.get("date") or vol_date if isinstance(beta_data, dict) else vol_date},
+            "vix": {"value": vix_data.get("value") if isinstance(vix_data, dict) else vix_data,
+                    "end_date": vix_data.get("date") or vol_date if isinstance(vix_data, dict) else vol_date},
+            "historical_volatility": {"value": hv_data.get("value") if isinstance(hv_data, dict) else hv_data,
+                                      "end_date": hv_data.get("date") or vol_date if isinstance(hv_data, dict) else vol_date},
+            "as_of": vol_date,
         }
 
-    # Extract macro
+    # Extract macro (with temporal data)
     macro = metrics.get("macro", {})
     if macro and "error" not in macro:
         macro_metrics = macro.get("metrics", {})
+        # Each macro metric has its own date/period
+        gdp = macro_metrics.get("gdp_growth", {})
+        interest = macro_metrics.get("interest_rate", {})
+        inflation = macro_metrics.get("cpi_inflation", {})
+        unemp = macro_metrics.get("unemployment", {})
         extracted["macro"] = {
-            "gdp_growth":
-
-            "
+            "gdp_growth": {"value": gdp.get("value") if isinstance(gdp, dict) else gdp,
+                           "end_date": gdp.get("date") or gdp.get("period") if isinstance(gdp, dict) else None},
+            "interest_rate": {"value": interest.get("value") if isinstance(interest, dict) else interest,
+                              "end_date": interest.get("date") if isinstance(interest, dict) else None},
+            "inflation": {"value": inflation.get("value") if isinstance(inflation, dict) else inflation,
+                          "end_date": inflation.get("date") or inflation.get("period") if isinstance(inflation, dict) else None},
+            "unemployment": {"value": unemp.get("value") if isinstance(unemp, dict) else unemp,
+                             "end_date": unemp.get("date") or unemp.get("period") if isinstance(unemp, dict) else None},
         }
 
     # Extract news with VADER sentiment
```
src/nodes/critic.py (CHANGED)
```diff
@@ -4,7 +4,11 @@ import json
 import time
 
 # Layer 4: Deterministic numeric validation
-from src.utils.numeric_validator import
+from src.utils.numeric_validator import (
+    validate_numeric_accuracy,
+    validate_uncited_numbers,
+    validate_minimum_citations,
+)
 from src.nodes.analyzer import _verify_reference_integrity
 
 
@@ -402,6 +406,75 @@ def critic_node(state, workflow_id=None, progress_store=None):
         else:
             _add_activity_log(workflow_id, progress_store, "critic",
                               "Numeric validation: all citations verified")
+
+        # ============================================================
+        # LAYER 3: Uncited Number Detection
+        # ============================================================
+        uncited_warnings = validate_uncited_numbers(report, metric_ref)
+        if uncited_warnings:
+            _add_activity_log(workflow_id, progress_store, "critic",
+                              f"Uncited numbers: {len(uncited_warnings)} suspicious value(s) found")
+
+            # Add to hallucinations_detected
+            if "hallucinations_detected" not in result:
+                result["hallucinations_detected"] = []
+            result["hallucinations_detected"].extend(uncited_warnings)
+
+            # Cap score and add feedback (less severe than mismatches)
+            if scores.get("evidence_grounding", 0) > 6:
+                scores["evidence_grounding"] = 6
+            if "hard_floor_violations" not in result:
+                result["hard_floor_violations"] = []
+            result["hard_floor_violations"].append(
+                "Uncited metric-like numbers found - evidence_grounding capped at 6"
+            )
+
+            # Add feedback
+            if "actionable_feedback" not in result:
+                result["actionable_feedback"] = []
+            result["actionable_feedback"].append(
+                f"Add [M##] citations for {len(uncited_warnings)} uncited metric value(s)"
+            )
+
+            # Recalculate and reject
+            weighted_score = calculate_weighted_score(scores)
+            result["weighted_score"] = weighted_score
+            status = "REJECTED"
+            result["status"] = status
+
+        # ============================================================
+        # LAYER 2: Minimum Citation Count Enforcement
+        # ============================================================
+        citation_check = validate_minimum_citations(report, metric_ref, min_ratio=0.3)
+        if not citation_check["valid"]:
+            _add_activity_log(workflow_id, progress_store, "critic",
+                              f"Citation coverage insufficient: {citation_check['message']}")
+
+            # Cap score severely - this indicates LLM ignored citation instructions
+            if scores.get("evidence_grounding", 0) > 3:
+                scores["evidence_grounding"] = 3
+            if "hard_floor_violations" not in result:
+                result["hard_floor_violations"] = []
+            result["hard_floor_violations"].append(
+                f"Insufficient citation coverage ({citation_check['ratio']:.0%}) - evidence_grounding capped at 3"
+            )
+
+            # Add feedback
+            if "actionable_feedback" not in result:
+                result["actionable_feedback"] = []
+            result["actionable_feedback"].insert(0,
+                f"CRITICAL: Add more [M##] citations. Current: {citation_check['citations_found']}/{citation_check['metrics_available']} ({citation_check['ratio']:.0%})"
+            )
     else:
         _add_activity_log(workflow_id, progress_store, "critic",
                           "Warning: metric reference integrity check failed - skipping numeric validation")
```
|
| 467 |
+
)
|
| 468 |
+
|
| 469 |
+
# Recalculate and reject
|
| 470 |
+
weighted_score = calculate_weighted_score(scores)
|
| 471 |
+
result["weighted_score"] = weighted_score
|
| 472 |
+
status = "REJECTED"
|
| 473 |
+
result["status"] = status
|
| 474 |
+
else:
|
| 475 |
+
_add_activity_log(workflow_id, progress_store, "critic",
|
| 476 |
+
f"Citation coverage OK: {citation_check['message']}")
|
| 477 |
+
|
| 478 |
else:
|
| 479 |
_add_activity_log(workflow_id, progress_store, "critic",
|
| 480 |
"Warning: metric reference integrity check failed - skipping numeric validation")
|
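The Layer 2 and Layer 3 branches above follow one shared rejection pattern: cap a scoring dimension, record a hard-floor violation, recompute the weighted score, and flip the status to REJECTED. A minimal standalone sketch of that pattern (the equal weighting and the `apply_violation` helper are illustrative, not the repo's actual `calculate_weighted_score` or critic code):

```python
def calculate_weighted_score(scores: dict) -> float:
    # Illustrative equal weighting; the real critic applies its own weights.
    return sum(scores.values()) / len(scores)

def apply_violation(result: dict, scores: dict, dimension: str, cap: int, note: str) -> dict:
    """Cap one scoring dimension, log the violation, and reject the draft."""
    if scores.get(dimension, 0) > cap:
        scores[dimension] = cap
        result.setdefault("hard_floor_violations", []).append(note)
    result["weighted_score"] = calculate_weighted_score(scores)
    result["status"] = "REJECTED"
    return result

scores = {"evidence_grounding": 9, "clarity": 8}
result = apply_violation({}, scores, "evidence_grounding", 6,
                         "Uncited metric-like numbers found - evidence_grounding capped at 6")
print(result["status"], scores["evidence_grounding"], result["weighted_score"])
# → REJECTED 6 7.0
```

Note that in the committed code the score is recomputed after the cap, so a capped dimension lowers `weighted_score` before the rejection decision is recorded.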
src/services/workflow_store.py
CHANGED
@@ -168,7 +168,12 @@ def _extract_metrics_from_raw_data(raw_data: dict) -> list:
     yf_val = val_all.get("yahoo_finance", {}).get("data", {})
 
     # Get valuation fetch date if available (point-in-time data)
-    val_fetch_date = …
+    # MCP server returns regular_market_time from Yahoo Finance quote data
+    val_fetch_date = (
+        yf_val.get("_fetch_date")
+        or yf_val.get("fetch_date")
+        or multi_source.get("valuation_all", {}).get("yahoo_finance", {}).get("regular_market_time")
+    )
 
     val_metrics = [
         "market_cap", "enterprise_value", "trailing_pe", "forward_pe",

@@ -221,8 +226,12 @@ def _extract_metrics_from_raw_data(raw_data: dict) -> list:
         metrics.append(entry)
 
     # Beta and volatility from Yahoo Finance
-    # Get volatility fetch date if available
-    vol_fetch_date = …
+    # Get volatility fetch date if available (MCP returns generated_at at response level)
+    vol_fetch_date = (
+        yf_vol.get("_fetch_date")
+        or yf_vol.get("fetch_date")
+        or vol_all.get("generated_at", "")[:10] if vol_all.get("generated_at") else None
+    )
 
     for vol_metric in ["beta", "historical_volatility", "implied_volatility"]:
         metric_data = yf_vol.get(vol_metric)
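One thing worth double-checking in the volatility fallback above: in Python a conditional expression binds more loosely than `or`, so `a or b or c[:10] if cond else None` parses as `(a or b or c[:10]) if cond else None` — if `generated_at` is absent, the whole expression yields `None` even when `_fetch_date` is set. A parenthesized sketch of the presumably intended fallback chain (`pick_fetch_date` is a hypothetical helper, not in the repo):

```python
def pick_fetch_date(yf_vol: dict, vol_all: dict):
    """Fallback chain: explicit fetch dates first, then generated_at trimmed to YYYY-MM-DD."""
    generated = vol_all.get("generated_at")
    return (
        yf_vol.get("_fetch_date")
        or yf_vol.get("fetch_date")
        or (generated[:10] if generated else None)  # parentheses keep the ternary local
    )

# _fetch_date wins even when generated_at is absent
print(pick_fetch_date({"_fetch_date": "2025-01-15"}, {}))             # → 2025-01-15
# generated_at timestamp is trimmed to the date
print(pick_fetch_date({}, {"generated_at": "2025-01-15T09:30:00Z"}))  # → 2025-01-15
```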
src/utils/numeric_validator.py
CHANGED
@@ -224,3 +224,171 @@ def validate_numeric_accuracy(swot_text: str, metric_reference: dict) -> list[str]
             errors.append(f"Invalid reference: {ref_id} not in metric table")
 
     return errors
+
+
+# ============================================================
+# LAYER 3: Uncited Number Detection
+# ============================================================
+
+# Pattern to match metric-like numbers (will filter out cited ones programmatically)
+# Matches: $56.6B, $394M, 25.3%, 12.14, 0.84x, etc.
+METRIC_NUMBER_PATTERN = re.compile(
+    r'('
+    r'\$[\d,]+\.?\d*[BMK]?'  # Currency: $56.6B, $394M, $1,234
+    r'|'
+    r'[\d,]+\.?\d*%'  # Percentage: 25.3%, 12%
+    r'|'
+    r'[\d,]+\.\d+x'  # Ratio with x: 1.5x, 12.3x
+    r')',
+    re.IGNORECASE
+)
+
+# Keywords that indicate a number is likely a metric value
+METRIC_CONTEXT_KEYWORDS = [
+    'revenue', 'income', 'profit', 'margin', 'cap', 'market cap', 'enterprise value',
+    'p/e', 'pe ratio', 'p/b', 'pb ratio', 'p/s', 'ps ratio', 'ev/ebitda',
+    'beta', 'volatility', 'vix', 'growth', 'yield', 'dividend',
+    'debt', 'equity', 'assets', 'liabilities', 'cash flow', 'fcf',
+    'eps', 'earnings', 'roi', 'roe', 'roa', 'ebitda',
+    'gdp', 'inflation', 'unemployment', 'interest rate',
+]
+
+
+def find_uncited_numbers(swot_text: str, metric_reference: dict) -> list[dict]:
+    """
+    Find numbers that look like metrics but don't have [M##] citations.
+
+    Returns list of suspicious uncited numbers with context.
+    """
+    uncited = []
+
+    # Get all cited positions to exclude
+    cited_matches = list(CITATION_PATTERN.finditer(swot_text))
+    cited_positions = set()
+    for match in cited_matches:
+        # Mark the entire citation span as "cited"
+        cited_positions.update(range(match.start(), match.end()))
+
+    # Find all metric-like numbers
+    for match in METRIC_NUMBER_PATTERN.finditer(swot_text):
+        # Skip if this position overlaps with a citation
+        if any(pos in cited_positions for pos in range(match.start(), match.end())):
+            continue
+
+        value_str = match.group(1)
+        normalized = normalize_value(value_str)
+
+        if normalized is None:
+            continue
+
+        # Get surrounding context (50 chars before and after)
+        start = max(0, match.start() - 50)
+        end = min(len(swot_text), match.end() + 50)
+        context = swot_text[start:end].replace('\n', ' ')
+
+        # Check if context contains metric-related keywords
+        context_lower = context.lower()
+        has_metric_context = any(kw in context_lower for kw in METRIC_CONTEXT_KEYWORDS)
+
+        # Check if value matches any known metric (within tolerance)
+        matches_known_metric = False
+        matched_metric_key = None
+        for ref_id, ref_entry in metric_reference.items():
+            expected = ref_entry.get("raw_value")
+            if expected and values_match(normalized, expected):
+                matches_known_metric = True
+                matched_metric_key = ref_entry.get("key")
+                break
+
+        # Flag as suspicious if it looks like a metric
+        if has_metric_context or matches_known_metric:
+            uncited.append({
+                "value": value_str,
+                "normalized": normalized,
+                "position": match.start(),
+                "context": context.strip(),
+                "has_metric_context": has_metric_context,
+                "matches_known_metric": matches_known_metric,
+                "matched_metric_key": matched_metric_key,
+            })
+
+    return uncited
+
+
+def validate_uncited_numbers(swot_text: str, metric_reference: dict) -> list[str]:
+    """
+    Validate that metric-like numbers have proper citations.
+
+    Returns list of warnings for uncited numbers that should have citations.
+    """
+    if not metric_reference:
+        return []
+
+    uncited = find_uncited_numbers(swot_text, metric_reference)
+    warnings = []
+
+    for item in uncited:
+        if item["matches_known_metric"]:
+            # This number matches a known metric - MUST have citation
+            warnings.append(
+                f"Uncited metric value: {item['value']} appears to be {item['matched_metric_key']} - add [M##] citation"
+            )
+        elif item["has_metric_context"]:
+            # Number in metric context without citation - suspicious
+            warnings.append(
+                f"Uncited number in metric context: {item['value']} - verify source or add citation"
+            )
+
+    return warnings
+
+
+def get_citation_count(swot_text: str) -> int:
+    """Count the number of [M##] citations in the text."""
+    return len(CITATION_PATTERN.findall(swot_text))
+
+
+def validate_minimum_citations(swot_text: str, metric_reference: dict, min_ratio: float = 0.5) -> dict:
+    """
+    Check if SWOT has enough citations relative to available metrics.
+
+    Args:
+        swot_text: The SWOT analysis output
+        metric_reference: Available metrics
+        min_ratio: Minimum ratio of citations to available metrics (default 0.5 = 50%)
+
+    Returns:
+        {
+            "valid": bool,
+            "citations_found": int,
+            "metrics_available": int,
+            "ratio": float,
+            "message": str
+        }
+    """
+    citations_found = get_citation_count(swot_text)
+    metrics_available = len(metric_reference) if metric_reference else 0
+
+    if metrics_available == 0:
+        return {
+            "valid": True,
+            "citations_found": citations_found,
+            "metrics_available": 0,
+            "ratio": 0,
+            "message": "No metrics available for citation"
+        }
+
+    ratio = citations_found / metrics_available
+    valid = ratio >= min_ratio
+
+    if valid:
+        message = f"Citation coverage: {citations_found}/{metrics_available} ({ratio:.0%})"
+    else:
+        message = f"Insufficient citations: {citations_found}/{metrics_available} ({ratio:.0%}) - minimum {min_ratio:.0%} required"
+
+    return {
+        "valid": valid,
+        "citations_found": citations_found,
+        "metrics_available": metrics_available,
+        "ratio": ratio,
+        "message": message
+    }
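The coverage check above reduces to counting `[M##]` tags and comparing the count to the size of the metric table. A self-contained sketch of that logic (the `CITATION_PATTERN` regex here is an assumed shape — the repo defines its own pattern, which is not shown in this diff):

```python
import re

# Assumed shape of CITATION_PATTERN; numeric_validator.py defines the real one.
CITATION_PATTERN = re.compile(r"\[M\d{2}\]")

def citation_coverage(swot_text: str, metrics_available: int, min_ratio: float = 0.3) -> dict:
    """Count [M##] citations and compare against the number of available metrics."""
    found = len(CITATION_PATTERN.findall(swot_text))
    ratio = found / metrics_available if metrics_available else 0.0
    return {"citations_found": found, "ratio": ratio, "valid": ratio >= min_ratio}

text = "Revenue grew 12% [M01]; market cap is $2.9T [M02], while margins held at 25.3%."
print(citation_coverage(text, 10))
# → {'citations_found': 2, 'ratio': 0.2, 'valid': False}
```

The critic wires this in with `min_ratio=0.3`, so a draft citing fewer than 30% of the available metrics is rejected even if every citation it does carry is numerically accurate.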
static/assets/{index-Cb1-3_-g.js → index-juxGlBoI.js}
RENAMED
The diff for this file is too large to render. See raw diff.
static/index.html
CHANGED
@@ -5,7 +5,7 @@
     <link rel="icon" type="image/svg+xml" href="/vite.svg" />
     <meta name="viewport" content="width=device-width, initial-scale=1.0" />
     <title>frontend</title>
-    <script type="module" crossorigin src="/assets/index-Cb1-3_-g.js"></script>
+    <script type="module" crossorigin src="/assets/index-juxGlBoI.js"></script>
     <link rel="stylesheet" crossorigin href="/assets/index-DCSmN--O.css">
   </head>
   <body>
|