Commit 332f271: Initial deploy to Hugging Face Spaces

Clean history with no binary model files.

Co-Authored-By: Claude Sonnet 4.6 <noreply@anthropic.com>
- .gitattributes +35 -0
- CLAUDE.md +110 -0
- Dockerfile +36 -0
- README.md +179 -0
- be/agent_service.py +206 -0
- be/agent_tools.py +648 -0
- be/app.py +127 -0
- be/chat_routes.py +204 -0
- be/chat_service.py +385 -0
- be/config.py +36 -0
- be/forecast_model.py +410 -0
- be/forecast_routes.py +140 -0
- be/forecast_service.py +235 -0
- be/llm_client.py +267 -0
- be/polygon_api.py +90 -0
- be/rag_pipeline.py +450 -0
- be/requirements.txt +19 -0
- be/scraper.py +134 -0
- be/sentiment_analyzer.py +212 -0
- be/sentiment_routes.py +211 -0
- be/sentiment_service.py +470 -0
- be/social_scrapers.py +525 -0
- company_tickers.json +0 -0
- fe/app.js +2479 -0
- fe/index.html +304 -0
- fe/styles.css +1942 -0
.gitattributes
ADDED
@@ -0,0 +1,35 @@

```text
*.7z filter=lfs diff=lfs merge=lfs -text
*.arrow filter=lfs diff=lfs merge=lfs -text
*.bin filter=lfs diff=lfs merge=lfs -text
*.bz2 filter=lfs diff=lfs merge=lfs -text
*.ckpt filter=lfs diff=lfs merge=lfs -text
*.ftz filter=lfs diff=lfs merge=lfs -text
*.gz filter=lfs diff=lfs merge=lfs -text
*.h5 filter=lfs diff=lfs merge=lfs -text
*.joblib filter=lfs diff=lfs merge=lfs -text
*.lfs.* filter=lfs diff=lfs merge=lfs -text
*.mlmodel filter=lfs diff=lfs merge=lfs -text
*.model filter=lfs diff=lfs merge=lfs -text
*.msgpack filter=lfs diff=lfs merge=lfs -text
*.npy filter=lfs diff=lfs merge=lfs -text
*.npz filter=lfs diff=lfs merge=lfs -text
*.onnx filter=lfs diff=lfs merge=lfs -text
*.ot filter=lfs diff=lfs merge=lfs -text
*.parquet filter=lfs diff=lfs merge=lfs -text
*.pb filter=lfs diff=lfs merge=lfs -text
*.pickle filter=lfs diff=lfs merge=lfs -text
*.pkl filter=lfs diff=lfs merge=lfs -text
*.pt filter=lfs diff=lfs merge=lfs -text
*.pth filter=lfs diff=lfs merge=lfs -text
*.rar filter=lfs diff=lfs merge=lfs -text
*.safetensors filter=lfs diff=lfs merge=lfs -text
saved_model/**/* filter=lfs diff=lfs merge=lfs -text
*.tar.* filter=lfs diff=lfs merge=lfs -text
*.tar filter=lfs diff=lfs merge=lfs -text
*.tflite filter=lfs diff=lfs merge=lfs -text
*.tgz filter=lfs diff=lfs merge=lfs -text
*.wasm filter=lfs diff=lfs merge=lfs -text
*.xz filter=lfs diff=lfs merge=lfs -text
*.zip filter=lfs diff=lfs merge=lfs -text
*.zst filter=lfs diff=lfs merge=lfs -text
*tfevents* filter=lfs diff=lfs merge=lfs -text
```
CLAUDE.md
ADDED
@@ -0,0 +1,110 @@

# CLAUDE.md

## Quick Start

```bash
cd be && pip install -r requirements.txt
# Add POLYGON_API_KEY and GEMINI_API_KEY to .env
python app.py
# Open http://localhost:5000
```

## Project Overview

Stock research assistant with Polygon.io data, AI chatbot (ReAct agent with tool calling), and social media sentiment analysis.

## Architecture

```
fe/                    Vanilla JS frontend (no frameworks)
be/                    Python Flask backend
  app.py               Main Flask app, stock data routes
  agent_service.py     ReAct agent loop (tool calling + streaming)
  agent_tools.py       10 tool schemas + ToolExecutor with 3-layer cache
  llm_client.py        AgentLLMClient (google-genai SDK) + ConversationManager
  chat_routes.py       SSE streaming with structured events
  rag_pipeline.py      FAISS vector store, embeddings (google-genai SDK)
  sentiment_*.py       Social sentiment (FinBERT, scrapers)
  forecast_*.py        LSTM price forecasting
  polygon_api.py       Polygon.io wrapper
  chat_service.py      Legacy RAG chat (replaced by agent_service.py)
```

## Chat Agent Architecture

The chat uses a **ReAct-style agent** with Gemini 2.0 Flash function calling:

1. User sends message → `agent_service.py` builds conversation contents
2. Gemini decides which tools to call based on the question
3. `ToolExecutor` runs tools with 3-layer caching (frontend context → server cache → API)
4. Results fed back to Gemini as function responses
5. Loop repeats (max 5 iterations) until Gemini returns a text response
6. Final text streamed to frontend via structured SSE events

**10 Available Tools:** `get_stock_quote`, `get_company_info`, `get_financials`, `get_news`, `search_knowledge_base`, `analyze_sentiment`, `get_price_forecast`, `get_dividends`, `get_stock_splits`, `get_price_history`

**SSE Event Protocol:**
- `event: tool_call` — Agent is calling a tool (status: calling/complete/error)
- `event: text` — Response text chunk
- `event: done` — Stream complete
- `event: error` — Fatal error

## Key Technical Decisions

- **Frontend**: Pure vanilla JS - no frameworks allowed
- **AI SDK**: `google-genai` (new SDK) for chat + embeddings. Old `google-generativeai` still installed but only used by legacy `GeminiClient`
- **Agent**: Gemini function calling with manual dispatch (automatic calling disabled)
- **Vector DB**: FAISS local storage (`be/faiss_index/`)
- **Scraping**: cloudscraper for Cloudflare bypass (StockTwits, Reddit)
- **Sentiment**: FinBERT model (lazy-loaded, ~500MB)

## API Keys (.env)

```bash
POLYGON_API_KEY=        # Stock data (5 calls/min free tier)
GEMINI_API_KEY=         # Chat + embeddings (free)
TWITTER_BEARER_TOKEN=   # Optional, paid ($100+/month)
```

## Rate Limits

- **Polygon.io**: 5 calls/min (free tier) — handled by 3-layer cache in ToolExecutor
  - Layer 1: Frontend sends cached data as `context` in chat request
  - Layer 2: Server-side TTL cache (5 min) in `ToolCache`
  - Layer 3: Live API call (last resort)
- **Gemini**: 15 RPM chat, 1500 RPM embeddings (free tier)

## Main Endpoints

| Endpoint | Purpose |
|----------|---------|
| `POST /api/chat/message` | Agent chat with structured SSE streaming |
| `POST /api/chat/scrape-articles` | RAG article indexing into FAISS |
| `POST /api/sentiment/analyze` | Sentiment analysis |
| `POST /api/forecast/predict/<ticker>` | LSTM price forecast |

## Where to Find Details

| Topic | Location |
|-------|----------|
| Agent loop & tool calling | `be/agent_service.py`, `be/agent_tools.py` |
| Tool schemas (10 tools) | `be/agent_tools.py` - TOOL_DECLARATIONS |
| 3-layer caching | `be/agent_tools.py` - ToolExecutor._check_frontend_context() |
| LLM client (function calling) | `be/llm_client.py` - AgentLLMClient |
| Frontend SSE parser | `fe/app.js` - parseSSEBuffer(), sendChatMessage() |
| Frontend caching strategy | `fe/app.js` - stockCache object |
| Chart implementation | `fe/app.js` - chartState, drawChart functions |
| RAG pipeline / FAISS | `be/rag_pipeline.py` |
| Sentiment bias corrections | `be/sentiment_service.py` - aggregate calculation |
| Social scrapers | `be/social_scrapers.py` |
| Design system | `.design-engineer/system.md` |

## Development Rules

1. Keep frontend vanilla JS - no frameworks
2. Respect caching TTLs (see stockCache in app.js)
3. FAISS namespaces: `news:` for articles, `sentiment:` for posts
4. FinBERT has bullish bias - see sentiment_service.py for corrections
5. Tool responses use `role="tool"` in Gemini API (not "user")
6. Embeddings use `result.embeddings[0].values` with new google-genai SDK
7. `chat_service.py` is legacy — all new chat work goes through `agent_service.py`
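The Layer 1 → Layer 2 → Layer 3 lookup order described under Rate Limits can be sketched as follows. This is an illustrative sketch, not the project's actual code: `ToolCache` borrows the name mentioned in the docs, but `fetch_quote` and its signature are hypothetical.

```python
import time


class ToolCache:
    """Server-side TTL cache (Layer 2). Entries expire after ttl seconds."""

    def __init__(self, ttl=300):
        self.ttl = ttl
        self._store = {}  # key -> (timestamp, value)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        ts, value = entry
        if time.time() - ts > self.ttl:
            del self._store[key]  # stale entry, drop it
            return None
        return value

    def set(self, key, value):
        self._store[key] = (time.time(), value)


def fetch_quote(ticker, frontend_context, cache, api_call):
    """Check Layer 1 (frontend context), then Layer 2 (TTL cache),
    then fall back to Layer 3 (a live, rate-limited API call)."""
    ctx = (frontend_context or {}).get("quote")
    if ctx is not None:
        return ctx
    cached = cache.get(("quote", ticker))
    if cached is not None:
        return cached
    result = api_call(ticker)  # last resort: consumes a Polygon call
    cache.set(("quote", ticker), result)
    return result
```

The point of the ordering is that the two cheap layers absorb repeat questions within a session, so the 5 calls/min Polygon budget is only spent on genuinely new data.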
Dockerfile
ADDED
@@ -0,0 +1,36 @@

```dockerfile
FROM python:3.11-slim

WORKDIR /app

# System deps needed by faiss-cpu, lxml, and torch
RUN apt-get update && apt-get install -y --no-install-recommends \
    gcc \
    g++ \
    && rm -rf /var/lib/apt/lists/*

# Install CPU-only torch first to avoid the 2.5 GB CUDA download
RUN pip install --no-cache-dir \
    torch==2.1.0+cpu \
    --extra-index-url https://download.pytorch.org/whl/cpu

# Install gunicorn and remaining dependencies
COPY be/requirements.txt .
RUN pip install --no-cache-dir gunicorn && \
    pip install --no-cache-dir -r requirements.txt

# Copy application code
COPY be/ ./be/
COPY fe/ ./fe/
COPY company_tickers.json .

# Create writable directories for runtime data
# (FAISS index and forecast models are regenerated each restart on free tier)
RUN mkdir -p /app/be/faiss_index /app/be/forecast_models /app/be/sentiment_cache

WORKDIR /app/be

EXPOSE 7860

# Run via sh -c so ${PORT:-7860} is expanded at runtime
# gthread workers allow concurrent SSE streaming without blocking
CMD ["sh", "-c", "gunicorn --bind 0.0.0.0:${PORT:-7860} --worker-class gthread --workers 1 --threads 4 --timeout 300 app:app"]
```
README.md
ADDED
@@ -0,0 +1,179 @@

---
title: MarketLens
emoji: 📈
colorFrom: blue
colorTo: green
sdk: docker
app_port: 7860
pinned: false
---

# MarketLens - Stock Research Assistant

A personal stock research assistant that provides real-time market information and AI-powered insights. Users can explore stock data, news, earnings, and performance metrics powered by the Polygon API.

## Project Structure

```
MarketLens/
├── be/                      # Python backend
│   ├── app.py               # Flask application
│   ├── config.py            # Configuration management
│   ├── polygon_api.py       # Polygon API client
│   └── requirements.txt     # Python dependencies
├── fe/                      # JavaScript frontend
│   ├── index.html           # Main HTML page
│   ├── app.js               # Frontend logic
│   └── styles.css           # Styling
├── company_tickers.json     # List of stock tickers
├── .env.example             # Environment variables template
└── README.md                # This file
```

## Features

- **Stock Selection**: Search and select from thousands of stock tickers
- **Real-time Data**: View current market data including price, volume, and market cap
- **Price Charts**: Interactive charts with multiple timeframes (1M, 3M, 6M, 1Y, 5Y)
- **Financial Data**: Access quarterly and annual financial statements
- **News Feed**: Latest news articles related to selected stocks
- **Clean UI**: Simple, responsive interface built with vanilla JavaScript

## Setup Instructions

### Prerequisites

- Python 3.8 or higher
- Polygon API key (get one at [polygon.io](https://polygon.io/))

### 1. Install Python Dependencies

Navigate to the project directory and install the required Python packages:

```bash
cd "MarketLens"
pip install -r be/requirements.txt
```

### 2. Configure API Key

Create a `.env` file in the project root directory:

```bash
cp .env.example .env
```

Edit the `.env` file and add your Polygon API key:

```
POLYGON_API_KEY=your_actual_api_key_here
PORT=5000
```

**Important**: Get your free Polygon API key from [https://polygon.io/](https://polygon.io/)

### 3. Run the Application

Start the Flask backend server:

```bash
cd be
python app.py
```

The application will start on `http://localhost:5000`.

### 4. Access the Application

Open your web browser and navigate to:

```
http://localhost:5000
```

## Usage

1. **Search for a Stock**: Use the search box to filter stocks by ticker symbol or company name
2. **Select a Stock**: Click on a stock from the dropdown list
3. **Explore Data**: Navigate through different tabs:
   - **Overview**: Key metrics and company description
   - **Chart**: Price performance over various timeframes
   - **Financials**: Quarterly and annual financial statements
   - **News**: Latest news articles about the company

## API Endpoints

The backend provides the following REST API endpoints:

- `GET /api/ticker/<ticker>/details` - Get detailed ticker information
- `GET /api/ticker/<ticker>/previous-close` - Get previous day's close data
- `GET /api/ticker/<ticker>/aggregates` - Get historical price data
- `GET /api/ticker/<ticker>/news` - Get news articles
- `GET /api/ticker/<ticker>/financials` - Get financial statements
- `GET /api/ticker/<ticker>/snapshot` - Get current market snapshot

## Technology Stack

### Backend
- **Flask**: Lightweight Python web framework
- **Requests**: HTTP library for API calls
- **python-dotenv**: Environment variable management
- **Flask-CORS**: Cross-origin resource sharing

### Frontend
- **Vanilla JavaScript**: No frameworks, pure JavaScript
- **HTML5 Canvas**: For rendering price charts
- **CSS3**: Modern styling with flexbox and grid

## Polygon API

This application uses the Polygon API to fetch:
- Real-time and historical stock prices
- Company information and details
- Financial statements
- News articles
- Market snapshots

API Documentation: [https://polygon.io/docs](https://polygon.io/docs)

## Development

### Running in Development Mode

The Flask server runs in debug mode by default, which provides:
- Auto-reload on code changes
- Detailed error messages
- Interactive debugger

### Environment Variables

- `POLYGON_API_KEY`: Your Polygon API key (required)
- `PORT`: Server port (default: 5000)

## Troubleshooting

### API Key Issues
- Ensure your `.env` file is in the project root directory
- Verify your API key is valid at [polygon.io](https://polygon.io/)
- Check that the API key is properly set in the `.env` file

### CORS Errors
- Make sure Flask-CORS is installed: `pip install Flask-CORS`
- The backend should be running on `http://localhost:5000`

### No Data Displayed
- Check browser console for error messages
- Verify the backend server is running
- Ensure your Polygon API key has sufficient permissions

## License

This is a personal project for educational and personal use.

## Future Enhancements

- AI-powered stock analysis and recommendations
- Portfolio tracking
- Real-time price updates with WebSockets
- Advanced charting with technical indicators
- User authentication and saved preferences
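All six ticker endpoints above share the `GET /api/ticker/<ticker>/<resource>` shape, so a small client helper can build the URLs. This is an illustrative snippet, not part of the codebase: `endpoint_url` is a hypothetical helper; only the route shape and default port come from the README.

```python
BASE_URL = "http://localhost:5000"  # default dev port per the README


def endpoint_url(base, ticker, resource):
    """Build a URL for the GET /api/ticker/<ticker>/<resource> routes."""
    return f"{base}/api/ticker/{ticker.upper()}/{resource}"


# Usage (requires the Flask server to be running):
#   import requests
#   resp = requests.get(endpoint_url(BASE_URL, "aapl", "previous-close"))
#   data = resp.json()
```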
be/agent_service.py
ADDED
@@ -0,0 +1,206 @@

```python
"""
ReAct-style agent service that replaces the static RAG chat pipeline.
Uses Gemini function calling to dynamically fetch data through tools.
"""

import json
import logging

from google.genai import types as genai_types
from llm_client import AgentLLMClient, ConversationManager
from agent_tools import TOOL_DECLARATIONS, ToolExecutor
from rag_pipeline import VectorStore, ContextRetriever
from polygon_api import PolygonAPI
from config import AGENT_MAX_ITERATIONS

logger = logging.getLogger(__name__)


class AgentService:
    """Orchestrates the ReAct agent loop with tool calling."""

    def __init__(self):
        self.polygon = PolygonAPI()
        self.vector_store = VectorStore()
        self.context_retriever = ContextRetriever(vector_store=self.vector_store)
        self.llm_client = AgentLLMClient()
        self.conversation_manager = ConversationManager()
        self.tool_executor = ToolExecutor(
            polygon_api=self.polygon,
            context_retriever=self.context_retriever,
            vector_store=self.vector_store,
        )

    def process_message(self, ticker, message, frontend_context, conversation_id):
        """
        Process a user message through the ReAct agent loop.

        Yields (event_type, data) tuples for SSE streaming:
        - ("tool_call", {"tool": str, "args": dict, "status": "calling"|"complete"|"error"})
        - ("text", str)
        - ("done", {})

        Args:
            ticker: Stock ticker symbol
            message: User message
            frontend_context: Cached data from frontend (Layer 1 cache)
            conversation_id: Conversation session ID
        """
        try:
            ticker = ticker.upper() if ticker else ""

            # Prime the tool executor with frontend context (Layer 1)
            self.tool_executor.set_context(frontend_context, ticker)

            # Build conversation contents from history
            history = self.conversation_manager.get_history(conversation_id)
            contents = self.llm_client.history_to_contents(history)

            # Append user message
            contents.append(self.llm_client.make_user_content(message))

            # Build config with tools
            config = self.llm_client.build_config(TOOL_DECLARATIONS, ticker)

            # ReAct loop
            for iteration in range(AGENT_MAX_ITERATIONS):
                logger.info(f"Agent iteration {iteration + 1}/{AGENT_MAX_ITERATIONS}")

                response = self.llm_client.generate(contents, config)
                function_calls, text_parts, response_content = self.llm_client.extract_parts(response)

                if not function_calls:
                    # Final text response — stream it in chunks
                    final_text = "".join(text_parts)
                    if not final_text:
                        final_text = "I wasn't able to generate a response. Please try again."

                    chunk_size = 20
                    for i in range(0, len(final_text), chunk_size):
                        yield ("text", final_text[i:i + chunk_size])

                    yield ("done", {})

                    # Save to conversation history (only user message + final text)
                    self.conversation_manager.add_message(conversation_id, "user", message)
                    self.conversation_manager.add_message(conversation_id, "assistant", final_text)
                    return

                # Process function calls
                contents.append(response_content)

                tool_response_parts = []
                for part in function_calls:
                    fc = part.function_call
                    tool_name = fc.name
                    tool_args = dict(fc.args) if fc.args else {}

                    # Notify frontend: tool call starting
                    yield ("tool_call", {
                        "tool": tool_name,
                        "args": tool_args,
                        "status": "calling",
                    })

                    # Execute the tool
                    result = self.tool_executor.execute(tool_name, tool_args)

                    # Notify frontend: tool call complete
                    if "error" in result:
                        yield ("tool_call", {
                            "tool": tool_name,
                            "status": "error",
                            "error": result["error"],
                        })
                    else:
                        yield ("tool_call", {
                            "tool": tool_name,
                            "status": "complete",
                        })

                    # Build function response Part for Gemini
                    tool_response_parts.append(
                        genai_types.Part.from_function_response(
                            name=tool_name, response={"result": result}
                        )
                    )

                # Add all tool responses as a single Content with multiple Parts
                # Gemini requires all function responses for a turn in one Content object
                contents.append(genai_types.Content(role="tool", parts=tool_response_parts))

            # Hit max iterations — provide a fallback
            yield ("text", "I gathered some information but reached the maximum number of analysis steps. Please try a more specific question.")
            yield ("done", {})

        except Exception as e:
            logger.error(f"Agent error: {e}", exc_info=True)
            yield ("error", {"message": f"An error occurred: {str(e)}"})

    def scrape_and_embed_articles(self, ticker, articles):
        """
        Background job to scrape and embed news articles into FAISS.
        Delegates to the same logic as the old ChatService.
        """
        from scraper import ArticleScraper
        from rag_pipeline import EmbeddingGenerator
        import concurrent.futures
        import hashlib

        scraper = ArticleScraper()
        embedding_gen = EmbeddingGenerator()

        results = {"scraped": 0, "embedded": 0, "failed": 0, "skipped": 0}

        def process_article(article):
            try:
                article_url = article.get("article_url", "")
                doc_id = f"{ticker}_news_{hashlib.md5(article_url.encode()).hexdigest()[:12]}"

                if self.vector_store.document_exists(doc_id):
                    return "skipped"

                content = scraper.scrape_article(article_url)
                if not content:
                    content = article.get("description", "")
                if not content or len(content) < 50:
                    return "failed"

                embedding = embedding_gen.generate_embedding(content)
                if not embedding:
                    return "failed"

                metadata = {
                    "ticker": ticker,
                    "type": "news_article",
                    "title": article.get("title", ""),
                    "url": article_url,
                    "published_date": article.get("published_utc", ""),
                    "source": article.get("publisher", {}).get("name", "Unknown"),
                    "content_preview": content[:200],
                    "full_content": content,
                }

                success = self.vector_store.upsert_document(doc_id, embedding, metadata)
                return "embedded" if success else "failed"

            except Exception as e:
                logger.error(f"Error processing article: {e}")
                return "failed"

        with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
            futures = [executor.submit(process_article, a) for a in articles[:20]]
            for future in concurrent.futures.as_completed(futures):
                status = future.result()
                if status == "embedded":
                    results["embedded"] += 1
                    results["scraped"] += 1
                elif status == "skipped":
                    results["skipped"] += 1
                elif status == "failed":
                    results["failed"] += 1

        if results["embedded"] > 0:
            self.vector_store.save()

        return results
```
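`process_message` yields `(event_type, data)` tuples, and a route layer such as `chat_routes.py` serializes them into `text/event-stream` frames. The sketch below shows one way to do that; `format_sse` and `stream_response` are assumed names for illustration, not necessarily what `chat_routes.py` actually defines.

```python
import json


def format_sse(event, data):
    """Serialize one agent event into a Server-Sent Events frame."""
    return f"event: {event}\ndata: {json.dumps(data)}\n\n"


def stream_response(agent_events):
    """Wrap the agent generator for a streaming Flask Response
    (mimetype='text/event-stream')."""
    for event_type, data in agent_events:
        yield format_sse(event_type, data)
```

Because the data payload is always JSON-encoded, the frontend parser can treat every `data:` line uniformly, whether it carries a text chunk (a string) or a tool_call status object (a dict).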
be/agent_tools.py
ADDED
|
@@ -0,0 +1,648 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
"""
Tool definitions and executor for the ReAct agent.
Maps Gemini function declarations to existing backend services.
Includes hybrid caching: frontend context -> server cache -> API call.
"""

import time
import logging
from datetime import datetime, timedelta

from polygon_api import PolygonAPI
from rag_pipeline import ContextRetriever
from sentiment_service import get_sentiment_service
from forecast_service import get_forecast_service

logger = logging.getLogger(__name__)


# -- Tool Schemas (Gemini function declarations) --

TOOL_DECLARATIONS = [
    {
        "name": "get_stock_quote",
        "description": "Get the most recent closing price, open, high, low, and volume for a stock ticker. Use this when the user asks about current price, today's price, or recent trading data.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g. AAPL"
                }
            },
            "required": ["ticker"]
        }
    },
    {
        "name": "get_company_info",
        "description": "Get detailed company information including name, description, market cap, sector, industry, and exchange. Use this when the user asks about what a company does, its sector, or general company details.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g. AAPL"
                }
            },
            "required": ["ticker"]
        }
    },
    {
        "name": "get_financials",
        "description": "Get recent financial statements including revenue, net income, gross profit, total assets, and liabilities. Returns the last 4 filing periods. Use this for questions about earnings, revenue, profitability, or balance sheet.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g. AAPL"
                }
            },
            "required": ["ticker"]
        }
    },
    {
        "name": "get_news",
        "description": "Get recent news articles about a stock. Returns headlines, sources, dates, and descriptions. Use this when the user asks about recent news, headlines, or events.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g. AAPL"
                },
                "limit": {
                    "type": "integer",
                    "description": "Number of articles to return (default 10, max 20)"
                }
            },
            "required": ["ticker"]
        }
    },
    {
        "name": "search_knowledge_base",
        "description": "Semantic search over previously indexed news articles and research. Use this when the user asks about a specific topic and you need in-depth article content beyond headlines.",
        "parameters": {
            "type": "object",
            "properties": {
                "query": {
                    "type": "string",
                    "description": "Natural language search query"
                },
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol to filter results"
                }
            },
            "required": ["query", "ticker"]
        }
    },
    {
        "name": "analyze_sentiment",
        "description": "Analyze social media sentiment for a stock by scraping StockTwits, Reddit, and Twitter posts and running FinBERT analysis. This operation takes 10-30 seconds. Use when the user asks about sentiment, social media buzz, or what people think about a stock.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g. AAPL"
                }
            },
            "required": ["ticker"]
        }
    },
    {
        "name": "get_price_forecast",
        "description": "Get an LSTM neural network price forecast for the next 30 trading days. May take 30-60 seconds if the model needs training. Use when the user asks about price predictions or forecasts.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g. AAPL"
                }
            },
            "required": ["ticker"]
        }
    },
    {
        "name": "get_dividends",
        "description": "Get dividend payment history including ex-dividend dates, pay dates, and amounts. Use when the user asks about dividends, yield, or dividend history.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g. AAPL"
                },
                "limit": {
                    "type": "integer",
                    "description": "Number of dividend records to return (default 10)"
                }
            },
            "required": ["ticker"]
        }
    },
    {
        "name": "get_stock_splits",
        "description": "Get stock split history including execution dates and split ratios. Use when the user asks about stock splits.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g. AAPL"
                }
            },
            "required": ["ticker"]
        }
    },
    {
        "name": "get_price_history",
        "description": "Get historical OHLCV price data for a date range. Use when the user asks about price trends, historical performance, or needs to compare prices between dates.",
        "parameters": {
            "type": "object",
            "properties": {
                "ticker": {
                    "type": "string",
                    "description": "Stock ticker symbol, e.g. AAPL"
                },
                "from_date": {
                    "type": "string",
                    "description": "Start date in YYYY-MM-DD format"
                },
                "to_date": {
                    "type": "string",
                    "description": "End date in YYYY-MM-DD format"
                },
                "timespan": {
                    "type": "string",
                    "description": "Time interval for each bar: day, week, or month (default: day)"
                }
            },
            "required": ["ticker", "from_date", "to_date"]
        }
    }
]


class ToolCache:
    """Server-side TTL cache for tool results (Layer 2)."""

    def __init__(self, ttl_seconds=300):
        self._cache = {}
        self._ttl = ttl_seconds

    def get(self, key):
        entry = self._cache.get(key)
        if entry and time.time() - entry["ts"] < self._ttl:
            return entry["data"]
        return None

    def set(self, key, data):
        self._cache[key] = {"data": data, "ts": time.time()}


class ToolExecutor:
    """
    Executes tool calls using a 3-layer cache strategy:
    Layer 1: Frontend context (passed per-request via set_context)
    Layer 2: Server-side TTL cache (ToolCache, 5-min TTL)
    Layer 3: Live API call (last resort)
    """

    def __init__(self, polygon_api, context_retriever, vector_store):
        self.polygon = polygon_api
        self.context_retriever = context_retriever
        self.vector_store = vector_store
        self.sentiment_service = get_sentiment_service(vector_store)
        self.forecast_service = get_forecast_service()
        self.server_cache = ToolCache(ttl_seconds=300)

        # Frontend context for the current request (Layer 1)
        self._frontend_context = {}
        self._context_ticker = None

        self._handlers = {
            "get_stock_quote": self._get_stock_quote,
            "get_company_info": self._get_company_info,
            "get_financials": self._get_financials,
            "get_news": self._get_news,
            "search_knowledge_base": self._search_knowledge_base,
            "analyze_sentiment": self._analyze_sentiment,
            "get_price_forecast": self._get_price_forecast,
            "get_dividends": self._get_dividends,
            "get_stock_splits": self._get_stock_splits,
            "get_price_history": self._get_price_history,
        }

    def set_context(self, frontend_context, ticker):
        """Prime Layer 1 cache with frontend-provided data for this request."""
        self._frontend_context = frontend_context or {}
        self._context_ticker = ticker.upper() if ticker else None

    def execute(self, tool_name, args):
        handler = self._handlers.get(tool_name)
        if not handler:
            return {"error": f"Unknown tool: {tool_name}"}
        try:
            return handler(**args)
        except Exception as e:
            logger.error(f"Tool {tool_name} failed: {e}")
            return {"error": f"Tool execution failed: {str(e)}"}

    def _check_frontend_context(self, tool_name, ticker):
        """Layer 1: Check if frontend already sent this data."""
        if ticker.upper() != self._context_ticker:
            return None

        mapping = {
            "get_stock_quote": "previousClose",
            "get_company_info": "details",
            "get_financials": "financials",
            "get_news": "news",
            "get_dividends": "dividends",
            "get_stock_splits": "splits",
            "analyze_sentiment": "sentiment",
        }

        context_key = mapping.get(tool_name)
        if not context_key:
            return None

        overview = self._frontend_context.get("overview", {})

        # Some keys are nested under overview
        if context_key in ("previousClose", "details"):
            data = overview.get(context_key)
        else:
            data = self._frontend_context.get(context_key)

        return data if data else None

    def _check_server_cache(self, tool_name, ticker):
        """Layer 2: Check server-side TTL cache."""
        return self.server_cache.get(f"{tool_name}:{ticker}")

    def _cache_result(self, tool_name, ticker, result):
        """Store result in server-side cache."""
        self.server_cache.set(f"{tool_name}:{ticker}", result)

    # -- Tool Handlers --

    def _get_stock_quote(self, ticker):
        ticker = ticker.upper()

        # Layer 1: Frontend context
        fe_data = self._check_frontend_context("get_stock_quote", ticker)
        if fe_data:
            results = fe_data.get("results", [])
            if results:
                r = results[0]
                return {
                    "ticker": ticker,
                    "close": r.get("c"),
                    "open": r.get("o"),
                    "high": r.get("h"),
                    "low": r.get("l"),
                    "volume": r.get("v"),
                    "vwap": r.get("vw"),
                    "source": "cached"
                }

        # Layer 2: Server cache
        cached = self._check_server_cache("get_stock_quote", ticker)
        if cached:
            return cached

        # Layer 3: API call
        data = self.polygon.get_previous_close(ticker)
        results = data.get("results", [])
        if not results:
            return {"error": "No quote data available"}

        r = results[0]
        result = {
            "ticker": ticker,
            "close": r.get("c"),
            "open": r.get("o"),
            "high": r.get("h"),
            "low": r.get("l"),
            "volume": r.get("v"),
            "vwap": r.get("vw"),
        }
        self._cache_result("get_stock_quote", ticker, result)
        return result

    def _get_company_info(self, ticker):
        ticker = ticker.upper()

        fe_data = self._check_frontend_context("get_company_info", ticker)
        if fe_data:
            r = fe_data.get("results", fe_data)
            return {
                "ticker": ticker,
                "name": r.get("name"),
                "description": r.get("description", "")[:500],
                "market_cap": r.get("market_cap"),
                "sector": r.get("sic_description"),
                "homepage_url": r.get("homepage_url"),
                "total_employees": r.get("total_employees"),
                "source": "cached"
            }

        cached = self._check_server_cache("get_company_info", ticker)
        if cached:
            return cached

        data = self.polygon.get_ticker_details(ticker)
        r = data.get("results", {})
        if not r:
            return {"error": "No company data available"}

        result = {
            "ticker": ticker,
            "name": r.get("name"),
            "description": r.get("description", "")[:500],
            "market_cap": r.get("market_cap"),
            "sector": r.get("sic_description"),
            "homepage_url": r.get("homepage_url"),
            "total_employees": r.get("total_employees"),
        }
        self._cache_result("get_company_info", ticker, result)
        return result

    def _get_financials(self, ticker):
        ticker = ticker.upper()

        fe_data = self._check_frontend_context("get_financials", ticker)
        if fe_data:
            return self._format_financials(ticker, fe_data)

        cached = self._check_server_cache("get_financials", ticker)
        if cached:
            return cached

        data = self.polygon.get_financials(ticker)
        result = self._format_financials(ticker, data)
        self._cache_result("get_financials", ticker, result)
        return result

    def _format_financials(self, ticker, data):
        results = data.get("results", [])
        if not results:
            return {"error": "No financial data available"}

        periods = []
        for r in results[:4]:
            financials = r.get("financials", {})
            income = financials.get("income_statement", {})
            balance = financials.get("balance_sheet", {})

            periods.append({
                "period": f"{r.get('fiscal_period', '')} {r.get('fiscal_year', '')}",
                "revenue": income.get("revenues", {}).get("value"),
                "net_income": income.get("net_income_loss", {}).get("value"),
                "gross_profit": income.get("gross_profit", {}).get("value"),
                "total_assets": balance.get("assets", {}).get("value"),
                "total_liabilities": balance.get("liabilities", {}).get("value"),
            })

        return {"ticker": ticker, "periods": periods}

    def _get_news(self, ticker, limit=10):
        ticker = ticker.upper()
        limit = min(limit or 10, 20)

        fe_data = self._check_frontend_context("get_news", ticker)
        if fe_data:
            return self._format_news(ticker, fe_data)

        cached = self._check_server_cache("get_news", ticker)
        if cached:
            return cached

        data = self.polygon.get_ticker_news(ticker, limit=limit)
        result = self._format_news(ticker, data)
        self._cache_result("get_news", ticker, result)
        return result

    def _format_news(self, ticker, data):
        articles = data.get("results", data if isinstance(data, list) else [])
        if not articles:
            return {"error": "No news articles available"}

        formatted = []
        for a in articles[:10]:
            formatted.append({
                "title": a.get("title", ""),
                "source": a.get("publisher", {}).get("name", "Unknown") if isinstance(a.get("publisher"), dict) else a.get("publisher", "Unknown"),
                "published": a.get("published_utc", "")[:10],
                "description": a.get("description", "")[:200],
                "url": a.get("article_url", ""),
            })

        return {"ticker": ticker, "articles": formatted}

    def _search_knowledge_base(self, query, ticker):
        """Search FAISS vector store — no caching, always live search."""
        ticker = ticker.upper()
        contexts = self.context_retriever.retrieve_context(query, ticker)

        if not contexts:
            return {"message": "No relevant articles found in knowledge base. Try using get_news for recent headlines."}

        results = []
        for ctx in contexts[:5]:
            meta = ctx["metadata"]
            results.append({
                "title": meta.get("title", "Untitled"),
                "source": meta.get("source", "Unknown"),
                "date": meta.get("published_date", "")[:10],
                "content": meta.get("full_content", meta.get("content_preview", ""))[:500],
                "relevance_score": round(ctx["score"], 3),
            })

        return {"ticker": ticker, "results": results}

    def _analyze_sentiment(self, ticker):
        ticker = ticker.upper()

        # Layer 1: Frontend context (already-analyzed sentiment)
        fe_data = self._check_frontend_context("analyze_sentiment", ticker)
        if fe_data:
            aggregate = fe_data.get("aggregate", fe_data)
            posts = fe_data.get("posts", [])
            return {
                "ticker": ticker,
                "overall_sentiment": aggregate.get("label"),
                "score": aggregate.get("score"),
                "confidence": aggregate.get("confidence"),
                "post_count": aggregate.get("post_count"),
                "sources": aggregate.get("sources", {}),
                "top_posts": [
                    {
                        "platform": p.get("platform"),
                        "content": p.get("content", "")[:200],
                        "sentiment": p.get("sentiment", {}).get("label", p.get("sentiment_label", "")),
                    }
                    for p in posts[:5]
                ],
                "source": "cached"
            }

        cached = self._check_server_cache("analyze_sentiment", ticker)
        if cached:
            return cached

        # Layer 3: Live scrape + analysis (slow, 10-30s)
        data = self.sentiment_service.analyze_ticker(ticker)
        aggregate = data.get("aggregate", {})
        posts = data.get("posts", [])

        result = {
            "ticker": ticker,
            "overall_sentiment": aggregate.get("label"),
            "score": aggregate.get("score"),
            "confidence": aggregate.get("confidence"),
            "post_count": aggregate.get("post_count"),
            "sources": aggregate.get("sources", {}),
            "top_posts": [
                {
                    "platform": p.get("platform"),
                    "content": p.get("content", "")[:200],
                    "sentiment": p.get("sentiment", {}).get("label", ""),
                }
                for p in posts[:5]
            ],
        }
        self._cache_result("analyze_sentiment", ticker, result)
        return result

    def _get_price_forecast(self, ticker):
        ticker = ticker.upper()

        cached = self._check_server_cache("get_price_forecast", ticker)
        if cached:
            return cached

        data = self.forecast_service.get_forecast(ticker)
        if "error" in data:
            return {"error": data["error"]}

        forecast = data.get("forecast", [])
        result = {
            "ticker": ticker,
            "predictions": [
                {
                    "date": f.get("date"),
                    "predicted_close": f.get("predicted_close"),
                    "upper_bound": f.get("upper_bound"),
                    "lower_bound": f.get("lower_bound"),
                }
                for f in forecast[:10]  # First 10 days to keep context manageable
            ],
            "model_info": data.get("model_info", {}),
        }
        self._cache_result("get_price_forecast", ticker, result)
        return result

    def _get_dividends(self, ticker, limit=10):
        ticker = ticker.upper()

        fe_data = self._check_frontend_context("get_dividends", ticker)
        if fe_data:
            return self._format_dividends(ticker, fe_data)

        cached = self._check_server_cache("get_dividends", ticker)
        if cached:
            return cached

        data = self.polygon.get_dividends(ticker, limit=limit or 10)
        result = self._format_dividends(ticker, data)
        self._cache_result("get_dividends", ticker, result)
        return result

    def _format_dividends(self, ticker, data):
        results = data.get("results", data if isinstance(data, list) else [])
        if not results:
            return {"message": "No dividend data available for this ticker."}

        formatted = []
        for d in results[:10]:
            formatted.append({
                "ex_date": d.get("ex_dividend_date", ""),
                "pay_date": d.get("pay_date", ""),
                "amount": d.get("cash_amount"),
                "frequency": d.get("frequency"),
            })

        return {"ticker": ticker, "dividends": formatted}

    def _get_stock_splits(self, ticker):
        ticker = ticker.upper()

        fe_data = self._check_frontend_context("get_stock_splits", ticker)
        if fe_data:
            return self._format_splits(ticker, fe_data)

        cached = self._check_server_cache("get_stock_splits", ticker)
        if cached:
            return cached

        data = self.polygon.get_splits(ticker)
        result = self._format_splits(ticker, data)
        self._cache_result("get_stock_splits", ticker, result)
        return result

    def _format_splits(self, ticker, data):
        results = data.get("results", data if isinstance(data, list) else [])
        if not results:
            return {"message": "No stock split history found for this ticker."}

        formatted = []
        for s in results[:10]:
            formatted.append({
                "execution_date": s.get("execution_date", ""),
                "split_from": s.get("split_from"),
                "split_to": s.get("split_to"),
                "ratio": f"{s.get('split_to', 1)}-for-{s.get('split_from', 1)}",
            })

        return {"ticker": ticker, "splits": formatted}

    def _get_price_history(self, ticker, from_date, to_date, timespan=None):
        ticker = ticker.upper()
        timespan = timespan or "day"

        cache_key = f"get_price_history:{ticker}:{from_date}:{to_date}:{timespan}"
        cached = self.server_cache.get(cache_key)
        if cached:
            return cached

        data = self.polygon.get_aggregates(ticker, timespan=timespan, from_date=from_date, to_date=to_date)
        results = data.get("results", [])
        if not results:
            return {"error": "No price history available for the given date range"}

        formatted = []
        for bar in results:
            formatted.append({
                "date": datetime.fromtimestamp(bar["t"] / 1000).strftime("%Y-%m-%d"),
                "open": bar.get("o"),
                "high": bar.get("h"),
                "low": bar.get("l"),
                "close": bar.get("c"),
                "volume": bar.get("v"),
            })

        result = {
            "ticker": ticker,
            "timespan": timespan,
            "from": from_date,
            "to": to_date,
            "bars": formatted,
            "count": len(formatted),
        }
        self.server_cache.set(cache_key, result)
        return result
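The TTL logic in `ToolCache` (Layer 2) can be exercised in isolation. A minimal sketch, duplicating the class so it runs standalone and using a 1-second TTL instead of the production 300 seconds:

```python
import time

class ToolCache:
    """TTL cache mirroring the one in agent_tools.py."""

    def __init__(self, ttl_seconds=300):
        self._cache = {}
        self._ttl = ttl_seconds

    def get(self, key):
        # Entries older than the TTL are treated as misses (lazy expiry).
        entry = self._cache.get(key)
        if entry and time.time() - entry["ts"] < self._ttl:
            return entry["data"]
        return None

    def set(self, key, data):
        self._cache[key] = {"data": data, "ts": time.time()}

cache = ToolCache(ttl_seconds=1)
cache.set("get_stock_quote:AAPL", {"close": 187.4})
assert cache.get("get_stock_quote:AAPL") == {"close": 187.4}  # fresh hit
time.sleep(1.1)
assert cache.get("get_stock_quote:AAPL") is None  # expired after TTL
```

Note that expired entries are never evicted, only skipped on read; for a long-running process with many tickers that is a deliberate simplicity/memory trade-off.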
be/app.py
ADDED
@@ -0,0 +1,127 @@
from flask import Flask, jsonify, request, send_from_directory
from flask_cors import CORS
from polygon_api import PolygonAPI
from chat_routes import register_chat_routes
from sentiment_routes import sentiment_bp
from forecast_routes import forecast_bp
import os
import atexit

app = Flask(__name__, static_folder='../fe', static_url_path='')
CORS(app)

polygon = PolygonAPI()

@app.route('/')
def index():
    return send_from_directory(app.static_folder, 'index.html')

@app.route('/company_tickers.json')
def get_tickers():
    return send_from_directory('../', 'company_tickers.json')

@app.route('/api/ticker/<ticker>/details', methods=['GET'])
def get_ticker_details(ticker):
    try:
        data = polygon.get_ticker_details(ticker.upper())
        return jsonify(data)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

@app.route('/api/ticker/<ticker>/previous-close', methods=['GET'])
def get_previous_close(ticker):
    try:
        data = polygon.get_previous_close(ticker.upper())
        return jsonify(data)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

@app.route('/api/ticker/<ticker>/aggregates', methods=['GET'])
def get_aggregates(ticker):
    try:
        from_date = request.args.get('from')
        to_date = request.args.get('to')
        timespan = request.args.get('timespan', 'day')

        data = polygon.get_aggregates(ticker.upper(), timespan, from_date, to_date)
        return jsonify(data)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

@app.route('/api/ticker/<ticker>/news', methods=['GET'])
def get_news(ticker):
    try:
        limit = request.args.get('limit', 10, type=int)
        data = polygon.get_ticker_news(ticker.upper(), limit)
        return jsonify(data)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

@app.route('/api/ticker/<ticker>/financials', methods=['GET'])
def get_financials(ticker):
    try:
        data = polygon.get_financials(ticker.upper())
        return jsonify(data)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

@app.route('/api/ticker/<ticker>/snapshot', methods=['GET'])
def get_snapshot(ticker):
    try:
        data = polygon.get_snapshot(ticker.upper())
        return jsonify(data)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

@app.route('/api/ticker/<ticker>/dividends', methods=['GET'])
def get_dividends(ticker):
    try:
        limit = request.args.get('limit', 10, type=int)
        data = polygon.get_dividends(ticker.upper(), limit)
        return jsonify(data)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

@app.route('/api/ticker/<ticker>/splits', methods=['GET'])
def get_splits(ticker):
    try:
        limit = request.args.get('limit', 10, type=int)
        data = polygon.get_splits(ticker.upper(), limit)
        return jsonify(data)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

@app.route('/api/market-status', methods=['GET'])
def get_market_status():
    try:
        data = polygon.get_market_status()
        return jsonify(data)
    except Exception as e:
        return jsonify({"error": str(e)}), 500

# Register chat routes
register_chat_routes(app)

# Register sentiment routes
app.register_blueprint(sentiment_bp)

# Register forecast routes
app.register_blueprint(forecast_bp)

# Graceful shutdown handler for FAISS
from chat_routes import agent_service

def shutdown_handler():
    """Save FAISS index on graceful shutdown"""
    print("Shutting down gracefully, saving FAISS index...")
    try:
        agent_service.vector_store.save()
        print("FAISS index saved successfully")
    except Exception as e:
        print(f"Error saving FAISS index on shutdown: {e}")

atexit.register(shutdown_handler)

if __name__ == '__main__':
    from config import PORT
    app.run(debug=False, port=PORT)
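Every JSON route in app.py follows the same shape: `jsonify(data)` on success, a `{"error": ...}` payload with status 500 on any exception. A throwaway sketch (a demo app, not the real one) shows how that pattern behaves under Flask's built-in test client:

```python
from flask import Flask, jsonify

demo = Flask(__name__)

@demo.route('/api/echo/<ticker>')
def echo(ticker):
    # Mirrors the app.py pattern: jsonify on success, (error, 500) on failure.
    try:
        if not ticker.isalpha():
            raise ValueError(f"bad ticker: {ticker}")
        return jsonify({"ticker": ticker.upper()})
    except Exception as e:
        return jsonify({"error": str(e)}), 500

client = demo.test_client()
ok = client.get('/api/echo/aapl')
bad = client.get('/api/echo/123')
assert ok.status_code == 200 and ok.get_json() == {"ticker": "AAPL"}
assert bad.status_code == 500 and "bad ticker" in bad.get_json()["error"]
```

One caveat of the blanket `except Exception` style: upstream API failures and programming bugs both surface as 500s, so the distinction has to come from the error string.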
be/chat_routes.py
ADDED
@@ -0,0 +1,204 @@
from flask import request, jsonify, Response, stream_with_context
from agent_service import AgentService
import json

# Initialize agent service
agent_service = AgentService()


def format_sse(event_type, data):
    """Format a Server-Sent Event."""
    if isinstance(data, dict):
        data_str = json.dumps(data)
    else:
        data_str = str(data)
    return f"event: {event_type}\ndata: {data_str}\n\n"


def register_chat_routes(app):
    """Register chat-related routes with the Flask app"""

    @app.route('/api/chat/message', methods=['POST'])
    def chat_message():
        """
        Main chat endpoint - processes user messages via the agent and streams responses

        Request body:
        {
            "ticker": "AAPL",
            "message": "What's the latest news?",
            "context": {
                "overview": {...},
                "financials": {...},
                "news": [...],
                ...
            },
            "conversation_id": "uuid"
        }

        Response: Server-Sent Events stream with typed events:
            event: tool_call - Agent is calling a tool
            event: text - Response text chunk
            event: done - Stream complete
            event: error - Error occurred
        """
        try:
            data = request.get_json()

            ticker = data.get('ticker')
            message = data.get('message')
            context = data.get('context', {})
            conversation_id = data.get('conversation_id', 'default')

            if not ticker or not message:
                return jsonify({"error": "Missing required fields"}), 400

            def generate():
                """Generator for structured SSE streaming."""
                try:
                    for event_type, event_data in agent_service.process_message(
                        ticker=ticker,
                        message=message,
                        frontend_context=context,
                        conversation_id=conversation_id
                    ):
                        yield format_sse(event_type, event_data)
                except Exception as e:
                    print(f"Error in stream: {e}")
                    yield format_sse("error", {"message": str(e)})

            return Response(
                stream_with_context(generate()),
                mimetype='text/event-stream',
                headers={
                    'Cache-Control': 'no-cache',
                    'X-Accel-Buffering': 'no'
                }
            )

        except Exception as e:
            print(f"Error in chat_message: {e}")
            return jsonify({"error": str(e)}), 500

    @app.route('/api/chat/scrape-articles', methods=['POST'])
    def scrape_articles():
        """
        Background job to scrape and embed articles

        Request body:
        {
            "ticker": "AAPL",
            "articles": [...]
        }

        Response:
        {
            "scraped": 8,
            "embedded": 8,
            "failed": 2,
            "skipped": 5
        }
        """
        try:
            data = request.get_json()

            ticker = data.get('ticker')
            articles = data.get('articles', [])

            if not ticker:
                return jsonify({"error": "Missing ticker"}), 400

            if not articles:
                return jsonify({
                    "scraped": 0,
                    "embedded": 0,
                    "failed": 0,
                    "skipped": 0,
                    "message": "No articles provided"
                }), 200

            results = agent_service.scrape_and_embed_articles(ticker, articles)

            return jsonify(results), 200

        except Exception as e:
            print(f"Error in scrape_articles: {e}")
            return jsonify({"error": str(e)}), 500

    @app.route('/api/chat/conversations/<conversation_id>', methods=['GET'])
    def get_conversation(conversation_id):
        """Get conversation history"""
        try:
            history = agent_service.conversation_manager.get_history(conversation_id)
            return jsonify({"messages": history}), 200
        except Exception as e:
            print(f"Error in get_conversation: {e}")
            return jsonify({"error": str(e)}), 500

    @app.route('/api/chat/clear/<conversation_id>', methods=['DELETE'])
    def clear_conversation(conversation_id):
        """Clear conversation history"""
        try:
            agent_service.conversation_manager.clear_conversation(conversation_id)
            return jsonify({"success": True}), 200
        except Exception as e:
            print(f"Error in clear_conversation: {e}")
            return jsonify({"error": str(e)}), 500

    @app.route('/api/chat/health', methods=['GET'])
    def chat_health():
        """Health check endpoint for the chat service"""
        try:
            return jsonify({
                "status": "healthy",
                "components": {
                    "agent": "ok",
                    "embeddings": "ok",
                    "vector_store": "ok",
                    "llm": "ok"
                }
            }), 200
        except Exception as e:
            return jsonify({"status": "unhealthy", "error": str(e)}), 500

    @app.route('/api/chat/debug/chunks', methods=['GET'])
    def debug_chunks():
        """Debug endpoint to view all stored RAG chunks"""
        try:
            ticker_filter = request.args.get('ticker')
            limit = int(request.args.get('limit', 50))

            vector_store = agent_service.vector_store
            all_metadata = vector_store.metadata

            chunks = []
            for internal_id, meta in all_metadata.items():
                if ticker_filter and meta.get('ticker') != ticker_filter:
                    continue

                chunks.append({
                    "id": internal_id,
                    "doc_id": meta.get('doc_id', ''),
                    "ticker": meta.get('ticker', ''),
                    "type": meta.get('type', ''),
                    "title": meta.get('title', ''),
                    "url": meta.get('url', ''),
                    "source": meta.get('source', ''),
                    "published_date": meta.get('published_date', ''),
                    "content_preview": meta.get('content_preview', ''),
                    "full_content": meta.get('full_content', '')
                })

                if len(chunks) >= limit:
                    break

            return jsonify({
                "total": len(all_metadata),
                "returned": len(chunks),
                "index_stats": vector_store.get_stats(),
                "chunks": chunks
            }), 200

        except Exception as e:
            print(f"Error in debug_chunks: {e}")
            return jsonify({"error": str(e)}), 500
be/chat_service.py
ADDED
@@ -0,0 +1,385 @@
from scraper import ArticleScraper
from rag_pipeline import EmbeddingGenerator, VectorStore, ContextRetriever
from llm_client import GeminiClient, ConversationManager
import concurrent.futures
import hashlib


class ChatService:
    """Orchestrates chatbot components: scraping, RAG, and LLM"""

    def __init__(self):
        self.scraper = ArticleScraper()
        self.embedding_gen = EmbeddingGenerator()
        self.vector_store = VectorStore()
        self.context_retriever = ContextRetriever(vector_store=self.vector_store)
        self.llm_client = GeminiClient()
        self.conversation_manager = ConversationManager()

    def process_message(self, ticker, message, frontend_context, conversation_id):
        """
        Process a user message and generate a response

        Args:
            ticker: Stock ticker symbol
            message: User message
            frontend_context: Context from frontend (overview, financials, news, etc.)
            conversation_id: Unique conversation identifier

        Yields:
            Response chunks for streaming
        """
        try:
            # Build comprehensive context
            prompt = self._assemble_prompt(
                query=message,
                ticker=ticker,
                frontend_context=frontend_context,
                conversation_id=conversation_id
            )

            # Get conversation history
            history = self.conversation_manager.get_history(conversation_id)

            # Stream response from LLM
            full_response = ""
            for chunk in self.llm_client.stream_response(prompt, history):
                full_response += chunk
                yield chunk

            # Save to conversation history
            self.conversation_manager.add_message(conversation_id, 'user', message)
            self.conversation_manager.add_message(conversation_id, 'assistant', full_response)

        except Exception as e:
            print(f"Error processing message: {e}")
            yield "I encountered an error processing your request. Please try again."

    def scrape_and_embed_articles(self, ticker, articles):
        """
        Background job to scrape and embed news articles

        Args:
            ticker: Stock ticker symbol
            articles: List of article metadata from Polygon API

        Returns:
            Dictionary with scraping statistics
        """
        results = {
            "scraped": 0,
            "embedded": 0,
            "failed": 0,
            "skipped": 0
        }

        def process_article(article):
            """Process a single article"""
            try:
                # Generate unique document ID
                article_url = article.get('article_url', '')
                doc_id = f"{ticker}_news_{self._hash_url(article_url)}"

                # Check if already processed
                if self.vector_store.document_exists(doc_id):
                    return {'status': 'skipped'}

                # Scrape full content
                content = self.scraper.scrape_article(article_url)

                if not content:
                    # Fall back to article description if scraping fails
                    content = article.get('description', '')
                    if not content or len(content) < 50:
                        return {'status': 'failed'}

                # Generate embedding
                embedding = self.embedding_gen.generate_embedding(content)

                if not embedding:
                    return {'status': 'failed'}

                # Prepare metadata
                metadata = {
                    "ticker": ticker,
                    "type": "news_article",
                    "title": article.get('title', ''),
                    "url": article_url,
                    "published_date": article.get('published_utc', ''),
                    "source": article.get('publisher', {}).get('name', 'Unknown'),
                    "content_preview": content[:200],
                    "full_content": content  # Store full content in metadata
                }

                # Store in FAISS
                success = self.vector_store.upsert_document(doc_id, embedding, metadata)

                if success:
                    return {'status': 'embedded'}
                else:
                    return {'status': 'failed'}

            except Exception as e:
                print(f"Error processing article: {e}")
                return {'status': 'failed'}

        # Process articles in parallel
        with concurrent.futures.ThreadPoolExecutor(max_workers=5) as executor:
            futures = [executor.submit(process_article, article) for article in articles[:20]]  # Limit to 20 articles

            for future in concurrent.futures.as_completed(futures):
                result = future.result()
                status = result.get('status', 'failed')

                if status == 'embedded':
                    results['embedded'] += 1
                    results['scraped'] += 1
                elif status == 'skipped':
                    results['skipped'] += 1
                elif status == 'failed':
                    results['failed'] += 1

        # Save FAISS index after batch operations
        if results['embedded'] > 0:
            self.vector_store.save()

        return results

    def _assemble_prompt(self, query, ticker, frontend_context, conversation_id):
        """
        Assemble a comprehensive prompt with all context

        Args:
            query: User query
            ticker: Stock ticker
            frontend_context: Data from frontend
            conversation_id: Conversation ID

        Returns:
            Complete prompt string
        """
        prompt_parts = []

        # Add current stock overview
        overview = frontend_context.get('overview', {})
        if overview:
            prompt_parts.append(self._format_overview(ticker, overview))

        # Retrieve relevant context from RAG
        rag_contexts = self.context_retriever.retrieve_context(query, ticker)
        if rag_contexts:
            prompt_parts.append(self._format_rag_contexts(rag_contexts))

        # Add financials if relevant to query
        if any(keyword in query.lower() for keyword in ['revenue', 'profit', 'income', 'earnings', 'financial', 'balance']):
            financials = frontend_context.get('financials')
            if financials:
                prompt_parts.append(self._format_financials(financials))

        # Add dividends if relevant
        if 'dividend' in query.lower():
            dividends = frontend_context.get('dividends')
            if dividends:
                prompt_parts.append(self._format_dividends(dividends))

        # Add splits if relevant
        if 'split' in query.lower():
            splits = frontend_context.get('splits')
            if splits:
                prompt_parts.append(self._format_splits(splits))

        # Add sentiment if relevant to query
        sentiment_keywords = ['sentiment', 'bullish', 'bearish', 'feel', 'opinion',
                              'mood', 'social', 'twitter', 'reddit', 'stocktwits', 'buzz']
        if any(keyword in query.lower() for keyword in sentiment_keywords):
            # Retrieve sentiment posts from RAG
            sentiment_contexts = self._retrieve_sentiment_context(query, ticker)
            if sentiment_contexts:
                prompt_parts.append(self._format_sentiment_contexts(sentiment_contexts))

            # Include aggregate sentiment from frontend if available
            sentiment_data = frontend_context.get('sentiment')
            if sentiment_data:
                prompt_parts.append(self._format_aggregate_sentiment(sentiment_data))

        # Combine all context
        context_str = "\n\n---\n\n".join(prompt_parts) if prompt_parts else "No additional context available."

        # Final prompt
        full_prompt = f"""Context Information:
{context_str}

---

User Question: {query}

Please provide a data-driven answer based on the context above."""

        return full_prompt

    def _format_overview(self, ticker, overview):
        """Format stock overview data"""
        details = overview.get('details', {}).get('results', {})
        prev_close = overview.get('previousClose', {}).get('results', [{}])[0]

        company_name = details.get('name', ticker)
        description = details.get('description', 'No description available')[:300]
        market_cap = details.get('market_cap', 0)

        close_price = prev_close.get('c', 0)
        volume = prev_close.get('v', 0)
        high = prev_close.get('h', 0)
        low = prev_close.get('l', 0)

        return f"""Stock Overview for {ticker} - {company_name}:
- Current Price: ${close_price:,.2f}
- Market Cap: ${market_cap:,.0f}
- Volume: {volume:,.0f}
- Day High: ${high:,.2f}
- Day Low: ${low:,.2f}
- Description: {description}"""

    def _format_rag_contexts(self, contexts):
        """Format RAG retrieved contexts"""
        if not contexts:
            return ""

        formatted = ["Relevant News Articles:"]

        for ctx in contexts[:5]:  # Top 5 results
            metadata = ctx['metadata']
            title = metadata.get('title', 'Untitled')
            source = metadata.get('source', 'Unknown')
            date = metadata.get('published_date', '')[:10]  # Just the date
            content = metadata.get('full_content', metadata.get('content_preview', ''))[:500]  # First 500 chars

            formatted.append(f"\n- {title} ({source}, {date})")
            formatted.append(f"  Content: {content}...")

        return "\n".join(formatted)

    def _format_financials(self, financials):
        """Format financial data"""
        results = financials.get('results', [])
        if not results:
            return ""

        formatted = ["Recent Financial Data:"]

        for result in results[:4]:  # Last 4 quarters/years
            fiscal_period = result.get('fiscal_period', '')
            fiscal_year = result.get('fiscal_year', '')

            financials_data = result.get('financials', {})
            income_statement = financials_data.get('income_statement', {})
            balance_sheet = financials_data.get('balance_sheet', {})

            revenue = income_statement.get('revenues', {}).get('value', 0)
            net_income = income_statement.get('net_income_loss', {}).get('value', 0)
            assets = balance_sheet.get('assets', {}).get('value', 0)

            formatted.append(f"\n{fiscal_period} {fiscal_year}:")
            formatted.append(f"  - Revenue: ${revenue:,.0f}")
            formatted.append(f"  - Net Income: ${net_income:,.0f}")
            formatted.append(f"  - Total Assets: ${assets:,.0f}")

        return "\n".join(formatted)

    def _format_dividends(self, dividends):
        """Format dividend data"""
        results = dividends.get('results', [])
        if not results:
            return ""

        formatted = ["Recent Dividends:"]

        for div in results[:5]:
            ex_date = div.get('ex_dividend_date', '')
            amount = div.get('cash_amount', 0)
            formatted.append(f"- {ex_date}: ${amount:.2f} per share")

        return "\n".join(formatted)

    def _format_splits(self, splits):
        """Format stock split data"""
        results = splits.get('results', [])
        if not results:
            return ""

        formatted = ["Stock Splits:"]

        for split in results[:5]:
            execution_date = split.get('execution_date', '')
            split_from = split.get('split_from', 1)
            split_to = split.get('split_to', 1)
            formatted.append(f"- {execution_date}: {split_to}-for-{split_from} split")

        return "\n".join(formatted)

    def _hash_url(self, url):
        """Generate a short hash for a URL"""
        return hashlib.md5(url.encode()).hexdigest()[:12]

    def _retrieve_sentiment_context(self, query, ticker):
        """Retrieve sentiment posts from FAISS"""
        try:
            query_embedding = self.embedding_gen.generate_query_embedding(query)
            if not query_embedding:
                return []

            matches = self.vector_store.search(
                query_embedding=query_embedding,
                ticker=ticker,
                namespace="sentiment",
                top_k=5
            )

            contexts = []
            for match in matches:
                contexts.append({
                    'score': match.score,
                    'metadata': match.metadata,
                    'id': match.id
                })

            return contexts
        except Exception as e:
            print(f"Error retrieving sentiment context: {e}")
            return []

    def _format_sentiment_contexts(self, contexts):
        """Format sentiment posts from RAG"""
        if not contexts:
            return ""

        formatted = ["Relevant Social Media Posts:"]

        for ctx in contexts[:5]:
            metadata = ctx['metadata']
            platform = metadata.get('platform', 'unknown')
            sentiment = metadata.get('sentiment_label', 'neutral')
            content = metadata.get('full_content', metadata.get('content', ''))[:300]
            author = metadata.get('author', 'unknown')
            likes = metadata.get('likes', 0)

            formatted.append(f"\n- [{platform.upper()}] @{author} ({sentiment}, {likes} likes)")
            formatted.append(f"  \"{content}...\"")

        return "\n".join(formatted)

    def _format_aggregate_sentiment(self, sentiment_data):
        """Format aggregate sentiment data from frontend"""
        aggregate = sentiment_data.get('aggregate', {})
        if not aggregate:
            return ""

        label = aggregate.get('label', 'neutral')
        score = aggregate.get('score', 0)
        confidence = aggregate.get('confidence', 0)
        post_count = aggregate.get('post_count', 0)
        sources = aggregate.get('sources', {})

        return f"""Current Social Media Sentiment:
- Overall: {label.upper()} (score: {score:.2f}, confidence: {confidence:.0%})
- Posts analyzed: {post_count}
- Sources: StockTwits ({sources.get('stocktwits', 0)}), Reddit ({sources.get('reddit', 0)}), Twitter ({sources.get('twitter', 0)})"""
be/config.py
ADDED
@@ -0,0 +1,36 @@
import os
from dotenv import load_dotenv

load_dotenv()

# Existing
POLYGON_API_KEY = os.getenv('POLYGON_API_KEY', '')
PORT = int(os.getenv('PORT', 5000))

# Gemini configuration (for chat and embeddings - FREE)
GEMINI_API_KEY = os.getenv('GEMINI_API_KEY', '')
GEMINI_MODEL = os.getenv('GEMINI_MODEL', 'gemini-2.0-flash')

# FAISS configuration (path relative to this file's location)
FAISS_INDEX_PATH = os.getenv('FAISS_INDEX_PATH',
                             os.path.join(os.path.dirname(__file__), 'faiss_index'))

# Embedding settings (using Google's free embedding model)
EMBEDDING_MODEL = os.getenv('EMBEDDING_MODEL', 'gemini-embedding-001')
MAX_CONTEXT_LENGTH = int(os.getenv('MAX_CONTEXT_LENGTH', 8000))
RAG_TOP_K = int(os.getenv('RAG_TOP_K', 5))

# Sentiment Analysis configuration
FINBERT_MODEL = os.getenv('FINBERT_MODEL', 'ProsusAI/finbert')
SENTIMENT_CACHE_TTL = int(os.getenv('SENTIMENT_CACHE_TTL', 15))  # minutes

# Reddit API (free - get credentials at reddit.com/prefs/apps)
REDDIT_CLIENT_ID = os.getenv('REDDIT_CLIENT_ID', '')
REDDIT_CLIENT_SECRET = os.getenv('REDDIT_CLIENT_SECRET', '')
REDDIT_USER_AGENT = os.getenv('REDDIT_USER_AGENT', 'StockAssistant/1.0')

# Twitter API (optional - requires paid tier $100+/month)
TWITTER_BEARER_TOKEN = os.getenv('TWITTER_BEARER_TOKEN', '')

# Agent configuration
AGENT_MAX_ITERATIONS = int(os.getenv('AGENT_MAX_ITERATIONS', 5))
be/forecast_model.py
ADDED
@@ -0,0 +1,410 @@
"""
LSTM-based stock price forecaster.
Uses historical OHLCV data to predict future closing prices.
"""

import torch
import torch.nn as nn
import numpy as np
import pickle
import json
import os
from datetime import datetime
from typing import Dict, List, Optional, Tuple
import logging

logger = logging.getLogger(__name__)


class LSTMModel(nn.Module):
    """LSTM neural network for time series forecasting."""

    def __init__(self, input_size: int = 5, hidden_size: int = 128,
                 num_layers: int = 2, output_size: int = 30, dropout: float = 0.2):
        super(LSTMModel, self).__init__()
        self.hidden_size = hidden_size
        self.num_layers = num_layers

        self.lstm = nn.LSTM(
            input_size=input_size,
            hidden_size=hidden_size,
            num_layers=num_layers,
            batch_first=True,
            dropout=dropout if num_layers > 1 else 0
        )

        self.fc = nn.Sequential(
            nn.Linear(hidden_size, 64),
            nn.ReLU(),
            nn.Dropout(dropout),
            nn.Linear(64, output_size)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x shape: (batch, seq_len, input_size)
        lstm_out, _ = self.lstm(x)
        # Take the last output
        last_output = lstm_out[:, -1, :]
        # Predict future prices
        predictions = self.fc(last_output)
        return predictions


class MinMaxScaler:
    """Simple MinMax scaler for normalization."""

    def __init__(self):
        self.min_vals = None
        self.max_vals = None
        self.fitted = False

    def fit(self, data: np.ndarray) -> 'MinMaxScaler':
        self.min_vals = data.min(axis=0)
        self.max_vals = data.max(axis=0)
        # Avoid division by zero
        self.range_vals = self.max_vals - self.min_vals
        self.range_vals[self.range_vals == 0] = 1
        self.fitted = True
        return self

    def transform(self, data: np.ndarray) -> np.ndarray:
        if not self.fitted:
            raise ValueError("Scaler not fitted. Call fit() first.")
        return (data - self.min_vals) / self.range_vals

    def fit_transform(self, data: np.ndarray) -> np.ndarray:
        self.fit(data)
        return self.transform(data)

    def inverse_transform(self, data: np.ndarray, col_idx: int = 0) -> np.ndarray:
        """Inverse transform for a single column (default: close price at index 0)."""
        if not self.fitted:
            raise ValueError("Scaler not fitted. Call fit() first.")
        return data * self.range_vals[col_idx] + self.min_vals[col_idx]

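The per-column min-max normalization above can be checked on a toy matrix. This is a minimal standalone sketch (not importing the module), mirroring `fit`, `transform`, and the single-column `inverse_transform` on a 2-column feature array where column 0 plays the role of the close price:

```python
import numpy as np

# Fit on a 2-column feature matrix, transform, then invert column 0.
data = np.array([[10.0, 100.0],
                 [20.0, 200.0],
                 [30.0, 300.0]], dtype=np.float32)

min_vals = data.min(axis=0)        # per-column minimum: [10., 100.]
max_vals = data.max(axis=0)        # per-column maximum: [30., 300.]
range_vals = max_vals - min_vals   # per-column range:   [20., 200.]
range_vals[range_vals == 0] = 1    # guard against constant columns

scaled = (data - min_vals) / range_vals                  # values in [0, 1]
recovered = scaled[:, 0] * range_vals[0] + min_vals[0]   # inverse for column 0
# scaled[:, 0] -> [0.0, 0.5, 1.0]; recovered -> [10., 20., 30.]
```

The zero-range guard is what keeps a flat column (e.g. zero volume across the window) from producing NaNs.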
+
class StockForecaster:
|
| 87 |
+
"""
|
| 88 |
+
Stock price forecaster using LSTM neural network.
|
| 89 |
+
|
| 90 |
+
The model is lazy-loaded on first use to avoid slow startup times.
|
| 91 |
+
Supports training on historical OHLCV data and predicting future prices.
|
| 92 |
+
"""
|
| 93 |
+
|
| 94 |
+
MODEL_DIR = os.path.join(os.path.dirname(__file__), 'forecast_models')
|
| 95 |
+
|
| 96 |
+
def __init__(self, sequence_length: int = 60, forecast_horizon: int = 30):
|
| 97 |
+
self._models: Dict[str, LSTMModel] = {}
|
| 98 |
+
self._scalers: Dict[str, MinMaxScaler] = {}
|
| 99 |
+
self._device = "cuda" if torch.cuda.is_available() else "cpu"
|
| 100 |
+
self.sequence_length = sequence_length
|
| 101 |
+
self.forecast_horizon = forecast_horizon
|
| 102 |
+
|
| 103 |
+
# Ensure model directory exists
|
| 104 |
+
os.makedirs(self.MODEL_DIR, exist_ok=True)
|
| 105 |
+
|
| 106 |
+
def _get_ticker_dir(self, ticker: str) -> str:
|
| 107 |
+
"""Get the directory path for a ticker's model files."""
|
| 108 |
+
return os.path.join(self.MODEL_DIR, ticker.upper())
|
| 109 |
+
|
| 110 |
+
def _load_model(self, ticker: str) -> bool:
|
| 111 |
+
"""
|
| 112 |
+
Load a trained model for a ticker from disk.
|
| 113 |
+
|
| 114 |
+
Returns:
|
| 115 |
+
True if model was loaded successfully, False otherwise.
|
| 116 |
+
"""
|
| 117 |
+
ticker = ticker.upper()
|
| 118 |
+
if ticker in self._models:
|
| 119 |
+
return True
|
| 120 |
+
|
| 121 |
+
ticker_dir = self._get_ticker_dir(ticker)
|
| 122 |
+
model_path = os.path.join(ticker_dir, 'model.pt')
|
| 123 |
+
scaler_path = os.path.join(ticker_dir, 'scaler.pkl')
|
| 124 |
+
|
| 125 |
+
if not os.path.exists(model_path) or not os.path.exists(scaler_path):
|
| 126 |
+
return False
|
| 127 |
+
|
| 128 |
+
try:
|
| 129 |
+
logger.info(f"Loading forecast model for {ticker}...")
|
| 130 |
+
|
| 131 |
+
# Load scaler
|
| 132 |
+
with open(scaler_path, 'rb') as f:
|
| 133 |
+
self._scalers[ticker] = pickle.load(f)
|
| 134 |
+
|
| 135 |
+
# Build and load model
|
| 136 |
+
model = LSTMModel(output_size=self.forecast_horizon)
|
| 137 |
+
model.load_state_dict(torch.load(model_path, map_location=self._device, weights_only=True))
|
| 138 |
+
model.to(self._device)
|
| 139 |
+
model.eval()
|
| 140 |
+
self._models[ticker] = model
|
| 141 |
+
|
| 142 |
+
logger.info(f"Forecast model for {ticker} loaded successfully on {self._device}")
|
| 143 |
+
return True
|
| 144 |
+
|
| 145 |
+
except Exception as e:
|
| 146 |
+
logger.error(f"Failed to load forecast model for {ticker}: {e}")
|
| 147 |
+
return False
|
| 148 |
+
|
| 149 |
+
def _save_model(self, ticker: str, metadata: Dict) -> None:
|
| 150 |
+
"""Save trained model and scaler to disk."""
|
| 151 |
+
ticker = ticker.upper()
|
| 152 |
+
ticker_dir = self._get_ticker_dir(ticker)
|
| 153 |
+
os.makedirs(ticker_dir, exist_ok=True)
|
| 154 |
+
|
| 155 |
+
model_path = os.path.join(ticker_dir, 'model.pt')
|
| 156 |
+
scaler_path = os.path.join(ticker_dir, 'scaler.pkl')
|
| 157 |
+
metadata_path = os.path.join(ticker_dir, 'metadata.json')
|
| 158 |
+
|
| 159 |
+
# Save model weights
|
| 160 |
+
torch.save(self._models[ticker].state_dict(), model_path)
|
| 161 |
+
|
| 162 |
+
# Save scaler
|
| 163 |
+
with open(scaler_path, 'wb') as f:
|
| 164 |
+
pickle.dump(self._scalers[ticker], f)
|
| 165 |
+
|
| 166 |
+
# Save metadata
|
| 167 |
+
with open(metadata_path, 'w') as f:
|
| 168 |
+
json.dump(metadata, f, indent=2)
|
| 169 |
+
|
| 170 |
+
logger.info(f"Forecast model for {ticker} saved to {ticker_dir}")
|
| 171 |
+
|
| 172 |
+
def _prepare_data(self, data: List[Dict]) -> Tuple[np.ndarray, np.ndarray]:
|
| 173 |
+
"""
|
| 174 |
+
Prepare OHLCV data for training.
|
| 175 |
+
|
| 176 |
+
Args:
|
| 177 |
+
data: List of dicts with keys: o, h, l, c, v (open, high, low, close, volume)
|
| 178 |
+
|
| 179 |
+
Returns:
|
| 180 |
+
Tuple of (X, y) numpy arrays for training
|
| 181 |
+
"""
|
| 182 |
+
# Extract OHLCV features - close first for easy inverse transform
|
| 183 |
+
features = np.array([[d['c'], d['o'], d['h'], d['l'], d['v']] for d in data], dtype=np.float32)
|
| 184 |
+
|
| 185 |
+
# Create sequences
|
| 186 |
+
X, y = [], []
|
| 187 |
+
for i in range(len(features) - self.sequence_length - self.forecast_horizon + 1):
|
| 188 |
+
X.append(features[i:i + self.sequence_length])
|
| 189 |
+
# Target: next forecast_horizon closing prices
|
| 190 |
+
y.append(features[i + self.sequence_length:i + self.sequence_length + self.forecast_horizon, 0])
|
| 191 |
+
|
| 192 |
+
return np.array(X), np.array(y)
|
| 193 |
+
|
| 194 |
+
def train(self, ticker: str, data: List[Dict], epochs: int = 50,
|
| 195 |
+
learning_rate: float = 0.001, batch_size: int = 32) -> Dict:
|
| 196 |
+
"""
|
| 197 |
+
Train the LSTM model on historical price data.
|
| 198 |
+
|
| 199 |
+
Args:
|
| 200 |
+
ticker: Stock ticker symbol
|
| 201 |
+
data: List of OHLCV dicts (must have at least sequence_length + forecast_horizon entries)
|
| 202 |
+
epochs: Number of training epochs
|
| 203 |
+
learning_rate: Learning rate for optimizer
|
| 204 |
+
batch_size: Training batch size
|
| 205 |
+
|
| 206 |
+
Returns:
|
| 207 |
+
Dict with training results (loss, metadata)
|
| 208 |
+
"""
|
| 209 |
+
ticker = ticker.upper()
|
| 210 |
+
|
| 211 |
+
if len(data) < self.sequence_length + self.forecast_horizon:
|
| 212 |
+
raise ValueError(f"Insufficient data: need at least {self.sequence_length + self.forecast_horizon} data points")
|
| 213 |
+
|
| 214 |
+
logger.info(f"Training forecast model for {ticker} with {len(data)} data points...")
|
| 215 |
+
|
| 216 |
+
# Prepare data
|
| 217 |
+
X, y = self._prepare_data(data)
|
| 218 |
+
|
| 219 |
+
# Normalize features
|
| 220 |
+
scaler = MinMaxScaler()
|
| 221 |
+
X_flat = X.reshape(-1, X.shape[-1])
|
| 222 |
+
scaler.fit(X_flat)
|
| 223 |
+
X_scaled = np.array([scaler.transform(seq) for seq in X])
|
| 224 |
+
|
| 225 |
+
# Normalize targets using close price stats
|
| 226 |
+
y_scaled = (y - scaler.min_vals[0]) / scaler.range_vals[0]
|
| 227 |
+
|
| 228 |
+
self._scalers[ticker] = scaler
|
| 229 |
+
|
| 230 |
+
# Convert to tensors
|
| 231 |
+
X_tensor = torch.FloatTensor(X_scaled).to(self._device)
|
| 232 |
+
y_tensor = torch.FloatTensor(y_scaled).to(self._device)
|
| 233 |
+
|
| 234 |
+
# Split train/validation (80/20)
|
| 235 |
+
split_idx = int(len(X_tensor) * 0.8)
|
| 236 |
+
X_train, X_val = X_tensor[:split_idx], X_tensor[split_idx:]
|
| 237 |
+
y_train, y_val = y_tensor[:split_idx], y_tensor[split_idx:]
|
| 238 |
+
|
| 239 |
+
# Build model
|
| 240 |
+
model = LSTMModel(output_size=self.forecast_horizon)
|
| 241 |
+
model.to(self._device)
|
| 242 |
+
|
| 243 |
+
criterion = nn.MSELoss()
|
| 244 |
+
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
|
| 245 |
+
|
| 246 |
+
# Training loop
|
| 247 |
+
best_val_loss = float('inf')
|
| 248 |
+
train_losses = []
|
| 249 |
+
|
| 250 |
+
for epoch in range(epochs):
|
| 251 |
+
model.train()
|
| 252 |
+
epoch_loss = 0
|
| 253 |
+
|
| 254 |
+
# Mini-batch training
|
| 255 |
+
for i in range(0, len(X_train), batch_size):
|
| 256 |
+
batch_X = X_train[i:i + batch_size]
|
| 257 |
+
batch_y = y_train[i:i + batch_size]
|
| 258 |
+
|
| 259 |
+
optimizer.zero_grad()
|
| 260 |
+
outputs = model(batch_X)
|
| 261 |
+
loss = criterion(outputs, batch_y)
|
| 262 |
+
loss.backward()
|
| 263 |
+
optimizer.step()
|
| 264 |
+
|
| 265 |
+
epoch_loss += loss.item()
|
| 266 |
+
|
| 267 |
+
avg_train_loss = epoch_loss / (len(X_train) // batch_size + 1)
|
| 268 |
+
train_losses.append(avg_train_loss)
|
| 269 |
+
|
| 270 |
+
# Validation
|
| 271 |
+
model.eval()
|
| 272 |
+
with torch.no_grad():
|
| 273 |
+
val_outputs = model(X_val)
|
| 274 |
+
val_loss = criterion(val_outputs, y_val).item()
|
| 275 |
+
|
| 276 |
+
if val_loss < best_val_loss:
|
| 277 |
+
best_val_loss = val_loss
|
| 278 |
+
|
| 279 |
+
if (epoch + 1) % 10 == 0:
|
| 280 |
+
logger.info(f"Epoch {epoch + 1}/{epochs} - Train Loss: {avg_train_loss:.6f}, Val Loss: {val_loss:.6f}")
|
| 281 |
+
|
| 282 |
+
self._models[ticker] = model
|
| 283 |
+
|
| 284 |
+
# Save model and metadata
|
| 285 |
+
metadata = {
|
| 286 |
+
"ticker": ticker,
|
| 287 |
+
"trained_at": datetime.utcnow().isoformat() + "Z",
|
| 288 |
+
"training_epochs": epochs,
|
| 289 |
+
"final_train_loss": float(train_losses[-1]),
|
| 290 |
+
"final_val_loss": float(best_val_loss),
|
| 291 |
+
"data_points": len(data),
|
| 292 |
+
"sequence_length": self.sequence_length,
|
| 293 |
+
"forecast_horizon": self.forecast_horizon,
|
| 294 |
+
"model_version": "1.0"
|
| 295 |
+
}
|
| 296 |
+
|
| 297 |
+
self._save_model(ticker, metadata)
|
| 298 |
+
|
| 299 |
+
logger.info(f"Training complete for {ticker}. Final val loss: {best_val_loss:.6f}")
|
| 300 |
+
|
| 301 |
+
return {
|
| 302 |
+
"status": "training_complete",
|
| 303 |
+
"ticker": ticker,
|
| 304 |
+
"epochs": epochs,
|
| 305 |
+
"final_loss": float(train_losses[-1]),
|
| 306 |
+
"validation_loss": float(best_val_loss),
|
| 307 |
+
"data_points": len(data)
|
| 308 |
+
}
|
| 309 |
+
|
| 310 |
+
def predict(self, ticker: str, recent_data: List[Dict]) -> Dict:
|
| 311 |
+
"""
|
| 312 |
+
Generate price forecast using trained model.
|
| 313 |
+
|
| 314 |
+
Args:
|
| 315 |
+
ticker: Stock ticker symbol
|
| 316 |
+
recent_data: Most recent OHLCV data (at least sequence_length entries)
|
| 317 |
+
|
| 318 |
+
Returns:
|
| 319 |
+
Dict with predictions and confidence bounds
|
| 320 |
+
"""
|
| 321 |
+
ticker = ticker.upper()
|
| 322 |
+
|
| 323 |
+
# Load model if not in memory
|
| 324 |
+
if ticker not in self._models:
|
| 325 |
+
if not self._load_model(ticker):
|
| 326 |
+
raise ValueError(f"No trained model found for {ticker}. Train the model first.")
|
| 327 |
+
|
| 328 |
+
if len(recent_data) < self.sequence_length:
|
| 329 |
+
raise ValueError(f"Need at least {self.sequence_length} data points for prediction")
|
| 330 |
+
|
| 331 |
+
# Use last sequence_length data points
|
| 332 |
+
data = recent_data[-self.sequence_length:]
|
| 333 |
+
features = np.array([[d['c'], d['o'], d['h'], d['l'], d['v']] for d in data], dtype=np.float32)
|
| 334 |
+
|
| 335 |
+
# Normalize
|
| 336 |
+
scaler = self._scalers[ticker]
|
| 337 |
+
features_scaled = scaler.transform(features)
|
| 338 |
+
|
| 339 |
+
# Predict
|
| 340 |
+
model = self._models[ticker]
|
| 341 |
+
model.eval()
|
| 342 |
+
|
| 343 |
+
X = torch.FloatTensor(features_scaled).unsqueeze(0).to(self._device)
|
| 344 |
+
|
| 345 |
+
with torch.no_grad():
|
| 346 |
+
predictions_scaled = model(X).cpu().numpy()[0]
|
| 347 |
+
|
| 348 |
+
# Inverse transform predictions
|
| 349 |
+
predictions = scaler.inverse_transform(predictions_scaled, col_idx=0)
|
| 350 |
+
|
| 351 |
+
# Calculate confidence bounds (simple approach: +/- percentage based on historical volatility)
|
| 352 |
+
recent_closes = [d['c'] for d in recent_data[-30:]]
|
| 353 |
+
volatility = np.std(recent_closes) / np.mean(recent_closes)
|
| 354 |
+
confidence_pct = max(0.02, min(0.10, volatility * 2)) # 2-10% bounds
|
| 355 |
+
|
| 356 |
+
upper_bound = predictions * (1 + confidence_pct)
|
| 357 |
+
lower_bound = predictions * (1 - confidence_pct)
|
| 358 |
+
|
| 359 |
+
# Get last date from data for generating forecast dates
|
| 360 |
+
last_timestamp = recent_data[-1].get('t', 0)
|
| 361 |
+
|
| 362 |
+
return {
|
| 363 |
+
"predictions": predictions.tolist(),
|
| 364 |
+
"upper_bound": upper_bound.tolist(),
|
| 365 |
+
"lower_bound": lower_bound.tolist(),
|
| 366 |
+
"last_timestamp": last_timestamp,
|
| 367 |
+
"forecast_horizon": self.forecast_horizon
|
| 368 |
+
}
|
| 369 |
+
|
| 370 |
+
def has_model(self, ticker: str) -> bool:
|
| 371 |
+
"""Check if a trained model exists for the ticker."""
|
| 372 |
+
ticker = ticker.upper()
|
| 373 |
+
if ticker in self._models:
|
| 374 |
+
return True
|
| 375 |
+
|
| 376 |
+
ticker_dir = self._get_ticker_dir(ticker)
|
| 377 |
+
return os.path.exists(os.path.join(ticker_dir, 'model.pt'))
|
| 378 |
+
|
| 379 |
+
def get_model_metadata(self, ticker: str) -> Optional[Dict]:
|
| 380 |
+
"""Get metadata for a trained model."""
|
| 381 |
+
ticker = ticker.upper()
|
| 382 |
+
metadata_path = os.path.join(self._get_ticker_dir(ticker), 'metadata.json')
|
| 383 |
+
|
| 384 |
+
if not os.path.exists(metadata_path):
|
| 385 |
+
return None
|
| 386 |
+
|
| 387 |
+
with open(metadata_path, 'r') as f:
|
| 388 |
+
return json.load(f)
|
| 389 |
+
|
| 390 |
+
def unload_model(self, ticker: str) -> None:
|
| 391 |
+
"""Unload a model from memory to free resources."""
|
| 392 |
+
ticker = ticker.upper()
|
| 393 |
+
if ticker in self._models:
|
| 394 |
+
del self._models[ticker]
|
| 395 |
+
del self._scalers[ticker]
|
| 396 |
+
if torch.cuda.is_available():
|
| 397 |
+
torch.cuda.empty_cache()
|
| 398 |
+
logger.info(f"Forecast model for {ticker} unloaded")
|
| 399 |
+
|
| 400 |
+
|
| 401 |
+
# Singleton instance
|
| 402 |
+
_forecaster_instance: Optional[StockForecaster] = None
|
| 403 |
+
|
| 404 |
+
|
| 405 |
+
def get_stock_forecaster() -> StockForecaster:
|
| 406 |
+
"""Get or create the singleton stock forecaster instance."""
|
| 407 |
+
global _forecaster_instance
|
| 408 |
+
if _forecaster_instance is None:
|
| 409 |
+
_forecaster_instance = StockForecaster()
|
| 410 |
+
return _forecaster_instance
|
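The sliding-window construction in `_prepare_data` above can be sketched on a toy series. This standalone example shrinks `sequence_length` to 3 and `forecast_horizon` to 2 (instead of the production 60/30 defaults) and uses a single feature column so the window boundaries are easy to read:

```python
import numpy as np

seq_len, horizon = 3, 2
closes = np.arange(10, dtype=np.float32)   # stand-in close prices 0..9
features = closes.reshape(-1, 1)           # single-feature version of the OHLCV matrix

X, y = [], []
for i in range(len(features) - seq_len - horizon + 1):
    X.append(features[i:i + seq_len])                          # input window
    y.append(features[i + seq_len:i + seq_len + horizon, 0])   # next `horizon` closes

X, y = np.array(X), np.array(y)
# X.shape == (6, 3, 1): six windows of three timesteps each
# y.shape == (6, 2): each window's target is the next two closes,
# e.g. X[0] covers prices [0, 1, 2] and y[0] is [3, 4]
```

With the real defaults, each training sample is a 60-day window of five features, and the target is the following 30 closing prices, matching the model's `output_size=30`.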
be/forecast_routes.py
ADDED
@@ -0,0 +1,140 @@
"""
Flask routes for stock price forecasting.
"""

from flask import Blueprint, request, jsonify
import logging

from forecast_service import get_forecast_service

logger = logging.getLogger(__name__)

forecast_bp = Blueprint('forecast', __name__, url_prefix='/api/forecast')


@forecast_bp.route('/predict/<ticker>', methods=['POST'])
def get_forecast(ticker: str):
    """
    Get price forecast for a ticker. Auto-trains if no model exists.

    Request body (optional):
    {
        "force_retrain": false,
        "historical_data": [...]  // Optional: pre-fetched OHLCV data from frontend
    }

    Response:
    {
        "ticker": "AAPL",
        "forecast": [...],
        "model_info": {...},
        "historical": [...],
        "confidence_bounds": {...}
    }
    """
    try:
        data = request.get_json() or {}
        force_retrain = data.get('force_retrain', False)
        historical_data = data.get('historical_data', None)

        service = get_forecast_service()
        result = service.get_forecast(ticker.upper(), force_retrain=force_retrain, historical_data=historical_data)

        if "error" in result:
            return jsonify(result), 400

        return jsonify(result), 200

    except Exception as e:
        logger.error(f"Error getting forecast for {ticker}: {e}")
        return jsonify({"error": str(e)}), 500


@forecast_bp.route('/train/<ticker>', methods=['POST'])
def train_model(ticker: str):
    """
    Force train/retrain model for a ticker.

    Request body (optional):
    { "historical_data": [...] }  // Optional: pre-fetched OHLCV data

    Response:
    {
        "status": "training_complete",
        "ticker": "AAPL",
        "epochs": 50,
        "final_loss": 0.0023
    }
    """
    try:
        data = request.get_json() or {}
        historical_data = data.get('historical_data', None)

        service = get_forecast_service()
        result = service.train_model(ticker.upper(), historical_data=historical_data)

        if "error" in result:
            return jsonify(result), 400

        return jsonify(result), 200

    except Exception as e:
        logger.error(f"Error training model for {ticker}: {e}")
        return jsonify({"error": str(e)}), 500


@forecast_bp.route('/status/<ticker>', methods=['GET'])
def model_status(ticker: str):
    """
    Check model status for a ticker.

    Response:
    {
        "ticker": "AAPL",
        "model_exists": true,
        "metadata": {...}
    }
    """
    try:
        service = get_forecast_service()
        result = service.get_model_status(ticker.upper())
        return jsonify(result), 200

    except Exception as e:
        logger.error(f"Error getting model status for {ticker}: {e}")
        return jsonify({"error": str(e)}), 500


@forecast_bp.route('/health', methods=['GET'])
def forecast_health():
    """
    Health check for forecast service.

    Response:
    {
        "status": "healthy",
        "service": "forecast",
        "model_dir_exists": true
    }
    """
    try:
        import os
        from forecast_model import StockForecaster

        model_dir = StockForecaster.MODEL_DIR
        model_dir_exists = os.path.exists(model_dir)

        return jsonify({
            "status": "healthy",
            "service": "forecast",
            "model_dir": model_dir,
            "model_dir_exists": model_dir_exists
        }), 200

    except Exception as e:
        logger.error(f"Health check failed: {e}")
        return jsonify({
            "status": "unhealthy",
            "service": "forecast",
            "error": str(e)
        }), 500
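A hypothetical client-side sketch of calling the `/api/forecast/predict/<ticker>` route above. The base URL is an assumption (wherever the Flask app is served); only the path shape and the `force_retrain` field come from the route's docstring, and the actual HTTP POST is left to whatever client library is in use:

```python
import json

BASE_URL = "http://localhost:5000"  # assumed dev server address, not from the repo


def build_predict_request(ticker: str, force_retrain: bool = False):
    """Compose the URL and JSON body for POST /api/forecast/predict/<ticker>."""
    url = f"{BASE_URL}/api/forecast/predict/{ticker.upper()}"
    body = json.dumps({"force_retrain": force_retrain})
    return url, body


url, body = build_predict_request("aapl")
# url  -> "http://localhost:5000/api/forecast/predict/AAPL"
# body -> '{"force_retrain": false}'
```

On success the route returns 200 with `ticker`, `forecast`, `model_info`, `historical`, and `confidence_bounds` keys; an `"error"` key in the service result maps to a 400 response.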
be/forecast_service.py
ADDED
@@ -0,0 +1,235 @@
"""
Service layer for stock price forecasting.
Orchestrates data fetching, model training, and predictions.
"""

import logging
from datetime import datetime, timedelta
from typing import Dict, Optional
from polygon_api import PolygonAPI
from forecast_model import get_stock_forecaster, StockForecaster

logger = logging.getLogger(__name__)


class ForecastService:
    """
    Service for stock price forecasting.
    Manages model training, prediction, and caching.
    """

    TRAINING_DATA_YEARS = 2  # Years of historical data for training

    def __init__(self):
        self.forecaster: StockForecaster = get_stock_forecaster()
        self.polygon = PolygonAPI()
        self._forecast_cache: Dict[str, Dict] = {}  # {ticker: {timestamp, forecast}}
        self.cache_ttl_minutes = 60

    def get_forecast(self, ticker: str, force_retrain: bool = False, historical_data: list = None) -> Dict:
        """
        Get forecast for a ticker. Auto-trains if no model exists.

        Args:
            ticker: Stock ticker symbol
            force_retrain: If True, retrain even if model exists
            historical_data: Optional pre-fetched OHLCV data from frontend cache

        Returns:
            Dict with forecast data
        """
        ticker = ticker.upper()

        # Check if we need to train
        needs_training = force_retrain or not self.forecaster.has_model(ticker)

        if needs_training:
            logger.info(f"Training model for {ticker}...")
            training_result = self.train_model(ticker, historical_data)
            if training_result.get("status") != "training_complete":
                return {"error": "Training failed", "details": training_result}

        # Use provided data or fetch if not available
        if historical_data and len(historical_data) >= 60:
            recent_data = historical_data
        else:
            recent_data = self._fetch_recent_data(ticker)
            if not recent_data:
                return {"error": "Failed to fetch recent data for prediction"}

        # Generate forecast
        try:
            forecast_result = self.forecaster.predict(ticker, recent_data)
        except Exception as e:
            logger.error(f"Prediction failed for {ticker}: {e}")
            return {"error": f"Prediction failed: {str(e)}"}

        # Get model metadata
        metadata = self.forecaster.get_model_metadata(ticker)

        # Format response with dates
        forecast = self._format_forecast(forecast_result, recent_data)

        return {
            "ticker": ticker,
            "forecast": forecast,
            "model_info": metadata,
            "historical": self._format_historical(recent_data[-60:]),  # Last 60 days for chart
            "confidence_bounds": {
                "upper": forecast_result["upper_bound"],
                "lower": forecast_result["lower_bound"]
            }
        }

    def train_model(self, ticker: str, historical_data: list = None) -> Dict:
        """
        Train/retrain model for a ticker.

        Args:
            ticker: Stock ticker symbol
            historical_data: Optional pre-fetched OHLCV data from frontend cache

        Returns:
            Dict with training results
        """
        ticker = ticker.upper()

        # Use provided data or fetch if not available
        if historical_data and len(historical_data) >= 100:
            training_data = historical_data
        else:
            training_data = self._fetch_training_data(ticker)
            if not training_data:
                return {"error": "Failed to fetch training data", "ticker": ticker}

        if len(training_data) < 100:
            return {
                "error": "Insufficient historical data",
                "ticker": ticker,
                "data_points": len(training_data),
                "required": 100
            }

        # Train the model
        try:
            result = self.forecaster.train(ticker, training_data)
            return result
        except Exception as e:
            logger.error(f"Training failed for {ticker}: {e}")
            return {"error": f"Training failed: {str(e)}", "ticker": ticker}

    def _fetch_training_data(self, ticker: str) -> Optional[list]:
        """Fetch historical data for training (2 years)."""
        to_date = datetime.now()
        from_date = to_date - timedelta(days=self.TRAINING_DATA_YEARS * 365)

        try:
            response = self.polygon.get_aggregates(
                ticker,
                timespan="day",
                from_date=from_date.strftime("%Y-%m-%d"),
                to_date=to_date.strftime("%Y-%m-%d")
            )

            if "results" not in response or not response["results"]:
                logger.error(f"Failed to fetch training data for {ticker}: {response}")
                return None

            return response["results"]

        except Exception as e:
            logger.error(f"Error fetching training data for {ticker}: {e}")
            return None

    def _fetch_recent_data(self, ticker: str) -> Optional[list]:
        """Fetch recent data for prediction (90 days to ensure enough data)."""
        to_date = datetime.now()
        from_date = to_date - timedelta(days=90)

        try:
            response = self.polygon.get_aggregates(
                ticker,
                timespan="day",
                from_date=from_date.strftime("%Y-%m-%d"),
                to_date=to_date.strftime("%Y-%m-%d")
            )

            if "results" not in response or not response["results"]:
                logger.error(f"Failed to fetch recent data for {ticker}: {response}")
                return None

            return response["results"]

        except Exception as e:
            logger.error(f"Error fetching recent data for {ticker}: {e}")
            return None

    def _format_forecast(self, forecast_result: Dict, historical_data: list) -> list:
        """Format forecast with dates."""
        predictions = forecast_result["predictions"]
        upper = forecast_result["upper_bound"]
        lower = forecast_result["lower_bound"]

        # Start from the last historical date
        if historical_data:
            last_timestamp = historical_data[-1].get("t", 0)
            last_date = datetime.fromtimestamp(last_timestamp / 1000)
        else:
            last_date = datetime.now()

        forecast = []
        current_date = last_date

        for i, pred in enumerate(predictions):
            # Skip weekends
            current_date += timedelta(days=1)
            while current_date.weekday() >= 5:  # 5=Saturday, 6=Sunday
                current_date += timedelta(days=1)

            forecast.append({
                "date": current_date.strftime("%Y-%m-%d"),
                "predicted_close": round(pred, 2),
                "upper_bound": round(upper[i], 2),
                "lower_bound": round(lower[i], 2),
                "day": i + 1
            })

        return forecast

    def _format_historical(self, data: list) -> list:
        """Format historical data for chart display."""
        return [
            {
                "date": datetime.fromtimestamp(d["t"] / 1000).strftime("%Y-%m-%d"),
                "close": d["c"],
                "open": d["o"],
                "high": d["h"],
                "low": d["l"],
                "volume": d["v"]
            }
            for d in data
        ]

    def get_model_status(self, ticker: str) -> Dict:
        """Get model status for a ticker."""
        ticker = ticker.upper()
        has_model = self.forecaster.has_model(ticker)
        metadata = self.forecaster.get_model_metadata(ticker) if has_model else None

        return {
            "ticker": ticker,
            "model_exists": has_model,
            "metadata": metadata
        }


# Singleton instance
_forecast_service: Optional[ForecastService] = None


def get_forecast_service() -> ForecastService:
    """Get or create the singleton forecast service instance."""
    global _forecast_service
    if _forecast_service is None:
        _forecast_service = ForecastService()
    return _forecast_service
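The trading-day date generation in `_format_forecast` above (advance one calendar day per prediction, skipping Saturday and Sunday) can be sketched in isolation. This is a minimal standalone version of just the date-stepping loop, using a known Friday as the starting point:

```python
from datetime import datetime, timedelta


def next_trading_days(last_date: datetime, n: int):
    """Generate the next n weekday dates after last_date, skipping weekends."""
    days, current = [], last_date
    for _ in range(n):
        current += timedelta(days=1)
        while current.weekday() >= 5:  # 5=Saturday, 6=Sunday
            current += timedelta(days=1)
        days.append(current)
    return days


# Starting from Friday 2024-01-05, the next trading day is Monday 2024-01-08.
days = next_trading_days(datetime(2024, 1, 5), 3)
# days -> 2024-01-08, 2024-01-09, 2024-01-10
```

Note this only skips weekends, mirroring the service code: market holidays are not excluded, so the predicted dates can include days the exchange is actually closed.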
be/llm_client.py
ADDED
@@ -0,0 +1,267 @@
import google.generativeai as genai
from google.genai import types as genai_types
from google import genai as genai_new
from datetime import datetime, timedelta
from config import GEMINI_API_KEY, GEMINI_MODEL, MAX_CONTEXT_LENGTH


class GeminiClient:
    """Handles interactions with the Google Gemini API."""

    SYSTEM_PROMPT = """# Context
You are an expert stock market analyst with access to the following data for a particular stock ticker:
- Real-time stock data (price, volume, market cap)
- Historical price charts and trends
- News articles with full content
- Dividend and split history

# Task
Your task is to answer questions about the ticker based on the information you have been provided.

When answering questions:
- Be concise and data-driven
- Cite specific numbers from the data provided
- Format numbers with proper units (e.g., $1.5B, 10.5M shares)
- Do not include any markdown syntax elements. There is no need to format plain text.

Always ground your responses in the provided data. If information is not available, say so clearly."""

    def __init__(self, api_key=None, model=None):
        genai.configure(api_key=api_key or GEMINI_API_KEY)
        self.model_name = model or GEMINI_MODEL
        self.model = genai.GenerativeModel(
            model_name=self.model_name,
            system_instruction=self.SYSTEM_PROMPT
        )

    def generate_response(self, prompt, conversation_history=None):
        """
        Generate a response from Gemini.

        Args:
            prompt: User prompt with context
            conversation_history: List of previous messages

        Returns:
            Generated response text
        """
        try:
            # Convert history to Gemini format and start a chat session
            chat = self.model.start_chat(
                history=self._convert_history(conversation_history)
            )

            # Generate the response
            response = chat.send_message(prompt)
            return response.text

        except Exception as e:
            print(f"Error generating response: {e}")
            return "I encountered an error processing your request. Please try again."

    def stream_response(self, prompt, conversation_history=None):
        """
        Generate a streaming response from Gemini.

        Args:
            prompt: User prompt with context
            conversation_history: List of previous messages

        Yields:
            Response chunks as they arrive
        """
        try:
            # Convert history to Gemini format and start a chat session
            chat = self.model.start_chat(
                history=self._convert_history(conversation_history)
            )

            # Stream the response
            response = chat.send_message(prompt, stream=True)

            for chunk in response:
                if chunk.text:
                    yield chunk.text

        except Exception as e:
            print(f"Error streaming response: {e}")
            yield "I encountered an error processing your request. Please try again."

    def _convert_history(self, history):
        """
        Convert OpenAI-style history to Gemini format.

        Args:
            history: List of messages with 'role' and 'content' keys

        Returns:
            List of Gemini-formatted messages
        """
        if not history:
            return []

        gemini_history = []
        for msg in history:
            role = msg.get('role', 'user')
            content = msg.get('content', '')

            # Gemini uses 'user' and 'model' roles
            gemini_role = 'user' if role == 'user' else 'model'

            gemini_history.append({
                'role': gemini_role,
                'parts': [content]
            })

        return gemini_history


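The role mapping `_convert_history` performs can be exercised without the Gemini SDK. This standalone sketch mirrors its logic (the function name here is illustrative, not part of the class):

```python
def convert_history(history):
    # OpenAI-style roles -> Gemini roles: anything other than 'user'
    # (e.g. 'assistant') becomes 'model'; content moves into a 'parts' list.
    converted = []
    for msg in history or []:
        role = 'user' if msg.get('role', 'user') == 'user' else 'model'
        converted.append({'role': role, 'parts': [msg.get('content', '')]})
    return converted

out = convert_history([
    {'role': 'user', 'content': 'What is AAPL trading at?'},
    {'role': 'assistant', 'content': 'AAPL closed at $189.'},
])
print(out[1]['role'])  # → model
```

The `or []` guard reproduces the `if not history: return []` behavior for a `None` history.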
class ConversationManager:
    """Manages conversation history for chat sessions."""

    def __init__(self):
        self.conversations = {}
        self.ttl_hours = 24

    def add_message(self, conversation_id, role, content):
        """
        Add a message to the conversation history.

        Args:
            conversation_id: Unique conversation identifier
            role: Message role (user/assistant)
            content: Message content
        """
        if conversation_id not in self.conversations:
            self.conversations[conversation_id] = {
                'messages': [],
                'created_at': datetime.now()
            }

        self.conversations[conversation_id]['messages'].append({
            'role': role,
            'content': content
        })

        # Clean up old conversations
        self._cleanup_old_conversations()

    def get_history(self, conversation_id, last_n=5):
        """
        Get conversation history.

        Args:
            conversation_id: Unique conversation identifier
            last_n: Number of recent exchanges to return

        Returns:
            List of recent messages
        """
        if conversation_id not in self.conversations:
            return []

        messages = self.conversations[conversation_id]['messages']

        # Return the last N exchanges (user + assistant pairs)
        return messages[-(last_n * 2):] if len(messages) > last_n * 2 else messages

    def clear_conversation(self, conversation_id):
        """Clear a conversation."""
        if conversation_id in self.conversations:
            del self.conversations[conversation_id]

    def _cleanup_old_conversations(self):
        """Remove conversations older than the TTL."""
        cutoff = datetime.now() - timedelta(hours=self.ttl_hours)

        to_delete = [
            conv_id for conv_id, data in self.conversations.items()
            if data['created_at'] < cutoff
        ]

        for conv_id in to_delete:
            del self.conversations[conv_id]


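The windowing in `get_history` keeps only the tail of the message list. A standalone sketch of that slice (illustrative function name):

```python
def last_exchanges(messages, last_n=5):
    # Keep the last N user/assistant pairs, i.e. 2 * last_n messages.
    return messages[-(last_n * 2):] if len(messages) > last_n * 2 else messages

# 14 alternating messages -> only the final 10 survive a last_n=5 window.
msgs = [
    {'role': 'user' if i % 2 == 0 else 'assistant', 'content': str(i)}
    for i in range(14)
]
window = last_exchanges(msgs, last_n=5)
print(len(window), window[0]['content'])  # → 10 4
```

Because the slice counts messages rather than pairs, an odd-length history can start the window on an assistant turn; callers that care about pairing should trim to an even boundary.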
class AgentLLMClient:
    """Gemini client with function calling support using the google-genai SDK."""

    SYSTEM_PROMPT = """You are an expert stock market analyst assistant for MarketLens.
You have access to tools that provide real-time stock data, financial statements,
news, sentiment analysis, and price forecasts.

When answering questions:
- Use your tools to fetch relevant data before answering. Do not guess prices or financial figures.
- You may call multiple tools if the question requires different types of data.
- Be concise and data-driven. Cite specific numbers from tool results.
- Format numbers with proper units (e.g., $1.5B, 10.5M shares).
- If a tool returns an error, acknowledge the issue and work with whatever data you have.
- Do not include markdown formatting syntax. Write in plain text.
- The user is currently viewing the stock ticker: {ticker}. Use this ticker for tool calls unless the user explicitly asks about a different stock.
- For general questions that do not require data (e.g., "what is a P/E ratio?"), respond directly without calling tools."""

    def __init__(self, api_key=None, model=None):
        self.client = genai_new.Client(api_key=api_key or GEMINI_API_KEY)
        self.model_name = model or GEMINI_MODEL

    def build_config(self, tools, ticker):
        """Build a GenerateContentConfig with tools and the system prompt."""
        from agent_tools import TOOL_DECLARATIONS

        function_declarations = [
            genai_types.FunctionDeclaration(
                name=t["name"],
                description=t["description"],
                parameters_json_schema=t["parameters"],
            )
            for t in tools
        ]

        return genai_types.GenerateContentConfig(
            system_instruction=self.SYSTEM_PROMPT.format(ticker=ticker),
            tools=[genai_types.Tool(function_declarations=function_declarations)],
            automatic_function_calling=genai_types.AutomaticFunctionCallingConfig(disable=True),
        )

    def generate(self, contents, config):
        """Send contents to Gemini and return the full response."""
        return self.client.models.generate_content(
            model=self.model_name,
            contents=contents,
            config=config,
        )

    @staticmethod
    def make_user_content(text):
        """Create a user Content message."""
        return genai_types.Content(role="user", parts=[genai_types.Part(text=text)])

    @staticmethod
    def make_tool_response(name, result):
        """Create a tool response Content message."""
        return genai_types.Content(
            role="tool",
            parts=[genai_types.Part.from_function_response(name=name, response={"result": result})],
        )

    @staticmethod
    def history_to_contents(history):
        """Convert ConversationManager history to a list of genai Contents."""
        contents = []
        for msg in history:
            role = "user" if msg["role"] == "user" else "model"
            contents.append(
                genai_types.Content(role=role, parts=[genai_types.Part(text=msg["content"])])
            )
        return contents

    @staticmethod
    def extract_parts(response):
        """Extract function_call parts and text from a response."""
        candidate = response.candidates[0]
        parts = candidate.content.parts

        function_calls = [p for p in parts if p.function_call]
        text_parts = [p.text for p in parts if p.text]

        return function_calls, text_parts, candidate.content
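Because `AgentLLMClient` disables automatic function calling, the caller (presumably `agent_service.py`) must run the tool loop itself: send contents, execute any `function_call` parts via `make_tool_response`, and repeat until the model returns plain text. A network-free sketch of that control flow with stub turns and a stub tool (all names here are illustrative, not the repo's API):

```python
def run_agent_loop(model_turns, tools):
    """Dispatch tool calls until the model emits plain text.

    model_turns: iterable of responses, each either
      ('calls', [(tool_name, kwargs), ...]) or ('text', answer_str).
    tools: dict mapping tool name -> callable.
    """
    transcript = []
    for kind, payload in model_turns:
        if kind == 'calls':
            for name, args in payload:
                # Execute each requested tool; in the real loop the result
                # would be wrapped in a tool-response Content and re-sent.
                transcript.append((name, tools[name](**args)))
        else:
            return payload, transcript
    return None, transcript

# Stub "model": first requests a price lookup, then answers.
turns = [
    ('calls', [('get_price', {'ticker': 'AAPL'})]),
    ('text', 'AAPL is trading at $189.00.'),
]
answer, calls = run_agent_loop(turns, {'get_price': lambda ticker: 189.0})
print(answer)  # → AAPL is trading at $189.00.
```

A production loop would also cap the number of tool rounds to avoid a model that never stops calling tools.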
be/polygon_api.py
ADDED
@@ -0,0 +1,90 @@
import requests
from config import POLYGON_API_KEY

BASE_URL = "https://api.polygon.io"
REQUEST_TIMEOUT = 10  # seconds; prevents requests from hanging indefinitely


class PolygonAPI:
    def __init__(self):
        self.api_key = POLYGON_API_KEY

    def get_ticker_details(self, ticker):
        """Get detailed information about a ticker"""
        url = f"{BASE_URL}/v3/reference/tickers/{ticker}"
        params = {"apiKey": self.api_key}
        response = requests.get(url, params=params, timeout=REQUEST_TIMEOUT)
        return response.json()

    def get_previous_close(self, ticker):
        """Get the previous day's close data"""
        url = f"{BASE_URL}/v2/aggs/ticker/{ticker}/prev"
        params = {"adjusted": "true", "apiKey": self.api_key}
        response = requests.get(url, params=params, timeout=REQUEST_TIMEOUT)
        return response.json()

    def get_aggregates(self, ticker, timespan="day", from_date=None, to_date=None):
        """Get aggregate bars for a ticker over a given date range"""
        url = f"{BASE_URL}/v2/aggs/ticker/{ticker}/range/1/{timespan}/{from_date}/{to_date}"
        params = {"adjusted": "true", "sort": "asc", "apiKey": self.api_key}
        response = requests.get(url, params=params, timeout=REQUEST_TIMEOUT)
        return response.json()

    def get_ticker_news(self, ticker, limit=10):
        """Get news articles for a ticker"""
        url = f"{BASE_URL}/v2/reference/news"
        params = {
            "ticker": ticker,
            "limit": limit,
            "apiKey": self.api_key
        }
        response = requests.get(url, params=params, timeout=REQUEST_TIMEOUT)
        return response.json()

    def get_financials(self, ticker):
        """Get financial data for a ticker"""
        url = f"{BASE_URL}/vX/reference/financials"
        params = {
            "ticker": ticker,
            "limit": 4,
            "apiKey": self.api_key
        }
        response = requests.get(url, params=params, timeout=REQUEST_TIMEOUT)
        return response.json()

    def get_snapshot(self, ticker):
        """Get the current snapshot of a ticker"""
        url = f"{BASE_URL}/v2/snapshot/locale/us/markets/stocks/tickers/{ticker}"
        params = {"apiKey": self.api_key}
        response = requests.get(url, params=params, timeout=REQUEST_TIMEOUT)
        return response.json()

    def get_dividends(self, ticker, limit=10):
        """Get dividend history for a ticker"""
        url = f"{BASE_URL}/v3/reference/dividends"
        params = {
            "ticker": ticker,
            "limit": limit,
            "order": "desc",
            "apiKey": self.api_key
        }
        response = requests.get(url, params=params, timeout=REQUEST_TIMEOUT)
        return response.json()

    def get_splits(self, ticker, limit=10):
        """Get stock split history for a ticker"""
        url = f"{BASE_URL}/v3/reference/splits"
        params = {
            "ticker": ticker,
            "limit": limit,
            "order": "desc",
            "apiKey": self.api_key
        }
        response = requests.get(url, params=params, timeout=REQUEST_TIMEOUT)
        return response.json()

    def get_market_status(self):
        """Get the current market status"""
        url = f"{BASE_URL}/v1/marketstatus/now"
        params = {"apiKey": self.api_key}
        response = requests.get(url, params=params, timeout=REQUEST_TIMEOUT)
        return response.json()
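Each method above follows the same shape: build a path, attach `apiKey` to the query params, and return the parsed JSON. The aggregates URL is the only one with positional path segments, so it is worth sketching on its own (no network call; `"demo"` is a placeholder key):

```python
BASE_URL = "https://api.polygon.io"

def aggregates_request(ticker, timespan="day", from_date=None, to_date=None, api_key="demo"):
    # Mirrors get_aggregates: 1-unit bars per `timespan` over [from_date, to_date].
    url = f"{BASE_URL}/v2/aggs/ticker/{ticker}/range/1/{timespan}/{from_date}/{to_date}"
    params = {"adjusted": "true", "sort": "asc", "apiKey": api_key}
    return url, params

url, params = aggregates_request("AAPL", "day", "2024-01-01", "2024-01-31")
print(url)
# → https://api.polygon.io/v2/aggs/ticker/AAPL/range/1/day/2024-01-01/2024-01-31
```

Note that if `from_date`/`to_date` are left as `None`, the literal string `None` ends up in the path, so callers must always pass both dates.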
be/rag_pipeline.py
ADDED
@@ -0,0 +1,450 @@
from google import genai as genai_client
from google.genai import types as genai_types
import faiss
import numpy as np
import json
import os
import traceback
from pathlib import Path
from config import (
    GEMINI_API_KEY,
    FAISS_INDEX_PATH,
    EMBEDDING_MODEL,
    RAG_TOP_K
)


class EmbeddingGenerator:
    """Generates embeddings using Google's Gemini embedding API (FREE)"""

    def __init__(self, api_key=None, model=EMBEDDING_MODEL):
        self.client = genai_client.Client(api_key=api_key or GEMINI_API_KEY)
        self.model = f"models/{model}"

    def generate_embedding(self, text):
        """
        Generate an embedding vector for document text.

        Args:
            text: Text to embed

        Returns:
            List of floats representing the embedding vector, or None on error
        """
        try:
            # Truncate very long inputs before sending to the embedding API
            if len(text) > 25000:
                text = text[:25000]

            result = self.client.models.embed_content(
                model=self.model,
                contents=text,
                config=genai_types.EmbedContentConfig(task_type="RETRIEVAL_DOCUMENT"),
            )
            return result.embeddings[0].values
        except Exception as e:
            print(f"Error generating embedding: {e}")
            return None

    def generate_query_embedding(self, text):
        """
        Generate an embedding vector for a query (uses the RETRIEVAL_QUERY task type).

        Args:
            text: Query text to embed

        Returns:
            List of floats representing the embedding vector, or None on error
        """
        try:
            result = self.client.models.embed_content(
                model=self.model,
                contents=text,
                config=genai_types.EmbedContentConfig(task_type="RETRIEVAL_QUERY"),
            )
            return result.embeddings[0].values
        except Exception as e:
            print(f"Error generating query embedding: {e}")
            return None


class VectorStore:
    """Manages the FAISS vector database for RAG."""

    def __init__(self, index_path=None):
        self.index_path = Path(index_path or FAISS_INDEX_PATH)
        self.index_file = self.index_path / "index.faiss"
        self.metadata_file = self.index_path / "metadata.json"
        self.doc_ids_file = self.index_path / "doc_ids.json"

        self.dimension = 3072  # Google gemini-embedding-001
        self.index = None
        self.metadata = {}  # {internal_id: metadata_dict}
        self.doc_id_to_index = {}  # {doc_id: internal_id}
        self.next_id = 0  # Counter for internal IDs

        self._initialize_index()

    def _initialize_index(self):
        """Initialize or load the FAISS index from disk."""
        try:
            # Create the directory if it doesn't exist
            self.index_path.mkdir(parents=True, exist_ok=True)

            # Check whether index files exist
            if self.index_file.exists():
                try:
                    # Load the existing index
                    self.index = faiss.read_index(str(self.index_file))

                    # Check that the dimension matches what we expect
                    if self.index.d != self.dimension:
                        print(f"Warning: Index dimension {self.index.d} != expected {self.dimension}, rebuilding index")
                        raise ValueError("Dimension mismatch")

                    # Load metadata
                    if self.metadata_file.exists():
                        with open(self.metadata_file, 'r') as f:
                            # Convert string keys back to int
                            loaded = json.load(f)
                            self.metadata = {int(k): v for k, v in loaded.items()}

                    # Load the doc_id mapping
                    if self.doc_ids_file.exists():
                        with open(self.doc_ids_file, 'r') as f:
                            self.doc_id_to_index = json.load(f)

                    # Set next_id to the max existing ID + 1
                    if self.metadata:
                        self.next_id = max(self.metadata.keys()) + 1

                    print(f"Loaded FAISS index from {self.index_path} with {self.index.ntotal} vectors")
                except Exception as e:
                    print(f"Warning: Corrupted index file, creating fresh index: {e}")
                    # Delete corrupted files
                    if self.index_file.exists():
                        self.index_file.unlink()
                    if self.metadata_file.exists():
                        self.metadata_file.unlink()
                    if self.doc_ids_file.exists():
                        self.doc_ids_file.unlink()
                    # Create a fresh index
                    self.index = faiss.IndexFlatIP(self.dimension)
                    self.metadata = {}
                    self.doc_id_to_index = {}
                    self.next_id = 0
            else:
                # Create a new index.
                # IndexFlatIP gives cosine similarity when vectors are L2-normalized.
                self.index = faiss.IndexFlatIP(self.dimension)
                self.metadata = {}
                self.doc_id_to_index = {}
                self.next_id = 0
                print(f"Created new FAISS index at {self.index_path}")

        except Exception as e:
            print(f"Error initializing FAISS index: {e}")
            raise

    def upsert_document(self, doc_id, embedding, metadata, namespace="news"):
        """
        Store a document embedding with metadata in FAISS.

        Args:
            doc_id: Unique document identifier
            embedding: Embedding vector (3072 dimensions)
            metadata: Dictionary of metadata
            namespace: Namespace for organization (stored in the doc_id prefix)

        Returns:
            True if successful, False otherwise
        """
        try:
            # Add the namespace to the doc_id for organization
            full_doc_id = f"{namespace}:{doc_id}"

            # Check whether the document already exists (update case)
            if full_doc_id in self.doc_id_to_index:
                # For FAISS, skip true updates and just log
                # (alternative: remove old and add new, but that requires an index rebuild)
                print(f"Document {full_doc_id} already exists, skipping update")
                return True

            # Convert to a numpy array and normalize for cosine similarity
            vector = np.array([embedding], dtype=np.float32)
            faiss.normalize_L2(vector)  # L2 normalize for IndexFlatIP

            # Add to the FAISS index
            self.index.add(vector)

            # Store metadata
            internal_id = self.next_id
            self.metadata[internal_id] = metadata.copy()
            self.metadata[internal_id]['doc_id'] = full_doc_id  # Store doc_id in metadata

            # Update the doc_id mapping
            self.doc_id_to_index[full_doc_id] = internal_id

            # Increment the counter
            self.next_id += 1

            return True

        except Exception as e:
            print(f"Error upserting document {doc_id}: {e}")
            traceback.print_exc()
            return False

    def search(self, query_embedding, ticker=None, doc_type=None, top_k=None, namespace="news"):
        """
        Search for similar documents.

        Args:
            query_embedding: Query embedding vector
            ticker: Filter by ticker symbol
            doc_type: Filter by document type
            top_k: Number of results to return
            namespace: Namespace to search

        Returns:
            List of matching documents with metadata (Pinecone-compatible format)
        """
        try:
            if self.index.ntotal == 0:
                return []

            # Normalize the query vector for cosine similarity
            query_vector = np.array([query_embedding], dtype=np.float32)
            faiss.normalize_L2(query_vector)

            # Fetch 5x more results than requested to allow for filtering
            k = top_k or RAG_TOP_K
            search_k = min(k * 5, self.index.ntotal)

            # Search the FAISS index
            distances, indices = self.index.search(query_vector, search_k)

            # Build the results list with filtering
            matches = []
            for dist, idx in zip(distances[0], indices[0]):
                if idx == -1:  # FAISS returns -1 for invalid indices
                    continue

                if idx not in self.metadata:
                    continue

                meta = self.metadata[idx].copy()
                doc_id = meta.get('doc_id', '')

                # Apply the namespace filter
                if not doc_id.startswith(f"{namespace}:"):
                    continue

                # Apply the ticker filter
                if ticker and meta.get('ticker') != ticker:
                    continue

                # Apply the doc_type filter
                if doc_type and meta.get('type') != doc_type:
                    continue

                # Remove doc_id from metadata (stored separately)
                meta.pop('doc_id', None)

                # Create a Pinecone-compatible match object
                match = type('Match', (), {
                    'id': doc_id.replace(f"{namespace}:", ""),  # Remove the namespace prefix
                    'score': float(dist),  # Cosine similarity score
                    'metadata': meta
                })()

                matches.append(match)

                # Stop once we have enough matches
                if len(matches) >= k:
                    break

            return matches

        except Exception as e:
            print(f"Error searching FAISS: {e}")
            return []

    def document_exists(self, doc_id, namespace="news"):
        """
        Check whether a document already exists in the index.

        Args:
            doc_id: Document identifier
            namespace: Namespace

        Returns:
            Boolean indicating whether the document exists
        """
        try:
            full_doc_id = f"{namespace}:{doc_id}"
            return full_doc_id in self.doc_id_to_index
        except Exception as e:
            print(f"Error checking document existence: {e}")
            return False

    def delete_by_ticker(self, ticker, namespace="news"):
        """
        Delete all documents for a ticker.
        Note: FAISS doesn't support in-place deletion, so the index is rebuilt.

        Args:
            ticker: Ticker symbol
            namespace: Namespace
        """
        try:
            # Find the indices to keep
            indices_to_keep = []
            metadata_to_keep = {}
            doc_ids_to_keep = {}
            new_id = 0

            for internal_id, meta in self.metadata.items():
                doc_id = meta.get('doc_id', '')

                # Keep documents from other namespaces
                if not doc_id.startswith(f"{namespace}:"):
                    indices_to_keep.append(internal_id)
                    metadata_to_keep[new_id] = meta.copy()
                    doc_ids_to_keep[doc_id] = new_id
                    new_id += 1
                    continue

                # Drop documents matching the ticker
                if meta.get('ticker') == ticker:
                    continue

                # Keep this document
                indices_to_keep.append(internal_id)
                metadata_to_keep[new_id] = meta.copy()
                doc_ids_to_keep[doc_id] = new_id
                new_id += 1

            if len(indices_to_keep) == len(self.metadata):
                print(f"No documents found for ticker {ticker}")
                return

            # Rebuild the index with the remaining vectors
            new_index = faiss.IndexFlatIP(self.dimension)

            # Extract and re-add vectors
            for old_id in indices_to_keep:
                # Get the vector from the old index
                vector = self.index.reconstruct(old_id)
                # Reshape and add to the new index
                vector = vector.reshape(1, -1)
                new_index.add(vector)

            # Compute the count before the old metadata is replaced
            deleted_count = len(self.metadata) - len(metadata_to_keep)

            # Replace the old index and metadata
            self.index = new_index
            self.metadata = metadata_to_keep
            self.doc_id_to_index = doc_ids_to_keep
            self.next_id = new_id

            print(f"Deleted {deleted_count} documents for {ticker}")

        except Exception as e:
            print(f"Error deleting documents for {ticker}: {e}")

    def save(self):
        """
        Manually save the index and metadata to disk.
        Call this after batch operations or on graceful shutdown.
        """
        # Define temp paths up front so cleanup works even on an early failure
        temp_index = str(self.index_file) + ".tmp"
        temp_metadata = str(self.metadata_file) + ".tmp"
        temp_docids = str(self.doc_ids_file) + ".tmp"

        try:
            # Ensure the directory exists
            self.index_path.mkdir(parents=True, exist_ok=True)

            # Write to temporary files first for atomic saves
            faiss.write_index(self.index, temp_index)

            # Save metadata (convert int keys to strings for JSON)
            with open(temp_metadata, 'w') as f:
                json.dump({str(k): v for k, v in self.metadata.items()}, f, indent=2)

            # Save the doc_id mapping
            with open(temp_docids, 'w') as f:
                json.dump(self.doc_id_to_index, f, indent=2)

            # Atomic rename (overwrite the old files)
            os.replace(temp_index, str(self.index_file))
            os.replace(temp_metadata, str(self.metadata_file))
            os.replace(temp_docids, str(self.doc_ids_file))

            print(f"Saved FAISS index with {self.index.ntotal} vectors to {self.index_path}")
            return True

        except Exception as e:
            print(f"Error saving FAISS index: {e}")
            # Clean up temp files
            for temp_file in [temp_index, temp_metadata, temp_docids]:
                if os.path.exists(temp_file):
                    os.remove(temp_file)
            return False

    def get_stats(self):
        """Get index statistics."""
        return {
            "total_vectors": self.index.ntotal,
            "dimension": self.dimension,
            "total_metadata": len(self.metadata),
            "index_type": "IndexFlatIP",
            "metric": "cosine"
        }


class ContextRetriever:
|
| 408 |
+
"""High-level interface for retrieving relevant context"""
|
| 409 |
+
|
| 410 |
+
def __init__(self, vector_store=None):
|
| 411 |
+
self.embedding_gen = EmbeddingGenerator()
|
| 412 |
+
self.vector_store = vector_store or VectorStore()
|
| 413 |
+
|
| 414 |
+
def retrieve_context(self, query, ticker, doc_type=None, top_k=None):
|
| 415 |
+
"""
|
| 416 |
+
Retrieve relevant documents for a query
|
| 417 |
+
|
| 418 |
+
Args:
|
| 419 |
+
query: User query text
|
| 420 |
+
ticker: Ticker symbol to filter by
|
| 421 |
+
doc_type: Optional document type filter
|
| 422 |
+
top_k: Number of results to return
|
| 423 |
+
|
| 424 |
+
Returns:
|
| 425 |
+
List of relevant documents with metadata
|
| 426 |
+
"""
|
| 427 |
+
# Generate query embedding (use query-specific embedding for better retrieval)
|
| 428 |
+
query_embedding = self.embedding_gen.generate_query_embedding(query)
|
| 429 |
+
|
| 430 |
+
if not query_embedding:
|
| 431 |
+
return []
|
| 432 |
+
|
| 433 |
+
# Search vector store
|
| 434 |
+
matches = self.vector_store.search(
|
| 435 |
+
query_embedding=query_embedding,
|
| 436 |
+
ticker=ticker,
|
| 437 |
+
doc_type=doc_type,
|
| 438 |
+
top_k=top_k
|
| 439 |
+
)
|
| 440 |
+
|
| 441 |
+
# Format results
|
| 442 |
+
contexts = []
|
| 443 |
+
for match in matches:
|
| 444 |
+
contexts.append({
|
| 445 |
+
'score': match.score,
|
| 446 |
+
'metadata': match.metadata,
|
| 447 |
+
'id': match.id
|
| 448 |
+
})
|
| 449 |
+
|
| 450 |
+
return contexts
|
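The temp-file-then-rename pattern used by `save()` can be sketched on its own. This is a minimal standalone version of the idea (the helper name `atomic_json_dump` is illustrative, not part of the repo): write to a `.tmp` sibling, then `os.replace` it over the target, so a crash mid-write never leaves a half-written file visible.

```python
import json
import os


def atomic_json_dump(obj, path):
    """Write JSON to `path` atomically: dump to a .tmp file, then rename.

    os.replace is atomic for same-filesystem paths, so readers either see
    the old complete file or the new complete file, never a partial one.
    """
    tmp = path + ".tmp"
    try:
        with open(tmp, "w") as f:
            json.dump(obj, f, indent=2)
        os.replace(tmp, path)  # overwrite old file in one step
        return True
    except Exception:
        # On failure, remove the temp file; the old file is untouched
        if os.path.exists(tmp):
            os.remove(tmp)
        return False
```

The same pattern applies to the FAISS index file itself, since `faiss.write_index` can also be pointed at the temp path first.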
be/requirements.txt
ADDED
@@ -0,0 +1,19 @@
Flask==3.0.0
gunicorn==21.2.0
Flask-CORS==4.0.0
requests==2.31.0
python-dotenv==1.0.0

# Chatbot (using Google Gemini - FREE)
google-genai>=1.0.0
faiss-cpu==1.7.4
numpy==1.24.3
beautifulsoup4==4.12.3
lxml==5.1.0
tenacity==8.2.3

# Sentiment Analysis
transformers==4.36.0
torch==2.1.0
praw==7.7.1
cloudscraper==1.2.71
be/scraper.py
ADDED
@@ -0,0 +1,134 @@
import requests
from bs4 import BeautifulSoup
import re
import json


class ArticleScraper:
    """Web scraper for extracting full article content from news URLs"""

    def __init__(self):
        self.session = requests.Session()
        self.session.headers.update({
            'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/91.0.4472.124 Safari/537.36'
        })

    def scrape_article(self, url, timeout=10):
        """
        Scrape full article content from a news URL

        Args:
            url: Article URL to scrape
            timeout: Request timeout in seconds

        Returns:
            Cleaned article text or None if scraping fails
        """
        try:
            response = self.session.get(url, timeout=timeout)
            response.raise_for_status()

            soup = BeautifulSoup(response.content, 'lxml')

            # Try multiple extraction methods in order of reliability
            article_content = (
                self._extract_by_schema(soup) or
                self._extract_by_selector(soup, 'article') or
                self._extract_by_selector(soup, '.article-body') or
                self._extract_by_selector(soup, '.article-content') or
                self._extract_by_selector(soup, '#article-content') or
                self._extract_by_selector(soup, '.story-body') or
                self._extract_by_selector(soup, '.entry-content') or
                self._extract_paragraphs(soup)
            )

            if article_content:
                return self._clean_text(article_content)
            else:
                print(f"Could not extract content from {url}")
                return None

        except requests.exceptions.Timeout:
            print(f"Timeout scraping {url}")
            return None
        except requests.exceptions.RequestException as e:
            print(f"Request error scraping {url}: {e}")
            return None
        except Exception as e:
            print(f"Unexpected error scraping {url}: {e}")
            return None

    def _extract_by_selector(self, soup, selector):
        """Extract text from a CSS selector"""
        element = soup.select_one(selector)
        if element:
            paragraphs = element.find_all('p')
            if paragraphs:
                return ' '.join(p.get_text() for p in paragraphs)
        return None

    def _extract_by_schema(self, soup):
        """Extract article body from JSON-LD schema.org metadata"""
        script_tags = soup.find_all('script', type='application/ld+json')

        for script_tag in script_tags:
            try:
                data = json.loads(script_tag.string)

                # Handle both single objects and arrays
                items = data if isinstance(data, list) else [data]
                for item in items:
                    body = self._extract_article_body(item)
                    if body:
                        return body
            except (json.JSONDecodeError, AttributeError):
                continue

        return None

    def _extract_article_body(self, data):
        """Extract articleBody from JSON-LD data"""
        if isinstance(data, dict):
            if data.get('@type') in ['Article', 'NewsArticle', 'BlogPosting']:
                return data.get('articleBody')
        return None

    def _extract_paragraphs(self, soup):
        """Fallback: Extract all paragraph tags from body"""
        # Remove script, style, nav, footer, and header elements
        for element in soup(['script', 'style', 'nav', 'footer', 'header', 'aside']):
            element.decompose()

        # Find all paragraphs
        paragraphs = soup.find_all('p')

        if len(paragraphs) >= 3:  # Only use if we found a reasonable number of paragraphs
            text = ' '.join(p.get_text() for p in paragraphs)
            # Only return if we got substantial content
            if len(text) > 200:
                return text

        return None

    def _clean_text(self, text):
        """Clean and normalize extracted text"""
        if not text:
            return None

        # Remove extra whitespace
        text = re.sub(r'\s+', ' ', text)

        # Remove common cruft
        text = re.sub(r'(Advertisement|ADVERTISEMENT)', '', text)
        text = re.sub(r'(Read more:.*?\.)', '', text)

        # Strip leading/trailing whitespace
        text = text.strip()

        # Only return if we have substantial content
        if len(text) > 100:
            return text

        return None
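The normalization done by `_clean_text` does not depend on the scraper itself and can be exercised standalone. A minimal sketch using the same regexes and the same length threshold (the function name and `min_length` parameter are illustrative):

```python
import re


def clean_article_text(text, min_length=100):
    """Normalize scraped article text; return None when too short to be useful."""
    if not text:
        return None
    # Collapse runs of whitespace into single spaces
    text = re.sub(r'\s+', ' ', text)
    # Strip common cruft: ad markers and "Read more: ..." teasers
    text = re.sub(r'(Advertisement|ADVERTISEMENT)', '', text)
    text = re.sub(r'(Read more:.*?\.)', '', text)
    text = text.strip()
    # Reject fragments that are unlikely to be a real article body
    return text if len(text) > min_length else None
```

Note that because whitespace is collapsed before the cruft regexes run, removing a marker can leave a doubled space; the source code has the same ordering.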
be/sentiment_analyzer.py
ADDED
@@ -0,0 +1,212 @@
"""
FinBERT-based sentiment analyzer for financial text.
Uses ProsusAI/finbert model for classifying text as positive, negative, or neutral.
"""

import warnings
# Suppress huggingface_hub deprecation warning about resume_download
warnings.filterwarnings("ignore", message=".*resume_download.*", category=FutureWarning)

import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification
from typing import List, Dict, Optional
import logging

logger = logging.getLogger(__name__)


class SentimentAnalyzer:
    """
    Financial sentiment analyzer using FinBERT.

    The model is lazy-loaded on first use to avoid slow startup times.
    Supports both single text and batch analysis for efficiency.
    """

    MODEL_NAME = "ProsusAI/finbert"
    LABELS = ["negative", "neutral", "positive"]

    def __init__(self):
        self._model: Optional[AutoModelForSequenceClassification] = None
        self._tokenizer: Optional[AutoTokenizer] = None
        self._device = "cuda" if torch.cuda.is_available() else "cpu"

    def _load_model(self) -> None:
        """Lazy load the FinBERT model on first use."""
        if self._model is not None:
            return

        logger.info(f"Loading FinBERT model ({self.MODEL_NAME})...")
        try:
            self._tokenizer = AutoTokenizer.from_pretrained(self.MODEL_NAME)
            self._model = AutoModelForSequenceClassification.from_pretrained(self.MODEL_NAME)
            self._model.to(self._device)
            self._model.eval()
            logger.info(f"FinBERT model loaded successfully on {self._device}")
        except Exception as e:
            logger.error(f"Failed to load FinBERT model: {e}")
            raise

    def analyze(self, text: str) -> Dict:
        """
        Analyze sentiment of a single text.

        Args:
            text: The financial text to analyze

        Returns:
            dict with keys:
                - label: "positive", "neutral", or "negative"
                - score: confidence score (0-1)
                - scores: dict of all label scores
        """
        if not text or not text.strip():
            return {
                "label": "neutral",
                "score": 1.0,
                "scores": {"negative": 0.0, "neutral": 1.0, "positive": 0.0}
            }

        self._load_model()

        try:
            inputs = self._tokenizer(
                text,
                return_tensors="pt",
                truncation=True,
                max_length=512,
                padding=True
            )
            inputs = {k: v.to(self._device) for k, v in inputs.items()}

            with torch.no_grad():
                outputs = self._model(**inputs)
                probs = torch.nn.functional.softmax(outputs.logits, dim=-1)

            probs = probs.cpu()
            scores = {label: float(prob) for label, prob in zip(self.LABELS, probs[0])}
            predicted_idx = probs.argmax().item()

            return {
                "label": self.LABELS[predicted_idx],
                "score": float(probs[0][predicted_idx]),
                "scores": scores
            }

        except Exception as e:
            logger.error(f"Sentiment analysis failed: {e}")
            return {
                "label": "neutral",
                "score": 0.5,
                "scores": {"negative": 0.0, "neutral": 1.0, "positive": 0.0}
            }

    def analyze_batch(self, texts: List[str], batch_size: int = 16) -> List[Dict]:
        """
        Analyze sentiment of multiple texts efficiently.

        Args:
            texts: List of texts to analyze
            batch_size: Number of texts to process at once

        Returns:
            List of sentiment dicts (same format as analyze())
        """
        if not texts:
            return []

        self._load_model()

        results = []

        for i in range(0, len(texts), batch_size):
            batch_texts = texts[i:i + batch_size]

            # Filter out empty texts, keeping track of indices
            valid_indices = []
            valid_texts = []
            for j, text in enumerate(batch_texts):
                if text and text.strip():
                    valid_indices.append(j)
                    valid_texts.append(text)

            # Initialize results for this batch with neutral defaults
            batch_results = [{
                "label": "neutral",
                "score": 1.0,
                "scores": {"negative": 0.0, "neutral": 1.0, "positive": 0.0}
            } for _ in batch_texts]

            if not valid_texts:
                results.extend(batch_results)
                continue

            try:
                inputs = self._tokenizer(
                    valid_texts,
                    return_tensors="pt",
                    truncation=True,
                    max_length=512,
                    padding=True
                )
                inputs = {k: v.to(self._device) for k, v in inputs.items()}

                with torch.no_grad():
                    outputs = self._model(**inputs)
                    probs = torch.nn.functional.softmax(outputs.logits, dim=-1)

                probs = probs.cpu()

                for valid_idx, prob in zip(valid_indices, probs):
                    scores = {label: float(p) for label, p in zip(self.LABELS, prob)}
                    predicted_idx = prob.argmax().item()
                    batch_results[valid_idx] = {
                        "label": self.LABELS[predicted_idx],
                        "score": float(prob[predicted_idx]),
                        "scores": scores
                    }

            except Exception as e:
                logger.error(f"Batch sentiment analysis failed: {e}")

            results.extend(batch_results)

        return results

    def convert_to_aggregate_score(self, sentiment: Dict) -> float:
        """
        Convert sentiment dict to a single score from -1 (bearish) to +1 (bullish).

        Args:
            sentiment: Sentiment dict from analyze()

        Returns:
            float: Score from -1 to +1
        """
        scores = sentiment.get("scores", {})
        positive = scores.get("positive", 0)
        negative = scores.get("negative", 0)
        return positive - negative

    def unload_model(self) -> None:
        """Unload the model to free memory."""
        if self._model is not None:
            del self._model
            del self._tokenizer
            self._model = None
            self._tokenizer = None
            if torch.cuda.is_available():
                torch.cuda.empty_cache()
            logger.info("FinBERT model unloaded")


# Singleton instance for reuse
_analyzer_instance: Optional[SentimentAnalyzer] = None


def get_sentiment_analyzer() -> SentimentAnalyzer:
    """Get or create the singleton sentiment analyzer instance."""
    global _analyzer_instance
    if _analyzer_instance is None:
        _analyzer_instance = SentimentAnalyzer()
    return _analyzer_instance
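The `convert_to_aggregate_score` mapping is simple enough to check in isolation: it collapses FinBERT's three-way probability distribution into a single -1 (bearish) to +1 (bullish) score. A standalone sketch of the same arithmetic (the free-function name is illustrative):

```python
def aggregate_score(scores):
    """Map {negative, neutral, positive} probabilities to a -1..+1 score.

    Because the three probabilities sum to 1, neutral mass bounds the
    magnitude: a post that is 90% neutral contributes at most +/-0.1
    regardless of how the remaining mass splits.
    """
    return scores.get("positive", 0.0) - scores.get("negative", 0.0)
```

Missing keys default to 0, so an empty `scores` dict maps to a 0.0 (neutral) contribution, matching the defensive `.get` usage in the source.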
be/sentiment_routes.py
ADDED
@@ -0,0 +1,211 @@
"""
Flask routes for sentiment analysis API.
"""

from flask import Blueprint, request, jsonify
import logging
import traceback

from sentiment_service import get_sentiment_service

logger = logging.getLogger(__name__)

sentiment_bp = Blueprint('sentiment', __name__, url_prefix='/api/sentiment')


@sentiment_bp.route('/analyze', methods=['POST'])
def analyze_sentiment():
    """
    Analyze social media sentiment for a ticker.

    Request body:
        { "ticker": "AAPL" }

    Response:
        {
            "aggregate": {
                "score": 0.65,
                "label": "bullish",
                "confidence": 0.78,
                "post_count": 47,
                "sources": { "stocktwits": 30, "reddit": 12, "twitter": 5 }
            },
            "posts": [...],
            "scraped": 50,
            "embedded": 47,
            "failed": 3
        }
    """
    try:
        data = request.get_json()
        if not data:
            return jsonify({"error": "Request body required"}), 400

        ticker = data.get('ticker', '').strip().upper()
        if not ticker:
            return jsonify({"error": "Ticker symbol required"}), 400

        force_refresh = data.get('force_refresh', False)

        logger.info(f"Sentiment analysis request for {ticker} (force_refresh={force_refresh})")

        service = get_sentiment_service()
        result = service.analyze_ticker(ticker, force_refresh=force_refresh)

        return jsonify(result)

    except Exception as e:
        logger.error(f"Error in sentiment analysis: {e}")
        traceback.print_exc()
        return jsonify({"error": str(e)}), 500


@sentiment_bp.route('/summary/<ticker>', methods=['GET'])
def get_sentiment_summary(ticker: str):
    """
    Get cached sentiment summary for a ticker.

    Does not re-scrape - uses existing data from FAISS.

    Response:
        {
            "ticker": "AAPL",
            "aggregate_score": 0.65,
            "label": "bullish",
            "confidence": 0.78,
            "post_count": 47,
            "last_updated": "2026-01-18T12:00:00Z"
        }
    """
    try:
        ticker = ticker.strip().upper()
        if not ticker:
            return jsonify({"error": "Ticker symbol required"}), 400

        service = get_sentiment_service()
        summary = service.get_summary(ticker)

        return jsonify(summary)

    except Exception as e:
        logger.error(f"Error getting sentiment summary: {e}")
        traceback.print_exc()
        return jsonify({"error": str(e)}), 500


@sentiment_bp.route('/posts/<ticker>', methods=['GET'])
def get_sentiment_posts(ticker: str):
    """
    Get sentiment posts for a ticker with filtering.

    Query params:
        platform: Filter by platform (stocktwits, reddit, twitter)
        sentiment: Filter by sentiment (positive, negative, neutral)
        limit: Max posts to return (default 50)
        offset: Pagination offset (default 0)

    Response:
        {
            "posts": [...],
            "total": 47,
            "filters": { "platform": "all", "sentiment": "all" }
        }
    """
    try:
        ticker = ticker.strip().upper()
        if not ticker:
            return jsonify({"error": "Ticker symbol required"}), 400

        # Parse query parameters
        platform = request.args.get('platform', 'all').lower()
        sentiment = request.args.get('sentiment', 'all').lower()
        limit = min(int(request.args.get('limit', 50)), 100)
        offset = int(request.args.get('offset', 0))

        service = get_sentiment_service()

        # Retrieve posts from FAISS
        contexts = service.retrieve_sentiment_context(
            query=f"{ticker} stock social media",
            ticker=ticker,
            top_k=limit + offset + 50  # Fetch extra for filtering
        )

        # Apply filters
        filtered_posts = []
        for ctx in contexts:
            meta = ctx['metadata']

            # Platform filter
            if platform != 'all' and meta.get('platform') != platform:
                continue

            # Sentiment filter
            if sentiment != 'all' and meta.get('sentiment_label') != sentiment:
                continue

            filtered_posts.append({
                "id": ctx['id'],
                "platform": meta.get('platform', 'unknown'),
                "content": meta.get('full_content', meta.get('content', ''))[:500],
                "author": meta.get('author', ''),
                "timestamp": meta.get('timestamp', ''),
                "sentiment": {
                    "label": meta.get('sentiment_label', 'neutral'),
                    "score": meta.get('sentiment_score', 0.5)
                },
                "engagement": {
                    "likes": meta.get('likes', 0),
                    "comments": meta.get('comments', 0),
                    "score": meta.get('engagement_score', 0)
                },
                "url": meta.get('url', '')
            })

        # Apply pagination
        total = len(filtered_posts)
        paginated = filtered_posts[offset:offset + limit]

        return jsonify({
            "posts": paginated,
            "total": total,
            "limit": limit,
            "offset": offset,
            "filters": {
                "platform": platform,
                "sentiment": sentiment
            }
        })

    except Exception as e:
        logger.error(f"Error getting sentiment posts: {e}")
        traceback.print_exc()
        return jsonify({"error": str(e)}), 500


@sentiment_bp.route('/health', methods=['GET'])
def sentiment_health():
    """Health check for sentiment service."""
    try:
        service = get_sentiment_service()

        # Check if sentiment analyzer can be accessed
        analyzer = service.sentiment_analyzer

        return jsonify({
            "status": "healthy",
            "model": "ProsusAI/finbert",
            "model_loaded": analyzer._model is not None,
            "platforms": {
                "stocktwits": True,
                "reddit": bool(service.aggregator.scrapers["reddit"].enabled),
                "twitter": bool(service.aggregator.scrapers["twitter"].enabled)
            }
        })

    except Exception as e:
        logger.error(f"Sentiment health check failed: {e}")
        return jsonify({
            "status": "unhealthy",
            "error": str(e)
        }), 500
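The filter-then-paginate flow inside `/posts/<ticker>` is the only non-trivial logic in the route and can be expressed as two pure functions, decoupled from Flask and FAISS. A sketch under those assumptions (the function names are illustrative, and the posts here are flat dicts rather than the route's `ctx['metadata']` shape):

```python
def filter_posts(posts, platform="all", sentiment="all"):
    """Keep posts matching the platform and sentiment filters ('all' = no filter)."""
    out = []
    for p in posts:
        if platform != "all" and p.get("platform") != platform:
            continue
        if sentiment != "all" and p.get("sentiment_label") != sentiment:
            continue
        out.append(p)
    return out


def paginate(items, offset=0, limit=50):
    """Slice the filtered list for the response; total is computed before slicing."""
    return {"total": len(items), "posts": items[offset:offset + limit]}
```

This ordering explains why the route over-fetches (`top_k=limit + offset + 50`): filtering happens after retrieval, so extra candidates are needed to fill a full page once non-matching posts are dropped.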
be/sentiment_service.py
ADDED
|
@@ -0,0 +1,470 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
"""
|
| 2 |
+
Sentiment analysis service that orchestrates:
|
| 3 |
+
1. Social media scraping
|
| 4 |
+
2. FinBERT sentiment analysis
|
| 5 |
+
3. FAISS vector storage for RAG
|
| 6 |
+
4. Aggregate sentiment calculation
|
| 7 |
+
"""
|
| 8 |
+
|
| 9 |
+
import json
|
| 10 |
+
import math
|
| 11 |
+
import os
|
| 12 |
+
import logging
|
| 13 |
+
from datetime import datetime, timezone
|
| 14 |
+
from typing import Dict, List, Optional
|
| 15 |
+
from concurrent.futures import ThreadPoolExecutor, as_completed
|
| 16 |
+
|
| 17 |
+
from sentiment_analyzer import get_sentiment_analyzer
|
| 18 |
+
from social_scrapers import SocialMediaAggregator
|
| 19 |
+
from rag_pipeline import EmbeddingGenerator, VectorStore
from config import (
    REDDIT_CLIENT_ID,
    REDDIT_CLIENT_SECRET,
    REDDIT_USER_AGENT,
    TWITTER_BEARER_TOKEN
)

logger = logging.getLogger(__name__)


class SentimentService:
    """
    Main service for social media sentiment analysis.

    Coordinates scraping, sentiment analysis, embedding generation,
    and vector storage for RAG integration.
    """

    NAMESPACE = "sentiment"
    MAX_POSTS_PER_PLATFORM = 30
    MAX_WORKERS = 5
    CACHE_TTL_MINUTES = 15
    CACHE_DIR = os.path.join(os.path.dirname(os.path.abspath(__file__)), "sentiment_cache")

    # Bias correction: asymmetric thresholds to counteract bullish bias in data sources.
    # Social media (WSB, StockTwits) skews positive; FinBERT also has a slight positive bias.
    BULLISH_THRESHOLD = 0.3    # Higher bar for bullish (was 0.2)
    BEARISH_THRESHOLD = -0.15  # Lower bar for bearish (was -0.2)

    # Minimum confidence to include a post in the aggregate calculation
    MIN_CONFIDENCE_THRESHOLD = 0.6

    def __init__(self, vector_store: Optional[VectorStore] = None):
        """
        Initialize the sentiment service.

        Args:
            vector_store: Shared VectorStore instance (for consistency with the chat service)
        """
        self.vector_store = vector_store or VectorStore()
        self.embedding_gen = EmbeddingGenerator()
        self.sentiment_analyzer = get_sentiment_analyzer()

        self.aggregator = SocialMediaAggregator(
            reddit_client_id=REDDIT_CLIENT_ID,
            reddit_client_secret=REDDIT_CLIENT_SECRET,
            reddit_user_agent=REDDIT_USER_AGENT,
            twitter_bearer_token=TWITTER_BEARER_TOKEN
        )

    def _get_cached_result(self, ticker: str) -> Optional[Dict]:
        """Return the cached sentiment result if fresh, None if stale or missing."""
        cache_path = os.path.join(self.CACHE_DIR, f"{ticker}.json")
        try:
            if not os.path.exists(cache_path):
                return None
            with open(cache_path, "r") as f:
                cached = json.load(f)
            cached_at = datetime.fromisoformat(cached["cached_at"])
            age_minutes = (datetime.now(timezone.utc) - cached_at).total_seconds() / 60
            if age_minutes > self.CACHE_TTL_MINUTES:
                logger.info(f"Sentiment cache expired for {ticker} ({age_minutes:.1f} min old)")
                return None
            logger.info(f"Sentiment cache hit for {ticker} ({age_minutes:.1f} min old)")
            return cached["result"]
        except Exception as e:
            logger.warning(f"Error reading sentiment cache for {ticker}: {e}")
            return None

    def _save_result_to_cache(self, ticker: str, result: Dict) -> None:
        """Persist a sentiment result to the disk cache."""
        try:
            os.makedirs(self.CACHE_DIR, exist_ok=True)
            cache_path = os.path.join(self.CACHE_DIR, f"{ticker}.json")
            payload = {
                "cached_at": datetime.now(timezone.utc).isoformat(),
                "result": result
            }
            with open(cache_path, "w") as f:
                json.dump(payload, f)
            logger.info(f"Sentiment result cached for {ticker}")
        except Exception as e:
            logger.warning(f"Error saving sentiment cache for {ticker}: {e}")
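The two cache helpers above implement a plain TTL-on-disk pattern: each entry is wrapped with a `cached_at` timestamp on write, and reads treat both missing and stale files as misses. A minimal standalone sketch of the same idea (hypothetical names, stdlib only):

```python
import json
import os
import tempfile
from datetime import datetime, timedelta, timezone
from typing import Optional

CACHE_DIR = os.path.join(tempfile.gettempdir(), "ttl_cache_demo")
TTL_MINUTES = 15

def save(key: str, result: dict) -> None:
    # Wrap the payload with a timestamp so freshness can be checked on read
    os.makedirs(CACHE_DIR, exist_ok=True)
    payload = {"cached_at": datetime.now(timezone.utc).isoformat(), "result": result}
    with open(os.path.join(CACHE_DIR, f"{key}.json"), "w") as f:
        json.dump(payload, f)

def load(key: str) -> Optional[dict]:
    # Missing and stale entries both read as a cache miss
    path = os.path.join(CACHE_DIR, f"{key}.json")
    if not os.path.exists(path):
        return None
    with open(path) as f:
        payload = json.load(f)
    age = datetime.now(timezone.utc) - datetime.fromisoformat(payload["cached_at"])
    if age > timedelta(minutes=TTL_MINUTES):
        return None
    return payload["result"]
```

The service's version adds logging and swallows I/O errors so a corrupt cache file degrades to a miss rather than a crash.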
    def analyze_ticker(self, ticker: str, force_refresh: bool = False) -> Dict:
        """
        Full sentiment analysis pipeline for a ticker.

        1. Scrapes social media posts
        2. Analyzes sentiment with FinBERT
        3. Embeds and stores in FAISS
        4. Calculates aggregate sentiment

        Args:
            ticker: Stock ticker symbol (e.g., "AAPL")
            force_refresh: Skip the disk cache and re-scrape

        Returns:
            dict with aggregate sentiment, posts, and stats
        """
        ticker = ticker.upper()

        # Check the disk cache first (skip if force refresh)
        if not force_refresh:
            cached = self._get_cached_result(ticker)
            if cached is not None:
                return cached

        logger.info(f"Starting sentiment analysis for {ticker}")

        # Step 1: Scrape posts from all platforms
        platform_posts = self.aggregator.scrape_all(ticker, limit_per_platform=self.MAX_POSTS_PER_PLATFORM)

        all_posts = []
        for posts in platform_posts.values():
            all_posts.extend(posts)

        if not all_posts:
            logger.warning(f"No social media posts found for {ticker}")
            return {
                "aggregate": {
                    "score": 0,
                    "label": "neutral",
                    "confidence": 0,
                    "post_count": 0,
                    "sources": {"stocktwits": 0, "reddit": 0, "twitter": 0}
                },
                "posts": [],
                "scraped": 0,
                "embedded": 0,
                "failed": 0
            }

        # Step 2: Analyze sentiment for all posts
        texts = [post["content"] for post in all_posts]
        sentiments = self.sentiment_analyzer.analyze_batch(texts)

        # Attach sentiment to posts
        for post, sentiment in zip(all_posts, sentiments):
            post["sentiment"] = sentiment
            post["sentiment_label"] = sentiment["label"]
            post["sentiment_score"] = sentiment["score"]

        # Step 3: Embed and store posts in FAISS
        embedded_count = 0
        failed_count = 0

        def embed_and_store(post):
            """Embed a single post and store it in the vector DB."""
            try:
                doc_id = post["id"]

                # Skip if it already exists
                if self.vector_store.document_exists(doc_id, namespace=self.NAMESPACE):
                    return "skipped"

                # Generate embedding
                embedding = self.embedding_gen.generate_embedding(post["content"])
                if not embedding:
                    return "failed"

                # Prepare metadata for storage
                metadata = {
                    "ticker": ticker,
                    "type": "social_post",
                    "platform": post.get("platform", "unknown"),
                    "content": post["content"][:500],  # Truncate for storage
                    "content_preview": post["content"][:200],
                    "full_content": post["content"],
                    "author": post.get("author", ""),
                    "timestamp": post.get("timestamp", ""),
                    "likes": post.get("likes", 0),
                    "comments": post.get("comments", 0),
                    "engagement_score": post.get("engagement_score", 0),
                    "sentiment_label": post["sentiment_label"],
                    "sentiment_score": post["sentiment_score"],
                    "url": post.get("url", "")
                }

                # Store in FAISS
                success = self.vector_store.upsert_document(
                    doc_id=doc_id,
                    embedding=embedding,
                    metadata=metadata,
                    namespace=self.NAMESPACE
                )

                return "embedded" if success else "failed"

            except Exception as e:
                logger.error(f"Error embedding post {post.get('id', 'unknown')}: {e}")
                return "failed"

        # Process posts in parallel
        with ThreadPoolExecutor(max_workers=self.MAX_WORKERS) as executor:
            futures = {executor.submit(embed_and_store, post): post for post in all_posts}

            for future in as_completed(futures):
                result = future.result()
                if result == "embedded":
                    embedded_count += 1
                elif result == "failed":
                    failed_count += 1

        # Save the FAISS index after the batch operation
        self.vector_store.save()

        # Step 4: Calculate aggregate sentiment
        aggregate = self._calculate_aggregate_sentiment(all_posts)
        aggregate["sources"] = self.aggregator.get_source_counts(all_posts)

        # Sort posts by engagement and recency for display
        # (fall back to "" so a None timestamp cannot break the tuple comparison)
        sorted_posts = sorted(
            all_posts,
            key=lambda x: (x.get("engagement_score", 0), x.get("timestamp") or ""),
            reverse=True
        )

        # Format posts for the response
        formatted_posts = [self._format_post_for_response(p) for p in sorted_posts[:50]]

        logger.info(f"Sentiment analysis complete for {ticker}: {aggregate['label']} ({aggregate['score']:.2f})")

        result = {
            "aggregate": aggregate,
            "posts": formatted_posts,
            "scraped": len(all_posts),
            "embedded": embedded_count,
            "failed": failed_count
        }

        # Save to the disk cache
        self._save_result_to_cache(ticker, result)

        return result
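The embed-and-store step above fans work out to a thread pool and tallies outcomes as futures complete, so one slow or failing post never blocks the batch. A standalone sketch of that tally pattern (the `process` worker is a stand-in, stdlib only):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def process(item: int) -> str:
    # Stand-in worker: even items "succeed", odd items "fail"
    return "embedded" if item % 2 == 0 else "failed"

def run_batch(items: list, max_workers: int = 5) -> dict:
    # Submit everything, then count results in completion order
    counts = {"embedded": 0, "failed": 0}
    with ThreadPoolExecutor(max_workers=max_workers) as executor:
        futures = [executor.submit(process, item) for item in items]
        for future in as_completed(futures):
            counts[future.result()] += 1
    return counts
```

`as_completed` yields futures in finish order rather than submission order, which is why the service counts outcomes instead of relying on positions.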
    def get_summary(self, ticker: str) -> Dict:
        """
        Get a quick sentiment summary without re-scraping.

        Uses existing data from FAISS if available.

        Args:
            ticker: Stock ticker symbol

        Returns:
            Summary dict with aggregate score and metadata
        """
        ticker = ticker.upper()

        # Search for existing sentiment data
        query_embedding = self.embedding_gen.generate_query_embedding(f"{ticker} stock sentiment")
        if not query_embedding:
            return {"ticker": ticker, "error": "Failed to generate query embedding"}

        matches = self.vector_store.search(
            query_embedding=query_embedding,
            ticker=ticker,
            namespace=self.NAMESPACE,
            top_k=100
        )

        if not matches:
            return {
                "ticker": ticker,
                "aggregate_score": 0,
                "label": "neutral",
                "post_count": 0,
                "last_updated": None
            }

        # Reconstruct posts from matches
        posts = []
        latest_timestamp = None

        for match in matches:
            meta = match.metadata
            posts.append({
                "sentiment_label": meta.get("sentiment_label", "neutral"),
                "sentiment_score": meta.get("sentiment_score", 0.5),
                "engagement_score": meta.get("engagement_score", 0),
                "timestamp": meta.get("timestamp", "")
            })

            ts = meta.get("timestamp")
            if ts and (not latest_timestamp or ts > latest_timestamp):
                latest_timestamp = ts

        aggregate = self._calculate_aggregate_sentiment(posts)

        return {
            "ticker": ticker,
            "aggregate_score": aggregate["score"],
            "label": aggregate["label"],
            "confidence": aggregate["confidence"],
            "post_count": len(posts),
            "last_updated": latest_timestamp
        }

    def retrieve_sentiment_context(self, query: str, ticker: str, top_k: int = 5) -> List[Dict]:
        """
        Retrieve relevant sentiment posts for RAG.

        Args:
            query: User query
            ticker: Stock ticker
            top_k: Number of results

        Returns:
            List of relevant posts with sentiment data
        """
        query_embedding = self.embedding_gen.generate_query_embedding(query)
        if not query_embedding:
            return []

        matches = self.vector_store.search(
            query_embedding=query_embedding,
            ticker=ticker,
            namespace=self.NAMESPACE,
            top_k=top_k
        )

        contexts = []
        for match in matches:
            contexts.append({
                "score": match.score,
                "metadata": match.metadata,
                "id": match.id
            })

        return contexts
    def _calculate_aggregate_sentiment(self, posts: List[Dict]) -> Dict:
        """
        Calculate weighted aggregate sentiment with bias correction.

        Weighting factors:
        - Recency: posts from the last 24h weighted 2x (last 72h: 1.5x)
        - Engagement: log(2 + engagement_score), shifted so zero engagement still counts
        - Confidence: FinBERT confidence score (filtered by MIN_CONFIDENCE_THRESHOLD)

        Bias corrections:
        - Asymmetric thresholds for bullish/bearish classification
        - Confidence filtering to exclude low-confidence predictions
        - Neutral posts contribute slightly negative (-0.05) to counteract positive bias
        """
        if not posts:
            return {"score": 0, "label": "neutral", "confidence": 0, "post_count": 0}

        weighted_sum = 0
        total_weight = 0
        now = datetime.now(timezone.utc)

        # Track the sentiment distribution for logging
        distribution = {"positive": 0, "neutral": 0, "negative": 0}
        filtered_count = 0

        for post in posts:
            label = post.get("sentiment_label", "neutral")
            confidence = post.get("sentiment_score", 0.5)

            # Track the raw distribution before filtering
            distribution[label] = distribution.get(label, 0) + 1

            # Filter out low-confidence predictions
            if confidence < self.MIN_CONFIDENCE_THRESHOLD:
                filtered_count += 1
                continue

            # Convert the sentiment label to a numeric score.
            # Neutral posts get a slight negative bias (-0.05) to counteract data source bias.
            base_score = {"negative": -1, "neutral": -0.05, "positive": 1}.get(label, 0)

            # Recency weight
            recency_weight = 1.0
            timestamp = post.get("timestamp")
            if timestamp:
                try:
                    if isinstance(timestamp, str):
                        post_time = datetime.fromisoformat(timestamp.replace("Z", "+00:00"))
                    else:
                        post_time = timestamp

                    hours_old = (now - post_time).total_seconds() / 3600
                    recency_weight = 2.0 if hours_old < 24 else (1.5 if hours_old < 72 else 1.0)
                except (ValueError, TypeError):
                    pass

            # Engagement weight: shifted log so zero-engagement posts keep a nonzero weight
            engagement = post.get("engagement_score", 0)
            engagement_weight = math.log(2 + engagement)

            # Combined weight
            weight = confidence * recency_weight * engagement_weight
            weighted_sum += base_score * weight
            total_weight += weight

        # Log the sentiment distribution for debugging
        logger.info(
            f"Sentiment distribution: {distribution} | "
            f"Filtered (low confidence): {filtered_count} | "
            f"Included: {len(posts) - filtered_count}"
        )

        avg_score = weighted_sum / total_weight if total_weight > 0 else 0

        # Determine the label using asymmetric thresholds to counteract bullish bias
        if avg_score < self.BEARISH_THRESHOLD:
            label = "bearish"
        elif avg_score > self.BULLISH_THRESHOLD:
            label = "bullish"
        else:
            label = "neutral"

        included_count = len(posts) - filtered_count
        return {
            "score": round(avg_score, 3),
            "label": label,
            "confidence": round(min(1.0, total_weight / max(1, included_count) / 2), 3),
            "post_count": len(posts),
            "included_count": included_count,
            "distribution": distribution
        }
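The weighting above reduces to `score = Σ(base · w) / Σw` with `w = confidence · recency · log(2 + engagement)`, after dropping posts below the confidence floor. A trimmed standalone sketch of that core (post dicts here are hypothetical, with recency pre-computed as a multiplier):

```python
import math

# Numeric base scores per label; neutral is slightly negative as a bias correction
BASE = {"negative": -1.0, "neutral": -0.05, "positive": 1.0}

def aggregate(posts: list, min_conf: float = 0.6) -> float:
    weighted_sum = 0.0
    total_weight = 0.0
    for p in posts:
        if p["confidence"] < min_conf:
            continue  # drop low-confidence predictions entirely
        w = p["confidence"] * p["recency"] * math.log(2 + p["engagement"])
        weighted_sum += BASE[p["label"]] * w
        total_weight += w
    return round(weighted_sum / total_weight, 3) if total_weight else 0.0
```

Because every surviving post contributes `base * w` against the same `Σw`, one highly engaged recent post can legitimately outweigh many stale low-engagement ones.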
    def _format_post_for_response(self, post: Dict) -> Dict:
        """Format a post for the API response."""
        return {
            "id": post.get("id", ""),
            "platform": post.get("platform", "unknown"),
            "content": post.get("content", "")[:500],
            "author": post.get("author", ""),
            "timestamp": post.get("timestamp", ""),
            "sentiment": post.get("sentiment", {}),
            "engagement": {
                "likes": post.get("likes", 0),
                "comments": post.get("comments", 0),
                "score": post.get("engagement_score", 0)
            },
            "url": post.get("url", "")
        }


# Singleton instance
_sentiment_service: Optional[SentimentService] = None


def get_sentiment_service(vector_store: Optional[VectorStore] = None) -> SentimentService:
    """Get or create the singleton sentiment service instance."""
    global _sentiment_service
    if _sentiment_service is None:
        _sentiment_service = SentimentService(vector_store=vector_store)
    return _sentiment_service
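`get_sentiment_service` follows the usual module-level get-or-create singleton pattern: the first call constructs the instance, and arguments on later calls are silently ignored. A minimal standalone sketch with a hypothetical `Service` class:

```python
from typing import Optional

class Service:
    def __init__(self, name: str = "default"):
        self.name = name

_service: Optional[Service] = None

def get_service(name: str = "default") -> Service:
    # Lazily create on first call; later calls return the same instance,
    # so later arguments have no effect (same behavior as get_sentiment_service)
    global _service
    if _service is None:
        _service = Service(name)
    return _service
```

This is why callers that want to share a `VectorStore` must pass it on the very first `get_sentiment_service` call.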
be/social_scrapers.py
ADDED
@@ -0,0 +1,525 @@
"""
Social media scrapers for stock sentiment data.
Supports StockTwits, Reddit, and Twitter (optional).
"""

import requests
import cloudscraper
from abc import ABC, abstractmethod
from typing import List, Dict, Optional
from datetime import datetime, timezone
import logging
import hashlib

logger = logging.getLogger(__name__)


class BaseScraper(ABC):
    """Base class for social media scrapers."""

    @abstractmethod
    def scrape(self, ticker: str, limit: int = 50) -> List[Dict]:
        """
        Fetch posts for a ticker.

        Args:
            ticker: Stock ticker symbol (e.g., "AAPL")
            limit: Maximum number of posts to return

        Returns:
            List of standardized post dicts
        """
        pass

    def generate_post_id(self, platform: str, unique_id: str) -> str:
        """Generate a unique post ID."""
        return f"{platform}_{unique_id}"

    def calculate_engagement_score(self, likes: int, comments: int, retweets: int = 0) -> int:
        """Calculate the total engagement score."""
        return likes + comments + retweets


class StockTwitsScraper(BaseScraper):
    """
    StockTwits API scraper.

    - FREE, no authentication required for basic access
    - Rate limit: ~200 requests/hour
    - Endpoint: https://api.stocktwits.com/api/2/streams/symbol/{symbol}.json
    - Uses cloudscraper to bypass Cloudflare protection
    """

    BASE_URL = "https://api.stocktwits.com/api/2"
    TIMEOUT = 10

    def __init__(self):
        """Initialize with a cloudscraper session to handle Cloudflare."""
        self.scraper = cloudscraper.create_scraper()

    def scrape(self, ticker: str, limit: int = 50) -> List[Dict]:
        """Fetch posts from StockTwits for a ticker."""
        url = f"{self.BASE_URL}/streams/symbol/{ticker.upper()}.json"

        try:
            response = self.scraper.get(url, timeout=self.TIMEOUT)
            response.raise_for_status()
            data = response.json()

            if data.get("response", {}).get("status") != 200:
                logger.warning(f"StockTwits API error for {ticker}: {data}")
                return []

            posts = []
            messages = data.get("messages", [])[:limit]

            for msg in messages:
                try:
                    post = self._standardize_post(msg, ticker)
                    if post:
                        posts.append(post)
                except Exception as e:
                    logger.debug(f"Failed to parse StockTwits message: {e}")
                    continue

            logger.info(f"Scraped {len(posts)} posts from StockTwits for {ticker}")
            return posts

        except requests.exceptions.Timeout:
            logger.warning(f"StockTwits request timed out for {ticker}")
            return []
        except requests.exceptions.RequestException as e:
            logger.error(f"StockTwits scraping error for {ticker}: {e}")
            return []
        except Exception as e:
            logger.error(f"Unexpected StockTwits error for {ticker}: {e}")
            return []

    def _standardize_post(self, raw: Dict, ticker: str) -> Optional[Dict]:
        """Convert a StockTwits message to the standard format."""
        msg_id = raw.get("id")
        if not msg_id:
            return None

        body = raw.get("body", "").strip()
        if not body:
            return None

        user = raw.get("user", {})
        created_at = raw.get("created_at", "")

        # Parse timestamp; fall back to the raw string if it is not ISO 8601
        timestamp = None
        if created_at:
            try:
                timestamp = datetime.fromisoformat(created_at.replace("Z", "+00:00")).isoformat()
            except ValueError:
                timestamp = created_at

        # StockTwits has built-in sentiment
        st_sentiment = raw.get("entities", {}).get("sentiment", {})
        st_sentiment_label = st_sentiment.get("basic") if st_sentiment else None

        likes_data = raw.get("likes", {})
        likes_count = likes_data.get("total", 0) if isinstance(likes_data, dict) else 0

        return {
            "id": self.generate_post_id("stocktwits", str(msg_id)),
            "platform": "stocktwits",
            "ticker": ticker.upper(),
            "content": body,
            "author": user.get("username", "unknown"),
            "author_followers": user.get("followers", 0),
            "timestamp": timestamp,
            "likes": likes_count,
            "comments": 0,  # Not available in the basic API
            "retweets": 0,
            "engagement_score": self.calculate_engagement_score(likes_count, 0),
            "url": f"https://stocktwits.com/{user.get('username', '')}/message/{msg_id}",
            "stocktwits_sentiment": st_sentiment_label  # "Bullish" or "Bearish" if available
        }
class RedditScraper(BaseScraper):
    """
    Direct Reddit scraper using public JSON endpoints.

    - No API credentials required
    - Uses cloudscraper to bypass Cloudflare
    - Subreddits: wallstreetbets, stocks, investing, options
    """

    SUBREDDITS = ["wallstreetbets", "stocks", "investing", "options"]
    BASE_URL = "https://www.reddit.com"
    TIMEOUT = 10

    def __init__(self, client_id: str = "", client_secret: str = "", user_agent: str = "StockAssistant/1.0"):
        # Credentials are no longer needed, but keep the params for backwards compatibility
        self.scraper = cloudscraper.create_scraper()
        self.user_agent = user_agent

    def scrape(self, ticker: str, limit: int = 50) -> List[Dict]:
        """Fetch posts from Reddit for a ticker using public JSON endpoints."""
        posts = []
        seen_ids = set()
        per_subreddit = max(1, limit // len(self.SUBREDDITS))

        for subreddit_name in self.SUBREDDITS:
            if len(posts) >= limit:
                break

            # Search with the $ prefix (common in finance subs) and the plain ticker
            for query in [f"${ticker.upper()}", ticker.upper()]:
                if len(posts) >= limit:
                    break

                try:
                    url = f"{self.BASE_URL}/r/{subreddit_name}/search.json"
                    params = {
                        "q": query,
                        "limit": per_subreddit,
                        "t": "week",
                        "sort": "relevance",
                        "restrict_sr": "true"
                    }

                    response = self.scraper.get(
                        url,
                        params=params,
                        timeout=self.TIMEOUT,
                        headers={"User-Agent": self.user_agent}
                    )
                    response.raise_for_status()
                    data = response.json()

                    children = data.get("data", {}).get("children", [])

                    for child in children:
                        if len(posts) >= limit:
                            break

                        post_data = child.get("data", {})
                        post_id = post_data.get("id")

                        if post_id and post_id not in seen_ids:
                            post = self._standardize_post(post_data, ticker, subreddit_name)
                            if post:
                                posts.append(post)
                                seen_ids.add(post_id)

                except requests.exceptions.Timeout:
                    logger.warning(f"Reddit request timed out for r/{subreddit_name}")
                    continue
                except requests.exceptions.RequestException as e:
                    logger.warning(f"Reddit scraping error for r/{subreddit_name}: {e}")
                    continue
                except Exception as e:
                    logger.debug(f"Reddit search failed for {query} in r/{subreddit_name}: {e}")
                    continue

        logger.info(f"Scraped {len(posts)} posts from Reddit for {ticker}")
        return posts[:limit]

    def _standardize_post(self, post_data: Dict, ticker: str, subreddit: str) -> Optional[Dict]:
        """Convert a Reddit JSON post to the standard format."""
        try:
            post_id = post_data.get("id")
            if not post_id:
                return None

            title = post_data.get("title", "").strip()
            selftext = post_data.get("selftext", "").strip()

            # Combine title and selftext
            if selftext and selftext not in ("[removed]", "[deleted]"):
                content = f"{title}\n\n{selftext}"
            else:
                content = title

            if not content:
                return None

            # Truncate very long posts
            if len(content) > 2000:
                content = content[:2000] + "..."

            # Parse timestamp
            created_utc = post_data.get("created_utc", 0)
            timestamp = datetime.fromtimestamp(created_utc, tz=timezone.utc).isoformat()

            # Missing or deleted authors are normalized to "[deleted]"
            author = post_data.get("author") or "[deleted]"

            score = post_data.get("score", 0)
            num_comments = post_data.get("num_comments", 0)
            permalink = post_data.get("permalink", "")

            return {
                "id": self.generate_post_id("reddit", post_id),
                "platform": "reddit",
                "ticker": ticker.upper(),
                "content": content,
                "author": author,
                "author_followers": 0,
                "timestamp": timestamp,
                "likes": score,
                "comments": num_comments,
                "retweets": 0,
                "engagement_score": self.calculate_engagement_score(score, num_comments),
                "url": f"https://reddit.com{permalink}",
                "subreddit": subreddit
            }

        except Exception as e:
            logger.debug(f"Failed to parse Reddit post: {e}")
            return None
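The nested scrape loop above is a budgeted, deduplicated collect: stop once `limit` posts are gathered, and skip IDs already seen across queries and subreddits (the `$TICKER` and plain `TICKER` searches overlap heavily). A standalone sketch of that control flow with hypothetical in-memory sources:

```python
def collect(sources: dict, limit: int) -> list:
    # sources maps a subreddit name to the lists of post IDs each query returned
    collected = []
    seen = set()
    for _, id_lists in sources.items():
        if len(collected) >= limit:
            break
        for ids in id_lists:
            for post_id in ids:
                if len(collected) >= limit:
                    break
                if post_id not in seen:
                    collected.append(post_id)
                    seen.add(post_id)
    return collected[:limit]
```

The budget check is repeated at every nesting level so the scraper can bail out without issuing further HTTP requests once the quota is met.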
class TwitterScraper(BaseScraper):
    """
    Twitter/X API v2 scraper.

    NOTE: Twitter API v2 requires paid access ($100+/month for the Basic tier).
    This scraper is disabled by default unless TWITTER_BEARER_TOKEN is set.

    - Rate limit (Basic tier): 10,000 tweets/month
    - Endpoint: https://api.twitter.com/2/tweets/search/recent
    - Search window: last 7 days only (recent search)
    """

    BASE_URL = "https://api.twitter.com/2"
    TIMEOUT = 15

    def __init__(self, bearer_token: str = ""):
        self.bearer_token = bearer_token
        self.enabled = bool(bearer_token)

    def scrape(self, ticker: str, limit: int = 50) -> List[Dict]:
        """
        Fetch tweets for a ticker using Twitter API v2 recent search.

        Args:
            ticker: Stock ticker symbol (e.g., "AAPL")
            limit: Maximum number of tweets to return (max 100 per request)

        Returns:
            List of standardized post dicts
        """
        if not self.enabled:
            logger.debug("Twitter scraper disabled - no bearer token (requires $100+/month)")
            return []

        # Build the search query: cashtag ($AAPL) plus common stock-related terms,
        # excluding retweets and limiting to English
        query = f"${ticker.upper()} (stock OR shares OR trading OR buy OR sell OR price) -is:retweet lang:en"

        # Cap at 100 per request (Twitter API limit)
        max_results = min(limit, 100)

        url = f"{self.BASE_URL}/tweets/search/recent"
        headers = {
            "Authorization": f"Bearer {self.bearer_token}",
            "Content-Type": "application/json"
        }
        params = {
            "query": query,
            "max_results": max_results,
            "tweet.fields": "created_at,public_metrics,author_id,conversation_id",
            "user.fields": "username,name,public_metrics",
            "expansions": "author_id"
        }

        try:
            response = requests.get(url, headers=headers, params=params, timeout=self.TIMEOUT)

            # Handle rate limiting
            if response.status_code == 429:
                logger.warning("Twitter API rate limit reached")
                return []

            # Handle authentication errors
            if response.status_code == 401:
                logger.error("Twitter API authentication failed - check bearer token")
                return []

            if response.status_code == 403:
                logger.error("Twitter API access forbidden - may need higher tier access")
                return []

            response.raise_for_status()
            data = response.json()

            # Check for errors in the response body
            if "errors" in data and not data.get("data"):
                for error in data.get("errors", []):
                    logger.warning(f"Twitter API error: {error.get('message', 'Unknown error')}")
                return []

            tweets = data.get("data", [])
            if not tweets:
                logger.info(f"No tweets found for {ticker}")
                return []

            # Build a user lookup map from expansions
            users = {}
            includes = data.get("includes", {})
            for user in includes.get("users", []):
                users[user["id"]] = user

            # Parse tweets
            posts = []
            for tweet in tweets:
                try:
                    post = self._standardize_post(tweet, ticker, users)
                    if post:
                        posts.append(post)
                except Exception as e:
                    logger.debug(f"Failed to parse tweet: {e}")
                    continue

            logger.info(f"Scraped {len(posts)} tweets from Twitter for {ticker}")
            return posts

        except requests.exceptions.Timeout:
            logger.warning(f"Twitter request timed out for {ticker}")
            return []
        except requests.exceptions.RequestException as e:
            logger.error(f"Twitter API error for {ticker}: {e}")
            return []
        except Exception as e:
            logger.error(f"Unexpected Twitter error for {ticker}: {e}")
            return []

    def _standardize_post(self, tweet: Dict, ticker: str, users: Dict) -> Optional[Dict]:
        """
        Convert a Twitter API v2 tweet to the standard format.
| 398 |
+
Args:
|
| 399 |
+
tweet: Raw tweet data from API
|
| 400 |
+
ticker: Stock ticker symbol
|
| 401 |
+
users: User lookup map from expansions
|
| 402 |
+
|
| 403 |
+
Returns:
|
| 404 |
+
Standardized post dict or None
|
| 405 |
+
"""
|
| 406 |
+
tweet_id = tweet.get("id")
|
| 407 |
+
if not tweet_id:
|
| 408 |
+
return None
|
| 409 |
+
|
| 410 |
+
text = tweet.get("text", "").strip()
|
| 411 |
+
if not text:
|
| 412 |
+
return None
|
| 413 |
+
|
| 414 |
+
# Get author info from users map
|
| 415 |
+
author_id = tweet.get("author_id", "")
|
| 416 |
+
author_info = users.get(author_id, {})
|
| 417 |
+
username = author_info.get("username", "unknown")
|
| 418 |
+
author_metrics = author_info.get("public_metrics", {})
|
| 419 |
+
followers = author_metrics.get("followers_count", 0)
|
| 420 |
+
|
| 421 |
+
# Parse timestamp
|
| 422 |
+
created_at = tweet.get("created_at", "")
|
| 423 |
+
timestamp = None
|
| 424 |
+
if created_at:
|
| 425 |
+
try:
|
| 426 |
+
# Twitter API v2 uses ISO 8601 format
|
| 427 |
+
timestamp = datetime.fromisoformat(created_at.replace("Z", "+00:00")).isoformat()
|
| 428 |
+
except Exception:
|
| 429 |
+
timestamp = created_at
|
| 430 |
+
|
| 431 |
+
# Get engagement metrics
|
| 432 |
+
metrics = tweet.get("public_metrics", {})
|
| 433 |
+
likes = metrics.get("like_count", 0)
|
| 434 |
+
retweets = metrics.get("retweet_count", 0)
|
| 435 |
+
replies = metrics.get("reply_count", 0)
|
| 436 |
+
quotes = metrics.get("quote_count", 0)
|
| 437 |
+
|
| 438 |
+
return {
|
| 439 |
+
"id": self.generate_post_id("twitter", tweet_id),
|
| 440 |
+
"platform": "twitter",
|
| 441 |
+
"ticker": ticker.upper(),
|
| 442 |
+
"content": text,
|
| 443 |
+
"author": username,
|
| 444 |
+
"author_followers": followers,
|
| 445 |
+
"timestamp": timestamp,
|
| 446 |
+
"likes": likes,
|
| 447 |
+
"comments": replies,
|
| 448 |
+
"retweets": retweets + quotes,
|
| 449 |
+
"engagement_score": self.calculate_engagement_score(likes, replies, retweets + quotes),
|
| 450 |
+
"url": f"https://twitter.com/{username}/status/{tweet_id}"
|
| 451 |
+
}
|
| 452 |
+
|
| 453 |
+
|
| 454 |
+
class SocialMediaAggregator:
|
| 455 |
+
"""
|
| 456 |
+
Aggregates posts from all social media platforms.
|
| 457 |
+
"""
|
| 458 |
+
|
| 459 |
+
def __init__(
|
| 460 |
+
self,
|
| 461 |
+
reddit_client_id: str = "",
|
| 462 |
+
reddit_client_secret: str = "",
|
| 463 |
+
reddit_user_agent: str = "StockAssistant/1.0",
|
| 464 |
+
twitter_bearer_token: str = ""
|
| 465 |
+
):
|
| 466 |
+
self.scrapers = {
|
| 467 |
+
"stocktwits": StockTwitsScraper(),
|
| 468 |
+
"reddit": RedditScraper(reddit_client_id, reddit_client_secret, reddit_user_agent),
|
| 469 |
+
"twitter": TwitterScraper(twitter_bearer_token)
|
| 470 |
+
}
|
| 471 |
+
|
| 472 |
+
def scrape_all(self, ticker: str, limit_per_platform: int = 30) -> Dict[str, List[Dict]]:
|
| 473 |
+
"""
|
| 474 |
+
Scrape posts from all available platforms.
|
| 475 |
+
|
| 476 |
+
Args:
|
| 477 |
+
ticker: Stock ticker symbol
|
| 478 |
+
limit_per_platform: Max posts per platform
|
| 479 |
+
|
| 480 |
+
Returns:
|
| 481 |
+
Dict mapping platform name to list of posts
|
| 482 |
+
"""
|
| 483 |
+
results = {}
|
| 484 |
+
|
| 485 |
+
for platform, scraper in self.scrapers.items():
|
| 486 |
+
try:
|
| 487 |
+
posts = scraper.scrape(ticker, limit=limit_per_platform)
|
| 488 |
+
results[platform] = posts
|
| 489 |
+
except Exception as e:
|
| 490 |
+
logger.error(f"Failed to scrape {platform}: {e}")
|
| 491 |
+
results[platform] = []
|
| 492 |
+
|
| 493 |
+
return results
|
| 494 |
+
|
| 495 |
+
def scrape_all_combined(self, ticker: str, total_limit: int = 50) -> List[Dict]:
|
| 496 |
+
"""
|
| 497 |
+
Scrape from all platforms and return combined list sorted by recency.
|
| 498 |
+
|
| 499 |
+
Args:
|
| 500 |
+
ticker: Stock ticker symbol
|
| 501 |
+
total_limit: Total max posts across all platforms
|
| 502 |
+
|
| 503 |
+
Returns:
|
| 504 |
+
List of posts sorted by timestamp (newest first)
|
| 505 |
+
"""
|
| 506 |
+
per_platform = max(1, total_limit // 3) # Distribute across 3 platforms
|
| 507 |
+
results = self.scrape_all(ticker, limit_per_platform=per_platform)
|
| 508 |
+
|
| 509 |
+
all_posts = []
|
| 510 |
+
for posts in results.values():
|
| 511 |
+
all_posts.extend(posts)
|
| 512 |
+
|
| 513 |
+
# Sort by timestamp (newest first)
|
| 514 |
+
all_posts.sort(key=lambda x: x.get("timestamp", ""), reverse=True)
|
| 515 |
+
|
| 516 |
+
return all_posts[:total_limit]
|
| 517 |
+
|
| 518 |
+
def get_source_counts(self, posts: List[Dict]) -> Dict[str, int]:
|
| 519 |
+
"""Count posts by platform."""
|
| 520 |
+
counts = {"stocktwits": 0, "reddit": 0, "twitter": 0}
|
| 521 |
+
for post in posts:
|
| 522 |
+
platform = post.get("platform", "")
|
| 523 |
+
if platform in counts:
|
| 524 |
+
counts[platform] += 1
|
| 525 |
+
return counts
|
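The merge-and-sort step of `scrape_all_combined` can be exercised in isolation. The sketch below uses hypothetical mock posts (no network calls, no scraper classes); the field names match the standardized post dicts produced by the scrapers in this file, and the sort relies on ISO 8601 timestamp strings ordering chronologically when compared as plain strings.

```python
# Hypothetical mock posts, one per platform, shaped like the scrapers' output.
posts_by_platform = {
    "stocktwits": [{"platform": "stocktwits", "timestamp": "2024-01-02T09:00:00+00:00"}],
    "reddit": [{"platform": "reddit", "timestamp": "2024-01-03T12:00:00+00:00"}],
    "twitter": [{"platform": "twitter", "timestamp": "2024-01-01T18:00:00+00:00"}],
}

# Flatten the per-platform lists into one list.
all_posts = []
for posts in posts_by_platform.values():
    all_posts.extend(posts)

# ISO 8601 strings sort chronologically as plain strings, so this
# descending sort puts the newest post first.
all_posts.sort(key=lambda x: x.get("timestamp", ""), reverse=True)

print([p["platform"] for p in all_posts])  # ['reddit', 'stocktwits', 'twitter']
```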
company_tickers.json
ADDED
The diff for this file is too large to render. See raw diff
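company_tickers.json is too large to render here, but the frontend's loadTickers later flattens it with `Object.values(data)` and reads `ticker`, `title`, and `cik_str` from each record, which implies the SEC-style shape of numeric keys mapping to per-company objects. A minimal Python sketch of the same flattening, assuming that shape (the embedded JSON snippet is hypothetical sample data, not taken from the file):

```python
import json

# Hypothetical one-entry snippet in the assumed SEC company_tickers.json shape.
raw = '{"0": {"cik_str": 320193, "ticker": "AAPL", "title": "Apple Inc."}}'
data = json.loads(raw)

# Flatten the numeric-keyed mapping into a list of ticker records,
# mirroring what the frontend does with Object.values(data).map(...).
tickers = [
    {"ticker": item["ticker"], "title": item["title"], "cik": item["cik_str"]}
    for item in data.values()
]

print(tickers[0]["ticker"])  # AAPL
```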
fe/app.js
ADDED
@@ -0,0 +1,2479 @@
| 1 |
+
const API_BASE = 'http://localhost:5000/api';
|
| 2 |
+
let allTickers = [];
|
| 3 |
+
let currentTicker = '';
|
| 4 |
+
let chartInstance = null;
|
| 5 |
+
let highlightedIndex = -1;
|
| 6 |
+
let dropdownItems = [];
|
| 7 |
+
|
| 8 |
+
// Cache TTL configuration (in minutes)
|
| 9 |
+
const CACHE_TTL = {
|
| 10 |
+
STATIC: null, // Cache until page refresh
|
| 11 |
+
DAILY: 1440, // 24 hours (for EOD data)
|
| 12 |
+
MODERATE: 30, // 30 minutes
|
| 13 |
+
SHORT: 15 // 15 minutes
|
| 14 |
+
};
|
| 15 |
+
|
| 16 |
+
// Cache manager
|
| 17 |
+
const cache = {
|
| 18 |
+
data: {},
|
| 19 |
+
|
| 20 |
+
set(key, value, ttlMinutes = null) {
|
| 21 |
+
this.data[key] = {
|
| 22 |
+
value: value,
|
| 23 |
+
timestamp: Date.now(),
|
| 24 |
+
ttl: ttlMinutes ? ttlMinutes * 60 * 1000 : null
|
| 25 |
+
};
|
| 26 |
+
},
|
| 27 |
+
|
| 28 |
+
get(key) {
|
| 29 |
+
const item = this.data[key];
|
| 30 |
+
if (!item) return null;
|
| 31 |
+
|
| 32 |
+
// Check if expired
|
| 33 |
+
if (item.ttl && (Date.now() - item.timestamp > item.ttl)) {
|
| 34 |
+
delete this.data[key];
|
| 35 |
+
return null;
|
| 36 |
+
}
|
| 37 |
+
|
| 38 |
+
return item.value;
|
| 39 |
+
},
|
| 40 |
+
|
| 41 |
+
has(key) {
|
| 42 |
+
return this.get(key) !== null;
|
| 43 |
+
},
|
| 44 |
+
|
| 45 |
+
clear() {
|
| 46 |
+
this.data = {};
|
| 47 |
+
},
|
| 48 |
+
|
| 49 |
+
getStats() {
|
| 50 |
+
return {
|
| 51 |
+
entries: Object.keys(this.data).length,
|
| 52 |
+
keys: Object.keys(this.data)
|
| 53 |
+
};
|
| 54 |
+
}
|
| 55 |
+
};
|
| 56 |
+
|
| 57 |
+
// Expose cache to global scope for debugging
|
| 58 |
+
window.stockCache = cache;
|
| 59 |
+
|
| 60 |
+
// Recent/Popular Ticker Functions
|
| 61 |
+
function getRecentTickers() {
|
| 62 |
+
try {
|
| 63 |
+
const recent = localStorage.getItem('recentTickers');
|
| 64 |
+
if (recent) {
|
| 65 |
+
return JSON.parse(recent);
|
| 66 |
+
}
|
| 67 |
+
} catch (error) {
|
| 68 |
+
console.error('Error reading recent tickers:', error);
|
| 69 |
+
}
|
| 70 |
+
return [];
|
| 71 |
+
}
|
| 72 |
+
|
| 73 |
+
function saveRecentTicker(ticker, title) {
|
| 74 |
+
try {
|
| 75 |
+
let recent = getRecentTickers();
|
| 76 |
+
|
| 77 |
+
// Remove if already exists
|
| 78 |
+
recent = recent.filter(item => item.ticker !== ticker);
|
| 79 |
+
|
| 80 |
+
// Add to front
|
| 81 |
+
recent.unshift({ ticker, title });
|
| 82 |
+
|
| 83 |
+
// Keep only 5 most recent
|
| 84 |
+
recent = recent.slice(0, 5);
|
| 85 |
+
|
| 86 |
+
localStorage.setItem('recentTickers', JSON.stringify(recent));
|
| 87 |
+
} catch (error) {
|
| 88 |
+
console.error('Error saving recent ticker:', error);
|
| 89 |
+
}
|
| 90 |
+
}
|
| 91 |
+
|
| 92 |
+
function getPopularTickers() {
|
| 93 |
+
const popularSymbols = ['AAPL', 'MSFT', 'GOOGL', 'AMZN', 'TSLA', 'META', 'NVDA', 'AMD'];
|
| 94 |
+
return allTickers.filter(ticker => popularSymbols.includes(ticker.ticker));
|
| 95 |
+
}
|
| 96 |
+
|
| 97 |
+
// Dropdown control functions
|
| 98 |
+
function showDropdown() {
|
| 99 |
+
document.getElementById('dropdownList').classList.remove('hidden');
|
| 100 |
+
}
|
| 101 |
+
|
| 102 |
+
function hideDropdown() {
|
| 103 |
+
document.getElementById('dropdownList').classList.add('hidden');
|
| 104 |
+
highlightedIndex = -1;
|
| 105 |
+
}
|
| 106 |
+
|
| 107 |
+
function populateDropdown(tickers, searchTerm = '') {
|
| 108 |
+
const dropdown = document.getElementById('dropdownList');
|
| 109 |
+
dropdown.innerHTML = '';
|
| 110 |
+
dropdownItems = [];
|
| 111 |
+
|
| 112 |
+
if (searchTerm === '') {
|
| 113 |
+
// Show recent and popular tickers
|
| 114 |
+
const recent = getRecentTickers();
|
| 115 |
+
const popular = getPopularTickers();
|
| 116 |
+
|
| 117 |
+
if (recent.length > 0) {
|
| 118 |
+
const recentHeader = document.createElement('div');
|
| 119 |
+
recentHeader.className = 'dropdown-section-header';
|
| 120 |
+
recentHeader.textContent = 'Recent';
|
| 121 |
+
dropdown.appendChild(recentHeader);
|
| 122 |
+
|
| 123 |
+
recent.forEach(item => {
|
| 124 |
+
const div = createDropdownItem(item.ticker, item.title);
|
| 125 |
+
dropdown.appendChild(div);
|
| 126 |
+
dropdownItems.push({ element: div, ticker: item.ticker, title: item.title });
|
| 127 |
+
});
|
| 128 |
+
}
|
| 129 |
+
|
| 130 |
+
if (popular.length > 0) {
|
| 131 |
+
const popularHeader = document.createElement('div');
|
| 132 |
+
popularHeader.className = 'dropdown-section-header';
|
| 133 |
+
popularHeader.textContent = 'Popular';
|
| 134 |
+
dropdown.appendChild(popularHeader);
|
| 135 |
+
|
| 136 |
+
popular.forEach(item => {
|
| 137 |
+
const div = createDropdownItem(item.ticker, item.title);
|
| 138 |
+
dropdown.appendChild(div);
|
| 139 |
+
dropdownItems.push({ element: div, ticker: item.ticker, title: item.title });
|
| 140 |
+
});
|
| 141 |
+
}
|
| 142 |
+
} else {
|
| 143 |
+
// Show filtered results
|
| 144 |
+
const limited = tickers.slice(0, 50);
|
| 145 |
+
|
| 146 |
+
if (limited.length === 0) {
|
| 147 |
+
const noResults = document.createElement('div');
|
| 148 |
+
noResults.className = 'dropdown-no-results';
|
| 149 |
+
noResults.textContent = 'No results found';
|
| 150 |
+
dropdown.appendChild(noResults);
|
| 151 |
+
} else {
|
| 152 |
+
limited.forEach(item => {
|
| 153 |
+
const div = createDropdownItem(item.ticker, item.title);
|
| 154 |
+
dropdown.appendChild(div);
|
| 155 |
+
dropdownItems.push({ element: div, ticker: item.ticker, title: item.title });
|
| 156 |
+
});
|
| 157 |
+
}
|
| 158 |
+
}
|
| 159 |
+
}
|
| 160 |
+
|
| 161 |
+
function createDropdownItem(ticker, title) {
|
| 162 |
+
const div = document.createElement('div');
|
| 163 |
+
div.className = 'dropdown-item';
|
| 164 |
+
div.textContent = `${ticker} - ${title}`;
|
| 165 |
+
div.dataset.ticker = ticker;
|
| 166 |
+
div.dataset.title = title;
|
| 167 |
+
|
| 168 |
+
// Use mousedown instead of click to fire before blur
|
| 169 |
+
div.addEventListener('mousedown', (e) => {
|
| 170 |
+
e.preventDefault();
|
| 171 |
+
selectTicker(ticker, title);
|
| 172 |
+
});
|
| 173 |
+
|
| 174 |
+
return div;
|
| 175 |
+
}
|
| 176 |
+
|
| 177 |
+
function selectTicker(ticker, title) {
|
| 178 |
+
const input = document.getElementById('tickerSearch');
|
| 179 |
+
input.value = `${ticker} - ${title}`;
|
| 180 |
+
currentTicker = ticker;
|
| 181 |
+
hideDropdown();
|
| 182 |
+
saveRecentTicker(ticker, title);
|
| 183 |
+
loadStockData(ticker);
|
| 184 |
+
updateChatContext();
|
| 185 |
+
}
|
| 186 |
+
|
| 187 |
+
function highlightItem(index) {
|
| 188 |
+
// Remove all highlights
|
| 189 |
+
dropdownItems.forEach(item => item.element.classList.remove('highlighted'));
|
| 190 |
+
|
| 191 |
+
if (index >= 0 && index < dropdownItems.length) {
|
| 192 |
+
dropdownItems[index].element.classList.add('highlighted');
|
| 193 |
+
dropdownItems[index].element.scrollIntoView({ block: 'nearest' });
|
| 194 |
+
}
|
| 195 |
+
}
|
| 196 |
+
|
| 197 |
+
// Load tickers on page load
|
| 198 |
+
document.addEventListener('DOMContentLoaded', async () => {
|
| 199 |
+
initTheme();
|
| 200 |
+
await loadTickers();
|
| 201 |
+
await loadMarketStatus();
|
| 202 |
+
setupEventListeners();
|
| 203 |
+
});
|
| 204 |
+
|
| 205 |
+
// Theme toggle functionality
|
| 206 |
+
function initTheme() {
|
| 207 |
+
const savedTheme = localStorage.getItem('theme');
|
| 208 |
+
const prefersDark = window.matchMedia('(prefers-color-scheme: dark)').matches;
|
| 209 |
+
|
| 210 |
+
if (savedTheme) {
|
| 211 |
+
document.documentElement.setAttribute('data-theme', savedTheme);
|
| 212 |
+
} else if (prefersDark) {
|
| 213 |
+
document.documentElement.setAttribute('data-theme', 'dark');
|
| 214 |
+
}
|
| 215 |
+
|
| 216 |
+
const themeToggle = document.getElementById('themeToggle');
|
| 217 |
+
if (themeToggle) {
|
| 218 |
+
themeToggle.addEventListener('click', toggleTheme);
|
| 219 |
+
}
|
| 220 |
+
}
|
| 221 |
+
|
| 222 |
+
function toggleTheme() {
|
| 223 |
+
const currentTheme = document.documentElement.getAttribute('data-theme');
|
| 224 |
+
const newTheme = currentTheme === 'dark' ? 'light' : 'dark';
|
| 225 |
+
|
| 226 |
+
document.documentElement.setAttribute('data-theme', newTheme);
|
| 227 |
+
localStorage.setItem('theme', newTheme);
|
| 228 |
+
|
| 229 |
+
// Redraw chart with new theme colors
|
| 230 |
+
if (chartState.data) {
|
| 231 |
+
drawChart(1);
|
| 232 |
+
}
|
| 233 |
+
}
|
| 234 |
+
|
| 235 |
+
async function loadTickers() {
|
| 236 |
+
try {
|
| 237 |
+
const response = await fetch('../company_tickers.json');
|
| 238 |
+
const data = await response.json();
|
| 239 |
+
|
| 240 |
+
allTickers = Object.values(data).map(item => ({
|
| 241 |
+
ticker: item.ticker,
|
| 242 |
+
title: item.title,
|
| 243 |
+
cik: item.cik_str
|
| 244 |
+
}));
|
| 245 |
+
} catch (error) {
|
| 246 |
+
console.error('Error loading tickers:', error);
|
| 247 |
+
}
|
| 248 |
+
}
|
| 249 |
+
|
| 250 |
+
function setupEventListeners() {
    const tickerSearch = document.getElementById('tickerSearch');
    const dropdownContainer = document.querySelector('.dropdown-container');

    // Input focus - show dropdown with recent/popular or current results
    tickerSearch.addEventListener('focus', (e) => {
        const searchTerm = e.target.value.trim();
        if (searchTerm === '') {
            populateDropdown([], '');
        } else {
            // Select all text for easy replacement
            e.target.select();
            const filtered = allTickers.filter(item =>
                item.ticker.toLowerCase().includes(searchTerm.toLowerCase()) ||
                item.title.toLowerCase().includes(searchTerm.toLowerCase())
            );
            populateDropdown(filtered, searchTerm);
        }
        showDropdown();
    });

    // Input blur - hide dropdown with delay
    tickerSearch.addEventListener('blur', () => {
        setTimeout(() => {
            hideDropdown();
        }, 200);
    });

    // Input keydown - handle keyboard navigation
    tickerSearch.addEventListener('keydown', (e) => {
        const dropdown = document.getElementById('dropdownList');
        const isOpen = !dropdown.classList.contains('hidden');

        if (!isOpen && e.key !== 'Escape') return;

        switch (e.key) {
            case 'Escape':
                if (e.target.value) {
                    e.target.value = '';
                    populateDropdown([], '');
                    showDropdown();
                } else {
                    hideDropdown();
                }
                break;

            case 'ArrowDown':
                e.preventDefault();
                highlightedIndex++;
                if (highlightedIndex >= dropdownItems.length) {
                    highlightedIndex = 0;
                }
                highlightItem(highlightedIndex);
                break;

            case 'ArrowUp':
                e.preventDefault();
                highlightedIndex--;
                if (highlightedIndex < 0) {
                    highlightedIndex = dropdownItems.length - 1;
                }
                highlightItem(highlightedIndex);
                break;

            case 'Enter':
                e.preventDefault();
                if (highlightedIndex >= 0 && highlightedIndex < dropdownItems.length) {
                    const item = dropdownItems[highlightedIndex];
                    selectTicker(item.ticker, item.title);
                }
                break;
        }
    });

    // Input event - filter and show results
    tickerSearch.addEventListener('input', (e) => {
        const searchTerm = e.target.value.trim();
        highlightedIndex = -1;

        if (searchTerm === '') {
            populateDropdown([], '');
        } else {
            const filtered = allTickers.filter(item =>
                item.ticker.toLowerCase().includes(searchTerm.toLowerCase()) ||
                item.title.toLowerCase().includes(searchTerm.toLowerCase())
            );
            populateDropdown(filtered, searchTerm);
        }
        showDropdown();
    });

    // Click outside to close dropdown
    document.addEventListener('click', (e) => {
        if (!dropdownContainer.contains(e.target)) {
            hideDropdown();
        }
    });

    // Tab switching
    const tabButtons = document.querySelectorAll('.tab-button');
    tabButtons.forEach(button => {
        button.addEventListener('click', () => {
            const tabName = button.dataset.tab;
            switchTab(tabName);
        });
    });

    // Chart range buttons
    const rangeButtons = document.querySelectorAll('.chart-range-btn');
    rangeButtons.forEach(button => {
        button.addEventListener('click', () => {
            const range = button.dataset.range;
            loadChartData(currentTicker, range);
            rangeButtons.forEach(btn => btn.classList.remove('active'));
            button.classList.add('active');
        });
    });

    // Chart view toggle buttons (line/candle)
    const viewButtons = document.querySelectorAll('.chart-view-btn');
    viewButtons.forEach(button => {
        button.addEventListener('click', () => {
            viewButtons.forEach(btn => btn.classList.remove('active'));
            button.classList.add('active');
            chartState.viewMode = button.dataset.view;
            if (chartState.data) {
                drawChart(1);
            }
        });
    });

    // Setup chat listeners
    setupChatListeners();
}

function switchTab(tabName) {
    const tabButtons = document.querySelectorAll('.tab-button');
    const tabPanes = document.querySelectorAll('.tab-pane');

    tabButtons.forEach(btn => btn.classList.remove('active'));
    tabPanes.forEach(pane => pane.classList.remove('active'));

    document.querySelector(`[data-tab="${tabName}"]`).classList.add('active');
    document.getElementById(tabName).classList.add('active');

    if (tabName === 'overview' && currentTicker) {
        loadChartData(currentTicker, document.querySelector('.chart-range-btn.active').dataset.range || '1M');
    } else if (tabName === 'financials' && currentTicker) {
        loadFinancials(currentTicker);
    } else if (tabName === 'news' && currentTicker) {
        loadNews(currentTicker);
    } else if (tabName === 'dividends' && currentTicker) {
        loadDividends(currentTicker);
    } else if (tabName === 'splits' && currentTicker) {
        loadSplits(currentTicker);
    } else if (tabName === 'sentiment' && currentTicker) {
        loadSentiment(currentTicker);
    } else if (tabName === 'forecast' && currentTicker) {
        loadForecast(currentTicker);
    }
}

function clearStockDisplay() {
    document.getElementById('stockTitle').textContent = '';
    document.getElementById('stockPrice').textContent = '';
    document.getElementById('stockPrice').className = 'stock-price';
    document.getElementById('companyDesc').textContent = '';
    document.getElementById('marketCap').textContent = '--';
    document.getElementById('openPrice').textContent = '--';
    document.getElementById('highPrice').textContent = '--';
    document.getElementById('lowPrice').textContent = '--';
    document.getElementById('volume').textContent = '--';
    document.getElementById('peRatio').textContent = '--';
}

async function loadStockData(ticker) {
    currentTicker = ticker;
    document.getElementById('stockData').classList.remove('hidden');
    clearStockDisplay();

    try {
        const activeTab = document.querySelector('.tab-button.active').dataset.tab;
        const loadPromises = [
            loadTickerDetails(ticker),
            loadPreviousClose(ticker)
        ];
        if (activeTab === 'overview') {
            loadPromises.push(loadChartData(ticker, '1M'));
        }
        await Promise.all(loadPromises);

        if (activeTab === 'financials') {
            await loadFinancials(ticker);
        } else if (activeTab === 'news') {
            await loadNews(ticker);
        } else if (activeTab === 'sentiment') {
            loadSentiment(ticker);
        } else if (activeTab === 'forecast') {
            loadForecast(ticker);
        } else if (activeTab === 'dividends') {
            loadDividends(ticker);
        } else if (activeTab === 'splits') {
            loadSplits(ticker);
        }

        // Preload news and trigger article scraping for RAG in background
        preloadNewsForRAG(ticker);
    } catch (error) {
        console.error('Error loading stock data:', error);
        alert('Error loading stock data. Please check your API key and try again.');
    }
}

async function loadTickerDetails(ticker) {
    const cacheKey = `details_${ticker}`;

    // Check cache first
    if (cache.has(cacheKey)) {
        const data = cache.get(cacheKey);
        renderTickerDetails(data);
        return;
    }

    try {
        const response = await fetch(`${API_BASE}/ticker/${ticker}/details`);
        const data = await response.json();

        if (data.results) {
            cache.set(cacheKey, data, CACHE_TTL.STATIC);
            renderTickerDetails(data);
        }
    } catch (error) {
        console.error('Error loading ticker details:', error);
    }
}

function renderTickerDetails(data) {
    if (data.results) {
        const results = data.results;
        document.getElementById('stockTitle').textContent =
            `${results.ticker} - ${results.name}`;
        document.getElementById('companyDesc').textContent =
            results.description || 'No description available';
        document.getElementById('marketCap').textContent =
            results.market_cap ? formatLargeNumber(results.market_cap) : '--';
    }
}

async function loadSnapshot(ticker) {
    try {
        const response = await fetch(`${API_BASE}/ticker/${ticker}/snapshot`);
        const data = await response.json();

        if (data.ticker) {
            const ticker_data = data.ticker;
            const day = ticker_data.day || {};

            document.getElementById('openPrice').textContent =
                day.o ? `$${day.o.toFixed(2)}` : '--';
            document.getElementById('highPrice').textContent =
                day.h ? `$${day.h.toFixed(2)}` : '--';
            document.getElementById('lowPrice').textContent =
                day.l ? `$${day.l.toFixed(2)}` : '--';
            document.getElementById('volume').textContent =
                day.v ? formatLargeNumber(day.v) : '--';
        }
    } catch (error) {
        console.error('Error loading snapshot:', error);
    }
}

async function loadPreviousClose(ticker) {
    const cacheKey = `prev_close_${ticker}`;

    // Check cache first
    if (cache.has(cacheKey)) {
        const data = cache.get(cacheKey);
        renderPreviousClose(data);
        return;
    }

    try {
        const response = await fetch(`${API_BASE}/ticker/${ticker}/previous-close`);
        const data = await response.json();

        if (data.results && data.results.length > 0) {
            cache.set(cacheKey, data, CACHE_TTL.DAILY);
            renderPreviousClose(data);
        }
    } catch (error) {
        console.error('Error loading previous close:', error);
    }
}

function renderPreviousClose(data) {
    if (data.results && data.results.length > 0) {
        const result = data.results[0];
        const priceElement = document.getElementById('stockPrice');
        priceElement.textContent = `$${result.c.toFixed(2)}`;

        const change = result.c - result.o;
        const changePercent = ((change / result.o) * 100).toFixed(2);

        if (change >= 0) {
            priceElement.classList.add('positive');
            priceElement.classList.remove('negative');
            priceElement.innerHTML += ` <span style="font-size: 0.6em;">+${changePercent}%</span>`;
        } else {
            priceElement.classList.add('negative');
            priceElement.classList.remove('positive');
            priceElement.innerHTML += ` <span style="font-size: 0.6em;">${changePercent}%</span>`;
        }

        // Populate Overview metrics from previous close data
        document.getElementById('openPrice').textContent = `$${result.o.toFixed(2)}`;
        document.getElementById('highPrice').textContent = `$${result.h.toFixed(2)}`;
        document.getElementById('lowPrice').textContent = `$${result.l.toFixed(2)}`;
        document.getElementById('volume').textContent = formatLargeNumber(result.v);
    }
}

async function loadChartData(ticker, range) {
    const cacheKey = `chart_${ticker}_${range}`;
    const chartLoading = document.getElementById('chartLoading');

    // Check cache first
    if (cache.has(cacheKey)) {
        const data = cache.get(cacheKey);
        if (data.results) {
            renderChart(data.results);
        }
        return;
    }

    chartLoading.classList.remove('hidden');
    const { from, to } = getDateRange(range);

    try {
        const response = await fetch(
            `${API_BASE}/ticker/${ticker}/aggregates?from=${from}&to=${to}&timespan=day`
        );
        const data = await response.json();

        if (data.results) {
            cache.set(cacheKey, data, CACHE_TTL.DAILY);
            renderChart(data.results);
        }
    } catch (error) {
        console.error('Error loading chart data:', error);
    } finally {
        chartLoading.classList.add('hidden');
    }
}

function getDateRange(range) {
    const to = new Date();
    const from = new Date();

    switch (range) {
        case '1M':
            from.setMonth(from.getMonth() - 1);
            break;
        case '3M':
            from.setMonth(from.getMonth() - 3);
            break;
        case '6M':
            from.setMonth(from.getMonth() - 6);
            break;
        case '1Y':
            from.setFullYear(from.getFullYear() - 1);
            break;
        case '5Y':
            from.setFullYear(from.getFullYear() - 5);
            break;
    }

    return {
        from: from.toISOString().split('T')[0],
        to: to.toISOString().split('T')[0]
    };
}

// Chart state for interactivity
let chartState = {
    data: null,
    canvas: null,
    ctx: null,
    padding: { top: 20, right: 20, bottom: 40, left: 65 },
    hoveredIndex: -1,
    animationProgress: 0,
    animationFrame: null,
    viewMode: 'line' // 'line' or 'candle'
};

function getChartColors() {
    const isDark = document.documentElement.getAttribute('data-theme') === 'dark';
    return {
        line: isDark ? '#818cf8' : '#4f46e5',
        lineLight: isDark ? '#a5b4fc' : '#6366f1',
        gradientTop: isDark ? 'rgba(129, 140, 248, 0.3)' : 'rgba(79, 70, 229, 0.15)',
        gradientBottom: isDark ? 'rgba(129, 140, 248, 0)' : 'rgba(79, 70, 229, 0)',
        grid: isDark ? 'rgba(148, 163, 184, 0.1)' : 'rgba(148, 163, 184, 0.3)',
        text: isDark ? '#94a3b8' : '#64748b',
        textStrong: isDark ? '#cbd5e1' : '#475569',
        crosshair: isDark ? 'rgba(148, 163, 184, 0.5)' : 'rgba(100, 116, 139, 0.4)',
        tooltipBg: isDark ? '#1e293b' : '#ffffff',
        tooltipBorder: isDark ? '#334155' : '#e2e8f0',
        positive: '#10b981',
        negative: '#ef4444'
    };
}

function renderChart(data) {
    const canvas = document.getElementById('priceChart');
    const ctx = canvas.getContext('2d');

    // High DPI support
    const dpr = window.devicePixelRatio || 1;
    const rect = canvas.getBoundingClientRect();
    canvas.width = rect.width * dpr;
    canvas.height = rect.height * dpr;
    ctx.scale(dpr, dpr);
    canvas.style.width = rect.width + 'px';
    canvas.style.height = rect.height + 'px';

    chartState.data = data;
    chartState.canvas = canvas;
    chartState.ctx = ctx;
    chartState.hoveredIndex = -1;

    // Cancel any existing animation
    if (chartState.animationFrame) {
        cancelAnimationFrame(chartState.animationFrame);
    }

    // Animate the chart drawing
    chartState.animationProgress = 0;
    animateChart();

    // Set up mouse events
    canvas.onmousemove = handleChartMouseMove;
    canvas.onmouseleave = handleChartMouseLeave;
}

function animateChart() {
    chartState.animationProgress += 0.04;
    if (chartState.animationProgress > 1) chartState.animationProgress = 1;

    drawChart(chartState.animationProgress);

    if (chartState.animationProgress < 1) {
        chartState.animationFrame = requestAnimationFrame(animateChart);
    }
}

function drawChart(progress = 1) {
    const { data, canvas, ctx, padding, viewMode } = chartState;
    if (!data || !ctx) return;

    const colors = getChartColors();
    const dpr = window.devicePixelRatio || 1;
    const width = canvas.width / dpr;
    const height = canvas.height / dpr;
    const chartWidth = width - padding.left - padding.right;
    const chartHeight = height - padding.top - padding.bottom;

    // For candlestick, use high/low for price range
    let minPrice, maxPrice;
    if (viewMode === 'candle') {
        minPrice = Math.min(...data.map(d => d.l));
        maxPrice = Math.max(...data.map(d => d.h));
    } else {
        const prices = data.map(d => d.c);
        minPrice = Math.min(...prices);
        maxPrice = Math.max(...prices);
    }
    const pricePadding = (maxPrice - minPrice) * 0.05;
    const adjustedMin = minPrice - pricePadding;
    const adjustedMax = maxPrice + pricePadding;
    const priceRange = adjustedMax - adjustedMin;

    ctx.clearRect(0, 0, width, height);

    // Draw horizontal grid lines and Y-axis labels
    const numGridLines = 5;
    ctx.strokeStyle = colors.grid;
    ctx.lineWidth = 1;
    ctx.fillStyle = colors.text;
    ctx.font = '11px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
    ctx.textAlign = 'right';
    ctx.textBaseline = 'middle';

    for (let i = 0; i <= numGridLines; i++) {
        const y = padding.top + (i / numGridLines) * chartHeight;
        const price = adjustedMax - (i / numGridLines) * priceRange;

        ctx.beginPath();
        ctx.setLineDash([4, 4]);
        ctx.moveTo(padding.left, y);
        ctx.lineTo(width - padding.right, y);
        ctx.stroke();
        ctx.setLineDash([]);

        ctx.fillText(`$${price.toFixed(2)}`, padding.left - 8, y);
    }

    // Draw chart based on view mode
    if (viewMode === 'candle') {
        drawCandlesticks(data, adjustedMax, priceRange, chartWidth, chartHeight, height, padding, colors, progress);
    } else {
        drawLineChart(data, adjustedMax, priceRange, chartWidth, chartHeight, height, padding, colors, progress);
    }

    // Draw X-axis date labels
    drawXAxisLabels(data, chartWidth, height, padding, colors);

    // Draw crosshair and tooltip if hovering
    if (chartState.hoveredIndex >= 0 && chartState.hoveredIndex < data.length && progress === 1) {
        drawCrosshair(chartState.hoveredIndex, data, adjustedMin, adjustedMax, priceRange, chartWidth, chartHeight, width, height, padding, colors);
    }
}

function drawLineChart(data, adjustedMax, priceRange, chartWidth, chartHeight, height, padding, colors, progress) {
    const ctx = chartState.ctx;

    // Calculate points for animation
    const pointsToDraw = Math.floor(data.length * progress);
    const points = [];

    for (let i = 0; i < pointsToDraw; i++) {
        const x = padding.left + (i / (data.length - 1)) * chartWidth;
        const y = padding.top + ((adjustedMax - data[i].c) / priceRange) * chartHeight;
        points.push({ x, y, data: data[i] });
    }

    if (points.length < 2) return;

    // Draw gradient fill
    const gradient = ctx.createLinearGradient(0, padding.top, 0, height - padding.bottom);
    gradient.addColorStop(0, colors.gradientTop);
    gradient.addColorStop(1, colors.gradientBottom);

    ctx.beginPath();
    ctx.moveTo(points[0].x, height - padding.bottom);
    points.forEach(p => ctx.lineTo(p.x, p.y));
    ctx.lineTo(points[points.length - 1].x, height - padding.bottom);
    ctx.closePath();
    ctx.fillStyle = gradient;
    ctx.fill();

    // Draw the line
    ctx.beginPath();
    ctx.strokeStyle = colors.line;
    ctx.lineWidth = 2;
    ctx.lineJoin = 'round';
    ctx.lineCap = 'round';

    points.forEach((p, i) => {
        if (i === 0) {
            ctx.moveTo(p.x, p.y);
        } else {
            ctx.lineTo(p.x, p.y);
        }
    });
    ctx.stroke();
}

function drawCandlesticks(data, adjustedMax, priceRange, chartWidth, chartHeight, height, padding, colors, progress) {
    const ctx = chartState.ctx;
    const candleCount = data.length;
    const totalCandleSpace = chartWidth / candleCount;
    const candleWidth = Math.max(1, totalCandleSpace * 0.7);
    const candlesToDraw = Math.floor(candleCount * progress);

    for (let i = 0; i < candlesToDraw; i++) {
        const point = data[i];
        const x = padding.left + (i + 0.5) * totalCandleSpace;

        const openY = padding.top + ((adjustedMax - point.o) / priceRange) * chartHeight;
        const closeY = padding.top + ((adjustedMax - point.c) / priceRange) * chartHeight;
        const highY = padding.top + ((adjustedMax - point.h) / priceRange) * chartHeight;
        const lowY = padding.top + ((adjustedMax - point.l) / priceRange) * chartHeight;

        const isUp = point.c >= point.o;
        const candleColor = isUp ? colors.positive : colors.negative;

        // Draw wick (high to low line)
        ctx.beginPath();
        ctx.strokeStyle = candleColor;
        ctx.lineWidth = 1;
        ctx.moveTo(x, highY);
        ctx.lineTo(x, lowY);
        ctx.stroke();

        // Draw body (open to close rectangle)
        const bodyTop = Math.min(openY, closeY);
        const bodyHeight = Math.max(1, Math.abs(closeY - openY));

        ctx.fillStyle = candleColor;
        ctx.fillRect(x - candleWidth / 2, bodyTop, candleWidth, bodyHeight);
    }
}

function drawXAxisLabels(data, chartWidth, height, padding, colors) {
    const ctx = chartState.ctx;
    const labelCount = Math.min(6, data.length);
    const step = Math.floor(data.length / labelCount);

    ctx.fillStyle = colors.text;
    ctx.font = '10px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
    ctx.textAlign = 'center';
    ctx.textBaseline = 'top';

    for (let i = 0; i < data.length; i += step) {
        const x = padding.left + (i / (data.length - 1)) * chartWidth;
        const date = new Date(data[i].t);
        const label = formatDateLabel(date, data.length);
        ctx.fillText(label, x, height - padding.bottom + 8);
    }

    // Always show last date
    const lastX = padding.left + chartWidth;
    const lastDate = new Date(data[data.length - 1].t);
    ctx.fillText(formatDateLabel(lastDate, data.length), lastX, height - padding.bottom + 8);
}

function formatDateLabel(date, dataLength) {
    const month = date.toLocaleDateString('en-US', { month: 'short' });
    const day = date.getDate();
    const year = date.getFullYear().toString().slice(-2);

    if (dataLength > 365) {
        return `${month} '${year}`;
    }
    return `${month} ${day}`;
}

function drawCrosshair(index, data, adjustedMin, adjustedMax, priceRange, chartWidth, chartHeight, width, height, padding, colors) {
    const ctx = chartState.ctx;
    const point = data[index];
    const x = padding.left + (index / (data.length - 1)) * chartWidth;
    const y = padding.top + ((adjustedMax - point.c) / priceRange) * chartHeight;

    // Vertical line
    ctx.strokeStyle = colors.crosshair;
    ctx.lineWidth = 1;
    ctx.setLineDash([4, 4]);
    ctx.beginPath();
    ctx.moveTo(x, padding.top);
    ctx.lineTo(x, height - padding.bottom);
    ctx.stroke();

    // Horizontal line
    ctx.beginPath();
    ctx.moveTo(padding.left, y);
    ctx.lineTo(width - padding.right, y);
    ctx.stroke();
    ctx.setLineDash([]);

    // Point dot
    ctx.beginPath();
    ctx.fillStyle = colors.line;
    ctx.arc(x, y, 5, 0, Math.PI * 2);
    ctx.fill();
    ctx.strokeStyle = colors.tooltipBg;
    ctx.lineWidth = 2;
    ctx.stroke();

    // Tooltip
    drawTooltip(x, y, point, data, index, width, height, padding, colors);
}

function drawTooltip(x, y, point, data, index, width, height, padding, colors) {
    const ctx = chartState.ctx;
    const isCandleMode = chartState.viewMode === 'candle';

    const date = new Date(point.t);
    const dateStr = date.toLocaleDateString('en-US', { month: 'short', day: 'numeric', year: 'numeric' });

    const change = index > 0 ? point.c - data[index - 1].c : 0;
    const changePercent = index > 0 ? (change / data[index - 1].c) * 100 : 0;
    const changeColor = change >= 0 ? colors.positive : colors.negative;
    const changeSign = change >= 0 ? '+' : '';

    const tooltipWidth = isCandleMode ? 155 : 140;
    const tooltipHeight = isCandleMode ? 105 : 72;
    let tooltipX = x + 12;
    let tooltipY = y - tooltipHeight / 2;

    // Keep tooltip in bounds
    if (tooltipX + tooltipWidth > width - padding.right) {
        tooltipX = x - tooltipWidth - 12;
    }
    if (tooltipY < padding.top) {
        tooltipY = padding.top;
    }
    if (tooltipY + tooltipHeight > height - padding.bottom) {
        tooltipY = height - padding.bottom - tooltipHeight;
    }

    // Tooltip background
    ctx.fillStyle = colors.tooltipBg;
    ctx.strokeStyle = colors.tooltipBorder;
    ctx.lineWidth = 1;
    ctx.beginPath();
    ctx.roundRect(tooltipX, tooltipY, tooltipWidth, tooltipHeight, 6);
    ctx.fill();
    ctx.stroke();

    // Tooltip content
    ctx.fillStyle = colors.text;
    ctx.font = '10px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
    ctx.textAlign = 'left';
    ctx.textBaseline = 'top';
    ctx.fillText(dateStr, tooltipX + 10, tooltipY + 10);

    if (isCandleMode) {
        // OHLC display for candlestick mode
        ctx.font = '11px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
        const ohlcY = tooltipY + 28;
        const lineHeight = 16;

        ctx.fillStyle = colors.text;
        ctx.fillText('O:', tooltipX + 10, ohlcY);
        ctx.fillText('H:', tooltipX + 10, ohlcY + lineHeight);
        ctx.fillText('L:', tooltipX + 10, ohlcY + lineHeight * 2);
        ctx.fillText('C:', tooltipX + 10, ohlcY + lineHeight * 3);

        ctx.fillStyle = colors.textStrong;
        ctx.fillText(`$${point.o.toFixed(2)}`, tooltipX + 28, ohlcY);
        ctx.fillText(`$${point.h.toFixed(2)}`, tooltipX + 28, ohlcY + lineHeight);
        ctx.fillText(`$${point.l.toFixed(2)}`, tooltipX + 28, ohlcY + lineHeight * 2);

        ctx.fillStyle = changeColor;
        ctx.fillText(`$${point.c.toFixed(2)}`, tooltipX + 28, ohlcY + lineHeight * 3);

        // Change indicator on the right
        ctx.fillStyle = changeColor;
        ctx.textAlign = 'right';
        ctx.fillText(`${changeSign}${changePercent.toFixed(2)}%`, tooltipX + tooltipWidth - 10, ohlcY + lineHeight * 3);
    } else {
        // Simple display for line chart
        ctx.fillStyle = colors.textStrong;
        ctx.font = 'bold 16px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
        ctx.fillText(`$${point.c.toFixed(2)}`, tooltipX + 10, tooltipY + 26);

        ctx.fillStyle = changeColor;
        ctx.font = '11px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
        ctx.fillText(`${changeSign}${change.toFixed(2)} (${changeSign}${changePercent.toFixed(2)}%)`, tooltipX + 10, tooltipY + 50);
    }
}

function handleChartMouseMove(e) {
    const { data, canvas, padding } = chartState;
    if (!data) return;

    const rect = canvas.getBoundingClientRect();
    const dpr = window.devicePixelRatio || 1;
    const width = canvas.width / dpr;
    const chartWidth = width - padding.left - padding.right;
    const mouseX = e.clientX - rect.left;

    const relativeX = mouseX - padding.left;
    const index = Math.round((relativeX / chartWidth) * (data.length - 1));

    if (index >= 0 && index < data.length && index !== chartState.hoveredIndex) {
        chartState.hoveredIndex = index;
        drawChart(1);
    }
}

function handleChartMouseLeave() {
    chartState.hoveredIndex = -1;
    drawChart(1);
}

async function loadFinancials(ticker) {
    const cacheKey = `financials_${ticker}`;
    const container = document.getElementById('financialsData');
    container.innerHTML = '<p>Loading financial data...</p>';

    // Check cache first
    if (cache.has(cacheKey)) {
        const data = cache.get(cacheKey);
        renderFinancials(data, container);
        return;
    }

    try {
        const response = await fetch(`${API_BASE}/ticker/${ticker}/financials`);
        const data = await response.json();

        // Store in cache
        cache.set(cacheKey, data, CACHE_TTL.MODERATE);

        renderFinancials(data, container);
    } catch (error) {
        console.error('Error loading financials:', error);
        container.innerHTML = '<p>Error loading financial data.</p>';
    }
}

function renderFinancials(data, container) {
    if (data.results && data.results.length > 0) {
        let html = '';

        data.results.forEach(period => {
            const financials = period.financials;
            const endDate = period.end_date ? new Date(period.end_date).toLocaleDateString() : '';
            const dateDisplay = endDate ? ` (${endDate})` : '';

            html += `
                <div class="financial-period">
                    <h4>${period.fiscal_year} - ${period.fiscal_period}${dateDisplay}</h4>
                    <div class="financial-grid">
            `;

            if (financials.income_statement) {
                const income = financials.income_statement;
                if (income.revenues) {
                    html += `
                        <div class="financial-item">
                            <span class="financial-item-label">Revenue</span>
                            <span class="financial-item-value">${formatLargeNumber(income.revenues.value)}</span>
                        </div>
                    `;
                }
                if (income.net_income_loss) {
                    html += `
                        <div class="financial-item">
                            <span class="financial-item-label">Net Income</span>
                            <span class="financial-item-value">${formatLargeNumber(income.net_income_loss.value)}</span>
                        </div>
                    `;
                }
                if (income.gross_profit) {
                    html += `
                        <div class="financial-item">
                            <span class="financial-item-label">Gross Profit</span>
                            <span class="financial-item-value">${formatLargeNumber(income.gross_profit.value)}</span>
                        </div>
                    `;
                }
            }

            if (financials.balance_sheet) {
                const balance = financials.balance_sheet;
                if (balance.assets) {
                    html += `
                        <div class="financial-item">
                            <span class="financial-item-label">Total Assets</span>
                            <span class="financial-item-value">${formatLargeNumber(balance.assets.value)}</span>
                        </div>
                    `;
                }
                if (balance.liabilities) {
                    html += `
                        <div class="financial-item">
                            <span class="financial-item-label">Total Liabilities</span>
                            <span class="financial-item-value">${formatLargeNumber(balance.liabilities.value)}</span>
                        </div>
                    `;
                }
            }

            html += `
                    </div>
                </div>
            `;
        });

        container.innerHTML = html;
    } else {
        container.innerHTML = '<p>No financial data available.</p>';
    }
}

async function loadNews(ticker) {
    const cacheKey = `news_${ticker}`;
    const container = document.getElementById('newsContainer');
    container.innerHTML = '<p>Loading news...</p>';

    // Check cache first
    if (cache.has(cacheKey)) {
        const data = cache.get(cacheKey);
        renderNews(data, container);
        return;
    }

    try {
        const response = await fetch(`${API_BASE}/ticker/${ticker}/news?limit=10`);
        const data = await response.json();

        // Store in cache
        cache.set(cacheKey, data, CACHE_TTL.SHORT);

        renderNews(data, container);

        // Trigger article scraping in background
        scrapeAndEmbedArticles();
    } catch (error) {
        console.error('Error loading news:', error);
        container.innerHTML = '<p>Error loading news.</p>';
    }
}

function renderNews(data, container) {
    if (data.results && data.results.length > 0) {
        let html = '';

        data.results.forEach(article => {
            const date = new Date(article.published_utc).toLocaleDateString();
            // Escape API-supplied text before interpolating into innerHTML
            html += `
                <div class="news-article">
                    <h4><a href="${article.article_url}" target="_blank" rel="noopener">${escapeHtml(article.title || '')}</a></h4>
                    <div class="news-meta">
                        ${escapeHtml(article.publisher?.name || 'Unknown')} - ${date}
                    </div>
                    <div class="news-description">
                        ${escapeHtml(article.description || '')}
                    </div>
                </div>
            `;
        });

        container.innerHTML = html;
    } else {
        container.innerHTML = '<p>No news available.</p>';
    }
}

function formatLargeNumber(num) {
    // Guard against missing values, and keep the sign outside the dollar
    // amount so negative figures (e.g. a net loss) render as "-$1.20M".
    if (num == null || isNaN(num)) return 'N/A';
    const sign = num < 0 ? '-' : '';
    const abs = Math.abs(num);
    if (abs >= 1e12) return `${sign}$${(abs / 1e12).toFixed(2)}T`;
    if (abs >= 1e9) return `${sign}$${(abs / 1e9).toFixed(2)}B`;
    if (abs >= 1e6) return `${sign}$${(abs / 1e6).toFixed(2)}M`;
    if (abs >= 1e3) return `${sign}$${(abs / 1e3).toFixed(2)}K`;
    return `${sign}$${abs.toFixed(2)}`;
}

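The abbreviation tiers can be exercised standalone. `abbreviateUsd` below is a self-contained copy for illustration (not a name the app exports), with the sign pulled out so negative values format sensibly:

```javascript
// Standalone sketch of the large-number abbreviation tiers.
function abbreviateUsd(num) {
    if (num == null || Number.isNaN(num)) return 'N/A';
    const sign = num < 0 ? '-' : '';
    const abs = Math.abs(num);
    if (abs >= 1e12) return `${sign}$${(abs / 1e12).toFixed(2)}T`;
    if (abs >= 1e9) return `${sign}$${(abs / 1e9).toFixed(2)}B`;
    if (abs >= 1e6) return `${sign}$${(abs / 1e6).toFixed(2)}M`;
    if (abs >= 1e3) return `${sign}$${(abs / 1e3).toFixed(2)}K`;
    return `${sign}$${abs.toFixed(2)}`;
}

// abbreviateUsd(2.5e9) → '$2.50B'; abbreviateUsd(-1.2e6) → '-$1.20M'
```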
function showLoading(show) {
    // classList.toggle with a boolean adds or removes 'hidden' in one call
    document.getElementById('loading').classList.toggle('hidden', !show);
}

async function loadMarketStatus() {
    const cacheKey = 'market_status';

    // Check cache first
    if (cache.has(cacheKey)) {
        const data = cache.get(cacheKey);
        renderMarketStatus(data);
        return;
    }

    try {
        // Use the shared API base instead of a hardcoded localhost URL
        const response = await fetch(`${API_BASE}/market-status`);
        const data = await response.json();

        // Store in cache
        cache.set(cacheKey, data, CACHE_TTL.DAILY);

        renderMarketStatus(data);
    } catch (error) {
        console.error('Error loading market status:', error);
        document.getElementById('marketStatusText').textContent = 'Unknown';
    }
}

function renderMarketStatus(data) {
    const statusText = document.getElementById('marketStatusText');
    const isOpen = data.market === 'open';
    statusText.textContent = isOpen ? 'Open' : 'Closed';
    statusText.classList.toggle('status-open', isOpen);
    statusText.classList.toggle('status-closed', !isOpen);
}

async function loadDividends(ticker) {
    const cacheKey = `dividends_${ticker}`;
    const container = document.getElementById('dividendsContainer');
    container.innerHTML = '<p>Loading dividend data...</p>';

    // Check cache first
    if (cache.has(cacheKey)) {
        const data = cache.get(cacheKey);
        renderDividends(data, container);
        return;
    }

    try {
        const response = await fetch(`${API_BASE}/ticker/${ticker}/dividends?limit=20`);
        const data = await response.json();

        // Store in cache
        cache.set(cacheKey, data, CACHE_TTL.STATIC);

        renderDividends(data, container);
    } catch (error) {
        console.error('Error loading dividends:', error);
        container.innerHTML = '<p>Error loading dividend data.</p>';
    }
}

function renderDividends(data, container) {
    if (data.results && data.results.length > 0) {
        let html = '<table class="data-table"><thead><tr>';
        html += '<th>Ex-Dividend Date</th>';
        html += '<th>Pay Date</th>';
        html += '<th>Amount</th>';
        html += '<th>Frequency</th>';
        html += '</tr></thead><tbody>';

        data.results.forEach(dividend => {
            const exDate = new Date(dividend.ex_dividend_date).toLocaleDateString();
            const payDate = dividend.pay_date ? new Date(dividend.pay_date).toLocaleDateString() : 'N/A';
            const frequency = getFrequencyText(dividend.frequency);

            html += '<tr>';
            html += `<td>${exDate}</td>`;
            html += `<td>${payDate}</td>`;
            html += `<td>$${dividend.cash_amount.toFixed(4)}</td>`;
            html += `<td>${frequency}</td>`;
            html += '</tr>';
        });

        html += '</tbody></table>';
        container.innerHTML = html;
    } else {
        container.innerHTML = '<p>No dividend data available for this stock.</p>';
    }
}

async function loadSplits(ticker) {
    const cacheKey = `splits_${ticker}`;
    const container = document.getElementById('splitsContainer');
    container.innerHTML = '<p>Loading stock split data...</p>';

    // Check cache first
    if (cache.has(cacheKey)) {
        const data = cache.get(cacheKey);
        renderSplits(data, container);
        return;
    }

    try {
        const response = await fetch(`${API_BASE}/ticker/${ticker}/splits?limit=20`);
        const data = await response.json();

        // Store in cache
        cache.set(cacheKey, data, CACHE_TTL.STATIC);

        renderSplits(data, container);
    } catch (error) {
        console.error('Error loading splits:', error);
        container.innerHTML = '<p>Error loading stock split data.</p>';
    }
}

function renderSplits(data, container) {
    if (data.results && data.results.length > 0) {
        let html = '<table class="data-table"><thead><tr>';
        html += '<th>Execution Date</th>';
        html += '<th>Split Ratio</th>';
        html += '<th>Type</th>';
        html += '</tr></thead><tbody>';

        data.results.forEach(split => {
            const execDate = new Date(split.execution_date).toLocaleDateString();
            const ratio = `${split.split_to}:${split.split_from}`;
            const type = split.split_from > split.split_to ? 'Reverse Split' : 'Forward Split';

            html += '<tr>';
            html += `<td>${execDate}</td>`;
            html += `<td>${ratio}</td>`;
            html += `<td>${type}</td>`;
            html += '</tr>';
        });

        html += '</tbody></table>';
        container.innerHTML = html;
    } else {
        container.innerHTML = '<p>No stock split data available for this stock.</p>';
    }
}

function getFrequencyText(frequency) {
    const frequencies = {
        0: 'One-time',
        1: 'Annual',
        2: 'Semi-Annual',
        4: 'Quarterly',
        12: 'Monthly',
        24: 'Bi-Monthly',
        52: 'Weekly'
    };
    return frequencies[frequency] || 'Unknown';
}

// ============================================
// CHAT FUNCTIONALITY
// ============================================

// Chat state management
let chatState = {
    conversationId: generateUUID(),
    messages: [],
    isOpen: true, // Default to open for persistent panel
    isLoading: false
};

function generateUUID() {
    return 'xxxxxxxx-xxxx-4xxx-yxxx-xxxxxxxxxxxx'.replace(/[xy]/g, function(c) {
        const r = Math.random() * 16 | 0;
        const v = c === 'x' ? r : (r & 0x3 | 0x8);
        return v.toString(16);
    });
}

function updateChatContext() {
    const contextEl = document.getElementById('chatCurrentTicker');

    if (currentTicker) {
        const detailsCache = cache.get(`details_${currentTicker}`);
        const companyName = detailsCache?.results?.name || currentTicker;
        contextEl.textContent = `${currentTicker} - ${companyName}`;
    } else {
        contextEl.textContent = 'Select a stock to start';
    }
}

// Tool display names for status indicators
const TOOL_DISPLAY_NAMES = {
    'get_stock_quote': 'Fetching stock price',
    'get_company_info': 'Looking up company info',
    'get_financials': 'Retrieving financial data',
    'get_news': 'Searching news articles',
    'search_knowledge_base': 'Searching knowledge base',
    'analyze_sentiment': 'Analyzing social sentiment',
    'get_price_forecast': 'Generating price forecast',
    'get_dividends': 'Fetching dividend history',
    'get_stock_splits': 'Checking split history',
    'get_price_history': 'Loading price history'
};

function parseSSEBuffer(buffer) {
    const parsed = [];

    // Hold back any trailing partial line; the next network chunk completes it.
    // (Processing a half-received "data:" line as if complete would drop the rest.)
    const lastNewline = buffer.lastIndexOf('\n');
    let remaining = lastNewline === -1 ? buffer : buffer.slice(lastNewline + 1);
    const lines = lastNewline === -1 ? [] : buffer.slice(0, lastNewline).split('\n');
    let currentEvent = { type: null, data: null };

    for (const line of lines) {
        if (line.startsWith('event: ')) {
            currentEvent.type = line.substring(7).trim();
        } else if (line.startsWith('data: ')) {
            currentEvent.data = line.substring(6);
        } else if (line === '' && currentEvent.type !== null) {
            let data = currentEvent.data;
            if (currentEvent.type !== 'text') {
                try { data = JSON.parse(data); } catch (e) { /* keep raw payload */ }
            }
            parsed.push({ type: currentEvent.type, data: data });
            currentEvent = { type: null, data: null };
        }
    }

    // Re-buffer a complete-but-unterminated event so it isn't lost between chunks
    let prefix = '';
    if (currentEvent.type) prefix += `event: ${currentEvent.type}\n`;
    if (currentEvent.data !== null) prefix += `data: ${currentEvent.data}\n`;
    remaining = prefix + remaining;

    return { parsed, remaining };
}

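The wire format the parser consumes is plain SSE framing: an `event:` line, a `data:` line, and a blank separator per event. A condensed standalone copy of the parsing loop (for illustration only, without the cross-chunk buffering) shows how a complete two-event buffer decodes:

```javascript
// Minimal standalone SSE frame parser (illustrative copy, complete buffers only).
function parseFrames(buffer) {
    const events = [];
    let current = { type: null, data: null };
    for (const line of buffer.split('\n')) {
        if (line.startsWith('event: ')) current.type = line.slice(7).trim();
        else if (line.startsWith('data: ')) current.data = line.slice(6);
        else if (line === '' && current.type !== null) {
            events.push(current);
            current = { type: null, data: null };
        }
    }
    return events;
}

const frames = parseFrames(
    'event: tool_call\ndata: {"tool":"get_news","status":"calling"}\n\nevent: text\ndata: Hello\n\n'
);
// frames[0].type === 'tool_call'; frames[1].data === 'Hello'
```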
function renderToolStatuses(elementId, statuses) {
    const el = document.getElementById(elementId);
    if (!el) return;

    let html = '';
    for (const status of statuses) {
        const icon = status.status === 'complete' ? '✓' :
                     status.status === 'error' ? '✗' :
                     '<span class="tool-spinner"></span>';
        const className = `tool-status-item ${status.status}`;
        html += `<div class="${className}">${icon} ${escapeHtml(status.displayName)}</div>`;
    }
    el.innerHTML = html;
    el.scrollIntoView({ behavior: 'smooth', block: 'end' });
}

async function sendChatMessage() {
    const input = document.getElementById('chatInput');
    const message = input.value.trim();

    if (!message) return;

    if (!currentTicker) {
        addMessageToChat('error', 'Please select a stock first.');
        return;
    }

    // Add user message to UI
    addMessageToChat('user', message);
    input.value = '';

    // Disable input while processing
    input.disabled = true;
    document.getElementById('sendChatBtn').disabled = true;

    // Show loading indicator
    const loadingId = addMessageToChat('loading', '');

    // Collect current stock context from cache (for the agent's hybrid caching)
    const context = {
        overview: {
            details: cache.get(`details_${currentTicker}`),
            previousClose: cache.get(`prev_close_${currentTicker}`)
        },
        financials: cache.get(`financials_${currentTicker}`),
        news: cache.get(`news_${currentTicker}`),
        dividends: cache.get(`dividends_${currentTicker}`),
        splits: cache.get(`splits_${currentTicker}`),
        sentiment: cache.get(`sentiment_${currentTicker}`)
    };

    try {
        const response = await fetch(`${API_BASE}/chat/message`, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({
                ticker: currentTicker,
                message: message,
                context: context,
                conversation_id: chatState.conversationId
            })
        });

        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }

        // Remove loading indicator
        removeMessage(loadingId);

        // Create tool status container and assistant message container
        const toolStatusId = addMessageToChat('tool-status', '');
        let toolStatuses = [];
        const messageId = addMessageToChat('assistant', '');
        let assistantMessage = '';

        // Parse structured SSE events
        const reader = response.body.getReader();
        const decoder = new TextDecoder();
        let buffer = '';

        while (true) {
            const { done, value } = await reader.read();
            if (done) break;

            buffer += decoder.decode(value, { stream: true });
            const result = parseSSEBuffer(buffer);
            buffer = result.remaining;

            for (const event of result.parsed) {
                if (event.type === 'tool_call') {
                    const displayName = TOOL_DISPLAY_NAMES[event.data.tool] || event.data.tool;

                    if (event.data.status === 'calling') {
                        toolStatuses.push({ tool: event.data.tool, displayName, status: 'calling' });
                    } else {
                        const existing = toolStatuses.find(t => t.tool === event.data.tool);
                        if (existing) existing.status = event.data.status;
                    }
                    renderToolStatuses(toolStatusId, toolStatuses);

                } else if (event.type === 'text') {
                    assistantMessage += event.data;
                    updateMessage(messageId, assistantMessage);

                } else if (event.type === 'done') {
                    // Fade out tool statuses
                    const toolEl = document.getElementById(toolStatusId);
                    if (toolEl && toolStatuses.length > 0) {
                        toolEl.classList.add('tool-status-complete');
                    }

                } else if (event.type === 'error') {
                    removeMessage(messageId);
                    addMessageToChat('error', event.data.message || 'An error occurred');
                }
            }
        }

        // Remove tool status container if no tools were called
        if (toolStatuses.length === 0) {
            removeMessage(toolStatusId);
        }

        // Save to chat state
        chatState.messages.push(
            { role: 'user', content: message },
            { role: 'assistant', content: assistantMessage }
        );

    } catch (error) {
        console.error('Chat error:', error);
        removeMessage(loadingId);
        addMessageToChat('error', 'Failed to get response. Please try again.');
    } finally {
        input.disabled = false;
        document.getElementById('sendChatBtn').disabled = false;
        input.focus();
    }
}

|
| 1577 |
+
const container = document.getElementById('chatMessages');
|
| 1578 |
+
const messageId = generateUUID();
|
| 1579 |
+
|
| 1580 |
+
const messageDiv = document.createElement('div');
|
| 1581 |
+
messageDiv.id = messageId;
|
| 1582 |
+
messageDiv.className = `message ${type}`;
|
| 1583 |
+
|
| 1584 |
+
if (type === 'loading') {
|
| 1585 |
+
messageDiv.innerHTML = '<div class="typing-indicator"><span></span><span></span><span></span></div>';
|
| 1586 |
+
} else {
|
| 1587 |
+
messageDiv.textContent = content;
|
| 1588 |
+
}
|
| 1589 |
+
|
| 1590 |
+
container.appendChild(messageDiv);
|
| 1591 |
+
messageDiv.scrollIntoView({ behavior: 'smooth', block: 'end' });
|
| 1592 |
+
|
| 1593 |
+
return messageId;
|
| 1594 |
+
}
|
| 1595 |
+
|
| 1596 |
+
function updateMessage(messageId, content) {
    const messageDiv = document.getElementById(messageId);
    if (messageDiv) {
        messageDiv.textContent = content;
        messageDiv.scrollIntoView({ behavior: 'smooth', block: 'end' });
    }
}

function removeMessage(messageId) {
    const messageDiv = document.getElementById(messageId);
    if (messageDiv) {
        messageDiv.remove();
    }
}

// Preload news data and trigger RAG scraping when a stock is selected
async function preloadNewsForRAG(ticker) {
    const cacheKey = `news_${ticker}`;

    // Skip if already cached
    if (cache.has(cacheKey)) {
        // Trigger scraping with cached data
        scrapeAndEmbedArticles();
        return;
    }

    try {
        const response = await fetch(`${API_BASE}/ticker/${ticker}/news?limit=10`);
        const data = await response.json();

        // Store in cache
        cache.set(cacheKey, data, CACHE_TTL.SHORT);

        // Trigger article scraping in background
        scrapeAndEmbedArticles();
    } catch (error) {
        console.error('Error preloading news for RAG:', error);
    }
}

// Background job to scrape and embed articles when the News tab is loaded
async function scrapeAndEmbedArticles() {
    if (!currentTicker) return;

    const newsCache = cache.get(`news_${currentTicker}`);
    if (!newsCache || !newsCache.results) return;

    try {
        // Call scrape endpoint in background (don't await)
        fetch(`${API_BASE}/chat/scrape-articles`, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({
                ticker: currentTicker,
                articles: newsCache.results
            })
        }).then(response => response.json())
            .then(result => {
                console.log('Article scraping complete:', result);
            })
            .catch(error => {
                console.error('Article scraping error:', error);
            });
    } catch (error) {
        console.error('Failed to initiate article scraping:', error);
    }
}

// Setup chat event listeners
function setupChatListeners() {
    // Mobile chat toggle button
    const mobileToggle = document.getElementById('mobileChatToggle');
    if (mobileToggle) {
        mobileToggle.addEventListener('click', toggleMobileChat);
    }

    // Mobile chat close button
    const mobileClose = document.getElementById('closeMobileChat');
    if (mobileClose) {
        mobileClose.addEventListener('click', toggleMobileChat);
    }

    // Send button and input
    document.getElementById('sendChatBtn').addEventListener('click', sendChatMessage);

    document.getElementById('chatInput').addEventListener('keydown', (e) => {
        if (e.key === 'Enter' && !e.shiftKey) {
            e.preventDefault();
            sendChatMessage();
        }
    });

    // Setup sentiment listeners
    setupSentimentListeners();

    // Setup forecast listeners
    setupForecastListeners();
}

// Toggle chat for mobile (full screen overlay)
function toggleMobileChat() {
    const chatPanel = document.getElementById('chatPanel');
    // classList.toggle returns true when the class was added (panel opened)
    const nowOpen = chatPanel.classList.toggle('open');
    if (nowOpen) {
        updateChatContext();
    }
}

// ============================================
// SENTIMENT ANALYSIS FUNCTIONALITY
// ============================================

let sentimentState = {
    currentFilter: 'all',
    posts: [],
    isLoading: false
};

function setupSentimentListeners() {
    // Filter buttons
    const filterBtns = document.querySelectorAll('.posts-filter .filter-btn');
    filterBtns.forEach(btn => {
        btn.addEventListener('click', () => {
            filterBtns.forEach(b => b.classList.remove('active'));
            btn.classList.add('active');
            sentimentState.currentFilter = btn.dataset.filter;
            renderSentimentPosts(sentimentState.posts);
        });
    });

    // Refresh button
    const refreshBtn = document.getElementById('sentimentRefreshBtn');
    if (refreshBtn) {
        refreshBtn.addEventListener('click', () => {
            if (currentTicker && !sentimentState.isLoading) {
                loadSentiment(currentTicker, true);
            }
        });
    }
}

async function loadSentiment(ticker, forceRefresh = false) {
    const cacheKey = `sentiment_${ticker}`;
    const container = document.getElementById('sentimentPostsContainer');
    const refreshBtn = document.getElementById('sentimentRefreshBtn');

    // Show loading state
    container.innerHTML = '<p class="loading-text">Analyzing social media sentiment...</p>';

    sentimentState.isLoading = true;
    if (refreshBtn) {
        refreshBtn.disabled = true;
        refreshBtn.classList.add('loading');
    }

    // Check cache first (skip if force refresh)
    if (!forceRefresh && cache.has(cacheKey)) {
        const data = cache.get(cacheKey);
        renderSentiment(data);
        sentimentState.isLoading = false;
        if (refreshBtn) {
            refreshBtn.disabled = false;
            refreshBtn.classList.remove('loading');
        }
        return;
    }

    try {
        const response = await fetch(`${API_BASE}/sentiment/analyze`, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({ ticker: ticker, force_refresh: forceRefresh })
        });

        if (!response.ok) {
            throw new Error(`HTTP error! status: ${response.status}`);
        }

        const data = await response.json();

        // Cache with short TTL (15 minutes)
        cache.set(cacheKey, data, CACHE_TTL.SHORT);

        renderSentiment(data);

    } catch (error) {
        console.error('Error loading sentiment:', error);
        container.innerHTML = '<p class="error-text">Error loading sentiment data. Please try again.</p>';
        resetSentimentUI();
    } finally {
        sentimentState.isLoading = false;
        if (refreshBtn) {
            refreshBtn.disabled = false;
            refreshBtn.classList.remove('loading');
        }
    }
}

function renderSentiment(data) {
    const aggregate = data.aggregate;

    // Update gauge
    updateSentimentGauge(aggregate.score);

    // Update sentiment label
    const labelEl = document.getElementById('sentimentLabel');
    labelEl.textContent = aggregate.label.toUpperCase();
    labelEl.className = 'sentiment-label ' + aggregate.label;

    // Update stats
    document.getElementById('sentimentPostCount').textContent = aggregate.post_count;
    document.getElementById('sentimentLastUpdated').textContent = new Date().toLocaleTimeString();

    // Update source breakdown
    const sources = aggregate.sources || {};
    updateSourceItem('stocktwitsSource', sources.stocktwits || 0, aggregate.post_count);
    updateSourceItem('redditSource', sources.reddit || 0, aggregate.post_count);
    updateSourceItem('twitterSource', sources.twitter || 0, aggregate.post_count);

    // Store posts and render
    sentimentState.posts = data.posts || [];
    renderSentimentPosts(sentimentState.posts);
}

function updateSentimentGauge(score) {
    // Score ranges from -1 (bearish) to +1 (bullish)
    // Map to rotation: -90deg (bearish) to +90deg (bullish)
    const rotation = score * 90;

    const needle = document.getElementById('gaugeNeedle');
    if (needle) {
        needle.style.transform = `rotate(${rotation}deg)`;
    }

    // Update gauge fill color based on sentiment
    const fill = document.getElementById('gaugeFill');
    if (fill) {
        if (score > 0.2) {
            fill.className = 'gauge-fill bullish';
        } else if (score < -0.2) {
            fill.className = 'gauge-fill bearish';
        } else {
            fill.className = 'gauge-fill neutral';
        }
    }
}

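The score-to-needle mapping above is a linear scale. A standalone sketch (`gaugeRotation` is a hypothetical helper, with clamping added here as a defensive assumption for out-of-range scores):

```javascript
// Map a sentiment score in [-1, 1] onto a needle angle in [-90°, +90°].
function gaugeRotation(score) {
    const clamped = Math.max(-1, Math.min(1, score));
    return clamped * 90;
}

// gaugeRotation(0.5) → 45; gaugeRotation(-1) → -90
```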
function updateSourceItem(elementId, count, total) {
    const element = document.getElementById(elementId);
    if (!element) return;

    const countEl = element.querySelector('.source-count');
    if (countEl) {
        countEl.textContent = count;
    }
}

function renderSentimentPosts(posts) {
    const container = document.getElementById('sentimentPostsContainer');

    if (!posts || posts.length === 0) {
        container.innerHTML = '<p class="no-data-text">No sentiment data available for this stock.</p>';
        return;
    }

    // Apply filter
    let filteredPosts = posts;
    if (sentimentState.currentFilter !== 'all') {
        filteredPosts = posts.filter(post => {
            const label = post.sentiment?.label || 'neutral';
            return label === sentimentState.currentFilter;
        });
    }

    if (filteredPosts.length === 0) {
        container.innerHTML = `<p class="no-data-text">No ${sentimentState.currentFilter} posts found.</p>`;
        return;
    }

    let html = '';
    filteredPosts.forEach(post => {
        const sentimentLabel = post.sentiment?.label || 'neutral';
        const sentimentScore = post.sentiment?.score || 0;
        const scorePercent = (sentimentScore * 100).toFixed(0);
        const timestamp = post.timestamp ? formatRelativeTime(post.timestamp) : '';
        const platform = post.platform || 'unknown';
        const engagement = post.engagement || {};

        html += `
            <div class="sentiment-post ${sentimentLabel}">
                <div class="post-header">
                    <span class="post-platform">${getPlatformIcon(platform)} ${platform}</span>
                    <span class="post-sentiment ${sentimentLabel}">
                        ${sentimentLabel} (${scorePercent}%)
                    </span>
                </div>
                <p class="post-content">${escapeHtml(post.content || '')}</p>
                <div class="post-meta">
                    <span class="post-author">@${escapeHtml(post.author || 'unknown')}</span>
                    <span class="post-time">${timestamp}</span>
                    <span class="post-engagement">
                        ${engagement.likes || 0} likes ${engagement.comments ? `· ${engagement.comments} comments` : ''}
                    </span>
                </div>
                ${post.url ? `<a href="${post.url}" target="_blank" class="post-link">View original</a>` : ''}
            </div>
        `;
    });

    container.innerHTML = html;
}

function getPlatformIcon(platform) {
|
| 1913 |
+
const icons = {
|
| 1914 |
+
'stocktwits': '📈',
|
| 1915 |
+
'reddit': '👽',
|
| 1916 |
+
'twitter': '🐦'
|
| 1917 |
+
};
|
| 1918 |
+
return icons[platform] || '💬';
|
| 1919 |
+
}
|
| 1920 |
+
|
| 1921 |
+
function formatRelativeTime(timestamp) {
    try {
        const date = new Date(timestamp);
        const now = new Date();
        const diffMs = now - date;
        const diffMins = Math.floor(diffMs / 60000);
        const diffHours = Math.floor(diffMs / 3600000);
        const diffDays = Math.floor(diffMs / 86400000);

        if (diffMins < 1) return 'just now';
        if (diffMins < 60) return `${diffMins}m ago`;
        if (diffHours < 24) return `${diffHours}h ago`;
        if (diffDays < 7) return `${diffDays}d ago`;
        return date.toLocaleDateString();
    } catch {
        return '';
    }
}

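formatRelativeTime reads the wall clock, which makes its bucketing hard to verify. A deterministic sketch of the same logic (the name `relativeTime` and the explicit `now` parameter are our additions, not the app's) shows the minute → hour → day → date fallthrough:

```javascript
// Deterministic sketch of formatRelativeTime's bucketing; relativeTime
// and the explicit `now` argument are hypothetical, for illustration.
function relativeTime(timestamp, now) {
    const date = new Date(timestamp);
    const diffMs = now - date;
    const diffMins = Math.floor(diffMs / 60000);
    const diffHours = Math.floor(diffMs / 3600000);
    const diffDays = Math.floor(diffMs / 86400000);

    if (diffMins < 1) return 'just now';
    if (diffMins < 60) return `${diffMins}m ago`;
    if (diffHours < 24) return `${diffHours}h ago`;
    if (diffDays < 7) return `${diffDays}d ago`;
    return date.toLocaleDateString();
}
```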
function escapeHtml(text) {
    const div = document.createElement('div');
    div.textContent = text;
    return div.innerHTML;
}

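escapeHtml leans on the browser's textContent → innerHTML round trip, which is unavailable outside the DOM. A plain-string sketch with the equivalent effect for text nodes (the name `escapeHtmlString` is hypothetical):

```javascript
// DOM-free sketch of escapeHtml (escapeHtmlString is a hypothetical
// name). Like the textContent/innerHTML round trip, it escapes &, <,
// and >; quotes are left alone, which is fine for text content but
// would not be safe inside attribute values.
function escapeHtmlString(text) {
    return String(text)
        .replace(/&/g, '&amp;')
        .replace(/</g, '&lt;')
        .replace(/>/g, '&gt;');
}
```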
function resetSentimentUI() {
    document.getElementById('sentimentLabel').textContent = '--';
    document.getElementById('sentimentLabel').className = 'sentiment-label';
    document.getElementById('sentimentConfidence').textContent = '--';
    document.getElementById('sentimentPostCount').textContent = '--';
    document.getElementById('sentimentLastUpdated').textContent = '--';

    updateSourceItem('stocktwitsSource', 0, 0);
    updateSourceItem('redditSource', 0, 0);
    updateSourceItem('twitterSource', 0, 0);

    const needle = document.getElementById('gaugeNeedle');
    if (needle) {
        needle.style.transform = 'rotate(0deg)';
    }
}

// ============================================
// FORECAST FUNCTIONALITY
// ============================================

let forecastState = {
    data: null,
    isLoading: false,
    modelStatus: null
};

let forecastChartState = {
    data: null,
    canvas: null,
    ctx: null,
    padding: { top: 20, right: 20, bottom: 40, left: 65 },
    hoveredIndex: -1,
    totalPoints: 0,
    historicalLength: 0
};

function setupForecastListeners() {
    const refreshBtn = document.getElementById('forecastRefreshBtn');
    if (refreshBtn) {
        refreshBtn.addEventListener('click', () => {
            if (currentTicker && !forecastState.isLoading) {
                loadForecast(currentTicker, true);
            }
        });
    }
}

async function getHistoricalDataForForecast(ticker) {
    // Try to get 2-year data from cache first (used for training)
    const cacheKey2Y = `chart_${ticker}_2Y`;
    if (cache.has(cacheKey2Y)) {
        const data = cache.get(cacheKey2Y);
        if (data.results && data.results.length > 0) {
            return data.results;
        }
    }

    // Try 5-year cache as fallback (has more than enough data)
    const cacheKey5Y = `chart_${ticker}_5Y`;
    if (cache.has(cacheKey5Y)) {
        const data = cache.get(cacheKey5Y);
        if (data.results && data.results.length > 0) {
            return data.results;
        }
    }

    // Try 1-year cache (minimum for decent training)
    const cacheKey1Y = `chart_${ticker}_1Y`;
    if (cache.has(cacheKey1Y)) {
        const data = cache.get(cacheKey1Y);
        if (data.results && data.results.length > 0) {
            return data.results;
        }
    }

    // No cached data available, fetch 2 years of data
    const to = new Date();
    const from = new Date();
    from.setFullYear(from.getFullYear() - 2);

    const fromStr = from.toISOString().split('T')[0];
    const toStr = to.toISOString().split('T')[0];

    try {
        const response = await fetch(
            `${API_BASE}/ticker/${ticker}/aggregates?from=${fromStr}&to=${toStr}&timespan=day`
        );
        const data = await response.json();

        // Cache it for future use
        if (data.results) {
            cache.set(cacheKey2Y, data, CACHE_TTL.DAILY);
            return data.results;
        }
    } catch (error) {
        console.error('Error fetching historical data for forecast:', error);
    }

    return null;
}

async function loadForecast(ticker, forceRefresh = false) {
    const cacheKey = `forecast_${ticker}`;
    const container = document.getElementById('forecastTableContainer');
    const refreshBtn = document.getElementById('forecastRefreshBtn');

    // Show loading state
    container.innerHTML = '<p class="loading-text">Loading forecast...</p>';

    forecastState.isLoading = true;
    if (refreshBtn) {
        refreshBtn.disabled = true;
        refreshBtn.classList.add('loading');
    }

    // Update status to show loading
    document.getElementById('forecastModelStatus').textContent = 'Loading...';

    // Check cache first (skip if force refresh)
    if (!forceRefresh && cache.has(cacheKey)) {
        const data = cache.get(cacheKey);
        renderForecast(data);
        forecastState.isLoading = false;
        if (refreshBtn) {
            refreshBtn.disabled = false;
            refreshBtn.classList.remove('loading');
        }
        return;
    }

    try {
        // Get historical data from cache or fetch it (reuse existing chart data)
        const historicalData = await getHistoricalDataForForecast(ticker);

        const response = await fetch(`${API_BASE}/forecast/predict/${ticker}`, {
            method: 'POST',
            headers: { 'Content-Type': 'application/json' },
            body: JSON.stringify({
                historical_data: historicalData,
                force_retrain: forceRefresh
            })
        });

        if (!response.ok) {
            const error = await response.json();
            throw new Error(error.error || 'Forecast failed');
        }

        const data = await response.json();
        cache.set(cacheKey, data, CACHE_TTL.MODERATE);
        renderForecast(data);

    } catch (error) {
        console.error('Error loading forecast:', error);
        container.innerHTML = `<p class="error-text">${error.message || 'Error loading forecast'}</p>`;
        resetForecastUI();
    } finally {
        forecastState.isLoading = false;
        if (refreshBtn) {
            refreshBtn.disabled = false;
            refreshBtn.classList.remove('loading');
        }
    }
}

function renderForecast(data) {
    forecastState.data = data;

    // Clear loading text
    document.getElementById('forecastTableContainer').innerHTML = '';

    // Update status
    if (data.model_info) {
        document.getElementById('forecastModelStatus').textContent = 'Trained';
        const trainedAt = data.model_info.trained_at;
        if (trainedAt) {
            const date = new Date(trainedAt);
            document.getElementById('forecastLastUpdated').textContent = date.toLocaleDateString();
        }
    } else {
        document.getElementById('forecastModelStatus').textContent = 'Not trained';
    }

    // Draw forecast chart
    drawForecastChart(data);
}

function drawForecastChart(data, skipSetup = false) {
    const canvas = document.getElementById('forecastChart');
    if (!canvas) return;

    const ctx = canvas.getContext('2d');
    const colors = getChartColors();

    // High DPI support
    const dpr = window.devicePixelRatio || 1;
    const rect = canvas.getBoundingClientRect();

    if (!skipSetup) {
        canvas.width = rect.width * dpr;
        canvas.height = rect.height * dpr;
        ctx.scale(dpr, dpr);
        canvas.style.width = rect.width + 'px';
        canvas.style.height = rect.height + 'px';
    }

    const padding = { top: 20, right: 20, bottom: 40, left: 65 };
    const width = rect.width;
    const height = rect.height;
    const chartWidth = width - padding.left - padding.right;
    const chartHeight = height - padding.top - padding.bottom;

    // Combine historical and forecast data
    const historical = data.historical || [];
    const forecast = data.forecast || [];
    const confidenceBounds = data.confidence_bounds || {};

    if (historical.length === 0 && forecast.length === 0) {
        ctx.fillStyle = colors.text;
        ctx.font = '14px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
        ctx.textAlign = 'center';
        ctx.fillText('No data available', width / 2, height / 2);
        return;
    }

    // Store state for hover interactions
    forecastChartState.data = data;
    forecastChartState.canvas = canvas;
    forecastChartState.ctx = ctx;
    forecastChartState.totalPoints = historical.length + forecast.length;
    forecastChartState.historicalLength = historical.length;

    // Calculate price range
    const historicalPrices = historical.map(d => d.close);
    const forecastPrices = forecast.map(d => d.predicted_close);
    const upperBound = confidenceBounds.upper || [];
    const lowerBound = confidenceBounds.lower || [];

    const allPrices = [...historicalPrices, ...forecastPrices, ...upperBound, ...lowerBound];
    const minPrice = Math.min(...allPrices);
    const maxPrice = Math.max(...allPrices);
    const pricePadding = (maxPrice - minPrice) * 0.1;
    const adjustedMin = minPrice - pricePadding;
    const adjustedMax = maxPrice + pricePadding;
    const priceRange = adjustedMax - adjustedMin;

    const totalPoints = historical.length + forecast.length;

    ctx.clearRect(0, 0, width, height);

    // Draw grid lines
    const numGridLines = 5;
    ctx.strokeStyle = colors.grid;
    ctx.lineWidth = 1;
    ctx.fillStyle = colors.text;
    ctx.font = '11px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
    ctx.textAlign = 'right';
    ctx.textBaseline = 'middle';

    for (let i = 0; i <= numGridLines; i++) {
        const y = padding.top + (i / numGridLines) * chartHeight;
        const price = adjustedMax - (i / numGridLines) * priceRange;

        ctx.beginPath();
        ctx.setLineDash([4, 4]);
        ctx.moveTo(padding.left, y);
        ctx.lineTo(width - padding.right, y);
        ctx.stroke();
        ctx.setLineDash([]);

        ctx.fillText(`$${price.toFixed(2)}`, padding.left - 8, y);
    }

    // Draw vertical divider line between historical and forecast
    if (historical.length > 0 && forecast.length > 0) {
        const dividerX = padding.left + (historical.length / totalPoints) * chartWidth;
        ctx.strokeStyle = colors.crosshair;
        ctx.lineWidth = 1;
        ctx.setLineDash([8, 4]);
        ctx.beginPath();
        ctx.moveTo(dividerX, padding.top);
        ctx.lineTo(dividerX, height - padding.bottom);
        ctx.stroke();
        ctx.setLineDash([]);
    }

    // Draw confidence band (shaded area)
    if (upperBound.length > 0 && lowerBound.length > 0) {
        ctx.beginPath();
        ctx.fillStyle = 'rgba(16, 185, 129, 0.15)';

        // Start from first forecast point
        const startIdx = historical.length;
        for (let i = 0; i < forecast.length; i++) {
            const x = padding.left + ((startIdx + i) / (totalPoints - 1)) * chartWidth;
            const y = padding.top + ((adjustedMax - upperBound[i]) / priceRange) * chartHeight;
            if (i === 0) ctx.moveTo(x, y);
            else ctx.lineTo(x, y);
        }

        // Go back along lower bound
        for (let i = forecast.length - 1; i >= 0; i--) {
            const x = padding.left + ((startIdx + i) / (totalPoints - 1)) * chartWidth;
            const y = padding.top + ((adjustedMax - lowerBound[i]) / priceRange) * chartHeight;
            ctx.lineTo(x, y);
        }

        ctx.closePath();
        ctx.fill();
    }

    // Draw historical line
    if (historical.length > 1) {
        ctx.beginPath();
        ctx.strokeStyle = colors.line;
        ctx.lineWidth = 2;
        ctx.lineJoin = 'round';
        ctx.lineCap = 'round';

        historical.forEach((d, i) => {
            const x = padding.left + (i / (totalPoints - 1)) * chartWidth;
            const y = padding.top + ((adjustedMax - d.close) / priceRange) * chartHeight;
            if (i === 0) ctx.moveTo(x, y);
            else ctx.lineTo(x, y);
        });
        ctx.stroke();
    }

    // Draw forecast line (dashed)
    if (forecast.length > 0) {
        ctx.beginPath();
        ctx.strokeStyle = colors.positive;
        ctx.lineWidth = 2;
        ctx.setLineDash([6, 4]);
        ctx.lineJoin = 'round';
        ctx.lineCap = 'round';

        // Connect from last historical point
        if (historical.length > 0) {
            const lastHistorical = historical[historical.length - 1];
            const x = padding.left + ((historical.length - 1) / (totalPoints - 1)) * chartWidth;
            const y = padding.top + ((adjustedMax - lastHistorical.close) / priceRange) * chartHeight;
            ctx.moveTo(x, y);
        }

        forecast.forEach((d, i) => {
            const x = padding.left + ((historical.length + i) / (totalPoints - 1)) * chartWidth;
            const y = padding.top + ((adjustedMax - d.predicted_close) / priceRange) * chartHeight;
            if (historical.length === 0 && i === 0) ctx.moveTo(x, y);
            else ctx.lineTo(x, y);
        });
        ctx.stroke();
        ctx.setLineDash([]);
    }

    // Draw X-axis labels
    ctx.fillStyle = colors.text;
    ctx.font = '10px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
    ctx.textAlign = 'center';
    ctx.textBaseline = 'top';

    // Show a few date labels
    const allDates = [...historical.map(d => d.date), ...forecast.map(d => d.date)];
    const labelIndices = [0, Math.floor(historical.length / 2), historical.length - 1, historical.length + Math.floor(forecast.length / 2), totalPoints - 1];

    labelIndices.forEach(idx => {
        if (idx >= 0 && idx < allDates.length) {
            const x = padding.left + (idx / (totalPoints - 1)) * chartWidth;
            const date = new Date(allDates[idx]);
            const label = date.toLocaleDateString('en-US', { month: 'short', day: 'numeric' });
            ctx.fillText(label, x, height - padding.bottom + 8);
        }
    });

    // Draw crosshair and tooltip if hovering
    if (forecastChartState.hoveredIndex >= 0 && forecastChartState.hoveredIndex < totalPoints) {
        drawForecastCrosshair(forecastChartState.hoveredIndex, data, adjustedMin, adjustedMax, priceRange, chartWidth, chartHeight, width, height, padding, colors);
    }

    // Set up mouse events (only on initial draw)
    if (!skipSetup) {
        canvas.onmousemove = handleForecastChartMouseMove;
        canvas.onmouseleave = handleForecastChartMouseLeave;
    }
}

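The drawing code repeats one coordinate transform throughout: point index `i` spreads linearly across the chart width, and price maps top-down because canvas y grows downward. A standalone sketch (the helper name `toCanvasPoint` is hypothetical):

```javascript
// Sketch of the index/price -> pixel mapping used throughout
// drawForecastChart (toCanvasPoint is a hypothetical helper name).
// x interpolates the index across the chart width; y inverts the
// price axis because canvas coordinates grow downward.
function toCanvasPoint(i, price, totalPoints, adjustedMax, priceRange,
                       padding, chartWidth, chartHeight) {
    const x = padding.left + (i / (totalPoints - 1)) * chartWidth;
    const y = padding.top + ((adjustedMax - price) / priceRange) * chartHeight;
    return { x, y };
}
```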
function drawForecastCrosshair(index, data, adjustedMin, adjustedMax, priceRange, chartWidth, chartHeight, width, height, padding, colors) {
    const ctx = forecastChartState.ctx;
    const historical = data.historical || [];
    const forecast = data.forecast || [];
    const totalPoints = historical.length + forecast.length;

    // Determine if we're in historical or forecast region
    const isHistorical = index < historical.length;
    let point, price, date;

    if (isHistorical) {
        point = historical[index];
        price = point.close;
        date = point.date;
    } else {
        const forecastIdx = index - historical.length;
        point = forecast[forecastIdx];
        price = point.predicted_close;
        date = point.date;
    }

    const x = padding.left + (index / (totalPoints - 1)) * chartWidth;
    const y = padding.top + ((adjustedMax - price) / priceRange) * chartHeight;

    // Crosshair lines
    ctx.strokeStyle = colors.crosshair;
    ctx.lineWidth = 1;
    ctx.setLineDash([4, 4]);

    // Vertical line
    ctx.beginPath();
    ctx.moveTo(x, padding.top);
    ctx.lineTo(x, height - padding.bottom);
    ctx.stroke();

    // Horizontal line
    ctx.beginPath();
    ctx.moveTo(padding.left, y);
    ctx.lineTo(width - padding.right, y);
    ctx.stroke();
    ctx.setLineDash([]);

    // Point dot
    ctx.beginPath();
    ctx.fillStyle = isHistorical ? colors.line : colors.positive;
    ctx.arc(x, y, 5, 0, Math.PI * 2);
    ctx.fill();
    ctx.strokeStyle = colors.tooltipBg;
    ctx.lineWidth = 2;
    ctx.stroke();

    // Tooltip
    drawForecastTooltip(x, y, point, isHistorical, width, height, padding, colors);
}

function drawForecastTooltip(x, y, point, isHistorical, width, height, padding, colors) {
    const ctx = forecastChartState.ctx;

    const date = new Date(point.date);
    const dateStr = date.toLocaleDateString('en-US', { month: 'short', day: 'numeric', year: 'numeric' });

    const price = isHistorical ? point.close : point.predicted_close;
    const tooltipWidth = isHistorical ? 140 : 155;
    const tooltipHeight = isHistorical ? 52 : 72;
    let tooltipX = x + 12;
    let tooltipY = y - tooltipHeight / 2;

    // Keep tooltip in bounds
    if (tooltipX + tooltipWidth > width - padding.right) {
        tooltipX = x - tooltipWidth - 12;
    }
    if (tooltipY < padding.top) {
        tooltipY = padding.top;
    }
    if (tooltipY + tooltipHeight > height - padding.bottom) {
        tooltipY = height - padding.bottom - tooltipHeight;
    }

    // Tooltip background
    ctx.fillStyle = colors.tooltipBg;
    ctx.strokeStyle = colors.tooltipBorder;
    ctx.lineWidth = 1;
    ctx.beginPath();
    ctx.roundRect(tooltipX, tooltipY, tooltipWidth, tooltipHeight, 6);
    ctx.fill();
    ctx.stroke();

    // Tooltip content
    ctx.fillStyle = colors.text;
    ctx.font = '10px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
    ctx.textAlign = 'left';
    ctx.textBaseline = 'top';
    ctx.fillText(dateStr, tooltipX + 10, tooltipY + 8);

    ctx.fillStyle = colors.textStrong;
    ctx.font = 'bold 16px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
    ctx.fillText(`$${price.toFixed(2)}`, tooltipX + 10, tooltipY + 24);

    if (!isHistorical) {
        // Show confidence range for predictions
        ctx.fillStyle = colors.text;
        ctx.font = '10px -apple-system, BlinkMacSystemFont, "Segoe UI", sans-serif';
        const rangeStr = `Range: $${point.lower_bound.toFixed(2)} - $${point.upper_bound.toFixed(2)}`;
        ctx.fillText(rangeStr, tooltipX + 10, tooltipY + 48);
    }
}

function handleForecastChartMouseMove(e) {
    const { data, canvas, padding, totalPoints } = forecastChartState;
    if (!data) return;

    const rect = canvas.getBoundingClientRect();
    const dpr = window.devicePixelRatio || 1;
    const width = canvas.width / dpr;
    const chartWidth = width - padding.left - padding.right;
    const mouseX = e.clientX - rect.left;

    const relativeX = mouseX - padding.left;
    const index = Math.round((relativeX / chartWidth) * (totalPoints - 1));

    if (index >= 0 && index < totalPoints && index !== forecastChartState.hoveredIndex) {
        forecastChartState.hoveredIndex = index;
        drawForecastChart(data, true);
    }
}

function handleForecastChartMouseLeave() {
    forecastChartState.hoveredIndex = -1;
    if (forecastChartState.data) {
        drawForecastChart(forecastChartState.data, true);
    }
}

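handleForecastChartMouseMove inverts the chart layout: a mouse x offset is mapped back to the nearest point index with Math.round, so the crosshair snaps to data points. A pure sketch (the name `indexForMouseX` is made up for illustration):

```javascript
// Sketch of the inverse mapping in handleForecastChartMouseMove
// (indexForMouseX is a hypothetical name). Rounding snaps the cursor
// to the nearest data point; callers still bounds-check the result.
function indexForMouseX(mouseX, paddingLeft, chartWidth, totalPoints) {
    const relativeX = mouseX - paddingLeft;
    return Math.round((relativeX / chartWidth) * (totalPoints - 1));
}
```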
function resetForecastUI() {
    document.getElementById('forecastModelStatus').textContent = 'Not trained';
    document.getElementById('forecastLastUpdated').textContent = '--';
    forecastState.data = null;

    const canvas = document.getElementById('forecastChart');
    if (canvas) {
        const ctx = canvas.getContext('2d');
        ctx.clearRect(0, 0, canvas.width, canvas.height);
    }
}
fe/index.html
ADDED
@@ -0,0 +1,304 @@
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>MarketLens</title>
    <link rel="stylesheet" href="styles.css">
</head>
<body>
    <header class="global-header">
        <div class="header-content">
            <h1>MarketLens</h1>
            <div class="header-actions">
                <div id="marketStatus" class="market-status">
                    <span class="status-label">Market:</span>
                    <span id="marketStatusText" class="status-text">Loading...</span>
                </div>
                <button id="themeToggle" class="theme-toggle" aria-label="Toggle dark mode">
                    <svg class="icon-moon" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke="currentColor">
                        <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M20.354 15.354A9 9 0 018.646 3.646 9.003 9.003 0 0012 21a9.003 9.003 0 008.354-5.646z" />
                    </svg>
                    <svg class="icon-sun" xmlns="http://www.w3.org/2000/svg" fill="none" viewBox="0 0 24 24" stroke="currentColor">
                        <path stroke-linecap="round" stroke-linejoin="round" stroke-width="2" d="M12 3v1m0 16v1m9-9h-1M4 12H3m15.364 6.364l-.707-.707M6.343 6.343l-.707-.707m12.728 0l-.707.707M6.343 17.657l-.707.707M16 12a4 4 0 11-8 0 4 4 0 018 0z" />
                    </svg>
                </button>
            </div>
        </div>
    </header>

    <div class="app-layout">
        <main class="main-content">
            <div class="ticker-selector">
                <label for="tickerSearch">Select a Stock:</label>
                <div class="dropdown-container">
                    <input type="text"
                           id="tickerSearch"
                           placeholder="Search ticker or company name..."
                           autocomplete="off">
                    <div id="dropdownList" class="dropdown-list hidden"></div>
                </div>
            </div>

            <div id="stockData" class="stock-data hidden">
                <div class="stock-header">
                    <h2 id="stockTitle"></h2>
                    <div id="stockPrice" class="stock-price"></div>
                </div>

                <div class="tabs">
                    <button class="tab-button active" data-tab="overview">Overview</button>
                    <button class="tab-button" data-tab="financials">Financials</button>
                    <button class="tab-button" data-tab="dividends">Dividends</button>
                    <button class="tab-button" data-tab="splits">Splits</button>
                    <button class="tab-button" data-tab="news">News</button>
                    <button class="tab-button" data-tab="sentiment">Sentiment</button>
                    <button class="tab-button" data-tab="forecast">Forecast <span class="beta-tag">Beta</span></button>
                </div>

                <div class="tab-content">
                    <div id="overview" class="tab-pane active">
                        <div class="chart-section">
                            <h3>Price Chart</h3>
                            <div class="chart-controls">
                                <button class="chart-range-btn active" data-range="1M">1M</button>
                                <button class="chart-range-btn" data-range="3M">3M</button>
                                <button class="chart-range-btn" data-range="6M">6M</button>
                                <button class="chart-range-btn" data-range="1Y">1Y</button>
                                <button class="chart-range-btn" data-range="5Y">5Y</button>
                                <div class="chart-controls-spacer"></div>
                                <div class="chart-view-toggle">
                                    <button class="chart-view-btn active" data-view="line">Line</button>
                                    <button class="chart-view-btn" data-view="candle">Candle</button>
                                </div>
                            </div>
                            <div class="chart-canvas-wrapper">
                                <canvas id="priceChart"></canvas>
                                <div id="chartLoading" class="chart-loading hidden">
                                    <div class="spinner"></div>
                                </div>
                            </div>
                        </div>
                        <div class="metrics-grid">
                            <div class="metric-card">
                                <div class="metric-label">Open</div>
                                <div class="metric-value" id="openPrice">--</div>
                            </div>
                            <div class="metric-card">
                                <div class="metric-label">High</div>
                                <div class="metric-value" id="highPrice">--</div>
                            </div>
                            <div class="metric-card">
                                <div class="metric-label">Low</div>
                                <div class="metric-value" id="lowPrice">--</div>
                            </div>
                            <div class="metric-card">
                                <div class="metric-label">Volume</div>
                                <div class="metric-value" id="volume">--</div>
                            </div>
                            <div class="metric-card">
                                <div class="metric-label">Market Cap</div>
                                <div class="metric-value" id="marketCap">--</div>
                            </div>
                            <div class="metric-card">
                                <div class="metric-label">P/E Ratio</div>
                                <div class="metric-value" id="peRatio">--</div>
                            </div>
                        </div>
                        <div class="company-description">
                            <h3>Company Description</h3>
                            <p id="companyDesc">--</p>
                        </div>
                    </div>

                    <div id="financials" class="tab-pane">
                        <div id="financialsData">
                            <p>Loading financial data...</p>
                        </div>
                    </div>

                    <div id="dividends" class="tab-pane">
                        <div id="dividendsContainer">
                            <p>Loading dividend data...</p>
                        </div>
                    </div>

                    <div id="splits" class="tab-pane">
                        <div id="splitsContainer">
                            <p>Loading stock split data...</p>
                        </div>
                    </div>

                    <div id="news" class="tab-pane">
                        <div id="newsContainer">
                            <p>Loading news...</p>
                        </div>
                    </div>

                    <div id="sentiment" class="tab-pane">
                        <div class="sentiment-header">
                            <div class="sentiment-aggregate">
                                <div class="sentiment-score-container">
                                    <div class="sentiment-gauge" id="sentimentGauge">
                                        <div class="gauge-background"></div>
                                        <div class="gauge-fill" id="gaugeFill"></div>
                                        <div class="gauge-needle" id="gaugeNeedle"></div>
                                        <div class="gauge-center"></div>
                                    </div>
                                    <div class="sentiment-label" id="sentimentLabel">--</div>
| 149 |
+
</div>
|
| 150 |
+
<div class="sentiment-stats">
|
| 151 |
+
<div class="stat-item">
|
| 152 |
+
<span class="stat-label">Posts Analyzed</span>
|
| 153 |
+
<span class="stat-value" id="sentimentPostCount">--</span>
|
| 154 |
+
</div>
|
| 155 |
+
<div class="stat-item">
|
| 156 |
+
<span class="stat-label">Last Updated</span>
|
| 157 |
+
<span class="stat-value" id="sentimentLastUpdated">--</span>
|
| 158 |
+
</div>
|
| 159 |
+
</div>
|
| 160 |
+
</div>
|
| 161 |
+
<button class="refresh-btn" id="sentimentRefreshBtn" title="Re-scrape and re-analyze">
|
| 162 |
+
<svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><polyline points="23 4 23 10 17 10"/><polyline points="1 20 1 14 7 14"/><path d="M3.51 9a9 9 0 0 1 14.85-3.36L23 10M1 14l4.64 4.36A9 9 0 0 0 20.49 15"/></svg>
|
| 163 |
+
Refresh
|
| 164 |
+
</button>
|
| 165 |
+
</div>
|
| 166 |
+
|
| 167 |
+
<div class="sentiment-sources">
|
| 168 |
+
<h4>Sources</h4>
|
| 169 |
+
<div class="source-breakdown">
|
| 170 |
+
<div class="source-item" id="stocktwitsSource">
|
| 171 |
+
<span class="source-icon">📈</span>
|
| 172 |
+
<span class="source-name">Stocktwits</span>
|
| 173 |
+
<span class="source-count">--</span>
|
| 174 |
+
</div>
|
| 175 |
+
<div class="source-item" id="redditSource">
|
| 176 |
+
<span class="source-icon">👽</span>
|
| 177 |
+
<span class="source-name">Reddit</span>
|
| 178 |
+
<span class="source-count">--</span>
|
| 179 |
+
</div>
|
| 180 |
+
<div class="source-item" id="twitterSource">
|
| 181 |
+
<span class="source-icon">🐦</span>
|
| 182 |
+
<span class="source-name">Twitter/X</span>
|
| 183 |
+
<span class="source-count">--</span>
|
| 184 |
+
</div>
|
| 185 |
+
</div>
|
| 186 |
+
</div>
|
| 187 |
+
|
| 188 |
+
<div class="sentiment-posts-section">
|
| 189 |
+
<div class="posts-header">
|
| 190 |
+
<h4>Recent Posts</h4>
|
| 191 |
+
<div class="posts-filter">
|
| 192 |
+
<button class="filter-btn active" data-filter="all">All</button>
|
| 193 |
+
<button class="filter-btn" data-filter="positive">Bullish</button>
|
| 194 |
+
<button class="filter-btn" data-filter="negative">Bearish</button>
|
| 195 |
+
</div>
|
| 196 |
+
</div>
|
| 197 |
+
<div id="sentimentPostsContainer">
|
| 198 |
+
<p class="loading-text">Select a stock to analyze sentiment...</p>
|
| 199 |
+
</div>
|
| 200 |
+
</div>
|
| 201 |
+
</div>
|
| 202 |
+
|
| 203 |
+
<div id="forecast" class="tab-pane">
|
| 204 |
+
<div class="forecast-header">
|
| 205 |
+
<div class="forecast-info">
|
| 206 |
+
<h4>Price Forecast</h4>
|
| 207 |
+
<p class="forecast-description">
|
| 208 |
+
LSTM neural network prediction based on historical price patterns
|
| 209 |
+
</p>
|
| 210 |
+
</div>
|
| 211 |
+
<div class="forecast-controls">
|
| 212 |
+
<div class="forecast-status">
|
| 213 |
+
<div class="status-item">
|
| 214 |
+
<span class="status-label">Model Status</span>
|
| 215 |
+
<span class="status-value" id="forecastModelStatus">Not trained</span>
|
| 216 |
+
</div>
|
| 217 |
+
<div class="status-item">
|
| 218 |
+
<span class="status-label">Last Updated</span>
|
| 219 |
+
<span class="status-value" id="forecastLastUpdated">--</span>
|
| 220 |
+
</div>
|
| 221 |
+
</div>
|
| 222 |
+
<div class="forecast-actions">
|
| 223 |
+
<button class="refresh-btn" id="forecastRefreshBtn" title="Re-train model and re-predict">
|
| 224 |
+
<svg width="14" height="14" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round"><polyline points="23 4 23 10 17 10"/><polyline points="1 20 1 14 7 14"/><path d="M3.51 9a9 9 0 0 1 14.85-3.36L23 10M1 14l4.64 4.36A9 9 0 0 0 20.49 15"/></svg>
|
| 225 |
+
Refresh
|
| 226 |
+
</button>
|
| 227 |
+
</div>
|
| 228 |
+
</div>
|
| 229 |
+
</div>
|
| 230 |
+
|
| 231 |
+
<div id="forecastTableContainer"></div>
|
| 232 |
+
|
| 233 |
+
<div class="forecast-chart-section">
|
| 234 |
+
<div class="chart-legend">
|
| 235 |
+
<span class="legend-item historical">
|
| 236 |
+
<span class="legend-color"></span> Historical
|
| 237 |
+
</span>
|
| 238 |
+
<span class="legend-item predicted">
|
| 239 |
+
<span class="legend-color"></span> Predicted
|
| 240 |
+
</span>
|
| 241 |
+
<span class="legend-item confidence">
|
| 242 |
+
<span class="legend-color"></span> Confidence Band
|
| 243 |
+
</span>
|
| 244 |
+
</div>
|
| 245 |
+
<canvas id="forecastChart"></canvas>
|
| 246 |
+
</div>
|
| 247 |
+
|
| 248 |
+
<div class="forecast-disclaimer">
|
| 249 |
+
<p>
|
| 250 |
+
<strong>Disclaimer:</strong> These predictions are generated by a machine learning model
|
| 251 |
+
for educational purposes only. Past performance does not guarantee future results.
|
| 252 |
+
This is not financial advice.
|
| 253 |
+
</p>
|
| 254 |
+
</div>
|
| 255 |
+
</div>
|
| 256 |
+
</div>
|
| 257 |
+
|
| 258 |
+
<div id="loading" class="loading hidden">
|
| 259 |
+
<div class="spinner"></div>
|
| 260 |
+
</div>
|
| 261 |
+
</div>
|
| 262 |
+
</main>
|
| 263 |
+
|
| 264 |
+
<!-- Chat Panel -->
|
| 265 |
+
<aside class="chat-panel" id="chatPanel">
|
| 266 |
+
<div class="chat-panel-content">
|
| 267 |
+
<div class="chat-header">
|
| 268 |
+
<h3>AI Assistant</h3>
|
| 269 |
+
<button id="closeMobileChat" class="close-mobile-chat" aria-label="Close chat">
|
| 270 |
+
<svg xmlns="http://www.w3.org/2000/svg" width="18" height="18" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
|
| 271 |
+
<line x1="18" y1="6" x2="6" y2="18"></line>
|
| 272 |
+
<line x1="6" y1="6" x2="18" y2="18"></line>
|
| 273 |
+
</svg>
|
| 274 |
+
</button>
|
| 275 |
+
</div>
|
| 276 |
+
|
| 277 |
+
<div class="chat-ticker-context">
|
| 278 |
+
<span id="chatCurrentTicker">Select a stock to start</span>
|
| 279 |
+
</div>
|
| 280 |
+
|
| 281 |
+
<div id="chatMessages" class="chat-messages">
|
| 282 |
+
<!-- Messages populated dynamically -->
|
| 283 |
+
</div>
|
| 284 |
+
|
| 285 |
+
<div class="chat-input-container">
|
| 286 |
+
<textarea id="chatInput"
|
| 287 |
+
placeholder="Ask about this stock..."
|
| 288 |
+
rows="2"></textarea>
|
| 289 |
+
<button id="sendChatBtn" class="send-chat-btn">Send</button>
|
| 290 |
+
</div>
|
| 291 |
+
</div>
|
| 292 |
+
</aside>
|
| 293 |
+
</div>
|
| 294 |
+
|
| 295 |
+
<!-- Mobile Chat Toggle (visible on small screens only) -->
|
| 296 |
+
<button id="mobileChatToggle" class="mobile-chat-toggle" aria-label="Toggle chat">
|
| 297 |
+
<svg xmlns="http://www.w3.org/2000/svg" width="20" height="20" viewBox="0 0 24 24" fill="none" stroke="currentColor" stroke-width="2" stroke-linecap="round" stroke-linejoin="round">
|
| 298 |
+
<path d="M21 15a2 2 0 0 1-2 2H7l-4 4V5a2 2 0 0 1 2-2h14a2 2 0 0 1 2 2z"></path>
|
| 299 |
+
</svg>
|
| 300 |
+
</button>
|
| 301 |
+
|
| 302 |
+
<script src="app.js"></script>
|
| 303 |
+
</body>
|
| 304 |
+
</html>
|
fe/styles.css
ADDED
@@ -0,0 +1,1942 @@
/* ============================================
   MarketLens - Design System
   Direction: Data-dense, professional, terminal-inspired
   Base unit: 16px | Depth: Subtle shadows
   ============================================ */

/* CSS Custom Properties */
:root {
    /* Neutrals - Slate palette */
    --color-bg: #f8fafc;
    --color-bg-elevated: #ffffff;
    --color-bg-subtle: #f1f5f9;
    --color-bg-muted: #e2e8f0;

    --color-border: #e2e8f0;
    --color-border-subtle: #f1f5f9;

    --color-text: #0f172a;
    --color-text-secondary: #475569;
    --color-text-muted: #94a3b8;

    /* Accent - Indigo */
    --color-accent: #4f46e5;
    --color-accent-hover: #4338ca;
    --color-accent-subtle: #eef2ff;

    /* Semantic */
    --color-positive: #059669;
    --color-positive-bg: #ecfdf5;
    --color-negative: #dc2626;
    --color-negative-bg: #fef2f2;
    --color-neutral-sentiment: #94a3b8;
    --color-neutral-sentiment-bg: rgba(148, 163, 184, 0.15);

    /* Spacing */
    --space-1: 4px;
    --space-2: 8px;
    --space-3: 12px;
    --space-4: 16px;
    --space-5: 20px;
    --space-6: 24px;
    --space-8: 32px;
    --space-10: 40px;
    --space-12: 48px;

    /* Typography */
    --font-sans: -apple-system, BlinkMacSystemFont, 'Segoe UI', 'Inter', system-ui, sans-serif;
    --font-mono: 'SF Mono', 'Fira Code', 'Consolas', monospace;

    --text-2xs: 0.625rem;
    --text-xs: 0.75rem;
    --text-sm: 0.875rem;
    --text-base: 1rem;
    --text-lg: 1.125rem;
    --text-xl: 1.25rem;
    --text-2xl: 1.5rem;
    --text-3xl: 2rem;

    /* Line heights */
    --line-height-dense: 1.3;
    --line-height-tight: 1.15;

    /* Shadows */
    --shadow-sm: 0 1px 2px rgba(15, 23, 42, 0.04);
    --shadow-md: 0 2px 8px rgba(15, 23, 42, 0.06);
    --shadow-lg: 0 4px 16px rgba(15, 23, 42, 0.08);

    /* Radius */
    --radius-sm: 4px;
    --radius-md: 6px;
    --radius-lg: 8px;
    --radius-xl: 12px;

    /* Layout */
    --header-height: 60px;
    --chat-width: 340px;
}

/* Dark Mode */
[data-theme="dark"] {
    --color-bg: #0f172a;
    --color-bg-elevated: #1e293b;
    --color-bg-subtle: #1e293b;
    --color-bg-muted: #334155;

    --color-border: #334155;
    --color-border-subtle: #1e293b;

    --color-text: #f1f5f9;
    --color-text-secondary: #cbd5e1;
    --color-text-muted: #64748b;

    --color-accent: #818cf8;
    --color-accent-hover: #a5b4fc;
    --color-accent-subtle: rgba(129, 140, 248, 0.15);

    --color-positive: #34d399;
    --color-positive-bg: rgba(52, 211, 153, 0.15);
    --color-negative: #f87171;
    --color-negative-bg: rgba(248, 113, 113, 0.15);

    --shadow-sm: 0 1px 2px rgba(0, 0, 0, 0.2);
    --shadow-md: 0 2px 8px rgba(0, 0, 0, 0.25);
    --shadow-lg: 0 4px 16px rgba(0, 0, 0, 0.3);
}

/* Reset & Base */
* {
    margin: 0;
    padding: 0;
    box-sizing: border-box;
}

html, body {
    height: 100%;
}

body {
    font-family: var(--font-sans);
    background-color: var(--color-bg);
    color: var(--color-text);
    line-height: 1.5;
    font-size: var(--text-sm);
    -webkit-font-smoothing: antialiased;
    -moz-osx-font-smoothing: grayscale;
    overflow: hidden;
}

/* ============================================
   App Layout - Persistent Chat Panel
   ============================================ */

.global-header {
    height: var(--header-height);
    background: var(--color-bg-elevated);
    border-bottom: 1px solid var(--color-border);
    padding: 0 var(--space-5);
    display: flex;
    align-items: center;
    box-shadow: var(--shadow-sm);
    position: relative;
    z-index: 100;
}

.global-header .header-content {
    display: flex;
    justify-content: space-between;
    align-items: center;
    width: 100%;
    max-width: 1600px;
    margin: 0 auto;
}

.global-header h1 {
    font-size: var(--text-lg);
    font-weight: 700;
    color: var(--color-text);
    letter-spacing: -0.03em;
}

.header-actions {
    display: flex;
    align-items: center;
    gap: var(--space-4);
}

.market-status {
    display: flex;
    align-items: center;
    gap: var(--space-2);
    font-size: var(--text-xs);
}

.status-label {
    color: var(--color-text-muted);
    font-weight: 500;
    text-transform: uppercase;
    letter-spacing: 0.05em;
}

.status-text {
    padding: var(--space-1) var(--space-2);
    border-radius: var(--radius-sm);
    font-weight: 600;
    font-size: 10px;
    text-transform: uppercase;
    letter-spacing: 0.05em;
}

.status-open {
    background: var(--color-positive-bg);
    color: var(--color-positive);
}

.status-closed {
    background: var(--color-negative-bg);
    color: var(--color-negative);
}

.theme-toggle {
    background: var(--color-bg-subtle);
    border: 1px solid var(--color-border);
    border-radius: var(--radius-sm);
    padding: var(--space-2);
    cursor: pointer;
    display: flex;
    align-items: center;
    justify-content: center;
    width: 32px;
    height: 32px;
    transition: background 0.15s, border-color 0.15s;
}

.theme-toggle:hover {
    background: var(--color-bg-muted);
}

.theme-toggle:active {
    transform: scale(0.96);
}

.theme-toggle svg {
    width: 16px;
    height: 16px;
    color: var(--color-text-secondary);
}

.theme-toggle .icon-moon {
    display: block;
}

.theme-toggle .icon-sun {
    display: none;
}

[data-theme="dark"] .theme-toggle .icon-moon {
    display: none;
}

[data-theme="dark"] .theme-toggle .icon-sun {
    display: block;
}

/* App Layout Grid */
.app-layout {
    display: grid;
    grid-template-columns: 1fr var(--chat-width);
    height: calc(100vh - var(--header-height));
}

/* Main Content Area */
.main-content {
    overflow-y: auto;
    padding: var(--space-4) var(--space-5);
    background: var(--color-bg);
    scrollbar-width: thin;
    scrollbar-color: var(--color-border) transparent;
}

.main-content::-webkit-scrollbar {
    width: 6px;
}

.main-content::-webkit-scrollbar-track {
    background: transparent;
}

.main-content::-webkit-scrollbar-thumb {
    background: var(--color-border);
    border-radius: 3px;
}

.main-content::-webkit-scrollbar-thumb:hover {
    background: var(--color-text-muted);
}

/* ============================================
   Chat Panel - Persistent Right Side
   ============================================ */

.chat-panel {
    background: var(--color-bg-elevated);
    border-left: 1px solid var(--color-border);
    display: flex;
    flex-direction: column;
    position: relative;
    overflow: hidden;
}

.chat-panel-content {
    display: flex;
    flex-direction: column;
    flex: 1;
    overflow: hidden;
    transition: opacity 0.2s;
}

.chat-header {
    padding: var(--space-3) var(--space-4);
    background: var(--color-bg-elevated);
    border-bottom: 1px solid var(--color-border);
    display: flex;
    justify-content: space-between;
    align-items: center;
}

.chat-header h3 {
    margin: 0;
    font-size: var(--text-sm);
    font-weight: 600;
    color: var(--color-text);
    text-transform: uppercase;
    letter-spacing: 0.05em;
}

.close-mobile-chat {
    display: none;
    background: none;
    border: none;
    color: var(--color-text-muted);
    cursor: pointer;
    padding: var(--space-1);
    border-radius: var(--radius-sm);
    transition: background 0.15s, color 0.15s;
}

.close-mobile-chat:hover {
    background: var(--color-bg-subtle);
    color: var(--color-text);
}

.chat-ticker-context {
    padding: var(--space-2) var(--space-4);
    background: var(--color-bg-subtle);
    border-bottom: 1px solid var(--color-border-subtle);
    font-size: 10px;
    font-weight: 600;
    color: var(--color-text-muted);
    text-transform: uppercase;
    letter-spacing: 0.05em;
}

.chat-messages {
    flex: 1;
    overflow-y: auto;
    padding: var(--space-4);
    display: flex;
    flex-direction: column;
    gap: var(--space-3);
    background: var(--color-bg);
}

.message {
    max-width: 90%;
    padding: var(--space-2) var(--space-3);
    border-radius: var(--radius-md);
    line-height: var(--line-height-dense);
    word-wrap: break-word;
    font-size: var(--text-sm);
    animation: messageIn 0.2s ease-out;
}

@keyframes messageIn {
    from {
        opacity: 0;
        transform: translateY(6px);
    }
    to {
        opacity: 1;
        transform: translateY(0);
    }
}

.message.user {
    align-self: flex-end;
    background: var(--color-accent);
    color: white;
    border-bottom-right-radius: var(--radius-sm);
}

.message.assistant {
    align-self: flex-start;
    background: var(--color-bg-elevated);
    color: var(--color-text);
    border: 1px solid var(--color-border);
    border-bottom-left-radius: var(--radius-sm);
}

.message.error {
    align-self: flex-start;
    background: var(--color-negative-bg);
    color: var(--color-negative);
    border: 1px solid var(--color-negative);
}

.message.loading {
    align-self: flex-start;
    background: var(--color-bg-elevated);
    border: 1px solid var(--color-border);
    padding: var(--space-3);
}

.typing-indicator {
    display: flex;
    gap: 4px;
    align-items: center;
}

.typing-indicator span {
    width: 5px;
    height: 5px;
    background: var(--color-text-muted);
    border-radius: 50%;
    animation: typing 1.2s infinite;
}

.typing-indicator span:nth-child(2) {
    animation-delay: 0.15s;
}

.typing-indicator span:nth-child(3) {
    animation-delay: 0.3s;
}

@keyframes typing {
    0%, 60%, 100% {
        transform: translateY(0);
        opacity: 0.4;
    }
    30% {
        transform: translateY(-4px);
|
| 432 |
+
opacity: 1;
|
| 433 |
+
}
|
| 434 |
+
}
|
| 435 |
+
|
| 436 |
+
/* Agent tool call status indicators */
|
| 437 |
+
.message.tool-status {
|
| 438 |
+
align-self: flex-start;
|
| 439 |
+
background: transparent;
|
| 440 |
+
padding: var(--space-1) var(--space-2);
|
| 441 |
+
max-width: 100%;
|
| 442 |
+
border: none;
|
| 443 |
+
}
|
| 444 |
+
|
| 445 |
+
.tool-status-item {
|
| 446 |
+
display: flex;
|
| 447 |
+
align-items: center;
|
| 448 |
+
gap: 6px;
|
| 449 |
+
padding: 2px 0;
|
| 450 |
+
font-size: var(--text-xs);
|
| 451 |
+
color: var(--color-text-muted);
|
| 452 |
+
}
|
| 453 |
+
|
| 454 |
+
.tool-status-item.complete {
|
| 455 |
+
color: var(--color-positive);
|
| 456 |
+
}
|
| 457 |
+
|
| 458 |
+
.tool-status-item.error {
|
| 459 |
+
color: var(--color-negative);
|
| 460 |
+
}
|
| 461 |
+
|
| 462 |
+
.tool-status-item.calling {
|
| 463 |
+
color: var(--color-accent);
|
| 464 |
+
}
|
| 465 |
+
|
| 466 |
+
.tool-spinner {
|
| 467 |
+
display: inline-block;
|
| 468 |
+
width: 10px;
|
| 469 |
+
height: 10px;
|
| 470 |
+
border: 2px solid var(--color-border);
|
| 471 |
+
border-top-color: var(--color-accent);
|
| 472 |
+
border-radius: 50%;
|
| 473 |
+
animation: tool-spin 0.6s linear infinite;
|
| 474 |
+
}
|
| 475 |
+
|
| 476 |
+
@keyframes tool-spin {
|
| 477 |
+
to { transform: rotate(360deg); }
|
| 478 |
+
}
|
| 479 |
+
|
| 480 |
+
.tool-status-complete {
|
| 481 |
+
opacity: 0.5;
|
| 482 |
+
max-height: 80px;
|
| 483 |
+
overflow: hidden;
|
| 484 |
+
transition: opacity 0.3s ease, max-height 0.3s ease;
|
| 485 |
+
}
|
| 486 |
+
|
| 487 |
+
.chat-input-container {
|
| 488 |
+
padding: var(--space-3);
|
| 489 |
+
border-top: 1px solid var(--color-border);
|
| 490 |
+
background: var(--color-bg-elevated);
|
| 491 |
+
display: flex;
|
| 492 |
+
gap: var(--space-2);
|
| 493 |
+
}
|
| 494 |
+
|
| 495 |
+
#chatInput {
|
| 496 |
+
flex: 1;
|
| 497 |
+
padding: var(--space-2) var(--space-3);
|
| 498 |
+
border: 1px solid var(--color-border);
|
| 499 |
+
border-radius: var(--radius-sm);
|
| 500 |
+
font-family: var(--font-sans);
|
| 501 |
+
font-size: var(--text-sm);
|
| 502 |
+
resize: none;
|
| 503 |
+
transition: border-color 0.15s, box-shadow 0.15s;
|
| 504 |
+
background: var(--color-bg-elevated);
|
| 505 |
+
color: var(--color-text);
|
| 506 |
+
}
|
| 507 |
+
|
| 508 |
+
#chatInput::placeholder {
|
| 509 |
+
color: var(--color-text-muted);
|
| 510 |
+
}
|
| 511 |
+
|
| 512 |
+
#chatInput:focus {
|
| 513 |
+
outline: none;
|
| 514 |
+
border-color: var(--color-accent);
|
| 515 |
+
box-shadow: 0 0 0 2px var(--color-accent-subtle);
|
| 516 |
+
}
|
| 517 |
+
|
| 518 |
+
.send-chat-btn {
|
| 519 |
+
background: var(--color-accent);
|
| 520 |
+
color: white;
|
| 521 |
+
border: none;
|
| 522 |
+
padding: var(--space-2) var(--space-4);
|
| 523 |
+
border-radius: var(--radius-sm);
|
| 524 |
+
cursor: pointer;
|
| 525 |
+
font-weight: 600;
|
| 526 |
+
font-size: var(--text-xs);
|
| 527 |
+
text-transform: uppercase;
|
| 528 |
+
letter-spacing: 0.05em;
|
| 529 |
+
transition: background 0.15s, transform 0.1s;
|
| 530 |
+
font-family: var(--font-sans);
|
| 531 |
+
}
|
| 532 |
+
|
| 533 |
+
.send-chat-btn:hover {
|
| 534 |
+
background: var(--color-accent-hover);
|
| 535 |
+
}
|
| 536 |
+
|
| 537 |
+
.send-chat-btn:active {
|
| 538 |
+
transform: scale(0.96);
|
| 539 |
+
}
|
| 540 |
+
|
| 541 |
+
.send-chat-btn:disabled {
|
| 542 |
+
background: var(--color-bg-muted);
|
| 543 |
+
color: var(--color-text-muted);
|
| 544 |
+
cursor: not-allowed;
|
| 545 |
+
}
|
| 546 |
+
|
| 547 |
+
/* ============================================
|
| 548 |
+
Ticker Selector - Compact
|
| 549 |
+
============================================ */
|
| 550 |
+
|
| 551 |
+
.ticker-selector {
|
| 552 |
+
background: var(--color-bg-elevated);
|
| 553 |
+
border: 1px solid var(--color-border);
|
| 554 |
+
padding: var(--space-3) var(--space-4);
|
| 555 |
+
border-radius: var(--radius-md);
|
| 556 |
+
margin-bottom: var(--space-4);
|
| 557 |
+
box-shadow: var(--shadow-sm);
|
| 558 |
+
}
|
| 559 |
+
|
| 560 |
+
.ticker-selector label {
|
| 561 |
+
display: block;
|
| 562 |
+
margin-bottom: var(--space-1);
|
| 563 |
+
font-weight: 600;
|
| 564 |
+
font-size: 10px;
|
| 565 |
+
color: var(--color-text-muted);
|
| 566 |
+
text-transform: uppercase;
|
| 567 |
+
letter-spacing: 0.08em;
|
| 568 |
+
}
|
| 569 |
+
|
| 570 |
+
.dropdown-container {
|
| 571 |
+
position: relative;
|
| 572 |
+
}
|
| 573 |
+
|
| 574 |
+
#tickerSearch {
|
| 575 |
+
width: 100%;
|
| 576 |
+
padding: var(--space-2) var(--space-3);
|
| 577 |
+
border: 1px solid var(--color-border);
|
| 578 |
+
border-radius: var(--radius-sm);
|
| 579 |
+
font-size: var(--text-sm);
|
| 580 |
+
font-family: var(--font-sans);
|
| 581 |
+
background: var(--color-bg-elevated);
|
| 582 |
+
color: var(--color-text);
|
| 583 |
+
transition: border-color 0.15s, box-shadow 0.15s;
|
| 584 |
+
}
|
| 585 |
+
|
| 586 |
+
#tickerSearch::placeholder {
|
| 587 |
+
color: var(--color-text-muted);
|
| 588 |
+
}
|
| 589 |
+
|
| 590 |
+
#tickerSearch:focus {
|
| 591 |
+
outline: none;
|
| 592 |
+
border-color: var(--color-accent);
|
| 593 |
+
box-shadow: 0 0 0 2px var(--color-accent-subtle);
|
| 594 |
+
}
|
| 595 |
+
|
| 596 |
+
.dropdown-list {
|
| 597 |
+
position: absolute;
|
| 598 |
+
top: calc(100% + 4px);
|
| 599 |
+
left: 0;
|
| 600 |
+
right: 0;
|
| 601 |
+
max-height: 320px;
|
| 602 |
+
overflow-y: auto;
|
| 603 |
+
background: var(--color-bg-elevated);
|
| 604 |
+
border: 1px solid var(--color-border);
|
| 605 |
+
border-radius: var(--radius-md);
|
| 606 |
+
box-shadow: var(--shadow-lg);
|
| 607 |
+
z-index: 1000;
|
| 608 |
+
}
|
| 609 |
+
|
| 610 |
+
.dropdown-section-header {
|
| 611 |
+
padding: var(--space-1) var(--space-3);
|
| 612 |
+
font-size: 10px;
|
| 613 |
+
font-weight: 700;
|
| 614 |
+
color: var(--color-text-muted);
|
| 615 |
+
background: var(--color-bg-subtle);
|
| 616 |
+
text-transform: uppercase;
|
| 617 |
+
letter-spacing: 0.08em;
|
| 618 |
+
border-bottom: 1px solid var(--color-border-subtle);
|
| 619 |
+
position: sticky;
|
| 620 |
+
top: 0;
|
| 621 |
+
}
|
| 622 |
+
|
| 623 |
+
.dropdown-item {
|
| 624 |
+
padding: var(--space-2) var(--space-3);
|
| 625 |
+
cursor: pointer;
|
| 626 |
+
transition: background 0.1s;
|
| 627 |
+
border-bottom: 1px solid var(--color-border-subtle);
|
| 628 |
+
font-size: var(--text-sm);
|
| 629 |
+
}
|
| 630 |
+
|
| 631 |
+
.dropdown-item:hover,
|
| 632 |
+
.dropdown-item.highlighted {
|
| 633 |
+
background: var(--color-bg-subtle);
|
| 634 |
+
}
|
| 635 |
+
|
| 636 |
+
.dropdown-item:last-child {
|
| 637 |
+
border-bottom: none;
|
| 638 |
+
}
|
| 639 |
+
|
| 640 |
+
.dropdown-no-results {
|
| 641 |
+
padding: var(--space-4);
|
| 642 |
+
text-align: center;
|
| 643 |
+
color: var(--color-text-muted);
|
| 644 |
+
font-size: var(--text-sm);
|
| 645 |
+
}
|
| 646 |
+
|
| 647 |
+
/* ============================================
|
| 648 |
+
Stock Data Card - Compact
|
| 649 |
+
============================================ */
|
| 650 |
+
|
| 651 |
+
.stock-data {
|
| 652 |
+
background: var(--color-bg-elevated);
|
| 653 |
+
border: 1px solid var(--color-border);
|
| 654 |
+
padding: var(--space-4);
|
| 655 |
+
border-radius: var(--radius-md);
|
| 656 |
+
box-shadow: var(--shadow-sm);
|
| 657 |
+
}
|
| 658 |
+
|
| 659 |
+
.stock-header {
|
| 660 |
+
display: flex;
|
| 661 |
+
justify-content: space-between;
|
| 662 |
+
align-items: baseline;
|
| 663 |
+
margin-bottom: var(--space-4);
|
| 664 |
+
padding-bottom: var(--space-3);
|
| 665 |
+
border-bottom: 1px solid var(--color-border);
|
| 666 |
+
}
|
| 667 |
+
|
| 668 |
+
.stock-header h2 {
|
| 669 |
+
font-size: var(--text-xl);
|
| 670 |
+
font-weight: 700;
|
| 671 |
+
color: var(--color-text);
|
| 672 |
+
letter-spacing: -0.025em;
|
| 673 |
+
}
|
| 674 |
+
|
| 675 |
+
.stock-price {
|
| 676 |
+
font-size: var(--text-2xl);
|
| 677 |
+
font-weight: 700;
|
| 678 |
+
font-family: var(--font-mono);
|
| 679 |
+
letter-spacing: -0.02em;
|
| 680 |
+
}
|
| 681 |
+
|
| 682 |
+
.stock-price.positive {
|
| 683 |
+
color: var(--color-positive);
|
| 684 |
+
}
|
| 685 |
+
|
| 686 |
+
.stock-price.negative {
|
| 687 |
+
color: var(--color-negative);
|
| 688 |
+
}
|
| 689 |
+
|
| 690 |
+
/* ============================================
|
| 691 |
+
Tabs - Terminal Style
|
| 692 |
+
============================================ */
|
| 693 |
+
|
| 694 |
+
.tabs {
|
| 695 |
+
display: flex;
|
| 696 |
+
gap: 0;
|
| 697 |
+
margin-bottom: var(--space-4);
|
| 698 |
+
border-bottom: 1px solid var(--color-border);
|
| 699 |
+
background: var(--color-bg-subtle);
|
| 700 |
+
border-radius: var(--radius-sm) var(--radius-sm) 0 0;
|
| 701 |
+
overflow-x: auto;
|
| 702 |
+
scrollbar-width: none;
|
| 703 |
+
-ms-overflow-style: none;
|
| 704 |
+
}
|
| 705 |
+
|
| 706 |
+
.tabs::-webkit-scrollbar {
|
| 707 |
+
display: none;
|
| 708 |
+
}
|
| 709 |
+
|
| 710 |
+
.tab-button {
|
| 711 |
+
padding: var(--space-2) var(--space-4);
|
| 712 |
+
background: none;
|
| 713 |
+
border: none;
|
| 714 |
+
font-size: 11px;
|
| 715 |
+
font-weight: 600;
|
| 716 |
+
cursor: pointer;
|
| 717 |
+
color: var(--color-text-muted);
|
| 718 |
+
transition: color 0.15s, background 0.15s;
|
| 719 |
+
border-bottom: 2px solid transparent;
|
| 720 |
+
margin-bottom: -1px;
|
| 721 |
+
white-space: nowrap;
|
| 722 |
+
font-family: var(--font-sans);
|
| 723 |
+
text-transform: uppercase;
|
| 724 |
+
letter-spacing: 0.05em;
|
| 725 |
+
}
|
| 726 |
+
|
| 727 |
+
.tab-button:hover {
|
| 728 |
+
color: var(--color-text-secondary);
|
| 729 |
+
background: var(--color-bg-muted);
|
| 730 |
+
}
|
| 731 |
+
|
| 732 |
+
.tab-button.active {
|
| 733 |
+
color: var(--color-accent);
|
| 734 |
+
background: var(--color-bg-elevated);
|
| 735 |
+
border-bottom-color: var(--color-accent);
|
| 736 |
+
}
|
| 737 |
+
|
| 738 |
+
.beta-tag {
|
| 739 |
+
display: inline-block;
|
| 740 |
+
padding: 1px 4px;
|
| 741 |
+
margin-left: var(--space-1);
|
| 742 |
+
font-size: 9px;
|
| 743 |
+
font-weight: 700;
|
| 744 |
+
text-transform: uppercase;
|
| 745 |
+
letter-spacing: 0.5px;
|
| 746 |
+
background: var(--color-accent-subtle);
|
| 747 |
+
color: var(--color-accent);
|
| 748 |
+
border-radius: 2px;
|
| 749 |
+
vertical-align: middle;
|
| 750 |
+
}
|
| 751 |
+
|
| 752 |
+
.tab-content {
|
| 753 |
+
position: relative;
|
| 754 |
+
}
|
| 755 |
+
|
| 756 |
+
.tab-pane {
|
| 757 |
+
display: none;
|
| 758 |
+
opacity: 0;
|
| 759 |
+
transform: translateY(6px);
|
| 760 |
+
}
|
| 761 |
+
|
| 762 |
+
.tab-pane.active {
|
| 763 |
+
display: block;
|
| 764 |
+
animation: fadeInUp 0.2s ease forwards;
|
| 765 |
+
}
|
| 766 |
+
|
| 767 |
+
@keyframes fadeInUp {
|
| 768 |
+
to {
|
| 769 |
+
opacity: 1;
|
| 770 |
+
transform: translateY(0);
|
| 771 |
+
}
|
| 772 |
+
}
|
| 773 |
+
|
| 774 |
+
/* ============================================
|
| 775 |
+
Metrics Grid - Dense
|
| 776 |
+
============================================ */
|
| 777 |
+
|
| 778 |
+
.metrics-grid {
|
| 779 |
+
display: grid;
|
| 780 |
+
grid-template-columns: repeat(6, 1fr);
|
| 781 |
+
gap: var(--space-2);
|
| 782 |
+
margin-bottom: var(--space-4);
|
| 783 |
+
}
|
| 784 |
+
|
| 785 |
+
.metric-card {
|
| 786 |
+
background: var(--color-bg-subtle);
|
| 787 |
+
padding: var(--space-2) var(--space-3);
|
| 788 |
+
border-radius: var(--radius-sm);
|
| 789 |
+
border: 1px solid var(--color-border-subtle);
|
| 790 |
+
}
|
| 791 |
+
|
| 792 |
+
.metric-label {
|
| 793 |
+
font-size: 10px;
|
| 794 |
+
font-weight: 600;
|
| 795 |
+
color: var(--color-text-muted);
|
| 796 |
+
text-transform: uppercase;
|
| 797 |
+
letter-spacing: 0.08em;
|
| 798 |
+
margin-bottom: 2px;
|
| 799 |
+
}
|
| 800 |
+
|
| 801 |
+
.metric-value {
|
| 802 |
+
font-size: var(--text-base);
|
| 803 |
+
font-weight: 600;
|
| 804 |
+
color: var(--color-text);
|
| 805 |
+
font-family: var(--font-mono);
|
| 806 |
+
line-height: var(--line-height-tight);
|
| 807 |
+
}
|
| 808 |
+
|
| 809 |
+
/* ============================================
|
| 810 |
+
Chart Section - Compact
|
| 811 |
+
============================================ */
|
| 812 |
+
|
| 813 |
+
.chart-section {
|
| 814 |
+
margin-bottom: var(--space-4);
|
| 815 |
+
padding: var(--space-3);
|
| 816 |
+
background: var(--color-bg-subtle);
|
| 817 |
+
border-radius: var(--radius-md);
|
| 818 |
+
border: 1px solid var(--color-border-subtle);
|
| 819 |
+
}
|
| 820 |
+
|
| 821 |
+
.chart-section h3 {
|
| 822 |
+
font-size: 10px;
|
| 823 |
+
font-weight: 700;
|
| 824 |
+
color: var(--color-text-muted);
|
| 825 |
+
margin-bottom: var(--space-3);
|
| 826 |
+
text-transform: uppercase;
|
| 827 |
+
letter-spacing: 0.08em;
|
| 828 |
+
}
|
| 829 |
+
|
| 830 |
+
.chart-controls {
|
| 831 |
+
display: flex;
|
| 832 |
+
gap: var(--space-1);
|
| 833 |
+
margin-bottom: var(--space-3);
|
| 834 |
+
}
|
| 835 |
+
|
| 836 |
+
.chart-range-btn {
|
| 837 |
+
padding: var(--space-1) var(--space-2);
|
| 838 |
+
background: var(--color-bg-elevated);
|
| 839 |
+
border: 1px solid var(--color-border);
|
| 840 |
+
border-radius: var(--radius-sm);
|
| 841 |
+
cursor: pointer;
|
| 842 |
+
font-size: 10px;
|
| 843 |
+
font-weight: 600;
|
| 844 |
+
color: var(--color-text-muted);
|
| 845 |
+
transition: all 0.15s;
|
| 846 |
+
font-family: var(--font-mono);
|
| 847 |
+
}
|
| 848 |
+
|
| 849 |
+
.chart-range-btn:hover {
|
| 850 |
+
background: var(--color-bg-muted);
|
| 851 |
+
color: var(--color-text-secondary);
|
| 852 |
+
}
|
| 853 |
+
|
| 854 |
+
.chart-range-btn:active {
|
| 855 |
+
transform: scale(0.96);
|
| 856 |
+
}
|
| 857 |
+
|
| 858 |
+
.chart-range-btn.active {
|
| 859 |
+
background: var(--color-accent);
|
| 860 |
+
color: white;
|
| 861 |
+
border-color: var(--color-accent);
|
| 862 |
+
}
|
| 863 |
+
|
| 864 |
+
.chart-controls-spacer {
|
| 865 |
+
flex: 1;
|
| 866 |
+
}
|
| 867 |
+
|
| 868 |
+
.chart-view-toggle {
|
| 869 |
+
display: flex;
|
| 870 |
+
border: 1px solid var(--color-border);
|
| 871 |
+
border-radius: var(--radius-sm);
|
| 872 |
+
overflow: hidden;
|
| 873 |
+
}
|
| 874 |
+
|
| 875 |
+
.chart-view-btn {
|
| 876 |
+
padding: var(--space-1) var(--space-2);
|
| 877 |
+
background: var(--color-bg-elevated);
|
| 878 |
+
border: none;
|
| 879 |
+
border-right: 1px solid var(--color-border);
|
| 880 |
+
cursor: pointer;
|
| 881 |
+
font-size: 10px;
|
| 882 |
+
font-weight: 600;
|
| 883 |
+
color: var(--color-text-muted);
|
| 884 |
+
transition: all 0.15s;
|
| 885 |
+
font-family: var(--font-sans);
|
| 886 |
+
}
|
| 887 |
+
|
| 888 |
+
.chart-view-btn:last-child {
|
| 889 |
+
border-right: none;
|
| 890 |
+
}
|
| 891 |
+
|
| 892 |
+
.chart-view-btn:hover {
|
| 893 |
+
background: var(--color-bg-muted);
|
| 894 |
+
}
|
| 895 |
+
|
| 896 |
+
.chart-view-btn:active {
|
| 897 |
+
transform: scale(0.96);
|
| 898 |
+
}
|
| 899 |
+
|
| 900 |
+
.chart-view-btn.active {
|
| 901 |
+
background: var(--color-accent);
|
| 902 |
+
color: white;
|
| 903 |
+
}
|
| 904 |
+
|
| 905 |
+
.chart-canvas-wrapper {
|
| 906 |
+
position: relative;
|
| 907 |
+
}
|
| 908 |
+
|
| 909 |
+
.chart-loading {
|
| 910 |
+
position: absolute;
|
| 911 |
+
top: 50%;
|
| 912 |
+
left: 50%;
|
| 913 |
+
transform: translate(-50%, -50%);
|
| 914 |
+
}
|
| 915 |
+
|
| 916 |
+
#priceChart {
|
| 917 |
+
width: 100%;
|
| 918 |
+
height: 280px;
|
| 919 |
+
background: var(--color-bg-elevated);
|
| 920 |
+
border: 1px solid var(--color-border);
|
| 921 |
+
border-radius: var(--radius-sm);
|
| 922 |
+
cursor: crosshair;
|
| 923 |
+
}
|
| 924 |
+
|
| 925 |
+
/* ============================================
|
| 926 |
+
Company Description - Compact
|
| 927 |
+
============================================ */
|
| 928 |
+
|
| 929 |
+
.company-description {
|
| 930 |
+
padding: var(--space-3);
|
| 931 |
+
background: var(--color-bg-subtle);
|
| 932 |
+
border-radius: var(--radius-md);
|
| 933 |
+
border: 1px solid var(--color-border-subtle);
|
| 934 |
+
}
|
| 935 |
+
|
| 936 |
+
.company-description h3 {
|
| 937 |
+
font-size: 10px;
|
| 938 |
+
font-weight: 700;
|
| 939 |
+
color: var(--color-text-muted);
|
| 940 |
+
margin-bottom: var(--space-2);
|
| 941 |
+
text-transform: uppercase;
|
| 942 |
+
letter-spacing: 0.08em;
|
| 943 |
+
}
|
| 944 |
+
|
| 945 |
+
.company-description p {
|
| 946 |
+
font-size: var(--text-sm);
|
| 947 |
+
line-height: 1.6;
|
| 948 |
+
color: var(--color-text-secondary);
|
| 949 |
+
}
|
| 950 |
+
|
| 951 |
+
/* ============================================
|
| 952 |
+
Financials - Dense
|
| 953 |
+
============================================ */
|
| 954 |
+
|
| 955 |
+
#financialsData {
|
| 956 |
+
padding: var(--space-2);
|
| 957 |
+
}
|
| 958 |
+
|
| 959 |
+
.financial-period {
|
| 960 |
+
margin-bottom: var(--space-4);
|
| 961 |
+
padding: var(--space-3);
|
| 962 |
+
background: var(--color-bg-subtle);
|
| 963 |
+
border-radius: var(--radius-md);
|
| 964 |
+
border: 1px solid var(--color-border-subtle);
|
| 965 |
+
}
|
| 966 |
+
|
| 967 |
+
.financial-period h4 {
|
| 968 |
+
margin-bottom: var(--space-3);
|
| 969 |
+
color: var(--color-text);
|
| 970 |
+
font-size: 10px;
|
| 971 |
+
font-weight: 700;
|
| 972 |
+
text-transform: uppercase;
|
| 973 |
+
letter-spacing: 0.08em;
|
| 974 |
+
}
|
| 975 |
+
|
| 976 |
+
.financial-grid {
|
| 977 |
+
display: grid;
|
| 978 |
+
grid-template-columns: repeat(auto-fit, minmax(180px, 1fr));
|
| 979 |
+
gap: var(--space-2);
|
| 980 |
+
}
|
| 981 |
+
|
| 982 |
+
.financial-item {
|
| 983 |
+
display: flex;
|
| 984 |
+
justify-content: space-between;
|
| 985 |
+
align-items: center;
|
| 986 |
+
padding: var(--space-2) var(--space-3);
|
| 987 |
+
background: var(--color-bg-elevated);
|
| 988 |
+
border-radius: var(--radius-sm);
|
| 989 |
+
border: 1px solid var(--color-border-subtle);
|
| 990 |
+
}
|
| 991 |
+
|
| 992 |
+
.financial-item-label {
|
| 993 |
+
color: var(--color-text-secondary);
|
| 994 |
+
font-size: var(--text-xs);
|
| 995 |
+
}
|
| 996 |
+
|
| 997 |
+
.financial-item-value {
|
| 998 |
+
font-weight: 600;
|
| 999 |
+
color: var(--color-text);
|
| 1000 |
+
font-family: var(--font-mono);
|
| 1001 |
+
font-size: var(--text-xs);
|
| 1002 |
+
}
|
| 1003 |
+
|
| 1004 |
+
/* ============================================
|
| 1005 |
+
News - Dense
|
| 1006 |
+
============================================ */
|
| 1007 |
+
|
| 1008 |
+
#newsContainer {
|
| 1009 |
+
display: flex;
|
| 1010 |
+
flex-direction: column;
|
| 1011 |
+
gap: var(--space-3);
|
| 1012 |
+
}
|
| 1013 |
+
|
| 1014 |
+
.news-article {
|
| 1015 |
+
padding: var(--space-3);
|
| 1016 |
+
background: var(--color-bg-subtle);
|
| 1017 |
+
border-radius: var(--radius-md);
|
| 1018 |
+
border: 1px solid var(--color-border-subtle);
|
| 1019 |
+
transition: border-color 0.15s;
|
| 1020 |
+
}
|
| 1021 |
+
|
| 1022 |
+
.news-article:hover {
|
| 1023 |
+
border-color: var(--color-border);
|
| 1024 |
+
}
|
| 1025 |
+
|
| 1026 |
+
.news-article h4 {
|
| 1027 |
+
margin-bottom: var(--space-1);
|
| 1028 |
+
font-size: var(--text-sm);
|
| 1029 |
+
font-weight: 600;
|
| 1030 |
+
line-height: var(--line-height-dense);
|
| 1031 |
+
}
|
| 1032 |
+
|
| 1033 |
+
.news-article h4 a {
|
| 1034 |
+
color: var(--color-text);
|
| 1035 |
+
text-decoration: none;
|
| 1036 |
+
transition: color 0.15s;
|
| 1037 |
+
}
|
| 1038 |
+
|
| 1039 |
+
.news-article h4 a:hover {
|
| 1040 |
+
color: var(--color-accent);
|
| 1041 |
+
}
|
| 1042 |
+
|
| 1043 |
+
.news-meta {
|
| 1044 |
+
font-size: 10px;
|
| 1045 |
+
color: var(--color-text-muted);
|
| 1046 |
+
margin-bottom: var(--space-2);
|
| 1047 |
+
text-transform: uppercase;
|
| 1048 |
+
letter-spacing: 0.03em;
|
| 1049 |
+
}
|
| 1050 |
+
|
| 1051 |
+
.news-description {
|
| 1052 |
+
color: var(--color-text-secondary);
|
| 1053 |
+
font-size: var(--text-xs);
|
| 1054 |
+
line-height: 1.5;
|
| 1055 |
+
}
|
| 1056 |
+
|
| 1057 |
+
/* ============================================
|
| 1058 |
+
Data Tables - Dense with Sticky Headers
|
| 1059 |
+
============================================ */
|
| 1060 |
+
|
| 1061 |
+
.data-table-container {
|
| 1062 |
+
max-height: 400px;
|
| 1063 |
+
overflow-y: auto;
|
| 1064 |
+
border: 1px solid var(--color-border);
|
| 1065 |
+
border-radius: var(--radius-md);
|
| 1066 |
+
}
|
| 1067 |
+
|
| 1068 |
+
.data-table {
|
| 1069 |
+
width: 100%;
|
| 1070 |
+
border-collapse: collapse;
|
| 1071 |
+
font-size: var(--text-xs);
|
| 1072 |
+
}
|
| 1073 |
+
|
| 1074 |
+
.data-table thead {
|
| 1075 |
+
position: sticky;
|
| 1076 |
+
top: 0;
|
| 1077 |
+
z-index: 10;
|
| 1078 |
+
background: var(--color-bg-subtle);
|
| 1079 |
+
}
|
| 1080 |
+
|
| 1081 |
+
.data-table th {
|
| 1082 |
+
padding: var(--space-2) var(--space-3);
|
| 1083 |
+
text-align: left;
|
| 1084 |
+
font-weight: 700;
|
| 1085 |
+
color: var(--color-text-muted);
|
| 1086 |
+
font-size: 10px;
|
| 1087 |
+
text-transform: uppercase;
|
| 1088 |
+
letter-spacing: 0.06em;
|
| 1089 |
+
border-bottom: 2px solid var(--color-border);
|
| 1090 |
+
white-space: nowrap;
|
| 1091 |
+
}
|
| 1092 |
+
|
| 1093 |
+
.data-table td {
|
| 1094 |
+
padding: var(--space-2) var(--space-3);
|
| 1095 |
+
border-bottom: 1px solid var(--color-border-subtle);
|
| 1096 |
+
color: var(--color-text);
|
| 1097 |
+
font-family: var(--font-mono);
|
| 1098 |
+
font-size: 12px;
|
| 1099 |
+
}
|
| 1100 |
+
|
| 1101 |
+
.data-table td[data-type="number"] {
|
| 1102 |
+
text-align: right;
|
| 1103 |
+
}
|
| 1104 |
+
|
| 1105 |
+
.data-table tbody tr:nth-child(even) {
|
| 1106 |
+
background: var(--color-bg-subtle);
|
| 1107 |
+
}
|
| 1108 |
+
|
| 1109 |
+
.data-table tbody tr:hover {
|
| 1110 |
+
background: var(--color-accent-subtle);
|
| 1111 |
+
}
|
| 1112 |
+
|
| 1113 |
+
.data-table tbody tr:last-child td {
|
| 1114 |
+
border-bottom: none;
|
| 1115 |
+
}
|
| 1116 |
+
|
| 1117 |
+
/* ============================================
|
| 1118 |
+
Loading States & Skeletons
|
| 1119 |
+
============================================ */
|
| 1120 |
+
|
| 1121 |
+
.loading {
|
| 1122 |
+
position: absolute;
|
| 1123 |
+
top: 50%;
|
| 1124 |
+
left: 50%;
|
| 1125 |
+
transform: translate(-50%, -50%);
|
| 1126 |
+
}
|
| 1127 |
+
|
| 1128 |
+
.spinner {
|
| 1129 |
+
width: 24px;
|
| 1130 |
+
height: 24px;
|
| 1131 |
+
border: 2px solid var(--color-border);
|
| 1132 |
+
border-top-color: var(--color-accent);
|
| 1133 |
+
border-radius: 50%;
|
| 1134 |
+
animation: spin 0.7s linear infinite;
|
| 1135 |
+
}
|
| 1136 |
+
|
| 1137 |
+
@keyframes spin {
|
| 1138 |
+
to { transform: rotate(360deg); }
|
| 1139 |
+
}
|
| 1140 |
+
|
| 1141 |
+
.hidden {
|
| 1142 |
+
display: none;
|
| 1143 |
+
}
|
| 1144 |
+
|
| 1145 |
+
/* Skeleton Loading */
|
| 1146 |
+
.skeleton {
|
| 1147 |
+
background: linear-gradient(
|
| 1148 |
+
90deg,
|
| 1149 |
+
var(--color-bg-subtle) 25%,
|
| 1150 |
+
var(--color-bg-muted) 50%,
|
| 1151 |
+
var(--color-bg-subtle) 75%
|
| 1152 |
+
);
|
| 1153 |
+
background-size: 200% 100%;
|
| 1154 |
+
animation: skeleton-loading 1.5s infinite;
|
| 1155 |
+
border-radius: var(--radius-sm);
|
| 1156 |
+
}
|
| 1157 |
+
|
| 1158 |
+
@keyframes skeleton-loading {
|
| 1159 |
+
0% { background-position: 200% 0; }
|
| 1160 |
+
100% { background-position: -200% 0; }
|
| 1161 |
+
}
|
| 1162 |
+
|
| 1163 |
+
.skeleton-metric {
|
| 1164 |
+
height: 48px;
|
| 1165 |
+
}
|
| 1166 |
+
|
| 1167 |
+
.skeleton-chart {
|
| 1168 |
+
height: 280px;
|
| 1169 |
+
}
|
| 1170 |
+
|
| 1171 |
+
.skeleton-text {
|
| 1172 |
+
height: 14px;
|
| 1173 |
+
margin-bottom: 8px;
|
| 1174 |
+
}
|
| 1175 |
+
|
| 1176 |
+
.skeleton-text:last-child {
|
| 1177 |
+
width: 60%;
|
| 1178 |
+
}
|
| 1179 |
+
|
| 1180 |
+
.skeleton-text-sm {
|
| 1181 |
+
height: 12px;
|
| 1182 |
+
margin-bottom: 6px;
|
| 1183 |
+
}
|
| 1184 |
+
|
| 1185 |
+
/* ============================================
|
| 1186 |
+
Empty States
|
| 1187 |
+
============================================ */
|
| 1188 |
+
|
| 1189 |
+
.empty-state {
|
| 1190 |
+
text-align: center;
|
| 1191 |
+
padding: var(--space-8) var(--space-4);
|
| 1192 |
+
color: var(--color-text-muted);
|
| 1193 |
+
}
|
| 1194 |
+
|
| 1195 |
+
.empty-state-icon {
|
| 1196 |
+
width: 40px;
|
| 1197 |
+
height: 40px;
|
| 1198 |
+
margin: 0 auto var(--space-3);
|
| 1199 |
+
opacity: 0.5;
|
| 1200 |
+
color: var(--color-text-muted);
|
| 1201 |
+
}
|
| 1202 |
+
|
| 1203 |
+
.empty-state-title {
|
| 1204 |
+
font-size: var(--text-sm);
|
| 1205 |
+
font-weight: 600;
|
| 1206 |
+
color: var(--color-text-secondary);
|
| 1207 |
+
margin-bottom: var(--space-1);
|
| 1208 |
+
}
|
| 1209 |
+
|
| 1210 |
+
.empty-state-description {
|
| 1211 |
+
font-size: var(--text-xs);
|
| 1212 |
+
max-width: 240px;
|
| 1213 |
+
margin: 0 auto;
|
| 1214 |
+
line-height: 1.5;
|
| 1215 |
+
}
|
| 1216 |
+
|
| 1217 |
+
/* ============================================
|
| 1218 |
+
Micro-animations
|
| 1219 |
+
============================================ */
|
| 1220 |
+
|
| 1221 |
+
/* Button press feedback */
|
| 1222 |
+
button:active:not(:disabled) {
|
| 1223 |
+
transform: scale(0.98);
|
| 1224 |
+
}
|
| 1225 |
+
|
| 1226 |
+
/* Data update flash */
|
| 1227 |
+
.data-updated {
|
| 1228 |
+
animation: dataFlash 0.3s ease;
|
| 1229 |
+
}
|
| 1230 |
+
|
| 1231 |
+
@keyframes dataFlash {
|
| 1232 |
+
0%, 100% { background: transparent; }
|
| 1233 |
+
50% { background: var(--color-accent-subtle); }
|
| 1234 |
+
}
|
| 1235 |
+
|
| 1236 |
+
/* Focus visible states */
|
| 1237 |
+
:focus-visible {
|
| 1238 |
+
outline: 2px solid var(--color-accent);
|
| 1239 |
+
outline-offset: 2px;
|
| 1240 |
+
}
|
| 1241 |
+
|
| 1242 |
+
/* ============================================
|
| 1243 |
+
Sentiment Analysis Tab
|
| 1244 |
+
============================================ */
|
| 1245 |
+
|
| 1246 |
+
.sentiment-header {
|
| 1247 |
+
display: flex;
|
| 1248 |
+
justify-content: space-between;
|
| 1249 |
+
align-items: flex-start;
|
| 1250 |
+
gap: var(--space-4);
|
| 1251 |
+
margin-bottom: var(--space-4);
|
| 1252 |
+
padding: var(--space-3);
|
| 1253 |
+
background: var(--color-bg-elevated);
|
| 1254 |
+
border-radius: var(--radius-md);
|
| 1255 |
+
border: 1px solid var(--color-border);
|
| 1256 |
+
}
|
| 1257 |
+
|
| 1258 |
+
.sentiment-aggregate {
|
| 1259 |
+
display: flex;
|
| 1260 |
+
gap: var(--space-6);
|
| 1261 |
+
align-items: center;
|
| 1262 |
+
}
|
| 1263 |
+
|
| 1264 |
+
.sentiment-score-container {
|
| 1265 |
+
text-align: center;
|
| 1266 |
+
}
|
| 1267 |
+
|
| 1268 |
+
/* Sentiment Gauge */
|
| 1269 |
+
.sentiment-gauge {
|
| 1270 |
+
width: 120px;
|
| 1271 |
+
height: 60px;
|
| 1272 |
+
position: relative;
|
| 1273 |
+
overflow: hidden;
|
| 1274 |
+
margin-bottom: var(--space-2);
|
| 1275 |
+
}
|
| 1276 |
+
|
| 1277 |
+
.gauge-background {
|
| 1278 |
+
position: absolute;
|
| 1279 |
+
bottom: 0;
|
| 1280 |
+
left: 0;
|
| 1281 |
+
width: 120px;
|
| 1282 |
+
height: 60px;
|
| 1283 |
+
border-radius: 60px 60px 0 0;
|
| 1284 |
+
background: linear-gradient(90deg,
|
| 1285 |
+
var(--color-negative) 0%,
|
| 1286 |
+
var(--color-negative) 20%,
|
| 1287 |
+
var(--color-neutral-sentiment) 40%,
|
| 1288 |
+
var(--color-neutral-sentiment) 60%,
|
| 1289 |
+
var(--color-positive) 80%,
|
| 1290 |
+
var(--color-positive) 100%
|
| 1291 |
+
);
|
| 1292 |
+
opacity: 0.15;
|
| 1293 |
+
}
|
| 1294 |
+
|
| 1295 |
+
.gauge-fill {
|
| 1296 |
+
position: absolute;
|
| 1297 |
+
bottom: 0;
|
| 1298 |
+
left: 50%;
|
| 1299 |
+
width: 3px;
|
| 1300 |
+
height: 50px;
|
| 1301 |
+
background: var(--color-text-muted);
|
| 1302 |
+
transform-origin: bottom center;
|
| 1303 |
+
transform: rotate(0deg);
|
| 1304 |
+
transition: transform 0.5s ease-out, background 0.3s;
|
| 1305 |
+
border-radius: 2px;
|
| 1306 |
+
z-index: 2;
|
| 1307 |
+
}
|
| 1308 |
+
|
| 1309 |
+
.gauge-fill.bullish {
|
| 1310 |
+
background: var(--color-positive);
|
| 1311 |
+
}
|
| 1312 |
+
|
| 1313 |
+
.gauge-fill.bearish {
|
| 1314 |
+
background: var(--color-negative);
|
| 1315 |
+
}
|
| 1316 |
+
|
| 1317 |
+
.gauge-fill.neutral {
|
| 1318 |
+
background: var(--color-neutral-sentiment);
|
| 1319 |
+
}
|
| 1320 |
+
|
| 1321 |
.gauge-needle {
  position: absolute;
  bottom: 0;
  left: 50%;
  width: 2px;
  height: 45px;
  background: var(--color-text);
  transform-origin: bottom center;
  transform: rotate(0deg);
  transition: transform 0.6s cubic-bezier(0.4, 0, 0.2, 1);
  border-radius: 2px;
  z-index: 3;
  margin-left: -1px;
}

.gauge-center {
  position: absolute;
  bottom: -6px;
  left: 50%;
  transform: translateX(-50%);
  width: 12px;
  height: 12px;
  background: var(--color-bg-elevated);
  border: 2px solid var(--color-text);
  border-radius: 50%;
  z-index: 4;
}

.sentiment-label {
  font-size: var(--text-base);
  font-weight: 700;
  text-transform: uppercase;
  letter-spacing: 0.05em;
}

.sentiment-label.bullish {
  color: var(--color-positive);
}

.sentiment-label.bearish {
  color: var(--color-negative);
}

.sentiment-label.neutral {
  color: var(--color-neutral-sentiment);
}

.sentiment-stats {
  display: flex;
  flex-direction: column;
  gap: var(--space-3);
}

.sentiment-stats .stat-item {
  display: flex;
  flex-direction: column;
  gap: 2px;
}

.sentiment-stats .stat-label {
  font-size: 10px;
  color: var(--color-text-muted);
  text-transform: uppercase;
  letter-spacing: 0.05em;
}

.sentiment-stats .stat-value {
  font-size: var(--text-base);
  font-weight: 600;
  color: var(--color-text);
  font-family: var(--font-mono);
}

.refresh-btn {
  display: flex;
  align-items: center;
  gap: var(--space-2);
  padding: var(--space-2) var(--space-3);
  background: var(--color-bg-subtle);
  border: 1px solid var(--color-border);
  border-radius: var(--radius-sm);
  cursor: pointer;
  font-size: 10px;
  font-weight: 600;
  color: var(--color-text-secondary);
  transition: all 0.15s;
  font-family: var(--font-sans);
  text-transform: uppercase;
  letter-spacing: 0.05em;
}

.refresh-btn:hover {
  background: var(--color-bg-muted);
}

.refresh-btn:disabled {
  opacity: 0.5;
  cursor: not-allowed;
}

.refresh-btn.loading svg {
  animation: spin 1s linear infinite;
}

/* Source Breakdown */
.sentiment-sources {
  margin-bottom: var(--space-4);
}

.sentiment-sources h4 {
  font-size: 10px;
  font-weight: 700;
  color: var(--color-text-muted);
  margin-bottom: var(--space-2);
  text-transform: uppercase;
  letter-spacing: 0.08em;
}

.source-breakdown {
  display: grid;
  grid-template-columns: repeat(3, 1fr);
  gap: var(--space-2);
}

.source-item {
  display: flex;
  align-items: center;
  gap: var(--space-2);
  padding: var(--space-2) var(--space-3);
  background: var(--color-bg-elevated);
  border-radius: var(--radius-sm);
  border: 1px solid var(--color-border);
}

.source-icon {
  font-size: var(--text-sm);
}

.source-name {
  font-size: var(--text-xs);
  color: var(--color-text-secondary);
  flex: 1;
}

.source-count {
  font-size: var(--text-sm);
  font-weight: 600;
  color: var(--color-text);
  font-family: var(--font-mono);
}

/* Sentiment Posts Section */
.sentiment-posts-section {
  background: var(--color-bg-elevated);
  border-radius: var(--radius-md);
  border: 1px solid var(--color-border);
  padding: var(--space-3);
}

.posts-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: var(--space-3);
}

.posts-header h4 {
  font-size: 10px;
  font-weight: 700;
  color: var(--color-text-muted);
  text-transform: uppercase;
  letter-spacing: 0.08em;
}

.posts-filter {
  display: flex;
  gap: var(--space-1);
}

.filter-btn {
  padding: var(--space-1) var(--space-2);
  background: var(--color-bg-subtle);
  border: 1px solid var(--color-border);
  border-radius: var(--radius-sm);
  cursor: pointer;
  font-size: 10px;
  font-weight: 600;
  color: var(--color-text-muted);
  transition: all 0.15s;
  font-family: var(--font-sans);
}

.filter-btn:hover {
  background: var(--color-bg-muted);
}

.filter-btn.active {
  background: var(--color-accent);
  color: white;
  border-color: var(--color-accent);
}

#sentimentPostsContainer {
  max-height: 400px;
  overflow-y: auto;
}

.sentiment-post {
  padding: var(--space-3);
  background: var(--color-bg-subtle);
  border-radius: var(--radius-sm);
  border-left: 3px solid var(--color-border);
  margin-bottom: var(--space-2);
  transition: border-color 0.15s;
}

.sentiment-post:last-child {
  margin-bottom: 0;
}

.sentiment-post.positive {
  border-left-color: var(--color-positive);
}

.sentiment-post.negative {
  border-left-color: var(--color-negative);
}

.sentiment-post.neutral {
  border-left-color: var(--color-neutral-sentiment);
}

.sentiment-post .post-header {
  display: flex;
  justify-content: space-between;
  align-items: center;
  margin-bottom: var(--space-1);
}

.post-platform {
  font-size: 10px;
  font-weight: 700;
  color: var(--color-text-muted);
  text-transform: uppercase;
  letter-spacing: 0.02em;
}

.post-sentiment {
  font-size: 10px;
  font-weight: 600;
  padding: 2px var(--space-2);
  border-radius: 2px;
}

.post-sentiment.positive {
  background: var(--color-positive-bg);
  color: var(--color-positive);
}

.post-sentiment.negative {
  background: var(--color-negative-bg);
  color: var(--color-negative);
}

.post-sentiment.neutral {
  background: var(--color-neutral-sentiment-bg);
  color: var(--color-neutral-sentiment);
}

.post-content {
  font-size: var(--text-xs);
  line-height: 1.5;
  color: var(--color-text);
  margin-bottom: var(--space-2);
  word-break: break-word;
}

.post-meta {
  display: flex;
  flex-wrap: wrap;
  gap: var(--space-2);
  font-size: 10px;
  color: var(--color-text-muted);
}

.post-link {
  display: inline-block;
  margin-top: var(--space-1);
  font-size: 10px;
  color: var(--color-accent);
  text-decoration: none;
}

.post-link:hover {
  text-decoration: underline;
}

.loading-text,
.error-text,
.no-data-text {
  text-align: center;
  padding: var(--space-6);
  color: var(--color-text-muted);
  font-size: var(--text-sm);
}

.error-text {
  color: var(--color-negative);
}

/* ============================================
   Forecast Tab
   ============================================ */

.forecast-header {
  display: flex;
  justify-content: space-between;
  align-items: flex-start;
  gap: var(--space-4);
  margin-bottom: var(--space-4);
  padding: var(--space-3);
  background: var(--color-bg-elevated);
  border-radius: var(--radius-md);
  border: 1px solid var(--color-border);
  flex-wrap: wrap;
}

.forecast-info h4 {
  font-size: var(--text-base);
  font-weight: 600;
  color: var(--color-text);
  margin-bottom: var(--space-1);
}

.forecast-description {
  font-size: var(--text-xs);
  color: var(--color-text-secondary);
}

.forecast-controls {
  display: flex;
  flex-direction: column;
  gap: var(--space-3);
  align-items: flex-end;
}

.forecast-status {
  display: flex;
  gap: var(--space-4);
}

.forecast-status .status-item {
  display: flex;
  flex-direction: column;
  gap: 2px;
  text-align: right;
}

.forecast-status .status-label {
  font-size: 10px;
  color: var(--color-text-muted);
  text-transform: uppercase;
  letter-spacing: 0.05em;
}

.forecast-status .status-value {
  font-size: var(--text-xs);
  font-weight: 600;
  color: var(--color-text);
  font-family: var(--font-mono);
}

.forecast-actions {
  display: flex;
  gap: var(--space-2);
}

.forecast-btn {
  display: flex;
  align-items: center;
  gap: var(--space-2);
  padding: var(--space-2) var(--space-3);
  background: var(--color-accent);
  border: none;
  border-radius: var(--radius-sm);
  cursor: pointer;
  font-size: 10px;
  font-weight: 600;
  color: white;
  transition: background 0.15s;
  font-family: var(--font-sans);
  text-transform: uppercase;
  letter-spacing: 0.05em;
}

.forecast-btn:hover:not(:disabled) {
  background: var(--color-accent-hover);
}

.forecast-btn:disabled {
  opacity: 0.5;
  cursor: not-allowed;
}

.forecast-btn.loading svg {
  animation: spin 1s linear infinite;
}

.forecast-chart-section {
  margin-bottom: var(--space-4);
  padding: var(--space-3);
  background: var(--color-bg-elevated);
  border-radius: var(--radius-md);
  border: 1px solid var(--color-border);
}

.chart-legend {
  display: flex;
  gap: var(--space-4);
  margin-bottom: var(--space-3);
  flex-wrap: wrap;
}

.legend-item {
  display: flex;
  align-items: center;
  gap: var(--space-2);
  font-size: 10px;
  color: var(--color-text-muted);
  text-transform: uppercase;
  letter-spacing: 0.03em;
}

.legend-color {
  width: 16px;
  height: 3px;
  border-radius: 2px;
}

.legend-item.historical .legend-color {
  background: var(--color-accent);
}

.legend-item.predicted .legend-color {
  background: var(--color-positive);
  background-image: linear-gradient(90deg, var(--color-positive) 60%, transparent 60%);
  background-size: 6px 3px;
}

.legend-item.confidence .legend-color {
  background: rgba(16, 185, 129, 0.25);
  height: 10px;
  width: 20px;
}

#forecastChart {
  width: 100%;
  height: 300px;
  background: var(--color-bg-subtle);
  border-radius: var(--radius-sm);
}

.forecast-disclaimer {
  padding: var(--space-3);
  background: var(--color-negative-bg);
  border-radius: var(--radius-sm);
  border: 1px solid var(--color-negative);
  border-left: 3px solid var(--color-negative);
}

.forecast-disclaimer p {
  font-size: var(--text-xs);
  color: var(--color-negative);
  line-height: 1.5;
  margin: 0;
}

.forecast-disclaimer strong {
  font-weight: 700;
}

/* ============================================
   Responsive Design
   ============================================ */

@media (max-width: 1024px) {
  .metrics-grid {
    grid-template-columns: repeat(3, 1fr);
  }
}

@media (max-width: 768px) {
  :root {
    --chat-width: 100%;
  }

  .app-layout {
    grid-template-columns: 1fr;
  }

  .chat-panel {
    position: fixed;
    right: 0;
    top: var(--header-height);
    height: calc(100vh - var(--header-height));
    width: 100%;
    transform: translateX(100%);
    transition: transform 0.25s ease;
    z-index: 1000;
  }

  .chat-panel.open {
    transform: translateX(0);
  }

  .close-mobile-chat {
    display: block;
  }

  /* Mobile chat toggle button */
  .mobile-chat-toggle {
    position: fixed;
    right: var(--space-4);
    bottom: var(--space-4);
    width: 48px;
    height: 48px;
    background: var(--color-accent);
    color: white;
    border: none;
    border-radius: 50%;
    cursor: pointer;
    box-shadow: var(--shadow-lg);
    z-index: 999;
    display: flex;
    align-items: center;
    justify-content: center;
    font-size: var(--text-lg);
  }

  .main-content {
    padding: var(--space-3);
  }

  .global-header {
    padding: 0 var(--space-4);
  }

  .global-header h1 {
    font-size: var(--text-base);
  }

  .stock-header {
    flex-direction: column;
    align-items: flex-start;
    gap: var(--space-2);
  }

  .stock-header h2 {
    font-size: var(--text-lg);
  }

  .stock-price {
    font-size: var(--text-xl);
  }

  .metrics-grid {
    grid-template-columns: repeat(2, 1fr);
  }

  .tabs {
    -webkit-overflow-scrolling: touch;
  }

  .tab-button {
    padding: var(--space-2) var(--space-3);
    font-size: 10px;
  }

  #priceChart {
    height: 200px;
  }

  .source-breakdown {
    grid-template-columns: 1fr;
  }

  .financial-grid {
    grid-template-columns: 1fr;
  }

  .sentiment-aggregate {
    flex-direction: column;
    gap: var(--space-3);
  }

  .forecast-header {
    flex-direction: column;
  }

  .forecast-controls {
    width: 100%;
    align-items: flex-start;
  }
}

@media (max-width: 480px) {
  .metrics-grid {
    grid-template-columns: 1fr;
  }

  .chart-controls {
    flex-wrap: wrap;
  }

  .header-actions {
    gap: var(--space-2);
  }

  .market-status {
    display: none;
  }
}