Spaces: Running

Commit: Upload folder using huggingface_hub

Files changed:
- README.md +28 -59
- app.py +804 -430
- requirements.txt +6 -3
README.md
CHANGED
@@ -1,81 +1,50 @@
  ---
- title:
- emoji:
- colorFrom:
- colorTo:
  sdk: gradio
- sdk_version:
  python_version: "3.10"
  app_file: app.py
  pinned: false
  license: mit
- short_description:
  ---

- #

- An

  ## Features

- - **
- - **
- - **
- - Word-count statistics with compression ratio
- - **Long Document Support** -- Papers of any length are automatically chunked and summarized in multiple passes, then combined into a coherent final summary.
- - **Clean PDF Processing** -- Handles hyphenated line breaks, control characters, and other common PDF artifacts.

  ## How It Works

- 1.
- 2.
- 3.
- 4.
- 5. **Section Extraction** -- Regex heuristics identify Results, Methodology, and Conclusion sections for targeted summarization of key findings and methods.

- ##

- ##

- - **Summarization quality** depends on the structure and clarity of the input text.
- - **Processing time** may be longer for very large papers due to CPU-only inference.

- ## Tech Stack

- | Component | Library |
- |---|---|
- | Web framework | Gradio 4.44 |
- | Summarization model | HuggingFace Transformers (BART-Large-CNN) |
- | PDF parsing | PyMuPDF (fitz) |
- | Inference backend | PyTorch (CPU) |

- ## Local Development

- ```bash
- # Clone the repository
- git clone https://huggingface.co/spaces/gr8monk3ys/paper-summarizer
- cd paper-summarizer
-
- # Install dependencies
- pip install -r requirements.txt
-
- # Run the application
- python app.py
- ```

- The app will be available at `http://localhost:7860`.

- ## License

- MIT

  ## Author

- Built by [Lorenzo Scaturchio](https://huggingface.co/gr8monk3ys)
  ---
+ title: Trading Signal Dashboard
+ emoji: 📈
+ colorFrom: purple
+ colorTo: pink
  sdk: gradio
+ sdk_version: 5.9.1
  python_version: "3.10"
  app_file: app.py
  pinned: false
  license: mit
+ short_description: Technical analysis dashboard with trading signals
  ---

+ # Trading Signal Dashboard

+ An interactive technical analysis dashboard that fetches real-time stock data and generates trading signals using classical technical indicators.

  ## Features

+ - **Real-time data** via Yahoo Finance for any publicly traded ticker
+ - **Technical indicators**: SMA (20/50), EMA (12/26), RSI (14), MACD, Bollinger Bands
+ - **Signal generation**: Buy/sell signals from indicator crossovers (SMA, MACD, RSI)
+ - **Interactive Plotly charts**: Price with overlays, RSI, MACD, and Volume subplots
+ - **Signal summary table** with recent buy/sell signals and reasoning
+ - **Backtesting engine**: Compare signal-based strategy returns against buy-and-hold
+ - **Multiple timeframes**: 1 Month, 3 Months, 6 Months, 1 Year, 2 Years
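Of the indicators listed, RSI is the least self-explanatory to compute. A minimal Wilder-smoothing sketch in plain pandas follows; the app itself uses the `ta` library, so this `rsi` helper is illustrative only:

```python
import pandas as pd

def rsi(close: pd.Series, window: int = 14) -> pd.Series:
    """Wilder-style RSI: 100 - 100 / (1 + avg_gain / avg_loss)."""
    delta = close.diff()
    gain = delta.clip(lower=0.0)          # upward moves only
    loss = (-delta).clip(lower=0.0)       # downward moves, as positive numbers
    # Wilder's smoothing is an exponential average with alpha = 1 / window
    avg_gain = gain.ewm(alpha=1 / window, min_periods=window).mean()
    avg_loss = loss.ewm(alpha=1 / window, min_periods=window).mean()
    return 100 - 100 / (1 + avg_gain / avg_loss)
```

A steadily rising series pins RSI at 100 and a steadily falling one at 0, which is why the signal logic below reacts to *crossings* of the 30/70 thresholds rather than to the absolute level.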
  ## How It Works

+ 1. Enter a stock ticker (e.g., AAPL, GOOGL, MSFT, TSLA)
+ 2. Select a lookback timeframe
+ 3. The dashboard fetches historical price data, computes technical indicators, and identifies trading signals
+ 4. Review charts, signals, and backtest results across three organized tabs
+ ## Indicators & Signals

+ | Indicator | Signal Logic |
+ |-----------|-------------|
+ | SMA Crossover | Buy when SMA-20 crosses above SMA-50; Sell when SMA-20 crosses below SMA-50 |
+ | MACD Crossover | Buy when MACD line crosses above Signal line; Sell on downward cross |
+ | RSI Extremes | Buy when RSI crosses above 30 (oversold); Sell when RSI crosses below 70 (overbought) |
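The crossover rules in the table reduce to comparing today's and yesterday's ordering of the two series. A minimal pandas sketch (an illustrative helper, not the app's actual implementation; the demo uses short windows so a toy series suffices):

```python
import pandas as pd

def sma_crossover_signals(close: pd.Series, fast: int = 20, slow: int = 50) -> pd.Series:
    """+1 on a golden cross (fast SMA moves above slow), -1 on a death cross, else 0."""
    sma_fast = close.rolling(fast).mean()
    sma_slow = close.rolling(slow).mean()
    above = sma_fast > sma_slow
    # Only trust a cross once both today's and yesterday's slow SMA exist.
    valid = sma_slow.notna() & sma_slow.shift(1).notna()
    crossed_up = above & ~above.shift(1, fill_value=False) & valid
    crossed_down = ~above & above.shift(1, fill_value=False) & valid
    return crossed_up.astype(int) - crossed_down.astype(int)
```

With `fast=2, slow=3` on the series `[1, 2, 3, 4, 5, 4, 3, 2, 1]`, the fast average drops below the slow one on the way down, producing exactly one sell signal.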
+ ## Disclaimer

+ **This application is for educational and informational purposes only. It does NOT constitute financial advice, investment recommendations, or solicitation to buy or sell any securities.** Past performance and backtesting results do not guarantee future returns. Always consult a qualified financial advisor before making investment decisions. The creator assumes no liability for any financial losses incurred from using this tool.
  ## Author

+ Built by [Lorenzo Scaturchio (gr8monk3ys)](https://huggingface.co/gr8monk3ys)
app.py
CHANGED
@@ -1,535 +1,909 @@
  """
- of academic papers. It supports both PDF uploads and pasted text input, handles long
- documents through intelligent chunking, and produces summaries with extracted titles,
- key findings, methodology notes, and concise abstracts.

  Author: Lorenzo Scaturchio (gr8monk3ys)
  License: MIT
  """

- import
- import re
- import logging
  from typing import Optional

- import fitz  # PyMuPDF
  import gradio as gr
-     level=logging.INFO,
-     format="%(asctime)s [%(levelname)s] %(message)s",
- )
- logger = logging.getLogger(__name__)

  # ---------------------------------------------------------------------------
  # Constants
  # ---------------------------------------------------------------------------

  # ---------------------------------------------------------------------------
- #
  # ---------------------------------------------------------------------------
- logger.info("Initializing HuggingFace Inference Client for: %s", MODEL_NAME)
- client = InferenceClient(model=MODEL_NAME)
- logger.info("Inference client ready.")

- def extract_text_from_pdf(pdf_path: str) -> str:
-     """Extract all text content from a PDF file using PyMuPDF.

      Args:

      Returns:

-     Raises:
-         ValueError: If the PDF contains no extractable text.
      """
      try:
-         pages: list[str] = []
-         for page_num, page in enumerate(doc):
-             text = page.get_text("text")
-             if text.strip():
-                 pages.append(text)
-                 logger.debug("Page %d: extracted %d characters", page_num + 1, len(text))
-
-         doc.close()
-
-         if not pages:
-             raise ValueError(
-                 "The PDF appears to contain no extractable text. "
-                 "It may be a scanned document or consist only of images."
              )

- # ===========================================================================

- def extract_title(text: str) -> str:
-     """Attempt to extract the paper title from the first few lines.
-
-     Academic papers typically place the title in the first 1-5 lines before the
-     author block. We use a simple heuristic: the longest line among the first
-     few non-empty lines that is not all-caps (which would be a header like
-     "ABSTRACT") and does not look like an author list.
-     """
-     lines = [ln.strip() for ln in text.split("\n") if ln.strip()][:12]
-
-     candidates: list[str] = []
-     for line in lines:
-         # Skip very short lines (page numbers, dates, etc.)
-         if len(line) < 10:
-             continue
-         # Skip lines that are likely author names / affiliations (contain '@')
-         if "@" in line:
-             continue
-         # Skip lines that are section headers (all uppercase, short)
-         if line.isupper() and len(line) < 60:
-             continue
-         # Skip lines that look like emails or URLs
-         if re.search(r"https?://|www\.", line):
-             continue
-         candidates.append(line)
-
-     if not candidates:
-         return "Untitled Paper"
-
-     # Return the first substantial candidate (titles usually come first)
-     return candidates[0]

- # ===========================================================================
- # Chunking and summarization
- # ===========================================================================

- def chunk_text(text: str, max_words: int = CHUNK_WORD_LIMIT) -> list[str]:
-     """Split text into chunks of approximately *max_words* words.
-
-     Splitting is done on paragraph boundaries where possible so that chunks
-     remain coherent. If a single paragraph exceeds the limit it is split on
-     sentence boundaries instead.
      """

- def
-     """
      """

-     max_len = min(SUMMARY_MAX_LENGTH, max(50, word_count // 2))
-     min_len = min(SUMMARY_MIN_LENGTH, max_len - 10)
          )
-         return result.summary_text
-     except Exception as e:
-         logger.warning("Summarization failed: %s", e)
-         # Fallback: return truncated text
-         return " ".join(text.split()[:100]) + "..."

- def generate_full_summary(text: str) -> str:
-     """Produce a final summary for arbitrarily long documents.
      """

-     chunk_summaries = [summarize_text(chunk) for chunk in chunks]

-     if combined_words < 50:
-         return combined
          )
-         return result.summary_text
-     except Exception as e:
-         logger.warning("Combined summarization failed: %s", e)
-         return combined

- #
      )
-     match = pattern.search(text)
-     if match:
-         content = match.group(1).strip()
-         if len(content) > 30:
-             return content
-     return fallback

- def extract_key_findings(text: str) -> str:
-     """Try to extract key findings from Results / Conclusion sections, or
-     fall back to summarizing the last portion of the paper."""
-     for heading in [
-         r"(?:key\s+)?findings",
-         r"results?\s*(?:and\s+discussion)?",
-         r"conclusions?\s*(?:and\s+future\s+work)?",
-         r"discussion",
-     ]:
-         content = extract_section(text, heading)
-         if content:
-             return summarize_text(content[:3000])
-     # Fallback: summarize the last quarter of the document.
-     words = text.split()
-     tail = " ".join(words[-(len(words) // 4):])
-     if len(tail.split()) > 50:
-         return summarize_text(tail[:3000])
-     return "Key findings could not be automatically extracted."

- def extract_methodology(text: str) -> str:
-     """Try to extract methodology information from the paper."""
-     for heading in [
-         r"method(?:ology|s)?",
-         r"approach",
-         r"experimental\s+setup",
-         r"materials?\s+and\s+methods",
-         r"(?:proposed\s+)?(?:framework|system|model|architecture)",
-     ]:
-         content = extract_section(text, heading)
-         if content:
-             return summarize_text(content[:3000])
-     return "Methodology section could not be automatically extracted."

- # ===========================================================================

-     pdf_file: Optional[str],
-     pasted_text: Optional[str],
- ) -> str:
-     """Process a research paper and return a structured summary.
-         "
      )

-     # 3. Format the output
-     # ------------------------------------------------------------------
-     output = f"""## {title}

- ---

- #

- ---

  ---

- | Metric | Value |
- |---|---|
- | Original length | {original_word_count:,} words |
- | Summary length | {summary_word_count:,} words |
- | Compression ratio | {original_word_count / max(summary_word_count, 1):.1f}x |
  """
-     return output

- EXAMPLE_TEXT = """Attention Is All You Need

- On the WMT 2014 English-to-German translation task, the big transformer model outperforms the best previously reported models including ensembles by more than 2.0 BLEU, establishing a new state-of-the-art BLEU score of 28.4. On the WMT 2014 English-to-French translation task, our big model achieves a BLEU score of 41.0, outperforming all of the previously published single models, at less than 1/4 the training cost of the previous state-of-the-art model. The Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers.

- Conclusions
- In this work, we presented the Transformer, the first sequence transduction model based entirely on attention, replacing the recurrent layers most commonly used in encoder-decoder architectures with multi-headed self-attention. The Transformer can be trained significantly faster than architectures based on recurrent or convolutional layers. We achieved new state of the art on both WMT 2014 English-to-German and WMT 2014 English-to-French translation tasks. We plan to extend the Transformer to problems involving input and output modalities other than text and to investigate local, restricted attention mechanisms to efficiently handle large inputs and outputs such as images, audio and video."""

- # ===========================================================================
- # Gradio interface
- # ===========================================================================

- def
-     """

  with gr.Blocks(
-     title="
-     theme=gr.themes.
-         primary_hue=
-         secondary_hue=
      ),
-     css="""
-     footer { display: none !important; }
-     """,
- ) as demo:
-     # --- Header ---
      gr.Markdown(
-         ""
-         <p class="subheader">
-             Summarize academic research papers into structured, digestible insights.<br>
-             Upload a PDF or paste the full text below.
-         </p>
-         """,
      )

      with gr.Row():
      )

-     gr.Markdown("### Structured Summary")
-     output = gr.Markdown(
-         value="*Your summary will appear here after processing.*",
-         label="Summary",
      )
      )

-     return

- #
- # Entry
- #

  if __name__ == "__main__":
-     app =
      app.launch()
  """
+ Trading Signal Dashboard
+ ========================
+ A Gradio-based technical analysis dashboard that fetches real stock data,
+ computes indicators, generates buy/sell signals, and backtests strategies.

+ Version: 2.0.0 (Gradio 5.x compatible)

  Author: Lorenzo Scaturchio (gr8monk3ys)
  License: MIT
+
+ DISCLAIMER: This tool is for educational purposes only and does NOT
+ constitute financial advice. Use at your own risk.
  """

+ import datetime
  from typing import Optional

  import gradio as gr
+ import numpy as np
+ import pandas as pd
+ import plotly.graph_objects as go
+ import ta
+ import yfinance as yf
+ from plotly.subplots import make_subplots

  # ---------------------------------------------------------------------------
  # Constants
  # ---------------------------------------------------------------------------
+
+ DEFAULT_TICKERS = ["AAPL", "GOOGL", "MSFT", "TSLA"]
+
+ TIMEFRAME_MAP = {
+     "1 Month": 30,
+     "3 Months": 90,
+     "6 Months": 180,
+     "1 Year": 365,
+     "2 Years": 730,
+ }
+
+ COLORS = {
+     "bg": "#0e1117",
+     "card": "#1a1d29",
+     "text": "#e0e0e0",
+     "green": "#00d4aa",
+     "red": "#ff6b6b",
+     "blue": "#4dabf7",
+     "purple": "#b197fc",
+     "orange": "#ffa94d",
+     "yellow": "#ffe066",
+     "grid": "#2a2d3a",
+     "band_fill": "rgba(77, 171, 247, 0.08)",
+ }
+
+ DISCLAIMER_TEXT = (
+     "**Disclaimer:** This dashboard is for educational and informational "
+     "purposes only. It does NOT constitute financial advice. Past performance "
+     "does not guarantee future results. Always do your own research and consult "
+     "a qualified financial advisor before making investment decisions."
+ )

  # ---------------------------------------------------------------------------
+ # Data Fetching
  # ---------------------------------------------------------------------------


+ def fetch_stock_data(
+     ticker: str, timeframe: str
+ ) -> tuple[Optional[pd.DataFrame], Optional[str]]:
+     """Fetch historical stock data from Yahoo Finance.

      Args:
+         ticker: Stock ticker symbol (e.g. 'AAPL').
+         timeframe: Human-readable timeframe key from TIMEFRAME_MAP.

      Returns:
+         Tuple of (DataFrame with OHLCV data, error message or None).
      """
+     days = TIMEFRAME_MAP.get(timeframe, 365)
+     end_date = datetime.date.today()
+     start_date = end_date - datetime.timedelta(days=days)
+
      try:
+         data = yf.download(
+             ticker,
+             start=start_date.isoformat(),
+             end=end_date.isoformat(),
+             progress=False,
+             auto_adjust=True,
          )
+         if data.empty:
+             return None, f"No data found for ticker '{ticker}'. Verify the symbol."

+         # Flatten MultiIndex columns if present (yfinance >= 0.2.31 quirk)
+         if isinstance(data.columns, pd.MultiIndex):
+             data.columns = data.columns.get_level_values(0)

+         # Ensure expected columns exist
+         required = {"Open", "High", "Low", "Close", "Volume"}
+         if not required.issubset(set(data.columns)):
+             return None, f"Incomplete data columns for '{ticker}'."

+         data = data.copy()
+         data.index = pd.to_datetime(data.index)
+         return data, None

+     except Exception as exc:
+         return None, f"Error fetching data for '{ticker}': {exc}"


+ # ---------------------------------------------------------------------------
+ # Technical Indicators
+ # ---------------------------------------------------------------------------


+ def compute_indicators(df: pd.DataFrame) -> pd.DataFrame:
+     """Compute all technical indicators on an OHLCV DataFrame.
+
+     Adds the following columns:
+         SMA_20, SMA_50, EMA_12, EMA_26, RSI,
+         MACD, MACD_Signal, MACD_Hist,
+         BB_Upper, BB_Middle, BB_Lower
      """
+     close = df["Close"].astype(float)
+     high = df["High"].astype(float)
+     low = df["Low"].astype(float)
+
+     # Simple Moving Averages
+     df["SMA_20"] = ta.trend.sma_indicator(close, window=20)
+     df["SMA_50"] = ta.trend.sma_indicator(close, window=50)
+
+     # Exponential Moving Averages
+     df["EMA_12"] = ta.trend.ema_indicator(close, window=12)
+     df["EMA_26"] = ta.trend.ema_indicator(close, window=26)
+
+     # Relative Strength Index
+     df["RSI"] = ta.momentum.rsi(close, window=14)
+
+     # MACD
+     macd_obj = ta.trend.MACD(close, window_slow=26, window_fast=12, window_sign=9)
+     df["MACD"] = macd_obj.macd()
+     df["MACD_Signal"] = macd_obj.macd_signal()
+     df["MACD_Hist"] = macd_obj.macd_diff()
+
+     # Bollinger Bands
+     bb_obj = ta.volatility.BollingerBands(close, window=20, window_dev=2)
+     df["BB_Upper"] = bb_obj.bollinger_hband()
+     df["BB_Middle"] = bb_obj.bollinger_mavg()
+     df["BB_Lower"] = bb_obj.bollinger_lband()
+
+     return df
+
# ---------------------------------------------------------------------------
|
| 155 |
+
# Signal Generation
|
| 156 |
+
# ---------------------------------------------------------------------------
|
| 157 |
+
|
| 158 |
+
|
| 159 |
+
def generate_signals(df: pd.DataFrame) -> pd.DataFrame:
|
| 160 |
+
"""Generate composite buy/sell signals from indicator crossovers.
|
| 161 |
+
|
| 162 |
+
Adds columns: Signal, Signal_Reason.
|
| 163 |
+
Signal values: 1 (buy), -1 (sell), 0 (hold).
|
| 164 |
"""
|
| 165 |
+
n = len(df)
|
| 166 |
+
signals = np.zeros(n, dtype=int)
|
| 167 |
+
reasons = [""] * n
|
| 168 |
+
|
| 169 |
+
sma20 = df["SMA_20"].values
|
| 170 |
+
sma50 = df["SMA_50"].values
|
| 171 |
+
macd = df["MACD"].values
|
| 172 |
+
macd_sig = df["MACD_Signal"].values
|
| 173 |
+
rsi = df["RSI"].values
|
| 174 |
+
|
| 175 |
+
for i in range(1, n):
|
| 176 |
+
buy_reasons: list[str] = []
|
| 177 |
+
sell_reasons: list[str] = []
|
| 178 |
+
|
| 179 |
+
# --- SMA crossover --------------------------------------------------
|
| 180 |
+
if _crosses_above(sma20, sma50, i):
|
| 181 |
+
buy_reasons.append("SMA 20/50 golden cross")
|
| 182 |
+
elif _crosses_below(sma20, sma50, i):
|
| 183 |
+
sell_reasons.append("SMA 20/50 death cross")
|
| 184 |
+
|
| 185 |
+
# --- MACD crossover --------------------------------------------------
|
| 186 |
+
if _crosses_above(macd, macd_sig, i):
|
| 187 |
+
buy_reasons.append("MACD bullish crossover")
|
| 188 |
+
elif _crosses_below(macd, macd_sig, i):
|
| 189 |
+
sell_reasons.append("MACD bearish crossover")
|
| 190 |
+
|
| 191 |
+
# --- RSI extremes ----------------------------------------------------
|
| 192 |
+
if not np.isnan(rsi[i]) and not np.isnan(rsi[i - 1]):
|
| 193 |
+
if rsi[i - 1] <= 30 < rsi[i]:
|
| 194 |
+
buy_reasons.append("RSI exits oversold (<30)")
|
| 195 |
+
elif rsi[i - 1] >= 70 > rsi[i]:
|
| 196 |
+
sell_reasons.append("RSI exits overbought (>70)")
|
| 197 |
+
|
| 198 |
+
# --- Composite decision (majority vote) ------------------------------
|
| 199 |
+
if len(buy_reasons) > len(sell_reasons) and len(buy_reasons) >= 1:
|
| 200 |
+
signals[i] = 1
|
| 201 |
+
reasons[i] = "; ".join(buy_reasons)
|
| 202 |
+
elif len(sell_reasons) > len(buy_reasons) and len(sell_reasons) >= 1:
|
| 203 |
+
signals[i] = -1
|
| 204 |
+
reasons[i] = "; ".join(sell_reasons)
|
| 205 |
+
|
| 206 |
+
df["Signal"] = signals
|
| 207 |
+
df["Signal_Reason"] = reasons
|
| 208 |
+
return df
|
| 209 |
+
|
| 210 |
+
|
| 211 |
+
def _crosses_above(fast: np.ndarray, slow: np.ndarray, i: int) -> bool:
|
| 212 |
+
"""Return True if *fast* crosses above *slow* at index *i*."""
|
| 213 |
+
if np.isnan(fast[i]) or np.isnan(slow[i]) or np.isnan(fast[i - 1]) or np.isnan(slow[i - 1]):
|
| 214 |
+
return False
|
| 215 |
+
return fast[i - 1] <= slow[i - 1] and fast[i] > slow[i]
|
| 216 |
+
|
| 217 |
+
|
| 218 |
+
def _crosses_below(fast: np.ndarray, slow: np.ndarray, i: int) -> bool:
|
| 219 |
+
"""Return True if *fast* crosses below *slow* at index *i*."""
|
| 220 |
+
if np.isnan(fast[i]) or np.isnan(slow[i]) or np.isnan(fast[i - 1]) or np.isnan(slow[i - 1]):
|
| 221 |
+
return False
|
| 222 |
+
return fast[i - 1] >= slow[i - 1] and fast[i] < slow[i]
|
| 223 |
|
|
|
|
|
|
|
| 224 |
|
| 225 |
+
# ---------------------------------------------------------------------------
# Signal Summary Table
# ---------------------------------------------------------------------------


def build_signal_table(df: pd.DataFrame) -> pd.DataFrame:
    """Return a tidy DataFrame of only the rows where a signal fired."""
    mask = df["Signal"] != 0
    if mask.sum() == 0:
        return pd.DataFrame(columns=["Date", "Close", "Type", "Reason"])

    out = df.loc[mask, ["Close", "Signal", "Signal_Reason"]].copy()
    out.index.name = "Date"
    out = out.reset_index()
    out["Date"] = out["Date"].dt.strftime("%Y-%m-%d")
    out["Close"] = out["Close"].round(2)
    out["Type"] = out["Signal"].map({1: "BUY", -1: "SELL"})
    out["Reason"] = out["Signal_Reason"]
    return out[["Date", "Close", "Type", "Reason"]]

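`build_signal_table` can be exercised without live market data by feeding it a small frame with a `DatetimeIndex`. The demo below repeats the function verbatim so it runs standalone; the input rows are synthetic:

```python
import pandas as pd

def build_signal_table(df: pd.DataFrame) -> pd.DataFrame:
    """Copy of the function above, repeated here so the demo is self-contained."""
    mask = df["Signal"] != 0
    if mask.sum() == 0:
        return pd.DataFrame(columns=["Date", "Close", "Type", "Reason"])
    out = df.loc[mask, ["Close", "Signal", "Signal_Reason"]].copy()
    out.index.name = "Date"
    out = out.reset_index()
    out["Date"] = out["Date"].dt.strftime("%Y-%m-%d")
    out["Close"] = out["Close"].round(2)
    out["Type"] = out["Signal"].map({1: "BUY", -1: "SELL"})
    out["Reason"] = out["Signal_Reason"]
    return out[["Date", "Close", "Type", "Reason"]]

df = pd.DataFrame(
    {
        "Close": [100.004, 101.5, 99.257],
        "Signal": [0, 1, -1],
        "Signal_Reason": ["", "MACD bullish crossover", "RSI exits overbought (>70)"],
    },
    index=pd.date_range("2024-01-01", periods=3),
)

table = build_signal_table(df)
print(table.to_string(index=False))
# Two rows survive the mask: a BUY on 2024-01-02 and a SELL on 2024-01-03,
# with Close rounded to two decimals.
```

The zero-signal row is dropped by the mask, dates come back as `YYYY-MM-DD` strings, and the `Signal` integers are mapped to the human-readable `BUY`/`SELL` labels shown in the Signals tab.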
# ---------------------------------------------------------------------------
# Backtesting Engine
# ---------------------------------------------------------------------------


def run_backtest(df: pd.DataFrame) -> dict:
    """Run a simple signal-following backtest and compare to buy-and-hold.

    Strategy rules:
      - Start in cash.
      - On a BUY signal, go fully invested (buy at close).
      - On a SELL signal, exit to cash (sell at close).
      - At the end, liquidate any open position.

    Returns a dict with summary statistics.
    """
    close = df["Close"].values.astype(float)
    signals = df["Signal"].values

    # Buy-and-hold
    bh_return = (close[-1] / close[0] - 1) * 100 if close[0] != 0 else 0.0

    # Signal strategy
    cash = 10_000.0
    shares = 0.0
    initial_capital = cash
    in_position = False
    trades = 0
    winning_trades = 0
    entry_price = 0.0

    equity_curve = np.full(len(close), np.nan)

    for i in range(len(close)):
        if signals[i] == 1 and not in_position and cash > 0:
            shares = cash / close[i]
            entry_price = close[i]
            cash = 0.0
            in_position = True
            trades += 1
        elif signals[i] == -1 and in_position:
            cash = shares * close[i]
            if close[i] > entry_price:
                winning_trades += 1
            shares = 0.0
            in_position = False

        equity_curve[i] = cash + shares * close[i]

    # Liquidate at end if still in position
    if in_position:
        cash = shares * close[-1]
        if close[-1] > entry_price:
            winning_trades += 1
        shares = 0.0
        equity_curve[-1] = cash

    final_value = cash + shares * close[-1]
    strategy_return = (final_value / initial_capital - 1) * 100
    win_rate = (winning_trades / trades * 100) if trades > 0 else 0.0

    # Equity curve for charting (forward-fill NaN gaps)
    eq_series = pd.Series(equity_curve)
    eq_series = eq_series.ffill().bfill()

    return {
        "initial_capital": initial_capital,
        "final_value": round(final_value, 2),
        "strategy_return_pct": round(strategy_return, 2),
        "buy_hold_return_pct": round(bh_return, 2),
        "total_trades": trades,
        "winning_trades": winning_trades,
        "win_rate_pct": round(win_rate, 1),
        "equity_curve": eq_series.values,
    }

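The cash/shares state machine in `run_backtest` is easiest to sanity-check on hand-picked prices. The sketch below is a condensed rewrite of just that loop (no equity curve, no DataFrame), not the function itself; the `run_backtest_core` name is invented for the demo:

```python
import numpy as np

def run_backtest_core(close: np.ndarray, signals: np.ndarray) -> dict:
    """Condensed version of the loop above: all-in on BUY, all-out on SELL."""
    cash, shares, in_pos = 10_000.0, 0.0, False
    trades = wins = 0
    entry = 0.0
    for i in range(len(close)):
        if signals[i] == 1 and not in_pos:
            shares, entry, cash, in_pos = cash / close[i], close[i], 0.0, True
            trades += 1
        elif signals[i] == -1 and in_pos:
            cash = shares * close[i]
            wins += int(close[i] > entry)
            shares, in_pos = 0.0, False
    final = cash + shares * close[-1]  # implicit liquidation at the last close
    return {
        "strategy_return_pct": (final / 10_000.0 - 1) * 100,
        "buy_hold_return_pct": (close[-1] / close[0] - 1) * 100,
        "trades": trades,
        "win_rate_pct": 100.0 * wins / trades if trades else 0.0,
    }

close = np.array([10.0, 12.0, 15.0, 9.0])
signals = np.array([1, 0, -1, 0])
res = run_backtest_core(close, signals)
print(res)  # buys at 10, sells at 15 (+50%), while buy-and-hold ends at 9 (-10%)
```

Buying 1,000 shares at 10 and selling at 15 turns $10,000 into $15,000, so the strategy return is +50% versus -10% for buy-and-hold, with one trade and a 100% win rate. Note the full function also credits a win on the final forced liquidation; this sketch omits that branch because the toy position closes before the series ends.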
# ---------------------------------------------------------------------------
# Chart Building
# ---------------------------------------------------------------------------

_PLOTLY_LAYOUT_DEFAULTS = dict(
    template="plotly_dark",
    paper_bgcolor=COLORS["bg"],
    plot_bgcolor=COLORS["bg"],
    font=dict(color=COLORS["text"], family="Inter, sans-serif"),
    hovermode="x unified",
    legend=dict(
        orientation="h",
        yanchor="bottom",
        y=1.02,
        xanchor="right",
        x=1,
        font=dict(size=11),
    ),
    margin=dict(l=60, r=30, t=60, b=40),
)


def build_main_chart(df: pd.DataFrame, ticker: str) -> go.Figure:
    """Build the multi-subplot price / RSI / MACD / Volume chart."""

    fig = make_subplots(
        rows=4,
        cols=1,
        shared_xaxes=True,
        vertical_spacing=0.03,
        row_heights=[0.45, 0.18, 0.18, 0.19],
        subplot_titles=("", "", "", ""),
    )

    dates = df.index

    # --- Row 1: Candlestick + overlays + Bollinger Bands --------------------
    fig.add_trace(
        go.Candlestick(
            x=dates,
            open=df["Open"],
            high=df["High"],
            low=df["Low"],
            close=df["Close"],
            increasing_line_color=COLORS["green"],
            decreasing_line_color=COLORS["red"],
            name="Price",
        ),
        row=1,
        col=1,
    )

    # Bollinger Bands (shaded region)
    fig.add_trace(
        go.Scatter(
            x=dates,
            y=df["BB_Upper"],
            line=dict(width=0),
            showlegend=False,
            hoverinfo="skip",
        ),
        row=1,
        col=1,
    )
    fig.add_trace(
        go.Scatter(
            x=dates,
            y=df["BB_Lower"],
            fill="tonexty",
            fillcolor=COLORS["band_fill"],
            line=dict(width=0),
            name="Bollinger Bands",
        ),
        row=1,
        col=1,
    )

    # SMA / EMA lines
    for col_name, color, dash in [
        ("SMA_20", COLORS["blue"], "solid"),
        ("SMA_50", COLORS["purple"], "solid"),
        ("EMA_12", COLORS["orange"], "dot"),
        ("EMA_26", COLORS["yellow"], "dot"),
    ]:
        fig.add_trace(
            go.Scatter(
                x=dates,
                y=df[col_name],
                mode="lines",
                line=dict(color=color, width=1.2, dash=dash),
                name=col_name,
            ),
            row=1,
            col=1,
        )

    # Buy / Sell markers
    buys = df[df["Signal"] == 1]
    sells = df[df["Signal"] == -1]

    if not buys.empty:
        fig.add_trace(
            go.Scatter(
                x=buys.index,
                y=buys["Close"],
                mode="markers",
                marker=dict(
                    symbol="triangle-up",
                    size=12,
                    color=COLORS["green"],
                    line=dict(color="white", width=1),
                ),
                name="Buy Signal",
                text=buys["Signal_Reason"],
                hovertemplate="%{text}<extra>BUY</extra>",
            ),
            row=1,
            col=1,
        )

    if not sells.empty:
        fig.add_trace(
            go.Scatter(
                x=sells.index,
                y=sells["Close"],
                mode="markers",
                marker=dict(
                    symbol="triangle-down",
                    size=12,
                    color=COLORS["red"],
                    line=dict(color="white", width=1),
                ),
                name="Sell Signal",
                text=sells["Signal_Reason"],
                hovertemplate="%{text}<extra>SELL</extra>",
            ),
            row=1,
            col=1,
        )

    # --- Row 2: RSI ---------------------------------------------------------
    fig.add_trace(
        go.Scatter(
            x=dates,
            y=df["RSI"],
            mode="lines",
            line=dict(color=COLORS["purple"], width=1.3),
            name="RSI (14)",
        ),
        row=2,
        col=1,
    )
    # Overbought / oversold lines
    for level, clr in [(70, COLORS["red"]), (30, COLORS["green"])]:
        fig.add_hline(
            y=level,
            line_dash="dash",
            line_color=clr,
            opacity=0.5,
            row=2,
            col=1,
        )
    # Shade the 30-70 zone
    fig.add_hrect(
        y0=30,
        y1=70,
        fillcolor="rgba(255,255,255,0.03)",
        line_width=0,
        row=2,
        col=1,
    )

    # --- Row 3: MACD --------------------------------------------------------
    macd_colors = [
        COLORS["green"] if v >= 0 else COLORS["red"]
        for v in df["MACD_Hist"].fillna(0)
    ]
    fig.add_trace(
        go.Bar(
            x=dates,
            y=df["MACD_Hist"],
            marker_color=macd_colors,
            name="MACD Hist",
            showlegend=False,
        ),
        row=3,
        col=1,
    )
    fig.add_trace(
        go.Scatter(
            x=dates,
            y=df["MACD"],
            mode="lines",
            line=dict(color=COLORS["blue"], width=1.2),
            name="MACD",
        ),
        row=3,
        col=1,
    )
    fig.add_trace(
        go.Scatter(
            x=dates,
            y=df["MACD_Signal"],
            mode="lines",
            line=dict(color=COLORS["orange"], width=1.2),
            name="Signal Line",
        ),
        row=3,
        col=1,
    )

    # --- Row 4: Volume ------------------------------------------------------
    vol_colors = [
        COLORS["green"] if c >= o else COLORS["red"]
        for c, o in zip(df["Close"], df["Open"])
    ]
    fig.add_trace(
        go.Bar(
            x=dates,
            y=df["Volume"],
            marker_color=vol_colors,
            name="Volume",
            showlegend=False,
        ),
        row=4,
        col=1,
    )

    # --- Layout -------------------------------------------------------------
    fig.update_layout(
        **_PLOTLY_LAYOUT_DEFAULTS,
        title=dict(
            text=f"{ticker} -- Technical Analysis Dashboard",
            font=dict(size=20),
            x=0.5,
        ),
        height=900,
        xaxis_rangeslider_visible=False,
    )

    # Y-axis labels
    fig.update_yaxes(title_text="Price ($)", row=1, col=1, gridcolor=COLORS["grid"])
    fig.update_yaxes(title_text="RSI", row=2, col=1, gridcolor=COLORS["grid"], range=[0, 100])
    fig.update_yaxes(title_text="MACD", row=3, col=1, gridcolor=COLORS["grid"])
    fig.update_yaxes(title_text="Volume", row=4, col=1, gridcolor=COLORS["grid"])

    for i in range(1, 5):
        fig.update_xaxes(gridcolor=COLORS["grid"], row=i, col=1)

    return fig

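Both the MACD histogram and the volume bars use the same per-bar coloring trick: a list comprehension that picks green or red for each bar. For volume it is green when the candle closed at or above its open, red otherwise. In isolation (the hex strings are placeholders standing in for the app's `COLORS` entries):

```python
GREEN, RED = "#00d4aa", "#ff5b5b"  # placeholder hex values, not the app's actual palette

closes = [10.0, 11.0, 10.5]
opens = [10.5, 10.0, 10.5]

# One color per bar: down candle, up candle, flat candle (flat counts as up)
vol_colors = [GREEN if c >= o else RED for c, o in zip(closes, opens)]
print(vol_colors)  # ['#ff5b5b', '#00d4aa', '#00d4aa']
```

Plotly's `marker_color` accepts such a per-point list directly, so no extra trace per color is needed.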

def build_backtest_chart(
    df: pd.DataFrame, backtest: dict, ticker: str
) -> go.Figure:
    """Build the equity-curve comparison chart for the backtest tab."""

    close = df["Close"].values.astype(float)
    dates = df.index

    # Normalize buy-and-hold to same starting capital
    bh_equity = (close / close[0]) * backtest["initial_capital"]

    fig = go.Figure()

    fig.add_trace(
        go.Scatter(
            x=dates,
            y=backtest["equity_curve"],
            mode="lines",
            name="Signal Strategy",
            line=dict(color=COLORS["green"], width=2),
            fill="tozeroy",
            fillcolor="rgba(0, 212, 170, 0.07)",
        )
    )

    fig.add_trace(
        go.Scatter(
            x=dates,
            y=bh_equity,
            mode="lines",
            name="Buy & Hold",
            line=dict(color=COLORS["blue"], width=2, dash="dot"),
        )
    )

    # Buy / sell markers on equity curve
    buys = df[df["Signal"] == 1]
    sells = df[df["Signal"] == -1]

    if not buys.empty:
        buy_indices = [df.index.get_loc(d) for d in buys.index]
        fig.add_trace(
            go.Scatter(
                x=buys.index,
                y=[backtest["equity_curve"][i] for i in buy_indices],
                mode="markers",
                marker=dict(symbol="triangle-up", size=10, color=COLORS["green"]),
                name="Buy",
                showlegend=False,
            )
        )

    if not sells.empty:
        sell_indices = [df.index.get_loc(d) for d in sells.index]
        fig.add_trace(
            go.Scatter(
                x=sells.index,
                y=[backtest["equity_curve"][i] for i in sell_indices],
                mode="markers",
                marker=dict(symbol="triangle-down", size=10, color=COLORS["red"]),
                name="Sell",
                showlegend=False,
            )
        )

    fig.update_layout(
        **_PLOTLY_LAYOUT_DEFAULTS,
        title=dict(
            text=f"{ticker} -- Strategy vs Buy & Hold (${backtest['initial_capital']:,.0f} start)",
            font=dict(size=18),
            x=0.5,
        ),
        yaxis_title="Portfolio Value ($)",
        xaxis_title="Date",
        height=500,
    )

    fig.update_yaxes(gridcolor=COLORS["grid"])
    fig.update_xaxes(gridcolor=COLORS["grid"])

    return fig

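`build_backtest_chart` rescales the raw close series so buy-and-hold starts from the same capital as the strategy, which keeps both curves on one y-axis. The normalization is a single vectorized rescale:

```python
import numpy as np

close = np.array([50.0, 55.0, 45.0, 60.0])
initial_capital = 10_000.0

# value_t = capital * close_t / close_0: the worth of putting the full starting
# capital into the stock at the first close and never trading again
bh_equity = (close / close[0]) * initial_capital
print(bh_equity)  # [10000. 11000.  9000. 12000.]
```

Both curves therefore start at exactly `initial_capital`, so any divergence is attributable to the signals rather than scaling.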
# ---------------------------------------------------------------------------
# Backtest Summary Markdown
# ---------------------------------------------------------------------------


def format_backtest_summary(bt: dict, ticker: str) -> str:
    """Return a Markdown summary of backtest results."""
    outperform = bt["strategy_return_pct"] - bt["buy_hold_return_pct"]

    return f"""
### Backtest Results for {ticker}

| Metric | Value |
|--------|-------|
| Initial Capital | ${bt['initial_capital']:,.2f} |
| Final Portfolio Value | ${bt['final_value']:,.2f} |
| **Strategy Return** | **{bt['strategy_return_pct']:+.2f}%** |
| **Buy & Hold Return** | **{bt['buy_hold_return_pct']:+.2f}%** |
| **Outperformance** | **{outperform:+.2f}%** |
| Total Trades | {bt['total_trades']} |
| Winning Trades | {bt['winning_trades']} |
| Win Rate | {bt['win_rate_pct']:.1f}% |

---

*Starting capital: $10,000. Strategy goes fully invested on BUY signals and exits to cash on SELL signals. No transaction costs or slippage modeled.*
"""

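The only derived number in the summary is outperformance (strategy return minus buy-and-hold return); everything else is read straight from the backtest dict. The signed-percent formatting used in the table behaves like this (sample values, not real results):

```python
bt = {"strategy_return_pct": 12.35, "buy_hold_return_pct": 8.10}

outperform = bt["strategy_return_pct"] - bt["buy_hold_return_pct"]
# ":+.2f" always emits a sign, so underperformance shows up as e.g. "-1.40%"
row = f"| **Outperformance** | **{outperform:+.2f}%** |"
print(row)  # | **Outperformance** | **+4.25%** |
```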
# ---------------------------------------------------------------------------
# Main Analysis Pipeline
# ---------------------------------------------------------------------------


def analyze(ticker: str, timeframe: str):
    """Run the full analysis pipeline and return outputs for all three tabs.

    Returns:
        main_chart: Plotly figure for the Charts tab.
        signal_table: DataFrame for the Signals tab.
        backtest_chart: Plotly figure for the Backtest tab.
        backtest_summary: Markdown string for the Backtest tab.
        status: Status message string.
    """
    ticker = ticker.strip().upper()
    if not ticker:
        empty_fig = go.Figure()
        empty_fig.update_layout(**_PLOTLY_LAYOUT_DEFAULTS, height=400)
        return (
            empty_fig,
            pd.DataFrame(columns=["Date", "Close", "Type", "Reason"]),
            empty_fig,
            "Please enter a valid ticker symbol.",
            "Enter a ticker to begin.",
        )

    # Fetch data
    df, error = fetch_stock_data(ticker, timeframe)
    if error:
        empty_fig = go.Figure()
        empty_fig.update_layout(**_PLOTLY_LAYOUT_DEFAULTS, height=400)
        return (
            empty_fig,
            pd.DataFrame(columns=["Date", "Close", "Type", "Reason"]),
            empty_fig,
            f"**Error:** {error}",
            f"Error: {error}",
        )

    # Compute indicators and signals
    df = compute_indicators(df)
    df = generate_signals(df)

    # Build outputs
    main_chart = build_main_chart(df, ticker)
    signal_table = build_signal_table(df)
    bt = run_backtest(df)
    backtest_chart = build_backtest_chart(df, bt, ticker)
    backtest_summary = format_backtest_summary(bt, ticker)

    n_buys = int((df["Signal"] == 1).sum())
    n_sells = int((df["Signal"] == -1).sum())
    latest_close = df["Close"].iloc[-1]
    latest_rsi = df["RSI"].iloc[-1]

    status = (
        f"**{ticker}** | Last Close: ${latest_close:.2f} | "
        f"RSI: {latest_rsi:.1f} | "
        f"Signals: {n_buys} buys, {n_sells} sells ({timeframe})"
    )

    return main_chart, signal_table, backtest_chart, backtest_summary, status

# ---------------------------------------------------------------------------
# Gradio Interface
# ---------------------------------------------------------------------------


def create_app() -> gr.Blocks:
    """Build and return the Gradio Blocks application."""

    css = """
    .disclaimer {
        background-color: #2a1a1a;
        border-left: 4px solid #ff6b6b;
        padding: 12px 16px;
        border-radius: 4px;
        margin-bottom: 16px;
        font-size: 0.85em;
    }
    .status-bar {
        background-color: #1a1d29;
        padding: 10px 16px;
        border-radius: 6px;
        border: 1px solid #2a2d3a;
        font-size: 0.95em;
    }
    footer { display: none !important; }
    """

    with gr.Blocks(
        title="Trading Signal Dashboard",
        theme=gr.themes.Base(
            primary_hue=gr.themes.colors.purple,
            secondary_hue=gr.themes.colors.pink,
            neutral_hue=gr.themes.colors.gray,
            font=gr.themes.GoogleFont("Inter"),
        ),
        css=css,
    ) as app:
        # Header
        gr.Markdown(
            "# Trading Signal Dashboard\n"
            "Real-time technical analysis with automated signal generation & backtesting"
        )
        gr.Markdown(DISCLAIMER_TEXT, elem_classes=["disclaimer"])

        # Controls
        with gr.Row():
            ticker_input = gr.Dropdown(
                choices=DEFAULT_TICKERS,
                value="AAPL",
                label="Stock Ticker",
                allow_custom_value=True,
                info="Select a preset or type any valid ticker symbol",
                scale=2,
            )
            timeframe_input = gr.Dropdown(
                choices=list(TIMEFRAME_MAP.keys()),
                value="6 Months",
                label="Timeframe",
                scale=1,
            )
            analyze_btn = gr.Button(
                "Analyze",
                variant="primary",
                scale=1,
            )

        # Status bar
        status_output = gr.Markdown(
            "Enter a ticker and click **Analyze** to begin.",
            elem_classes=["status-bar"],
        )

        # Tabbed outputs
        with gr.Tabs():
            with gr.TabItem("Charts", id="charts"):
                chart_output = gr.Plot(label="Technical Analysis Chart")

            with gr.TabItem("Signals", id="signals"):
                gr.Markdown("### Recent Trading Signals")
                gr.Markdown(
                    "Signals are generated from SMA crossovers, MACD crossovers, "
                    "and RSI overbought/oversold exits."
                )
                signal_table_output = gr.Dataframe(
                    headers=["Date", "Close", "Type", "Reason"],
                    label="Signal Log",
                    wrap=True,
                )

            with gr.TabItem("Backtest Results", id="backtest"):
                gr.Markdown("### Strategy Backtest")
                gr.Markdown(
                    "Simulates following the generated signals with a $10,000 starting "
                    "portfolio and compares against a simple buy-and-hold strategy."
                )
                backtest_summary_output = gr.Markdown()
                backtest_chart_output = gr.Plot(label="Equity Curve")

        # The button and both dropdowns drive the same set of views
        outputs = [
            chart_output,
            signal_table_output,
            backtest_chart_output,
            backtest_summary_output,
            status_output,
        ]

        # Wire up the button
        analyze_btn.click(fn=analyze, inputs=[ticker_input, timeframe_input], outputs=outputs)

        # Also trigger on dropdown change for quick exploration
        ticker_input.change(fn=analyze, inputs=[ticker_input, timeframe_input], outputs=outputs)
        timeframe_input.change(fn=analyze, inputs=[ticker_input, timeframe_input], outputs=outputs)

        # Footer
        gr.Markdown(
            "---\n"
            "Built by [Lorenzo Scaturchio](https://huggingface.co/gr8monk3ys) | "
            "Data from Yahoo Finance | "
            "Not financial advice"
        )

    return app


# ---------------------------------------------------------------------------
# Entry Point
# ---------------------------------------------------------------------------

if __name__ == "__main__":
    app = create_app()
    app.launch()

requirements.txt
CHANGED

gradio==5.9.1
yfinance>=0.2.31
plotly>=5.18.0
pandas>=2.0.0
numpy>=1.24.0
ta>=0.11.0