Commit da3a00e by Alexo19 (verified) · 1 parent: 5919422

You are now the **Lead AI Architect** for my project:


🏆 Project: **CryptoSignal-Sleuth**

A production-ready, multi-model crypto signal dashboard deployed on Hugging Face Spaces.

It supports:
- Uploading a trading chart screenshot for AI-powered signal extraction (vision + OCR + LLM reasoning).
- Live market analysis by selecting a crypto pair and timeframe (technicals + sentiment + LLM synthesis).
- Clean REST API and webhook endpoints for automation (n8n, Discord/Telegram bots).

Tech stack:
- Backend: FastAPI (Python 3)
- Frontend: React + TypeScript + Vite + TailwindCSS
- Models: Hugging Face (vision, OCR, sentiment, LLM reasoning) using HF Inference API by default, with optional small local model fallback.

===========================================================
## 0. PROJECT STRUCTURE (VERY IMPORTANT – FOLLOW THIS)

Use **exactly** this structure for all files you generate:

```
app.py
backend/
    config.py
    main.py
    models_registry.py
    signal_engine.py
    image_analysis.py
    timeseries_analysis.py
    sentiment_analysis.py
frontend/
    index.html
    package.json
    tsconfig.json
    vite.config.ts
    postcss.config.js
    tailwind.config.js
    src/
        index.css
        main.tsx
        App.tsx
        lib/api.ts
        pages/
            Dashboard.tsx
            Backtest.tsx
            Settings.tsx
            Docs.tsx
        components/
            Navbar.tsx
            Sidebar.tsx
            ChartUploadCard.tsx
            PairSelector.tsx
            SignalPanel.tsx
            HistoryTable.tsx
            ModelSettingsPanel.tsx
requirements.txt
```

Do NOT invent a different structure. Fill these files with working code.

===========================================================
## 1. BACKEND – FastAPI, Fully Working

Implement a production-ready FastAPI backend with:

### 1.1 Environment + Config

Create `backend/config.py` that:

- Loads env vars:
  - `HF_TOKEN`
  - `USE_INFERENCE_API` (default `"1"`)
  - `INFERENCE_LLM_MODEL` (default `"Qwen/Qwen2.5-7B-Instruct"`)
  - `LOCAL_LLM_MODEL` (default `"google/flan-t5-base"`)
  - `FRONTEND_BASE_PATH` (default `"/"`)
  - `API_BASE_URL` (default `"/"`)
  - `WEBHOOK_API_KEY` (optional)
- Exposes a `Settings` class (Pydantic `BaseSettings`).
- Provides helper functions for model selection (Inference API vs. local).

### 1.2 Models Registry

`backend/models_registry.py`:

- Register and initialize:
  - Vision model (for charts) via the HF Inference API.
  - OCR model (for numbers/labels on the chart).
  - Sentiment model (text classification for headlines).
  - LLM for reasoning (via the Inference API if `USE_INFERENCE_API=1`, else a local small model).

- Provide clean helper functions:
  - `analyze_chart_image(image_bytes) -> dict` (raw cues: trend, levels, patterns)
  - `run_ocr(image_bytes) -> dict` (price levels, timestamps, labels)
  - `analyze_sentiment(texts: list[str]) -> dict`
  - `llm_reason(prompt: str) -> str`

Use the Hugging Face Inference API cleanly (with HF_TOKEN).

### 1.3 Image & OCR Analysis

`backend/image_analysis.py`:

- Functions:
  - `extract_chart_features(image_bytes) -> dict`
    Use vision model + OCR results to infer:
    - trend: up / down / range
    - key levels (support/resistance)
    - chart patterns if possible (triangle, wedge, breakout, etc.)
    - approximate timeframe if visible

Return a structured dict with these cues, not raw text only.
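For concreteness, the cues dict might look like this (illustrative values only; the `raw_ocr` key is an assumption about how the OCR payload is carried through):

```python
# Illustrative shape of the dict extract_chart_features should return.
# All values are made-up examples, not real market data.
sample_features = {
    "trend": "up",                       # one of: "up" | "down" | "range"
    "key_levels": [67250.0, 65800.0],    # support/resistance prices, high to low
    "patterns": ["ascending triangle"],  # may be empty if nothing is detected
    "timeframe": "4h",                   # inferred from axis/toolbar labels
    "raw_ocr": {"text": "BTCUSDT 4h 67250.0 65800.0"},
}
```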

### 1.4 Time-Series & Technicals

`backend/timeseries_analysis.py`:

- Fetch public crypto OHLCV data (e.g., Binance klines, no API key).
- Implement core indicators:
  - EMA (fast & slow)
  - RSI
  - MACD
  - ATR (volatility)
- Provide:
  - `fetch_klines(symbol: str, timeframe: str, limit: int = 200)`
  - `compute_technicals(ohlcv) -> dict`
- Summarize bias:
  - trend: bullish/bearish/neutral
  - momentum strength
  - volatility regime
  - key support/resistance levels derived from recent highs/lows
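The bias rules above can be sketched in a few lines of pandas (a sketch on synthetic data; the span/period choices are placeholders, and this RSI uses a plain rolling mean rather than Wilder smoothing):

```python
import pandas as pd

# Synthetic close prices: a steady uptrend of +1 per candle.
closes = pd.Series([100.0 + i for i in range(60)])

ema_fast = closes.ewm(span=20, adjust=False).mean()
ema_slow = closes.ewm(span=50, adjust=False).mean()

# Trend bias: fast EMA above slow EMA -> bullish, below -> bearish.
trend = "bullish" if ema_fast.iloc[-1] > ema_slow.iloc[-1] else "bearish"

# Momentum via a simplified 14-period RSI.
delta = closes.diff()
gain = delta.clip(lower=0).rolling(14).mean()
loss = (-delta.clip(upper=0)).rolling(14).mean()
rsi = 100 - 100 / (1 + gain / loss)
```

On a monotone uptrend the fast EMA sits above the slow one and RSI saturates high, so the summary comes out bullish with strong momentum.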

### 1.5 Sentiment Analysis

`backend/sentiment_analysis.py`:

- Pull recent crypto news / RSS headlines for the symbol (falling back to general BTC/ETH market news when none match).
- Run the sentiment model on the headlines.
- Aggregate:
  - bull/bear/neutral score
  - confidence
- Return a struct used as `meta["sentiment"]`.
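The aggregation step can be sketched as a pure function over per-headline labels (label names here are an assumption; adjust to whatever the sentiment model actually emits):

```python
from collections import Counter

def aggregate_sentiment(labels: list[str]) -> dict:
    """Collapse per-headline labels into a single bull/bear score.

    Score is net bullishness in [-1, 1]; confidence is the share of the
    dominant class. Sketch only -- label names depend on the model used.
    """
    counts = Counter(labels)
    total = sum(counts.values())
    if total == 0:
        return {"score": 0.0, "confidence": 0.0}
    pos, neg = counts["positive"], counts["negative"]
    return {
        "score": (pos - neg) / total,
        "confidence": max(counts.values()) / total,
    }
```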

### 1.6 Signal Engine

`backend/signal_engine.py`:

- Combine:
  - image cues
  - OCR data
  - technicals
  - sentiment
- Build an LLM prompt that explains:
  - current price context
  - trend
  - volatility
  - sentiment
  - chart clues

Ask the LLM to output a structured, consistent decision.
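LLM replies often wrap the JSON in prose or markdown fences, so a defensive extraction step (stdlib only; the helper name is illustrative) keeps the schema contract intact:

```python
import json
import re

def extract_json_object(reply: str) -> dict:
    """Pull the first {...} object out of an LLM reply, tolerating fences/prose."""
    match = re.search(r"\{.*\}", reply, re.DOTALL)
    if not match:
        raise ValueError("no JSON object found in reply")
    return json.loads(match.group(0))
```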

**Important: The final signal JSON MUST match this schema exactly:**

```json
{
  "direction": "long|short|neutral",
  "entry_zone": [0.0, 0.0],
  "stop_loss": 0.0,
  "take_profit_levels": [0.0, 0.0, 0.0],
  "timeframe_inferred": "1h",
  "confidence": 0,
  "time_horizon": "intra-day|swing|position",
  "explanation": "string",
  "meta": {
    "sources": [],
    "sentiment": {}
  }
}
```
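Because automations (n8n, Discord/Telegram bots) depend on this exact shape, a small stdlib-only validator is worth having; this sketch (`validate_signal` is an illustrative name, not part of the spec) flags missing fields and bad enum values:

```python
from typing import Any

# Expected top-level fields and their Python types, mirroring the schema above.
REQUIRED_TYPES = {
    "direction": str,
    "entry_zone": list,
    "stop_loss": (int, float),
    "take_profit_levels": list,
    "timeframe_inferred": str,
    "confidence": (int, float),
    "time_horizon": str,
    "explanation": str,
    "meta": dict,
}

def validate_signal(signal: dict[str, Any]) -> list[str]:
    """Return a list of problems; an empty list means the signal matches the schema."""
    errors = []
    for key, typ in REQUIRED_TYPES.items():
        if key not in signal:
            errors.append(f"missing field: {key}")
        elif not isinstance(signal[key], typ):
            errors.append(f"wrong type for {key}")
    if signal.get("direction") not in ("long", "short", "neutral"):
        errors.append("direction must be long|short|neutral")
    if isinstance(signal.get("entry_zone"), list) and len(signal["entry_zone"]) != 2:
        errors.append("entry_zone must be [min, max]")
    return errors
```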

backend/config.py ADDED
```python
from typing import Optional

# Pydantic v1 import; on Pydantic v2, use `from pydantic_settings import BaseSettings`.
from pydantic import BaseSettings


class Settings(BaseSettings):
    HF_TOKEN: str
    USE_INFERENCE_API: str = "1"
    INFERENCE_LLM_MODEL: str = "Qwen/Qwen2.5-7B-Instruct"
    LOCAL_LLM_MODEL: str = "google/flan-t5-base"
    FRONTEND_BASE_PATH: str = "/"
    API_BASE_URL: str = "/"
    WEBHOOK_API_KEY: Optional[str] = None

    class Config:
        env_file = ".env"


def get_settings() -> Settings:
    return Settings()


def should_use_inference_api() -> bool:
    return get_settings().USE_INFERENCE_API == "1"


def get_llm_model_name() -> str:
    settings = get_settings()
    return settings.INFERENCE_LLM_MODEL if should_use_inference_api() else settings.LOCAL_LLM_MODEL
```
backend/image_analysis.py ADDED
```python
import re
from typing import Any, Dict

from .models_registry import model_registry


async def extract_chart_features(image_bytes: bytes) -> Dict[str, Any]:
    """Combine OCR and vision-model output into structured chart cues."""
    ocr_data = await model_registry.run_ocr(image_bytes)
    vision_data = await model_registry.analyze_chart_image(image_bytes)

    return {
        "trend": vision_data.get("trend", "unknown"),
        "key_levels": extract_key_levels(ocr_data.get("text", "")),
        "patterns": vision_data.get("patterns", []),
        "timeframe": infer_timeframe(ocr_data.get("text", "")),
        "raw_ocr": ocr_data,
    }


def extract_key_levels(ocr_text: str) -> list:
    """Heuristic: treat decimal numbers found in the OCR text as price levels."""
    numbers = re.findall(r"\d+\.\d+", ocr_text)
    return sorted((float(n) for n in numbers), reverse=True)


def infer_timeframe(ocr_text: str) -> str:
    """Guess the chart timeframe from OCR'd axis/toolbar labels."""
    lowered = ocr_text.lower()
    if "1d" in lowered or "daily" in lowered:
        return "1D"
    if "4h" in lowered:
        return "4h"
    if "1h" in lowered:
        return "1h"
    if "15m" in lowered:
        return "15m"
    return "1h"  # sensible default for crypto charts
```
backend/main.py ADDED
```python
from typing import Optional

from fastapi import Depends, FastAPI, File, HTTPException, UploadFile
from fastapi.middleware.cors import CORSMiddleware
from fastapi.security import APIKeyHeader

from .config import get_settings
from .sentiment_analysis import get_crypto_sentiment
from .signal_engine import generate_signal_from_image
from .timeseries_analysis import fetch_klines, compute_technicals

app = FastAPI(title="CryptoSignal Sleuth Pro API")

# CORS: wide open for the Spaces demo; tighten allow_origins in production.
app.add_middleware(
    CORSMiddleware,
    allow_origins=["*"],
    allow_credentials=True,
    allow_methods=["*"],
    allow_headers=["*"],
)

# auto_error=False so the header is optional when no WEBHOOK_API_KEY is configured.
api_key_header = APIKeyHeader(name="X-API-KEY", auto_error=False)


async def verify_api_key(api_key: Optional[str] = Depends(api_key_header)):
    settings = get_settings()
    if settings.WEBHOOK_API_KEY and api_key != settings.WEBHOOK_API_KEY:
        raise HTTPException(status_code=403, detail="Invalid API Key")
    return api_key


@app.post("/api/analyze-chart")
async def analyze_chart(
    file: UploadFile = File(...),
    symbol: Optional[str] = "BTCUSDT",
):
    if not (file.content_type or "").startswith("image/"):
        raise HTTPException(400, detail="File must be an image")

    image_bytes = await file.read()
    return await generate_signal_from_image(image_bytes, symbol)


@app.get("/api/market-analysis")
async def market_analysis(symbol: str, timeframe: str = "1h"):
    try:
        ohlcv = await fetch_klines(symbol, timeframe)
        technicals = compute_technicals(ohlcv)
        sentiment = await get_crypto_sentiment(symbol[:3])
        return {
            "symbol": symbol,
            "timeframe": timeframe,
            "technicals": technicals,
            "sentiment": sentiment,
        }
    except Exception as e:
        raise HTTPException(500, detail=str(e))


@app.post("/api/webhook")
async def webhook_handler(payload: dict, api_key: str = Depends(verify_api_key)):
    # Process the webhook payload here (n8n, Discord/Telegram bots, etc.).
    return {"status": "received", "data": payload}


@app.get("/api/models")
async def list_models():
    settings = get_settings()
    using_api = settings.USE_INFERENCE_API == "1"
    return {
        "llm_model": settings.INFERENCE_LLM_MODEL if using_api else settings.LOCAL_LLM_MODEL,
        "using_inference_api": using_api,
    }
```
backend/models_registry.py ADDED
```python
from typing import Any, Dict, List

from huggingface_hub import InferenceClient
from transformers import pipeline

from .config import get_settings, should_use_inference_api

settings = get_settings()

SENTIMENT_MODEL = "finiteautomata/bertweet-base-sentiment-analysis"


class ModelRegistry:
    def __init__(self):
        self.client = InferenceClient(token=settings.HF_TOKEN) if should_use_inference_api() else None
        self.local_llm = None
        self.local_sentiment = None

        if not should_use_inference_api():
            self.local_llm = pipeline("text2text-generation", model=settings.LOCAL_LLM_MODEL)
            self.local_sentiment = pipeline("text-classification", model=SENTIMENT_MODEL)

    async def analyze_chart_image(self, image_bytes: bytes) -> Dict[str, Any]:
        if should_use_inference_api():
            response = self.client.image_classification(image_bytes)
            return {"analysis": str(response)}
        # No local vision model: return empty cues so downstream code degrades gracefully.
        return {"trend": "unknown", "levels": [], "patterns": []}

    async def run_ocr(self, image_bytes: bytes) -> Dict[str, Any]:
        if should_use_inference_api():
            response = self.client.image_to_text(image_bytes)  # fixed: was `self_client`
            return {"text": str(response)}
        # No local OCR fallback.
        return {"text": ""}

    async def analyze_sentiment(self, texts: List[str]) -> Dict[str, Any]:
        if should_use_inference_api():
            # text_classification takes a single string, so classify each headline
            # and keep the top label/score for each.
            results = [self.client.text_classification(t, model=SENTIMENT_MODEL)[0] for t in texts]
            return {"sentiment": [{"label": r.label, "score": r.score} for r in results]}
        return {"sentiment": self.local_sentiment(texts)}

    async def llm_reason(self, prompt: str) -> str:
        if should_use_inference_api():
            return self.client.text_generation(
                prompt,
                model=settings.INFERENCE_LLM_MODEL,
                max_new_tokens=256,
            )
        result = self.local_llm(prompt, max_length=128)
        return result[0]["generated_text"]


model_registry = ModelRegistry()
```
backend/sentiment_analysis.py ADDED
```python
from typing import Any, Dict, List

import feedparser

from .models_registry import model_registry

CRYPTO_NEWS_RSS = [
    "https://cryptopanic.com/news/rss/",
    "https://cointelegraph.com/rss",
    "https://news.bitcoin.com/feed/",
]

MAX_HEADLINES = 10


async def get_crypto_sentiment(symbol: str = "BTC") -> Dict[str, Any]:
    headlines = await fetch_crypto_headlines(symbol)
    if not headlines:
        return {"score": 0, "confidence": 0, "headlines": []}

    sentiment = await model_registry.analyze_sentiment(headlines)
    return normalize_sentiment(sentiment, headlines)


async def fetch_crypto_headlines(symbol: str) -> List[str]:
    headlines: List[str] = []
    needle = symbol.lower()
    for url in CRYPTO_NEWS_RSS:
        try:
            feed = feedparser.parse(url)
            for entry in feed.entries:
                # Not every feed provides a summary, so fall back to "".
                text = f"{entry.get('title', '')} {entry.get('summary', '')}".lower()
                if needle in text:
                    headlines.append(entry.get("title", ""))
                if len(headlines) >= MAX_HEADLINES:
                    return headlines
        except Exception:
            continue  # skip feeds that fail to fetch or parse
    return headlines


def normalize_sentiment(raw_sentiment: Dict[str, Any], headlines: List[str]) -> Dict[str, Any]:
    positive = negative = neutral = 0

    for item in raw_sentiment.get("sentiment", []):
        label = item.get("label", "").lower()
        # bertweet emits POS/NEG/NEU, other models emit positive/negative/neutral;
        # matching on the prefix handles both.
        if "pos" in label:
            positive += 1
        elif "neg" in label:
            negative += 1
        else:
            neutral += 1

    total = len(headlines)
    if total == 0:
        return {"score": 0, "confidence": 0, "headlines": headlines}

    score = (positive - negative) / total
    confidence = max(positive, negative, neutral) / total

    return {
        "score": score,
        "confidence": confidence,
        "bullish": positive / total,
        "bearish": negative / total,
        "neutral": neutral / total,
        "headlines": headlines,
    }
```
backend/signal_engine.py ADDED
```python
import json
import re
from typing import Any, Dict

from .image_analysis import extract_chart_features
from .models_registry import model_registry
from .sentiment_analysis import get_crypto_sentiment
from .timeseries_analysis import fetch_klines, compute_technicals


async def generate_signal_from_image(image_bytes: bytes, symbol: str = "BTCUSDT") -> Dict[str, Any]:
    # Extract structured cues from the chart screenshot.
    chart_features = await extract_chart_features(image_bytes)

    # Technical analysis on live data for the inferred timeframe.
    timeframe = chart_features.get("timeframe", "1h")
    ohlcv = await fetch_klines(symbol, timeframe)
    technicals = compute_technicals(ohlcv)

    # Sentiment on recent headlines for the base asset (BTC from BTCUSDT).
    sentiment = await get_crypto_sentiment(symbol[:3])

    # LLM synthesis of all inputs into one structured decision.
    prompt = build_llm_prompt(chart_features, technicals, sentiment, symbol, timeframe)
    llm_response = await model_registry.llm_reason(prompt)

    return parse_llm_response(llm_response, chart_features, technicals, sentiment)


def build_llm_prompt(chart_features: Dict, technicals: Dict, sentiment: Dict, symbol: str, timeframe: str) -> str:
    return f"""Analyze this crypto trading situation and provide a professional trading signal in JSON format:

Chart Analysis:
- Trend: {chart_features.get('trend', 'unknown')}
- Key Levels: {chart_features.get('key_levels', [])}
- Patterns: {chart_features.get('patterns', [])}

Technical Indicators ({timeframe} timeframe):
- Trend: {technicals['trend']}
- Momentum: {technicals['momentum']}
- Volatility: {technicals['volatility']}
- Support: {technicals['support']}
- Resistance: {technicals['resistance']}

Market Sentiment:
- Score: {sentiment.get('score', 0)}
- Bullish: {sentiment.get('bullish', 0)}
- Bearish: {sentiment.get('bearish', 0)}

Symbol: {symbol}

Provide your response in this exact JSON format and nothing else:
{{
  "direction": "long|short|neutral",
  "entry_zone": [min_price, max_price],
  "stop_loss": price,
  "take_profit_levels": [tp1, tp2, tp3],
  "timeframe_inferred": "1h|4h|1D etc",
  "confidence": 0-100,
  "time_horizon": "intra-day|swing|position",
  "explanation": "brief rationale",
  "meta": {{
    "sources": ["chart", "technicals", "sentiment"],
    "sentiment": {{}}
  }}
}}"""


def parse_llm_response(response: str, chart_features: Dict, technicals: Dict, sentiment: Dict) -> Dict[str, Any]:
    try:
        # LLMs often wrap JSON in prose or markdown fences; grab the first object.
        match = re.search(r"\{.*\}", response, re.DOTALL)
        if not match:
            raise ValueError("No JSON object found in LLM response")
        data = json.loads(match.group(0))
        if not isinstance(data, dict):
            raise ValueError("Invalid response format")

        # Ensure required fields are present.
        required_fields = ["direction", "entry_zone", "stop_loss", "take_profit_levels"]
        for field in required_fields:
            if field not in data:
                raise ValueError(f"Missing required field: {field}")

        # Overwrite metadata with the server-side sentiment struct.
        data["meta"] = {
            "sources": ["chart", "technicals", "sentiment"],
            "sentiment": sentiment,
        }
        return data
    except (json.JSONDecodeError, ValueError) as e:
        # Conservative neutral fallback if parsing fails.
        return {
            "direction": "neutral",
            "entry_zone": [0, 0],
            "stop_loss": 0,
            "take_profit_levels": [0, 0, 0],
            "timeframe_inferred": chart_features.get("timeframe", "1h"),
            "confidence": 50,
            "time_horizon": "intra-day",
            "explanation": "Could not generate signal: " + str(e),
            "meta": {
                "sources": ["error"],
                "sentiment": sentiment,
            },
        }
```
backend/timeseries_analysis.py ADDED
```python
from typing import Any, Dict, List

import pandas as pd
import requests

BINANCE_API_URL = "https://api.binance.com/api/v3/klines"


async def fetch_klines(symbol: str, timeframe: str, limit: int = 200) -> List[List[Any]]:
    # Note: requests is blocking; fine for a small Space, but swap in an async
    # HTTP client (e.g. httpx.AsyncClient) if the event loop gets busy.
    params = {"symbol": symbol.upper(), "interval": timeframe, "limit": limit}
    response = requests.get(BINANCE_API_URL, params=params, timeout=10)
    response.raise_for_status()
    return response.json()


def compute_technicals(ohlcv: List[List[Any]]) -> Dict[str, Any]:
    # Binance kline rows carry 12 fields; only the first 6 are timestamp + OHLCV.
    df = pd.DataFrame(
        [row[:6] for row in ohlcv],
        columns=["timestamp", "open", "high", "low", "close", "volume"],
    )
    df = df.astype({"open": float, "high": float, "low": float, "close": float, "volume": float})

    # Calculate indicators.
    df["ema_20"] = df["close"].ewm(span=20, adjust=False).mean()
    df["ema_50"] = df["close"].ewm(span=50, adjust=False).mean()
    df["rsi"] = compute_rsi(df["close"])
    df["macd"], df["signal"], df["hist"] = compute_macd(df["close"])
    df["atr"] = compute_atr(df["high"], df["low"], df["close"])

    latest = df.iloc[-1]

    return {
        "trend": "bullish" if latest["ema_20"] > latest["ema_50"] else "bearish",
        "momentum": "strong" if latest["rsi"] > 70 or latest["rsi"] < 30 else "neutral",
        "volatility": "high" if latest["atr"] > df["atr"].mean() * 1.5 else "normal",
        "resistance": df["high"].max(),
        "support": df["low"].min(),
        "indicators": {
            "ema_20": latest["ema_20"],
            "ema_50": latest["ema_50"],
            "rsi": latest["rsi"],
            "macd": {
                "value": latest["macd"],
                "signal": latest["signal"],
                "hist": latest["hist"],
            },
            "atr": latest["atr"],
        },
    }


def compute_rsi(prices: pd.Series, period: int = 14) -> pd.Series:
    delta = prices.diff()
    gain = delta.where(delta > 0, 0.0)
    loss = -delta.where(delta < 0, 0.0)

    avg_gain = gain.rolling(window=period).mean()
    avg_loss = loss.rolling(window=period).mean()

    rs = avg_gain / avg_loss
    return 100 - (100 / (1 + rs))


def compute_macd(prices: pd.Series, fast: int = 12, slow: int = 26, signal: int = 9) -> tuple:
    ema_fast = prices.ewm(span=fast, adjust=False).mean()
    ema_slow = prices.ewm(span=slow, adjust=False).mean()
    macd = ema_fast - ema_slow
    signal_line = macd.ewm(span=signal, adjust=False).mean()
    return macd, signal_line, macd - signal_line


def compute_atr(high: pd.Series, low: pd.Series, close: pd.Series, period: int = 14) -> pd.Series:
    # True range: max of high-low, |high - prev close|, |low - prev close|.
    tr = pd.DataFrame({
        "h-l": high - low,
        "h-pc": (high - close.shift()).abs(),
        "l-pc": (low - close.shift()).abs(),
    }).max(axis=1)
    return tr.rolling(window=period).mean()
```