cicboy Claude Opus 4.6 committed
Commit 26b7ac7 · 1 Parent(s): af75a32

Fix app crash: lazy-init OpenAI client, fix delegation typo, add load_dotenv


- Move OpenAI client creation from module-level to inside _analyze_with_llm()
to prevent crash when OPENAI_API_KEY is not set at import time
- Fix allow_delegations (plural) typo to allow_delegation (singular) so
CrewAI actually enables delegation for market, sentiment, and historical agents
- Add load_dotenv() at top of app.py so .env files are loaded before env var access
- Add CLAUDE.md for Claude Code guidance

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
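The lazy-init fix in the first bullet can be sketched in plain Python. `StubClient` below is a stand-in for `openai.OpenAI` (which raises at construction when no API key is available); the before/after comments mirror the change made in `tools/sentiment_tool.py`:

```python
import os

class StubClient:
    """Stand-in for openai.OpenAI, which raises at construction without an API key."""
    def __init__(self, api_key):
        if not api_key:
            raise ValueError("api_key must be set")
        self.api_key = api_key

# Before the fix: module-level construction runs at import time and crashes
# whenever OPENAI_API_KEY is unset, even if the tool is never invoked.
#     client = StubClient(api_key=os.getenv("OPENAI_API_KEY"))

def analyze_with_llm(headlines):
    # After the fix: construction is deferred until the tool actually runs,
    # and the key is re-read so values loaded later (e.g. from .env) are seen.
    client = StubClient(api_key=os.getenv("OPENAI_API_KEY"))
    return {"key_seen": client.api_key, "headline_count": len(headlines)}
```

The same deferral is why the `load_dotenv()` bullet matters: a lazily constructed client reads the key after `.env` has been loaded.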

Files changed (3)
  1. CLAUDE.md +69 -0
  2. app.py +7 -4
  3. tools/sentiment_tool.py +1 -2
CLAUDE.md ADDED
@@ -0,0 +1,69 @@
+ # CLAUDE.md
+
+ This file provides guidance to Claude Code (claude.ai/code) when working with code in this repository.
+
+ ## Project Overview
+
+ CrewAI-based multi-agent cryptocurrency analysis system with a Gradio web interface. Six specialized AI agents execute sequentially to fetch live market data, analyze historical trends, assess news sentiment, compute composite analytics, formulate trading strategy, and generate a narrative Markdown report.
+
+ ## Commands
+
+ ```bash
+ # Setup
+ python -m venv .venv && source .venv/bin/activate
+ pip install -r requirements.txt
+
+ # Run locally (launches Gradio on 0.0.0.0:7860)
+ python app.py
+ ```
+
+ No test suite or linter is configured. Testing is manual via the Gradio UI.
+
+ ## Required Environment Variables
+
+ Set these before running (or use a `.env` file):
+ - `OPENAI_API_KEY` — GPT-4 access for LLM agents and sentiment analysis
+ - `SERPER_API_KEY` — Serper search API for fetching crypto news
+ - `COINGECKO_API_KEY` — CoinGecko market data (optional but recommended)
+
+ ## Architecture
+
+ **Entry point**: `app.py` — defines all agents, tasks, the CrewAI Crew, and the Gradio UI.
+
+ ### Agent Pipeline (sequential execution via CrewAI)
+
+ ```
+ User Input (crypto name, currency, lookback days)
+ → Market Agent (gpt-4o-mini) + MarketDataTool → live price, 24h volume
+ → Historical Agent (gpt-4o-mini) + HistoricalDataTool → price history, % change, volatility, trend
+ → Sentiment Agent (gpt-4.1) + SentimentTool → news headlines, sentiment score, confidence
+ → Analytics Agent (gpt-4o-mini) + AnalyticsTool → composite score, alignment, effective sentiment
+ → Strategy Agent (gpt-4.1) → trading bias, risk guidance
+ → Reporting Agent (gpt-4.1) → final Markdown report (narrative prose, no bullets)
+ ```
+
+ ### Tools (`tools/`)
+
+ Each tool extends CrewAI's `BaseTool` and returns a structured JSON dict:
+
+ | Tool | File | External API | Key Behavior |
+ |------|------|-------------|--------------|
+ | `MarketDataTool` | `market_data.py` | CoinGecko `/simple/price` | Fetches live price + 24h volume |
+ | `HistoricalDataTool` | `historical_data_tool.py` | CoinGecko `/market_chart` | Computes % change, volatility (stdev of daily returns), trend classification |
+ | `SentimentTool` | `sentiment_tool.py` | Serper + OpenAI GPT-4.1 | Fetches ~12 headlines, analyzes sentiment with LLM, validates/bounds scores |
+ | `AnalyticsTool` | `analytics_tool.py` | None (aggregation) | Composite score = `(pct_change/10) + (effective_sentiment*1.5) - (volatility/100)`, bounded [-1,1] |
+
+ ### Data Flow
+
+ Each tool returns a dict that downstream agents consume. The Analytics Agent aggregates all three data sources (market, historical, sentiment) into a single scored assessment. The Strategy and Reporting agents have no dedicated tools — they reason over prior agent outputs.
+
+ ## Deployment
+
+ Supports HuggingFace Spaces (Gradio SDK). Set API keys as Space secrets. The `README.md` contains the HuggingFace Spaces metadata card.
+
+ ## Key Patterns
+
+ - All tools use 10-second HTTP timeouts and return error dicts on failure
+ - Sentiment tool has multi-layer fallbacks: graceful neutral defaults if API keys missing, JSON extraction with substring fallback, bounds validation on all scores
+ - LLM model selection is hardcoded per agent in `app.py` (mix of `gpt-4o-mini` for data tasks, `gpt-4.1` for reasoning tasks)
+ - The `generate_report()` function in `app.py` is the Gradio callback that instantiates and kicks off the Crew
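The `AnalyticsTool` composite formula quoted in the table above can be sketched as plain Python. Clamping with `max`/`min` is an assumption about how "bounded [-1,1]" is enforced, and `composite_score` is a made-up name, not the tool's actual method:

```python
def composite_score(pct_change: float, effective_sentiment: float, volatility: float) -> float:
    """Sketch of the documented formula:
    (pct_change/10) + (effective_sentiment*1.5) - (volatility/100), clamped to [-1, 1]."""
    raw = (pct_change / 10) + (effective_sentiment * 1.5) - (volatility / 100)
    return max(-1.0, min(1.0, raw))

# A mildly bullish reading: +5% move, sentiment 0.2, volatility 10
score = composite_score(5.0, 0.2, 10.0)  # (5/10) + (0.2*1.5) - (10/100)
```

Note the asymmetric weights: sentiment is amplified (×1.5) while volatility only subtracts at 1/100 scale, so sentiment dominates the score for typical inputs.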
app.py CHANGED
@@ -1,6 +1,9 @@
  # import libraries, APis and LLMs
- from crewai import Agent, Task, Crew
  import os
+ from dotenv import load_dotenv
+ load_dotenv()
+
+ from crewai import Agent, Task, Crew
  from tools.market_data import MarketDataTool
  from tools.sentiment_tool import SentimentTool
  from tools.historical_data_tool import HistoricalDataTool
@@ -36,7 +39,7 @@ market_agent = Agent(
      "including price and market liquidity."
  ),
  verbose=False,
- allow_delegations=True,
+ allow_delegation=True,
  tools=[market_data_tool],
  llm="gpt-4o-mini"
 )
@@ -46,7 +49,7 @@ sentiment_agent = Agent(
  goal="Analyze public sentiment on news & Reddit using Serper + OpenAI. Output structured sentiment JSON.",
  backstory="An expert in NLP-based crypto sentiment interpretation.",
  verbose=False,
- allow_delegations=True,
+ allow_delegation=True,
  tools=[sentiment_tool],
  llm="gpt-4.1"
 )
@@ -56,7 +59,7 @@ historical_agent = Agent(
  goal="Analyze long-term price trends, volatility, and movement patterns using clean structured JSON.",
  backstory="A quantitative analyst specializing in historical trends.",
  verbose=False,
- allow_delegations=True,
+ allow_delegation=True,
  tools=[historical_data_tool],
  llm="gpt-4o-mini"
 )
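Why the plural `allow_delegations` failed silently rather than crashing can be illustrated with a hypothetical `StubAgent`. It stands in for a constructor that tolerates unknown keyword arguments (whether `crewai.Agent` does exactly this internally is an assumption); the net effect matches the bug: the real `allow_delegation` flag stayed at its default:

```python
class StubAgent:
    """Hypothetical stand-in for a constructor that tolerates unknown keywords."""
    def __init__(self, allow_delegation=False, **extra_kwargs):
        self.allow_delegation = allow_delegation
        self.extra_kwargs = extra_kwargs  # a misspelled flag lands here and does nothing

typo = StubAgent(allow_delegations=True)   # plural: silently swallowed
fixed = StubAgent(allow_delegation=True)   # singular: delegation actually enabled
```

Flags like this that default to off and accept near-miss spellings are a common source of "the feature never turned on" bugs.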
tools/sentiment_tool.py CHANGED
@@ -16,8 +16,6 @@ from openai import OpenAI
  SERPER_API_KEY = os.getenv("SERPER_API_KEY")
  OPENAI_API_KEY = os.getenv("OPENAI_API_KEY")
 
- client = OpenAI(api_key=OPENAI_API_KEY)
-
 
  # -----------------------------
  # Input Schema
@@ -85,6 +83,7 @@ class SentimentTool(BaseTool):
  # LLM Sentiment Aggregation
  # -----------------------------------------------------
  def _analyze_with_llm(self, coin: str, headlines: List[str]) -> Dict[str, Any]:
+     client = OpenAI(api_key=os.getenv("OPENAI_API_KEY"))
 
      if not headlines:
          return {
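The ordering that the `app.py` change enforces — load `.env` before importing any module that reads environment variables at import time — can be shown with a toy loader. `load_env_file` and `DEMO_API_KEY` are made up for illustration, standing in for `dotenv.load_dotenv()` and the real keys:

```python
import os

def load_env_file(pairs):
    # Toy stand-in for dotenv.load_dotenv(): copies key/value pairs into os.environ.
    for key, value in pairs.items():
        os.environ[key] = value

def read_key_at_import():
    # Mimics tools/sentiment_tool.py, which calls os.getenv() at module import time.
    return os.getenv("DEMO_API_KEY")

before = read_key_at_import()                   # .env not loaded yet: key missing
load_env_file({"DEMO_API_KEY": "from-dotenv"})  # load first...
after = read_key_at_import()                    # ...then the "import" sees the key
```

This is why the fixed `app.py` calls `load_dotenv()` before `from crewai import ...` and the tool imports, rather than after them.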