BankNifty Strategy Engine — README
A resumable, LLM-driven intraday engine that digests sentiment, expert transcripts, technicals (RSI/MACD), and news to produce trade plans for BankNifty (and Nifty for context). The engine simulates executions on 1-minute data, evaluates P&L, and keeps airtight checkpoints so you can resume exactly where you left off after interruptions.
Table of contents
- Features
- Project structure
- Data inputs & expected columns
- Installation
- Configuration (.env + config classes)
- Running
- Resumable checkpoints
- How the engine works (timeline)
- Trade simulation rules
- LLM JSON schemas
- Outputs
Features
- 🧠 LLM-assisted trade plans using structured JSON outputs (strict schema).
- 📰 News-aware decisions (hourly, last 15 mins at close).
- 📈 Technicals: RSI + MACD on hourly/daily series.
- 🧪 1-minute backtest execution with deterministic tiebreak rules.
- 🔁 Resumable runs via checkpoint (safe to kill & rerun).
- ✅ Flip/No-trade exit enforcement: if plan flips side or says “No trade” while holding, engine exits at market price.
- 🧠 Memory string summarizing the last completed trade gets fed back into prompts.
- 📦 Excel & Parquet logs for analysis.
Project structure
banknifty_strategy/
├─ app/
│ ├─ __init__.py
│ ├─ engine.py # Core loop (09:15 → … → 15:30)
│ ├─ models.py # Pydantic v2 models
│ ├─ prompts.py # Prompt templates: morning / intrahour / closing
│ ├─ simulator.py # simulate_trade_from_signal, slice_intraday
│ ├─ dataio.py # load_data() – reads/normalizes data frames
│ ├─ checkpoint.py # CheckpointManager – resume & append parquet logs
│ ├─ logging_setup.py # Rotating file/logger
│ ├─ config.py # AppConfig, Paths, LLMConfig
│ ├─ llm.py # OpenAI client wrapper; strict JSON schema handling
│ ├─ news.py # summaries_between() helpers
│ ├─ utils.py # hour_passed(), hourly_ohlc_dict(), helpers
│ └─ writer.py # to_excel_safely()
├─ scripts/
│ └─ run_backtest.py # CLI entrypoint
├─ requirements.txt
└─ README.md
Note: keep `app/` a proper package (it must include `__init__.py`). Always run from the project root so imports like `from app.engine import Engine` work.
Data inputs & expected columns
Your loader (`app/dataio.py`) should read and normalize the sources. The engine expects these canonical frames and columns:
1. BankNifty hourly (df_bn_hourly)
- Columns: `datetime`, `open`, `high`, `low`, `close`, `RSI`, `MACD_Line`, `Signal_Line`
- Granularity: hourly (09:15, 10:15, …, 15:15, 15:30)
- Used for: 09:15 previous indicators, hourly OHLC dicts, close-price lookup.
2. BankNifty 1-minute (df_bn_1m)
- Columns: `datetime`, `open`, `high`, `low`, `close`
- Granularity: 1 minute
- Used by: `simulate_trade_from_signal` execution windows.
3. Nifty daily or daily-like context (df_nifty_daily)
- Columns: `datetime`, `open`, `high`, `low`, `close`, `RSI`, `MACD_Line`, `Signal_Line`
- Used for: contextual morning prompt.
4. Sentiment predictions (df_sentiment)
- Columns: `predicted_for` (datetime), `proposed_sentiment`, `reasoning`
5. Expert transcript (df_transcript)
- Columns: `prediction_for` (datetime), `Transcript`
- (If your raw file has `Prediction_for_date`, normalize it to `prediction_for`.)
6. News with summaries (df_news)
- Columns:
datetime_ist(datetime),Article_summary(string)
Ensure every datetime column is parsed and timezone-normalized (all naive, or all in the same timezone).
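As a minimal sketch of this normalization step (assuming pandas; `normalize_frame` is an illustrative helper, not the actual `load_data()` API), a loader might parse datetimes, strip timezones, and rename the legacy transcript column like so:

```python
import pandas as pd

def normalize_frame(df: pd.DataFrame, dt_col: str = "datetime") -> pd.DataFrame:
    """Parse the datetime column and make it tz-naive so all frames compare cleanly."""
    out = df.rename(columns={"Prediction_for_date": "prediction_for"})
    col = dt_col if dt_col in out.columns else "prediction_for"
    out[col] = pd.to_datetime(out[col])
    if out[col].dt.tz is not None:
        out[col] = out[col].dt.tz_localize(None)  # drop tz info, keep wall-clock time
    return out.sort_values(col).reset_index(drop=True)
```

Applying the same helper to every frame up front avoids subtle naive-vs-aware comparison errors later in the engine.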
Installation
Python 3.9+ recommended.
pip install -r requirements.txt
Configuration (.env + config classes)
Create a .env in project root:
OPENAI_API_KEY=EMPTY
OPENAI_BASE_URL=http://localhost:8000/v1
OPENAI_MODEL=Qwen/Qwen3-4B
Temperature and top_p are configured in `app/config.py`.
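A sketch of how such a config class might pull the `.env` values (field names here are illustrative; the real ones live in `app/config.py`):

```python
import os
from dataclasses import dataclass

@dataclass
class LLMConfig:
    # Defaults mirror the sample .env above; all field names are hypothetical.
    api_key: str = "EMPTY"
    base_url: str = "http://localhost:8000/v1"
    model: str = "Qwen/Qwen3-4B"
    temperature: float = 0.2
    top_p: float = 0.9

    @classmethod
    def from_env(cls) -> "LLMConfig":
        """Environment variables override the defaults when present."""
        return cls(
            api_key=os.getenv("OPENAI_API_KEY", cls.api_key),
            base_url=os.getenv("OPENAI_BASE_URL", cls.base_url),
            model=os.getenv("OPENAI_MODEL", cls.model),
        )
```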
Running
Use the provided script. The optional `--ckpt-dir` flag defaults to `<out-dir>/checkpoint`.
# Always run from project root so "app" package is on sys.path
python -m scripts.run_backtest --data-dir ./data --out-dir ./result --start "2023-12-29 15:15" --end "2024-05-01 09:15"
Resumable checkpoints
The engine persists state and logs in Parquet inside --ckpt-dir:
<ckpt-dir>/
├─ checkpoint.json # last_timestamp_processed, state, plans, memory_str
├─ trade_log.parquet
├─ stats_log.parquet
├─ expert_log.parquet
└─ summary_log.parquet
You can kill the process and re-run with the same `--ckpt-dir`. The engine:
- Reads `checkpoint.json`
- Skips timestamps already processed
- Continues from the next tick
Excel mirrors (*.xlsx) are written to --out-dir for human inspection.
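The resume behavior can be approximated like this (a sketch, assuming the file layout above; the write-then-rename in `save_checkpoint` is one common way to keep `checkpoint.json` consistent if the process dies mid-write, and these function names are illustrative, not the `CheckpointManager` API):

```python
import json
from pathlib import Path

def load_checkpoint(ckpt_dir: str) -> dict:
    """Return saved state, or a fresh one if no checkpoint exists yet."""
    path = Path(ckpt_dir) / "checkpoint.json"
    if path.exists():
        return json.loads(path.read_text())
    return {"last_timestamp_processed": None, "state": None, "memory_str": ""}

def save_checkpoint(ckpt_dir: str, ckpt: dict) -> None:
    """Atomically persist state: write a temp file, then rename over the old one."""
    path = Path(ckpt_dir) / "checkpoint.json"
    path.parent.mkdir(parents=True, exist_ok=True)
    tmp = path.with_suffix(".json.tmp")
    tmp.write_text(json.dumps(ckpt))
    tmp.replace(path)  # rename is atomic on POSIX filesystems
```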
How the engine works (timeline)
At each timestamp ts in your hourly series:
09:15 — Morning
- Gathers Nifty/BankNifty previous OHLC + indicators.
- Pulls sentiment + expert transcript.
- Calls LLM (schema SummaryMorning) → morning summary.
- Calls LLM (schema TradePlan) → first plan of the day (but the actual open/close is derived dynamically from previous state; first day has no memory).
10:15 — First intrahour
- Simulate 1-minute window from last slice start → 10:15 using current plan.
- Log state change only if it changed by value (not identity).
- If exited naturally (stop/target): update `memory_str`, reset state, move `last_slice_start`.
- Pull last-hour news, OHLC dict, and current indicators.
- Call LLM (`DecisionOutput` → `{summary_banknifty, trade}`).
- Flip/No-trade exit enforcement: if holding and the LLM flips side or says “No trade” → force flatten at market price (hourly close).
- Update logs.
11:15 → 15:15 — Subsequent intrahours
- Same as 10:15 loop.
15:30 — Close
- Simulate last 15 minutes (15:15 → 15:30) on 1-minute data.
- Log state change once; if exited → update memory + reset.
- LLM close plan (schema TradePlan).
- If holding and plan flips/no-trade → force flatten at close.
- Otherwise carry overnight (state remains open).
- Save checkpoint after each timestamp.
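The resume-and-dispatch skeleton of the timeline above could be sketched as follows (names are illustrative; the real loop lives in `app/engine.py`):

```python
def run(timestamps, ckpt):
    """Hypothetical outer loop: skip processed ticks, then dispatch by time of day.

    `timestamps` is an ordered list of "YYYY-MM-DD HH:MM" strings;
    `ckpt` is the dict loaded from checkpoint.json.
    """
    last = ckpt.get("last_timestamp_processed")
    for ts in timestamps:
        if last is not None and ts <= last:
            continue  # already processed on a previous run
        if ts.endswith("09:15"):
            step = "morning"     # summaries + first plan of the day
        elif ts.endswith("15:30"):
            step = "close"       # final 15-minute simulation + close plan
        else:
            step = "intrahour"   # hourly simulate / decide / log cycle
        ckpt["last_timestamp_processed"] = ts  # followed by a checkpoint save
        yield ts, step
```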
Trade simulation rules
`simulate_trade_from_signal(df, trade, dt_col, state, lookback_minutes)`:
- Trade schema (`TradePlan`):
  - `status`: `"Trade"` or `"No trade"`
  - `type`: `"long" | "short" | "none"`
  - `entry_at`, `target`, `stoploss`: numbers (positive; 0 if `No trade`)
- Entry is limit-style: if `entry_at` is within `[low, high]` of a 1-min bar, the entry fills at `entry_at`.
- Exit resolution when both target and stoploss could be hit in the same bar: use a tiebreaker (the engine uses “stoploss_first”).
- P&L (`pnl_pct`): signed percentage vs. entry (`long` is positive if `exit > entry`; `short` is inverted).
- Flip/No-trade handling in the engine: if a position is open and the LLM plan flips or says “No trade” at the tick, force flatten at the minute close (market price) and log memory.
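These rules can be condensed into a per-bar sketch (illustrative helper names, not the actual `simulate_trade_from_signal` signature):

```python
def fill_and_exit(bar, trade, in_position, entry_price):
    """Process one 1-minute bar; returns (in_position, entry_price, exit_price or None).

    Tiebreak: if both stoploss and target lie inside the bar, stoploss fills first.
    """
    low, high = bar["low"], bar["high"]
    exit_price = None
    if not in_position:
        if low <= trade["entry_at"] <= high:  # limit-style fill at the quoted level
            in_position, entry_price = True, trade["entry_at"]
    else:
        sl_hit = low <= trade["stoploss"] <= high
        tg_hit = low <= trade["target"] <= high
        if sl_hit:  # "stoploss_first" tiebreak
            exit_price, in_position = trade["stoploss"], False
        elif tg_hit:
            exit_price, in_position = trade["target"], False
    return in_position, entry_price, exit_price

def pnl_pct(side, entry, exit_price):
    """Signed percentage vs. entry; a short inverts the sign."""
    pct = (exit_price - entry) / entry * 100.0
    return pct if side == "long" else -pct
```

The conservative stoploss-first tiebreak means a bar that straddles both levels is always booked as a loss, which keeps the backtest from overstating results.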
LLM JSON schemas
All Pydantic models enforce `extra="forbid"` so the model can’t invent fields.
The client (`app/llm.py`) sanitizes the schema name and forces `additionalProperties: false` at the root and in nested objects, satisfying strict servers.
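The `additionalProperties` enforcement could look roughly like this (a sketch, not the actual `app/llm.py` code):

```python
def enforce_no_extras(schema: dict) -> dict:
    """Recursively set additionalProperties: false on every object node in a JSON schema."""
    if schema.get("type") == "object" or "properties" in schema:
        schema["additionalProperties"] = False
    for key in ("properties", "$defs", "definitions"):
        for sub in schema.get(key, {}).values():
            if isinstance(sub, dict):
                enforce_no_extras(sub)
    items = schema.get("items")
    if isinstance(items, dict):
        enforce_no_extras(items)  # array element schemas can nest objects too
    return schema
```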
SummaryMorning (example)
from typing import Literal
from pydantic import BaseModel

class SummaryMorning(BaseModel):
major_concern_nifty50: str
trade_reasoning_nifty50: str
trade_strategy_nifty50: str
major_concern_banknifty: str
trade_reasoning_banknifty: str
trade_strategy_banknifty: str
model_config = {"extra": "forbid"}
TradePlan
class TradePlan(BaseModel):
status: Literal["No trade", "Trade"]
brief_reason: str
type: Literal["long", "short", "none"]
entry_at: float
target: float
stoploss: float
model_config = {"extra": "forbid"}
DecisionOutput
class SummaryBankNifty(BaseModel):
major_concern: str
sentiment: Literal["bullish", "bearish"]
reasoning: str
trade_strategy: str
news_summary: str
model_config = {"extra": "forbid"}
class DecisionOutput(BaseModel):
summary_banknifty: SummaryBankNifty
trade: TradePlan
model_config = {"extra": "forbid"}
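Assuming Pydantic v2 (as in `app/models.py`), a raw LLM reply is validated against the schema before use; the JSON payload below is illustrative:

```python
from typing import Literal
from pydantic import BaseModel

class TradePlan(BaseModel):
    status: Literal["No trade", "Trade"]
    brief_reason: str
    type: Literal["long", "short", "none"]
    entry_at: float
    target: float
    stoploss: float
    model_config = {"extra": "forbid"}

# Hypothetical model reply; model_validate_json raises ValidationError on
# wrong types, missing fields, or (because of extra="forbid") invented fields.
raw = ('{"status": "Trade", "brief_reason": "momentum continuation", '
       '"type": "long", "entry_at": 48000.0, "target": 48200.0, "stoploss": 47900.0}')
plan = TradePlan.model_validate_json(raw)
```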
Outputs
Out dir (--out-dir):
- `stats_log.xlsx` – time series of state snapshots / closing stats (one row when state changes; final close row).
- `trade_log.xlsx` – model trade plans over time.
- `expert_log.xlsx` – morning summaries (one per day).
- `summary_log.xlsx` – per-hour summaries.
Checkpoint dir (--ckpt-dir or <out-dir>/checkpoint):
- Parquet files for each log, plus `checkpoint.json` (state & last timestamp).