Upload 6 files

Files changed:
- README.md (+202 −13)
- app.py (+127 −0)
- data.csv (+90 −0)
- demo.py (+396 −0)
- llm_utils.py (+240 −0)
- requirements.txt (+4 −3)
README.md
CHANGED
@@ -1,20 +1,209 @@

Removed: the stock Streamlit template front matter (title: Analytics; empty emoji, colorFrom, colorTo, and sdk fields; a stray `- streamlit` list item; short_description: Streamlit template space) and its placeholder `#` heading. The `---` delimiters, pinned: false, and license: mit carry over into the new file.
---
title: Analytics Validation Demo
emoji: 📊
colorFrom: blue
colorTo: indigo
sdk: streamlit
sdk_version: 1.35.0
app_file: app.py
pinned: false
license: mit
---

# Analytics Validation Demo

A local, demo-ready automation tool that helps data analysts validate business
metrics before presenting them to leadership. Built with Python and LangChain.

---

## What Problem This Solves

Data analysts frequently receive raw CSV exports from finance or operations
systems and must manually scan them for quality issues — missing values, date
gaps, sudden metric drops — before any numbers are presented to executives.

This tool automates that first-pass validation. It runs a deterministic rule
engine against daily KPI data, surfaces detected issues with precise context
(which column, which date, how large the drop), and optionally uses a local
LLM to narrate the findings in plain business English.

**The goal is to give an analyst a clean, trustworthy summary in seconds —
not to replace the analyst's judgment.**

---

## What Is Automated vs. What Is Not

| Automated | Not Automated |
|---|---|
| Missing value detection (per column, per date) | Root cause investigation |
| Row count / date gap detection | Database reconciliation |
| Day-over-day drop flagging (>20% threshold) | Trend analysis or forecasting |
| Descriptive statistics (min/max/mean/std/total) | Seasonality modeling |
| Duplicate date detection | Dashboard or report publishing |
| Plain-language issue narration (via LLM, optional) | Alerting or notifications |
| Fallback text summary when LLM unavailable | Automated decision-making |

The system will **never** invent data, speculate on causes, or auto-correct
issues. It only reports what it finds.

---

## How to Run Locally

### Prerequisites

- Python 3.10 or later
- pip

### Install dependencies

```bash
pip install pandas langchain-core langchain-community langchain-ollama
```

(or `pip install -r requirements.txt`)

### Run the demo

```bash
cd analytics_langchain_demo
python demo.py
```

The script reads `data.csv` from the same directory and prints a structured
validation report to the console.

### Optional: Enable LLM summaries via Ollama

If you have [Ollama](https://ollama.com) installed:

```bash
# Pull the model (one-time, ~2 GB)
ollama pull llama3.2

# Start the Ollama server (in a separate terminal)
ollama serve
```

Then run `python demo.py` as normal. The report will include an LLM-generated
executive summary instead of the rule-based fallback text.

If Ollama is not running, the demo still produces a complete report — it logs
a message to stderr and uses the deterministic fallback automatically.

---

## Project Structure

```
analytics_langchain_demo/
├── demo.py          Main runner. Contains load_data(), run_checks(),
│                    format_console_output(), and main(). All rule-engine
│                    logic lives here. No LLM calls.
│
├── llm_utils.py     LangChain integration layer. Tries Ollama; falls back
│                    to a deterministic text summary if unavailable.
│                    The only file that imports LangChain.
│
├── data.csv         90-day synthetic daily metrics (revenue, orders) with
│                    injected missing values, a date gap, and anomaly drops.
│
└── README.md        This file.
```

---

## Why LLMs Are Used Cautiously

The rule engine is the source of truth. It produces deterministic, auditable
findings: specific columns, specific dates, specific percentage changes. These
facts never change between runs.

The LLM only narrates those pre-computed facts in plain language. The prompt
explicitly instructs it to:

- Not speculate on causes
- Not introduce information not present in the data
- Use a neutral, executive-friendly tone

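A minimal, stdlib-only sketch of such a fact-only narration prompt (the real template lives in `llm_utils.py`, which is not shown in this commit view; all names here are illustrative):

```python
# Illustrative sketch only: the production template lives in llm_utils.py.
NARRATION_PROMPT = (
    "You are summarizing pre-computed data-quality findings for executives.\n"
    "Rules: do NOT speculate on causes, do NOT introduce information beyond\n"
    "the facts below, and keep a neutral, business-friendly tone.\n"
    "\n"
    "Facts:\n{facts}\n"
    "\n"
    "Write a short executive summary."
)

def build_prompt(issues: list[dict]) -> str:
    """Serialize rule-engine findings into the prompt's {facts} slot."""
    facts = "\n".join(f"- [{i['severity']}] {i['detail']}" for i in issues)
    return NARRATION_PROMPT.format(facts=facts)

prompt = build_prompt(
    [{"severity": "WARNING", "detail": "1 missing value(s) in 'revenue' column"}]
)
```

Because the facts are injected verbatim, any sentence in the model's output that goes beyond them is a rule violation that is easy to spot in review.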
This means the LLM cannot hallucinate a root cause or suggest a business
conclusion. It is a translator, not an analyst.

If the LLM is unavailable, the output is functionally identical — just
template-generated rather than model-generated. This makes the tool safe to
use in regulated or high-stakes reporting environments where outputs must
be reproducible.

---

## How This Scales in Production

This demo is intentionally small. Here is how each layer would evolve:

**Data input**
Replace `pd.read_csv()` with a SQLAlchemy connector to query directly from a
data warehouse (Snowflake, Redshift, BigQuery). The `run_checks()` function
signature does not change — it still receives a DataFrame.

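A sketch of that swap, using stdlib `sqlite3` as a dependency-free stand-in for the warehouse (the table and column names are assumptions):

```python
import sqlite3

# Stand-in for a warehouse query. In production this becomes
# pd.read_sql(QUERY, engine) with a SQLAlchemy engine pointed at
# Snowflake/Redshift/BigQuery; run_checks() still receives a DataFrame.
QUERY = "SELECT date, revenue, orders FROM daily_kpis ORDER BY date"

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE daily_kpis (date TEXT, revenue REAL, orders INTEGER)")
conn.executemany(
    "INSERT INTO daily_kpis VALUES (?, ?, ?)",
    [("2024-01-02", 44688.32, 317), ("2024-01-01", 44596.55, 305)],
)
# ORDER BY date replaces load_data()'s sort_values("date") step
rows = conn.execute(QUERY).fetchall()
```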
**Scheduling**
Wrap `main()` in a Prefect or Airflow task to run nightly. Add a `--date`
argument so the tool checks a specific reporting period rather than the
full CSV.

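One possible shape for that flag, via `argparse` (the flag and its semantics are an assumption, not current code):

```python
import argparse
from datetime import date

# Hypothetical CLI wrapper for main(): restrict checks to one reporting day.
parser = argparse.ArgumentParser(description="Analytics validation")
parser.add_argument(
    "--date",
    type=date.fromisoformat,
    default=None,
    help="Reporting date (YYYY-MM-DD); default: validate the full CSV",
)
args = parser.parse_args(["--date", "2024-02-14"])
# main() would then filter: df[df["date"] == pd.Timestamp(args.date)]
```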
**LLM**
Replace Ollama with a hosted model via `langchain-anthropic` or
`langchain-openai`. Inject the API key via environment variable. The
`_try_ollama_summary()` function is the only change point.

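The swap point can keep the same fallback discipline. A dependency-free sketch (the function names mirror the pattern in `llm_utils.py` but are hypothetical; the real hosted call would be e.g. `ChatAnthropic` from `langchain-anthropic`):

```python
import os

def summarize(facts: str) -> str:
    # Key comes from the environment, never from code. With no key present,
    # the deterministic fallback keeps output reproducible.
    if os.environ.get("ANTHROPIC_API_KEY"):
        return _hosted_model_summary(facts)  # hypothetical hosted-model call
    return f"Rule-based fallback summary: {facts}"

def _hosted_model_summary(facts: str) -> str:
    # Placeholder standing in for ChatAnthropic(...).invoke(prompt).content
    return f"LLM summary of: {facts}"

out = summarize("3 anomalies flagged")
```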
**Output**
Export `results` as JSON to a shared location for downstream dashboard
ingestion. Add a Slack or email alert when any ERROR-severity issues are found.

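A sketch of that export step, stdlib only (the payload shape matches `run_checks()`'s keys; the alert condition is an assumption):

```python
import json

# Example payload mirroring the {"issues": [...], "stats": {...}} structure
# that run_checks() returns.
results = {
    "issues": [
        {"severity": "ERROR", "type": "duplicate_dates", "detail": "example"}
    ],
    "stats": {"date_range": {"start": "2024-01-01", "end": "2024-03-30"}},
}

# Serialized report ready to be written to a shared location for dashboards.
payload = json.dumps(results, indent=2)

# An alerting hook would scan for ERROR-severity issues before notifying.
has_errors = any(i["severity"] == "ERROR" for i in results["issues"])
```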
**Thresholds**
Move `ANOMALY_THRESHOLD` from a hardcoded constant to a YAML config file
so different teams can tune sensitivity without changing code.

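A sketch of such a config loader. JSON via the stdlib keeps the example dependency-free; with PyYAML installed, `yaml.safe_load` is a drop-in replacement (the key names are assumptions):

```python
import json

DEFAULTS = {"anomaly_threshold": -0.20}

def load_thresholds(raw: str) -> dict:
    """Merge a team's config over the defaults, validating the threshold."""
    cfg = {**DEFAULTS, **json.loads(raw)}
    if not -1.0 < cfg["anomaly_threshold"] < 0.0:
        raise ValueError("anomaly_threshold must be a negative fraction")
    return cfg

# A stricter team setting: flag drops beyond 30% instead of 20%.
cfg = load_thresholds('{"anomaly_threshold": -0.30}')
```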
---

## Sample Output (no Ollama)

```
============================================================
 ANALYTICS VALIDATION REPORT
 Generated : 2024-04-01 09:15:32
 Data file : data.csv
============================================================

---- [ SECTION 1: DATA OVERVIEW ] --------------------------
 Date Range    : 2024-01-01 to 2024-03-30
 Actual Rows   : 89
 Expected Rows : 90
 Row Status    : 1 row gap detected

---- [ SECTION 2: KPI STATISTICS ] -------------------------

 REVENUE (USD)
   Min  : $28,341.00
   Max  : $51,203.40
   Mean : $44,782.15
   ...

---- [ SECTION 3: DETECTED ISSUES ] ------------------------

 Total issues found: 6

 [WARNING] missing_values | Column: revenue
   Detail : 1 missing value(s) in 'revenue' column
   Dates  : 2024-02-05
 ...

---- [ SECTION 4: EXECUTIVE SUMMARY ] ----------------------

 Source: Rule-Based Fallback (Ollama unavailable)

 The dataset spans 89 rows from 2024-01-01 to 2024-03-30
 (expected 90 calendar days). Missing values were identified
 in column(s): orders, revenue. A row count gap of 1 was
 detected in the date sequence. 3 day-over-day drop(s)
 exceeding the 20% anomaly threshold were flagged...
```
app.py
ADDED
@@ -0,0 +1,127 @@

import streamlit as st
import pandas as pd
from pathlib import Path
from demo import load_data, run_checks
from llm_utils import generate_summary

# ---------------------------------------------------------------------------
# Configuration & Setup
# ---------------------------------------------------------------------------
st.set_page_config(
    page_title="Analytics Validation Demo",
    page_icon="📊",
    layout="wide",
)

# ---------------------------------------------------------------------------
# Helper Functions
# ---------------------------------------------------------------------------
@st.cache_data
def load_and_verify_data(filepath):
    """Load data using the existing logic from demo.py."""
    return load_data(filepath)

def display_issues(issues):
    """Render the list of issues as Streamlit alerts."""
    if not issues:
        st.success("✅ No issues detected. Data appears clean.")
        return

    st.warning(f"⚠️ Found {len(issues)} issue(s)")

    for issue in issues:
        severity_icon = "🔴" if issue["severity"] == "ERROR" else "⚠️"
        # issue["column"] is None for dataset-wide checks, so `or` (not a
        # .get default) supplies the "General" label.
        with st.expander(f"{severity_icon} [{issue['severity']}] {issue['type']} in {issue.get('column') or 'General'}"):
            st.write(f"**Detail:** {issue['detail']}")
            if issue.get("dates"):
                st.write(f"**Affected Dates:** {', '.join(issue['dates'])}")

def display_metrics(stats):
    """Render key metrics in two columns."""
    st.subheader("Key Metrics")

    col1, col2 = st.columns(2)

    with col1:
        rev = stats["revenue"]
        st.metric("Total Revenue", f"${rev['total']:,.2f}")
        st.caption(f"Mean: ${rev['mean']:,.2f} | Missing: {rev['missing_count']}")

    with col2:
        ord_ = stats["orders"]
        st.metric("Total Orders", f"{int(ord_['total']):,}")
        st.caption(f"Mean: {ord_['mean']:,.0f} | Missing: {ord_['missing_count']}")

    st.divider()

# ---------------------------------------------------------------------------
# Main App Layout
# ---------------------------------------------------------------------------
def main():
    st.title("📊 Analytics Validation Engine")
    st.markdown("""
    This demo validates daily business metrics for anomalies, missing data, and consistency errors.
    It uses a deterministic rule engine and an optional local LLM for narration.
    """)

    # Sidebar parameters
    st.sidebar.header("Configuration")
    uploaded_file = st.sidebar.file_uploader("Upload CSV", type=["csv"])

    # Fall back to the bundled data.csv when nothing is uploaded
    data_path = "data.csv"
    if uploaded_file is None and not Path(data_path).exists():
        st.error("❌ data.csv not found and no file uploaded.")
        st.stop()

    # Load data
    try:
        if uploaded_file:
            # load_data() expects a filepath string, so uploads are read
            # directly with pandas and normalized to the same schema:
            # parsed dates, sorted ascending.
            df = pd.read_csv(uploaded_file, parse_dates=["date"])
            if not pd.api.types.is_datetime64_any_dtype(df["date"]):
                df["date"] = pd.to_datetime(df["date"])
            df = df.sort_values("date").reset_index(drop=True)
        else:
            df = load_and_verify_data(data_path)

        st.sidebar.success(f"Loaded {len(df)} rows")

    except Exception as e:
        st.error(f"Error loading data: {e}")
        st.stop()

    # Run analysis
    with st.spinner("Running validation rules..."):
        results = run_checks(df)

    # 1. Executive summary (LLM if available, deterministic fallback otherwise)
    st.header("Executive Summary")
    with st.spinner("Generating summary (LLM or rule-based)..."):
        summary = generate_summary(results)
    st.info(summary)

    # 2. Detailed issues
    st.header("Detected Issues")
    display_issues(results["issues"])

    # 3. Data overview
    st.header("Data Overview")
    display_metrics(results["stats"])

    with st.expander("View Raw Data"):
        st.dataframe(df)

if __name__ == "__main__":
    main()
data.csv
ADDED
@@ -0,0 +1,90 @@

date,revenue,orders
2024-01-01,44596.55,305
2024-01-02,44688.32,317
2024-01-03,44642.75,290
2024-01-04,45930.49,314
2024-01-05,44392.52,308
2024-01-06,49250.43,354
2024-01-07,50438.58,349
2024-01-08,42932.7,284
2024-01-09,45689.76,331
2024-01-10,45116.64,310
2024-01-11,46488.97,303
2024-01-12,44125.62,310
2024-01-13,51045.53,349
2024-01-14,49654.48,345
2024-01-15,47190.52,312
2024-01-16,46591.1,303
2024-01-17,37664.15,252
2024-01-18,42435.73,303
2024-01-19,46859.94,309
2024-01-20,50972.61,340
2024-01-21,48358.52,330
2024-01-22,45320.38,322
2024-01-23,46787.56,327
2024-01-24,46819.85,329
2024-01-25,43244.44,290
2024-01-26,43684.09,307
2024-01-27,47899.68,358
2024-01-28,46305.98,306
2024-01-29,47151.73,342
2024-01-30,46415.94,330
2024-01-31,48993.77,337
2024-02-01,41015.71,276
2024-02-02,47668.11,311
2024-02-03,48693.89,339
2024-02-04,47716.34,338
2024-02-05,,349
2024-02-06,46735.91,
2024-02-07,43426.97,290
2024-02-08,47666.37,322
2024-02-09,44803.27,318
2024-02-10,46574.3,318
2024-02-11,43444.4,287
2024-02-12,43410.34,304
2024-02-13,48341.78,333
2024-02-14,30842.06,212
2024-02-15,48037.28,342
2024-02-16,45766.34,303
2024-02-17,51129.48,357
2024-02-18,52035.44,359
2024-02-19,50468.68,344
2024-02-21,49460.57,342
2024-02-22,43554.45,287
2024-02-23,44577.11,325
2024-02-24,50885.85,359
2024-02-25,41947.56,298
2024-02-26,46556.39,314
2024-02-27,43243.32,298
2024-02-28,49829.65,331
2024-02-29,43802.15,318
2024-03-01,43750.78,297
2024-03-02,48873.77,322
2024-03-03,49215.85,325
2024-03-04,47478.56,327
2024-03-05,51393.61,358
2024-03-06,48823.93,321
2024-03-07,44658.01,312
2024-03-08,49888.02,324
2024-03-09,51373.74,361
2024-03-10,33649.8,373
2024-03-11,45145.8,305
2024-03-12,41505.1,289
2024-03-13,44463.25,331
2024-03-14,43289.13,302
2024-03-15,40606.8,275
2024-03-16,49331.21,350
2024-03-17,52654.68,363
2024-03-18,41871.71,294
2024-03-19,46447.56,326
2024-03-20,43039.19,310
2024-03-21,45246.11,320
2024-03-22,48567.41,342
2024-03-23,49402.24,367
2024-03-24,49282.37,336
2024-03-25,45313.51,330
2024-03-26,45332.39,319
2024-03-27,48348.05,327
2024-03-28,40165.6,281
2024-03-29,45643.26,307
2024-03-30,51039.04,359
demo.py
ADDED
|
@@ -0,0 +1,396 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| 1 |
+
"""
|
| 2 |
+
demo.py — Analytics Validation Demo
|
| 3 |
+
|
| 4 |
+
Entry point for the analytics automation demo. Loads a CSV of daily business
|
| 5 |
+
metrics, runs a deterministic rule-based validation engine, optionally enriches
|
| 6 |
+
the findings with an LLM narrative, and prints a structured console report.
|
| 7 |
+
|
| 8 |
+
Usage:
|
| 9 |
+
python demo.py
|
| 10 |
+
|
| 11 |
+
The script expects data.csv to be in the same directory as this file.
|
| 12 |
+
"""
|
| 13 |
+
|
| 14 |
+
from __future__ import annotations
|
| 15 |
+
|
| 16 |
+
import sys
|
| 17 |
+
from datetime import datetime
|
| 18 |
+
from pathlib import Path
|
| 19 |
+
|
| 20 |
+
import pandas as pd # type: ignore
|
| 21 |
+
|
| 22 |
+
from llm_utils import generate_summary # type: ignore
|
| 23 |
+
|
| 24 |
+
# ---------------------------------------------------------------------------
|
| 25 |
+
# Constants
|
| 26 |
+
# ---------------------------------------------------------------------------
|
| 27 |
+
|
| 28 |
+
DIVIDER = "=" * 60
|
| 29 |
+
SECTION_DIV = "-" * 60
|
| 30 |
+
|
| 31 |
+
# A day-over-day drop beyond this threshold is flagged as an anomaly.
|
| 32 |
+
# Why 20%: this is a common business heuristic for "something unusual happened"
|
| 33 |
+
# in daily revenue or order metrics. Adjust for your domain as needed.
|
| 34 |
+
ANOMALY_THRESHOLD = -0.20
|
| 35 |
+
|
| 36 |
+
REQUIRED_COLUMNS = {"date", "revenue", "orders"}
|
| 37 |
+
|
| 38 |
+
|
| 39 |
+
# ---------------------------------------------------------------------------
|
| 40 |
+
# Data loading
|
| 41 |
+
# ---------------------------------------------------------------------------
|
| 42 |
+
|
| 43 |
+
|
| 44 |
+
def load_data(filepath: str) -> pd.DataFrame:
|
| 45 |
+
"""
|
| 46 |
+
Read and validate the input CSV file.
|
| 47 |
+
|
| 48 |
+
Args:
|
| 49 |
+
filepath: Absolute or relative path to the CSV.
|
| 50 |
+
|
| 51 |
+
Returns:
|
| 52 |
+
A DataFrame with columns [date (datetime64), revenue (float), orders (numeric)],
|
| 53 |
+
sorted ascending by date.
|
| 54 |
+
|
| 55 |
+
Raises:
|
| 56 |
+
FileNotFoundError: if the file does not exist.
|
| 57 |
+
ValueError: if required columns are missing or dates cannot be parsed.
|
| 58 |
+
pd.errors.EmptyDataError: if the file is empty.
|
| 59 |
+
"""
|
| 60 |
+
df = pd.read_csv(filepath, parse_dates=["date"])
|
| 61 |
+
|
| 62 |
+
# Validate required columns before any processing
|
| 63 |
+
missing_cols = REQUIRED_COLUMNS - set(df.columns)
|
| 64 |
+
if missing_cols:
|
| 65 |
+
found = sorted(df.columns.tolist())
|
| 66 |
+
raise ValueError(
|
| 67 |
+
f"CSV is missing required column(s): {sorted(missing_cols)}. "
|
| 68 |
+
f"Found: {found}"
|
| 69 |
+
)
|
| 70 |
+
|
| 71 |
+
# Confirm date column parsed correctly — bad date formats produce object dtype
|
| 72 |
+
if not pd.api.types.is_datetime64_any_dtype(df["date"]):
|
| 73 |
+
raise ValueError(
|
| 74 |
+
"The 'date' column could not be parsed as dates. "
|
| 75 |
+
"Ensure dates are in YYYY-MM-DD format."
|
| 76 |
+
)
|
| 77 |
+
|
| 78 |
+
# Sort ascending so day-over-day calculations are always forward-looking
|
| 79 |
+
df = df.sort_values("date").reset_index(drop=True)
|
| 80 |
+
return df
|
| 81 |
+
|
| 82 |
+
|
| 83 |
+
# ---------------------------------------------------------------------------
|
| 84 |
+
# Rule engine
|
| 85 |
+
# ---------------------------------------------------------------------------
|
| 86 |
+
|
| 87 |
+
|
| 88 |
+
def run_checks(df: pd.DataFrame) -> dict:
|
| 89 |
+
"""
|
| 90 |
+
Run all deterministic validation checks against the DataFrame.
|
| 91 |
+
|
| 92 |
+
This function never calls any LLM or external service. Every result is
|
| 93 |
+
derived purely from the data using pandas arithmetic and comparisons.
|
| 94 |
+
|
| 95 |
+
Args:
|
| 96 |
+
df: Clean DataFrame from load_data().
|
| 97 |
+
|
| 98 |
+
Returns:
|
| 99 |
+
A dict with keys:
|
| 100 |
+
"issues": list of issue dicts (see _make_issue for structure)
|
| 101 |
+
"stats": dict with "revenue", "orders", and "date_range" sub-dicts
|
| 102 |
+
"""
|
| 103 |
+
issues: list[dict] = []
|
| 104 |
+
|
| 105 |
+
# -- Check A: Missing values -----------------------------------------
|
| 106 |
+
# NaN values in numeric columns make any aggregation over that period
|
| 107 |
+
# potentially misleading. Surface them so analysts know before reporting.
|
| 108 |
+
for col in ["revenue", "orders"]:
|
| 109 |
+
missing_mask = df[col].isna()
|
| 110 |
+
missing_count = int(missing_mask.sum())
|
| 111 |
+
if missing_count > 0:
|
| 112 |
+
affected_dates = df.loc[missing_mask, "date"].dt.strftime("%Y-%m-%d").tolist()
|
| 113 |
+
issues.append(
|
| 114 |
+
_make_issue(
|
| 115 |
+
type_="missing_values",
|
| 116 |
+
severity="WARNING",
|
| 117 |
+
column=col,
|
| 118 |
+
detail=f"{missing_count} missing value(s) in '{col}' column",
|
| 119 |
+
dates=affected_dates,
|
| 120 |
+
value=float(missing_count),
|
| 121 |
+
)
|
| 122 |
+
)
|
| 123 |
+
|
| 124 |
+
# -- Check B: Row count consistency ----------------------------------
|
| 125 |
+
# A missing date in a time series is invisible to most BI tools and
|
| 126 |
+
# creates silent gaps in trend lines. Flagging it early prevents charts
|
| 127 |
+
# that appear continuous but are actually dropping a day of data.
|
| 128 |
+
date_start = df["date"].min()
|
| 129 |
+
date_end = df["date"].max()
|
| 130 |
+
expected_rows = (date_end - date_start).days + 1
|
| 131 |
+
actual_rows = len(df)
|
| 132 |
+
row_gap = expected_rows - actual_rows
|
| 133 |
+
|
| 134 |
+
if row_gap != 0:
|
| 135 |
+
issues.append(
|
| 136 |
+
_make_issue(
|
| 137 |
+
type_="row_count",
|
| 138 |
+
severity="WARNING",
|
| 139 |
+
column=None,
|
| 140 |
+
detail=(
|
| 141 |
+
f"Expected {expected_rows} rows for date range "
|
| 142 |
+
f"{date_start.date()} to {date_end.date()}, "
|
| 143 |
+
f"found {actual_rows} (gap: {row_gap})"
|
| 144 |
+
),
|
| 145 |
+
dates=[],
|
| 146 |
+
value=float(row_gap),
|
| 147 |
+
)
|
| 148 |
+
)
|
| 149 |
+
|
| 150 |
+
# -- Check C: Day-over-day anomaly drops -----------------------------
|
| 151 |
+
# A >20% single-day drop almost always signals either a data quality
|
| 152 |
+
# problem or a significant business event requiring executive attention.
|
| 153 |
+
# We drop NaN rows before calling pct_change() to prevent a missing value
|
| 154 |
+
# from propagating into the percentage calculation for adjacent rows.
|
| 155 |
+
for col in ["revenue", "orders"]:
|
| 156 |
+
series_clean = df[["date", col]].dropna(subset=[col]).copy()
|
| 157 |
+
series_clean["pct_change"] = series_clean[col].pct_change()
|
| 158 |
+
|
| 159 |
+
anomalies = series_clean[series_clean["pct_change"] < ANOMALY_THRESHOLD]
|
| 160 |
+
for _, row in anomalies.iterrows():
|
| 161 |
+
pct = float(row["pct_change"])
|
| 162 |
+
issues.append(
|
| 163 |
+
_make_issue(
|
| 164 |
+
type_="anomaly_drop",
|
| 165 |
+
severity="WARNING",
|
| 166 |
+
column=col,
|
| 167 |
+
detail=(
|
| 168 |
+
f"'{col}' dropped {pct:.1%} on "
|
| 169 |
+
f"{row['date'].strftime('%Y-%m-%d')}"
|
| 170 |
+
),
|
| 171 |
+
dates=[str(row["date"].strftime("%Y-%m-%d"))],
|
| 172 |
+
value=round(float(pct * 100), 2), # type: ignore
|
| 173 |
+
)
|
| 174 |
+
)
|
| 175 |
+
|
| 176 |
+
# -- Check D: Duplicate dates ----------------------------------------
|
| 177 |
+
# Duplicate dates cause silent double-counting in GROUP BY aggregations.
|
| 178 |
+
# This is classified as ERROR (not WARNING) because it corrupts totals.
|
| 179 |
+
dup_mask = df["date"].duplicated(keep=False)
|
| 180 |
+
dup_count = int(dup_mask.sum())
|
| 181 |
+
if dup_count > 0:
|
| 182 |
+
        raw_dates = df.loc[dup_mask, "date"].dt.strftime("%Y-%m-%d").unique().tolist()
        dup_dates: list[str] = sorted(str(d) for d in raw_dates)
        issues.append(
            _make_issue(
                type_="duplicate_dates",
                severity="ERROR",
                column="date",
                detail=f"{dup_count} rows share duplicate dates: {', '.join(dup_dates)}",
                dates=dup_dates,
                value=float(dup_count),
            )
        )

    # -- Statistics (always computed, skipna=True so one NaN doesn't block all stats)
    stats = _compute_stats(df, date_start, date_end, actual_rows, expected_rows, row_gap)

    return {"issues": issues, "stats": stats}


def _make_issue(
    *,
    type_: str,
    severity: str,
    column: str | None,
    detail: str,
    dates: list[str],
    value: float | None,
) -> dict:
    """Return a consistently structured issue dict."""
    return {
        "type": type_,
        "severity": severity,
        "column": column,
        "detail": detail,
        "dates": dates,
        "value": value,
    }


def _compute_stats(
    df: pd.DataFrame,
    date_start: pd.Timestamp,
    date_end: pd.Timestamp,
    actual_rows: int,
    expected_rows: int,
    row_gap: int,
) -> dict:
    """Compute descriptive statistics for revenue and orders."""
    stats: dict = {}

    for col in ["revenue", "orders"]:
        col_data = df[col]
        stats[col] = {
            "min": _safe_round(col_data.min(skipna=True), 2),
            "max": _safe_round(col_data.max(skipna=True), 2),
            "mean": _safe_round(col_data.mean(skipna=True), 2),
            "std": _safe_round(col_data.std(skipna=True), 2),
            "total": _safe_round(col_data.sum(skipna=True), 2),
            "missing_count": int(col_data.isna().sum()),
        }

    stats["date_range"] = {
        "start": date_start.strftime("%Y-%m-%d"),
        "end": date_end.strftime("%Y-%m-%d"),
        "actual_rows": actual_rows,
        "expected_rows": expected_rows,
        "row_gap": row_gap,
    }

    return stats


def _safe_round(value: float | None, ndigits: int) -> float | None:
    """Round a value, returning None if it is NaN (e.g., all-NaN column)."""
    try:
        if value is None or (isinstance(value, float) and pd.isna(value)):
            return None
        return round(float(value), int(ndigits))  # type: ignore
    except (TypeError, ValueError):
        return None


# ---------------------------------------------------------------------------
# Console rendering
# ---------------------------------------------------------------------------


def format_console_output(results: dict, llm_summary: str) -> None:
    """
    Print a structured, ASCII-safe validation report to stdout.

    This is a pure I/O function. It reads from results and llm_summary only;
    it does not compute, modify, or validate anything.

    Args:
        results: The dict returned by run_checks().
        llm_summary: The string returned by generate_summary().
    """
    timestamp = datetime.now().strftime("%Y-%m-%d %H:%M:%S")
    dr = results["stats"]["date_range"]
    issues = results["issues"]

    # Header
    print(DIVIDER)
    print(" ANALYTICS VALIDATION REPORT")
    print(f" Generated : {timestamp}")
    print(" Data file : data.csv")
    print(DIVIDER)

    # Section 1: Data Overview
    _section("1: DATA OVERVIEW")
    print(f" Date Range : {dr['start']} to {dr['end']}")
    print(f" Actual Rows : {dr['actual_rows']}")
    print(f" Expected Rows : {dr['expected_rows']}")
    gap_label = f"{dr['row_gap']} row gap detected" if dr["row_gap"] != 0 else "Consistent"
    print(f" Row Status : {gap_label}")

    # Section 2: KPI Statistics
    _section("2: KPI STATISTICS")

    rev = results["stats"]["revenue"]
    print("\n REVENUE (USD)")
    print(f" Min : {_fmt_usd(rev['min'])}")
    print(f" Max : {_fmt_usd(rev['max'])}")
    print(f" Mean : {_fmt_usd(rev['mean'])}")
    print(f" Std Dev : {_fmt_usd(rev['std'])}")
    print(f" Total : {_fmt_usd(rev['total'])}")
    print(f" Missing : {rev['missing_count']} value(s)")

    ord_ = results["stats"]["orders"]
    print("\n ORDERS")
    print(f" Min : {_fmt_int(ord_['min'])}")
    print(f" Max : {_fmt_int(ord_['max'])}")
    print(f" Mean : {_fmt_int(ord_['mean'])}")
    print(f" Std Dev : {_fmt_int(ord_['std'])}")
    print(f" Total : {_fmt_int(ord_['total'])}")
    print(f" Missing : {ord_['missing_count']} value(s)")

    # Section 3: Detected Issues
    _section("3: DETECTED ISSUES")
    print(f"\n Total issues found: {len(issues)}")

    if not issues:
        print("\n No issues detected. Data appears clean.")
    else:
        for issue in issues:
            col_label = issue["column"] if issue["column"] else "N/A"
            print(f"\n [{issue['severity']}] {issue['type']} | Column: {col_label}")
            print(f" Detail : {issue['detail']}")
            if issue["dates"]:
                print(f" Dates : {', '.join(issue['dates'])}")

    # Section 4: Executive Summary
    _section("4: EXECUTIVE SUMMARY")
    print()
    print(llm_summary)

    # Footer
    print(f"\n{DIVIDER}")
    print(" END OF REPORT")
    print(DIVIDER)


def _section(title: str) -> None:
    """Print a section divider line."""
    header = f"---- [ SECTION {title} ] "
    print(f"\n{header}{'-' * max(0, 60 - len(header))}")


def _fmt_usd(value: float | None) -> str:
    if value is None:
        return "N/A"
    return f"${value:,.2f}"


def _fmt_int(value: float | None) -> str:
    if value is None:
        return "N/A"
    return f"{int(value):,}"


# ---------------------------------------------------------------------------
# Entry point
# ---------------------------------------------------------------------------


def main() -> None:
    # Resolve data path relative to this script's directory so the demo works
    # regardless of where the user invokes it from (e.g., from the repo root).
    script_dir = Path(__file__).parent
    data_path = script_dir / "data.csv"

    # Load — surface file and format errors before doing any work
    try:
        df = load_data(str(data_path))
    except FileNotFoundError:
        print(f"ERROR: data.csv not found at '{data_path}'")
        print("Ensure data.csv is in the same directory as demo.py.")
        sys.exit(1)
    except (ValueError, pd.errors.EmptyDataError) as exc:
        print(f"ERROR: Could not load data — {exc}")
        sys.exit(1)

    # Rule engine — always deterministic, never calls external services
    results = run_checks(df)

    # LLM narration — may fall back to rule-based text (handled inside generate_summary)
    llm_summary = generate_summary(results)

    # Render report
    format_console_output(results, llm_summary)


if __name__ == "__main__":
    main()
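Because `_make_issue` guarantees that every issue dict in `run_checks`' output carries the same six keys (`type`, `severity`, `column`, `detail`, `dates`, `value`), downstream consumers can filter or group issues without defensive key checks. A minimal sketch of that consumption pattern — the sample values here are hypothetical, not taken from data.csv:

```python
# Hypothetical issues shaped like the dicts _make_issue() returns.
issues = [
    {"type": "missing_values", "severity": "WARN", "column": "orders",
     "detail": "2 missing value(s)", "dates": ["2024-02-03"], "value": 2.0},
    {"type": "duplicate_dates", "severity": "ERROR", "column": "date",
     "detail": "2 rows share duplicate dates: 2024-02-10",
     "dates": ["2024-02-10"], "value": 2.0},
]

# Uniform keys make severity-based triage a one-liner.
errors = [i for i in issues if i["severity"] == "ERROR"]
```

This is why `_make_issue` is keyword-only (`*`): callers cannot silently swap argument order and produce a structurally valid but semantically wrong issue.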
llm_utils.py
ADDED
|
@@ -0,0 +1,240 @@
"""
llm_utils.py — LangChain integration layer for the analytics demo.

Design principle: the LLM is an optional narrator, not a decision-maker.
Every public function must return a valid result even when Ollama is absent.
"""

from __future__ import annotations

import sys

# Guard against environments where LangChain is not installed.
# The rule engine in demo.py never imports this at module level for logic,
# so an ImportError here degrades gracefully to the fallback path.
#
# We prefer langchain-ollama (the current, non-deprecated package) and fall
# back to langchain_community.llms.Ollama for environments that only have
# the older package installed.
try:
    from langchain_core.prompts import PromptTemplate

    try:
        from langchain_ollama import OllamaLLM as Ollama  # preferred (langchain-ollama)
    except ImportError:
        from langchain_community.llms import Ollama  # fallback (older install)

    LANGCHAIN_AVAILABLE = True
except ImportError:
    LANGCHAIN_AVAILABLE = False

# ---------------------------------------------------------------------------
# Configuration
# ---------------------------------------------------------------------------

OLLAMA_BASE_URL = "http://localhost:11434"
OLLAMA_MODEL = "llama3.2"

# The system instruction is kept as a constant so it can be audited,
# versioned, and referenced in documentation independently of the template.
SYSTEM_PROMPT = (
    "You are assisting a business analyst. "
    "Explain the following metric validation findings clearly and factually. "
    "Do not speculate on causes. "
    "Do not introduce information not present in the data. "
    "Use a neutral, executive-friendly tone."
)


# ---------------------------------------------------------------------------
# Internal helpers
# ---------------------------------------------------------------------------


def _build_prompt_template() -> "PromptTemplate":
    """
    Construct the LangChain PromptTemplate.

    Why PromptTemplate rather than an f-string: the variable injection point
    is explicit and the template can be tested and versioned independently
    of the findings serialization logic.
    """
    template = (
        f"{SYSTEM_PROMPT}\n\n"
        "Metric Validation Findings:\n"
        "{findings_text}\n\n"
        "Provide a concise executive summary (3-5 sentences) of the above findings. "
        "Stick strictly to the facts presented."
    )
    return PromptTemplate(
        input_variables=["findings_text"],
        template=template,
    )


def _serialize_findings(findings: dict) -> str:
    """
    Convert the structured findings dict into a plain-text paragraph suitable
    for injection into the LLM prompt.

    Why plain text rather than raw JSON: LLMs produce better, more natural
    summaries when given prose-style context rather than nested JSON objects.
    """
    lines: list[str] = []

    dr = findings["stats"]["date_range"]
    lines.append(
        f"Dataset covers {dr['actual_rows']} rows from {dr['start']} to {dr['end']}. "
        f"Expected {dr['expected_rows']} rows based on the date range "
        f"(gap: {dr['row_gap']} row(s))."
    )

    rev = findings["stats"]["revenue"]
    lines.append(
        f"Revenue (USD): mean=${rev['mean']:,.2f}, std=${rev['std']:,.2f}, "
        f"total=${rev['total']:,.2f}, missing={rev['missing_count']} value(s)."
    )

    ord_ = findings["stats"]["orders"]
    lines.append(
        f"Orders: mean={ord_['mean']:,.0f}, std={ord_['std']:,.1f}, "
        f"total={int(ord_['total']):,}, missing={ord_['missing_count']} value(s)."
    )

    issues = findings["issues"]
    if issues:
        lines.append(f"\nDetected {len(issues)} issue(s):")
        for issue in issues:
            date_str = f" on {', '.join(issue['dates'])}" if issue["dates"] else ""
            lines.append(f" - [{issue['severity']}] {issue['detail']}{date_str}")
    else:
        lines.append("\nNo data quality issues detected. Dataset appears clean.")

    return "\n".join(lines)


def _try_ollama_summary(findings_text: str) -> str | None:
    """
    Attempt a local Ollama call via LangChain. Returns the summary string on
    success, or None on any failure (connection refused, model not found, etc.).

    Why return None instead of raising: the caller uses None as a signal to
    activate the deterministic fallback, keeping all error-handling in one place.
    Errors are printed to stderr so they don't pollute the report on stdout.
    """
    if not LANGCHAIN_AVAILABLE:
        return None

    try:
        prompt_template = _build_prompt_template()
        llm = Ollama(base_url=OLLAMA_BASE_URL, model=OLLAMA_MODEL, timeout=30)

        # LCEL pipe syntax: preferred over deprecated LLMChain
        chain = prompt_template | llm
        result = chain.invoke({"findings_text": findings_text})

        # langchain_community.llms.Ollama returns a plain str;
        # ChatOllama returns an AIMessage — handle both defensively.
        if hasattr(result, "content"):
            return result.content.strip() or None
        return str(result).strip() or None

    except Exception as exc:
        print(
            f"[llm_utils] Ollama unavailable ({type(exc).__name__}): {exc}",
            file=sys.stderr,
        )
        return None


def _rule_based_summary(findings: dict) -> str:
    """
    Generate a deterministic plain-text summary from the findings dict.

    This is the guaranteed fallback when no LLM is available. Template-driven
    text is auditable, predictable, and consistent — qualities analysts require
    from a validation tool used in reporting contexts.
    """
    dr = findings["stats"]["date_range"]
    issues = findings["issues"]

    line1 = (
        f"The dataset spans {dr['actual_rows']} rows from {dr['start']} to "
        f"{dr['end']} (expected {dr['expected_rows']} calendar days)."
    )

    if not issues:
        return (
            f" {line1}\n"
            " No data quality issues were detected. "
            "The dataset appears suitable for reporting."
        )

    parts: list[str] = []

    missing_issues = [i for i in issues if i["type"] == "missing_values"]
    row_issues = [i for i in issues if i["type"] == "row_count"]
    anomaly_issues = [i for i in issues if i["type"] == "anomaly_drop"]
    duplicate_issues = [i for i in issues if i["type"] == "duplicate_dates"]

    if missing_issues:
        cols = sorted({i["column"] for i in missing_issues})
        parts.append(f"Missing values were identified in column(s): {', '.join(cols)}.")

    if row_issues:
        gap = row_issues[0]["value"]
        parts.append(f"A row count gap of {int(gap)} was detected in the date sequence.")

    if duplicate_issues:
        parts.append(
            f"{len(duplicate_issues)} duplicate date(s) were found, "
            "which may cause double-counting in aggregations."
        )

    if anomaly_issues:
        # Report worst single-day drop
        worst = min(anomaly_issues, key=lambda x: x["value"])
        parts.append(
            f"{len(anomaly_issues)} day-over-day drop(s) exceeding the 20% anomaly "
            f"threshold were flagged; the largest was {worst['value']:.1f}% "
            f"on {worst['dates'][0]}."
        )

    parts.append(
        "These findings should be reviewed and resolved before this dataset "
        "is used in executive or board-level reporting."
    )

    body = " ".join(parts)
    return f" {line1}\n {body}"


# ---------------------------------------------------------------------------
# Public API
# ---------------------------------------------------------------------------


def generate_summary(findings: dict) -> str:
    """
    Generate a human-readable summary of the findings dict.

    Attempts an Ollama (local LLM) call first; falls back to a deterministic
    rule-based summary if Ollama is unavailable. Always returns a non-empty string.

    Args:
        findings: The dict returned by demo.run_checks()

    Returns:
        A formatted summary string including a source label.
    """
    findings_text = _serialize_findings(findings)
    llm_result = _try_ollama_summary(findings_text)

    if llm_result:
        source_label = f" Source: Ollama ({OLLAMA_MODEL})\n"
        # Indent each line of LLM output to match the report's 2-space style
        indented = "\n".join(f" {line}" for line in llm_result.splitlines())
        return source_label + "\n" + indented

    source_label = " Source: Rule-Based Fallback (Ollama unavailable)\n"
    return source_label + "\n" + _rule_based_summary(findings)
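The defensive branch in `_try_ollama_summary` exists because different LangChain LLM wrappers return different result types: plain-string completions versus message objects exposing `.content`. That normalization can be isolated and exercised without any LLM at all. A minimal sketch — `FakeMessage` is a stand-in invented here for illustration, not a real LangChain class:

```python
from __future__ import annotations


class FakeMessage:
    """Hypothetical stand-in for a chat-model result exposing .content."""

    def __init__(self, content: str) -> None:
        self.content = content


def extract_text(result) -> str | None:
    # Mirrors _try_ollama_summary: normalize both plain-str results and
    # message-object results to a stripped string, or None when empty.
    if hasattr(result, "content"):
        return result.content.strip() or None
    return str(result).strip() or None
```

Note that an all-whitespace reply collapses to None, which the caller treats the same as a failed call, so blank LLM output also triggers the rule-based fallback.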
requirements.txt
CHANGED

@@ -1,3 +1,4 @@
-…
-…
-streamlit
+pandas
+langchain-ollama
+streamlit
+watchdog
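The row-count check that feeds `expected_rows` and `row_gap` reduces to simple calendar arithmetic: for a daily series, the expected number of rows is the inclusive day count of the date range. A minimal stdlib sketch of that arithmetic (demo.py computes it from pandas Timestamps, but the math is the same; the dates below are illustrative):

```python
from datetime import date


def expected_rows(start: date, end: date) -> int:
    # Inclusive day count: Jan 1 through Jan 3 expects 3 daily rows.
    return (end - start).days + 1


def row_gap(actual_rows: int, start: date, end: date) -> int:
    # Positive gap means fewer rows than calendar days (missing dates);
    # negative means more (duplicates).
    return expected_rows(start, end) - actual_rows
```

A nonzero gap alone does not say *which* dates are missing or duplicated; that is why the engine also runs the duplicate-date check against the parsed `date` column.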