atascioglu committed on
Commit
e8eb0ce
·
verified ·
1 Parent(s): 5f46bc8

Upload 7 files

Browse files
Files changed (7)
  1. Dockerfile +29 -0
  2. README.md +41 -5
  3. app.py +591 -0
  4. datacreation.ipynb +931 -0
  5. pythonanalysis.ipynb +0 -0
  6. requirements.txt +21 -0
  7. style.css +374 -0
Dockerfile ADDED
@@ -0,0 +1,29 @@
+ # syntax=docker/dockerfile:1
+
+ FROM python:3.12-slim
+
+ # Install system dependencies
+ RUN apt-get update && apt-get install -y --no-install-recommends \
+     gcc \
+     g++ \
+     build-essential \
+     libffi-dev \
+     && rm -rf /var/lib/apt/lists/*
+
+ # Set working directory
+ WORKDIR /app
+
+ # Copy all files
+ COPY . .
+
+ # Install Python dependencies
+ RUN pip install --no-cache-dir -r requirements.txt
+
+ # Install ipykernel and register it as the python3 kernel
+ RUN python -m ipykernel install --name python3 --display-name "Python 3" --user
+
+ # Expose the Gradio default port
+ EXPOSE 7860
+
+ # Run the app
+ CMD ["python", "app.py"]
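Before pushing to a Space, the image can be smoke-tested locally; a minimal sketch, assuming Docker is installed (the `aibdm-app` tag is illustrative, not part of the repo):

```shell
# Build the image from the repo root (where the Dockerfile lives)
docker build -t aibdm-app .

# Run it, publishing the Gradio port the Dockerfile exposes
docker run --rm -p 7860:7860 aibdm-app
```

The app should then be reachable at http://localhost:7860, the same port Hugging Face Spaces expects.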
README.md CHANGED
@@ -1,10 +1,46 @@
  ---
- title: SE21AppTemplate
- emoji: 🌍
- colorFrom: purple
- colorTo: indigo
  sdk: docker
  pinned: false
  ---

- Check out the configuration reference at https://huggingface.co/docs/hub/spaces-config-reference
  ---
+ title: AIBDM Workshop App
+ emoji: "\U0001F4CA"
+ colorFrom: blue
+ colorTo: purple
  sdk: docker
  pinned: false
+ short_description: AI-enhanced analytics dashboard for ESCP AIBDM course
  ---

+ # AIBDM Workshop App
+
+ An AI-enhanced analytics dashboard built for the ESCP Business School **AI in Business Decision Making** course. This app runs student Jupyter notebooks directly in the browser, displays their outputs (tables, charts, and narrative), and provides an optional AI assistant for interpreting results.
+
+ ## What It Does
+
+ - Executes student Jupyter notebooks on demand using Papermill and renders the results (dataframes, plots, and markdown) in an interactive Gradio dashboard.
+ - Supports two notebook slots (Notebook 1 and Notebook 2) covering different analytics exercises.
+ - Includes an AI chat assistant that can answer questions about the notebook outputs and business analytics concepts.
+
+ ## How to Customize
+
+ Swap the default notebook filenames by setting environment variables in your Space settings:
+
+ | Variable | Description | Default |
+ |----------|-------------|---------|
+ | `NB1` | Filename for the first notebook | `datacreation.ipynb` |
+ | `NB2` | Filename for the second notebook | `pythonanalysis.ipynb` |
+
+ Place your `.ipynb` files in the root of the Space repository alongside `app.py`.
+
+ ## Enabling AI Features
+
+ To activate the AI chat assistant, add your Hugging Face API key as a Space secret:
+
+ 1. Go to your Space **Settings > Secrets**.
+ 2. Add a secret named `HF_API_KEY` with your Hugging Face token value.
+
+ When the key is not set, the app still runs normally, but the assistant falls back to simple keyword matching instead of the LLM.
+
+ ## Built With
+
+ - [Gradio](https://gradio.app/) for the web interface
+ - [Papermill](https://papermill.readthedocs.io/) for notebook execution
+ - pandas, numpy, matplotlib, seaborn, plotly for data analysis and visualization
+ - Built for [ESCP Business School](https://escp.eu/)
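As a quick illustration of how the `NB1`/`NB2` variables behave, the app resolves the notebook filenames with `os.getenv`, so unset variables fall back to the bundled defaults (a minimal sketch; the names match the table above):

```python
import os

# NB1/NB2 override the notebook filenames; unset variables fall back
# to the defaults that ship with the Space.
nb1 = os.getenv("NB1", "datacreation.ipynb")
nb2 = os.getenv("NB2", "pythonanalysis.ipynb")
print(nb1, nb2)
```

Setting the variable in the Space settings (or via `docker run -e NB1=mynotebook.ipynb ...` locally) therefore swaps the notebook without touching the code.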
app.py ADDED
@@ -0,0 +1,591 @@
+ """
+ AIBDM 2026 - AI & Big Data Management | ESCP Business School
+ Gradio App Template for Hugging Face Spaces
+
+ This app executes student Jupyter notebooks (data creation + analysis),
+ displays results in a gallery, and provides an AI-powered dashboard
+ for exploring the generated artifacts.
+
+ Usage:
+     1. Place your two notebooks in the repo root.
+     2. Set environment variables (or use defaults).
+     3. Deploy to Hugging Face Spaces.
+ """
+
+ # ─────────────────────────────────────────────
+ # Imports
+ # ─────────────────────────────────────────────
+ import os, json, glob, time, re, textwrap, traceback, subprocess, sys
+ from pathlib import Path
+ from functools import lru_cache
+
+ import gradio as gr
+ import pandas as pd
+
+ # ─────────────────────────────────────────────
+ # Configuration (all via environment variables)
+ # ─────────────────────────────────────────────
+ NB1 = os.getenv("NB1", "datacreation.ipynb")
+ NB2 = os.getenv("NB2", "pythonanalysis.ipynb")
+ HF_API_KEY = os.getenv("HF_API_KEY", "")
+ MODEL_NAME = os.getenv("MODEL_NAME", "mistralai/Mistral-7B-Instruct-v0.3")
+
+ FIG_DIR = Path("artifacts/py/figures")
+ TABLE_DIR = Path("artifacts/py/tables")
+ KERNEL_NAME = "python3"
+
+ # ─────────────────────────────────────────────
+ # Directory & kernel helpers
+ # ─────────────────────────────────────────────
+ def ensure_dirs():
+     """Create artifact output directories if they don't exist."""
+     FIG_DIR.mkdir(parents=True, exist_ok=True)
+     TABLE_DIR.mkdir(parents=True, exist_ok=True)
+
+
+ def ensure_python_kernelspec():
+     """Register the current Python interpreter as a Jupyter kernel so
+     Papermill can find it. Safe to call repeatedly."""
+     try:
+         subprocess.check_call(
+             [sys.executable, "-m", "ipykernel", "install",
+              "--user", "--name", KERNEL_NAME],
+             stdout=subprocess.DEVNULL, stderr=subprocess.DEVNULL,
+         )
+     except Exception as exc:
+         print(f"[WARN] Could not install kernelspec: {exc}")
+
+ # Run once at import time
+ ensure_dirs()
+ ensure_python_kernelspec()
+
+ # ─────────────────────────────────────────────
+ # Notebook execution via Papermill
+ # ─────────────────────────────────────────────
+ def run_notebook(nb_name: str) -> str:
+     """Execute a notebook with Papermill. Returns a log string."""
+     import papermill as pm
+
+     nb_path = Path(nb_name)
+     if not nb_path.exists():
+         return f"ERROR Notebook not found: {nb_name}"
+
+     out_path = Path(f"artifacts/{nb_path.stem}_output.ipynb")
+     out_path.parent.mkdir(parents=True, exist_ok=True)
+
+     t0 = time.time()
+     try:
+         pm.execute_notebook(
+             str(nb_path),
+             str(out_path),
+             kernel_name=KERNEL_NAME,
+             progress_bar=False,
+         )
+         elapsed = time.time() - t0
+         return f"OK {nb_name} finished in {elapsed:.1f}s\n Output -> {out_path}"
+     except Exception:
+         elapsed = time.time() - t0
+         tb = traceback.format_exc()
+         return f"FAIL {nb_name} failed after {elapsed:.1f}s\n{tb}"
+
+
+ # Pipeline runner wrappers (return status dict + log)
+ _step_status = {"step1": "READY", "step2": "READY"}
+
+
+ def _log_block(title: str, body: str) -> str:
+     return f"\n{'='*60}\n {title}\n{'='*60}\n{body}\n"
+
+
+ def run_datacreation():
+     _step_status["step1"] = "RUNNING"
+     log = _log_block("Step 1: Data Creation", run_notebook(NB1))
+     _step_status["step1"] = "DONE" if "OK" in log else "ERROR"
+     return render_status_html(), log
+
+
+ def run_pythonanalysis():
+     _step_status["step2"] = "RUNNING"
+     log = _log_block("Step 2: Python Analysis", run_notebook(NB2))
+     _step_status["step2"] = "DONE" if "OK" in log else "ERROR"
+     return render_status_html(), log
+
+
+ def run_full_pipeline():
+     all_log = ""
+     status, log = run_datacreation()
+     all_log += log
+     status, log = run_pythonanalysis()
+     all_log += log
+     return status, all_log
+
+ # ─────────────────────────────────────────────
+ # Artifact indexing
+ # ─────────────────────────────────────────────
+ def artifacts_index():
+     """Return (list_of_figure_paths, list_of_table_paths)."""
+     figs = sorted(glob.glob(str(FIG_DIR / "*.png")))
+     tables = (
+         sorted(glob.glob(str(TABLE_DIR / "*.csv")))
+         + sorted(glob.glob(str(TABLE_DIR / "*.json")))
+     )
+     return figs, tables
+
+ # ─────────────────────────────────────────────
+ # KPI helpers
+ # ─────────────────────────────────────────────
+ def load_kpis() -> dict:
+     """Load KPIs from artifacts/py/tables/kpis.json (or return empty)."""
+     kpi_path = TABLE_DIR / "kpis.json"
+     if not kpi_path.exists():
+         return {}
+     try:
+         with open(kpi_path) as f:
+             return json.load(f)
+     except Exception:
+         return {}
+
+
+ _KPI_META = {
+     "n_titles": ("📚", "Book Titles", "#a48de8"),
+     "n_months": ("📅", "Time Periods", "#7aa6f8"),
+     "total_units_sold": ("📦", "Units Sold", "#6ee7c7"),
+     "total_revenue": ("💰", "Total Revenue", "#3dcba8"),
+ }
+
+
+ def render_kpi_cards(kpis: dict | None = None) -> str:
+     """Return glassmorphism HTML cards for each KPI."""
+     if kpis is None:
+         kpis = load_kpis()
+     if not kpis:
+         return (
+             '<div style="background:rgba(255,255,255,0.65);backdrop-filter:blur(16px);'
+             'border-radius:20px;padding:28px;text-align:center;'
+             'border:1px solid rgba(197,180,240,0.3);'
+             'box-shadow:0 8px 32px rgba(124,92,191,0.08);">'
+             '<div style="font-size:36px;margin-bottom:10px;">📊</div>'
+             '<div style="color:#6b5b8e;font-size:14px;font-weight:700;">No data yet</div>'
+             '<div style="color:#9d8fc4;font-size:12px;margin-top:4px;">'
+             'Run the pipeline to populate these cards.</div></div>'
+         )
+
+     def _card(icon, label, value, colour):
+         if isinstance(value, (int, float)):
+             value = f"{value:,.0f}" if value > 100 else str(value)
+         return (
+             f'<div style="background:rgba(255,255,255,0.72);backdrop-filter:blur(16px);'
+             f'border-radius:20px;padding:18px 14px 16px;text-align:center;'
+             f'border:1px solid rgba(255,255,255,0.8);'
+             f'box-shadow:0 4px 16px rgba(124,92,191,0.08);'
+             f'border-top:3px solid {colour};">'
+             f'<div style="font-size:26px;margin-bottom:7px;">{icon}</div>'
+             f'<div style="color:#9d8fc4;font-size:10px;text-transform:uppercase;'
+             f'letter-spacing:1.8px;font-weight:800;margin-bottom:7px;">{label}</div>'
+             f'<div style="color:#2d1f4e;font-size:18px;font-weight:800;">{value}</div>'
+             f'</div>'
+         )
+
+     cards_html = ""
+     for key, val in kpis.items():
+         icon, label, colour = _KPI_META.get(key, ("📈", key.replace("_", " ").title(), "#a48de8"))
+         cards_html += _card(icon, label, val, colour)
+
+     return (
+         f'<div style="display:grid;grid-template-columns:repeat(auto-fit,minmax(150px,1fr));'
+         f'gap:12px;margin-bottom:20px;">{cards_html}</div>'
+     )
+
+ # ─────────────────────────────────────────────
+ # Status badges
+ # ─────────────────────────────────────────────
+ _BADGE_COLORS = {
+     "READY": ("#888", "rgba(255,255,255,0.08)"),
+     "RUNNING": ("#e8a835", "rgba(232,168,53,0.12)"),
+     "DONE": ("#3eca6e", "rgba(62,202,110,0.12)"),
+     "ERROR": ("#e84f4f", "rgba(232,79,79,0.12)"),
+ }
+
+
+ def render_status_html() -> str:
+     """Render pipeline status badges as HTML."""
+     badge_css = (
+         "display:inline-flex;align-items:center;gap:8px;"
+         "padding:8px 18px;border-radius:12px;margin:6px;"
+         "font-family:system-ui,sans-serif;font-size:0.9em;"
+         "backdrop-filter:blur(10px);border:1px solid rgba(255,255,255,0.15);"
+     )
+     badges = []
+     for label, key in [("Data Creation", "step1"), ("Python Analysis", "step2")]:
+         st = _step_status.get(key, "READY")
+         color, bg = _BADGE_COLORS.get(st, _BADGE_COLORS["READY"])
+         dot = f"<span style='width:10px;height:10px;border-radius:50%;background:{color};display:inline-block;'></span>"
+         badges.append(
+             f"<div style='{badge_css}background:{bg};color:{color};'>"
+             f"{dot}<strong>{label}</strong> &mdash; {st}</div>"
+         )
+     return f"<div style='display:flex;flex-wrap:wrap;justify-content:center;'>{''.join(badges)}</div>"
+
+ # ─────────────────────────────────────────────
+ # Gallery / table helpers
+ # ─────────────────────────────────────────────
+ def refresh_gallery():
+     """Return (gallery_images, table_dropdown_choices)."""
+     figs, tables = artifacts_index()
+     table_names = [Path(t).name for t in tables]
+     gallery = figs if figs else []
+     return gallery, gr.update(choices=table_names, value=table_names[0] if table_names else None)
+
+
+ def on_table_select(choice: str | None):
+     """Load the selected table and return a DataFrame (or message)."""
+     if not choice:
+         return pd.DataFrame({"info": ["Select a table from the dropdown."]})
+     for ext_dir in [TABLE_DIR]:
+         path = ext_dir / choice
+         if path.exists():
+             if choice.endswith(".csv"):
+                 try:
+                     return pd.read_csv(path)
+                 except Exception as exc:
+                     return pd.DataFrame({"error": [str(exc)]})
+             elif choice.endswith(".json"):
+                 try:
+                     with open(path) as f:
+                         data = json.load(f)
+                     if isinstance(data, list):
+                         return pd.DataFrame(data)
+                     return pd.DataFrame([data])
+                 except Exception as exc:
+                     return pd.DataFrame({"error": [str(exc)]})
+     return pd.DataFrame({"error": [f"File not found: {choice}"]})
+
+ # ─────────────────────────────────────────────
+ # AI Dashboard
+ # ─────────────────────────────────────────────
+ _SYSTEM_PROMPT = textwrap.dedent("""\
+ You are a data-analysis assistant for an ESCP Business School project.
+ The student has generated the following artifacts:
+
+ FIGURES: {figures}
+ TABLES: {tables}
+ KPIs: {kpis}
+
+ Answer the student's question conversationally. When relevant, end your
+ response with a JSON directive on its own line so the UI can display the
+ right artifact:
+     {{"show_figure": "filename.png"}} OR {{"show_table": "filename.csv"}}
+ Only include the directive when an artifact is directly relevant.
+ """)
+
+
+ def _build_system_prompt() -> str:
+     figs, tables = artifacts_index()
+     fig_names = [Path(f).name for f in figs]
+     tbl_names = [Path(t).name for t in tables]
+     kpis = load_kpis()
+     return _SYSTEM_PROMPT.format(
+         figures=", ".join(fig_names) or "none yet",
+         tables=", ".join(tbl_names) or "none yet",
+         kpis=json.dumps(kpis) if kpis else "none yet",
+     )
+
+
+ def _call_hf_api(messages: list[dict]) -> str:
+     """Call Hugging Face Inference API (returns assistant text)."""
+     from huggingface_hub import InferenceClient
+     client = InferenceClient(model=MODEL_NAME, token=HF_API_KEY)
+     resp = client.chat_completion(messages=messages, max_tokens=512)
+     return resp.choices[0].message.content
+
+
+ def _keyword_fallback(msg: str, idx: tuple, kpis: dict) -> str:
+     """Simple keyword matcher when no LLM is available."""
+     msg_lower = msg.lower()
+     figs, tables = idx
+
+     # Try to match a figure
+     for f in figs:
+         name = Path(f).stem.lower().replace("_", " ")
+         if any(w in msg_lower for w in name.split()):
+             return (
+                 f"Here is the **{Path(f).stem}** chart I found for you.\n\n"
+                 f'{{"show_figure": "{Path(f).name}"}}'
+             )
+
+     # Try to match a table
+     for t in tables:
+         name = Path(t).stem.lower().replace("_", " ")
+         if any(w in msg_lower for w in name.split()):
+             return (
+                 f"Here is the **{Path(t).stem}** table.\n\n"
+                 f'{{"show_table": "{Path(t).name}"}}'
+             )
+
+     # KPI summary
+     if any(k in msg_lower for k in ("kpi", "metric", "summary", "overview")):
+         if kpis:
+             lines = [f"- **{k.replace('_',' ').title()}**: {v}" for k, v in kpis.items()]
+             return "Here are the current KPIs:\n\n" + "\n".join(lines)
+         return "No KPIs have been generated yet. Please run the pipeline first."
+
+     # Sentiment / review keywords
+     if any(k in msg_lower for k in ("sentiment", "review", "opinion")):
+         match = [f for f in figs if "sentiment" in Path(f).name.lower()]
+         if match:
+             return f'The sentiment analysis is shown in this chart.\n\n{{"show_figure": "{Path(match[0]).name}"}}'
+
+     # Forecast / ARIMA
+     if any(k in msg_lower for k in ("forecast", "arima", "predict", "future")):
+         match = [f for f in figs if any(w in Path(f).name.lower() for w in ("forecast", "arima"))]
+         if match:
+             return f'Here is the forecast chart.\n\n{{"show_figure": "{Path(match[0]).name}"}}'
+
+     # Price / pricing
+     if any(k in msg_lower for k in ("price", "pricing", "cost")):
+         match = [f for f in figs if "pric" in Path(f).name.lower()]
+         if match:
+             return f'Here is the pricing chart.\n\n{{"show_figure": "{Path(match[0]).name}"}}'
+
+     return (
+         "I can help you explore your project data. Try asking about "
+         "**sentiment**, **forecasts**, **pricing**, **KPIs**, or mention "
+         "a specific chart/table name."
+     )
+
+
+ def _parse_directive(text: str):
+     """Extract a JSON directive from the assistant response."""
+     match = re.search(r'\{[^{}]*"show_(?:figure|table)"[^{}]*\}', text)
+     if not match:
+         return None, None
+     try:
+         d = json.loads(match.group())
+         if "show_figure" in d:
+             return "figure", d["show_figure"]
+         if "show_table" in d:
+             return "table", d["show_table"]
+     except json.JSONDecodeError:
+         pass
+     return None, None
+
+
+ def ai_chat(user_msg: str, history: list):
+     """Process a user message and return (history, viz_image, viz_table)."""
+     if not user_msg.strip():
+         return history, None, pd.DataFrame()
+
+     figs, tables = artifacts_index()
+     kpis = load_kpis()
+
+     # Build assistant response
+     if HF_API_KEY:
+         try:
+             messages = [{"role": "system", "content": _build_system_prompt()}]
+             for h_user, h_bot in history:
+                 messages.append({"role": "user", "content": h_user})
+                 if h_bot:
+                     messages.append({"role": "assistant", "content": h_bot})
+             messages.append({"role": "user", "content": user_msg})
+             assistant_text = _call_hf_api(messages)
+         except Exception as exc:
+             assistant_text = f"LLM error ({exc}). Falling back to keyword mode.\n\n"
+             assistant_text += _keyword_fallback(user_msg, (figs, tables), kpis)
+     else:
+         assistant_text = _keyword_fallback(user_msg, (figs, tables), kpis)
+
+     # Parse directive
+     kind, name = _parse_directive(assistant_text)
+     # Strip the JSON directive from the displayed message
+     clean_text = re.sub(r'\{[^{}]*"show_(?:figure|table)"[^{}]*\}', "", assistant_text).strip()
+
+     viz_img = None
+     viz_tbl = pd.DataFrame()
+
+     if kind == "figure":
+         fig_path = FIG_DIR / name
+         if fig_path.exists():
+             viz_img = str(fig_path)
+     elif kind == "table":
+         tbl_path = TABLE_DIR / name
+         if tbl_path.exists():
+             viz_tbl = on_table_select(name)
+
+     history = history + [[user_msg, clean_text]]
+     return history, viz_img, viz_tbl
+
+ # ─────────────────────────────────────────────
+ # Gradio Theme & CSS
+ # ─────────────────────────────────────────────
+ theme = gr.themes.Soft(
+     primary_hue=gr.themes.colors.blue,
+     secondary_hue=gr.themes.colors.purple,
+     font=("system-ui", "-apple-system", "Segoe UI", "sans-serif"),
+ )
+
+ def _load_css() -> str:
+     """Load external CSS file if it exists."""
+     css_path = Path(__file__).parent / "style.css"
+     if css_path.exists():
+         return css_path.read_text(encoding="utf-8")
+     return ""
+
+ CUSTOM_CSS = _load_css()
+
+ # ─────────────────────────────────────────────
+ # Build the Gradio UI
+ # ─────────────────────────────────────────────
+ with gr.Blocks(theme=theme, css=CUSTOM_CSS, title="AIBDM 2026 Dashboard") as demo:
+
+     gr.Markdown(
+         "# AIBDM 2026 &mdash; AI & Big Data Management Dashboard\n"
+         "*ESCP Business School &bull; Hugging Face Spaces*"
+     )
+
+     # ── Tab 1: Pipeline Runner ──────────────
+     with gr.Tab("Pipeline Runner"):
+         status_html = gr.HTML(value=render_status_html, every=None)
+
+         with gr.Row():
+             btn_step1 = gr.Button("Step 1: Data Creation", variant="secondary")
+             btn_step2 = gr.Button("Step 2: Python Analysis", variant="secondary")
+             btn_full = gr.Button("Run Full Pipeline", variant="primary")
+
+         log_box = gr.Textbox(
+             label="Execution Log",
+             lines=18,
+             max_lines=40,
+             interactive=False,
+             elem_id="log-box",
+         )
+
+         btn_step1.click(fn=run_datacreation, outputs=[status_html, log_box])
+         btn_step2.click(fn=run_pythonanalysis, outputs=[status_html, log_box])
+         btn_full.click(fn=run_full_pipeline, outputs=[status_html, log_box])
+
+     # ── Tab 2: Results Gallery ──────────────
+     with gr.Tab("Results Gallery"):
+         kpi_html = gr.HTML(value=render_kpi_cards)
+
+         refresh_btn = gr.Button("Refresh Results", variant="secondary")
+
+         gallery = gr.Gallery(
+             label="Generated Figures",
+             columns=3,
+             height="auto",
+             object_fit="contain",
+         )
+
+         table_dd = gr.Dropdown(label="Select a data table", choices=[], interactive=True)
+         table_view = gr.Dataframe(label="Table Preview", wrap=True)
+
+         def _on_refresh():
+             imgs, dd_update = refresh_gallery()
+             khtml = render_kpi_cards()
+             return imgs, dd_update, khtml
+
+         refresh_btn.click(fn=_on_refresh, outputs=[gallery, table_dd, kpi_html])
+         table_dd.change(fn=on_table_select, inputs=[table_dd], outputs=[table_view])
+
+     # ── Tab 3: AI Dashboard ─────────────────
+     with gr.Tab("AI Dashboard"):
+         _llm_note = (
+             "*LLM active.*" if HF_API_KEY
+             else "*No API key detected. Using keyword matching. "
+             "Set `HF_API_KEY` in Space secrets for full AI support.*"
+         )
+         gr.Markdown(
+             "Ask questions about your generated data and the AI will "
+             f"pick the best chart or table to show you.\n\n{_llm_note}"
+         )
+
+         with gr.Row(equal_height=True):
+             with gr.Column(scale=1):
+                 chatbot = gr.Chatbot(label="Chat", height=420, type="tuples")
+                 with gr.Row():
+                     chat_input = gr.Textbox(
+                         placeholder="e.g. Show me the sentiment distribution",
+                         show_label=False,
+                         scale=4,
+                     )
+                     send_btn = gr.Button("Send", variant="primary", scale=1)
+
+                 gr.Examples(
+                     examples=[
+                         "Show me the sentiment analysis results",
+                         "What do the sales forecasts look like?",
+                         "Give me a KPI summary",
+                         "Show the pricing analysis",
+                         "Which books have the best reviews?",
+                     ],
+                     inputs=chat_input,
+                 )
+
+             with gr.Column(scale=1):
+                 viz_image = gr.Image(label="Visualization", height=350)
+                 viz_table = gr.Dataframe(label="Data Table", wrap=True)
+
+         def _send(msg, hist):
+             return ai_chat(msg, hist)
+
+         send_btn.click(
+             fn=_send,
+             inputs=[chat_input, chatbot],
+             outputs=[chatbot, viz_image, viz_table],
+         ).then(fn=lambda: "", outputs=[chat_input])
+
+         chat_input.submit(
+             fn=_send,
+             inputs=[chat_input, chatbot],
+             outputs=[chatbot, viz_image, viz_table],
+         ).then(fn=lambda: "", outputs=[chat_input])
+
+     # ── Tab 4: About ────────────────────────
+     with gr.Tab("About"):
+         gr.Markdown(textwrap.dedent("""\
+         ## About This Dashboard
+
+         This interactive dashboard was built for the **AI & Big Data Management**
+         (AIBDM) course at **ESCP Business School** (2026 cohort).
+
+         ### How It Works
+         1. **Data Creation** notebook scrapes the web and generates synthetic
+            data (books, sales, reviews).
+         2. **Python Analysis** notebook runs sentiment analysis, creates
+            visualizations, builds ARIMA forecasts, and computes pricing
+            decisions.
+         3. All outputs are saved to `artifacts/py/figures/` and
+            `artifacts/py/tables/`.
+         4. This Gradio app displays the results and lets you explore them
+            with an AI assistant.
+
+         ### How to Customize
+         - Replace `datacreation.ipynb` and `pythonanalysis.ipynb` with your
+           own notebooks.
+         - Make sure your notebooks write PNGs to `artifacts/py/figures/`
+           and CSVs/JSONs to `artifacts/py/tables/`.
+         - Optionally export a `kpis.json` file to `artifacts/py/tables/`
+           for the KPI cards.
+         - Set the `HF_API_KEY` secret in your Space settings to enable the
+           AI chat (otherwise keyword fallback is used).
+
+         ### Environment Variables
+         | Variable | Default | Description |
+         |----------|---------|-------------|
+         | `NB1` | `datacreation.ipynb` | Path to data-creation notebook |
+         | `NB2` | `pythonanalysis.ipynb` | Path to analysis notebook |
+         | `HF_API_KEY` | *(empty)* | HF Inference API token |
+         | `MODEL_NAME` | `mistralai/Mistral-7B-Instruct-v0.3` | LLM model ID |
+
+         ### Credits
+         Built with [Gradio](https://gradio.app) and deployed on
+         [Hugging Face Spaces](https://huggingface.co/spaces).
+
+         *ESCP Business School &mdash; AIBDM 2026*
+         """))
+
+ # ─────────────────────────────────────────────
+ # Launch (HF Spaces handles host/port)
+ # ─────────────────────────────────────────────
+ if __name__ == "__main__":
+     demo.launch()
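The contract between the assistant and the UI in `app.py` is a trailing JSON directive naming an artifact. That parsing step can be exercised in isolation; a minimal sketch mirroring the regex used in `_parse_directive` above (the `sales_trend.png` filename is illustrative):

```python
import json
import re


def parse_directive(text: str):
    """Extract the trailing {"show_figure": ...} / {"show_table": ...}
    JSON directive the assistant appends to its reply."""
    match = re.search(r'\{[^{}]*"show_(?:figure|table)"[^{}]*\}', text)
    if not match:
        return None, None
    try:
        d = json.loads(match.group())
    except json.JSONDecodeError:
        return None, None
    if "show_figure" in d:
        return "figure", d["show_figure"]
    if "show_table" in d:
        return "table", d["show_table"]
    return None, None


reply = 'Here is the chart.\n\n{"show_figure": "sales_trend.png"}'
print(parse_directive(reply))  # → ('figure', 'sales_trend.png')
```

Because the regex only matches a flat object containing a `show_figure`/`show_table` key, ordinary prose with braces (or malformed JSON) falls through to `(None, None)` and the chat simply shows no artifact.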
datacreation.ipynb ADDED
@@ -0,0 +1,931 @@
1
+ {
2
+ "cells": [
3
+ {
4
+ "cell_type": "markdown",
5
+ "metadata": {
6
+ "id": "4ba6aba8"
7
+ },
8
+ "source": [
9
+ "# 🤖 **Data Collection, Creation, Storage, and Processing**\n"
10
+ ]
11
+ },
12
+ {
13
+ "cell_type": "markdown",
14
+ "metadata": {
15
+ "id": "jpASMyIQMaAq"
16
+ },
17
+ "source": [
18
+ "## **1.** 📦 Install required packages"
19
+ ]
20
+ },
21
+ {
22
+ "cell_type": "code",
23
+ "execution_count": 1,
24
+ "metadata": {
25
+ "colab": {
26
+ "base_uri": "https://localhost:8080/"
27
+ },
28
+ "id": "f48c8f8c",
29
+ "outputId": "13d0dd5e-82c6-489f-b1f0-e970186a4eb7"
30
+ },
31
+ "outputs": [
32
+ {
33
+ "output_type": "stream",
34
+ "name": "stdout",
35
+ "text": [
36
+ "Requirement already satisfied: beautifulsoup4 in /usr/local/lib/python3.12/dist-packages (4.13.5)\n",
37
+ "Requirement already satisfied: pandas in /usr/local/lib/python3.12/dist-packages (2.2.2)\n",
38
+ "Requirement already satisfied: matplotlib in /usr/local/lib/python3.12/dist-packages (3.10.0)\n",
39
+ "Requirement already satisfied: seaborn in /usr/local/lib/python3.12/dist-packages (0.13.2)\n",
40
+ "Requirement already satisfied: numpy in /usr/local/lib/python3.12/dist-packages (2.0.2)\n",
41
+ "Requirement already satisfied: textblob in /usr/local/lib/python3.12/dist-packages (0.19.0)\n",
42
+ "Requirement already satisfied: soupsieve>1.2 in /usr/local/lib/python3.12/dist-packages (from beautifulsoup4) (2.8.3)\n",
43
+ "Requirement already satisfied: typing-extensions>=4.0.0 in /usr/local/lib/python3.12/dist-packages (from beautifulsoup4) (4.15.0)\n",
44
+ "Requirement already satisfied: python-dateutil>=2.8.2 in /usr/local/lib/python3.12/dist-packages (from pandas) (2.9.0.post0)\n",
45
+ "Requirement already satisfied: pytz>=2020.1 in /usr/local/lib/python3.12/dist-packages (from pandas) (2025.2)\n",
46
+ "Requirement already satisfied: tzdata>=2022.7 in /usr/local/lib/python3.12/dist-packages (from pandas) (2025.3)\n",
47
+ "Requirement already satisfied: contourpy>=1.0.1 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (1.3.3)\n",
48
+ "Requirement already satisfied: cycler>=0.10 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (0.12.1)\n",
49
+ "Requirement already satisfied: fonttools>=4.22.0 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (4.61.1)\n",
50
+ "Requirement already satisfied: kiwisolver>=1.3.1 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (1.4.9)\n",
51
+ "Requirement already satisfied: packaging>=20.0 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (26.0)\n",
52
+ "Requirement already satisfied: pillow>=8 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (11.3.0)\n",
53
+ "Requirement already satisfied: pyparsing>=2.3.1 in /usr/local/lib/python3.12/dist-packages (from matplotlib) (3.3.2)\n",
54
+ "Requirement already satisfied: nltk>=3.9 in /usr/local/lib/python3.12/dist-packages (from textblob) (3.9.1)\n",
55
+ "Requirement already satisfied: click in /usr/local/lib/python3.12/dist-packages (from nltk>=3.9->textblob) (8.3.1)\n",
56
+ "Requirement already satisfied: joblib in /usr/local/lib/python3.12/dist-packages (from nltk>=3.9->textblob) (1.5.3)\n",
57
+ "Requirement already satisfied: regex>=2021.8.3 in /usr/local/lib/python3.12/dist-packages (from nltk>=3.9->textblob) (2025.11.3)\n",
58
+ "Requirement already satisfied: tqdm in /usr/local/lib/python3.12/dist-packages (from nltk>=3.9->textblob) (4.67.3)\n",
59
+ "Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.12/dist-packages (from python-dateutil>=2.8.2->pandas) (1.17.0)\n"
60
+ ]
61
+ }
62
+ ],
63
+ "source": [
64
+ "!pip install beautifulsoup4 pandas matplotlib seaborn numpy textblob"
65
+ ]
66
+ },
67
+ {
68
+ "cell_type": "markdown",
69
+ "metadata": {
70
+ "id": "lquNYCbfL9IM"
71
+ },
72
+ "source": [
73
+ "## **2.** ⛏ Web-scrape all book titles, prices, and ratings from books.toscrape.com"
74
+ ]
75
+ },
76
+ {
77
+ "cell_type": "markdown",
78
+ "metadata": {
79
+ "id": "0IWuNpxxYDJF"
80
+ },
81
+ "source": [
82
+ "### *a. Initial setup*\n",
83
+ "Define the base URL of the website you will scrape, along with the request headers and the empty lists that will hold the scraped data"
84
+ ]
85
+ },
86
+ {
87
+ "cell_type": "code",
88
+ "execution_count": 2,
89
+ "metadata": {
90
+ "id": "91d52125"
91
+ },
92
+ "outputs": [],
93
+ "source": [
94
+ "import requests\n",
95
+ "from bs4 import BeautifulSoup\n",
96
+ "import pandas as pd\n",
97
+ "import time\n",
98
+ "\n",
99
+ "base_url = \"https://books.toscrape.com/catalogue/page-{}.html\"\n",
100
+ "headers = {\"User-Agent\": \"Mozilla/5.0\"}\n",
101
+ "\n",
102
+ "titles, prices, ratings = [], [], []"
103
+ ]
104
+ },
105
+ {
106
+ "cell_type": "markdown",
107
+ "metadata": {
108
+ "id": "oCdTsin2Yfp3"
109
+ },
110
+ "source": [
111
+ "### *b. Fill titles, prices, and ratings from the web pages*"
112
+ ]
113
+ },
114
+ {
115
+ "cell_type": "code",
116
+ "execution_count": 3,
117
+ "metadata": {
118
+ "id": "xqO5Y3dnYhxt"
119
+ },
120
+ "outputs": [],
121
+ "source": [
122
+ "# Loop through all 50 pages\n",
123
+ "for page in range(1, 51):\n",
124
+ " url = base_url.format(page)\n",
125
+ " response = requests.get(url, headers=headers)\n",
126
+ " soup = BeautifulSoup(response.content, \"html.parser\")\n",
127
+ " books = soup.find_all(\"article\", class_=\"product_pod\")\n",
128
+ "\n",
129
+ " for book in books:\n",
130
+ " titles.append(book.h3.a[\"title\"])\n",
131
+ " prices.append(float(book.find(\"p\", class_=\"price_color\").text[1:]))\n",
132
+ " ratings.append(book.p.get(\"class\")[1])\n",
133
+ "\n",
134
+ " time.sleep(0.5) # polite scraping delay"
135
+ ]
136
+ },
137
+ {
138
+ "cell_type": "markdown",
139
+ "metadata": {
140
+ "id": "T0TOeRC4Yrnn"
141
+ },
142
+ "source": [
143
+ "### *c. ✋🏻🛑⛔️ Create a dataframe df_books that contains the now complete \"title\", \"price\", and \"rating\" objects*"
144
+ ]
145
+ },
146
+ {
147
+ "cell_type": "code",
148
+ "execution_count": 4,
149
+ "metadata": {
150
+ "id": "l5FkkNhUYTHh"
151
+ },
152
+ "outputs": [],
153
+ "source": [
+ "# Suggested solution (fill-in cell): assemble the scraped lists into a DataFrame\n",
+ "df_books = pd.DataFrame({\"title\": titles, \"price\": prices, \"rating\": ratings})"
+ ]
154
+ },
155
+ {
156
+ "cell_type": "markdown",
157
+ "metadata": {
158
+ "id": "duI5dv3CZYvF"
159
+ },
160
+ "source": [
161
+ "### *d. Save web-scraped dataframe either as a CSV or Excel file*"
162
+ ]
163
+ },
164
+ {
165
+ "cell_type": "code",
166
+ "execution_count": 5,
167
+ "metadata": {
168
+ "id": "lC1U_YHtZifh"
169
+ },
170
+ "outputs": [],
171
+ "source": [
172
+ "# 💾 Save to CSV\n",
173
+ "df_books.to_csv(\"books_data.csv\", index=False)\n",
174
+ "\n",
175
+ "# 💾 Or save to Excel\n",
176
+ "# df_books.to_excel(\"books_data.xlsx\", index=False)"
177
+ ]
178
+ },
179
+ {
180
+ "cell_type": "markdown",
181
+ "metadata": {
182
+ "id": "qMjRKMBQZlJi"
183
+ },
184
+ "source": [
185
+ "### *e. ✋🏻🛑⛔️ View first few lines*"
186
+ ]
187
+ },
188
+ {
189
+ "cell_type": "code",
190
+ "execution_count": 6,
191
+ "metadata": {
192
+ "colab": {
193
+ "base_uri": "https://localhost:8080/",
194
+ "height": 206
195
+ },
196
+ "id": "O_wIvTxYZqCK",
197
+ "outputId": "349b36b0-c008-4fd5-d4a4-dba38ae18337"
198
+ },
199
+ "outputs": [
200
+ {
201
+ "output_type": "execute_result",
202
+ "data": {
203
+ "text/plain": [
204
+ " title price rating\n",
205
+ "0 A Light in the Attic 51.77 Three\n",
206
+ "1 Tipping the Velvet 53.74 One\n",
207
+ "2 Soumission 50.10 One\n",
208
+ "3 Sharp Objects 47.82 Four\n",
209
+ "4 Sapiens: A Brief History of Humankind 54.23 Five"
210
+ ],
211
+ "text/html": [
212
+ "\n",
213
+ " <div id=\"df-04c87660-4415-45e9-ad3b-3fa19d9402c2\" class=\"colab-df-container\">\n",
214
+ " <div>\n",
215
+ "<style scoped>\n",
216
+ " .dataframe tbody tr th:only-of-type {\n",
217
+ " vertical-align: middle;\n",
218
+ " }\n",
219
+ "\n",
220
+ " .dataframe tbody tr th {\n",
221
+ " vertical-align: top;\n",
222
+ " }\n",
223
+ "\n",
224
+ " .dataframe thead th {\n",
225
+ " text-align: right;\n",
226
+ " }\n",
227
+ "</style>\n",
228
+ "<table border=\"1\" class=\"dataframe\">\n",
229
+ " <thead>\n",
230
+ " <tr style=\"text-align: right;\">\n",
231
+ " <th></th>\n",
232
+ " <th>title</th>\n",
233
+ " <th>price</th>\n",
234
+ " <th>rating</th>\n",
235
+ " </tr>\n",
236
+ " </thead>\n",
237
+ " <tbody>\n",
238
+ " <tr>\n",
239
+ " <th>0</th>\n",
240
+ " <td>A Light in the Attic</td>\n",
241
+ " <td>51.77</td>\n",
242
+ " <td>Three</td>\n",
243
+ " </tr>\n",
244
+ " <tr>\n",
245
+ " <th>1</th>\n",
246
+ " <td>Tipping the Velvet</td>\n",
247
+ " <td>53.74</td>\n",
248
+ " <td>One</td>\n",
249
+ " </tr>\n",
250
+ " <tr>\n",
251
+ " <th>2</th>\n",
252
+ " <td>Soumission</td>\n",
253
+ " <td>50.10</td>\n",
254
+ " <td>One</td>\n",
255
+ " </tr>\n",
256
+ " <tr>\n",
257
+ " <th>3</th>\n",
258
+ " <td>Sharp Objects</td>\n",
259
+ " <td>47.82</td>\n",
260
+ " <td>Four</td>\n",
261
+ " </tr>\n",
262
+ " <tr>\n",
263
+ " <th>4</th>\n",
264
+ " <td>Sapiens: A Brief History of Humankind</td>\n",
265
+ " <td>54.23</td>\n",
266
+ " <td>Five</td>\n",
267
+ " </tr>\n",
268
+ " </tbody>\n",
269
+ "</table>\n",
270
+ "</div>\n",
271
+ " <div class=\"colab-df-buttons\">\n",
272
+ "\n",
273
+ " <div class=\"colab-df-container\">\n",
274
+ " <button class=\"colab-df-convert\" onclick=\"convertToInteractive('df-04c87660-4415-45e9-ad3b-3fa19d9402c2')\"\n",
275
+ " title=\"Convert this dataframe to an interactive table.\"\n",
276
+ " style=\"display:none;\">\n",
277
+ "\n",
278
+ " <svg xmlns=\"http://www.w3.org/2000/svg\" height=\"24px\" viewBox=\"0 -960 960 960\">\n",
279
+ " <path d=\"M120-120v-720h720v720H120Zm60-500h600v-160H180v160Zm220 220h160v-160H400v160Zm0 220h160v-160H400v160ZM180-400h160v-160H180v160Zm440 0h160v-160H620v160ZM180-180h160v-160H180v160Zm440 0h160v-160H620v160Z\"/>\n",
280
+ " </svg>\n",
281
+ " </button>\n",
282
+ "\n",
283
+ " <style>\n",
284
+ " .colab-df-container {\n",
285
+ " display:flex;\n",
286
+ " gap: 12px;\n",
287
+ " }\n",
288
+ "\n",
289
+ " .colab-df-convert {\n",
290
+ " background-color: #E8F0FE;\n",
291
+ " border: none;\n",
292
+ " border-radius: 50%;\n",
293
+ " cursor: pointer;\n",
294
+ " display: none;\n",
295
+ " fill: #1967D2;\n",
296
+ " height: 32px;\n",
297
+ " padding: 0 0 0 0;\n",
298
+ " width: 32px;\n",
299
+ " }\n",
300
+ "\n",
301
+ " .colab-df-convert:hover {\n",
302
+ " background-color: #E2EBFA;\n",
303
+ " box-shadow: 0px 1px 2px rgba(60, 64, 67, 0.3), 0px 1px 3px 1px rgba(60, 64, 67, 0.15);\n",
304
+ " fill: #174EA6;\n",
305
+ " }\n",
306
+ "\n",
307
+ " .colab-df-buttons div {\n",
308
+ " margin-bottom: 4px;\n",
309
+ " }\n",
310
+ "\n",
311
+ " [theme=dark] .colab-df-convert {\n",
312
+ " background-color: #3B4455;\n",
313
+ " fill: #D2E3FC;\n",
314
+ " }\n",
315
+ "\n",
316
+ " [theme=dark] .colab-df-convert:hover {\n",
317
+ " background-color: #434B5C;\n",
318
+ " box-shadow: 0px 1px 3px 1px rgba(0, 0, 0, 0.15);\n",
319
+ " filter: drop-shadow(0px 1px 2px rgba(0, 0, 0, 0.3));\n",
320
+ " fill: #FFFFFF;\n",
321
+ " }\n",
322
+ " </style>\n",
323
+ "\n",
324
+ " <script>\n",
325
+ " const buttonEl =\n",
326
+ " document.querySelector('#df-04c87660-4415-45e9-ad3b-3fa19d9402c2 button.colab-df-convert');\n",
327
+ " buttonEl.style.display =\n",
328
+ " google.colab.kernel.accessAllowed ? 'block' : 'none';\n",
329
+ "\n",
330
+ " async function convertToInteractive(key) {\n",
331
+ " const element = document.querySelector('#df-04c87660-4415-45e9-ad3b-3fa19d9402c2');\n",
332
+ " const dataTable =\n",
333
+ " await google.colab.kernel.invokeFunction('convertToInteractive',\n",
334
+ " [key], {});\n",
335
+ " if (!dataTable) return;\n",
336
+ "\n",
337
+ " const docLinkHtml = 'Like what you see? Visit the ' +\n",
338
+ " '<a target=\"_blank\" href=https://colab.research.google.com/notebooks/data_table.ipynb>data table notebook</a>'\n",
339
+ " + ' to learn more about interactive tables.';\n",
340
+ " element.innerHTML = '';\n",
341
+ " dataTable['output_type'] = 'display_data';\n",
342
+ " await google.colab.output.renderOutput(dataTable, element);\n",
343
+ " const docLink = document.createElement('div');\n",
344
+ " docLink.innerHTML = docLinkHtml;\n",
345
+ " element.appendChild(docLink);\n",
346
+ " }\n",
347
+ " </script>\n",
348
+ " </div>\n",
349
+ "\n",
350
+ "\n",
351
+ " </div>\n",
352
+ " </div>\n"
353
+ ],
354
+ "application/vnd.google.colaboratory.intrinsic+json": {
355
+ "type": "dataframe",
356
+ "variable_name": "df_books",
357
+ "summary": "{\n \"name\": \"df_books\",\n \"rows\": 1000,\n \"fields\": [\n {\n \"column\": \"title\",\n \"properties\": {\n \"dtype\": \"string\",\n \"num_unique_values\": 999,\n \"samples\": [\n \"The Grownup\",\n \"Persepolis: The Story of a Childhood (Persepolis #1-2)\",\n \"Ayumi's Violin\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"price\",\n \"properties\": {\n \"dtype\": \"number\",\n \"std\": 14.446689669952772,\n \"min\": 10.0,\n \"max\": 59.99,\n \"num_unique_values\": 903,\n \"samples\": [\n 19.73,\n 55.65,\n 46.31\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n },\n {\n \"column\": \"rating\",\n \"properties\": {\n \"dtype\": \"category\",\n \"num_unique_values\": 5,\n \"samples\": [\n \"One\",\n \"Two\",\n \"Four\"\n ],\n \"semantic_type\": \"\",\n \"description\": \"\"\n }\n }\n ]\n}"
358
+ }
359
+ },
360
+ "metadata": {},
361
+ "execution_count": 6
362
+ }
363
+ ],
364
+ "source": [
+ "# Suggested solution (fill-in cell): inspect the first rows\n",
+ "df_books.head()"
+ ]
365
+ },
366
+ {
367
+ "cell_type": "markdown",
368
+ "metadata": {
369
+ "id": "p-1Pr2szaqLk"
370
+ },
371
+ "source": [
372
+ "## **3.** 🧩 Create a meaningful connection between real & synthetic datasets"
373
+ ]
374
+ },
375
+ {
376
+ "cell_type": "markdown",
377
+ "metadata": {
378
+ "id": "SIaJUGIpaH4V"
379
+ },
380
+ "source": [
381
+ "### *a. Initial setup*"
382
+ ]
383
+ },
384
+ {
385
+ "cell_type": "code",
386
+ "execution_count": 7,
387
+ "metadata": {
388
+ "id": "-gPXGcRPuV_9"
389
+ },
390
+ "outputs": [],
391
+ "source": [
392
+ "import numpy as np\n",
393
+ "import random\n",
394
+ "from datetime import datetime\n",
395
+ "import warnings\n",
396
+ "\n",
397
+ "warnings.filterwarnings(\"ignore\")\n",
398
+ "random.seed(2025)\n",
399
+ "np.random.seed(2025)"
400
+ ]
401
+ },
402
+ {
403
+ "cell_type": "markdown",
404
+ "metadata": {
405
+ "id": "pY4yCoIuaQqp"
406
+ },
407
+ "source": [
408
+ "### *b. Generate popularity scores based on rating (with some randomness) with a generate_popularity_score function*"
409
+ ]
410
+ },
411
+ {
412
+ "cell_type": "code",
413
+ "execution_count": 8,
414
+ "metadata": {
415
+ "id": "mnd5hdAbaNjz"
416
+ },
417
+ "outputs": [],
418
+ "source": [
419
+ "def generate_popularity_score(rating):\n",
420
+ " base = {\"One\": 2, \"Two\": 3, \"Three\": 3, \"Four\": 4, \"Five\": 4}.get(rating, 3)\n",
421
+ " trend_factor = random.choices([-1, 0, 1], weights=[1, 3, 2])[0]\n",
422
+ " return int(np.clip(base + trend_factor, 1, 5))"
423
+ ]
424
+ },
425
+ {
426
+ "cell_type": "markdown",
427
+ "metadata": {
428
+ "id": "n4-TaNTFgPak"
429
+ },
430
+ "source": [
431
+ "### *c. ✋🏻🛑⛔️ Run the function to create a \"popularity_score\" column from \"rating\"*"
432
+ ]
433
+ },
434
+ {
435
+ "cell_type": "code",
436
+ "execution_count": 9,
437
+ "metadata": {
438
+ "id": "V-G3OCUCgR07"
439
+ },
440
+ "outputs": [],
441
+ "source": [
+ "# Suggested solution (fill-in cell): apply the function to each rating\n",
+ "df_books[\"popularity_score\"] = df_books[\"rating\"].apply(generate_popularity_score)"
+ ]
442
+ },
443
+ {
444
+ "cell_type": "markdown",
445
+ "metadata": {
446
+ "id": "HnngRNTgacYt"
447
+ },
448
+ "source": [
449
+ "### *d. Decide on the sentiment_label based on the popularity score with a get_sentiment function*"
450
+ ]
451
+ },
452
+ {
453
+ "cell_type": "code",
454
+ "execution_count": 10,
455
+ "metadata": {
456
+ "id": "kUtWmr8maZLZ"
457
+ },
458
+ "outputs": [],
459
+ "source": [
460
+ "def get_sentiment(popularity_score):\n",
461
+ " if popularity_score <= 2:\n",
462
+ " return \"negative\"\n",
463
+ " elif popularity_score == 3:\n",
464
+ " return \"neutral\"\n",
465
+ " else:\n",
466
+ " return \"positive\""
467
+ ]
468
+ },
469
+ {
470
+ "cell_type": "markdown",
471
+ "metadata": {
472
+ "id": "HF9F9HIzgT7Z"
473
+ },
474
+ "source": [
475
+ "### *e. ✋🏻🛑⛔️ Run the function to create a \"sentiment_label\" column from \"popularity_score\"*"
476
+ ]
477
+ },
478
+ {
479
+ "cell_type": "code",
480
+ "execution_count": 11,
481
+ "metadata": {
482
+ "id": "tafQj8_7gYCG"
483
+ },
484
+ "outputs": [],
485
+ "source": [
+ "# Suggested solution (fill-in cell): derive the sentiment label from the popularity score\n",
+ "df_books[\"sentiment_label\"] = df_books[\"popularity_score\"].apply(get_sentiment)"
+ ]
486
+ },
487
+ {
488
+ "cell_type": "markdown",
489
+ "metadata": {
490
+ "id": "T8AdKkmASq9a"
491
+ },
492
+ "source": [
493
+ "## **4.** 📈 Generate synthetic book sales data of 18 months"
494
+ ]
495
+ },
496
+ {
497
+ "cell_type": "markdown",
498
+ "metadata": {
499
+ "id": "OhXbdGD5fH0c"
500
+ },
501
+ "source": [
502
+ "### *a. Create a generate_sales_profile function that generates sales patterns based on sentiment_label (with some randomness)*"
503
+ ]
504
+ },
505
+ {
506
+ "cell_type": "code",
507
+ "execution_count": 12,
508
+ "metadata": {
509
+ "id": "qkVhYPXGbgEn"
510
+ },
511
+ "outputs": [],
512
+ "source": [
513
+ "def generate_sales_profile(sentiment):\n",
514
+ " months = pd.date_range(end=datetime.today(), periods=18, freq=\"M\")\n",
515
+ "\n",
516
+ " if sentiment == \"positive\":\n",
517
+ " base = random.randint(200, 300)\n",
518
+ " trend = np.linspace(base, base + random.randint(20, 60), len(months))\n",
519
+ " elif sentiment == \"negative\":\n",
520
+ " base = random.randint(20, 80)\n",
521
+ " trend = np.linspace(base, base - random.randint(10, 30), len(months))\n",
522
+ " else: # neutral\n",
523
+ " base = random.randint(80, 160)\n",
524
+ " trend = np.full(len(months), base + random.randint(-10, 10))\n",
525
+ "\n",
526
+ " seasonality = 10 * np.sin(np.linspace(0, 3 * np.pi, len(months)))\n",
527
+ " noise = np.random.normal(0, 5, len(months))\n",
528
+ " monthly_sales = np.clip(trend + seasonality + noise, a_min=0, a_max=None).astype(int)\n",
529
+ "\n",
530
+ " return list(zip(months.strftime(\"%Y-%m\"), monthly_sales))"
531
+ ]
532
+ },
533
+ {
534
+ "cell_type": "markdown",
535
+ "metadata": {
536
+ "id": "L2ak1HlcgoTe"
537
+ },
538
+ "source": [
539
+ "### *b. Run the function as part of building sales_data*"
540
+ ]
541
+ },
542
+ {
543
+ "cell_type": "code",
544
+ "execution_count": 13,
545
+ "metadata": {
546
+ "id": "SlJ24AUafoDB"
547
+ },
548
+ "outputs": [],
549
+ "source": [
550
+ "sales_data = []\n",
551
+ "for _, row in df_books.iterrows():\n",
552
+ " records = generate_sales_profile(row[\"sentiment_label\"])\n",
553
+ " for month, units in records:\n",
554
+ " sales_data.append({\n",
555
+ " \"title\": row[\"title\"],\n",
556
+ " \"month\": month,\n",
557
+ " \"units_sold\": units,\n",
558
+ " \"sentiment_label\": row[\"sentiment_label\"]\n",
559
+ " })"
560
+ ]
561
+ },
562
+ {
563
+ "cell_type": "markdown",
564
+ "metadata": {
565
+ "id": "4IXZKcCSgxnq"
566
+ },
567
+ "source": [
568
+ "### *c. ✋🏻🛑⛔️ Create a df_sales DataFrame from sales_data*"
569
+ ]
570
+ },
571
+ {
572
+ "cell_type": "code",
573
+ "execution_count": 14,
574
+ "metadata": {
575
+ "id": "wcN6gtiZg-ws"
576
+ },
577
+ "outputs": [],
578
+ "source": [
+ "# Suggested solution (fill-in cell): build the sales DataFrame\n",
+ "df_sales = pd.DataFrame(sales_data)"
+ ]
579
+ },
580
+ {
581
+ "cell_type": "markdown",
582
+ "metadata": {
583
+ "id": "EhIjz9WohAmZ"
584
+ },
585
+ "source": [
586
+ "### *d. Save df_sales as synthetic_sales_data.csv & view first few lines*"
587
+ ]
588
+ },
589
+ {
590
+ "cell_type": "code",
591
+ "execution_count": 15,
592
+ "metadata": {
593
+ "colab": {
594
+ "base_uri": "https://localhost:8080/"
595
+ },
596
+ "id": "MzbZvLcAhGaH",
597
+ "outputId": "c692bb04-7263-4115-a2ba-c72fe0180722"
598
+ },
599
+ "outputs": [
600
+ {
601
+ "output_type": "stream",
602
+ "name": "stdout",
603
+ "text": [
604
+ " title month units_sold sentiment_label\n",
605
+ "0 A Light in the Attic 2024-08 100 neutral\n",
606
+ "1 A Light in the Attic 2024-09 109 neutral\n",
607
+ "2 A Light in the Attic 2024-10 102 neutral\n",
608
+ "3 A Light in the Attic 2024-11 107 neutral\n",
609
+ "4 A Light in the Attic 2024-12 108 neutral\n"
610
+ ]
611
+ }
612
+ ],
613
+ "source": [
614
+ "df_sales.to_csv(\"synthetic_sales_data.csv\", index=False)\n",
615
+ "\n",
616
+ "print(df_sales.head())"
617
+ ]
618
+ },
619
+ {
620
+ "cell_type": "markdown",
621
+ "metadata": {
622
+ "id": "7g9gqBgQMtJn"
623
+ },
624
+ "source": [
625
+ "## **5.** 🎯 Generate synthetic customer reviews"
626
+ ]
627
+ },
628
+ {
629
+ "cell_type": "markdown",
630
+ "metadata": {
631
+ "id": "Gi4y9M9KuDWx"
632
+ },
633
+ "source": [
634
+ "### *a. ✋🏻🛑⛔️ Ask ChatGPT to create a list of 50 distinct generic book review texts for the sentiment labels \"positive\", \"neutral\", and \"negative\" called synthetic_reviews_by_sentiment*"
635
+ ]
636
+ },
637
+ {
638
+ "cell_type": "code",
639
+ "execution_count": 16,
640
+ "metadata": {
641
+ "id": "b3cd2a50"
642
+ },
643
+ "outputs": [],
644
+ "source": [
645
+ "synthetic_reviews_by_sentiment = {\n",
646
+ " \"positive\": [\n",
647
+ " \"A compelling and heartwarming read that stayed with me long after I finished.\",\n",
648
+ " \"Brilliantly written! The characters were unforgettable and the plot was engaging.\",\n",
649
+ " \"One of the best books I've read this year — inspiring and emotionally rich.\",\n",
650
+ " ],\n",
651
+ " \"neutral\": [\n",
652
+ " \"An average book — not great, but not bad either.\",\n",
653
+ " \"Some parts really stood out, others felt a bit flat.\",\n",
654
+ " \"It was okay overall. A decent way to pass the time.\",\n",
655
+ " ],\n",
656
+ " \"negative\": [\n",
657
+ " \"I struggled to get through this one — it just didn’t grab me.\",\n",
658
+ " \"The plot was confusing and the characters felt underdeveloped.\",\n",
659
+ " \"Disappointing. I had high hopes, but they weren't met.\",\n",
660
+ " ]\n",
661
+ "}\n",
+ "\n",
+ "# NOTE: only 3 sample reviews per label are shown here. Replace each list with\n",
+ "# 50 distinct reviews (e.g., generated with ChatGPT); random.sample(pool, 10)\n",
+ "# in the next step requires a pool of at least 10 reviews per label."
662
+ ]
663
+ },
664
+ {
665
+ "cell_type": "markdown",
666
+ "metadata": {
667
+ "id": "fQhfVaDmuULT"
668
+ },
669
+ "source": [
670
+ "### *b. Generate 10 reviews per book using random sampling from the corresponding 50*"
671
+ ]
672
+ },
673
+ {
674
+ "cell_type": "code",
675
+ "execution_count": 17,
676
+ "metadata": {
677
+ "id": "l2SRc3PjuTGM"
678
+ },
679
+ "outputs": [],
680
+ "source": [
681
+ "review_rows = []\n",
682
+ "for _, row in df_books.iterrows():\n",
683
+ " title = row['title']\n",
684
+ " sentiment_label = row['sentiment_label']\n",
685
+ " review_pool = synthetic_reviews_by_sentiment[sentiment_label]\n",
686
+ " sampled_reviews = random.sample(review_pool, 10)\n",
687
+ " for review_text in sampled_reviews:\n",
688
+ " review_rows.append({\n",
689
+ " \"title\": title,\n",
690
+ " \"sentiment_label\": sentiment_label,\n",
691
+ " \"review_text\": review_text,\n",
692
+ " \"rating\": row['rating'],\n",
693
+ " \"popularity_score\": row['popularity_score']\n",
694
+ " })"
695
+ ]
696
+ },
697
+ {
698
+ "cell_type": "markdown",
699
+ "metadata": {
700
+ "id": "bmJMXF-Bukdm"
701
+ },
702
+ "source": [
703
+ "### *c. Create the final dataframe df_reviews & save it as synthetic_book_reviews.csv*"
704
+ ]
705
+ },
706
+ {
707
+ "cell_type": "code",
708
+ "execution_count": 18,
709
+ "metadata": {
710
+ "id": "ZUKUqZsuumsp"
711
+ },
712
+ "outputs": [],
713
+ "source": [
714
+ "df_reviews = pd.DataFrame(review_rows)\n",
715
+ "df_reviews.to_csv(\"synthetic_book_reviews.csv\", index=False)"
716
+ ]
717
+ },
718
+ {
719
+ "cell_type": "markdown",
720
+ "source": [
721
+ "### *d. Export cleaned inputs for R*"
722
+ ],
723
+ "metadata": {
724
+ "id": "_602pYUS3gY5"
725
+ }
726
+ },
727
+ {
728
+ "cell_type": "code",
729
+ "execution_count": 19,
730
+ "metadata": {
731
+ "colab": {
732
+ "base_uri": "https://localhost:8080/"
733
+ },
734
+ "id": "3946e521",
735
+ "outputId": "514d7bef-0488-4933-b03c-953b9e8a7f66"
736
+ },
737
+ "outputs": [
738
+ {
739
+ "output_type": "stream",
740
+ "name": "stdout",
741
+ "text": [
742
+ "✅ Wrote synthetic_title_level_features.csv\n",
743
+ "✅ Wrote synthetic_monthly_revenue_series.csv\n"
744
+ ]
745
+ }
746
+ ],
747
+ "source": [
748
+ "import numpy as np\n",
749
+ "\n",
750
+ "def _safe_num(s):\n",
751
+ " return pd.to_numeric(\n",
752
+ " pd.Series(s).astype(str).str.replace(r\"[^0-9.]\", \"\", regex=True),\n",
753
+ " errors=\"coerce\"\n",
754
+ " )\n",
755
+ "\n",
756
+ "# --- Clean book metadata (price/rating) ---\n",
757
+ "df_books_r = df_books.copy()\n",
758
+ "if \"price\" in df_books_r.columns:\n",
759
+ " df_books_r[\"price\"] = _safe_num(df_books_r[\"price\"])\n",
760
+ "if \"rating\" in df_books_r.columns:\n",
761
+ " df_books_r[\"rating\"] = _safe_num(df_books_r[\"rating\"])\n",
762
+ "\n",
763
+ "df_books_r[\"title\"] = df_books_r[\"title\"].astype(str).str.strip()\n",
764
+ "\n",
765
+ "# --- Clean sales ---\n",
766
+ "df_sales_r = df_sales.copy()\n",
767
+ "df_sales_r[\"title\"] = df_sales_r[\"title\"].astype(str).str.strip()\n",
768
+ "df_sales_r[\"month\"] = pd.to_datetime(df_sales_r[\"month\"], errors=\"coerce\")\n",
769
+ "df_sales_r[\"units_sold\"] = _safe_num(df_sales_r[\"units_sold\"])\n",
770
+ "\n",
771
+ "# --- Clean reviews ---\n",
772
+ "df_reviews_r = df_reviews.copy()\n",
773
+ "df_reviews_r[\"title\"] = df_reviews_r[\"title\"].astype(str).str.strip()\n",
774
+ "df_reviews_r[\"sentiment_label\"] = df_reviews_r[\"sentiment_label\"].astype(str).str.lower().str.strip()\n",
775
+ "if \"rating\" in df_reviews_r.columns:\n",
776
+ " df_reviews_r[\"rating\"] = _safe_num(df_reviews_r[\"rating\"])\n",
777
+ "if \"popularity_score\" in df_reviews_r.columns:\n",
778
+ " df_reviews_r[\"popularity_score\"] = _safe_num(df_reviews_r[\"popularity_score\"])\n",
779
+ "\n",
780
+ "# --- Sentiment shares per title (from reviews) ---\n",
781
+ "sent_counts = (\n",
782
+ " df_reviews_r.groupby([\"title\", \"sentiment_label\"])\n",
783
+ " .size()\n",
784
+ " .unstack(fill_value=0)\n",
785
+ ")\n",
786
+ "for lab in [\"positive\", \"neutral\", \"negative\"]:\n",
787
+ " if lab not in sent_counts.columns:\n",
788
+ " sent_counts[lab] = 0\n",
789
+ "\n",
790
+ "sent_counts[\"total_reviews\"] = sent_counts[[\"positive\", \"neutral\", \"negative\"]].sum(axis=1)\n",
791
+ "den = sent_counts[\"total_reviews\"].replace(0, np.nan)\n",
792
+ "sent_counts[\"share_positive\"] = sent_counts[\"positive\"] / den\n",
793
+ "sent_counts[\"share_neutral\"] = sent_counts[\"neutral\"] / den\n",
794
+ "sent_counts[\"share_negative\"] = sent_counts[\"negative\"] / den\n",
795
+ "sent_counts = sent_counts.reset_index()\n",
796
+ "\n",
797
+ "# --- Sales aggregation per title ---\n",
798
+ "sales_by_title = (\n",
799
+ " df_sales_r.dropna(subset=[\"title\"])\n",
800
+ " .groupby(\"title\", as_index=False)\n",
801
+ " .agg(\n",
802
+ " months_observed=(\"month\", \"nunique\"),\n",
803
+ " avg_units_sold=(\"units_sold\", \"mean\"),\n",
804
+ " total_units_sold=(\"units_sold\", \"sum\"),\n",
805
+ " )\n",
806
+ ")\n",
807
+ "\n",
808
+ "# --- Title-level features (join sales + books + sentiment) ---\n",
809
+ "df_title = (\n",
810
+ " sales_by_title\n",
811
+ " .merge(df_books_r[[\"title\", \"price\", \"rating\"]], on=\"title\", how=\"left\")\n",
812
+ " .merge(sent_counts[[\"title\", \"share_positive\", \"share_neutral\", \"share_negative\", \"total_reviews\"]],\n",
813
+ " on=\"title\", how=\"left\")\n",
814
+ ")\n",
815
+ "\n",
816
+ "df_title[\"avg_revenue\"] = df_title[\"avg_units_sold\"] * df_title[\"price\"]\n",
817
+ "df_title[\"total_revenue\"] = df_title[\"total_units_sold\"] * df_title[\"price\"]\n",
818
+ "\n",
819
+ "df_title.to_csv(\"synthetic_title_level_features.csv\", index=False)\n",
820
+ "print(\"✅ Wrote synthetic_title_level_features.csv\")\n",
821
+ "\n",
822
+ "# --- Monthly revenue series (proxy: units_sold * price) ---\n",
823
+ "monthly_rev = (\n",
824
+ " df_sales_r.merge(df_books_r[[\"title\", \"price\"]], on=\"title\", how=\"left\")\n",
825
+ ")\n",
826
+ "monthly_rev[\"revenue\"] = monthly_rev[\"units_sold\"] * monthly_rev[\"price\"]\n",
827
+ "\n",
828
+ "df_monthly = (\n",
829
+ " monthly_rev.dropna(subset=[\"month\"])\n",
830
+ " .groupby(\"month\", as_index=False)[\"revenue\"]\n",
831
+ " .sum()\n",
832
+ " .rename(columns={\"revenue\": \"total_revenue\"})\n",
833
+ " .sort_values(\"month\")\n",
834
+ ")\n",
835
+ "# if revenue is all NA (e.g., missing price), fallback to units_sold as a teaching proxy\n",
836
+ "if df_monthly[\"total_revenue\"].notna().sum() == 0:\n",
837
+ " df_monthly = (\n",
838
+ " df_sales_r.dropna(subset=[\"month\"])\n",
839
+ " .groupby(\"month\", as_index=False)[\"units_sold\"]\n",
840
+ " .sum()\n",
841
+ " .rename(columns={\"units_sold\": \"total_revenue\"})\n",
842
+ " .sort_values(\"month\")\n",
843
+ " )\n",
844
+ "\n",
845
+ "df_monthly[\"month\"] = pd.to_datetime(df_monthly[\"month\"], errors=\"coerce\").dt.strftime(\"%Y-%m-%d\")\n",
846
+ "df_monthly.to_csv(\"synthetic_monthly_revenue_series.csv\", index=False)\n",
847
+ "print(\"✅ Wrote synthetic_monthly_revenue_series.csv\")\n"
848
+ ]
849
+ },
850
+ {
851
+ "cell_type": "markdown",
852
+ "metadata": {
853
+ "id": "RYvGyVfXuo54"
854
+ },
855
+ "source": [
856
+ "### *e. ✋🏻🛑⛔️ View the first few lines*"
857
+ ]
858
+ },
859
+ {
860
+ "cell_type": "code",
861
+ "execution_count": 20,
862
+ "metadata": {
863
+ "colab": {
864
+ "base_uri": "https://localhost:8080/"
865
+ },
866
+ "id": "xfE8NMqOurKo",
867
+ "outputId": "191730ba-d5e2-4df7-97d2-99feb0b704af"
868
+ },
869
+ "outputs": [
870
+ {
871
+ "output_type": "stream",
872
+ "name": "stdout",
873
+ "text": [
874
+ " title sentiment_label \\\n",
875
+ "0 A Light in the Attic neutral \n",
876
+ "1 A Light in the Attic neutral \n",
877
+ "2 A Light in the Attic neutral \n",
878
+ "3 A Light in the Attic neutral \n",
879
+ "4 A Light in the Attic neutral \n",
880
+ "\n",
881
+ " review_text rating popularity_score \n",
882
+ "0 Had potential that went unrealized. Three 3 \n",
883
+ "1 The themes were solid, but not well explored. Three 3 \n",
884
+ "2 It simply lacked that emotional punch. Three 3 \n",
885
+ "3 Serviceable but not something I'd go out of my... Three 3 \n",
886
+ "4 Standard fare with some promise. Three 3 \n"
887
+ ]
888
+ }
889
+ ],
890
+ "source": [
+ "# Suggested solution (fill-in cell)\n",
+ "print(df_reviews.head())"
+ ]
891
+ }
892
+ ],
893
+ "metadata": {
894
+ "colab": {
895
+ "collapsed_sections": [
896
+ "jpASMyIQMaAq",
897
+ "lquNYCbfL9IM",
898
+ "0IWuNpxxYDJF",
899
+ "oCdTsin2Yfp3",
900
+ "T0TOeRC4Yrnn",
901
+ "duI5dv3CZYvF",
902
+ "qMjRKMBQZlJi",
903
+ "p-1Pr2szaqLk",
904
+ "SIaJUGIpaH4V",
905
+ "pY4yCoIuaQqp",
906
+ "n4-TaNTFgPak",
907
+ "HnngRNTgacYt",
908
+ "HF9F9HIzgT7Z",
909
+ "T8AdKkmASq9a",
910
+ "OhXbdGD5fH0c",
911
+ "L2ak1HlcgoTe",
912
+ "4IXZKcCSgxnq",
913
+ "EhIjz9WohAmZ",
914
+ "Gi4y9M9KuDWx",
915
+ "fQhfVaDmuULT",
916
+ "bmJMXF-Bukdm",
917
+ "RYvGyVfXuo54"
918
+ ],
919
+ "provenance": []
920
+ },
921
+ "kernelspec": {
922
+ "display_name": "Python 3",
923
+ "name": "python3"
924
+ },
925
+ "language_info": {
926
+ "name": "python"
927
+ }
928
+ },
929
+ "nbformat": 4,
930
+ "nbformat_minor": 0
931
+ }
pythonanalysis.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
requirements.txt ADDED
@@ -0,0 +1,21 @@
1
+ gradio>=5.0.0,<6.0.0
2
+ papermill>=2.6.0
3
+ pandas>=2.2.0
4
+ numpy>=1.26.0
5
+ matplotlib>=3.8.0
6
+ seaborn>=0.13.0
7
+ vaderSentiment>=3.3.2
8
+ statsmodels>=0.14.0
9
+ scikit-learn>=1.4.0
10
+ beautifulsoup4>=4.12.0
11
+ requests>=2.31.0
12
+ textblob>=0.18.0
13
+ huggingface_hub>=0.23.0
14
+ plotly>=5.22.0
15
+ faker>=28.0.0
16
+ openpyxl>=3.1.0
17
+ ipykernel>=6.29.0
18
+ nbformat>=5.10.0
19
+ nbclient>=0.10.0
20
+ jupyter_client>=8.6.0
21
+ jupyter_core>=5.7.0
style.css ADDED
@@ -0,0 +1,374 @@
1
+ /* ============================================================
2
+ ESCP Business School — AI for Business Decision Making
3
+ Gradio 5.x Custom Theme | Glass-Morphism Aurora
4
+ ============================================================ */
5
+
6
+ /* ---------- design tokens ---------- */
7
+ :root {
8
+ --bg: #f0ecff;
9
+ --bg-card: rgba(255, 255, 255, 0.72);
10
+ --lavender: #c5b4f0;
11
+ --lavender-mid: #a48de8;
12
+ --violet: #7c5cbf;
13
+ --violet-deep: #4b2d8a;
14
+ --mint: #6ee7c7;
15
+ --blush: #ffb3c8;
16
+ --red: #ff6b8a;
17
+ --text: #2d1f4e;
18
+ --text-mid: #6b5b8e;
19
+ --text-muted: #9d8fc4;
20
+ --font-sans: -apple-system, BlinkMacSystemFont, 'Segoe UI', Roboto, sans-serif;
21
+ --font-mono: 'SF Mono', 'Cascadia Code', Consolas, 'Liberation Mono', monospace;
22
+ --radius-sm: 10px;
23
+ --radius-md: 16px;
24
+ --radius-lg: 20px;
25
+ --radius-pill: 50px;
26
+ --shadow-card: 0 4px 24px rgba(75, 45, 138, 0.08);
27
+ --shadow-hover: 0 8px 32px rgba(75, 45, 138, 0.14);
28
+ --transition: 0.2s ease;
29
+ }
30
+
31
+ /* ---------- aurora background ---------- */
32
+ body, .gradio-container {
33
+ background: var(--bg) !important;
34
+ font-family: var(--font-sans) !important;
35
+ color: var(--text) !important;
36
+ }
37
+
38
+ .gradio-container::before {
39
+ content: '';
40
+ position: fixed;
41
+ inset: 0;
42
+ z-index: -1;
43
+ background:
44
+ radial-gradient(ellipse 60% 50% at 15% 20%, rgba(197, 180, 240, 0.55) 0%, transparent 70%),
45
+ radial-gradient(ellipse 50% 45% at 80% 15%, rgba(160, 200, 255, 0.45) 0%, transparent 65%),
46
+ radial-gradient(ellipse 40% 40% at 70% 75%, rgba(110, 231, 199, 0.35) 0%, transparent 60%),
47
+ radial-gradient(ellipse 35% 35% at 25% 80%, rgba(255, 179, 200, 0.30) 0%, transparent 60%),
48
+ var(--bg);
49
+ pointer-events: none;
50
+ }
51
+
52
+ /* ---------- container ---------- */
53
+ .gradio-container > .main {
54
+ max-width: 1520px !important;
55
+ margin: 0 auto !important;
56
+ padding: 1.5rem !important;
57
+ }
58
+
59
+ /* ---------- animations ---------- */
60
+ @keyframes popIn {
61
+ 0% { opacity: 0; transform: scale(0.92) translateY(12px); }
62
+ 100% { opacity: 1; transform: scale(1) translateY(0); }
63
+ }
64
+
65
+ @keyframes shimmerSlide {
66
+ 0% { background-position: -200% center; }
67
+ 100% { background-position: 200% center; }
68
+ }
69
+
70
/* ---------- cards / panels ---------- */
.panel, .block, .form, .gradio-group,
.gradio-accordion, .gradio-tabitem {
  background: var(--bg-card) !important;
  backdrop-filter: blur(18px) !important;
  -webkit-backdrop-filter: blur(18px) !important;
  border: 1px solid rgba(197, 180, 240, 0.30) !important;
  border-radius: var(--radius-lg) !important;
  box-shadow: var(--shadow-card) !important;
  animation: popIn 0.4s ease both;
}

/* ---------- header accent stripe ---------- */
.gradio-container > .main > *:first-child::before {
  content: '';
  display: block;
  height: 4px;
  border-radius: 2px;
  margin-bottom: 1rem;
  background: linear-gradient(90deg, var(--violet-deep), var(--lavender-mid), var(--mint), var(--blush));
  background-size: 200% 100%;
  animation: shimmerSlide 4s linear infinite;
}

/* ---------- tabs ---------- */
.tabs > .tab-nav {
  background: var(--bg-card) !important;
  backdrop-filter: blur(14px) !important;
  -webkit-backdrop-filter: blur(14px) !important;
  border-radius: var(--radius-pill) !important;
  padding: 5px !important;
  gap: 4px !important;
  border: 1px solid rgba(197, 180, 240, 0.25) !important;
  box-shadow: var(--shadow-card) !important;
}

.tabs > .tab-nav > button {
  border: none !important;
  border-radius: var(--radius-pill) !important;
  padding: 8px 22px !important;
  font-weight: 600 !important;
  font-size: 0.88rem !important;
  color: var(--text-mid) !important;
  background: transparent !important;
  transition: all var(--transition) !important;
  letter-spacing: 0.3px;
}

.tabs > .tab-nav > button:hover {
  background: rgba(197, 180, 240, 0.22) !important;
  color: var(--violet) !important;
}

.tabs > .tab-nav > button.selected {
  background: linear-gradient(135deg, var(--violet), var(--violet-deep)) !important;
  color: #fff !important;
  box-shadow: 0 2px 12px rgba(124, 92, 191, 0.35) !important;
}

/* ---------- buttons: primary ---------- */
.primary, button.primary,
.gr-button-primary, .gr-button.primary {
  background: linear-gradient(135deg, var(--violet), var(--violet-deep)) !important;
  color: #fff !important;
  border: none !important;
  border-radius: var(--radius-pill) !important;
  padding: 10px 28px !important;
  font-weight: 600 !important;
  box-shadow: 0 3px 14px rgba(124, 92, 191, 0.30) !important;
  transition: all var(--transition) !important;
}

.primary:hover, button.primary:hover,
.gr-button-primary:hover, .gr-button.primary:hover {
  transform: translateY(-2px) !important;
  box-shadow: var(--shadow-hover) !important;
  filter: brightness(1.06);
}

/* ---------- buttons: secondary ---------- */
.secondary, button.secondary,
.gr-button-secondary, .gr-button.secondary {
  background: var(--bg-card) !important;
  backdrop-filter: blur(10px) !important;
  color: var(--violet) !important;
  border: 1.5px solid var(--lavender) !important;
  border-radius: var(--radius-pill) !important;
  padding: 10px 28px !important;
  font-weight: 600 !important;
  transition: all var(--transition) !important;
}

.secondary:hover, button.secondary:hover,
.gr-button-secondary:hover, .gr-button.secondary:hover {
  background: rgba(197, 180, 240, 0.18) !important;
  border-color: var(--lavender-mid) !important;
  transform: translateY(-1px) !important;
}

/* ---------- inputs / textareas ---------- */
input[type="text"], input[type="number"], input[type="password"],
textarea, .gr-input, .gr-text-input, select {
  background: var(--bg-card) !important;
  backdrop-filter: blur(8px) !important;
  border: 1.5px solid var(--lavender) !important;
  border-radius: var(--radius-sm) !important;
  color: var(--text) !important;
  font-family: var(--font-sans) !important;
  transition: all var(--transition) !important;
}

input:focus, textarea:focus, select:focus,
.gr-input:focus, .gr-text-input:focus {
  outline: none !important;
  border-color: var(--lavender-mid) !important;
  box-shadow: 0 0 0 3px rgba(164, 141, 232, 0.25) !important;
}

/* ---------- pipeline log (terminal) ---------- */
.pipeline-log, .log-output, [class*="log"],
textarea[data-testid="textbox"].prose {
  background: #1a0e2e !important;
  color: var(--lavender) !important;
  font-family: var(--font-mono) !important;
  font-size: 0.82rem !important;
  line-height: 1.65 !important;
  border-radius: var(--radius-md) !important;
  padding: 1rem !important;
  border: 1px solid rgba(197, 180, 240, 0.15) !important;
  max-height: 360px !important;
  overflow-y: auto !important;
}

/* ---------- chatbot ---------- */
.chatbot .message-row .user {
  background: linear-gradient(135deg, rgba(197, 180, 240, 0.28), rgba(197, 180, 240, 0.12)) !important;
  border: 1px solid rgba(197, 180, 240, 0.35) !important;
  border-radius: var(--radius-md) var(--radius-md) 4px var(--radius-md) !important;
  color: var(--text) !important;
}

.chatbot .message-row .bot {
  background: var(--bg-card) !important;
  border: 1px solid rgba(197, 180, 240, 0.18) !important;
  border-radius: var(--radius-md) var(--radius-md) var(--radius-md) 4px !important;
  box-shadow: 0 2px 10px rgba(75, 45, 138, 0.06) !important;
  color: var(--text) !important;
}

/* ---------- section labels ---------- */
.gr-block-label, .gr-input-label, label span,
.label-wrap > span {
  text-transform: uppercase !important;
  letter-spacing: 2.5px !important;
  font-size: 0.72rem !important;
  font-weight: 700 !important;
  color: var(--violet) !important;
}

.gr-block-label::after, .label-wrap > span::after {
  content: '';
  display: block;
  margin-top: 6px;
  height: 2px;
  width: 48px;
  border-radius: 1px;
  background: linear-gradient(90deg, var(--violet), var(--lavender), transparent);
}

/* ---------- scrollbars ---------- */
* {
  scrollbar-width: thin;
  scrollbar-color: var(--lavender) transparent;
}

::-webkit-scrollbar {
  width: 6px;
  height: 6px;
}

::-webkit-scrollbar-track {
  background: transparent;
}

::-webkit-scrollbar-thumb {
  background: linear-gradient(180deg, var(--lavender), var(--mint));
  border-radius: 3px;
}

/* ---------- plotly charts ---------- */
.js-plotly-plot .plot-container,
.js-plotly-plot .main-svg {
  background: transparent !important;
}

/* ---------- slider ---------- */
input[type="range"] {
  accent-color: var(--violet) !important;
}

/* ---------- dropdown ---------- */
.gr-dropdown, .dropdown-content {
  background: var(--bg-card) !important;
  backdrop-filter: blur(12px) !important;
  border: 1px solid var(--lavender) !important;
  border-radius: var(--radius-sm) !important;
}

/* ---------- status colors ---------- */
.gr-status-success, .success { color: var(--mint) !important; }
.gr-status-warning, .warning { color: var(--blush) !important; }
.gr-status-error, .error { color: var(--red) !important; }

/* ---------- dataframe / table ---------- */
.dataframe, .gr-dataframe {
  border-radius: var(--radius-md) !important;
  overflow: hidden !important;
}

.dataframe th {
  background: linear-gradient(135deg, var(--violet), var(--violet-deep)) !important;
  color: #fff !important;
  font-weight: 600 !important;
  text-transform: uppercase !important;
  letter-spacing: 1px !important;
  font-size: 0.78rem !important;
}

.dataframe td {
  border-color: rgba(197, 180, 240, 0.18) !important;
}

.dataframe tr:hover td {
  background: rgba(197, 180, 240, 0.10) !important;
}

/* ---------- dark mode overrides (Gradio toggle) ---------- */
.dark body, .dark .gradio-container {
  --bg: #12091f;
  --bg-card: rgba(30, 18, 52, 0.78);
  --text: #e8e0f6;
  --text-mid: #b8a8d8;
  --text-muted: #7a6a9e;
  background: var(--bg) !important;
  color: var(--text) !important;
}

.dark .gradio-container::before {
  background:
    radial-gradient(ellipse 60% 50% at 15% 20%, rgba(124, 92, 191, 0.25) 0%, transparent 70%),
    radial-gradient(ellipse 50% 45% at 80% 15%, rgba(80, 120, 200, 0.20) 0%, transparent 65%),
    radial-gradient(ellipse 40% 40% at 70% 75%, rgba(110, 231, 199, 0.12) 0%, transparent 60%),
    radial-gradient(ellipse 35% 35% at 25% 80%, rgba(255, 179, 200, 0.10) 0%, transparent 60%),
    var(--bg) !important;
}

.dark input[type="text"], .dark input[type="number"],
.dark textarea, .dark select {
  background: rgba(30, 18, 52, 0.65) !important;
  color: var(--text) !important;
}

/* ---------- responsive ---------- */
@media (max-width: 768px) {
  .gradio-container > .main {
    padding: 0.75rem !important;
  }

  .tabs > .tab-nav {
    flex-wrap: wrap !important;
  }

  .tabs > .tab-nav > button {
    padding: 6px 14px !important;
    font-size: 0.8rem !important;
  }

  .primary, button.primary,
  .secondary, button.secondary {
    padding: 8px 18px !important;
    font-size: 0.85rem !important;
  }
}

@media (max-width: 480px) {
  .gradio-container > .main {
    padding: 0.5rem !important;
    max-width: 100% !important;
  }

  .panel, .block, .form, .gradio-group {
    border-radius: var(--radius-md) !important;
    padding: 0.75rem !important;
  }
}

/* ---------- smooth transitions everywhere ---------- */
*, *::before, *::after {
  transition-property: background, border-color, box-shadow, color, opacity, transform;
  transition-duration: 0.2s;
  transition-timing-function: ease;
}

/* keep keyframe start/end states applied so cards don't flash before/after popIn */
.gradio-container * { animation-fill-mode: both; }