Initial commit

Files changed:

- .gitattributes (+1 -0)
- Decision_Kernel_Lite__Choosing_Under_Uncertainty.pptx (+3 -0)
- Dockerfile (+31 -0)
- Readme.md (+210 -0)
- app.py (+287 -0)
- docs/Appendix.md (+190 -0)
- docs/Executive_brief.md (+178 -0)
- docs/Technical_Brief.md (+238 -0)
- requirements.txt (+3 -0)
.gitattributes CHANGED

@@ -33,3 +33,4 @@ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
 *.zip filter=lfs diff=lfs merge=lfs -text
 *.zst filter=lfs diff=lfs merge=lfs -text
 *tfevents* filter=lfs diff=lfs merge=lfs -text
+Decision_Kernel_Lite__Choosing_Under_Uncertainty.pptx filter=lfs diff=lfs merge=lfs -text
Decision_Kernel_Lite__Choosing_Under_Uncertainty.pptx ADDED

@@ -0,0 +1,3 @@
version https://git-lfs.github.com/spec/v1
oid sha256:e599b2e2b6d56ffdd6f8ce62834a497ac10c870cc9efa4f8df07d593ebb39f03
size 2503143
Dockerfile ADDED

@@ -0,0 +1,31 @@
# ---- Base image ----
FROM python:3.11-slim

# ---- Environment ----
ENV PYTHONDONTWRITEBYTECODE=1
ENV PYTHONUNBUFFERED=1

# ---- System deps (minimal) ----
RUN apt-get update && apt-get install -y \
    build-essential \
    && rm -rf /var/lib/apt/lists/*

# ---- Workdir ----
WORKDIR /app

# ---- Install Python deps ----
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# ---- Copy app ----
COPY . .

# ---- Expose Streamlit port ----
EXPOSE 7860

# ---- Streamlit config ----
ENV STREAMLIT_SERVER_PORT=7860
ENV STREAMLIT_SERVER_ADDRESS=0.0.0.0

# ---- Run ----
CMD ["streamlit", "run", "app.py"]
Readme.md ADDED

@@ -0,0 +1,210 @@
# **Decision Kernel Lite — Choosing Under Uncertainty**

A minimal, reproducible system for making **defensible decisions under uncertainty**, using three complementary risk lenses:

* **Expected Loss**
* **Minimax Regret**
* **CVaR (Conditional Value at Risk)**

The system collapses scenarios, probabilities, and asymmetric losses into **one deployable decision** with an explicit rationale.

---

## **What Problem This Solves**

Most business decisions fail not because of bad models, but because:

* probabilities are uncertain or disputed
* downside risk is asymmetric
* decisions are justified with intuition instead of structure

Decision Kernel Lite provides a **decision-first abstraction** that makes trade-offs explicit and auditable.

It does **not** predict.
It does **not** optimize operations.
It **chooses actions**.

---

## **Core Concept**

A decision is defined by four primitives:

```text
Actions × Scenarios × Probabilities × Losses
```

From these, the kernel evaluates actions using three lenses:

| Lens           | Optimizes for           |
| -------------- | ----------------------- |
| Expected Loss  | Average pain            |
| Minimax Regret | Hindsight defensibility |
| CVaR           | Tail-risk protection    |

The output is a **Decision Card** — not a dashboard.

---
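The first two lenses reduce to a few lines of NumPy. As a standalone sketch (not part of the app; it reuses the loss matrix that ships as the app's default), Expected Loss and Minimax Regret over an actions × scenarios loss matrix:

```python
import numpy as np

# Default example from the app: rows = actions (A1, A2, A3),
# columns = scenarios (Low, Medium, High).
loss = np.array([[10.0, 5.0, 1.0],
                 [6.0, 4.0, 6.0],
                 [2.0, 6.0, 12.0]])
probs = np.array([0.3, 0.4, 0.3])

# Expected Loss: probability-weighted average loss per action.
expected = loss @ probs                        # [5.3, 5.2, 6.6]

# Minimax Regret: per-scenario shortfall vs. the best action in that
# scenario, then the worst such shortfall per action.
regret = loss - loss.min(axis=0, keepdims=True)
max_regret = regret.max(axis=1)                # [8., 5., 11.]

for name, scores in [("Expected Loss", expected), ("Max Regret", max_regret)]:
    best = int(np.argmin(scores))
    print(f"{name}: best action = A{best + 1} ({scores[best]:.1f})")
```

Here both lenses happen to agree on A2; they need not in general.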
## **What This Repository Provides**

This repository includes:

* a pure **decision kernel** (no ML, no forecasting)
* three mathematically sound decision rules
* a **Streamlit UI** for rapid input → decision
* an explicit **rule-selection heuristic**
* a copy/paste **Decision Card** suitable for exec decks or memos

This is not analytics.
It is **decision intelligence**.

---

## **Decision Rules — When to Use What**

### **Expected Loss (Risk-Neutral)**

Use when:

* decisions repeat frequently
* probabilities are reasonably trusted
* variance is acceptable

Optimizes:

* long-run average outcomes

---

### **Minimax Regret (Robust / Political Safety)**

Use when:

* probabilities are unreliable or contested
* decisions are one-shot or high-accountability
* post-hoc defensibility matters

Optimizes:

* “I should not regret this choice”

---

### **CVaR (Tail-Risk Protection)**

Use when:

* rare bad outcomes are unacceptable
* downside is asymmetric (ruin, safety, bankruptcy)
* survival > average performance

Optimizes:

* average loss in the worst cases

---

## **Heuristic Rule Recommendation**

The system includes a simple, transparent heuristic:

* if tail risk dominates average risk → **recommend CVaR**
* otherwise → **recommend Expected Loss**

The recommendation is **advisory only** and can be overridden.

Governance is preserved.

---
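A minimal sketch of that heuristic, assuming the best per-action Expected Loss and CVaR scores have already been computed (the function name is illustrative; the 1.5 threshold mirrors the one used in app.py):

```python
def recommend_rule(best_expected_loss: float, best_cvar: float,
                   threshold: float = 1.5) -> str:
    """Advisory only: suggest CVaR when the best achievable tail loss
    is materially worse than the best achievable average loss."""
    tail_ratio = best_cvar / max(best_expected_loss, 1e-9)
    return "CVaR" if tail_ratio >= threshold else "Expected Loss"

print(recommend_rule(best_expected_loss=5.2, best_cvar=6.0))  # ratio ≈ 1.15 → Expected Loss
print(recommend_rule(best_expected_loss=1.0, best_cvar=2.0))  # ratio = 2.00 → CVaR
```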
## **Repository Structure**

```text
decision_kernel_lite/
├── app.py                    → Streamlit application
├── requirements.txt          → minimal dependencies
├── Dockerfile                → containerized deployment
├── Readme.md                 → this file
├── Decision_Kernel_Lite__Choosing_Under_Uncertainty.pptx → overview deck
└── docs/
    ├── Executive_brief.md    → executive narrative
    ├── Technical_Brief.md    → math + implementation
    └── Appendix.md           → worked examples + edge cases
```

---

## **How to Run**

### Local

```bash
pip install -r requirements.txt
streamlit run app.py
```

### Docker

```bash
docker build -t decision-kernel-lite .
docker run -p 7860:7860 decision-kernel-lite
```

Open: `http://localhost:7860`

---

## **Deployment**

Works on:

* **Hugging Face Spaces (Docker SDK)**
* local Docker
* any environment that supports Streamlit

No external services required.

---

## **What This Is Not**

Decision Kernel Lite deliberately excludes:

* forecasting models
* machine learning
* optimization solvers
* domain-specific logic

Those belong **upstream or downstream**.

This kernel is intentionally **domain-agnostic**.

---

## **Positioning**

Decision Kernel Lite is designed to be:

* embedded downstream of forecasts
* embedded upstream of optimization
* used standalone for high-stakes choices

It is the **decision layer** in a larger Decision Intelligence stack.

---

## **Summary**

This system delivers:

1. a **clear action recommendation**
2. multiple **risk-aware justifications**
3. explicit trade-offs between lenses
4. a governance-ready Decision Card
5. a deployable, minimal interface

> Decisions are not predictions.
> They are commitments under uncertainty.

---
app.py ADDED

@@ -0,0 +1,287 @@
import numpy as np
import pandas as pd
import streamlit as st


# -----------------------
# Core math
# -----------------------
def normalize_probs(p: np.ndarray) -> np.ndarray:
    p = np.asarray(p, dtype=float)
    p = np.clip(p, 0.0, None)
    s = float(p.sum())
    if s <= 0:
        return np.ones_like(p) / len(p)
    return p / s


def expected_loss(loss: np.ndarray, p: np.ndarray) -> np.ndarray:
    # loss: (A, S), p: (S,)
    return loss @ p


def regret_matrix(loss: np.ndarray) -> np.ndarray:
    # regret[a, s] = loss[a, s] - min_a loss[a, s]
    return loss - loss.min(axis=0, keepdims=True)


def max_regret(regret: np.ndarray) -> np.ndarray:
    return regret.max(axis=1)


def cvar_discrete(losses: np.ndarray, probs: np.ndarray, alpha: float = 0.8) -> float:
    """
    CVaR-alpha for discrete outcomes:
    - Sort by loss ascending
    - Find tail mass beyond alpha (i.e., worst 1-alpha probability)
    - Return probability-weighted average loss over the tail
    """
    alpha = float(alpha)
    alpha = min(max(alpha, 0.0), 1.0)

    order = np.argsort(losses)
    l = np.asarray(losses, dtype=float)[order]
    p = np.asarray(probs, dtype=float)[order]

    p = normalize_probs(p)
    cum = np.cumsum(p)

    # tail = outcomes with cum_prob >= alpha
    tail = cum >= alpha
    if not np.any(tail):
        # alpha == 1 with numerical edge cases; take worst outcome
        tail[-1] = True

    tail_p = p[tail].sum()
    if tail_p <= 0:
        return float(l[-1])

    return float((l[tail] * p[tail]).sum() / tail_p)


def cvar_per_action(loss: np.ndarray, p: np.ndarray, alpha: float) -> np.ndarray:
    return np.array(
        [cvar_discrete(loss[i, :], p, alpha=alpha) for i in range(loss.shape[0])],
        dtype=float,
    )


# -----------------------
# UI
# -----------------------
st.set_page_config(page_title="Decision Kernel Lite", layout="wide")
st.markdown(
    """
<style>
section[data-testid="stSidebar"] {
    width: 420px !important;
}
section[data-testid="stSidebar"] > div {
    width: 420px !important;
}
</style>
    """,
    unsafe_allow_html=True,
)

st.title("Decision Kernel Lite")
st.caption("One output: choose an action under uncertainty. Three lenses: Expected Loss, Regret, CVaR.")

# Defaults
default_actions = ["A1", "A2", "A3"]
default_scenarios = ["Low", "Medium", "High"]
default_probs = [0.3, 0.4, 0.3]
default_loss = np.array([[10, 5, 1], [6, 4, 6], [2, 6, 12]])

st.sidebar.header("Controls")
alpha = st.sidebar.slider("CVaR alpha (tail threshold)", 0.50, 0.99, 0.80, 0.01)
tie_policy = st.sidebar.selectbox("Tie policy", ["First", "Show all"], index=1)

st.sidebar.header("Decision rule")
primary_rule = st.sidebar.radio("Choose action by", ["Expected Loss", "Minimax Regret", "CVaR"], index=0)

# Editable inputs
left, right = st.columns([1.2, 1])

with left:
    st.subheader("1) Define scenarios + probabilities")
    scen_df = pd.DataFrame({"Scenario": default_scenarios, "Probability": default_probs})
    scen_df = st.data_editor(scen_df, num_rows="dynamic", use_container_width=True)

    # clean scenarios/probs
    scen_df = scen_df.dropna(subset=["Scenario"]).copy()
    scen_df["Scenario"] = scen_df["Scenario"].astype(str).str.strip()
    scen_df = scen_df[scen_df["Scenario"] != ""]
    if scen_df.empty:
        st.error("Add at least one scenario.")
        st.stop()

    scenarios = scen_df["Scenario"].tolist()
    probs_raw = scen_df["Probability"].fillna(0.0).astype(float).to_numpy()
    probs = normalize_probs(probs_raw)

    if not np.isclose(probs_raw.sum(), 1.0):
        st.info(f"Probabilities normalized to sum to 1.0 (raw sum was {probs_raw.sum():.3f}).")

with right:
    st.subheader("2) Define actions + losses")
    # loss table editor
    loss_df = pd.DataFrame(default_loss, index=default_actions, columns=default_scenarios)

    # If user changed scenarios count, reindex to match
    # Start from current editor state if available by reconstructing using scenarios
    loss_df = loss_df.reindex(columns=scenarios)
    for c in scenarios:
        if c not in loss_df.columns:
            loss_df[c] = 0.0
    loss_df = loss_df[scenarios]

    loss_df = st.data_editor(
        loss_df.reset_index().rename(columns={"index": "Action"}),
        num_rows="dynamic",
        use_container_width=True,
    )

    loss_df = loss_df.dropna(subset=["Action"]).copy()
    loss_df["Action"] = loss_df["Action"].astype(str).str.strip()
    loss_df = loss_df[loss_df["Action"] != ""]
    if loss_df.empty:
        st.error("Add at least one action.")
        st.stop()

    actions = loss_df["Action"].tolist()
    loss_vals = loss_df.drop(columns=["Action"]).fillna(0.0).astype(float).to_numpy()

# Compute
loss_mat = loss_vals  # shape (A, S)
A, S = loss_mat.shape

exp = expected_loss(loss_mat, probs)
reg = regret_matrix(loss_mat)
mxr = max_regret(reg)
cvar = cvar_per_action(loss_mat, probs, alpha=alpha)

results = pd.DataFrame(
    {
        "Expected Loss": exp,
        "Max Regret": mxr,
        f"CVaR@{alpha:.2f}": cvar,
    },
    index=actions,
)

# -----------------------
# Heuristic recommendation (rule suggestion)
# -----------------------
# Minimal heuristic: if tail risk is materially worse than average, recommend CVaR;
# if probabilities are weak/unknown, recommend Minimax Regret; otherwise Expected Loss.

tail_ratio = float(results[f"CVaR@{alpha:.2f}"].min() / max(results["Expected Loss"].min(), 1e-9))

if tail_ratio >= 1.5:
    rule_reco = "CVaR"
    rule_reason = f"Tail risk dominates average (best CVaR / best Expected Loss = {tail_ratio:.2f})."
else:
    rule_reco = "Expected Loss"
    rule_reason = f"Tail risk is not extreme (ratio = {tail_ratio:.2f}); average-optimal is defensible."

# Let user override the heuristic explicitly (keeps governance clean)
use_rule_reco = st.sidebar.checkbox("Use recommended rule (heuristic)", value=False)
if use_rule_reco:
    primary_rule = rule_reco


# Choose by rule
if primary_rule == "Expected Loss":
    metric = results["Expected Loss"]
    best_val = metric.min()
    best_actions = metric[metric == best_val].index.tolist()
elif primary_rule == "Minimax Regret":
    metric = results["Max Regret"]
    best_val = metric.min()
    best_actions = metric[metric == best_val].index.tolist()
else:
    col = f"CVaR@{alpha:.2f}"
    metric = results[col]
    best_val = metric.min()
    best_actions = metric[metric == best_val].index.tolist()

chosen = best_actions[0] if tie_policy == "First" else ", ".join(best_actions)
st.sidebar.header("Rule guidance (when to use what)")

st.sidebar.markdown(
    """
**Expected Loss (risk-neutral)**
- Use when decisions repeat frequently and you can tolerate variance.
- Use when probabilities are reasonably trusted.
- Optimizes *average* pain.

**Minimax Regret (robust to bad probability estimates)**
- Use when probabilities are unreliable or politically contested.
- Use for one-shot / high-accountability decisions.
- Minimizes “I should have done X” exposure.

**CVaR (tail-risk protection)**
- Use when rare bad outcomes are unacceptable (ruin / safety / bankruptcy).
- Use when downside is asymmetric and must be bounded.
- Optimizes the *average of worst cases* (tail), not the average overall.
    """
)

# Layout output
st.divider()
topL, topR = st.columns([2, 1], vertical_alignment="center")
with topL:
    st.subheader("Decision")
    st.markdown(f"### Choose **{chosen}**")
    st.caption(f"Primary rule: **{primary_rule}**")
with topR:
    st.metric("Scenarios", S)
    st.metric("Actions", A)

st.subheader("Evidence table")
st.dataframe(results.style.format("{:.3f}"), use_container_width=True)

st.subheader("Regret table (per action × scenario)")
reg_df = pd.DataFrame(reg, index=actions, columns=scenarios)
st.dataframe(reg_df.style.format("{:.3f}"), use_container_width=True)

# Decision card
st.subheader("Decision Card")
st.info(f"Recommended rule (heuristic): **{rule_reco}** — {rule_reason}")

prob_str = ", ".join([f"{s}={p:.2f}" for s, p in zip(scenarios, probs)])

exp_best = results["Expected Loss"].idxmin()
mxr_best = results["Max Regret"].idxmin()
cvar_best = results[f"CVaR@{alpha:.2f}"].idxmin()

st.code(
    f"""DECISION KERNEL LITE — DECISION CARD

Decision:
Choose action {chosen}

Context:
- Actions evaluated: {", ".join(actions)}
- Scenarios considered: {", ".join(scenarios)}
- Probabilities: {prob_str}

Results:
- Expected Loss optimal: {exp_best} ({results.loc[exp_best, "Expected Loss"]:.3f})
- Minimax Regret optimal: {mxr_best} ({results.loc[mxr_best, "Max Regret"]:.3f})
- CVaR@{alpha:.2f} optimal: {cvar_best} ({results.loc[cvar_best, f"CVaR@{alpha:.2f}"]:.3f})

Rule guidance:
- Expected Loss: repeated decisions + trusted probabilities
- Minimax Regret: probabilities unreliable + high accountability
- CVaR: tail-risk unacceptable / ruin protection

Recommended rule (heuristic): {rule_reco} — {rule_reason}

Primary rule used: {primary_rule}
""",
    language="text",
)

with st.expander("Raw inputs"):
    st.write("Probabilities (normalized):", probs)
    st.dataframe(pd.DataFrame(loss_mat, index=actions, columns=scenarios), use_container_width=True)
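The discrete CVaR is the subtlest part of the kernel, so its edge cases deserve a headless check. The sketch below restates `normalize_probs` and `cvar_discrete` from app.py (so it runs standalone, without Streamlit) and exercises the boundary behavior:

```python
import numpy as np

def normalize_probs(p):
    # Clip negatives, then rescale to sum to 1; uniform fallback if all-zero.
    p = np.clip(np.asarray(p, dtype=float), 0.0, None)
    s = float(p.sum())
    return p / s if s > 0 else np.ones_like(p) / len(p)

def cvar_discrete(losses, probs, alpha=0.8):
    # Restated from app.py: tail-average of losses whose cumulative
    # probability (ascending by loss) reaches or exceeds alpha.
    alpha = min(max(float(alpha), 0.0), 1.0)
    order = np.argsort(losses)
    l = np.asarray(losses, dtype=float)[order]
    p = normalize_probs(np.asarray(probs, dtype=float)[order])
    cum = np.cumsum(p)
    tail = cum >= alpha
    if not np.any(tail):
        tail[-1] = True
    tail_p = p[tail].sum()
    return float(l[-1]) if tail_p <= 0 else float((l[tail] * p[tail]).sum() / tail_p)

# alpha = 0: the "tail" is everything, so CVaR equals the expected loss.
print(cvar_discrete([10, 5, 1], [0.3, 0.4, 0.3], alpha=0.0))   # 5.3
# alpha = 0.8 with three scenarios: the tail collapses to the worst outcome.
print(cvar_discrete([10, 5, 1], [0.3, 0.4, 0.3], alpha=0.8))   # 10.0
# Unnormalized weights are normalized internally, so the result is unchanged.
print(cvar_discrete([10, 5, 1], [3, 4, 3], alpha=0.8))         # 10.0
```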
docs/Appendix.md ADDED

@@ -0,0 +1,190 @@
# **Appendix — Decision Kernel Lite**

## **Purpose**

This appendix consolidates **worked examples, edge cases, and interpretation notes** supporting Decision Kernel Lite.

It is intended for:

* reviewers and auditors
* advanced users
* downstream system integrators

This appendix is **reference-only** and complements the Executive and Technical Briefs.

---

## **Appendix A — Worked Example (Baseline Case)**

### **Inputs**

**Actions:** A1, A2, A3
**Scenarios:** Low, Medium, High
**Probabilities:** 0.30, 0.40, 0.30

**Loss Matrix**

| Action | Low | Medium | High |
| -----: | --: | -----: | ---: |
|     A1 |  10 |      5 |    1 |
|     A2 |   6 |      4 |    6 |
|     A3 |   2 |      6 |   12 |

---

### **Expected Loss**

$$
\text{EL}(A1)=5.3,\quad
\text{EL}(A2)=5.2,\quad
\text{EL}(A3)=6.6
$$

**Optimal:** A2

Interpretation: A2 minimizes average loss given the stated probabilities.

---

### **Regret Matrix**

| Action | Low | Medium | High | Max Regret |
| -----: | --: | -----: | ---: | ---------: |
|     A1 |   8 |      1 |    0 |          8 |
|     A2 |   4 |      0 |    5 |          5 |
|     A3 |   0 |      2 |   11 |         11 |

**Optimal (Minimax Regret):** A2

Interpretation: A2 minimizes worst-case hindsight regret.

---

### **CVaR @ 0.8**

With three discrete scenarios, CVaR selects outcomes in the **worst 20% tail**, which collapses to the worst scenario.

| Action | CVaR@0.8 |
| -----: | -------: |
|     A1 |       10 |
|     A2 |        6 |
|     A3 |       12 |

**Optimal (CVaR):** A2

Interpretation: A2 has the lowest average loss conditional on being in the tail.

---

### **Decision Card (Result)**

```
Decision: Choose A2

Rationale:
- Expected Loss optimal
- Minimax Regret optimal
- CVaR optimal
```

All lenses agree. This represents a **fully aligned decision**.

---

## **Appendix B — When Decision Lenses Disagree**

Disagreement between lenses is **expected** and informative.

| Situation                            | Expected Loss | Minimax Regret | CVaR    |
| ------------------------------------ | ------------- | -------------- | ------- |
| Aggressive upside bet                | Favors        | Rejects        | Rejects |
| Conservative safety choice           | Rejects       | Neutral        | Favors  |
| High accountability / political risk | Neutral       | Favors         | Neutral |

**Guidance:**

* Do not average lenses
* Select the rule that matches the risk posture
* Document the choice explicitly

---

## **Appendix C — CVaR in Discrete Scenario Settings**

In small discrete scenario sets:

* CVaR approximates worst-case average
* This behavior is correct by definition

As the number of scenarios increases:

* CVaR becomes smoother
* Tail behavior is better resolved

Decision Kernel Lite intentionally operates in the **discrete regime**.

---

## **Appendix D — Probability Misspecification**

When probabilities are uncertain or contested:

* Expected Loss becomes fragile
* Minimax Regret remains valid
* CVaR protects against catastrophic misestimation

**Operational rule:**
If probabilities are debated → prefer **Regret** or **CVaR**.

---

## **Appendix E — Integration Positioning**

Decision Kernel Lite is designed to sit between analytics and action:

```text
Forecasts → Scenarios → Probabilities → Losses
                    ↓
          Decision Kernel Lite
                    ↓
          Action / Policy / Price
```

It does not replace forecasting or optimization.
It **binds them into a decision**.

---

## **Appendix F — Design Exclusions (Intentional)**

Decision Kernel Lite deliberately excludes:

* forecasting models
* probability estimation
* optimization solvers
* learning or calibration

Rationale:

* forecasting belongs upstream
* optimization belongs downstream
* decision justification belongs here

This separation preserves clarity, auditability, and governance.

---

## **Appendix G — Audit & Governance Notes**

* Deterministic computations
* Explicit assumptions
* No hidden state
* Copy/paste Decision Card output

This makes the kernel suitable for:

* executive review
* governance committees
* post-decision audits

---
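The Appendix A numbers can be reproduced end to end. A standalone check (NumPy only, reimplementing the three lenses rather than importing app.py; the CVaR column uses the worst-scenario shortcut that is exact here, since the 0.2 tail mass is smaller than any single scenario's 0.3 probability):

```python
import numpy as np

loss = np.array([[10.0, 5.0, 1.0],    # A1
                 [6.0, 4.0, 6.0],     # A2
                 [2.0, 6.0, 12.0]])   # A3
probs = np.array([0.3, 0.4, 0.3])

# Expected Loss per action.
expected = loss @ probs

# Regret vs. the best action per scenario, then worst regret per action.
regret = loss - loss.min(axis=0, keepdims=True)
max_regret = regret.max(axis=1)

# CVaR@0.8: with tail mass 0.2 < 0.3, the tail collapses to the
# single worst scenario for each action.
cvar = loss.max(axis=1)

print(np.round(expected, 1))   # [5.3 5.2 6.6]
print(max_regret)              # [ 8.  5. 11.]
print(cvar)                    # [10.  6. 12.]
```

All three vectors reach their minimum at index 1 (A2), matching the fully aligned Decision Card above.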
docs/Executive_brief.md
ADDED
|
@@ -0,0 +1,178 @@
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
---

# **Executive Brief — Decision Kernel Lite**

## **Why This Exists**

Most decisions fail not because of missing data, but because uncertainty is handled informally:

* probabilities are debated, not modeled
* downside risk is underweighted
* justifications are retrospective, not structured

Decision Kernel Lite provides a **simple, auditable mechanism** to choose actions when outcomes are uncertain and costs are asymmetric.

It replaces intuition-driven debate with **explicit trade-offs**.

---

## **What the System Does**

Decision Kernel Lite takes four inputs:

* a set of **actions**
* a set of plausible **scenarios**
* **probabilities** for each scenario
* a **loss matrix** describing consequences

From these, it produces:

* a **single recommended action**
* a clear justification using multiple risk lenses
* a **Decision Card** suitable for executive review

No forecasting.
No optimization.
Only the decision.

---

## **The Three Risk Lenses**

The system evaluates every action using three complementary rules.

### **1. Expected Loss — “What works best on average?”**

* Optimizes the average outcome
* Appropriate when probabilities are trusted
* Best for repeatable decisions

This is the default economic lens.

---

### **2. Minimax Regret — “What will I regret least?”**

* Optimizes post-hoc defensibility
* Appropriate when probabilities are unreliable or disputed
* Best for one-shot, high-accountability decisions

This lens protects decision-makers from hindsight criticism.

---

### **3. CVaR — “How bad are the bad cases?”**

* Focuses on tail risk
* Appropriate when rare failures are unacceptable
* Best for safety, financial ruin, or irreversible outcomes

This lens prioritizes survival over average performance.

---

## **How the Final Decision Is Chosen**

* All three lenses are computed simultaneously
* A **primary decision rule** is selected explicitly
* A heuristic recommendation is provided, but can be overridden

This ensures:

* transparency
* governance
* accountability

There is no “black box” choice.

---

## **What the Output Looks Like**

The system produces a **Decision Card** summarizing:

* the recommended action
* the assumptions used
* how each rule evaluated the options
* why the final rule was chosen

This artifact can be:

* pasted into an executive memo
* included in a slide deck
* stored for audit and review

---

## **What Makes This Different**

Decision Kernel Lite does **not** attempt to:

* predict demand
* estimate probabilities automatically
* optimize operational parameters

Its value lies in **decision clarity**, not model sophistication.

It sits between:

* analytics (what might happen)
* and operations (what to do)

---

## **Typical Use Cases**

* strategic one-off choices
* pricing or investment decisions with asymmetric downside
* contract or supplier selection
* policy or governance decisions
* scenario planning workshops

Any situation where:

> “We must decide, even though we are uncertain.”

---

## **Business Value**

Organizations using structured decision kernels achieve:

* faster decision cycles
* fewer unexamined assumptions
* reduced post-decision conflict
* clearer accountability

Most importantly:

> Decisions become explainable, not just executable.

---

## **Positioning**

Decision Kernel Lite is a **foundational decision layer**.

It can be:

* used standalone
* embedded into larger planning systems
* integrated downstream of forecasting or upstream of optimization

It is deliberately minimal, fast, and domain-agnostic.

---

## **Bottom Line**

Decision Kernel Lite does not promise certainty.

It delivers something more valuable:

> **Clarity about what you are choosing — and why — under uncertainty.**

---
docs/Technical_Brief.md
ADDED
@@ -0,0 +1,238 @@
---

# **Technical Brief — Decision Kernel Lite**

## **Purpose**

This document specifies the **mathematical definitions**, **algorithms**, and **implementation choices** used in Decision Kernel Lite.

The goal is not theoretical novelty, but **correct, transparent, and reproducible decision logic**.

---

## **Formal Problem Definition**

Let:

* Actions: \( a \in \{1,\dots,A\} \)
* Scenarios: \( s \in \{1,\dots,S\} \)
* Scenario probabilities: \( p_s \ge 0,\ \sum_s p_s = 1 \)
* Loss matrix: \( L_{a,s} \in \mathbb{R}_{\ge 0} \)

A decision consists of selecting one action \( a^* \) under uncertainty about which scenario \( s \) will occur.
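This setup maps directly onto NumPy arrays. A minimal sketch, with hypothetical action names and loss values chosen purely for illustration:

```python
import numpy as np

# Hypothetical example: A = 3 actions, S = 3 scenarios.
actions = ["expand", "hold", "exit"]
probabilities = np.array([0.5, 0.3, 0.2])   # p_s, sums to 1

# loss_matrix[a, s] = L_{a,s} >= 0
loss_matrix = np.array([
    [0.0,  40.0, 100.0],   # expand
    [20.0, 20.0,  40.0],   # hold
    [60.0, 30.0,  10.0],   # exit
])

# Expected loss per action (the risk-neutral lens defined below).
expected_loss = loss_matrix @ probabilities   # [32., 24., 41.]
```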
---

## **Decision Lenses**

Decision Kernel Lite evaluates each action using three lenses.

### **1. Expected Loss**

\[
\text{EL}(a) = \sum_{s=1}^{S} p_s \cdot L_{a,s}
\]

**Interpretation**

* Risk-neutral criterion
* Minimizes long-run average loss

**Implementation**

```python
expected_loss = loss_matrix @ probabilities
```
---

### **2. Regret and Minimax Regret**

**Regret matrix**

\[
R_{a,s} = L_{a,s} - \min_{a'} L_{a',s}
\]

**Max regret per action**

\[
\text{MR}(a) = \max_s R_{a,s}
\]

**Decision rule**

\[
a^* = \arg\min_a \text{MR}(a)
\]

**Interpretation**

* Robust to probability misspecification
* Optimizes post-hoc defensibility

**Implementation**

```python
regret = loss_matrix - loss_matrix.min(axis=0, keepdims=True)
max_regret = regret.max(axis=1)
```
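Worked on a small hypothetical loss matrix, the regret computation looks like this (values invented for illustration):

```python
import numpy as np

# Hypothetical 2-action, 2-scenario loss matrix.
loss_matrix = np.array([
    [10.0, 90.0],   # action 0
    [30.0, 40.0],   # action 1
])

# Best achievable loss in each scenario (column-wise minimum): [[10, 40]]
best_per_scenario = loss_matrix.min(axis=0, keepdims=True)

regret = loss_matrix - best_per_scenario   # [[0, 50], [20, 0]]
max_regret = regret.max(axis=1)            # [50, 20]

# Action 1 minimizes worst-case regret.
chosen = int(max_regret.argmin())
```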
---

### **3. CVaR (Conditional Value at Risk)**

CVaR evaluates **average loss in the worst tail of outcomes**.

#### **Procedure (discrete)**

1. Sort losses \( L_{a,s} \) in ascending order
2. Sort probabilities accordingly
3. Compute cumulative probability
4. Select outcomes where cumulative probability ≥ \( \alpha \)
5. Compute probability-weighted average over the tail

\[
\text{CVaR}_\alpha(a) = \mathbb{E}\left[L_{a,s} \mid \text{worst } (1-\alpha) \text{ mass}\right]
\]

**Interpretation**

* Tail-risk-aware
* Penalizes catastrophic downside

**Implementation**

```python
order = np.argsort(losses)
cum = np.cumsum(probs[order])
tail = cum >= alpha
cvar = (losses[order][tail] * probs[order][tail]).sum() / probs[order][tail].sum()
```
---

## **Probability Normalization**

To guarantee numerical and logical safety:

* Negative probabilities are clipped to zero
* Zero-sum vectors default to uniform probabilities
* All probabilities are normalized to sum to 1

```python
p = np.clip(p, 0, None)
p = p / p.sum() if p.sum() > 0 else np.ones_like(p) / len(p)
```

This ensures the kernel **never fails due to malformed inputs**.
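Wrapped in a function (a sketch; the helper name is hypothetical), the three rules behave as follows:

```python
import numpy as np

def normalize(p):
    """Clip negatives, fall back to uniform on zero mass, renormalize to sum 1."""
    p = np.clip(np.asarray(p, dtype=float), 0, None)
    return p / p.sum() if p.sum() > 0 else np.ones_like(p) / len(p)

a = normalize([-1.0, 2.0, 2.0])   # negative clipped → [0.0, 0.5, 0.5]
b = normalize([0.0, 0.0])         # zero mass → uniform [0.5, 0.5]
```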
---

## **Rule Selection Heuristic**

Decision Kernel Lite includes an **advisory heuristic**:

\[
\text{Tail Ratio} = \frac{\min_a \text{CVaR}_\alpha(a)}{\min_a \text{EL}(a)}
\]

* If Tail Ratio ≥ 1.5 → recommend **CVaR**
* Otherwise → recommend **Expected Loss**

**Properties**

* Transparent
* Non-binding
* Overridable by the user

This preserves governance while guiding non-technical users.
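The heuristic reduces to a few lines. A sketch with hypothetical per-action summaries (the function name and threshold parameter are illustrative):

```python
import numpy as np

def recommend_rule(expected_loss, cvar_values, threshold=1.5):
    """Advisory only: prefer CVaR when tail losses dwarf average losses."""
    tail_ratio = cvar_values.min() / expected_loss.min()
    rule = "cvar" if tail_ratio >= threshold else "expected_loss"
    return rule, tail_ratio

# Hypothetical per-action summaries.
el = np.array([24.0, 32.0])
cv = np.array([65.0, 87.5])

rule, ratio = recommend_rule(el, cv)   # ratio = 65/24 ≈ 2.71 → "cvar"
```

The user can still override the recommendation; the heuristic only surfaces a default.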
---

## **Decision Logic Flow**

```text
Inputs
  ↓
Normalize probabilities
  ↓
Compute:
  - Expected Loss
  - Regret matrix
  - Max Regret
  - CVaR
  ↓
Select primary rule
  ↓
Choose action
  ↓
Generate Decision Card
```

All computations are deterministic and stateless.
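The whole flow can be sketched end to end in pure NumPy. This is an illustrative reconstruction under the definitions above, not the actual `app.py` implementation; the function name, return shape, and the `1e-12` guard against a zero expected-loss denominator are assumptions:

```python
import numpy as np

def decide(loss_matrix, probs, alpha=0.9):
    """Normalize probabilities, compute all three lenses, pick a rule and an action."""
    loss_matrix = np.asarray(loss_matrix, dtype=float)

    # Normalize probabilities (clip negatives, fall back to uniform).
    p = np.clip(np.asarray(probs, dtype=float), 0, None)
    p = p / p.sum() if p.sum() > 0 else np.ones_like(p) / len(p)

    expected_loss = loss_matrix @ p
    regret = loss_matrix - loss_matrix.min(axis=0, keepdims=True)
    max_regret = regret.max(axis=1)

    def cvar(row):
        order = np.argsort(row)
        cum = np.cumsum(p[order])
        tail = cum >= alpha
        w = p[order][tail]
        return float((row[order][tail] * w).sum() / w.sum())

    cvar_values = np.array([cvar(row) for row in loss_matrix])

    # Advisory rule selection (overridable by the caller).
    tail_ratio = cvar_values.min() / max(expected_loss.min(), 1e-12)
    rule = "cvar" if tail_ratio >= 1.5 else "expected_loss"
    scores = cvar_values if rule == "cvar" else expected_loss

    return {
        "rule": rule,
        "action": int(scores.argmin()),
        "expected_loss": expected_loss,
        "max_regret": max_regret,
        "cvar": cvar_values,
    }

result = decide([[0.0, 50.0, 200.0], [40.0, 60.0, 80.0]], [0.6, 0.3, 0.1])
```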
---

## **System Architecture**

* **Core kernel**: pure NumPy (testable, embeddable)
* **UI layer**: Streamlit (input + presentation only)
* **Deployment**: Docker / Hugging Face Spaces

There is no persistence, no external API, and no hidden state.

---

## **Design Constraints (Intentional)**

The system deliberately avoids:

* forecasting
* probability estimation
* optimization solvers
* learning or calibration

These belong to **adjacent layers**, not the decision kernel.

---

## **Computational Complexity**

Let \( A \) = number of actions, \( S \) = number of scenarios.

* Expected Loss: \( O(A \cdot S) \)
* Regret: \( O(A \cdot S) \)
* CVaR: \( O(A \cdot S \log S) \)

The system is effectively instantaneous for realistic decision sizes.

---

## **Reproducibility & Auditability**

* Deterministic outputs
* Explicit assumptions
* Full transparency of trade-offs
* Copy/paste Decision Card

This makes the kernel suitable for:

* executive decisions
* governance reviews
* post-decision audits

---

## **Summary**

Decision Kernel Lite implements **classical, defensible decision theory** in a minimal, production-ready form.

It transforms uncertainty from an excuse into a **structured input**.

---
requirements.txt
ADDED
@@ -0,0 +1,3 @@

streamlit>=1.30,<2.0
numpy>=1.23,<2.0
pandas>=1.5,<3.0