# **Technical Brief - Decision Kernel Lite**
## **Purpose**
This document specifies the **mathematical definitions**, **algorithms**, and **implementation choices** used in Decision Kernel Lite.
The goal is not theoretical novelty, but **correct, transparent, and reproducible decision logic**.
---
## **Formal Problem Definition**
Let:
* Actions: $a \in \{1,\dots,A\}$
* Scenarios: $s \in \{1,\dots,S\}$
* Scenario probabilities: $p_s \ge 0,\ \sum_s p_s = 1$
* Loss matrix: $L_{a,s} \in \mathbb{R}_{\ge 0}$

A decision consists of selecting one action $a^*$ under uncertainty about which scenario $s$ will occur.
---
## **Decision Lenses**
Decision Kernel Lite evaluates each action using three lenses.
### **1. Expected Loss**
$$
\text{EL}(a) = \sum_{s=1}^{S} p_s \cdot L_{a,s}
$$
**Interpretation**
* Risk-neutral criterion
* Minimizes long-run average loss
**Implementation**
```python
# loss_matrix has shape (A, S); probabilities has shape (S,)
expected_loss = loss_matrix @ probabilities  # shape (A,), one value per action
```
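As a concrete illustration (the numbers here are hypothetical, not taken from the kernel), encoding two actions over three scenarios and applying the matrix-vector product gives one expected loss per action:

```python
import numpy as np

# Hypothetical 2x3 loss matrix: rows = actions, columns = scenarios.
loss_matrix = np.array([
    [10.0, 40.0, 100.0],   # action 1: cheap usually, catastrophic in scenario 3
    [30.0, 30.0,  35.0],   # action 2: moderate loss everywhere
])
probabilities = np.array([0.5, 0.3, 0.2])  # p_s >= 0, sums to 1

expected_loss = loss_matrix @ probabilities  # -> array([37., 31.])
```

Under these probabilities the risk-neutral lens prefers action 2, even though action 1 is cheaper in the most likely scenario.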
---
### **2. Regret and Minimax Regret**
**Regret matrix**
$$
R_{a,s} = L_{a,s} - \min_{a'} L_{a',s}
$$

**Max regret per action**

$$
\text{MR}(a) = \max_s R_{a,s}
$$

**Decision rule**

$$
a^* = \arg\min_a \text{MR}(a)
$$
**Interpretation**
* Robust to probability misspecification
* Optimizes post-hoc defensibility
**Implementation**
```python
# Regret: extra loss vs. the best action in each scenario (column-wise min)
regret = loss_matrix - loss_matrix.min(axis=0, keepdims=True)
# Worst-case regret per action (row-wise max)
max_regret = regret.max(axis=1)
```
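Putting the two steps together on a small hypothetical loss matrix, the minimax-regret action is the row with the smallest worst-case regret:

```python
import numpy as np

# Hypothetical 2x3 loss matrix: rows = actions, columns = scenarios.
loss_matrix = np.array([
    [10.0, 40.0, 100.0],
    [30.0, 30.0,  35.0],
])

# Regret: extra loss relative to the best action in each scenario.
regret = loss_matrix - loss_matrix.min(axis=0, keepdims=True)
max_regret = regret.max(axis=1)        # worst-case regret per action
best_action = int(np.argmin(max_regret))
```

Here the per-scenario minima are `[10, 30, 35]`, so the regret rows are `[0, 10, 65]` and `[20, 0, 0]`; action 2 wins with a worst-case regret of 20 against 65.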
---
### **3. CVaR (Conditional Value at Risk)**
CVaR evaluates **average loss in the worst tail of outcomes**.
#### **Procedure (discrete)**
1. Sort losses $L_{a,s}$ in ascending order
2. Sort probabilities accordingly
3. Compute cumulative probability
4. Select outcomes where cumulative probability $\ge \alpha$
5. Compute probability-weighted average over the tail
$$
\text{CVaR}_\alpha(a) = \mathbb{E}\big[\, L_{a,s} \mid \text{worst } (1-\alpha) \text{ mass} \,\big]
$$
**Interpretation**
* Tail-risk-aware
* Penalizes catastrophic downside
**Implementation**
```python
# Per-action CVaR: `losses` is one row of the loss matrix
order = np.argsort(losses)        # ascending losses
cum = np.cumsum(probs[order])     # cumulative probability
tail = cum >= alpha               # worst (1 - alpha) mass, plus the boundary outcome
cvar = (losses[order][tail] * probs[order][tail]).sum() / probs[order][tail].sum()
```
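A minimal sketch applying this procedure row by row to get one CVaR value per action (the function name is illustrative, not the kernel's actual API):

```python
import numpy as np

def cvar_per_action(loss_matrix, probs, alpha=0.9):
    """Discrete CVaR per action: probability-weighted mean of the worst tail."""
    out = np.empty(loss_matrix.shape[0])
    for a, losses in enumerate(loss_matrix):
        order = np.argsort(losses)        # ascending losses
        cum = np.cumsum(probs[order])     # cumulative probability
        tail = cum >= alpha               # worst (1 - alpha) mass, plus boundary
        w = probs[order][tail]
        out[a] = (losses[order][tail] * w).sum() / w.sum()
    return out

cvar = cvar_per_action(
    np.array([[10.0, 40.0, 100.0], [30.0, 30.0, 35.0]]),
    np.array([0.5, 0.3, 0.2]),
    alpha=0.9,
)
```

With these hypothetical numbers the tail is the single worst scenario, so action 1 scores 100 and action 2 scores 35: the tail lens strongly favors the action with the bounded downside.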
---
## **Probability Normalization**
To guarantee numerical and logical safety:
* Negative probabilities are clipped to zero
* Zero-sum vectors default to uniform probabilities
* All probabilities are normalized to sum to 1
```python
p = np.clip(p, 0.0, None)  # clip negatives to zero
total = p.sum()
p = p / total if total > 0 else np.ones_like(p) / len(p)  # uniform fallback
```
These guards ensure the kernel **never fails on a malformed probability vector**.
---
## **Rule Selection Heuristic**
Decision Kernel Lite includes an **advisory heuristic**:
$$
\text{Tail Ratio} = \frac{\min_a \text{CVaR}_\alpha(a)}{\min_a \text{EL}(a)}
$$
* If Tail Ratio $\ge 1.5$ → recommend **CVaR**
* Otherwise → recommend **Expected Loss**
**Properties**
* Transparent
* Non-binding
* Overridable by the user
This preserves governance while guiding non-technical users.
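A minimal sketch of the heuristic (the function name is illustrative, and it assumes the minimum expected loss is strictly positive):

```python
import numpy as np

def recommend_rule(expected_loss, cvar, threshold=1.5):
    """Advisory only: returns a suggestion the user can always override.
    Assumes expected_loss.min() > 0 so the ratio is well defined."""
    tail_ratio = cvar.min() / expected_loss.min()
    rule = "cvar" if tail_ratio >= threshold else "expected_loss"
    return rule, tail_ratio

rule, ratio = recommend_rule(np.array([37.0, 31.0]), np.array([100.0, 35.0]))
```

For these hypothetical scores the ratio is 35/31 ≈ 1.13, below the threshold, so the heuristic suggests the Expected Loss lens.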
---
## **Decision Logic Flow**
```text
Inputs
  ↓
Normalize probabilities
  ↓
Compute:
  - Expected Loss
  - Regret matrix
  - Max Regret
  - CVaR
  ↓
Select primary rule
  ↓
Choose action
  ↓
Generate Decision Card
```
All computations are deterministic and stateless.
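The flow above can be sketched end to end as a single stateless function (the `decide` name and the returned dict are illustrative, not the kernel's actual API):

```python
import numpy as np

def decide(loss_matrix, probs, alpha=0.9):
    """Stateless sketch of the full flow: normalize, score, select, report."""
    # Normalize probabilities: clip negatives, fall back to uniform.
    p = np.clip(np.asarray(probs, dtype=float), 0.0, None)
    p = p / p.sum() if p.sum() > 0 else np.full(len(p), 1.0 / len(p))

    L = np.asarray(loss_matrix, dtype=float)
    expected_loss = L @ p                            # risk-neutral lens
    regret = L - L.min(axis=0, keepdims=True)        # regret lens
    max_regret = regret.max(axis=1)

    cvar = np.empty(L.shape[0])                      # tail-risk lens
    for a, losses in enumerate(L):
        order = np.argsort(losses)
        cum = np.cumsum(p[order])
        tail = cum >= alpha
        w = p[order][tail]
        cvar[a] = (losses[order][tail] * w).sum() / w.sum()

    # Advisory heuristic: prefer CVaR when the tail ratio is >= 1.5.
    rule = "cvar" if cvar.min() >= 1.5 * expected_loss.min() else "expected_loss"
    scores = cvar if rule == "cvar" else expected_loss
    return {"rule": rule, "action": int(np.argmin(scores)),
            "expected_loss": expected_loss, "max_regret": max_regret, "cvar": cvar}

card = decide([[10, 40, 100], [30, 30, 35]], [0.5, 0.3, 0.2])
```

Because the function takes only its inputs and returns only its outputs, repeated calls with the same arguments always yield the same decision card.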
---
## **System Architecture**
* **Core kernel**: pure NumPy (testable, embeddable)
* **UI layer**: Streamlit (input + presentation only)
* **Deployment**: Docker / Hugging Face Spaces
There is no persistence, no external API, and no hidden state.
---
## **Design Constraints (Intentional)**
The system deliberately avoids:
* forecasting
* probability estimation
* optimization solvers
* learning or calibration
These belong to **adjacent layers**, not the decision kernel.
---
## **Computational Complexity**
Let ( A ) = number of actions, ( S ) = number of scenarios.
* Expected Loss: ( O(A \cdot S) )
* Regret: ( O(A \cdot S) )
* CVaR: ( O(A \cdot S \log S) )
The system is effectively instantaneous for realistic decision sizes.
---
## **Reproducibility & Auditability**
* Deterministic outputs
* Explicit assumptions
* Full transparency of trade-offs
* Copy/paste Decision Card
This makes the kernel suitable for:
* executive decisions
* governance reviews
* post-decision audits
---
## **Summary**
Decision Kernel Lite implements **classical, defensible decision theory** in a minimal, production-ready form.
It transforms uncertainty from an excuse into a **structured input**.
---