---
license: mit
language:
  - en
tags:
  - welfare
  - forecasting
  - time-series
  - pytorch
  - ethics
  - phi-humanity
library_name: pytorch
pipeline_tag: time-series-forecasting
datasets:
  - synthetic
metrics:
  - mse
---

# PhiForecasterGPU: Welfare Trajectory Forecaster

A CNN+LSTM+Attention model that forecasts Phi(humanity) welfare trajectories, predicting how 8 ethical constructs (care, compassion, joy, purpose, empathy, love, protection, truth) evolve over time.

## Model Description

PhiForecasterGPU takes a 50-timestep window of 36 welfare signal features and predicts the next 10 timesteps for both the aggregate Phi score and all 8 individual constructs.

| Property | Value |
|---|---|
| Architecture | CNN1D (2-layer) → Stacked LSTM (2-layer) → Additive Attention → Dual heads |
| Parameters | ~1.3M |
| Input | 50 timesteps × 36 features |
| Output | 10-step Phi forecast + 10-step × 8 construct forecast + attention weights |
| Formula | Phi v2.1 (recovery-aware floors) |
| Training | 100 epochs, 44,800 sequences from 8 scenario types × 50 seeds |
| Best Val Loss | 0.000196 MSE |

## Architecture

```
Input (50 × 36)
    ↓
CNN1D (2 conv layers, kernel=3, hidden=256)
    ↓
Stacked LSTM (2 layers, hidden=256)
    ↓
Additive Attention (query-key-value)
    ↓
┌───────────────┬──────────────────┐
│  Phi Head     │  Construct Head  │
│  (MLP → 10)   │  (MLP → 10 × 8)  │
└───────────────┴──────────────────┘
```
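The stack above can be sketched in PyTorch. This is an illustrative reimplementation, not the shipped `PhiForecasterGPU` class: layer counts and hidden sizes follow the table, but the attention parameterization, padding, and head widths are assumptions.

```python
import torch
import torch.nn as nn

class PhiForecasterSketch(nn.Module):
    """Minimal sketch of the CNN1D -> LSTM -> attention -> dual-head stack.

    Hyperparameters mirror the model card (hidden=256, 2 conv layers,
    2 LSTM layers, pred_len=10); head widths and the exact attention
    form are assumptions.
    """

    def __init__(self, input_size=36, hidden_size=256, n_layers=2, pred_len=10):
        super().__init__()
        self.pred_len = pred_len
        self.conv = nn.Sequential(
            nn.Conv1d(input_size, hidden_size, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv1d(hidden_size, hidden_size, kernel_size=3, padding=1),
            nn.ReLU(),
        )
        self.lstm = nn.LSTM(hidden_size, hidden_size, n_layers, batch_first=True)
        # Additive (Bahdanau-style) scoring over the LSTM outputs
        self.attn_score = nn.Sequential(
            nn.Linear(hidden_size, hidden_size), nn.Tanh(), nn.Linear(hidden_size, 1)
        )
        self.phi_head = nn.Linear(hidden_size, pred_len)            # -> 10 Phi steps
        self.construct_head = nn.Linear(hidden_size, pred_len * 8)  # -> 10 x 8

    def forward(self, x):                                  # x: (batch, 50, 36)
        h = self.conv(x.transpose(1, 2)).transpose(1, 2)   # (batch, 50, 256)
        h, _ = self.lstm(h)                                # (batch, 50, 256)
        attn = torch.softmax(self.attn_score(h).squeeze(-1), dim=1)  # (batch, 50)
        ctx = (attn.unsqueeze(-1) * h).sum(dim=1)          # context: (batch, 256)
        phi = self.phi_head(ctx).unsqueeze(-1)             # (batch, 10, 1)
        constructs = self.construct_head(ctx).view(-1, self.pred_len, 8)
        return phi, constructs, attn

model = PhiForecasterSketch()
phi, constructs, attn = model(torch.randn(2, 50, 36))
print(phi.shape, constructs.shape, attn.shape)
# torch.Size([2, 10, 1]) torch.Size([2, 10, 8]) torch.Size([2, 50])
```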

## 36 Input Features

- 8 raw constructs: `c`, `kappa`, `j`, `p`, `eps`, `lam_L`, `lam_P`, `xi`
- 8 volatility signals: rolling 20-step standard deviation per construct
- 8 momentum signals: 10-step momentum per construct
- 5 synergy signals: geometric mean of construct pairs (care×love, compassion×protection, joy×purpose, empathy×truth, love×truth)
- 5 divergence signals: squared difference of construct pairs
- `phi`: aggregate welfare score
- `dphi_dt`: first derivative of phi

## Graph-Enhanced Mode (43 features)

When trained with `graph_features_enabled=True`, the model accepts 43 features (36 base + 7 graph topology features from the detective knowledge graph):

| Feature | Description |
|---|---|
| `graph_density` | Edge density of the full graph |
| `entity_pagerank` | PageRank centrality of focal entity |
| `entity_degree` | Normalized degree of focal entity |
| `entity_clustering` | Clustering coefficient |
| `community_size` | Fraction of graph in same community |
| `avg_neighbor_conf` | Mean edge confidence to neighbors |
| `hub_score` | HITS hub score (normalized) |
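With networkx, these seven features could be computed roughly as follows. The exact normalizations and the community-detection algorithm are assumptions, and the `confidence` edge attribute is hypothetical:

```python
import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

def graph_features(G, entity):
    """Sketch of the 7 topology features for a focal entity.

    Normalizations, the community algorithm, and the `confidence`
    edge attribute are assumptions, not the card's implementation.
    """
    n = G.number_of_nodes()
    own_community = next(c for c in greedy_modularity_communities(G) if entity in c)
    hubs, _authorities = nx.hits(G)
    confs = [d.get("confidence", 1.0) for _, _, d in G.edges(entity, data=True)]
    return {
        "graph_density": nx.density(G),
        "entity_pagerank": nx.pagerank(G)[entity],
        "entity_degree": G.degree(entity) / max(n - 1, 1),
        "entity_clustering": nx.clustering(G, entity),
        "community_size": len(own_community) / n,
        "avg_neighbor_conf": sum(confs) / max(len(confs), 1),
        "hub_score": hubs[entity],
    }

# Demo on a built-in graph
feats = graph_features(nx.karate_club_graph(), 0)
print({k: round(v, 3) for k, v in feats.items()})
```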

## The Phi(humanity) Formula

Phi is a rigorous ethical-affective objective function grounded in care ethics (hooks 2000), capability theory (Sen 1999), and Ubuntu philosophy:

`Phi(humanity) = f(lam_L) · [prod(x_tilde_i ^ w_i)] · Psi_ubuntu · (1 - Psi_penalty)`

- `f(lam_L)`: Community solidarity multiplier
- `x_tilde_i`: Recovery-aware effective inputs (below-floor constructs get community-mediated recovery)
- `w_i`: Inverse-deprivation weights (Rawlsian maximin)
- `Psi_ubuntu`: Relational synergy term + curiosity coupling (love × truth)
- `Psi_penalty`: Structural distortion penalty for mismatched construct pairs
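As a purely illustrative sketch of how these terms compose: the card names the terms but not their functional forms, so every form below (the linear solidarity multiplier, the floor-recovery rule, the coupling and penalty coefficients) is an assumption.

```python
import numpy as np

NAMES = ["c", "kappa", "j", "p", "eps", "lam_L", "lam_P", "xi"]

def phi_sketch(x, floor=0.2, w_eps=0.1):
    """Illustrative Phi(humanity) composition; all functional forms assumed.

    x: dict mapping the 8 construct names to values in [0, 1].
    """
    v = np.array([x[n] for n in NAMES])

    # Community solidarity multiplier: increasing in lam_L (assumed linear).
    f = 0.5 + 0.5 * x["lam_L"]

    # Recovery-aware floor: below-floor constructs are pulled up toward the
    # floor in proportion to community solidarity (assumed form).
    v_eff = np.where(v < floor, v + x["lam_L"] * (floor - v), v)

    # Inverse-deprivation (Rawlsian maximin) weights: scarcer constructs
    # weigh more; normalized to sum to 1.
    w = 1.0 / (v_eff + w_eps)
    w = w / w.sum()

    core = np.prod(v_eff ** w)                       # weighted geometric mean
    psi_ubuntu = 1.0 + 0.25 * x["lam_L"] * x["xi"]   # love x truth coupling
    psi_penalty = 0.5 * (x["lam_L"] - x["xi"]) ** 2  # mismatch distortion

    return f * core * psi_ubuntu * (1.0 - psi_penalty)

print(round(phi_sketch({n: 0.5 for n in NAMES}), 4))  # 0.3984
```

Because the core is a weighted geometric mean, any single construct near zero drags the whole score down, which is the maximin intuition behind the inverse-deprivation weights.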

## Training Scenarios

| Scenario | Description |
|---|---|
| `stable_community` | Baseline: all constructs ~0.5 |
| `capitalism_suppresses_love` | Love declines as purpose erodes |
| `surveillance_state` | Truth rises while love collapses |
| `willful_ignorance` | Love rises while truth collapses |
| `recovery_arc` | Collapse, then community-led recovery |
| `sudden_crisis` | Compassion + protection crash mid-scenario |
| `slow_decay` | Gradual institutional erosion |
| `random_walk` | Correlated random walks with mean-reversion |
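The generators themselves ship with the training code; two of them might be approximated as follows, with all dynamics below being assumptions fitted to the one-line descriptions in the table:

```python
import numpy as np

def simulate_scenario(kind, T=300, seed=0):
    """Sketch of two scenario generators (assumed dynamics).

    Returns a (T, 8) trajectory in the order
    [c, kappa, j, p, eps, lam_L, lam_P, xi], clipped to [0, 1].
    """
    rng = np.random.default_rng(seed)
    x = np.full((T, 8), 0.5)
    drift = rng.normal(0.0, 0.01, (T, 8)).cumsum(axis=0) * 0.1
    t = np.linspace(0.0, 1.0, T)

    if kind == "stable_community":
        x = x + drift                    # mild wandering around 0.5
        x -= 0.05 * (x - 0.5)            # mean reversion toward baseline
    elif kind == "surveillance_state":
        x = x + drift
        x[:, 7] += 0.3 * t               # truth (xi) rises over the run
        x[:, 5] -= 0.4 * t               # love (lam_L) collapses
    else:
        raise ValueError(f"not sketched: {kind}")

    return np.clip(x, 0.0, 1.0)

traj = simulate_scenario("surveillance_state", seed=42)
print(traj.shape, traj[-1, 7] > traj[-1, 5])  # truth ends above love
```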

## Usage

```python
import torch
from huggingface_hub import hf_hub_download

# Download checkpoint
path = hf_hub_download("crichalchemist/phi-forecaster", "phi_forecaster_best.pt")

# Load model (see spaces/maninagarden/model.py for class definitions)
from model import PhiForecasterGPU

model = PhiForecasterGPU(input_size=36, hidden_size=256, n_layers=2, pred_len=10)
model.load_state_dict(torch.load(path, map_location="cpu", weights_only=True))
model.eval()

# Inference: (batch, seq_len=50, features=36) -> phi, constructs, attention
X = torch.randn(1, 50, 36)
phi_pred, construct_pred, attn = model(X)
# phi_pred:       (1, 10, 1) -- next 10 Phi values
# construct_pred: (1, 10, 8) -- next 10 values for each construct
# attn:           (1, 50)    -- attention weights over the input window
```
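To run the model over a full trajectory rather than a single random window, a (T, 36) feature matrix can be sliced into overlapping 50-step windows; the stride of 1 and the reserved 10-step target horizon are assumptions:

```python
import numpy as np
import torch

def make_windows(features, seq_len=50, pred_len=10):
    """Slice a (T, 36) feature matrix into model-ready input windows.

    Returns a float32 tensor of shape (n_windows, seq_len, 36); the last
    pred_len rows are held out so every window has a full target horizon.
    """
    T = features.shape[0]
    stops = range(seq_len, T - pred_len + 1)
    X = np.stack([features[s - seq_len:s] for s in stops])
    return torch.from_numpy(X.astype(np.float32))

feats = np.random.default_rng(0).random((120, 36))
X = make_windows(feats)
print(X.shape)  # torch.Size([61, 50, 36])
```

Each window can then be passed to the model one batch at a time, exactly as in the single-window example above.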

## Live Demo

Try it at crichalchemist/maninagarden: an interactive scenario explorer, custom forecasting, and experiment comparison.

## Citation

This model implements the Phi(humanity) welfare function formalized in the detective-llm project. Key theoretical foundations:

- hooks, b. (2000). *All About Love: New Visions*. William Morrow.
- Sen, A. (1999). *Development as Freedom*. Oxford University Press.
- Fricker, M. (2007). *Epistemic Injustice*. Oxford University Press.
- Metz, T. (2007). Toward an African moral theory. *Journal of Political Philosophy*.

## Limitations

- Trained on synthetic data; real-world calibration is pending
- Phi is a diagnostic tool, not an optimization target (Goodhart's Law)
- The 8-construct taxonomy is Western-situated; it requires adaptation for diverse philosophical frameworks
- Recovery-aware floors add theoretical richness, but the model has not yet been retrained to fully leverage them