Attuned Resonance Outcome Predictor: Multi-Head LSTM
The GitHub repo slug is still "CEPM" during the gradual rename; the Hugging Face slugs were migrated to `attuned-resonance-*` on 2026-05-09 (HF preserves the old `cepm-*` URLs as redirects).
Given an intake record, an advisor profile, and the advisor's recent call history, the model forecasts three call outcomes: handle time, first-contact resolution (FCR) probability, and CSAT. Designed to be composed downstream of the intake model and upstream of the PPO router.
Research/educational use only. See disclaimer below.
Compatibility note (2026-04): these weights were trained against the previous 10-class intake schema. The intake model has since been retrained on a clean 8-class schema, so the `intent_idx / len(INTENT_LABELS)` channel of the input now lives on a slightly different scale at inference time. End-to-end cascade output is best read as illustrative until the predictor is retrained against the new intake.
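A toy illustration of the schema drift described above (the class counts are the only facts taken from this card; everything else is a made-up example): the same intent index normalizes to different values under the old 10-class and new 8-class label lists, so the channel the weights were trained on has shifted at inference time.

```python
OLD_NUM_CLASSES = 10  # schema the predictor weights were trained against
NEW_NUM_CLASSES = 8   # schema the retrained intake model now emits

intent_idx = 5  # arbitrary example intent

old_channel = intent_idx / OLD_NUM_CLASSES  # what training saw for this intent
new_channel = intent_idx / NEW_NUM_CLASSES  # what inference now feeds the model

print(old_channel, new_channel)  # 0.5 vs 0.625 for the same underlying intent
```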
Architecture
```
intake_features (7-dim)  ───────────┐
advisor_features (14-dim) ──────────┼──▶ concat ──▶ MLP head ──┬──▶ handle_time (regression, 0–1, rescaled ×1800 s)
advisor_history (30 × 6)            │                          ├──▶ fcr (binary, sigmoid)
         │                          │                          └──▶ csat (regression, 0–1, rescaled to 1–5)
         ▼                          │
   LSTM encoder ────────────────────┘
   (30-step history → hidden state)
```
Small model (~120k parameters). The LSTM encodes the advisor's last 30 calls as a temporal sequence; its final hidden state is concatenated with the advisor profile and intake features, then fed to per-task heads.
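The data flow can be sketched in plain NumPy. This is a hypothetical re-implementation for illustration only: the hidden size, layer widths, and random weights are assumptions, not the repo's actual code or parameters.

```python
import numpy as np

rng = np.random.default_rng(0)
HIDDEN = 32  # assumed LSTM hidden size; the real value lives in the repo config

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_encode(history, Wx, Wh, b):
    """Run a single-layer LSTM over the (30, 6) history; return the final hidden state."""
    h = np.zeros(HIDDEN)
    c = np.zeros(HIDDEN)
    for x_t in history:
        z = Wx @ x_t + Wh @ h + b          # stacked gate pre-activations, shape (4*HIDDEN,)
        i, f, g, o = np.split(z, 4)
        i, f, o = sigmoid(i), sigmoid(f), sigmoid(o)
        c = f * c + i * np.tanh(g)          # cell state update
        h = o * np.tanh(c)                  # hidden state update
    return h

# Random parameters just to exercise the shapes.
Wx = rng.normal(size=(4 * HIDDEN, 6)) * 0.1
Wh = rng.normal(size=(4 * HIDDEN, HIDDEN)) * 0.1
b = np.zeros(4 * HIDDEN)

intake = rng.random(7).astype(np.float32)        # 7-dim intake features
advisor = rng.random(14).astype(np.float32)      # 14-dim advisor profile
history = rng.random((30, 6)).astype(np.float32) # last 30 calls, 6 features each

h_final = lstm_encode(history, Wx, Wh, b)           # (HIDDEN,)
trunk = np.concatenate([intake, advisor, h_final])  # (7 + 14 + HIDDEN,)

# One shared MLP layer, then three per-task heads with the card's rescalings.
W1 = rng.normal(size=(64, trunk.size)) * 0.1
hid = np.tanh(W1 @ trunk)
handle_time = sigmoid(rng.normal(size=64) @ hid) * 1800.0  # seconds, 0–1800
fcr_prob = sigmoid(rng.normal(size=64) @ hid)              # probability
csat = 1.0 + sigmoid(rng.normal(size=64) @ hid) * 4.0      # rescaled to 1–5
```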
Training
- Data: synthetic data pipeline, ~60k calls × 500 advisors × 30 days (default generator config)
- Intake features: derived from intake labels (intent-index, sentiment, urgency, complexity, jung-index, campbell-index, archetype-confidence), each normalized to [0, 1]
- Optimizer: Adam, batch size 256
- Epochs: ~6 (plateaued; early-stopped via best-val checkpoint)
- Hardware: CPU (RunPod RTX 5090 had a cuDNN LSTM compatibility issue on Blackwell; known issue, fix pending)
- Tracked: MLflow experiment `prod-predictor`
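The 7-channel intake encoding can be sketched as below. This is a hypothetical stand-in for `models/predictor/dataset.py::_encode_intake`: the channel order comes from this card, but the label lists, input ranges, and denominators are illustrative assumptions, not the repo's actual constants.

```python
import numpy as np

# Assumed vocabulary sizes, for illustration only.
INTENT_LABELS = [f"intent_{i}" for i in range(10)]  # 10-class schema the weights saw
N_JUNG, N_CAMPBELL = 8, 12                          # assumed archetype vocabulary sizes

def encode_intake(rec: dict) -> np.ndarray:
    """Sketch of the 7-dim intake vector; every channel mapped into [0, 1]."""
    return np.array(
        [
            rec["intent_idx"] / len(INTENT_LABELS),
            (rec["sentiment"] + 1.0) / 2.0,  # assume sentiment arrives in [-1, 1]
            rec["urgency"],                  # assume already in [0, 1]
            rec["complexity"],               # assume already in [0, 1]
            rec["jung_idx"] / N_JUNG,
            rec["campbell_idx"] / N_CAMPBELL,
            rec["archetype_confidence"],     # model confidence, already [0, 1]
        ],
        dtype=np.float32,
    )

vec = encode_intake(
    {"intent_idx": 3, "sentiment": -0.2, "urgency": 0.7, "complexity": 0.4,
     "jung_idx": 5, "campbell_idx": 9, "archetype_confidence": 0.85}
)
```

The point of the sketch is the discipline called out under Intended Use: inference must reproduce exactly these normalizations, or predictions are silently wrong.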
Metrics (best checkpoint)
- val_loss: 0.6367 (sum of handle-time MSE + FCR BCE + CSAT MSE)
- Plateau reached by epochs 3–6
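The reported val_loss is the unweighted sum of the three per-head losses. A minimal sketch of that combined objective, on a made-up toy batch (the equal weighting is taken from the metric description above; the numbers are invented):

```python
import numpy as np

def mse(pred, target):
    """Mean squared error over a batch."""
    return float(np.mean((pred - target) ** 2))

def bce(prob, target, eps=1e-7):
    """Binary cross-entropy over a batch, clipped for numerical safety."""
    p = np.clip(prob, eps, 1.0 - eps)
    return float(np.mean(-(target * np.log(p) + (1 - target) * np.log(1 - p))))

# Toy predictions/targets in the heads' normalized [0, 1] spaces.
ht_pred, ht_true = np.array([0.5, 0.8]), np.array([0.6, 0.7])      # handle time
fcr_pred, fcr_true = np.array([0.41, 0.9]), np.array([0.0, 1.0])   # FCR
csat_pred, csat_true = np.array([0.6, 0.55]), np.array([0.7, 0.5]) # CSAT

total = mse(ht_pred, ht_true) + bce(fcr_pred, fcr_true) + mse(csat_pred, csat_true)
```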
Intended Use
- Research on multi-task tabular+sequence regression
- Educational demonstrations of feature encoding discipline (matching training and inference encodings)
- Benchmarking against the synthetic-data environment in this repo
Out of Scope / Not Intended
- Any production or commercial use. Not validated for operational deployment.
- Real-world outcome prediction. Trained on synthetic data with known distributional simplifications.
- High-stakes decision support.
Limitations
- Synthetic data only: no real outcome distributions. The model learns the generator's biases.
- Short history dependence: only the last 30 advisor calls are visible.
- Feature encoding must match training. The 7-dim intake vector uses a specific normalization scheme (see `models/predictor/dataset.py::_encode_intake`). Mismatched encoding produces silently wrong predictions.
- Router integration pending. The PPO router (v2.1 roadmap) will consume this predictor's outputs as part of its reward signal.
How to Load
```python
from pathlib import Path

import numpy as np

from models.predictor.inference import OutcomeEstimator

estimator = OutcomeEstimator(
    model_path=Path("trained_models/predictor/model.pt"),
    device="cpu",
)

intake_features = np.array([...], dtype=np.float32)   # shape (7,)
advisor_features = np.array([...], dtype=np.float32)  # shape (14,)
advisor_history = np.array([...], dtype=np.float32)   # shape (30, 6)

outcome = estimator.predict(intake_features, advisor_features, advisor_history)
# {'handle_time_seconds': 1550.0, 'fcr_probability': 0.41, 'csat_predicted': 3.4}
```
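Because shape or encoding mismatches fail silently (see Limitations), a pre-flight check before calling `predict` can fail fast instead. This helper is not part of the repo's API; it is a hypothetical sketch using only the input shapes documented above.

```python
import numpy as np

# Training-time input shapes, taken from the usage example above.
EXPECTED = {
    "intake_features": (7,),
    "advisor_features": (14,),
    "advisor_history": (30, 6),
}

def check_inputs(**arrays: np.ndarray) -> None:
    """Raise ValueError on any input whose shape or dtype deviates from training."""
    for name, arr in arrays.items():
        want = EXPECTED[name]
        if arr.shape != want:
            raise ValueError(f"{name}: expected shape {want}, got {arr.shape}")
        if arr.dtype != np.float32:
            raise ValueError(f"{name}: expected float32, got {arr.dtype}")

# Well-formed inputs pass silently.
check_inputs(
    intake_features=np.zeros(7, dtype=np.float32),
    advisor_features=np.zeros(14, dtype=np.float32),
    advisor_history=np.zeros((30, 6), dtype=np.float32),
)
```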
Full pipeline code at tedrubin80/CEPM.
License
CC-BY-NC-4.0. Non-commercial use only. Attribution required.
Citation
```bibtex
@software{attuned_resonance_predictor_2026,
  author = {Rubin, Ted},
  title  = {Attuned Resonance Outcome Predictor: Multi-Head LSTM for Call Outcome Forecasting},
  year   = {2026},
  url    = {https://github.com/tedrubin80/CEPM}
}
```