Ridge multi-horizon CGM forecaster (MetaboNet)
A scikit-learn-trained Ridge regressor (with a StandardScaler) re-packaged as a
transformers-compatible Hub model. One repo holds four feature ablations
selectable at load time:
- cgm: 24 CGM lags + hour_sin/hour_cos (26 features).
- insulin: cgm features + 24 Insulin lags (50 features).
- carbs: cgm features + 24 Carbs lags (50 features).
- all: cgm features + 24 Insulin lags + 24 Carbs lags (74 features).
History length is 24 (= 2 hours at 5-minute sampling). Output is 12 future CGM values (5 to 60 min horizons).
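The four ablations differ only in which lag blocks are concatenated. A minimal sketch of the feature assembly (the exact feature ordering here is an assumption for illustration; the authoritative per-ablation lists live in config.json):

```python
import numpy as np

def build_features(cgm, insulin=None, carbs=None, hour=12.0):
    """Assemble one feature row for the cgm/insulin/carbs/all ablations.

    cgm/insulin/carbs: the last 24 samples, ordered oldest -> newest.
    hour: fractional hour-of-day of the most recent timestamp.
    """
    parts = [np.asarray(cgm, dtype=np.float64)]              # 24 CGM lags
    if insulin is not None:
        parts.append(np.asarray(insulin, dtype=np.float64))  # +24 Insulin lags
    if carbs is not None:
        parts.append(np.asarray(carbs, dtype=np.float64))    # +24 Carbs lags
    angle = 2 * np.pi * hour / 24.0
    parts.append(np.array([np.sin(angle), np.cos(angle)]))   # hour_sin, hour_cos
    return np.concatenate(parts)

x = build_features(np.zeros(24))                                  # "cgm" ablation
x_all = build_features(np.zeros(24), np.zeros(24), np.zeros(24))  # "all" ablation
```

The feature counts follow directly: 24 + 2 = 26 for cgm, 26 + 24 = 50 for insulin or carbs, and 26 + 48 = 74 for all.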
Files
- config.json: auto_map wiring + per-ablation feature lists.
- model.py: RidgeMultiHorizonConfig / RidgeMultiHorizonModel (trust_remote_code=True).
- model_<ablation>.safetensors: one per ablation, holding scaler_mean, scaler_scale, coef (12 × F), intercept (12).
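Because each checkpoint holds only those four tensors, a forecast is just a standardization followed by a linear map. A sketch of the math with synthetic weights of the documented shapes (real tensors could be read with safetensors, but the values below are stand-ins):

```python
import numpy as np

def ridge_predict(x, scaler_mean, scaler_scale, coef, intercept):
    """Apply the stored StandardScaler + Ridge weights.

    x: (B, F) raw features; coef: (12, F); intercept: (12,).
    Returns (B, 12) CGM forecasts, one column per 5-minute horizon.
    """
    z = (x - scaler_mean) / scaler_scale  # StandardScaler.transform
    return z @ coef.T + intercept         # Ridge decision function

# Stand-in weights with the documented shapes (F = 26 for the "cgm" ablation)
F = 26
preds = ridge_predict(
    np.zeros((3, F)),
    scaler_mean=np.zeros(F), scaler_scale=np.ones(F),
    coef=np.zeros((12, F)), intercept=np.arange(12.0),
)
```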
Usage
from transformers import AutoConfig, AutoModel
cfg = AutoConfig.from_pretrained(
    "anonymous-4FAD/Ridge", trust_remote_code=True, ablation="cgm"
)
model = AutoModel.from_pretrained(
    "anonymous-4FAD/Ridge", trust_remote_code=True, config=cfg
)
# Inputs match the MetaboNet benchmark.py contract:
# timestamps: int64 ns, shape (B, T_in)
# cgm/insulin/carbs: float, shape (B, T_in); only the last 24 steps are used
preds = model.predict(timestamps, cgm, insulin, carbs) # -> (B, 12)
The thin local wrapper in models/ridge.py exposes the same API used by benchmark.py.
Feature convention
CGM_t<i> denotes the i-th sample within the last history_length steps,
ordered oldest -> newest (CGM_t0 is the oldest of the 24, CGM_t23 is the
newest). The same convention applies to Insulin_t<i> and Carbs_t<i>.
hour_sin / hour_cos are derived from the most recent input timestamp.
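One plausible derivation of the time-of-day features from the last int64-nanosecond timestamp (the exact logic in model.py may differ, e.g. in timezone handling, which is assumed to be UTC here):

```python
import numpy as np

NS_PER_HOUR = 3_600 * 10**9

def hour_features(timestamps_ns):
    """hour_sin / hour_cos from the most recent timestamp of each series.

    timestamps_ns: int64 array of shape (B, T_in), UNIX epoch nanoseconds.
    """
    last = np.asarray(timestamps_ns)[..., -1]
    hour = (last % (24 * NS_PER_HOUR)) / NS_PER_HOUR  # fractional hour of day
    angle = 2 * np.pi * hour / 24.0
    return np.sin(angle), np.cos(angle)

# Most recent timestamp is 06:00 UTC -> hour_sin = 1, hour_cos ~ 0
s, c = hour_features(np.array([[0, 6 * NS_PER_HOUR]], dtype=np.int64))
```

The sin/cos encoding keeps midnight and 23:59 adjacent in feature space, which a raw hour-of-day feature would not.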
Provenance
Trained via other_models/results/train_ridge.py on the public MetaboNet train split. The safetensors checkpoints are produced by scripts/build_other_models_hub.py from the original sklearn pickles.
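The pickle-to-safetensors conversion amounts to extracting four numpy arrays from the fitted estimators. A sketch with a freshly fitted stand-in scaler and multi-output Ridge (the real script reads the original pickles instead; saving would use safetensors.numpy.save_file):

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.preprocessing import StandardScaler

# Stand-in scaler + 12-horizon Ridge on random data (F = 26, as in "cgm")
rng = np.random.default_rng(0)
X, Y = rng.normal(size=(200, 26)), rng.normal(size=(200, 12))
scaler = StandardScaler().fit(X)
ridge = Ridge(alpha=1.0).fit(scaler.transform(X), Y)

# The four documented tensors, ready to pass to safetensors.numpy.save_file
tensors = {
    "scaler_mean": scaler.mean_.astype(np.float32),    # (F,)
    "scaler_scale": scaler.scale_.astype(np.float32),  # (F,)
    "coef": ridge.coef_.astype(np.float32),            # (12, F)
    "intercept": ridge.intercept_.astype(np.float32),  # (12,)
}
```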