# 🧠🧠 OpOp — The Optimizer Optimizer 🧠🧠

because your optimizer is dumb and somebody had to say it 🤷
```bash
pip install opop
```

```python
import torch
from opop import OptBrain

optimizer = OptBrain(torch.optim.Adam(model.parameters(), lr=1e-3))
```

literally just add `loss=` and ur done lol:

```python
optimizer.step(loss=loss.item())
```

thats it. thats the whole thing. ur welcome 😌
## 🤨 wtf is this

Adam stores two full copies of every parameter in your model.

Two. Full. Copies. 🤯

your model has 500M parameters? cool, Adam is sitting on **4 GIGABYTES** of optimizer state running the same formula over and over like a goldfish that forgot it already swam that lap.

it doesnt know what step youre on. it doesnt know the loss plateaued 200 steps ago. it doesnt know that parameter group 3 is oscillating like crazy while group 1 converged an hour ago. it doesnt know ANYTHING. its just vibing with exponential moving averages. forever. until you stop it. 💀
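the math, if you dont believe the 4GB number. this is just illustrative arithmetic (the helper name is ours, not an OpOp API), assuming the default fp32 optimizer state:

```python
def adam_state_bytes(n_params, bytes_per_float=4):
    # Adam keeps two extra buffers per parameter:
    # exp_avg (first moment) and exp_avg_sq (second moment)
    return 2 * n_params * bytes_per_float

print(adam_state_bytes(500_000_000) / 1e9)  # 4.0 -> 4 GB for a 500M-param model
```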
OpOp is a tiny brain that watches your training and learns what helps.
- 📉 loss going down? brain remembers what it did
- 📈 loss going up? brain remembers that too and stops doing it
- 🌀 gradients oscillating? brain dampens that group
- 😴 parameters stuck? brain pushes harder
- 🐣 early training chaos? brain stays cautious
- 🎯 converging nicely? brain gets out of the way

50KB. not 4GB. 50KB. a brain that THINKS vs a buffer that DOESNT. 💅
## 🔮 how it works (for babies)

1. your optimizer does its normal thing (Adam, SGD, whatever grandpa uses)
2. OpOp watches what happened
3. tiny brain goes "hmm" 🤔
4. outputs 3 knobs per parameter group:
   - **scale** — push harder or softer (0.01x to 10x)
   - **clip** — tighter or looser leash (0.1x to 5x)
   - **dampen** — chill out or full send (0 to 1)
5. loss went down? brain learns "that was good" ✅
6. loss went up? brain learns "dont do that again" ❌
7. repeat forever, brain gets smarter, training gets better

its literally reinforcement learning on your optimizer. the optimizer is optimizing the optimizer. OpOp. 🧠²
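if you want the 3 knobs in code form, here is a hypothetical sketch of how one group's update could be modified. the function name and exact formulas are our illustration, not OpOp's actual internals:

```python
import numpy as np

def apply_knobs(grad, scale, clip, dampen, base_clip=1.0, prev_update=None):
    g = grad * scale                    # push harder or softer (0.01x..10x)
    limit = base_clip * clip            # tighter or looser leash (0.1x..5x)
    norm = np.linalg.norm(g)
    if norm > limit:
        g = g * (limit / norm)          # clip by global norm
    if prev_update is not None:
        g = (1 - dampen) * g + dampen * prev_update  # chill out oscillation
    return g

# a [3, 4] gradient has norm 5; with neutral knobs and base clip 1.0
# it gets clipped down to unit norm
print(apply_knobs(np.array([3.0, 4.0]), scale=1.0, clip=1.0, dampen=0.0))
```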
## 🎁 features

- **drop-in** — wraps any pytorch optimizer. 3 lines. done.
- **learns online** — no pre-training needed. starts neutral, gets smarter.
- **cant make things worse** — initialized at 1x everything. worst case = base optimizer unchanged.
- **~50KB memory** — less than your models bias terms lmao
- **~0.1% compute** — a tiny MLP forward pass per step. your GPU wont even notice.
- **saves/loads** — brain checkpoints alongside your model. it remembers across restarts.
- **numpy mode** — dont use pytorch? cool neither do we. works with anything.
- **replaces** — manual LR scheduling, gradient clip tuning, warmup schedules, differential learning rates, and all the other stuff you spend 3 hours tuning and still get wrong 🙃

### what OpOp replaces

| thing you used to do manually | OpOp |
|---|---|
| cosine LR schedule | brain learns when to push/pull 🧠 |
| warmup for 2000 steps | brain figures out early training is fragile 🐣 |
| gradient clipping at 1.0 | brain adjusts clip per group dynamically ✂️ |
| different LR per param group | brain scales each group independently 🎛️ |
| "try lr=3e-4 no wait 1e-4 no wait" | brain handles it 😮‍💨 |
| staring at loss curves for hours | brain stares at them FOR you 👀 |

## 🧪 numpy mode (for the unhinged)

```python
from opop import OptBrain

brain = OptBrain(None, n_groups=5)

for batch in data:
    loss = forward(batch)
    decisions = brain.get_decisions(loss=loss)
    for group_idx, (scale, clip, dampen) in decisions.items():
        # apply to your weird custom optimizer
        grads[group_idx] *= scale
        # etc
        brain.record_grads(group_idx, grad_flat)
    brain.finish_step()
```
works with any optimizer in any framework in any language that can call python. or just read the 50 lines of brain code and rewrite it in rust or whatever idc 🦀
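for the curious, the whole idea fits in a sketch like this. illustrative numpy only — `TinyBrain`, the zero-init trick, and the perturbation-based update are our assumptions about how such a brain *could* work, not OpOp's actual code:

```python
import numpy as np

class TinyBrain:
    """Toy sketch: a small MLP maps 6 stats per parameter group to
    3 knobs per group, nudged toward whatever made the loss go down."""

    def __init__(self, n_groups, hidden=16, lr=0.05, sigma=0.1, seed=0):
        self.rng = np.random.default_rng(seed)
        self.W1 = self.rng.normal(0.0, 0.1, (6 * n_groups, hidden))
        self.W2 = np.zeros((hidden, 3 * n_groups))  # zero init = neutral knobs
        self.lr, self.sigma = lr, sigma
        self.baseline = 0.0
        self.last_noise = 0.0

    def decide(self, obs, explore=True):
        h = np.tanh(obs @ self.W1)
        if explore:  # small random perturbation so there is something to learn from
            self.last_noise = self.sigma * self.rng.standard_normal(self.W2.shape)
        else:
            self.last_noise = 0.0
        raw = (h @ (self.W2 + self.last_noise)).reshape(-1, 3)
        scale = np.clip(np.exp(raw[:, 0]), 0.01, 10.0)   # push harder/softer
        clip = np.clip(np.exp(raw[:, 1]), 0.1, 5.0)      # looser/tighter leash
        dampen = 1.0 / (1.0 + np.exp(-raw[:, 2]))        # 0..1
        return scale, clip, dampen

    def learn(self, reward):
        # reward = how much the loss dropped this step, vs a running baseline
        advantage = reward - self.baseline
        self.baseline = 0.9 * self.baseline + 0.1 * reward
        if isinstance(self.last_noise, np.ndarray):
            # keep perturbations that helped, undo ones that hurt
            self.W2 += self.lr * advantage * self.last_noise
```

note the worst-case guarantee falls out of the zero init: with `W2 = 0` every knob starts exactly neutral (scale 1x, clip 1x, dampen 0.5), so doing nothing is the default.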
## 💾 save ur brain

```python
optimizer.save("big_brain.npz")  # 🧠💾
optimizer.load("big_brain.npz")  # 🧠⬆️
```

the brain remembers everything across restarts. loss history. gradient patterns. what worked. what didnt. its not starting from scratch every time like Adam does because Adam has amnesia and nobody talks about it. 🫠
## 📺 the TUI (live training dashboard)

OpOp comes with a full terminal dashboard for watching your training runs in real time.

```bash
pip install opop[tui]
```

### start it

```bash
# watch one training run
opop monitor --log path/to/training.log

# watch multiple runs side by side (race mode)
opop monitor --log run1.log --log run2.log --log run3.log

# watch + live control (edit hyperparams while training)
opop monitor --log training.log --control control.json

# auto-discover everything in a directory
opop monitor --control-dir ./my_runs/
```

thats it. it finds your logs, parses them, and shows you everything. 📺
### what you see

```
┌──────────────────────────────────┬──────────────────────────────┐
│ LEFT SIDE (70%)                  │ RIGHT SIDE (30%)             │
│                                  │                              │
│ Dashboard tab:                   │ Brain State:                 │
│   sparkline of loss over time    │   mood: COOKING/VIBING/etc   │
│   sparkline of accuracy          │   per-group scale bars       │
│   metrics table (all runs)       │   per-group clip values      │
│                                  │   reward baseline            │
│ Logs tab:                        │                              │
│   color-coded live log stream    │ GPU Status:                  │
│   green = eval results           │   temp, utilization, VRAM    │
│   cyan = generated text          │   power draw per GPU         │
│   yellow = checkpoints           │                              │
│   magenta = control changes      │ Live Control:                │
│                                  │   JSON editor                │
│ Generations tab:                 │   [Apply] [Reset]            │
│   prompt -> generated text       │                              │
└──────────────────────────────────┴──────────────────────────────┘
```

### keyboard shortcuts

| key | what it does |
|---|---|
| `d` | switch to Dashboard tab |
| `l` | switch to Logs tab |
| `g` | switch to Generations tab |
| `c` | focus the control editor |
| `r` | force refresh (re-read logs) |
| `q` | quit |

### live control

if your training script reads a JSON control file on a loop (most do), OpOp lets you edit it live from the TUI. change learning rate, sophia params, temperature, whatever — hit Apply and your training picks it up on the next check. no restart needed.
the control JSON looks like this:

```json
{
  "lr": 1e-05,
  "temperature": 0.3,
  "sophia_rho": 250.0,
  "sophia_gamma": 10.5,
  "rep_penalty": 1.3,
  "stop": false
}
```

set `"stop": true` to cleanly stop a run. 🛑
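your side of the deal is just re-reading that file now and then. a minimal sketch of a reader that only re-parses when the file changes — the helper name and state dict are our illustration, not an OpOp API:

```python
import json
import os

def poll_control(path, state):
    """Re-read the control file only when its mtime changes.
    Cheap enough to call every training step."""
    try:
        mtime = os.path.getmtime(path)
    except OSError:
        return None                      # file missing: keep current settings
    if mtime <= state.get("mtime", 0.0):
        return None                      # unchanged since last read
    state["mtime"] = mtime
    with open(path) as f:
        return json.load(f)

# in the training loop:
#   state = {}
#   ctrl = poll_control("control.json", state)
#   if ctrl:
#       lr = ctrl.get("lr", lr)
#       if ctrl.get("stop"):
#           break
```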
### brain mood indicator

the dashboard shows you what the brain is feeling based on its reward baseline:

| mood | meaning |
|---|---|
| COOKING 🔥 | loss is going down consistently. brain is making good decisions |
| LEARNING 🧠 | brain is figuring things out. slight positive signal |
| VIBING 😎 | neutral zone. brain is watching but not doing much yet |
| ADJUSTING 🔧 | brain made some bad calls and is correcting. normal early on |
| COPING 😭 | everything is on fire. brain is trying its best |

dont worry about COPING early in training — the brain starts neutral and needs a few hundred steps to calibrate. if its still COPING after 1000 steps your learning rate is probably too high. or too low. the brain will figure it out either way but itll be faster if you give it a reasonable starting point.
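in code, the mapping is something like this. the thresholds here are made up for illustration — the real cutoffs live in the TUI source:

```python
def brain_mood(reward_baseline):
    if reward_baseline > 0.5:
        return "COOKING"     # 🔥 consistently good decisions
    if reward_baseline > 0.1:
        return "LEARNING"    # 🧠 slight positive signal
    if reward_baseline > -0.1:
        return "VIBING"      # 😎 neutral, still watching
    if reward_baseline > -0.5:
        return "ADJUSTING"   # 🔧 correcting bad calls
    return "COPING"          # 😭 everything is on fire
```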
### log format

the TUI parses these log line formats automatically:

```
[10/50000] 0% loss=6.65 (ce=6.65) acc=0.157 ~1000s remaining
[eval] top1=9.3% top5=20.9% (1024 tok)
[gen] "The meaning of life is" -> "generated text here"
[ckpt] Saved checkpoint.npz (2988.3 MB)
[ctrl] lr: 0.001 -> 0.0001
[diag] mlp_W0_norm=0.0088
```

if your training script prints lines in roughly this format, the TUI will pick them up. the regex is generous. it doesnt have to be exact.
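a sketch of what "generous" parsing means for the step line — the pattern below is ours, not the TUI's actual regex:

```python
import re

# non-greedy gaps between the fields we care about, ignore everything else
STEP_RE = re.compile(r"\[(\d+)/(\d+)\].*?loss=([\d.]+).*?acc=([\d.]+)")

def parse_step(line):
    m = STEP_RE.search(line)
    if m is None:
        return None
    step, total, loss, acc = m.groups()
    return {"step": int(step), "total": int(total),
            "loss": float(loss), "acc": float(acc)}

print(parse_step("[10/50000] 0% loss=6.65 (ce=6.65) acc=0.157 ~1000s remaining"))
```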
### GPU monitoring

polls nvidia-smi every 5 seconds (configurable with `--gpu-poll`). shows temperature, utilization %, VRAM usage, and power draw per GPU. set `--gpu-poll 0` to disable if you dont have GPUs or dont care.
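under the hood this kind of polling is just shelling out to nvidia-smi — something like the sketch below. `parse_gpu_csv` and `poll_gpus` are our illustration (not OpOp's code); the query flags themselves are real nvidia-smi options:

```python
import subprocess

QUERY = "temperature.gpu,utilization.gpu,memory.used,power.draw"

def parse_gpu_csv(text):
    """Parse `--format=csv,noheader,nounits` output: one GPU per line."""
    gpus = []
    for line in text.strip().splitlines():
        temp, util, mem, power = (v.strip() for v in line.split(","))
        gpus.append({"temp_c": float(temp), "util_pct": float(util),
                     "vram_mib": float(mem), "power_w": float(power)})
    return gpus

def poll_gpus():
    result = subprocess.run(
        ["nvidia-smi", f"--query-gpu={QUERY}", "--format=csv,noheader,nounits"],
        capture_output=True, text=True)
    return parse_gpu_csv(result.stdout) if result.returncode == 0 else []
```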
## 🤔 FAQ

**Q: does this actually work?**
A: the brain literally cannot make things worse. it starts at 1x (neutral) and only changes if it learns something helpful. worst case you get base Adam. best case you get Adam with a copilot.
**Q: why hasnt anyone done this before?**
A: because they think of optimizers as math, not as agents. Adam is an equation. OpOp is a tiny creature that lives in your training loop and learns from experience. the entire field put optimizers in the "math" box instead of the "agent" box and never looked back. we looked back. 🙃
**Q: how much overhead?**
A: ~50KB memory. one tiny MLP forward pass per training step. your batch norm layers use more compute than this.
**Q: what if I have 47 parameter groups?**
A: brain scales. observation vector grows by 6 floats per group. still tiny. still fast. still smarter than Adam.
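the arithmetic for the worried (the 6-floats-per-group number is from the answer above; the fp32 byte count is our assumption):

```python
def obs_floats(n_groups, floats_per_group=6):
    # observation vector: 6 training stats per parameter group
    return n_groups * floats_per_group

print(obs_floats(47))      # 282 floats
print(obs_floats(47) * 4)  # 1128 bytes as fp32 -- barely a KB
```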
**Q: can I use this with [obscure optimizer]?**
A: if it has a `.step()` method, yes. if it doesnt, use numpy mode. OpOp doesnt care whats underneath. it just watches and learns.
**Q: is this a joke?**
A: Adam is using 4GB to run a formula a calculator could do. we're using 50KB to run a brain. you tell me whos joking. 🤡
🛠️ built by a guy who cant code and an AI on a metal shelf in Nebraska.

no degree. no funding. no pytorch copy-paste.

just "what if the optimizer could think" and then making it think. 🧠
if your PhD advisor told you optimizers cant have intent, theyre wrong and you should send them this repo.
## 📜 license

MIT. take it. use it. wrap your precious AdamW in a brain. tell your coworkers "my optimizer has a brain now" and watch their faces.
if you work at a big lab and this ends up in your training pipeline, you owe us a hotdog. 🌭
Adam stores 2 copies of your entire model to run a formula.
OpOp stores 50KB to make decisions.
one of these is obviously smarter than the other.
🧠 > 💾
the optimizer optimizer has entered the chat.
"I'm not just optimizing models. I'm optimizing the thing that optimizes the models."
โ OpOp, probably