A fine-tune of the DreamDojo LAM (Latent Action Model, 710M parameters) on the `bin_pick_pack_coffee_capsules` manipulation dataset.
| Metric | Epoch 0 | Epoch 17 (best) | Epoch 56 (final) |
|---|---|---|---|
| train_loss | 0.000154 | 0.000076 | 0.000058 |
| val_loss | 0.000137 | 0.000097 | 0.000105 |
| val_mse | 0.000107 | 0.000080 | 0.000092 |
| val_kl | 29.35 | 16.57 | 12.83 |
Validation loss improved until epoch 17, then plateaued around 1.0e-4, while train loss continued to decrease: mild overfitting, but no divergence.
| Checkpoint | Contents | SHA-256 |
|---|---|---|
| `best.pt` (params only, 2.84 GB) | `model_state_dict`, `epoch`, `step`, `best_loss` | `72e746704080266c7c6aa265035de3bd2132b9ad2783dbfe8d9fc82670a838dc` |

Verify with:

```shell
sha256sum best.pt
```
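Where `sha256sum` is unavailable, the checksum can also be computed from Python with only the standard library. A minimal sketch (the helper `sha256_of_file` is illustrative, not part of this repo); it streams the file in chunks so the 2.84 GB checkpoint never has to fit in memory:

```python
import hashlib

EXPECTED = "72e746704080266c7c6aa265035de3bd2132b9ad2783dbfe8d9fc82670a838dc"

def sha256_of_file(path: str, chunk_size: int = 1 << 20) -> str:
    """Return the hex SHA-256 digest of a file, read in 1 MiB chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        # iter() with a sentinel keeps reading until read() returns b""
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Example usage (requires the downloaded checkpoint):
# assert sha256_of_file("best.pt") == EXPECTED
```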
```python
import torch

# The checkpoint stores non-tensor fields (epoch, step, best_loss),
# so weights_only=False is required; only load checkpoints you trust.
ckpt = torch.load("best.pt", map_location="cpu", weights_only=False)

# `model` must already be instantiated with the matching LAM architecture.
model.load_state_dict(ckpt["model_state_dict"])
```
See `lam_finetune_isambard.yaml` for the full training configuration.
Training curves: wandb.ai/pravsels/lam-finetune/runs/afu3164m