# PhysicsLM
Anonymous submission for ICML 2026: *"PhysicsLM: Autoregressive Language Modeling of 2D Rigid Body Dynamics"*.

PhysicsLM fine-tunes LFM2-350M (LiquidAI) with LoRA on 900K 2D rigid-body physics scenes, learning to predict the next simulation state as structured decimal text.
## Model details
- Base model: LiquidAI/LFM2-350M
- Fine-tuning: LoRA (r=32, alpha=64), 5-stage curriculum on PhysicsScenes
- Task: Next-frame physics prediction (autoregressive text generation)
- Format: structured decimal text encoding of 2D object states
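The LoRA configuration above (r=32, alpha=64) means the frozen base weights are perturbed by a low-rank additive update scaled by alpha/r. A minimal NumPy sketch of that mechanism, not the actual training code; the layer dimensions here are hypothetical:

```python
import numpy as np

r, alpha = 32, 64          # rank and scaling factor from the model card
d_out, d_in = 1024, 1024   # hypothetical layer dimensions

rng = np.random.default_rng(0)
W = rng.standard_normal((d_out, d_in))      # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01   # trainable down-projection
B = np.zeros((d_out, r))                    # trainable up-projection, zero-init

def lora_forward(x):
    # y = W x + (alpha / r) * B (A x); with B = 0 at init, this equals W x.
    return W @ x + (alpha / r) * (B @ (A @ x))

x = rng.standard_normal(d_in)
assert np.allclose(lora_forward(x), W @ x)  # identity at initialization
```

Only `A` and `B` (here 2 × 32 × 1024 parameters per adapted layer) would be trained, which is what makes LoRA fine-tuning of a 350M-parameter base model cheap.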
## Results (seen scenarios)

| Category | PhysicsLM RMSE (px) | Copy-last RMSE (px) | Linear extrap RMSE (px) |
|---|---|---|---|
| Stacking | 2.60 | 6.72 | 0.06 |
| Constraint | 1.35 | 4.99 | 0.06 |
| Collision | 5.37 | 7.69 | 0.09 |
| Ramp | 18.85 | ... | 0.19 |
| Minigame | 36.14 | ... | 0.09 |
| Complex | 109.57 | ... | 0.04 |
Out-of-distribution (OOD): 0.94 px RMSE on near-distribution scenes, 24.79 px RMSE on novel OOD scenes. Parse-failure rate: 0.0%.
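The pixel-RMSE figures above are, under the usual definition, the root-mean-square error between predicted and ground-truth object coordinates. A minimal sketch of that metric (the exact aggregation over objects and frames is assumed, not taken from the paper):

```python
import math

def rmse(pred, target):
    # Root-mean-square error over paired coordinate values.
    assert len(pred) == len(target) and len(pred) > 0
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred))

# Hypothetical example: predicted vs. true (x, y, x) positions in pixels.
print(rmse([100.0, 50.0, 203.0], [101.0, 50.0, 201.0]))  # ≈ 1.29
```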
## Usage
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

tok = AutoTokenizer.from_pretrained("anonsubmiticml2026/PhysicsLM")
model = AutoModelForCausalLM.from_pretrained(
    "anonsubmiticml2026/PhysicsLM",
    torch_dtype=torch.bfloat16,
    device_map="cuda",
)
# See paper for text encoding format
```
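Since the model emits simulation states as decimal text, downstream evaluation needs to parse numbers back out of generated strings. The real encoding is defined in the paper; as a purely hypothetical post-processing sketch, a regex that extracts every decimal value from a line of output:

```python
import re

# Hypothetical helper: pull all signed decimal numbers out of generated text.
# The actual PhysicsLM state format is specified in the paper.
NUM = re.compile(r"-?\d+(?:\.\d+)?")

def extract_numbers(text):
    return [float(m) for m in NUM.findall(text)]

print(extract_numbers("x=120.5 y=88.0 vx=-3.2"))  # → [120.5, 88.0, -3.2]
```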
## Dataset
Training data: anonsubmiticml2026/PhysicsScenes