lfm2-physics

LoRA fine-tune of LiquidAI/LFM2-350M for 2D rigid body physics next-frame prediction. Part of an ICML-2026 study comparing fine-tuned LMs vs. from-scratch GPTs on physics trajectory modelling.

Adapter details

  • Base: LiquidAI/LFM2-350M
  • Adapter type: LoRA, r=32, alpha=64, dropout=0.0
  • Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  • Trainer: SFTTrainer (TRL) via Unsloth
  • Curriculum: 5 stages of increasing scene complexity
  • Task: autoregressive next-frame prediction over 200-frame rigid-body scenes
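With r=32 the adapter stays small: each LoRA-adapted weight W gains two low-rank factors, A (r x d_in) and B (d_out x r), so the trainable count per module is r * (d_in + d_out). A minimal sketch of that arithmetic, using a hypothetical hidden size purely for illustration (the real LFM2-350M projection shapes differ per module):

```python
def lora_param_count(d_in: int, d_out: int, r: int) -> int:
    """Trainable parameters for one LoRA-adapted weight:
    A has shape (r, d_in), B has shape (d_out, r)."""
    return r * d_in + d_out * r

# Hypothetical dimensions for illustration only.
hidden = 1024
r = 32
per_projection = lora_param_count(hidden, hidden, r)
print(per_projection)  # 65536 trainable params for one hidden x hidden projection
```

Multiplying by the seven target modules per layer and the layer count gives the full adapter size; the frozen base weights are untouched.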

Stages

  • stage0/ ... stage4/ – checkpoints from each curriculum stage
  • final/ – final adapter after all stages

Each stage directory contains an Unsloth-saved adapter (adapter_config.json, adapter_model.safetensors, tokenizer files).

Usage

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the frozen base model, then attach the final-stage LoRA adapter
# (use subfolder="stage0" ... "stage4" for intermediate checkpoints).
base = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-350M")
model = PeftModel.from_pretrained(base, "AlexWortega/lfm2-physics", subfolder="final")
tokenizer = AutoTokenizer.from_pretrained("AlexWortega/lfm2-physics", subfolder="final")
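At inference time the task is a plain autoregressive rollout: feed the frames so far, predict the next frame, append it, and repeat until the 200-frame scene is complete. A minimal sketch of that loop, with a stub `predict_next_frame` standing in for a real `model.generate(...)` call (the actual frame serialization used by lfm2-physics is not documented here):

```python
def predict_next_frame(context: list[str]) -> str:
    # Hypothetical stub: echoes the last frame. A real implementation would
    # tokenize the context, call model.generate(...), and decode one frame.
    return context[-1]

def rollout(seed_frames: list[str], total_frames: int = 200) -> list[str]:
    """Autoregressively extend a scene until it has total_frames frames."""
    frames = list(seed_frames)
    while len(frames) < total_frames:
        frames.append(predict_next_frame(frames))
    return frames

trajectory = rollout(["frame0", "frame1"], total_frames=200)
print(len(trajectory))  # 200
```

Because each predicted frame is fed back as input, errors compound over the rollout; this is the standard failure mode such curriculum training aims to mitigate.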

Training data

Trained on ~900K scenes across 24 "seen" scenario types. See physics-scenarios-packed.

Citation

ICML-2026 submission (in progress).
