lfm2-scenarios

Sister checkpoint to lfm2-physics: a LoRA fine-tune of LiquidAI/LFM2-350M on the physics scenarios dataset, trained with a different training regime and curriculum sampling.

Adapter details

  • Base: LiquidAI/LFM2-350M
  • Adapter type: LoRA, r=32, alpha=64, dropout=0.0 (see the LoraConfig sketch after this list)
  • Target modules: q_proj, k_proj, v_proj, o_proj, gate_proj, up_proj, down_proj
  • Trainer: SFTTrainer (TRL) via Unsloth
  • Curriculum: 5 stages, includes scenario-type stratified sampling
  • Task: autoregressive next-frame prediction; conditioning includes scenario Type, Difficulty, Static geometry, Constraints
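
A minimal LoraConfig sketch of the settings above. The rank, alpha, dropout, and target modules come from this card; task_type and bias are assumed defaults, not stated here:

from peft import LoraConfig

lora_config = LoraConfig(
    r=32,
    lora_alpha=64,
    lora_dropout=0.0,
    target_modules=[
        "q_proj", "k_proj", "v_proj", "o_proj",
        "gate_proj", "up_proj", "down_proj",
    ],
    bias="none",            # assumed; not stated on this card
    task_type="CAUSAL_LM",  # assumed for autoregressive next-frame prediction
)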

Stages

  • stage0/ through stage4/: checkpoints from each curriculum stage (see the loading sketch below)
  • final/: the final adapter
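
Intermediate stage checkpoints load the same way as the final adapter; a short sketch, assuming each stage directory holds a standalone PEFT adapter:

from peft import PeftModel
from transformers import AutoModelForCausalLM

base = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-350M")
# Swap "stage2" for any of stage0 through stage4
stage_model = PeftModel.from_pretrained(base, "AlexWortega/lfm2-scenarios", subfolder="stage2")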

Usage

from peft import PeftModel
from transformers import AutoModelForCausalLM, AutoTokenizer

# Load the base model, attach the final adapter, and fetch its tokenizer.
base = AutoModelForCausalLM.from_pretrained("LiquidAI/LFM2-350M")
model = PeftModel.from_pretrained(base, "AlexWortega/lfm2-scenarios", subfolder="final")
tokenizer = AutoTokenizer.from_pretrained("AlexWortega/lfm2-scenarios", subfolder="final")
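
A hedged generation sketch continuing from the snippet above. The exact conditioning-header format is not documented on this card, so the prompt is a placeholder illustrating the fields listed under Adapter details, not the trained format:

# Hypothetical prompt; replace with the actual conditioning format.
prompt = (
    "Type: pendulum\n"
    "Difficulty: easy\n"
    "Static geometry: ...\n"
    "Constraints: ...\n"
)
inputs = tokenizer(prompt, return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=128)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))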

Training data

900K scenes spanning 24 seen scenario types (avalanche, basketball, billiards, breakout, bridge, chain, conveyor, dominos, explosion, funnel, head_on, jenga, marble_run, orbit, pendulum, pinball, plinko, projectile, pyramid, seesaw, ski_jump, tower, wind, wrecking_ball), with 6 types held out for OOD evaluation (pong, bowling, ramp_roll, angry_birds, hourglass, newtons_cradle).
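
For convenience, the seen/held-out split above as Python constants, plus a filter sketch; the field name scenario_type is an assumption about the dataset schema:

# Scenario-type split copied from the description above.
SEEN_TYPES = [
    "avalanche", "basketball", "billiards", "breakout", "bridge", "chain",
    "conveyor", "dominos", "explosion", "funnel", "head_on", "jenga",
    "marble_run", "orbit", "pendulum", "pinball", "plinko", "projectile",
    "pyramid", "seesaw", "ski_jump", "tower", "wind", "wrecking_ball",
]
OOD_TYPES = ["pong", "bowling", "ramp_roll", "angry_birds", "hourglass", "newtons_cradle"]

def is_ood(example):
    # "scenario_type" is a hypothetical field name; adjust to the real schema.
    return example["scenario_type"] in OOD_TYPES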

Citation

ICML-2026 submission (in progress).
