arxiv:2605.13757

FrameSkip: Learning from Fewer but More Informative Frames in VLA Training

Published on May 13 · Submitted by yubin on May 14
Abstract

FrameSkip is a data-layer frame selection method that improves VLA policy training by prioritizing high-importance frames based on action variation and visual-coherence metrics.

AI-generated summary

Vision-Language-Action (VLA) policies are commonly trained from dense robot demonstration trajectories, often collected through teleoperation, by sampling every recorded frame as if it provided equally useful supervision. We argue that this convention creates a temporal supervision imbalance: long low-change segments dominate the training stream, while manipulation-critical transitions such as alignment, contact, grasping, and release appear only sparsely. We introduce FrameSkip, a data-layer frame selection framework that scores trajectory frames using action variation, visual-action coherence, task-progress priors, and gripper-transition preservation, then remaps training samples toward high-importance frames under a target retention ratio. Because FrameSkip operates only in the dataloader, it leaves the VLA architecture, action head, training objective, and inference procedure unchanged. Across RoboCasa-GR1, SimplerEnv, and LIBERO, FrameSkip improves the success-retention trade-off over full-frame training and simpler frame selection variants, achieving a macro-average success rate of 76.15% across the three benchmarks compared with 66.50% for full-frame training while using a compressed trajectory view that retains 20% of unique frames in the main setting.

Community

Paper submitter

TLDR: FrameSkip is a data-layer framework that improves VLA training by selectively retaining only the most informative frames—based on action variation, visual-action coherence, and task-progress cues—rather than uniformly sampling all trajectory frames, achieving a 76.15% macro-average success rate across three benchmarks while using just 20% of the original frames.
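The paper does not include code here, but the selection idea described above (score frames by action variation, force-keep gripper transitions, then retain the top fraction under a target ratio) can be sketched roughly. Everything below is a hypothetical illustration: the function name, the weighting, and the assumption that the last action dimension is the gripper command are all mine, and the real method additionally uses visual-action coherence and task-progress priors that are omitted here.

```python
import numpy as np

def select_frames(actions, retain_ratio=0.2, grip_dim=-1):
    """Hypothetical sketch of importance-based frame selection.

    actions: (T, D) array of per-frame action vectors; the last
    dimension is assumed (for this sketch) to be the gripper command.
    Returns the sorted indices of retained frames.
    """
    T = len(actions)
    # Action-variation score: magnitude of change between consecutive frames.
    deltas = np.linalg.norm(np.diff(actions, axis=0), axis=1)
    score = np.concatenate([[deltas[0]], deltas])  # pad so score has length T
    # Gripper-transition preservation: force-keep frames where the
    # binarized gripper command flips (e.g., open <-> close).
    grip = actions[:, grip_dim] > 0.5
    transitions = np.flatnonzero(np.diff(grip.astype(int)) != 0) + 1
    score[transitions] = np.inf
    # Retain the top-k frames under the target retention ratio.
    k = max(1, int(round(retain_ratio * T)))
    return np.sort(np.argsort(score)[-k:])
```

A dataloader could then index the demonstration with the returned frame indices, leaving the policy architecture and training objective untouched, which matches the paper's claim that FrameSkip operates purely at the data layer.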


Get this paper in your agent:

hf papers read 2605.13757
Don't have the latest CLI?
curl -LsSf https://hf.co/cli/install.sh | bash
