arxiv:2601.20381

STORM: Slot-based Task-aware Object-centric Representation for robotic Manipulation

Published on Jan 28
Submitted by Alexandre Chapin on Jan 30

Abstract

Visual foundation models provide strong perceptual features for robotics, but their dense representations lack explicit object-level structure, limiting robustness and interpretability in manipulation tasks. We propose STORM (Slot-based Task-aware Object-centric Representation for robotic Manipulation), a lightweight object-centric adaptation module that augments frozen visual foundation models with a small set of semantic-aware slots for robotic manipulation. Rather than retraining large backbones, STORM employs a multi-phase training strategy: object-centric slots are first stabilized through visual-semantic pretraining using language embeddings, then jointly adapted with a downstream manipulation policy. This staged learning prevents degenerate slot formation and preserves semantic consistency while aligning perception with task objectives. Experiments on object discovery benchmarks and simulated manipulation tasks show that STORM improves generalization to visual distractors and control performance compared to directly using frozen foundation model features or training object-centric representations end-to-end. Our results highlight multi-phase adaptation as an efficient mechanism for transforming generic foundation model features into task-aware object-centric representations for robotic control.

AI-generated summary

STORM enhances robotic manipulation by adapting visual foundation models with semantic-aware slots through multi-phase training, improving generalization and control performance.
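The abstract describes the architecture (a small slot module on top of a frozen backbone) and the two training phases only at a high level. The sketch below is one way such a pipeline could look in PyTorch; the `SlotAdapter` module, the alignment and behavior-cloning losses, and all hyperparameters are illustrative assumptions on our part, not details taken from the paper.

```python
# Minimal, assumed PyTorch-style sketch (not the authors' code): a small
# slot-attention adapter over frozen visual foundation model (VFM) features,
# trained in two phases as the abstract describes.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SlotAdapter(nn.Module):
    """Maps dense patch features from a frozen backbone to a few slots."""

    def __init__(self, feat_dim: int, num_slots: int = 8, slot_dim: int = 256, iters: int = 3):
        super().__init__()
        self.iters = iters
        self.slots_init = nn.Parameter(torch.randn(num_slots, slot_dim) * 0.02)
        self.norm_in = nn.LayerNorm(feat_dim)
        self.norm_slots = nn.LayerNorm(slot_dim)
        self.to_k = nn.Linear(feat_dim, slot_dim, bias=False)
        self.to_v = nn.Linear(feat_dim, slot_dim, bias=False)
        self.to_q = nn.Linear(slot_dim, slot_dim, bias=False)
        self.gru = nn.GRUCell(slot_dim, slot_dim)

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, N_patches, feat_dim) from the frozen backbone
        B = feats.shape[0]
        feats = self.norm_in(feats)
        k, v = self.to_k(feats), self.to_v(feats)
        slots = self.slots_init.expand(B, -1, -1)
        for _ in range(self.iters):
            q = self.to_q(self.norm_slots(slots))
            attn = F.softmax(torch.einsum("bnd,bkd->bnk", k, q) / q.shape[-1] ** 0.5, dim=-1)
            attn = attn / (attn.sum(dim=1, keepdim=True) + 1e-8)   # weighted mean over patches
            updates = torch.einsum("bnk,bnd->bkd", attn, v)        # (B, num_slots, slot_dim)
            slots = self.gru(updates.reshape(-1, updates.shape[-1]),
                             slots.reshape(-1, slots.shape[-1])).view_as(updates)
        return slots                                               # (B, num_slots, slot_dim)


def phase1_step(backbone, adapter, text_proj, images, text_emb, opt):
    """Phase 1 (assumed form): stabilize slots by aligning them with language
    embeddings of scene objects; `opt` covers adapter + text_proj only."""
    with torch.no_grad():
        feats = backbone(images)                       # frozen backbone, no gradients
    slots = adapter(feats)
    slot_sem = F.normalize(text_proj(slots), dim=-1)   # project slots into language space
    text = F.normalize(text_emb, dim=-1)               # (B, T, text_dim) language embeddings
    sim = torch.einsum("bkd,btd->bkt", slot_sem, text)
    loss = (1.0 - sim.max(dim=1).values).mean()        # each text token should be matched by some slot
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()


def phase2_step(backbone, adapter, policy, images, actions, opt):
    """Phase 2 (assumed form): jointly adapt slots and a manipulation policy,
    e.g. behavior cloning on demonstrated actions; `opt` covers adapter + policy."""
    with torch.no_grad():
        feats = backbone(images)
    slots = adapter(feats)
    pred = policy(slots.flatten(1))                    # simple MLP head over concatenated slots
    loss = F.mse_loss(pred, actions)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In a pipeline like this the backbone is whatever frozen visual foundation model provides the dense patch features, and the staged schedule (running phase 1 until slots stabilize before enabling the policy loss in phase 2) is what the abstract credits with preventing degenerate slot formation while keeping the slots semantically consistent.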

Community

Paper author · Paper submitter

We introduce a slot-based object-centric method with "task-aware" alignment for learning robotic manipulation. Our method obtains strong generalization improvements over existing VFMs by simply adding a few layers of structure on top of a frozen backbone. We hope this work encourages further research on adding structure to the visual inputs used for robotic manipulation.

