arxiv:2603.15618

Look Before Acting: Enhancing Vision Foundation Representations for Vision-Language-Action Models

Published on Mar 16

Abstract

Vision-Language-Action (VLA) models have recently emerged as a promising paradigm for robotic manipulation, in which reliable action prediction critically depends on accurately interpreting and integrating visual observations conditioned on language instructions. Although recent works have sought to enhance the visual capabilities of VLA models, most approaches treat the LLM backbone as a black box, providing limited insight into how visual information is grounded in action generation. To address this, we perform a systematic analysis of multiple VLA models across different action-generation paradigms and observe that sensitivity to visual tokens progressively decreases in deeper layers during action generation. Motivated by this observation, we propose DeepVision-VLA, built on a Vision-Language Mixture-of-Transformers (VL-MoT) framework. This framework enables shared attention between the vision foundation model and the VLA backbone, injecting multi-level visual features from the vision expert into deeper layers of the VLA backbone to enhance visual representations for precise and complex manipulation. In addition, we introduce Action-Guided Visual Pruning (AGVP), which leverages shallow-layer attention to prune irrelevant visual tokens while preserving task-relevant ones, reinforcing critical visual cues for manipulation with minimal computational overhead. DeepVision-VLA outperforms prior state-of-the-art methods by 9.0% and 7.5% on simulated and real-world tasks, respectively, providing new insights for the design of visually enhanced VLA models.
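The abstract describes the VL-MoT injection only at a high level, so the following is a minimal PyTorch sketch of the general idea: multi-level features from a vision expert are fused into selected deeper layers of the VLA backbone, here via cross-attention with a residual connection. The module name `DeepLayerVisualInjection`, the fusion operator, the dimensions, and the injected layer indices are illustrative assumptions, not the paper's implementation.

```python
import torch
import torch.nn as nn


class DeepLayerVisualInjection(nn.Module):
    """Minimal sketch (not the paper's code): fuse multi-level features from a
    vision expert into selected deeper layers of a VLA backbone."""

    def __init__(self, hidden_dim=1024, vision_dim=1024, inject_layers=(16, 20, 24)):
        super().__init__()
        self.inject_layers = set(inject_layers)
        # One projection + cross-attention block per injected layer
        # (layer indices and hyperparameters are assumptions).
        self.proj = nn.ModuleDict(
            {str(l): nn.Linear(vision_dim, hidden_dim) for l in inject_layers}
        )
        self.cross_attn = nn.ModuleDict(
            {str(l): nn.MultiheadAttention(hidden_dim, num_heads=8, batch_first=True)
             for l in inject_layers}
        )

    def forward(self, hidden_states, layer_idx, vision_feats):
        """hidden_states: (B, T, hidden_dim) backbone tokens entering layer `layer_idx`.
        vision_feats: dict {layer_idx: (B, N, vision_dim)} multi-level expert features."""
        if layer_idx not in self.inject_layers:
            return hidden_states
        key = str(layer_idx)
        v = self.proj[key](vision_feats[layer_idx])            # project expert features
        fused, _ = self.cross_attn[key](hidden_states, v, v)   # backbone tokens attend to vision expert
        return hidden_states + fused                           # residual injection
```

In a full model this would sit inside the backbone's forward pass, called once per transformer layer, so that deeper layers keep receiving fresh visual evidence rather than relying solely on the vision tokens encoded at the input.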

Community

In this work, we explore a key limitation of current Vision-Language-Action models: even when the model initially “sees” the right objects, task-relevant visual information can fade in deeper layers as actions are generated. We address this with DeepVision-VLA, where we inject multi-level features from a vision foundation model into deeper VLA layers through a Vision-Language Mixture-of-Transformers framework, and use Action-Guided Visual Pruning to keep the fusion focused on the most relevant regions. Through experiments on both RLBench and real-world manipulation tasks, we show that improving deep-layer visual grounding leads to much stronger action performance.
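To make the pruning step concrete, here is a minimal sketch of attention-guided token selection in PyTorch. It only illustrates the general idea (score visual tokens by the attention they receive from action/instruction tokens in a shallow layer, then keep the top fraction); the scoring rule, tensor layouts, and the `keep_ratio` value are assumptions rather than the paper's AGVP implementation.

```python
import torch


def action_guided_visual_pruning(visual_tokens, attn_weights, keep_ratio=0.5):
    """Minimal sketch (not the paper's code) of attention-guided visual token pruning.

    visual_tokens: (B, N, D)    visual token embeddings
    attn_weights:  (B, H, Q, N) shallow-layer attention from Q action/instruction
                                tokens to the N visual tokens
    keep_ratio:    fraction of visual tokens to retain (value is an assumption)
    """
    # Average over heads and query tokens -> one relevance score per visual token.
    scores = attn_weights.mean(dim=(1, 2))                     # (B, N)
    k = max(1, int(visual_tokens.size(1) * keep_ratio))
    topk = scores.topk(k, dim=1).indices                       # (B, k) most attended tokens
    idx = topk.sort(dim=1).values                              # restore original ordering
    batch_idx = torch.arange(visual_tokens.size(0), device=visual_tokens.device).unsqueeze(1)
    return visual_tokens[batch_idx, idx]                       # (B, k, D) pruned token set
```

In a full pipeline, the pruned token set would replace the original visual tokens before the deeper, fusion-heavy layers, which is where the computational savings would come from.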

Looking ahead, we hope this work can inspire future research on more effective visual integration techniques, stronger visual enhancement mechanisms, and better utilization of visual information throughout the entire VLA decision process.
