---
pipeline_tag: robotics
library_name: peft
---
# Learning Action Manifold with Multi-view Latent Priors for Robotic Manipulation
This repository contains the weights for Multi-view-VLA, a Vision-Language-Action framework designed for robust and precise robotic manipulation.
Project Page | Code | arXiv
## Introduction
Multi-view-VLA addresses the challenges of spatial perception and manipulation in Vision-Language-Action (VLA) models. Key features include:
- Geometry-Guided Gated Transformer (G3T): Addresses monocular depth ambiguity by leveraging multi-view diffusion priors to provide geometric guidance while adaptively filtering occlusion noise.
- Action Manifold Learning (AML): A direct action prediction mechanism that avoids the indirect noise/velocity regression used by conventional diffusion-based policies, leading to more efficient action learning.
The model demonstrates superior success rates and robustness on benchmarks like LIBERO, RoboTwin 2.0, and real-world robotic tasks.
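The two components above can be made concrete with a short, hypothetical sketch. The module below is **not** the released implementation: the feature dimensions, the sigmoid gate, and the MLP action head are illustrative assumptions that only mirror the described behavior (gated injection of multi-view geometric priors, and direct regression of an action chunk rather than diffusion noise/velocity).

```python
# Illustrative sketch only; dimensions, gating form, and action head are assumptions.
import torch
import torch.nn as nn

class GatedGeometryFusion(nn.Module):
    """Fuses vision-language tokens with multi-view geometric priors via an adaptive gate."""
    def __init__(self, dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Gate decides, per token, how much geometric guidance to inject (filtering occlusion noise).
        self.gate = nn.Sequential(nn.Linear(2 * dim, dim), nn.Sigmoid())

    def forward(self, tokens: torch.Tensor, geo_priors: torch.Tensor) -> torch.Tensor:
        # tokens:     (B, N, dim) vision-language tokens
        # geo_priors: (B, M, dim) features derived from multi-view diffusion priors
        attended, _ = self.cross_attn(tokens, geo_priors, geo_priors)
        g = self.gate(torch.cat([tokens, attended], dim=-1))
        return tokens + g * attended

class DirectActionHead(nn.Module):
    """Regresses an action chunk directly instead of predicting diffusion noise/velocity."""
    def __init__(self, dim: int = 512, horizon: int = 16, action_dim: int = 7):
        super().__init__()
        self.head = nn.Sequential(nn.Linear(dim, dim), nn.GELU(), nn.Linear(dim, horizon * action_dim))
        self.horizon, self.action_dim = horizon, action_dim

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        # pooled: (B, dim) pooled token representation -> (B, horizon, action_dim) actions
        return self.head(pooled).view(-1, self.horizon, self.action_dim)

if __name__ == "__main__":
    fuse, head = GatedGeometryFusion(), DirectActionHead()
    toks = torch.randn(2, 64, 512)      # vision-language tokens
    geo = torch.randn(2, 192, 512)      # priors from several rendered views
    actions = head(fuse(toks, geo).mean(dim=1))
    print(actions.shape)                # torch.Size([2, 16, 7])
```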
## Usage
For detailed instructions on installation, training, and evaluation, please refer to the official GitHub repository.
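Since the card lists `peft` as the library, the released weights are presumably a PEFT adapter. The snippet below is only a minimal loading sketch under that assumption; the base checkpoint identifier, dtype, and pre/post-processing are placeholders, and the repository's own scripts should be used for actual inference and evaluation.

```python
# Hypothetical loading sketch; the base checkpoint and processing pipeline are placeholders,
# not documented parts of this repository.
import torch
from transformers import AutoModelForVision2Seq, AutoProcessor
from peft import PeftModel

BASE_MODEL_ID = "<base-vlm-checkpoint>"  # placeholder: consult the official repo for the correct base model

# Load the base vision-language model and its processor.
base = AutoModelForVision2Seq.from_pretrained(BASE_MODEL_ID, torch_dtype=torch.bfloat16)
processor = AutoProcessor.from_pretrained(BASE_MODEL_ID)

# Attach the Multi-view-VLA adapter weights on top of the base model.
model = PeftModel.from_pretrained(base, "junjin0/Multi-view-VLA")
model.eval()
```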
## Citation
If you find this work useful, please consider citing:
```bibtex
@article{xiao2026learning,
  title={Learning Action Manifold with Multi-view Latent Priors for Robotic Manipulation},
  author={Junjin Xiao and Dongyang Li and Yandan Yang and Shuang Zeng and Tong Lin and Xinyuan Chang and Feng Xiong and Mu Xu and Xing Wei and Zhiheng Ma and Qing Zhang and Wei-Shi Zheng},
  year={2026},
  journal={arXiv:2605.11832},
}
```
## Acknowledgement
This project builds upon starVLA, Qwen3-VL, vggt, JiT, LeRobot, Isaac-GR00T, and any4lerobot.