---
language:
- en
license: mit
pipeline_tag: robotics
library_name: lam
---

<div align="center">
  <p align="center">
    <img src="https://huggingface.co/microsoft/villa-x/resolve/main/villa-x-transparent.png" width="400"/>
  </p>

  <h1>villa-X: A Vision-Language-Latent-Action Model</h1>

  [![arXiv](https://img.shields.io/badge/arXiv-Paper-red?logo=arxiv&logoColor=white)](https://arxiv.org/abs/2507.23682) &ensp; [![Project](https://img.shields.io/badge/Project-Page-blue?logo=homepage&logoColor=white)](https://aka.ms/villa-x) &ensp; [![Code](https://img.shields.io/badge/GitHub-Code-blue?logo=github&logoColor=white)](https://github.com/microsoft/villa-x/)
</div>

This is the official Hugging Face repository for **villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models**.

## Abstract
Visual-Language-Action (VLA) models have emerged as a popular paradigm for learning robot manipulation policies that can follow language instructions and generalize to novel scenarios. Recent work has begun to explore the incorporation of latent actions, an abstract representation of visual change between two frames, into VLA pre-training. In this paper, we introduce **villa-X**, a novel Visual-Language-Latent-Action (ViLLA) framework that advances latent action modeling for learning generalizable robot manipulation policies. Our approach improves both how latent actions are learned and how they are incorporated into VLA pre-training. Together, these contributions enable villa-X to achieve superior performance across simulated environments including SIMPLER and LIBERO, as well as on two real-world robot setups including gripper and dexterous hand manipulation. We believe the ViLLA paradigm holds significant promise, and that our villa-X provides a strong foundation for future research.

## Overview
<p align="center">
  <img src="https://github.com/microsoft/villa-x/raw/main/assets/overview.png" alt="villa-x overview" width="700"/>
</p>

*   We improve latent action learning by introducing an additional proprioceptive forward dynamics model (proprio FDM), which aligns latent tokens with the underlying robot states and actions and grounds them in physical dynamics.
*   We propose to jointly learn a latent action expert and a robot action expert through joint diffusion in the policy model, conditioning robot action prediction on latent actions to fully exploit their potential.
*   Our method demonstrates superior performance in simulated environments as well as on real-world robotic tasks. The latent action expert can effectively plan into the future with both visual and proprioceptive state planning.

## Usage

### Setup

1.  Clone the repository.

    ```bash
    git clone https://github.com/microsoft/villa-x.git
    cd villa-x
    ```

2.  Install the required packages.

    ```bash
    sudo apt-get install -y build-essential zlib1g-dev libffi-dev libssl-dev libbz2-dev libreadline-dev libsqlite3-dev liblzma-dev libncurses-dev tk-dev python3-dev ffmpeg
    curl -LsSf https://astral.sh/uv/install.sh | sh # Skip this step if you already have uv installed
    uv sync
    ```

### Inference with Pre-trained Latent Action Model

1.  Download the pre-trained models from [Hugging Face](https://huggingface.co/microsoft/villa-x).
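
    For example, the latent action model weights (the `lam` folder of the repository) can be fetched with the `huggingface_hub` library; the local directory below is only an example path, not an official convention:

    ```python
    from huggingface_hub import snapshot_download

    # Download only the `lam` subfolder of microsoft/villa-x
    local_dir = snapshot_download(
        repo_id="microsoft/villa-x",
        allow_patterns="lam/*",
        local_dir="./checkpoints/villa-x",  # example path; choose your own
    )
    ```

    The `LOCAL_MODEL_DIRECTORY` used in the next step would then point to the downloaded `lam` folder (e.g. `./checkpoints/villa-x/lam`).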

2.  Load the latent action model.

    ```python
    from lam import IgorModel

    lam = IgorModel.from_pretrained("LOCAL_MODEL_DIRECTORY").cuda()  # replace with the local path to the downloaded weights
    ```

3.  Extract the latent actions from a video.

    ```python
    def read_video(fp: str):
        """Read a video file into a (T, H, W, C) uint8 frame tensor."""
        from torchvision.io import read_video

        video, *_ = read_video(fp, pts_unit="sec")
        return video

    video = read_video("path/to/video.mp4").cuda()  # Load your video here
    latent_action = lam.idm(video)  # infer latent actions (abstract representation of change between frames)
    ```
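
    The snippet above runs with autograd enabled. If `IgorModel` follows the usual `torch.nn.Module` conventions (an assumption based on the `.cuda()` call above, not something stated by the authors), memory use can be reduced by running inference in evaluation mode without gradients:

    ```python
    import torch

    lam.eval()  # evaluation mode for inference
    with torch.no_grad():  # skip autograd bookkeeping
        latent_action = lam.idm(video)
    ```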

4.  Use image FDM to reconstruct future frames from the latent actions.

    ```python
    # Roll out the image FDM: apply each latent action to its source frame
    # to predict the corresponding future frame.
    frames = []
    for i in range(len(latent_action[0])):
        pred = lam.apply_latent_action(video[i], latent_action[0][i])
        frames.append(pred)
    ```
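
    To inspect the rollout, the predicted frames can be written to a video file. The sketch below assumes each `pred` is an `(H, W, C)` image tensor; adjust the conversion to whatever `apply_latent_action` actually returns:

    ```python
    import torch
    from torchvision.io import write_video

    # Stack predictions into a (T, H, W, C) uint8 tensor on the CPU
    rollout = torch.stack([f.detach().cpu() for f in frames]).to(torch.uint8)
    write_video("rollout.mp4", rollout, fps=10)
    ```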

We also provide a Jupyter [notebook](https://github.com/microsoft/villa-x/blob/main/demo/notebook.ipynb) for a step-by-step guide on how to use the pre-trained latent action model.

## Pre-trained Models

| Model ID             | Description         | Params | Link                                                     |
|----------------------|---------------------|--------|----------------------------------------------------------|
| `microsoft/villa-x/lam` | Latent action model | 955M   | 🤗 [Link](https://huggingface.co/microsoft/villa-x/tree/main/lam) |

## Citation
```bibtex
@article{chen2025villax,
  title   = {villa-X: Enhancing Latent Action Modeling in Vision-Language-Action Models},
  author  = {Xiaoyu Chen and Hangxing Wei and Pushi Zhang and Chuheng Zhang and Kaixin Wang and Yanjiang Guo and Rushuai Yang and Yucen Wang and Xinquan Xiao and Li Zhao and Jianyu Chen and Jiang Bian},
  year    = {2025},
  journal = {arXiv preprint arXiv:2507.23682}
}
```

## Credits

We are grateful to open-source projects such as [Open Sora](https://github.com/hpcaitech/Open-Sora), [taming-transformers](https://github.com/CompVis/taming-transformers), [open-pi-zero](https://github.com/allenzren/open-pi-zero), [MAE](https://github.com/facebookresearch/mae), and [timm](https://github.com/rwightman/pytorch-image-models). Their contributions have been invaluable in the development of villa-X.