X-VLA (LeRobot)

X-VLA is a Vision-Language-Action foundation model that uses soft prompts to handle cross-embodiment and cross-domain robot control within a unified Transformer architecture.
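The core idea behind a soft prompt is a small set of learnable embedding vectors, selected per embodiment or domain and prepended to the Transformer's input tokens. The sketch below illustrates that general mechanism only; the class name, shapes, and initialization are assumptions, not X-VLA's actual implementation.

import torch
import torch.nn as nn

class SoftPromptBank(nn.Module):
    # Hypothetical: one learnable prompt per embodiment, prepended to the token sequence.
    def __init__(self, num_embodiments: int, prompt_len: int, dim: int):
        super().__init__()
        self.prompts = nn.Parameter(torch.randn(num_embodiments, prompt_len, dim) * 0.02)

    def forward(self, tokens: torch.Tensor, embodiment_id: torch.Tensor) -> torch.Tensor:
        # tokens: (B, T, dim); embodiment_id: (B,) long tensor of embodiment indices
        prompt = self.prompts[embodiment_id]       # (B, prompt_len, dim)
        return torch.cat([prompt, tokens], dim=1)  # (B, prompt_len + T, dim)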

This checkpoint was trained and evaluated on LIBERO tasks.

Original paper: X-VLA: Soft-Prompted Transformer as Scalable Cross-Embodiment Vision-Language-Action Model
Reference implementation: https://github.com/2toinf/X-VLA
LeRobot implementation: Follows the original reference code for compatibility.

Model description

  • Inputs: multi-view images, proprioceptive state, and an optional language instruction
  • Outputs: continuous actions
  • Training objective: flow matching (see the sketch after this list)
  • Action representation: continuous
  • Intended use: base model to fine-tune on your specific use case
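
For intuition, flow matching trains the network to predict the velocity of a simple path from noise to the ground-truth action chunk. Below is a minimal, self-contained sketch of that objective; velocity_net and its call signature are placeholders, not the actual X-VLA API.

import torch
import torch.nn.functional as F

def flow_matching_loss(velocity_net, actions, conditioning):
    # actions: (B, horizon, action_dim) ground-truth action chunk
    t = torch.rand(actions.shape[0], 1, 1, device=actions.device)  # time in [0, 1]
    noise = torch.randn_like(actions)
    x_t = (1 - t) * noise + t * actions       # point on the linear noise -> action path
    target_velocity = actions - noise         # constant velocity of that path
    pred_velocity = velocity_net(x_t, t, conditioning)  # placeholder signature
    return F.mse_loss(pred_velocity, target_velocity)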

Quick start (inference on a real batch)

Installation

pip install "lerobot[xvla]"

For full installation details (including optional video dependencies such as ffmpeg for torchcodec), see the official documentation: https://huggingface.co/docs/lerobot/installation
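
A quick way to check the install is to import the policy class used in the snippets below:

# This import should succeed once lerobot[xvla] is installed.
from lerobot.policies.xvla.modeling_xvla import XVLAPolicy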

Load model + dataset, run select_action

import torch
from lerobot.datasets.lerobot_dataset import LeRobotDataset
from lerobot.policies.factory import make_pre_post_processors

# Swap this import per-policy
from lerobot.policies.xvla.modeling_xvla import XVLAPolicy

# load a policy
model_id = "lerobot/xvla-libero"  # <- swap checkpoint
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

policy = XVLAPolicy.from_pretrained(model_id).to(device).eval()

preprocess, postprocess = make_pre_post_processors(
    policy.config,
    model_id,
    preprocessor_overrides={"device_processor": {"device": str(device)}},
)
# load a LeRobotDataset
dataset = LeRobotDataset("lerobot/libero")

# pick an episode
episode_index = 0

# each episode corresponds to a contiguous range of frame indices
from_idx = dataset.meta.episodes["dataset_from_index"][episode_index]
to_idx   = dataset.meta.episodes["dataset_to_index"][episode_index]

# get a single frame from that episode (e.g. the first frame)
frame_index = from_idx
frame = dict(dataset[frame_index])

batch = preprocess(frame)
with torch.inference_mode():
    pred_action = policy.select_action(batch)
    # postprocess the action (e.g. unnormalize it) before sending it to a robot
    pred_action = postprocess(pred_action)
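
To roll the policy out over a whole episode rather than a single frame, loop over the episode's frame range. This sketch reuses the objects created above; policy.reset() clears internal state, such as the action queue, between episodes.

# Sketch: run the policy over every frame of the chosen episode.
policy.reset()
for idx in range(from_idx, to_idx):
    frame = dict(dataset[idx])
    batch = preprocess(frame)
    with torch.inference_mode():
        action = postprocess(policy.select_action(batch))
    # send `action` to your robot or simulator here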

Training step (loss + backward)

If you’re training or fine-tuning, you typically call forward(...) to get a loss and then backpropagate:

policy.train()
batch = dict(dataset[0])
batch = preprocess(batch)

loss, outputs = policy.forward(batch)
loss.backward()

Notes:

  • Some policies expose policy(**batch) or return a dict; keep this snippet aligned with the policy API.
  • Use your trainer script (lerobot-train) for full training loops.
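
For illustration only, a minimal manual fine-tuning loop could look like the sketch below; the optimizer choice, learning rate, and batch size are assumptions, and lerobot-train remains the recommended path.

from torch.utils.data import DataLoader

loader = DataLoader(dataset, batch_size=4, shuffle=True)     # batch size is an assumption
optimizer = torch.optim.AdamW(policy.parameters(), lr=1e-4)  # lr is an assumption

policy.train()
for raw_batch in loader:
    batch = preprocess(raw_batch)
    loss, outputs = policy.forward(batch)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()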

How to train / fine-tune

lerobot-train \
  --dataset.repo_id=${HF_USER}/<dataset> \
  --output_dir=./outputs/[RUN_NAME] \
  --job_name=[RUN_NAME] \
  --policy.repo_id=${HF_USER}/<desired_policy_repo_id> \
  --policy.path=lerobot/[BASE_CHECKPOINT] \
  --policy.dtype=bfloat16 \
  --policy.device=cuda \
  --steps=100000 \
  --batch_size=4

Add policy-specific flags as needed:

  • --policy.chunk_size=...
  • --policy.n_action_steps=...
  • --policy.max_action_tokens=...
  • --policy.gradient_checkpointing=true

Evaluate in Simulation (LIBERO)

You can evaluate the model in the LIBERO simulation environment:

lerobot-eval \
  --policy.path=lerobot/xvla-libero \
  --env.type=libero \
  --env.task=libero_object \
  --eval.batch_size=1 \
  --eval.n_episodes=20