# GreenVLA-5b-base-stride-4
**Staged Vision-Language-Action Model for Generalist Robots**
*Sber Robotics Center · Manipulation Team*
## Overview
GreenVLA-5b-base-stride-4 is a base checkpoint of the Green-VLA family — a ~5B-parameter Vision-Language-Action model pretrained on both general-domain and robotics data (3,000+ hours of demonstrations across multiple embodiments).
This is the stride-4 variant: the action expert has 4× fewer transformer layers than the VLM backbone, resulting in a lighter action head while retaining the full VLM capacity. For the variant with the same number of action-expert layers as the VLM, see GreenVLA-5b-base-stride-1.
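As a rough illustration of what stride 4 means for depth (the backbone layer count of 36 and the pairing scheme below are assumptions for the example, not confirmed GreenVLA details):

```python
# Illustrative stride-4 depth arithmetic; all numbers are assumptions.
vlm_layers = 36                       # hypothetical backbone depth
stride = 4
expert_layers = vlm_layers // stride  # -> 9 action-expert layers

# One plausible pairing: expert layer j runs alongside every 4th backbone layer.
for j in range(expert_layers):
    print(f"expert layer {j} <-> backbone layer {j * stride}")
```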
This checkpoint combines:
- VLM capabilities — Visual Question Answering, object pointing, bounding box prediction, and scene description, inherited from the Qwen3-VL-4B backbone.
- Autoregressive action prediction — FAST token-based action generation for discrete control.
- Flow-matching action expert — A continuous action head for smooth, high-frequency trajectory generation.
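To make the continuous head concrete, here is a minimal, hypothetical sketch of flow-matching sampling: an action chunk is produced by Euler-integrating a learned velocity field from Gaussian noise, conditioned on VLM features. Function names, shapes, and the step count are illustrative assumptions, not the repository's API.

```python
import torch

def sample_action_chunk(expert, vlm_features, horizon=8, action_dim=7, num_steps=10):
    """Hypothetical flow-matching sampler: integrate from noise to an action chunk."""
    x = torch.randn(1, horizon, action_dim)  # start from Gaussian noise
    dt = 1.0 / num_steps
    for i in range(num_steps):
        t = torch.full((1,), i * dt)         # flow time in [0, 1)
        v = expert(x, t, vlm_features)       # predicted velocity field
        x = x + dt * v                       # Euler integration step
    return x                                 # (1, horizon, action_dim)
```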
Use this checkpoint as the starting point for fine-tuning on your own embodiment (R1 stage), or for zero-shot VLM inference.
## Architecture
| Component | Details |
|---|---|
| VLM Backbone | Qwen3-VL-4B-Instruct (vision encoder + language model) |
| Action Expert | Flow-matching transformer operating in a reduced hidden space |
| Action Expert Depth | 4× fewer layers than the VLM (stride 4) |
| Action Tokenizer | FAST tokenizer for autoregressive action prediction |
| Total Parameters | ~5B |
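For intuition on the FAST row: FAST-style tokenizers compress action chunks in frequency space (quantized DCT coefficients, then BPE), so decoding the autoregressively generated tokens ends with an inverse DCT. The sketch below assumes that pipeline with an invented scale factor; it is not the actual tokenizer implementation.

```python
import numpy as np
from scipy.fft import idct

def decode_fast_coeffs(coeffs, horizon=8, action_dim=7, scale=10.0):
    """Hypothetical FAST-style decode: quantized DCT coefficients -> action chunk."""
    c = np.zeros((horizon, action_dim))
    c[: coeffs.shape[0]] = coeffs / scale  # dequantize the low-frequency terms
    return idct(c, axis=0, norm="ortho")   # inverse DCT along time -> smooth chunk
```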
## Training Curriculum
This checkpoint corresponds to the Base stage of the Green-VLA curriculum:
| Stage | Name | Status |
|---|---|---|
| L0 | Foundational VLM pretraining | ✓ |
| L1 | Multimodal grounding (VQA, pointing, bbox) | ✓ |
| R0 | Multi-embodiment robotics pretraining | ✓ |
| R1 | Embodiment-specific adaptation | — |
| R2 | RL policy alignment | — |
## Quick Start
### Installation
```bash
git clone https://github.com/greenvla/GreenVLA.git
cd GreenVLA
uv sync  # or: pip install -e .
```
### Action Inference
```python
import numpy as np
import torch
from lerobot.common.policies.factory import load_pretrained_policy
from lerobot.common.utils.torch_observation import (
    move_dict_to_batch_for_inference,
    torch_preprocess_dict_inference,
)

# 1. Load policy and transforms.
policy, input_transforms, output_transforms = load_pretrained_policy(
    "SberRoboticsCenter/GreenVLA-5b-stride-4-R1-fractal",
    data_config_name="fractal",
)
policy.to("cuda").eval()

# 2. Build an observation (replace with real sensor data).
raw_obs = {
    "observation/state": np.random.rand(8),  # x, y, z, rx, ry, rz, rw, gripper
    "observation/image": np.random.randint(256, size=(448, 448, 3), dtype=np.uint8),
    "prompt": "move the coke can to the left of the table",
}

# 3. Transform, preprocess, and batch.
obs = input_transforms(raw_obs)
obs = torch_preprocess_dict_inference(obs)
batch = move_dict_to_batch_for_inference(obs, device="cuda")

# 4. Predict actions and post-process.
with torch.inference_mode():
    raw_actions = policy.select_action(batch).cpu().numpy()
actions = output_transforms(
    {"actions": raw_actions, "state": batch["state"].cpu().numpy()}
)["actions"]
# actions shape: (action_horizon, 7) — [x, y, z, roll, pitch, yaw, gripper]
```
See examples/example_inference_fractal.py for the full runnable script with argument parsing.
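In deployment, the snippet above typically sits inside a receding-horizon loop: execute the first few actions of each predicted chunk, then re-query the policy with a fresh observation. The sketch below reuses `policy` and the transforms loaded above; `task_done`, `get_observation`, `send_action`, and both rates are hypothetical placeholders for your robot interface.

```python
import time

CONTROL_HZ = 10        # assumed control rate
EXECUTE_PER_QUERY = 4  # actions executed per model call (a tuning assumption)

while not task_done():                  # hypothetical robot-interface helpers
    obs = input_transforms(get_observation())
    obs = torch_preprocess_dict_inference(obs)
    batch = move_dict_to_batch_for_inference(obs, device="cuda")
    with torch.inference_mode():
        raw_actions = policy.select_action(batch).cpu().numpy()
    actions = output_transforms(
        {"actions": raw_actions, "state": batch["state"].cpu().numpy()}
    )["actions"]
    for action in actions[:EXECUTE_PER_QUERY]:
        send_action(action)             # [x, y, z, roll, pitch, yaw, gripper]
        time.sleep(1.0 / CONTROL_HZ)
```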
### VLM Inference (VQA, Pointing, BBox)
The base model retains full VLM capabilities:
```python
from PIL import Image
from lerobot.common.policies.factory import load_pretrained_policy

# Load without data transforms.
policy, _, _ = load_pretrained_policy(
    "SberRoboticsCenter/GreenVLA-5b-base-stride-4",
    data_config_name=None,
)
policy = policy.to("cuda").eval()

# Access the processor and model directly.
processor = policy.model.processor
image = Image.open("scene.jpg")

messages = [
    {
        "role": "user",
        "content": [
            {"type": "image", "image": image},
            {"type": "text", "text": "Describe what the robot should do next."},
        ],
    }
]

inputs = processor.apply_chat_template(
    messages, tokenize=True,
    add_generation_prompt=True,  # append the assistant header so the model answers
    return_dict=True, return_tensors="pt",
    padding_side="left", padding="max_length", max_length=256,
    images_kwargs={"do_resize": True},
).to("cuda")

generated_ids = policy.model.model.generate(
    **inputs, max_new_tokens=256, do_sample=False, use_cache=False,
)

# Strip the prompt tokens before decoding.
generated_ids_trimmed = [
    out[len(inp):] for inp, out in zip(inputs.input_ids, generated_ids)
]
print(processor.batch_decode(generated_ids_trimmed, skip_special_tokens=True)[0])
```
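Pointing and bounding-box queries reuse the same pipeline with a different text prompt. The wording below is an illustrative assumption; the exact output notation (point coordinates, box corners) follows whatever format the L1 grounding stage was trained on.

```python
# Swap the text content, then re-run apply_chat_template + generate as above.
messages[0]["content"][1] = {"type": "text", "text": "Point to the coke can."}
# For detection, e.g.:
# messages[0]["content"][1] = {"type": "text", "text": "Output the bounding box of the red mug."}
```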
## Citation
```bibtex
@misc{apanasevich2026greenvlastagedvisionlanguageactionmodel,
  title         = {Green-VLA: Staged Vision-Language-Action Model for Generalist Robots},
  author        = {I. Apanasevich and M. Artemyev and R. Babakyan and P. Fedotova and
                   D. Grankin and E. Kupryashin and A. Misailidi and D. Nerus and
                   A. Nutalapati and G. Sidorov and I. Efremov and M. Gerasyov and
                   D. Pikurov and Y. Senchenko and S. Davidenko and D. Kulikov and
                   M. Sultankin and K. Askarbek and O. Shamanin and D. Statovoy and
                   E. Zalyaev and I. Zorin and A. Letkin and E. Rusakov and
                   A. Silchenko and V. Vorobyov and S. Sobolnikov and A. Postnikov},
  year          = {2026},
  eprint        = {2602.00919},
  archivePrefix = {arXiv},
  primaryClass  = {cs.RO},
  url           = {https://arxiv.org/abs/2602.00919},
}
```
© 2026 Sber Robotics Center · Manipulation Team