
WALL-OSS-Flow-0.1

Paper    Hugging Face    GitHub    Project Page

We have upgraded the WALL-OSS architecture. In the new architecture, the linear projection layers (including the QKV mappings and output projections) use separate, non-shared parameters for action tokens and for non-action tokens (such as text or vision tokens).

The core objectives of this decoupled design are:

  • Task Specialization: Action tokens encode the precise, continuous structure of robotic trajectories, whereas non-action tokens focus on high-level semantic understanding. Separating the parameters prevents gradient interference between these two fundamentally different modalities during backpropagation.

  • Computational Interaction: Although the projection layers are independent, the tokens still interact globally within the self-attention layers. This ensures that the generated actions are fully grounded in the context of the visual environment and the language instructions (see the sketch below).
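
A minimal PyTorch sketch of this routing (module and variable names here are hypothetical, not taken from the WALL-OSS codebase):

import torch
import torch.nn as nn
import torch.nn.functional as F

class DecoupledProjectionAttention(nn.Module):
    # Illustrative sketch: separate QKV/output projections for action vs.
    # non-action tokens, followed by shared self-attention.
    def __init__(self, hidden_size: int, num_heads: int):
        super().__init__()
        self.num_heads = num_heads
        self.qkv_text = nn.Linear(hidden_size, 3 * hidden_size)    # text/vision tokens
        self.qkv_action = nn.Linear(hidden_size, 3 * hidden_size)  # action tokens
        self.out_text = nn.Linear(hidden_size, hidden_size)
        self.out_action = nn.Linear(hidden_size, hidden_size)

    def forward(self, hidden_states, token_types):
        # token_types: (batch, seq) with 0 = text/vision token, 1 = action token
        is_action = (token_types == 1).unsqueeze(-1)
        # For clarity we compute both projections and select per token;
        # a real implementation would index-select to avoid wasted compute.
        qkv = torch.where(is_action, self.qkv_action(hidden_states),
                          self.qkv_text(hidden_states))
        q, k, v = qkv.chunk(3, dim=-1)
        b, s, h = q.shape
        q, k, v = (t.view(b, s, self.num_heads, -1).transpose(1, 2) for t in (q, k, v))
        # Shared self-attention: action and non-action tokens attend globally,
        # so actions stay grounded in the visual and language context.
        attn = F.scaled_dot_product_attention(q, k, v)
        attn = attn.transpose(1, 2).reshape(b, s, h)
        return torch.where(is_action, self.out_action(attn), self.out_text(attn))

Because both projections feed the same attention call, action tokens still attend to vision and text tokens; only the gradient paths through the projections are kept separate.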

🚀 Quick Start

Installation

# Create conda environment
conda create --name wallx python=3.10
conda activate wallx

# Install base requirements
pip install torch torchvision transformers
pip install huggingface_hub

# Install Wall-X from GitHub
git clone https://github.com/X-Square-Robot/wall-x.git
cd wall-x
pip install -e .
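
If you want to fetch the checkpoint ahead of time, huggingface_hub (installed above) can download it into the local cache; the repo id below is the one this card is published under:

# Optional: pre-download the model weights from the Hugging Face Hub
from huggingface_hub import snapshot_download

local_path = snapshot_download(repo_id="X-Square-Robot/wall-oss-flow")
print(f"Checkpoint available at: {local_path}")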

Basic Usage

import torch
from wall_x.model.qwen2_5_based.modeling_qwen2_5_vl_act import Qwen2_5_VLMoEForAction

# Load the model
model_path = "X-Square-Robot/wall-oss-flow"  # or your local path
model = Qwen2_5_VLMoEForAction.from_pretrained(model_path)
model.eval()

# Configuration
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device).bfloat16()

# Your inference code here...

🎯 Supervised Fine-Tuning (SFT)

For training Wall-X on your robotics datasets, please refer to our comprehensive training guide:

📖 Training Documentation

The training process includes:

  • Dataset Preparation: How to prepare your robotics datasets in LeRobot format (a minimal sample layout is sketched after this list)
  • Configuration Setup: Detailed configuration for GPU setup, model paths, and robot DOF settings
  • Training Scripts: Ready-to-use training scripts with proper hyperparameters
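
As a rough illustration of the LeRobot format (the key names below follow common LeRobot conventions; your dataset's exact schema may differ), a single training sample bundles observations, proprioceptive state, actions, and the language instruction:

import torch

# Hypothetical LeRobot-style sample; exact keys and shapes depend on
# your robot configuration (see the training documentation for the schema).
sample = {
    "observation.images.top": torch.zeros(3, 224, 224),  # camera frame (C, H, W)
    "observation.state": torch.zeros(20),                # proprioception, e.g. 20 DOF
    "action": torch.zeros(20),                           # target action vector
    "task": "pick up the red block",                     # language instruction
}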

Quick Training Start

# Run training (see workspace/README.md for detailed configuration)
bash ./workspace/lerobot_example/run.sh

🔮 Inference

For detailed inference examples and model evaluation:

📖 Inference Documentation

Basic Inference Example

import torch
from wall_x.model.qwen2_5_based.modeling_qwen2_5_vl_act import Qwen2_5_VLMoEForAction

# Load model
model_path = "X-Square-Robot/wall-oss-flow"  # repo id from this card, or your local path
model = Qwen2_5_VLMoEForAction.from_pretrained(model_path)
model.eval()

# Setup
batch_size = 1
seq_length = 50
device = "cuda" if torch.cuda.is_available() else "cpu"
model = model.to(device)
model.to_bfloat16_for_selected_params()

# Prepare inputs (example with synthetic data)
torch.manual_seed(0)
input_ids = torch.randint(0, 151667, (batch_size, seq_length), dtype=torch.long)  # random ids over the vocabulary
attention_mask = torch.ones((batch_size, seq_length), dtype=torch.long)
moe_token_types = torch.zeros((batch_size, seq_length), dtype=torch.long)  # per-token expert routing (all zeros, presumably the non-action branch)
position_ids = torch.arange(seq_length, dtype=torch.long).unsqueeze(0).expand(batch_size, -1)

# Robotics-specific inputs (shapes are illustrative; match your robot's configuration)
proprioception = torch.randn((batch_size, 1, 20), dtype=torch.float32)  # joint states (here 20 DOF)
agent_pos_mask = torch.ones((batch_size, 1, 20), dtype=torch.float32)   # valid-DOF mask for proprioception (assumed)
dof_mask = torch.ones((batch_size, 32, 20), dtype=torch.float32)        # valid-DOF mask over the action horizon (assumed)
dataset_names = ["x2_normal"]

# Move to device
inputs = {
    "input_ids": input_ids.to(device),
    "attention_mask": attention_mask.to(device),
    "moe_token_types": moe_token_types.to(device),
    "position_ids": position_ids.to(device),
    "proprioception": proprioception.to(device).bfloat16(),
    "agent_pos_mask": agent_pos_mask.to(device).bfloat16(),
    "dof_mask": dof_mask.to(device).bfloat16(),
    "dataset_names": dataset_names,
    "mode": "validate"
}

# Run inference
with torch.no_grad():
    outputs = model(**inputs)
    print(f"Output logits shape: {outputs.logits.shape}")

Advanced Inference Scripts

For production-ready inference and evaluation scripts:

# Basic inference test
python ./scripts/fake_inference.py

# Generate open-loop comparison plots
python ./scripts/draw_openloop_plot.py

๐Ÿ“ View all inference scripts

📚 Complete Documentation

For comprehensive setup, training, and inference instructions:

🚀 Visit our GitHub Repository

The repository contains:

  • Detailed Installation Guide: Complete environment setup with all dependencies
  • Training Tutorials: Step-by-step SFT process with LeRobot datasets
  • Inference Examples: Multiple inference scripts and evaluation tools
  • Configuration Templates: Ready-to-use configs for different robot setups
  • Troubleshooting Guide: Common issues and solutions

📄 Cite Us

If you find WALL-OSS models useful, please cite:

@misc{walloss_paper_2025,
  title        = {WALL-OSS: Igniting VLMs toward the Embodied Space},
  author       = {X Square Robot},
  year         = {2025},
  howpublished = {\url{https://x2robot.cn-wlcb.ufileos.com/wall_oss.pdf}},
  note         = {White paper}
}