---
license: apache-2.0
library_name: robo-orchard-lab
tags:
- horizonlabs
- robo-orchard-lab
- holobrain
---

A foundation model for general embodied manipulation.
## 📘 Framework
By incorporating explicit embodiment modeling (e.g., camera parameters and kinematic descriptions), our model effectively unifies training across heterogeneous robots. Together with a full-stack VLA infrastructure (RoboOrchard) and an effective test-driven data strategy, HoloBrain-0 delivers superior performance on both real-world and simulation manipulation benchmarks.
## 📁 Quick Start
The exported model and processor are straightforward to use: insert the code below wherever you need to run model inference.
```python
from robo_orchard_lab.models.holobrain.processor import (
    HoloBrainProcessor,
    MultiArmManipulationInput,
    MultiArmManipulationOutput,
)
from robo_orchard_lab.models.mixin import ModelMixin

# load model and processor
processor = HoloBrainProcessor.load("./HoloBrain_v0.0_Qwen", "robotwin2_0_processor.json")
model = ModelMixin.load_model("./HoloBrain_v0.0_Qwen/pretrain", load_impl="native")

# prepare your observation as a MultiArmManipulationInput instance
input_data: MultiArmManipulationInput

# run inference
input_data = processor.pre_process(input_data)
model_outs = model(input_data)
output_data: MultiArmManipulationOutput = processor.post_process(input_data, model_outs)
```
### Quick Start for Real-World Tasks
The Fold Clothes and Grasp Anything tasks share the same environment setup and deployment flow. Start with the common steps below, then choose the task-specific weights and inference command you need.
#### 1. Set up the environment

```shell
git clone https://github.com/HorizonRobotics/RoboOrchardLab
cd RoboOrchardLab
```
#### 2. Optional: switch to hf-mirror

If you run into network issues while downloading the weights, you can switch to hf-mirror first:

```shell
export HF_ENDPOINT=https://hf-mirror.com
```
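If you drive the downloads from Python rather than the shell, the same mirror switch works by setting `HF_ENDPOINT` in the process environment. A minimal sketch; note that `huggingface_hub` reads this variable when it is first imported, so set it before the import:

```python
import os

# HF_ENDPOINT must be set before huggingface_hub is first imported,
# since the library reads the endpoint from the environment at import time
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"
```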
#### 3. Download shared dependencies once

The two real-world tasks share the same URDF files and Qwen base model, so you only need to download them once.

```python
from huggingface_hub import snapshot_download

# shared URDF files
snapshot_download(
    repo_id="HorizonRobotics/HoloBrain_v0.0_Qwen",
    repo_type="model",
    local_dir="./",
    allow_patterns="urdf/*",
    max_workers=8,
    resume_download=True,
)

# Qwen2.5-VL base model
snapshot_download(
    repo_id="Qwen/Qwen2.5-VL-3B-Instruct",
    repo_type="model",
    local_dir="./ckpt/Qwen2.5-VL-3B-Instruct",
    max_workers=8,
    resume_download=True,
)
```
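Once the downloads finish, a quick existence check helps catch a partial download before you start a task. This is a minimal sketch; the two paths it checks are the ones produced by the download calls above:

```python
from pathlib import Path

def missing_paths(paths):
    """Return the subset of paths that does not exist on disk."""
    return [p for p in paths if not Path(p).exists()]

# layout produced by the snapshot_download calls above
expected = ["./urdf", "./ckpt/Qwen2.5-VL-3B-Instruct"]
missing = missing_paths(expected)
if missing:
    print("missing shared dependencies:", missing)
```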
#### 4. Link the shared dependencies into each task directory

The exported task configs load `./urdf/...` and `./ckpt/...` relative to each task directory. Create soft links once so both tasks can reuse the same downloaded files.

```shell
ln -sfn ../urdf post_training_foldclothes/urdf
ln -sfn ../ckpt post_training_foldclothes/ckpt
ln -sfn ../urdf post_training_graspanything/urdf
ln -sfn ../ckpt post_training_graspanything/ckpt
```
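If you prefer to create the links from a Python setup script, `os.symlink` can replicate the four commands above. A hedged equivalent sketch; like `ln -sfn`, it replaces an existing link in place:

```python
import os

def link_shared(task_dirs, shared=("urdf", "ckpt")):
    """Create ../urdf and ../ckpt symlinks inside each task directory,
    replacing any existing link (mirrors `ln -sfn`)."""
    for task in task_dirs:
        for name in shared:
            link = os.path.join(task, name)
            if os.path.islink(link):
                os.remove(link)  # -f behavior: replace an existing link
            os.symlink(os.path.join("..", name), link)

tasks = ["post_training_foldclothes", "post_training_graspanything"]
link_shared([t for t in tasks if os.path.isdir(t)])
```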
#### 5. Choose a task and download its weights

**Fold Clothes**

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="HorizonRobotics/HoloBrain_v0.0_Qwen",
    repo_type="model",
    local_dir="./",
    allow_patterns="post_training_foldclothes/*",
    max_workers=8,
    resume_download=True,
)
```
**Grasp Anything**

```python
from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="HorizonRobotics/HoloBrain_v0.0_Qwen",
    repo_type="model",
    local_dir="./",
    allow_patterns="post_training_graspanything/*",
    max_workers=8,
    resume_download=True,
)
```
#### 6. Start the inference server

**Fold Clothes**

Choose the command that matches your robot arm:

*Piper robot arm*

```shell
cd projects/holobrain
python3 scripts/inference_server.py \
    --model_dir ../../post_training_foldclothes \
    --inference_prefix fold_clothes_ro_piper \
    --port 6050 \
    --server_name sem \
    --num_joints 7 \
    --valid_action_step 64
```

*PiperX robot arm*

```shell
cd projects/holobrain
python3 scripts/inference_server.py \
    --model_dir ../../post_training_foldclothes \
    --inference_prefix fold_clothes_ro_piperx \
    --port 6050 \
    --server_name sem \
    --num_joints 7 \
    --valid_action_step 64
```
**Grasp Anything**

```shell
cd projects/holobrain
python3 scripts/inference_server.py \
    --model_dir ../../post_training_graspanything \
    --inference_prefix grasp_anything_ro \
    --port 6050 \
    --server_name sem \
    --num_joints 7 \
    --valid_action_step 64
```
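Before pointing a robot client at the server, you may want to confirm that the process is up and accepting connections. This is a protocol-agnostic sketch: it only opens a TCP connection to the port used in the commands above and does not speak the server's actual protocol.

```python
import socket

def server_reachable(host="127.0.0.1", port=6050, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("inference server reachable:", server_reachable())
```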
#### 7. Hardware and Control
For robot hardware setup and controller integration, see the HoloBrain-0 Real Robot Deployment Guide.
## 📄 Citation
```bibtex
@misc{lin2026holobrain0technicalreport,
      title={HoloBrain-0 Technical Report},
      author={Xuewu Lin and Tianwei Lin and Yun Du and Hongyu Xie and Yiwei Jin and Jiawei Li and Shijie Wu and Qingze Wang and Mengdi Li and Mengao Zhao and Ziang Li and Chaodong Huang and Hongzhe Bi and Lichao Huang and Zhizhong Su},
      year={2026},
      eprint={2602.12062},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2602.12062},
}
```