By incorporating explicit embodiment modeling (e.g., camera parameters and kinematic descriptions), our model unifies training across heterogeneous robots. Together with a full-stack VLA infrastructure (RoboOrchard) and an effective test-driven data strategy, HoloBrain-0 delivers superior performance on both real-world and simulation manipulation benchmarks.
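To illustrate what explicit embodiment modeling can look like at the data level, here is a minimal sketch of a per-robot embodiment record that a cross-robot policy could condition on. This is an assumption for exposition only: the class name EmbodimentSpec and all field names are hypothetical and are not the actual HoloBrain-0 or robo_orchard_lab interface.

from dataclasses import dataclass, field

import numpy as np

@dataclass
class EmbodimentSpec:
    # Hypothetical per-robot metadata; names are illustrative only.
    robot_name: str
    # Camera parameters: 3x3 intrinsics and 4x4 camera-to-base extrinsics,
    # keyed by camera name.
    intrinsics: dict[str, np.ndarray]
    extrinsics: dict[str, np.ndarray]
    # Kinematic description: ordered joint names and per-joint limits,
    # e.g. as read from the robot's URDF.
    joint_names: list[str] = field(default_factory=list)
    joint_limits: dict[str, tuple[float, float]] = field(default_factory=dict)

# Example: a hypothetical dual-arm robot with a single head camera.
spec = EmbodimentSpec(
    robot_name="dual_arm_demo",
    intrinsics={"head": np.eye(3)},
    extrinsics={"head": np.eye(4)},
    joint_names=[f"left_joint_{i}" for i in range(7)]
    + [f"right_joint_{i}" for i in range(7)],
)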
The exported model and processor are straightforward to use: the snippet below loads both and runs a single inference pass.
from robo_orchard_lab.models.holobrain.processor import (
HoloBrainProcessor,
MultiArmManipulationInput,
MultiArmManipulationOutput,
)
from robo_orchard_lab.models.mixin import ModelMixin
# Load the exported processor and model.
processor = HoloBrainProcessor.load("./HoloBrain_v0.0_GD", "robotwin2_0_processor.json")
model = ModelMixin.load_model("./HoloBrain_v0.0_GD/pretrain", load_impl="native")

# Construct the model input (images, camera parameters, robot state, etc.).
input_data: MultiArmManipulationInput = ...  # fill in your observation here

# Pre-process, run inference, and decode the outputs.
input_data = processor.pre_process(input_data)
model_outs = model(input_data)
output_data: MultiArmManipulationOutput = processor.post_process(input_data, model_outs)
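In practice, this single-step call usually sits inside a control loop. The sketch below shows one way to wrap it, assuming a hypothetical env object with get_observation() and step() methods; neither helper is part of robo_orchard_lab, and only the processor/model calls come from the snippet above.

def run_episode(env, model, processor, max_steps=200):
    # Hypothetical closed-loop rollout; env.get_observation() and env.step()
    # are stand-ins for your own environment or robot driver.
    for _ in range(max_steps):
        obs = env.get_observation()          # hypothetical helper
        batch = processor.pre_process(obs)   # same calls as above
        model_outs = model(batch)
        action = processor.post_process(batch, model_outs)
        env.step(action)                     # hypothetical helper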
If you find HoloBrain-0 useful, please cite:

@misc{lin2026holobrain0technicalreport,
      title={HoloBrain-0 Technical Report},
      author={Xuewu Lin and Tianwei Lin and Yun Du and Hongyu Xie and Yiwei Jin and Jiawei Li and Shijie Wu and Qingze Wang and Mengdi Li and Mengao Zhao and Ziang Li and Chaodong Huang and Hongzhe Bi and Lichao Huang and Zhizhong Su},
      year={2026},
      eprint={2602.12062},
      archivePrefix={arXiv},
      primaryClass={cs.RO},
      url={https://arxiv.org/abs/2602.12062},
}