---
license: apache-2.0
tags:
- robotics
- vla-model
- embodied-ai
- task-generation
- humanoid-robots
- lerobot
- pytorch
- agentic-framework
datasets:
- robogene
---
# Dataset Card for RoboGene-Dataset
<p align="center">
<a href="https://arxiv.org/abs/2602.16444"><img src="https://img.shields.io/badge/arXiv-2602.16444-B31B1B.svg" alt="Paper"></a>
<a href="https://huggingface.co/datasets/X-Humanoid/RoboGene"><img src="https://img.shields.io/badge/%F0%9F%A4%97%20Dataset-HuggingFace-yellow" alt="Dataset"></a>
<a href="LICENSE"><img src="https://img.shields.io/badge/License-Apache--2.0-blue.svg" alt="License"></a>
</p>
<p align="center">
<img src="teaser.png" width="800">
</p>
<p align="center"><em>Figure 1: Overview of the RoboGene dataset.</em></p>
**RoboGene** is a large-scale robotic task dataset generated by a **diversity-driven agentic framework**. Specifically designed for **Vision-Language-Action (VLA)** model pre-training, it addresses the critical bottlenecks of limited scene variety and insufficient physical grounding in existing datasets, significantly enhancing the generalization capabilities of humanoid robots in complex real-world tasks.
## Dataset Details
### Dataset Description
RoboGene overcomes the limitations of traditional human teleoperation for data collection. By leveraging foundation models (e.g., GPT-4o) as generators within the **RoboGene Agent Loop**—which integrates retrieval, generation, and self-reflection—the framework ensures that every generated instruction is not only semantically diverse but also physically executable in structured environments.
- **Curated by:** Beijing Innovation Center of Humanoid Robotics, Peking University, Beihang University, Beijing Institute of Technology.
- **Shared by:** RoboGene Project Team
- **Language(s) (NLP):** Chinese (Task Instructions)
- **License:** Apache-2.0
### Dataset Sources
- **Demo:** https://robogene-boost-vla.github.io/
- **Paper:** *RoboGene: Boosting VLA Pre-training via Diversity-Driven Agentic Framework for Real-World Task Generation*
- **Framework Compatibility:** Native support for the [LeRobot](https://github.com/huggingface/lerobot) framework.
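Since the dataset ships in the LeRobot layout, episodes can in principle be loaded with lerobot's `LeRobotDataset` class. A minimal sketch is below; note that the lerobot import path varies across versions, and the path-building helper is an illustration of the on-disk layout rather than an official API:

```python
from pathlib import Path

def episode_parquet_path(root: str, task_name: str,
                         episode_idx: int, chunk: int = 0) -> Path:
    """Build the on-disk path of one episode under the LeRobot layout,
    e.g. task_name/data/chunk-000/episode_000000.parquet."""
    return (Path(root) / task_name / "data" / f"chunk-{chunk:03d}"
            / f"episode_{episode_idx:06d}.parquet")

print(episode_parquet_path("/data/robogene", "pick_cup", 12))
# /data/robogene/pick_cup/data/chunk-000/episode_000012.parquet

# With `pip install lerobot`, episodes can instead be streamed from the Hub
# (import path may differ by lerobot version):
#
#   from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
#   ds = LeRobotDataset("X-Humanoid/RoboGene")
#   frame = ds[0]  # dict with camera images, proprioceptive state, action, task
```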
## Uses
### Direct Use
This dataset serves as a **pre-training foundation** for VLA models:
- **Massive-Scale Pre-training:** Enhances model robustness against novel objects, lighting variations, and background clutter.
- **Zero/Few-shot Adaptation:** Experiments show that models pre-trained on RoboGene require only 50 human demonstrations to quickly adapt to complex downstream tasks.
- **Dual-arm Coordination Research:** Contains high-quality trajectories for sophisticated dual-arm manipulation.
### Out-of-Scope Use
- Deploying pre-trained policies directly to heavy industrial equipment without safety validation or simulation-based verification is strictly prohibited.
## Dataset Structure
The dataset is organized by scenes with a balanced distribution. Each episode includes:
- **Observations:** Multi-view RGB images (front-view, wrist-view) and proprioceptive states.
- **Actions:** High-frequency control sequences for joints or end-effectors.
- **Instructions:** Verified, high-quality natural language task descriptions.
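The per-timestep contents listed above can be pictured as a simple record. The field names and dimensions below are illustrative only (they are not the exact parquet column names):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class EpisodeStep:
    """One timestep of an episode: multi-view images, proprioception,
    a control action, and the task instruction. Field names are illustrative."""
    rgb_images: Dict[str, bytes]   # per-camera frames, e.g. {"camera_front": ...}
    proprio_state: List[float]     # joint positions / end-effector pose
    action: List[float]            # high-frequency control target
    instruction: str               # verified natural-language task description

step = EpisodeStep(
    rgb_images={"camera_front": b""},
    proprio_state=[0.0] * 14,      # e.g. 7 joints per arm on a dual-arm robot
    action=[0.0] * 14,
    instruction="Pick up the cup and place it on the tray.",
)
```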
### Task Highlights
- **100+ Diverse Skills:** Ranging from basic pick-and-place to complex multi-stage assembly and tool usage.
- **Rich Scenarios:** Comprehensive coverage of Kitchen, Laboratory, Medical, Office, Retail, Education, Industrial, and Daily Life environments.
- **Complex Object Spectrum:** Includes high-reflectivity materials (e.g., glass), deformable/flexible objects (e.g., cloth, ropes), and articulated/hinged mechanisms (e.g., doors, drawers, cabinets).
## Dataset Creation
### Curation Rationale
While human-curated data is physically valid, it often lacks structural diversity. Conversely, traditional LLM-generated tasks frequently fail to align with physical constraints. RoboGene bridges this gap by embedding physical grounding into an automated generation loop, resulting in pre-training performance that significantly outperforms both purely human-curated and baseline foundation-model datasets.
## Bias, Risks, and Limitations
- **Sim-to-Real Gap:** Although verified for physical grounding, minor sim-to-real discrepancies may still occur in extremely high-precision contact mechanics tasks.
- **Domain Scope:** While covering 8+ major scenarios, the dataset may not account for highly specialized or hazardous industrial environments not represented in the training distribution.
## Performance Benchmarks
Empirical results demonstrate that the policy pre-trained on RoboGene significantly outperforms human-curated baselines across five unseen dual-arm tasks. Most notably, in the complex DFR-BuildBlocks scenario, RoboGene achieves a success rate five times higher than that of existing foundation models (25% vs. 5%). Furthermore, our approach exhibits superior generalization, maintaining robust performance in novel environments where baseline policies frequently fail to adapt.
## File Structure
Each task follows the LeRobot format:
```
task_name/
├── data/
│   └── chunk-000/
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       └── ...
├── meta/
│   ├── info.json             # dataset-level metadata (fps, feature shapes)
│   ├── tasks.jsonl           # task descriptions
│   ├── episodes.jsonl        # per-episode metadata
│   └── episodes_stats.jsonl  # per-episode feature statistics
└── videos/
    └── chunk-000/
        ├── observation.rgb_images.camera_front/
        ├── observation.rgb_images.camera_left/
        ├── observation.rgb_images.camera_right/
        ├── observation.rgb_images.camera_top/
        ├── observation.rgb_images.camera_wrist_left/
        └── observation.rgb_images.camera_wrist_right/
```
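The `meta/tasks.jsonl` file stores one JSON object per line, mapping a task index to its description (standard LeRobot convention; the exact key names below are an assumption). A minimal reader, demonstrated on a synthetic `meta/` directory:

```python
import json
import tempfile
from pathlib import Path

def load_tasks(meta_dir: Path) -> dict:
    """Read meta/tasks.jsonl: one JSON object per line,
    assumed to carry 'task_index' and 'task' keys."""
    tasks = {}
    for line in (meta_dir / "tasks.jsonl").read_text().splitlines():
        entry = json.loads(line)
        tasks[entry["task_index"]] = entry["task"]
    return tasks

# Demo on a synthetic meta/ directory (the real file lives under task_name/meta/).
with tempfile.TemporaryDirectory() as tmp:
    meta = Path(tmp)
    (meta / "tasks.jsonl").write_text(
        '{"task_index": 0, "task": "Place the beaker on the shelf."}\n'
    )
    print(load_tasks(meta))  # {0: 'Place the beaker on the shelf.'}
```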
### Camera Configuration (Six RGB Cameras)
- `camera_front`
- `camera_left`
- `camera_right`
- `camera_top`
- `camera_wrist_left`
- `camera_wrist_right`
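Each camera name maps directly to a video directory under `videos/chunk-*/` via the `observation.rgb_images.` prefix, so the six keys can be derived programmatically:

```python
CAMERAS = [
    "camera_front", "camera_left", "camera_right",
    "camera_top", "camera_wrist_left", "camera_wrist_right",
]

def video_key(camera: str) -> str:
    """Map a camera name to its video directory name under videos/chunk-*/ ."""
    return f"observation.rgb_images.{camera}"

keys = [video_key(c) for c in CAMERAS]
print(keys[0])  # observation.rgb_images.camera_front
```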
## Citation
If you use this dataset in your research, please cite:
**BibTeX:**
```bibtex
@article{zhang2026robogene,
title={RoboGene: Boosting VLA Pre-training via Diversity-Driven Agentic Framework for Real-World Task Generation},
author={Zhang, Yixue and Wu, Kun and Gao, Zhi and Zhao, Zhen and Ren, Pei and Xu, Zhiyuan and Liao, Fei and Wang, Xinhua and Fan, Shichao and Wu, Di and others},
journal={arXiv preprint arXiv:2602.16444},
year={2026}
}
```