
Dataset Card for RoboGene-Dataset


RoboGene is a large-scale robotic task dataset generated by a diversity-driven agentic framework. Specifically designed for Vision-Language-Action (VLA) model pre-training, it addresses the critical bottlenecks of limited scene variety and insufficient physical grounding in existing datasets, significantly enhancing the generalization capabilities of humanoid robots in complex real-world tasks.

Dataset Details

Dataset Description

RoboGene overcomes the limitations of traditional human teleoperation for data collection. By leveraging foundation models (e.g., GPT-4o) as generators within the RoboGene Agent Loop—which integrates retrieval, generation, and self-reflection—the framework ensures that every generated instruction is not only semantically diverse but also physically executable in structured environments.
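The retrieval–generation–self-reflection control flow described above can be sketched as follows. This is only an illustrative sketch: every function name here (`retrieve_examples`, `generate_instruction`, `reflect`, `agent_loop`) is a hypothetical placeholder, not the actual RoboGene API, and the foundation-model call is mocked out.

```python
# Hypothetical sketch of a retrieval -> generation -> self-reflection loop.
# None of these names come from the RoboGene codebase; they only
# illustrate the control flow described in the text.

def retrieve_examples(scene, library):
    """Retrieve prior instructions from the same scene to condition generation."""
    return [ex for ex in library if ex["scene"] == scene][:3]

def generate_instruction(scene, examples):
    """Stand-in for a foundation-model call (e.g., GPT-4o)."""
    return {"scene": scene,
            "text": f"pick up the cup in the {scene}",
            "grounded": True}

def reflect(instruction):
    """Self-reflection: keep only physically executable instructions."""
    return instruction["grounded"]

def agent_loop(scene, library, max_tries=5):
    """Repeat retrieve/generate/reflect until a grounded instruction passes."""
    for _ in range(max_tries):
        examples = retrieve_examples(scene, library)
        candidate = generate_instruction(scene, examples)
        if reflect(candidate):          # physically grounded -> accept
            library.append(candidate)   # grows the retrieval library
            return candidate
    return None                         # no executable instruction found

library = []
task = agent_loop("kitchen", library)
```

Accepted instructions feed back into the retrieval library, which is what lets later generations condition on earlier verified tasks.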

  • Curated by: Beijing Innovation Center of Humanoid Robotics, Peking University, Beihang University, Beijing Institute of Technology.
  • Shared by: RoboGene Project Team
  • Language(s) (NLP): Chinese (Task Instructions)
  • License: Apache-2.0

Dataset Sources

  • Demo: https://robogene-boost-vla.github.io/
  • Paper: RoboGene: Boosting VLA Pre-training via Diversity-Driven Agentic Framework for Real-World Task Generation
  • Framework Compatibility: Native support for the LeRobot framework.

Uses

Direct Use

This dataset serves as a pre-training foundation for VLA models:

  • Massive-Scale Pre-training: Enhances model robustness against novel objects, lighting variations, and background clutter.
  • Zero/Few-shot Adaptation: Experiments show that models pre-trained on RoboGene require only 50 human demonstrations to quickly adapt to complex downstream tasks.
  • Dual-arm Coordination Research: Contains high-quality trajectories for sophisticated dual-arm manipulation.

Out-of-Scope Use

  • Deploying pre-trained policies directly to heavy industrial equipment without safety validation or simulation-based verification is strictly prohibited.

Dataset Structure

The dataset is organized by scenes with a balanced distribution. Each episode includes:

  • Observations: Multi-view RGB images (front-view, wrist-view) and proprioceptive states.
  • Actions: High-frequency control sequences for joints or end-effectors.
  • Instructions: Verified, high-quality natural language task descriptions.
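A minimal sketch of what a single episode record could look like in memory. The field names, shapes, and dtypes are illustrative assumptions, not the released schema (the actual data is packaged for the LeRobot framework):

```python
# Illustrative episode layout -- field names and shapes are assumptions,
# not the released RoboGene/LeRobot schema.
from dataclasses import dataclass
import numpy as np

@dataclass
class Episode:
    front_rgb: np.ndarray   # (T, H, W, 3) front-view camera frames
    wrist_rgb: np.ndarray   # (T, H, W, 3) wrist-view camera frames
    proprio: np.ndarray     # (T, D) proprioceptive state (e.g., joint positions)
    actions: np.ndarray     # (T, A) high-frequency joint/end-effector targets
    instruction: str        # verified natural-language task description

# Example with arbitrary placeholder dimensions.
T, H, W, D, A = 10, 96, 96, 14, 14
ep = Episode(
    front_rgb=np.zeros((T, H, W, 3), dtype=np.uint8),
    wrist_rgb=np.zeros((T, H, W, 3), dtype=np.uint8),
    proprio=np.zeros((T, D), dtype=np.float32),
    actions=np.zeros((T, A), dtype=np.float32),
    instruction="place the cup on the shelf",
)
# Observations and actions share the same number of time steps.
assert ep.front_rgb.shape[0] == ep.actions.shape[0]
```

The one invariant worth noting is the shared time axis: every per-step field (images, proprioception, actions) must align on `T`.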

Task Highlights

  • 100+ Diverse Skills: Ranging from basic pick-and-place to complex multi-stage assembly and tool usage.
  • Rich Scenarios: Comprehensive coverage of Kitchen, Laboratory, Medical, Office, Retail, Education, Industrial, and Daily Life environments.
  • Complex Object Spectrum: Includes high-reflectivity materials (e.g., glass), deformable/flexible objects (e.g., cloth, ropes), and articulated/hinged mechanisms (e.g., doors, drawers, cabinets).

Dataset Creation

Curation Rationale

While human-curated data is physically valid, it often lacks structural diversity. Conversely, traditional LLM-generated tasks frequently fail to align with physical constraints. RoboGene bridges this gap by embedding physical grounding into an automated generation loop, resulting in pre-training performance that significantly outperforms both purely human-curated and baseline foundation-model datasets.

Bias, Risks, and Limitations

  • Sim-to-Real Gap: Although verified for physical grounding, minor sim-to-real discrepancies may still occur in extremely high-precision contact mechanics tasks.
  • Domain Scope: While covering 8+ major scenarios, the dataset may not account for highly specialized or hazardous industrial environments not represented in the training distribution.

Performance Benchmarks

Empirical results demonstrate that the RoboGene pre-training policy significantly outperforms human-curated baselines across five unseen dual-arm tasks. Most notably, in the complex DFR-BuildBlocks scenario, RoboGene achieves a success rate five times higher than that of existing foundation models (25% vs. 5%). Furthermore, our approach exhibits superior generalization capabilities, maintaining robust performance in novel environments where baseline policies frequently fail to adapt.

Citation

If you use this dataset in your research, please cite:

BibTeX:

@article{zhang2026robogene,
  title={RoboGene: Boosting VLA Pre-training via Diversity-Driven Agentic Framework for Real-World Task Generation},
  author={Zhang, Yixue and Wu, Kun and Gao, Zhi and Zhao, Zhen and Ren, Pei and Xu, Zhiyuan and Liao, Fei and Wang, Xinhua and Fan, Shichao and Wu, Di and others},
  journal={arXiv preprint arXiv:2602.16444},
  year={2026}
}
