---
license: apache-2.0
tags:
- robotics
- vla-model
- embodied-ai
- task-generation
- humanoid-robots
- lerobot
- pytorch
- agentic-framework
datasets:
- robogene
---

# Dataset Card for RoboGene-Dataset

**RoboGene** is a large-scale robotic task dataset generated by a **diversity-driven agentic framework**. Specifically designed for **Vision-Language-Action (VLA)** model pre-training, it addresses the critical bottlenecks of limited scene variety and insufficient physical grounding in existing datasets, significantly enhancing the generalization capabilities of humanoid robots on complex real-world tasks.

## Dataset Details

### Dataset Description

RoboGene overcomes the limitations of traditional human teleoperation for data collection. By leveraging foundation models (e.g., GPT-4o) as generators within the **RoboGene Agent Loop**, which integrates retrieval, generation, and self-reflection, the framework ensures that every generated instruction is not only semantically diverse but also physically executable in structured environments.
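
The retrieve → generate → reflect cycle described above can be sketched as a simple loop. All function names and the acceptance check below are hypothetical stand-ins, not the paper's actual implementation:

```python
# Illustrative sketch of a retrieval / generation / self-reflection loop.
# Function names and the grounding check are hypothetical; the real
# RoboGene Agent Loop calls a foundation model (e.g., GPT-4o).

def retrieve_examples(scene, library):
    """Retrieve prior tasks for this scene to seed diverse generation."""
    return [t for t in library if t["scene"] == scene]

def generate_task(scene, examples):
    """Stand-in for a foundation-model generation call."""
    return {"scene": scene,
            "instruction": f"pick up the cup and place it on the tray in the {scene}"}

def physically_grounded(task):
    """Stand-in for the self-reflection check that the instruction is executable."""
    return "place" in task["instruction"]

def agent_loop(scene, library, max_retries=3):
    examples = retrieve_examples(scene, library)
    for _ in range(max_retries):
        task = generate_task(scene, examples)
        if physically_grounded(task):    # accept only physically executable tasks
            return task
        examples.append(task)            # feed the rejected task back as context
    return None
```

The key design point is that rejected instructions are fed back into the context, so the generator is steered toward both diversity and physical validity rather than sampling blindly.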

- **Curated by:** Beijing Innovation Center of Humanoid Robotics, Peking University, Beihang University, and Beijing Institute of Technology
- **Shared by:** RoboGene Project Team
- **Language(s) (NLP):** Chinese (task instructions)
- **License:** Apache-2.0

### Dataset Sources

- **Demo:** https://robogene-boost-vla.github.io/
- **Paper:** *RoboGene: Boosting VLA Pre-training via Diversity-Driven Agentic Framework for Real-World Task Generation*
- **Framework Compatibility:** Native support for the [LeRobot](https://github.com/huggingface/lerobot) framework.
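
Because the data is LeRobot-compatible, loading it typically follows LeRobot's `LeRobotDataset` pattern. The snippet below is a hedged sketch: the repository id `"robogene"` is an assumption (check the dataset page for the exact identifier), and the import path matches recent lerobot releases but may move between versions:

```python
# Hedged sketch of loading a LeRobot-format dataset. The repo id is an
# assumption; the import path may differ across lerobot versions.
try:
    from lerobot.common.datasets.lerobot_dataset import LeRobotDataset
    dataset = LeRobotDataset("robogene")   # assumed repo id
    frame = dataset[0]                     # dict of camera images, state, action
except Exception:                          # lerobot not installed or no network
    dataset = None
```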

## Uses

### Direct Use

This dataset serves as a **pre-training foundation** for VLA models:

- **Massive-Scale Pre-training:** Enhances model robustness against novel objects, lighting variations, and background clutter.
- **Zero/Few-shot Adaptation:** Experiments show that models pre-trained on RoboGene require only 50 human demonstrations to adapt quickly to complex downstream tasks.
- **Dual-arm Coordination Research:** Contains high-quality trajectories for sophisticated dual-arm manipulation.

### Out-of-Scope Use

- Deploying pre-trained policies directly on heavy industrial equipment without safety validation or simulation-based verification is strictly prohibited.

## Dataset Structure

The dataset is organized by scene with a balanced distribution. Each episode includes:

- **Observations:** Multi-view RGB images (front-view, wrist-view) and proprioceptive states.
- **Actions:** High-frequency control sequences for joints or end-effectors.
- **Instructions:** Verified, high-quality natural-language task descriptions.
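
Concretely, an episode with this structure can be pictured as a record like the following. The field names and shapes are illustrative, not the dataset's actual schema:

```python
from dataclasses import dataclass

# Illustrative episode record matching the structure described above.
# Field names and shapes are assumptions, not the actual schema.
@dataclass
class Episode:
    front_rgb: list     # per-step front-view RGB frames, each (H, W, 3)
    wrist_rgb: list     # per-step wrist-view RGB frames, each (H, W, 3)
    proprio: list       # per-step proprioceptive state vectors
    actions: list       # high-frequency joint / end-effector commands
    instruction: str    # verified natural-language task description (Chinese)

    def __len__(self):
        return len(self.actions)  # episode length in control steps

ep = Episode(front_rgb=[None] * 3, wrist_rgb=[None] * 3, proprio=[None] * 3,
             actions=[None] * 3,
             instruction="把杯子放到托盘上 (place the cup on the tray)")
```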

### Task Highlights

- **100+ Diverse Skills:** Ranging from basic pick-and-place to complex multi-stage assembly and tool use.
- **Rich Scenarios:** Comprehensive coverage of Kitchen, Laboratory, Medical, Office, Retail, Education, Industrial, and Daily Life environments.
- **Complex Object Spectrum:** Includes high-reflectivity materials (e.g., glass), deformable/flexible objects (e.g., cloth, ropes), and articulated/hinged mechanisms (e.g., doors, drawers, cabinets).

## Dataset Creation

### Curation Rationale

While human-curated data is physically valid, it often lacks structural diversity. Conversely, tasks generated by off-the-shelf LLMs frequently fail to respect physical constraints. RoboGene bridges this gap by embedding physical grounding into an automated generation loop, yielding pre-training performance well above that of both purely human-curated datasets and baseline foundation-model generation.

## Bias, Risks, and Limitations

- **Sim-to-Real Gap:** Although verified for physical grounding, minor sim-to-real discrepancies may still occur in extremely high-precision contact-mechanics tasks.
- **Domain Scope:** While covering 8+ major scenarios, the dataset may not account for highly specialized or hazardous industrial environments outside the training distribution.

### Performance Benchmarks

Empirical results show that the RoboGene pre-training policy significantly outperforms human-curated baselines across five unseen dual-arm tasks. Most notably, on the complex DFR-BuildBlocks task, RoboGene achieves a success rate five times that of existing foundation models (25% vs. 5%). It also generalizes better, maintaining robust performance in novel environments where baseline policies frequently fail to adapt.

## Citation

If you use this dataset in your research, please cite:

**BibTeX:**

```bibtex
@article{zhang2026robogene,
  title={RoboGene: Boosting VLA Pre-training via Diversity-Driven Agentic Framework for Real-World Task Generation},
  author={Zhang, Yixue and Wu, Kun and Gao, Zhi and Zhao, Zhen and Ren, Pei and Xu, Zhiyuan and Liao, Fei and Wang, Xinhua and Fan, Shichao and Wu, Di and others},
  journal={arXiv preprint arXiv:2602.16444},
  year={2026}
}
```