<img src="teaser.png" width="800">
</p>
<p align="center"><em>Figure 1: Overview of the RoboGene dataset.</em></p>
**RoboGene** is a large-scale robotic task dataset generated by a **diversity-driven agentic framework**. Specifically designed for **Vision-Language-Action (VLA)** model pre-training, it addresses the critical bottlenecks of limited scene variety and insufficient physical grounding in existing datasets, significantly enhancing the generalization capabilities of humanoid robots in complex real-world tasks.
### Performance Benchmarks
Empirical results demonstrate that the RoboGene pre-training policy significantly outperforms human-curated baselines across five unseen dual-arm tasks. Most notably, in the complex DFR-BuildBlocks scenario, RoboGene achieves a success rate five times higher than that of existing foundation models (25% vs. 5%). Furthermore, our approach exhibits superior generalization capabilities, maintaining robust performance in novel environments where baseline policies frequently fail to adapt.
## Dataset Structure

Each task follows the LeRobot format:

```
task_name/
├── data/
│   └── chunk-000/
│       ├── episode_000000.parquet
│       ├── episode_000001.parquet
│       └── ...
├── meta/
│   ├── info.json             # dataset metadata
│   ├── tasks.jsonl           # task descriptions
│   ├── episodes.jsonl        # per-episode metadata
│   └── episodes_stats.jsonl  # per-episode statistics
└── videos/
    └── chunk-000/
        ├── observation.rgb_images.camera_front/
        ├── observation.rgb_images.camera_left/
        ├── observation.rgb_images.camera_right/
        ├── observation.rgb_images.camera_top/
        ├── observation.rgb_images.camera_wrist_left/
        └── observation.rgb_images.camera_wrist_right/
```
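In this layout each parquet file holds one episode as a per-timestep table. The sketch below builds a toy episode with pandas to illustrate that layout; the column names (`timestamp`, `action`, `episode_index`, `frame_index`) are illustrative placeholders here, since the real schema for each task is defined in its `meta/info.json`.

```python
import pandas as pd

# Illustrative only: a toy 3-step episode in a per-timestep table layout.
# Real column names and dtypes come from the task's meta/info.json.
episode = pd.DataFrame({
    "timestamp": [0.00, 0.05, 0.10],
    "action": [[0.0] * 14, [0.1] * 14, [0.2] * 14],  # e.g. dual-arm targets
    "episode_index": [0, 0, 0],
    "frame_index": [0, 1, 2],
})

# A real episode is loaded directly from the tree above, e.g.:
# episode = pd.read_parquet("task_name/data/chunk-000/episode_000000.parquet")
print(len(episode), "timesteps,", len(episode.columns), "columns")
```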
### Camera Configuration (six RGB cameras)

- `camera_front`
- `camera_left`
- `camera_right`
- `camera_top`
- `camera_wrist_left`
- `camera_wrist_right`
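Each camera's videos live in their own directory under `videos/`. The small helper below resolves those directories from the layout shown above; the function name `camera_video_dirs` and the `chunk-000` default are assumptions for illustration, not part of the dataset's tooling.

```python
from pathlib import Path

# The six RGB camera streams listed above.
CAMERAS = [
    "camera_front", "camera_left", "camera_right",
    "camera_top", "camera_wrist_left", "camera_wrist_right",
]

def camera_video_dirs(task_root: str, chunk: str = "chunk-000") -> dict:
    """Map each camera name to its video directory for one task.

    Hypothetical helper following the directory layout above; chunk
    naming beyond chunk-000 may vary per task.
    """
    root = Path(task_root) / "videos" / chunk
    return {cam: root / f"observation.rgb_images.{cam}" for cam in CAMERAS}

dirs = camera_video_dirs("task_name")
print(dirs["camera_front"])
```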
## Citation

If you use this dataset in your research, please cite: