---
license: apache-2.0
---
# XR-1-Dataset-Sample

[[Project Page](https://github.com/Open-X-Humanoid/XR-1)] [[Paper](https://arxiv.org/abs/2411.02776v1)] [[GitHub](https://github.com/Open-X-Humanoid/XR-1)]

This repository contains a representative sample of the **XR-1** project's multi-modal dataset. The data is organized to support cross-embodiment training for humanoids, manipulators, and ego-centric vision.

## Directory Structure

The dataset follows a hierarchy based on **Embodiment -> Task -> Format**:

### 1. Robot Embodiment Data (LeRobot Format)

Standard robot data (e.g., TienKung or UR5) is organized following the [LeRobot](https://github.com/huggingface/lerobot) convention:

```text
XR-1-Dataset-Sample/
└── DUAL_ARM_TIEN_KUNG2/          # Robot Embodiment
    └── Press_Green_Button/       # Task Name
        └── lerobot/              # Data in LeRobot format
            ├── metadata.json
            ├── episodes.jsonl
            ├── videos/
            └── data/
```
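
For a quick look at one task folder, the sidecar files can be read with the Python standard library alone. This is a minimal sketch, not official tooling; the exact keys inside `metadata.json` and `episodes.jsonl` depend on the LeRobot version used to export the data:

```python
import json
from pathlib import Path

# One task folder from the tree above.
task_dir = Path("XR-1-Dataset-Sample/DUAL_ARM_TIEN_KUNG2/Press_Green_Button/lerobot")

# metadata.json is a single JSON object with task-level information.
metadata = json.loads((task_dir / "metadata.json").read_text())
print("metadata keys:", sorted(metadata))

# episodes.jsonl is newline-delimited JSON: one record per recorded episode.
with (task_dir / "episodes.jsonl").open() as f:
    episodes = [json.loads(line) for line in f if line.strip()]
print(f"{len(episodes)} episodes; first record keys: {sorted(episodes[0])}")
```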

### 2. Human/Ego-centric Data (Ego4D Format)

For ego-centric data (e.g., Ego4D subsets used for Stage 1 UVMC pre-training), the structure is adapted to its native recording format:

```text
XR-1-Dataset-Sample/
└── Ego4D/                 # Human ego-centric source
    ├── files.json         # Unified annotation/mapping file
    └── files/             # Raw data storage
        └── [video_id].mp4 # Egocentric video clips
```
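
The mapping file and the raw clips can be cross-checked in a few lines. The internal schema of `files.json` is not documented here, so the sketch below only counts its entries and lists clips by their video id:

```python
import json
from pathlib import Path

ego_dir = Path("XR-1-Dataset-Sample/Ego4D")

# files.json is the unified annotation/mapping file from the tree above;
# its entry layout is unspecified here, so we only count the entries.
annotations = json.loads((ego_dir / "files.json").read_text())
print("annotation entries:", len(annotations))

# Raw clips live under files/ and are named by video id.
for clip in sorted((ego_dir / "files").glob("*.mp4")):
    print("video id:", clip.stem, "->", clip)
```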

## Data Modalities

* **Vision**: High-frequency RGB streams from multiple camera perspectives.
* **Motion**: Continuous state-action pairs, which are tokenized into **UVMC** (Unified Vision-Motion Codes) for XR-1 training; a generic tokenization sketch follows this list.
* **Language**: Natural language instructions paired with each episode for VLA alignment.
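
To make "tokenized into codes" concrete, here is a generic nearest-neighbor vector-quantization sketch. It is not the actual UVMC tokenizer from the paper; the codebook size, feature dimension, and random inputs are all illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(256, 32))  # 256 discrete codes, 32-dim features (made up)
motion = rng.normal(size=(100, 32))    # 100 timesteps of state-action features (made up)

# Each timestep is assigned the index of its nearest codebook entry,
# turning a continuous trajectory into a sequence of discrete codes.
dists = np.linalg.norm(motion[:, None, :] - codebook[None, :, :], axis=-1)
codes = dists.argmin(axis=1)  # shape (100,), integer code ids
print("first ten motion codes:", codes[:10])
```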

## Usage

This sample is intended for use with the [XR-1 GitHub Repository](https://github.com/Open-X-Humanoid/XR-1).
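
To fetch the sample locally, one option is the `huggingface_hub` client. This is a minimal sketch assuming the dataset is hosted on the Hugging Face Hub; the repo id below is a placeholder, so substitute the id shown at the top of this dataset page:

```python
from huggingface_hub import snapshot_download

# Placeholder repo id -- replace with this dataset's actual Hub id.
local_path = snapshot_download(
    repo_id="<org>/XR-1-Dataset-Sample",
    repo_type="dataset",
)
print("dataset downloaded to:", local_path)
```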

## Citation

```bibtex
@article{fan2025xr,
  title={XR-1: Towards Versatile Vision-Language-Action Models via Learning Unified Vision-Motion Representations},
  author={Fan, Shichao and others},
  journal={arXiv preprint arXiv:2411.02776},
  year={2025}
}
```

## License

This dataset is released under the [Apache 2.0 License](https://github.com/Open-X-Humanoid/XR-1/blob/main/LICENSE).

---

**Contact**: For questions, please open an issue on our [GitHub](https://github.com/Open-X-Humanoid/XR-1).