---
license: apache-2.0
---
# XR-1-Dataset-Sample
[[Project Page](https://github.com/Open-X-Humanoid/XR-1)] [[Paper](https://arxiv.org/abs/2411.02776v1)] [[GitHub](https://github.com/Open-X-Humanoid/XR-1)]
This repository contains a representative sample of the **XR-1** project's multi-modal dataset. The data is organized to support cross-embodiment training across humanoid robots, manipulators, and ego-centric human vision.
## Directory Structure
The dataset follows a hierarchy based on **Embodiment -> Task -> Format**:
### 1. Robot Embodiment Data (LeRobot Format)
Standard robot data (like TienKung or UR5) is organized following the [LeRobot](https://github.com/huggingface/lerobot) convention:
```text
XR-1-Dataset-Sample/
└── DUAL_ARM_TIEN_KUNG2/        # Robot Embodiment
    └── Press_Green_Button/     # Task Name
        └── lerobot/            # Data in LeRobot format
            ├── metadata.json
            ├── episodes.jsonl
            ├── videos/
            └── data/
```
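As a quick sanity check, the episode index can be inspected with nothing but the standard library. This is a minimal sketch, not the official loader; for full-featured loading (video decoding, tensor batching), use the `LeRobotDataset` class from the LeRobot repository. The per-episode field names are assumptions and may differ across LeRobot versions:

```python
import json
from pathlib import Path

# Path layout follows the tree above: Embodiment -> Task -> Format.
root = Path("XR-1-Dataset-Sample/DUAL_ARM_TIEN_KUNG2/Press_Green_Button/lerobot")

# metadata.json describes the dataset (fps, features, episode count, ...).
metadata = json.loads((root / "metadata.json").read_text())
print("dataset metadata keys:", sorted(metadata))

# episodes.jsonl holds one JSON object per episode.
with (root / "episodes.jsonl").open() as f:
    episodes = [json.loads(line) for line in f if line.strip()]

print(f"{len(episodes)} episode(s) in the sample")
# NOTE: the exact per-episode fields depend on the LeRobot version that
# exported the data; inspect episodes[0] to confirm before relying on them.
print("first episode record:", episodes[0])
```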
### 2. Human/Ego-centric Data (Ego4D Format)
For ego-centric data (e.g., Ego4D subsets used for Stage 1 UVMC pre-training), the structure is adapted to its native recording format:
```text
XR-1-Dataset-Sample/
└── Ego4D/                  # Human ego-centric source
    ├── files.json          # Unified annotation/mapping file
    └── files/              # Raw data storage
        └── [video_id].mp4  # Egocentric video clips
```
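The mapping file can be inspected the same way. Below is a minimal sketch that assumes `files.json` maps video IDs to annotation records; the exact schema is not specified here, so treat the accessed structure as hypothetical and verify against the real file:

```python
import json
from pathlib import Path

ego_root = Path("XR-1-Dataset-Sample/Ego4D")

# files.json is the unified annotation/mapping file; its schema is assumed
# here to be {video_id: annotation_record} -- confirm against the real data.
mapping = json.loads((ego_root / "files.json").read_text())

for video_id, record in list(mapping.items())[:3]:
    clip_path = ego_root / "files" / f"{video_id}.mp4"
    print(video_id, "->", clip_path, "| exists:", clip_path.exists())
```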
## Data Modalities
* **Vision**: High-frequency RGB streams from multiple camera perspectives.
* **Motion**: Continuous state-action pairs, tokenized into **UVMC** (Unified Vision-Motion Codes) for XR-1 training; see the toy tokenizer sketch after this list.
* **Language**: Natural language instructions paired with each episode for VLA alignment.
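XR-1's actual UVMC tokenizer is defined in the paper and learned jointly with vision. Purely to illustrate what "tokenizing continuous motion into discrete codes" means, here is a toy nearest-neighbor vector quantizer over state-action vectors; the codebook size, dimensions, and random codebook are invented for the example and are not XR-1's method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: a codebook of K discrete motion codes. XR-1 learns its
# codes from data; here the codebook is random, purely for illustration.
K, dim = 256, 14          # e.g. 7-DoF state + 7-DoF action, both invented
codebook = rng.normal(size=(K, dim))

def tokenize(motion: np.ndarray) -> np.ndarray:
    """Map each continuous state-action vector in (T, dim) to the index
    of its nearest codebook entry, yielding a sequence of discrete codes."""
    # (T, K) matrix of squared distances to every code
    d2 = ((motion[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)

trajectory = rng.normal(size=(50, dim))   # 50 timesteps of fake motion
codes = tokenize(trajectory)
print(codes[:10])  # discrete tokens a VLA model can predict autoregressively
```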
## Usage
This sample is intended for use with the [XR-1 GitHub Repository](https://github.com/Open-X-Humanoid/XR-1).
## Citation
```bibtex
@article{fan2025xr,
title={XR-1: Towards Versatile Vision-Language-Action Models via Learning Unified Vision-Motion Representations},
author={Fan, Shichao and others},
journal={arXiv preprint arXiv:2411.02776},
year={2025}
}
```
## License
This dataset is released under the [Apache-2.0 License](https://github.com/Open-X-Humanoid/XR-1/blob/main/LICENSE).
---
**Contact**: For questions, please open an issue on our [GitHub](https://github.com/Open-X-Humanoid/XR-1).