---
license: apache-2.0
---
# XR-1-Dataset-Sample

[[Project Page](https://github.com/Open-X-Humanoid/XR-1)] [[Paper](https://arxiv.org/abs/2411.02776v1)] [[GitHub](https://github.com/Open-X-Humanoid/XR-1)]

This repository contains a representative sample of the **XR-1** project's multi-modal dataset. The data is organized to support cross-embodiment training across humanoid robots, manipulators, and egocentric (human) vision sources.

## 📂 Directory Structure

The dataset follows an **Embodiment -> Task -> Format** hierarchy:

### 1. Robot Embodiment Data (LeRobot Format)

Standard robot data (e.g., TienKung or UR5) is organized following the [LeRobot](https://github.com/huggingface/lerobot) convention:

```text
XR-1-Dataset-Sample/
└── DUAL_ARM_TIEN_KUNG2/              # Robot Embodiment
    └── Press_Green_Button/           # Task Name
        └── lerobot/                  # Data in LeRobot format
            ├── metadata.json
            ├── episodes.jsonl
            ├── videos/
            └── data/
```
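
Assuming the layout above, the episode index can be inspected with nothing more than the standard library. This is a minimal sketch (the local path and printed fields are illustrative); for full loading of videos and state-action arrays, use the tooling in the XR-1 / LeRobot repositories.

```python
import json
from pathlib import Path

# Illustrative local path to the sample task shown in the tree above.
task_root = Path("XR-1-Dataset-Sample/DUAL_ARM_TIEN_KUNG2/Press_Green_Button/lerobot")

# metadata.json holds dataset-level information (keys depend on the export version).
metadata = json.loads((task_root / "metadata.json").read_text())
print(metadata)

# episodes.jsonl stores one JSON object per episode.
with (task_root / "episodes.jsonl").open() as f:
    episodes = [json.loads(line) for line in f if line.strip()]

print(f"{len(episodes)} episodes in this sample")
print(episodes[0])  # exact fields depend on the LeRobot export version
```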

### 2. Human/Ego-centric Data (Ego4D Format)

For ego-centric data (e.g., Ego4D subsets used for Stage 1 UVMC pre-training), the structure is adapted to its native recording format:

```text
XR-1-Dataset-Sample/
└── Ego4D/                    # Human ego-centric source
    ├── files.json            # Unified annotation/mapping file
    └── files/                # Raw data storage
        └── [video_id].mp4    # Egocentric video clips
```
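
A quick way to enumerate the sample clips, assuming only the layout above (the schema of `files.json` is not documented here, so the sketch simply loads it for inspection):

```python
import json
from pathlib import Path

# Illustrative local path to the ego-centric portion of the sample.
ego_root = Path("XR-1-Dataset-Sample/Ego4D")

# Unified annotation/mapping file; inspect its contents to see which fields are present.
annotations = json.loads((ego_root / "files.json").read_text())
print(type(annotations).__name__, "loaded from files.json")

# Raw egocentric clips are stored as files/[video_id].mp4.
for clip in sorted((ego_root / "files").glob("*.mp4")):
    print(clip.stem, f"{clip.stat().st_size / 1e6:.1f} MB")
```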

## 🤖 Data Modalities

* **Vision**: High-frequency RGB streams from multiple camera perspectives.
* **Motion**: Continuous state-action pairs, which are tokenized into **UVMC** (Unified Vision-Motion Codes) for XR-1 training.
* **Language**: Natural language instructions paired with each episode for VLA alignment.
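
To make the pairing concrete, the sketch below shows how a single timestep bundles these three modalities. The field names are purely illustrative, not the dataset's actual schema; inspect the files above for the real keys.

```python
# Illustrative only: hypothetical field names, not the dataset's actual schema.
step = {
    "observation": {
        "images": {"head_cam": "<H x W x 3 RGB frame>", "left_wrist_cam": "<H x W x 3 RGB frame>"},
        "state": [0.12, -0.34, 0.56],        # proprioceptive state (motion modality, input side)
    },
    "action": [0.10, -0.30, 0.50],           # commanded targets forming the state-action pair
    "language_instruction": "Press the green button.",  # per-episode instruction for VLA alignment
}
```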

## 🛠 Usage

This sample is intended for use with the [XR-1 GitHub Repository](https://github.com/Open-X-Humanoid/XR-1).
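
To fetch the full sample locally, the standard Hugging Face Hub tooling can be used. A minimal sketch; the `repo_id` below is a placeholder, so substitute this dataset's actual Hub path:

```python
from huggingface_hub import snapshot_download

# Placeholder repo_id: replace with this dataset's Hub path.
local_dir = snapshot_download(
    repo_id="<org-or-user>/XR-1-Dataset-Sample",
    repo_type="dataset",
)
print("Sample downloaded to:", local_dir)
```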

## 📝 Citation

```bibtex
@article{fan2025xr,
  title={XR-1: Towards Versatile Vision-Language-Action Models via Learning Unified Vision-Motion Representations},
  author={Fan, Shichao and others},
  journal={arXiv preprint arXiv:2411.02776},
  year={2025}
}
```

## 📜 License

This dataset is released under the [MIT License](https://github.com/Open-X-Humanoid/XR-1/blob/main/LICENSE).

---

**Contact**: For questions, please open an issue on our [GitHub](https://github.com/Open-X-Humanoid/XR-1).