# XR-1-Dataset-Sample
[Project Page] [Paper] [GitHub]
This repository contains a representative sample of the XR-1 project's multi-modal dataset. The data is organized to support cross-embodiment training for Humanoids, Manipulators, and Ego-centric vision.
## Directory Structure
The dataset follows a hierarchy based on Embodiment -> Task -> Format:
### 1. Robot Embodiment Data (LeRobot Format)
Standard robot data (e.g., TienKung or UR5) is organized following the LeRobot convention:
```
XR-1-Dataset-Sample/
└── DUAL_ARM_TIEN_KUNG2/        # Robot Embodiment
    └── Press_Green_Button/     # Task Name
        └── lerobot/            # Data in LeRobot format
            ├── metadata.json
            ├── episodes.jsonl
            ├── videos/
            └── data/
```
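A layout like this can be indexed with standard tools. The sketch below writes and reads a toy `episodes.jsonl` (one JSON object per line, one line per episode); the field names (`episode_index`, `length`, `task`) are illustrative assumptions, not the dataset's actual schema.

```python
import json
from pathlib import Path

# Hypothetical path mirroring the tree above; real field names may differ.
root = Path("XR-1-Dataset-Sample/DUAL_ARM_TIEN_KUNG2/Press_Green_Button/lerobot")
root.mkdir(parents=True, exist_ok=True)

# Write a toy episodes.jsonl index.
episodes = [
    {"episode_index": 0, "length": 120, "task": "Press_Green_Button"},
    {"episode_index": 1, "length": 98, "task": "Press_Green_Button"},
]
with open(root / "episodes.jsonl", "w") as f:
    for ep in episodes:
        f.write(json.dumps(ep) + "\n")

# Read it back line by line, as JSONL is meant to be consumed.
with open(root / "episodes.jsonl") as f:
    loaded = [json.loads(line) for line in f]
print(len(loaded), loaded[0]["task"])
```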
### 2. Human/Ego-centric Data (Ego4D Format)
For ego-centric data (e.g., Ego4D subsets used for Stage 1 UVMC pre-training), the structure is adapted to its native recording format:
```
XR-1-Dataset-Sample/
└── Ego4D/                  # Human ego-centric source
    ├── files.json          # Unified annotation/mapping file
    └── files/              # Raw data storage
        └── [video_id].mp4  # Egocentric video clips
```
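The unified annotation file can be joined against the raw clips under `files/` by resolving each video ID to its `.mp4` path. A minimal sketch, assuming a simple `{video_id: annotation}` schema that may differ from the real `files.json`:

```python
import json
from pathlib import Path

root = Path("XR-1-Dataset-Sample/Ego4D")
(root / "files").mkdir(parents=True, exist_ok=True)

# Toy annotation/mapping file; the actual schema may differ.
mapping = {
    "clip_0001": {"narration": "person presses a button"},
    "clip_0002": {"narration": "person picks up a cup"},
}
with open(root / "files.json", "w") as f:
    json.dump(mapping, f)

# Resolve each video_id to its expected clip path under files/.
with open(root / "files.json") as f:
    index = json.load(f)
paths = {vid: root / "files" / f"{vid}.mp4" for vid in index}
print(sorted(paths))
```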
## Data Modalities
- Vision: High-frequency RGB streams from multiple camera perspectives.
- Motion: Continuous state-action pairs, which are tokenized into UVMC (Unified Vision-Motion Codes) for XR-1 training.
- Language: Natural language instructions paired with each episode for VLA alignment.
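For illustration only, a single timestep bundling the three modalities might look like the record below; every key and value here is a hypothetical placeholder, not the dataset's actual schema.

```python
# Hypothetical per-step record pairing vision, motion, and language
# (all field names and values are assumptions for illustration).
step = {
    "observation": {  # RGB streams from multiple camera perspectives
        "cam_top": "videos/ep0_top.mp4",
        "cam_wrist": "videos/ep0_wrist.mp4",
    },
    "state": [0.12, -0.34, 0.56],   # continuous robot state
    "action": [0.10, -0.30, 0.60],  # continuous action target
    "language": "press the green button",  # instruction for VLA alignment
}
print(sorted(step))
```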
## Usage
This sample is intended for use with the XR-1 GitHub Repository.
## Citation
```bibtex
@article{fan2025xr,
  title={XR-1: Towards Versatile Vision-Language-Action Models via Learning Unified Vision-Motion Representations},
  author={Fan, Shichao and others},
  journal={arXiv preprint arXiv:2411.02776},
  year={2025}
}
```
## License
This dataset is released under the MIT License.
Contact: For questions, please open an issue on our GitHub.