XR-1-Dataset-Sample

[Project Page] [Paper] [GitHub]

This repository contains a representative sample of the XR-1 project's multi-modal dataset. The data is organized to support cross-embodiment training across humanoid robots, manipulators, and ego-centric human video.

πŸ“‚ Directory Structure

The dataset follows an Embodiment → Task → Format hierarchy:

1. Robot Embodiment Data (LeRobot Format)

Standard robot data (like TienKung or UR5) is organized following the LeRobot convention:

XR-1-Dataset-Sample/
└── DUAL_ARM_TIEN_KUNG2/                 # Robot embodiment
    └── Press_Green_Button/              # Task name
        └── lerobot/                     # Data in LeRobot format
            ├── metadata.json            # Dataset-level metadata
            ├── episodes.jsonl           # One JSON record per episode
            ├── videos/                  # Multi-view RGB streams
            └── data/                    # State-action trajectories
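A minimal loading sketch in Python (standard library only), assuming a local copy of this sample at the path shown above. The exact schema of metadata.json and episodes.jsonl follows the LeRobot convention and is not fully documented in this card, so the sketch only inspects the files generically:

import json
from pathlib import Path

# Path to a local checkout of this sample (adjust to your download location).
root = Path("XR-1-Dataset-Sample/DUAL_ARM_TIEN_KUNG2/Press_Green_Button/lerobot")

# metadata.json: dataset-level metadata; episodes.jsonl: one JSON record per line.
metadata = json.loads((root / "metadata.json").read_text())
episodes = [json.loads(line) for line in (root / "episodes.jsonl").open() if line.strip()]

print(f"{len(episodes)} episodes; metadata keys: {sorted(metadata)}")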

2. Human/Ego-centric Data (Ego4D Format)

For ego-centric data (e.g., Ego4D subsets used for Stage 1 UVMC pre-training), the structure is adapted to its native recording format:

XR-1-Dataset-Sample/
└── Ego4D/                     # Human ego-centric source
    ├── files.json             # Unified annotation/mapping file
    └── files/                 # Raw data storage
        └── [video_id].mp4     # Egocentric video clips
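The Ego4D-style portion can be inspected the same way. Since the schema of files.json is not documented in this card, the sketch below loads it generically and simply enumerates the raw clips:

import json
from pathlib import Path

root = Path("XR-1-Dataset-Sample/Ego4D")

# files.json maps annotations to raw clips; its exact schema is not documented
# here, so we only load it without assuming any particular structure.
mapping = json.loads((root / "files.json").read_text())

# Raw egocentric clips are stored as files/[video_id].mp4.
clips = sorted((root / "files").glob("*.mp4"))
print(f"{len(clips)} clips found; {len(mapping)} mapping entries")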

πŸ€– Data Modalities

  • Vision: High-frequency RGB streams from multiple camera perspectives (a frame-decoding sketch follows this list).
  • Motion: Continuous state-action pairs, which are tokenized into UVMC (Unified Vision-Motion Codes) for XR-1 training.
  • Language: Natural language instructions paired with each episode for VLA alignment.
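As a quick check of the vision modality, the sketch below decodes one episode video with OpenCV. The clip file name is illustrative, not guaranteed by this sample; substitute an actual file from the videos/ directory:

import cv2  # pip install opencv-python

# Illustrative path; replace with a real file from lerobot/videos/.
path = "XR-1-Dataset-Sample/DUAL_ARM_TIEN_KUNG2/Press_Green_Button/lerobot/videos/episode_000000.mp4"

cap = cv2.VideoCapture(path)
fps = cap.get(cv2.CAP_PROP_FPS)
n = 0
while True:
    ok, frame = cap.read()  # frame is an HxWx3 BGR uint8 array
    if not ok:
        break
    n += 1
cap.release()
print(f"decoded {n} frames at {fps:.1f} fps")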

πŸ›  Usage

This sample is intended for use with the XR-1 GitHub Repository.

πŸ“ Citation

@article{fan2025xr,
  title={XR-1: Towards Versatile Vision-Language-Action Models via Learning Unified Vision-Motion Representations},
  author={Fan, Shichao and others},
  journal={arXiv preprint arXiv:2411.02776},
  year={2025}
}

πŸ“œ License

This dataset is released under the MIT License.


Contact: For questions, please open an issue on our GitHub.
