
EgoPoseVR Dataset

Overview

The EgoPoseVR Dataset is a large-scale synthetic dataset for egocentric full-body pose estimation in virtual reality.
It contains paired RGB-D observations, pose annotations, HMD tracking signals, and SMPL body parameters for temporally aligned motion clips.

Dataset Statistics

  • Total samples: 18,235 motion clips
  • Scenes: 7 virtual scenes (Scene0 - Scene6)
  • Train / Val / Test: 14,702 / 1,827 / 1,706
  • Data format: .npz (NumPy compressed archives)
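Each clip is stored as a compressed NumPy archive. The exact key names inside the dataset's `.npz` files are not documented here, so the sketch below builds a dummy clip with hypothetical field names (`rgb`, `depth`, `smpl_pose` are assumptions, not the dataset's real keys) and then shows the standard way to inspect any `.npz` archive:

```python
import numpy as np

# Hypothetical 100-frame clip for illustration only: these key names and
# shapes are assumptions, not the dataset's documented schema.
dummy = {
    "rgb": np.zeros((100, 64, 64, 3), dtype=np.uint8),    # assumed RGB frames
    "depth": np.zeros((100, 64, 64), dtype=np.float32),   # assumed depth maps
    "smpl_pose": np.zeros((100, 72), dtype=np.float32),   # assumed SMPL pose params
}
np.savez_compressed("clip_0000.npz", **dummy)

# Inspecting an archive works the same regardless of the actual keys:
with np.load("clip_0000.npz") as clip:
    keys = sorted(clip.files)
    shapes = {k: clip[k].shape for k in keys}
    for k in keys:
        print(k, shapes[k])
```

Listing `clip.files` first is a safe way to discover the real schema of a downloaded clip before writing a loader around specific keys.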

For more details, please visit the Project Page or check the official repository.


Data Sources

The motion data is derived from the AMASS dataset.
In total, 2,344 motion sequences are extracted. Each sequence folder corresponds to one continuous motion sequence, and each .npz file contains a 100-frame clip sampled from that sequence.
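The clip extraction described above can be sketched as follows; note that the dataset's actual sampling stride and any overlap between clips are not specified, so the non-overlapping stride here is an assumption:

```python
import numpy as np

def sample_clips(sequence, clip_len=100, stride=100):
    """Split one continuous motion sequence (T x D) into fixed-length clips.

    Hedged sketch of the clip extraction idea: stride == clip_len (no
    overlap) is an assumption, not the dataset's documented procedure.
    """
    T = sequence.shape[0]
    return [sequence[s:s + clip_len]
            for s in range(0, T - clip_len + 1, stride)]

# A 350-frame sequence of 72-D pose vectors yields 3 full 100-frame clips.
seq = np.zeros((350, 72), dtype=np.float32)
clips = sample_clips(seq)
print(len(clips), clips[0].shape)  # 3 (100, 72)
```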

🎬 Dataset Video


Directory Structure

EgoPoseVR_Dataset/
├── Scene0/
├── Scene1/
├── Scene2/
├── Scene3/
├── Scene4/
├── Scene5/
├── Scene6/
│   └── AllDataPath_{Source}_{split}_{id}/
│       └── {clip_id}.npz
├── train_npz_paths.txt
├── val_npz_paths.txt
├── test_npz_paths.txt
└── all_npz_paths.txt
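The split files above index the dataset by listing `.npz` paths. A minimal sketch of reading one, assuming each line holds a single clip path relative to the dataset root (the per-line format is an assumption; the folder name `AllDataPath_AMASS_train_0` is a hypothetical instance of the `AllDataPath_{Source}_{split}_{id}` template):

```python
from pathlib import Path

# Build a tiny stand-in for train_npz_paths.txt so the sketch is runnable;
# with the real dataset, only the reading step below is needed.
root = Path("EgoPoseVR_Dataset")
(root / "Scene0").mkdir(parents=True, exist_ok=True)
dummy_lines = [f"Scene0/AllDataPath_AMASS_train_0/{i:04d}.npz" for i in range(3)]
(root / "train_npz_paths.txt").write_text("\n".join(dummy_lines) + "\n")

# Read the split file: one relative clip path per line (assumed format).
train_paths = [root / line
               for line in (root / "train_npz_paths.txt").read_text().splitlines()
               if line]
print(len(train_paths))
```

Resolving every line against the dataset root keeps the loader independent of where the archive is unpacked.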

Citation

If you find our code or paper helpful, please consider citing:

@article{cheng2026egoposevr,
  title={EgoPoseVR: Spatiotemporal Multi-Modal Reasoning for Egocentric Full-Body Pose in Virtual Reality},
  author={Cheng, Haojie and Ong, Shaun Jing Heng and Cai, Shaoyu and Koh, Aiden Tat Yang and Ouyang, Fuxi and Khoo, Eng Tat},
  journal={arXiv preprint arXiv:2602.05590},
  year={2026}
}