---
annotations_creators:
- other
language:
- en
language_creators:
- other
license:
- odc-by
multilinguality:
- monolingual
pretty_name: RGB-D-SegmentEgocentricBodies
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- egocentric segmentation
- extended reality
- xr
- human-body
- mixed-reality
- avatar
task_categories:
- image-segmentation
- depth-estimation
task_ids:
- semantic-segmentation
dataset_info:
  features:
  - name: image
    dtype: image
  - name: depth
    dtype: image
  - name: mask
    dtype: image
  - name: synthetic_depth
    dtype: image
  splits:
  - name: train
    num_examples: 8005
  - name: validation
    num_examples: 1069
---
# RGB-D Segment Egocentric Bodies Dataset

## Overview
The RGB-D Segment Egocentric Bodies Dataset is a multi-modal dataset designed for egocentric body segmentation and depth-aware perception. It contains synchronized RGB images, real depth maps, segmentation masks, and synthetic depth data, captured from an egocentric point of view.
The dataset is intended to support research in egocentric vision, XR/VR/AR, human-computer interaction, and depth-aware computer vision.
## Dataset Description

The dataset extends the EgoBodies dataset (see https://arxiv.org/pdf/2207.01296 for details) with depth frames. We provide two kinds of depth: real depth maps acquired with two different sensors (Intel RealSense D435 and RealSense L515), and synthetic depth maps estimated with Depth Anything (Yang et al., 2024). The dataset comprises recordings of more than 40 different users in in-the-wild scenarios.
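As a rough illustration of how depth maps like the synthetic ones can be regenerated from the RGB frames, here is a minimal sketch using the Hugging Face `transformers` depth-estimation pipeline. The specific checkpoint (`LiheYoung/depth-anything-small-hf`) and file paths are assumptions; the card does not state which Depth Anything variant produced the released maps:

```python
from PIL import Image
from transformers import pipeline

# ASSUMPTION: checkpoint and file names are illustrative; the card does not
# specify which Depth Anything variant produced synthetic_depths/.
estimator = pipeline("depth-estimation", model="LiheYoung/depth-anything-small-hf")

rgb = Image.open("train/images/0001.png")   # hypothetical sample path
pred = estimator(rgb)["depth"]              # PIL image of relative depth
pred.save("synthetic_depth_estimate.png")
```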
## Dataset Structure

```
RGB-D-SegmentEgocentricBodies/
│
├── train/                  # ~3.11 GB
│   ├── images/             # RGB frames
│   ├── depths/             # Real depth maps
│   ├── masks/              # Segmentation masks
│   └── synthetic_depths/   # Synthetic or enhanced depth maps
│
├── val/                    # ~401 MB
│   ├── images/
│   ├── depths/
│   ├── masks/
│   └── synthetic_depths/
│
└── .gitattributes          # Git LFS configuration
```
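Because the four modalities live in parallel folders, a sample can be assembled by matching file stems across them. The helper below is a minimal sketch under that assumption (it mirrors the naming convention used in the Example Usage section; `list_samples` is not part of the dataset itself):

```python
import os

def list_samples(root, split):
    """Return sorted sample stems present in all four modality folders."""
    stems = None
    for folder in ("images", "depths", "masks", "synthetic_depths"):
        names = {os.path.splitext(f)[0]
                 for f in os.listdir(os.path.join(root, split, folder))}
        stems = names if stems is None else stems & names
    return sorted(stems)
```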
## Intended Use

This dataset is suitable for:

- Egocentric human / body-part segmentation
- Depth-aware perception models
- XR avatar embodiment and telepresence
- Mixed-reality interaction research
- Training and benchmarking RGB-D models (see the sketch after this list)
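For the RGB-D training use case, a common starting point is to stack RGB and depth into a single four-channel input. The sketch below is illustrative only; the per-image depth normalization is an assumption, not a dataset specification, so check the actual depth encoding of each sensor first:

```python
import numpy as np
from PIL import Image

def to_rgbd(rgb_path, depth_path):
    """Stack an RGB frame and its depth map into an H x W x 4 float32 array."""
    rgb = np.asarray(Image.open(rgb_path).convert("RGB"), dtype=np.float32) / 255.0
    depth = np.asarray(Image.open(depth_path), dtype=np.float32)
    depth = depth / max(float(depth.max()), 1e-6)  # ASSUMPTION: per-image scaling to [0, 1]
    return np.concatenate([rgb, depth[..., None]], axis=-1)
```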
## Acknowledgements

This dataset was created by Nokia ExtendedRealityLab and developed in the context of research on egocentric perception and immersive telepresence. If you use this dataset in academic work, please cite the following papers:
```bibtex
@article{gonzalez2023full,
  title={Full body video-based self-avatars for mixed reality: from e2e system to user study},
  author={Gonzalez Morin, Diego and Gonzalez-Sosa, Ester and Perez, Pablo and Villegas, Alvaro},
  journal={Virtual Reality},
  volume={27},
  number={3},
  pages={2129--2147},
  year={2023},
  publisher={Springer}
}

@article{gonzalez2022real,
  title={Real time egocentric segmentation for video-self avatar in mixed reality},
  author={Gonzalez-Sosa, Ester and Gajic, Andrija and Gonzalez-Morin, Diego and Robledo, Guillermo and Perez, Pablo and Villegas, Alvaro},
  journal={arXiv preprint arXiv:2207.01296},
  year={2022}
}

@article{tobaruela2026egocentricrgbd,
  title={RGB-D Egocentric Segmentation of Human Bodies for XR Applications},
  author={Pedros-Tobaruela, Sofia and Gonzalez-Sosa, Ester and Perez, Pablo and Villegas, Alvaro},
  journal={submitted}
}
```
## Example Usage

```python
from PIL import Image
import os

def load_sample(root, split, idx):
    """Load the four aligned modalities for one sample."""
    base = os.path.join(root, split)
    rgb = Image.open(os.path.join(base, "images", f"{idx}.png"))              # RGB frame
    depth = Image.open(os.path.join(base, "depths", f"{idx}.png"))            # real depth map
    mask = Image.open(os.path.join(base, "masks", f"{idx}.png"))              # segmentation mask
    synth = Image.open(os.path.join(base, "synthetic_depths", f"{idx}.png"))  # synthetic depth
    return rgb, depth, mask, synth
```
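A hypothetical call, assuming a sample stem such as `0001` (check the actual file names in the downloaded split):

```python
rgb, depth, mask, synth = load_sample("RGB-D-SegmentEgocentricBodies", "train", "0001")
print(rgb.size, depth.size, mask.size, synth.size)
```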