```
annotations_creators:
- other
language:
- en
language_creators:
- other
license:
- odc-by
multilinguality:
- monolingual
pretty_name: 'RGB-D-SegmentEgocentricBodies'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- egocentric segmentation
- extended reality
- xr
- human-body
- mixed-reality
- avatar
task_categories:
- image-segmentation
- depth-estimation
task_ids:
- semantic-segmentation
features:
    - name: image
      dtype: image
    - name: depth
      dtype: image
    - name: mask
      dtype: image
    - name: synthetic_depth
      dtype: image
splits:
    - name: train
      num_examples: 8005
    - name: validation
      num_examples: 1069
```
# RGB-D Segment Egocentric Bodies Dataset

## Overview

The **RGB-D Segment Egocentric Bodies Dataset** is a multi-modal dataset designed for **egocentric body segmentation and depth-aware perception**. It contains synchronized **RGB images**, **real depth maps**, **segmentation masks**, and **synthetic depth data**, captured from an egocentric point of view.  
The dataset is intended to support research in **egocentric vision**, **XR/VR/AR**, **human–computer interaction**, and **depth-aware computer vision**.
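
If the dataset is published on the Hugging Face Hub with the features and splits declared in the metadata above, it can be loaded directly with the `datasets` library. The snippet below is only a minimal sketch: the repository id `Nokia/RGB-D-SegmentEgocentricBodies` is a placeholder and must be replaced with the actual path of this dataset card.

```python
from datasets import load_dataset

# Placeholder repository id: replace with the actual Hub path of this dataset.
ds = load_dataset("Nokia/RGB-D-SegmentEgocentricBodies")

sample = ds["train"][0]
rgb = sample["image"]                     # PIL.Image, egocentric RGB frame
real_depth = sample["depth"]              # PIL.Image, real sensor depth map
body_mask = sample["mask"]                # PIL.Image, segmentation mask
synth_depth = sample["synthetic_depth"]   # PIL.Image, estimated depth map
print(len(ds["train"]), len(ds["validation"]))
```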

## Dataset Description

The dataset extends the EgoBodies dataset (see https://arxiv.org/pdf/2207.01296 for more information) with depth frames. We provide two kinds of depth: real depth images acquired with two different sensors, the RealSense D435 and the RealSense L515, and synthetic depth maps estimated with Depth-Anything by Yang et al. (2024). The data covers more than 40 different users recorded in in-the-wild scenarios.
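
The card does not state how depth is encoded on disk. RealSense depth is commonly exported as 16-bit PNG in millimetres, so the helper below assumes that convention; treat the scale factor as an assumption and adjust it to the actual files.

```python
import numpy as np
from PIL import Image

def depth_to_meters(path, scale=0.001):
    """Assumed encoding: 16-bit PNG with depth in millimetres (scale = 0.001 m per unit)."""
    raw = np.array(Image.open(path), dtype=np.float32)
    return raw * scale

def mean_body_depth(depth_m, mask):
    """Average distance (metres) of body pixels, ignoring zero/missing sensor readings."""
    body = (np.array(mask.convert("L")) > 0) & (depth_m > 0)
    return float(depth_m[body].mean()) if body.any() else float("nan")
```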

## Dataset Structure

```
RGB-D-SegmentEgocentricBodies/
│
├── train/                  # ~3.11 GB
│   ├── images/             # RGB frames
│   ├── depths/             # Real depth maps
│   ├── masks/              # Segmentation masks
│   └── synthetic_depths/   # Synthetic or enhanced depth maps
│
├── val/                    # ~401 MB
│   ├── images/
│   ├── depths/
│   ├── masks/
│   └── synthetic_depths/
│
└── .gitattributes          # Git LFS configuration
```
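
The snippet below runs a quick consistency check over this layout. It assumes that corresponding frames share the same file name across the four sub-folders, which is not stated explicitly in the card.

```python
import os

def check_split(root, split):
    """Verify that every RGB frame has a depth, mask and synthetic-depth counterpart."""
    base = os.path.join(root, split)
    names = set(os.listdir(os.path.join(base, "images")))
    for sub in ("depths", "masks", "synthetic_depths"):
        missing = names - set(os.listdir(os.path.join(base, sub)))
        if missing:
            print(f"{split}/{sub}: {len(missing)} frames without a counterpart")
    print(f"{split}: {len(names)} RGB frames checked")

for split in ("train", "val"):
    check_split("RGB-D-SegmentEgocentricBodies", split)
```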
## Intended Use

This dataset is suitable for:

- Egocentric human / body-part segmentation  
- Depth-aware perception models  
- XR avatar embodiment and telepresence  
- Mixed-reality interaction research  
- Training and benchmarking RGB-D models (a minimal evaluation sketch follows this list)
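
For the benchmarking use case, a common per-frame metric is intersection-over-union between a predicted body mask and the ground-truth mask. A minimal sketch, assuming binary masks in which non-zero pixels mark the body:

```python
import numpy as np

def body_iou(pred_mask, gt_mask):
    """Per-frame IoU of binary body masks (non-zero pixels = body)."""
    pred = np.asarray(pred_mask) > 0
    gt = np.asarray(gt_mask) > 0
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 1.0  # both masks empty
    return float(np.logical_and(pred, gt).sum() / union)
```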

## Acknowledgements

This dataset was created by Nokia ExtendedRealityLab and developed in the context of research on egocentric perception and immersive telepresence.
If you use this dataset in academic work, please cite the following papers:

```bibtex
@article{gonzalez2023full,
  title={Full body video-based self-avatars for mixed reality: from e2e system to user study},
  author={Gonzalez Morin, Diego and Gonzalez-Sosa, Ester and Perez, Pablo and Villegas, Alvaro},
  journal={Virtual Reality},
  volume={27},
  number={3},
  pages={2129--2147},
  year={2023},
  publisher={Springer}
}

@article{gonzalez2022real,
  title={Real time egocentric segmentation for video-self avatar in mixed reality},
  author={Gonzalez-Sosa, Ester and Gajic, Andrija and Gonzalez-Morin, Diego and Robledo, Guillermo and Perez, Pablo and Villegas, Alvaro},
  journal={arXiv preprint arXiv:2207.01296},
  year={2022}
}

@article{tobaruela2026egocentricrgbd,
  title={RGB-D Egocentric Segmentation of Human Bodies for XR Applications},
  author={Pedros-Tobaruela, Sofia and Gonzalez-Sosa, Ester and Perez, Pablo and Villegas, Alvaro},
  journal={submitted}
}
```


## Example Usage

```python
from PIL import Image
import numpy as np
import os

def load_sample(root, split, idx):
    """Load one synchronized RGB / depth / mask / synthetic-depth frame by index."""
    base = os.path.join(root, split)
    rgb = Image.open(os.path.join(base, "images", f"{idx}.png"))
    depth = Image.open(os.path.join(base, "depths", f"{idx}.png"))
    mask = Image.open(os.path.join(base, "masks", f"{idx}.png"))
    synth = Image.open(os.path.join(base, "synthetic_depths", f"{idx}.png"))
    return rgb, depth, mask, synth
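
# A quick usage check: load the first frame of the training split and overlay the
# body mask on the RGB image. The dataset root and the "<idx>.png" file naming are
# assumptions; adjust them to the files actually present on disk.
rgb, depth, mask, synth = load_sample("RGB-D-SegmentEgocentricBodies", "train", 0)
print(rgb.size, depth.mode, mask.mode, synth.mode)

mask_np = np.array(mask.convert("L")) > 0
overlay = np.array(rgb.convert("RGB"))
overlay[mask_np] = (0.5 * overlay[mask_np] + 0.5 * np.array([255, 0, 0])).astype(np.uint8)
Image.fromarray(overlay).save("overlay_check.png")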