---
annotations_creators:
- other
language:
- en
language_creators:
- other
license:
- odc-by
multilinguality:
- monolingual
pretty_name: 'RGB-SegmentEgocentricBodies-InclusiveValSet'
size_categories:
- 1K<n<10K
source_datasets:
- original
tags:
- egocentric segmentation
- extended reality
- xr
- human-body
- mixed-reality
- bias
- avatar
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
features:
- name: image
  dtype: image
splits:
- name: val
  num_examples: 3003
---

# RGB-D Segment Egocentric Bodies Dataset

## Overview and Dataset Description

The **RGB-D Segment Egocentric Bodies Inclusive Validation Dataset** is a set of 3003 validation images for the task of **egocentric segmentation**.
The dataset includes multiple users representing a diverse range of skin tones, covering the full spectrum of the Fitzpatrick skin type scale.
It is intended to support research in **XR/VR/AR**, **human–computer interaction**, and **bias mitigation**.

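If the dataset is published on the Hugging Face Hub, it can be loaded with the `datasets` library. A minimal sketch follows; note that the repository id is a placeholder, not a confirmed path:

```python
# Minimal loading sketch. NOTE: "nokia/rgb-segment-egocentric-bodies" is a
# placeholder repository id; substitute the dataset's actual Hub path.
from datasets import load_dataset

ds = load_dataset("nokia/rgb-segment-egocentric-bodies", split="val")
print(ds.num_rows)      # expected: 3003, per the metadata above
image = ds[0]["image"]  # the single declared feature, decoded as a PIL image
```
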
The dataset comprises 33 users. For each user, the full set of videos was recorded twice with two different outfits; we decided to register the second recording as a new user, for a total of 66 effective users.

For each user, there are 16 videos, resulting from the combination of 2 standing actions × 2 outfits plus 2 seated actions × 3 table scenarios × 2 outfits (4 + 12 = 16); the lists below enumerate these conditions, and the short sketch after them reproduces the count.

The standing actions are:

- Walking while looking forward
- Walking while looking at a mobile phone

The seated actions are:

- Typing on a computer keyboard
- Writing on paper

Each seated action was performed in the following three table scenarios:

- Red table with a white wall
- White table with a white wall
- Wooden table with a black wall
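
As a sanity check on the 16-video count, here is a minimal sketch; the label strings are illustrative and do not necessarily match the dataset's actual file-naming scheme:

```python
# Enumerate the 16 recording conditions per user described above.
from itertools import product

outfits = ["outfit_1", "outfit_2"]
standing = ["walk_looking_forward", "walk_looking_at_phone"]
seated = ["typing_on_keyboard", "writing_on_paper"]
tables = ["red_table_white_wall", "white_table_white_wall", "wooden_table_black_wall"]

# Standing actions have no table scenario; seated actions combine with all three.
conditions = [(action, None, outfit) for action, outfit in product(standing, outfits)]
conditions += list(product(seated, tables, outfits))
assert len(conditions) == 16  # 2*2 + 2*3*2
```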

The pixel-wise annotations for these images are a work in progress.

## Acknowledgements

This dataset was created by Nokia ExtendedRealityLab and developed in the context of research on egocentric perception and immersive telepresence. If you use this dataset in academic work, please cite the following paper:

```bibtex
@article{gonzalez2023full,
  title={Full body video-based self-avatars for mixed reality: from e2e system to user study},
  author={Gonzalez Morin, Diego and Gonzalez-Sosa, Ester and Perez, Pablo and Villegas, Alvaro},
  journal={Virtual Reality},
  volume={27},
  number={3},
  pages={2129--2147},
  year={2023},
  publisher={Springer}
}
```