---
annotations_creators:
- other
language:
- en
language_creators:
- other
license:
- odc-by
multilinguality:
- monolingual
pretty_name: 'RGB-SegmentEgocentricBodies-Cuttlery'
size_categories:
- n<1K
source_datasets:
- original
tags:
- egocentric segmentation
- extended reality
- xr
- human-body
- mixed-reality
- avatar
task_categories:
- image-segmentation
task_ids:
- semantic-segmentation
dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype: string
  splits:
  - name: train
    num_examples: 644
  - name: val
    num_examples: 100
---

# RGB Segment Egocentric Bodies-Cuttlery Dataset

## Overview and Dataset Description

The **RGB-Segment Egocentric Bodies Cuttlery** dataset is a subset of https://huggingface.co/datasets/ExtendedRealityLab/RGB-D-SegmentEgocentricBodies in which the ground-truth annotations have been modified to segment four classes: [0: 'people', 1: 'plate', 2: 'cuttlery', 3: 'glass'].
The dataset is intended to support research in **egocentric vision**, **XR/VR/AR**, and **human–computer interaction**.

The ground-truth annotations are `.txt` files following the format required for training YOLO-based architectures. Pixel-wise annotations will follow.

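As a sketch of how such labels can be consumed, the snippet below parses one YOLO-segmentation label line (a class index followed by normalized x, y polygon pairs) back into pixel coordinates. The helper name and the sample line are illustrative assumptions, not taken from the dataset files:

```python
def parse_yolo_seg_line(line: str, img_w: int, img_h: int):
    """Parse one YOLO-segmentation label line into (class_id, pixel polygon)."""
    parts = line.split()
    class_id = int(parts[0])                # first token: class index
    coords = [float(v) for v in parts[1:]]  # remaining tokens: normalized x, y pairs
    polygon = [(x * img_w, y * img_h)       # scale back to pixel space
               for x, y in zip(coords[0::2], coords[1::2])]
    return class_id, polygon

# Illustrative label line: class 2 ('cuttlery') with a 3-point polygon.
cls, poly = parse_yolo_seg_line("2 0.10 0.20 0.50 0.20 0.30 0.60", 640, 480)
```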
+
## Acknowledgements
|
| 51 |
+
|
| 52 |
+
This dataset was created by Nokia ExtendedRealityLab and developed in the context of research on egocentric perception and immersive telepresence. If you use this dataset in academic work, please cite the following papers:
|
| 53 |
+
|
| 54 |
+
@inproceedings{jimenez2025evaluation,
|
| 55 |
+
title={Evaluation of Segmentation Algorithms for Embodiment Improvement in an XR Application},
|
| 56 |
+
author={Jim{\'e}nez-Moreno, Amaya and Conderana-Medem, Elena and Casino-Colom, Silvia and Orduna, Marta and Gonzalez-Sosa, Ester and Perez, Pablo and Villegas, Alvaro},
|
| 57 |
+
booktitle={Proceedings of the 17th International Workshop on IMmersive Mixed and Virtual Environment Systems},
|
| 58 |
+
pages={36--39},
|
| 59 |
+
year={2025}
|
| 60 |
+
}
|
| 61 |
+
|
| 62 |
+
@article{gonzalez2023full, title={Full body video-based self-avatars for mixed reality: from e2e system to user study}, author={Gonzalez Morin, Diego and Gonzalez-Sosa, Ester and Perez, Pablo and Villegas, Alvaro}, journal={Virtual Reality}, volume={27}, number={3}, pages={2129--2147}, year={2023}, publisher={Springer} }
|