Update README.md
Adding citation in README.md

README.md
---
license: cc-by-nc-sa-4.0
---

# ToF-360 Dataset



## Overview

The ToF-360 dataset consists of spherical RGB-D images with instance-level semantic and room-layout annotations, covering 4 unique scenes. It contains 179 equirectangular RGB images along with the corresponding depth, surface normal, XYZ, and HHA images, labeled with building-defining object categories and image-based layout boundaries (ceiling-wall, wall-floor). The dataset enables the development of scene-understanding tasks based on single-shot reconstruction in indoor spaces, without the need for global alignment.
You can also find the paper [here](https://av.dfki.de/publications/tof-360-a-panoramic-time-of-flight-rgb-d-dataset-for-single-capture-indoor-semantic-3d-reconstruction/).

## Dataset Modalities

Each scene in the dataset has its own folder. All the modalities for a scene are contained in that folder as `<scene>/<modality>`.
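
As a quick orientation, here is a minimal sketch that walks this layout (the root path is a placeholder for wherever the dataset was downloaded):

```python
from pathlib import Path

# Placeholder path; adjust to wherever the dataset was downloaded.
root = Path("ToF-360")

# Print the modality folders available for each scene.
for scene in sorted(p for p in root.iterdir() if p.is_dir()):
    modalities = sorted(m.name for m in scene.iterdir() if m.is_dir())
    print(f"{scene.name}: {modalities}")
```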

**RGB images:**
RGB images are equirectangular 24-bit color images, converted from the raw dual-fisheye images captured by the sensor.

**Manhattan-aligned RGB images:**
We followed the preprocessing code of [LGT-Net](https://github.com/zhigangjiang/LGT-Net) to create Manhattan-aligned RGB images. Sample code for our dataset is provided in `assets/preprocessing/align_manhattan.py`.

**Depth images:**
Depth images are stored as 16-bit grayscale PNGs with a maximum depth of 128 m and a resolution of 1/512 m. Missing values are encoded as 0. Note that depth is defined as the distance from the point-center of the camera in the panoramic image.
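
For illustration, a minimal sketch of decoding such a depth map into meters (the filename is hypothetical; the 1/512 m per unit scale follows from the description above):

```python
import cv2
import numpy as np

# Hypothetical filename; depth PNGs are 16-bit, so read them unchanged.
raw = cv2.imread("scene/depth/frame_0000.png", cv2.IMREAD_UNCHANGED)  # uint16

depth_m = raw.astype(np.float32) / 512.0  # one unit = 1/512 m
valid = raw > 0                           # 0 marks missing measurements
print("max depth (m):", depth_m[valid].max())
```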

**XYZ images:**
XYZ images are saved in the [NumPy](https://numpy.org/) `.npy` binary file format. Each file contains a pixel-aligned set of 3D points with millimeter resolution, stored as an array of shape (Height, Width, 3 [xyz]).
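
A minimal sketch of loading one of these files (hypothetical filename) and converting millimeters to meters:

```python
import numpy as np

# Hypothetical filename; each XYZ file is a (Height, Width, 3) array in mm.
xyz = np.load("scene/xyz/frame_0000.npy")
assert xyz.ndim == 3 and xyz.shape[2] == 3

points_m = xyz.reshape(-1, 3).astype(np.float32) / 1000.0  # mm -> m
```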

**Normal images:**
Normals are 127.5-centered per-channel surface-normal images, saved as 24-bit RGB PNGs where red is the horizontal component (more red to the right), green is the vertical component (more green downwards), and blue points towards the camera. They are computed with the [normal estimation function](https://www.open3d.org/docs/0.7.0/python_api/open3d.geometry.estimate_normals.html) of [Open3D](https://github.com/isl-org/Open3D). The tool for creating normal images from 3D data is located in `assets/preprocessing/depth2normal.py`.
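
To recover unit normal vectors from such a PNG, a sketch under the 127.5-centered encoding described above (filename hypothetical):

```python
import cv2
import numpy as np

# Hypothetical filename; OpenCV loads PNGs in BGR channel order.
bgr = cv2.imread("scene/normal/frame_0000.png")
rgb = cv2.cvtColor(bgr, cv2.COLOR_BGR2RGB).astype(np.float32)

normals = (rgb - 127.5) / 127.5                                   # [0,255] -> [-1,1]
normals /= np.linalg.norm(normals, axis=2, keepdims=True) + 1e-8  # unit length
```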

**HHA images:**
HHA images encode horizontal disparity, height above ground, and angle with gravity in their three channels, respectively.
We followed [Depth2HHA-python](https://github.com/charlesCXK/Depth2HHA-python) to create them. The code is located in `assets/preprocessing/getHHA.py`.
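
A sketch of splitting the three channels back out (filename hypothetical; the channel order is an assumption based on the H, H, A listing above):

```python
import cv2

# Hypothetical filename; channel order assumed to be disparity, height, angle.
hha = cv2.cvtColor(cv2.imread("scene/hha/frame_0000.png"), cv2.COLOR_BGR2RGB)
disparity, height, angle = hha[..., 0], hha[..., 1], hha[..., 2]
```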

**Annotation:**
We used the [COCO Annotator](https://github.com/jsbroks/coco-annotator) for labelling the RGB data. We follow [ontology-based annotation guidelines](https://www.dfki.de/fileadmin/user_upload/import/13246_EC3_2023_Ontology_based_annotation_of_RGB_D_images_and_point_clouds_for_a_domain_adapted_dataset.pdf) developed for both RGB-D and point cloud data.
`<scene>/annotation` contains JSON files, while `<scene>/semantics` and `<scene>/instances` hold image-like labeled data stored as `.npy` binary files.
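
A sketch of reading the per-pixel labels (filenames hypothetical):

```python
import numpy as np

# Hypothetical filenames; both arrays are pixel-aligned with the RGB image.
semantics = np.load("scene/semantics/frame_0000.npy")  # per-pixel class ids
instances = np.load("scene/instances/frame_0000.npy")  # per-pixel instance ids

print("classes present:", np.unique(semantics))
print("instance count:", len(np.unique(instances)))
```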

**Room layout annotation:**
Room layout annotations are stored in the same JSON format as [PanoAnnotator](https://github.com/SunDaDenny/PanoAnnotator). Please refer to that repository for more details.

## Tools

This repository provides basic tools for obtaining preprocessed data and for evaluation. The tools are located in the `assets/` folder.

## Croissant metadata

You can follow [these instructions](https://huggingface.co/docs/datasets-server/croissant) provided by Hugging Face. A `croissant_metadata.json` file is also available.
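
For example, a sketch using the `mlcroissant` library (the dataset id and record-set name below are placeholders; take the exact values from the Hugging Face page and the Croissant file):

```python
import mlcroissant as mlc

# Placeholder dataset id; substitute the actual id from Hugging Face.
url = "https://huggingface.co/api/datasets/<dataset-id>/croissant"
ds = mlc.Dataset(jsonld=url)

# The record-set name is an assumption; check the Croissant file for real ones.
for record in ds.records(record_set="default"):
    print(record)
    break
```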

## Citations

If you use this code or dataset in your research, please cite the following paper:
```bibtex
@inproceedings{pub15783,
  author    = {Hideaki Kanayama and Mahdi Chamseddine and Suresh Guttikonda and So Okumura and Soichiro Yokota and Didier Stricker and Jason Raphael Rambach},
  title     = {ToF-360 – A Panoramic Time-of-flight RGB-D Dataset for Single Capture Indoor Semantic 3D Reconstruction},
  booktitle = {21st CVPR Workshop on Perception Beyond the Visible Spectrum (PBVS-2025)},
  year      = {2025},
  publisher = {IEEE},
  address   = {Nashville, Tennessee, USA},
  month     = {June}
}
```