---
license: cc-by-nc-sa-4.0
---

ToF-360 Dataset (test)

Figure showing multiple modalities

Overview

The ToF-360 dataset consists of spherical RGB-D images with instance-level semantic and room layout annotations, covering 4 unique scenes. It contains 179 equirectangular RGB images along with the corresponding depth, surface normal, XYZ, and HHA images, labeled with building-defining object categories and image-based layout boundaries (ceiling-wall, wall-floor). The dataset enables the development of scene understanding tasks based on single-shot reconstruction, without the need for global alignment in indoor spaces.

Dataset Modalities

Each scene has its own folder in the dataset. All the modalities and metadata for a scene are contained in that folder as <scene>/<modality>.
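As a minimal sketch of the <scene>/<modality> layout (the scene and modality names below are hypothetical, for illustration only; the actual folder names are defined by the dataset release):

```python
from pathlib import Path

# Hypothetical modality folder names -- check the dataset for the real ones.
MODALITIES = ["rgb", "depth", "normal", "xyz", "hha"]

def modality_dir(root: str, scene: str, modality: str) -> Path:
    """Build the <scene>/<modality> path described above."""
    return Path(root) / scene / modality

for m in MODALITIES:
    print(modality_dir("ToF-360", "scene_01", m))
```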

HHA images:
We followed [Depth2HHA-python] to create them.

RGB images:
RGB images are equirectangular 24-bit color images converted from raw dual-fisheye images.

Manhattan aligned RGB images:
We followed [LGT-Net] to create Manhattan aligned RGB images.

XYZ images:
XYZ images are saved in NumPy's .npy binary format. Each file contains a pixel-aligned set of 3D points with millimeter sensitivity, stored with shape (Height, Width, 3) for the x, y, z coordinates.
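A quick round-trip sketch of the storage layout described above, using synthetic data in place of a real file (the file name and value range here are made up for illustration):

```python
import numpy as np

# Synthetic stand-in for a stored XYZ image: (Height, Width, 3) points in mm.
h, w = 4, 8
xyz_mm = np.random.randint(-5000, 5000, size=(h, w, 3)).astype(np.int32)
np.save("example_xyz.npy", xyz_mm)  # same .npy binary format the dataset uses

xyz = np.load("example_xyz.npy")
assert xyz.shape == (h, w, 3)            # (Height, Width, 3[xyz])
xyz_m = xyz.astype(np.float32) / 1000.0  # mm -> meters
```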

Annotation:

Depth images:
Depth images are stored as 16-bit PNGs with a maximum depth of 128 m and a sensitivity of 1/512 m. Missing values are encoded as 0. Note that depth is defined as the distance from the point-center of the camera in the panoramic images.
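The encoding above can be decoded with plain NumPy. `decode_depth` is a hypothetical helper for illustration, not part of the dataset tools:

```python
import numpy as np

def decode_depth(depth_png: np.ndarray) -> np.ndarray:
    """Convert a raw 16-bit depth image to meters.

    Depth is stored with a sensitivity of 1/512 m (max 128 m);
    the value 0 marks missing measurements, mapped here to NaN.
    """
    depth_m = depth_png.astype(np.float32) / 512.0
    depth_m[depth_png == 0] = np.nan
    return depth_m

# Synthetic example: 512 -> 1.0 m, 0 -> missing.
raw = np.array([[512, 0], [1024, 256]], dtype=np.uint16)
print(decode_depth(raw))
```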

Room layout annotation:
Room layout annotations are stored in the same JSON format as PanoAnnotator. Please refer to that repository for more details.

Normal images:
Normals are 127.5-centered per-channel surface normal images, saved as 24-bit RGB PNGs where Red is the horizontal component (more red to the right), Green is the vertical component (more green downwards), and Blue points towards the camera. They are computed with the normal estimation function in Open3D. The tool for creating normal images from 3D data is located at assets/compute_normal.py.
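Decoding the 127.5-centered PNG back to unit vectors is a scale-and-shift plus a re-normalization. `decode_normals` is a hypothetical helper sketched under the encoding described above:

```python
import numpy as np

def decode_normals(normal_png: np.ndarray) -> np.ndarray:
    """Map 127.5-centered 8-bit RGB normals back to [-1, 1] unit vectors."""
    n = (normal_png.astype(np.float32) - 127.5) / 127.5
    # Re-normalize to unit length to undo 8-bit quantization error.
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(norm, 1e-8, None)

# Synthetic pixel: strongly "right-facing" normal (high Red channel).
px = np.array([[[255, 127, 127]]], dtype=np.uint8)
print(decode_normals(px))
```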

Tools

This repository provides basic tools for interacting with the dataset and for obtaining preprocessed data. The tools are located in the assets/preprocessing folder.

Evaluation

Semantic segmentation (image-based):

Semantic segmentation (pointcloud-based):

Layout estimation:

Citations

Coming soon...