kanayamaHideaki committed · Commit 6b23c55 · Parent(s): 0ee1862

Commit for first version of README.md

Files changed (2):
  1. README.md +50 -0
  2. assets/figure/figure_1.png +3 -0
README.md CHANGED:

---
license: cc-by-nc-sa-4.0
---
# ToF-360 Dataset

![Figure showing multiple modalities](assets/figure/figure_1.png?raw=true)

## Overview
The ToF-360 dataset consists of spherical RGB-D images with instance-level semantic and room layout annotations, covering 4 unique scenes. It contains 179 equirectangular RGB images along with the corresponding depth, surface-normal, XYZ, and HHA images, labeled with building-defining object categories and image-based layout boundaries (ceiling-wall and wall-floor). The dataset enables the development of scene-understanding tasks based on single-shot reconstruction, without the need for global alignment in indoor spaces.
## Dataset Modalities
Each scene has its own folder in the dataset. All modalities and metadata for a scene are contained in that folder as `<scene>/<modality>`.
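Given this layout, the per-scene modality folders can be enumerated with a small helper. This is a minimal sketch only: the dataset root path and the helper name `list_modalities` are assumptions, not part of the dataset specification.

```python
from pathlib import Path

def list_modalities(root: Path) -> dict[str, list[str]]:
    """Map each <scene> folder under `root` to its <modality> subfolders."""
    return {
        scene.name: sorted(m.name for m in scene.iterdir() if m.is_dir())
        for scene in sorted(root.iterdir())
        if scene.is_dir()
    }

# Hypothetical local checkout; adjust to wherever the dataset was downloaded.
# modalities = list_modalities(Path("ToF-360"))
```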
**HHA images:**
HHA images were created following [Depth2HHA-python](https://github.com/charlesCXK/Depth2HHA-python).
**RGB images:**
RGB images are equirectangular 24-bit color images converted from the raw dual-fisheye images.
**Manhattan-aligned RGB images:**
Manhattan-aligned RGB images were created following [LGT-Net](https://github.com/zhigangjiang/LGT-Net).
**XYZ images:**
XYZ images are saved in NumPy's `.npy` binary format. Each file contains a pixel-aligned set of 3D points with millimetre sensitivity and has shape (Height, Width, 3 [xyz]).
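For illustration, an XYZ file can be read back with NumPy as sketched below, assuming the millimetre units stated above; `load_xyz` is a hypothetical helper name, and the conversion to metres is a choice for convenience.

```python
import numpy as np

def load_xyz(path: str) -> np.ndarray:
    """Load a pixel-aligned XYZ image and convert millimetres to metres."""
    xyz = np.load(path)                      # stored as (Height, Width, 3[xyz])
    assert xyz.ndim == 3 and xyz.shape[2] == 3, "expected shape (H, W, 3)"
    return xyz.astype(np.float64) / 1000.0   # mm -> m
```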
**Annotation:**
**Depth:**
Depth images are stored as 16-bit PNGs with a maximum depth of 128 m and a sensitivity of 1/512 m. Missing values are encoded as 0. Note that depth is defined as the distance from the point-center of the camera in the panoramic images.
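Based on that encoding, a raw 16-bit depth array (e.g. read with `PIL.Image.open` and `np.array`) can be converted to metres as sketched below. `decode_depth` is a hypothetical helper, and mapping missing pixels to NaN is a convenience choice, not part of the dataset format.

```python
import numpy as np

def decode_depth(raw: np.ndarray) -> np.ndarray:
    """Convert a raw uint16 depth image to metres.

    Values are in 1/512 m units, so the maximum representable depth is
    65535 / 512 ~ 128 m; 0 marks missing measurements and becomes NaN.
    """
    depth_m = raw.astype(np.float64) / 512.0
    depth_m[raw == 0] = np.nan
    return depth_m
```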
**Room layout annotation:**
Room layout annotations are stored in the same JSON format as [PanoAnnotator](https://github.com/SunDaDenny/PanoAnnotator). Please refer to that repository for more details.
**Normal images:**
Normals are 127.5-centered, per-channel surface-normal images saved as 24-bit RGB PNGs, where Red is the horizontal component (more red to the right), Green is the vertical component (more green downwards), and Blue points towards the camera. They are computed with the [normal estimation function](https://www.open3d.org/docs/0.7.0/python_api/open3d.geometry.estimate_normals.html) in [Open3D](https://github.com/isl-org/Open3D). The tool for creating normal images from 3D data is located at `assets/compute_normal.py`.
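Under this encoding, unit normals can be recovered from the decoded uint8 RGB array as sketched below; `decode_normals` is a hypothetical helper, and the final re-normalization simply compensates for 8-bit quantization.

```python
import numpy as np

def decode_normals(rgb: np.ndarray) -> np.ndarray:
    """Recover unit surface normals from a 127.5-centered RGB normal image.

    `rgb` is the uint8 (H, W, 3) array read from the PNG. Per the encoding
    above: Red = horizontal, Green = vertical, Blue = towards the camera.
    """
    n = (rgb.astype(np.float64) - 127.5) / 127.5    # map [0, 255] -> [-1, 1]
    norm = np.linalg.norm(n, axis=-1, keepdims=True)
    return n / np.clip(norm, 1e-8, None)            # re-normalize to unit length
```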
## Tools
This repository provides basic tools for interacting with the dataset and for obtaining preprocessed data. The tools are located in the `assets/preprocessing` folder.
## Evaluation
**Semantic segmentation (image-based):**

**Semantic segmentation (point-cloud-based):**

**Layout estimation:**

## Citations
Coming soon...
assets/figure/figure_1.png ADDED

Git LFS Details

  • SHA256: dddb6e0a2fa37e0a76fa0615021ffc5ea5f75ff79a71c19324daad87478551f0
  • Pointer size: 131 Bytes
  • Size of remote file: 299 kB