Evaluation with LGT-Net

These are instructions for evaluating our dataset with "LGT-Net: Indoor Panoramic Room Layout Estimation with Geometry-Aware Transformer Network".

Downloading Pre-trained Weights

Pre-trained weights for the individual datasets are provided by the authors here.
These models are used in our dataset paper below.

  • mp3d/best.pkl: Training on MatterportLayout dataset
  • pano/best.pkl: Training on PanoContext(train)+Stanford2D-3D(whole) dataset
  • s2d3d/best.pkl: Training on Stanford2D-3D(train)+PanoContext(whole) dataset

Make sure the pre-trained weight files are stored as follows:

checkpoints
|-- SWG_Transformer_LGT_Net
|   |-- mp3d
|   |   |-- best.pkl
|   |-- pano
|   |   |-- best.pkl
|   |-- s2d3d
|   |   |-- best.pkl
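The layout above can be created up front so each downloaded weight file has a place to go. This is a minimal sketch; the directory names come from the tree above, and the example `mv` destination filenames are the ones the evaluation expects:

```shell
# Create the checkpoint directory layout expected by LGT-Net
mkdir -p checkpoints/SWG_Transformer_LGT_Net/mp3d
mkdir -p checkpoints/SWG_Transformer_LGT_Net/pano
mkdir -p checkpoints/SWG_Transformer_LGT_Net/s2d3d

# Then move each downloaded weight file into its subdirectory, e.g.:
# mv mp3d_best.pkl checkpoints/SWG_Transformer_LGT_Net/mp3d/best.pkl
```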

Preparing Dataset

You can use assets/layout_eval/convert4LGTNet.py to convert the dataset into the directory structure required for evaluation.

Evaluation with MatterportLayout

Make sure the dataset files are stored as follows:

src/dataset/mp3d
|-- image
|   |-- 000_<scene>_equi_rgb.png
|-- label
|   |-- 000_<scene>_equi_layout.json
|-- split
    |-- test.txt # must list all of the files
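Since test.txt must list all of the files, it can be generated from the image directory rather than written by hand. The helper below is a sketch, not part of the repository; it assumes each line of test.txt is the file stem without the "_equi_rgb.png" suffix, so adjust the format to whatever your copy of convert4LGTNet.py produces:

```python
import os

def write_test_split(root: str) -> list:
    """Write root/split/test.txt listing every image under root/image.

    Assumption: each line is the filename stem with the "_equi_rgb.png"
    suffix stripped (e.g. "000_<scene>"). Adjust if your split files
    use a different convention.
    """
    suffix = "_equi_rgb.png"
    names = sorted(
        f[: -len(suffix)]
        for f in os.listdir(os.path.join(root, "image"))
        if f.endswith(suffix)
    )
    os.makedirs(os.path.join(root, "split"), exist_ok=True)
    with open(os.path.join(root, "split", "test.txt"), "w") as fh:
        fh.write("\n".join(names) + "\n")
    return names
```

Usage: `write_test_split("src/dataset/mp3d")`.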

Evaluation with PanoContext

Make sure the dataset files are stored as follows:

src/dataset/pano_s2d3d
|-- test
|   |-- img
|   |   |-- pano_000_<scene>_equi_rgb.png
|   |-- label_cor
|       |-- pano_000_<scene>_equi_layout.txt

Evaluation with Stanford 2D-3D

Make sure the dataset files are stored as follows:

src/dataset/pano_s2d3d
|-- test
|   |-- img
|   |   |-- camera_000_<scene>_equi_rgb.png
|   |-- label_cor
|       |-- camera_000_<scene>_equi_layout.txt
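Since PanoContext and Stanford 2D-3D share the same src/dataset/pano_s2d3d tree, it is easy to end up with an image that has no matching label. The helper below is a sketch for sanity-checking the layout before running evaluation; the file naming (`*_equi_rgb.png` paired with `*_equi_layout.txt`) is taken from the trees above:

```python
import os

def find_unlabeled_images(root: str = "src/dataset/pano_s2d3d") -> list:
    """Return stems of test images that lack a matching label file.

    Checks that every <stem>_equi_rgb.png under test/img has a
    <stem>_equi_layout.txt under test/label_cor. An empty result
    means the structure matches the trees shown above.
    """
    img_suffix = "_equi_rgb.png"
    img_dir = os.path.join(root, "test", "img")
    cor_dir = os.path.join(root, "test", "label_cor")
    missing = []
    for f in sorted(os.listdir(img_dir)):
        if not f.endswith(img_suffix):
            continue
        stem = f[: -len(img_suffix)]
        if not os.path.isfile(os.path.join(cor_dir, stem + "_equi_layout.txt")):
            missing.append(stem)
    return missing
```

Usage: `find_unlabeled_images()` should return an empty list for a correctly prepared dataset.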