# Evaluation with LGT-Net

These are instructions for evaluating our dataset with "[LGT-Net: Indoor Panoramic Room Layout Estimation with Geometry-Aware Transformer Network](https://arxiv.org/abs/2203.01824)".

# Downloading Pre-trained Weights

The authors provide pre-trained weights for the individual datasets [here](https://drive.google.com/drive/folders/1bOZyXeuNnwFEC9nw7EgJUwMiI685obdT?usp=sharing). These models are used in our dataset paper below.

- [mp3d/best.pkl](https://drive.google.com/file/d/1o97oAmd-yEP5bQrM0eAWFPLq27FjUDbh/view?usp=sharing): trained on the MatterportLayout dataset
- [pano/best.pkl](https://drive.google.com/file/d/1JoeqcPbm_XBPOi6O9GjjWi3_rtyPZS8m/view?usp=sharing): trained on the PanoContext(train)+Stanford2D-3D(whole) dataset
- [s2d3d/best.pkl](https://drive.google.com/file/d/1PfJzcxzUsbwwMal7yTkBClIFgn8IdEzI/view?usp=sharing): trained on the Stanford2D-3D(train)+PanoContext(whole) dataset

Make sure the pre-trained weight files are stored as follows:

```
checkpoints
|-- SWG_Transformer_LGT_Net
| |-- mp3d
| | |-- best.pkl
| |-- pano
| | |-- best.pkl
| |-- s2d3d
| | |-- best.pkl
```

# Preparing Dataset

You can use `assets/layout_eval/convert4LGTNet.py` to produce the proper data structure for evaluation.

### Evaluation with MatterportLayout

Make sure the dataset files are stored as follows:

```
src/dataset/mp3d
|-- image
| |-- 000__equi_rgb.png
|-- label
| |-- 000__equi_layout.json
|-- split
| |-- test.txt  # it needs to list all of the files
```

### Evaluation with PanoContext

Make sure the dataset files are stored as follows:

```
src/dataset/pano_s2d3d
|-- test
| |-- img
| | |-- pano_000__equi_rgb.png
| |-- label_cor
| | |-- pano_000__equi_layout.txt
```

### Evaluation with Stanford 2D-3D

Make sure the dataset files are stored as follows:

```
src/dataset/pano_s2d3d
|-- test
| |-- img
| | |-- camera_000__equi_rgb.png
| |-- label_cor
| | |-- camera_000__equi_layout.txt
```
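Since `split/test.txt` needs to list all of the files, it can be generated from the `image` directory. The sketch below is an assumption about the expected line format (one image filename stem, e.g. `000__equi_rgb`, per line); check LGT-Net's MatterportLayout dataloader if your copy expects a different format. The function name `write_test_split` is ours, not part of LGT-Net.

```python
from pathlib import Path

def write_test_split(dataset_root="src/dataset/mp3d"):
    """Write split/test.txt listing every image in the dataset.

    Assumption: the loader expects one image filename stem per line
    (e.g. `000__equi_rgb`); adjust if it wants a different format.
    """
    root = Path(dataset_root)
    split_dir = root / "split"
    split_dir.mkdir(parents=True, exist_ok=True)
    # Collect the stems of all PNGs under image/, sorted for stability.
    names = sorted(p.stem for p in (root / "image").glob("*.png"))
    (split_dir / "test.txt").write_text("\n".join(names) + "\n")
    return names

if __name__ == "__main__":
    print(f"wrote {len(write_test_split())} entries to split/test.txt")
```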
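Before running evaluation, it can save a failed run to verify that the directory trees above are in place. The following is a minimal sketch (not part of LGT-Net) that checks the checkpoint and dataset paths shown in this document from the repository root:

```python
from pathlib import Path

# Paths follow the directory trees shown above; adjust the roots if
# your checkout uses a different layout.
REQUIRED_PATHS = [
    "checkpoints/SWG_Transformer_LGT_Net/mp3d/best.pkl",
    "checkpoints/SWG_Transformer_LGT_Net/pano/best.pkl",
    "checkpoints/SWG_Transformer_LGT_Net/s2d3d/best.pkl",
    "src/dataset/mp3d/image",
    "src/dataset/mp3d/label",
    "src/dataset/mp3d/split/test.txt",
    "src/dataset/pano_s2d3d/test/img",
    "src/dataset/pano_s2d3d/test/label_cor",
]

def missing_paths(root="."):
    """Return the required paths that do not exist under `root`."""
    base = Path(root)
    return [p for p in REQUIRED_PATHS if not (base / p).exists()]

if __name__ == "__main__":
    missing = missing_paths()
    if missing:
        print("Missing paths:")
        for p in missing:
            print("  " + p)
    else:
        print("All required files are in place.")
```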