rsi committed · Commit 0e8f1d9 · Parent: 96b9f5c

update readme

Files changed (1): README.md (+40 −9)
README.md CHANGED
@@ -28,6 +28,16 @@ tags:
<b>Figure 1</b>: A view of our dataset of Zurich, Switzerland
</div>

+ ## Table of Contents
+
+ - [Abstract](#abstract)
+ - [Highlights](#highlights)
+ - [Dataset](#dataset)
+ - [Pretrained model weights](#pretrained-model-weights)
+ - [Code](#code)
+ - [Citation](#citation)
+ - [Acknowledgements](#acknowledgements)
+
## Abstract

<div align="justify">
 
@@ -40,7 +50,6 @@ We present the P<sup>3</sup> dataset, a large-scale multimodal benchmark for bui
- A library for training and evaluating state-of-the-art deep learning methods on the dataset, available at [github.com/raphaelsulzer/PixelsPointsPolygons](https://github.com/raphaelsulzer/PixelsPointsPolygons)
- Pretrained model weights, available at [huggingface.co/rsi/PixelsPointsPolygons](https://huggingface.co/rsi/PixelsPointsPolygons)

-
## Dataset

### Overview
 
@@ -51,15 +60,26 @@ We present the P<sup>3</sup> dataset, a large-scale multimodal benchmark for bui

### Download

+ The recommended and fastest way to download the dataset is to run:
+
+ ```
+ pip install huggingface_hub
+ python scripts/download_dataset.py --dataset-root $DATA_ROOT
+ ```
+
+ Alternatively, you can download the dataset by running:
+
```
git lfs install
git clone https://huggingface.co/datasets/rsi/PixelsPointsPolygons $DATA_ROOT
```

+ Both options download the full dataset, including aerial images (as .tif), aerial LiDAR point clouds (as .copc.laz), and building polygon annotations (as MS-COCO .json), into `$DATA_ROOT`. The size of the dataset is around 163 GB.
+
### Structure

<details>
- <summary>📁 Click to expand folder structure</summary -->
+ <summary>📁 Click to expand dataset folder structure</summary>

```text
PixelsPointsPolygons/data/224
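The added note above says the building polygon annotations ship as MS-COCO .json. For readers new to that format, here is a minimal, self-contained sketch of reading such a file and pairing a polygon's flat coordinate list into (x, y) vertices — the file name and field values are illustrative, not taken from the actual P<sup>3</sup> files:

```python
import json
import os
import tempfile

# Illustrative MS-COCO-style annotation file. NOTE: the exact contents of the
# P3 annotation files are an assumption here; MS-COCO stores polygons as flat
# [x1, y1, x2, y2, ...] lists under "segmentation".
coco = {
    "images": [{"id": 1, "file_name": "zurich_0001.tif", "width": 224, "height": 224}],
    "annotations": [{
        "id": 7, "image_id": 1, "category_id": 1,
        "segmentation": [[10, 10, 60, 10, 60, 50, 10, 50]],
    }],
    "categories": [{"id": 1, "name": "building"}],
}

path = os.path.join(tempfile.mkdtemp(), "annotation.json")
with open(path, "w") as f:
    json.dump(coco, f)

# Read the file back and pair the flat coordinate list into (x, y) vertices.
with open(path) as f:
    data = json.load(f)
ring = data["annotations"][0]["segmentation"][0]
vertices = list(zip(ring[0::2], ring[1::2]))
print(vertices)  # [(10, 10), (60, 10), (60, 50), (10, 50)]
```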
 
@@ -451,12 +471,21 @@ PixelsPointsPolygons/data/224

### Download

+ The recommended and fastest way to download the pretrained model weights is to run:
+
+ ```
+ python scripts/download_pretrained.py --model-root $MODEL_ROOT
+ ```
+
+ Alternatively, you can download the weights by running:
+
```
- git lfs install
git clone https://huggingface.co/rsi/PixelsPointsPolygons $MODEL_ROOT
```

- ## Code
+ Both options download all checkpoints (as .pth) and the results presented in the paper (as MS-COCO .json) into `$MODEL_ROOT`.
+
+ ## Code

### Download

 
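Both the added download scripts and the git clones fetch Hugging Face repos. If you prefer calling the API directly, here is a hedged sketch using `huggingface_hub.snapshot_download`, the standard call for mirroring a full repo — the project's own `scripts/download_dataset.py` and `scripts/download_pretrained.py` are not shown in the README and may do more:

```python
def download_repo(local_dir: str, dataset: bool = False) -> str:
    """Mirror rsi/PixelsPointsPolygons into local_dir and return the local path.

    Sketch only: the project's download scripts may differ in detail.
    """
    # Imported lazily so the sketch can be read without the package installed.
    from huggingface_hub import snapshot_download  # pip install huggingface_hub

    return snapshot_download(
        repo_id="rsi/PixelsPointsPolygons",
        # The dataset and the model weights live in separate repos of the same name.
        repo_type="dataset" if dataset else "model",
        local_dir=local_dir,
    )

# Usage (large downloads: the dataset alone is around 163 GB):
# download_repo("/path/to/models")               # pretrained weights
# download_repo("/path/to/data", dataset=True)   # dataset
```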
@@ -497,7 +526,7 @@ pip install .

### Setup

- The project supports hydra configuration which allows to modify any parameter either from a `.yaml` file of directly from the command line.
+ The project supports Hydra configuration, which lets you modify any parameter either from a `.yaml` file or directly from the command line.

To setup the project structure we recommend to specify your `$DATA_ROOT` and `$MODEL_ROOT` in `config/host/default.yaml`.
 
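The Setup note centralizes host paths in `config/host/default.yaml`. A sketch of what such a Hydra host config typically looks like — the key names below are assumptions, so keep the names already used in the file shipped with the repo:

```yaml
# config/host/default.yaml -- illustrative key names, adapt to the project's schema
data_root: /data/PixelsPointsPolygons     # your $DATA_ROOT
model_root: /models/PixelsPointsPolygons  # your $MODEL_ROOT
```

With Hydra, any such value can also be overridden per run by appending `key=value` pairs to a script invocation, with the exact key path depending on the config layout.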
@@ -531,7 +560,7 @@ python scripts/predict_demo.py

### Reproduce paper results

- To reproduce the results from the paper you can run any of the following commands
+ To reproduce the results from the paper, you can run the following commands:

```
python scripts/modality_ablation.py
 
@@ -541,15 +570,17 @@ python scripts/all_countries.py

### Custom training, prediction and evaluation

- We recommend to first setup a custom `$EXP_FILE` in `config/experiment` following the structure of one of the existing experiment files, e.g. `ffl_fusion.yaml`. You can then run:
+ We recommend first setting up a custom experiment file `$EXP_FILE` in `config/experiment/`, following the structure of one of the existing files, e.g. `ffl_fusion.yaml`. You can then run:

```
# train your model (on multiple GPUs)
torchrun --nproc_per_node=$NUM_GPU scripts/train.py experiment=$EXP_FILE
+
# predict the test set with your model (on multiple GPUs)
- torchrun --nproc_per_node=$NUM_GPU scripts/predict.py evaluation=test checkpoint=best_val_iou
+ torchrun --nproc_per_node=$NUM_GPU scripts/predict.py experiment=$EXP_FILE evaluation=test checkpoint=best_val_iou
+
# evaluate your prediction of the test set
- python scripts/evaluate.py model=<model> evaluation=test checkpoint=best_val_iou
+ python scripts/evaluate.py experiment=$EXP_FILE evaluation=test checkpoint=best_val_iou
```
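The `checkpoint=best_val_iou` argument added to the predict and evaluate commands selects the checkpoint that scored the highest validation IoU. As a hypothetical illustration of that metric and selection rule (not the project's implementation, which lives in the library):

```python
def mask_iou(a, b):
    """Intersection-over-union of two equal-length binary masks."""
    inter = sum(x and y for x, y in zip(a, b))
    union = sum(x or y for x, y in zip(a, b))
    return inter / union if union else 0.0

def best_epoch(val_ious):
    """Index of the epoch with the highest validation IoU."""
    return max(range(len(val_ious)), key=lambda i: val_ious[i])

print(mask_iou([1, 1, 1, 0], [0, 1, 1, 1]))  # 0.5: 2 overlapping pixels / 4 in union
print(best_epoch([0.41, 0.55, 0.61, 0.60]))  # 2
```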
  You could also continue training from a provided pretrained model with