rsi committed on
Commit 9492094
1 Parent(s): 619b161

update readme

Files changed (1):
  README.md +7 -7
README.md CHANGED

@@ -54,8 +54,6 @@ language:
 # - split: test
 #   path: "data/224/annotations/annotations_NZ_test.json"
 ---
-
-
 <div align="center">
 <h1 align="center">The P<sup>3</sup> dataset: Pixels, Points and Polygons <br> for Multimodal Building Vectorization</h1>
 <h3><align="center">Raphael Sulzer<sup>1,2</sup> &nbsp;&nbsp;&nbsp; Liuyun Duan<sup>1</sup>
@@ -545,7 +543,6 @@ pip install .
 ⚠️ **Warning**: The implementation of the LiDAR point cloud encoder uses Open3D-ML. Currently, Open3D-ML officially only supports the PyTorch version specified in `requirements-torch-cuda.txt`.
 
 
-
 <!-- ## Model Zoo
 
 
@@ -587,13 +584,16 @@ python scripts/train.py --help
 
 </details> -->
 
-### Predict a single tile
+### Predict demo tile
 
-TODO
+After downloading the model weights and setting up the code you can predict a demo tile by running
 
 ```
-python scripts/predict_demo.py
+python scripts/predict_demo.py checkpoint=best_val_iou experiment=$MODEL_$MODALITY +image_file=demo_data/image0_CH_val.tif +lidar_file=demo_data/lidar0_CH_val.copc.laz
 ```
+At least one of `image_file` or `lidar_file` has to be specified. `$MODEL` can be one of the following: `ffl`, `hisup` or `p2p`. `$MODALITY` can be `image`, `lidar` or `fusion`.
+The result will be stored in `prediction.png`.
+
 
 ### Reproduce paper results
 
@@ -629,7 +629,7 @@ python scripts/train.py experiment=p2p_fusion checkpoint=latest
 
 ## Citation
 
-If you find our work useful, please consider citing:
+If you use our work please cite
 ```bibtex
 TODO
 ```
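A side note on the predict command added in this commit: it writes `experiment=$MODEL_$MODALITY`, where `$MODEL` and `$MODALITY` are placeholders the user substitutes (e.g. `p2p_fusion`). If a reader instead sets them as actual shell variables, the literal spelling would be parsed by POSIX shells as `${MODEL_}${MODALITY}` (underscores are valid in variable names), so braces are needed. A minimal sketch, assuming the variables are set in the shell rather than typed literally:

```shell
# Sketch of the placeholder expansion in the new predict_demo.py command.
# Assumption: MODEL and MODALITY are real shell variables, not literal text.
# Note: `$MODEL_$MODALITY` would expand as `${MODEL_}${MODALITY}` (MODEL_ is
# likely unset), so braces are required to get the intended experiment name.
MODEL=p2p        # one of: ffl, hisup, p2p
MODALITY=fusion  # one of: image, lidar, fusion
echo "experiment=${MODEL}_${MODALITY}"   # -> experiment=p2p_fusion
```

With the variables expanded, the added command would read, for example, `python scripts/predict_demo.py checkpoint=best_val_iou experiment=p2p_fusion +image_file=demo_data/image0_CH_val.tif +lidar_file=demo_data/lidar0_CH_val.copc.laz`.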