Update README.md
README.md
CHANGED
@@ -1,108 +1,3 @@
## System Requirements
### Tested Environment
- Ubuntu 18.04
- CUDA 11.1
- Python 3.6
### Main Dependencies:
- [`bop_toolkit`](https://github.com/thodan/bop_toolkit)
- PyTorch 1.10
- torchvision 0.11.0
- opencv-python
- [`Progressive-X`](https://github.com/danini/progressive-x)
Clone the repository with `git clone --recurse-submodules` so that `bop_toolkit` is also cloned.
## Training with a dataset from the BOP benchmark
### Training data preparation
1. Download the dataset from [`BOP benchmark`](https://bop.felk.cvut.cz/datasets/)
2. Download the required ground-truth folders for ZebraPose from [`owncloud`](https://cloud.dfki.de/owncloud/index.php/s/zT7z7c3e666mJTW). The folders are `models_GT_color`, `XX_GT` (e.g. `train_real_GT` and `test_GT`), and `models` (`models` is optional; it is only needed if you want to generate the GT from scratch).
3. The expected data structure:
```
.
└── BOP ROOT PATH/
    ├── lmo
    ├── ycbv/
    │   ├── models
    │   ├── models_eval
    │   ├── models_fine
    │   ├── test
    │   ├── train_pbr
    │   ├── train_real
    │   ├── ...                  # other files from BOP page
    │   ├── models_GT_color      # from the last step
    │   ├── train_pbr_GT         # from the last step
    │   ├── train_real_GT        # from the last step
    │   ├── test_GT              # from the last step
    │   ├── train_pbr_GT_v2      # from the last step, for symmetry-aware training
    │   ├── train_real_GT_v2     # from the last step, for symmetry-aware training
    │   └── test_GT_v2           # from the last step, for symmetry-aware training
    └── tless
```
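As a sanity check before training, the layout above can be verified with a short script. This is a hypothetical helper, not part of the repository; the folder names follow the tree above, and which ones you need depends on the dataset and training split you use.

```python
from pathlib import Path

# A subset of the folders expected under each dataset directory, per the tree above.
REQUIRED = ["models", "models_eval", "test", "models_GT_color"]

def check_dataset_layout(bop_root, dataset):
    """Return the required subfolders missing under BOP_ROOT/<dataset>."""
    dataset_dir = Path(bop_root) / dataset
    return [name for name in REQUIRED if not (dataset_dir / name).is_dir()]

missing = check_dataset_layout("/path/to/BOP", "ycbv")
if missing:
    print("missing folders:", missing)
```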
4. Download the 3 [`pretrained resnet`](https://cloud.dfki.de/owncloud/index.php/s/zT7z7c3e666mJTW) backbones and save them under `zebrapose/pretrained_backbone/resnet`. Download the pretrained EfficientNet from https://download.pytorch.org/models/efficientnet_b4_rwightman-7eb33cd5.pth and save it under `zebrapose/pretrained_backbone/efficientnet`.
5. (Optional) Instead of downloading the ground truth, you can also generate it from scratch; see [`Generate_GT.md`](Binary_Code_GT_Generator/Generate_GT.md) for details.
### Training
Adjust the paths in the config files, and train the network with `train.py`, e.g.
`python train.py --cfg config/config_BOP/lmo/exp_lmo_BOP.txt --obj_name ape`
The script saves the last 3 checkpoints and the best checkpoint, as well as the TensorBoard log. To enable symmetry-aware training, add `--sym_aware_training True`.
## Test with trained model
For most datasets, a specific object occurs only once in a test image.
`python test.py --cfg config/config_BOP/lmo/exp_lmo_BOP.txt --obj_name ape --ckpt_file path/to/the/best/checkpoint --ignore_bit 0 --eval_output_path path/to/save/the/evaluation/report`
To use ICP for refinement, use `--use_icp True`
For datasets like tless, the number of instances of a specific object in a test image is unknown at test time.
`python test_vivo.py --cfg config/config_BOP/tless/exp_tless_BOP.txt --ckpt_file path/to/the/best/checkpoint --ignore_bit 0 --obj_name obj01 --eval_output_path path/to/save/the/evaluation/report`
To use ICP for refinement, use `--use_icp True`
Download our trained models from this [`link`](https://cloud.dfki.de/owncloud/index.php/s/EmQDWgd5ipbdw3E). Progressive-X cannot set a random seed via its Python API, so the ADD results can vary by +/- 0.5%.
## Evaluate for the BOP challenge
Merge the `.csv` files generated in the last step using `tools_for_BOP/merge_csv.py`, e.g.
`python merge_csv.py --input_dir /dir/to/pose_result_bop/lmo --output_fn zebrapose_lmo-test.csv`
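For intuition about what the merge amounts to, it is essentially concatenating the per-object result files while keeping a single header row. The sketch below is a simplified stand-in, not the repository's `tools_for_BOP/merge_csv.py`:

```python
import csv
import glob

def merge_csv(input_dir, output_fn):
    """Concatenate all .csv files in input_dir into one file with a single header."""
    rows, header = [], None
    for path in sorted(glob.glob(f"{input_dir}/*.csv")):
        with open(path, newline="") as f:
            reader = csv.reader(f)
            file_header = next(reader)  # each per-object file repeats the header
            if header is None:
                header = file_header
            rows.extend(reader)
    with open(output_fn, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)
```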
Then evaluate the merged file following [`bop_toolkit`](https://github.com/thodan/bop_toolkit).
## Differences between arXiv v1 and v2
The results were reported with the same checkpoints. We fixed a bug that only influences the inference results:
The PnP solver requires the Bbox size to calculate the 2D pixel locations in the original image. We modified the Bbox size in the dataloader, but did not propagate this modification to the PnP solver. If you remove `get_final_Bbox` in the dataloader, you will reproduce the results reported in v1.
The bug has more influence if we resize the Bbox using `crop_square_resize`. After fixing the bug, we used `crop_square_resize` for the BOP challenge (instead of the `crop_resize` used in the config files in config_paper). This resize method should work better since it does not introduce distortion; however, we did not compare the resize methods experimentally.
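To illustrate why a mismatched Bbox matters: the network predicts 2D coordinates inside the resized crop, and these must be mapped back to original-image coordinates before PnP using the same Bbox the dataloader actually cropped. The function below is a minimal sketch with hypothetical names, not the repository's code:

```python
def crop_to_image_coords(u, v, bbox, crop_size):
    """Map pixel (u, v) in a crop that was resized to crop_size x crop_size
    back to original-image coordinates. bbox = (x, y, w, h) is the Bbox that
    was actually cropped; if PnP is given a different Bbox than the dataloader
    used, every back-projected 2D point is shifted and scaled, degrading the pose."""
    x, y, w, h = bbox
    return x + u * w / crop_size, y + v * h / crop_size
```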
## Acknowledgement
The original code has been developed together with [`Mahdi Saleh`](https://github.com/mahdi-slh). Some code is adapted from [`Pix2Pose`](https://github.com/kirumang/Pix2Pose), [`SingleShotPose`](https://github.com/microsoft/singleshotpose), [`GDR-Net`](https://github.com/THU-DA-6D-Pose-Group/GDR-Net), and [`Deeplabv3`]().
## Citation
```
@article{su2022zebrapose,
title={ZebraPose: Coarse to Fine Surface Encoding for 6DoF Object Pose Estimation},
author={Su, Yongzhi and Saleh, Mahdi and Fetzer, Torben and Rambach, Jason and Navab, Nassir and Busam, Benjamin and Stricker, Didier and Tombari, Federico},
journal={arXiv preprint arXiv:2203.09418},
year={2022}
}
```
Download the required ground-truth folders for ZebraPose from here. The folders are `models_GT_color`, `XX_GT` (e.g. `train_real_GT` and `test_GT`), and `models` (`models` is optional; it is only needed if you want to generate the GT from scratch).
For more information and next steps, please go to the project page at https://github.com/suyz526/ZebraPose