Voldemort108X committed
Commit 75db438 · verified · 1 Parent(s): ee358ba

Add files using upload-large-folder tool
Code/Baselines/CraftsMan3D/README.md ADDED
@@ -0,0 +1,257 @@
+ #### Important: we released [CraftsMan3D-DoraVAE](https://aruichen.github.io/Dora/) trained using rectified flow.
+
+ [中文版](README_zh.md)
+ <p align="center">
+ <img src="asset/logo.png" height=220>
+ </p>
+
+ ### <div align="center">CraftsMan3D: High-fidelity Mesh Generation <br> with 3D Native Generation and Interactive Geometry Refiner</div>
+ ##### <p align="center"> [Weiyu Li<sup>*1,2</sup>](https://wyysf-98.github.io/), Jiarui Liu<sup>*1,2</sup>, Hongyu Yan<sup>*1</sup>, [Rui Chen<sup>1</sup>](https://aruichen.github.io/), [Yixun Liang<sup>1,2</sup>](https://yixunliang.github.io/), [Xuelin Chen<sup>3</sup>](https://xuelin-chen.github.io/), [Ping Tan<sup>1,2</sup>](https://ece.hkust.edu.hk/pingtan), [Xiaoxiao Long<sup>1,2</sup>](https://www.xxlong.site/)</p>
+ ##### <p align="center"> <sup>1</sup>HKUST, <sup>2</sup>LightIllusions, <sup>3</sup>Adobe Research</p>
+ <div align="center">
+ <a href="https://craftsman3d.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Github&color=blue&logo=github-pages"></a> &ensp;
+ <a href="https://huggingface.co/spaces/wyysf/CraftsMan"><img src="https://www.gradio.app/_app/immutable/assets/gradio.CHB5adID.svg" height="25"/></a> &ensp;
+ <a href="https://triverse.lightillusions.com/"><img src="asset/icon.png" height="25"/>Local Website</a> &ensp;
+ <a href="https://arxiv.org/pdf/2405.14979"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv&color=red&logo=arxiv"></a> &ensp;
+ </div>
+
+ # Usage
+
+ ```python
+ from craftsman import CraftsManPipeline
+ import torch
+
+ # load from a local ckpt
+ # mkdir ckpts && cd ckpts
+ # mkdir craftsman-DoraVAE && cd craftsman-DoraVAE
+ # wget https://pub-c7137d332b4145b6b321a6c01fcf8911.r2.dev/craftsman-DoraVAE/config.yaml
+ # wget https://pub-c7137d332b4145b6b321a6c01fcf8911.r2.dev/craftsman-DoraVAE/model.ckpt
+ # pipeline = CraftsManPipeline.from_pretrained("./ckpts/craftsman-DoraVAE", device="cuda:0", torch_dtype=torch.bfloat16)
+
+ # load from the Hugging Face model hub (upload in progress)
+ pipeline = CraftsManPipeline.from_pretrained("craftsman3d/craftsman-DoraVAE", device="cuda:0", torch_dtype=torch.bfloat16)
+
+ # inference
+ mesh = pipeline("https://pub-f9073a756ec645d692ce3d171c2e1232.r2.dev/data/werewolf.png").meshes[0]
+ mesh.export("werewolf.obj")
+ ```
+
+ The results should look like this:
+ <p align="center">
+ <img src="asset/demo_result.png" height=220>
+ </p>
+
+
+ #### TL;DR: <font color="red">**CraftsMan3D (aka 匠心)**</font> is a two-stage text/image-to-3D mesh generation model. Mimicking the modeling workflow of an artist or craftsman, we first generate a coarse mesh (5s) with smooth geometry using a 3D diffusion model, and then refine it (20s) using enhanced multi-view normal maps produced by a 2D normal diffusion model; the refinement can also be done interactively, as in ZBrush.
+
+
+
+ ## ✨ Overview
+
+ This repo contains the source code (training / inference) of our 3D diffusion model, pretrained weights, and the gradio demo code of our 3D mesh generation project. You can find more visualizations on our [project page](https://craftsman3d.github.io/) and try our [demo](http://algodemo.bj.lightions.top:24926). If you have high-quality 3D data or other ideas, we very much welcome any form of cooperation.
+ <details><summary>Full abstract here</summary>
+ We present a novel generative 3D modeling system, coined CraftsMan, which can generate high-fidelity 3D geometries with highly varied shapes, regular mesh topologies, and detailed surfaces, and, notably, allows for refining the geometry in an interactive manner. Despite the significant advancements in 3D generation, existing methods still struggle with lengthy optimization processes, irregular mesh topologies, noisy surfaces, and difficulties in accommodating user edits, consequently impeding their widespread adoption and implementation in 3D modeling software. Our work is inspired by the craftsman, who usually roughs out the holistic figure of the work first and elaborates the surface details subsequently. Specifically, we employ a 3D native diffusion model, which operates on a latent space learned from latent set-based 3D representations, to generate coarse geometries with regular mesh topology in seconds. In particular, this process takes as input a text prompt or a reference image and leverages a powerful multi-view (MV) diffusion model to generate multiple views of the coarse geometry, which are fed into our MV-conditioned 3D diffusion model for generating the 3D geometry, significantly improving robustness and generalizability. Following that, a normal-based geometry refiner is used to significantly enhance the surface details. This refinement can be performed automatically, or interactively with user-supplied edits. Extensive experiments demonstrate that our method achieves high efficiency in producing superior-quality 3D assets compared to existing methods.
+ </details>
+
+ <p align="center">
+ <img src="asset/teaser.jpg" >
+ </p>
+
+ # 💪 ToDo List
+
+ - [x] Inference code
+ - [x] Training code
+ - [x] Gradio & Hugging Face demo
+ - [x] Model zoo
+ - [x] Environment setup
+ - [x] Data sample
+ - [x] CraftsMan3D-DoraVAE (not the official version)
+ - [x] Support rectified flow training
+ - [x] Support [FlashVDM](https://github.com/Tencent/FlashVDM/tree/main); thanks for their open-source release
+ - [ ] Release the multi-view (4 views) conditioned model (including weights and a training data sample)
+ - [x] Add data for VAE training; we release the data preprocessing script in `watertight_and_sampling.py`
+ - [ ] Support training and fine-tuning the TripoSG model (almost done)
+ - [ ] Support training the Hunyuan3D-2 model (the weights of its VAE encoder have not been released)
+
+
+ ## Contents
+ * [Pretrained Models](#pretrained-models)
+ * [Gradio & Huggingface Demo](#gradio-demo)
+ * [Inference](#inference)
+ * [Training](#train-from-scratch)
+ * [Data Preparation](#train-from-scratch)
+ * [Video](#-video)
+ * [Acknowledgements](#-acknowledgements)
+ * [Citation](#-bibtex)
+
+ ## Environment Setup
+
+ <details> <summary>Hardware</summary>
+ We train our model on 32x A800 GPUs with a batch size of 32 per GPU for 7 days.
+
+ The mesh refinement part runs on an RTX 3080 GPU.
+
+
+ </details>
+ <details> <summary>Setup environment</summary>
+
+ :smiley: We also provide a Dockerfile for easy installation, see [Setup using Docker](./docker/README.md).
+
+ - Python 3.10.0
+ - PyTorch 2.5.1 (for RMSNorm)
+ - CUDA Toolkit 12.4.0
+ - Ubuntu 22.04
+
+ Clone this repository.
+
+ ```sh
+ git clone https://github.com/wyysf-98/CraftsMan.git
+ ```
+
+ Install the required packages.
+
+ ```sh
+ conda create -n CraftsMan python=3.10 -y
+ conda activate CraftsMan
+ # conda install -c "nvidia/label/cuda-12.1.1" cudatoolkit
+ # conda install pytorch==2.5.1 torchvision==0.20.1 torchaudio==2.5.1 pytorch-cuda=12.1 -c pytorch -c nvidia
+ pip install torch==2.5.1 torchvision==0.20.1
+ pip install -r docker/requirements.txt
+ pip install torch-cluster -f https://data.pyg.org/whl/torch-2.5.1+cu124.html
+ ```
+
+ </details>
+
+
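+ After installation, a quick sanity check (a minimal sketch; the expected values follow the versions listed above):
+
+ ```python
+ import torch
+ print(torch.__version__)          # expect 2.5.1
+ print(torch.cuda.is_available())  # expect True on a CUDA 12.4 machine
+ ```
+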
+ ## ✨ History
+ This repo will port some recent techniques for 3D diffusion models; historical versions (e.g., the arXiv version) can be found in the corresponding branches.
+ <details>
+
+ <p align="center">
+ <img src="asset/history.png" >
+ </p>
+
+
+ </details>
+
+ # 🎥 Video
+
+ [![Watch the video](asset/video_cover.png)](https://www.youtube.com/watch?v=WhEs4tS4mGo)
+
+
+ # 3D Native DiT Model (Latent Set DiT Model)
+ We provide the training and inference code here for future research.
+ The latent set VAE model is heavily built on the structure of [Michelangelo](https://github.com/NeuralCarver/Michelangelo).
+ The latent set diffusion model is based on a [DiT/Pixart-alpha](https://pixart-alpha.github.io/) architecture with 500M parameters.
+
+ ## Pretrained models
+ Currently, we provide [models](https://huggingface.co/wyysf/CraftsMan) conditioned on a single-view image with DiT.
+ We will consider open-sourcing further models depending on circumstances.
+ If you run ``inference.py`` without specifying the model path, it will automatically download the model from the Hugging Face model hub.
+
+ Or you can download the model manually:
+ ```bash
+ ## you can manually get the model using wget:
+ mkdir ckpts
+ cd ckpts
+ mkdir craftsman-v1-5
+ cd craftsman-v1-5
+ wget https://huggingface.co/craftsman3d/craftsman/resolve/main/config.yaml
+ wget https://huggingface.co/craftsman3d/craftsman/resolve/main/model.ckpt
+ ### for the DoraVAE version (https://aruichen.github.io/Dora/)
+ cd ..
+ mkdir craftsman-doravae
+ cd craftsman-doravae
+ wget https://huggingface.co/craftsman3d/craftsman-doravae/resolve/main/config.yaml
+ wget https://huggingface.co/craftsman3d/craftsman-doravae/resolve/main/model.ckpt
+
+ ## OR you can git clone the repo:
+ git lfs install
+ git clone https://huggingface.co/craftsman3d/craftsman
+ ### for the DoraVAE version (https://aruichen.github.io/Dora/)
+ git clone https://huggingface.co/craftsman3d/craftsman-doravae
+ ```
+ If you download the models using wget, you should manually put them under the `ckpts/craftsman` directory.
+
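+ If you prefer to fetch the files programmatically instead of with wget, the sketch below mirrors what `gradio_app.py` does when `--model_path` is empty, using `hf_hub_download` (the repo id `craftsman3d/craftsman-v1-5` is the one used in that script; swap in `craftsman3d/craftsman-doravae` for the DoraVAE version):
+
+ ```python
+ from huggingface_hub import hf_hub_download
+
+ # downloads into the local Hugging Face cache and returns the file paths
+ ckpt_path = hf_hub_download(repo_id="craftsman3d/craftsman-v1-5", filename="model.ckpt", repo_type="model")
+ config_path = hf_hub_download(repo_id="craftsman3d/craftsman-v1-5", filename="config.yaml", repo_type="model")
+ print(ckpt_path, config_path)
+ ```
+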
+ ## Gradio demo
+ We provide gradio demos for easy usage.
+
+ ```bash
+ python gradio_app.py --model_path ./ckpts/craftsman
+ ```
+
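+ Once the demo is running, it can also be called programmatically. A hedged sketch using `gradio_client` (the endpoint name `generate_img2obj` is registered in `gradio_app.py` and the argument order follows its `image2mesh` signature; the local URL and example image path are assumptions):
+
+ ```python
+ from gradio_client import Client, handle_file
+
+ client = Client("http://127.0.0.1:7860")  # default local gradio URL (assumption)
+ obj_path = client.predict(
+     handle_file("./asset/examples/werewolf.png"),  # input image (hypothetical example file)
+     ["Remesh"],        # more: optional remeshing with Instant Meshes
+     "DDIMScheduler",   # scheduler
+     5.0,               # guidance_scale
+     50,                # steps
+     0,                 # seed
+     2000,              # target_face_count
+     7,                 # octree_depth
+     api_name="/generate_img2obj",
+ )
+ print(obj_path)  # path to the generated .obj file
+ ```
+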
+ ## Inference
+ To generate 3D meshes from a folder of images via the command line, simply run:
+ ```bash
+ python inference.py --input eval_data --device 0 --model ./ckpts/craftsman
+ ```
+
+ For more configs, please refer to `inference.py`.
+
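+ If you want to script this instead of using the CLI, a minimal sketch built on the pipeline API from the Usage section (the `eval_data` folder matches the command above; the glob pattern is an assumption):
+
+ ```python
+ import glob, os, torch
+ from craftsman import CraftsManPipeline
+
+ pipeline = CraftsManPipeline.from_pretrained("./ckpts/craftsman", device="cuda:0", torch_dtype=torch.bfloat16)
+ for image_file in sorted(glob.glob("eval_data/*.png")):
+     mesh = pipeline(image_file).meshes[0]
+     mesh.export(os.path.splitext(os.path.basename(image_file))[0] + ".obj")
+ ```
+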
+ ## Train from scratch
+ We provide our training code to facilitate future research, along with a data sample in `data`.
+ A 100k data sample for VAE training can be downloaded from (to be uploaded)
+
+ A 100k data sample for diffusion training can be downloaded from https://pub-c7137d332b4145b6b321a6c01fcf8911.r2.dev/Objaverse_100k.zip
+
+ The selected 190k UUIDs for training can be downloaded from https://pub-c7137d332b4145b6b321a6c01fcf8911.r2.dev/objaverse_190k.json
+
+ The selected 320k UUIDs for training can be downloaded from https://pub-c7137d332b4145b6b321a6c01fcf8911.r2.dev/objaverse_320k.json
+
+ For more training details and configs, please refer to the `configs` folder.
+
+ ```bash
+ ### train the shape autoencoder
+ python train.py --config ./configs/shape-autoencoder/michelangelo-l768-e64-ne8-nd16.yaml \
+                 --train --gpu 0
+
+ ### train the image-to-shape diffusion model
+ # single-view conditioned generation
+ python train.py --config ./configs/image-to-shape-diffusion/clip-dinov2-pixart-diffusion-dit32.yaml --train --gpu 0
+
+ # multi-view conditioned generation (original paper)
+ python train.py --config ./configs/image-to-shape-diffusion/clip-mvrgb-modln-l256-e64-ne8-nd16-nl6.yaml --train --gpu 0
+
+ # DoraVAE single-view diffusion version (we cannot provide the data due to license restrictions;
+ # you can process it yourself, see https://github.com/Seed3D/Dora/tree/main/sharp_edge_sampling)
+ python train.py --config ./configs/image-to-shape-diffusion/DoraVAE-dinov2reglarge518-pixart-rectified-flow-dit32.yaml --train --gpu 0
+ ```
+
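+ To train on multiple GPUs, pass a comma-separated list to `--gpu` (as documented in `train.py`; note that if `CUDA_VISIBLE_DEVICES` is already set, it takes precedence and `--gpu` is ignored):
+
+ ```bash
+ # use the 1st and 2nd available GPUs
+ python train.py --config ./configs/image-to-shape-diffusion/clip-dinov2-pixart-diffusion-dit32.yaml --train --gpu 0,1
+ ```
+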
+
+
+ # ❓ Common questions
+ Q: Any tips to get better results?
+ 0. Due to limited resources, we will gradually expand the dataset and training scale, and release more pretrained models in the future.
+ 1. As with 2D diffusion models, try different seeds, adjust the CFG scale, or use a different scheduler (see the sketch after this list). Good luck!
+ 2. We will provide a version conditioned on a text prompt, so you can use positive and negative prompts.
+
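+ For reference, the sketch below shows the low-level sampling call from `gradio_app.py`, which exposes these knobs directly (`model` is the loaded diffusion system and `image` a preprocessed PIL image; this is the internal API, not the high-level `CraftsManPipeline`):
+
+ ```python
+ # try a few seeds / CFG scales and keep the best-looking result
+ for seed in (0, 4, 42):
+     latents = model.sample(
+         {"image": [image]},   # conditioning image(s)
+         sample_times=1,
+         steps=50,             # 3D sample steps
+         guidance_scale=7.5,   # CFG scale
+         seed=seed,
+     )[0]
+ ```
+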
+
+
+ # 🤗 Acknowledgements
+
+ - Thanks to [LightIllusion](https://www.lightillusions.com/) for providing computational resources and to Jianxiong Pan for data preprocessing. If you have any ideas about high-quality 3D generation, feel free to contact us!
+ - Thanks to [Hugging Face](https://github.com/huggingface) for sponsoring the demo!
+ - Thanks to [3DShape2VecSet](https://github.com/1zb/3DShape2VecSet/tree/master) for their amazing work; the latent set representation provides an efficient way to represent 3D shapes!
+ - Thanks to [Michelangelo](https://github.com/NeuralCarver/Michelangelo) for their great work; our model structure is heavily built on this repo!
+ - Thanks to [CRM](https://github.com/thu-ml/CRM), [Wonder3D](https://github.com/xxlong0/Wonder3D/) and [LGM](https://github.com/3DTopia/LGM) for their released multi-view image generation models. If you have a more advanced version and want to contribute to the community, we welcome updates.
+ - Thanks to [Objaverse](https://objaverse.allenai.org/) and [Objaverse-MIX](https://huggingface.co/datasets/BAAI/Objaverse-MIX/tree/main) for their open-sourced data, which helped us run many validation experiments.
+ - Thanks to [ThreeStudio](https://github.com/threestudio-project/threestudio) for their great repo; we follow their fantastic and easy-to-use code structure!
+ - Thanks to [Direct3D](https://github.com/DreamTechAI/Direct3D), especially [Shuang Wu](https://scholar.google.it/citations?user=SN8J78EAAAAJ&hl=zh-CN), for providing their results.
+ - Thanks to [TripoSG](https://github.com/VAST-AI-Research/TripoSG) and [Hunyuan3D-2](https://github.com/Tencent/Hunyuan3D-2) for their open-source releases; we adapted our code to support loading their weights, training, and fine-tuning.
+
+
+ # 📑 License
+ CraftsMan3D is under the MIT License.
+
+
+ # 📖 BibTeX
+
+     @misc{li2024craftsman,
+         title = {CraftsMan3D: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner},
+         author = {Weiyu Li and Jiarui Liu and Hongyu Yan and Rui Chen and Yixun Liang and Xuelin Chen and Ping Tan and Xiaoxiao Long},
+         year = {2024},
+         archivePrefix = {arXiv},
+         eprint = {2405.14979},
+         primaryClass = {cs.CG}
+     }
Code/Baselines/CraftsMan3D/README_zh.md ADDED
@@ -0,0 +1,207 @@
+ <p align="center">
+ <img src="asset/logo.png" height=220>
+ </p>
+
+ ### <div align="center">CraftsMan 1.5 (匠心): High-quality Mesh Generation with a 3D Native Diffusion Model and Interactive Geometry Refinement</div>
+ ##### <p align="center"> [Weiyu Li<sup>1,2</sup>](https://wyysf-98.github.io/), Jiarui Liu<sup>1,2</sup>, Hongyu Yan<sup>*1,2</sup>, [Rui Chen<sup>1,2</sup>](https://aruichen.github.io/), [Yixun Liang<sup>3,2</sup>](https://yixunliang.github.io/), [Xuelin Chen<sup>4</sup>](https://xuelin-chen.github.io/), [Ping Tan<sup>1,2</sup>](https://ece.hkust.edu.hk/pingtan), [Xiaoxiao Long<sup>1,2</sup>](https://www.xxlong.site/)</p>
+ ##### <p align="center"> <sup>1</sup>HKUST, <sup>2</sup>LightIllusions, <sup>3</sup>HKUST(GZ), <sup>4</sup>Tencent AI Lab</p>
+ <div align="center">
+ <a href="https://craftsman3d.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Github&color=blue&logo=github-pages"></a> &ensp;
+ <a href="https://huggingface.co/spaces/wyysf/CraftsMan"><img src="https://www.gradio.app/_app/immutable/assets/gradio.CHB5adID.svg" height="25"/>(without texture)</a> &ensp;
+ <a href="http://algodemo.bj.lightions.top:24926"><img src="https://www.gradio.app/_app/immutable/assets/gradio.CHB5adID.svg" height="25"/>(with texture)</a> &ensp;
+ <a href="https://arxiv.org/pdf/2405.14979"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv&color=red&logo=arxiv"></a> &ensp;
+ </div>
+
+ # Usage
+
+ ```python
+ from craftsman import CraftsManPipeline
+ import torch
+
+ # load from a local ckpt
+ # pipeline = CraftsManPipeline.from_pretrained("./ckpts/craftsman", device="cuda:0", torch_dtype=torch.float32)
+
+ # load from the Hugging Face model hub
+ pipeline = CraftsManPipeline.from_pretrained("craftsman3d/craftsman", device="cuda:0", torch_dtype=torch.float32)
+
+ # inference
+ mesh = pipeline("https://pub-f9073a756ec645d692ce3d171c2e1232.r2.dev/data/werewolf.png").meshes[0]
+ mesh.export("werewolf.obj")
+ ```
+
+ The result should look like this:
+ <p align="center">
+ <img src="asset/demo_result.png" height=220>
+ </p>
+
+
+ #### TL;DR: <font color="red">**CraftsMan (aka 匠心)**</font> is a two-stage text/image-to-3D mesh generation model. Mimicking the modeling workflow of an artist or craftsman, we first generate a coarse mesh (5s) with smooth geometry using a 3D diffusion model, and then refine it (20s) using enhanced multi-view normal maps generated by a 2D normal diffusion model; the refinement can also be done interactively, as in ZBrush.
+
+
+ ## ✨ Overview
+ This repo contains the source code (training / inference) of our 3D mesh generation project, pretrained weights, and the gradio demo code. You can find more visualizations on our [project page](https://craftsman3d.github.io/) and try the generation results in the [demo](https://huggingface.co/spaces/wyysf/CraftsMan). If you have high-quality 3D data or other ideas, we very much welcome any form of cooperation.
+ <details><summary>Full abstract</summary>
+ We present a novel generative 3D modeling system, CraftsMan, which can generate high-fidelity 3D geometries with highly varied shapes, regular mesh topologies, and smooth surfaces, and, notably, allows the geometry to be refined interactively, just like a manual modeling workflow. Despite significant advancements in 3D generation, existing methods still struggle with lengthy optimization processes, irregular mesh topologies, noisy surfaces, and difficulties in accommodating user edits, which hinders their widespread adoption in 3D modeling software. Our work is inspired by craftsmen, who usually rough out the holistic figure of a work first and elaborate the surface details subsequently. Specifically, we employ a 3D native diffusion model, which operates on a latent space learned from latent set-based 3D representations, to generate coarse geometries with regular mesh topology in seconds. In particular, this process takes a text prompt or a reference image as input and leverages a powerful multi-view (MV) 2D diffusion model to generate multiple views of the coarse geometry, which are fed into our MV-conditioned 3D diffusion model to generate the 3D geometry, significantly improving robustness and generalizability. Subsequently, a normal-based geometry refiner is used to significantly enhance the surface details. This refinement can be performed automatically or interactively with user-supplied edits. Extensive experiments demonstrate that our method is highly efficient at producing 3D assets of superior quality compared with existing methods.
+ </details>
+
+ <p align="center">
+ <img src="asset/teaser.jpg" >
+ </p>
+
+
+ ## Contents
+ * [Video](#-video)
+ * [Pretrained Models](#pretrained-models)
+ * [Gradio & Huggingface Demo](#gradio-demo)
+ * [Inference](#inference)
+ * [Training](#train-from-scratch)
+ * [Data Preparation](#train-from-scratch)
+ * [Acknowledgements](#-acknowledgements)
+ * [Citation](#-bibtex)
+
+ ## Environment Setup
+
+ <details> <summary>Hardware</summary>
+ We train our model on 32x A800 GPUs with a batch size of 32 per GPU for 7 days.
+
+ The mesh refinement part runs on an RTX 3080 GPU.
+
+
+ </details>
+ <details> <summary>Setup environment</summary>
+
+ :smiley: For convenience, we also provide a Docker image, see [Setup using Docker](./docker/README.md).
+
+ - Python 3.10.0
+ - PyTorch 2.1.0
+ - CUDA Toolkit 11.8.0
+ - Ubuntu 22.04
+
+ Clone this repository.
+
+ ```sh
+ git clone git@github.com:wyysf-98/CraftsMan.git
+ ```
+
+ Install the required packages.
+
+ ```sh
+ conda create -n CraftsMan python=3.10 -y
+ conda activate CraftsMan
+ conda install cudatoolkit=11.8 -c pytorch -y
+ pip install torch==2.1.0 torchvision==0.16.0
+ pip install -r docker/requirements.txt
+ ```
+
+ </details>
+
+
+ # 🎥 Video
+
+ [![Watch the video](asset/video_cover.png)](https://www.youtube.com/watch?v=WhEs4tS4mGo)
+
+
+ # 3D Native Diffusion Model (Latent Set DiT Model)
+ We provide the training and inference code here to facilitate future research.
+ The latent set diffusion model is largely based on [Michelangelo](https://github.com/NeuralCarver/Michelangelo),
+ adopts the [DiT/Pixart-alpha](https://pixart-alpha.github.io/) architecture, and has 500M parameters.
+
+ ## Pretrained models
+ Currently, we provide models conditioned on a single-view image.
+ We will consider open-sourcing further models depending on circumstances.
+ ```bash
+ ## you can download directly with wget:
+ wget https://huggingface.co/craftsman3d/craftsman/resolve/main/config.yaml
+ wget https://huggingface.co/craftsman3d/craftsman/resolve/main/model.ckpt
+
+ ## or clone the model repo:
+ git lfs install
+ git clone https://huggingface.co/craftsman3d/craftsman
+ ```
+ If you download with wget, you should manually place the model files under the `ckpts/craftsman` folder.
+
+
+ ## Gradio demo
+ We provide a gradio demo for easier use.
+ To run the gradio demo on your local machine, simply run:
+
+ ```bash
+ python gradio_app.py --model_path ./ckpts/craftsman
+ ```
+
+ ## Inference
+ To generate 3D meshes from a folder of images via the command line, simply run:
+ ```bash
+ python inference.py --input eval_data --device 0 --model ./ckpts/craftsman
+ ```
+
+ For more inference configs, please refer to `inference.py`.
+
+ ## Train from scratch
+ We provide our training code to facilitate future research, and we have provided a data sample.
+ For the training data, please fill out this [form](https://docs.google.com/forms/d/e/1FAIpQLSdhjMFNaOqMqioqZyJNcSCfXb4H0WrcYyEcHvFI2nf60_fPhw/viewform) to obtain the download link.
+
+ *Due to the cost of hosting the data, if you help share our work on social media (in any form), you will receive a download link hosted on AWS S3, which supports download speeds of 20-100 MB/s.*
+ For more training details and configs, please refer to the `configs` folder.
+
+ ```bash
+ ### train the shape autoencoder
+ python train.py --config ./configs/shape-autoencoder/l256-e64-ne8-nd16.yaml \
+                 --train --gpu 0
+
+ ### train the single-view DiT model
+ python train.py --config ./configs/image-to-shape-diffusion/clip-dino-rgb-pixart-lr2e4-ddim.yaml \
+                 --train --gpu 0
+ ```
+
+ # 2D Normal-Enhanced Diffusion Model (coming soon)
+
+ We are working hard to release our mesh refinement code. Thank you for your patience as we put the finishing touches on this exciting development. 🔧🚀
+
+ You can also find results of the mesh refinement part in the video.
+
+
+ # ❓ Common questions
+ Q: How to get better results?
+ 0. Due to limited resources, we will gradually expand the dataset and training scale, and release more pretrained models in the future.
+ 1. As with 2D diffusion models, try different random seeds, adjust the CFG scale, or use a different scheduler.
+ 2. We will consider providing a version conditioned on a text prompt later, so you can use positive and negative prompts.
+
+
+ # 💪 ToDo List
+
+ - [x] Inference code
+ - [x] Training code
+ - [x] Gradio & Hugging Face demo
+ - [x] Model zoo; more checkpoints will be released in the future
+ - [x] Environment setup
+ - [x] Data sample
+ - [ ] Mesh refinement code
+
+
+ # 🤗 Acknowledgements
+
+ - Thanks to [LightIllusions](https://www.lightillusions.com/) for providing computational resources and to Jianxiong Pan for data preprocessing. If you have any ideas about high-quality 3D generation, feel free to contact us!
+ - Thanks to [Hugging Face](https://github.com/huggingface) for sponsoring the demo!
+ - Thanks to [3DShape2VecSet](https://github.com/1zb/3DShape2VecSet/tree/master) for their amazing work; the latent set representation provides an efficient way to represent 3D shapes!
+ - Thanks to [Michelangelo](https://github.com/NeuralCarver/Michelangelo) for their great work; our model structure is heavily built on this repo!
+ - Thanks to [CRM](https://github.com/thu-ml/CRM), [Wonder3D](https://github.com/xxlong0/Wonder3D/) and [LGM](https://github.com/3DTopia/LGM) for their released multi-view image generation models. If you have a more advanced version and want to contribute to the community, we welcome updates.
+ - Thanks to [Objaverse](https://objaverse.allenai.org/) and [Objaverse-MIX](https://huggingface.co/datasets/BAAI/Objaverse-MIX/tree/main) for their open-sourced data, which helped us run many validation experiments.
+ - Thanks to [ThreeStudio](https://github.com/threestudio-project/threestudio) for implementing a complete framework; we follow their excellent and easy-to-use code structure.
+
+ # 📑 License
+ CraftsMan is under [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.en.html), so any downstream solutions and products (including cloud services) that include CraftsMan code or trained models (either pretrained or custom trained) should be open-sourced to comply with the AGPL conditions. If you have any questions about the usage of CraftsMan, please contact us first.
+
+ # 📖 BibTeX
+
+     @misc{li2024craftsman,
+         title = {CraftsMan: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner},
+         author = {Weiyu Li and Jiarui Liu and Hongyu Yan and Rui Chen and Yixun Liang and Xuelin Chen and Ping Tan and Xiaoxiao Long},
+         year = {2024},
+         archivePrefix = {arXiv},
+         eprint = {2405.14979},
+         primaryClass = {cs.CG}
+     }
Code/Baselines/CraftsMan3D/__init__.py ADDED
File without changes
Code/Baselines/CraftsMan3D/gradio_app.py ADDED
@@ -0,0 +1,313 @@
+ import spaces
+ import argparse
+ import os
+ import json
+ import torch
+ import sys
+ import time
+ import importlib
+ import numpy as np
+ from omegaconf import OmegaConf
+ from huggingface_hub import hf_hub_download
+ from diffusers import DiffusionPipeline
+
+ import PIL
+ from PIL import Image
+ from collections import OrderedDict
+ import trimesh
+ import rembg
+ import gradio as gr
+ from typing import Any
+
+ proj_dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
+ sys.path.append(os.path.join(proj_dir))
+
+ import tempfile
+
+ import craftsman
+ from craftsman.utils.config import ExperimentConfig, load_config
+
+ _TITLE = '''CraftsMan: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner'''
+ _DESCRIPTION = '''
+ <div>
+ <span style="color: red;">Important: If you have your own data and want to collaborate, we welcome any contact.</span>
+ <div>
+ Select or upload an image, then just click 'Generate'.
+ <br>
+ By mimicking the artist/craftsman modeling workflow, we propose CraftsMan (aka 匠心), which uses a 3D Latent Set Diffusion Model to directly generate coarse meshes;
+ a multi-view normal-enhanced image generation model is then used to refine the mesh.
+ We provide the coarse 3D diffusion part here.
+ <br>
+ If you find CraftsMan helpful, please help to ⭐ the <a href='https://github.com/wyysf-98/CraftsMan/' target='_blank'>Github Repo</a>. Thanks!
+ <a style="display:inline-block; margin-left: .5em" href='https://github.com/wyysf-98/CraftsMan/'><img src='https://img.shields.io/github/stars/wyysf-98/CraftsMan?style=social' /></a>
+ <br>
+ *If you have your own multi-view images, you can directly upload them.
+ </div>
+ '''
+ _CITE_ = r"""
+ ---
+ 📝 **Citation**
+ If you find our work useful for your research or applications, please cite using this bibtex:
+ ```bibtex
+ @article{li2024craftsman,
+   author = {Weiyu Li and Jiarui Liu and Rui Chen and Yixun Liang and Xuelin Chen and Ping Tan and Xiaoxiao Long},
+   title = {CraftsMan: High-fidelity Mesh Generation with 3D Native Generation and Interactive Geometry Refiner},
+   journal = {arXiv preprint arXiv:2405.14979},
+   year = {2024},
+ }
+ ```
+ 🤗 **Acknowledgements**
+ We use <a href='https://github.com/wjakob/instant-meshes' target='_blank'>Instant Meshes</a> to remesh the generated mesh to a lower face count; thanks to the authors for the great work.
+ 📋 **License**
+ CraftsMan is under [AGPL-3.0](https://www.gnu.org/licenses/agpl-3.0.en.html), so any downstream solution and products (including cloud services) that include CraftsMan code or a trained model (both pretrained or custom trained) inside it should be open-sourced to comply with the AGPL conditions. If you have any questions about the usage of CraftsMan, please contact us first.
+ 📧 **Contact**
+ If you have any questions, feel free to open a discussion or contact us at <b>weiyuli.cn@gmail.com</b>.
+ """
+
+ model = None
+ cached_dir = None
+
+ generator = None
+
+ def check_input_image(input_image):
+     if input_image is None:
+         raise gr.Error("No image uploaded!")
+
+ class RMBG(object):
+     def __init__(self):
+         pass
+
+     def rmbg_rembg(self, input_image, background_color):
+         def _rembg_remove(
+             image: PIL.Image.Image,
+             rembg_session = None,
+             force: bool = False,
+             **rembg_kwargs,
+         ) -> PIL.Image.Image:
+             do_remove = True
+             if image.mode == "RGBA" and image.getextrema()[3][0] < 255:
+                 # the alpha channel is not empty: skip background removal and use it as the mask
+                 print("alpha channel not empty, skip remove background, using alpha channel as mask")
+                 background = Image.new("RGBA", image.size, background_color)
+                 image = Image.alpha_composite(background, image)
+                 do_remove = False
+             do_remove = do_remove or force
+             if do_remove:
+                 image = rembg.remove(image, session=rembg_session, **rembg_kwargs)
+
+             # crop to the minimal bounding box of the foreground
+             alpha = image.split()[-1]
+             image = image.crop(alpha.getbbox())
+
+             return image
+         return _rembg_remove(input_image, None, force=True)  # fixed: the keyword is `force`, not `force_remove`
+
+     def run(self, rm_type, image, foreground_ratio, background_choice, background_color=(0, 0, 0, 0)):
+         if "Original" in background_choice:
+             return image
+         else:
+             if background_choice == "Alpha as Mask":  # fixed: match the dropdown value's capitalization
+                 alpha = image.split()[-1]
+                 image = image.crop(alpha.getbbox())
+             elif "Remove" in background_choice:
+                 if rm_type.upper() == "REMBG":
+                     image = self.rmbg_rembg(image, background_color=background_color)
+                 else:
+                     return -1
+
+             # Calculate the new size after rescaling
+             new_size = tuple(int(dim * foreground_ratio) for dim in image.size)
+             # Resize the image while maintaining the aspect ratio
+             resized_image = image.resize(new_size)
+             # Create a new image with the original size and a transparent background
+             padded_image = PIL.Image.new("RGBA", image.size, (0, 0, 0, 0))
+             paste_position = ((image.width - resized_image.width) // 2, (image.height - resized_image.height) // 2)
+             padded_image.paste(resized_image, paste_position)
+
+             # expand image to 1:1
+             width, height = padded_image.size
+             if width == height:
+                 return padded_image
+             new_size = (max(width, height), max(width, height))
+             image = PIL.Image.new("RGBA", new_size, (0, 0, 0, 0))
+             paste_position = ((new_size[0] - width) // 2, (new_size[1] - height) // 2)
+             image.paste(padded_image, paste_position)
+             return image
+
+ # @spaces.GPU
+ def image2mesh(image: Any,
+                more: list = [],
+                scheduler_name: str = "DDIMScheduler",
+                guidance_scale: float = 7.5,
+                steps: int = 30,
+                seed: int = 4,
+                target_face_count: int = 2000,
+                octree_depth: int = 7):
+
+     sample_inputs = {
+         "image": [
+             image
+         ]
+     }
+
+     global model
+     latents = model.sample(
+         sample_inputs,
+         sample_times=1,
+         steps=steps,
+         guidance_scale=guidance_scale,
+         seed=seed
+     )[0]
+
+     # decode the latents to a mesh
+     box_v = 1.1
+     mesh_outputs, _ = model.shape_model.extract_geometry(
+         latents,
+         bounds=[-box_v, -box_v, -box_v, box_v, box_v, box_v],
+         octree_depth=octree_depth
+     )
+     assert len(mesh_outputs) == 1, "Only a single mesh output is supported in the gradio demo"
+     mesh = trimesh.Trimesh(mesh_outputs[0][0], mesh_outputs[0][1])
+     # filepath = f"{cached_dir}/{time.time()}.obj"
+     filepath = tempfile.NamedTemporaryFile(suffix=".obj", delete=False).name
+     mesh.export(filepath, include_normals=True)
+
+     if 'Remesh' in more:
+         remeshed_filepath = tempfile.NamedTemporaryFile(suffix="_remeshed.obj", delete=False).name
+         print("Remeshing with Instant Meshes...")
+         command = f"{proj_dir}/apps/third_party/InstantMeshes {filepath} -f {target_face_count} -o {remeshed_filepath}"
+         os.system(command)
+         filepath = remeshed_filepath
+
+     return filepath
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--model_path", type=str, default="./ckpts/craftsman-v1-5", help="Path to the model checkpoint directory")
+     parser.add_argument("--cached_dir", type=str, default="")
+     parser.add_argument("--device", type=int, default=0)
+     args = parser.parse_args()
+
+     cached_dir = args.cached_dir
+     if cached_dir != "":
+         os.makedirs(args.cached_dir, exist_ok=True)
+     device = torch.device(f"cuda:{args.device}" if torch.cuda.is_available() else "cpu")
+     print(f"using device: {device}")
+
+     # for input image
+     background_choice = OrderedDict({
+         "Alpha as Mask": "Alpha as Mask",
+         "Auto Remove Background": "Auto Remove Background",
+         "Original Image": "Original Image",
+     })
+
+     generator = torch.Generator(device)
+
+     # for 3D latent set diffusion
+     if args.model_path == "":
+         ckpt_path = hf_hub_download(repo_id="craftsman3d/craftsman-v1-5", filename="model.ckpt", repo_type="model")
+         config_path = hf_hub_download(repo_id="craftsman3d/craftsman-v1-5", filename="config.yaml", repo_type="model")
+     else:
+         ckpt_path = os.path.join(args.model_path, "model.ckpt")
+         config_path = os.path.join(args.model_path, "config.yaml")
+     scheduler_dict = OrderedDict({
+         "DDIMScheduler": 'diffusers.schedulers.DDIMScheduler',
+         # "DPMSolverMultistepScheduler": 'diffusers.schedulers.DPMSolverMultistepScheduler', # not supported yet
+         # "UniPCMultistepScheduler": 'diffusers.schedulers.UniPCMultistepScheduler', # not supported yet
+     })
+
+     # main GUI
+     custom_theme = gr.themes.Soft(primary_hue="blue").set(
+         button_secondary_background_fill="*neutral_100",
+         button_secondary_background_fill_hover="*neutral_200")
+     custom_css = '''#disp_image {
+         text-align: center; /* Horizontally center the content */
+     }'''
+
+     with gr.Blocks(title=_TITLE, theme=custom_theme, css=custom_css) as demo:
+         with gr.Row():
+             with gr.Column(scale=1):
+                 gr.Markdown('# ' + _TITLE)
+                 gr.Markdown(_DESCRIPTION)
+
+         with gr.Row():
+             with gr.Column(scale=2):
+                 with gr.Column():
+                     # input image
+                     with gr.Row():
+                         image_input = gr.Image(
+                             label="Image Input",
+                             image_mode="RGBA",
+                             sources="upload",
+                             type="pil",
+                         )
+                     run_btn = gr.Button('Generate', variant='primary', interactive=True)
+
+                 with gr.Row():
+                     gr.Markdown('''Try a different <b>seed and MV Model</b> for better results. Good Luck :)''')
+                 with gr.Row():
+                     seed = gr.Number(0, label='Seed', show_label=True)
+                     more = gr.CheckboxGroup(["Remesh"], label="More", show_label=False)
+                     target_face_count = gr.Number(2000, label='Target Face Count', show_label=True)
+
+                 with gr.Row():
+                     gr.Examples(
+                         examples=[os.path.join("./asset/examples", i) for i in os.listdir("./asset/examples")],
+                         inputs=[image_input],
+                         examples_per_page=8
+                     )
+
+             with gr.Column(scale=4):
+                 with gr.Row():
+                     output_model_obj = gr.Model3D(
+                         label="Output Model (OBJ Format)",
+                         camera_position=(90.0, 90.0, 3.5),
+                         interactive=False,
+                     )
+                 with gr.Row():
+                     gr.Markdown('''*Please note that the model is flipped in the gradio viewer; download the obj file to get the correct orientation.''')
+
+         with gr.Accordion('Advanced options', open=False):
+             with gr.Row():
+                 background_choice = gr.Dropdown(label="Background Choice", value="Auto Remove Background", choices=list(background_choice.keys()))
+                 rmbg_type = gr.Dropdown(label="Background Remove Type", value="rembg", choices=['sam', "rembg"])
+                 foreground_ratio = gr.Slider(label="Foreground Ratio", value=1.0, minimum=0.5, maximum=1.0, step=0.01)
+
+             with gr.Row():
+                 guidance_scale = gr.Number(label="3D Guidance Scale", value=5.0, minimum=3.0, maximum=10.0)
+                 steps = gr.Number(value=50, minimum=20, maximum=100, label="3D Sample Steps")
+
+             with gr.Row():
+                 scheduler = gr.Dropdown(label="scheduler", value="DDIMScheduler", choices=list(scheduler_dict.keys()))
+                 octree_depth = gr.Slider(label="Octree Depth", value=7, minimum=4, maximum=8, step=1)
+
+         gr.Markdown(_CITE_)
+
+         outputs = [output_model_obj]
+         rmbg = RMBG()
+
+         # model = load_model(ckpt_path, config_path, device)
+         cfg = load_config(config_path)
+         model = craftsman.find(cfg.system_type)(cfg.system)
+         print(f"Restoring states from the checkpoint path at {ckpt_path} with config {cfg}")
+         ckpt = torch.load(ckpt_path, map_location=torch.device('cpu'))
+         model.load_state_dict(
+             ckpt["state_dict"] if "state_dict" in ckpt else ckpt,
+         )
+         model = model.to(device).eval()
+
+         run_btn.click(fn=check_input_image, inputs=[image_input]
+         ).success(
+             fn=rmbg.run,
+             inputs=[rmbg_type, image_input, foreground_ratio, background_choice],
+             outputs=[image_input]
+         ).success(
+             fn=image2mesh,
+             inputs=[image_input, more, scheduler, guidance_scale, steps, seed, target_face_count, octree_depth],
+             outputs=outputs,
+             api_name="generate_img2obj")
+
+     demo.queue().launch(share=True, allowed_paths=[args.cached_dir])
Code/Baselines/CraftsMan3D/inference.py ADDED
@@ -0,0 +1,22 @@
+ from craftsman import CraftsManPipeline
+ import torch
+
+ # load from a local ckpt
+ # pipeline = CraftsManPipeline.from_pretrained("./ckpts/craftsman-v1-5", device="cuda:0", torch_dtype=torch.float32)
+ # pipeline = CraftsManPipeline.from_pretrained("./ckpts/craftsman-DoraVAE", device="cuda:0", torch_dtype=torch.float32)
+ pipeline = CraftsManPipeline.from_pretrained("./ckpts/craftsman-DoraVAE", device="cuda:0", torch_dtype=torch.bfloat16)  # bf16 for fast inference
+
+ # # load from the Hugging Face model hub
+ # pipeline = CraftsManPipeline.from_pretrained("craftsman3d/craftsman-v1-5", device="cuda:0", torch_dtype=torch.float32)
+ # pipeline = CraftsManPipeline.from_pretrained("craftsman3d/craftsman-DoraVAE", device="cuda:0", torch_dtype=torch.float32)
+ # pipeline = CraftsManPipeline.from_pretrained("craftsman3d/craftsman-DoraVAE", device="cuda:0", torch_dtype=torch.bfloat16)  # bf16 for fast inference
+
+ image_file = "val_data/dragon.png"
+ obj_file = "dragon.glb"  # output obj or glb file
+ textured_obj_file = "dragon_textured.glb"  # reserved for a textured export
+ # inference
+ mesh = pipeline(image_file).meshes[0]
+ mesh.export(obj_file)
+
+ ########## For texture generation, we recommend using Hunyuan3D-2 ##########
+ # https://github.com/Tencent/Hunyuan3D-2/tree/main/hy3dgen/texgen
Code/Baselines/CraftsMan3D/material.mtl ADDED
@@ -0,0 +1,7 @@
+ # https://github.com/mikedh/trimesh
+
+ newmtl material_0
+ Ka 0.40000000 0.40000000 0.40000000
+ Kd 1.00000000 1.00000000 1.00000000
+ Ks 0.40000000 0.40000000 0.40000000
+ Ns 1.00000000
Code/Baselines/CraftsMan3D/train.py ADDED
@@ -0,0 +1,306 @@
+ import argparse
+ import getpass
+ import contextlib
+ import importlib
+ import logging
+ import os
+ import sys
+ import time
+ import re
+ import datetime
+ import traceback
+ import pytorch_lightning as pl
+ import torch
+ from pytorch_lightning import Trainer
+ from pytorch_lightning.callbacks import LearningRateMonitor, ModelCheckpoint
+ from pytorch_lightning.loggers import CSVLogger, TensorBoardLogger
+ from pytorch_lightning.utilities.rank_zero import rank_zero_only
+ import craftsman
+ from craftsman.systems.base import BaseSystem
+ from craftsman.utils.callbacks import (
+     EarlyEnvironmentSetter,
+     CodeSnapshotCallback,
+     ConfigSnapshotCallback,
+     CustomProgressBar,
+     ProgressCallback,
+ )
+ from craftsman.utils.config import ExperimentConfig, load_config
+ from craftsman.utils.misc import get_rank
+ from craftsman.utils.typing import Optional
+
+
+ class ColoredFilter(logging.Filter):
+     """
+     A logging filter to add color to certain log levels.
+     """
+
+     RED = "\033[31m"
+     GREEN = "\033[32m"
+     YELLOW = "\033[33m"
+
+     BLUE = "\033[34m"
+     MAGENTA = "\033[35m"
+     CYAN = "\033[36m"
+
+     COLORS = {
+         "WARNING": YELLOW,
+         "INFO": GREEN,
+         "DEBUG": BLUE,
+         "CRITICAL": MAGENTA,
+         "ERROR": RED,
+     }
+
+     RESET = "\033[0m"
+
+     def __init__(self):
+         super().__init__()
+
+     def filter(self, record):
+         if record.levelname in self.COLORS:
+             color_start = self.COLORS[record.levelname]
+             record.levelname = f"{color_start}[{record.levelname}]"
+             record.msg = f"{record.msg}{self.RESET}"
+         return True
+
+
+ def load_custom_module(module_path):
+     module_name = os.path.basename(module_path)
+     if os.path.isfile(module_path):
+         sp = os.path.splitext(module_path)
+         module_name = sp[0]
+     try:
+         if os.path.isfile(module_path):
+             module_spec = importlib.util.spec_from_file_location(
+                 module_name, module_path
+             )
+         else:
+             module_spec = importlib.util.spec_from_file_location(
+                 module_name, os.path.join(module_path, "__init__.py")
+             )
+
+         module = importlib.util.module_from_spec(module_spec)
+         sys.modules[module_name] = module
+         module_spec.loader.exec_module(module)
+         return True
+     except Exception as e:
+         print(traceback.format_exc())
+         print(f"Cannot import {module_path} module for custom nodes:", e)
+         return False
+
+
+ def load_custom_modules():
+     node_paths = ["custom"]
+     node_import_times = []
+     for custom_node_path in node_paths:
+         # skip missing directories (fixed: the original checked the literal string "node_paths")
+         if not os.path.exists(custom_node_path):
+             continue
+         possible_modules = os.listdir(custom_node_path)
+         if "__pycache__" in possible_modules:
+             possible_modules.remove("__pycache__")
+
+         for possible_module in possible_modules:
+             module_path = os.path.join(custom_node_path, possible_module)
+             if (
+                 os.path.isfile(module_path)
+                 and os.path.splitext(module_path)[1] != ".py"
+             ):
+                 continue
+             if module_path.endswith(".disabled"):
+                 continue
+             time_before = time.perf_counter()
+             success = load_custom_module(module_path)
+             node_import_times.append(
+                 (time.perf_counter() - time_before, module_path, success)
+             )
+
+     if len(node_import_times) > 0:
+         print("\nImport times for custom modules:")
+         for n in sorted(node_import_times):
+             if n[2]:
+                 import_message = ""
+             else:
+                 import_message = " (IMPORT FAILED)"
+             print("{:6.1f} seconds{}:".format(n[0], import_message), n[1])
+         print()
+
+
+ def main(args, extras) -> None:
+     # set CUDA_VISIBLE_DEVICES before any CUDA initialization
+     os.environ["CUDA_DEVICE_ORDER"] = "PCI_BUS_ID"
+     env_gpus_str = os.environ.get("CUDA_VISIBLE_DEVICES", None)
+     env_gpus = list(env_gpus_str.split(",")) if env_gpus_str else []
+     selected_gpus = [0]
+     torch.set_float32_matmul_precision("high")
+
+     # Always rely on CUDA_VISIBLE_DEVICES if specific GPU ID(s) are specified.
+     # As far as PyTorch Lightning is concerned, we always use all available GPUs
+     # (possibly filtered by CUDA_VISIBLE_DEVICES).
+     devices = -1
+     if len(env_gpus) > 0:
+         n_gpus = len(env_gpus)
+     else:
+         selected_gpus = list(args.gpu.split(","))
+         n_gpus = len(selected_gpus)
+         print(f"Using {n_gpus} GPUs: {selected_gpus}")
+         os.environ["CUDA_VISIBLE_DEVICES"] = args.gpu
+
+     if args.typecheck:
+         from jaxtyping import install_import_hook
+
+         install_import_hook("craftsman", "typeguard.typechecked")
+
+     logger = logging.getLogger("pytorch_lightning")
+     if args.verbose:
+         logger.setLevel(logging.DEBUG)
+
+     for handler in logger.handlers:
+         if handler.stream == sys.stderr:  # type: ignore
+             if not args.gradio:
+                 handler.setFormatter(logging.Formatter("%(levelname)s %(message)s"))
+                 handler.addFilter(ColoredFilter())
+             else:
+                 handler.setFormatter(logging.Formatter("[%(levelname)s] %(message)s"))
+
+     load_custom_modules()
+
+     # parse YAML config to OmegaConf
+     cfg: ExperimentConfig
+     cfg = load_config(args.config, cli_args=extras, n_gpus=n_gpus)
+
+     # set a different seed for each device
+     rank = get_rank()
+     pl.seed_everything(cfg.seed + rank, workers=True)
+
+     dm = craftsman.find(cfg.data_type)(cfg.data)
+     system: BaseSystem = craftsman.find(cfg.system_type)(
+         cfg.system, resumed=cfg.resume is not None
+     )
+     system.set_save_dir(os.path.join(cfg.trial_dir, "save"))
+
+     if args.gradio:
+         fh = logging.FileHandler(os.path.join(cfg.trial_dir, "logs"))
+         fh.setLevel(logging.INFO)
+         if args.verbose:
+             fh.setLevel(logging.DEBUG)
+         fh.setFormatter(logging.Formatter("[%(levelname)s] %(message)s"))
+         logger.addHandler(fh)
+
+     callbacks = []
+     if args.train:
+         callbacks += [
+             EarlyEnvironmentSetter(),
+             ModelCheckpoint(
+                 dirpath=os.path.join(cfg.trial_dir, "ckpts"), **cfg.checkpoint
+             ),
+             LearningRateMonitor(logging_interval="step"),
+             CodeSnapshotCallback(
+                 os.path.join(cfg.trial_dir, "code"), use_version=False
+             ),
+             ConfigSnapshotCallback(
+                 args.config,
+                 cfg,
+                 os.path.join(cfg.trial_dir, "configs"),
+                 use_version=False,
+             ),
+         ]
+         if args.gradio:
+             callbacks += [
+                 ProgressCallback(save_path=os.path.join(cfg.trial_dir, "progress"))
+             ]
+         else:
+             callbacks += [CustomProgressBar(refresh_rate=1)]
+
+     def write_to_text(file, lines):
+         with open(file, "w") as f:
+             for line in lines:
+                 f.write(line + "\n")
+
+     loggers = []
+     if args.train:
+         # make tensorboard logging dir to suppress warning
+         rank_zero_only(
+             lambda: os.makedirs(os.path.join(cfg.trial_dir, "tb_logs"), exist_ok=True)
+         )()
+         loggers += [
+             TensorBoardLogger(cfg.trial_dir, name="tb_logs"),
+             CSVLogger(cfg.trial_dir, name="csv_logs"),
+         ] + system.get_loggers()
+         rank_zero_only(
+             lambda: write_to_text(
+                 os.path.join(cfg.trial_dir, "cmd.txt"),
+                 ["python " + " ".join(sys.argv), str(args)],
+             )
+         )()
+
+     trainer = Trainer(
+         callbacks=callbacks,
+         logger=loggers,
+         inference_mode=False,
+         accelerator="gpu",
+         devices=devices,
+         **cfg.trainer
+         # profiler="pytorch",
+     )
+
+     def set_system_status(system: BaseSystem, ckpt_path: Optional[str]):
+         if ckpt_path is None:
+             return
+         ckpt = torch.load(ckpt_path, map_location="cpu")
+         system.set_resume_status(ckpt["epoch"], ckpt["global_step"])
+
+     if args.train:
+         trainer.fit(system, datamodule=dm, ckpt_path=cfg.resume)
+         trainer.test(system, datamodule=dm)
+         if args.gradio:
+             # also export assets if in gradio mode
+             trainer.predict(system, datamodule=dm)
+     elif args.validate:
+         # manually set epoch and global_step as they cannot be automatically resumed
+         set_system_status(system, cfg.resume)
+         trainer.validate(system, datamodule=dm, ckpt_path=cfg.resume)
+     elif args.test:
+         # manually set epoch and global_step as they cannot be automatically resumed
+         set_system_status(system, cfg.resume)
+         trainer.test(system, datamodule=dm, ckpt_path=cfg.resume)
+     elif args.export:
+         set_system_status(system, cfg.resume)
+         trainer.predict(system, datamodule=dm, ckpt_path=cfg.resume)
+
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser()
+     parser.add_argument("--config", required=True, help="path to config file")
+     parser.add_argument(
+         "--gpu",
+         default="0",
+         help="GPU(s) to be used. 0 means use the 1st available GPU. "
+         "1,2 means use the 2nd and 3rd available GPU. "
+         "If CUDA_VISIBLE_DEVICES is set before calling `train.py`, "
+         "this argument is ignored and all available GPUs are always used.",
+     )
+
+     group = parser.add_mutually_exclusive_group(required=True)
+     group.add_argument("--train", action="store_true")
+     group.add_argument("--validate", action="store_true")
+     group.add_argument("--test", action="store_true")
+     group.add_argument("--export", action="store_true")
+
+     parser.add_argument(
+         "--gradio", action="store_true", help="if true, run in gradio mode"
+     )
+
+     parser.add_argument(
+         "--verbose", action="store_true", help="if true, set logging level to DEBUG"
+     )
+
+     parser.add_argument(
+         "--typecheck",
+         action="store_true",
+         help="whether to enable dynamic type checking",
+     )
+
+     args, extras = parser.parse_known_args()
+
+     if args.gradio:
+         with contextlib.redirect_stdout(sys.stderr):
+             main(args, extras)
+     else:
+         main(args, extras)
Code/Baselines/CraftsMan3D/train_autoencoder.sh ADDED
@@ -0,0 +1,4 @@
+ export CUDA_VISIBLE_DEVICES=0
+ export CUDA_LAUNCH_BLOCKING=1
+
+ python train.py --config ./configs/shape-autoencoder/l256-e64-ne8-nd16.yaml --train --gpu 0
Code/Baselines/CraftsMan3D/watertight_and_sampling.py ADDED
@@ -0,0 +1,613 @@
1
+ import os
2
+ import math
3
+ import torch
4
+ import torch.nn.functional as F
5
+ import igl # pip install libigl==2.5.1
6
+ import trimesh
7
+ import mcubes # pip install mcubes
8
+ import numpy as np
9
+ import nvdiffrast.torch as dr
10
+
11
+ from pysdf import SDF
12
+ from matplotlib import image
13
+ from argparse import ArgumentParser
14
+
15
+ def sample_from_sphere(num_views, radius, upper=False):
16
+ """sample x,y,z location from the sphere
17
+ reference: https://zhuanlan.zhihu.com/p/25988652?group_id=828963677192491008
18
+ """
19
+ num_views = num_views * 2 if upper else num_views
20
+ phi = (np.sqrt(5) - 1.0) / 2.0
21
+ pos_list = []
22
+ for n in range(1, num_views + 1):
23
+ y = (2.0 * n - 1) / num_views - 1.0
24
+ x = np.cos(2 * np.pi * n * phi) * np.sqrt(1 - y * y)
25
+ z = np.sin(2 * np.pi * n * phi) * np.sqrt(1 - y * y)
26
+ if upper and y < 0:
27
+ continue
28
+ pos_list.append((x * radius, y * radius, z * radius))
29
+ return np.array(pos_list)
30
+
31
+
32
+ class MeshRenderer:
33
+ def __init__(
34
+ self,
35
+ resolution=(1024, 1024), # resolution of the rendered image
36
+ near=0.1, # near plane for the camera
37
+ far=10.0, # far plane for the camera
38
+ device='cuda' # device to run the renderer on
39
+ ):
40
+ """Initialize the mesh renderer."""
41
+ self.resolution = resolution
42
+ self.near = near
43
+ self.far = far
44
+ self.device = torch.device(device)
45
+ # check if the device is cuda
46
+ if torch.cuda.is_available() and device == 'cuda':
47
+ self._ctx = dr.RasterizeCudaContext(device=self.device)
48
+ elif device == 'cpu':
49
+ self._ctx = dr.RasterizeGLContext(device=self.device)
50
+ else:
51
+ raise ValueError("Device must be 'cuda' or 'cpu'.")
52
+
53
+ # warm up the renderer
54
+ self._warmup()
55
+
56
+ def _warmup(self):
57
+ """Warm up the renderer to avoid the first frame being slow."""
58
+ #windows workaround for https://github.com/NVlabs/nvdiffrast/issues/59
59
+ def tensor(*args, **kwargs):
60
+ return torch.tensor(*args, device='cuda', **kwargs)
61
+ pos = tensor([[[-0.8, -0.8, 0, 1], [0.8, -0.8, 0, 1], [-0.8, 0.8, 0, 1]]], dtype=torch.float32)
62
+ tri = tensor([[0, 1, 2]], dtype=torch.int32)
63
+ dr.rasterize(self._ctx, pos, tri, resolution=[256, 256])
64
+
65
+
66
+ def rasterize(
67
+ self,
68
+ pos: torch.FloatTensor,
69
+ tri: torch.IntTensor,
70
+ resolution = (1024, 1024), # resolution of the rendered image
71
+ grad_db: bool = True,
72
+ ):
73
+ """
74
+ Rasterize the given vertices and triangles.
75
+ Args:
76
+ pos (Float[Tensor, "B Nv 4"]): Vertex positions
77
+ tri (Integer[Tensor, "Nf 3"]): Triangle indices
78
+ resolution (Union[int, Tuple[int, int]]): Output resolution
79
+ grad_db (Bool): Enable gradient backpropagation
80
+ Returns:
81
+ Rasterized outputs
82
+ """
83
+ # rasterize in instance mode (single topology)
84
+ return dr.rasterize(
85
+ self._ctx, pos.float(), tri.int(), resolution, grad_db=grad_db
86
+ )
87
+
88
+ def interpolate(
89
+ self,
90
+ attr: torch.FloatTensor,
91
+ rast: torch.FloatTensor,
92
+ tri: torch.IntTensor,
93
+ rast_db=None,
94
+ diff_attrs=None,
95
+ ):
96
+ """
97
+ Interpolate attributes using the given rasterization outputs.
98
+ Args:
99
+ attr (Float[Tensor, "B Nv C"]): Attributes to interpolate
100
+ rast (Float[Tensor, "B H W 4"]): Rasterization outputs
101
+ tri (Integer[Tensor, "Nf 3"]): Triangle indices
102
+ rast_db (Float[Tensor, "B H W 4"], optional): Differentiable rasterization outputs
103
+ diff_attrs (Float[Tensor, "B Nv C"], optional): Differentiable attributes
104
+ Returns:
105
+ Interpolated attribute values
106
+ """
107
+ return dr.interpolate(
108
+ attr.float(), rast, tri.int(), rast_db=rast_db, diff_attrs=diff_attrs
109
+ )
110
+
111
+ def render(
112
+ self,
113
+ mesh: trimesh.Trimesh, # trimesh object
114
+ cam2world_matrixs: torch.Tensor, #N,4,4
115
+ mvp_matrixs: torch.Tensor, #N,4,4
116
+ render_vert_depth: bool = True, # whether to render vertex depth
117
+ render_face_normals: bool = True, # whether to render face normals
118
+ ):
119
+ """
120
+ Render the mesh using the given camera and model view projection matrices.
121
+ Args:
122
+ mesh (trimesh.Trimesh): The mesh to render
123
+ cam2world_matrixs (torch.Tensor): Camera to world matrix (N, 4, 4)
124
+ mvp_matrixs (torch.Tensor): Model view projection matrix (N, 4, 4)
125
+ render_vert_depth (bool): Whether to render vertex depth
126
+ render_face_normals (bool): Whether to render face normals
127
+ Returns:
128
+ results (dict): Dictionary containing rendered outputs
129
+ """
130
+ results = {}
131
+
132
+ v_pos = torch.tensor(mesh.vertices, dtype=torch.float32, device=self.device) # (num_vertices, 3)
133
+ t_pos_idx = torch.tensor(mesh.faces, dtype=torch.int32, device=self.device) # (num_faces, 3)
134
+
135
+ verts_homo = torch.cat([v_pos, torch.ones([v_pos.shape[0], 1]).to(v_pos)], dim=-1)
136
+ v_pos_clip: Float[Tensor, "B Nv 4"] = torch.matmul(verts_homo, mvp_matrixs.permute(0, 2, 1))
137
+
138
+ rast, _ = self.rasterize(v_pos_clip, t_pos_idx, self.resolution)
139
+ mask = rast[..., 3:] > 0
140
+
141
+ if render_vert_depth:
142
+ verts_homo = torch.cat(
143
+ [
144
+ v_pos, torch.ones([v_pos.shape[0], 1]).to(v_pos),
145
+ ],
146
+ dim=-1,
147
+ )
148
+ v_pos_cam = verts_homo @ cam2world_matrixs.inverse().transpose(-1, -2)
149
+ v_depth = v_pos_cam[..., 2:3] * -1 # (B,n_v,1)
150
+ gb_depth, _ = self.interpolate(
151
+ v_depth.contiguous(), rast, t_pos_idx
152
+ )
153
+ gb_depth[~mask] = self.far
154
+ results.update({"vert_depth": gb_depth})
155
+
156
+
157
+ if render_face_normals:
158
+ flat_face_index = torch.arange(
159
+ len(t_pos_idx) * 3, device=self.device, dtype=torch.int
160
+ ).reshape(-1, 3)
161
+
162
+ i0 = t_pos_idx[:, 0]
163
+ i1 = t_pos_idx[:, 1]
164
+ i2 = t_pos_idx[:, 2]
165
+
166
+ v0 = v_pos[i0, :]
167
+ v1 = v_pos[i1, :]
168
+ v2 = v_pos[i2, :]
169
+
170
+ face_normals = torch.linalg.cross(v1 - v0, v2 - v0)
171
+ f_nrm = face_normals[:, None, :].repeat(1, 3, 1).reshape(-1, 3)
172
+
173
+ gb_normal, _ = self.interpolate(f_nrm, rast, flat_face_index)
174
+
175
+ gb_normal = gb_normal.view(-1, self.resolution[0] * self.resolution[1], 3)
176
+ gb_normal = torch.matmul(
177
+ torch.linalg.inv(cam2world_matrixs[:, :3, :3]),
178
+ gb_normal.transpose(1, 2),
179
+ ).transpose(1, 2)
180
+ gb_normal = gb_normal.view(-1, self.resolution[0], self.resolution[1], 3)
181
+ gb_normal = F.normalize(gb_normal, dim=-1).contiguous()
182
+ gb_normal = torch.lerp(
183
+ torch.zeros_like(gb_normal), (gb_normal + 1.0) / 2.0, mask.float()
184
+ )
185
+ gb_normal = torch.cat([gb_normal, mask], dim=-1)
186
+ results.update({"face_normal": gb_normal})
187
+
188
+ return results
189
+
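+ # Usage sketch (illustrative only, mirroring the call in `watertight` below; assumes
+ # camera matrices `c2w` and `mvp` of shape (N, 4, 4) have already been built):
+ # renderer = MeshRenderer((1024, 1024), 0.1, 10.0, "cuda")
+ # out = renderer.render(mesh, cam2world_matrixs=c2w, mvp_matrixs=mvp)
+ # depth_maps = out["vert_depth"] # (N, H, W, 1), set to `far` outside the mask
+ # normal_maps = out["face_normal"] # (N, H, W, 4), encoded normals plus alpha mask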
190
+
191
+ @torch.no_grad()
192
+ def visibility_check(points, depths, cam2world_matrixs, mvp_matrixs):
193
+ '''
194
+ Visibility check for points in 3D space
195
+
196
+ Args:
197
+ - points: (n_points, 3), 3D points in world space
198
+ - depths: (n_view, H, W, 1), depth maps
199
+ - cam2world_matrixs: (n_views, 4, 4), camera to world matrix
200
+ - mvp_matrixs: (n_views, 4, 4), model view projection matrix
201
+
202
+ Returns:
203
+ - mask: (n_points, ), visibility mask
204
+ - dist: (n_points, ), distance to the visible surface
205
+ '''
206
+ dist = torch.ones(points.shape[0]).to(points) # default to one
207
+ mask = torch.zeros(points.shape[0], dtype=torch.bool).to(points.device) # visibility
208
+
209
+ points_homo = torch.cat(
210
+ [points, torch.ones([points.shape[0], 1]).to(points)], dim=-1
211
+ )
212
+ for i, cam2world_matrix in enumerate(cam2world_matrixs):
213
+ points_clip_i = points_homo @ mvp_matrixs[i].permute(1,0)
214
+ valid_region = (torch.abs(points_clip_i[...,0]) < 0.999) & \
215
+ (torch.abs(points_clip_i[...,1]) < 0.999)
216
+ points_valid = points_clip_i[valid_region].float()
217
+
218
+ v_pos_cam = points_homo @ cam2world_matrix.inverse().transpose(-1, -2)
219
+ v_depth = v_pos_cam[..., 2:3] * -1
220
+
221
+ # query using (u, v)
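+ # note: (x, y) here are clip-space coordinates; with the orthographic projection w == 1,
+ # so they already lie in [-1, 1] as grid_sample expects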
222
+ sample_z = torch.nn.functional.grid_sample(
+ depths[i].view(1, 1, depths.shape[1], depths.shape[2]).float(),
224
+ points_valid[:, :2].reshape(1, 1, points_valid.shape[0], 2),
+ align_corners=True, mode='bilinear').reshape(-1)
224
+
225
+ visible_points = v_depth[valid_region].squeeze() < sample_z # visible if z smaller than render depth
226
+ mask[torch.where(valid_region)[0][torch.where(visible_points)[0]]] = True
227
+
228
+ # dist to hitting point along camera ray
229
+ dist[valid_region] = torch.minimum(dist[valid_region], torch.abs(sample_z - v_depth[valid_region].squeeze()))
230
+
231
+ return mask, dist
232
+
233
+
234
+ @torch.no_grad()
235
+ def watertight(
236
+ mesh,
237
+ grid_resolution=256,
238
+ device='cuda',
239
+ num_views=50,
240
+ sample_size=2.1,
241
+ winding_number_thres=0.5,
242
+ ):
243
+ """
244
+ Convert a mesh to a watertight mesh using trimesh.
245
+
246
+ Args:
247
+ mesh: Input mesh as a trimesh object
248
+ grid_resolution: Resolution of the grid for sampling
249
+ device: Device to run the script on (cpu or cuda)
250
+ num_views: Number of views for visibility check, default is 50
251
+ sample_size: Size of the sample space, default is 2.1
252
+ winding_number_thres: Threshold for winding number, default is 0.5
253
+
254
+ Returns:
255
+ watertight_mesh: A watertight mesh as a trimesh object
256
+ """
257
+ # setup grid points
258
+ x, y, z = np.meshgrid(
259
+ np.arange(grid_resolution, dtype=np.float32),
260
+ np.arange(grid_resolution, dtype=np.float32),
261
+ np.arange(grid_resolution, dtype=np.float32),
262
+ indexing='ij')
263
+ grid_points = np.stack(
264
+ (x.reshape(-1) + 0.5, y.reshape(-1) + 0.5, z.reshape(-1) + 0.5),
265
+ axis=-1) / grid_resolution * sample_size - sample_size / 2.0
266
+ grid_points = torch.tensor(grid_points).to(device)
267
+ print(f"number of grid_points: {grid_points.shape} with resolution {grid_resolution}, range {grid_points.min()} to {grid_points.max()}")
268
+
269
+ # setup for rendering depth maps
270
+ cam_poses = sample_from_sphere(num_views, 4.0, upper=False) # (num_views, 3)
271
+ scale = 1.0 # scale for the orthogonal camera projection matrix
272
+ resolution = 1024 # resolution of the rendered images
273
+ aspect_ratio = 1.0 # aspect ratio of the rendered images
274
+ near, far = 0.1, 10.0 # near and far plane for the camera
275
+ cam2world_matrixs, mvp_matrixs = [], []
276
+ for position in cam_poses:
277
+ # extrinsic matrix
278
+ backward = np.array([0, 0, 0]) - position
279
+ backward = backward / np.linalg.norm(backward)
280
+ right = np.cross(backward, np.array([0, 1, 0]))
281
+ right = right / np.linalg.norm(right)
282
+ up = np.cross(right, backward)
283
+
284
+ R = np.stack([right, up, -backward], axis=0)
285
+ t = -R @ position
286
+ extrinsic = np.eye(4)
287
+ extrinsic[:3, :3] = R
288
+ extrinsic[:3, 3] = t
289
+ cam2world_matrixs.append(np.linalg.inv(extrinsic)) # (4, 4)
290
+
291
+ # projection matrix
292
+ proj_mtx = np.zeros([4, 4])
293
+ proj_mtx[0, 0] = scale
294
+ proj_mtx[1, 1] = scale * -1
295
+ proj_mtx[2, 2] = -2 / (far - near)
296
+ proj_mtx[2, 3] = -(far + near) / (far - near)
297
+ proj_mtx[3, 3] = 1
298
+ mvp_matrix = proj_mtx @ extrinsic
299
+ mvp_matrixs.append(mvp_matrix) # (4, 4)
300
+ cam2world_matrixs = torch.tensor(np.array(cam2world_matrixs), dtype=torch.float32).to(device) # (num_views, 4, 4)
301
+ mvp_matrixs = torch.tensor(np.array(mvp_matrixs), dtype=torch.float32).to(device) # (num_views, 4, 4)
302
+
303
+ rendered_imgs = MeshRenderer((resolution, resolution), near, far, device).render(
304
+ mesh,
305
+ cam2world_matrixs=cam2world_matrixs,
306
+ mvp_matrixs=mvp_matrixs,
307
+ )
308
+
309
+ # STEP A. do visibility_check for each grid point
310
+ visibility, dist = visibility_check(grid_points, rendered_imgs['vert_depth'], cam2world_matrixs, mvp_matrixs)
311
+ winding_numbers = igl.fast_winding_number_for_meshes(
312
+ np.array(mesh.vertices, dtype=np.float32),
313
+ np.array(mesh.faces, dtype=np.int32),
314
+ grid_points.detach().cpu().numpy()
315
+ )
316
+ winding_numbers = torch.from_numpy(winding_numbers).to(device)
317
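+ # points with a winding number above the threshold lie inside the original surface,
+ # so they are forced to be treated as invisible below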
+ visibility[visibility & (winding_numbers > winding_number_thres)] = False # combine visibility with the winding number mask
318
+
319
+ ## STEP B. refine sdf close to the surface
320
+ near_surface_idx = torch.where(dist < 1.0)[0]
321
+ squared_distances, closest_points, face_indices = \
322
+ igl.point_mesh_squared_distance(grid_points[near_surface_idx].detach().cpu().numpy(),
323
+ mesh.vertices,
324
+ mesh.faces)
325
+ squared_distances = torch.from_numpy(squared_distances).to(grid_points)
326
+ dist[near_surface_idx] = torch.sqrt(squared_distances)
327
+
328
+ ## STEP C. convert udf to sdf
329
+ dist[~visibility] *= -1
330
+
331
+ ## STEP D. generate the mesh using Marching Cube
332
+ sdf = dist.view(grid_resolution, grid_resolution, grid_resolution)
333
+ # not the 0-level surface, we use the surface with a small offset
334
+ mesh = mcubes.marching_cubes(sdf.cpu().numpy(), sample_size / grid_resolution)
335
+ mesh = trimesh.Trimesh(
336
+ vertices=mesh[0] / grid_resolution * sample_size - sample_size / 2.0,
337
+ faces=mesh[1],
338
+ process=False
339
+ )
340
+
341
+ return mesh
342
+
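+ # Usage sketch (illustrative): convert an arbitrary mesh into a watertight one
+ # (normalize the mesh into the unit cube first, as done in __main__ below)
+ # mesh = trimesh.load("input.obj", force="mesh")
+ # wt_mesh = watertight(mesh, grid_resolution=256, device="cuda")
+ # wt_mesh.export("watertight_mesh.obj")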
343
+
344
+ def sharp_edge_sampling(mesh_path, num_views=100000, sharpness_threshold=math.radians(30)):
345
+ """
346
+ Sample points on sharp edges of the mesh.
347
+ Code borrowed from the Dora github repository: https://github.com/Seed3D/Dora/blob/main/sharp_edge_sampling/sharp_sample.py#L37
348
+ Please consider citing the Dora paper if you use this code in your work.
349
+
350
+ Args:
351
+ mesh_path: Path to the OBJ file
352
+ sharpness_threshold: Threshold for sharp edge detection
353
+ num_views: Target number of points to generate
354
+
355
+ Returns:
356
+ sharp_surface: Array of sharp surface points with positions and normals
357
+ """
358
+ import bpy # bpy==4.0.0
359
+ import bmesh
360
+ # Import OBJ file
361
+ bpy.ops.wm.obj_import(filepath=mesh_path)
362
+ obj = bpy.context.selected_objects[0]
363
+
364
+ # Enter Edit mode
365
+ bpy.context.view_layer.objects.active = obj
366
+ bpy.ops.object.mode_set(mode='EDIT')
367
+
368
+ # Ensure edge selection mode
369
+ bpy.ops.mesh.select_mode(type="EDGE")
370
+
371
+ # Select sharp edges
372
+ bpy.ops.mesh.edges_select_sharp(sharpness=sharpness_threshold)
373
+
374
+ # Switch back to Object mode to access selection state
375
+ bpy.ops.object.mode_set(mode='OBJECT')
376
+
377
+ # Create bmesh instance
378
+ bm = bmesh.new()
379
+ bm.from_mesh(obj.data)
380
+
381
+ # Get selected sharp edges
382
+ sharp_edges = [edge for edge in bm.edges if edge.select]
383
+
384
+ # Collect sharp edge vertex pairs
385
+ sharp_edges_vertices = []
386
+ link_normal1 = []
387
+ link_normal2 = []
388
+ sharp_edges_angle = []
389
+ # Unique vertices set
390
+ vertices_set = set()
391
+
392
+ for edge in sharp_edges:
393
+ vertices_set.update(edge.verts[:]) # Add to unique vertices set
394
+
395
+ # Collect sharp edge vertex pair indices
396
+ sharp_edges_vertices.append([edge.verts[0].index, edge.verts[1].index])
397
+
398
+ # Get normals of linked faces
399
+ normal1 = edge.link_faces[0].normal
400
+ normal2 = edge.link_faces[1].normal
401
+
402
+ link_normal1.append(normal1)
403
+ link_normal2.append(normal2)
404
+
405
+ if normal1.length == 0.0 or normal2.length == 0.0:
406
+ sharp_edges_angle.append(0.0)
407
+ # Compute the angle between the two normals
408
+ else:
409
+ sharp_edges_angle.append(math.degrees(normal1.angle(normal2)))
410
+
411
+ # Extract vertex data
412
+ vertices = []
413
+ vertices_index = []
414
+ vertices_normal = []
415
+
416
+ for vertex in vertices_set:
417
+ vertices.append(vertex.co)
418
+ vertices_index.append(vertex.index)
419
+ vertices_normal.append(vertex.normal)
420
+
421
+ # Convert to numpy arrays
422
+ vertices = np.array(vertices)
423
+ vertices_index = np.array(vertices_index)
424
+ vertices_normal = np.array(vertices_normal)
425
+
426
+ sharp_edges_count = np.array(len(sharp_edges))
427
+ sharp_edges_angle_array = np.array(sharp_edges_angle)
428
+
429
+ if sharp_edges_count > 0:
430
+ sharp_edge_link_normal = np.array(np.concatenate([link_normal1, link_normal2], axis=1))
431
+ nan_mask = np.isnan(sharp_edge_link_normal)
432
+ # Replace NaN values with 0 using boolean indexing
433
+ sharp_edge_link_normal = np.where(nan_mask, 0, sharp_edge_link_normal)
434
+
435
+ nan_mask = np.isnan(vertices_normal)
436
+ # Replace NaN values with 0 using boolean indexing
437
+ vertices_normal = np.where(nan_mask, 0, vertices_normal)
438
+
439
+ # Convert to numpy array
440
+ sharp_edges_vertices_array = np.array(sharp_edges_vertices)
441
+
442
+ if sharp_edges_count > 0:
443
+ mesh = trimesh.load(mesh_path, process=False, force='mesh')
444
+ num_target_sharp_vertices = num_views // 2
445
+ sharp_edge_length = sharp_edges_count
446
+ sharp_edges_vertices_pair = sharp_edges_vertices_array
447
+ sharp_vertices_pair = mesh.vertices[sharp_edges_vertices_pair] # Vertex pair coordinates (1225, 2, 3)
448
+ epsilon = 1e-4 # Small numerical value
449
+
450
+ # Calculate edge normals
451
+ edge_normal = 0.5 * sharp_edge_link_normal[:, :3] + 0.5 * sharp_edge_link_normal[:, 3:]
452
+ norms = np.linalg.norm(edge_normal, axis=1, keepdims=True)
453
+ norms = np.where(norms > epsilon, norms, epsilon)
454
+ edge_normal = edge_normal / norms # Normalize edge normals
455
+
456
+ known_vertices = vertices # Unique sharp vertices
457
+ known_vertices_normal = vertices_normal
458
+ known_vertices = np.concatenate([known_vertices, known_vertices_normal], axis=1)
459
+
460
+ num_known_vertices = known_vertices.shape[0] # Number of unique sharp vertices
461
+
462
+ if num_known_vertices < num_target_sharp_vertices: # If known vertices < target vertices
463
+ num_new_vertices = num_target_sharp_vertices - num_known_vertices
464
+
465
+ if num_new_vertices >= sharp_edge_length: # If new vertices needed >= sharp edges count
466
+ # Each sharp edge needs at least one interpolated vertex
467
+ num_new_vertices_per_pair = num_new_vertices // sharp_edge_length # Vertices per edge
468
+ new_vertices = np.zeros((sharp_edge_length, num_new_vertices_per_pair, 6)) # Initialize new vertices array
469
+
470
+ start_vertex = sharp_vertices_pair[:, 0]
471
+ end_vertex = sharp_vertices_pair[:, 1]
472
+
473
+ for j in range(1, num_new_vertices_per_pair + 1):
474
+ t = j / float(num_new_vertices_per_pair + 1)
475
+ new_vertices[:, j - 1, :3] = (1 - t) * start_vertex + t * end_vertex
476
+ new_vertices[:, j - 1, 3:] = edge_normal # Same normal within each edge
477
+
478
+ new_vertices = new_vertices.reshape(-1, 6)
479
+
480
+ remaining_vertices = num_new_vertices % sharp_edge_length # Calculate remaining vertices
481
+ if remaining_vertices > 0:
482
+ rng = np.random.default_rng()
483
+ ind = rng.choice(sharp_edge_length, remaining_vertices, replace=False)
484
+ new_vertices_remain = np.zeros((remaining_vertices, 6)) # Initialize remaining vertices array
485
+
486
+ start_vertex = sharp_vertices_pair[ind, 0]
487
+ end_vertex = sharp_vertices_pair[ind, 1]
488
+ t = np.random.rand(remaining_vertices).reshape(-1, 1)
489
+ new_vertices_remain[:, :3] = (1 - t) * start_vertex + t * end_vertex
490
+
491
+ edge_normal = 0.5 * sharp_edge_link_normal[ind, :3] + 0.5 * sharp_edge_link_normal[ind, 3:]
492
+ edge_normal = edge_normal / np.linalg.norm(edge_normal, axis=1, keepdims=True)
493
+ new_vertices_remain[:, 3:] = edge_normal
494
+
495
+ new_vertices = np.concatenate([new_vertices, new_vertices_remain], axis=0)
496
+ else:
497
+ remaining_vertices = num_new_vertices % sharp_edge_length # Calculate remaining vertices to allocate
498
+ if remaining_vertices > 0:
499
+ rng = np.random.default_rng()
500
+ ind = rng.choice(sharp_edge_length, remaining_vertices, replace=False)
501
+ new_vertices_remain = np.zeros((remaining_vertices, 6)) # Initialize new vertices array
502
+
503
+ start_vertex = sharp_vertices_pair[ind, 0]
504
+ end_vertex = sharp_vertices_pair[ind, 1]
505
+ t = np.random.rand(remaining_vertices).reshape(-1, 1)
506
+ new_vertices_remain[:, :3] = (1 - t) * start_vertex + t * end_vertex
507
+
508
+ edge_normal = 0.5 * sharp_edge_link_normal[ind, :3] + 0.5 * sharp_edge_link_normal[ind, 3:]
509
+ edge_normal = edge_normal / np.linalg.norm(edge_normal, axis=1, keepdims=True)
510
+ new_vertices_remain[:, 3:] = edge_normal
511
+
512
+ new_vertices = new_vertices_remain
513
+
514
+ sharp_surface = np.concatenate([new_vertices, known_vertices], axis=0)
515
+ else:
516
+ sharp_surface = known_vertices
517
+ # Make sure the sharp surface has the correct number of samples
518
+ sharp_surface = sharp_surface[np.random.choice(sharp_surface.shape[0], num_views, replace=True), :]
519
+ print(f"Sampled {sharp_surface.shape[0]} points on sharp edges of the mesh.")
520
+ # manually remove the bpy object and free memory
521
+ bm.free()
522
+ bpy.data.objects.remove(obj, do_unlink=True)
523
+
524
+ return sharp_surface
525
+ else:
526
+ print("No sharp edges found in the mesh.")
527
+ return None
528
+
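+ # Usage sketch (illustrative): sample ~100k oriented points along sharp edges
+ # pts = sharp_edge_sampling("mesh.obj", num_views=100000, sharpness_threshold=math.radians(15))
+ # if pts is not None: # (num_views, 6): xyz positions + normals
+ # np.save("sharp_points.npy", pts)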
529
+
530
+ if __name__ == "__main__":
531
+ parser = ArgumentParser(description="Watertight mesh and sampling points")
532
+ parser.add_argument("--input_mesh", type=str, required=True, help="Path to the input mesh file")
533
+ parser.add_argument("--output_path", type=str, default="./output", help="Path to save the watertight mesh and sampled points")
534
+ parser.add_argument("--skip_watertight", action='store_true', help="Skip the watertight check and directly sample points from the mesh")
535
+ parser.add_argument("--device", type=str, default="cuda", help="Device to run the script on (cpu or cuda)")
536
+ # Add command-line arguments for watertight conversion
537
+ parser.add_argument("--grid_resolution", type=int, default=256, help="Resolution of the grid for sampling")
538
+ # Add command-line arguments for sampling
539
+ parser.add_argument("--sample_sharp_edge", type=bool, default=True, help="Sample points on sharp edges of the mesh in Dora paper")
540
+ parser.add_argument("--angle_threshold", type=float, default=15.0, help="Angle threshold for sharp edge detection in degrees")
541
+ parser.add_argument("--num_surface_points", type=int, default=100000, help="Number of points to sample from the mesh")
542
+ parser.add_argument("--num_sharp_surface_points", type=int, default=100000, help="Number of points to sample from the sharp edges of the mesh")
543
+ parser.add_argument("--num_near_surface_points", type=int, default=100000, help="Number of points to sample near the mesh surface")
544
+ parser.add_argument("--num_vlume_points", type=int, default=100000, help="Number of points to sample inside the mesh volume")
545
+ parser.add_argument("--bounds", type=float, default=1.05, help="Bounds for sampling points in the mesh, a little larger than the mesh size")
546
+ args = parser.parse_args()
547
+
548
+ # Load the mesh
549
+ mesh = trimesh.load(args.input_mesh, force='mesh')
550
+ # Normalize the mesh into a unit cube
551
+ mesh.apply_translation(-np.mean(mesh.vertices, axis=0))
552
+ mesh.apply_scale(1.0 / np.max(np.abs(mesh.vertices)))
553
+
554
+ # Check if the mesh is watertight
555
+ if mesh.is_watertight and args.skip_watertight:
556
+ print("Mesh is already watertight. Proceeding to sample points.")
557
+ else:
558
+ print("Attempting to convert the mesh to a watertight mesh.")
559
+ mesh = watertight(
560
+ mesh,
561
+ grid_resolution=args.grid_resolution,
562
+ device=args.device,
563
+ )
564
+
565
+ # Save the watertight mesh
566
+ output_path = f"{args.output_path}/{args.input_mesh.split('/')[-1].split('.')[0]}"
567
+ os.makedirs(output_path, exist_ok=True)
568
+ mesh.export(f"{output_path}/watertight_mesh.obj")
569
+ print(f"Watertight mesh saved to {output_path}/watertight_mesh.obj")
570
+
571
+ # sample points near the surface and in the space within bounds
572
+ surface_points, faces = mesh.sample(args.num_surface_points, return_index=True)
573
+ near_points = [
574
+ surface_points + np.random.normal(scale=0.001, size=(args.num_near_surface_points, 3)),
575
+ surface_points + np.random.normal(scale=0.01, size=(args.num_near_surface_points, 3)),
576
+ ]
577
+ near_surface_points = np.concatenate(near_points)
578
+ volume_rand_points = np.random.uniform(-args.bounds, args.bounds, size=(args.num_volume_points, 3))
579
+ f = SDF(mesh.vertices, mesh.faces) # (num_vertices, 3) and (num_faces, 3)
580
+ # compute SDF values for the sampled points
581
+ near_surface_points_with_sdf = np.concatenate([near_surface_points, f(near_surface_points)[:, np.newaxis]], axis=1) # (num_near_surface_points, 4)
582
+ volume_rand_points_with_sdf = np.concatenate([volume_rand_points, f(volume_rand_points)[:, np.newaxis]], axis=1) # (num_volume_points, 4)
583
+
584
+ # Sample points with normals on the surface
585
+ surface_points, faces = mesh.sample(args.num_surface_points, return_index=True)
586
+ normals = mesh.face_normals[faces]
587
+ surface = np.concatenate([surface_points, normals], axis=1)
588
+ if args.sample_sharp_edge:
589
+ # Sample points on sharp edges
590
+ print("Sampling points on sharp edges of the mesh.")
591
+ sharp_surface = sharp_edge_sampling(
592
+ args.input_mesh,
593
+ num_views=args.num_sharp_surface_points,
594
+ sharpness_threshold=math.radians(args.angle_threshold)
595
+ )
596
+ # Save the samples
597
+ np.savez(
598
+ f'{output_path}/samples.npz',
599
+ surface=surface, # (num_surface_points, 6), surface points with normals
600
+ sharp_surface=sharp_surface, # (num_sharp_surface_points, 6), sharp surface points with normals
601
+ near_surface_points=near_surface_points_with_sdf, # (num_near_surface_points, 4), sampled points near the surface with SDF values
602
+ volume_rand_points=volume_rand_points_with_sdf, # (num_volume_points, 4), sampled points in the volume within bounds with SDF values
603
+ bounds=np.array([-args.bounds, args.bounds])
604
+ )
605
+ else:
606
+ # Save the samples
607
+ np.savez(
608
+ f'{output_path}/samples.npz',
609
+ surface=surface, # (num_surface_points, 6), surface points with normals
610
+ near_surface_points=near_surface_points_with_sdf, # (num_near_surface_points, 4), sampled points near the surface with SDF values
611
+ volume_rand_points=volume_rand_points_with_sdf, # (num_volume_points, 4), sampled points in the volume within bounds with SDF values
612
+ bounds=np.array([-args.bounds, args.bounds])
613
+ )
Code/Baselines/sd-dino/README.md ADDED
@@ -0,0 +1,162 @@
1
+ # A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence
2
+
3
+ **A Tale of Two Features** explores the complementary nature of Stable Diffusion (SD) and DINOv2 features for zero-shot semantic correspondence. The results demonstrate that a simple fusion of the two features leads to state-of-the-art performance on the SPair-71k, PF-Pascal, and TSS datasets.
4
+
5
+ This repository is the official implementation of the paper:
6
+
7
+ [**A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence**](https://arxiv.org/abs/2305.15347)
8
+ [*Junyi Zhang*](https://junyi42.github.io/),
9
+ [*Charles Herrmann*](https://scholar.google.com/citations?user=LQvi5XAAAAAJ),
10
+ [*Junhwa Hur*](https://hurjunhwa.github.io/),
11
+ [*Luisa F. Polanía*](https://scholar.google.com/citations?user=HGLobX4AAAAJ),
12
+ [*Varun Jampani*](https://varunjampani.github.io/),
13
+ [*Deqing Sun*](https://deqings.github.io/),
14
+ [*Ming-Hsuan Yang*](https://faculty.ucmerced.edu/mhyang/)
15
+ NeurIPS, 2023.
16
+
17
+ **[New!] We have released the code for [Telling Left from Right](https://github.com/Junyi42/geoaware-sc), a follow-up with better semantic correspondence.**
18
+
19
+ ![teaser](assets/teaser.png)
20
+
21
+ ## Visual Results
22
+ ### Dense Correspondence
23
+ <img src="assets/dense_correspondence.png" width="100%">
24
+
25
+ ### Object Swapping
26
+ <div align="center">
27
+ <img src="assets/swap_aero.gif" width="32%">
28
+ <img src="assets/swap_bird.gif" width="32%">
29
+ <img src="assets/swap_bus.gif" width="32%">
30
+ </div>
31
+ <div align="center">
32
+ <img src="assets/swap_car.gif" width="32%">
33
+ <img src="assets/swap_cow.gif" width="32%">
34
+ <img src="assets/swap_dog.gif" width="32%">
35
+ </div>
36
+ <div align="center">
37
+ <img src="assets/swap_person.gif" width="32%">
38
+ <img src="assets/swap_sheep.gif" width="32%">
39
+ <img src="assets/swap_train.gif" width="32%">
40
+ </div>
41
+
42
+ ### Object Swapping (with refinement process)
43
+ <div align="center">
44
+ <img src="assets/instance_swapping_cat.png" width="49%">
45
+ <img src="assets/instance_swapping_bird.png" width="49%">
46
+ </div>
47
+
48
+ ## Links
49
+ * [Project Page](https://sd-complements-dino.github.io) (with additional visual results)
50
+ * [arXiv Page](https://arxiv.org/abs/2305.15347)
51
+
52
+ ## Environment Setup
53
+
54
+ To install the required dependencies, use the following commands:
55
+
56
+ ```bash
57
+ conda create -n sd-dino python=3.9
58
+ conda activate sd-dino
59
+ conda install pytorch=1.13.1 torchvision=0.14.1 pytorch-cuda=11.6 -c pytorch -c nvidia
60
+ conda install -c "nvidia/label/cuda-11.6.1" libcusolver-dev
61
+ git clone git@github.com:Junyi42/sd-dino.git
62
+ cd sd-dino
63
+ pip install -e .
64
+ ```
65
+ (Optional) You may also want to install [xformers](https://github.com/facebookresearch/xformers) for efficient transformer implementation:
66
+
67
+ ```
68
+ pip install xformers==0.0.16
69
+ ```
70
+
71
+ ## Get Started
72
+
73
+ ### Prepare the data
74
+
75
+ We provide the scripts to download the datasets in the `data` folder. To download specific datasets, use the following commands:
76
+
77
+ * SPair-71k:
78
+ ```bash
79
+ bash data/prepare_spair.sh
80
+ ```
81
+ * PF-Pascal:
82
+ ```bash
83
+ bash data/prepare_pfpascal.sh
84
+ ```
85
+ * TSS:
86
+ ```bash
87
+ bash data/prepare_tss.sh
88
+ ```
89
+
90
+ ### Evaluate the PCK Results of SPair-71k
91
+
92
+
93
+ Run the [pck_spair_pascal.py](pck_spair_pascal.py) script:
94
+
95
+ ```bash
96
+ python pck_spair_pascal.py --SAMPLE 20
97
+ ```
98
+
99
+ Note that `SAMPLE` is the number of sampled pairs per category (20 by default); set it to `0` to use all samples (the setting used in the paper).
100
+
101
+ Additional important parameters in [pck_spair_pascal.py](pck_spair_pascal.py) include:
102
+
103
+ * `--NOT_FUSE`: if set to True, only use the SD feature.
104
+ * `--ONLY_DINO`: if set to True, only use the DINO feature.
105
+ * `--DRAW_DENSE`: if set to True, draw the dense correspondence map.
106
+ * `--DRAW_SWAP`: if set to True, draw the object swapping result.
107
+ * `--DRAW_GIF`: if set to True, draw the object swapping result as a gif.
108
+ * `--TOTAL_SAVE_RESULT`: number of samples for which to save qualitative results; set to 0 to disable saving and accelerate evaluation.
109
+
110
+ Please refer to the [pck_spair_pascal.py](pck_spair_pascal.py) file for more details. You may find samples of qualitative results in the `results_spair` folder.
111
+
112
+ ### Evaluate the PCK Results of PF-Pascal
113
+
114
+ Run the [pck_spair_pascal.py](pck_spair_pascal.py) script:
115
+
116
+ ```bash
117
+ python pck_spair_pascal.py --PASCAL
118
+ ```
119
+
120
+ You may find samples of qualitative results in the `results_pascal` folder.
121
+
122
+ ### Evaluate the PCK Results of TSS
123
+
124
+ Run the [pck_tss.py](pck_tss.py) script:
125
+
126
+ ```bash
127
+ python pck_tss.py
128
+ ```
129
+
130
+ You may find samples of qualitative results in the `results_tss` folder.
131
+
132
+ ## Demo
133
+
134
+ ### PCA / K-means Visualization of the Features
135
+
136
+ To extract the fused features of the input pair images and visualize the correspondence,
137
+ please check the notebook [demo_vis_features.ipynb](demo_vis_features.ipynb) for more details.
138
+
139
+ ### Quick Try on the Object Swapping
140
+
141
+ To swap the objects in the input pair images, please check the notebook [demo_swap.ipynb](demo_swap.ipynb) for more details.
142
+
143
+ ### Refine the Result
144
+
145
+ TODO
146
+
147
+ ## Citation
148
+
149
+ If you find our work useful, please cite:
150
+
151
+ ```BiBTeX
152
+ @article{zhang2023tale,
153
+ title={{A Tale of Two Features: Stable Diffusion Complements DINO for Zero-Shot Semantic Correspondence}},
154
+ author={Zhang, Junyi and Herrmann, Charles and Hur, Junhwa and Cabrera, Luisa Polania and Jampani, Varun and Sun, Deqing and Yang, Ming-Hsuan},
155
+ journal={arXiv preprint arxiv:2305.15347},
156
+ year={2023}
157
+ }
158
+ ```
159
+
160
+ ## Acknowledgement
161
+
162
+ Our code is largely based on the following open-source projects: [ODISE](https://github.com/NVlabs/ODISE), [dino-vit-features (official implementation)](https://github.com/ShirAmir/dino-vit-features), [dino-vit-features (Kamal Gupta's implementation)](https://github.com/kampta/dino-vit-features), [DenseMatching](https://github.com/PruneTruong/DenseMatching), and [ncnet](https://github.com/ignacio-rocco/ncnet). Our heartfelt gratitude goes to the developers of these resources!
Code/Baselines/sd-dino/demo_swap.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
Code/Baselines/sd-dino/demo_swap_proj_mot.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
Code/Baselines/sd-dino/demo_swap_proj_mot_clean.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
Code/Baselines/sd-dino/demo_swap_proj_mot_clean.py ADDED
@@ -0,0 +1,499 @@
1
+ import os
2
+ path_example = '../../../Raw_datasets/uploaded/to_upload/chair_example'
3
+ os.listdir(path_example)
4
+
5
+ import imageio
6
+ import numpy as np
7
+ import matplotlib.pyplot as plt
8
+
9
+ path_video = os.path.join(path_example, 'target_video_trim.mp4')
10
+
11
+ reader = imageio.get_reader(path_video)
12
+ frames = [frame for frame in reader]
13
+ reader.close()
14
+ print(f'Number of frames: {len(frames)}')
15
+
16
+ # plot every 10th frame
17
+
18
+ # for i in range(0, len(frames), 10):
19
+ # plt.imshow(frames[i])
20
+ # plt.axis('off')
21
+ # plt.title(f'Frame {i}')
22
+ # plt.show() # uncomment this loop to preview every 10th frame
23
+
24
+ # save the first frame as a jpg image
25
+ imageio.imwrite(os.path.join(path_example, 'target_frame1.jpg'), frames[0])
26
+
27
+ # check frame shape
28
+ print(f'Frame shape: {frames[0].shape}')
29
+
30
+ import os
31
+ os.environ['CUDA_VISIBLE_DEVICES'] = '6'
32
+ import torch
33
+ import os
34
+ import numpy as np
35
+ from PIL import Image
36
+ from tqdm import tqdm
37
+ import torch.nn.functional as F
38
+ import extractor_sd as extractor_sd
39
+ from extractor_sd import load_model, process_features_and_mask, get_mask
40
+ from utils.utils_correspondence import co_pca, resize, find_nearest_patchs, find_nearest_patchs_replace, animate_image_transfer_reverse
41
+ import matplotlib.pyplot as plt
42
+ from extractor_dino import ViTExtractor
43
+
44
+ MASK = True
45
+ VER = "v1-5"
46
+ PCA = False
47
+ CO_PCA = True
48
+ PCA_DIMS = [256, 256, 256]
49
+ SIZE = 960
50
+ RESOLUTION = 256
51
+ EDGE_PAD = False
52
+
53
+ FUSE_DINO = 1
54
+ ONLY_DINO = 0
55
+ DRAW_GIF=1
56
+ DINOV2 = True
57
+ MODEL_SIZE = 'base' # 'small' or 'base'; selects the DINOv2 backbone
58
+ DRAW_DENSE = 1
59
+ DRAW_SWAP = 1
60
+ SWAP = 1
61
+ TEXT_INPUT = False
62
+ SEED = 42
63
+ TIMESTEP = 100 # flexible, from 0 to 200
64
+
65
+ DIST = 'l2' if FUSE_DINO and not ONLY_DINO else 'cos'
66
+ if ONLY_DINO:
67
+ FUSE_DINO = True
68
+
69
+
70
+ np.random.seed(SEED)
71
+ torch.manual_seed(SEED)
72
+ torch.cuda.manual_seed(SEED)
73
+ torch.backends.cudnn.benchmark = True
74
+
75
+ # model, aug = load_model(diffusion_ver=VER, image_size=SIZE, num_timesteps=TIMESTEP)
76
+ model, aug = load_model(diffusion_ver=VER, image_size=SIZE, num_timesteps=TIMESTEP, decoder_only=False)
77
+
78
+
79
+ from utils.utils_correspondence import animate_image_transfer, animate_image_transfer_reverse
80
+
81
+ from PIL import Image
82
+ import torch
83
+ import numpy as np
84
+ import torchvision.transforms as T
85
+
86
+ def apply_mask_to_image(image_pil, mask_tensor):
87
+ # Convert PIL image to tensor (C, H, W) in [0, 1]
88
+ image_tensor = T.ToTensor()(image_pil) # Shape: (3, 840, 840)
89
+
90
+ # Ensure mask is binary and shape (1, H, W)
91
+ if mask_tensor.ndim == 2:
92
+ mask_tensor = mask_tensor.unsqueeze(0) # (1, 840, 840)
93
+
94
+ # Apply mask to each channel
95
+ masked_image_tensor = image_tensor * mask_tensor
96
+
97
+ # Convert back to PIL image
98
+ masked_image_pil = T.ToPILImage()(masked_image_tensor)
99
+
100
+ return masked_image_pil
101
+
102
+
103
+ def find_nearest_patchs_replace_mask_first(mask1, mask2, image1, image2, features1, features2, mask=False, resolution=128, draw_gif=False, save_path=None, gif_reverse=False):
104
+
105
+ # # mask
106
+ # image1_mask =
107
+
108
+ print('mask2 shape:', mask2.shape)
109
+ print('mask1 shape:', mask1.shape)
110
+ print('image1 shape:', image1.size) # image1 shape: (840, 840)
111
+ print('image2 shape:', image2.size) # image2 shape: (840, 840) PIL image
112
+
113
+ # mask out image_1 and image_2 in PIL format, image1 and image2 are PIL images, mask1 and mask2 are torch tensors
114
+ image1_mask = apply_mask_to_image(image1, mask1.cpu())
115
+ image2_mask = apply_mask_to_image(image2, mask2.cpu())
116
+
117
+ print('image1_mask shape:', image1_mask.size) # image1_mask shape: (840, 840)
118
+ print('image2_mask shape:', image2_mask.size) # image2_mask shape: (840, 840)
119
+
120
+
121
+ if resolution is not None: # resize the feature map to the resolution
122
+ features1 = F.interpolate(features1, size=resolution, mode='bilinear')
123
+ features2 = F.interpolate(features2, size=resolution, mode='bilinear')
124
+
125
+ # resize the image to the shape of the feature map
126
+ # resized_image1 = resize(image1, features1.shape[2], resize=True, to_pil=False)
127
+ # resized_image2 = resize(image2, features2.shape[2], resize=True, to_pil=False)
128
+ resized_image1 = resize(image1_mask, features1.shape[2], resize=True, to_pil=False)
129
+ resized_image2 = resize(image2_mask, features2.shape[2], resize=True, to_pil=False)
130
+
131
+ if mask: # mask the features
132
+ resized_mask1 = F.interpolate(mask1.cuda().unsqueeze(0).unsqueeze(0).float(), size=features1.shape[2:], mode='nearest')
133
+ resized_mask2 = F.interpolate(mask2.cuda().unsqueeze(0).unsqueeze(0).float(), size=features2.shape[2:], mode='nearest')
134
+ features1 = features1 * resized_mask1.repeat(1, features1.shape[1], 1, 1)
135
+ features2 = features2 * resized_mask2.repeat(1, features2.shape[1], 1, 1)
136
+ # set locations where mask == 0 to a very large number
137
+ features1[(features1.sum(1)==0).repeat(1, features1.shape[1], 1, 1)] = 100000
138
+ features2[(features2.sum(1)==0).repeat(1, features2.shape[1], 1, 1)] = 100000
139
+
140
+ features1_2d = features1.reshape(features1.shape[1], -1).permute(1, 0)
141
+ features2_2d = features2.reshape(features2.shape[1], -1).permute(1, 0)
142
+
143
+ resized_image1 = torch.tensor(resized_image1).to("cuda").float()
144
+ resized_image2 = torch.tensor(resized_image2).to("cuda").float()
145
+
146
+ # mask1 = F.interpolate(mask1.cuda().unsqueeze(0).unsqueeze(0).float(), size=resized_image1.shape[:2], mode='nearest').squeeze(0).squeeze(0)
147
+ # mask2 = F.interpolate(mask2.cuda().unsqueeze(0).unsqueeze(0).float(), size=resized_image2.shape[:2], mode='nearest').squeeze(0).squeeze(0)
148
+
149
+ # Mask the images
150
+ # resized_image1 = resized_image1 * mask1.unsqueeze(-1).repeat(1, 1, 3)
151
+ # resized_image2 = resized_image2 * mask2.unsqueeze(-1).repeat(1, 1, 3)
152
+ # Normalize the images to the range [0, 1]
153
+ resized_image1 = (resized_image1 - resized_image1.min()) / (resized_image1.max() - resized_image1.min())
154
+ resized_image2 = (resized_image2 - resized_image2.min()) / (resized_image2.max() - resized_image2.min())
155
+
156
+ distances = torch.cdist(features1_2d, features2_2d)
157
+ nearest_patch_indices = torch.argmin(distances, dim=1)
158
+ nearest_patches = torch.index_select(resized_image2.cuda().clone().detach().reshape(-1, 3), 0, nearest_patch_indices)
159
+
160
+ nearest_patches_image = nearest_patches.reshape(resized_image1.shape)
161
+
162
+ if draw_gif:
163
+ assert save_path is not None, "save_path must be provided when draw_gif is True"
164
+ img_1 = resize(image1, features1.shape[2], resize=True, to_pil=True)
165
+ img_2 = resize(image2, features2.shape[2], resize=True, to_pil=True)
166
+ mapping = torch.zeros((img_1.size[1], img_1.size[0], 2))
167
+ for i in range(len(nearest_patch_indices)):
168
+ mapping[i // img_1.size[0], i % img_1.size[0]] = torch.tensor([nearest_patch_indices[i] // img_2.size[0], nearest_patch_indices[i] % img_2.size[0]])
169
+ animate_image_transfer(img_1, img_2, mapping, save_path) if gif_reverse else animate_image_transfer_reverse(img_1, img_2, mapping, save_path)
170
+
171
+ # TODO: upsample the nearest_patches_image to the resolution of the original image
172
+ # nearest_patches_image = F.interpolate(nearest_patches_image.permute(2,0,1).unsqueeze(0), size=256, mode='bilinear').squeeze(0).permute(1,2,0)
173
+ # resized_image2 = F.interpolate(resized_image2.permute(2,0,1).unsqueeze(0), size=256, mode='bilinear').squeeze(0).permute(1,2,0)
174
+
175
+ nearest_patches_image = (nearest_patches_image).cpu().numpy()
176
+ resized_image2 = (resized_image2).cpu().numpy()
177
+
178
+ return nearest_patches_image, resized_image2
179
+
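+ # Toy sketch of the core matching step above (illustrative only):
+ # f1 = torch.randn(16, 8) # 16 source patches with 8-dim features
+ # f2 = torch.randn(16, 8) # 16 target patches
+ # idx = torch.cdist(f1, f2).argmin(dim=1) # nearest target patch per source patch
+ # each source pixel is then painted with the colour of its matched target patch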
180
+
181
+ def compute_pair_feature(model, aug, save_path, files, category, mask=False, dist='cos', real_size=960):
182
+ if type(category) == str:
183
+ category = [category]
184
+ img_size = 840 if DINOV2 else 244
185
+ model_dict={'small':'dinov2_vits14',
186
+ 'base':'dinov2_vitb14',
187
+ 'large':'dinov2_vitl14',
188
+ 'giant':'dinov2_vitg14'}
189
+
190
+ model_type = model_dict[MODEL_SIZE] if DINOV2 else 'dino_vits8'
191
+ layer = 11 if DINOV2 else 9
192
+ if 'l' in model_type:
193
+ layer = 23
194
+ elif 'g' in model_type:
195
+ layer = 39
196
+ facet = 'token' if DINOV2 else 'key'
197
+ stride = 14 if DINOV2 else 4
198
+ device = 'cuda' if torch.cuda.is_available() else 'cpu'
199
+ # indiactor = 'v2' if DINOV2 else 'v1'
200
+ # model_size = model_type.split('vit')[-1]
201
+ extractor = ViTExtractor(model_type, stride, device=device)
202
+ patch_size = extractor.model.patch_embed.patch_size[0] if DINOV2 else extractor.model.patch_embed.patch_size
203
+ num_patches = int(patch_size / stride * (img_size // patch_size - 1) + 1)
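+ # e.g. DINOv2: patch_size = stride = 14 and img_size = 840 -> 14/14 * (840//14 - 1) + 1 = 60 patches per side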
204
+
205
+ print('patch_size:', patch_size)
206
+ print('num_patches:', num_patches)
207
+
208
+ input_text = "a photo of "+category[-1][0] if TEXT_INPUT else None
209
+
210
+ N = len(files) // 2
211
+ pbar = tqdm(total=N)
212
+ result = []
213
+ for pair_idx in range(N):
214
+
215
+ # Load image 1
216
+ img1 = Image.open(files[2*pair_idx]).convert('RGB')
217
+ img1_input = resize(img1, real_size, resize=True, to_pil=True, edge=EDGE_PAD)
218
+ img1 = resize(img1, img_size, resize=True, to_pil=True, edge=EDGE_PAD)
219
+
220
+ # Load image 2
221
+ img2 = Image.open(files[2*pair_idx+1]).convert('RGB')
222
+ img2_input = resize(img2, real_size, resize=True, to_pil=True, edge=EDGE_PAD)
223
+ img2 = resize(img2, img_size, resize=True, to_pil=True, edge=EDGE_PAD)
224
+
225
+ with torch.no_grad():
226
+ if not CO_PCA:
227
+ if not ONLY_DINO:
228
+ img1_desc = process_features_and_mask(model, aug, img1_input, input_text=input_text, mask=False, pca=PCA).reshape(1,1,-1, num_patches**2).permute(0,1,3,2)
229
+ img2_desc = process_features_and_mask(model, aug, img2_input, category[-1], input_text=input_text, mask=mask, pca=PCA).reshape(1,1,-1, num_patches**2).permute(0,1,3,2)
230
+ if FUSE_DINO:
231
+ img1_batch = extractor.preprocess_pil(img1)
232
+ img1_desc_dino = extractor.extract_descriptors(img1_batch.to(device), layer, facet)
233
+ img2_batch = extractor.preprocess_pil(img2)
234
+ img2_desc_dino = extractor.extract_descriptors(img2_batch.to(device), layer, facet)
235
+
236
+ else:
237
+ if not ONLY_DINO:
238
+ features1 = process_features_and_mask(model, aug, img1_input, input_text=input_text, mask=False, raw=True)
239
+ features2 = process_features_and_mask(model, aug, img2_input, input_text=input_text, mask=False, raw=True)
240
+
241
+
242
+ processed_features1, processed_features2 = co_pca(features1, features2, PCA_DIMS)
243
+ # print('processed_feautres1 shape:', processed_features1.shape) # torch.Size([1, 768, 60, 60])
244
+ # print('processed_feautres2 shape:', processed_features2.shape) # torch.Size([1, 768, 60, 60])
245
+ img1_desc = processed_features1.reshape(1, 1, -1, num_patches**2).permute(0,1,3,2)
246
+ img2_desc = processed_features2.reshape(1, 1, -1, num_patches**2).permute(0,1,3,2)
247
+ if FUSE_DINO:
248
+ img1_batch = extractor.preprocess_pil(img1)
249
+ img1_desc_dino = extractor.extract_descriptors(img1_batch.to(device), layer, facet)
250
+ img2_batch = extractor.preprocess_pil(img2)
251
+ img2_desc_dino = extractor.extract_descriptors(img2_batch.to(device), layer, facet)
252
+
253
+ # print('img1_desc_dino shape:', img1_desc_dino.shape) # torch.Size([1, 1, 3600, 768])
254
+ # print('img2_desc_dino shape:', img2_desc_dino.shape) # torch.Size([1, 1, 3600, 768])
255
+
256
+ if dist == 'l1' or dist == 'l2':
257
+ # normalize the features
258
+ img1_desc = img1_desc / img1_desc.norm(dim=-1, keepdim=True)
259
+ img2_desc = img2_desc / img2_desc.norm(dim=-1, keepdim=True)
260
+ if FUSE_DINO:
261
+ img1_desc_dino = img1_desc_dino / img1_desc_dino.norm(dim=-1, keepdim=True)
262
+ img2_desc_dino = img2_desc_dino / img2_desc_dino.norm(dim=-1, keepdim=True)
263
+
264
+ if FUSE_DINO and not ONLY_DINO:
265
+ # cat two features together
266
+ img1_desc = torch.cat((img1_desc, img1_desc_dino), dim=-1)
267
+ img2_desc = torch.cat((img2_desc, img2_desc_dino), dim=-1)
268
+
269
+ if ONLY_DINO:
270
+ img1_desc = img1_desc_dino
271
+ img2_desc = img2_desc_dino
272
+
273
+ if DRAW_DENSE:
274
+ mask1 = get_mask(model, aug, img1, category[0])
275
+ mask2 = get_mask(model, aug, img2, category[-1])
276
+
277
+ print('mask 1 shape:', mask1.shape) # torch.Size([840, 840])
278
+ print('mask 2 shape:', mask2.shape) # torch.Size([840, 840])
279
+ import matplotlib.pyplot as plt
280
+ fig_mask, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 8))
281
+ ax1.axis('off')
282
+ ax2.axis('off')
283
+ # ax1.imshow(mask1.cpu().numpy())
284
+ # ax2.imshow(mask2.cpu().numpy())
285
+ ax1.imshow(img1)
286
+ ax1.contour(mask1.cpu().numpy(), colors='r', alpha=0.5)
287
+ ax2.imshow(img2)
288
+ ax2.contour(mask2.cpu().numpy(), colors='r', alpha=0.5)
289
+ plt.show()
290
+
291
+ if ONLY_DINO or not FUSE_DINO:
292
+ img1_desc = img1_desc / img1_desc.norm(dim=-1, keepdim=True)
293
+ img2_desc = img2_desc / img2_desc.norm(dim=-1, keepdim=True)
294
+
295
+ img1_desc_reshaped = img1_desc.permute(0,1,3,2).reshape(-1, img1_desc.shape[-1], num_patches, num_patches)
296
+ img2_desc_reshaped = img2_desc.permute(0,1,3,2).reshape(-1, img2_desc.shape[-1], num_patches, num_patches)
297
+ trg_dense_output, src_color_map = find_nearest_patchs(mask2, mask1, img2, img1, img2_desc_reshaped, img1_desc_reshaped, mask=mask)
298
+
299
+ if not os.path.exists(f'{save_path}/{category[0]}'):
300
+ os.makedirs(f'{save_path}/{category[0]}')
301
+ import matplotlib.pyplot as plt
302
+ fig_colormap, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 8))
303
+ ax1.axis('off')
304
+ ax2.axis('off')
305
+ ax1.imshow(src_color_map)
306
+ ax2.imshow(trg_dense_output)
307
+ fig_colormap.savefig(f'{save_path}/{category[0]}/{pair_idx}_colormap.png')
308
+
309
+ print('src_color_map shape:', src_color_map.shape) # (60, 60, 3)
310
+ print('trg_dense_output shape:', trg_dense_output.shape) # (60, 60, 3)
311
+ plt.close(fig_colormap)
312
+
313
+ if DRAW_SWAP:
314
+ if not DRAW_DENSE:
315
+
316
+ print('Computing the masks for the swap')
317
+ mask1 = get_mask(model, aug, img1, category[0])
318
+ mask2 = get_mask(model, aug, img2, category[-1])
319
+
320
+ import matplotlib.pyplot as plt
321
+ fig_mask, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 8))
322
+ ax1.axis('off')
323
+ ax2.axis('off')
324
+ # ax1.imshow(mask1.cpu().numpy())
325
+ # ax2.imshow(mask2.cpu().numpy())
326
+ # overlay the masks on the images
327
+ ax1.imshow(img1)
328
+ ax1.contour(mask1.cpu().numpy(), colors='r', alpha=0.5)
329
+ ax2.imshow(img2)
330
+ ax2.contour(mask2.cpu().numpy(), colors='r', alpha=0.5)
331
+ plt.show()
332
+ print(torch.max(mask1), torch.min(mask1), torch.max(mask2), torch.min(mask2))
333
+
334
+ if (ONLY_DINO or not FUSE_DINO) and not DRAW_DENSE:
335
+ img1_desc = img1_desc / img1_desc.norm(dim=-1, keepdim=True)
336
+ img2_desc = img2_desc / img2_desc.norm(dim=-1, keepdim=True)
337
+
338
+ print('img1_desc shape:', img1_desc.shape) # torch.Size([1, 1, 3600, 1536])
339
+ print('img2_desc shape:', img2_desc.shape) # torch.Size([1, 1, 3600, 1536])
340
+
341
+ img1_desc_reshaped = img1_desc.permute(0,1,3,2).reshape(-1, img1_desc.shape[-1], num_patches, num_patches)
342
+ img2_desc_reshaped = img2_desc.permute(0,1,3,2).reshape(-1, img2_desc.shape[-1], num_patches, num_patches)
343
+
344
+ print('img1_desc_reshaped shape:', img1_desc_reshaped.shape) # torch.Size([1, 1536, 60, 60])
345
+ print('img2_desc_reshaped shape:', img2_desc_reshaped.shape) # torch.Size([1, 1536, 60, 60])
346
+ trg_dense_output, src_color_map = find_nearest_patchs_replace_mask_first(mask2, mask1, img2, img1, img2_desc_reshaped, img1_desc_reshaped, mask=mask, resolution=156) # 156 is the max resolution for find_nearest_patchs_replace
347
+
348
+ if not os.path.exists(f'{save_path}/{category[0]}'):
349
+ os.makedirs(f'{save_path}/{category[0]}')
350
+ fig_colormap, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 8))
351
+ ax1.axis('off')
352
+ ax2.axis('off')
353
+ ax1.imshow(src_color_map)
354
+ ax2.imshow(trg_dense_output)
355
+
356
+ print('src_color_map shape:', src_color_map.shape) # (156, 156, 3)
357
+ print('trg_dense_output shape:', trg_dense_output.shape) # (156, 156, 3)
358
+ fig_colormap.savefig(f'{save_path}/{category[0]}/{pair_idx}_swap.png')
359
+ plt.close(fig_colormap)
360
+ if not DRAW_SWAP and not DRAW_DENSE:
361
+ result.append([img1_desc.cpu(), img2_desc.cpu()])
362
+ else:
363
+ result.append([img1_desc.cpu(), img2_desc.cpu(), mask1.cpu(), mask2.cpu()])
364
+
365
+ pbar.update(1)
366
+ return result
367
+
368
+
369
+ import matplotlib.pyplot as plt
370
+
371
+ def process_images_mask_first(src_img_path,trg_img_path,categories):
372
+
373
+ files = [src_img_path, trg_img_path]
374
+ # save_path = './results_swap' + f'/{trg_img_path.split("/")[-1].split(".")[0]}_{src_img_path.split("/")[-1].split(".")[0]}'
375
+ save_path = './results_swap/video_demo/'
376
+ if not os.path.exists(save_path):
377
+ os.makedirs(save_path)
378
+
379
+ # print('save_path:', save_path)
380
+
381
+ result = compute_pair_feature(model, aug, save_path, files, mask=MASK, category=categories, dist=DIST)
382
+
383
+ if SWAP:
384
+ # high resolution swap, will take the instance of interest from the target image and replace it in the source image
385
+ for (feature2,feature1,mask2,mask1) in result:
386
+
387
+ print('feature1 shape:', feature1.shape) # torch.Size([1, 1, 3600, 1536])
388
+ print('feature2 shape:', feature2.shape) # torch.Size([1, 1, 3600, 1536])
389
+ print('mask1 shape:', mask1.shape) # torch.Size([840, 840])
390
+ print('mask2 shape:', mask2.shape) # torch.Size([840, 840])
391
+
392
+ src_feature_reshaped = feature1.squeeze().permute(1,0).reshape(1,-1,60,60).cuda()
393
+ tgt_feature_reshaped = feature2.squeeze().permute(1,0).reshape(1,-1,60,60).cuda()
394
+ src_img=Image.open(trg_img_path) # image which contains an object to be replaced
395
+ tgt_img=Image.open(src_img_path) # object of interest
396
+
397
+ patch_size = RESOLUTION # the resolution of the output image, set to 256 could be faster
398
+
399
+ # plt.imshow(mask1.cpu().numpy())
400
+ # plt.show()
401
+
402
+ # plt.imshow(mask2.cpu().numpy())
403
+ # plt.show()
404
+ src_img = resize(src_img, 840, resize=True, to_pil=True, edge=EDGE_PAD)
405
+ tgt_img = resize(tgt_img, 840, resize=True, to_pil=True, edge=EDGE_PAD)
406
+
407
+ src_img_mask = apply_mask_to_image(src_img, mask1.cpu())
408
+ tgt_img_mask = apply_mask_to_image(tgt_img, mask2.cpu())
409
+ print('src_img_mask shape:', src_img_mask.size) # src_img_mask shape: (840, 840)
410
+ print('tgt_img_mask shape:', tgt_img_mask.size) # tgt_img_mask shape: (840, 840)
411
+
412
+ src_img = resize(src_img, patch_size, resize=True, to_pil=False, edge=EDGE_PAD)
413
+ # tgt_img = resize(tgt_img, patch_size, resize=True, to_pil=False, edge=EDGE_PAD)
414
+ src_img_mask = resize(src_img_mask, patch_size, resize=True, to_pil=False, edge=EDGE_PAD)
415
+ tgt_img_mask = resize(tgt_img_mask, patch_size, resize=True, to_pil=False, edge=EDGE_PAD)
416
+
417
+
418
+ resized_src_mask = F.interpolate(mask1.unsqueeze(0).unsqueeze(0), size=(patch_size, patch_size), mode='nearest').squeeze().cuda()
419
+ resized_tgt_mask = F.interpolate(mask2.unsqueeze(0).unsqueeze(0), size=(patch_size, patch_size), mode='nearest').squeeze().cuda()
420
+
421
+
422
+ src_feature_upsampled = F.interpolate(src_feature_reshaped, size=(patch_size, patch_size), mode='bilinear').squeeze()
423
+ tgt_feature_upsampled = F.interpolate(tgt_feature_reshaped, size=(patch_size, patch_size), mode='bilinear').squeeze()
424
+
425
+
426
+ src_feature_upsampled = src_feature_upsampled * resized_src_mask.repeat(src_feature_upsampled.shape[0],1,1)
427
+ tgt_feature_upsampled = tgt_feature_upsampled * resized_tgt_mask.repeat(src_feature_upsampled.shape[0],1,1)
428
+
429
+
430
+ # Set the masked area to a very small number
431
+ src_feature_upsampled[src_feature_upsampled == 0] = -100000
432
+ tgt_feature_upsampled[tgt_feature_upsampled == 0] = -100000
433
+ # Calculate the cosine similarity between src_feature and tgt_feature
434
+ src_features_2d=src_feature_upsampled.reshape(src_feature_upsampled.shape[0],-1).permute(1,0)
435
+ tgt_features_2d=tgt_feature_upsampled.reshape(tgt_feature_upsampled.shape[0],-1).permute(1,0)
436
+ swapped_image=src_img
437
+ mapping = torch.zeros(patch_size,patch_size,2).cuda()
438
+ for patch_idx in tqdm(range(patch_size*patch_size)):
439
+ # If the patch lies inside resized_src_mask, find the corresponding patch in the target and swap them
440
+ if resized_src_mask[patch_idx // patch_size, patch_idx % patch_size] == 1:
441
+ # Find the nearest target patch (smallest L2 distance between features)
442
+ distances = torch.linalg.norm(tgt_features_2d - src_features_2d[patch_idx], dim=1)
443
+ tgt_patch_idx = torch.argmin(distances)
444
+
445
+ tgt_patch_row = tgt_patch_idx // patch_size
446
+ tgt_patch_col = tgt_patch_idx % patch_size
447
+
448
+ # Swap the patches in output
449
+ swapped_image[patch_idx // patch_size, patch_idx % patch_size,:] = tgt_img_mask[tgt_patch_row, tgt_patch_col,:] #tgt_img[tgt_patch_row, tgt_patch_col,:]
450
+ mapping[patch_idx // patch_size, patch_idx % patch_size] = torch.tensor([tgt_patch_row,tgt_patch_col])
451
+
452
+ # swapped_image=Image.fromarray(swapped_image)
453
+
454
+ print('swapped_image shape bf:', swapped_image.shape) # swapped_image shape: (patch_size, patch_size, 3)
455
+
456
+ # only retain the masked area: resized_src_mask is a (patch_size, patch_size) tensor; src_img and swapped_image are (patch_size, patch_size, 3) arrays
457
+ swapped_image = src_img * (1 - resized_src_mask.unsqueeze(-1).repeat(1, 1, 3).cpu().numpy()) + swapped_image * resized_src_mask.unsqueeze(-1).repeat(1, 1, 3).cpu().numpy() # swap the masked area of the source image with the swapped image
458
+ swapped_image = swapped_image.astype(np.uint8)
459
+
460
+ print('swapped_image shape aft:', swapped_image.shape)
461
+
462
+ swapped_image = Image.fromarray(swapped_image)
463
+
464
+ # save the swapped image
465
+ # swapped_image = Image.from
466
+
467
+ tgt_name = trg_img_path.split('/')[-1].split('.')[0]
468
+ # swapped_image.save(save_path+'/swapped_image.png')
469
+ swapped_image.save(save_path+f'/{tgt_name}_swapped_image.png')
470
+ if DRAW_GIF:
471
+ # animate_image_transfer_reverse(resize(Image.open(trg_img_path), patch_size, resize=True, to_pil=True, edge=EDGE_PAD),resize(Image.open(src_img_path), patch_size, resize=True, to_pil=True, edge=EDGE_PAD),mapping,save_path+'/warp_pixel.gif')
472
+ animate_image_transfer_reverse(resize(Image.open(trg_img_path), patch_size, resize=True, to_pil=True, edge=EDGE_PAD),resize(Image.open(src_img_path), patch_size, resize=True, to_pil=True, edge=EDGE_PAD),mapping,save_path+f'/{tgt_name}_warp_pixel.gif')
473
+
474
+ return result
475
+
476
+
477
+ # src_paths=[
478
+ # # "data/images/cat1.jpg",
479
+ # # "data/images/cat2.jpg",
480
+ # # "data/images/cat3.jpg",
481
+ # os.path.join(path_example, 'source_image.jpg')
482
+ # ]
483
+
484
+ if __name__ == "__main__":
485
+
486
+ src_path = os.path.join(path_example, 'source_image.jpg')
487
+ SWAP=1
488
+ DRAW_GIF=1
489
+ RESOLUTION = 400 # resolution for swapped images, set to 512 to align with the paper
490
+ trg_img_paths = [os.path.join(path_example, f'target_video_trim_frames/target_frame_{i:04d}.jpg') for i in range(7, 67)] # frames 7..66 of the clip
491
+ # for src_path in src_paths:
492
+ for trg_img_path in trg_img_paths:
493
+ src_img_path = src_path
494
+ # trg_img_path = "data/images/cat0.jpg"
495
+ # trg_img_path = os.path.join(path_example, 'target_frame1.jpg')
496
+ # trg_img_path = os.path.join(path_example, 'target_video_trim_frames/target_frame_0040.jpg')
497
+ categories = [['chair'], ['chair']]
498
+ # result = process_images(src_img_path, trg_img_path, categories)
499
+ result = process_images_mask_first(src_img_path, trg_img_path, categories)
Code/Baselines/sd-dino/demo_vis_features.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
Code/Baselines/sd-dino/demo_vis_features_sd_unet.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
Code/Baselines/sd-dino/extractor_dino.py ADDED
@@ -0,0 +1,387 @@
+ import argparse
+ import torch
+ import torchvision.transforms
+ from torch import nn
+ from torchvision import transforms
+ import torch.nn.modules.utils as nn_utils
+ import math
+ import timm
+ import types
+ from pathlib import Path
+ from typing import Union, List, Tuple
+ from PIL import Image
+
+
+ class ViTExtractor:
+     """ This class facilitates extraction of features, descriptors, and saliency maps from a ViT.
+     We use the following notation in the documentation of the module's methods:
+     B - batch size
+     h - number of heads. usually takes the place of the channel dimension in pytorch's convention BxCxHxW
+     p - patch size of the ViT. either 8 or 16.
+     t - number of tokens. equals the number of patches + 1, e.g. HW / p**2 + 1, where H and W are the height and width
+     of the input image.
+     d - the embedding dimension in the ViT.
+     """
+
+     def __init__(self, model_type: str = 'dino_vits8', stride: int = 4, model: nn.Module = None, device: str = 'cuda'):
+         """
+         :param model_type: A string specifying the type of model to extract from.
+                            [dino_vits8 | dino_vits16 | dino_vitb8 | dino_vitb16 | vit_small_patch8_224 |
+                            vit_small_patch16_224 | vit_base_patch8_224 | vit_base_patch16_224]
+         :param stride: stride of first convolution layer. small stride -> higher resolution.
+         :param model: Optional parameter. The nn.Module to extract from instead of creating a new one in ViTExtractor.
+                       should be compatible with model_type.
+         """
+         self.model_type = model_type
+         self.device = device
+         if model is not None:
+             self.model = model
+         else:
+             self.model = ViTExtractor.create_model(model_type)
+
+         self.model = ViTExtractor.patch_vit_resolution(self.model, stride=stride)
+         self.model.eval()
+         self.model.to(self.device)
+         self.p = self.model.patch_embed.patch_size
+         if type(self.p) == tuple:
+             self.p = self.p[0]
+         self.stride = self.model.patch_embed.proj.stride
+
+         self.mean = (0.485, 0.456, 0.406) if "dino" in self.model_type else (0.5, 0.5, 0.5)
+         self.std = (0.229, 0.224, 0.225) if "dino" in self.model_type else (0.5, 0.5, 0.5)
+
+         self._feats = []
+         self.hook_handlers = []
+         self.load_size = None
+         self.num_patches = None
+
+     @staticmethod
+     def create_model(model_type: str) -> nn.Module:
+         """
+         :param model_type: a string specifying which model to load. [dino_vits8 | dino_vits16 | dino_vitb8 |
+                            dino_vitb16 | vit_small_patch8_224 | vit_small_patch16_224 | vit_base_patch8_224 |
+                            vit_base_patch16_224]
+         :return: the model
+         """
+         torch.hub._validate_not_a_forked_repo = lambda a, b, c: True
+         if 'v2' in model_type:
+             model = torch.hub.load('facebookresearch/dinov2', model_type)
+         elif 'dino' in model_type:
+             model = torch.hub.load('facebookresearch/dino:main', model_type)
+         else:  # model from timm -- load weights from timm to dino model (enables working on arbitrary size images).
+             temp_model = timm.create_model(model_type, pretrained=True)
+             model_type_dict = {
+                 'vit_small_patch16_224': 'dino_vits16',
+                 'vit_small_patch8_224': 'dino_vits8',
+                 'vit_base_patch16_224': 'dino_vitb16',
+                 'vit_base_patch8_224': 'dino_vitb8'
+             }
+             model = torch.hub.load('facebookresearch/dino:main', model_type_dict[model_type])
+             temp_state_dict = temp_model.state_dict()
+             del temp_state_dict['head.weight']
+             del temp_state_dict['head.bias']
+             model.load_state_dict(temp_state_dict)
+         return model
+
+     @staticmethod
+     def _fix_pos_enc(patch_size: int, stride_hw: Tuple[int, int]):
+         """
+         Creates a method for position encoding interpolation.
+         :param patch_size: patch size of the model.
+         :param stride_hw: A tuple containing the new height and width stride respectively.
+         :return: the interpolation method
+         """
+         def interpolate_pos_encoding(self, x: torch.Tensor, w: int, h: int) -> torch.Tensor:
+             npatch = x.shape[1] - 1
+             N = self.pos_embed.shape[1] - 1
+             if npatch == N and w == h:
+                 return self.pos_embed
+             class_pos_embed = self.pos_embed[:, 0]
+             patch_pos_embed = self.pos_embed[:, 1:]
+             dim = x.shape[-1]
+             # compute number of tokens taking stride into account
+             w0 = 1 + (w - patch_size) // stride_hw[1]
+             h0 = 1 + (h - patch_size) // stride_hw[0]
+             assert (w0 * h0 == npatch), f"""got wrong grid size for {h}x{w} with patch_size {patch_size} and
+                                             stride {stride_hw} got {h0}x{w0}={h0 * w0} expecting {npatch}"""
+             # we add a small number to avoid floating point error in the interpolation
+             # see discussion at https://github.com/facebookresearch/dino/issues/8
+             w0, h0 = w0 + 0.1, h0 + 0.1
+             patch_pos_embed = nn.functional.interpolate(
+                 patch_pos_embed.reshape(1, int(math.sqrt(N)), int(math.sqrt(N)), dim).permute(0, 3, 1, 2),
+                 scale_factor=(w0 / math.sqrt(N), h0 / math.sqrt(N)),
+                 mode='bicubic',
+                 align_corners=False, recompute_scale_factor=False
+             )
+             assert int(w0) == patch_pos_embed.shape[-2] and int(h0) == patch_pos_embed.shape[-1]
+             patch_pos_embed = patch_pos_embed.permute(0, 2, 3, 1).view(1, -1, dim)
+             return torch.cat((class_pos_embed.unsqueeze(0), patch_pos_embed), dim=1)
+
+         return interpolate_pos_encoding
+
+     @staticmethod
+     def patch_vit_resolution(model: nn.Module, stride: int) -> nn.Module:
+         """
+         change resolution of model output by changing the stride of the patch extraction.
+         :param model: the model to change resolution for.
+         :param stride: the new stride parameter.
+         :return: the adjusted model
+         """
+         patch_size = model.patch_embed.patch_size
+         if type(patch_size) == tuple:
+             patch_size = patch_size[0]
+         if stride == patch_size:  # nothing to do
+             return model
+
+         stride = nn_utils._pair(stride)
+         assert all([(patch_size // s_) * s_ == patch_size for s_ in
+                     stride]), f'stride {stride} should divide patch_size {patch_size}'
+
+         # fix the stride
+         model.patch_embed.proj.stride = stride
+         # fix the positional encoding code
+         model.interpolate_pos_encoding = types.MethodType(ViTExtractor._fix_pos_enc(patch_size, stride), model)
+         return model
+
+     def preprocess(self, image_path: Union[str, Path],
+                    load_size: Union[int, Tuple[int, int]] = None, patch_size: int = 14) -> Tuple[torch.Tensor, Image.Image]:
+         """
+         Preprocesses an image before extraction.
+         :param image_path: path to image to be extracted.
+         :param load_size: optional. Size to resize image before the rest of preprocessing.
+         :param patch_size: the image is resized so both sides are divisible by this patch size.
+         :return: a tuple containing:
+                  (1) the preprocessed image as a tensor to insert the model of shape BxCxHxW.
+                  (2) the pil image in relevant dimensions
+         """
+         def divisible_by_num(num, dim):
+             return num * (dim // num)
+         pil_image = Image.open(image_path).convert('RGB')
+         if load_size is not None:
+             pil_image = transforms.Resize(load_size, interpolation=transforms.InterpolationMode.LANCZOS)(pil_image)
+
+         width, height = pil_image.size
+         new_width = divisible_by_num(patch_size, width)
+         new_height = divisible_by_num(patch_size, height)
+         pil_image = pil_image.resize((new_width, new_height), resample=Image.LANCZOS)
+
+         prep = transforms.Compose([
+             transforms.ToTensor(),
+             transforms.Normalize(mean=self.mean, std=self.std)
+         ])
+         prep_img = prep(pil_image)[None, ...]
+         return prep_img, pil_image
+
+     def preprocess_pil(self, pil_image):
+         """
+         Preprocesses an already-loaded PIL image before extraction.
+         :param pil_image: the PIL image to preprocess.
+         :return: the preprocessed image as a tensor of shape BxCxHxW, ready to be fed to the model.
+         """
+         prep = transforms.Compose([
+             transforms.ToTensor(),
+             transforms.Normalize(mean=self.mean, std=self.std)
+         ])
+         prep_img = prep(pil_image)[None, ...]
+         return prep_img
+
+     def _get_hook(self, facet: str):
+         """
+         generate a hook method for a specific block and facet.
+         """
+         if facet in ['attn', 'token']:
+             def _hook(model, input, output):
+                 self._feats.append(output)
+             return _hook
+
+         if facet == 'query':
+             facet_idx = 0
+         elif facet == 'key':
+             facet_idx = 1
+         elif facet == 'value':
+             facet_idx = 2
+         else:
+             raise TypeError(f"{facet} is not a supported facet.")
+
+         def _inner_hook(module, input, output):
+             input = input[0]
+             B, N, C = input.shape
+             qkv = module.qkv(input).reshape(B, N, 3, module.num_heads, C // module.num_heads).permute(2, 0, 3, 1, 4)
+             self._feats.append(qkv[facet_idx])  # Bxhxtxd
+         return _inner_hook
+
+     def _register_hooks(self, layers: List[int], facet: str) -> None:
+         """
+         register hook to extract features.
+         :param layers: layers from which to extract features.
+         :param facet: facet to extract. One of the following options: ['key' | 'query' | 'value' | 'token' | 'attn']
+         """
+         for block_idx, block in enumerate(self.model.blocks):
+             if block_idx in layers:
+                 if facet == 'token':
+                     self.hook_handlers.append(block.register_forward_hook(self._get_hook(facet)))
+                 elif facet == 'attn':
+                     self.hook_handlers.append(block.attn.attn_drop.register_forward_hook(self._get_hook(facet)))
+                 elif facet in ['key', 'query', 'value']:
+                     self.hook_handlers.append(block.attn.register_forward_hook(self._get_hook(facet)))
+                 else:
+                     raise TypeError(f"{facet} is not a supported facet.")
+
+     def _unregister_hooks(self) -> None:
+         """
+         unregisters the hooks. should be called after feature extraction.
+         """
+         for handle in self.hook_handlers:
+             handle.remove()
+         self.hook_handlers = []
+
+     def _extract_features(self, batch: torch.Tensor, layers: List[int] = [11], facet: str = 'key') -> List[torch.Tensor]:
+         """
+         extract features from the model
+         :param batch: batch to extract features for. Has shape BxCxHxW.
+         :param layers: layers to extract from. Each is a number between 0 and 11.
+         :param facet: facet to extract. One of the following options: ['key' | 'query' | 'value' | 'token' | 'attn']
+         :return : tensor of features.
+                   if facet is 'key' | 'query' | 'value' has shape Bxhxtxd
+                   if facet is 'attn' has shape Bxhxtxt
+                   if facet is 'token' has shape Bxtxd
+         """
+         B, C, H, W = batch.shape
+         self._feats = []
+         self._register_hooks(layers, facet)
+         _ = self.model(batch)
+         self._unregister_hooks()
+         self.load_size = (H, W)
+         self.num_patches = (1 + (H - self.p) // self.stride[0], 1 + (W - self.p) // self.stride[1])
+         return self._feats
+
+     def _log_bin(self, x: torch.Tensor, hierarchy: int = 2) -> torch.Tensor:
+         """
+         create a log-binned descriptor.
+         :param x: tensor of features. Has shape Bxhxtxd.
+         :param hierarchy: how many bin hierarchies to use.
+         """
+         B = x.shape[0]
+         num_bins = 1 + 8 * hierarchy
+
+         bin_x = x.permute(0, 2, 3, 1).flatten(start_dim=-2, end_dim=-1)  # Bx(t-1)x(dxh)
+         bin_x = bin_x.permute(0, 2, 1)
+         bin_x = bin_x.reshape(B, bin_x.shape[1], self.num_patches[0], self.num_patches[1])
+         # Bx(dxh)xnum_patches[0]xnum_patches[1]
+         sub_desc_dim = bin_x.shape[1]
+
+         avg_pools = []
+         # compute bins of all sizes for all spatial locations.
+         for k in range(0, hierarchy):
+             # avg pooling with kernel 3**kx3**k
+             win_size = 3 ** k
+             avg_pool = torch.nn.AvgPool2d(win_size, stride=1, padding=win_size // 2, count_include_pad=False)
+             avg_pools.append(avg_pool(bin_x))
+
+         bin_x = torch.zeros((B, sub_desc_dim * num_bins, self.num_patches[0], self.num_patches[1])).to(self.device)
+         for y in range(self.num_patches[0]):
+             for x in range(self.num_patches[1]):
+                 part_idx = 0
+                 # fill all bins for a spatial location (y, x)
+                 for k in range(0, hierarchy):
+                     kernel_size = 3 ** k
+                     for i in range(y - kernel_size, y + kernel_size + 1, kernel_size):
+                         for j in range(x - kernel_size, x + kernel_size + 1, kernel_size):
+                             if i == y and j == x and k != 0:
+                                 continue
+                             if 0 <= i < self.num_patches[0] and 0 <= j < self.num_patches[1]:
+                                 bin_x[:, part_idx * sub_desc_dim: (part_idx + 1) * sub_desc_dim, y, x] = avg_pools[k][
+                                     :, :, i, j]
+                             else:  # handle padding in a more delicate way than zero padding
+                                 temp_i = max(0, min(i, self.num_patches[0] - 1))
+                                 temp_j = max(0, min(j, self.num_patches[1] - 1))
+                                 bin_x[:, part_idx * sub_desc_dim: (part_idx + 1) * sub_desc_dim, y, x] = avg_pools[k][
+                                     :, :, temp_i, temp_j]
+                             part_idx += 1
+         bin_x = bin_x.flatten(start_dim=-2, end_dim=-1).permute(0, 2, 1).unsqueeze(dim=1)
+         # Bx1x(t-1)x(dxh)
+         return bin_x
+
+     def extract_descriptors(self, batch: torch.Tensor, layer: int = 11, facet: str = 'key',
+                             bin: bool = False, include_cls: bool = False) -> torch.Tensor:
+         """
+         extract descriptors from the model
+         :param batch: batch to extract descriptors for. Has shape BxCxHxW.
+         :param layer: layer to extract. A number between 0 and 11.
+         :param facet: facet to extract. One of the following options: ['key' | 'query' | 'value' | 'token']
+         :param bin: apply log binning to the descriptor. default is False.
+         :return: tensor of descriptors. Bx1xtxd' where d' is the dimension of the descriptors.
+         """
+         assert facet in ['key', 'query', 'value', 'token'], f"""{facet} is not a supported facet for descriptors.
+                                                              choose from ['key' | 'query' | 'value' | 'token'] """
+         self._extract_features(batch, [layer], facet)
+         x = self._feats[0]
+         if facet == 'token':
+             x.unsqueeze_(dim=1)  # Bx1xtxd
+         if not include_cls:
+             x = x[:, :, 1:, :]  # remove cls token
+         else:
+             assert not bin, "bin = True and include_cls = True are not supported together, set one of them False."
+         if not bin:
+             desc = x.permute(0, 2, 3, 1).flatten(start_dim=-2, end_dim=-1).unsqueeze(dim=1)  # Bx1xtx(dxh)
+         else:
+             desc = self._log_bin(x)
+         return desc
+
+     def extract_saliency_maps(self, batch: torch.Tensor) -> torch.Tensor:
+         """
+         extract saliency maps. The saliency maps are extracted by averaging several attention heads from the last layer
+         of the CLS token. All values are then normalized to range between 0 and 1.
+         :param batch: batch to extract saliency maps for. Has shape BxCxHxW.
+         :return: a tensor of saliency maps. has shape Bx(t-1)
+         """
+         assert self.model_type == "dino_vits8", "saliency maps are supported only for the dino_vits8 model_type."
+         self._extract_features(batch, [11], 'attn')
+         head_idxs = [0, 2, 4, 5]
+         curr_feats = self._feats[0]  # Bxhxtxt
+         cls_attn_map = curr_feats[:, head_idxs, 0, 1:].mean(dim=1)  # Bx(t-1)
+         temp_mins, temp_maxs = cls_attn_map.min(dim=1)[0], cls_attn_map.max(dim=1)[0]
+         cls_attn_maps = (cls_attn_map - temp_mins) / (temp_maxs - temp_mins)  # normalize to range [0,1]
+         return cls_attn_maps
+
+
+ """ taken from https://stackoverflow.com/questions/15008758/parsing-boolean-values-with-argparse """
+ def str2bool(v):
+     if isinstance(v, bool):
+         return v
+     if v.lower() in ('yes', 'true', 't', 'y', '1'):
+         return True
+     elif v.lower() in ('no', 'false', 'f', 'n', '0'):
+         return False
+     else:
+         raise argparse.ArgumentTypeError('Boolean value expected.')
+
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser(description='Facilitate ViT Descriptor extraction.')
+     parser.add_argument('--image_path', type=str, required=True, help='path of the extracted image.')
+     parser.add_argument('--output_path', type=str, required=True, help='path to file containing extracted descriptors.')
+     parser.add_argument('--load_size', default=224, type=int, help='load size of the input image.')
+     parser.add_argument('--stride', default=4, type=int, help="""stride of first convolution layer.
+                                                                  small stride -> higher resolution.""")
+     parser.add_argument('--model_type', default='dino_vits8', type=str,
+                         help="""type of model to extract.
+                         Choose from [dino_vits8 | dino_vits16 | dino_vitb8 | dino_vitb16 | vit_small_patch8_224 |
+                         vit_small_patch16_224 | vit_base_patch8_224 | vit_base_patch16_224]""")
+     parser.add_argument('--facet', default='key', type=str, help="""facet to create descriptors from.
+                                                                     options: ['key' | 'query' | 'value' | 'token']""")
+     parser.add_argument('--layer', default=11, type=int, help="layer to create descriptors from.")
+     parser.add_argument('--bin', default='False', type=str2bool, help="create a binned descriptor if True.")
+     parser.add_argument('--patch_size', default=14, type=int, help="patch size of the model.")
+     args = parser.parse_args()
+
+     with torch.no_grad():
+         device = 'cuda' if torch.cuda.is_available() else 'cpu'
+         extractor = ViTExtractor(args.model_type, args.stride, device=device)
+         image_batch, image_pil = extractor.preprocess(args.image_path, args.load_size, args.patch_size)
+         print(f"Image {args.image_path} is preprocessed to tensor of size {image_batch.shape}.")
+         descriptors = extractor.extract_descriptors(image_batch.to(device), args.layer, args.facet, args.bin)
+         print(f"Descriptors are of size: {descriptors.shape}")
+         torch.save(descriptors, args.output_path)
+         print(f"Descriptors saved to: {args.output_path}")
Code/Baselines/sd-dino/extractor_sd.py ADDED
@@ -0,0 +1,410 @@
+ import itertools
+ import sys
+ from contextlib import ExitStack
+ import torch
+ from mask2former.data.datasets.register_ade20k_panoptic import ADE20K_150_CATEGORIES
+ from PIL import Image
+ import numpy as np
+ import torch.nn.functional as F
+ from detectron2.config import instantiate
+ from detectron2.data import MetadataCatalog
+ from detectron2.data import detection_utils as utils
+ from detectron2.config import LazyCall as L
+ from detectron2.data import transforms as T
+ from detectron2.data.datasets.builtin_meta import COCO_CATEGORIES
+ from detectron2.evaluation import inference_context
+ from detectron2.utils.env import seed_all_rng
+ from detectron2.utils.visualizer import ColorMode, Visualizer, random_color
+ from detectron2.utils.logger import setup_logger
+
+ from odise import model_zoo
+ from odise.checkpoint import ODISECheckpointer
+ from odise.config import instantiate_odise
+ from odise.data import get_openseg_labels
+ from odise.modeling.wrapper import OpenPanopticInference
+
+ from utils.utils_correspondence import resize
+ import faiss
+
+ COCO_THING_CLASSES = [
+     label
+     for idx, label in enumerate(get_openseg_labels("coco_panoptic", True))
+     if COCO_CATEGORIES[idx]["isthing"] == 1
+ ]
+ COCO_THING_COLORS = [c["color"] for c in COCO_CATEGORIES if c["isthing"] == 1]
+ COCO_STUFF_CLASSES = [
+     label
+     for idx, label in enumerate(get_openseg_labels("coco_panoptic", True))
+     if COCO_CATEGORIES[idx]["isthing"] == 0
+ ]
+ COCO_STUFF_COLORS = [c["color"] for c in COCO_CATEGORIES if c["isthing"] == 0]
+
+ ADE_THING_CLASSES = [
+     label
+     for idx, label in enumerate(get_openseg_labels("ade20k_150", True))
+     if ADE20K_150_CATEGORIES[idx]["isthing"] == 1
+ ]
+ ADE_THING_COLORS = [c["color"] for c in ADE20K_150_CATEGORIES if c["isthing"] == 1]
+ ADE_STUFF_CLASSES = [
+     label
+     for idx, label in enumerate(get_openseg_labels("ade20k_150", True))
+     if ADE20K_150_CATEGORIES[idx]["isthing"] == 0
+ ]
+ ADE_STUFF_COLORS = [c["color"] for c in ADE20K_150_CATEGORIES if c["isthing"] == 0]
+
+ LVIS_CLASSES = get_openseg_labels("lvis_1203", True)
+ # use beautiful coco colors
+ LVIS_COLORS = list(
+     itertools.islice(itertools.cycle([c["color"] for c in COCO_CATEGORIES]), len(LVIS_CLASSES))
+ )
+
+
+ class StableDiffusionSeg(object):
+     def __init__(self, model, metadata, aug, instance_mode=ColorMode.IMAGE):
+         """
+         Args:
+             model (nn.Module):
+             metadata (MetadataCatalog): image metadata.
+             aug: test-time augmentation (resizing) applied to inputs before inference.
+             instance_mode (ColorMode):
+         """
+         self.model = model
+         self.metadata = metadata
+         self.aug = aug
+         self.cpu_device = torch.device("cpu")
+         self.instance_mode = instance_mode
+
+     def get_features(self, original_image, caption=None, pca=None):
+         """
+         Args:
+             original_image (np.ndarray): an image of shape (H, W, C) (in BGR order).
+
+         Returns:
+             features (dict):
+                 the output of the model for one image only.
+         """
+         height, width = original_image.shape[:2]
+         aug_input = T.AugInput(original_image, sem_seg=None)
+         self.aug(aug_input)
+         image = aug_input.image
+         image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
+
+         inputs = {"image": image, "height": height, "width": width}
+         if caption is not None:
+             features = self.model.get_features([inputs], caption, pca=pca)
+         else:
+             features = self.model.get_features([inputs], pca=pca)
+         return features
+
+     def predict(self, original_image):
+         """
+         Args:
+             original_image (np.ndarray): an image of shape (H, W, C) (in BGR order).
+
+         Returns:
+             predictions (dict):
+                 the output of the model for one image only.
+                 See :doc:`/tutorials/models` for details about the format.
+         """
+         height, width = original_image.shape[:2]
+         aug_input = T.AugInput(original_image, sem_seg=None)
+         self.aug(aug_input)
+         image = aug_input.image
+         image = torch.as_tensor(image.astype("float32").transpose(2, 0, 1))
+
+         inputs = {"image": image, "height": height, "width": width}
+         predictions = self.model([inputs])[0]
+         return predictions
+
+
+ def build_demo_classes_and_metadata(vocab, label_list):
+     extra_classes = []
+
+     if vocab:
+         for words in vocab.split(";"):
+             extra_classes.append([word.strip() for word in words.split(",")])
+     extra_colors = [random_color(rgb=True, maximum=1) for _ in range(len(extra_classes))]
+
+     demo_thing_classes = extra_classes
+     demo_stuff_classes = []
+     demo_thing_colors = extra_colors
+     demo_stuff_colors = []
+
+     if "COCO" in label_list:
+         demo_thing_classes += COCO_THING_CLASSES
+         demo_stuff_classes += COCO_STUFF_CLASSES
+         demo_thing_colors += COCO_THING_COLORS
+         demo_stuff_colors += COCO_STUFF_COLORS
+     if "ADE" in label_list:
+         demo_thing_classes += ADE_THING_CLASSES
+         demo_stuff_classes += ADE_STUFF_CLASSES
+         demo_thing_colors += ADE_THING_COLORS
+         demo_stuff_colors += ADE_STUFF_COLORS
+     if "LVIS" in label_list:
+         demo_thing_classes += LVIS_CLASSES
+         demo_thing_colors += LVIS_COLORS
+
+     MetadataCatalog.pop("odise_demo_metadata", None)
+     demo_metadata = MetadataCatalog.get("odise_demo_metadata")
+     demo_metadata.thing_classes = [c[0] for c in demo_thing_classes]
+     demo_metadata.stuff_classes = [
+         *demo_metadata.thing_classes,
+         *[c[0] for c in demo_stuff_classes],
+     ]
+     demo_metadata.thing_colors = demo_thing_colors
+     demo_metadata.stuff_colors = demo_thing_colors + demo_stuff_colors
+     demo_metadata.stuff_dataset_id_to_contiguous_id = {
+         idx: idx for idx in range(len(demo_metadata.stuff_classes))
+     }
+     demo_metadata.thing_dataset_id_to_contiguous_id = {
+         idx: idx for idx in range(len(demo_metadata.thing_classes))
+     }
+
+     demo_classes = demo_thing_classes + demo_stuff_classes
+
+     return demo_classes, demo_metadata
+
+
+ def load_model(config_path="Panoptic/odise_label_coco_50e.py", seed=42, diffusion_ver="v1-3", image_size=1024, num_timesteps=0, block_indices=(2, 5, 8, 11), decoder_only=True, encoder_only=False, resblock_only=False):
+     cfg = model_zoo.get_config(config_path, trained=True)
+
+     cfg.model.backbone.feature_extractor.init_checkpoint = "sd://" + diffusion_ver
+     cfg.model.backbone.feature_extractor.steps = (num_timesteps,)
+     cfg.model.backbone.feature_extractor.unet_block_indices = block_indices
+     cfg.model.backbone.feature_extractor.encoder_only = encoder_only
+     cfg.model.backbone.feature_extractor.decoder_only = decoder_only
+     cfg.model.backbone.feature_extractor.resblock_only = resblock_only
+     cfg.model.overlap_threshold = 0
+     seed_all_rng(seed)
+
+     cfg.dataloader.test.mapper.augmentations = [
+         L(T.ResizeShortestEdge)(short_edge_length=image_size, sample_style="choice", max_size=2560),
+     ]
+     dataset_cfg = cfg.dataloader.test
+
+     aug = instantiate(dataset_cfg.mapper).augmentations
+
+     model = instantiate_odise(cfg.model)
+     model.to(cfg.train.device)
+     ODISECheckpointer(model).load(cfg.train.init_checkpoint)
+
+     return model, aug
+
+
+ def inference(model, aug, image, vocab, label_list):
+     demo_classes, demo_metadata = build_demo_classes_and_metadata(vocab, label_list)
+     with ExitStack() as stack:
+         inference_model = OpenPanopticInference(
+             model=model,
+             labels=demo_classes,
+             metadata=demo_metadata,
+             semantic_on=False,
+             instance_on=False,
+             panoptic_on=True,
+         )
+         stack.enter_context(inference_context(inference_model))
+         stack.enter_context(torch.no_grad())
+
+         demo = StableDiffusionSeg(inference_model, demo_metadata, aug)
+         pred = demo.predict(np.array(image))
+         return (pred, demo_classes)
+
+
+ def get_features(model, aug, image, vocab, label_list, caption=None, pca=False):
+     demo_classes, demo_metadata = build_demo_classes_and_metadata(vocab, label_list)
+     with ExitStack() as stack:
+         inference_model = OpenPanopticInference(
+             model=model,
+             labels=demo_classes,
+             metadata=demo_metadata,
+             semantic_on=False,
+             instance_on=False,
+             panoptic_on=True,
+         )
+         stack.enter_context(inference_context(inference_model))
+         stack.enter_context(torch.no_grad())
+
+         demo = StableDiffusionSeg(inference_model, demo_metadata, aug)
+         if caption is not None:
+             features = demo.get_features(np.array(image), caption, pca=pca)
+         else:
+             features = demo.get_features(np.array(image), pca=pca)
+         return features
+
+
+ def pca_process(features):
+     # Get the feature tensors
+     size_s5 = features['s5'].shape[-1]
+     size_s4 = features['s4'].shape[-1]
+     size_s3 = features['s3'].shape[-1]
+
+     print(f"Original shapes: s5: {features['s5'].shape}, s4: {features['s4'].shape}, s3: {features['s3'].shape}")
+
+     s5 = features['s5'].reshape(features['s5'].shape[0], features['s5'].shape[1], -1)
+     s4 = features['s4'].reshape(features['s4'].shape[0], features['s4'].shape[1], -1)
+     s3 = features['s3'].reshape(features['s3'].shape[0], features['s3'].shape[1], -1)
+
+     print(f"Reshaped tensors: s5: {s5.shape}, s4: {s4.shape}, s3: {s3.shape}")
+
+     # Define the target dimensions
+     target_dims = {'s5': 128, 's4': 128, 's3': 128}
+
+     # Apply PCA to each tensor using Faiss CPU
+     for name, tensor in zip(['s5', 's4', 's3'], [s5, s4, s3]):
+         target_dim = target_dims[name]
+
+         # Transpose the tensor so that the last dimension is the number of features
+         tensor = tensor.permute(0, 2, 1)
+
+         # # Norm the tensor
+         # tensor = tensor / tensor.norm(dim=-1, keepdim=True)
+
+         # Initialize a Faiss PCA object
+         pca = faiss.PCAMatrix(tensor.shape[-1], target_dim)
+
+         # Train the PCA object
+         pca.train(tensor[0].cpu().numpy())
+
+         # Apply PCA to the data
+         transformed_tensor_np = pca.apply(tensor[0].cpu().numpy())
+
+         # Convert the transformed data back to a tensor
+         transformed_tensor = torch.tensor(transformed_tensor_np, device=tensor.device).unsqueeze(0)
+
+         # Store the transformed tensor in the features dictionary
+         features[name] = transformed_tensor
+
+     # print(f"Transformed shapes: s5: {features['s5'].shape}, s4: {features['s4'].shape}, s3: {features['s3'].shape}")
+     # # s5: torch.Size([1, 256, 225]), s4: torch.Size([1, 256, 900]), s3: torch.Size([1, 256, 3600])
+
+     # Reshape the tensors back to their original shapes
+     features['s5'] = features['s5'].permute(0, 2, 1).reshape(features['s5'].shape[0], -1, size_s5, size_s5)
+     features['s4'] = features['s4'].permute(0, 2, 1).reshape(features['s4'].shape[0], -1, size_s4, size_s4)
+     features['s3'] = features['s3'].permute(0, 2, 1).reshape(features['s3'].shape[0], -1, size_s3, size_s3)
+     # Upsample s5 spatially by a factor of 2
+     upsampled_s5 = torch.nn.functional.interpolate(features['s5'], scale_factor=2, mode='bilinear', align_corners=False)
+
+     print(f"features['s5'] shape after upsampling: {upsampled_s5.shape}")
+
+     # Concatenate upsampled_s5 and s4 to create a new s5
+     features['s5'] = torch.cat((upsampled_s5, features['s4']), dim=1)
+
+     print(f"features['s5'] shape after concatenation: {features['s5'].shape}")
+
+     # Set s3 as the new s4
+     features['s4'] = features['s3']
+
+     # Remove s3 from the features dictionary
+     del features['s3']
+
+     return features
+
+
+ def process_features_and_mask(model, aug, image, category=None, input_text=None, mask=True, pca=False, raw=False):
+     input_image = image
+     caption = input_text
+     vocab = ""
+     label_list = ["COCO"]
+     category_convert_dict = {
+         'aeroplane': 'airplane',
+         'motorbike': 'motorcycle',
+         'pottedplant': 'potted plant',
+         'tvmonitor': 'tv',
+     }
+     if type(category) is not list and category in category_convert_dict:
+         category = category_convert_dict[category]
+     elif type(category) is list:
+         category = [category_convert_dict[cat] if cat in category_convert_dict else cat for cat in category]
+     features = get_features(model, aug, input_image, vocab, label_list, caption, pca=(pca or raw))
+     if pca:
+         features = pca_process(features)
+     if raw:
+         # print("features['s5'].shape, features['s4'].shape, features['s3'].shape:", features['s5'].shape, features['s4'].shape, features['s3'].shape)
+         # torch.Size([1, 2560, 15, 15]) torch.Size([1, 1920, 30, 30]) torch.Size([1, 960, 60, 60])
+         return features
+
+     features_gether_s4_s5 = torch.cat([features['s4'], F.interpolate(features['s5'], size=(features['s4'].shape[-2:]), mode='bilinear')], dim=1)
+
+     if mask:
+         (pred, classes) = inference(model, aug, input_image, vocab, label_list)
+         seg_map = pred['panoptic_seg'][0]
+         target_mask_id = []
+         for item in pred['panoptic_seg'][1]:
+             item['category_name'] = classes[item['category_id']]
+             if category in item['category_name']:
+                 target_mask_id.append(item['id'])
+         resized_seg_map_s4 = F.interpolate(seg_map.unsqueeze(0).unsqueeze(0).float(),
+                                            size=(features['s4'].shape[-2:]), mode='nearest')
+         # to do adjust size
+         binary_seg_map = torch.zeros_like(resized_seg_map_s4)
+         for i in target_mask_id:
+             binary_seg_map += (resized_seg_map_s4 == i).float()
+         if len(target_mask_id) == 0 or binary_seg_map.sum() < 6:
+             binary_seg_map = torch.ones_like(resized_seg_map_s4)
+         features_gether_s4_s5 = features_gether_s4_s5 * binary_seg_map
+         # set masked-out locations to a sentinel value
+         features_gether_s4_s5[(binary_seg_map == 0).repeat(1, features_gether_s4_s5.shape[1], 1, 1)] = -1
+
+     # print(f"final features_gether_s4_s5 shape: {features_gether_s4_s5.shape}")
+
+     return features_gether_s4_s5
+
+
+ def get_mask(model, aug, image, category=None, input_text=None):
+     model.backbone.feature_extractor.decoder_only = False
+     model.backbone.feature_extractor.encoder_only = False
+     model.backbone.feature_extractor.resblock_only = False
+     input_image = image
+     caption = input_text
+     vocab = ""
+     label_list = ["COCO"]
+     category_convert_dict = {
+         'aeroplane': 'airplane',
+         'motorbike': 'motorcycle',
+         'pottedplant': 'potted plant',
+         'tvmonitor': 'tv',
+     }
+     if type(category) is not list and category in category_convert_dict:
+         category = category_convert_dict[category]
+     elif type(category) is list:
+         category = [category_convert_dict[cat] if cat in category_convert_dict else cat for cat in category]
+
+     (pred, classes) = inference(model, aug, input_image, vocab, label_list)
+     seg_map = pred['panoptic_seg'][0]
+     target_mask_id = []
+     for item in pred['panoptic_seg'][1]:
+         item['category_name'] = classes[item['category_id']]
+         if type(category) is list:
+             for cat in category:
+                 if cat in item['category_name']:
+                     target_mask_id.append(item['id'])
+         else:
+             if category in item['category_name']:
+                 target_mask_id.append(item['id'])
+     resized_seg_map_s4 = seg_map.float()
+     binary_seg_map = torch.zeros_like(resized_seg_map_s4)
+     for i in target_mask_id:
+         binary_seg_map += (resized_seg_map_s4 == i).float()
+     if len(target_mask_id) == 0 or binary_seg_map.sum() < 6:
+         binary_seg_map = torch.ones_like(resized_seg_map_s4)
+
+     return binary_seg_map
+
+
+ if __name__ == "__main__":
+     image_path = sys.argv[1]
+     try:
+         input_text = sys.argv[2]
+     except IndexError:
+         input_text = None
+
+     model, aug = load_model()
+     img_size = 960
+     image = Image.open(image_path).convert('RGB')
+     image = resize(image, img_size, resize=True, to_pil=True)
+
+     features = process_features_and_mask(model, aug, image, category=input_text, pca=False, raw=True)
+     features = features['s4']  # keep the 's4' features
+
+     # save the features
+     np.save(image_path[:-4] + '.npy', features.cpu().numpy())
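The `__main__` block above already shows the intended entry points; for completeness, a minimal sketch of calling them from Python (the image path is a placeholder, and the feature keys follow the shape comments in `process_features_and_mask`):

```python
# Minimal sketch, assuming the ODISE configs/checkpoints are set up as load_model() expects.
from PIL import Image
from extractor_sd import load_model, process_features_and_mask, get_mask
from utils.utils_correspondence import resize

model, aug = load_model()  # defaults: odise_label_coco_50e config, SD "v1-3", image_size=1024
image = resize(Image.open('example.png').convert('RGB'), 960, resize=True, to_pil=True)

# raw=True returns the multi-scale feature dict (keys such as 's5', 's4', 's3').
feats = process_features_and_mask(model, aug, image, category='cat', raw=True)

# Binary foreground mask for the named category, taken from the panoptic prediction.
mask = get_mask(model, aug, image, category='cat')
```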
Code/Baselines/sd-dino/pck_spair_pascal.py ADDED
@@ -0,0 +1,575 @@
+ import os
+ import sys
+ import torch
+ torch.set_num_threads(16)
+ import pandas as pd
+ import numpy as np
+ from PIL import Image
+ from tqdm import tqdm
+ import torch.nn.functional as F
+ import json
+ from glob import glob
+ from utils.utils_correspondence import pairwise_sim, draw_correspondences_gathered, chunk_cosine_sim, co_pca, resize, find_nearest_patchs, find_nearest_patchs_replace, draw_correspondences_lines
+ import matplotlib.pyplot as plt
+ import time
+ from utils.logger import get_logger
+ from loguru import logger
+ import argparse
+ from extractor_dino import ViTExtractor
+ from extractor_sd import load_model, process_features_and_mask, get_mask
+
+
+ def preprocess_kps_pad(kps, img_width, img_height, size):
+     # Once an image has been pre-processed via border (or zero) padding,
+     # the location of key points needs to be updated. This function applies
+     # that pre-processing to the key points so they are correctly located
+     # in the border-padded (or zero-padded) image.
+     kps = kps.clone()
+     scale = size / max(img_width, img_height)
+     kps[:, [0, 1]] *= scale
+     if img_height < img_width:
+         new_h = int(np.around(size * img_height / img_width))
+         offset_y = int((size - new_h) / 2)
+         offset_x = 0
+         kps[:, 1] += offset_y
+     elif img_width < img_height:
+         new_w = int(np.around(size * img_width / img_height))
+         offset_x = int((size - new_w) / 2)
+         offset_y = 0
+         kps[:, 0] += offset_x
+     else:
+         offset_x = 0
+         offset_y = 0
+     if not COUNT_INVIS:
+         kps *= kps[:, 2:3].clone()  # zero-out any non-visible key points
+     return kps, offset_x, offset_y, scale
+
+
+ def load_spair_data(path, size=256, category='cat', split='test', subsample=None):
+     np.random.seed(SEED)
+     pairs = sorted(glob(f'{path}/PairAnnotation/{split}/*:{category}.json'))
+     if subsample is not None and subsample > 0:
+         pairs = [pairs[ix] for ix in np.random.choice(len(pairs), subsample)]
+     logger.info(f'Number of SPairs for {category} = {len(pairs)}')
+     files = []
+     thresholds = []
+     category_anno = list(glob(f'{path}/ImageAnnotation/{category}/*.json'))[0]
+     with open(category_anno) as f:
+         num_kps = len(json.load(f)['kps'])
+     logger.info(f'Number of SPair key points for {category} <= {num_kps}')
+     kps = []
+     blank_kps = torch.zeros(num_kps, 3)
+     for pair in pairs:
+         with open(pair) as f:
+             data = json.load(f)
+         assert category == data["category"]
+         assert data["mirror"] == 0
+         source_fn = f'{path}/JPEGImages/{category}/{data["src_imname"]}'
+         target_fn = f'{path}/JPEGImages/{category}/{data["trg_imname"]}'
+         source_bbox = np.asarray(data["src_bndbox"])
+         target_bbox = np.asarray(data["trg_bndbox"])
+         # The source thresholds aren't actually used to evaluate PCK on SPair-71k, but for completeness
+         # they are computed as well:
+         # thresholds.append(max(source_bbox[3] - source_bbox[1], source_bbox[2] - source_bbox[0]))
+         # thresholds.append(max(target_bbox[3] - target_bbox[1], target_bbox[2] - target_bbox[0]))
+
+         source_size = data["src_imsize"][:2]  # (W, H)
+         target_size = data["trg_imsize"][:2]  # (W, H)
+
+         kp_ixs = torch.tensor([int(id) for id in data["kps_ids"]]).view(-1, 1).repeat(1, 3)
+         source_raw_kps = torch.cat([torch.tensor(data["src_kps"], dtype=torch.float), torch.ones(kp_ixs.size(0), 1)], 1)
+         source_kps = blank_kps.scatter(dim=0, index=kp_ixs, src=source_raw_kps)
+         source_kps, src_x, src_y, src_scale = preprocess_kps_pad(source_kps, source_size[0], source_size[1], size)
+
+         target_raw_kps = torch.cat([torch.tensor(data["trg_kps"], dtype=torch.float), torch.ones(kp_ixs.size(0), 1)], 1)
+         target_kps = blank_kps.scatter(dim=0, index=kp_ixs, src=target_raw_kps)
+         target_kps, trg_x, trg_y, trg_scale = preprocess_kps_pad(target_kps, target_size[0], target_size[1], size)
+
+         thresholds.append(max(target_bbox[3] - target_bbox[1], target_bbox[2] - target_bbox[0]) * trg_scale)
+
+         kps.append(source_kps)
+         kps.append(target_kps)
+         files.append(source_fn)
+         files.append(target_fn)
+
+     kps = torch.stack(kps)
+     used_kps, = torch.where(kps[:, :, 2].any(dim=0))
+     kps = kps[:, used_kps, :]
+     logger.info(f'Final number of used key points: {kps.size(1)}')
+     return files, kps, thresholds
+
+
+ def load_pascal_data(path, size=256, category='cat', split='test', subsample=None):
+
+     def get_points(point_coords_list, idx):
+         X = np.fromstring(point_coords_list.iloc[idx, 0], sep=";")
+         Y = np.fromstring(point_coords_list.iloc[idx, 1], sep=";")
+         Xpad = -np.ones(20)
+         Xpad[: len(X)] = X
+         Ypad = -np.ones(20)
+         Ypad[: len(X)] = Y
+         Zmask = np.zeros(20)
+         Zmask[: len(X)] = 1
+         point_coords = np.concatenate(
+             (Xpad.reshape(1, 20), Ypad.reshape(1, 20), Zmask.reshape(1, 20)), axis=0
+         )
+         # make arrays float tensor for subsequent processing
+         point_coords = torch.Tensor(point_coords.astype(np.float32))
+         return point_coords
+
+     np.random.seed(SEED)
+     files = []
+     kps = []
+     test_data = pd.read_csv(f'{path}/{split}_pairs_pf_pascal.csv')
+     cls = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
+            'bus', 'car', 'cat', 'chair', 'cow',
+            'diningtable', 'dog', 'horse', 'motorbike', 'person',
+            'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']
+     cls_ids = test_data.iloc[:, 2].values.astype("int") - 1
+     cat_id = cls.index(category)
+     subset_id = np.where(cls_ids == cat_id)[0]
+     logger.info(f'Number of PF-PASCAL pairs for {category} = {len(subset_id)}')
+     subset_pairs = test_data.iloc[subset_id, :]
+     src_img_names = np.array(subset_pairs.iloc[:, 0])
+     trg_img_names = np.array(subset_pairs.iloc[:, 1])
+     # print(src_img_names.shape, trg_img_names.shape)
+     point_A_coords = subset_pairs.iloc[:, 3:5]
+     point_B_coords = subset_pairs.iloc[:, 5:]
+     # print(point_A_coords.shape, point_B_coords.shape)
+     for i in range(len(src_img_names)):
+         point_coords_src = get_points(point_A_coords, i).transpose(1, 0)
+         point_coords_trg = get_points(point_B_coords, i).transpose(1, 0)
+         src_fn = f'{path}/../{src_img_names[i]}'
+         trg_fn = f'{path}/../{trg_img_names[i]}'
+         src_size = Image.open(src_fn).size
+         trg_size = Image.open(trg_fn).size
+         # print(src_size)
+         source_kps, src_x, src_y, src_scale = preprocess_kps_pad(point_coords_src, src_size[0], src_size[1], size)
+         target_kps, trg_x, trg_y, trg_scale = preprocess_kps_pad(point_coords_trg, trg_size[0], trg_size[1], size)
+         kps.append(source_kps)
+         kps.append(target_kps)
+         files.append(src_fn)
+         files.append(trg_fn)
+
+     kps = torch.stack(kps)
+     used_kps, = torch.where(kps[:, :, 2].any(dim=0))
+     kps = kps[:, used_kps, :]
+     logger.info(f'Final number of used key points: {kps.size(1)}')
+     return files, kps, None
+
+
+ def compute_pck(model, aug, save_path, files, kps, category, mask=False, dist='cos', thresholds=None, real_size=960):
+     img_size = 840 if DINOV2 else 224 if ONLY_DINO else 480
+     model_dict = {'small': 'dinov2_vits14',
+                   'base': 'dinov2_vitb14',
+                   'large': 'dinov2_vitl14',
+                   'giant': 'dinov2_vitg14'}
+
+     model_type = model_dict[MODEL_SIZE] if DINOV2 else 'dino_vits8'
+     layer = 11 if DINOV2 else 9
+     if 'l' in model_type:
+         layer = 23
+     elif 'g' in model_type:
+         layer = 39
+     facet = 'token' if DINOV2 else 'key'
+     stride = 14 if DINOV2 else 4 if ONLY_DINO else 8
+     device = 'cuda' if torch.cuda.is_available() else 'cpu'
+     # indicator = 'v2' if DINOV2 else 'v1'
+     # model_size = model_type.split('vit')[-1]
+     extractor = ViTExtractor(model_type, stride, device=device)
+     patch_size = extractor.model.patch_embed.patch_size[0] if DINOV2 else extractor.model.patch_embed.patch_size
+     num_patches = int(patch_size / stride * (img_size // patch_size - 1) + 1)
+
+     input_text = "a photo of " + category if TEXT_INPUT else None
+
+     current_save_results = 0
+     gt_correspondences = []
+     pred_correspondences = []
+     if thresholds is not None:
+         thresholds = torch.tensor(thresholds).to(device)
+         bbox_size = []
+     N = len(files) // 2
+     pbar = tqdm(total=N)
+
+     for pair_idx in range(N):
+         # Load image 1
+         img1 = Image.open(files[2 * pair_idx]).convert('RGB')
+         img1_input = resize(img1, real_size, resize=True, to_pil=True, edge=EDGE_PAD)
+         img1 = resize(img1, img_size, resize=True, to_pil=True, edge=EDGE_PAD)
+         img1_kps = kps[2 * pair_idx]
+
+         # Get patch index for the keypoints
+         img1_y, img1_x = img1_kps[:, 1].numpy(), img1_kps[:, 0].numpy()
+         img1_y_patch = (num_patches / img_size * img1_y).astype(np.int32)
+         img1_x_patch = (num_patches / img_size * img1_x).astype(np.int32)
+         img1_patch_idx = num_patches * img1_y_patch + img1_x_patch
+
+         # Load image 2
+         img2 = Image.open(files[2 * pair_idx + 1]).convert('RGB')
+         img2_input = resize(img2, real_size, resize=True, to_pil=True, edge=EDGE_PAD)
+         img2 = resize(img2, img_size, resize=True, to_pil=True, edge=EDGE_PAD)
+         img2_kps = kps[2 * pair_idx + 1]
+
+         # Get patch index for the keypoints
+         img2_y, img2_x = img2_kps[:, 1].numpy(), img2_kps[:, 0].numpy()
+         img2_y_patch = (num_patches / img_size * img2_y).astype(np.int32)
+         img2_x_patch = (num_patches / img_size * img2_x).astype(np.int32)
+         img2_patch_idx = num_patches * img2_y_patch + img2_x_patch
+
+         with torch.no_grad():
+             if not CO_PCA:
+                 if not ONLY_DINO:
+                     img1_desc = process_features_and_mask(model, aug, img1_input, input_text=input_text, mask=False).reshape(1, 1, -1, num_patches**2).permute(0, 1, 3, 2)
+                     img2_desc = process_features_and_mask(model, aug, img2_input, category, input_text=input_text, mask=mask).reshape(1, 1, -1, num_patches**2).permute(0, 1, 3, 2)
+                 if FUSE_DINO:
+                     img1_batch = extractor.preprocess_pil(img1)
+                     img1_desc_dino = extractor.extract_descriptors(img1_batch.to(device), layer, facet)
+                     img2_batch = extractor.preprocess_pil(img2)
+                     img2_desc_dino = extractor.extract_descriptors(img2_batch.to(device), layer, facet)
+             else:
+                 if not ONLY_DINO:
+                     features1 = process_features_and_mask(model, aug, img1_input, input_text=input_text, mask=False, raw=True)
+                     features2 = process_features_and_mask(model, aug, img2_input, input_text=input_text, mask=False, raw=True)
+                     if not RAW:
+                         processed_features1, processed_features2 = co_pca(features1, features2, PCA_DIMS)
+                     else:
+                         if WEIGHT[0]:
+                             processed_features1 = features1['s5']
+                             processed_features2 = features2['s5']
+                         elif WEIGHT[1]:
+                             processed_features1 = features1['s4']
+                             processed_features2 = features2['s4']
+                         elif WEIGHT[2]:
+                             processed_features1 = features1['s3']
+                             processed_features2 = features2['s3']
+                         elif WEIGHT[3]:
+                             processed_features1 = features1['s2']
+                             processed_features2 = features2['s2']
+                         else:
+                             raise NotImplementedError
+                     # rescale the features
+                     processed_features1 = F.interpolate(processed_features1, size=(num_patches, num_patches), mode='bilinear', align_corners=False)
+                     processed_features2 = F.interpolate(processed_features2, size=(num_patches, num_patches), mode='bilinear', align_corners=False)
+
+                     img1_desc = processed_features1.reshape(1, 1, -1, num_patches**2).permute(0, 1, 3, 2)
+                     img2_desc = processed_features2.reshape(1, 1, -1, num_patches**2).permute(0, 1, 3, 2)
+                 if FUSE_DINO:
+                     img1_batch = extractor.preprocess_pil(img1)
+                     img1_desc_dino = extractor.extract_descriptors(img1_batch.to(device), layer, facet)
+                     img2_batch = extractor.preprocess_pil(img2)
+                     img2_desc_dino = extractor.extract_descriptors(img2_batch.to(device), layer, facet)
+
+             if CO_PCA_DINO:
+                 cat_desc_dino = torch.cat((img1_desc_dino, img2_desc_dino), dim=2).squeeze()  # (1, 1, num_patches**2, dim)
+                 mean = torch.mean(cat_desc_dino, dim=0, keepdim=True)
+                 centered_features = cat_desc_dino - mean
+                 U, S, V = torch.pca_lowrank(centered_features, q=CO_PCA_DINO)
+                 reduced_features = torch.matmul(centered_features, V[:, :CO_PCA_DINO])  # (t_x+t_y)x(d)
+                 processed_co_features = reduced_features.unsqueeze(0).unsqueeze(0)
+                 img1_desc_dino = processed_co_features[:, :, :img1_desc_dino.shape[2], :]
+                 img2_desc_dino = processed_co_features[:, :, img1_desc_dino.shape[2]:, :]
+
+             if not ONLY_DINO and not RAW:  # reweight different layers of sd
+                 img1_desc[..., :PCA_DIMS[0]] *= WEIGHT[0]
+                 img1_desc[..., PCA_DIMS[0]:PCA_DIMS[1] + PCA_DIMS[0]] *= WEIGHT[1]
+                 img1_desc[..., PCA_DIMS[1] + PCA_DIMS[0]:PCA_DIMS[2] + PCA_DIMS[1] + PCA_DIMS[0]] *= WEIGHT[2]
+
+                 img2_desc[..., :PCA_DIMS[0]] *= WEIGHT[0]
+                 img2_desc[..., PCA_DIMS[0]:PCA_DIMS[1] + PCA_DIMS[0]] *= WEIGHT[1]
+                 img2_desc[..., PCA_DIMS[1] + PCA_DIMS[0]:PCA_DIMS[2] + PCA_DIMS[1] + PCA_DIMS[0]] *= WEIGHT[2]
+
+             if 'l1' in dist or 'l2' in dist or dist == 'plus_norm':
+                 # normalize the features
+                 img1_desc = img1_desc / img1_desc.norm(dim=-1, keepdim=True)
+                 img2_desc = img2_desc / img2_desc.norm(dim=-1, keepdim=True)
+                 img1_desc_dino = img1_desc_dino / img1_desc_dino.norm(dim=-1, keepdim=True)
+                 img2_desc_dino = img2_desc_dino / img2_desc_dino.norm(dim=-1, keepdim=True)
+
+             if FUSE_DINO and not ONLY_DINO and dist != 'plus' and dist != 'plus_norm':
+                 # cat two features together
+                 img1_desc = torch.cat((img1_desc, img1_desc_dino), dim=-1)
+                 img2_desc = torch.cat((img2_desc, img2_desc_dino), dim=-1)
+                 if not RAW:
+                     # reweight sd and dino
+                     img1_desc[..., :PCA_DIMS[2] + PCA_DIMS[1] + PCA_DIMS[0]] *= WEIGHT[3]
+                     img1_desc[..., PCA_DIMS[2] + PCA_DIMS[1] + PCA_DIMS[0]:] *= WEIGHT[4]
+                     img2_desc[..., :PCA_DIMS[2] + PCA_DIMS[1] + PCA_DIMS[0]] *= WEIGHT[3]
+                     img2_desc[..., PCA_DIMS[2] + PCA_DIMS[1] + PCA_DIMS[0]:] *= WEIGHT[4]
+
+             elif dist == 'plus' or dist == 'plus_norm':
+                 img1_desc = img1_desc + img1_desc_dino
+                 img2_desc = img2_desc + img2_desc_dino
+                 dist = 'cos'
+
+             if ONLY_DINO:
+                 img1_desc = img1_desc_dino
+                 img2_desc = img2_desc_dino
+             # logger.info(img1_desc.shape, img2_desc.shape)
+
+             if DRAW_DENSE:
+                 mask1 = get_mask(model, aug, img1, category)
+                 mask2 = get_mask(model, aug, img2, category)
+
+                 if ONLY_DINO or not FUSE_DINO:
+                     img1_desc = img1_desc / img1_desc.norm(dim=-1, keepdim=True)
+                     img2_desc = img2_desc / img2_desc.norm(dim=-1, keepdim=True)
+
+                 img1_desc_reshaped = img1_desc.permute(0, 1, 3, 2).reshape(-1, img1_desc.shape[-1], num_patches, num_patches)
+                 img2_desc_reshaped = img2_desc.permute(0, 1, 3, 2).reshape(-1, img2_desc.shape[-1], num_patches, num_patches)
+                 trg_dense_output, src_color_map = find_nearest_patchs(mask2, mask1, img2, img1, img2_desc_reshaped, img1_desc_reshaped, mask=mask, resolution=128)
+                 if current_save_results != TOTAL_SAVE_RESULT:
+                     if not os.path.exists(f'{save_path}/{category}'):
+                         os.makedirs(f'{save_path}/{category}')
+                     fig_colormap, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 8))
+                     ax1.axis('off')
+                     ax2.axis('off')
+                     ax1.imshow(src_color_map)
+                     ax2.imshow(trg_dense_output)
+                     fig_colormap.savefig(f'{save_path}/{category}/{pair_idx}_colormap.png')
+                     plt.close(fig_colormap)
+
+             if DRAW_SWAP:
+                 if not DRAW_DENSE:
+                     mask1 = get_mask(model, aug, img1, category)
+                     mask2 = get_mask(model, aug, img2, category)
+
+                 if ONLY_DINO or not FUSE_DINO:
+                     img1_desc = img1_desc / img1_desc.norm(dim=-1, keepdim=True)
+                     img2_desc = img2_desc / img2_desc.norm(dim=-1, keepdim=True)
+
+                 img1_desc_reshaped = img1_desc.permute(0, 1, 3, 2).reshape(-1, img1_desc.shape[-1], num_patches, num_patches)
+                 img2_desc_reshaped = img2_desc.permute(0, 1, 3, 2).reshape(-1, img2_desc.shape[-1], num_patches, num_patches)
+                 trg_dense_output, src_color_map = find_nearest_patchs_replace(mask2, mask1, img2, img1, img2_desc_reshaped, img1_desc_reshaped, mask=mask, resolution=156, draw_gif=DRAW_GIF, save_path=f'{save_path}/{category}/{pair_idx}_swap.gif')
+                 if current_save_results != TOTAL_SAVE_RESULT:
+                     if not os.path.exists(f'{save_path}/{category}'):
+                         os.makedirs(f'{save_path}/{category}')
+                     fig_colormap, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 8))
+                     ax1.axis('off')
+                     ax2.axis('off')
+                     ax1.imshow(src_color_map)
+                     ax2.imshow(trg_dense_output)
+                     fig_colormap.savefig(f'{save_path}/{category}/{pair_idx}_swap.png')
+                     plt.close(fig_colormap)
+
+             if MASK and CO_PCA:
+                 mask2 = get_mask(model, aug, img2, category)
+                 img2_desc = img2_desc.permute(0, 1, 3, 2).reshape(-1, img2_desc.shape[-1], num_patches, num_patches)
+                 resized_mask2 = F.interpolate(mask2.cuda().unsqueeze(0).unsqueeze(0).float(), size=(num_patches, num_patches), mode='nearest')
+                 img2_desc = img2_desc * resized_mask2.repeat(1, img2_desc.shape[1], 1, 1)
+                 img2_desc[(img2_desc.sum(dim=1) == 0).repeat(1, img2_desc.shape[1], 1, 1)] = 100000
+                 # reshape back
+                 img2_desc = img2_desc.reshape(1, 1, img2_desc.shape[1], num_patches * num_patches).permute(0, 1, 3, 2)
+
+             # Get mutual visibility
+             vis = img1_kps[:, 2] * img2_kps[:, 2] > 0
+             if COUNT_INVIS:
+                 vis = torch.ones_like(vis)
+             # Get similarity matrix
+             if dist == 'cos':
+                 sim_1_to_2 = chunk_cosine_sim(img1_desc, img2_desc).squeeze()
+             elif dist == 'l2':
+                 sim_1_to_2 = pairwise_sim(img1_desc, img2_desc, p=2).squeeze()
+             elif dist == 'l1':
+                 sim_1_to_2 = pairwise_sim(img1_desc, img2_desc, p=1).squeeze()
+             elif dist == 'l2_norm':
+                 sim_1_to_2 = pairwise_sim(img1_desc, img2_desc, p=2, normalize=True).squeeze()
+             elif dist == 'l1_norm':
+                 sim_1_to_2 = pairwise_sim(img1_desc, img2_desc, p=1, normalize=True).squeeze()
+             else:
+                 raise ValueError('Unknown distance metric')
+
+             # Get nearest neighbors
+             nn_1_to_2 = torch.argmax(sim_1_to_2[img1_patch_idx], dim=1)
+             nn_y_patch, nn_x_patch = nn_1_to_2 // num_patches, nn_1_to_2 % num_patches
+             nn_x = (nn_x_patch - 1) * stride + stride + patch_size // 2 - .5
+             nn_y = (nn_y_patch - 1) * stride + stride + patch_size // 2 - .5
+             kps_1_to_2 = torch.stack([nn_x, nn_y]).permute(1, 0)
+
+             gt_correspondences.append(img2_kps[vis][:, [1, 0]])
+             pred_correspondences.append(kps_1_to_2[vis][:, [1, 0]])
+             if thresholds is not None:
+                 bbox_size.append(thresholds[pair_idx].repeat(vis.sum()))
+
+             if current_save_results != TOTAL_SAVE_RESULT:
+                 tmp_alpha = torch.tensor([0.1, 0.05, 0.01])
+                 if thresholds is not None:
+                     tmp_bbox_size = thresholds[pair_idx].repeat(vis.sum()).cpu()
+                     tmp_threshold = tmp_alpha.unsqueeze(-1) * tmp_bbox_size.unsqueeze(0)
+                 else:
+                     tmp_threshold = tmp_alpha * img_size
+                 if not os.path.exists(f'{save_path}/{category}'):
+                     os.makedirs(f'{save_path}/{category}')
+                 # fig = draw_correspondences_gathered(img1_kps[vis][:, [1, 0]], kps_1_to_2[vis][:, [1, 0]], img1, img2)
+                 fig = draw_correspondences_lines(img1_kps[vis][:, [1, 0]], kps_1_to_2[vis][:, [1, 0]], img2_kps[vis][:, [1, 0]], img1, img2, tmp_threshold)
+                 fig.savefig(f'{save_path}/{category}/{pair_idx}_pred.png')
+                 fig_gt = draw_correspondences_gathered(img1_kps[vis][:, [1, 0]], img2_kps[vis][:, [1, 0]], img1, img2)
+                 fig_gt.savefig(f'{save_path}/{category}/{pair_idx}_gt.png')
+                 plt.close(fig)
+                 plt.close(fig_gt)
+                 current_save_results += 1
+
+         pbar.update(1)
+
+     gt_correspondences = torch.cat(gt_correspondences, dim=0).cpu()
+     pred_correspondences = torch.cat(pred_correspondences, dim=0).cpu()
+     alpha = torch.tensor([0.1, 0.05, 0.01]) if not PASCAL else torch.tensor([0.1, 0.05, 0.15])
+     correct = torch.zeros(len(alpha))
+
+     err = (pred_correspondences - gt_correspondences).norm(dim=-1)
+     err = err.unsqueeze(0).repeat(len(alpha), 1)
+     if thresholds is not None:
+         bbox_size = torch.cat(bbox_size, dim=0).cpu()
+         threshold = alpha.unsqueeze(-1) * bbox_size.unsqueeze(0)
+         correct = err < threshold
+     else:
+         threshold = alpha * img_size
+         correct = err < threshold.unsqueeze(-1)
+
+     correct = correct.sum(dim=-1) / len(gt_correspondences)
+
+     alpha2pck = zip(alpha.tolist(), correct.tolist())
+     logger.info(' | '.join([f'PCK-Transfer@{alpha:.2f}: {pck_alpha * 100:.2f}%'
+                             for alpha, pck_alpha in alpha2pck]))
+
+     return correct
+
+
+ def main(args):
+     global MASK, SAMPLE, DIST, COUNT_INVIS, TOTAL_SAVE_RESULT, BBOX_THRE, VER, CO_PCA, PCA_DIMS, SIZE, FUSE_DINO, DINOV2, MODEL_SIZE, DRAW_DENSE, TEXT_INPUT, DRAW_SWAP, ONLY_DINO, SEED, EDGE_PAD, WEIGHT, CO_PCA_DINO, PASCAL, DRAW_GIF, RAW
+     MASK = args.MASK
+     SAMPLE = args.SAMPLE
+     DIST = args.DIST
+     COUNT_INVIS = args.COUNT_INVIS
+     TOTAL_SAVE_RESULT = args.TOTAL_SAVE_RESULT
+     BBOX_THRE = False if args.IMG_THRESHOLD else True
+     VER = args.VER
+     CO_PCA = False if args.PROJ_LAYER else True
+     CO_PCA_DINO = args.CO_PCA_DINO
+     PCA_DIMS = args.PCA_DIMS
+     SIZE = args.SIZE
+     INDICES = args.INDICES
+     EDGE_PAD = args.EDGE_PAD
+
+     FUSE_DINO = False if args.NOT_FUSE else True
+     ONLY_DINO = args.ONLY_DINO
+     DINOV2 = False if args.DINOV1 else True
+     MODEL_SIZE = args.MODEL_SIZE
+
+     DRAW_DENSE = args.DRAW_DENSE
+     DRAW_SWAP = args.DRAW_SWAP
+     DRAW_GIF = args.DRAW_GIF
+     TEXT_INPUT = args.TEXT_INPUT
+
+     SEED = args.SEED
+     WEIGHT = args.WEIGHT  # corresponds to three groups for the sd features, and one group for the dino features
+     PASCAL = args.PASCAL
+     RAW = args.RAW
+
+     if SAMPLE == 0:
+         SAMPLE = None
+     if DRAW_DENSE or DRAW_SWAP:
+         TOTAL_SAVE_RESULT = SAMPLE
+         MASK = True
+     if ONLY_DINO:
+         FUSE_DINO = True
+     if FUSE_DINO and not ONLY_DINO:
+         DIST = "l2"
+     else:
+         DIST = "cos"
+     if args.DIST != "cos" and args.DIST != "l2":
+         DIST = args.DIST
+     if PASCAL:
+         SAMPLE = 0
+
+     np.random.seed(args.SEED)
+     torch.manual_seed(args.SEED)
+     torch.cuda.manual_seed(args.SEED)
+     torch.backends.cudnn.benchmark = True
+     model, aug = load_model(diffusion_ver=VER, image_size=SIZE, num_timesteps=args.TIMESTEP, block_indices=tuple(INDICES))
+     save_path = f'./results_spair/pck_fuse_{args.NOTE}mask_{MASK}_sample_{SAMPLE}_BBOX_{BBOX_THRE}_dist_{DIST}_Invis_{COUNT_INVIS}_{args.TIMESTEP}{VER}_{MODEL_SIZE}_{SIZE}_copca_{CO_PCA}_{INDICES[0]}_{PCA_DIMS[0]}_{INDICES[1]}_{PCA_DIMS[1]}_{INDICES[2]}_{PCA_DIMS[2]}_text_{TEXT_INPUT}_sd_{WEIGHT[3]}{not ONLY_DINO}_dino_{WEIGHT[4]}{FUSE_DINO}'
+     if PASCAL:
+         save_path = f'./results_pascal/pck_fuse_{args.NOTE}mask_{MASK}_sample_{SAMPLE}_BBOX_{BBOX_THRE}_dist_{DIST}_Invis_{COUNT_INVIS}_{args.TIMESTEP}{VER}_{MODEL_SIZE}_{SIZE}_copca_{CO_PCA}_{INDICES[0]}_{PCA_DIMS[0]}_{INDICES[1]}_{PCA_DIMS[1]}_{INDICES[2]}_{PCA_DIMS[2]}_text_{TEXT_INPUT}_sd_{WEIGHT[3]}{not ONLY_DINO}_dino_{WEIGHT[4]}{FUSE_DINO}'
+     if EDGE_PAD:
+         save_path += '_edge_pad'
+     if not os.path.exists(save_path):
+         os.makedirs(save_path)
+
+     logger = get_logger(save_path + '/result.log')
+
+     logger.info(args)
+     data_dir = 'data/SPair-71k' if not PASCAL else 'data/PF-dataset-PASCAL'
+     if not PASCAL:
+         categories = os.listdir(os.path.join(data_dir, 'ImageAnnotation'))
+         categories = sorted(categories)
+     else:
+         categories = ['aeroplane', 'bicycle', 'bird', 'boat', 'bottle',
+                       'bus', 'car', 'cat', 'chair', 'cow',
+                       'diningtable', 'dog', 'horse', 'motorbike', 'person',
+                       'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor']  # for pascal
+     img_size = 840 if DINOV2 else 224 if ONLY_DINO else 480
+
+     pcks = []
+     pcks_05 = []
+     pcks_01 = []
+     start_time = time.time()
+     for cat in categories:
+         files, kps, thresholds = load_spair_data(data_dir, size=img_size, category=cat, subsample=SAMPLE) if not PASCAL else load_pascal_data(data_dir, size=img_size, category=cat, subsample=SAMPLE)
+         if BBOX_THRE:
+             pck = compute_pck(model, aug, save_path, files, kps, cat, mask=MASK, dist=DIST, thresholds=thresholds, real_size=SIZE)
+         else:
+             pck = compute_pck(model, aug, save_path, files, kps, cat, mask=MASK, dist=DIST, real_size=SIZE)
+         pcks.append(pck[0])
+         pcks_05.append(pck[1])
+         pcks_01.append(pck[2])
+     end_time = time.time()
+     minutes, seconds = divmod(end_time - start_time, 60)
+     logger.info(f"Time: {minutes:.0f}m {seconds:.0f}s")
+     logger.info(f"Average PCK0.10: {np.average(pcks) * 100:.2f}")
+     logger.info(f"Average PCK0.05: {np.average(pcks_05) * 100:.2f}")
+     logger.info(f"Average PCK0.01: {np.average(pcks_01) * 100:.2f}") if not PASCAL else logger.info(f"Average PCK0.15: {np.average(pcks_01) * 100:.2f}")
+     if SAMPLE is None or SAMPLE == 0:
+         weights_pascal = [15, 30, 10, 6, 8, 32, 19, 27, 13, 3, 8, 24, 9, 27, 12, 7, 1, 13, 20, 15]
+         weights_spair = [690, 650, 702, 702, 870, 644, 564, 600, 646, 640, 600, 600, 702, 650, 862, 664, 756, 692]
+         weights = weights_pascal if PASCAL else weights_spair
+     else:
+         weights = [1] * len(pcks)
+     logger.info(f"Weighted PCK0.10: {np.average(pcks, weights=weights) * 100:.2f}")
+     logger.info(f"Weighted PCK0.05: {np.average(pcks_05, weights=weights) * 100:.2f}")
+     logger.info(f"Weighted PCK0.01: {np.average(pcks_01, weights=weights) * 100:.2f}") if not PASCAL else logger.info(f"Weighted PCK0.15: {np.average(pcks_01, weights=weights) * 100:.2f}")
+
+
+ if __name__ == '__main__':
+     parser = argparse.ArgumentParser()
+     parser.add_argument('--SEED', type=int, default=42)
+     parser.add_argument('--MASK', action='store_true', default=False)  # set true to use the segmentation mask for the extracted features
+     parser.add_argument('--SAMPLE', type=int, default=20)  # sample 20 pairs for each category, set to 0 to use all pairs
+     parser.add_argument('--DIST', type=str, default='l2')  # distance metric: cos, l2, l1, l2_norm, l1_norm, plus, plus_norm
+     parser.add_argument('--COUNT_INVIS', action='store_true', default=False)  # set true to count invisible keypoints
+     parser.add_argument('--TOTAL_SAVE_RESULT', type=int, default=5)  # save the qualitative results for the first 5 pairs
549
+ parser.add_argument('--IMG_THRESHOLD', action='store_true', default=False) # set the pck threshold to the image size rather than the bbox size
550
+ parser.add_argument('--VER', type=str, default="v1-5") # version of diffusion, v1-3, v1-4, v1-5, v2-1-base
551
+ parser.add_argument('--PROJ_LAYER', action='store_true', default=False) # set true to use the pretrained projection layer from ODISE for dimension reduction
552
+ parser.add_argument('--CO_PCA_DINO', type=int, default=0) # whether perform co-pca on dino features
553
+ parser.add_argument('--PCA_DIMS', nargs=3, type=int, default=[256, 256, 256]) # the dimensions of the three groups of sd features
554
+ parser.add_argument('--TIMESTEP', type=int, default=100) # timestep for diffusion, [0, 1000], 0 for no noise added
555
+ parser.add_argument('--SIZE', type=int, default=960) # image size for the sd input
556
+ parser.add_argument('--INDICES', nargs=4, type=int, default=[2,5,8,11]) # select different layers of sd features, only the first three are used by default
557
+ parser.add_argument('--EDGE_PAD', action='store_true', default=False) # set true to pad the image with the edge pixels
558
+ parser.add_argument('--WEIGHT', nargs=5, type=float, default=[1,1,1,1,1]) # first three corresponde to three layers for the sd features, and the last two for the ensembled sd/dino features
559
+ parser.add_argument('--RAW', action='store_true', default=False) # set true to use the raw features from sd
560
+
561
+ parser.add_argument('--NOT_FUSE', action='store_true', default=False) # set true to use only sd features
562
+ parser.add_argument('--ONLY_DINO', action='store_true', default=False) # set true to use only dino features
563
+ parser.add_argument('--DINOV1', action='store_true', default=False) # set true to use dinov1
564
+ parser.add_argument('--MODEL_SIZE', type=str, default='base') # model size of thye dinov2, small, base, large
565
+
566
+ parser.add_argument('--DRAW_DENSE', action='store_true', default=False) # set true to draw the dense correspondences
567
+ parser.add_argument('--DRAW_SWAP', action='store_true', default=False) # set true to draw the swapped images
568
+ parser.add_argument('--DRAW_GIF', action='store_true', default=False) # set true to generate the gif for the swapped images
569
+ parser.add_argument('--TEXT_INPUT', action='store_true', default=False) # set true to use the explicit text input
570
+
571
+ parser.add_argument('--PASCAL', action='store_true', default=False) # set true to test on pfpascal dataset
572
+ parser.add_argument('--NOTE', type=str, default='')
573
+
574
+ args = parser.parse_args()
575
+ main(args)
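For reference, the PCK@α logic in `compute_pck` above reduces to a few tensor operations: a transferred keypoint counts as correct when its L2 error is below α times the bounding-box size (or α times the image size when `--IMG_THRESHOLD` is set). A minimal self-contained sketch with dummy keypoints (the variable names here are illustrative, not part of the file):

```
import torch

# 100 dummy predicted / ground-truth keypoints in pixels, plus a per-keypoint bbox size.
pred_kps = torch.rand(100, 2) * 480
gt_kps = torch.rand(100, 2) * 480
bbox_size = torch.full((100,), 240.0)

alpha = torch.tensor([0.1, 0.05, 0.01])            # the SPair thresholds used above
err = (pred_kps - gt_kps).norm(dim=-1)             # per-keypoint L2 error
err = err.unsqueeze(0).repeat(len(alpha), 1)       # one row of errors per alpha
threshold = alpha.unsqueeze(-1) * bbox_size.unsqueeze(0)
pck = (err < threshold).sum(dim=-1) / len(gt_kps)  # fraction of correct transfers per alpha
print({f'PCK@{a:.2f}': f'{p * 100:.2f}%' for a, p in zip(alpha.tolist(), pck.tolist())})
```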
Code/Baselines/sd-dino/pck_tss.py ADDED
@@ -0,0 +1,505 @@
import os
import sys
import argparse
import time

import numpy as np
import torch
import torch.nn.functional as F
import torchvision.transforms as transforms
from torch.utils.data import DataLoader
from PIL import Image
from tqdm import tqdm
import matplotlib.pyplot as plt
import imageio
from imageio import imwrite
from loguru import logger

from utils.utils_correspondence import co_pca, resize, find_nearest_patchs, find_nearest_patchs_replace
from utils.logger import get_logger
from utils.utils_tss import TSSDataset
from utils.utils_flow import remap_using_flow_fields, flow_to_image, convert_flow_to_mapping, overlay_semantic_mask
from extractor_sd import load_model, process_features_and_mask, get_mask
from extractor_dino import ViTExtractor


def get_smooth(img, mask=None):
    # First-order smoothness of a flow field: mean absolute spatial gradient,
    # restricted to the valid region when a mask is given.
    if mask is not None:
        img_smooth = img.clone().permute(0, 2, 3, 1)
        img_smooth[~mask] = 0
        img = img_smooth.permute(0, 3, 1, 2)

    def _gradient_x(img, mask):  # mask-aware gradient still to be implemented; mask is unused
        img = F.pad(img, (0, 1, 0, 0), mode="replicate")
        gx = img[:, :, :, :-1] - img[:, :, :, 1:]  # NCHW
        return gx

    def _gradient_y(img, mask):
        img = F.pad(img, (0, 0, 0, 1), mode="replicate")
        gy = img[:, :, :-1, :] - img[:, :, 1:, :]  # NCHW
        return gy

    img_grad_x = _gradient_x(img, mask)
    img_grad_y = _gradient_y(img, mask)

    if mask is not None:
        smooth = (torch.abs(img_grad_x).sum() + torch.abs(img_grad_y).sum()) / torch.sum(mask)
    else:
        smooth = torch.mean(torch.abs(img_grad_x)) + torch.mean(torch.abs(img_grad_y))

    return smooth

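A quick sanity check of `get_smooth` (hypothetical usage, assuming the function is in scope): a constant flow field has zero gradient energy, while a spatially varying field is penalized in proportion to its average local variation.

```
import torch

flow = torch.ones(1, 2, 8, 8)                    # constant field: zero smoothness cost
print(get_smooth(flow))                          # tensor(0.)
print(get_smooth(torch.randn(1, 2, 8, 8)) > 0)   # tensor(True): variation is penalized
```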
def plot_individual_images(save_path, name_image, source_image, target_image, flow_est, flow_gt,
                           mask_used=None, color=[255, 102, 51]):
    if not isinstance(source_image, np.ndarray):
        source_image = source_image.squeeze().permute(1, 2, 0).cpu().numpy().astype(np.uint8)
        target_image = target_image.squeeze().permute(1, 2, 0).cpu().numpy().astype(np.uint8)
    else:
        # numpy array: make sure the images are HWC
        if not source_image.shape[2] == 3:
            source_image = source_image.transpose(1, 2, 0)
            target_image = target_image.transpose(1, 2, 0)

    flow_target = flow_est.detach().permute(0, 2, 3, 1)[0].cpu().numpy()
    flow_gt = flow_gt.detach().permute(0, 2, 3, 1)[0].cpu().numpy()
    remapped_est = remap_using_flow_fields(source_image, flow_target[:, :, 0], flow_target[:, :, 1])

    max_mapping = 520
    max_flow = 400
    rgb_flow = flow_to_image(flow_target, max_flow)
    rgb_flow_gt = flow_to_image(flow_gt, max_flow)
    rgb_mapping = flow_to_image(convert_flow_to_mapping(flow_target, False), max_mapping)

    if not os.path.isdir(os.path.join(save_path, 'individual_images')):
        os.makedirs(os.path.join(save_path, 'individual_images'))
    # save the rgb flow, the image pair and the warped source
    imageio.imwrite(os.path.join(save_path, 'individual_images', "{}_rgb_flow.png".format(name_image)), rgb_flow)
    imageio.imwrite(os.path.join(save_path, 'individual_images', "{}_rgb_flow_gt.png".format(name_image)), rgb_flow_gt)
    imageio.imwrite(os.path.join(save_path, 'individual_images', "{}_rgb_mapping.png".format(name_image)), rgb_mapping)
    imageio.imwrite(os.path.join(save_path, 'individual_images', "{}_image_s.png".format(name_image)), source_image)
    imageio.imwrite(os.path.join(save_path, 'individual_images', "{}_image_t.png".format(name_image)), target_image)
    imageio.imwrite(os.path.join(save_path, 'individual_images', "{}_warped_s.png".format(name_image)),
                    remapped_est)

    if mask_used is not None:
        mask_used = mask_used.squeeze().cpu().numpy()
        imageio.imwrite(os.path.join(save_path, 'individual_images', "{}_mask.png".format(name_image)),
                        mask_used.astype(np.uint8) * 255)
        imageio.imwrite(
            os.path.join(save_path, 'individual_images', "{}_image_s_warped_and_mask.png".format(name_image)),
            remapped_est * np.tile(np.expand_dims(mask_used.astype(np.uint8), axis=2), (1, 1, 3)))

        # overlay the inverted mask on the warped image and on the flow visualizations
        img_mask_overlay_color = overlay_semantic_mask(remapped_est.astype(np.uint8),
                                                       255 - mask_used.astype(np.uint8) * 255, color=color)
        imwrite(os.path.join(save_path, 'individual_images',
                             '{}_warped_overlay_mask_color.png'.format(name_image)), img_mask_overlay_color)

        flow_mask_overlay_color = overlay_semantic_mask(rgb_flow, 255 - mask_used.astype(np.uint8) * 255, color=color)
        imwrite(os.path.join(save_path, 'individual_images',
                             '{}_flow_overlay_mask_color.png'.format(name_image)), flow_mask_overlay_color)

        flow_gt_mask_overlay_color = overlay_semantic_mask(rgb_flow_gt, 255 - mask_used.astype(np.uint8) * 255, color=color)
        imwrite(os.path.join(save_path, 'individual_images',
                             '{}_flow_gt_overlay_mask_color.png'.format(name_image)), flow_gt_mask_overlay_color)

def nearest_neighbor_flow(src_descriptor, trg_descriptor, ori_shape, mask1=None, mask2=None):
    B, C, H, W = src_descriptor.shape

    if mask1 is not None and mask2 is not None:
        resized_mask1 = F.interpolate(mask1.cuda().unsqueeze(0).unsqueeze(0).float(), size=src_descriptor.shape[2:], mode='nearest')
        resized_mask2 = F.interpolate(mask2.cuda().unsqueeze(0).unsqueeze(0).float(), size=trg_descriptor.shape[2:], mode='nearest')
        src_descriptor = src_descriptor * resized_mask1.repeat(1, src_descriptor.shape[1], 1, 1)
        trg_descriptor = trg_descriptor * resized_mask2.repeat(1, trg_descriptor.shape[1], 1, 1)
        # set positions where mask == 0 to a very large number so they never match
        src_descriptor[(src_descriptor.sum(1) == 0).repeat(1, src_descriptor.shape[1], 1, 1)] = 100000
        trg_descriptor[(trg_descriptor.sum(1) == 0).repeat(1, trg_descriptor.shape[1], 1, 1)] = 100000

    real_H, real_W = ori_shape
    long_edge = max(real_H, real_W)
    src_descriptor = src_descriptor.view(B, C, -1).permute(0, 2, 1).squeeze()
    trg_descriptor = trg_descriptor.view(B, C, -1).permute(0, 2, 1).squeeze()

    # Compute the pairwise distance matrix with torch.cdist
    distances = torch.cdist(trg_descriptor, src_descriptor)

    # Find the indices of the minimum distances
    indices = torch.argmin(distances, dim=1).reshape(B, H, W)

    # Convert flat indices to (y, x) coordinates; floor division recovers the row index
    trg_y = torch.div(indices, W, rounding_mode='floor').to(torch.float32)
    trg_x = torch.fmod(indices, W).to(torch.float32)

    # Create the source coordinate grid
    grid_y, grid_x = torch.meshgrid(torch.arange(H, dtype=torch.float32, device=src_descriptor.device), torch.arange(W, dtype=torch.float32, device=src_descriptor.device))

    # Compare target coordinates with the source coordinate grid
    flow_x = trg_x - grid_x
    flow_y = trg_y - grid_y

    # Stack the flow fields together to form the final optical flow
    flow = torch.stack([flow_x, flow_y], dim=1)

    # Bilinearly upsample the flow from the (60, 60) patch grid to the long edge and rescale its magnitude
    flow = F.interpolate(flow, size=(long_edge, long_edge), mode='bilinear', align_corners=False)
    flow *= torch.tensor([long_edge / 60.0, long_edge / 60.0], dtype=torch.float32, device=src_descriptor.device).view(1, 2, 1, 1)

    # Crop the flow field to the original image size
    if long_edge == real_H:
        flow = flow[:, :, :, (long_edge - real_W) // 2:(long_edge - real_W) // 2 + real_W]
    else:
        flow = flow[:, :, (long_edge - real_H) // 2:(long_edge - real_H) // 2 + real_H, :]

    return flow

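A minimal smoke test for `nearest_neighbor_flow` (hypothetical, assuming the function is in scope and enough memory for the 3600 x 3600 distance matrix): with DINOv2 at stride 14 and 840-pixel inputs, the descriptor grid is 60 x 60, and the returned flow is upsampled and cropped back to the requested original shape.

```
import torch

src = torch.randn(1, 768, 60, 60)  # source descriptor map (B, C, 60, 60)
trg = torch.randn(1, 768, 60, 60)  # target descriptor map
flow = nearest_neighbor_flow(src, trg, ori_shape=(480, 640))
print(flow.shape)  # torch.Size([1, 2, 480, 640]): flow at the original resolution
```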
def compute_flow(model, aug, source_img, target_img, save_path, batch_num=0, category=['car'], mask=False, dist='cos', real_size=960):
    if type(category) == str:
        category = [category]
    img_size = 840 if DINOV2 else 480
    model_dict = {'small': 'dinov2_vits14',
                  'base': 'dinov2_vitb14',
                  'large': 'dinov2_vitl14',
                  'giant': 'dinov2_vitg14'}

    model_type = model_dict[MODEL_SIZE] if DINOV2 else 'dino_vits8'
    layer = 11 if DINOV2 else 9
    if 'l' in model_type:
        layer = 23
    elif 'g' in model_type:
        layer = 39
    facet = 'token' if DINOV2 else 'key'
    stride = 14 if DINOV2 else 8
    device = 'cuda' if torch.cuda.is_available() else 'cpu'
    # indicator = 'v2' if DINOV2 else 'v1'
    # model_size = model_type.split('vit')[-1]
    extractor = ViTExtractor(model_type, stride, device=device)
    patch_size = extractor.model.patch_embed.patch_size[0] if DINOV2 else extractor.model.patch_embed.patch_size
    num_patches = int(patch_size / stride * (img_size // patch_size - 1) + 1)

    input_text = "a photo of " + category[-1][0] if TEXT_INPUT else None

    current_save_results = 0

    N = 1
    result = []

    for pair_idx in range(N):
        shape = source_img.shape[2:]
        # Load image 1
        img1 = Image.fromarray(source_img.squeeze().numpy().transpose(1, 2, 0).astype(np.uint8))
        img1_input = resize(img1, real_size, resize=True, to_pil=True, edge=EDGE_PAD)
        img1 = resize(img1, img_size, resize=True, to_pil=True, edge=EDGE_PAD)

        # Load image 2
        img2 = Image.fromarray(target_img.squeeze().numpy().transpose(1, 2, 0).astype(np.uint8))
        img2_input = resize(img2, real_size, resize=True, to_pil=True, edge=EDGE_PAD)
        img2 = resize(img2, img_size, resize=True, to_pil=True, edge=EDGE_PAD)

        with torch.no_grad():
            if not CO_PCA:
                if not ONLY_DINO:
                    img1_desc = process_features_and_mask(model, aug, img1_input, input_text=input_text, mask=False).reshape(1, 1, -1, num_patches**2).permute(0, 1, 3, 2)
                    img2_desc = process_features_and_mask(model, aug, img2_input, category[-1], input_text=input_text, mask=mask).reshape(1, 1, -1, num_patches**2).permute(0, 1, 3, 2)
                if FUSE_DINO:
                    img1_batch = extractor.preprocess_pil(img1)
                    img1_desc_dino = extractor.extract_descriptors(img1_batch.to(device), layer, facet)
                    img2_batch = extractor.preprocess_pil(img2)
                    img2_desc_dino = extractor.extract_descriptors(img2_batch.to(device), layer, facet)
            else:
                if not ONLY_DINO:
                    features1 = process_features_and_mask(model, aug, img1_input, input_text=input_text, mask=False, raw=True)
                    features2 = process_features_and_mask(model, aug, img2_input, category[-1], input_text=input_text, mask=mask, raw=True)
                    processed_features1, processed_features2 = co_pca(features1, features2, PCA_DIMS)
                    img1_desc = processed_features1.reshape(1, 1, -1, num_patches**2).permute(0, 1, 3, 2)
                    img2_desc = processed_features2.reshape(1, 1, -1, num_patches**2).permute(0, 1, 3, 2)
                if FUSE_DINO:
                    img1_batch = extractor.preprocess_pil(img1)
                    img1_desc_dino = extractor.extract_descriptors(img1_batch.to(device), layer, facet)

                    img2_batch = extractor.preprocess_pil(img2)
                    img2_desc_dino = extractor.extract_descriptors(img2_batch.to(device), layer, facet)  # (1, 1, 3600, 768)

            if dist == 'l1' or dist == 'l2':
                # normalize the features
                img1_desc = img1_desc / img1_desc.norm(dim=-1, keepdim=True)
                img2_desc = img2_desc / img2_desc.norm(dim=-1, keepdim=True)
                if FUSE_DINO:
                    img1_desc_dino = img1_desc_dino / img1_desc_dino.norm(dim=-1, keepdim=True)
                    img2_desc_dino = img2_desc_dino / img2_desc_dino.norm(dim=-1, keepdim=True)

            if FUSE_DINO and not ONLY_DINO:
                # concatenate the two feature sets and reweight the three SD groups
                img1_desc = torch.cat((img1_desc, img1_desc_dino), dim=-1)
                img2_desc = torch.cat((img2_desc, img2_desc_dino), dim=-1)

                img1_desc[..., :PCA_DIMS[0]] *= WEIGHT[0]
                img1_desc[..., PCA_DIMS[0]:PCA_DIMS[1] + PCA_DIMS[0]] *= WEIGHT[1]
                img1_desc[..., PCA_DIMS[1] + PCA_DIMS[0]:PCA_DIMS[2] + PCA_DIMS[1] + PCA_DIMS[0]] *= WEIGHT[2]

                img2_desc[..., :PCA_DIMS[0]] *= WEIGHT[0]
                img2_desc[..., PCA_DIMS[0]:PCA_DIMS[1] + PCA_DIMS[0]] *= WEIGHT[1]
                img2_desc[..., PCA_DIMS[1] + PCA_DIMS[0]:PCA_DIMS[2] + PCA_DIMS[1] + PCA_DIMS[0]] *= WEIGHT[2]

            if ONLY_DINO:
                img1_desc = img1_desc_dino
                img2_desc = img2_desc_dino
            # logger.info(img1_desc.shape, img2_desc.shape)

            if DRAW_DENSE:
                mask1 = get_mask(model, aug, img1, category[0])
                mask2 = get_mask(model, aug, img2, category[-1])
                if ONLY_DINO or not FUSE_DINO:
                    img1_desc = img1_desc / img1_desc.norm(dim=-1, keepdim=True)
                    img2_desc = img2_desc / img2_desc.norm(dim=-1, keepdim=True)

                img1_desc_reshaped = img1_desc.permute(0, 1, 3, 2).reshape(-1, img1_desc.shape[-1], num_patches, num_patches)
                img2_desc_reshaped = img2_desc.permute(0, 1, 3, 2).reshape(-1, img2_desc.shape[-1], num_patches, num_patches)
                trg_dense_output, src_color_map = find_nearest_patchs(mask2, mask1, img2, img1, img2_desc_reshaped, img1_desc_reshaped, mask=mask)
                if current_save_results != TOTAL_SAVE_RESULT:
                    if not os.path.exists(f'{save_path}/{category[0]}'):
                        os.makedirs(f'{save_path}/{category[0]}')
                    fig_colormap, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 8))
                    ax1.axis('off')
                    ax2.axis('off')
                    ax1.imshow(src_color_map)
                    ax2.imshow(trg_dense_output)
                    fig_colormap.savefig(f'{save_path}/{category[0]}/{batch_num}_colormap.png')
                    plt.close(fig_colormap)

            if DRAW_SWAP:
                if not DRAW_DENSE:
                    mask1 = get_mask(model, aug, img1, category[0])
                    mask2 = get_mask(model, aug, img2, category[-1])

                if (ONLY_DINO or not FUSE_DINO) and not DRAW_DENSE:
                    img1_desc = img1_desc / img1_desc.norm(dim=-1, keepdim=True)
                    img2_desc = img2_desc / img2_desc.norm(dim=-1, keepdim=True)

                img1_desc_reshaped = img1_desc.permute(0, 1, 3, 2).reshape(-1, img1_desc.shape[-1], num_patches, num_patches)
                img2_desc_reshaped = img2_desc.permute(0, 1, 3, 2).reshape(-1, img2_desc.shape[-1], num_patches, num_patches)
                trg_dense_output, src_color_map = find_nearest_patchs_replace(mask2, mask1, img2, img1, img2_desc_reshaped, img1_desc_reshaped, mask=mask, resolution=156)
                if current_save_results != TOTAL_SAVE_RESULT:
                    if not os.path.exists(f'{save_path}/{category[0]}'):
                        os.makedirs(f'{save_path}/{category[0]}')
                    fig_colormap, (ax1, ax2) = plt.subplots(1, 2, figsize=(16, 8))
                    ax1.axis('off')
                    ax2.axis('off')
                    ax1.imshow(src_color_map)
                    ax2.imshow(trg_dense_output)
                    fig_colormap.savefig(f'{save_path}/{category[0]}/{batch_num}_swap.png')
                    plt.close(fig_colormap)

            # reshape the descriptors to (1, dim, num_patches, num_patches)
            img1_desc_reshaped = img1_desc.permute(0, 1, 3, 2).reshape(-1, img1_desc.shape[-1], num_patches, num_patches)
            img2_desc_reshaped = img2_desc.permute(0, 1, 3, 2).reshape(-1, img2_desc.shape[-1], num_patches, num_patches)

            # compute the flow map based on the nearest neighbour match
            if MASK:
                mask1 = get_mask(model, aug, img1, category[0])
                mask2 = get_mask(model, aug, img2, category[-1])
                result = nearest_neighbor_flow(img1_desc_reshaped, img2_desc_reshaped, shape, mask1, mask2)
            else:
                result = nearest_neighbor_flow(img1_desc_reshaped, img2_desc_reshaped, shape)

    return result

def run_evaluation_semantic(model, aug, test_dataloader, device,
                            path_to_save=None, plot=False, plot_100=False, plot_ind_images=False):
    current_save_results = 0
    pbar = tqdm(enumerate(test_dataloader), total=len(test_dataloader))
    mean_epe_list, epe_all_list, pck_0_05_list, pck_0_01_list, pck_0_1_list, pck_0_15_list = [], [], [], [], [], []
    smooth_est_list, smooth_gt_list = [], []
    eval_buf = {'cls_pck': dict(), 'vpvar': dict(), 'scvar': dict(), 'trncn': dict(), 'occln': dict()}

    # PCK curve per image
    pck_thresholds = [0.01]
    pck_thresholds.extend(np.arange(0.05, 0.4, 0.05).tolist())
    pck_per_image_curve = np.zeros((len(pck_thresholds), len(test_dataloader)), np.float32)

    for i_batch, mini_batch in pbar:
        source_img = mini_batch['source_image']
        target_img = mini_batch['target_image']
        flow_gt = mini_batch['flow_map'].to(device)
        mask_valid = mini_batch['correspondence_mask'].to(device)
        category = mini_batch['category']

        if 'pckthres' in list(mini_batch.keys()):
            L_pck = mini_batch['pckthres'][0].float().item()
        else:
            raise ValueError('No pck threshold in mini_batch')

        flow_est = compute_flow(model, aug, source_img, target_img, batch_num=i_batch, save_path=path_to_save, category=category)

        if plot_ind_images or current_save_results < TOTAL_SAVE_RESULT:
            plot_individual_images(path_to_save, 'image_{}'.format(i_batch), source_img, target_img, flow_est, flow_gt, mask_used=mask_valid)
            current_save_results += 1

        smooth_est_list.append(get_smooth(flow_est, mask_valid).cpu().numpy())
        smooth_gt_list.append(get_smooth(flow_gt, mask_valid).cpu().numpy())

        flow_est = flow_est.permute(0, 2, 3, 1)[mask_valid]
        flow_gt = flow_gt.permute(0, 2, 3, 1)[mask_valid]

        epe = torch.sum((flow_est - flow_gt) ** 2, dim=1).sqrt()

        epe_all_list.append(epe.view(-1).cpu().numpy())
        mean_epe_list.append(epe.mean().item())
        pck_0_05_list.append(epe.le(0.05 * L_pck).float().mean().item())
        pck_0_01_list.append(epe.le(0.01 * L_pck).float().mean().item())
        pck_0_1_list.append(epe.le(0.1 * L_pck).float().mean().item())
        pck_0_15_list.append(epe.le(0.15 * L_pck).float().mean().item())
        for t in range(len(pck_thresholds)):
            pck_per_image_curve[t, i_batch] = epe.le(pck_thresholds[t] * L_pck).float().mean().item()

    epe_all = np.concatenate(epe_all_list)
    # note: the per-dataset numbers below reuse L_pck from the last batch
    pck_0_05_dataset = np.mean(epe_all <= 0.05 * L_pck)
    pck_0_01_dataset = np.mean(epe_all <= 0.01 * L_pck)
    pck_0_1_dataset = np.mean(epe_all <= 0.1 * L_pck)
    pck_0_15_dataset = np.mean(epe_all <= 0.15 * L_pck)
    smooth_est_dataset = np.mean(smooth_est_list)
    smooth_gt_dataset = np.mean(smooth_gt_list)

    output = {'AEPE': np.mean(mean_epe_list), 'PCK_0_05_per_image': np.mean(pck_0_05_list),
              'PCK_0_01_per_image': np.mean(pck_0_01_list), 'PCK_0_1_per_image': np.mean(pck_0_1_list),
              'PCK_0_15_per_image': np.mean(pck_0_15_list),
              'PCK_0_01_per_dataset': pck_0_01_dataset, 'PCK_0_05_per_dataset': pck_0_05_dataset,
              'PCK_0_1_per_dataset': pck_0_1_dataset, 'PCK_0_15_per_dataset': pck_0_15_dataset,
              'pck_threshold_alpha': pck_thresholds, 'pck_curve_per_image': np.mean(pck_per_image_curve, axis=1).tolist()
              }
    logger.info("Validation EPE: %f, alpha=0.01: %f, alpha=0.05: %f" % (output['AEPE'], output['PCK_0_01_per_image'],
                                                                        output['PCK_0_05_per_image']))
    logger.info("smooth_est: %f, smooth_gt: %f" % (smooth_est_dataset, smooth_gt_dataset))

    for name in eval_buf.keys():
        output[name] = {}
        for cls in eval_buf[name]:
            if eval_buf[name] is not None:
                cls_avg = sum(eval_buf[name][cls]) / len(eval_buf[name][cls])
                output[name][cls] = cls_avg

    return output

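The distinction between the per-image and per-dataset PCK reported above is easy to miss: the former averages each image's accuracy, while the latter pools every valid match before thresholding. A toy illustration (hypothetical numbers):

```
import numpy as np

# Image A: 1 valid match, correct. Image B: 99 valid matches, all wrong.
pck_per_image = np.mean([1.0, 0.0])   # 0.50: each image counts equally
pck_per_dataset = (1 + 0) / (1 + 99)  # 0.01: each match counts equally
print(pck_per_image, pck_per_dataset)
```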
def main(args):
    global MASK, SAMPLE, DIST, TOTAL_SAVE_RESULT, VER, CO_PCA, PCA_DIMS, SIZE, FUSE_DINO, DINOV2, MODEL_SIZE, DRAW_DENSE, TEXT_INPUT, DRAW_SWAP, ONLY_DINO, SEED, EDGE_PAD, WEIGHT
    MASK = args.MASK
    SAMPLE = args.SAMPLE
    DIST = args.DIST
    TOTAL_SAVE_RESULT = args.TOTAL_SAVE_RESULT
    VER = args.VER
    CO_PCA = args.CO_PCA
    PCA_DIMS = args.PCA_DIMS
    SIZE = args.SIZE
    INDICES = args.INDICES
    EDGE_PAD = args.EDGE_PAD

    FUSE_DINO = False if args.NOT_FUSE else True
    ONLY_DINO = args.ONLY_DINO
    DINOV2 = False if args.DINOV1 else True
    MODEL_SIZE = args.MODEL_SIZE
    DRAW_DENSE = args.DRAW_DENSE
    DRAW_SWAP = args.DRAW_SWAP
    TEXT_INPUT = args.TEXT_INPUT
    SEED = args.SEED
    WEIGHT = args.WEIGHT  # corresponds to the three groups of SD features and one group of DINO features

    if SAMPLE == 0:
        SAMPLE = None
    if DRAW_DENSE or DRAW_SWAP:
        TOTAL_SAVE_RESULT = SAMPLE
    if ONLY_DINO:
        FUSE_DINO = True
    if FUSE_DINO and not ONLY_DINO:
        DIST = "l2"
    else:
        DIST = "cos"

    np.random.seed(args.SEED)
    torch.manual_seed(args.SEED)
    torch.cuda.manual_seed(args.SEED)
    torch.backends.cudnn.benchmark = True

    model, aug = load_model(diffusion_ver=VER, image_size=SIZE, num_timesteps=args.TIMESTEP, block_indices=tuple(INDICES))
    save_path = f'./results_tss/pck_tss_mask_{MASK}_dist_{DIST}_{args.TIMESTEP}{VER}_{MODEL_SIZE}_{SIZE}_copca_{CO_PCA}_{INDICES[0]}_{PCA_DIMS[0]}_{INDICES[1]}_{PCA_DIMS[1]}_{INDICES[2]}_{PCA_DIMS[2]}_text_{TEXT_INPUT}_sd_{not ONLY_DINO}_dino_{FUSE_DINO}'
    if EDGE_PAD:
        save_path += '_edge_pad'
    if not os.path.exists(save_path):
        os.makedirs(save_path)

    logger = get_logger(save_path + '/result.log')

    logger.info(args)
    data_dir = "data/TSS_CVPR2016"

    start_time = time.time()

    class ArrayToTensor(object):
        """Converts a numpy.ndarray (H x W x C) to a torch.FloatTensor of shape (C x H x W)."""
        def __init__(self, get_float=True):
            self.get_float = get_float

        def __call__(self, array):
            if not isinstance(array, np.ndarray):
                array = np.array(array)
            # put it from HWC to CHW format
            array = np.transpose(array, (2, 0, 1))
            tensor = torch.from_numpy(array)
            if self.get_float:
                # careful: this is not normalized to [0, 1]
                return tensor.float()
            else:
                return tensor

    co_transform = None
    target_transform = transforms.Compose([ArrayToTensor()])  # only puts the channel dimension first
    input_transform = transforms.Compose([ArrayToTensor(get_float=False)])  # only puts the channel dimension first
    output = {}
    for sub_data in ['FG3DCar', 'JODS', 'PASCAL']:
        test_set = TSSDataset(os.path.join(data_dir, sub_data),
                              source_image_transform=input_transform,
                              target_image_transform=input_transform, flow_transform=target_transform,
                              co_transform=co_transform,
                              num_samples=SAMPLE)
        test_dataloader = DataLoader(test_set, batch_size=1, num_workers=8)
        results = run_evaluation_semantic(model, aug, test_dataloader, device='cuda', path_to_save=save_path + '/' + sub_data, plot_ind_images=DRAW_SWAP)
        output[sub_data] = results

    end_time = time.time()
    minutes, seconds = divmod(end_time - start_time, 60)
    logger.info(f"Time: {minutes:.0f}m {seconds:.0f}s")
    torch.save(output, save_path + '/result.pth')


if __name__ == '__main__':
    parser = argparse.ArgumentParser()
    parser.add_argument('--SEED', type=int, default=42)
    parser.add_argument('--MASK', action='store_true', default=False)
    parser.add_argument('--SAMPLE', type=int, default=0)
    parser.add_argument('--DIST', type=str, default='l2')
    parser.add_argument('--TOTAL_SAVE_RESULT', type=int, default=5)
    parser.add_argument('--VER', type=str, default="v1-5")
    parser.add_argument('--CO_PCA', type=bool, default=True)  # note: argparse type=bool treats any non-empty string as True
    parser.add_argument('--PCA_DIMS', nargs=3, type=int, default=[256, 256, 256])
    parser.add_argument('--TIMESTEP', type=int, default=100)
    parser.add_argument('--SIZE', type=int, default=960)
    parser.add_argument('--INDICES', nargs=4, type=int, default=[2, 5, 8, 11])
    parser.add_argument('--WEIGHT', nargs=4, type=float, default=[1, 1, 1, 1])
    parser.add_argument('--EDGE_PAD', action='store_true', default=False)

    parser.add_argument('--NOT_FUSE', action='store_true', default=False)
    parser.add_argument('--ONLY_DINO', action='store_true', default=False)
    parser.add_argument('--DINOV1', action='store_true', default=False)
    parser.add_argument('--MODEL_SIZE', type=str, default='base')

    parser.add_argument('--DRAW_DENSE', action='store_true', default=False)
    parser.add_argument('--DRAW_SWAP', action='store_true', default=False)
    parser.add_argument('--TEXT_INPUT', action='store_true', default=False)

    args = parser.parse_args()
    main(args)
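As with the SPair/PF-PASCAL evaluation above, this script is meant to be run directly; a typical invocation might be `python pck_tss.py --MASK --TOTAL_SAVE_RESULT 5` (illustrative flags), which expects the benchmark under the hard-coded `data/TSS_CVPR2016` directory and writes `result.log` and `result.pth` into an automatically named folder under `./results_tss/`.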