Ockham98 committed (verified) · Commit 42b03f7 · Parent: aea0ad2 · Update README.md · Files changed: README.md (+156 −3)
# IJCV (2025): TryOn-Adapter
This repository is the official implementation of [TryOn-Adapter](https://arxiv.org/abs/2404.00878).

> **TryOn-Adapter: Efficient Fine-Grained Clothing Identity Adaptation for High-Fidelity Virtual Try-On**<br>
> Jiazheng Xing, Chao Xu, Yijie Qian, Yang Liu, Guang Dai, Baigui Sun, Yong Liu, Jingdong Wang

[[arXiv Paper](https://arxiv.org/abs/2404.00878)]

![teaser](assets/teaser.jpg)
## TODO List
- [x] ~~Release Texture Highlighting Map and Segmentation Map~~
- [x] ~~Release Data Preparation Code~~
- [x] ~~Release Inference Code~~
- [x] ~~Release Model Weights~~
## Getting Started
### Installation
1. Clone the repository:
```shell
git clone https://github.com/jiazheng-xing/TryOn-Adapter.git
cd TryOn-Adapter
```
2. Install the Python dependencies:
```shell
conda env create -f environment.yaml
conda activate tryon-adapter
```
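A quick sanity check that the environment was created correctly can be sketched as follows (a minimal sketch: the module names are illustrative assumptions, so consult environment.yaml for the actual dependency list):

```python
import importlib.util

def missing_modules(modules):
    """Return the names from `modules` that cannot be imported."""
    return [m for m in modules if importlib.util.find_spec(m) is None]

# Illustrative names only; check environment.yaml for the real dependencies.
print(missing_modules(["torch", "numpy", "PIL", "omegaconf"]))
```

An empty list means the listed packages are all importable.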
### Data Preparation
#### VITON-HD
1. Download the [VITON-HD](https://github.com/shadow2496/VITON-HD) dataset, which serves as a benchmark.

2. In addition, TryOn-Adapter uses several other preprocessed conditions. Each can be downloaded from the links below; for details and the preparation code, see [data_preparation/README.md](data_preparation/README.md).

|Content|Google|Baidu|
|---|---|---|
|Segmentation Map|[link](https://drive.google.com/file/d/18KvGWR-3siJ_mt7g4CcEVFi_51E7ZifA/view?usp=sharing)|[link](https://pan.baidu.com/s/1zm3XV34tcrXpYt6uAN4R9Q?pwd=ekyn)|
|Highlighting Texture Map|[link](https://drive.google.com/file/d/111KBYA8-d9xl9a2aS9yUaTp0edflb7qT/view?usp=sharing)|[link](https://pan.baidu.com/s/1xWnvF7TeKB_2AzlCEbPsAQ?pwd=jnlz)|

3. Generate the warped cloth and warped mask with [GP-VTON](https://github.com/xiezhy6/GP-VTON.git).

Once everything is set up, the folders should be organized like this:
```
├── VITON-HD
│ ├── test_pairs.txt
│ ├── train_pairs.txt
│ ├── [train | test]
│ │ ├── image
│ │ │ ├── [000006_00.jpg | 000008_00.jpg | ...]
│ │ ├── cloth
│ │ │ ├── [000006_00.jpg | 000008_00.jpg | ...]
│ │ ├── cloth-mask
│ │ │ ├── [000006_00.jpg | 000008_00.jpg | ...]
│ │ ├── image-parse-v3
│ │ │ ├── [000006_00.png | 000008_00.png | ...]
│ │ ├── openpose_img
│ │ │ ├── [000006_00_rendered.png | 000008_00_rendered.png | ...]
│ │ ├── openpose_json
│ │ │ ├── [000006_00_keypoints.json | 000008_00_keypoints.json | ...]
│ │ ├── [train_paired | test_paired | test_unpaired]
│ │ │ ├── mask [000006_00.png | 000008_00.png | ...]
│ │ │ ├── seg_preds [000006_00.png | 000008_00.png | ...]
│ │ │ ├── warped [000006_00.png | 000008_00.png | ...]
```
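To catch missing pieces before inference, the expected layout above can be verified with a short script (a sketch under the assumption that the tree shown is complete; the folder names are taken directly from the listing):

```python
from pathlib import Path

# Per-split subfolders expected under VITON-HD/<split>, per the tree above.
EXPECTED = ["image", "cloth", "cloth-mask", "image-parse-v3",
            "openpose_img", "openpose_json"]

def missing_subdirs(root, split):
    """Return expected subfolders that are absent under <root>/<split>."""
    base = Path(root) / split
    return [d for d in EXPECTED if not (base / d).is_dir()]
```

For a complete dataset, `missing_subdirs("VITON-HD", "train")` should return an empty list.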
#### DressCode
1. Download the [DressCode](https://github.com/aimagelab/dress-code) dataset, which serves as a benchmark.

2. In addition, TryOn-Adapter uses several other preprocessed conditions. For details and the preparation code, see [data_preparation/README.md](data_preparation/README.md).

3. Generate the warped cloth and warped mask with [GP-VTON](https://github.com/xiezhy6/GP-VTON.git).

Once everything is set up, the folders should be organized like this:
```
├── DressCode
│ ├── test_pairs_paired.txt
│ ├── test_pairs_unpaired.txt
│ ├── train_pairs.txt
│ ├── [test_paired | test_unpaired | train_paired]
│ │ ├── [dresses | lower_body | upper_body]
│ │ │ ├── mask [013563_1.png | 013564_1.png | ...]
│ │ │ ├── seg_preds [013563_1.png | 013564_1.png | ...]
│ │ │ ├── warped [013563_1.png | 013564_1.png | ...]
│ ├── [dresses | lower_body | upper_body]
│ │ ├── test_pairs_paired.txt
│ │ ├── test_pairs_unpaired.txt
│ │ ├── train_pairs.txt
│ │ ├── images
│ │ │ ├── [013563_0.jpg | 013563_1.jpg | 013564_0.jpg | 013564_1.jpg | ...]
│ │ ├── masks
│ │ │ ├── [013563_1.png | 013564_1.png | ...]
│ │ ├── keypoints
│ │ │ ├── [013563_2.json | 013564_2.json | ...]
│ │ ├── label_maps
│ │ │ ├── [013563_4.png | 013564_4.png | ...]
│ │ ├── skeletons
│ │ │ ├── [013563_5.jpg | 013564_5.jpg | ...]
│ │ ├── dense
│ │ │ ├── [013563_5.png | 013563_5_uv.npz | 013564_5.png | 013564_5_uv.npz | ...]
```
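The pairs files at the dataset root drive evaluation; a minimal reader can be sketched as follows (assuming each non-empty line holds a person image name and a garment image name separated by whitespace; verify this against your downloaded files):

```python
from pathlib import Path

def read_pairs(pairs_file):
    """Parse a pairs file where each line is '<person_image> <garment_image>'."""
    pairs = []
    for line in Path(pairs_file).read_text().splitlines():
        parts = line.split()
        if len(parts) == 2:
            pairs.append((parts[0], parts[1]))
    return pairs
```

For example, `read_pairs("DressCode/test_pairs_paired.txt")` yields a list of (person, garment) tuples.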
### Inference
Please download the pretrained model from [HuggingFace](https://huggingface.co/Ockham98/TryOn-Adapter).
To perform inference on the VITON-HD or DressCode dataset, use the following command:
```shell
python [test_viton.py | test_dresscode.py] --plms --gpu_id 0 \
--ddim_steps 100 \
--outdir <path> \
--config [configs/viton.yaml | configs/dresscode.yaml] \
--dataroot <path> \
--ckpt <path> \
--ckpt_elbm_path <path> \
--use_T_repaint [True | False] \
--n_samples 1 \
--seed 23 \
--scale 1 \
--H 512 \
--W 512 \
--unpaired
```

```shell
--ddim_steps <int> number of sampling steps
--outdir <str> output directory path
--config <str> config path for VITON-HD/DressCode
--ckpt <str> diffusion model checkpoint path
--ckpt_elbm_path <str> ELBM module checkpoint directory path
--use_T_repaint <bool> whether to use the T-Repaint technique
--n_samples <int> number of samples per inference
--unpaired whether to use the unpaired setting
```
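When sweeping settings, it can be convenient to assemble the command above programmatically. A sketch follows; the flag names mirror the listing above, but `build_cmd` itself is a hypothetical helper, not part of the repository:

```python
import shlex

def build_cmd(script, flags):
    """Assemble an inference command line; a value of True adds a bare flag."""
    parts = ["python", script, "--plms"]
    for name, value in flags.items():
        if value is True:
            parts.append("--" + name)
        elif value is not None and value is not False:
            parts += ["--" + name, str(value)]
    return " ".join(shlex.quote(p) for p in parts)
```

For example, `build_cmd("test_viton.py", {"ddim_steps": 100, "unpaired": True})` produces a shell-safe command string.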

Or simply run:
```shell
bash test_viton.sh
bash test_dresscode.sh
```

## Acknowledgements
Our code borrows heavily from [Paint-by-Example](https://github.com/Fantasy-Studio/Paint-by-Example). We also thank [GP-VTON](https://github.com/xiezhy6/GP-VTON.git), from which our warped garments are generated.
145
+
146
+ ## Citation
147
+ ```
148
+ @article{xing2025tryon,
149
+ title={TryOn-Adapter: Efficient Fine-Grained Clothing Identity Adaptation for High-Fidelity Virtual Try-On},
150
+ author={Xing, Jiazheng and Xu, Chao and Qian, Yijie and Liu, Yang and Dai, Guang and Sun, Baigui and Liu, Yong and Wang, Jingdong},
151
+ journal={International Journal of Computer Vision},
152
+ pages={1--22},
153
+ year={2025},
154
+ publisher={Springer}
155
+ }
156
+ ```