Moyao001 committed
Commit 6e316f5 · verified · Parent(s): 0285d87

Add files using upload-large-folder tool

This view is limited to 50 files because the commit contains too many changes.

Files changed (50):
  1. CCEdit-main/.gitignore +8 -0
  2. CCEdit-main/LICENSE +74 -0
  3. CCEdit-main/README.md +161 -0
  4. CCEdit-main/config_pnp.yaml +19 -0
  5. CCEdit-main/config_pnp_auto.yaml +12 -0
  6. CCEdit-main/main.py +1060 -0
  7. CCEdit-main/models/.gitattributes +35 -0
  8. CCEdit-main/requirements.txt +51 -0
  9. CCEdit-main/setup.py +13 -0
  10. CCEdit-main/sgm.egg-info/PKG-INFO +175 -0
  11. CCEdit-main/sgm.egg-info/SOURCES.txt +59 -0
  12. CCEdit-main/sgm.egg-info/dependency_links.txt +1 -0
  13. CCEdit-main/sgm.egg-info/top_level.txt +2 -0
  14. CCEdit-main/src/controlnet11/.gitignore +140 -0
  15. CCEdit-main/src/controlnet11/config.py +1 -0
  16. CCEdit-main/src/controlnet11/environment.yaml +38 -0
  17. CCEdit-main/src/controlnet11/gradio_canny.py +115 -0
  18. CCEdit-main/src/controlnet11/gradio_depth.py +117 -0
  19. CCEdit-main/src/controlnet11/gradio_lineart_anime.py +116 -0
  20. CCEdit-main/src/controlnet11/gradio_normalbae.py +113 -0
  21. CCEdit-main/src/controlnet11/gradio_openpose.py +113 -0
  22. CCEdit-main/src/controlnet11/gradio_scribble.py +123 -0
  23. CCEdit-main/src/controlnet11/gradio_scribble_interactive.py +106 -0
  24. CCEdit-main/src/controlnet11/gradio_softedge.py +119 -0
  25. CCEdit-main/src/controlnet11/gradio_tile.py +109 -0
  26. CCEdit-main/src/controlnet11/share.py +8 -0
  27. FateZero-main/CLIP/.gitignore +10 -0
  28. FateZero-main/CLIP/LICENSE +22 -0
  29. FateZero-main/CLIP/MANIFEST.in +1 -0
  30. FateZero-main/CLIP/bench_clean_prompt.yaml +52 -0
  31. FateZero-main/CLIP/clip/bpe_simple_vocab_16e6.txt.gz +3 -0
  32. FateZero-main/CLIP/hubconf.py +42 -0
  33. FateZero-main/CLIP/probs.py +18 -0
  34. FateZero-main/CLIP/requirements.txt +5 -0
  35. FateZero-main/CLIP/setup.py +21 -0
  36. FateZero-main/ckpt/download.sh +8 -0
  37. FateZero-main/colab_fatezero.ipynb +0 -0
  38. FateZero-main/data/attribute/bear_tiger_lion_leopard.mp4 +3 -0
  39. FateZero-main/data/attribute/bus_gpu.mp4 +3 -0
  40. FateZero-main/data/attribute/bus_gpu/00000.png +3 -0
  41. FateZero-main/data/attribute/bus_gpu/00002.png +3 -0
  42. FateZero-main/data/attribute/bus_gpu/00004.png +3 -0
  43. FateZero-main/data/attribute/bus_gpu/00006.png +3 -0
  44. FateZero-main/data/attribute/bus_gpu/00007.png +3 -0
  45. FateZero-main/data/attribute/cat_tiger_leopard_grass.mp4 +3 -0
  46. FateZero-main/data/attribute/duck_rubber.mp4 +3 -0
  47. FateZero-main/data/attribute/duck_rubber/00000.png +3 -0
  48. FateZero-main/data/attribute/duck_rubber/00001.png +3 -0
  49. FateZero-main/data/attribute/duck_rubber/00002.png +3 -0
  50. FateZero-main/data/attribute/duck_rubber/00003.png +3 -0
CCEdit-main/.gitignore ADDED
@@ -0,0 +1,8 @@
+ src
+ *.pyc
+ *.npz
+ *.ckpt
+ outputs
+ sgm.egg-info/
+ latents_forward
+ PNP-results
CCEdit-main/LICENSE ADDED
@@ -0,0 +1,74 @@
+ Copyright (c) Stability AI Ltd.
+ This License Agreement (as may be amended in accordance with this License Agreement, “License”), between you, or your employer or other entity (if you are entering into this agreement on behalf of your employer or other entity) (“Licensee” or “you”) and Stability AI Ltd. (“Stability AI” or “we”) applies to your use of any computer program, algorithm, source code, object code, or software that is made available by Stability AI under this License (“Software”) and any specifications, manuals, documentation, and other written information provided by Stability AI related to the Software (“Documentation”).
+ By clicking “I Accept” below or by using the Software, you agree to the terms of this License. If you do not agree to this License, then you do not have any rights to use the Software or Documentation (collectively, the “Software Products”), and you must immediately cease using the Software Products. If you are agreeing to be bound by the terms of this License on behalf of your employer or other entity, you represent and warrant to Stability AI that you have full legal authority to bind your employer or such entity to this License. If you do not have the requisite authority, you may not accept the License or access the Software Products on behalf of your employer or other entity.
+ 1. LICENSE GRANT
+
+ a. Subject to your compliance with the Documentation and Sections 2, 3, and 5, Stability AI grants you a non-exclusive, worldwide, non-transferable, non-sublicensable, revocable, royalty free and limited license under Stability AI’s copyright interests to reproduce, distribute, and create derivative works of the Software solely for your non-commercial research purposes. The foregoing license is personal to you, and you may not assign or sublicense this License or any other rights or obligations under this License without Stability AI’s prior written consent; any such assignment or sublicense will be void and will automatically and immediately terminate this License.
+
+ b. You may make a reasonable number of copies of the Documentation solely for use in connection with the license to the Software granted above.
+
+ c. The grant of rights expressly set forth in this Section 1 (License Grant) are the complete grant of rights to you in the Software Products, and no other licenses are granted, whether by waiver, estoppel, implication, equity or otherwise. Stability AI and its licensors reserve all rights not expressly granted by this License.
+
+
+ 2. RESTRICTIONS
+
+ You will not, and will not permit, assist or cause any third party to:
+
+ a. use, modify, copy, reproduce, create derivative works of, or distribute the Software Products (or any derivative works thereof, works incorporating the Software Products, or any data produced by the Software), in whole or in part, for (i) any commercial or production purposes, (ii) military purposes or in the service of nuclear technology, (iii) purposes of surveillance, including any research or development relating to surveillance, (iv) biometric processing, (v) in any manner that infringes, misappropriates, or otherwise violates any third-party rights, or (vi) in any manner that violates any applicable law and violating any privacy or security laws, rules, regulations, directives, or governmental requirements (including the General Data Privacy Regulation (Regulation (EU) 2016/679), the California Consumer Privacy Act, and any and all laws governing the processing of biometric information), as well as all amendments and successor laws to any of the foregoing;
+
+ b. alter or remove copyright and other proprietary notices which appear on or in the Software Products;
+
+ c. utilize any equipment, device, software, or other means to circumvent or remove any security or protection used by Stability AI in connection with the Software, or to circumvent or remove any usage restrictions, or to enable functionality disabled by Stability AI; or
+
+ d. offer or impose any terms on the Software Products that alter, restrict, or are inconsistent with the terms of this License.
+
+ e. 1) violate any applicable U.S. and non-U.S. export control and trade sanctions laws (“Export Laws”); 2) directly or indirectly export, re-export, provide, or otherwise transfer Software Products: (a) to any individual, entity, or country prohibited by Export Laws; (b) to anyone on U.S. or non-U.S. government restricted parties lists; or (c) for any purpose prohibited by Export Laws, including nuclear, chemical or biological weapons, or missile technology applications; 3) use or download Software Products if you or they are: (a) located in a comprehensively sanctioned jurisdiction, (b) currently listed on any U.S. or non-U.S. restricted parties list, or (c) for any purpose prohibited by Export Laws; and (4) will not disguise your location through IP proxying or other methods.
+
+
+ 3. ATTRIBUTION
+
+ Together with any copies of the Software Products (as well as derivative works thereof or works incorporating the Software Products) that you distribute, you must provide (i) a copy of this License, and (ii) the following attribution notice: “SDXL 0.9 is licensed under the SDXL Research License, Copyright (c) Stability AI Ltd. All Rights Reserved.”
+
+
+ 4. DISCLAIMERS
+
+ THE SOFTWARE PRODUCTS ARE PROVIDED “AS IS” AND “WITH ALL FAULTS” WITH NO WARRANTY OF ANY KIND, EXPRESS OR IMPLIED. STABILITY AI EXPRESSLY DISCLAIMS ALL REPRESENTATIONS AND WARRANTIES, EXPRESS OR IMPLIED, WHETHER BY STATUTE, CUSTOM, USAGE OR OTHERWISE AS TO ANY MATTERS RELATED TO THE SOFTWARE PRODUCTS, INCLUDING BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE, TITLE, SATISFACTORY QUALITY, OR NON-INFRINGEMENT. STABILITY AI MAKES NO WARRANTIES OR REPRESENTATIONS THAT THE SOFTWARE PRODUCTS WILL BE ERROR FREE OR FREE OF VIRUSES OR OTHER HARMFUL COMPONENTS, OR PRODUCE ANY PARTICULAR RESULTS.
+
+
+ 5. LIMITATION OF LIABILITY
+
+ TO THE FULLEST EXTENT PERMITTED BY LAW, IN NO EVENT WILL STABILITY AI BE LIABLE TO YOU (A) UNDER ANY THEORY OF LIABILITY, WHETHER BASED IN CONTRACT, TORT, NEGLIGENCE, STRICT LIABILITY, WARRANTY, OR OTHERWISE UNDER THIS LICENSE, OR (B) FOR ANY INDIRECT, CONSEQUENTIAL, EXEMPLARY, INCIDENTAL, PUNITIVE OR SPECIAL DAMAGES OR LOST PROFITS, EVEN IF STABILITY AI HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. THE SOFTWARE PRODUCTS, THEIR CONSTITUENT COMPONENTS, AND ANY OUTPUT (COLLECTIVELY, “SOFTWARE MATERIALS”) ARE NOT DESIGNED OR INTENDED FOR USE IN ANY APPLICATION OR SITUATION WHERE FAILURE OR FAULT OF THE SOFTWARE MATERIALS COULD REASONABLY BE ANTICIPATED TO LEAD TO SERIOUS INJURY OF ANY PERSON, INCLUDING POTENTIAL DISCRIMINATION OR VIOLATION OF AN INDIVIDUAL’S PRIVACY RIGHTS, OR TO SEVERE PHYSICAL, PROPERTY, OR ENVIRONMENTAL DAMAGE (EACH, A “HIGH-RISK USE”). IF YOU ELECT TO USE ANY OF THE SOFTWARE MATERIALS FOR A HIGH-RISK USE, YOU DO SO AT YOUR OWN RISK. YOU AGREE TO DESIGN AND IMPLEMENT APPROPRIATE DECISION-MAKING AND RISK-MITIGATION PROCEDURES AND POLICIES IN CONNECTION WITH A HIGH-RISK USE SUCH THAT EVEN IF THERE IS A FAILURE OR FAULT IN ANY OF THE SOFTWARE MATERIALS, THE SAFETY OF PERSONS OR PROPERTY AFFECTED BY THE ACTIVITY STAYS AT A LEVEL THAT IS REASONABLE, APPROPRIATE, AND LAWFUL FOR THE FIELD OF THE HIGH-RISK USE.
+
+
+ 6. INDEMNIFICATION
+
+ You will indemnify, defend and hold harmless Stability AI and our subsidiaries and affiliates, and each of our respective shareholders, directors, officers, employees, agents, successors, and assigns (collectively, the “Stability AI Parties”) from and against any losses, liabilities, damages, fines, penalties, and expenses (including reasonable attorneys’ fees) incurred by any Stability AI Party in connection with any claim, demand, allegation, lawsuit, proceeding, or investigation (collectively, “Claims”) arising out of or related to: (a) your access to or use of the Software Products (as well as any results or data generated from such access or use), including any High-Risk Use (defined below); (b) your violation of this License; or (c) your violation, misappropriation or infringement of any rights of another (including intellectual property or other proprietary rights and privacy rights). You will promptly notify the Stability AI Parties of any such Claims, and cooperate with Stability AI Parties in defending such Claims. You will also grant the Stability AI Parties sole control of the defense or settlement, at Stability AI’s sole option, of any Claims. This indemnity is in addition to, and not in lieu of, any other indemnities or remedies set forth in a written agreement between you and Stability AI or the other Stability AI Parties.
+
+
+ 7. TERMINATION; SURVIVAL
+
+ a. This License will automatically terminate upon any breach by you of the terms of this License.
+
+ b. We may terminate this License, in whole or in part, at any time upon notice (including electronic) to you.
+
+ c. The following sections survive termination of this License: 2 (Restrictions), 3 (Attribution), 4 (Disclaimers), 5 (Limitation on Liability), 6 (Indemnification), 7 (Termination; Survival), 8 (Third Party Materials), 9 (Trademarks), 10 (Applicable Law; Dispute Resolution), and 11 (Miscellaneous).
+
+
+ 8. THIRD PARTY MATERIALS
+
+ The Software Products may contain third-party software or other components (including free and open source software) (all of the foregoing, “Third Party Materials”), which are subject to the license terms of the respective third-party licensors. Your dealings or correspondence with third parties and your use of or interaction with any Third Party Materials are solely between you and the third party. Stability AI does not control or endorse, and makes no representations or warranties regarding, any Third Party Materials, and your access to and use of such Third Party Materials are at your own risk.
+
+
+ 9. TRADEMARKS
+
+ Licensee has not been granted any trademark license as part of this License and may not use any name or mark associated with Stability AI without the prior written permission of Stability AI, except to the extent necessary to make the reference required by the “ATTRIBUTION” section of this Agreement.
+
+
+ 10. APPLICABLE LAW; DISPUTE RESOLUTION
+
+ This License will be governed and construed under the laws of the State of California without regard to conflicts of law provisions. Any suit or proceeding arising out of or relating to this License will be brought in the federal or state courts, as applicable, in San Mateo County, California, and each party irrevocably submits to the jurisdiction and venue of such courts.
+
+
+ 11. MISCELLANEOUS
+
+ If any provision or part of a provision of this License is unlawful, void or unenforceable, that provision or part of the provision is deemed severed from this License, and will not affect the validity and enforceability of any remaining provisions. The failure of Stability AI to exercise or enforce any right or provision of this License will not operate as a waiver of such right or provision. This License does not confer any third-party beneficiary rights upon any other person or entity. This License, together with the Documentation, contains the entire understanding between you and Stability AI regarding the subject matter of this License, and supersedes all other written or oral agreements and understandings between you and Stability AI regarding such subject matter. No change or addition to any provision of this License will be binding unless it is in writing and signed by an authorized representative of both you and Stability AI.
CCEdit-main/README.md ADDED
@@ -0,0 +1,161 @@
+ ### <div align="center"> CCEdit: Creative and Controllable Video Editing via Diffusion Models </div>
+ ### <div align="center"> CVPR 2024 </div>
+
+
+ <div align="center">
+ Ruoyu Feng,
+ Wenming Weng,
+ Yanhui Wang,
+ Yuhui Yuan,
+ Jianmin Bao,
+ Chong Luo,
+ Zhibo Chen,
+ Baining Guo
+ </div>
+
+ <br>
+
+ <div align="center">
+ <a href="https://ruoyufeng.github.io/CCEdit.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Github&color=blue&logo=github-pages"></a> &ensp;
+ <a href="https://huggingface.co/datasets/RuoyuFeng/BalanceCC"><img src="https://img.shields.io/static/v1?label=BalanceCC BenchMark&message=HF&color=yellow"></a> &ensp;
+ <a href="https://arxiv.org/pdf/2309.16496.pdf"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv:CCEdit&color=red&logo=arxiv"></a> &ensp;
+ </div>
+
+ <table class="center">
+ <tr>
+ <td><img src="assets/makeup.gif"></td>
+ <td><img src="assets/makeup1-magicReal.gif"></td>
+ </tr>
+ </table>
+
+ ## 🔥 Update
+ - 🔥 Mar. 27, 2024. [BalanceCC Benchmark](https://huggingface.co/datasets/RuoyuFeng/BalanceCC) is released! The BalanceCC benchmark contains 100 videos with varied attributes, designed to offer a comprehensive platform for evaluating generative video editing, focusing on both controllability and creativity.
+
+ ## Installation
+ ```bash
+ # env
+ conda create -n ccedit python=3.9.17
+ conda activate ccedit
+ pip install -r requirements.txt
+ # pip install -r requirements_pt2.txt
+ # pip install torch==2.0.1 torchaudio==2.0.2 torchdata==0.6.1 torchmetrics==1.0.0 torchvision==0.15.2
+ pip install basicsr==1.4.2 wandb loralib av decord timm==0.6.7
+ pip install moviepy imageio==2.6.0 scikit-image==0.20.0 scipy==1.9.1 diffusers==0.17.1 transformers==4.27.3
+ pip install accelerate==0.20.3 ujson
+
+ git clone https://github.com/lllyasviel/ControlNet-v1-1-nightly src/controlnet11
+ git clone https://github.com/MichalGeyer/pnp-diffusers src/pnp-diffusers
+ ```
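After installing, a quick import check confirms that the pinned packages resolved correctly. This is a minimal sanity-check sketch, not a script shipped with the repository:

```python
# Sanity check for the CCEdit environment (hypothetical helper, not part of the repo).
import torch
import pytorch_lightning
import diffusers
import transformers

print("torch:", torch.__version__)                   # 2.0.1 if the pinned install was used
print("CUDA available:", torch.cuda.is_available())  # should be True on a GPU machine
```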
+
+ ## Download models
+ Download models from https://huggingface.co/RuoyuFeng/CCEdit and put them in `./models`.
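If you prefer to script the download, the checkpoints can also be fetched from the Hub programmatically. A minimal sketch, assuming the `huggingface_hub` package is available in the environment:

```python
# Download the CCEdit checkpoints from https://huggingface.co/RuoyuFeng/CCEdit into ./models.
from huggingface_hub import snapshot_download

snapshot_download(repo_id="RuoyuFeng/CCEdit", local_dir="models")
```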
+
+ <!-- ## Inference and training examples -->
+ ## Inference
+ ### Text-Video-to-Video
+ ```bash
+ python scripts/sampling/sampling_tv2v.py --config_path configs/inference_ccedit/keyframe_no2ndca_depthmidas.yaml --ckpt_path models/tv2v-no2ndca-depthmidas.ckpt --H 512 --W 768 --original_fps 18 --target_fps 6 --num_keyframes 17 --batch_size 1 --num_samples 2 --sample_steps 30 --sampler_name DPMPP2SAncestralSampler --cfg_scale 7.5 --prompt 'a bear is walking.' --video_path assets/Samples/davis/bear --add_prompt 'Van Gogh style' --save_path outputs/tv2v/bear-VanGogh --disable_check_repeat
+ ```
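The `--original_fps`, `--target_fps`, and `--num_keyframes` flags together fix the frame budget. Assuming keyframes are drawn by uniform temporal subsampling (our reading of these flags, not something the script documents here), the arithmetic for the command above is:

```python
# Frame-budget arithmetic for the sampling command (assumes uniform subsampling).
original_fps, target_fps, num_keyframes = 18, 6, 17

stride = original_fps // target_fps              # keep every 3rd source frame
source_frames = (num_keyframes - 1) * stride + 1
print(stride, source_frames, round(source_frames / original_fps, 2))
# 3 49 2.72  -> 17 keyframes span roughly 2.7 s of the source clip
```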
+
+ ### Text-Video-Image-to-Video
+ Specify the edited center frame.
+ ```bash
+ python scripts/sampling/sampling_tv2v_ref.py \
+     --seed 201574 \
+     --config_path configs/inference_ccedit/keyframe_ref_cp_no2ndca_add_cfca_depthzoe.yaml \
+     --ckpt_path models/tvi2v-no2ndca-depthmidas.ckpt \
+     --H 512 --W 768 --original_fps 18 --target_fps 6 --num_keyframes 17 --batch_size 1 --num_samples 2 \
+     --sample_steps 50 --sampler_name DPMPP2SAncestralSampler --cfg_scale 7 \
+     --prompt 'A person walks on the grass, the Milky Way is in the sky, night' \
+     --add_prompt 'masterpiece, best quality,' \
+     --video_path assets/Samples/tshirtman.mp4 \
+     --reference_path assets/Samples/tshirtman-milkyway.png \
+     --save_path outputs/tvi2v/tshirtman-MilkyWay \
+     --disable_check_repeat \
+     --prior_coefficient_x 0.03 \
+     --prior_type ref
+ ```
+
+ Automatically edit the center frame via [pnp-diffusers](https://github.com/MichalGeyer/pnp-diffusers).
+ Note that the performance of this pipeline heavily depends on the quality of the automatic editing result, so try to use more powerful automatic editing methods for the center frame, or combine CCEdit with other powerful AI editing tools such as Stable Diffusion WebUI, ComfyUI, etc.
+ ```bash
+ # python preprocess.py --data_path <path_to_guidance_image> --inversion_prompt <inversion_prompt>
+ python src/pnp-diffusers/preprocess.py --data_path assets/Samples/tshirtman-milkyway.png --inversion_prompt 'a man walks in the filed'
+ # modify the config file (config_pnp.yaml) to use the processed image
+ # python pnp.py --config_path <pnp_config_path>
+ python src/pnp-diffusers/pnp.py --config_path config_pnp.yaml
+ python scripts/sampling/sampling_tv2v_ref.py \
+     --seed 201574 \
+     --config_path configs/inference_ccedit/keyframe_ref_cp_no2ndca_add_cfca_depthzoe.yaml \
+     --ckpt_path models/tvi2v-no2ndca-depthmidas.ckpt \
+     --H 512 --W 768 --original_fps 18 --target_fps 6 --num_keyframes 17 --batch_size 1 --num_samples 2 \
+     --sample_steps 50 --sampler_name DPMPP2SAncestralSampler --cfg_scale 7 \
+     --prompt 'A person walks on the grass, the Milky Way is in the sky, night' \
+     --add_prompt 'masterpiece, best quality,' \
+     --video_path assets/Samples/tshirtman.mp4 \
+     --reference_path "PNP-results/tshirtman-milkyway/output-a man walks in the filed, milky way.png" \
+     --save_path outputs/tvi2v/tshirtman-MilkyWay \
+     --disable_check_repeat \
+     --prior_coefficient_x 0.03 \
+     --prior_type ref
+ ```
+
+ You can use the following pipeline to automatically extract the center frame, edit it via pnp-diffusers, and then perform video editing via TVI2V.
+ ```bash
+ python scripts/sampling/pnp_generate_config.py \
+     --p_config config_pnp_auto.yaml \
+     --output_path "outputs/automatic_ref_editing/image" \
+     --image_path "outputs/centerframe/tshirtman.png" \
+     --latents_path "latents_forward" \
+     --prompt "a man walks on the beach"
+ python scripts/tools/extract_centerframe.py \
+     --p_video assets/Samples/tshirtman.mp4 \
+     --p_save outputs/centerframe/tshirtman.png \
+     --orifps 18 \
+     --targetfps 6 \
+     --n_keyframes 17 \
+     --length_long 512 \
+     --length_short 512
+ python src/pnp-diffusers/preprocess.py --data_path outputs/centerframe/tshirtman.png --inversion_prompt 'a man walks in the filed'
+ python src/pnp-diffusers/pnp.py --config_path config_pnp_auto.yaml
+ python scripts/sampling/sampling_tv2v_ref.py \
+     --seed 201574 \
+     --config_path configs/inference_ccedit/keyframe_ref_cp_no2ndca_add_cfca_depthzoe.yaml \
+     --ckpt_path models/tvi2v-no2ndca-depthmidas.ckpt \
+     --H 512 --W 768 --original_fps 18 --target_fps 6 --num_keyframes 17 --batch_size 1 --num_samples 2 \
+     --sample_steps 50 --sampler_name DPMPP2SAncestralSampler --cfg_scale 7 \
+     --prompt 'A man walks on the beach' \
+     --add_prompt 'masterpiece, best quality,' \
+     --video_path assets/Samples/tshirtman.mp4 \
+     --reference_path "outputs/automatic_ref_editing/image/output-a man walks on the beach.png" \
+     --save_path outputs/tvi2v/tshirtman-Beach \
+     --disable_check_repeat \
+     --prior_coefficient_x 0.03 \
+     --prior_type ref
+ ```
+
+ ## Train example
+ ```bash
+ python main.py -b configs/example_training/sd_1_5_controlldm-test-ruoyu-tv2v-depthmidas.yaml --wandb False
+ ```
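As in `main.py` (included in this commit), any extra `nested.key=value` arguments are merged into the YAML config through OmegaConf, so individual fields can be overridden from the command line. A minimal sketch of that merge mechanism:

```python
# Simplified version of the config merging in main.py:
#   configs = [OmegaConf.load(cfg) for cfg in opt.base]
#   cli = OmegaConf.from_dotlist(unknown)
#   config = OmegaConf.merge(*configs, cli)
from omegaconf import OmegaConf

base = OmegaConf.create({"data": {"params": {"batch_size": 1}}})
cli = OmegaConf.from_dotlist(["data.params.batch_size=4"])

merged = OmegaConf.merge(base, cli)
print(merged.data.params.batch_size)  # 4: the command-line value wins
```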
+
+ ## BibTeX
+ If you find this work useful for your research, please cite us:
+
+ ```bibtex
+ @article{feng2023ccedit,
+     title={CCEdit: Creative and Controllable Video Editing via Diffusion Models},
+     author={Feng, Ruoyu and Weng, Wenming and Wang, Yanhui and Yuan, Yuhui and Bao, Jianmin and Luo, Chong and Chen, Zhibo and Guo, Baining},
+     journal={arXiv preprint arXiv:2309.16496},
+     year={2023}
+ }
+ ```
+
+ ## Contact Us
+ **Ruoyu Feng**: [ustcfry@mail.ustc.edu.cn](mailto:ustcfry@mail.ustc.edu.cn)
+
+
+ ## Acknowledgements
+ The source videos in this repository come from our own collections and downloads from Pexels. If anyone feels that a particular piece of content is used inappropriately, please feel free to contact me, and I will remove it immediately.
+
+ Thanks to the model contributors of [CivitAI](https://civitai.com/) and [RunwayML](https://runwayml.com/).
CCEdit-main/config_pnp.yaml ADDED
@@ -0,0 +1,19 @@
+ # general
+ seed: 1
+ device: 'cuda'
+ output_path: 'PNP-results/tshirtman-milkyway'
+
+ # data
+ image_path: 'assets/Samples/tshirtman-milkyway.png'
+ latents_path: 'latents_forward'
+
+ # diffusion
+ sd_version: '2.1'
+ guidance_scale: 7.5
+ n_timesteps: 50
+ prompt: a man walks in the filed, milky way
+ negative_prompt: ugly
+
+ # pnp injection thresholds, ∈ [0, 1]
+ pnp_attn_t: 0.5
+ pnp_f_t: 0.8
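pnp-diffusers reads this file as plain YAML. A minimal sketch of loading and checking it with OmegaConf (which this repo already uses in `main.py`; pnp-diffusers itself may use a different loader):

```python
# Load the PnP config and sanity-check the injection thresholds, which must lie in [0, 1].
from omegaconf import OmegaConf

cfg = OmegaConf.load("config_pnp.yaml")
assert 0.0 <= cfg.pnp_attn_t <= 1.0 and 0.0 <= cfg.pnp_f_t <= 1.0
print(cfg.prompt, cfg.guidance_scale, cfg.n_timesteps)
```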
CCEdit-main/config_pnp_auto.yaml ADDED
@@ -0,0 +1,12 @@
+ seed: 1
+ device: cuda
+ output_path: outputs/automatic_ref_editing/image
+ image_path: outputs/centerframe/tshirtman.png
+ latents_path: latents_forward
+ sd_version: '2.1'
+ guidance_scale: 7.5
+ n_timesteps: 50
+ prompt: a man walks on the beach
+ negative_prompt: ugly, blurry, black, low res, unrealistic
+ pnp_attn_t: 0.5
+ pnp_f_t: 0.8
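In the automatic pipeline, `scripts/sampling/pnp_generate_config.py` rewrites this file before `pnp.py` consumes it. A hedged sketch of doing the same by hand (field names taken from the file above; the actual script may behave differently):

```python
# Point config_pnp_auto.yaml at a new center frame and prompt, then save it back.
from omegaconf import OmegaConf

cfg = OmegaConf.load("config_pnp_auto.yaml")
cfg.image_path = "outputs/centerframe/tshirtman.png"
cfg.prompt = "a man walks on the beach"
OmegaConf.save(cfg, "config_pnp_auto.yaml")
```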
CCEdit-main/main.py ADDED
@@ -0,0 +1,1060 @@
+ import argparse
+ import datetime
+ import glob
+ import inspect
+ import os
+ import sys
+ from inspect import Parameter
+ from typing import Union
+ import einops
+ import imageio
+ import re
+ import numpy as np
+ import pytorch_lightning as pl
+ import torch
+ import torchvision
+ import wandb
+ from PIL import Image
+ from matplotlib import pyplot as plt
+ from natsort import natsorted
+ from omegaconf import OmegaConf
+ from packaging import version
+ from pytorch_lightning import seed_everything
+ from pytorch_lightning.callbacks import Callback
+ from pytorch_lightning.loggers import WandbLogger
+ from pytorch_lightning.trainer import Trainer
+ from pytorch_lightning.utilities import rank_zero_only
+
+ from sgm.util import (
+     exists,
+     instantiate_from_config,
+     isheatmap,
+ )
+
+ MULTINODE_HACKS = True
+
+
+ def default_trainer_args():
+     argspec = dict(inspect.signature(Trainer.__init__).parameters)
+     argspec.pop("self")
+     default_args = {
+         param: argspec[param].default
+         for param in argspec
+         if argspec[param] != Parameter.empty
+     }
+     return default_args
+
+ def get_step_value(folder_name):
+     match = re.search(r'step=(\d+)', folder_name)
+     if match:
+         return int(match.group(1))
+     return 0  # default when the name contains no "step=" pattern
+
+ def get_parser(**parser_kwargs):
+     def str2bool(v):
+         if isinstance(v, bool):
+             return v
+         if v.lower() in ("yes", "true", "t", "y", "1"):
+             return True
+         elif v.lower() in ("no", "false", "f", "n", "0"):
+             return False
+         else:
+             raise argparse.ArgumentTypeError("Boolean value expected.")
+
+     parser = argparse.ArgumentParser(**parser_kwargs)
+     parser.add_argument(
+         "-n",
+         "--name",
+         type=str,
+         const=True,
+         default="",
+         nargs="?",
+         help="postfix for logdir",
+     )
+     parser.add_argument(
+         "--no_date",
+         type=str2bool,
+         nargs="?",
+         const=True,
+         default=False,
+         help="if True, skip date generation for logdir and only use naming via opt.base or opt.name (+ opt.postfix, optionally)",
+     )
+     parser.add_argument(
+         "-r",
+         "--resume",
+         type=str,
+         const=True,
+         default="",
+         nargs="?",
+         help="resume from logdir or checkpoint in logdir",
+     )
+     parser.add_argument(
+         "-b",
+         "--base",
+         nargs="*",
+         metavar="base_config.yaml",
+         help="paths to base configs. Loaded from left-to-right. "
+         "Parameters can be overwritten or added with command-line options of the form `--key value`.",
+         default=list(),
+     )
+     parser.add_argument(
+         "-t",
+         "--train",
+         type=str2bool,
+         const=True,
+         default=True,
+         nargs="?",
+         help="train",
+     )
+     parser.add_argument(
+         "--no-test",
+         type=str2bool,
+         const=True,
+         default=False,
+         nargs="?",
+         help="disable test",
+     )
+     parser.add_argument(
+         "-p", "--project", help="name of new or path to existing project"
+     )
+     parser.add_argument(
+         "-d",
+         "--debug",
+         type=str2bool,
+         nargs="?",
+         const=True,
+         default=False,
+         help="enable post-mortem debugging",
+     )
+     parser.add_argument(
+         "-s",
+         "--seed",
+         type=int,
+         default=23,
+         help="seed for seed_everything",
+     )
+     parser.add_argument(
+         "-f",
+         "--postfix",
+         type=str,
+         default="",
+         help="post-postfix for default name",
+     )
+     parser.add_argument(
+         "--projectname",
+         type=str,
+         default="video_generative_models",
+     )
+     parser.add_argument(
+         "-l",
+         "--logdir",
+         type=str,
+         default="logs",
+         help="directory for logging",
+     )
+     parser.add_argument(
+         "--scale_lr",
+         type=str2bool,
+         nargs="?",
+         const=True,
+         default=True,
+         help="scale base-lr by ngpu * batch_size * n_accumulate",
+     )
+     parser.add_argument(
+         "--legacy_naming",
+         type=str2bool,
+         nargs="?",
+         const=True,
+         default=False,
+         help="name run based on config file name if true, else by whole path",
+     )
+     parser.add_argument(
+         "--enable_tf32",
+         type=str2bool,
+         nargs="?",
+         const=True,
+         default=True,
+         help="enables the TensorFloat32 format both for matmuls and cuDNN for pytorch 1.12",
+     )
+     parser.add_argument(
+         "--startup",
+         type=str,
+         default=None,
+         help="Startuptime from distributed script",
+     )
+     parser.add_argument(
+         "--wandb",
+         type=str2bool,
+         nargs="?",
+         const=True,
+         default=True,  # TODO: later default to True
+         help="log to wandb",
+     )
+     parser.add_argument(
+         "--wandb-entity",
+         type=str,
+         default="msra_cver",
+         help="Wandb entity name string",
+     )
+     parser.add_argument(
+         "--no_base_name",
+         type=str2bool,
+         nargs="?",
+         const=True,
+         default=False,  # TODO: later default to True
+         help="experiment name shown in wandb",
+     )
+     if version.parse(torch.__version__) >= version.parse("2.0.0"):
+         parser.add_argument(
+             "--resume_from_checkpoint",
+             type=str,
+             default=None,
+             help="single checkpoint file to resume from",
+         )
+     default_args = default_trainer_args()
+     for key in default_args:
+         parser.add_argument("--" + key, default=default_args[key])
+     return parser
+
+
+ def get_checkpoint_name(logdir):
+     ckpt = os.path.join(logdir, "checkpoints", "last**.ckpt")
+     ckpt = natsorted(glob.glob(ckpt))
+     print('available "last" checkpoints:')
+     print(ckpt)
+     if len(ckpt) > 1:
+         print("got most recent checkpoint")
+         ckpt = sorted(ckpt, key=lambda x: os.path.getmtime(x))[-1]
+         print(f"Most recent ckpt is {ckpt}")
+         with open(os.path.join(logdir, "most_recent_ckpt.txt"), "w") as f:
+             f.write(ckpt + "\n")
+         try:
+             version = int(ckpt.split("/")[-1].split("-v")[-1].split(".")[0])
+         except Exception as e:
+             print("version confusion but not bad")
+             print(e)
+             version = 1
+         # version = last_version + 1
+     else:
+         # in this case, we only have one "last.ckpt"
+         ckpt = ckpt[0]
+         version = 1
+     melk_ckpt_name = f"last-v{version}.ckpt"
+     print(f"Current melk ckpt name: {melk_ckpt_name}")
+     return ckpt, melk_ckpt_name
+
+
+ class SetupCallback(Callback):
+     def __init__(
+         self,
+         resume,
+         now,
+         logdir,
+         ckptdir,
+         cfgdir,
+         config,
+         lightning_config,
+         debug,
+         ckpt_name=None,
+     ):
+         super().__init__()
+         self.resume = resume
+         self.now = now
+         self.logdir = logdir
+         self.ckptdir = ckptdir
+         self.cfgdir = cfgdir
+         self.config = config
+         self.lightning_config = lightning_config
+         self.debug = debug
+         self.ckpt_name = ckpt_name
+
+     def on_exception(self, trainer: pl.Trainer, pl_module, exception):
+         if not self.debug and trainer.global_rank == 0:
+             print("Summoning checkpoint.")
+             if self.ckpt_name is None:
+                 ckpt_path = os.path.join(self.ckptdir, "last.ckpt")
+             else:
+                 ckpt_path = os.path.join(self.ckptdir, self.ckpt_name)
+             # trainer.save_checkpoint(ckpt_path)  # TODO: for fast debugging, I comment this line.
+
+     def on_fit_start(self, trainer, pl_module):
+         if trainer.global_rank == 0:
+             # Create logdirs and save configs
+             os.makedirs(self.logdir, exist_ok=True)
+             os.makedirs(self.ckptdir, exist_ok=True)
+             os.makedirs(self.cfgdir, exist_ok=True)
+
+             if "callbacks" in self.lightning_config:
+                 if (
+                     "metrics_over_trainsteps_checkpoint"
+                     in self.lightning_config["callbacks"]
+                 ):
+                     os.makedirs(
+                         os.path.join(self.ckptdir, "trainstep_checkpoints"),
+                         exist_ok=True,
+                     )
+             print("Project config")
+             print(OmegaConf.to_yaml(self.config))
+             if MULTINODE_HACKS:
+                 import time
+
+                 time.sleep(5)
+             OmegaConf.save(
+                 self.config,
+                 os.path.join(self.cfgdir, "{}-project.yaml".format(self.now)),
+             )
+
+             print("Lightning config")
+             print(OmegaConf.to_yaml(self.lightning_config))
+             OmegaConf.save(
+                 OmegaConf.create({"lightning": self.lightning_config}),
+                 os.path.join(self.cfgdir, "{}-lightning.yaml".format(self.now)),
+             )
+
+         else:
+             # ModelCheckpoint callback created log directory --- remove it
+             if not MULTINODE_HACKS and not self.resume and os.path.exists(self.logdir):
+                 dst, name = os.path.split(self.logdir)
+                 dst = os.path.join(dst, "child_runs", name)
+                 os.makedirs(os.path.split(dst)[0], exist_ok=True)
+                 try:
+                     os.rename(self.logdir, dst)
+                 except FileNotFoundError:
+                     pass
+
+
+ class ImageLogger(Callback):
+     def __init__(
+         self,
+         batch_frequency,
+         max_images,
+         clamp=True,
+         increase_log_steps=True,
+         rescale=True,
+         disabled=False,
+         log_on_batch_idx=False,
+         log_first_step=False,
+         log_images_kwargs=None,
+         log_before_first_step=False,
+         enable_autocast=True,
+     ):
+         super().__init__()
+         self.enable_autocast = enable_autocast
+         self.rescale = rescale
+         self.batch_freq = batch_frequency
+         self.max_images = max_images
+         self.log_steps = [2**n for n in range(int(np.log2(self.batch_freq)) + 1)]
+         if not increase_log_steps:
+             self.log_steps = [self.batch_freq]
+         self.clamp = clamp
+         self.disabled = disabled
+         self.log_on_batch_idx = log_on_batch_idx
+         self.log_images_kwargs = log_images_kwargs if log_images_kwargs else {}
+         self.log_first_step = log_first_step
+         self.log_before_first_step = log_before_first_step
+
+     @rank_zero_only
+     def log_local(
+         self,
+         save_dir,
+         split,
+         images,
+         global_step,
+         current_epoch,
+         batch_idx,
+         pl_module: Union[None, pl.LightningModule] = None,
+     ):
+         root = os.path.join(save_dir, "images", split)
+         for k in images:
+             if isheatmap(images[k]):
+                 fig, ax = plt.subplots()
+                 ax = ax.matshow(
+                     images[k].cpu().numpy(), cmap="hot", interpolation="lanczos"
+                 )
+                 plt.colorbar(ax)
+                 plt.axis("off")
+
+                 filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format(
+                     k, global_step, current_epoch, batch_idx
+                 )
+                 os.makedirs(root, exist_ok=True)
+                 path = os.path.join(root, filename)
+                 plt.savefig(path)
+                 plt.close()
+                 # TODO: support wandb
+             elif "video" in k:
+                 fps = self.log_images_kwargs.get("video_fps", 3)
+                 video = images[k]
+                 if self.rescale:
+                     video = (video + 1.0) / 2.0  # -1,1 -> 0,1; c,h,w
+                 frames = [video[:, :, i] for i in range(video.shape[2])]
+                 frames = [torchvision.utils.make_grid(each, nrow=4) for each in frames]
+                 frames = [einops.rearrange(each, "c h w -> 1 c h w") for each in frames]
+                 frames = torch.clamp(torch.cat(frames, dim=0), min=0.0, max=1.0)
+                 frames = (frames.numpy() * 255).astype(np.uint8)
+
+                 filename = "{}_gs-{:06}_e-{:06}_b-{:06}.gif".format(
+                     k, global_step, current_epoch, batch_idx
+                 )
+                 os.makedirs(root, exist_ok=True)
+                 path = os.path.join(root, filename)
+                 save_numpy_as_gif(frames, path, duration=1 / fps)
+
+                 if exists(pl_module):
+                     assert isinstance(
+                         pl_module.logger, WandbLogger
+                     ), "logger_log_image only supports WandbLogger currently"
+                     wandb.log({f"{split}/{k}": wandb.Video(frames, fps=fps)})
+                     # wandb.log({f"{split}/{k}": wandb.Video(frames, fps=fps)}, step=global_step)
+             else:
+                 data_tmp = images[k]
+                 if data_tmp.ndim == 5:
+                     data_tmp = einops.rearrange(data_tmp, "b c t h w -> (b t) c h w")
+                 nrow = self.log_images_kwargs.get("n_rows", 8)
+                 grid = torchvision.utils.make_grid(data_tmp, nrow=nrow)
+                 if self.rescale:
+                     grid = (grid + 1.0) / 2.0  # -1,1 -> 0,1; c,h,w
+                 grid = grid.transpose(0, 1).transpose(1, 2).squeeze(-1)
+                 grid = grid.numpy()
+                 grid = (grid * 255).astype(np.uint8)
+                 filename = "{}_gs-{:06}_e-{:06}_b-{:06}.png".format(
+                     k, global_step, current_epoch, batch_idx
+                 )
+                 path = os.path.join(root, filename)
+                 os.makedirs(os.path.split(path)[0], exist_ok=True)
+                 img = Image.fromarray(grid)
+                 img.save(path)
+                 if exists(pl_module):
+                     assert isinstance(
+                         pl_module.logger, WandbLogger
+                     ), "logger_log_image only supports WandbLogger currently"
+                     pl_module.logger.log_image(
+                         key=f"{split}/{k}",
+                         images=[
+                             img,
+                         ],
+                         step=pl_module.global_step,
+                     )
+
+     @rank_zero_only
+     def log_img(self, pl_module, batch, batch_idx, split="train"):
+         check_idx = batch_idx if self.log_on_batch_idx else pl_module.global_step
+         if (
+             self.check_frequency(check_idx)
+             and hasattr(pl_module, "log_images")  # batch_idx % self.batch_freq == 0
+             and callable(pl_module.log_images)
+             and
+             # batch_idx > 5 and
+             self.max_images > 0
+         ):
+             logger = type(pl_module.logger)
+             is_train = pl_module.training
+             if is_train:
+                 pl_module.eval()
+
+             gpu_autocast_kwargs = {
+                 "enabled": self.enable_autocast,  # torch.is_autocast_enabled(),
+                 "dtype": torch.float32,  # torch.get_autocast_gpu_dtype(),
+                 "cache_enabled": torch.is_autocast_cache_enabled(),
+             }
+             with torch.no_grad(), torch.cuda.amp.autocast(**gpu_autocast_kwargs):
+                 images = pl_module.log_images(
+                     batch, split=split, **self.log_images_kwargs
+                 )
+
+             for k in images:
+                 N = min(images[k].shape[0], self.max_images)
+                 if not isheatmap(images[k]):
+                     images[k] = images[k][:N]
+                 if isinstance(images[k], torch.Tensor):
+                     images[k] = images[k].detach().float().cpu()
+                     if self.clamp and not isheatmap(images[k]):
+                         images[k] = torch.clamp(images[k], -1.0, 1.0)
+
+             self.log_local(
+                 pl_module.logger.save_dir,
+                 split,
+                 images,
+                 pl_module.global_step,
+                 pl_module.current_epoch,
+                 batch_idx,
+                 pl_module=pl_module
+                 if isinstance(pl_module.logger, WandbLogger)
+                 else None,
+             )
+
+             if is_train:
+                 pl_module.train()
+
+     def check_frequency(self, check_idx):
+         if ((check_idx % self.batch_freq) == 0 or (check_idx in self.log_steps)) and (
+             check_idx > 0 or self.log_first_step
+         ):
+             try:
+                 self.log_steps.pop(0)
+             except IndexError as e:
+                 print(e)
+                 pass
+             return True
+         return False
+
+     @rank_zero_only
+     def on_train_batch_end(self, trainer, pl_module, outputs, batch, batch_idx):
+         if not self.disabled and (pl_module.global_step > 0 or self.log_first_step):
+             self.log_img(pl_module, batch, batch_idx, split="train")
+
+     @rank_zero_only
+     def on_train_batch_start(self, trainer, pl_module, batch, batch_idx):
+         if self.log_before_first_step and pl_module.global_step == 0:
+             print(f"{self.__class__.__name__}: logging before training")
+             self.log_img(pl_module, batch, batch_idx, split="train")
+
+     @rank_zero_only
+     def on_validation_batch_end(
+         self, trainer, pl_module, outputs, batch, batch_idx, *args, **kwargs
+     ):
+         if not self.disabled and pl_module.global_step > 0:
+             self.log_img(pl_module, batch, batch_idx, split="val")
+         if hasattr(pl_module, "calibrate_grad_norm"):
+             if (
+                 pl_module.calibrate_grad_norm and batch_idx % 25 == 0
+             ) and batch_idx > 0:
+                 self.log_gradients(trainer, pl_module, batch_idx=batch_idx)
+
+
+ def save_numpy_as_gif(frames, path, duration=None):
+     """
+     Save a numpy array of frames as a GIF file.
+     """
+     image_list = []
+     for frame in frames:
+         image = frame.transpose(1, 2, 0)
+         image_list.append(image)
+     if duration:
+         imageio.mimsave(path, image_list, format="GIF", duration=duration, loop=0)
+         # imageio.mimsave(path, image_list, format="GIF", duration=duration, loop=0, quality=10)
+     else:
+         imageio.mimsave(path, image_list, format="GIF", loop=0)
+         # imageio.mimsave(path, image_list, format="GIF", loop=0, quality=10)
+
+
+ @rank_zero_only
+ def init_wandb(save_dir, opt, config, group_name, name_str, entity_name):
+     print(f"setting WANDB_DIR to {save_dir}")
+     os.makedirs(save_dir, exist_ok=True)
+
+     os.environ["WANDB_DIR"] = save_dir
+     if opt.debug:
+         wandb.init(project=opt.projectname, mode="offline", group=group_name)
+     else:
+         wandb.init(
+             project=opt.projectname,
+             config=None,
+             settings=wandb.Settings(code_dir="./sgm"),
+             group=group_name,
+             name=name_str,
+             entity=entity_name,
+         )
+
+
+ if __name__ == "__main__":
+     # custom parser to specify config files, train, test and debug mode,
+     # postfix, resume.
+     # `--key value` arguments are interpreted as arguments to the trainer.
+     # `nested.key=value` arguments are interpreted as config parameters.
+     # configs are merged from left-to-right followed by command line parameters.
+
+     # model:
+     #   base_learning_rate: float
+     #   target: path to lightning module
+     #   params:
+     #       key: value
+     # data:
+     #   target: main.DataModuleFromConfig
+     #   params:
+     #      batch_size: int
+     #      wrap: bool
+     #      train:
+     #          target: path to train dataset
+     #          params:
+     #              key: value
+     #      validation:
+     #          target: path to validation dataset
+     #          params:
+     #              key: value
+     #      test:
+     #          target: path to test dataset
+     #          params:
+     #              key: value
+     # lightning: (optional, has sane defaults and can be specified on cmdline)
+     #   trainer:
+     #       additional arguments to trainer
+     #   logger:
+     #       logger to instantiate
+     #   modelcheckpoint:
+     #       modelcheckpoint to instantiate
+     #   callbacks:
+     #       callback1:
+     #           target: importpath
+     #           params:
+     #               key: value
+     torch.set_float32_matmul_precision(precision="medium")
+     now = datetime.datetime.now().strftime("%Y-%m-%dT%H-%M-%S")
+
+     # add cwd for convenience and to make classes in this file available when
+     # running as `python main.py`
+     # (in particular `main.DataModuleFromConfig`)
+     sys.path.append(os.getcwd())
+
+     parser = get_parser()
+
+     opt, unknown = parser.parse_known_args()
+
+     if opt.name and opt.resume:
+         raise ValueError(
+             "-n/--name and -r/--resume cannot be specified both."
+             "If you want to resume training in a new log folder, "
+             "use -n/--name in combination with --resume_from_checkpoint"
+         )
+     melk_ckpt_name = None
+     name = None
+     if opt.resume:
+         if not os.path.exists(opt.resume):
+             raise ValueError("Cannot find {}".format(opt.resume))
+         if os.path.isfile(opt.resume):
+             paths = opt.resume.split("/")
+             # idx = len(paths)-paths[::-1].index("logs")+1
+             # logdir = "/".join(paths[:idx])
+             logdir = "/".join(paths[:-2])
+             ckpt = opt.resume
+             _, melk_ckpt_name = get_checkpoint_name(logdir)
+         else:
+             assert os.path.isdir(opt.resume), opt.resume
+             logdir = opt.resume.rstrip("/")
+             checkpoint_dir = os.path.join(logdir, "checkpoints")
+
+             # Use the max step checkpoint file
+             ckpt_files = glob.glob(os.path.join(checkpoint_dir, "*.ckpt"))
+             ckpt_files.sort(key=get_step_value, reverse=True)
+             if ckpt_files:
+                 ckpt = ckpt_files[0]
+                 print("use latest checkpoint: {}".format(ckpt))
+             else:
+                 # If no checkpoint files found, use a randomly initialized model
+                 print("no checkpoint file found. not resuming")
+                 ckpt = None
+
+             print("#" * 100)
+             print(f'Resuming from checkpoint "{ckpt}"')
+             print("#" * 100)
+
+         opt.resume_from_checkpoint = ckpt
+         base_configs = sorted(glob.glob(os.path.join(logdir, "configs/*.yaml")))
+         opt.base = base_configs + opt.base
+         _tmp = logdir.split("/")
+         nowname = _tmp[-1]
+     else:
+         if opt.name:
+             name = "_" + opt.name
+         elif opt.base:
+             if opt.no_base_name:
+                 name = ""
+             else:
+                 if opt.legacy_naming:
+                     cfg_fname = os.path.split(opt.base[0])[-1]
+                     cfg_name = os.path.splitext(cfg_fname)[0]
+                 else:
+                     assert "configs" in os.path.split(opt.base[0])[0], os.path.split(
+                         opt.base[0]
+                     )[0]
+                     cfg_path = os.path.split(opt.base[0])[0].split(os.sep)[
+                         os.path.split(opt.base[0])[0].split(os.sep).index("configs")
+                         + 1 :
+                     ]  # cut away the first one (we assert all configs are in "configs")
+                     cfg_name = os.path.splitext(os.path.split(opt.base[0])[-1])[0]
+                     cfg_name = "-".join(cfg_path) + f"-{cfg_name}"
+                 name = "_" + cfg_name
+         else:
+             name = ""
+         if not opt.no_date:
+             nowname = now + name + opt.postfix
+         else:
+             nowname = name + opt.postfix
+             if nowname.startswith("_"):
+                 nowname = nowname[1:]
685
+ logdir = os.path.join(opt.logdir, nowname)
686
+ print(f"LOGDIR: {logdir}")
687
+
688
+ ckptdir = os.path.join(logdir, "checkpoints")
689
+ cfgdir = os.path.join(logdir, "configs")
690
+ seed_everything(opt.seed, workers=True)
691
+
692
+ # move before model init, in case a torch.compile(...) is called somewhere
693
+ if opt.enable_tf32:
694
+ # pt_version = version.parse(torch.__version__)
695
+ torch.backends.cuda.matmul.allow_tf32 = True
696
+ torch.backends.cudnn.allow_tf32 = True
697
+ print(f"Enabling TF32 for PyTorch {torch.__version__}")
698
+ else:
699
+ print(f"Using default TF32 settings for PyTorch {torch.__version__}:")
700
+ print(
701
+ f"torch.backends.cuda.matmul.allow_tf32={torch.backends.cuda.matmul.allow_tf32}"
702
+ )
703
+ print(f"torch.backends.cudnn.allow_tf32={torch.backends.cudnn.allow_tf32}")
704
+
705
+ if "LOCAL_RANK" in os.environ:
706
+ os.environ["OMPI_COMM_WORLD_LOCAL_RANK"] = os.environ.get("LOCAL_RANK")
707
+ print("local rank:", os.environ["LOCAL_RANK"])
708
+
709
+ try:
710
+ # init and save configs
711
+ configs = [OmegaConf.load(cfg) for cfg in opt.base]
712
+ cli = OmegaConf.from_dotlist(unknown)
713
+ config = OmegaConf.merge(*configs, cli)
714
+ lightning_config = config.pop("lightning", OmegaConf.create())
715
+ # merge trainer cli with config
716
+ trainer_config = lightning_config.get("trainer", OmegaConf.create())
717
+
718
+ # default to gpu
719
+ trainer_config["accelerator"] = "gpu"
720
+ #
721
+ standard_args = default_trainer_args()
722
+ for k in standard_args:
723
+ if getattr(opt, k) != standard_args[k]:
724
+ trainer_config[k] = getattr(opt, k)
725
+
726
+ ckpt_resume_path = opt.resume_from_checkpoint
727
+
728
+ if not "devices" in trainer_config and trainer_config["accelerator"] != "gpu":
729
+ del trainer_config["accelerator"]
730
+ cpu = True
731
+ else:
732
+ gpuinfo = trainer_config["devices"]
733
+ print(f"Running on GPUs {gpuinfo}")
734
+ cpu = False
735
+ trainer_opt = argparse.Namespace(**trainer_config)
736
+ lightning_config.trainer = trainer_config
737
+
738
+ # model
739
+ model = instantiate_from_config(config.model)
740
+
741
+ # trainer and callbacks
742
+ trainer_kwargs = dict()
743
+
744
+ # default logger configs
745
+ default_logger_cfgs = {
746
+ "wandb": {
747
+ "target": "pytorch_lightning.loggers.WandbLogger",
748
+ "params": {
749
+ "name": nowname,
750
+ "save_dir": logdir,
751
+ "offline": opt.debug,
752
+ "id": nowname,
753
+ "project": opt.projectname,
754
+ "log_model": False,
755
+ "entity": opt.wandb_entity,
756
+ },
757
+ },
758
+ "csv": {
759
+ "target": "pytorch_lightning.loggers.CSVLogger",
760
+ "params": {
761
+ "name": "testtube", # hack for sbord fanatics
762
+ "save_dir": logdir,
763
+ },
764
+ },
765
+ }
766
+ default_logger_cfg = default_logger_cfgs["wandb" if opt.wandb else "csv"]
767
+ if opt.wandb:
768
+ # TODO change once leaving "swiffer" config directory
769
+ try:
770
+ group_name = nowname.split(now)[-1].split("-")[1]
771
+ except:
772
+ group_name = nowname
773
+ default_logger_cfg["params"]["group"] = group_name
774
+ init_wandb(
775
+ os.path.join(os.getcwd(), logdir),
776
+ opt=opt,
777
+ group_name=group_name,
778
+ config=config,
779
+ name_str=nowname,
780
+ entity_name=opt.wandb_entity,
781
+ )
782
+ if "logger" in lightning_config:
783
+ logger_cfg = lightning_config.logger
784
+ else:
785
+ logger_cfg = OmegaConf.create()
786
+ logger_cfg = OmegaConf.merge(default_logger_cfg, logger_cfg)
787
+ trainer_kwargs["logger"] = instantiate_from_config(logger_cfg)
788
+
789
+ # modelcheckpoint - use TrainResult/EvalResult(checkpoint_on=metric) to
790
+ # specify which metric is used to determine best models
791
+ default_modelckpt_cfg = {
792
+ "target": "pytorch_lightning.callbacks.ModelCheckpoint",
793
+ "params": {
794
+ "dirpath": ckptdir,
795
+ "filename": "epoch={epoch:06}-step={step:07}-train_loss={train/loss:.3f}",
796
+ "verbose": True,
797
+ "save_last": False,
798
+ "auto_insert_metric_name": False,
799
+ "save_top_k": -1,
800
+ },
801
+ }
802
+ if hasattr(model, "monitor"):
803
+ print(f"Monitoring {model.monitor} as checkpoint metric.")
804
+ default_modelckpt_cfg["params"]["monitor"] = model.monitor
805
+ # default_modelckpt_cfg["params"]["save_top_k"] = -1
806
+
807
+ if "modelcheckpoint" in lightning_config:
808
+ modelckpt_cfg = lightning_config.modelcheckpoint
809
+ else:
810
+ modelckpt_cfg = OmegaConf.create()
811
+ modelckpt_cfg = OmegaConf.merge(default_modelckpt_cfg, modelckpt_cfg)
812
+ print(f"Merged modelckpt-cfg: \n{modelckpt_cfg}")
813
+
814
+ # https://pytorch-lightning.readthedocs.io/en/stable/extensions/strategy.html
815
+ # default to ddp if not further specified
816
+ default_strategy_config = {"target": "pytorch_lightning.strategies.DDPStrategy"}
817
+
818
+ if "strategy" in lightning_config:
819
+ strategy_cfg = lightning_config.strategy
820
+ else:
821
+ strategy_cfg = OmegaConf.create()
822
+ default_strategy_config["params"] = {
823
+ "find_unused_parameters": False,
824
+ # "static_graph": True,
825
+ # "ddp_comm_hook": default.fp16_compress_hook # TODO: experiment with this, also for DDPSharded
826
+ }
827
+ strategy_cfg = OmegaConf.merge(default_strategy_config, strategy_cfg)
828
+ print(
829
+ f"strategy config: \n ++++++++++++++ \n {strategy_cfg} \n ++++++++++++++ "
830
+ )
831
+ trainer_kwargs["strategy"] = instantiate_from_config(strategy_cfg)
832
+
833
+ # add callback which sets up log directory
834
+ default_callbacks_cfg = {
835
+ "setup_callback": {
836
+ "target": "main.SetupCallback",
837
+ "params": {
838
+ "resume": opt.resume,
839
+ "now": now,
840
+ "logdir": logdir,
841
+ "ckptdir": ckptdir,
842
+ "cfgdir": cfgdir,
843
+ "config": config,
844
+ "lightning_config": lightning_config,
845
+ "debug": opt.debug,
846
+ "ckpt_name": melk_ckpt_name,
847
+ },
848
+ },
849
+ "image_logger": {
850
+ "target": "main.ImageLogger",
851
+ "params": {"batch_frequency": 1000, "max_images": 4, "clamp": True},
852
+ },
853
+ "learning_rate_logger": {
854
+ "target": "pytorch_lightning.callbacks.LearningRateMonitor",
855
+ "params": {
856
+ "logging_interval": "step",
857
+ # "log_momentum": True
858
+ },
859
+ },
860
+ }
861
+ if version.parse(pl.__version__) >= version.parse("1.4.0"):
862
+ default_callbacks_cfg.update({"checkpoint_callback": modelckpt_cfg})
863
+
864
+ if "callbacks" in lightning_config:
865
+ callbacks_cfg = lightning_config.callbacks
866
+ else:
867
+ callbacks_cfg = OmegaConf.create()
868
+
869
+ if "metrics_over_trainsteps_checkpoint" in callbacks_cfg:
870
+ print(
871
+ "Caution: Saving checkpoints every n train steps without deleting. This might require some free space."
872
+ )
873
+ default_metrics_over_trainsteps_ckpt_dict = {
874
+ "metrics_over_trainsteps_checkpoint": {
875
+ "target": "pytorch_lightning.callbacks.ModelCheckpoint",
876
+ "params": {
877
+ "dirpath": os.path.join(ckptdir, "trainstep_checkpoints"),
878
+ "filename": "{epoch:06}-{step:09}",
879
+ "verbose": True,
880
+ "save_top_k": -1,
881
+ "every_n_train_steps": 10000,
882
+ "save_weights_only": True,
883
+ },
884
+ }
885
+ }
886
+ default_callbacks_cfg.update(default_metrics_over_trainsteps_ckpt_dict)
887
+
888
+ callbacks_cfg = OmegaConf.merge(default_callbacks_cfg, callbacks_cfg)
889
+ if "ignore_keys_callback" in callbacks_cfg and ckpt_resume_path is not None:
890
+ callbacks_cfg.ignore_keys_callback.params["ckpt_path"] = ckpt_resume_path
891
+ elif "ignore_keys_callback" in callbacks_cfg:
892
+ del callbacks_cfg["ignore_keys_callback"]
893
+
894
+ trainer_kwargs["callbacks"] = [
895
+ instantiate_from_config(callbacks_cfg[k]) for k in callbacks_cfg
896
+ ]
897
+ if not "plugins" in trainer_kwargs:
898
+ trainer_kwargs["plugins"] = list()
899
+
900
+ # cmd line trainer args (which are in trainer_opt) have always priority over config-trainer-args (which are in trainer_kwargs)
901
+ trainer_opt = vars(trainer_opt)
902
+ trainer_kwargs = {
903
+ key: val for key, val in trainer_kwargs.items() if key not in trainer_opt
904
+ }
905
+ trainer = Trainer(**trainer_opt, **trainer_kwargs)
906
+
907
+ trainer.logdir = logdir ###
908
+
909
+ # data
910
+ data = instantiate_from_config(config.data)
911
+ # NOTE according to https://pytorch-lightning.readthedocs.io/en/latest/datamodules.html
912
+ # calling these ourselves should not be necessary but it is.
913
+ # lightning still takes care of proper multiprocessing though
914
+ data.prepare_data()
915
+ # data.setup()
916
+ print("#### Data #####")
917
+ try:
918
+ for k in data.datasets:
919
+ print(
920
+ f"{k}, {data.datasets[k].__class__.__name__}, {len(data.datasets[k])}"
921
+ )
922
+ except:
923
+ print("datasets not yet initialized.")
924
+
925
+ # configure learning rate
926
+ if "batch_size" in config.data.params:
927
+ bs, base_lr = config.data.params.batch_size, config.model.base_learning_rate
928
+ else:
929
+ bs, base_lr = (
930
+ config.data.params.train.loader.batch_size,
931
+ config.model.base_learning_rate,
932
+ )
933
+ if not cpu:
934
+ # add for different device input type
935
+ if isinstance(lightning_config.trainer.devices, int):
936
+ ngpu = lightning_config.trainer.devices
937
+ elif isinstance(lightning_config.trainer.devices, list):
938
+ ngpu = len(lightning_config.trainer.devices)
939
+ elif isinstance(lightning_config.trainer.devices, str):
940
+ ngpu = len(lightning_config.trainer.devices.strip(",").split(","))
941
+ else:
942
+ ngpu = 1
943
+ if "accumulate_grad_batches" in lightning_config.trainer:
944
+ accumulate_grad_batches = lightning_config.trainer.accumulate_grad_batches
945
+ else:
946
+ accumulate_grad_batches = 1
947
+ print(f"accumulate_grad_batches = {accumulate_grad_batches}")
948
+ lightning_config.trainer.accumulate_grad_batches = accumulate_grad_batches
949
+ if opt.scale_lr:
950
+ model.learning_rate = min(
951
+ accumulate_grad_batches * ngpu * bs * base_lr, 1e-4
952
+ )
953
+ print(
954
+ "Setting learning rate to {:.2e} = {} (accumulate_grad_batches) * {} (num_gpus) * {} (batchsize) * {:.2e} (base_lr)".format(
955
+ model.learning_rate, accumulate_grad_batches, ngpu, bs, base_lr
956
+ )
957
+ )
958
+ else:
959
+ model.learning_rate = base_lr
960
+ print("++++ NOT USING LR SCALING ++++")
961
+ print(f"Setting learning rate to {model.learning_rate:.2e}")
962
+
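+     # Editor's note (not in the original source), illustrating the scaling
+     # rule above with made-up numbers: accumulate_grad_batches=1, ngpu=4,
+     # bs=2, base_lr=1e-5 gives min(1 * 4 * 2 * 1e-5, 1e-4) = 8e-5.
+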
+     # allow checkpointing via USR1
+     def melk(*args, **kwargs):
+         # run all checkpoint hooks
+         if trainer.global_rank == 0:
+             print("Summoning checkpoint.")
+             if melk_ckpt_name is None:
+                 ckpt_path = os.path.join(ckptdir, "last.ckpt")
+             else:
+                 ckpt_path = os.path.join(ckptdir, melk_ckpt_name)
+             trainer.save_checkpoint(ckpt_path)
+
+     def divein(*args, **kwargs):
+         if trainer.global_rank == 0:
+             import pudb
+
+             pudb.set_trace()
+
+     import signal
+
+     signal.signal(signal.SIGUSR1, melk)
+     signal.signal(signal.SIGUSR2, divein)
+
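+     # Editor's note (not in the original source): with these handlers
+     # installed, signals sent from another shell trigger them, e.g.
+     #   kill -USR1 <pid>   # save a checkpoint via melk()
+     #   kill -USR2 <pid>   # drop into the pudb debugger via divein()
+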
+     # # [FIXME] Need to reset the requires_grad flag for the diffusion model
+     # # don't know why
+     # # freeze all at first
+     # for name, param in model.named_parameters():
+     #     param.requires_grad = False
+     # # set requires_grad for diffusion_model.controlnet
+     # for name, param in model.named_parameters():
+     #     if 'diffusion_model.controlnet' in name:
+     #         param.requires_grad = True
+
+     # if hasattr(model, "freeze_model"):
+     #     if model.freeze_model == 'none':
+     #         # set requires_grad for diffusion_model
+     #         print("Unlock spatial model")
+     #         for name, param in model.named_parameters():
+     #             if 'diffusion_model' in name:
+     #                 param.requires_grad = True
+     #     elif model.freeze_model == "spatial":
+     #         # set requires_grad for temporal layers in the SD branch of diffusion_model
+     #         print("Freeze spatial model")
+     #         for name, param in model.named_parameters():
+     #             if 'diffusion_model.controlnet' not in name and 'temporal' in name:
+     #                 param.requires_grad = True
+     #     else:
+     #         raise ValueError(f"Unknown freeze_model option {model.freeze_model}")
+
+     # with open('params.txt', 'w') as f:
+     #     for name, param in model.named_parameters():
+     #         f.write(f'{name} {param.requires_grad}\n')
+
+     # run
+     if opt.train:
+         try:
+             trainer.fit(model, data, ckpt_path=ckpt_resume_path)
+         except Exception:
+             if not opt.debug:
+                 melk()
+             raise
+     if not opt.no_test and not trainer.interrupted:
+         trainer.test(model, data)
+ except RuntimeError as err:
+     if MULTINODE_HACKS:
+         import requests
+         import datetime
+         import os
+         import socket
+
+         device = os.environ.get("CUDA_VISIBLE_DEVICES", "?")
+         hostname = socket.gethostname()
+         ts = datetime.datetime.utcnow().strftime("%Y-%m-%d %H:%M:%S")
+         resp = requests.get("http://169.254.169.254/latest/meta-data/instance-id")
+         print(
+             f"ERROR at {ts} on {hostname}/{resp.text} (CUDA_VISIBLE_DEVICES={device}): {type(err).__name__}: {err}",
+             flush=True,
+         )
+     raise err
+ except Exception:
+     if opt.debug and trainer.global_rank == 0:
+         try:
+             import pudb as debugger
+         except ImportError:
+             import pdb as debugger
+         debugger.post_mortem()
+     raise
+ finally:
+     # move newly created debug project to debug_runs
+     if opt.debug and not opt.resume and trainer.global_rank == 0:
+         dst, name = os.path.split(logdir)
+         dst = os.path.join(dst, "debug_runs", name)
+         os.makedirs(os.path.split(dst)[0], exist_ok=True)
+         os.rename(logdir, dst)
+
+     if opt.wandb:
+         wandb.finish()
+     # if trainer.global_rank == 0:
+     #     print(trainer.profiler.summary())
CCEdit-main/models/.gitattributes ADDED
@@ -0,0 +1,35 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tar filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
CCEdit-main/requirements.txt ADDED
@@ -0,0 +1,51 @@
+ moviepy
+ imageio==2.6.0
+ omegaconf
+ einops
+ fire
+ tqdm
+ pillow
+ numpy
+ webdataset>=0.2.33
+ ninja
+ torch==2.0.1
+ matplotlib
+ torchaudio==2.0.2
+ torchmetrics
+ torchvision==0.15.2
+ opencv-python==4.6.0.66
+ fairscale
+ pytorch-lightning==2.0.1
+ fire
+ fsspec
+ kornia==0.6.9
+ natsort
+ open-clip-torch
+ chardet==5.1.0
+ tensorboardx==2.6
+ pandas
+ pudb
+ pyyaml
+ urllib3<1.27,>=1.25.4
+ scipy
+ streamlit>=0.73.1
+ timm
+ tokenizers==0.12.1
+ transformers==4.19.1
+ triton==2.0.0
+ torchdata==0.6.1
+ wandb
+ invisible-watermark
+ xformers
+ loralib
+ ninja
+ einops
+ deepspeed
+ av
+ decord
+ sqlparse
+ entrypoints
+ -e git+https://github.com/CompVis/taming-transformers.git@master#egg=taming-transformers
+ -e git+https://github.com/openai/CLIP.git@main#egg=clip
+ -e git+https://github.com/Stability-AI/datapipelines.git@main#egg=sdata
+ -e .
CCEdit-main/setup.py ADDED
@@ -0,0 +1,13 @@
+ from setuptools import find_packages, setup
+
+ setup(
+     name="sgm",
+     version="0.0.1",
+     packages=find_packages(),
+     python_requires=">=3.8",
+     py_modules=["sgm"],
+     description="Stability Generative Models",
+     long_description=open("README.md", "r", encoding="utf-8").read(),
+     long_description_content_type="text/markdown",
+     url="https://github.com/Stability-AI/generative-models",
+ )
CCEdit-main/sgm.egg-info/PKG-INFO ADDED
@@ -0,0 +1,175 @@
+ Metadata-Version: 2.2
+ Name: sgm
+ Version: 0.0.1
+ Summary: Stability Generative Models
+ Home-page: https://github.com/Stability-AI/generative-models
+ Requires-Python: >=3.8
+ Description-Content-Type: text/markdown
+ License-File: LICENSE
+ Dynamic: description
+ Dynamic: description-content-type
+ Dynamic: home-page
+ Dynamic: requires-python
+ Dynamic: summary
+
+ ### <div align="center"> CCEdit: Creative and Controllable Video Editing via Diffusion Models</div>
+ ### <div align="center"> CVPR 2024 </div>
+
+
+ <div align="center">
+ Ruoyu Feng,
+ Wenming Weng,
+ Yanhui Wang,
+ Yuhui Yuan,
+ Jianmin Bao,
+ Chong Luo,
+ Zhibo Chen,
+ Baining Guo
+ </div>
+
+ <br>
+
+ <div align="center">
+ <a href="https://ruoyufeng.github.io/CCEdit.github.io/"><img src="https://img.shields.io/static/v1?label=Project%20Page&message=Github&color=blue&logo=github-pages"></a> &ensp;
+ <a href="https://huggingface.co/datasets/RuoyuFeng/BalanceCC"><img src="https://img.shields.io/static/v1?label=BalanceCC BenchMark&message=HF&color=yellow"></a> &ensp;
+ <a href="https://arxiv.org/pdf/2309.16496.pdf"><img src="https://img.shields.io/static/v1?label=Paper&message=Arxiv:CCEdit&color=red&logo=arxiv"></a> &ensp;
+ </div>
+
+ <table class="center">
+ <tr>
+ <td><img src="assets/makeup.gif"></td>
+ <td><img src="assets/makeup1-magicReal.gif"></td>
+ </tr>
+ </table>
+
+ ## 🔥 Update
+ - 🔥 Mar. 27, 2024. The [BalanceCC Benchmark](https://huggingface.co/datasets/RuoyuFeng/BalanceCC) is released! BalanceCC contains 100 videos with varied attributes and is designed to offer a comprehensive platform for evaluating generative video editing, focusing on both controllability and creativity.
+
+ ## Installation
+ ```bash
+ # env
+ conda create -n ccedit python=3.9.17
+ conda activate ccedit
+ pip install -r requirements.txt
+ # pip install -r requirements_pt2.txt
+ # pip install torch==2.0.1 torchaudio==2.0.2 torchdata==0.6.1 torchmetrics==1.0.0 torchvision==0.15.2
+ pip install basicsr==1.4.2 wandb loralib av decord timm==0.6.7
+ pip install moviepy imageio==2.6.0 scikit-image==0.20.0 scipy==1.9.1 diffusers==0.17.1 transformers==4.27.3
+ pip install accelerate==0.20.3 ujson
+
+ git clone https://github.com/lllyasviel/ControlNet-v1-1-nightly src/controlnet11
+ git clone https://github.com/MichalGeyer/pnp-diffusers src/pnp-diffusers
+ ```
+
+ ## Download models
+ Download the models from https://huggingface.co/RuoyuFeng/CCEdit and put them in `./models`.
+
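+ For example, one way to fetch them (a sketch assuming the `huggingface_hub` CLI is installed; not part of the original instructions):
+ ```bash
+ pip install "huggingface_hub[cli]"
+ huggingface-cli download RuoyuFeng/CCEdit --local-dir ./models
+ ```
+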
+ <!-- ## Inference and training examples -->
+ ## Inference
+ ### Text-Video-to-Video
+ ```bash
+ python scripts/sampling/sampling_tv2v.py --config_path configs/inference_ccedit/keyframe_no2ndca_depthmidas.yaml --ckpt_path models/tv2v-no2ndca-depthmidas.ckpt --H 512 --W 768 --original_fps 18 --target_fps 6 --num_keyframes 17 --batch_size 1 --num_samples 2 --sample_steps 30 --sampler_name DPMPP2SAncestralSampler --cfg_scale 7.5 --prompt 'a bear is walking.' --video_path assets/Samples/davis/bear --add_prompt 'Van Gogh style' --save_path outputs/tv2v/bear-VanGogh --disable_check_repeat
+ ```
+
+ ### Text-Video-Image-to-Video
+ Specify the edited center frame.
+ ```bash
+ python scripts/sampling/sampling_tv2v_ref.py \
+     --seed 201574 \
+     --config_path configs/inference_ccedit/keyframe_ref_cp_no2ndca_add_cfca_depthzoe.yaml \
+     --ckpt_path models/tvi2v-no2ndca-depthmidas.ckpt \
+     --H 512 --W 768 --original_fps 18 --target_fps 6 --num_keyframes 17 --batch_size 1 --num_samples 2 \
+     --sample_steps 50 --sampler_name DPMPP2SAncestralSampler --cfg_scale 7 \
+     --prompt 'A person walks on the grass, the Milky Way is in the sky, night' \
+     --add_prompt 'masterpiece, best quality,' \
+     --video_path assets/Samples/tshirtman.mp4 \
+     --reference_path assets/Samples/tshirtman-milkyway.png \
+     --save_path outputs/tvi2v/tshirtman-MilkyWay \
+     --disable_check_repeat \
+     --prior_coefficient_x 0.03 \
+     --prior_type ref
+ ```
+
+ Automatically edit the center frame via [pnp-diffusers](https://github.com/MichalGeyer/pnp-diffusers).
+ Note that the performance of this pipeline depends heavily on the quality of the automatic editing result, so try to use more powerful automatic editing methods for the center frame. Alternatively, we recommend combining CCEdit with other powerful AI editing tools, such as Stable Diffusion WebUI, ComfyUI, etc.
+ ```bash
+ # python preprocess.py --data_path <path_to_guidance_image> --inversion_prompt <inversion_prompt>
+ python src/pnp-diffusers/preprocess.py --data_path assets/Samples/tshirtman-milkyway.png --inversion_prompt 'a man walks in the filed'
+ # modify the config file (config_pnp.yaml) to use the processed image
+ # python pnp.py --config_path <pnp_config_path>
+ python src/pnp-diffusers/pnp.py --config_path config_pnp.yaml
+ python scripts/sampling/sampling_tv2v_ref.py \
+     --seed 201574 \
+     --config_path configs/inference_ccedit/keyframe_ref_cp_no2ndca_add_cfca_depthzoe.yaml \
+     --ckpt_path models/tvi2v-no2ndca-depthmidas.ckpt \
+     --H 512 --W 768 --original_fps 18 --target_fps 6 --num_keyframes 17 --batch_size 1 --num_samples 2 \
+     --sample_steps 50 --sampler_name DPMPP2SAncestralSampler --cfg_scale 7 \
+     --prompt 'A person walks on the grass, the Milky Way is in the sky, night' \
+     --add_prompt 'masterpiece, best quality,' \
+     --video_path assets/Samples/tshirtman.mp4 \
+     --reference_path "PNP-results/tshirtman-milkyway/output-a man walks in the filed, milky way.png" \
+     --save_path outputs/tvi2v/tshirtman-MilkyWay \
+     --disable_check_repeat \
+     --prior_coefficient_x 0.03 \
+     --prior_type ref
+ ```
+
+ You can use the following pipeline to automatically extract the center frame, edit it via pnp-diffusers, and then perform video editing via tvi2v.
+ ```bash
+ python scripts/sampling/pnp_generate_config.py \
+     --p_config config_pnp_auto.yaml \
+     --output_path "outputs/automatic_ref_editing/image" \
+     --image_path "outputs/centerframe/tshirtman.png" \
+     --latents_path "latents_forward" \
+     --prompt "a man walks on the beach"
+ python scripts/tools/extract_centerframe.py \
+     --p_video assets/Samples/tshirtman.mp4 \
+     --p_save outputs/centerframe/tshirtman.png \
+     --orifps 18 \
+     --targetfps 6 \
+     --n_keyframes 17 \
+     --length_long 512 \
+     --length_short 512
+ python src/pnp-diffusers/preprocess.py --data_path outputs/centerframe/tshirtman.png --inversion_prompt 'a man walks in the filed'
+ python src/pnp-diffusers/pnp.py --config_path config_pnp_auto.yaml
+ python scripts/sampling/sampling_tv2v_ref.py \
+     --seed 201574 \
+     --config_path configs/inference_ccedit/keyframe_ref_cp_no2ndca_add_cfca_depthzoe.yaml \
+     --ckpt_path models/tvi2v-no2ndca-depthmidas.ckpt \
+     --H 512 --W 768 --original_fps 18 --target_fps 6 --num_keyframes 17 --batch_size 1 --num_samples 2 \
+     --sample_steps 50 --sampler_name DPMPP2SAncestralSampler --cfg_scale 7 \
+     --prompt 'A man walks on the beach' \
+     --add_prompt 'masterpiece, best quality,' \
+     --video_path assets/Samples/tshirtman.mp4 \
+     --reference_path "outputs/automatic_ref_editing/image/output-a man walks on the beach.png" \
+     --save_path outputs/tvi2v/tshirtman-Beach \
+     --disable_check_repeat \
+     --prior_coefficient_x 0.03 \
+     --prior_type ref
+ ```
+
+ ## Training example
+ ```bash
+ python main.py -b configs/example_training/sd_1_5_controlldm-test-ruoyu-tv2v-depthmidas.yaml --wandb False
+ ```
+
+ ## BibTeX
+ If you find this work useful for your research, please cite us:
+
+ ```
+ @article{feng2023ccedit,
+     title={CCEdit: Creative and Controllable Video Editing via Diffusion Models},
+     author={Feng, Ruoyu and Weng, Wenming and Wang, Yanhui and Yuan, Yuhui and Bao, Jianmin and Luo, Chong and Chen, Zhibo and Guo, Baining},
+     journal={arXiv preprint arXiv:2309.16496},
+     year={2023}
+ }
+ ```
+
+ ## Contact Us
+ **Ruoyu Feng**: [ustcfry@mail.ustc.edu.cn](mailto:ustcfry@mail.ustc.edu.cn)
+
+
+ ## Acknowledgements
+ The source videos in this repository come from our own collections and downloads from Pexels. If anyone feels that a particular piece of content is used inappropriately, please feel free to contact me, and I will remove it immediately.
+
+ Thanks to the model contributors of [CivitAI](https://civitai.com/) and [RunwayML](https://runwayml.com/).
CCEdit-main/sgm.egg-info/SOURCES.txt ADDED
@@ -0,0 +1,59 @@
+ LICENSE
+ README.md
+ setup.py
+ scripts/__init__.py
+ scripts/demo/__init__.py
+ scripts/demo/detect.py
+ scripts/demo/sampling.py
+ scripts/demo/sampling_command.py
+ scripts/demo/streamlit_helpers.py
+ scripts/sampling/__init__.py
+ scripts/sampling/pnp_generate_config.py
+ scripts/sampling/sampling_image.py
+ scripts/sampling/sampling_tv2v.py
+ scripts/sampling/sampling_tv2v_ref.py
+ scripts/sampling/util.py
+ scripts/util/__init__.py
+ scripts/util/detection/__init__.py
+ scripts/util/detection/nsfw_and_watermark_dectection.py
+ sgm/__init__.py
+ sgm/lr_scheduler.py
+ sgm/util.py
+ sgm.egg-info/PKG-INFO
+ sgm.egg-info/SOURCES.txt
+ sgm.egg-info/dependency_links.txt
+ sgm.egg-info/top_level.txt
+ sgm/data/__init__.py
+ sgm/data/cifar10.py
+ sgm/data/dataset.py
+ sgm/data/detaset_webvid.py
+ sgm/data/mnist.py
+ sgm/models/__init__.py
+ sgm/models/autoencoder.py
+ sgm/models/diffusion-ori.py
+ sgm/models/diffusion.py
+ sgm/modules/__init__.py
+ sgm/modules/attention.py
+ sgm/modules/ema.py
+ sgm/modules/autoencoding/__init__.py
+ sgm/modules/autoencoding/losses/__init__.py
+ sgm/modules/autoencoding/regularizers/__init__.py
+ sgm/modules/diffusionmodules/__init__.py
+ sgm/modules/diffusionmodules/controlmodel.py
+ sgm/modules/diffusionmodules/denoiser.py
+ sgm/modules/diffusionmodules/denoiser_scaling.py
+ sgm/modules/diffusionmodules/denoiser_weighting.py
+ sgm/modules/diffusionmodules/discretizer.py
+ sgm/modules/diffusionmodules/guiders.py
+ sgm/modules/diffusionmodules/loss.py
+ sgm/modules/diffusionmodules/model.py
+ sgm/modules/diffusionmodules/openaimodel.py
+ sgm/modules/diffusionmodules/sampling.py
+ sgm/modules/diffusionmodules/sampling_utils.py
+ sgm/modules/diffusionmodules/sigma_sampling.py
+ sgm/modules/diffusionmodules/util.py
+ sgm/modules/diffusionmodules/wrappers.py
+ sgm/modules/distributions/__init__.py
+ sgm/modules/distributions/distributions.py
+ sgm/modules/encoders/__init__.py
+ sgm/modules/encoders/modules.py
CCEdit-main/sgm.egg-info/dependency_links.txt ADDED
@@ -0,0 +1 @@
+
CCEdit-main/sgm.egg-info/top_level.txt ADDED
@@ -0,0 +1,2 @@
+ scripts
+ sgm
CCEdit-main/src/controlnet11/.gitignore ADDED
@@ -0,0 +1,140 @@
+ .idea/
+
+ training/
+ lightning_logs/
+ image_log/
+
+ *.pth
+ *.pt
+ *.ckpt
+ *.safetensors
+
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ pip-wheel-metadata/
+ share/python-wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ *.py,cover
+ .hypothesis/
+ .pytest_cache/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+ db.sqlite3-journal
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ .python-version
+
+ # pipenv
+ # According to pypa/pipenv#598, it is recommended to include Pipfile.lock in version control.
+ # However, in case of collaboration, if having platform-specific dependencies or dependencies
+ # having no cross-platform support, pipenv may install dependencies that don't work, or not
+ # install all needed dependencies.
+ #Pipfile.lock
+
+ # PEP 582; used by e.g. github.com/David-OConnor/pyflow
+ __pypackages__/
+
+ # Celery stuff
+ celerybeat-schedule
+ celerybeat.pid
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
CCEdit-main/src/controlnet11/config.py ADDED
@@ -0,0 +1 @@
+ save_memory = False
CCEdit-main/src/controlnet11/environment.yaml ADDED
@@ -0,0 +1,38 @@
+ name: control-v11
+ channels:
+   - pytorch
+   - defaults
+ dependencies:
+   - python=3.8.5
+   - pip=20.3
+   - cudatoolkit=11.3
+   - pytorch=1.12.1
+   - torchvision=0.13.1
+   - numpy=1.23.1
+   - pip:
+     - gradio==3.16.2
+     - albumentations==1.3.0
+     - opencv-contrib-python==4.3.0.36
+     - imageio==2.9.0
+     - imageio-ffmpeg==0.4.2
+     - pytorch-lightning==1.5.0
+     - omegaconf==2.1.1
+     - test-tube>=0.7.5
+     - streamlit==1.12.1
+     - einops==0.3.0
+     - transformers==4.19.2
+     - webdataset==0.2.5
+     - kornia==0.6
+     - open_clip_torch==2.0.2
+     - invisible-watermark>=0.1.5
+     - streamlit-drawable-canvas==0.8.0
+     - torchmetrics==0.6.0
+     - timm==0.6.12
+     - addict==2.4.0
+     - yapf==0.32.0
+     - prettytable==3.6.0
+     - safetensors==0.2.7
+     - basicsr==1.4.2
+     - fvcore
+     - pycocotools
+     - wandb
CCEdit-main/src/controlnet11/gradio_canny.py ADDED
@@ -0,0 +1,115 @@
+ from share import *
+ import config
+
+ import cv2
+ import einops
+ import gradio as gr
+ import numpy as np
+ import torch
+ import random
+
+ from pytorch_lightning import seed_everything
+ from annotator.util import resize_image, HWC3
+ from annotator.canny import CannyDetector
+ from cldm.model import create_model, load_state_dict
+ from cldm.ddim_hacked import DDIMSampler
+
+
+ preprocessor = None
+
+ model_name = 'control_v11p_sd15_canny'
+ model = create_model(f'./models/{model_name}.yaml').cpu()
+ model.load_state_dict(load_state_dict('./models/v1-5-pruned.ckpt', location='cuda'), strict=False)
+ model.load_state_dict(load_state_dict(f'./models/{model_name}.pth', location='cuda'), strict=False)
+ model = model.cuda()
+ ddim_sampler = DDIMSampler(model)
+
+
+ def process(det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, low_threshold, high_threshold):
+     global preprocessor
+
+     if det == 'Canny':
+         if not isinstance(preprocessor, CannyDetector):
+             preprocessor = CannyDetector()
+
+     with torch.no_grad():
+         input_image = HWC3(input_image)
+
+         if det == 'None':
+             detected_map = input_image.copy()
+         else:
+             detected_map = preprocessor(resize_image(input_image, detect_resolution), low_threshold, high_threshold)
+             detected_map = HWC3(detected_map)
+
+         img = resize_image(input_image, image_resolution)
+         H, W, C = img.shape
+
+         detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
+
+         control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
+         control = torch.stack([control for _ in range(num_samples)], dim=0)
+         control = einops.rearrange(control, 'b h w c -> b c h w').clone()
+
+         if seed == -1:
+             seed = random.randint(0, 65535)
+         seed_everything(seed)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=False)
+
+         cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
+         un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
+         shape = (4, H // 8, W // 8)
+
+
64
+ if config.save_memory:
65
+ model.low_vram_shift(is_diffusing=True)
66
+
67
+ model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13)
68
+ # Magic number. IDK why. Perhaps because 0.825**12<0.01 but 0.826**12>0.01
69
+
70
+ samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
71
+ shape, cond, verbose=False, eta=eta,
72
+ unconditional_guidance_scale=scale,
73
+ unconditional_conditioning=un_cond)
74
+
75
+ if config.save_memory:
76
+ model.low_vram_shift(is_diffusing=False)
77
+
78
+ x_samples = model.decode_first_stage(samples)
79
+ x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
80
+
81
+ results = [x_samples[i] for i in range(num_samples)]
82
+ return [detected_map] + results
83
+
84
+
85
+ block = gr.Blocks().queue()
86
+ with block:
87
+ with gr.Row():
88
+ gr.Markdown("## Control Stable Diffusion with Canny Edges")
89
+ with gr.Row():
90
+ with gr.Column():
91
+ input_image = gr.Image(source='upload', type="numpy")
92
+ prompt = gr.Textbox(label="Prompt")
93
+ run_button = gr.Button(label="Run")
94
+ num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
95
+ seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, value=12345)
96
+ det = gr.Radio(choices=["Canny", "None"], type="value", value="Canny", label="Preprocessor")
97
+ with gr.Accordion("Advanced options", open=False):
98
+ low_threshold = gr.Slider(label="Canny low threshold", minimum=1, maximum=255, value=100, step=1)
99
+ high_threshold = gr.Slider(label="Canny high threshold", minimum=1, maximum=255, value=200, step=1)
100
+ image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
101
+ strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
102
+ guess_mode = gr.Checkbox(label='Guess Mode', value=False)
103
+ detect_resolution = gr.Slider(label="Preprocessor Resolution", minimum=128, maximum=1024, value=512, step=1)
104
+ ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
105
+ scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
106
+ eta = gr.Slider(label="DDIM ETA", minimum=0.0, maximum=1.0, value=1.0, step=0.01)
107
+ a_prompt = gr.Textbox(label="Added Prompt", value='best quality')
108
+ n_prompt = gr.Textbox(label="Negative Prompt", value='lowres, bad anatomy, bad hands, cropped, worst quality')
109
+ with gr.Column():
110
+ result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
111
+ ips = [det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, low_threshold, high_threshold]
112
+ run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
113
+
114
+
115
+ block.launch(server_name='0.0.0.0')
CCEdit-main/src/controlnet11/gradio_depth.py ADDED
@@ -0,0 +1,117 @@
+ from share import *
+ import config
+
+ import cv2
+ import einops
+ import gradio as gr
+ import numpy as np
+ import torch
+ import random
+
+ from pytorch_lightning import seed_everything
+ from annotator.util import resize_image, HWC3
+ from annotator.midas import MidasDetector
+ from annotator.zoe import ZoeDetector
+ from cldm.model import create_model, load_state_dict
+ from cldm.ddim_hacked import DDIMSampler
+
+
+ preprocessor = None
+
+ model_name = 'control_v11f1p_sd15_depth'
+ model = create_model(f'./models/{model_name}.yaml').cpu()
+ model.load_state_dict(load_state_dict('./models/v1-5-pruned.ckpt', location='cuda'), strict=False)
+ model.load_state_dict(load_state_dict(f'./models/{model_name}.pth', location='cuda'), strict=False)
+ model = model.cuda()
+ ddim_sampler = DDIMSampler(model)
+
+
+ def process(det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta):
+     global preprocessor
+
+     if det == 'Depth_Midas':
+         if not isinstance(preprocessor, MidasDetector):
+             preprocessor = MidasDetector()
+     if det == 'Depth_Zoe':
+         if not isinstance(preprocessor, ZoeDetector):
+             preprocessor = ZoeDetector()
+
+     with torch.no_grad():
+         input_image = HWC3(input_image)
+
+         if det == 'None':
+             detected_map = input_image.copy()
+         else:
+             detected_map = preprocessor(resize_image(input_image, detect_resolution))
+             detected_map = HWC3(detected_map)
+
+         img = resize_image(input_image, image_resolution)
+         H, W, C = img.shape
+
+         detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
+
+         control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
+         control = torch.stack([control for _ in range(num_samples)], dim=0)
+         control = einops.rearrange(control, 'b h w c -> b c h w').clone()
+
+         if seed == -1:
+             seed = random.randint(0, 65535)
+         seed_everything(seed)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=False)
+
+         cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
+         un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
+         shape = (4, H // 8, W // 8)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=True)
+
+         model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13)
+         # Magic number. IDK why. Perhaps because 0.825**12<0.01 but 0.826**12>0.01
+
+         samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
+                                                      shape, cond, verbose=False, eta=eta,
+                                                      unconditional_guidance_scale=scale,
+                                                      unconditional_conditioning=un_cond)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=False)
+
+         x_samples = model.decode_first_stage(samples)
+         x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
+
+         results = [x_samples[i] for i in range(num_samples)]
+     return [detected_map] + results
+
+
+ block = gr.Blocks().queue()
+ with block:
+     with gr.Row():
+         gr.Markdown("## Control Stable Diffusion with Depth Maps")
+     with gr.Row():
+         with gr.Column():
+             input_image = gr.Image(source='upload', type="numpy")
+             prompt = gr.Textbox(label="Prompt")
+             run_button = gr.Button(label="Run")
+             num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
+             seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, value=12345)
+             det = gr.Radio(choices=["Depth_Zoe", "Depth_Midas", "None"], type="value", value="Depth_Zoe", label="Preprocessor")
+             with gr.Accordion("Advanced options", open=False):
+                 image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
+                 strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
+                 guess_mode = gr.Checkbox(label='Guess Mode', value=False)
+                 detect_resolution = gr.Slider(label="Preprocessor Resolution", minimum=128, maximum=1024, value=512, step=1)
+                 ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
+                 scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
+                 eta = gr.Slider(label="DDIM ETA", minimum=0.0, maximum=1.0, value=1.0, step=0.01)
+                 a_prompt = gr.Textbox(label="Added Prompt", value='best quality')
+                 n_prompt = gr.Textbox(label="Negative Prompt", value='lowres, bad anatomy, bad hands, cropped, worst quality')
+         with gr.Column():
+             result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
+     ips = [det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta]
+     run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
+
+
+ block.launch(server_name='0.0.0.0')
CCEdit-main/src/controlnet11/gradio_lineart_anime.py ADDED
@@ -0,0 +1,116 @@
+ from share import *
+ import config
+ from cldm.hack import hack_everything
+
+
+ hack_everything(clip_skip=2)
+
+
+ import cv2
+ import einops
+ import gradio as gr
+ import numpy as np
+ import torch
+ import random
+
+ from pytorch_lightning import seed_everything
+ from annotator.util import resize_image, HWC3
+ from annotator.lineart_anime import LineartAnimeDetector
+ from cldm.model import create_model, load_state_dict
+ from cldm.ddim_hacked import DDIMSampler
+
+
+ preprocessor = None
+
+ model_name = 'control_v11p_sd15s2_lineart_anime'
+ model = create_model(f'./models/{model_name}.yaml').cpu()
+ model.load_state_dict(load_state_dict('./models/anything-v3-full.safetensors', location='cuda'), strict=False)
+ model.load_state_dict(load_state_dict(f'./models/{model_name}.pth', location='cuda'), strict=False)
+ model = model.cuda()
+ ddim_sampler = DDIMSampler(model)
+
+
+ def process(det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, strength, scale, seed, eta):
+     global preprocessor
+
+     if det == 'Lineart_Anime':
+         if not isinstance(preprocessor, LineartAnimeDetector):
+             preprocessor = LineartAnimeDetector()
+
+     with torch.no_grad():
+         input_image = HWC3(input_image)
+
+         if det == 'None':
+             detected_map = input_image.copy()
+         else:
+             detected_map = preprocessor(resize_image(input_image, detect_resolution))
+             detected_map = HWC3(detected_map)
+
+         img = resize_image(input_image, image_resolution)
+         H, W, C = img.shape
+
+         detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
+
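+         # Added note (editor's comment, not in the original script): the map is
+         # inverted below (1.0 - x) because lineart detectors emit dark lines on
+         # a light background, while this ControlNet expects white-on-black input.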
+         control = 1.0 - torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
+         control = torch.stack([control for _ in range(num_samples)], dim=0)
+         control = einops.rearrange(control, 'b h w c -> b c h w').clone()
+
+         if seed == -1:
+             seed = random.randint(0, 65535)
+         seed_everything(seed)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=False)
+
+         cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
+         un_cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
+         shape = (4, H // 8, W // 8)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=True)
+
+         model.control_scales = [strength] * 13
+         samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
+                                                      shape, cond, verbose=False, eta=eta,
+                                                      unconditional_guidance_scale=scale,
+                                                      unconditional_conditioning=un_cond)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=False)
+
+         x_samples = model.decode_first_stage(samples)
+         x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
+
+         results = [x_samples[i] for i in range(num_samples)]
+     return [detected_map] + results
+
+
+ block = gr.Blocks().queue()
+ with block:
+     with gr.Row():
+         gr.Markdown("## Control Anything V3 with Anime Lineart")
+     with gr.Row():
+         with gr.Column():
+             input_image = gr.Image(source='upload', type="numpy")
+             prompt = gr.Textbox(label="Prompt")
+             a_prompt = gr.Textbox(label="Added Prompt (Beginners do not need to change)", value='masterpiece, best quality, ultra-detailed, illustration, disheveled hair')
+             n_prompt = gr.Textbox(label="Negative Prompt (Beginners do not need to change)",
+                                   value='longbody, lowres, bad anatomy, bad hands, missing fingers, pubic hair,extra digit, fewer digits, cropped, worst quality, low quality')
+             run_button = gr.Button(label="Run")
+             num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
+             seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, value=12345)
+             det = gr.Radio(choices=["None", "Lineart_Anime"], type="value", value="None", label="Preprocessor")
+             with gr.Accordion("Advanced options", open=False):
+                 image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=2048, value=512, step=64)
+                 strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
+                 detect_resolution = gr.Slider(label="Preprocessor Resolution", minimum=128, maximum=1024, value=512, step=1)
+                 ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
+                 scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
+                 eta = gr.Slider(label="DDIM ETA", minimum=0.0, maximum=1.0, value=1.0, step=0.01)
+         with gr.Column():
+             result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
+     ips = [det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, strength, scale, seed, eta]
+     run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
+
+
+ block.launch(server_name='0.0.0.0')
CCEdit-main/src/controlnet11/gradio_normalbae.py ADDED
@@ -0,0 +1,113 @@
+ from share import *
+ import config
+
+ import cv2
+ import einops
+ import gradio as gr
+ import numpy as np
+ import torch
+ import random
+
+ from pytorch_lightning import seed_everything
+ from annotator.util import resize_image, HWC3
+ from annotator.normalbae import NormalBaeDetector
+ from cldm.model import create_model, load_state_dict
+ from cldm.ddim_hacked import DDIMSampler
+
+
+ preprocessor = None
+
+ model_name = 'control_v11p_sd15_normalbae'
+ model = create_model(f'./models/{model_name}.yaml').cpu()
+ model.load_state_dict(load_state_dict('./models/v1-5-pruned.ckpt', location='cuda'), strict=False)
+ model.load_state_dict(load_state_dict(f'./models/{model_name}.pth', location='cuda'), strict=False)
+ model = model.cuda()
+ ddim_sampler = DDIMSampler(model)
+
+
+ def process(det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta):
+     global preprocessor
+
+     if det == 'Normal_BAE':
+         if not isinstance(preprocessor, NormalBaeDetector):
+             preprocessor = NormalBaeDetector()
+
+     with torch.no_grad():
+         input_image = HWC3(input_image)
+
+         if det == 'None':
+             detected_map = input_image.copy()
+         else:
+             detected_map = preprocessor(resize_image(input_image, detect_resolution))
+             detected_map = HWC3(detected_map)
+
+         img = resize_image(input_image, image_resolution)
+         H, W, C = img.shape
+
+         detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
+
+         control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
+         control = torch.stack([control for _ in range(num_samples)], dim=0)
+         control = einops.rearrange(control, 'b h w c -> b c h w').clone()
+
+         if seed == -1:
+             seed = random.randint(0, 65535)
+         seed_everything(seed)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=False)
+
+         cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
+         un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
+         shape = (4, H // 8, W // 8)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=True)
+
+         model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13)
+         # Magic number. IDK why. Perhaps because 0.825**12<0.01 but 0.826**12>0.01
+
+         samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
+                                                      shape, cond, verbose=False, eta=eta,
+                                                      unconditional_guidance_scale=scale,
+                                                      unconditional_conditioning=un_cond)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=False)
+
+         x_samples = model.decode_first_stage(samples)
+         x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
+
+         results = [x_samples[i] for i in range(num_samples)]
+     return [detected_map] + results
+
+
+ block = gr.Blocks().queue()
+ with block:
+     with gr.Row():
+         gr.Markdown("## Control Stable Diffusion with Normal Maps")
+     with gr.Row():
+         with gr.Column():
+             input_image = gr.Image(source='upload', type="numpy")
+             prompt = gr.Textbox(label="Prompt")
+             run_button = gr.Button(label="Run")
+             num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
+             seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, value=12345)
+             det = gr.Radio(choices=["Normal_BAE", "None"], type="value", value="Normal_BAE", label="Preprocessor")
+             with gr.Accordion("Advanced options", open=False):
+                 image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
+                 strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
+                 guess_mode = gr.Checkbox(label='Guess Mode', value=False)
+                 detect_resolution = gr.Slider(label="Preprocessor Resolution", minimum=128, maximum=1024, value=512, step=1)
+                 ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
+                 scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
+                 eta = gr.Slider(label="DDIM ETA", minimum=0.0, maximum=1.0, value=1.0, step=0.01)
+                 a_prompt = gr.Textbox(label="Added Prompt", value='best quality')
+                 n_prompt = gr.Textbox(label="Negative Prompt", value='lowres, bad anatomy, bad hands, cropped, worst quality')
+         with gr.Column():
+             result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
+     ips = [det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta]
+     run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
+
+
+ block.launch(server_name='0.0.0.0')
CCEdit-main/src/controlnet11/gradio_openpose.py ADDED
@@ -0,0 +1,113 @@
+ from share import *
+ import config
+
+ import cv2
+ import einops
+ import gradio as gr
+ import numpy as np
+ import torch
+ import random
+
+ from pytorch_lightning import seed_everything
+ from annotator.util import resize_image, HWC3
+ from annotator.openpose import OpenposeDetector
+ from cldm.model import create_model, load_state_dict
+ from cldm.ddim_hacked import DDIMSampler
+
+
+ preprocessor = None
+
+ model_name = 'control_v11p_sd15_openpose'
+ model = create_model(f'./models/{model_name}.yaml').cpu()
+ model.load_state_dict(load_state_dict('./models/v1-5-pruned.ckpt', location='cuda'), strict=False)
+ model.load_state_dict(load_state_dict(f'./models/{model_name}.pth', location='cuda'), strict=False)
+ model = model.cuda()
+ ddim_sampler = DDIMSampler(model)
+
+
+ def process(det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta):
+     global preprocessor
+
+     if 'Openpose' in det:
+         if not isinstance(preprocessor, OpenposeDetector):
+             preprocessor = OpenposeDetector()
+
+     with torch.no_grad():
+         input_image = HWC3(input_image)
+
+         if det == 'None':
+             detected_map = input_image.copy()
+         else:
+             detected_map = preprocessor(resize_image(input_image, detect_resolution), hand_and_face='Full' in det)
+             detected_map = HWC3(detected_map)
+
+         img = resize_image(input_image, image_resolution)
+         H, W, C = img.shape
+
+         detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
+
+         control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
+         control = torch.stack([control for _ in range(num_samples)], dim=0)
+         control = einops.rearrange(control, 'b h w c -> b c h w').clone()
+
+         if seed == -1:
+             seed = random.randint(0, 65535)
+         seed_everything(seed)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=False)
+
+         cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
+         un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
+         shape = (4, H // 8, W // 8)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=True)
+
+         model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13)
+         # Magic number. IDK why. Perhaps because 0.825**12<0.01 but 0.826**12>0.01
+
+         samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
+                                                      shape, cond, verbose=False, eta=eta,
+                                                      unconditional_guidance_scale=scale,
+                                                      unconditional_conditioning=un_cond)
+
+         if config.save_memory:
+             model.low_vram_shift(is_diffusing=False)
+
+         x_samples = model.decode_first_stage(samples)
+         x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
+
+         results = [x_samples[i] for i in range(num_samples)]
+     return [detected_map] + results
+
+
+ block = gr.Blocks().queue()
+ with block:
+     with gr.Row():
+         gr.Markdown("## Control Stable Diffusion with OpenPose")
+     with gr.Row():
+         with gr.Column():
+             input_image = gr.Image(source='upload', type="numpy")
+             prompt = gr.Textbox(label="Prompt")
+             run_button = gr.Button(label="Run")
+             num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
+             seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, value=12345)
+             det = gr.Radio(choices=["Openpose_Full", "Openpose", "None"], type="value", value="Openpose_Full", label="Preprocessor")
+             with gr.Accordion("Advanced options", open=False):
+                 image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
+                 strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
+                 guess_mode = gr.Checkbox(label='Guess Mode', value=False)
+                 detect_resolution = gr.Slider(label="Preprocessor Resolution", minimum=128, maximum=1024, value=512, step=1)
+                 ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
+                 scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
+                 eta = gr.Slider(label="DDIM ETA", minimum=0.0, maximum=1.0, value=1.0, step=0.01)
+                 a_prompt = gr.Textbox(label="Added Prompt", value='best quality')
+                 n_prompt = gr.Textbox(label="Negative Prompt", value='lowres, bad anatomy, bad hands, cropped, worst quality')
+         with gr.Column():
+             result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
+     ips = [det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta]
+     run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
+
+
+ block.launch(server_name='0.0.0.0')
CCEdit-main/src/controlnet11/gradio_scribble.py ADDED
@@ -0,0 +1,123 @@
+ from share import *
+ import config
+
+ import cv2
+ import einops
+ import gradio as gr
+ import numpy as np
+ import torch
+ import random
+
+ from pytorch_lightning import seed_everything
+ from annotator.util import resize_image, HWC3
+ from annotator.hed import HEDdetector
+ from annotator.pidinet import PidiNetDetector
+ from annotator.util import nms
+ from cldm.model import create_model, load_state_dict
+ from cldm.ddim_hacked import DDIMSampler
+
+
+ preprocessor = None
+
+ model_name = 'control_v11p_sd15_scribble'
+ model = create_model(f'./models/{model_name}.yaml').cpu()
+ model.load_state_dict(load_state_dict('./models/v1-5-pruned.ckpt', location='cuda'), strict=False)
+ model.load_state_dict(load_state_dict(f'./models/{model_name}.pth', location='cuda'), strict=False)
+ model = model.cuda()
+ ddim_sampler = DDIMSampler(model)
+
+
+ def process(det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta):
+     global preprocessor
+
+     if 'HED' in det:
+         if not isinstance(preprocessor, HEDdetector):
+             preprocessor = HEDdetector()
+
+     if 'PIDI' in det:
+         if not isinstance(preprocessor, PidiNetDetector):
+             preprocessor = PidiNetDetector()
+
+     with torch.no_grad():
+         input_image = HWC3(input_image)
+
+         if det == 'None':
+             detected_map = input_image.copy()
+         else:
+             detected_map = preprocessor(resize_image(input_image, detect_resolution))
+             detected_map = HWC3(detected_map)
+
+         img = resize_image(input_image, image_resolution)
+         H, W, C = img.shape
+
+         detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
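+         # Added note (editor's comment, not in the original script): the
+         # NMS + blur + threshold steps below thin the dense HED/PIDI edge map
+         # into sparse binary scribble strokes (255 on strokes, 0 elsewhere).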
+ detected_map = nms(detected_map, 127, 3.0)
55
+ detected_map = cv2.GaussianBlur(detected_map, (0, 0), 3.0)
56
+ detected_map[detected_map > 4] = 255
57
+ detected_map[detected_map < 255] = 0
58
+
59
+ control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
60
+ control = torch.stack([control for _ in range(num_samples)], dim=0)
61
+ control = einops.rearrange(control, 'b h w c -> b c h w').clone()
62
+
63
+ if seed == -1:
64
+ seed = random.randint(0, 65535)
65
+ seed_everything(seed)
66
+
67
+ if config.save_memory:
68
+ model.low_vram_shift(is_diffusing=False)
69
+
70
+ cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
71
+ un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
72
+ shape = (4, H // 8, W // 8)
73
+
74
+ if config.save_memory:
75
+ model.low_vram_shift(is_diffusing=True)
76
+
77
+ model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13)
78
+ # Magic number. IDK why. Perhaps because 0.825**12<0.01 but 0.826**12>0.01
79
+
80
+ samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
81
+ shape, cond, verbose=False, eta=eta,
82
+ unconditional_guidance_scale=scale,
83
+ unconditional_conditioning=un_cond)
84
+
85
+ if config.save_memory:
86
+ model.low_vram_shift(is_diffusing=False)
87
+
88
+ x_samples = model.decode_first_stage(samples)
89
+ x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
90
+
91
+ results = [x_samples[i] for i in range(num_samples)]
92
+ return [detected_map] + results
93
+
94
+
95
+ block = gr.Blocks().queue()
96
+ with block:
97
+ with gr.Row():
98
+ gr.Markdown("## Control Stable Diffusion with Synthesized Scribble")
99
+ with gr.Row():
100
+ with gr.Column():
101
+ input_image = gr.Image(source='upload', type="numpy")
102
+ prompt = gr.Textbox(label="Prompt")
103
+ run_button = gr.Button(label="Run")
104
+ num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
105
+ seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, value=12345)
106
+ det = gr.Radio(choices=["Scribble_HED", "Scribble_PIDI", "None"], type="value", value="Scribble_HED", label="Preprocessor")
107
+ with gr.Accordion("Advanced options", open=False):
108
+ image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
109
+ strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
110
+ guess_mode = gr.Checkbox(label='Guess Mode', value=False)
111
+ detect_resolution = gr.Slider(label="Preprocessor Resolution", minimum=128, maximum=1024, value=512, step=1)
112
+ ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
113
+ scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
114
+ eta = gr.Slider(label="DDIM ETA", minimum=0.0, maximum=1.0, value=1.0, step=0.01)
115
+ a_prompt = gr.Textbox(label="Added Prompt", value='best quality')
116
+ n_prompt = gr.Textbox(label="Negative Prompt", value='lowres, bad anatomy, bad hands, cropped, worst quality')
117
+ with gr.Column():
118
+ result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
119
+ ips = [det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta]
120
+ run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
121
+
122
+
123
+ block.launch(server_name='0.0.0.0')
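
The post-processing above turns a soft edge map into a binary scribble: non-maximum suppression thins the detected edges, a Gaussian blur bridges small gaps, and a near-zero threshold binarizes the result. A minimal standalone sketch of that step with illustrative names; the repo's nms helper (imported at the top of gradio_scribble.py, outside this hunk) is passed in as a callable rather than re-imported here:

    import cv2
    import numpy as np
    from typing import Callable

    def synthesize_scribble(edge_map: np.ndarray,
                            nms: Callable[[np.ndarray, int, float], np.ndarray]) -> np.ndarray:
        """Thin a soft edge map into a binary scribble, mirroring process() above."""
        thinned = nms(edge_map, 127, 3.0)                 # non-maximum suppression thins the edges
        blurred = cv2.GaussianBlur(thinned, (0, 0), 3.0)  # sigma=3 blur bridges small gaps
        blurred[blurred > 4] = 255                        # anything non-negligible becomes a stroke
        blurred[blurred < 255] = 0                        # everything else is background
        return blurred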
CCEdit-main/src/controlnet11/gradio_scribble_interactive.py ADDED
@@ -0,0 +1,106 @@
+from share import *
+import config
+
+import einops
+import gradio as gr
+import numpy as np
+import torch
+import random
+
+from pytorch_lightning import seed_everything
+from annotator.util import resize_image, HWC3
+from cldm.model import create_model, load_state_dict
+from cldm.ddim_hacked import DDIMSampler
+
+
+preprocessor = None
+
+model_name = 'control_v11p_sd15_scribble'
+model = create_model(f'./models/{model_name}.yaml').cpu()
+model.load_state_dict(load_state_dict('./models/v1-5-pruned.ckpt', location='cuda'), strict=False)
+model.load_state_dict(load_state_dict(f'./models/{model_name}.pth', location='cuda'), strict=False)
+model = model.cuda()
+ddim_sampler = DDIMSampler(model)
+
+
+def process(input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, ddim_steps, guess_mode, strength, scale, seed, eta):
+    with torch.no_grad():
+        img = resize_image(HWC3(input_image['mask'][:, :, 0]), image_resolution)
+        H, W, C = img.shape
+
+        detected_map = np.zeros_like(img, dtype=np.uint8)
+        detected_map[np.min(img, axis=2) > 127] = 255
+
+        control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
+        control = torch.stack([control for _ in range(num_samples)], dim=0)
+        control = einops.rearrange(control, 'b h w c -> b c h w').clone()
+
+        if seed == -1:
+            seed = random.randint(0, 65535)
+        seed_everything(seed)
+
+        if config.save_memory:
+            model.low_vram_shift(is_diffusing=False)
+
+        cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
+        un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
+        shape = (4, H // 8, W // 8)
+
+        if config.save_memory:
+            model.low_vram_shift(is_diffusing=True)
+
+        model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13)
+        # Empirical decay for guess mode: scales ramp from strength * 0.825**12 (about 0.1 * strength) up to strength across the 13 control blocks.
+
+        samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
+                                                     shape, cond, verbose=False, eta=eta,
+                                                     unconditional_guidance_scale=scale,
+                                                     unconditional_conditioning=un_cond)
+
+        if config.save_memory:
+            model.low_vram_shift(is_diffusing=False)
+
+        x_samples = model.decode_first_stage(samples)
+        x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
+
+        results = [x_samples[i] for i in range(num_samples)]
+    return [detected_map] + results
+
+
+def create_canvas(w, h):
+    return np.zeros(shape=(h, w, 3), dtype=np.uint8) + 255
+
+
+block = gr.Blocks().queue()
+with block:
+    with gr.Row():
+        gr.Markdown("## Control Stable Diffusion with Interactive Scribbles")
+    with gr.Row():
+        with gr.Column():
+            canvas_width = gr.Slider(label="Canvas Width", minimum=256, maximum=1024, value=512, step=1)
+            canvas_height = gr.Slider(label="Canvas Height", minimum=256, maximum=1024, value=512, step=1)
+            create_button = gr.Button(label="Start", value='Open drawing canvas!')
+            input_image = gr.Image(source='upload', type='numpy', tool='sketch')
+            gr.Markdown(value='Do not forget to change your brush width to make it thinner. '
+                              'Just click on the small pencil icon in the upper right corner of the above block.')
+            create_button.click(fn=create_canvas, inputs=[canvas_width, canvas_height], outputs=[input_image])
+            prompt = gr.Textbox(label="Prompt")
+            run_button = gr.Button(label="Run")
+            num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
+            seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, value=12345)
+            with gr.Accordion("Advanced options", open=False):
+                image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
+                strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
+                guess_mode = gr.Checkbox(label='Guess Mode', value=False)
+                ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
+                scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
+                eta = gr.Slider(label="DDIM ETA", minimum=0.0, maximum=1.0, value=1.0, step=0.01)
+                a_prompt = gr.Textbox(label="Added Prompt", value='best quality')
+                n_prompt = gr.Textbox(label="Negative Prompt", value='lowres, bad anatomy, bad hands, cropped, worst quality')
+        with gr.Column():
+            result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
+    ips = [input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, ddim_steps, guess_mode, strength, scale, seed, eta]
+    run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
+
+
+block.launch(server_name='0.0.0.0')
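
Both scribble demos hand the sampler a conditional batch (cond) and an unconditional batch (un_cond) together with unconditional_guidance_scale=scale. An illustrative sketch of the classifier-free guidance combine this implies at each DDIM step (the standard formula, not code from this repo):

    import torch

    def cfg_combine(eps_uncond: torch.Tensor, eps_cond: torch.Tensor, scale: float) -> torch.Tensor:
        # scale = 1.0 reduces to the conditional prediction; larger values push
        # the sample further along the conditional direction at each step.
        return eps_uncond + scale * (eps_cond - eps_uncond)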
CCEdit-main/src/controlnet11/gradio_softedge.py ADDED
@@ -0,0 +1,119 @@
+from share import *
+import config
+
+import cv2
+import einops
+import gradio as gr
+import numpy as np
+import torch
+import random
+
+from pytorch_lightning import seed_everything
+from annotator.util import resize_image, HWC3
+from annotator.hed import HEDdetector
+from annotator.pidinet import PidiNetDetector
+from cldm.model import create_model, load_state_dict
+from cldm.ddim_hacked import DDIMSampler
+
+
+preprocessor = None
+
+model_name = 'control_v11p_sd15_softedge'
+model = create_model(f'./models/{model_name}.yaml').cpu()
+model.load_state_dict(load_state_dict('./models/v1-5-pruned.ckpt', location='cuda'), strict=False)
+model.load_state_dict(load_state_dict(f'./models/{model_name}.pth', location='cuda'), strict=False)
+model = model.cuda()
+ddim_sampler = DDIMSampler(model)
+
+
+def process(det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, is_safe):
+    global preprocessor
+
+    if 'HED' in det:
+        if not isinstance(preprocessor, HEDdetector):
+            preprocessor = HEDdetector()
+
+    if 'PIDI' in det:
+        if not isinstance(preprocessor, PidiNetDetector):
+            preprocessor = PidiNetDetector()
+
+    with torch.no_grad():
+        input_image = HWC3(input_image)
+
+        if det == 'None':
+            detected_map = input_image.copy()
+        else:
+            detected_map = preprocessor(resize_image(input_image, detect_resolution), safe='safe' in det)
+            detected_map = HWC3(detected_map)
+
+        img = resize_image(input_image, image_resolution)
+        H, W, C = img.shape
+
+        detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
+
+        control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
+        control = torch.stack([control for _ in range(num_samples)], dim=0)
+        control = einops.rearrange(control, 'b h w c -> b c h w').clone()
+
+        if seed == -1:
+            seed = random.randint(0, 65535)
+        seed_everything(seed)
+
+        if config.save_memory:
+            model.low_vram_shift(is_diffusing=False)
+
+        cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
+        un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
+        shape = (4, H // 8, W // 8)
+
+        if config.save_memory:
+            model.low_vram_shift(is_diffusing=True)
+
+        model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13)
+        # Empirical decay for guess mode: scales ramp from strength * 0.825**12 (about 0.1 * strength) up to strength across the 13 control blocks.
+
+        samples, intermediates = ddim_sampler.sample(ddim_steps, num_samples,
+                                                     shape, cond, verbose=False, eta=eta,
+                                                     unconditional_guidance_scale=scale,
+                                                     unconditional_conditioning=un_cond)
+
+        if config.save_memory:
+            model.low_vram_shift(is_diffusing=False)
+
+        x_samples = model.decode_first_stage(samples)
+        x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
+
+        results = [x_samples[i] for i in range(num_samples)]
+    return [detected_map] + results
+
+
+block = gr.Blocks().queue()
+with block:
+    with gr.Row():
+        gr.Markdown("## Control Stable Diffusion with Soft Edge")
+    with gr.Row():
+        with gr.Column():
+            input_image = gr.Image(source='upload', type="numpy")
+            prompt = gr.Textbox(label="Prompt")
+            run_button = gr.Button(label="Run")
+            num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
+            seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, value=12345)
+            det = gr.Radio(choices=["SoftEdge_PIDI", "SoftEdge_PIDI_safe", "SoftEdge_HED", "SoftEdge_HED_safe", "None"], type="value", value="SoftEdge_PIDI", label="Preprocessor")
+            with gr.Accordion("Advanced options", open=False):
+                is_safe = gr.Checkbox(label='Safe', value=False)  # note: unused by process(); the safe variant is selected via the preprocessor name
+                image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=768, value=512, step=64)
+                strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
+                guess_mode = gr.Checkbox(label='Guess Mode', value=False)
+                detect_resolution = gr.Slider(label="Preprocessor Resolution", minimum=128, maximum=1024, value=512, step=1)
+                ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=20, step=1)
+                scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
+                eta = gr.Slider(label="DDIM ETA", minimum=0.0, maximum=1.0, value=1.0, step=0.01)
+                a_prompt = gr.Textbox(label="Added Prompt", value='best quality')
+                n_prompt = gr.Textbox(label="Negative Prompt", value='lowres, bad anatomy, bad hands, cropped, worst quality')
+        with gr.Column():
+            result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
+    ips = [det, input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, detect_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, is_safe]
+    run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
+
+
+block.launch(server_name='0.0.0.0')
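
The isinstance checks at the top of process() act as a one-slot lazy cache: a detector is only constructed the first time its name is requested and is reused until a different detector is selected. A generic sketch of the same pattern (hypothetical helper, not part of this repo):

    _detectors = {}

    def get_detector(name, factory):
        """Construct the detector on first use and cache it for later calls."""
        if name not in _detectors:
            _detectors[name] = factory()
        return _detectors[name]

    # e.g. get_detector('HED', HEDdetector) mirrors the branch in process() above.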
CCEdit-main/src/controlnet11/gradio_tile.py ADDED
@@ -0,0 +1,109 @@
+from share import *
+import config
+
+import cv2
+import einops
+import gradio as gr
+import numpy as np
+import torch
+import random
+
+from pytorch_lightning import seed_everything
+from annotator.util import resize_image, HWC3
+from cldm.model import create_model, load_state_dict
+from cldm.ddim_hacked import DDIMSampler
+
+
+model_name = 'control_v11f1e_sd15_tile'
+model = create_model(f'./models/{model_name}.yaml').cpu()
+model.load_state_dict(load_state_dict('./models/v1-5-pruned.ckpt', location='cuda'), strict=False)
+model.load_state_dict(load_state_dict(f'./models/{model_name}.pth', location='cuda'), strict=False)
+model = model.cuda()
+ddim_sampler = DDIMSampler(model)
+
+
+def process(input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, denoise_strength):
+    global preprocessor  # unused; the tile model conditions directly on the resized input image
+
+    with torch.no_grad():
+        input_image = HWC3(input_image)
+        detected_map = input_image.copy()
+
+        img = resize_image(input_image, image_resolution)
+        H, W, C = img.shape
+
+        detected_map = cv2.resize(detected_map, (W, H), interpolation=cv2.INTER_LINEAR)
+
+        control = torch.from_numpy(detected_map.copy()).float().cuda() / 255.0
+        control = torch.stack([control for _ in range(num_samples)], dim=0)
+        control = einops.rearrange(control, 'b h w c -> b c h w').clone()
+
+        img = torch.from_numpy(img.copy()).float().cuda() / 127.0 - 1.0
+        img = torch.stack([img for _ in range(num_samples)], dim=0)
+        img = einops.rearrange(img, 'b h w c -> b c h w').clone()
+
+        if seed == -1:
+            seed = random.randint(0, 65535)
+        seed_everything(seed)
+
+        if config.save_memory:
+            model.low_vram_shift(is_diffusing=False)
+
+        cond = {"c_concat": [control], "c_crossattn": [model.get_learned_conditioning([prompt + ', ' + a_prompt] * num_samples)]}
+        un_cond = {"c_concat": None if guess_mode else [control], "c_crossattn": [model.get_learned_conditioning([n_prompt] * num_samples)]}
+
+        if config.save_memory:
+            model.low_vram_shift(is_diffusing=False)
+
+        ddim_sampler.make_schedule(ddim_steps, ddim_eta=eta, verbose=True)
+        t_enc = min(int(denoise_strength * ddim_steps), ddim_steps - 1)
+        z = model.get_first_stage_encoding(model.encode_first_stage(img))
+        z_enc = ddim_sampler.stochastic_encode(z, torch.tensor([t_enc] * num_samples).to(model.device))
+
+        if config.save_memory:
+            model.low_vram_shift(is_diffusing=True)
+
+        model.control_scales = [strength * (0.825 ** float(12 - i)) for i in range(13)] if guess_mode else ([strength] * 13)
+        # Empirical decay for guess mode: scales ramp from strength * 0.825**12 (about 0.1 * strength) up to strength across the 13 control blocks.
+
+        samples = ddim_sampler.decode(z_enc, cond, t_enc, unconditional_guidance_scale=scale, unconditional_conditioning=un_cond)
+
+        if config.save_memory:
+            model.low_vram_shift(is_diffusing=False)
+
+        x_samples = model.decode_first_stage(samples)
+        x_samples = (einops.rearrange(x_samples, 'b c h w -> b h w c') * 127.5 + 127.5).cpu().numpy().clip(0, 255).astype(np.uint8)
+
+        results = [x_samples[i] for i in range(num_samples)]
+    return [input_image] + results
+
+
+block = gr.Blocks().queue()
+with block:
+    with gr.Row():
+        gr.Markdown("## Control Stable Diffusion with Tile")
+    with gr.Row():
+        with gr.Column():
+            input_image = gr.Image(source='upload', type="numpy")
+            prompt = gr.Textbox(label="Prompt")
+            run_button = gr.Button(label="Run")
+            num_samples = gr.Slider(label="Images", minimum=1, maximum=12, value=1, step=1)
+            seed = gr.Slider(label="Seed", minimum=-1, maximum=2147483647, step=1, value=12345)
+            det = gr.Radio(choices=["None"], type="value", value="None", label="Preprocessor")
+            denoise_strength = gr.Slider(label="Denoising Strength", minimum=0.1, maximum=1.0, value=1.0, step=0.01)
+            with gr.Accordion("Advanced options", open=False):
+                image_resolution = gr.Slider(label="Image Resolution", minimum=256, maximum=2048, value=512, step=64)
+                strength = gr.Slider(label="Control Strength", minimum=0.0, maximum=2.0, value=1.0, step=0.01)
+                guess_mode = gr.Checkbox(label='Guess Mode', value=False)
+                ddim_steps = gr.Slider(label="Steps", minimum=1, maximum=100, value=32, step=1)
+                scale = gr.Slider(label="Guidance Scale", minimum=0.1, maximum=30.0, value=9.0, step=0.1)
+                eta = gr.Slider(label="DDIM ETA", minimum=0.0, maximum=1.0, value=1.0, step=0.01)
+                a_prompt = gr.Textbox(label="Added Prompt", value='best quality')
+                n_prompt = gr.Textbox(label="Negative Prompt", value='blur, lowres, bad anatomy, bad hands, cropped, worst quality')
+        with gr.Column():
+            result_gallery = gr.Gallery(label='Output', show_label=False, elem_id="gallery").style(grid=2, height='auto')
+    ips = [input_image, prompt, a_prompt, n_prompt, num_samples, image_resolution, ddim_steps, guess_mode, strength, scale, seed, eta, denoise_strength]
+    run_button.click(fn=process, inputs=ips, outputs=[result_gallery])
+
+
+block.launch(server_name='0.0.0.0')
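
Unlike the other demos, the tile script runs img2img: the input latent is stochastically noised to step t_enc and then denoised back under the control signal, so Denoising Strength sets how much of the DDIM schedule is re-run. A small sketch of that mapping (the helper name is illustrative; the formula is the t_enc line above):

    def encode_depth(denoise_strength: float, ddim_steps: int) -> int:
        # strength 1.0 is clamped one step below the full schedule; small
        # values keep the result close to the input image.
        return min(int(denoise_strength * ddim_steps), ddim_steps - 1)

    assert encode_depth(0.5, 32) == 16   # re-run half the schedule
    assert encode_depth(1.0, 32) == 31   # near-full resynthesis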
CCEdit-main/src/controlnet11/share.py ADDED
@@ -0,0 +1,8 @@
+import config
+from cldm.hack import disable_verbosity, enable_sliced_attention
+
+
+disable_verbosity()
+
+if config.save_memory:
+    enable_sliced_attention()
FateZero-main/CLIP/.gitignore ADDED
@@ -0,0 +1,10 @@
+__pycache__/
+*.py[cod]
+*$py.class
+*.egg-info
+.pytest_cache
+.ipynb_checkpoints
+
+thumbs.db
+.DS_Store
+.idea
FateZero-main/CLIP/LICENSE ADDED
@@ -0,0 +1,22 @@
+MIT License
+
+Copyright (c) 2021 OpenAI
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
+
FateZero-main/CLIP/MANIFEST.in ADDED
@@ -0,0 +1 @@
+include clip/bpe_simple_vocab_16e6.txt.gz
FateZero-main/CLIP/bench_clean_prompt.yaml ADDED
@@ -0,0 +1,52 @@
+swan_cartoon:
+  path: result/paper/main/0226_swan_multi_prompt_230226-170444/train_samples
+  source: a black swan with a red beak swimming in a river near a wall and bushes
+  target: cartoon photo of a black swan with a red beak swimming in a river near a wall and bushes,
+
+swan_duck:
+  path: result/paper/main/0226_swan_multi_prompt_230226-170444/train_samples
+  source: a black swan with a red beak swimming in a river near a wall and bushes
+  target: a white duck with a yellow beak swimming in a river near a wall and bushes,
+
+swan_flamingo:
+  path: result/paper/main/0226_swan_multi_prompt_230226-170444/train_samples
+  source: a black swan with a red beak swimming in a river near a wall and bushes
+  target: a pink flamingo with a red beak walking in a river near a wall and bushes
+
+swan_swarov:
+  path: result/paper/main/0226_swan_multi_prompt_230226-170444/train_samples
+  source: a black swan with a red beak swimming in a river near a wall and bushes
+  target: a Swarovski crystal swan with a red beak swimming in a river near a wall and bushes,
+
+
+car_posche:
+  path: result/paper/main/0225_jeep_style_blend_mask_beach_sea_230226-162553/train_samples
+  source: a silver jeep driving down a curvy road in the countryside
+  target: a Porsche car driving down a curvy road in the countryside,
+
+car_watercolor:
+  path: result/paper/main/0225_jeep_style_blend_mask_beach_sea_230226-162553/train_samples
+  source: a silver jeep driving down a curvy road in the countryside
+  target: watercolor painting of a silver jeep driving down a curvy road in the countryside,
+
+
+
+surf_ukiyo:
+  path: result/paper/main/0304_surf_ukiyo_longer_video_86_230304-161100/train_samples
+  source: a man with round helmet surfing on a white wave in blue ocean with a rope
+  target: a man with round helmet surfing on a white wave in blue ocean with a rope in the Ukiyo-e style painting
+
+rabit_pokemon:
+  path: result/paper/main/0226_rabit_reproduce_50_style_single_frame_230226-213139/train_samples
+  source: A rabbit is eating a watermelon,
+  target: pokemon cartoon of A rabbit is eating a watermelon
+
+
+train_shinkai:
+  path: result/paper/main/0304_train_Makoto_Shinkai_230304-161204/train_samples
+  sampling_rate: 28
+  source: a train traveling down tracks next to a forest filled with trees and flowers and a man on the side of the track,
+  target: a train traveling down tracks next to a forest filled with trees and flowers and a man on the side of the track Makoto Shinkai style,
+
+
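
Each entry in this benchmark maps a directory of edited frames to a source/target prompt pair, with an optional sampling_rate. A minimal loader sketch, assuming PyYAML and the file name above:

    import yaml

    with open('bench_clean_prompt.yaml') as f:
        bench = yaml.safe_load(f)

    for name, case in bench.items():
        print(name, '->', case['path'])
        print('  source:', case['source'])
        print('  target:', case['target'])
        # sampling_rate is only present for some entries
        print('  sampling_rate:', case.get('sampling_rate', 'default'))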
FateZero-main/CLIP/clip/bpe_simple_vocab_16e6.txt.gz ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:924691ac288e54409236115652ad4aa250f48203de50a9e4722a6ecd48d6804a
+size 1356917
FateZero-main/CLIP/hubconf.py ADDED
@@ -0,0 +1,42 @@
+from clip.clip import tokenize as _tokenize, load as _load, available_models as _available_models
+import re
+import string
+
+dependencies = ["torch", "torchvision", "ftfy", "regex", "tqdm"]
+
+# For compatibility (cannot include special characters in function name)
+model_functions = {model: re.sub(f'[{string.punctuation}]', '_', model) for model in _available_models()}
+
+def _create_hub_entrypoint(model):
+    def entrypoint(**kwargs):
+        return _load(model, **kwargs)
+
+    entrypoint.__doc__ = f"""Loads the {model} CLIP model
+
+        Parameters
+        ----------
+        device : Union[str, torch.device]
+            The device to put the loaded model
+
+        jit : bool
+            Whether to load the optimized JIT model or more hackable non-JIT model (default).
+
+        download_root: str
+            path to download the model files; by default, it uses "~/.cache/clip"
+
+        Returns
+        -------
+        model : torch.nn.Module
+            The {model} CLIP model
+
+        preprocess : Callable[[PIL.Image], torch.Tensor]
+            A torchvision transform that converts a PIL image into a tensor that the returned model can take as its input
+        """
+    return entrypoint
+
+def tokenize():
+    return _tokenize
+
+_entrypoints = {model_functions[model]: _create_hub_entrypoint(model) for model in _available_models()}
+
+globals().update(_entrypoints)
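
This hubconf registers one torch.hub entry point per CLIP model, with punctuation in the model name mapped to underscores ("ViT-B/32" becomes ViT_B_32). A usage sketch; the repo path is illustrative and should point wherever this hubconf.py actually lives:

    import torch

    # load a model and its preprocessing transform through torch.hub
    model, preprocess = torch.hub.load('openai/CLIP', 'ViT_B_32', device='cpu')

    # the tokenize entry point returns the tokenizer function itself
    tokenize = torch.hub.load('openai/CLIP', 'tokenize')
    tokens = tokenize(["a diagram", "a dog", "a cat"])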
FateZero-main/CLIP/probs.py ADDED
@@ -0,0 +1,18 @@
+import torch
+import clip
+from PIL import Image
+
+device = "cuda" if torch.cuda.is_available() else "cpu"
+model, preprocess = clip.load("ViT-B/32", device=device)
+
+image = preprocess(Image.open("CLIP.png")).unsqueeze(0).to(device)
+text = clip.tokenize(["a diagram", "a dog", "a cat"]).to(device)
+
+with torch.no_grad():
+    image_features = model.encode_image(image)
+    text_features = model.encode_text(text)
+
+    logits_per_image, logits_per_text = model(image, text)
+    probs = logits_per_image.softmax(dim=-1).cpu().numpy()
+
+print("Label probs:", probs)  # prints: [[0.9927937 0.00421068 0.00299572]]
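
The logits printed above come from CLIP's scaled dot product. A common follow-up, not part of the script itself, is to L2-normalize the encoded features so their dot product is a plain cosine similarity:

    import torch

    def cosine_similarity(image_features: torch.Tensor, text_features: torch.Tensor) -> torch.Tensor:
        image_features = image_features / image_features.norm(dim=-1, keepdim=True)
        text_features = text_features / text_features.norm(dim=-1, keepdim=True)
        return image_features @ text_features.T  # values in [-1, 1]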
FateZero-main/CLIP/requirements.txt ADDED
@@ -0,0 +1,5 @@
+ftfy
+regex
+tqdm
+torch
+torchvision
FateZero-main/CLIP/setup.py ADDED
@@ -0,0 +1,21 @@
+import os
+
+import pkg_resources
+from setuptools import setup, find_packages
+
+setup(
+    name="clip",
+    py_modules=["clip"],
+    version="1.0",
+    description="",
+    author="OpenAI",
+    packages=find_packages(exclude=["tests*"]),
+    install_requires=[
+        str(r)
+        for r in pkg_resources.parse_requirements(
+            open(os.path.join(os.path.dirname(__file__), "requirements.txt"))
+        )
+    ],
+    include_package_data=True,
+    extras_require={'dev': ['pytest']},
+)
FateZero-main/ckpt/download.sh ADDED
@@ -0,0 +1,8 @@
+# Download the checkpoints from Hugging Face; this takes about 20 GB of disk space.
+git lfs install
+
+git clone https://huggingface.co/CompVis/stable-diffusion-v1-4
+git clone https://huggingface.co/runwayml/stable-diffusion-v1-5
+git clone https://huggingface.co/chenyangqi/jeep_tuned_200
+git clone https://huggingface.co/chenyangqi/man_skate_250
+git clone https://huggingface.co/chenyangqi/swan_150
FateZero-main/colab_fatezero.ipynb ADDED
The diff for this file is too large to render. See raw diff
 
FateZero-main/data/attribute/bear_tiger_lion_leopard.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8868d750e58f776a5a3d1b9a956a4d312788c21eb3a8bf466b26127a0482b6d0
+size 136547
FateZero-main/data/attribute/bus_gpu.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8855ab637bbbf5e96143438b2e8db4e4e61a41d031b8ef1d5c0a2b4aa07699b5
+size 154465
FateZero-main/data/attribute/bus_gpu/00000.png ADDED

Git LFS Details

  • SHA256: 4743efb23560f04caf0296b45b5d41a38aa0be8a76938e164069cbdc8343b726
  • Pointer size: 131 Bytes
  • Size of remote file: 427 kB
FateZero-main/data/attribute/bus_gpu/00002.png ADDED

Git LFS Details

  • SHA256: afc4739c3bd6b87a9e1344646759307f6b2e16fea63a99945f1bcbcdcd3ac96c
  • Pointer size: 131 Bytes
  • Size of remote file: 402 kB
FateZero-main/data/attribute/bus_gpu/00004.png ADDED

Git LFS Details

  • SHA256: 5399cb730537563a4a6b0276a6499ee0c572ca56f28587737a4317f633ccef42
  • Pointer size: 131 Bytes
  • Size of remote file: 447 kB
FateZero-main/data/attribute/bus_gpu/00006.png ADDED

Git LFS Details

  • SHA256: 2076529465d3aec4cd59725406fed395ea45f33cb8c7e0ea098755a29ec2c971
  • Pointer size: 131 Bytes
  • Size of remote file: 388 kB
FateZero-main/data/attribute/bus_gpu/00007.png ADDED

Git LFS Details

  • SHA256: c0de9d487ebae035c767dcda0029ed68df9ae45a0cc3aac4dd01a9571d61e1b7
  • Pointer size: 131 Bytes
  • Size of remote file: 417 kB
FateZero-main/data/attribute/cat_tiger_leopard_grass.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b9ce89dbf9dd1f9b8d226ce2b0b7b6b46100563366c2297f78c81dd996c8b3c7
+size 55091
FateZero-main/data/attribute/duck_rubber.mp4 ADDED
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb5fe08c0a36f81dc46777ade64592075fd8099ec259fec2b1dc1774701796b8
+size 40282
FateZero-main/data/attribute/duck_rubber/00000.png ADDED

Git LFS Details

  • SHA256: 76988a0a2dde69c785f3706cd0a8f1ec64298bb9f20f261bfcb5276492b7efeb
  • Pointer size: 131 Bytes
  • Size of remote file: 250 kB
FateZero-main/data/attribute/duck_rubber/00001.png ADDED

Git LFS Details

  • SHA256: d1b3c35f785451781f759837437f855f62129619282ef8dbf2dde53c2d1665bd
  • Pointer size: 131 Bytes
  • Size of remote file: 255 kB
FateZero-main/data/attribute/duck_rubber/00002.png ADDED

Git LFS Details

  • SHA256: 289e6dffec48e932d8ddd0c97ce0b3917c3d2f0acd7fe299bf3d74c23a8cbb10
  • Pointer size: 131 Bytes
  • Size of remote file: 251 kB
FateZero-main/data/attribute/duck_rubber/00003.png ADDED

Git LFS Details

  • SHA256: 79aec578db39fa6b40084d57fc3995f3e5f548b05bdaa7e681e15b3110f1402e
  • Pointer size: 131 Bytes
  • Size of remote file: 252 kB