Moyao001 committed on
Commit 3aa970e · verified · 1 Parent(s): 6577950

Add files using upload-large-folder tool

This view is limited to 50 files because it contains too many changes. See the raw diff for the full change set.

Files changed (50)
  1. FateZero-main/data/negative_reg/bird/198-dfdcf603f0d6dd9a790f4d6a658032b117abf50e.jpg +3 -0
  2. FateZero-main/data/negative_reg/bird/347-defdc0fea089f7e2d6b9cb296eca877dd5f7a7d2.jpg +3 -0
  3. FateZero-main/data/negative_reg/bird/359-b6be4d41cabadb5c0f5ae7a3b5aa8c87d2811b25.jpg +3 -0
  4. FateZero-main/data/negative_reg/bird/405-2b8123d16a14b5e322c9a7e9cdff6b5620de744e.jpg +3 -0
  5. FateZero-main/data/negative_reg/bird/452-edfc79bf16e82da5c65e4f814947229096dec56b.jpg +3 -0
  6. FateZero-main/data/negative_reg/bird/490-6586977261049294340e1f2ca63a704b95742218.jpg +3 -0
  7. FateZero-main/data/negative_reg/bird/529-5e8face1afa7450de032369b3a378874bd526cc4.jpg +3 -0
  8. FateZero-main/data/negative_reg/bird/551-76631a31d87d5384757e68066843bb8655577148.jpg +3 -0
  9. FateZero-main/data/negative_reg/bird/66-1e78a8e81d922e992f0e0968cd751d2541b2f130.jpg +3 -0
  10. FateZero-main/data/negative_reg/bird/8-158ad1d17e579bab9d48cd69025d9cfce31266fa.jpg +3 -0
  11. FateZero-main/data/negative_reg/car/132-7102babd6a1810bd7e1dc01a2241d51ceeeb44dc.jpg +3 -0
  12. FateZero-main/data/negative_reg/car/135-c8ae1edb43f1ceeb0ea05b94a595d6bc8b2798c6.jpg +3 -0
  13. FateZero-main/data/negative_reg/car/137-59d324538c4873d14fb45a7c52a8a54ce90ff4d5.jpg +3 -0
  14. FateZero-main/data/negative_reg/car/222-beff21bebbbb9dabd49517cbd4522791953b688b.jpg +3 -0
  15. FateZero-main/data/negative_reg/car/246-1aa12ea27561eaa19320fa4f057c420afd795453.jpg +3 -0
  16. FateZero-main/data/negative_reg/car/257-0bcf1c0411420992816ac3269cc04f9dfc828cee.jpg +3 -0
  17. FateZero-main/data/negative_reg/car/331-93e1c2cb142929baeab22fc994290d7315cd8f40.jpg +3 -0
  18. FateZero-main/data/negative_reg/car/34-f49ef63f40978ced5363df3ba47201bacfe41dec.jpg +3 -0
  19. FateZero-main/data/negative_reg/car/357-56fc8ca34173c7321b5564e29bd57ae47d78b2aa.jpg +3 -0
  20. FateZero-main/data/negative_reg/car/361-a64d1b37c755ef754d2b7c27e9f7e156da6bc576.jpg +3 -0
  21. FateZero-main/data/negative_reg/car/426-5bbfca82723b7804db51fc7705400610b6885f99.jpg +3 -0
  22. FateZero-main/data/negative_reg/car/542-0b9afd5a396cf38cceb5affafac671d9a84f1130.jpg +3 -0
  23. FateZero-main/data/negative_reg/car/576-2f9d3df0358dc7c8b0b0b40d2a7d63995432ffa5.jpg +3 -0
  24. FateZero-main/data/negative_reg/car/656-1e8349c458b6cb470804cd09c6dac39a97cdef25.jpg +3 -0
  25. FateZero-main/data/negative_reg/car/675-86bf57b2da85f2912d9ed793e92f9140eb276c17.jpg +3 -0
  26. FateZero-main/data/negative_reg/car/678-60e3c9248c8eb7cb912a74071e0b9cecfbb05f45.jpg +3 -0
  27. FateZero-main/data/negative_reg/car/682-7d849fbd641fe5bbb39a07ae290f055572ba79a6.jpg +3 -0
  28. FateZero-main/data/negative_reg/car/83-c0d83d05870f16b380369272da855119b9a368ec.jpg +3 -0
  29. RAVE-main/CIVIT_AI/civit_ai.sh +27 -0
  30. RAVE-main/CIVIT_AI/convert.py +182 -0
  31. RAVE-main/pretrained_models/.gitattributes +34 -0
  32. RAVE-main/scripts/run_experiment.py +130 -0
  33. vid2vid-zero-main/.gitignore +175 -0
  34. vid2vid-zero-main/README.md +152 -0
  35. vid2vid-zero-main/app.py +66 -0
  36. vid2vid-zero-main/checkpoints/.gitattributes +32 -0
  37. vid2vid-zero-main/configs/Cartoon_kangaroos.yaml +37 -0
  38. vid2vid-zero-main/configs/black-swan.yaml +36 -0
  39. vid2vid-zero-main/configs/brown-bear.yaml +37 -0
  40. vid2vid-zero-main/configs/car-moving.yaml +37 -0
  41. vid2vid-zero-main/configs/car-turn.yaml +39 -0
  42. vid2vid-zero-main/configs/child-riding.yaml +40 -0
  43. vid2vid-zero-main/configs/cow-walking.yaml +37 -0
  44. vid2vid-zero-main/configs/dog-walking.yaml +35 -0
  45. vid2vid-zero-main/configs/horse-running.yaml +36 -0
  46. vid2vid-zero-main/configs/lion-roaring.yaml +37 -0
  47. vid2vid-zero-main/configs/man-running.yaml +37 -0
  48. vid2vid-zero-main/configs/man-surfing.yaml +36 -0
  49. vid2vid-zero-main/configs/plane.yaml +37 -0
  50. vid2vid-zero-main/configs/rabbit-watermelon.yaml +40 -0
FateZero-main/data/negative_reg/bird/198-dfdcf603f0d6dd9a790f4d6a658032b117abf50e.jpg ADDED

Git LFS Details

  • SHA256: 874f6102f2c00a522c60d5fa2431c29b7898457c15db62d04e5dd4807e52ebf4
  • Pointer size: 130 Bytes
  • Size of remote file: 69.7 kB
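Each of these images is checked into Git as a small LFS pointer file rather than the binary itself; for this entry, the pointer has roughly the following shape (the `oid` is the SHA256 listed above; the byte count is approximate, derived from the 69.7 kB figure):

```text
version https://git-lfs.github.com/spec/v1
oid sha256:874f6102f2c00a522c60d5fa2431c29b7898457c15db62d04e5dd4807e52ebf4
size 69700
```

The "Pointer size: 130 Bytes" figures above refer to the size of this pointer text, not the image.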
FateZero-main/data/negative_reg/bird/347-defdc0fea089f7e2d6b9cb296eca877dd5f7a7d2.jpg ADDED

Git LFS Details

  • SHA256: 008df3061807a6e99772c29e98f087807a8e4470c0ad33528d3cb8398416c7be
  • Pointer size: 130 Bytes
  • Size of remote file: 32.9 kB
FateZero-main/data/negative_reg/bird/359-b6be4d41cabadb5c0f5ae7a3b5aa8c87d2811b25.jpg ADDED

Git LFS Details

  • SHA256: 70ba62f8d0af506a492523e4a787519395cc166c4c7c4d41626d86a942f28649
  • Pointer size: 130 Bytes
  • Size of remote file: 71.2 kB
FateZero-main/data/negative_reg/bird/405-2b8123d16a14b5e322c9a7e9cdff6b5620de744e.jpg ADDED

Git LFS Details

  • SHA256: 0f72522d56c1a16b6745a7ac620ef5062d326b86167a07776801fe07ba10a233
  • Pointer size: 130 Bytes
  • Size of remote file: 37.3 kB
FateZero-main/data/negative_reg/bird/452-edfc79bf16e82da5c65e4f814947229096dec56b.jpg ADDED

Git LFS Details

  • SHA256: fdc3bcedcfef88bec2789354f52d52389dc08979e741a1441e794e54150f6431
  • Pointer size: 130 Bytes
  • Size of remote file: 34.7 kB
FateZero-main/data/negative_reg/bird/490-6586977261049294340e1f2ca63a704b95742218.jpg ADDED

Git LFS Details

  • SHA256: 283f1351ff4e2b1cea9ed6e31e15bc360466f327110e7d5a7c5e94c3dafd9654
  • Pointer size: 130 Bytes
  • Size of remote file: 19.7 kB
FateZero-main/data/negative_reg/bird/529-5e8face1afa7450de032369b3a378874bd526cc4.jpg ADDED

Git LFS Details

  • SHA256: 390f48fab998bd7fddb657c3a593486a83336b14ab2299deb97501fb93797911
  • Pointer size: 130 Bytes
  • Size of remote file: 24.9 kB
FateZero-main/data/negative_reg/bird/551-76631a31d87d5384757e68066843bb8655577148.jpg ADDED

Git LFS Details

  • SHA256: 9c379279a21c46891c414d914f6c9a3b5f8df882c812858464945fb9787604a7
  • Pointer size: 130 Bytes
  • Size of remote file: 19.4 kB
FateZero-main/data/negative_reg/bird/66-1e78a8e81d922e992f0e0968cd751d2541b2f130.jpg ADDED

Git LFS Details

  • SHA256: 6ad44ab5a74626513dd42bde66bbecb1f08c26e9398f6e1dd0ee317160d81f96
  • Pointer size: 130 Bytes
  • Size of remote file: 44.5 kB
FateZero-main/data/negative_reg/bird/8-158ad1d17e579bab9d48cd69025d9cfce31266fa.jpg ADDED

Git LFS Details

  • SHA256: d9deb95a6c7fa33cb4f26a004f9f662f2ea7c9cd27c519621df6fa075f00e981
  • Pointer size: 130 Bytes
  • Size of remote file: 19.4 kB
FateZero-main/data/negative_reg/car/132-7102babd6a1810bd7e1dc01a2241d51ceeeb44dc.jpg ADDED

Git LFS Details

  • SHA256: 8fbbf251d59c75bb5bfb35a09046b79005df2cc04f0ba5bfa6447c376fdcf51b
  • Pointer size: 130 Bytes
  • Size of remote file: 34.9 kB
FateZero-main/data/negative_reg/car/135-c8ae1edb43f1ceeb0ea05b94a595d6bc8b2798c6.jpg ADDED

Git LFS Details

  • SHA256: 5f416223cc2e30808ef8ec014aa4282922d82226cb6449fc7a95115beb61cae6
  • Pointer size: 130 Bytes
  • Size of remote file: 63.1 kB
FateZero-main/data/negative_reg/car/137-59d324538c4873d14fb45a7c52a8a54ce90ff4d5.jpg ADDED

Git LFS Details

  • SHA256: ffebd4a466c1d8b1a593fe391414a65bad5fd4f46122b1c83fb663bfd720e8a3
  • Pointer size: 130 Bytes
  • Size of remote file: 59.4 kB
FateZero-main/data/negative_reg/car/222-beff21bebbbb9dabd49517cbd4522791953b688b.jpg ADDED

Git LFS Details

  • SHA256: e29c50a880c22f528414442c43d4e44644c879ed12b644bf128b0d9f791b8f60
  • Pointer size: 130 Bytes
  • Size of remote file: 61.3 kB
FateZero-main/data/negative_reg/car/246-1aa12ea27561eaa19320fa4f057c420afd795453.jpg ADDED

Git LFS Details

  • SHA256: a8d82b925f4c3361e0fe9154459c57527a9ff713f3dc90a389aa1411c3579ab0
  • Pointer size: 130 Bytes
  • Size of remote file: 62.9 kB
FateZero-main/data/negative_reg/car/257-0bcf1c0411420992816ac3269cc04f9dfc828cee.jpg ADDED

Git LFS Details

  • SHA256: b4a600ff6f8f19c4105adb69822719a72949588bad8e1afdc8a63e155c9a59fd
  • Pointer size: 130 Bytes
  • Size of remote file: 48.2 kB
FateZero-main/data/negative_reg/car/331-93e1c2cb142929baeab22fc994290d7315cd8f40.jpg ADDED

Git LFS Details

  • SHA256: 7afd83249f9eb1114de6deae21efb0ac44eac1461914953346a108ba26c85f28
  • Pointer size: 130 Bytes
  • Size of remote file: 62.2 kB
FateZero-main/data/negative_reg/car/34-f49ef63f40978ced5363df3ba47201bacfe41dec.jpg ADDED

Git LFS Details

  • SHA256: 3209c18e7c87a3811a9c49a51cd4f446ae555ba1d5b3a81b8a9abe9a8c640e59
  • Pointer size: 130 Bytes
  • Size of remote file: 57 kB
FateZero-main/data/negative_reg/car/357-56fc8ca34173c7321b5564e29bd57ae47d78b2aa.jpg ADDED

Git LFS Details

  • SHA256: 35a9368b2d85119b17ed4457cf41f7f9f78f7250094a234f1e5fef7306d8d896
  • Pointer size: 130 Bytes
  • Size of remote file: 61.9 kB
FateZero-main/data/negative_reg/car/361-a64d1b37c755ef754d2b7c27e9f7e156da6bc576.jpg ADDED

Git LFS Details

  • SHA256: bba6d8b655f4d0bf333514be5b968578e1437647bdbf181c6ce72ac31be1161d
  • Pointer size: 130 Bytes
  • Size of remote file: 70.2 kB
FateZero-main/data/negative_reg/car/426-5bbfca82723b7804db51fc7705400610b6885f99.jpg ADDED

Git LFS Details

  • SHA256: 3a8a2af8bb5b52192e8e767e742b25cff69a8397e013fa27b6bfd58a526accd8
  • Pointer size: 130 Bytes
  • Size of remote file: 74.6 kB
FateZero-main/data/negative_reg/car/542-0b9afd5a396cf38cceb5affafac671d9a84f1130.jpg ADDED

Git LFS Details

  • SHA256: 413c41dd7bf66433827492d90881b44f903958abbe22d911af810930fd56bf93
  • Pointer size: 130 Bytes
  • Size of remote file: 45.1 kB
FateZero-main/data/negative_reg/car/576-2f9d3df0358dc7c8b0b0b40d2a7d63995432ffa5.jpg ADDED

Git LFS Details

  • SHA256: 387c8fc1e92543d2a5a0f0c47cca3ae943b15612c6a70259f8396691fb0842f4
  • Pointer size: 130 Bytes
  • Size of remote file: 64.7 kB
FateZero-main/data/negative_reg/car/656-1e8349c458b6cb470804cd09c6dac39a97cdef25.jpg ADDED

Git LFS Details

  • SHA256: 5ea1ca1c392d43dbd772edcbb0ac61d1b6894e7dad091d4d618bbb6a9fa13c0e
  • Pointer size: 130 Bytes
  • Size of remote file: 62.3 kB
FateZero-main/data/negative_reg/car/675-86bf57b2da85f2912d9ed793e92f9140eb276c17.jpg ADDED

Git LFS Details

  • SHA256: b127eef7eb2e46808e94ff0473ff1adc9148fbe3992f076c225d3c53bd2637aa
  • Pointer size: 130 Bytes
  • Size of remote file: 55.7 kB
FateZero-main/data/negative_reg/car/678-60e3c9248c8eb7cb912a74071e0b9cecfbb05f45.jpg ADDED

Git LFS Details

  • SHA256: 8ac20dcc704b3f30bdb07bedd92e48f9503b75a86d6cd17c848ef733b9b4457d
  • Pointer size: 130 Bytes
  • Size of remote file: 54 kB
FateZero-main/data/negative_reg/car/682-7d849fbd641fe5bbb39a07ae290f055572ba79a6.jpg ADDED

Git LFS Details

  • SHA256: 5504db8659862ff6ec3ba89387e75c8b068f3e8c8db9a5fe4f3561dd431fc583
  • Pointer size: 130 Bytes
  • Size of remote file: 75.8 kB
FateZero-main/data/negative_reg/car/83-c0d83d05870f16b380369272da855119b9a368ec.jpg ADDED

Git LFS Details

  • SHA256: 7d19690854cebc236d6cb8697e881a86a83acb59eba17c7a9c37022b60ef662e
  • Pointer size: 130 Bytes
  • Size of remote file: 54.6 kB
RAVE-main/CIVIT_AI/civit_ai.sh ADDED
@@ -0,0 +1,27 @@
+ #!/bin/sh
+
+ civit_ai=$1
+
+ CWDPATH=$(pwd)
+ mkdir $CWDPATH/CIVIT_AI/safetensors
+ cd $CWDPATH/CIVIT_AI/safetensors
+
+ mkdir $civit_ai
+ cd $civit_ai
+ wget https://civitai.com/api/download/models/$civit_ai --content-disposition
+
+ model_name=$(ls -l | awk '{print $9}')
+ model_name=${model_name//$'\n'/}
+ model_name2=${model_name//$'.safetensors'/}
+
+ eval "$(conda shell.bash hook)"
+ conda activate rave
+ cd ../..
+ python convert.py \
+     --checkpoint_path "$CWDPATH/CIVIT_AI/safetensors/$civit_ai/$model_name" \
+     --dump_path "$CWDPATH/CIVIT_AI/diffusers_models/$civit_ai/$model_name2" \
+     --from_safetensors
+
+ rm -rf $CWDPATH/CIVIT_AI/safetensors/
+
+ echo "Download is done! Check the diffusers_models folder. $model_name"
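The script derives the diffusers output directory name by stripping the `.safetensors` suffix from the downloaded filename. A minimal sketch of that naming step in Python (the filename below is a hypothetical example, not one from the repository):

```python
# Mirror civit_ai.sh's name handling: drop the .safetensors suffix to
# get the directory name used under diffusers_models/ (hypothetical file).
model_name = "dreamshaper_8.safetensors"
model_name2 = model_name.replace(".safetensors", "")
print(model_name2)  # -> dreamshaper_8
```

Note the script relies on bash-style `${var//pattern/}` substitutions, so its `#!/bin/sh` shebang assumes `sh` is bash.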
RAVE-main/CIVIT_AI/convert.py ADDED
@@ -0,0 +1,182 @@
+ # coding=utf-8
+ # Copyright 2023 The HuggingFace Inc. team.
+ #
+ # Licensed under the Apache License, Version 2.0 (the "License");
+ # you may not use this file except in compliance with the License.
+ # You may obtain a copy of the License at
+ #
+ #     http://www.apache.org/licenses/LICENSE-2.0
+ #
+ # Unless required by applicable law or agreed to in writing, software
+ # distributed under the License is distributed on an "AS IS" BASIS,
+ # WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+ # See the License for the specific language governing permissions and
+ # limitations under the License.
+ """ Conversion script for the LDM checkpoints. """
+
+ import argparse
+ import importlib
+
+ import torch
+
+ from diffusers.pipelines.stable_diffusion.convert_from_ckpt import download_from_original_stable_diffusion_ckpt
+
+
+ if __name__ == "__main__":
+     parser = argparse.ArgumentParser()
+
+     parser.add_argument(
+         "--checkpoint_path", default=None, type=str, required=True, help="Path to the checkpoint to convert."
+     )
+     # !wget https://raw.githubusercontent.com/CompVis/stable-diffusion/main/configs/stable-diffusion/v1-inference.yaml
+     parser.add_argument(
+         "--original_config_file",
+         default=None,
+         type=str,
+         help="The YAML config file corresponding to the original architecture.",
+     )
+     parser.add_argument(
+         "--num_in_channels",
+         default=None,
+         type=int,
+         help="The number of input channels. If `None`, the number of input channels will be automatically inferred.",
+     )
+     parser.add_argument(
+         "--scheduler_type",
+         default="pndm",
+         type=str,
+         help="Type of scheduler to use. Should be one of ['pndm', 'lms', 'ddim', 'euler', 'euler-ancestral', 'dpm']",
+     )
+     parser.add_argument(
+         "--pipeline_type",
+         default=None,
+         type=str,
+         help=(
+             "The pipeline type. One of 'FrozenOpenCLIPEmbedder', 'FrozenCLIPEmbedder', 'PaintByExample'"
+             ". If `None`, the pipeline will be automatically inferred."
+         ),
+     )
+     parser.add_argument(
+         "--image_size",
+         default=None,
+         type=int,
+         help=(
+             "The image size that the model was trained on. Use 512 for Stable Diffusion v1.X and Stable Diffusion v2"
+             " Base. Use 768 for Stable Diffusion v2."
+         ),
+     )
+     parser.add_argument(
+         "--prediction_type",
+         default=None,
+         type=str,
+         help=(
+             "The prediction type that the model was trained on. Use 'epsilon' for Stable Diffusion v1.X and Stable"
+             " Diffusion v2 Base. Use 'v_prediction' for Stable Diffusion v2."
+         ),
+     )
+     parser.add_argument(
+         "--extract_ema",
+         action="store_true",
+         help=(
+             "Only relevant for checkpoints that have both EMA and non-EMA weights. Whether to extract the EMA weights"
+             " or not. Defaults to `False`. Add `--extract_ema` to extract the EMA weights. EMA weights usually yield"
+             " higher quality images for inference. Non-EMA weights are usually better to continue fine-tuning."
+         ),
+     )
+     parser.add_argument(
+         "--upcast_attention",
+         action="store_true",
+         help=(
+             "Whether the attention computation should always be upcasted. This is necessary when running stable"
+             " diffusion 2.1."
+         ),
+     )
+     parser.add_argument(
+         "--from_safetensors",
+         action="store_true",
+         help="If `--checkpoint_path` is in `safetensors` format, load checkpoint with safetensors instead of PyTorch.",
+     )
+     parser.add_argument(
+         "--to_safetensors",
+         action="store_true",
+         help="Whether to store pipeline in safetensors format or not.",
+     )
+     parser.add_argument("--dump_path", default=None, type=str, required=True, help="Path to the output model.")
+     parser.add_argument("--device", type=str, help="Device to use (e.g. cpu, cuda:0, cuda:1, etc.)")
+     parser.add_argument(
+         "--stable_unclip",
+         type=str,
+         default=None,
+         required=False,
+         help="Set if this is a stable unCLIP model. One of 'txt2img' or 'img2img'.",
+     )
+     parser.add_argument(
+         "--stable_unclip_prior",
+         type=str,
+         default=None,
+         required=False,
+         help="Set if this is a stable unCLIP txt2img model. Selects which prior to use. If `--stable_unclip` is set to `txt2img`, the karlo prior (https://huggingface.co/kakaobrain/karlo-v1-alpha/tree/main/prior) is selected by default.",
+     )
+     parser.add_argument(
+         "--clip_stats_path",
+         type=str,
+         help="Path to the clip stats file. Only required if the stable unclip model's config specifies `model.params.noise_aug_config.params.clip_stats_path`.",
+         required=False,
+     )
+     parser.add_argument(
+         "--controlnet", action="store_true", default=None, help="Set flag if this is a controlnet checkpoint."
+     )
+     parser.add_argument("--half", action="store_true", help="Save weights in half precision.")
+     parser.add_argument(
+         "--vae_path",
+         type=str,
+         default=None,
+         required=False,
+         help="Set to a path, hub id to an already converted vae to not convert it again.",
+     )
+     parser.add_argument(
+         "--pipeline_class_name",
+         type=str,
+         default=None,
+         required=False,
+         help="Specify the pipeline class name",
+     )
+
+     args = parser.parse_args()
+
+     if args.pipeline_class_name is not None:
+         library = importlib.import_module("diffusers")
+         class_obj = getattr(library, args.pipeline_class_name)
+         pipeline_class = class_obj
+     else:
+         pipeline_class = None
+
+     pipe = download_from_original_stable_diffusion_ckpt(
+         checkpoint_path=args.checkpoint_path,
+         original_config_file=args.original_config_file,
+         # config_files=args.config_files,
+         image_size=args.image_size,
+         prediction_type=args.prediction_type,
+         model_type=args.pipeline_type,
+         extract_ema=args.extract_ema,
+         scheduler_type=args.scheduler_type,
+         num_in_channels=args.num_in_channels,
+         upcast_attention=args.upcast_attention,
+         from_safetensors=args.from_safetensors,
+         device=args.device,
+         stable_unclip=args.stable_unclip,
+         stable_unclip_prior=args.stable_unclip_prior,
+         clip_stats_path=args.clip_stats_path,
+         controlnet=args.controlnet,
+         vae_path=args.vae_path,
+         pipeline_class=pipeline_class,
+     )
+
+     if args.half:
+         pipe.to(torch_dtype=torch.float16)
+
+     if args.controlnet:
+         # only save the controlnet model
+         pipe.controlnet.save_pretrained(args.dump_path, safe_serialization=args.to_safetensors)
+     else:
+         pipe.save_pretrained(args.dump_path, safe_serialization=args.to_safetensors)
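The `--pipeline_class_name` branch above resolves the pipeline class dynamically with `importlib` and `getattr`. The same lookup pattern can be sketched against a standard-library module, so the sketch runs without `diffusers` installed (the module and class names here are stand-ins):

```python
import importlib


def resolve_class(module_name: str, class_name: str):
    # Same pattern as convert.py: import the module by name,
    # then fetch the named class attribute from it.
    library = importlib.import_module(module_name)
    return getattr(library, class_name)


# Stand-in for resolve_class("diffusers", args.pipeline_class_name):
cls = resolve_class("collections", "OrderedDict")
print(cls.__name__)  # -> OrderedDict
```

A missing module raises `ModuleNotFoundError` and an unknown class raises `AttributeError`, which is also how the script fails on a bad `--pipeline_class_name`.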
RAVE-main/pretrained_models/.gitattributes ADDED
@@ -0,0 +1,34 @@
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ckpt filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.mlmodel filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
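Each pattern above routes matching paths through the LFS filter. For the simple `*.ext` globs, which filenames a pattern captures can be approximated with the standard library's `fnmatch` (note `fnmatch` does not implement gitattributes' `**` or leading-`*` path semantics exactly, and the filenames below are hypothetical):

```python
import fnmatch

# A few of the simple glob patterns from the .gitattributes above.
patterns = ["*.safetensors", "*.ckpt", "*.bin", "*.zip"]
files = ["v1-5-pruned.ckpt", "model.safetensors", "config.json"]

for name in files:
    lfs = any(fnmatch.fnmatch(name, p) for p in patterns)
    print(name, "-> LFS" if lfs else "-> plain text")
```

So checkpoint and tensor files land in LFS while small text files like configs stay as regular Git blobs.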
RAVE-main/scripts/run_experiment.py ADDED
@@ -0,0 +1,130 @@
+ import torch
+ import argparse
+ import os
+ import json
+ import sys
+ import datetime
+ import imageio  # Import imageio for MP4 saving
+ sys.path.append(os.getcwd())
+ from pipelines.sd_controlnet_rave import RAVE
+ from pipelines.sd_multicontrolnet_rave import RAVE_MultiControlNet
+ import utils.constants as const
+ import utils.video_grid_utils as vgu
+ import warnings
+ warnings.filterwarnings("ignore")
+ import numpy as np
+
+ def init_device():
+     """Initialize the device (CUDA if available, else CPU)."""
+     device_name = 'cuda' if torch.cuda.is_available() else 'cpu'
+     device = torch.device(device_name)
+     return device
+
+ def init_paths(input_ns, video_name, save_folder):
+     """Initialize paths for video processing based on video name and save folder."""
+     # Set save path directly to the video name (e.g., truck.mp4) under save_folder
+     save_dir = save_folder
+     os.makedirs(save_dir, exist_ok=True)
+     input_ns.save_path = os.path.join(save_dir, video_name)  # Use video_name directly as filename
+
+     # Set video path using the fixed base path and video name
+     input_ns.video_path = f'/home/wangjuntong/video_editing_dataset/all_sourse/{video_name}'
+
+     # Set Hugging Face ControlNet path based on preprocess_name
+     if '-' in input_ns.preprocess_name:
+         input_ns.hf_cn_path = [const.PREPROCESSOR_DICT[i] for i in input_ns.preprocess_name.split('-')]
+     else:
+         input_ns.hf_cn_path = const.PREPROCESSOR_DICT[input_ns.preprocess_name]
+     input_ns.hf_path = "runwayml/stable-diffusion-v1-5"
+
+     # Set inverse and control paths (though not used for saving)
+     input_ns.inverse_path = f'{const.GENERATED_DATA_PATH}/inverses/{video_name}/{input_ns.preprocess_name}_{input_ns.model_id}_{input_ns.grid_size}x{input_ns.grid_size}_{input_ns.pad}'
+     input_ns.control_path = f'{const.GENERATED_DATA_PATH}/controls/{video_name}/{input_ns.preprocess_name}_{input_ns.grid_size}x{input_ns.grid_size}_{input_ns.pad}'
+     os.makedirs(input_ns.control_path, exist_ok=True)
+     os.makedirs(input_ns.inverse_path, exist_ok=True)
+
+     return input_ns
+
+ def run(input_ns, video_name, positive_prompts, save_folder):
+     """Run the video editing process with the given parameters."""
+     if 'model_id' not in input_ns.__dict__:
+         input_ns.model_id = "None"
+     device = init_device()
+     input_ns = init_paths(input_ns, video_name, save_folder)
+
+     print(f"Save path: {input_ns.save_path}")
+
+     # Prepare video frames as a grid
+     input_ns.image_pil_list = vgu.prepare_video_to_grid(input_ns.video_path, input_ns.sample_size, input_ns.grid_size, input_ns.pad)
+     input_ns.sample_size = len(input_ns.image_pil_list)
+     print(f'Frame count: {len(input_ns.image_pil_list)}')
+
+     # Choose the appropriate ControlNet class
+     controlnet_class = RAVE_MultiControlNet if '-' in str(input_ns.controlnet_conditioning_scale) else RAVE
+     CN = controlnet_class(device)
+
+     # Initialize models
+     CN.init_models(input_ns.hf_cn_path, input_ns.hf_path, input_ns.preprocess_name, input_ns.model_id)
+
+     input_dict = vars(input_ns)
+
+     # Run the editing process
+     start_time = datetime.datetime.now()
+     if '-' in str(input_ns.controlnet_conditioning_scale):
+         res_vid, control_vid_1, control_vid_2 = CN(input_dict)
+     else:
+         res_vid, control_vid = CN(input_dict)
+     end_time = datetime.datetime.now()
+
+     # Convert PIL images to numpy arrays for imageio
+     res_vid_np = [np.array(img) for img in res_vid]
+
+     # Save the result video as MP4
+     imageio.mimwrite(input_ns.save_path, res_vid_np, format='mp4', fps=30, quality=8)
+
+ if __name__ == '__main__':
+     # Parse command-line argument for JSONL file path
+     parser = argparse.ArgumentParser(description='Batch video editing with JSONL input.')
+     parser.add_argument('--jsonl_path', type=str, required=True, help='Path to the JSONL file containing video info')
+     args = parser.parse_args()
+
+     # Fixed parameters
+     fixed_params = {
+         'preprocess_name': 'depth_zoe',
+         'batch_size': 4,
+         'batch_size_vae': 1,
+         'cond_step_start': 0.0,
+         'controlnet_conditioning_scale': 1.0,
+         'controlnet_guidance_end': 1.0,
+         'controlnet_guidance_start': 0.0,
+         'give_control_inversion': True,
+         'grid_size': 3,
+         'sample_size': -1,
+         'pad': 1,
+         'guidance_scale': 7.5,
+         'inversion_prompt': '',
+         'is_ddim_inversion': True,
+         'is_shuffle': True,
+         'negative_prompts': '',
+         'num_inference_steps': 50,
+         'num_inversion_step': 50,
+         'seed': 0,
+         'model_id': 'None'
+     }
+
+     # Read and process each line in the JSONL file
+     with open(args.jsonl_path, 'r') as f:
+         for line in f:
+             data = json.loads(line)
+             video_name = data['video']  # Use video key directly as filename (e.g., "truck.mp4")
+             positive_prompts = data['edit_prompt']
+             save_folder = f'/home/wangjuntong/RAVE-main/outputs/lnk_painting/{video_name.rsplit(".", 1)[0]}'  # Folder named after video without extension
+
+             # Create input namespace with fixed and dynamic parameters
+             input_ns = argparse.Namespace(**fixed_params)
+             input_ns.positive_prompts = positive_prompts
+             input_ns.video_name = video_name
+
+             # Run the editing process
+             run(input_ns, video_name, positive_prompts, save_folder)
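The `__main__` block expects one JSON object per line with `video` and `edit_prompt` keys. A minimal sketch of the per-line handling, including the extension-stripping used to name the output folder (the record values here are hypothetical):

```python
import json

# One hypothetical JSONL record of the shape run_experiment.py reads.
line = '{"video": "truck.mp4", "edit_prompt": "a truck driving through snow"}'

data = json.loads(line)
video_name = data["video"]
positive_prompts = data["edit_prompt"]
# The save folder is named after the video without its extension,
# matching the script's save_folder construction.
stem = video_name.rsplit(".", 1)[0]
print(stem)  # -> truck
```

A record missing either key raises `KeyError`, so each line of the JSONL file must carry both fields.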
vid2vid-zero-main/.gitignore ADDED
@@ -0,0 +1,175 @@
+ # custom dirs
+ checkpoints/
+ outputs/
+
+ # Initially taken from Github's Python gitignore files
+
+ # Byte-compiled / optimized / DLL files
+ __pycache__/
+ *.py[cod]
+ *$py.class
+
+ # C extensions
+ *.so
+
+ # tests and logs
+ tests/fixtures/cached_*_text.txt
+ logs/
+ lightning_logs/
+ lang_code_data/
+
+ # Distribution / packaging
+ .Python
+ build/
+ develop-eggs/
+ dist/
+ downloads/
+ eggs/
+ .eggs/
+ lib/
+ lib64/
+ parts/
+ sdist/
+ var/
+ wheels/
+ *.egg-info/
+ .installed.cfg
+ *.egg
+ MANIFEST
+
+ # PyInstaller
+ # Usually these files are written by a python script from a template
+ # before PyInstaller builds the exe, so as to inject date/other infos into it.
+ *.manifest
+ *.spec
+
+ # Installer logs
+ pip-log.txt
+ pip-delete-this-directory.txt
+
+ # Unit test / coverage reports
+ htmlcov/
+ .tox/
+ .nox/
+ .coverage
+ .coverage.*
+ .cache
+ nosetests.xml
+ coverage.xml
+ *.cover
+ .hypothesis/
+ .pytest_cache/
+
+ # Translations
+ *.mo
+ *.pot
+
+ # Django stuff:
+ *.log
+ local_settings.py
+ db.sqlite3
+
+ # Flask stuff:
+ instance/
+ .webassets-cache
+
+ # Scrapy stuff:
+ .scrapy
+
+ # Sphinx documentation
+ docs/_build/
+
+ # PyBuilder
+ target/
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # IPython
+ profile_default/
+ ipython_config.py
+
+ # pyenv
+ .python-version
+
+ # celery beat schedule file
+ celerybeat-schedule
+
+ # SageMath parsed files
+ *.sage.py
+
+ # Environments
+ .env
+ .venv
+ env/
+ venv/
+ ENV/
+ env.bak/
+ venv.bak/
+
+ # Spyder project settings
+ .spyderproject
+ .spyproject
+
+ # Rope project settings
+ .ropeproject
+
+ # mkdocs documentation
+ /site
+
+ # mypy
+ .mypy_cache/
+ .dmypy.json
+ dmypy.json
+
+ # Pyre type checker
+ .pyre/
+
+ # vscode
+ .vs
+ .vscode
+
+ # Pycharm
+ .idea
+
+ # TF code
+ tensorflow_code
+
+ # Models
+ proc_data
+
+ # examples
+ runs
+ /runs_old
+ /wandb
+ /examples/runs
+ /examples/**/*.args
+ /examples/rag/sweep
+
+ # data
+ /data
+ serialization_dir
+
+ # emacs
+ *.*~
+ debug.env
+
+ # vim
+ .*.swp
+
+ # ctags
+ tags
+
+ # pre-commit
+ .pre-commit*
+
+ # .lock
+ *.lock
+
+ # DS_Store (MacOS)
+ .DS_Store
+
+ # RL pipelines may produce mp4 outputs
+ *.mp4
+
+ # dependencies
+ /transformers
vid2vid-zero-main/README.md ADDED
@@ -0,0 +1,152 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
+
+ <div align="center">
+
+ <h1>vid2vid-zero for Zero-Shot Video Editing</h1>
+
+ <h3><a href="https://arxiv.org/abs/2303.17599">Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models</a></h3>
+
+ [Wen Wang](https://scholar.google.com/citations?user=1ks0R04AAAAJ&hl=zh-CN)<sup>1*</sup>, &nbsp; [Kangyang Xie](https://github.com/felix-ky)<sup>1*</sup>, &nbsp; [Zide Liu](https://github.com/zideliu)<sup>1*</sup>, &nbsp; [Hao Chen](https://scholar.google.com.au/citations?user=FaOqRpcAAAAJ&hl=en)<sup>1</sup>, &nbsp; [Yue Cao](http://yue-cao.me/)<sup>2</sup>, &nbsp; [Xinlong Wang](https://www.xloong.wang/)<sup>2</sup>, &nbsp; [Chunhua Shen](https://cshen.github.io/)<sup>1</sup>
+
+ <sup>1</sup>[ZJU](https://www.zju.edu.cn/english/), &nbsp; <sup>2</sup>[BAAI](https://www.baai.ac.cn/english.html)
+
+ <br>
+
+ [![Hugging Face Demo](https://img.shields.io/badge/%F0%9F%A4%97%20Hugging%20Face-Spaces-blue)](https://huggingface.co/spaces/BAAI/vid2vid-zero)
+
+ <img src="docs/vid2vid-zero.png" />
+ <br>
+
+ </div>
+
+ We propose vid2vid-zero, a simple yet effective method for zero-shot video editing. vid2vid-zero leverages off-the-shelf image diffusion models and requires no training on any video. At its core are a null-text inversion module for text-to-video alignment, a cross-frame modeling module for temporal consistency, and a spatial regularization module for fidelity to the original video. Without any training, we exploit the dynamic nature of the attention mechanism to enable bi-directional temporal modeling at test time.
+ Experiments and analyses show promising results in editing attributes, subjects, places, etc., in real-world videos.
+
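The idea of bi-directional temporal modeling can be illustrated with a toy sketch of which frames each query frame attends to. This is an illustrative assumption about the attention layout (the function name and shape are hypothetical, not code from this repo): sparse-causal attention looks only backward, while a dense spatio-temporal variant attends to all frames in both directions.

```python
def attended_frames(query_frame: int, n_frames: int, bidirectional: bool = True) -> list[int]:
    """Return the frame indices whose keys/values `query_frame` attends to.

    Sparse-causal attention (as popularized by Tune-A-Video) attends only to
    the first and the previous frame. The dense variant sketched here attends
    to every frame, past and future, which is one way to realize
    bi-directional temporal modeling at test time without any training.
    """
    if bidirectional:
        return list(range(n_frames))  # all frames, both directions
    return sorted({0, max(query_frame - 1, 0)})  # first frame + previous frame
```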
+
+ ## Highlights
+
+ - Video editing with off-the-shelf image diffusion models.
+
+ - No training on any video.
+
+ - Promising results in editing attributes, subjects, places, etc., in real-world videos.
+
+ ## News
+ * [2023.4.12] Online Gradio demo is available [here](https://huggingface.co/spaces/BAAI/vid2vid-zero).
+ * [2023.4.11] Added a Gradio demo (runs locally).
+ * [2023.4.9] Code released!
+
+ ## Installation
+ ### Requirements
+
+ ```shell
+ pip install -r requirements.txt
+ ```
+ Installing [xformers](https://github.com/facebookresearch/xformers) is highly recommended for improved efficiency and speed on GPUs.
+
+ ### Weights
+
+ **[Stable Diffusion]** [Stable Diffusion](https://arxiv.org/abs/2112.10752) is a latent text-to-image diffusion model capable of generating photo-realistic images from any text input. Pre-trained Stable Diffusion models can be downloaded from [🤗 Hugging Face](https://huggingface.co) (e.g., [Stable Diffusion v1-4](https://huggingface.co/CompVis/stable-diffusion-v1-4), [v2-1](https://huggingface.co/stabilityai/stable-diffusion-2-1)). We use Stable Diffusion v1-4 by default.
+
+ ## Zero-shot testing
+
+ Simply run:
+
+ ```bash
+ accelerate launch test_vid2vid_zero.py --config path/to/config
+ ```
+
+ For example:
+ ```bash
+ accelerate launch test_vid2vid_zero.py --config configs/car-moving.yaml
+ ```
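When sweeping several configs, the launch command above is easy to script. A minimal sketch (the helper function is illustrative, not part of the repo):

```python
import shlex

def build_test_command(config_path: str) -> list[str]:
    """Assemble the zero-shot testing command shown above for one config file."""
    return ["accelerate", "launch", "test_vid2vid_zero.py", "--config", config_path]

# Hand the list to subprocess.run(...) to launch, or join it for display:
cmd = build_test_command("configs/car-moving.yaml")
print(shlex.join(cmd))  # accelerate launch test_vid2vid_zero.py --config configs/car-moving.yaml
```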
+
+ ## Gradio Demo
+ Launch the local demo built with [gradio](https://gradio.app/):
+ ```bash
+ python app.py
+ ```
+
+ Alternatively, try our online gradio demo [here](https://huggingface.co/spaces/BAAI/vid2vid-zero).
+
+ Note that we disable Null-text Inversion and enable fp16 for faster demo response.
+
+ ## Examples
+ <table class="center">
+ <tr>
+ <td style="text-align:center;"><b>Input Video</b></td>
+ <td style="text-align:center;"><b>Output Video</b></td>
+ <td style="text-align:center;"><b>Input Video</b></td>
+ <td style="text-align:center;"><b>Output Video</b></td>
+ </tr>
+
+ <tr>
+ <td width=25% style="text-align:center;color:gray;">"A car is moving on the road"</td>
+ <td width=25% style="text-align:center;">"A Porsche car is moving on the desert"</td>
+ <td width=25% style="text-align:center;color:gray;">"A car is moving on the road"</td>
+ <td width=25% style="text-align:center;">"A jeep car is moving on the snow"</td>
+ </tr>
+
+ <tr>
+ <td colspan="2"><img src="examples/jeep-moving_Porsche.gif"></td>
+ <td colspan="2"><img src="examples/jeep-moving_snow.gif"></td>
+ </tr>
+
+ <tr>
+ <td width=25% style="text-align:center;color:gray;">"A man is running"</td>
+ <td width=25% style="text-align:center;">"Stephen Curry is running in Time Square"</td>
+ <td width=25% style="text-align:center;color:gray;">"A man is running"</td>
+ <td width=25% style="text-align:center;">"A man is running in New York City"</td>
+ </tr>
+
+ <tr>
+ <td colspan="2"><img src="examples/man-running_stephen.gif"></td>
+ <td colspan="2"><img src="examples/man-running_newyork.gif"></td>
+ </tr>
+
+ <tr>
+ <td width=25% style="text-align:center;color:gray;">"A child is riding a bike on the road"</td>
+ <td width=25% style="text-align:center;">"A child is riding a bike on the flooded road"</td>
+ <td width=25% style="text-align:center;color:gray;">"A child is riding a bike on the road"</td>
+ <td width=25% style="text-align:center;">"A lego child is riding a bike on the road"</td>
+ </tr>
+
+ <tr>
+ <td colspan="2"><img src="examples/child-riding_flooded.gif"></td>
+ <td colspan="2"><img src="examples/child-riding_lego.gif"></td>
+ </tr>
+
+ <tr>
+ <td width=25% style="text-align:center;color:gray;">"A car is moving on the road"</td>
+ <td width=25% style="text-align:center;">"A car is moving on the snow"</td>
+ <td width=25% style="text-align:center;color:gray;">"A car is moving on the road"</td>
+ <td width=25% style="text-align:center;">"A jeep car is moving on the desert"</td>
+ </tr>
+
+ <tr>
+ <td colspan="2"><img src="examples/red-moving_snow.gif"></td>
+ <td colspan="2"><img src="examples/red-moving_desert.gif"></td>
+ </tr>
+ </table>
134
+
135
+ ## Citation
136
+
137
+ ```
138
+ @article{vid2vid-zero,
139
+ title={Zero-Shot Video Editing Using Off-The-Shelf Image Diffusion Models},
140
+ author={Wang, Wen and Xie, kangyang and Liu, Zide and Chen, Hao and Cao, Yue and Wang, Xinlong and Shen, Chunhua},
141
+ journal={arXiv preprint arXiv:2303.17599},
142
+ year={2023}
143
+ }
144
+ ```
145
+
146
+ ## Acknowledgement
147
+ [Tune-A-Video](https://github.com/showlab/Tune-A-Video), [diffusers](https://github.com/huggingface/diffusers), [prompt-to-prompt](https://github.com/google/prompt-to-prompt).
148
+
149
+ ## Contact
150
+
151
+ **We are hiring** at all levels at BAAI Vision Team, including full-time researchers, engineers and interns.
152
+ If you are interested in working with us on **foundation model, visual perception and multimodal learning**, please contact [Xinlong Wang](https://www.xloong.wang/) (`wangxinlong@baai.ac.cn`) and [Yue Cao](http://yue-cao.me/) (`caoyue@baai.ac.cn`).
vid2vid-zero-main/app.py ADDED
@@ -0,0 +1,66 @@
+ #!/usr/bin/env python
+ # Most code is from https://huggingface.co/spaces/Tune-A-Video-library/Tune-A-Video-Training-UI
+
+ from __future__ import annotations
+
+ import os
+ from subprocess import getoutput
+
+ import gradio as gr
+ import torch
+
+ from gradio_demo.app_running import create_demo
+ from gradio_demo.runner import Runner
+
+ TITLE = '# [vid2vid-zero](https://github.com/baaivision/vid2vid-zero)'
+
+ ORIGINAL_SPACE_ID = 'BAAI/vid2vid-zero'
+ SPACE_ID = os.getenv('SPACE_ID', ORIGINAL_SPACE_ID)
+ GPU_DATA = getoutput('nvidia-smi')
+
+ if os.getenv('SYSTEM') == 'spaces' and SPACE_ID != ORIGINAL_SPACE_ID:
+     SETTINGS = f'<a href="https://huggingface.co/spaces/{SPACE_ID}/settings">Settings</a>'
+ else:
+     SETTINGS = 'Settings'
+
+ CUDA_NOT_AVAILABLE_WARNING = f'''## Attention - Running on CPU.
+ <center>
+ You can assign a GPU in the {SETTINGS} tab if you are running this on HF Spaces.
+ You can use "T4 small/medium" to run this demo.
+ </center>
+ '''
+
+ HF_TOKEN_NOT_SPECIFIED_WARNING = f'''The environment variable `HF_TOKEN` is not specified. Feel free to specify your Hugging Face token with write permission if you don't want to manually provide it for every run.
+ <center>
+ You can check and create your Hugging Face tokens <a href="https://huggingface.co/settings/tokens" target="_blank">here</a>.
+ You can specify environment variables in the "Repository secrets" section of the {SETTINGS} tab.
+ </center>
+ '''
+
+ HF_TOKEN = os.getenv('HF_TOKEN')
+
+
+ def show_warning(warning_text: str) -> gr.Blocks:
+     with gr.Blocks() as demo:
+         with gr.Box():
+             gr.Markdown(warning_text)
+     return demo
+
+
+ pipe = None
+ runner = Runner(HF_TOKEN)
+
+ with gr.Blocks(css='gradio_demo/style.css') as demo:
+     if not torch.cuda.is_available():
+         show_warning(CUDA_NOT_AVAILABLE_WARNING)
+
+     gr.Markdown(TITLE)
+     with gr.Tabs():
+         with gr.TabItem('Zero-shot Testing'):
+             create_demo(runner, pipe)
+
+     if not HF_TOKEN:
+         show_warning(HF_TOKEN_NOT_SPECIFIED_WARNING)
+
+ demo.queue(max_size=1).launch(share=False)
+ demo.queue(max_size=1).launch(share=False)
vid2vid-zero-main/checkpoints/.gitattributes ADDED
+ *.7z filter=lfs diff=lfs merge=lfs -text
+ *.arrow filter=lfs diff=lfs merge=lfs -text
+ *.bin filter=lfs diff=lfs merge=lfs -text
+ *.bz2 filter=lfs diff=lfs merge=lfs -text
+ *.ftz filter=lfs diff=lfs merge=lfs -text
+ *.gz filter=lfs diff=lfs merge=lfs -text
+ *.h5 filter=lfs diff=lfs merge=lfs -text
+ *.joblib filter=lfs diff=lfs merge=lfs -text
+ *.lfs.* filter=lfs diff=lfs merge=lfs -text
+ *.model filter=lfs diff=lfs merge=lfs -text
+ *.msgpack filter=lfs diff=lfs merge=lfs -text
+ *.npy filter=lfs diff=lfs merge=lfs -text
+ *.npz filter=lfs diff=lfs merge=lfs -text
+ *.onnx filter=lfs diff=lfs merge=lfs -text
+ *.ot filter=lfs diff=lfs merge=lfs -text
+ *.parquet filter=lfs diff=lfs merge=lfs -text
+ *.pb filter=lfs diff=lfs merge=lfs -text
+ *.pickle filter=lfs diff=lfs merge=lfs -text
+ *.pkl filter=lfs diff=lfs merge=lfs -text
+ *.pt filter=lfs diff=lfs merge=lfs -text
+ *.pth filter=lfs diff=lfs merge=lfs -text
+ *.rar filter=lfs diff=lfs merge=lfs -text
+ saved_model/**/* filter=lfs diff=lfs merge=lfs -text
+ *.tar.* filter=lfs diff=lfs merge=lfs -text
+ *.tflite filter=lfs diff=lfs merge=lfs -text
+ *.tgz filter=lfs diff=lfs merge=lfs -text
+ *.wasm filter=lfs diff=lfs merge=lfs -text
+ *.xz filter=lfs diff=lfs merge=lfs -text
+ *.zip filter=lfs diff=lfs merge=lfs -text
+ *.zst filter=lfs diff=lfs merge=lfs -text
+ *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.safetensors filter=lfs diff=lfs merge=lfs -text
vid2vid-zero-main/configs/Cartoon_kangaroos.yaml ADDED
@@ -0,0 +1,37 @@
+ pretrained_model_path: checkpoints
+ output_dir: "/home/wangjuntong/vid2vid-zero-main/outputs/cartoon_kangaroo/"
+ input_data:
+   video_path: "/home/wangjuntong/vid2vid-zero-main/AI_video/A cartoon kangaroo disco dances.mp4"
+   prompt: A cartoon kangaroo disco dances.
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 1
+ validation_data:
+   prompts:
+     - A cartoon robot disco dances.
+     - A man disco dances.
+     - A cartoon man disco dances.
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/black-swan.yaml ADDED
@@ -0,0 +1,36 @@
+ pretrained_model_path: checkpoints/stable-diffusion-v1-4
+ output_dir: outputs/black-swan
+ input_data:
+   video_path: data/black-swan.mp4
+   prompt: a black swan is swimming on the water
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 4
+ validation_data:
+   prompts:
+     - a black swan is swimming on the water, Van Gogh style
+     - a white swan is swimming on the water
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/brown-bear.yaml ADDED
@@ -0,0 +1,37 @@
+ pretrained_model_path: checkpoints/stable-diffusion-v1-4
+ output_dir: outputs/brown-bear
+ input_data:
+   video_path: data/brown-bear.mp4
+   prompt: a brown bear is sitting on the ground
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 1
+ validation_data:
+   prompts:
+     - a brown bear is sitting on the grass
+     - a black bear is sitting on the grass
+     - a polar bear is sitting on the ground
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/car-moving.yaml ADDED
@@ -0,0 +1,37 @@
+ pretrained_model_path: checkpoints
+ output_dir: outputs/car-moving
+ input_data:
+   video_path: data/car-moving.mp4
+   prompt: a car is moving on the road
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 1
+ validation_data:
+   prompts:
+     - a car is moving on the snow
+     - a jeep car is moving on the road
+     - a jeep car is moving on the desert
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
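The config files all share the same schema. A hedged sketch of that schema as Python defaults (the dataclasses below are illustrative mirrors of the YAML keys, not code from the repo):

```python
from dataclasses import dataclass, field

@dataclass
class InputData:
    """Mirrors the `input_data` block of the YAML configs."""
    video_path: str
    prompt: str
    n_sample_frames: int = 8
    width: int = 512
    height: int = 512
    sample_start_idx: int = 0
    sample_frame_rate: int = 1

@dataclass
class ValidationData:
    """Mirrors the `validation_data` block, including the null-text inversion args."""
    prompts: list = field(default_factory=list)
    video_length: int = 8
    num_inference_steps: int = 50
    guidance_scale: float = 7.5
    num_inv_steps: int = 50
    use_null_inv: bool = True
    null_inner_steps: int = 1
    null_base_lr: float = 1e-2
    null_uncond_ratio: float = -0.5
    null_normal_infer: bool = True

data = InputData(video_path="data/car-moving.mp4", prompt="a car is moving on the road")
val = ValidationData(prompts=["a car is moving on the snow"])
```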
vid2vid-zero-main/configs/car-turn.yaml ADDED
@@ -0,0 +1,39 @@
+ pretrained_model_path: checkpoints/stable-diffusion-v1-4
+ output_dir: "outputs/car-turn"
+
+ input_data:
+   video_path: "data/car-turn.mp4"
+   prompt: "a jeep car is moving on the road"
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 6
+
+ validation_data:
+   prompts:
+     - "a jeep car is moving on the beach"
+     - "a jeep car is moving on the snow"
+     - "a Porsche car is moving on the desert"
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/child-riding.yaml ADDED
@@ -0,0 +1,40 @@
+
+ pretrained_model_path: checkpoints/stable-diffusion-v1-4
+ output_dir: outputs/child-riding
+
+ input_data:
+   video_path: data/child-riding.mp4
+   prompt: "a child is riding a bike on the road"
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 1
+
+ validation_data:
+   # inv_latent: "outputs_2d/car-turn/inv_latents/ddim_latent-0.pt"  # latent inverted w/o SCAttn!
+   prompts:
+     - a lego child is riding a bike on the road
+     - a child is riding a bike on the flooded road
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/cow-walking.yaml ADDED
@@ -0,0 +1,37 @@
+ pretrained_model_path: checkpoints/stable-diffusion-v1-4
+ output_dir: outputs/cow-walking
+ input_data:
+   video_path: data/cow-walking.mp4
+   prompt: a cow is walking on the grass
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 2
+ validation_data:
+   prompts:
+     - a lion is walking on the grass
+     - a dog is walking on the grass
+     - a cow is walking on the snow
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/dog-walking.yaml ADDED
@@ -0,0 +1,35 @@
+ pretrained_model_path: checkpoints/stable-diffusion-v1-4
+ output_dir: outputs/dog_walking
+ input_data:
+   video_path: data/dog-walking.mp4
+   prompt: a dog is walking on the ground
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 15
+   sample_frame_rate: 3
+ validation_data:
+   prompts:
+     - a dog is walking on the ground, Van Gogh style
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/horse-running.yaml ADDED
@@ -0,0 +1,36 @@
+ pretrained_model_path: checkpoints/stable-diffusion-v1-4
+ output_dir: outputs/horse-running
+ input_data:
+   video_path: data/horse-running.mp4
+   prompt: a horse is running on the beach
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 2
+ validation_data:
+   prompts:
+     - a dog is running on the beach
+     - a dog is running on the desert
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/lion-roaring.yaml ADDED
@@ -0,0 +1,37 @@
+ pretrained_model_path: checkpoints/stable-diffusion-v1-4
+ output_dir: ./outputs/lion-roaring
+ input_data:
+   video_path: data/lion-roaring.mp4
+   prompt: a lion is roaring
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 2
+ validation_data:
+   prompts:
+     - a lego lion is roaring
+     - a wolf is roaring, anime style
+     - a lion is roaring, anime style
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/man-running.yaml ADDED
@@ -0,0 +1,37 @@
+ pretrained_model_path: checkpoints/stable-diffusion-v1-4
+ output_dir: outputs/man-running
+ input_data:
+   video_path: data/man-running.mp4
+   prompt: a man is running
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 25
+   sample_frame_rate: 2
+ validation_data:
+   prompts:
+     - Stephen Curry is running in Time Square
+     - a man is running, Van Gogh style
+     - a man is running in New York City
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/man-surfing.yaml ADDED
@@ -0,0 +1,36 @@
+ pretrained_model_path: checkpoints/stable-diffusion-v1-4
+ output_dir: outputs/man-surfing
+ input_data:
+   video_path: data/man-surfing.mp4
+   prompt: a man is surfing
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 3
+ validation_data:
+   prompts:
+     - a boy is surfing in the desert
+     - Iron Man is surfing
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/plane.yaml ADDED
@@ -0,0 +1,37 @@
+ pretrained_model_path: checkpoints
+ output_dir: "/home/wangjuntong/vid2vid-zero-main/outputs/aircraft-landing/"
+ input_data:
+   video_path: "/home/wangjuntong/video_editing_dataset/real_video/aircraft-landing.mp4"
+   prompt: "A plane is landing."
+   n_sample_frames: 24
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 1
+ validation_data:
+   prompts:
+     - A helicopter is landing on a helipad.
+     - A small private plane is landing at a rural airfield.
+     - A black plane is landing at a busy airport.
+   video_length: 24
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0
vid2vid-zero-main/configs/rabbit-watermelon.yaml ADDED
@@ -0,0 +1,40 @@
+ pretrained_model_path: checkpoints/stable-diffusion-v1-4
+ output_dir: "outputs/rabbit-watermelon"
+
+ input_data:
+   video_path: "data/rabbit-watermelon.mp4"
+   prompt: "a rabbit is eating a watermelon"
+   n_sample_frames: 8
+   width: 512
+   height: 512
+   sample_start_idx: 0
+   sample_frame_rate: 6
+
+ validation_data:
+   prompts:
+     - "a tiger is eating a watermelon"
+     - "a rabbit is eating an orange"
+     - "a rabbit is eating a pizza"
+     - "a puppy is eating an orange"
+   video_length: 8
+   width: 512
+   height: 512
+   num_inference_steps: 50
+   guidance_scale: 7.5
+   num_inv_steps: 50
+   # args for null-text inv
+   use_null_inv: True
+   null_inner_steps: 1
+   null_base_lr: 1e-2
+   null_uncond_ratio: -0.5
+   null_normal_infer: True
+
+ input_batch_size: 1
+ seed: 33
+ mixed_precision: "no"
+ gradient_checkpointing: True
+ enable_xformers_memory_efficient_attention: True
+ # test-time adaptation
+ use_sc_attn: True
+ use_st_attn: True
+ st_attn_idx: 0