diff --git a/competitors_inference_code/DemoFusion/LICENSE b/competitors_inference_code/DemoFusion/LICENSE
new file mode 100644
index 0000000000000000000000000000000000000000..ab96c30073b994860e58125b05222cfb9e667d99
--- /dev/null
+++ b/competitors_inference_code/DemoFusion/LICENSE
@@ -0,0 +1,21 @@
+MIT License
+
+Copyright (c) 2025 PRIS-CV: Computer Vision Group
+
+Permission is hereby granted, free of charge, to any person obtaining a copy
+of this software and associated documentation files (the "Software"), to deal
+in the Software without restriction, including without limitation the rights
+to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
+copies of the Software, and to permit persons to whom the Software is
+furnished to do so, subject to the following conditions:
+
+The above copyright notice and this permission notice shall be included in all
+copies or substantial portions of the Software.
+
+THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
+IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
+FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
+AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
+LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
+OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
+SOFTWARE.
diff --git a/competitors_inference_code/DemoFusion/README.md b/competitors_inference_code/DemoFusion/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..1dd7e6a2b3b80e0338729b19e335325da144aab2
--- /dev/null
+++ b/competitors_inference_code/DemoFusion/README.md
@@ -0,0 +1,154 @@
+# DemoFusion
+[Project Page](https://ruoyidu.github.io/demofusion/demofusion.html)
+[Paper](https://arxiv.org/pdf/2311.16973.pdf)
+[Replicate Demo](https://replicate.com/lucataco/demofusion)
+[Colab Demo](https://colab.research.google.com/github/camenduru/DemoFusion-colab/blob/main/DemoFusion_colab.ipynb)
+[HuggingFace Space](https://huggingface.co/spaces/radames/Enhance-This-DemoFusion-SDXL)
+
+Code release for "DemoFusion: Democratising High-Resolution Image Generation With No 💰"
+
+
+
+**Abstract**: High-resolution image generation with Generative Artificial Intelligence (GenAI) has immense potential but, due to the enormous capital investment required for training, it is increasingly centralised to a few large corporations, and hidden behind paywalls. This paper aims to democratise high-resolution GenAI by advancing the frontier of high-resolution generation while remaining accessible to a broad audience. We demonstrate that existing Latent Diffusion Models (LDMs) possess untapped potential for higher-resolution image generation. Our novel DemoFusion framework seamlessly extends open-source GenAI models, employing Progressive Upscaling, Skip Residual, and Dilated Sampling mechanisms to achieve higher-resolution image generation. The progressive nature of DemoFusion requires more passes, but the intermediate results can serve as "previews", facilitating rapid prompt iteration.
+
+# News
+- **2024.02.27**: 🔥 DemoFusion has been accepted to CVPR'24!
+- **2023.12.15**: 🚀 A [ComfyUI Demofusion Custom Node](https://github.com/deroberon/demofusion-comfyui) is available! Thanks to [Andre](https://github.com/deroberon) for the implementation!
+- **2023.12.12**: ✨ DemoFusion with ControlNet is available now! Check it out at `pipeline_demofusion_sdxl_controlnet`! The local [Gradio Demo](https://github.com/PRIS-CV/DemoFusion#DemoFusionControlNet-with-local-Gradio-demo) is also available.
+- **2023.12.10**: ✨ Image2Image is supported by `pipeline_demofusion_sdxl` now! The local [Gradio Demo](https://github.com/PRIS-CV/DemoFusion#Image2Image-with-local-Gradio-demo) is also available.
+- **2023.12.08**: 🚀 A [HuggingFace Demo](https://huggingface.co/spaces/radames/Enhance-This-DemoFusion-SDXL) for Img2Img is now available! Thanks to [Radamés](https://github.com/radames) for the implementation and [diffusers](https://huggingface.co/docs/diffusers/index) for the support!
+- **2023.12.07**: 🚀 Added a [Colab demo](https://colab.research.google.com/github/camenduru/DemoFusion-colab/blob/main/DemoFusion_colab.ipynb). Check it out! Thanks to [camenduru](https://github.com/camenduru) for the implementation!
+- **2023.12.06**: ✨ The local [Gradio Demo](https://github.com/PRIS-CV/DemoFusion#Text2Image-with-local-Gradio-demo) is now available! Better interaction and presentation!
+- **2023.12.04**: ✨ A [low-vram version](https://github.com/PRIS-CV/DemoFusion#Text2Image-on-Windows-with-8-GB-of-VRAM) of DemoFusion is available! Thanks to [klimaleksus](https://github.com/klimaleksus) for the implementation!
+- **2023.12.01**: 🚀 Integrated into [Replicate](https://replicate.com/explore). Check out the [online demo](https://replicate.com/lucataco/demofusion)! Thanks to [Luis C.](https://github.com/lucataco) for the implementation!
+- **2023.11.29**: 💰 `pipeline_demofusion_sdxl` is released.
+
+# Usage
+## A quick try with integrated demos
+- HuggingFace Space: Try Text2Image generation at [DemoFusion](https://huggingface.co/spaces/fffiloni/DemoFusion) and Image2Image enhancement at [Enhance-This-DemoFusion-SDXL](https://huggingface.co/spaces/radames/Enhance-This-DemoFusion-SDXL).
+- Colab: Try Text2Image generation at [DemoFusion_colab](https://colab.research.google.com/github/camenduru/DemoFusion-colab/blob/main/DemoFusion_colab.ipynb) and Image2Image enhancement at [DemoFusion_img2img_colab](https://colab.research.google.com/github/camenduru/DemoFusion-colab/blob/main/DemoFusion_img2img_colab.ipynb).
+- Replicate: Try Text2Image generation at [lucataco/demofusion](https://replicate.com/lucataco/demofusion) and Image2Image enhancement at [lucataco/demofusion-enhance](https://replicate.com/lucataco/demofusion-enhance).
+
+## Starting with our code
+### Hyper-parameters
+- `view_batch_size` (`int`, defaults to 16):
+ The batch size for multiple denoising paths. Typically, a larger batch size can result in higher efficiency but comes with increased GPU memory requirements.
+- `stride` (`int`, defaults to 64):
+ The stride of moving local patches. A smaller stride is better for alleviating seam issues, but it also introduces additional computational overhead and inference time.
+- `cosine_scale_1` (`float`, defaults to 3):
+  Controls the decay rate of the skip residual. A smaller value results in better consistency with the low-resolution result, but it may lead to more pronounced upsampling noise. Please refer to Appendix C in the DemoFusion paper.
+- `cosine_scale_2` (`float`, defaults to 1):
+  Controls the decay rate of dilated sampling. A smaller value can better address the repetition issue, but it may lead to grainy images. For specific impacts, please refer to Appendix C in the DemoFusion paper.
+- `cosine_scale_3` (`float`, defaults to 1):
+  Controls the decay rate of the Gaussian filter. A smaller value results in less grainy images, but it may lead to over-smoothed images. Please refer to Appendix C in the DemoFusion paper.
+- `sigma` (`float`, defaults to 1):
+  The standard deviation of the Gaussian filter. A larger sigma promotes the global guidance of dilated sampling, but it risks over-smoothing.
+- `multi_decoder` (`bool`, defaults to True):
+  Determines whether to use a tiled decoder. Generally, a tiled decoder becomes necessary when the resolution exceeds 3072*3072 on an RTX 3090 GPU.
+- `show_image` (`bool`, defaults to False):
+  Determines whether to show intermediate results during generation.
+
+### Text2Image (will take about 17 GB of VRAM)
+- Set up the dependencies as:
+```
+conda create -n demofusion python=3.9
+conda activate demofusion
+pip install -r requirements.txt
+```
+- Download `pipeline_demofusion_sdxl.py` and run it as follows. A use case can be found in `demo.ipynb`.
+```
+from pipeline_demofusion_sdxl import DemoFusionSDXLPipeline
+import torch
+
+model_ckpt = "stabilityai/stable-diffusion-xl-base-1.0"
+pipe = DemoFusionSDXLPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16)
+pipe = pipe.to("cuda")
+
+prompt = "Envision a portrait of an elderly woman, her face a canvas of time, framed by a headscarf with muted tones of rust and cream. Her eyes, blue like faded denim. Her attire, simple yet dignified."
+negative_prompt = "blurry, ugly, duplicate, poorly drawn, deformed, mosaic"
+
+images = pipe(prompt, negative_prompt=negative_prompt,
+ height=3072, width=3072, view_batch_size=16, stride=64,
+ num_inference_steps=50, guidance_scale=7.5,
+ cosine_scale_1=3, cosine_scale_2=1, cosine_scale_3=1, sigma=0.8,
+ multi_decoder=True, show_image=True
+ )
+
+for i, image in enumerate(images):
+ image.save('image_' + str(i) + '.png')
+```
+- ⚠️ When you have enough VRAM (e.g., generating 2048*2048 images on hardware with more than 18 GB of VRAM), you can set `multi_decoder=False`, which makes the decoding process faster.
+- Please feel free to try different prompts and resolutions.
+- Default hyper-parameters are recommended, but they may not be optimal for all cases. For specific impacts of each hyper-parameter, please refer to Appendix C in the DemoFusion paper.
+- The code was cleaned before the release. If you encounter any issues, please contact us.
+
+### Text2Image on Windows with 8 GB of VRAM
+
+- Set up the environment as:
+
+```
+cmd
+git clone "https://github.com/PRIS-CV/DemoFusion"
+cd DemoFusion
+python -m venv venv
+venv\Scripts\activate
+pip install -U "xformers==0.0.22.post7+cu118" --index-url https://download.pytorch.org/whl/cu118
+pip install "diffusers==0.21.4" "matplotlib==3.8.2" "transformers==4.35.2" "accelerate==0.25.0"
+```
+
+- Launch DemoFusion as follows. The use case can be found in `demo_lowvram.py`.
+
+```
+python
+from pipeline_demofusion_sdxl import DemoFusionSDXLPipeline
+
+import torch
+from diffusers.models import AutoencoderKL
+vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
+
+model_ckpt = "stabilityai/stable-diffusion-xl-base-1.0"
+pipe = DemoFusionSDXLPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16, vae=vae)
+pipe = pipe.to("cuda")
+
+prompt = "Envision a portrait of an elderly woman, her face a canvas of time, framed by a headscarf with muted tones of rust and cream. Her eyes, blue like faded denim. Her attire, simple yet dignified."
+negative_prompt = "blurry, ugly, duplicate, poorly drawn, deformed, mosaic"
+
+images = pipe(prompt, negative_prompt=negative_prompt,
+ height=2048, width=2048, view_batch_size=4, stride=64,
+ num_inference_steps=40, guidance_scale=7.5,
+ cosine_scale_1=3, cosine_scale_2=1, cosine_scale_3=1, sigma=0.8,
+ multi_decoder=True, show_image=False, lowvram=True
+ )
+
+for i, image in enumerate(images):
+ image.save('image_' + str(i) + '.png')
+```
+### Text2Image with local Gradio demo
+- Make sure you have installed `gradio` and `gradio_imageslider`.
+- Launch the local Gradio demo with `python gradio_demo.py` for better interaction and presentation!
+
+
+### Image2Image with local Gradio demo
+- Make sure you have installed `gradio` and `gradio_imageslider`.
+- Launch DemoFusion Image2Image by `python gradio_demo_img2img.py`.
+
+- ⚠️ Please note that, as a tuning-free framework, DemoFusion's Image2Image capability is strongly correlated with SDXL's training data distribution and will show a significant bias. An accurate prompt describing the content and style of the input also significantly improves performance. Have fun, and regard it as a side application of text+image-based generation.
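+
+- If you prefer to skip Gradio, below is a minimal Image2Image sketch mirroring what `gradio_demo_img2img.py` does internally (the input path `input.png` is a placeholder; pad non-square inputs to a square first, as the demo's `pad_image` does):
+```
+import torch
+from torchvision import transforms
+from PIL import Image
+from diffusers import AutoencoderKL
+from pipeline_demofusion_sdxl import DemoFusionSDXLPipeline
+
+# Preprocess the input to a 1024x1024 tensor normalized to [-1, 1], as the demo does.
+transform = transforms.Compose([
+    transforms.Resize((1024, 1024)),
+    transforms.ToTensor(),
+    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+])
+image_lr = transform(Image.open("input.png").convert("RGB")).unsqueeze(0).half().to("cuda")
+
+vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
+pipe = DemoFusionSDXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16)
+pipe = pipe.to("cuda")
+
+images = pipe("8k high definition, high details",
+              negative_prompt="blurry, ugly, duplicate, poorly drawn, deformed, mosaic",
+              image_lr=image_lr, height=2048, width=2048,
+              view_batch_size=4, stride=64, num_inference_steps=50, guidance_scale=7.5,
+              cosine_scale_1=3, cosine_scale_2=1, cosine_scale_3=1, sigma=0.8,
+              multi_decoder=True, show_image=False)
+
+images[-1].save('image2image_result.png')
+```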
+
+### DemoFusion+ControlNet with local Gradio demo
+- Make sure you have installed `gradio` and `gradio_imageslider`.
+- Launch DemoFusion+ControlNet Text2Image by `python gradio_demo_controlnet.py`.
+- Launch DemoFusion+ControlNet Image2Image by `python gradio_demo_controlnet_img2img.py`.
+
+## Citation
+If you find this paper useful in your research, please consider citing:
+```
+@inproceedings{du2024demofusion,
+ title={DemoFusion: Democratising High-Resolution Image Generation With No \$\$\$},
+ author={Du, Ruoyi and Chang, Dongliang and Hospedales, Timothy and Song, Yi-Zhe and Ma, Zhanyu},
+ booktitle={CVPR},
+ year={2024}
+}
+```
diff --git a/competitors_inference_code/DemoFusion/demo_lowvram.py b/competitors_inference_code/DemoFusion/demo_lowvram.py
new file mode 100644
index 0000000000000000000000000000000000000000..4dae36058cd2e8dbdeb6ad2629e33f57a2885b57
--- /dev/null
+++ b/competitors_inference_code/DemoFusion/demo_lowvram.py
@@ -0,0 +1,34 @@
+
+'''
+Installation on Windows for GPU with 8 Gb of VRAM and xformers:
+
+git clone "https://github.com/PRIS-CV/DemoFusion"
+cd DemoFusion
+python -m venv venv
+venv\Scripts\activate
+pip install -U "xformers==0.0.22.post7+cu118" --index-url https://download.pytorch.org/whl/cu118
+pip install "diffusers==0.21.4" "matplotlib==3.8.2" "transformers==4.35.2" "accelerate==0.25.0"
+'''
+
+from pipeline_demofusion_sdxl import DemoFusionSDXLPipeline
+
+import torch
+from diffusers.models import AutoencoderKL
+vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
+
+model_ckpt = "stabilityai/stable-diffusion-xl-base-1.0"
+pipe = DemoFusionSDXLPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16, vae=vae)
+pipe = pipe.to("cuda")
+
+prompt = "Envision a portrait of an elderly woman, her face a canvas of time, framed by a headscarf with muted tones of rust and cream. Her eyes, blue like faded denim. Her attire, simple yet dignified."
+negative_prompt = "blurry, ugly, duplicate, poorly drawn, deformed, mosaic"
+
+images = pipe(prompt, negative_prompt=negative_prompt,
+ height=2048, width=2048, view_batch_size=4, stride=64,
+ num_inference_steps=40, guidance_scale=7.5,
+ cosine_scale_1=3, cosine_scale_2=1, cosine_scale_3=1, sigma=0.8,
+ multi_decoder=True, show_image=False, lowvram=True
+ )
+
+for i, image in enumerate(images):
+ image.save('image_'+str(i)+'.png')
diff --git a/competitors_inference_code/DemoFusion/generate_demofusion_images.py b/competitors_inference_code/DemoFusion/generate_demofusion_images.py
new file mode 100644
index 0000000000000000000000000000000000000000..165d298f010a1ec5520e5fdbe13646c99b1cdd1d
--- /dev/null
+++ b/competitors_inference_code/DemoFusion/generate_demofusion_images.py
@@ -0,0 +1,176 @@
+#!/usr/bin/env python3
+"""Generate SDXL images for the selected validation prompts with DemoFusion."""
+
+from __future__ import annotations
+
+import csv
+import json
+import time
+from collections.abc import Sequence
+from pathlib import Path
+from typing import Any
+
+import torch
+from diffusers.models import AutoencoderKL
+from pipeline_demofusion_sdxl import DemoFusionSDXLPipeline
+
+
+NEGATIVE_PROMPT = "blurry, ugly, duplicate, poorly drawn face, deformed, mosaic, artifacts, bad limbs"
+DEFAULT_CSV = "/data/kazanplova/latent_vae_upscale_train/datasets/new_validation_dataset/original_openim/images/selected_validation_images.csv"
+DEFAULT_OUTPUT_DIR = "/data/kazanplova/latent_vae_upscale_train/datasets/new_validation_dataset/demofusion/images"
+STATISTICS_PATH = "/data/kazanplova/latent_vae_upscale_train/datasets/new_validation_dataset/demofusion/statistics.json"
+PRETRAINED_MODEL = "stabilityai/stable-diffusion-xl-base-1.0"
+VAE_REPO = "madebyollin/sdxl-vae-fp16-fix"
+CFG_SCALE = 7.5
+NUM_INFERENCE_STEPS = 40
+SEED = 42
+VIEW_BATCH_SIZE = 4
+STRIDE = 64
+COSINE_SCALE_1 = 3.0
+COSINE_SCALE_2 = 1.0
+COSINE_SCALE_3 = 1.0
+SIGMA = 0.8
+MULTI_DECODER = True
+SHOW_IMAGE = False
+LOW_VRAM = True
+RESOLUTIONS: dict[str, tuple[int, int]] = {
+ "4096px": (4096, 4096),
+ "2048px": (2048, 2048),
+ "1024px": (1024, 1024),
+ # "512px": (512, 512),
+}
+
+
+def load_prompts(csv_path: Path) -> list[tuple[str, str]]:
+ prompts: list[tuple[str, str]] = []
+ with csv_path.open("r", encoding="utf-8") as handle:
+ reader = csv.DictReader(handle)
+ for row in reader:
+ caption_raw = (row.get("gpt_caption") or "").strip()
+ if not caption_raw:
+ continue
+ try:
+ caption = json.loads(caption_raw)
+ except json.JSONDecodeError:
+ print(f"Skipping row with invalid JSON: {row.get('img_path')}")
+ continue
+ prompt = caption.get("sdxl")
+ if not prompt:
+ print(f"Skipping row without 'sdxl' prompt: {row.get('img_path')}")
+ continue
+ prompts.append((row.get("img_path", ""), prompt))
+ return prompts
+
+
+def build_pipeline() -> DemoFusionSDXLPipeline:
+ if not torch.cuda.is_available():
+ raise RuntimeError("CUDA is required to run this script.")
+
+ vae = AutoencoderKL.from_pretrained(VAE_REPO, torch_dtype=torch.float16)
+ pipe = DemoFusionSDXLPipeline.from_pretrained(
+ PRETRAINED_MODEL,
+ torch_dtype=torch.float16,
+ vae=vae,
+ ).to("cuda")
+ pipe.set_progress_bar_config(disable=True)
+ return pipe
+
+
+def get_first_image(result: Any) -> Any:
+ if hasattr(result, "images"):
+ images = result.images
+ elif isinstance(result, Sequence) and not isinstance(result, (str, bytes, bytearray)):
+ images = result
+ else:
+ images = [result]
+ if not images:
+ raise RuntimeError("DemoFusion pipeline returned no images.")
+ return images[0]
+
+
+def main() -> None:
+ csv_path = Path(DEFAULT_CSV)
+ output_dir = Path(DEFAULT_OUTPUT_DIR)
+ prompts = load_prompts(csv_path)
+ if not prompts:
+ raise SystemExit("No prompts were found in the CSV file.")
+
+ resolution_dirs = {name: output_dir / name for name in RESOLUTIONS}
+ for folder in resolution_dirs.values():
+ folder.mkdir(parents=True, exist_ok=True)
+
+ statistics_path = Path(STATISTICS_PATH)
+ stats_tracker = {
+ name: {"count": 0, "total_time": 0.0, "max_vram_bytes": 0}
+ for name in RESOLUTIONS
+ }
+
+ generator = torch.Generator(device="cuda").manual_seed(SEED)
+ pipe = build_pipeline()
+ device = torch.device("cuda")
+
+ for idx, (img_path, prompt) in enumerate(prompts):
+ filename = f"{idx}.png"
+ written_paths: list[str] = []
+
+ for name, (width, height) in RESOLUTIONS.items():
+ print(prompt)
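+            # Reset the CUDA peak-memory counter before generation so that
+            # torch.cuda.max_memory_allocated() below reports the peak VRAM of
+            # this resolution's run only.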
+ torch.cuda.synchronize(device)
+ torch.cuda.reset_peak_memory_stats(device)
+ start_time = time.perf_counter()
+
+ result = pipe(
+ prompt,
+ negative_prompt=NEGATIVE_PROMPT,
+ guidance_scale=CFG_SCALE,
+ num_inference_steps=NUM_INFERENCE_STEPS,
+ width=width,
+ height=height,
+ generator=generator,
+ view_batch_size=VIEW_BATCH_SIZE,
+ stride=STRIDE,
+ cosine_scale_1=COSINE_SCALE_1,
+ cosine_scale_2=COSINE_SCALE_2,
+ cosine_scale_3=COSINE_SCALE_3,
+ sigma=SIGMA,
+ multi_decoder=MULTI_DECODER,
+ show_image=SHOW_IMAGE,
+ lowvram=False,
+ )
+
+ image = get_first_image(result)
+
+ torch.cuda.synchronize(device)
+ elapsed = time.perf_counter() - start_time
+ vram_bytes = torch.cuda.max_memory_allocated(device)
+
+ stats = stats_tracker[name]
+ stats["count"] += 1
+ stats["total_time"] += elapsed
+ stats["max_vram_bytes"] = max(stats["max_vram_bytes"], vram_bytes)
+
+ output_path = resolution_dirs[name] / filename
+ image.save(output_path)
+ written_paths.append(str(output_path))
+
+ print(f"[{idx + 1}/{len(prompts)}] wrote {', '.join(written_paths)}")
+
+ statistics = {
+ "total_prompts": len(prompts),
+ "resolutions": {
+ name: {
+ "images": metrics["count"],
+ "mean_time_sec": (metrics["total_time"] / metrics["count"]) if metrics["count"] else 0.0,
+ "max_vram_mb": metrics["max_vram_bytes"] / (1024**2),
+ }
+ for name, metrics in stats_tracker.items()
+ },
+ }
+
+ statistics_path.parent.mkdir(parents=True, exist_ok=True)
+ statistics_path.write_text(json.dumps(statistics, indent=2))
+ print(f"Saved statistics to {statistics_path}")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/competitors_inference_code/DemoFusion/gradio_demo.py b/competitors_inference_code/DemoFusion/gradio_demo.py
new file mode 100644
index 0000000000000000000000000000000000000000..0b7308e4bd6ea43e036852d736fdee5d80270fd1
--- /dev/null
+++ b/competitors_inference_code/DemoFusion/gradio_demo.py
@@ -0,0 +1,46 @@
+import gradio as gr
+from pipeline_demofusion_sdxl import DemoFusionSDXLPipeline
+from gradio_imageslider import ImageSlider
+import torch
+
+def generate_images(prompt, negative_prompt, height, width, num_inference_steps, guidance_scale, cosine_scale_1, cosine_scale_2, cosine_scale_3, sigma, view_batch_size, stride, seed):
+ model_ckpt = "stabilityai/stable-diffusion-xl-base-1.0"
+ pipe = DemoFusionSDXLPipeline.from_pretrained(model_ckpt, torch_dtype=torch.float16)
+ pipe = pipe.to("cuda")
+
+ generator = torch.Generator(device='cuda')
+ generator = generator.manual_seed(int(seed))
+
+ images = pipe(prompt, negative_prompt=negative_prompt, generator=generator,
+ height=int(height), width=int(width), view_batch_size=int(view_batch_size), stride=int(stride),
+ num_inference_steps=int(num_inference_steps), guidance_scale=guidance_scale,
+ cosine_scale_1=cosine_scale_1, cosine_scale_2=cosine_scale_2, cosine_scale_3=cosine_scale_3, sigma=sigma,
+ multi_decoder=True, show_image=False
+ )
+
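+    # images[0] is the initial 1024x1024 SDXL result and images[-1] the final
+    # DemoFusion output, so the ImageSlider shows a direct before/after comparison.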
+ return (images[0], images[-1])
+
+iface = gr.Interface(
+ fn=generate_images,
+ inputs=[
+ gr.Textbox(label="Prompt"),
+ gr.Textbox(label="Negative Prompt", value="blurry, ugly, duplicate, poorly drawn, deformed, mosaic"),
+ gr.Slider(minimum=1024, maximum=4096, step=1024, value=2048, label="Height"),
+ gr.Slider(minimum=1024, maximum=4096, step=1024, value=2048, label="Width"),
+ gr.Slider(minimum=10, maximum=100, step=1, value=50, label="Num Inference Steps"),
+ gr.Slider(minimum=1, maximum=20, step=0.1, value=7.5, label="Guidance Scale"),
+ gr.Slider(minimum=0, maximum=5, step=0.1, value=3, label="Cosine Scale 1"),
+ gr.Slider(minimum=0, maximum=5, step=0.1, value=1, label="Cosine Scale 2"),
+ gr.Slider(minimum=0, maximum=5, step=0.1, value=1, label="Cosine Scale 3"),
+ gr.Slider(minimum=0.1, maximum=1, step=0.1, value=0.8, label="Sigma"),
+ gr.Slider(minimum=4, maximum=32, step=4, value=16, label="View Batch Size"),
+ gr.Slider(minimum=8, maximum=96, step=8, value=64, label="Stride"),
+ gr.Number(label="Seed", value=2013)
+ ],
+ # outputs=gr.Gallery(label="Generated Images"),
+ outputs=ImageSlider(label="Comparison of SDXL and DemoFusion"),
+ title="DemoFusion Gradio Demo",
+ description="Generate images with the DemoFusion SDXL Pipeline."
+)
+
+iface.launch()
diff --git a/competitors_inference_code/DemoFusion/gradio_demo_controlnet.py b/competitors_inference_code/DemoFusion/gradio_demo_controlnet.py
new file mode 100644
index 0000000000000000000000000000000000000000..04278db2f21ca9dcd95e49084f987c301a9ed05f
--- /dev/null
+++ b/competitors_inference_code/DemoFusion/gradio_demo_controlnet.py
@@ -0,0 +1,93 @@
+import gradio as gr
+from diffusers import ControlNetModel, AutoencoderKL
+from pipeline_demofusion_sdxl import DemoFusionSDXLPipeline
+from pipeline_demofusion_sdxl_controlnet import DemoFusionSDXLControlNetPipeline
+from gradio_imageslider import ImageSlider
+import torch, gc
+from torchvision import transforms
+from PIL import Image
+import numpy as np
+import cv2
+
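+# Resize a PIL image to 1024x1024 and convert it to a half-precision batch tensor
+# normalized to [-1, 1] (the `image_lr` low-resolution guidance format used by the
+# Image2Image demos).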
+def load_and_process_image(pil_image):
+ transform = transforms.Compose(
+ [
+ transforms.Resize((1024, 1024)),
+ transforms.ToTensor(),
+ transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ ]
+ )
+ image = transform(pil_image)
+ image = image.unsqueeze(0).half()
+ return image
+
+
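+# Pad a non-square image to a square black canvas, keeping the original centred,
+# so the later resize to 1024x1024 does not distort its aspect ratio.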
+def pad_image(image):
+ w, h = image.size
+ if w == h:
+ return image
+ elif w > h:
+ new_image = Image.new(image.mode, (w, w), (0, 0, 0))
+ pad_w = 0
+ pad_h = (w - h) // 2
+ new_image.paste(image, (0, pad_h))
+ return new_image
+ else:
+ new_image = Image.new(image.mode, (h, h), (0, 0, 0))
+ pad_w = (h - w) // 2
+ pad_h = 0
+ new_image.paste(image, (pad_w, 0))
+ return new_image
+
+def generate_images(prompt, negative_prompt, controlnet_conditioning_scale, height, width, num_inference_steps, guidance_scale, cosine_scale_1, cosine_scale_2, cosine_scale_3, sigma, view_batch_size, stride, seed, input_image):
+ padded_image = pad_image(input_image).resize((1024, 1024)).convert("RGB")
+ image_lr = load_and_process_image(padded_image).to('cuda')
+ controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
+ vae = AutoencoderKL.from_pretrained("madebyollin/stable-diffusion-xl-base-1.0/vae-fix", torch_dtype=torch.float16)
+ pipe = DemoFusionSDXLControlNetPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16)
+ pipe = pipe.to("cuda")
+ generator = torch.Generator(device='cuda')
+ generator = generator.manual_seed(int(seed))
+ # get canny image
+ canny_image = np.array(padded_image)
+ canny_image = cv2.Canny(canny_image, 100, 200)
+ canny_image = canny_image[:, :, None]
+ canny_image = np.concatenate([canny_image, canny_image, canny_image], axis=2)
+ canny_image = Image.fromarray(canny_image)
+ images = pipe(prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=controlnet_conditioning_scale,
+ condition_image=canny_image, generator=generator,
+ height=int(height), width=int(width), view_batch_size=int(view_batch_size), stride=int(stride),
+ num_inference_steps=int(num_inference_steps), guidance_scale=guidance_scale,
+ cosine_scale_1=cosine_scale_1, cosine_scale_2=cosine_scale_2, cosine_scale_3=cosine_scale_3, sigma=sigma,
+ multi_decoder=True, show_image=False, lowvram=False
+ )
+ for i, image in enumerate(images):
+ image.save('image_'+str(i)+'.png')
+ pipe = None
+ gc.collect()
+ torch.cuda.empty_cache()
+ return (canny_image, images[-1])
+
+with gr.Blocks(title="DemoFusion") as demo:
+ with gr.Column():
+ with gr.Row():
+ with gr.Group():
+ image_input = gr.Image(type="pil", label="Input Image")
+ prompt = gr.Textbox(label="Prompt", value="")
+ negative_prompt = gr.Textbox(label="Negative Prompt", value="blurry, ugly, duplicate, poorly drawn, deformed, mosaic")
+ controlnet_conditioning_scale = gr.Slider(minimum=0, maximum=1, step=0.1, value=0.5, label="ControlNet Conditioning Scale")
+ width = gr.Slider(minimum=1024, maximum=4096, step=1024, value=2048, label="Width")
+ height = gr.Slider(minimum=1024, maximum=4096, step=1024, value=2048, label="Height")
+ num_inference_steps = gr.Slider(minimum=10, maximum=100, step=1, value=50, label="Num Inference Steps")
+ guidance_scale = gr.Slider(minimum=1, maximum=20, step=0.1, value=7.5, label="Guidance Scale")
+ cosine_scale_1 = gr.Slider(minimum=0, maximum=5, step=0.1, value=3, label="Cosine Scale 1")
+ cosine_scale_2 = gr.Slider(minimum=0, maximum=5, step=0.1, value=1, label="Cosine Scale 2")
+ cosine_scale_3 = gr.Slider(minimum=0, maximum=5, step=0.1, value=1, label="Cosine Scale 3")
+ sigma = gr.Slider(minimum=0.1, maximum=1, step=0.1, value=0.8, label="Sigma")
+ view_batch_size = gr.Slider(minimum=4, maximum=32, step=4, value=16, label="View Batch Size")
+ stride = gr.Slider(minimum=8, maximum=96, step=8, value=64, label="Stride")
+ seed = gr.Number(label="Seed", value=2013)
+ button = gr.Button()
+ output_images = ImageSlider(show_label=False)
+ button.click(fn=generate_images, inputs=[prompt, negative_prompt, controlnet_conditioning_scale, height, width, num_inference_steps, guidance_scale, cosine_scale_1, cosine_scale_2, cosine_scale_3, sigma, view_batch_size, stride, seed, image_input], outputs=[output_images], show_progress=True)
+demo.queue().launch(inline=False, share=True, debug=True)
diff --git a/competitors_inference_code/DemoFusion/gradio_demo_controlnet_img2img.py b/competitors_inference_code/DemoFusion/gradio_demo_controlnet_img2img.py
new file mode 100644
index 0000000000000000000000000000000000000000..746734e05c40b8ca58b31a9297104d4b432e9b39
--- /dev/null
+++ b/competitors_inference_code/DemoFusion/gradio_demo_controlnet_img2img.py
@@ -0,0 +1,93 @@
+import gradio as gr
+from diffusers import ControlNetModel, AutoencoderKL
+from pipeline_demofusion_sdxl import DemoFusionSDXLPipeline
+from pipeline_demofusion_sdxl_controlnet import DemoFusionSDXLControlNetPipeline
+from gradio_imageslider import ImageSlider
+import torch, gc
+from torchvision import transforms
+from PIL import Image
+import numpy as np
+import cv2
+
+def load_and_process_image(pil_image):
+ transform = transforms.Compose(
+ [
+ transforms.Resize((1024, 1024)),
+ transforms.ToTensor(),
+ transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ ]
+ )
+ image = transform(pil_image)
+ image = image.unsqueeze(0).half()
+ return image
+
+
+def pad_image(image):
+ w, h = image.size
+ if w == h:
+ return image
+ elif w > h:
+ new_image = Image.new(image.mode, (w, w), (0, 0, 0))
+ pad_w = 0
+ pad_h = (w - h) // 2
+ new_image.paste(image, (0, pad_h))
+ return new_image
+ else:
+ new_image = Image.new(image.mode, (h, h), (0, 0, 0))
+ pad_w = (h - w) // 2
+ pad_h = 0
+ new_image.paste(image, (pad_w, 0))
+ return new_image
+
+def generate_images(prompt, negative_prompt, controlnet_conditioning_scale, height, width, num_inference_steps, guidance_scale, cosine_scale_1, cosine_scale_2, cosine_scale_3, sigma, view_batch_size, stride, seed, input_image):
+ padded_image = pad_image(input_image).resize((1024, 1024)).convert("RGB")
+ image_lr = load_and_process_image(padded_image).to('cuda')
+ controlnet = ControlNetModel.from_pretrained("diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16)
+ vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
+ pipe = DemoFusionSDXLControlNetPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, vae=vae, torch_dtype=torch.float16)
+ pipe = pipe.to("cuda")
+ generator = torch.Generator(device='cuda')
+ generator = generator.manual_seed(int(seed))
+ # get canny image
+ canny_image = np.array(padded_image)
+ canny_image = cv2.Canny(canny_image, 100, 200)
+ canny_image = canny_image[:, :, None]
+ canny_image = np.concatenate([canny_image, canny_image, canny_image], axis=2)
+ canny_image = Image.fromarray(canny_image)
+ images = pipe(prompt, negative_prompt=negative_prompt, controlnet_conditioning_scale=controlnet_conditioning_scale,
+ image_lr=image_lr, condition_image=canny_image, generator=generator,
+ height=int(height), width=int(width), view_batch_size=int(view_batch_size), stride=int(stride),
+ num_inference_steps=int(num_inference_steps), guidance_scale=guidance_scale,
+ cosine_scale_1=cosine_scale_1, cosine_scale_2=cosine_scale_2, cosine_scale_3=cosine_scale_3, sigma=sigma,
+ multi_decoder=True, show_image=False, lowvram=False
+ )
+ for i, image in enumerate(images):
+ image.save('image_'+str(i)+'.png')
+ pipe = None
+ gc.collect()
+ torch.cuda.empty_cache()
+ return (images[0], images[-1])
+
+with gr.Blocks(title="DemoFusion") as demo:
+ with gr.Column():
+ with gr.Row():
+ with gr.Group():
+ image_input = gr.Image(type="pil", label="Input Image")
+ prompt = gr.Textbox(label="Prompt (Note: an accurate prompt to describe the content and style of the input will significantly improve performance.)", value="8k high definition, high details")
+ negative_prompt = gr.Textbox(label="Negative Prompt", value="blurry, ugly, duplicate, poorly drawn, deformed, mosaic")
+ controlnet_conditioning_scale = gr.Slider(minimum=0, maximum=1, step=0.1, value=0.5, label="ControlNet Conditioning Scale")
+ width = gr.Slider(minimum=1024, maximum=4096, step=1024, value=2048, label="Width")
+ height = gr.Slider(minimum=1024, maximum=4096, step=1024, value=2048, label="Height")
+ num_inference_steps = gr.Slider(minimum=10, maximum=100, step=1, value=50, label="Num Inference Steps")
+ guidance_scale = gr.Slider(minimum=1, maximum=20, step=0.1, value=7.5, label="Guidance Scale")
+ cosine_scale_1 = gr.Slider(minimum=0, maximum=5, step=0.1, value=3, label="Cosine Scale 1")
+ cosine_scale_2 = gr.Slider(minimum=0, maximum=5, step=0.1, value=1, label="Cosine Scale 2")
+ cosine_scale_3 = gr.Slider(minimum=0, maximum=5, step=0.1, value=1, label="Cosine Scale 3")
+ sigma = gr.Slider(minimum=0.1, maximum=1, step=0.1, value=0.8, label="Sigma")
+ view_batch_size = gr.Slider(minimum=4, maximum=32, step=4, value=16, label="View Batch Size")
+ stride = gr.Slider(minimum=8, maximum=96, step=8, value=64, label="Stride")
+ seed = gr.Number(label="Seed", value=2013)
+ button = gr.Button()
+ output_images = ImageSlider(show_label=False)
+ button.click(fn=generate_images, inputs=[prompt, negative_prompt, controlnet_conditioning_scale, height, width, num_inference_steps, guidance_scale, cosine_scale_1, cosine_scale_2, cosine_scale_3, sigma, view_batch_size, stride, seed, image_input], outputs=[output_images], show_progress=True)
+demo.queue().launch(inline=False, share=True, debug=True)
diff --git a/competitors_inference_code/DemoFusion/gradio_demo_img2img.py b/competitors_inference_code/DemoFusion/gradio_demo_img2img.py
new file mode 100644
index 0000000000000000000000000000000000000000..48219ba1a14002e2aae5894c0c24b39a5adb94fb
--- /dev/null
+++ b/competitors_inference_code/DemoFusion/gradio_demo_img2img.py
@@ -0,0 +1,81 @@
+import gradio as gr
+from diffusers import AutoencoderKL
+from pipeline_demofusion_sdxl import DemoFusionSDXLPipeline
+from gradio_imageslider import ImageSlider
+import torch, gc
+from torchvision import transforms
+from PIL import Image
+
+def load_and_process_image(pil_image):
+ transform = transforms.Compose(
+ [
+ transforms.Resize((1024, 1024)),
+ transforms.ToTensor(),
+ transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
+ ]
+ )
+ image = transform(pil_image)
+ image = image.unsqueeze(0).half()
+ return image
+
+
+def pad_image(image):
+ w, h = image.size
+ if w == h:
+ return image
+ elif w > h:
+ new_image = Image.new(image.mode, (w, w), (0, 0, 0))
+ pad_w = 0
+ pad_h = (w - h) // 2
+ new_image.paste(image, (0, pad_h))
+ return new_image
+ else:
+ new_image = Image.new(image.mode, (h, h), (0, 0, 0))
+ pad_w = (h - w) // 2
+ pad_h = 0
+ new_image.paste(image, (pad_w, 0))
+ return new_image
+
+def generate_images(prompt, negative_prompt, height, width, num_inference_steps, guidance_scale, cosine_scale_1, cosine_scale_2, cosine_scale_3, sigma, view_batch_size, stride, seed, input_image):
+ padded_image = pad_image(input_image).resize((1024, 1024)).convert("RGB")
+ image_lr = load_and_process_image(padded_image).to('cuda')
+ vae = AutoencoderKL.from_pretrained("madebyollin/sdxl-vae-fp16-fix", torch_dtype=torch.float16)
+ pipe = DemoFusionSDXLPipeline.from_pretrained("stabilityai/stable-diffusion-xl-base-1.0", vae=vae, torch_dtype=torch.float16)
+ pipe = pipe.to("cuda")
+ generator = torch.Generator(device='cuda')
+ generator = generator.manual_seed(int(seed))
+ images = pipe(prompt, negative_prompt=negative_prompt, generator=generator,
+ height=int(height), width=int(width), view_batch_size=int(view_batch_size), stride=int(stride),
+ num_inference_steps=int(num_inference_steps), guidance_scale=guidance_scale,
+ cosine_scale_1=cosine_scale_1, cosine_scale_2=cosine_scale_2, cosine_scale_3=cosine_scale_3, sigma=sigma,
+ multi_decoder=True, show_image=False, lowvram=False, image_lr=image_lr
+ )
+ for i, image in enumerate(images):
+ image.save('image_'+str(i)+'.png')
+ pipe = None
+ gc.collect()
+ torch.cuda.empty_cache()
+ return (images[0], images[-1])
+
+with gr.Blocks(title="DemoFusion") as demo:
+ with gr.Column():
+ with gr.Row():
+ with gr.Group():
+ image_input = gr.Image(type="pil", label="Input Image")
+ prompt = gr.Textbox(label="Prompt (Note: an accurate prompt to describe the content and style of the input will significantly improve performance.)", value="8k high definition, high details")
+ negative_prompt = gr.Textbox(label="Negative Prompt", value="blurry, ugly, duplicate, poorly drawn, deformed, mosaic")
+ width = gr.Slider(minimum=1024, maximum=4096, step=1024, value=2048, label="Width")
+ height = gr.Slider(minimum=1024, maximum=4096, step=1024, value=2048, label="Height")
+ num_inference_steps = gr.Slider(minimum=5, maximum=100, step=1, value=50, label="Num Inference Steps")
+ guidance_scale = gr.Slider(minimum=1, maximum=20, step=0.1, value=7.5, label="Guidance Scale")
+ cosine_scale_1 = gr.Slider(minimum=0, maximum=5, step=0.1, value=3, label="Cosine Scale 1")
+ cosine_scale_2 = gr.Slider(minimum=0, maximum=5, step=0.1, value=1, label="Cosine Scale 2")
+ cosine_scale_3 = gr.Slider(minimum=0, maximum=5, step=0.1, value=1, label="Cosine Scale 3")
+ sigma = gr.Slider(minimum=0.1, maximum=1, step=0.1, value=0.8, label="Sigma")
+ view_batch_size = gr.Slider(minimum=4, maximum=32, step=4, value=16, label="View Batch Size")
+ stride = gr.Slider(minimum=8, maximum=96, step=8, value=64, label="Stride")
+ seed = gr.Number(label="Seed", value=2013)
+ button = gr.Button()
+ output_images = ImageSlider(show_label=False)
+ button.click(fn=generate_images, inputs=[prompt, negative_prompt, height, width, num_inference_steps, guidance_scale, cosine_scale_1, cosine_scale_2, cosine_scale_3, sigma, view_batch_size, stride, seed, image_input], outputs=[output_images], show_progress=True)
+demo.queue().launch(inline=False, share=True, debug=True)
diff --git a/competitors_inference_code/DemoFusion/pipeline_demofusion_sdxl.py b/competitors_inference_code/DemoFusion/pipeline_demofusion_sdxl.py
new file mode 100644
index 0000000000000000000000000000000000000000..d6c0aea486786a0b3d3f0de8e7e85809154de2ef
--- /dev/null
+++ b/competitors_inference_code/DemoFusion/pipeline_demofusion_sdxl.py
@@ -0,0 +1,1446 @@
+# Copyright 2023 The HuggingFace Team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+import inspect
+import os
+from typing import Any, Callable, Dict, List, Optional, Tuple, Union
+import matplotlib.pyplot as plt
+
+import torch
+import torch.nn.functional as F
+import numpy as np
+import random
+import warnings
+from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
+
+from diffusers.image_processor import VaeImageProcessor
+from diffusers.loaders import (
+ FromSingleFileMixin,
+ LoraLoaderMixin,
+ TextualInversionLoaderMixin,
+)
+from diffusers.models import AutoencoderKL, UNet2DConditionModel
+from diffusers.models.attention_processor import (
+ AttnProcessor2_0,
+ LoRAAttnProcessor2_0,
+ LoRAXFormersAttnProcessor,
+ XFormersAttnProcessor,
+)
+from diffusers.models.lora import adjust_lora_scale_text_encoder
+from diffusers.schedulers import KarrasDiffusionSchedulers
+from diffusers.utils import (
+ is_accelerate_available,
+ is_accelerate_version,
+ is_invisible_watermark_available,
+ logging,
+ replace_example_docstring,
+)
+from diffusers.utils.torch_utils import randn_tensor
+from diffusers.pipelines.pipeline_utils import DiffusionPipeline
+from diffusers.pipelines.stable_diffusion_xl import StableDiffusionXLPipelineOutput
+
+
+if is_invisible_watermark_available():
+    from diffusers.pipelines.stable_diffusion_xl.watermark import StableDiffusionXLWatermarker
+
+
+logger = logging.get_logger(__name__) # pylint: disable=invalid-name
+
+EXAMPLE_DOC_STRING = """
+ Examples:
+ ```py
+ >>> import torch
+ >>> from diffusers import StableDiffusionXLPipeline
+
+ >>> pipe = StableDiffusionXLPipeline.from_pretrained(
+ ... "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
+ ... )
+ >>> pipe = pipe.to("cuda")
+
+ >>> prompt = "a photo of an astronaut riding a horse on mars"
+ >>> image = pipe(prompt).images[0]
+ ```
+"""
+
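+# Build a depthwise 2D Gaussian kernel (one copy per latent channel) as the outer
+# product of a normalized 1D Gaussian; used below to low-pass filter latents.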
+def gaussian_kernel(kernel_size=3, sigma=1.0, channels=3):
+ x_coord = torch.arange(kernel_size)
+ gaussian_1d = torch.exp(-(x_coord - (kernel_size - 1) / 2) ** 2 / (2 * sigma ** 2))
+ gaussian_1d = gaussian_1d / gaussian_1d.sum()
+ gaussian_2d = gaussian_1d[:, None] * gaussian_1d[None, :]
+ kernel = gaussian_2d[None, None, :, :].repeat(channels, 1, 1, 1)
+
+ return kernel
+
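+# Blur latents with the Gaussian kernel via a grouped (per-channel) convolution.
+# DemoFusion applies this to the dilated-sampling latents to promote global
+# structural guidance (see the `sigma` and `cosine_scale_3` hyper-parameters).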
+def gaussian_filter(latents, kernel_size=3, sigma=1.0):
+ channels = latents.shape[1]
+ kernel = gaussian_kernel(kernel_size, sigma, channels).to(latents.device, latents.dtype)
+ blurred_latents = F.conv2d(latents, kernel, padding=kernel_size//2, groups=channels)
+
+ return blurred_latents
+
+# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg
+def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0):
+ """
+ Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and
+ Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). See Section 3.4
+ """
+ std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True)
+ std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True)
+ # rescale the results from guidance (fixes overexposure)
+ noise_pred_rescaled = noise_cfg * (std_text / std_cfg)
+ # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images
+ noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg
+ return noise_cfg
+
+
+class DemoFusionSDXLPipeline(DiffusionPipeline, FromSingleFileMixin, LoraLoaderMixin, TextualInversionLoaderMixin):
+ """
+ Pipeline for text-to-image generation using Stable Diffusion XL.
+
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
+
+ In addition the pipeline inherits the following loading methods:
+ - *LoRA*: [`StableDiffusionXLPipeline.load_lora_weights`]
+ - *Ckpt*: [`loaders.FromSingleFileMixin.from_single_file`]
+
+ as well as the following saving methods:
+ - *LoRA*: [`loaders.StableDiffusionXLPipeline.save_lora_weights`]
+
+ Args:
+ vae ([`AutoencoderKL`]):
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
+ text_encoder ([`CLIPTextModel`]):
+ Frozen text-encoder. Stable Diffusion XL uses the text portion of
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
+ text_encoder_2 ([` CLIPTextModelWithProjection`]):
+ Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
+ specifically the
+ [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
+ variant.
+ tokenizer (`CLIPTokenizer`):
+ Tokenizer of class
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
+ tokenizer_2 (`CLIPTokenizer`):
+ Second Tokenizer of class
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
+ scheduler ([`SchedulerMixin`]):
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
+ force_zeros_for_empty_prompt (`bool`, *optional*, defaults to `"True"`):
+ Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
+ `stabilityai/stable-diffusion-xl-base-1-0`.
+ add_watermarker (`bool`, *optional*):
+ Whether to use the [invisible_watermark library](https://github.com/ShieldMnt/invisible-watermark/) to
+ watermark output images. If not defined, it will default to True if the package is installed, otherwise no
+ watermarker will be used.
+ """
+ model_cpu_offload_seq = "text_encoder->text_encoder_2->unet->vae"
+
+ def __init__(
+ self,
+ vae: AutoencoderKL,
+ text_encoder: CLIPTextModel,
+ text_encoder_2: CLIPTextModelWithProjection,
+ tokenizer: CLIPTokenizer,
+ tokenizer_2: CLIPTokenizer,
+ unet: UNet2DConditionModel,
+ scheduler: KarrasDiffusionSchedulers,
+ force_zeros_for_empty_prompt: bool = True,
+ add_watermarker: Optional[bool] = None,
+ ):
+ super().__init__()
+
+ self.register_modules(
+ vae=vae,
+ text_encoder=text_encoder,
+ text_encoder_2=text_encoder_2,
+ tokenizer=tokenizer,
+ tokenizer_2=tokenizer_2,
+ unet=unet,
+ scheduler=scheduler,
+ )
+ self.register_to_config(force_zeros_for_empty_prompt=force_zeros_for_empty_prompt)
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
+ self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
+ self.default_sample_size = self.unet.config.sample_size
+
+ add_watermarker = add_watermarker if add_watermarker is not None else is_invisible_watermark_available()
+
+ if add_watermarker:
+ self.watermark = StableDiffusionXLWatermarker()
+ else:
+ self.watermark = None
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
+ def enable_vae_slicing(self):
+ r"""
+ Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
+ compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
+ """
+ self.vae.enable_slicing()
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
+ def disable_vae_slicing(self):
+ r"""
+ Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
+ computing decoding in one step.
+ """
+ self.vae.disable_slicing()
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling
+ def enable_vae_tiling(self):
+ r"""
+ Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
+ compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
+ processing larger images.
+ """
+ self.vae.enable_tiling()
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling
+ def disable_vae_tiling(self):
+ r"""
+ Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
+ computing decoding in one step.
+ """
+ self.vae.disable_tiling()
+
+ def encode_prompt(
+ self,
+ prompt: str,
+ prompt_2: Optional[str] = None,
+ device: Optional[torch.device] = None,
+ num_images_per_prompt: int = 1,
+ do_classifier_free_guidance: bool = True,
+ negative_prompt: Optional[str] = None,
+ negative_prompt_2: Optional[str] = None,
+ prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
+ pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ lora_scale: Optional[float] = None,
+ ):
+ r"""
+ Encodes the prompt into text encoder hidden states.
+
+ Args:
+ prompt (`str` or `List[str]`, *optional*):
+ prompt to be encoded
+ prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
+ used in both text-encoders
+ device: (`torch.device`):
+ torch device
+ num_images_per_prompt (`int`):
+ number of images that should be generated per prompt
+ do_classifier_free_guidance (`bool`):
+ whether to use classifier free guidance or not
+ negative_prompt (`str` or `List[str]`, *optional*):
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
+ less than `1`).
+ negative_prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
+ `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
+ prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
+ provided, text embeddings will be generated from `prompt` input argument.
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
+ argument.
+ pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
+ If not provided, pooled text embeddings will be generated from `prompt` input argument.
+ negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
+ input argument.
+ lora_scale (`float`, *optional*):
+ A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
+ """
+ device = device or self._execution_device
+
+ # set lora scale so that monkey patched LoRA
+ # function of text encoder can correctly access it
+ if lora_scale is not None and isinstance(self, LoraLoaderMixin):
+ self._lora_scale = lora_scale
+
+ # dynamically adjust the LoRA scale
+ adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
+ adjust_lora_scale_text_encoder(self.text_encoder_2, lora_scale)
+
+ if prompt is not None and isinstance(prompt, str):
+ batch_size = 1
+ elif prompt is not None and isinstance(prompt, list):
+ batch_size = len(prompt)
+ else:
+ batch_size = prompt_embeds.shape[0]
+
+ # Define tokenizers and text encoders
+ tokenizers = [self.tokenizer, self.tokenizer_2] if self.tokenizer is not None else [self.tokenizer_2]
+ text_encoders = (
+ [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2]
+ )
+
+ if prompt_embeds is None:
+ prompt_2 = prompt_2 or prompt
+            # textual inversion: process multi-vector tokens if necessary
+ prompt_embeds_list = []
+ prompts = [prompt, prompt_2]
+ for prompt, tokenizer, text_encoder in zip(prompts, tokenizers, text_encoders):
+ if isinstance(self, TextualInversionLoaderMixin):
+ prompt = self.maybe_convert_prompt(prompt, tokenizer)
+
+ text_inputs = tokenizer(
+ prompt,
+ padding="max_length",
+ max_length=tokenizer.model_max_length,
+ truncation=True,
+ return_tensors="pt",
+ )
+
+ text_input_ids = text_inputs.input_ids
+ untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
+
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
+ text_input_ids, untruncated_ids
+ ):
+ removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1 : -1])
+ logger.warning(
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
+ f" {tokenizer.model_max_length} tokens: {removed_text}"
+ )
+
+ prompt_embeds = text_encoder(
+ text_input_ids.to(device),
+ output_hidden_states=True,
+ )
+
+ # We are only ALWAYS interested in the pooled output of the final text encoder
+ pooled_prompt_embeds = prompt_embeds[0]
+ prompt_embeds = prompt_embeds.hidden_states[-2]
+
+ prompt_embeds_list.append(prompt_embeds)
+
+ prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
+
+ # get unconditional embeddings for classifier free guidance
+ zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt
+ if do_classifier_free_guidance and negative_prompt_embeds is None and zero_out_negative_prompt:
+ negative_prompt_embeds = torch.zeros_like(prompt_embeds)
+ negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds)
+ elif do_classifier_free_guidance and negative_prompt_embeds is None:
+ negative_prompt = negative_prompt or ""
+ negative_prompt_2 = negative_prompt_2 or negative_prompt
+
+ uncond_tokens: List[str]
+ if prompt is not None and type(prompt) is not type(negative_prompt):
+ raise TypeError(
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
+ f" {type(prompt)}."
+ )
+ elif isinstance(negative_prompt, str):
+ uncond_tokens = [negative_prompt, negative_prompt_2]
+ elif batch_size != len(negative_prompt):
+ raise ValueError(
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
+ " the batch size of `prompt`."
+ )
+ else:
+ uncond_tokens = [negative_prompt, negative_prompt_2]
+
+ negative_prompt_embeds_list = []
+ for negative_prompt, tokenizer, text_encoder in zip(uncond_tokens, tokenizers, text_encoders):
+ if isinstance(self, TextualInversionLoaderMixin):
+ negative_prompt = self.maybe_convert_prompt(negative_prompt, tokenizer)
+
+ max_length = prompt_embeds.shape[1]
+ uncond_input = tokenizer(
+ negative_prompt,
+ padding="max_length",
+ max_length=max_length,
+ truncation=True,
+ return_tensors="pt",
+ )
+
+ negative_prompt_embeds = text_encoder(
+ uncond_input.input_ids.to(device),
+ output_hidden_states=True,
+ )
+ # We are only ALWAYS interested in the pooled output of the final text encoder
+ negative_pooled_prompt_embeds = negative_prompt_embeds[0]
+ negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
+
+ negative_prompt_embeds_list.append(negative_prompt_embeds)
+
+ negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1)
+
+ prompt_embeds = prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
+ bs_embed, seq_len, _ = prompt_embeds.shape
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
+
+ if do_classifier_free_guidance:
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
+ seq_len = negative_prompt_embeds.shape[1]
+ negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
+
+ pooled_prompt_embeds = pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
+ bs_embed * num_images_per_prompt, -1
+ )
+ if do_classifier_free_guidance:
+ negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
+ bs_embed * num_images_per_prompt, -1
+ )
+
+ return prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
+ def prepare_extra_step_kwargs(self, generator, eta):
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
+ # and should be between [0, 1]
+
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
+ extra_step_kwargs = {}
+ if accepts_eta:
+ extra_step_kwargs["eta"] = eta
+
+ # check if the scheduler accepts generator
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
+ if accepts_generator:
+ extra_step_kwargs["generator"] = generator
+ return extra_step_kwargs
+
+ def check_inputs(
+ self,
+ prompt,
+ prompt_2,
+ height,
+ width,
+ callback_steps,
+ negative_prompt=None,
+ negative_prompt_2=None,
+ prompt_embeds=None,
+ negative_prompt_embeds=None,
+ pooled_prompt_embeds=None,
+ negative_pooled_prompt_embeds=None,
+ num_images_per_prompt=None,
+ ):
+ if height % 8 != 0 or width % 8 != 0:
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
+
+ if (callback_steps is None) or (
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
+ ):
+ raise ValueError(
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
+ f" {type(callback_steps)}."
+ )
+
+ if prompt is not None and prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
+ " only forward one of the two."
+ )
+ elif prompt_2 is not None and prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
+ " only forward one of the two."
+ )
+ elif prompt is None and prompt_embeds is None:
+ raise ValueError(
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
+ )
+ elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
+ elif prompt_2 is not None and (not isinstance(prompt_2, str) and not isinstance(prompt_2, list)):
+ raise ValueError(f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}")
+
+ if negative_prompt is not None and negative_prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
+ )
+ elif negative_prompt_2 is not None and negative_prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:"
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
+ )
+
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
+ raise ValueError(
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
+ f" {negative_prompt_embeds.shape}."
+ )
+
+ if prompt_embeds is not None and pooled_prompt_embeds is None:
+ raise ValueError(
+ "If `prompt_embeds` are provided, `pooled_prompt_embeds` also have to be passed. Make sure to generate `pooled_prompt_embeds` from the same text encoder that was used to generate `prompt_embeds`."
+ )
+
+ if negative_prompt_embeds is not None and negative_pooled_prompt_embeds is None:
+ raise ValueError(
+ "If `negative_prompt_embeds` are provided, `negative_pooled_prompt_embeds` also have to be passed. Make sure to generate `negative_pooled_prompt_embeds` from the same text encoder that was used to generate `negative_prompt_embeds`."
+ )
+
+ # DemoFusion specific checks
+ if max(height, width) % 1024 != 0:
+ raise ValueError(f"the larger one of `height` and `width` has to be divisible by 1024 but are {height} and {width}.")
+
+ if num_images_per_prompt != 1:
+ warnings.warn("num_images_per_prompt != 1 is not supported by DemoFusion and will be ignored.")
+ # NOTE: this reassignment is local to check_inputs; callers should pass num_images_per_prompt=1.
+ num_images_per_prompt = 1
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
+ def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
+ shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
+ if isinstance(generator, list) and len(generator) != batch_size:
+ raise ValueError(
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
+ )
+
+ if latents is None:
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
+ else:
+ latents = latents.to(device)
+
+ # scale the initial noise by the standard deviation required by the scheduler
+ latents = latents * self.scheduler.init_noise_sigma
+ return latents
+
+ def _get_add_time_ids(self, original_size, crops_coords_top_left, target_size, dtype):
+ add_time_ids = list(original_size + crops_coords_top_left + target_size)
+
+ passed_add_embed_dim = (
+ self.unet.config.addition_time_embed_dim * len(add_time_ids) + self.text_encoder_2.config.projection_dim
+ )
+ expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features
+
+ if expected_add_embed_dim != passed_add_embed_dim:
+ raise ValueError(
+ f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`."
+ )
+
+ add_time_ids = torch.tensor([add_time_ids], dtype=dtype)
+ return add_time_ids
+
+ def get_views(self, height, width, window_size=128, stride=64, random_jitter=False):
+ # Here, we define the mappings F_i (see Eq. 7 in the MultiDiffusion paper https://arxiv.org/abs/2302.08113)
+ # if panorama's height/width < window_size, num_blocks of height/width should return 1
+ height //= self.vae_scale_factor
+ width //= self.vae_scale_factor
+ num_blocks_height = int((height - window_size) / stride - 1e-6) + 2 if height > window_size else 1
+ num_blocks_width = int((width - window_size) / stride - 1e-6) + 2 if width > window_size else 1
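+ # The 1e-6 epsilon avoids an extra, redundant block when (size - window_size) is an exact multiple of
+ # stride; windows that would run past the boundary are shifted back below so every view stays inside the latent.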
+ total_num_blocks = int(num_blocks_height * num_blocks_width)
+ views = []
+ for i in range(total_num_blocks):
+ h_start = int((i // num_blocks_width) * stride)
+ h_end = h_start + window_size
+ w_start = int((i % num_blocks_width) * stride)
+ w_end = w_start + window_size
+
+ if h_end > height:
+ h_start = int(h_start + height - h_end)
+ h_end = int(height)
+ if w_end > width:
+ w_start = int(w_start + width - w_end)
+ w_end = int(width)
+ if h_start < 0:
+ h_end = int(h_end - h_start)
+ h_start = 0
+ if w_start < 0:
+ w_end = int(w_end - w_start)
+ w_start = 0
+
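+ # Randomly jitter each window by up to (window_size - stride) / 4 so patch seams do not always fall at
+ # the same latent positions. Border windows get one-sided jitter, and all coordinates are shifted by
+ # jitter_range because the latents are padded by the same amount before the views are cropped.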
+ if random_jitter:
+ jitter_range = (window_size - stride) // 4
+ w_jitter = 0
+ h_jitter = 0
+ if (w_start != 0) and (w_end != width):
+ w_jitter = random.randint(-jitter_range, jitter_range)
+ elif (w_start == 0) and (w_end != width):
+ w_jitter = random.randint(-jitter_range, 0)
+ elif (w_start != 0) and (w_end == width):
+ w_jitter = random.randint(0, jitter_range)
+ if (h_start != 0) and (h_end != height):
+ h_jitter = random.randint(-jitter_range, jitter_range)
+ elif (h_start == 0) and (h_end != height):
+ h_jitter = random.randint(-jitter_range, 0)
+ elif (h_start != 0) and (h_end == height):
+ h_jitter = random.randint(0, jitter_range)
+ h_start += (h_jitter + jitter_range)
+ h_end += (h_jitter + jitter_range)
+ w_start += (w_jitter + jitter_range)
+ w_end += (w_jitter + jitter_range)
+
+ views.append((h_start, h_end, w_start, w_end))
+ return views
+
+ def tiled_decode(self, latents, current_height, current_width):
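+ # Decode the latents tile by tile: every core tile is decoded together with pad_size of surrounding
+ # context, and only the cropped core region is written back to the output image. This keeps the VAE
+ # memory footprint bounded at high resolutions while hiding seams between decoded tiles.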
+ sample_size = self.unet.config.sample_size
+ core_size = self.unet.config.sample_size // 4
+ core_stride = core_size
+ pad_size = self.unet.config.sample_size // 8 * 3
+ decoder_view_batch_size = 1
+
+ if self.lowvram:
+ core_stride = core_size // 2
+ pad_size = core_size
+
+ views = self.get_views(current_height, current_width, stride=core_stride, window_size=core_size)
+ views_batch = [views[i : i + decoder_view_batch_size] for i in range(0, len(views), decoder_view_batch_size)]
+ latents_ = F.pad(latents, (pad_size, pad_size, pad_size, pad_size), 'constant', 0)
+ image = torch.zeros(latents.size(0), 3, current_height, current_width).to(latents.device)
+ count = torch.zeros_like(image).to(latents.device)
+ # get the latents corresponding to the current view coordinates
+ with self.progress_bar(total=len(views_batch)) as progress_bar:
+ for j, batch_view in enumerate(views_batch):
+ vb_size = len(batch_view)
+ latents_for_view = torch.cat(
+ [
+ latents_[:, :, h_start:h_end+pad_size*2, w_start:w_end+pad_size*2]
+ for h_start, h_end, w_start, w_end in batch_view
+ ]
+ ).to(self.vae.device)
+ image_patch = self.vae.decode(latents_for_view / self.vae.config.scaling_factor, return_dict=False)[0]
+ h_start, h_end, w_start, w_end = views[j]
+ h_start, h_end, w_start, w_end = h_start * self.vae_scale_factor, h_end * self.vae_scale_factor, w_start * self.vae_scale_factor, w_end * self.vae_scale_factor
+ p_h_start, p_h_end, p_w_start, p_w_end = pad_size * self.vae_scale_factor, image_patch.size(2) - pad_size * self.vae_scale_factor, pad_size * self.vae_scale_factor, image_patch.size(3) - pad_size * self.vae_scale_factor
+ image[:, :, h_start:h_end, w_start:w_end] += image_patch[:, :, p_h_start:p_h_end, p_w_start:p_w_end].to(latents.device)
+ count[:, :, h_start:h_end, w_start:w_end] += 1
+ progress_bar.update()
+ image = image / count
+
+ return image
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.upcast_vae
+ def upcast_vae(self):
+ dtype = self.vae.dtype
+ self.vae.to(dtype=torch.float32)
+ use_torch_2_0_or_xformers = isinstance(
+ self.vae.decoder.mid_block.attentions[0].processor,
+ (
+ AttnProcessor2_0,
+ XFormersAttnProcessor,
+ LoRAXFormersAttnProcessor,
+ LoRAAttnProcessor2_0,
+ ),
+ )
+ # if xformers or torch_2_0 is used attention block does not need
+ # to be in float32 which can save lots of memory
+ if use_torch_2_0_or_xformers:
+ self.vae.post_quant_conv.to(dtype)
+ self.vae.decoder.conv_in.to(dtype)
+ self.vae.decoder.mid_block.to(dtype)
+
+ @torch.no_grad()
+ @replace_example_docstring(EXAMPLE_DOC_STRING)
+ def __call__(
+ self,
+ prompt: Union[str, List[str]] = None,
+ prompt_2: Optional[Union[str, List[str]]] = None,
+ height: Optional[int] = None,
+ width: Optional[int] = None,
+ num_inference_steps: int = 50,
+ denoising_end: Optional[float] = None,
+ guidance_scale: float = 5.0,
+ negative_prompt: Optional[Union[str, List[str]]] = None,
+ negative_prompt_2: Optional[Union[str, List[str]]] = None,
+ num_images_per_prompt: Optional[int] = 1,
+ eta: float = 0.0,
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
+ latents: Optional[torch.FloatTensor] = None,
+ prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
+ pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ output_type: Optional[str] = "pil",
+ return_dict: bool = False,
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
+ callback_steps: int = 1,
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
+ guidance_rescale: float = 0.0,
+ original_size: Optional[Tuple[int, int]] = None,
+ crops_coords_top_left: Tuple[int, int] = (0, 0),
+ target_size: Optional[Tuple[int, int]] = None,
+ negative_original_size: Optional[Tuple[int, int]] = None,
+ negative_crops_coords_top_left: Tuple[int, int] = (0, 0),
+ negative_target_size: Optional[Tuple[int, int]] = None,
+ ################### DemoFusion specific parameters ####################
+ image_lr: Optional[torch.FloatTensor] = None,
+ view_batch_size: int = 16,
+ multi_decoder: bool = True,
+ stride: Optional[int] = 64,
+ cosine_scale_1: Optional[float] = 3.,
+ cosine_scale_2: Optional[float] = 1.,
+ cosine_scale_3: Optional[float] = 1.,
+ sigma: Optional[float] = 1.0,
+ show_image: bool = False,
+ lowvram: bool = False,
+ ):
+ r"""
+ Function invoked when calling the pipeline for generation.
+
+ Args:
+ prompt (`str` or `List[str]`, *optional*):
+ The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
+ instead.
+ prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
+ used in both text-encoders
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
+ The height in pixels of the generated image. This is set to 1024 by default for the best results.
+ Anything below 512 pixels won't work well for
+ [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
+ and checkpoints that are not specifically fine-tuned on low resolutions.
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
+ The width in pixels of the generated image. This is set to 1024 by default for the best results.
+ Anything below 512 pixels won't work well for
+ [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
+ and checkpoints that are not specifically fine-tuned on low resolutions.
+ num_inference_steps (`int`, *optional*, defaults to 50):
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
+ expense of slower inference.
+ denoising_end (`float`, *optional*):
+ When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
+ completed before it is intentionally prematurely terminated. As a result, the returned sample will
+ still retain a substantial amount of noise as determined by the discrete timesteps selected by the
+ scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
+ "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
+ Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
+ guidance_scale (`float`, *optional*, defaults to 5.0):
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
+ 1`. Higher guidance scale encourages to generate images that are closely linked to the text `prompt`,
+ usually at the expense of lower image quality.
+ negative_prompt (`str` or `List[str]`, *optional*):
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
+ less than `1`).
+ negative_prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
+ `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
+ The number of images to generate per prompt.
+ eta (`float`, *optional*, defaults to 0.0):
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
+ [`schedulers.DDIMScheduler`], will be ignored for others.
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
+ to make generation deterministic.
+ latents (`torch.FloatTensor`, *optional*):
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
+ tensor will be generated by sampling using the supplied random `generator`.
+ prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
+ provided, text embeddings will be generated from `prompt` input argument.
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
+ argument.
+ pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
+ If not provided, pooled text embeddings will be generated from `prompt` input argument.
+ negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
+ input argument.
+ output_type (`str`, *optional*, defaults to `"pil"`):
+ The output format of the generated image. Choose between
+ [PIL](https://pillow.readthedocs.io/en/stable/): `PIL.Image.Image` or `np.array`.
+ return_dict (`bool`, *optional*, defaults to `False`):
+ Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead
+ of a plain tuple.
+ callback (`Callable`, *optional*):
+ A function that will be called every `callback_steps` steps during inference. The function will be
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
+ callback_steps (`int`, *optional*, defaults to 1):
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
+ called at every step.
+ cross_attention_kwargs (`dict`, *optional*):
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
+ `self.processor` in
+ [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
+ guidance_rescale (`float`, *optional*, defaults to 0.0):
+ Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
+ Flawed](https://arxiv.org/pdf/2305.08891.pdf). `guidance_rescale` is defined as `φ` in equation 16. of
+ [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
+ Guidance rescale factor should fix overexposure when using zero terminal SNR.
+ original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
+ `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
+ explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
+ `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
+ `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
+ `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ For most cases, `target_size` should be set to the desired height and width of the generated image. If
+ not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
+ section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ negative_original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ To negatively condition the generation process based on a specific image resolution. Part of SDXL's
+ micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+ negative_crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
+ To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
+ micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+ negative_target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ To negatively condition the generation process based on a target image resolution. It should be the same
+ as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+ ################### DemoFusion specific parameters ####################
+ image_lr (`torch.FloatTensor`, *optional*, defaults to None):
+ Low-resolution image input for upscaling. If provided, DemoFusion will encode it as the initial latent representation.
+ view_batch_size (`int`, defaults to 16):
+ The batch size for multiple denoising paths. Typically, a larger batch size can result in higher
+ efficiency but comes with increased GPU memory requirements.
+ multi_decoder (`bool`, defaults to True):
+ Determines whether to use a tiled decoder. Generally, when the resolution exceeds 3072x3072,
+ a tiled decoder becomes necessary.
+ stride (`int`, defaults to 64):
+ The stride of moving local patches. A smaller stride is better for alleviating seam issues,
+ but it also introduces additional computational overhead and inference time.
+ cosine_scale_1 (`float`, defaults to 3):
+ Controls the strength of the skip residual. For its specific impact, please refer to Appendix C
+ in the DemoFusion paper.
+ cosine_scale_2 (`float`, defaults to 1):
+ Controls the strength of dilated sampling. For its specific impact, please refer to Appendix C
+ in the DemoFusion paper.
+ cosine_scale_3 (`float`, defaults to 1):
+ Controls the strength of the Gaussian filter. For its specific impact, please refer to Appendix C
+ in the DemoFusion paper.
+ sigma (`float`, defaults to 1):
+ The standard deviation of the Gaussian filter.
+ show_image (`bool`, defaults to False):
+ Determines whether to show intermediate results during generation.
+ lowvram (`bool`, defaults to False):
+ Try to fit the pipeline into 8 GB of VRAM (requires xformers to be installed).
+
+ Examples:
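+ A minimal usage sketch (assuming this file is importable as `pipeline_demofusion_sdxl` and that the
+ pipeline class defined above is named `DemoFusionSDXLPipeline`; adjust the import to your setup):
+
+ ```py
+ >>> import torch
+ >>> from pipeline_demofusion_sdxl import DemoFusionSDXLPipeline
+
+ >>> pipe = DemoFusionSDXLPipeline.from_pretrained(
+ ...     "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
+ ... ).to("cuda")
+ >>> images = pipe(
+ ...     prompt="A photo of a cat sitting on a wooden table, highly detailed",
+ ...     height=3072,
+ ...     width=3072,
+ ...     num_inference_steps=50,
+ ...     guidance_scale=7.5,
+ ...     view_batch_size=16,
+ ...     stride=64,
+ ...     multi_decoder=True,
+ ... )
+ >>> # `images` holds one image per phase; the last entry is the full-resolution result.
+ >>> images[-1].save("demofusion_result.png")
+ ```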
+
+ Returns:
+ A `list` of the generated images, one per completed phase (the last entry is the full-resolution result).
+ """
+
+ # 0. Default height and width to unet
+ height = height or self.default_sample_size * self.vae_scale_factor
+ width = width or self.default_sample_size * self.vae_scale_factor
+
+ x1_size = self.default_sample_size * self.vae_scale_factor
+
+ height_scale = height / x1_size
+ width_scale = width / x1_size
+ scale_num = int(max(height_scale, width_scale))
+ aspect_ratio = min(height_scale, width_scale) / max(height_scale, width_scale)
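+ # scale_num is the number of progressive upscaling phases (how many multiples of the base SDXL
+ # resolution the larger side spans); aspect_ratio later shrinks the smaller side of each phase so the
+ # requested aspect ratio is preserved.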
+
+ original_size = original_size or (height, width)
+ target_size = target_size or (height, width)
+
+ # 1. Check inputs. Raise error if not correct
+ self.check_inputs(
+ prompt,
+ prompt_2,
+ height,
+ width,
+ callback_steps,
+ negative_prompt,
+ negative_prompt_2,
+ prompt_embeds,
+ negative_prompt_embeds,
+ pooled_prompt_embeds,
+ negative_pooled_prompt_embeds,
+ num_images_per_prompt,
+ )
+
+ # 2. Define call parameters
+ if prompt is not None and isinstance(prompt, str):
+ batch_size = 1
+ elif prompt is not None and isinstance(prompt, list):
+ batch_size = len(prompt)
+ else:
+ batch_size = prompt_embeds.shape[0]
+
+ device = self._execution_device
+ self.lowvram = lowvram
+ if self.lowvram:
+ self.vae.cpu()
+ self.unet.cpu()
+ self.text_encoder.to(device)
+ self.text_encoder_2.to(device)
+ if image_lr is not None:
+ image_lr = image_lr.cpu()
+
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
+ # corresponds to doing no classifier free guidance.
+ do_classifier_free_guidance = guidance_scale > 1.0
+
+ # 3. Encode input prompt
+ text_encoder_lora_scale = (
+ cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
+ )
+ (
+ prompt_embeds,
+ negative_prompt_embeds,
+ pooled_prompt_embeds,
+ negative_pooled_prompt_embeds,
+ ) = self.encode_prompt(
+ prompt=prompt,
+ prompt_2=prompt_2,
+ device=device,
+ num_images_per_prompt=num_images_per_prompt,
+ do_classifier_free_guidance=do_classifier_free_guidance,
+ negative_prompt=negative_prompt,
+ negative_prompt_2=negative_prompt_2,
+ prompt_embeds=prompt_embeds,
+ negative_prompt_embeds=negative_prompt_embeds,
+ pooled_prompt_embeds=pooled_prompt_embeds,
+ negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
+ lora_scale=text_encoder_lora_scale,
+ )
+
+ # 4. Prepare timesteps
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
+
+ timesteps = self.scheduler.timesteps
+
+ # 5. Prepare latent variables
+ num_channels_latents = self.unet.config.in_channels
+ latents = self.prepare_latents(
+ batch_size * num_images_per_prompt,
+ num_channels_latents,
+ height // scale_num,
+ width // scale_num,
+ prompt_embeds.dtype,
+ device,
+ generator,
+ latents,
+ )
+
+ # 6. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
+
+ # 7. Prepare added time ids & embeddings
+ add_text_embeds = pooled_prompt_embeds
+ add_time_ids = self._get_add_time_ids(
+ original_size, crops_coords_top_left, target_size, dtype=prompt_embeds.dtype
+ )
+ if negative_original_size is not None and negative_target_size is not None:
+ negative_add_time_ids = self._get_add_time_ids(
+ negative_original_size,
+ negative_crops_coords_top_left,
+ negative_target_size,
+ dtype=prompt_embeds.dtype,
+ )
+ else:
+ negative_add_time_ids = add_time_ids
+
+ if do_classifier_free_guidance:
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
+ add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0)
+ add_time_ids = torch.cat([negative_add_time_ids, add_time_ids], dim=0)
+ del negative_prompt_embeds, negative_pooled_prompt_embeds, negative_add_time_ids
+
+ prompt_embeds = prompt_embeds.to(device)
+ add_text_embeds = add_text_embeds.to(device)
+ add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1)
+
+ # 8. Denoising loop
+ num_warmup_steps = max(len(timesteps) - num_inference_steps * self.scheduler.order, 0)
+
+ # 8.1 Apply denoising_end
+ if denoising_end is not None and isinstance(denoising_end, float) and denoising_end > 0 and denoising_end < 1:
+ discrete_timestep_cutoff = int(
+ round(
+ self.scheduler.config.num_train_timesteps
+ - (denoising_end * self.scheduler.config.num_train_timesteps)
+ )
+ )
+ num_inference_steps = len(list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps)))
+ timesteps = timesteps[:num_inference_steps]
+
+ output_images = []
+
+ ###################################################### Phase Initialization ########################################################
+
+ if self.lowvram:
+ self.text_encoder.cpu()
+ self.text_encoder_2.cpu()
+
+ if image_lr is None:
+ print("### Phase 1 Denoising ###")
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
+ for i, t in enumerate(timesteps):
+
+ if self.lowvram:
+ self.vae.cpu()
+ self.unet.to(device)
+
+ latents_for_view = latents
+
+ # expand the latents if we are doing classifier free guidance
+ latent_model_input = (
+ latents.repeat_interleave(2, dim=0)
+ if do_classifier_free_guidance
+ else latents
+ )
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
+
+ # predict the noise residual
+ added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
+ noise_pred = self.unet(
+ latent_model_input,
+ t,
+ encoder_hidden_states=prompt_embeds,
+ cross_attention_kwargs=cross_attention_kwargs,
+ added_cond_kwargs=added_cond_kwargs,
+ return_dict=False,
+ )[0]
+
+ # perform guidance
+ if do_classifier_free_guidance:
+ noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2]
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+ if do_classifier_free_guidance and guidance_rescale > 0.0:
+ # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
+ noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
+
+ # compute the previous noisy sample x_t -> x_t-1
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
+
+ # call the callback, if provided
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
+ progress_bar.update()
+ if callback is not None and i % callback_steps == 0:
+ step_idx = i // getattr(self.scheduler, "order", 1)
+ callback(step_idx, t, latents)
+ del latents_for_view, latent_model_input, noise_pred
+ if do_classifier_free_guidance:
+ del noise_pred_text, noise_pred_uncond
+ else:
+ print("### Encoding Real Image ###")
+ latents = self.vae.encode(image_lr)
+ latents = latents.latent_dist.sample() * self.vae.config.scaling_factor
+
+ anchor_mean = latents.mean()
+ anchor_std = latents.std()
+ if self.lowvram:
+ latents = latents.cpu()
+ torch.cuda.empty_cache()
+ if not output_type == "latent":
+ # make sure the VAE is in float32 mode, as it overflows in float16
+ needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast
+
+ if self.lowvram:
+ needs_upcasting = False # use madebyollin/sdxl-vae-fp16-fix in lowvram mode!
+ self.unet.cpu()
+ self.vae.to(device)
+
+ if needs_upcasting:
+ self.upcast_vae()
+ latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)
+ if self.lowvram and multi_decoder:
+ current_width_height = self.unet.config.sample_size * self.vae_scale_factor
+ image = self.tiled_decode(latents, current_width_height, current_width_height)
+ else:
+ image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
+ # cast back to fp16 if needed
+ if needs_upcasting:
+ self.vae.to(dtype=torch.float16)
+
+ image = self.image_processor.postprocess(image, output_type=output_type)
+ if show_image:
+ plt.figure(figsize=(10, 10))
+ plt.imshow(image[0])
+ plt.axis('off') # Turn off axis numbers and ticks
+ plt.show()
+ output_images.append(image[0])
+
+ ####################################################### Phase Upscaling #####################################################
+ if image_lr is None:
+ starting_scale = 2
+ else:
+ starting_scale = 1
+ for current_scale_num in range(starting_scale, scale_num + 1):
+ if self.lowvram:
+ latents = latents.to(device)
+ self.unet.to(device)
+ torch.cuda.empty_cache()
+ print("### Phase {} Denoising ###".format(current_scale_num))
+ current_height = self.unet.config.sample_size * self.vae_scale_factor * current_scale_num
+ current_width = self.unet.config.sample_size * self.vae_scale_factor * current_scale_num
+ if height > width:
+ current_width = int(current_width * aspect_ratio)
+ else:
+ current_height = int(current_height * aspect_ratio)
+
+ latents = F.interpolate(latents.to(device), size=(int(current_height / self.vae_scale_factor), int(current_width / self.vae_scale_factor)), mode='bicubic')
+
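+ # Skip residual: pre-compute a noised copy of the upsampled latents for every timestep. During
+ # denoising, noise_latents[i] is blended back into the running latents with a cosine-decayed weight
+ # (c1 below), anchoring the early steps of this phase to the result of the previous phase.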
+ noise_latents = []
+ noise = torch.randn_like(latents)
+ for timestep in timesteps:
+ noise_latent = self.scheduler.add_noise(latents, noise, timestep.unsqueeze(0))
+ noise_latents.append(noise_latent)
+ latents = noise_latents[0]
+
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
+ for i, t in enumerate(timesteps):
+ count = torch.zeros_like(latents)
+ value = torch.zeros_like(latents)
+ cosine_factor = 0.5 * (1 + torch.cos(torch.pi * (self.scheduler.config.num_train_timesteps - t) / self.scheduler.config.num_train_timesteps)).cpu()
+
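+ # c1 is the skip-residual weight: close to 1 at the start of denoising (trust the upsampled latents)
+ # and decaying towards 0 as t decreases, letting the model take over the fine details.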
+ c1 = cosine_factor ** cosine_scale_1
+ latents = latents * (1 - c1) + noise_latents[i] * c1
+
+ ############################################# MultiDiffusion #############################################
+
+ views = self.get_views(current_height, current_width, stride=stride, window_size=self.unet.config.sample_size, random_jitter=True)
+ views_batch = [views[i : i + view_batch_size] for i in range(0, len(views), view_batch_size)]
+
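+ # Pad the latents by jitter_range on every side so that the jittered views produced by get_views
+ # (random_jitter=True) can be cropped without falling outside the tensor.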
+ jitter_range = (self.unet.config.sample_size - stride) // 4
+ latents_ = F.pad(latents, (jitter_range, jitter_range, jitter_range, jitter_range), 'constant', 0)
+
+ count_local = torch.zeros_like(latents_)
+ value_local = torch.zeros_like(latents_)
+
+ for j, batch_view in enumerate(views_batch):
+ vb_size = len(batch_view)
+
+ # get the latents corresponding to the current view coordinates
+ latents_for_view = torch.cat(
+ [
+ latents_[:, :, h_start:h_end, w_start:w_end]
+ for h_start, h_end, w_start, w_end in batch_view
+ ]
+ )
+
+ # expand the latents if we are doing classifier free guidance
+ latent_model_input = latents_for_view
+ latent_model_input = (
+ latent_model_input.repeat_interleave(2, dim=0)
+ if do_classifier_free_guidance
+ else latent_model_input
+ )
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
+
+ prompt_embeds_input = torch.cat([prompt_embeds] * vb_size)
+ add_text_embeds_input = torch.cat([add_text_embeds] * vb_size)
+ add_time_ids_input = []
+ for h_start, h_end, w_start, w_end in batch_view:
+ add_time_ids_ = add_time_ids.clone()
+ add_time_ids_[:, 2] = h_start * self.vae_scale_factor
+ add_time_ids_[:, 3] = w_start * self.vae_scale_factor
+ add_time_ids_input.append(add_time_ids_)
+ add_time_ids_input = torch.cat(add_time_ids_input)
+
+ # predict the noise residual
+ added_cond_kwargs = {"text_embeds": add_text_embeds_input, "time_ids": add_time_ids_input}
+ noise_pred = self.unet(
+ latent_model_input,
+ t,
+ encoder_hidden_states=prompt_embeds_input,
+ cross_attention_kwargs=cross_attention_kwargs,
+ added_cond_kwargs=added_cond_kwargs,
+ return_dict=False,
+ )[0]
+
+ if do_classifier_free_guidance:
+ noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2]
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+ if do_classifier_free_guidance and guidance_rescale > 0.0:
+ # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
+ noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
+
+ # compute the previous noisy sample x_t -> x_t-1
+ self.scheduler._init_step_index(t)
+ latents_denoised_batch = self.scheduler.step(
+ noise_pred, t, latents_for_view, **extra_step_kwargs, return_dict=False)[0]
+
+ # extract value from batch
+ for latents_view_denoised, (h_start, h_end, w_start, w_end) in zip(
+ latents_denoised_batch.chunk(vb_size), batch_view
+ ):
+ value_local[:, :, h_start:h_end, w_start:w_end] += latents_view_denoised
+ count_local[:, :, h_start:h_end, w_start:w_end] += 1
+
+ value_local = value_local[:, :, jitter_range: jitter_range + current_height // self.vae_scale_factor, jitter_range: jitter_range + current_width // self.vae_scale_factor]
+ count_local = count_local[:, :, jitter_range: jitter_range + current_height // self.vae_scale_factor, jitter_range: jitter_range + current_width // self.vae_scale_factor]
+
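+ # c2 balances the two denoising paths: the MultiDiffusion (local patch) result is weighted by (1 - c2)
+ # and the dilated-sampling (global) result by c2, with c2 decaying over the course of denoising.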
+ c2 = cosine_factor ** cosine_scale_2
+
+ value += value_local / count_local * (1 - c2)
+ count += torch.ones_like(value_local) * (1 - c2)
+
+ ############################################# Dilated Sampling #############################################
+
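+ # Dilated sampling: split the latents into current_scale_num x current_scale_num interleaved sub-grids
+ # (latents[:, :, h::s, w::s]) and denoise each sub-grid as a base-resolution sample, which injects
+ # global structure that the local patches alone cannot provide.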
+ views = [[h, w] for h in range(current_scale_num) for w in range(current_scale_num)]
+ views_batch = [views[i : i + view_batch_size] for i in range(0, len(views), view_batch_size)]
+
+ h_pad = (current_scale_num - (latents.size(2) % current_scale_num)) % current_scale_num
+ w_pad = (current_scale_num - (latents.size(3) % current_scale_num)) % current_scale_num
+ latents_ = F.pad(latents, (w_pad, 0, h_pad, 0), 'constant', 0)
+
+ count_global = torch.zeros_like(latents_)
+ value_global = torch.zeros_like(latents_)
+
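+ # Blur the latents with a Gaussian filter before dilated sampling (strength decays with c3) and
+ # re-normalise to the original mean/std, which reduces the grainy artifacts of naive dilation.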
+ c3 = 0.99 * cosine_factor ** cosine_scale_3 + 1e-2
+ std_, mean_ = latents_.std(), latents_.mean()
+ latents_gaussian = gaussian_filter(latents_, kernel_size=(2*current_scale_num-1), sigma=sigma*c3)
+ latents_gaussian = (latents_gaussian - latents_gaussian.mean()) / latents_gaussian.std() * std_ + mean_
+
+ for j, batch_view in enumerate(views_batch):
+ latents_for_view = torch.cat(
+ [
+ latents_[:, :, h::current_scale_num, w::current_scale_num]
+ for h, w in batch_view
+ ]
+ )
+ latents_for_view_gaussian = torch.cat(
+ [
+ latents_gaussian[:, :, h::current_scale_num, w::current_scale_num]
+ for h, w in batch_view
+ ]
+ )
+
+ vb_size = latents_for_view.size(0)
+
+ # expand the latents if we are doing classifier free guidance
+ latent_model_input = latents_for_view_gaussian
+ latent_model_input = (
+ latent_model_input.repeat_interleave(2, dim=0)
+ if do_classifier_free_guidance
+ else latent_model_input
+ )
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
+
+ prompt_embeds_input = torch.cat([prompt_embeds] * vb_size)
+ add_text_embeds_input = torch.cat([add_text_embeds] * vb_size)
+ add_time_ids_input = torch.cat([add_time_ids] * vb_size)
+
+ # predict the noise residual
+ added_cond_kwargs = {"text_embeds": add_text_embeds_input, "time_ids": add_time_ids_input}
+ noise_pred = self.unet(
+ latent_model_input,
+ t,
+ encoder_hidden_states=prompt_embeds_input,
+ cross_attention_kwargs=cross_attention_kwargs,
+ added_cond_kwargs=added_cond_kwargs,
+ return_dict=False,
+ )[0]
+
+ if do_classifier_free_guidance:
+ noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2]
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+ if do_classifier_free_guidance and guidance_rescale > 0.0:
+ # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
+ noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
+
+ # compute the previous noisy sample x_t -> x_t-1
+ self.scheduler._init_step_index(t)
+ latents_denoised_batch = self.scheduler.step(
+ noise_pred, t, latents_for_view, **extra_step_kwargs, return_dict=False)[0]
+
+ # extract value from batch
+ for latents_view_denoised, (h, w) in zip(
+ latents_denoised_batch.chunk(vb_size), batch_view
+ ):
+ value_global[:, :, h::current_scale_num, w::current_scale_num] += latents_view_denoised
+ count_global[:, :, h::current_scale_num, w::current_scale_num] += 1
+
+ c2 = cosine_factor ** cosine_scale_2
+
+ value_global = value_global[:, :, h_pad:, w_pad:]
+
+ value += value_global * c2
+ count += torch.ones_like(value_global) * c2
+
+ ###########################################################
+
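+ # Fuse the two paths: value / count is the weighted average of the local (MultiDiffusion) and global
+ # (dilated sampling) denoised latents, weighted by (1 - c2) and c2 respectively.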
+ latents = torch.where(count > 0, value / count, value)
+
+ # call the callback, if provided
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
+ progress_bar.update()
+ if callback is not None and i % callback_steps == 0:
+ step_idx = i // getattr(self.scheduler, "order", 1)
+ callback(step_idx, t, latents)
+
+ #########################################################################################################################################
+
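+ # Re-normalise the latents to the mean/std recorded after phase 1 ("anchor" statistics) so that the
+ # latent distribution does not drift as the resolution grows across phases.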
+ latents = (latents - latents.mean()) / latents.std() * anchor_std + anchor_mean
+ if self.lowvram:
+ latents = latents.cpu()
+ torch.cuda.empty_cache()
+ if not output_type == "latent":
+ # make sure the VAE is in float32 mode, as it overflows in float16
+ needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast
+
+ if self.lowvram:
+ needs_upcasting = False # use madebyollin/sdxl-vae-fp16-fix in lowvram mode!
+ self.unet.cpu()
+ self.vae.to(device)
+
+ if needs_upcasting:
+ self.upcast_vae()
+ latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)
+
+ print("### Phase {} Decoding ###".format(current_scale_num))
+ if multi_decoder:
+ image = self.tiled_decode(latents, current_height, current_width)
+ else:
+ image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
+
+ # cast back to fp16 if needed
+ if needs_upcasting:
+ self.vae.to(dtype=torch.float16)
+ else:
+ image = latents
+
+ if not output_type == "latent":
+ image = self.image_processor.postprocess(image, output_type=output_type)
+ if show_image:
+ plt.figure(figsize=(10, 10))
+ plt.imshow(image[0])
+ plt.axis('off') # Turn off axis numbers and ticks
+ plt.show()
+ output_images.append(image[0])
+
+ # Offload all models
+ self.maybe_free_model_hooks()
+
+ return output_images
+
+ # Override to properly handle the loading and unloading of the additional text encoder.
+ def load_lora_weights(self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs):
+ # We could have accessed the unet config from `lora_state_dict()` too. We pass
+ # it here explicitly to be able to tell that it's coming from an SDXL
+ # pipeline.
+
+ # Remove any existing hooks.
+ if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
+ from accelerate.hooks import AlignDevicesHook, CpuOffload, remove_hook_from_module
+ else:
+ raise ImportError("Offloading requires `accelerate v0.17.0` or higher.")
+
+ is_model_cpu_offload = False
+ is_sequential_cpu_offload = False
+ recursive = False
+ for _, component in self.components.items():
+ if isinstance(component, torch.nn.Module):
+ if hasattr(component, "_hf_hook"):
+ is_model_cpu_offload = isinstance(getattr(component, "_hf_hook"), CpuOffload)
+ is_sequential_cpu_offload = isinstance(getattr(component, "_hf_hook"), AlignDevicesHook)
+ logger.info(
+ "Accelerate hooks detected. Since you have called `load_lora_weights()`, the previous hooks will be first removed. Then the LoRA parameters will be loaded and the hooks will be applied again."
+ )
+ recursive = is_sequential_cpu_offload
+ remove_hook_from_module(component, recurse=recursive)
+ state_dict, network_alphas = self.lora_state_dict(
+ pretrained_model_name_or_path_or_dict,
+ unet_config=self.unet.config,
+ **kwargs,
+ )
+ self.load_lora_into_unet(state_dict, network_alphas=network_alphas, unet=self.unet)
+
+ text_encoder_state_dict = {k: v for k, v in state_dict.items() if "text_encoder." in k}
+ if len(text_encoder_state_dict) > 0:
+ self.load_lora_into_text_encoder(
+ text_encoder_state_dict,
+ network_alphas=network_alphas,
+ text_encoder=self.text_encoder,
+ prefix="text_encoder",
+ lora_scale=self.lora_scale,
+ )
+
+ text_encoder_2_state_dict = {k: v for k, v in state_dict.items() if "text_encoder_2." in k}
+ if len(text_encoder_2_state_dict) > 0:
+ self.load_lora_into_text_encoder(
+ text_encoder_2_state_dict,
+ network_alphas=network_alphas,
+ text_encoder=self.text_encoder_2,
+ prefix="text_encoder_2",
+ lora_scale=self.lora_scale,
+ )
+
+ # Offload back.
+ if is_model_cpu_offload:
+ self.enable_model_cpu_offload()
+ elif is_sequential_cpu_offload:
+ self.enable_sequential_cpu_offload()
+
+ @classmethod
+ def save_lora_weights(
+ cls,
+ save_directory: Union[str, os.PathLike],
+ unet_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
+ text_encoder_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
+ text_encoder_2_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
+ is_main_process: bool = True,
+ weight_name: str = None,
+ save_function: Callable = None,
+ safe_serialization: bool = True,
+ ):
+ state_dict = {}
+
+ def pack_weights(layers, prefix):
+ layers_weights = layers.state_dict() if isinstance(layers, torch.nn.Module) else layers
+ layers_state_dict = {f"{prefix}.{module_name}": param for module_name, param in layers_weights.items()}
+ return layers_state_dict
+
+ if not (unet_lora_layers or text_encoder_lora_layers or text_encoder_2_lora_layers):
+ raise ValueError(
+ "You must pass at least one of `unet_lora_layers`, `text_encoder_lora_layers` or `text_encoder_2_lora_layers`."
+ )
+
+ if unet_lora_layers:
+ state_dict.update(pack_weights(unet_lora_layers, "unet"))
+
+ if text_encoder_lora_layers and text_encoder_2_lora_layers:
+ state_dict.update(pack_weights(text_encoder_lora_layers, "text_encoder"))
+ state_dict.update(pack_weights(text_encoder_2_lora_layers, "text_encoder_2"))
+
+ cls.write_lora_layers(
+ state_dict=state_dict,
+ save_directory=save_directory,
+ is_main_process=is_main_process,
+ weight_name=weight_name,
+ save_function=save_function,
+ safe_serialization=safe_serialization,
+ )
+
+ def _remove_text_encoder_monkey_patch(self):
+ self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder)
+ self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder_2)
diff --git a/competitors_inference_code/DemoFusion/pipeline_demofusion_sdxl_controlnet.py b/competitors_inference_code/DemoFusion/pipeline_demofusion_sdxl_controlnet.py
new file mode 100644
index 0000000000000000000000000000000000000000..e80e495ffa6388d118e6de3338e55f546fee1a80
--- /dev/null
+++ b/competitors_inference_code/DemoFusion/pipeline_demofusion_sdxl_controlnet.py
@@ -0,0 +1,1796 @@
+# Copyright 2023 The HuggingFace Team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+
+import inspect
+import os
+from typing import Any, Callable, Dict, List, Optional, Tuple, Union
+import matplotlib.pyplot as plt
+
+import numpy as np
+import PIL.Image
+import torch
+import torch.nn.functional as F
+import random
+import warnings
+from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
+
+from diffusers.utils.import_utils import is_invisible_watermark_available
+
+from diffusers.image_processor import PipelineImageInput, VaeImageProcessor
+from diffusers.loaders import FromSingleFileMixin, LoraLoaderMixin, TextualInversionLoaderMixin
+from diffusers.models import AutoencoderKL, ControlNetModel, UNet2DConditionModel
+from diffusers.models.attention_processor import (
+ AttnProcessor2_0,
+ LoRAAttnProcessor2_0,
+ LoRAXFormersAttnProcessor,
+ XFormersAttnProcessor,
+)
+from diffusers.models.lora import adjust_lora_scale_text_encoder
+from diffusers.schedulers import KarrasDiffusionSchedulers
+from diffusers.utils import (
+ is_accelerate_available,
+ is_accelerate_version,
+ logging,
+ replace_example_docstring,
+)
+from diffusers.utils.torch_utils import is_compiled_module, randn_tensor
+from diffusers.pipelines.pipeline_utils import DiffusionPipeline
+from diffusers.pipelines.stable_diffusion_xl import StableDiffusionXLPipelineOutput
+
+
+if is_invisible_watermark_available():
+ from diffusers.pipelines.stable_diffusion_xl.watermark import StableDiffusionXLWatermarker
+
+from diffusers.pipelines.controlnet.multicontrolnet import MultiControlNetModel
+
+
+logger = logging.get_logger(__name__) # pylint: disable=invalid-name
+
+
+EXAMPLE_DOC_STRING = """
+ Examples:
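+ A minimal loading sketch (illustrative only: the model ids are examples, and the full set of call
+ arguments for this ControlNet variant is documented in `__call__` below):
+
+ ```py
+ >>> import torch
+ >>> from diffusers import ControlNetModel
+ >>> from pipeline_demofusion_sdxl_controlnet import DemoFusionSDXLControlNetPipeline
+
+ >>> controlnet = ControlNetModel.from_pretrained(
+ ...     "diffusers/controlnet-canny-sdxl-1.0", torch_dtype=torch.float16
+ ... )
+ >>> pipe = DemoFusionSDXLControlNetPipeline.from_pretrained(
+ ...     "stabilityai/stable-diffusion-xl-base-1.0", controlnet=controlnet, torch_dtype=torch.float16
+ ... ).to("cuda")
+ ```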
+"""
+
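+ # Depthwise Gaussian blur used by dilated sampling: gaussian_kernel builds a (channels, 1, k, k)
+ # kernel from an outer product of 1-D Gaussians, and gaussian_filter applies it per channel via a
+ # grouped conv2d with "same" padding.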
+def gaussian_kernel(kernel_size=3, sigma=1.0, channels=3):
+ x_coord = torch.arange(kernel_size)
+ gaussian_1d = torch.exp(-(x_coord - (kernel_size - 1) / 2) ** 2 / (2 * sigma ** 2))
+ gaussian_1d = gaussian_1d / gaussian_1d.sum()
+ gaussian_2d = gaussian_1d[:, None] * gaussian_1d[None, :]
+ kernel = gaussian_2d[None, None, :, :].repeat(channels, 1, 1, 1)
+
+ return kernel
+
+def gaussian_filter(latents, kernel_size=3, sigma=1.0):
+ channels = latents.shape[1]
+ kernel = gaussian_kernel(kernel_size, sigma, channels).to(latents.device, latents.dtype)
+ blurred_latents = F.conv2d(latents, kernel, padding=kernel_size//2, groups=channels)
+
+ return blurred_latents
+
+class DemoFusionSDXLControlNetPipeline(
+ DiffusionPipeline, TextualInversionLoaderMixin, LoraLoaderMixin, FromSingleFileMixin
+):
+ r"""
+ Pipeline for text-to-image generation using Stable Diffusion XL with ControlNet guidance.
+
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods
+ implemented for all pipelines (downloading, saving, running on a particular device, etc.).
+
+ The pipeline also inherits the following loading methods:
+ - [`~loaders.TextualInversionLoaderMixin.load_textual_inversion`] for loading textual inversion embeddings
+ - [`loaders.LoraLoaderMixin.load_lora_weights`] for loading LoRA weights
+ - [`loaders.FromSingleFileMixin.from_single_file`] for loading `.ckpt` files
+
+ Args:
+ vae ([`AutoencoderKL`]):
+ Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
+ text_encoder ([`~transformers.CLIPTextModel`]):
+ Frozen text-encoder ([clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14)).
+ text_encoder_2 ([`~transformers.CLIPTextModelWithProjection`]):
+ Second frozen text-encoder
+ ([laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)).
+ tokenizer ([`~transformers.CLIPTokenizer`]):
+ A `CLIPTokenizer` to tokenize text.
+ tokenizer_2 ([`~transformers.CLIPTokenizer`]):
+ A `CLIPTokenizer` to tokenize text.
+ unet ([`UNet2DConditionModel`]):
+ A `UNet2DConditionModel` to denoise the encoded image latents.
+ controlnet ([`ControlNetModel`] or `List[ControlNetModel]`):
+ Provides additional conditioning to the `unet` during the denoising process. If you set multiple
+ ControlNets as a list, the outputs from each ControlNet are added together to create one combined
+ additional conditioning.
+ scheduler ([`SchedulerMixin`]):
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
+ force_zeros_for_empty_prompt (`bool`, *optional*, defaults to `True`):
+ Whether the negative prompt embeddings should always be set to 0. Also see the config of
+ `stabilityai/stable-diffusion-xl-base-1.0`.
+ add_watermarker (`bool`, *optional*):
+ Whether to use the [invisible_watermark](https://github.com/ShieldMnt/invisible-watermark/) library to
+ watermark output images. If not defined, it defaults to `True` if the package is installed; otherwise no
+ watermarker is used.
+ """
+ model_cpu_offload_seq = (
+ "text_encoder->text_encoder_2->unet->vae" # leave controlnet out on purpose because it iterates with unet
+ )
+
+ def __init__(
+ self,
+ vae: AutoencoderKL,
+ text_encoder: CLIPTextModel,
+ text_encoder_2: CLIPTextModelWithProjection,
+ tokenizer: CLIPTokenizer,
+ tokenizer_2: CLIPTokenizer,
+ unet: UNet2DConditionModel,
+ controlnet: Union[ControlNetModel, List[ControlNetModel], Tuple[ControlNetModel], MultiControlNetModel],
+ scheduler: KarrasDiffusionSchedulers,
+ force_zeros_for_empty_prompt: bool = True,
+ add_watermarker: Optional[bool] = None,
+ ):
+ super().__init__()
+
+ if isinstance(controlnet, (list, tuple)):
+ controlnet = MultiControlNetModel(controlnet)
+
+ self.register_modules(
+ vae=vae,
+ text_encoder=text_encoder,
+ text_encoder_2=text_encoder_2,
+ tokenizer=tokenizer,
+ tokenizer_2=tokenizer_2,
+ unet=unet,
+ controlnet=controlnet,
+ scheduler=scheduler,
+ )
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
+ self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor, do_convert_rgb=True)
+ self.control_image_processor = VaeImageProcessor(
+ vae_scale_factor=self.vae_scale_factor, do_convert_rgb=True, do_normalize=False
+ )
+ add_watermarker = add_watermarker if add_watermarker is not None else is_invisible_watermark_available()
+
+ if add_watermarker:
+ self.watermark = StableDiffusionXLWatermarker()
+ else:
+ self.watermark = None
+
+ self.register_to_config(force_zeros_for_empty_prompt=force_zeros_for_empty_prompt)
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_slicing
+ def enable_vae_slicing(self):
+ r"""
+ Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to
+ compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.
+ """
+ self.vae.enable_slicing()
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_slicing
+ def disable_vae_slicing(self):
+ r"""
+ Disable sliced VAE decoding. If `enable_vae_slicing` was previously enabled, this method will go back to
+ computing decoding in one step.
+ """
+ self.vae.disable_slicing()
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.enable_vae_tiling
+ def enable_vae_tiling(self):
+ r"""
+ Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to
+ compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow
+ processing larger images.
+ """
+ self.vae.enable_tiling()
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.disable_vae_tiling
+ def disable_vae_tiling(self):
+ r"""
+ Disable tiled VAE decoding. If `enable_vae_tiling` was previously enabled, this method will go back to
+ computing decoding in one step.
+ """
+ self.vae.disable_tiling()
+
+ # Copied from diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl.StableDiffusionXLPipeline.encode_prompt
+ def encode_prompt(
+ self,
+ prompt: str,
+ prompt_2: Optional[str] = None,
+ device: Optional[torch.device] = None,
+ num_images_per_prompt: int = 1,
+ do_classifier_free_guidance: bool = True,
+ negative_prompt: Optional[str] = None,
+ negative_prompt_2: Optional[str] = None,
+ prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
+ pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ lora_scale: Optional[float] = None,
+ ):
+ r"""
+ Encodes the prompt into text encoder hidden states.
+
+ Args:
+ prompt (`str` or `List[str]`, *optional*):
+ prompt to be encoded
+ prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
+ used in both text-encoders
+ device: (`torch.device`):
+ torch device
+ num_images_per_prompt (`int`):
+ number of images that should be generated per prompt
+ do_classifier_free_guidance (`bool`):
+ whether to use classifier free guidance or not
+ negative_prompt (`str` or `List[str]`, *optional*):
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
+ less than `1`).
+ negative_prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
+ `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
+ prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
+ provided, text embeddings will be generated from `prompt` input argument.
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
+ argument.
+ pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
+ If not provided, pooled text embeddings will be generated from `prompt` input argument.
+ negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
+ input argument.
+ lora_scale (`float`, *optional*):
+ A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
+ """
+ device = device or self._execution_device
+
+ # set lora scale so that monkey patched LoRA
+ # function of text encoder can correctly access it
+ if lora_scale is not None and isinstance(self, LoraLoaderMixin):
+ self._lora_scale = lora_scale
+
+ # dynamically adjust the LoRA scale
+ adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
+ adjust_lora_scale_text_encoder(self.text_encoder_2, lora_scale)
+
+ if prompt is not None and isinstance(prompt, str):
+ batch_size = 1
+ elif prompt is not None and isinstance(prompt, list):
+ batch_size = len(prompt)
+ else:
+ batch_size = prompt_embeds.shape[0]
+
+ # Define tokenizers and text encoders
+ tokenizers = [self.tokenizer, self.tokenizer_2] if self.tokenizer is not None else [self.tokenizer_2]
+ text_encoders = (
+ [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2]
+ )
+
+ if prompt_embeds is None:
+ prompt_2 = prompt_2 or prompt
+ # textual inversion: process multi-vector tokens if necessary
+ prompt_embeds_list = []
+ prompts = [prompt, prompt_2]
+ for prompt, tokenizer, text_encoder in zip(prompts, tokenizers, text_encoders):
+ if isinstance(self, TextualInversionLoaderMixin):
+ prompt = self.maybe_convert_prompt(prompt, tokenizer)
+
+ text_inputs = tokenizer(
+ prompt,
+ padding="max_length",
+ max_length=tokenizer.model_max_length,
+ truncation=True,
+ return_tensors="pt",
+ )
+
+ text_input_ids = text_inputs.input_ids
+ untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
+
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
+ text_input_ids, untruncated_ids
+ ):
+ removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1 : -1])
+ logger.warning(
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
+ f" {tokenizer.model_max_length} tokens: {removed_text}"
+ )
+
+ prompt_embeds = text_encoder(
+ text_input_ids.to(device),
+ output_hidden_states=True,
+ )
+
+ # Only the pooled output of the final text encoder is kept (the loop overwrites it on each pass)
+ pooled_prompt_embeds = prompt_embeds[0]
+ prompt_embeds = prompt_embeds.hidden_states[-2]
+
+ prompt_embeds_list.append(prompt_embeds)
+
+ prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
+
+ # get unconditional embeddings for classifier free guidance
+ zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt
+ if do_classifier_free_guidance and negative_prompt_embeds is None and zero_out_negative_prompt:
+ negative_prompt_embeds = torch.zeros_like(prompt_embeds)
+ negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds)
+ elif do_classifier_free_guidance and negative_prompt_embeds is None:
+ negative_prompt = negative_prompt or ""
+ negative_prompt_2 = negative_prompt_2 or negative_prompt
+
+ uncond_tokens: List[str]
+ if prompt is not None and type(prompt) is not type(negative_prompt):
+ raise TypeError(
+ f"`negative_prompt` should be the same type to `prompt`, but got {type(negative_prompt)} !="
+ f" {type(prompt)}."
+ )
+ elif isinstance(negative_prompt, str):
+ uncond_tokens = [negative_prompt, negative_prompt_2]
+ elif batch_size != len(negative_prompt):
+ raise ValueError(
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
+ " the batch size of `prompt`."
+ )
+ else:
+ uncond_tokens = [negative_prompt, negative_prompt_2]
+
+ negative_prompt_embeds_list = []
+ for negative_prompt, tokenizer, text_encoder in zip(uncond_tokens, tokenizers, text_encoders):
+ if isinstance(self, TextualInversionLoaderMixin):
+ negative_prompt = self.maybe_convert_prompt(negative_prompt, tokenizer)
+
+ max_length = prompt_embeds.shape[1]
+ uncond_input = tokenizer(
+ negative_prompt,
+ padding="max_length",
+ max_length=max_length,
+ truncation=True,
+ return_tensors="pt",
+ )
+
+ negative_prompt_embeds = text_encoder(
+ uncond_input.input_ids.to(device),
+ output_hidden_states=True,
+ )
+ # Only the pooled output of the final text encoder is kept (the loop overwrites it on each pass)
+ negative_pooled_prompt_embeds = negative_prompt_embeds[0]
+ negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
+
+ negative_prompt_embeds_list.append(negative_prompt_embeds)
+
+ negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1)
+
+ prompt_embeds = prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
+ bs_embed, seq_len, _ = prompt_embeds.shape
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
+
+ if do_classifier_free_guidance:
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
+ seq_len = negative_prompt_embeds.shape[1]
+ negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
+
+ pooled_prompt_embeds = pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
+ bs_embed * num_images_per_prompt, -1
+ )
+ if do_classifier_free_guidance:
+ negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
+ bs_embed * num_images_per_prompt, -1
+ )
+
+ return prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
+ def prepare_extra_step_kwargs(self, generator, eta):
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
+ # and should be between [0, 1]
+
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
+ extra_step_kwargs = {}
+ if accepts_eta:
+ extra_step_kwargs["eta"] = eta
+
+ # check if the scheduler accepts generator
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
+ if accepts_generator:
+ extra_step_kwargs["generator"] = generator
+ return extra_step_kwargs
+
+ def check_inputs(
+ self,
+ prompt,
+ prompt_2,
+ image,
+ callback_steps,
+ negative_prompt=None,
+ negative_prompt_2=None,
+ prompt_embeds=None,
+ negative_prompt_embeds=None,
+ pooled_prompt_embeds=None,
+ negative_pooled_prompt_embeds=None,
+ controlnet_conditioning_scale=1.0,
+ control_guidance_start=0.0,
+ control_guidance_end=1.0,
+ ):
+ if (callback_steps is None) or (
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
+ ):
+ raise ValueError(
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
+ f" {type(callback_steps)}."
+ )
+
+ if prompt is not None and prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
+ " only forward one of the two."
+ )
+ elif prompt_2 is not None and prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
+ " only forward one of the two."
+ )
+ elif prompt is None and prompt_embeds is None:
+ raise ValueError(
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
+ )
+ elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
+ elif prompt_2 is not None and (not isinstance(prompt_2, str) and not isinstance(prompt_2, list)):
+ raise ValueError(f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}")
+
+ if negative_prompt is not None and negative_prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
+ )
+ elif negative_prompt_2 is not None and negative_prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:"
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
+ )
+
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
+ raise ValueError(
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
+ f" {negative_prompt_embeds.shape}."
+ )
+
+ if prompt_embeds is not None and pooled_prompt_embeds is None:
+ raise ValueError(
+ "If `prompt_embeds` are provided, `pooled_prompt_embeds` also have to be passed. Make sure to generate `pooled_prompt_embeds` from the same text encoder that was used to generate `prompt_embeds`."
+ )
+
+ if negative_prompt_embeds is not None and negative_pooled_prompt_embeds is None:
+ raise ValueError(
+ "If `negative_prompt_embeds` are provided, `negative_pooled_prompt_embeds` also have to be passed. Make sure to generate `negative_pooled_prompt_embeds` from the same text encoder that was used to generate `negative_prompt_embeds`."
+ )
+
+ # `prompt` needs more sophisticated handling when there are multiple
+ # conditionings.
+ if isinstance(self.controlnet, MultiControlNetModel):
+ if isinstance(prompt, list):
+ logger.warning(
+ f"You have {len(self.controlnet.nets)} ControlNets and you have passed {len(prompt)}"
+ " prompts. The conditionings will be fixed across the prompts."
+ )
+
+ # Check `image`
+ is_compiled = hasattr(F, "scaled_dot_product_attention") and isinstance(
+ self.controlnet, torch._dynamo.eval_frame.OptimizedModule
+ )
+ if (
+ isinstance(self.controlnet, ControlNetModel)
+ or is_compiled
+ and isinstance(self.controlnet._orig_mod, ControlNetModel)
+ ):
+ self.check_image(image, prompt, prompt_embeds)
+ elif (
+ isinstance(self.controlnet, MultiControlNetModel)
+ or is_compiled
+ and isinstance(self.controlnet._orig_mod, MultiControlNetModel)
+ ):
+ if not isinstance(image, list):
+ raise TypeError("For multiple controlnets: `image` must be type `list`")
+
+ # When `image` is a nested list:
+ # (e.g. [[canny_image_1, pose_image_1], [canny_image_2, pose_image_2]])
+ elif any(isinstance(i, list) for i in image):
+ raise ValueError("A single batch of multiple conditionings are supported at the moment.")
+ elif len(image) != len(self.controlnet.nets):
+ raise ValueError(
+ f"For multiple controlnets: `image` must have the same length as the number of controlnets, but got {len(image)} images and {len(self.controlnet.nets)} ControlNets."
+ )
+
+ for image_ in image:
+ self.check_image(image_, prompt, prompt_embeds)
+ else:
+ assert False
+
+ # Check `controlnet_conditioning_scale`
+ if (
+ isinstance(self.controlnet, ControlNetModel)
+ or is_compiled
+ and isinstance(self.controlnet._orig_mod, ControlNetModel)
+ ):
+ if not isinstance(controlnet_conditioning_scale, float):
+ raise TypeError("For single controlnet: `controlnet_conditioning_scale` must be type `float`.")
+ elif (
+ isinstance(self.controlnet, MultiControlNetModel)
+ or is_compiled
+ and isinstance(self.controlnet._orig_mod, MultiControlNetModel)
+ ):
+ if isinstance(controlnet_conditioning_scale, list):
+ if any(isinstance(i, list) for i in controlnet_conditioning_scale):
+ raise ValueError("A single batch of multiple conditionings are supported at the moment.")
+ elif isinstance(controlnet_conditioning_scale, list) and len(controlnet_conditioning_scale) != len(
+ self.controlnet.nets
+ ):
+ raise ValueError(
+ "For multiple controlnets: When `controlnet_conditioning_scale` is specified as `list`, it must have"
+ " the same length as the number of controlnets"
+ )
+ else:
+ assert False
+
+ if not isinstance(control_guidance_start, (tuple, list)):
+ control_guidance_start = [control_guidance_start]
+
+ if not isinstance(control_guidance_end, (tuple, list)):
+ control_guidance_end = [control_guidance_end]
+
+ if len(control_guidance_start) != len(control_guidance_end):
+ raise ValueError(
+ f"`control_guidance_start` has {len(control_guidance_start)} elements, but `control_guidance_end` has {len(control_guidance_end)} elements. Make sure to provide the same number of elements to each list."
+ )
+
+ if isinstance(self.controlnet, MultiControlNetModel):
+ if len(control_guidance_start) != len(self.controlnet.nets):
+ raise ValueError(
+ f"`control_guidance_start`: {control_guidance_start} has {len(control_guidance_start)} elements but there are {len(self.controlnet.nets)} controlnets available. Make sure to provide {len(self.controlnet.nets)}."
+ )
+
+ for start, end in zip(control_guidance_start, control_guidance_end):
+ if start >= end:
+ raise ValueError(
+ f"control guidance start: {start} cannot be larger or equal to control guidance end: {end}."
+ )
+ if start < 0.0:
+ raise ValueError(f"control guidance start: {start} can't be smaller than 0.")
+ if end > 1.0:
+ raise ValueError(f"control guidance end: {end} can't be larger than 1.0.")
+
+ # Copied from diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline.check_image
+ def check_image(self, image, prompt, prompt_embeds):
+ image_is_pil = isinstance(image, PIL.Image.Image)
+ image_is_tensor = isinstance(image, torch.Tensor)
+ image_is_np = isinstance(image, np.ndarray)
+ image_is_pil_list = isinstance(image, list) and isinstance(image[0], PIL.Image.Image)
+ image_is_tensor_list = isinstance(image, list) and isinstance(image[0], torch.Tensor)
+ image_is_np_list = isinstance(image, list) and isinstance(image[0], np.ndarray)
+
+ if (
+ not image_is_pil
+ and not image_is_tensor
+ and not image_is_np
+ and not image_is_pil_list
+ and not image_is_tensor_list
+ and not image_is_np_list
+ ):
+ raise TypeError(
+ f"image must be passed and be one of PIL image, numpy array, torch tensor, list of PIL images, list of numpy arrays or list of torch tensors, but is {type(image)}"
+ )
+
+ if image_is_pil:
+ image_batch_size = 1
+ else:
+ image_batch_size = len(image)
+
+ if prompt is not None and isinstance(prompt, str):
+ prompt_batch_size = 1
+ elif prompt is not None and isinstance(prompt, list):
+ prompt_batch_size = len(prompt)
+ elif prompt_embeds is not None:
+ prompt_batch_size = prompt_embeds.shape[0]
+
+ if image_batch_size != 1 and image_batch_size != prompt_batch_size:
+ raise ValueError(
+ f"If image batch size is not 1, image batch size must be same as prompt batch size. image batch size: {image_batch_size}, prompt batch size: {prompt_batch_size}"
+ )
+
+ # Copied from diffusers.pipelines.controlnet.pipeline_controlnet.StableDiffusionControlNetPipeline.prepare_image
+ def prepare_image(
+ self,
+ image,
+ width,
+ height,
+ batch_size,
+ num_images_per_prompt,
+ device,
+ dtype,
+ do_classifier_free_guidance=False,
+ guess_mode=False,
+ ):
+ image = self.control_image_processor.preprocess(image, height=height, width=width).to(dtype=torch.float32)
+ image_batch_size = image.shape[0]
+
+ if image_batch_size == 1:
+ repeat_by = batch_size
+ else:
+ # image batch size is the same as prompt batch size
+ repeat_by = num_images_per_prompt
+
+ image = image.repeat_interleave(repeat_by, dim=0)
+
+ image = image.to(device=device, dtype=dtype)
+
+ if do_classifier_free_guidance and not guess_mode:
+ image = torch.cat([image] * 2)
+
+ return image
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
+ def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
+ shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
+ if isinstance(generator, list) and len(generator) != batch_size:
+ raise ValueError(
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
+ )
+
+ if latents is None:
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
+ else:
+ latents = latents.to(device)
+
+ # scale the initial noise by the standard deviation required by the scheduler
+ latents = latents * self.scheduler.init_noise_sigma
+ return latents
+
+ # Copied from diffusers.pipelines.stable_diffusion_xl.pipeline_stable_diffusion_xl.StableDiffusionXLPipeline._get_add_time_ids
+ def _get_add_time_ids(self, original_size, crops_coords_top_left, target_size, dtype):
+ add_time_ids = list(original_size + crops_coords_top_left + target_size)
+
+ passed_add_embed_dim = (
+ self.unet.config.addition_time_embed_dim * len(add_time_ids) + self.text_encoder_2.config.projection_dim
+ )
+ expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features
+
+ if expected_add_embed_dim != passed_add_embed_dim:
+ raise ValueError(
+ f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`."
+ )
+
+ add_time_ids = torch.tensor([add_time_ids], dtype=dtype)
+ return add_time_ids
+
+ def get_views(self, height, width, window_size=128, stride=64, random_jitter=False):
+ # Here, we define the mappings F_i (see Eq. 7 in the MultiDiffusion paper https://arxiv.org/abs/2302.08113)
+ # if panorama's height/width < window_size, num_blocks of height/width should return 1
+ height //= self.vae_scale_factor
+ width //= self.vae_scale_factor
+ num_blocks_height = int((height - window_size) / stride - 1e-6) + 2 if height > window_size else 1
+ num_blocks_width = int((width - window_size) / stride - 1e-6) + 2 if width > window_size else 1
+ total_num_blocks = int(num_blocks_height * num_blocks_width)
+ views = []
+ for i in range(total_num_blocks):
+ h_start = int((i // num_blocks_width) * stride)
+ h_end = h_start + window_size
+ w_start = int((i % num_blocks_width) * stride)
+ w_end = w_start + window_size
+
+ if h_end > height:
+ h_start = int(h_start + height - h_end)
+ h_end = int(height)
+ if w_end > width:
+ w_start = int(w_start + width - w_end)
+ w_end = int(width)
+ if h_start < 0:
+ h_end = int(h_end - h_start)
+ h_start = 0
+ if w_start < 0:
+ w_end = int(w_end - w_start)
+ w_start = 0
+
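+ # Random jitter (used in the MultiDiffusion phase) shifts each interior window by up to
+ # (window_size - stride) // 4 latent pixels so that patch seams do not fall at fixed positions.
+ # The returned coordinates are offset by jitter_range, so the caller pads the latents
+ # (and the condition image) by the same amount before cropping the views.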
+ if random_jitter:
+ jitter_range = (window_size - stride) // 4
+ w_jitter = 0
+ h_jitter = 0
+ if (w_start != 0) and (w_end != width):
+ w_jitter = random.randint(-jitter_range, jitter_range)
+ elif (w_start == 0) and (w_end != width):
+ w_jitter = random.randint(-jitter_range, 0)
+ elif (w_start != 0) and (w_end == width):
+ w_jitter = random.randint(0, jitter_range)
+ if (h_start != 0) and (h_end != height):
+ h_jitter = random.randint(-jitter_range, jitter_range)
+ elif (h_start == 0) and (h_end != height):
+ h_jitter = random.randint(-jitter_range, 0)
+ elif (h_start != 0) and (h_end == height):
+ h_jitter = random.randint(0, jitter_range)
+ h_start += (h_jitter + jitter_range)
+ h_end += (h_jitter + jitter_range)
+ w_start += (w_jitter + jitter_range)
+ w_end += (w_jitter + jitter_range)
+
+ views.append((h_start, h_end, w_start, w_end))
+ return views
+
+ def tiled_decode(self, latents, current_height, current_width):
+ sample_size = self.unet.config.sample_size
+ core_size = self.unet.config.sample_size // 4
+ core_stride = core_size
+ pad_size = self.unet.config.sample_size // 8 * 3
+ decoder_view_batch_size = 1
+
+ if self.lowvram:
+ core_stride = core_size // 2
+ pad_size = core_size
+
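+ # Tiled decoding: each latent view is decoded together with `pad_size` latent pixels of
+ # context on every side; the decoded border is then cropped away and overlapping pixel
+ # regions are averaged via the `count` buffer to avoid visible tile seams.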
+ views = self.get_views(current_height, current_width, stride=core_stride, window_size=core_size)
+ views_batch = [views[i : i + decoder_view_batch_size] for i in range(0, len(views), decoder_view_batch_size)]
+ latents_ = F.pad(latents, (pad_size, pad_size, pad_size, pad_size), 'constant', 0)
+ image = torch.zeros(latents.size(0), 3, current_height, current_width).to(latents.device)
+ count = torch.zeros_like(image).to(latents.device)
+ # get the latents corresponding to the current view coordinates
+ with self.progress_bar(total=len(views_batch)) as progress_bar:
+ for j, batch_view in enumerate(views_batch):
+ vb_size = len(batch_view)
+ latents_for_view = torch.cat(
+ [
+ latents_[:, :, h_start:h_end+pad_size*2, w_start:w_end+pad_size*2]
+ for h_start, h_end, w_start, w_end in batch_view
+ ]
+ ).to(self.vae.device)
+ image_patch = self.vae.decode(latents_for_view / self.vae.config.scaling_factor, return_dict=False)[0]
+ h_start, h_end, w_start, w_end = views[j]
+ h_start, h_end, w_start, w_end = h_start * self.vae_scale_factor, h_end * self.vae_scale_factor, w_start * self.vae_scale_factor, w_end * self.vae_scale_factor
+ p_h_start, p_h_end, p_w_start, p_w_end = pad_size * self.vae_scale_factor, image_patch.size(2) - pad_size * self.vae_scale_factor, pad_size * self.vae_scale_factor, image_patch.size(3) - pad_size * self.vae_scale_factor
+ image[:, :, h_start:h_end, w_start:w_end] += image_patch[:, :, p_h_start:p_h_end, p_w_start:p_w_end].to(latents.device)
+ count[:, :, h_start:h_end, w_start:w_end] += 1
+ progress_bar.update()
+ image = image / count
+
+ return image
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.upcast_vae
+ def upcast_vae(self):
+ dtype = self.vae.dtype
+ self.vae.to(dtype=torch.float32)
+ use_torch_2_0_or_xformers = isinstance(
+ self.vae.decoder.mid_block.attentions[0].processor,
+ (
+ AttnProcessor2_0,
+ XFormersAttnProcessor,
+ LoRAXFormersAttnProcessor,
+ LoRAAttnProcessor2_0,
+ ),
+ )
+ # if xformers or torch_2_0 is used attention block does not need
+ # to be in float32 which can save lots of memory
+ if use_torch_2_0_or_xformers:
+ self.vae.post_quant_conv.to(dtype)
+ self.vae.decoder.conv_in.to(dtype)
+ self.vae.decoder.mid_block.to(dtype)
+
+ @torch.no_grad()
+ @replace_example_docstring(EXAMPLE_DOC_STRING)
+ def __call__(
+ self,
+ prompt: Union[str, List[str]] = None,
+ prompt_2: Optional[Union[str, List[str]]] = None,
+ condition_image: PipelineImageInput = None,
+ height: Optional[int] = None,
+ width: Optional[int] = None,
+ num_inference_steps: int = 50,
+ guidance_scale: float = 5.0,
+ negative_prompt: Optional[Union[str, List[str]]] = None,
+ negative_prompt_2: Optional[Union[str, List[str]]] = None,
+ num_images_per_prompt: Optional[int] = 1,
+ eta: float = 0.0,
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
+ latents: Optional[torch.FloatTensor] = None,
+ prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
+ pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ output_type: Optional[str] = "pil",
+ return_dict: bool = True,
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
+ callback_steps: int = 1,
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
+ controlnet_conditioning_scale: Union[float, List[float]] = 1.0,
+ guess_mode: bool = False,
+ control_guidance_start: Union[float, List[float]] = 0.0,
+ control_guidance_end: Union[float, List[float]] = 1.0,
+ original_size: Tuple[int, int] = None,
+ crops_coords_top_left: Tuple[int, int] = (0, 0),
+ target_size: Tuple[int, int] = None,
+ negative_original_size: Optional[Tuple[int, int]] = None,
+ negative_crops_coords_top_left: Tuple[int, int] = (0, 0),
+ negative_target_size: Optional[Tuple[int, int]] = None,
+ ################### DemoFusion specific parameters ####################
+ image_lr: Optional[torch.FloatTensor] = None,
+ view_batch_size: int = 16,
+ multi_decoder: bool = True,
+ stride: Optional[int] = 64,
+ cosine_scale_1: Optional[float] = 3.,
+ cosine_scale_2: Optional[float] = 1.,
+ cosine_scale_3: Optional[float] = 1.,
+ sigma: Optional[float] = 1.0,
+ show_image: bool = False,
+ lowvram: bool = False,
+ ):
+ r"""
+ The call function to the pipeline for generation.
+
+ Args:
+ prompt (`str` or `List[str]`, *optional*):
+ The prompt or prompts to guide image generation. If not defined, you need to pass `prompt_embeds`.
+ prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts to be sent to `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
+ used in both text-encoders.
+ condition_image (`torch.FloatTensor`, `PIL.Image.Image`, `np.ndarray`, `List[torch.FloatTensor]`, `List[PIL.Image.Image]`, `List[np.ndarray]`,
+ `List[List[torch.FloatTensor]]`, `List[List[np.ndarray]]` or `List[List[PIL.Image.Image]]`):
+ The ControlNet input condition to provide guidance to the `unet` for generation. If the type is
+ specified as `torch.FloatTensor`, it is passed to ControlNet as is. `PIL.Image.Image` can also be
+ accepted as an image. The dimensions of the output image defaults to `image`'s dimensions. If height
+ and/or width are passed, `image` is resized accordingly. If multiple ControlNets are specified in
+ `init`, images must be passed as a list such that each element of the list can be correctly batched for
+ input to a single ControlNet.
+ height (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
+ The height in pixels of the generated image. Anything below 512 pixels won't work well for
+ [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
+ and checkpoints that are not specifically fine-tuned on low resolutions.
+ width (`int`, *optional*, defaults to `self.unet.config.sample_size * self.vae_scale_factor`):
+ The width in pixels of the generated image. Anything below 512 pixels won't work well for
+ [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
+ and checkpoints that are not specifically fine-tuned on low resolutions.
+ num_inference_steps (`int`, *optional*, defaults to 50):
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
+ expense of slower inference.
+ guidance_scale (`float`, *optional*, defaults to 5.0):
+ A higher guidance scale value encourages the model to generate images closely linked to the text
+ `prompt` at the expense of lower image quality. Guidance scale is enabled when `guidance_scale > 1`.
+ negative_prompt (`str` or `List[str]`, *optional*):
+ The prompt or prompts to guide what to not include in image generation. If not defined, you need to
+ pass `negative_prompt_embeds` instead. Ignored when not using guidance (`guidance_scale < 1`).
+ negative_prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts to guide what to not include in image generation. This is sent to `tokenizer_2`
+ and `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
+ The number of images to generate per prompt.
+ eta (`float`, *optional*, defaults to 0.0):
+ Corresponds to parameter eta (η) from the [DDIM](https://arxiv.org/abs/2010.02502) paper. Only applies
+ to the [`~schedulers.DDIMScheduler`], and is ignored in other schedulers.
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
+ A [`torch.Generator`](https://pytorch.org/docs/stable/generated/torch.Generator.html) to make
+ generation deterministic.
+ latents (`torch.FloatTensor`, *optional*):
+ Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
+ tensor is generated by sampling using the supplied random `generator`.
+ prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not
+ provided, text embeddings are generated from the `prompt` input argument.
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
+ not provided, `negative_prompt_embeds` are generated from the `negative_prompt` input argument.
+ pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated pooled text embeddings. Can be used to easily tweak text inputs (prompt weighting). If
+ not provided, pooled text embeddings are generated from `prompt` input argument.
+ negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs (prompt
+ weighting). If not provided, pooled `negative_prompt_embeds` are generated from `negative_prompt` input
+ argument.
+ output_type (`str`, *optional*, defaults to `"pil"`):
+ The output format of the generated image. Choose between `PIL.Image` or `np.array`.
+ return_dict (`bool`, *optional*, defaults to `True`):
+ Whether or not to return a [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] instead of a
+ plain tuple.
+ callback (`Callable`, *optional*):
+ A function that calls every `callback_steps` steps during inference. The function is called with the
+ following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
+ callback_steps (`int`, *optional*, defaults to 1):
+ The frequency at which the `callback` function is called. If not specified, the callback is called at
+ every step.
+ cross_attention_kwargs (`dict`, *optional*):
+ A kwargs dictionary that if specified is passed along to the [`AttentionProcessor`] as defined in
+ [`self.processor`](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
+ controlnet_conditioning_scale (`float` or `List[float]`, *optional*, defaults to 1.0):
+ The outputs of the ControlNet are multiplied by `controlnet_conditioning_scale` before they are added
+ to the residual in the original `unet`. If multiple ControlNets are specified in `init`, you can set
+ the corresponding scale as a list.
+ guess_mode (`bool`, *optional*, defaults to `False`):
+ The ControlNet encoder tries to recognize the content of the input image even if you remove all
+ prompts. A `guidance_scale` value between 3.0 and 5.0 is recommended.
+ control_guidance_start (`float` or `List[float]`, *optional*, defaults to 0.0):
+ The percentage of total steps at which the ControlNet starts applying.
+ control_guidance_end (`float` or `List[float]`, *optional*, defaults to 1.0):
+ The percentage of total steps at which the ControlNet stops applying.
+ original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
+ `original_size` defaults to `(height, width)` if not specified. Part of SDXL's micro-conditioning as
+ explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
+ `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
+ `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
+ `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ For most cases, `target_size` should be set to the desired height and width of the generated image. If
+ not specified it will default to `(height, width)`. Part of SDXL's micro-conditioning as explained in
+ section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ negative_original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ To negatively condition the generation process based on a specific image resolution. Part of SDXL's
+ micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+ negative_crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
+ To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
+ micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+ negative_target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ To negatively condition the generation process based on a target image resolution. It should be the same
+ as the `target_size` for most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+ ################### DemoFusion specific parameters ####################
+ image_lr (`torch.FloatTensor`, *optional*, defaults to None):
+ Low-resolution image input for upscaling. If provided, DemoFusion will encode it as the initial latent representation.
+ view_batch_size (`int`, defaults to 16):
+ The batch size for multiple denoising paths. Typically, a larger batch size can result in higher
+ efficiency but comes with increased GPU memory requirements.
+ multi_decoder (`bool`, defaults to True):
+ Determines whether to use a tiled decoder. Generally, when the resolution exceeds 3072x3072,
+ a tiled decoder becomes necessary.
+ stride (`int`, defaults to 64):
+ The stride of moving local patches. A smaller stride is better for alleviating seam issues,
+ but it also introduces additional computational overhead and inference time.
+ cosine_scale_1 (`float`, defaults to 3):
+ Controls the strength of the skip residual. For specific impacts, please refer to Appendix C
+ in the DemoFusion paper.
+ cosine_scale_2 (`float`, defaults to 1):
+ Controls the strength of dilated sampling. For specific impacts, please refer to Appendix C
+ in the DemoFusion paper.
+ cosine_scale_3 (`float`, defaults to 1):
+ Controls the strength of the Gaussian filter. For specific impacts, please refer to Appendix C
+ in the DemoFusion paper.
+ sigma (`float`, defaults to 1):
+ The standard deviation of the Gaussian filter.
+ show_image (`bool`, defaults to False):
+ Determines whether to show intermediate results during generation.
+ lowvram (`bool`, defaults to False):
+ Try to fit the pipeline into 8 GB of VRAM; requires xformers to be installed.
+ Examples:
+
+ Returns:
+ [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] or `tuple`:
+ If `return_dict` is `True`, [`~pipelines.stable_diffusion.StableDiffusionPipelineOutput`] is returned,
+ otherwise a `tuple` is returned containing the output images.
+ """
+ controlnet = self.controlnet._orig_mod if is_compiled_module(self.controlnet) else self.controlnet
+
+ # align format for control guidance
+ if not isinstance(control_guidance_start, list) and isinstance(control_guidance_end, list):
+ control_guidance_start = len(control_guidance_end) * [control_guidance_start]
+ elif not isinstance(control_guidance_end, list) and isinstance(control_guidance_start, list):
+ control_guidance_end = len(control_guidance_start) * [control_guidance_end]
+ elif not isinstance(control_guidance_start, list) and not isinstance(control_guidance_end, list):
+ mult = len(controlnet.nets) if isinstance(controlnet, MultiControlNetModel) else 1
+ control_guidance_start, control_guidance_end = mult * [control_guidance_start], mult * [
+ control_guidance_end
+ ]
+
+ # 0. Default height and width to unet
+ height = height or self.unet.config.sample_size * self.vae_scale_factor
+ width = width or self.unet.config.sample_size * self.vae_scale_factor
+
+ x1_size = self.unet.config.sample_size * self.vae_scale_factor
+
+ height_scale = height / x1_size
+ width_scale = width / x1_size
+ scale_num = int(max(height_scale, width_scale))
+ aspect_ratio = min(height_scale, width_scale) / max(height_scale, width_scale)
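+ # scale_num is the number of progressive upscaling phases: phase k runs denoising at
+ # k times the UNet's native resolution, and aspect_ratio preserves a non-square
+ # height/width request within each phase.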
+
+ original_size = original_size or (height, width)
+ target_size = target_size or (height, width)
+
+ # 1. Check inputs. Raise error if not correct
+ self.check_inputs(
+ prompt,
+ prompt_2,
+ condition_image,
+ callback_steps,
+ negative_prompt,
+ negative_prompt_2,
+ prompt_embeds,
+ negative_prompt_embeds,
+ pooled_prompt_embeds,
+ negative_pooled_prompt_embeds,
+ controlnet_conditioning_scale,
+ control_guidance_start,
+ control_guidance_end,
+ )
+
+ # 2. Define call parameters
+ if prompt is not None and isinstance(prompt, str):
+ batch_size = 1
+ elif prompt is not None and isinstance(prompt, list):
+ batch_size = len(prompt)
+ else:
+ batch_size = prompt_embeds.shape[0]
+
+ device = self._execution_device
+ self.lowvram = lowvram
+ if self.lowvram:
+ self.vae.cpu()
+ self.unet.cpu()
+ self.text_encoder.to(device)
+ self.text_encoder_2.to(device)
+ if image_lr is not None:
+ image_lr = image_lr.cpu()
+ # here `guidance_scale` is defined analog to the guidance weight `w` of equation (2)
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
+ # corresponds to doing no classifier free guidance.
+ do_classifier_free_guidance = guidance_scale > 1.0
+
+ if isinstance(controlnet, MultiControlNetModel) and isinstance(controlnet_conditioning_scale, float):
+ controlnet_conditioning_scale = [controlnet_conditioning_scale] * len(controlnet.nets)
+
+ global_pool_conditions = (
+ controlnet.config.global_pool_conditions
+ if isinstance(controlnet, ControlNetModel)
+ else controlnet.nets[0].config.global_pool_conditions
+ )
+ guess_mode = guess_mode or global_pool_conditions
+
+ # 3. Encode input prompt
+ text_encoder_lora_scale = (
+ cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
+ )
+ (
+ prompt_embeds,
+ negative_prompt_embeds,
+ pooled_prompt_embeds,
+ negative_pooled_prompt_embeds,
+ ) = self.encode_prompt(
+ prompt,
+ prompt_2,
+ device,
+ num_images_per_prompt,
+ do_classifier_free_guidance,
+ negative_prompt,
+ negative_prompt_2,
+ prompt_embeds=prompt_embeds,
+ negative_prompt_embeds=negative_prompt_embeds,
+ pooled_prompt_embeds=pooled_prompt_embeds,
+ negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
+ lora_scale=text_encoder_lora_scale,
+ )
+
+ # 4. Prepare image
+ if isinstance(controlnet, ControlNetModel):
+ condition_image = self.prepare_image(
+ image=condition_image,
+ width=width // scale_num,
+ height=height // scale_num,
+ batch_size=batch_size * num_images_per_prompt,
+ num_images_per_prompt=num_images_per_prompt,
+ device=device,
+ dtype=controlnet.dtype,
+ do_classifier_free_guidance=do_classifier_free_guidance,
+ guess_mode=guess_mode,
+ )
+ # height, width = condition_image.shape[-2:]
+ # condition_image.shape ([2, 3, 1024, 1024])
+ elif isinstance(controlnet, MultiControlNetModel):
+ condition_images = []
+
+ for image_ in condition_image:
+ image_ = self.prepare_image(
+ image=image_,
+ width=width // scale_num,
+ height=height // scale_num,
+ batch_size=batch_size * num_images_per_prompt,
+ num_images_per_prompt=num_images_per_prompt,
+ device=device,
+ dtype=controlnet.dtype,
+ do_classifier_free_guidance=do_classifier_free_guidance,
+ guess_mode=guess_mode,
+ )
+
+ condition_images.append(image_)
+
+ condition_image = condition_images
+ # height, width = condition_image[0].shape[-2:]
+ else:
+ assert False
+
+ # 5. Prepare timesteps
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
+ timesteps = self.scheduler.timesteps
+
+ # 6. Prepare latent variables
+ num_channels_latents = self.unet.config.in_channels
+ latents = self.prepare_latents(
+ batch_size * num_images_per_prompt,
+ num_channels_latents,
+ height // scale_num,
+ width // scale_num,
+ prompt_embeds.dtype,
+ device,
+ generator,
+ latents,
+ )
+
+ # 7. Prepare extra step kwargs. TODO: Logic should ideally just be moved out of the pipeline
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
+
+ # 7.1 Create tensor stating which controlnets to keep
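+ # Each entry is 1.0 while the normalized step index lies inside
+ # [control_guidance_start, control_guidance_end] for that ControlNet and 0.0 otherwise,
+ # so the ControlNet residuals are switched off outside that window.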
+ controlnet_keep = []
+ for i in range(len(timesteps)):
+ keeps = [
+ 1.0 - float(i / len(timesteps) < s or (i + 1) / len(timesteps) > e)
+ for s, e in zip(control_guidance_start, control_guidance_end)
+ ]
+ controlnet_keep.append(keeps[0] if isinstance(controlnet, ControlNetModel) else keeps)
+
+ # 7.2 Prepare added time ids & embeddings
+ if isinstance(condition_image, list):
+ original_size = original_size or condition_image[0].shape[-2:]
+ else:
+ original_size = original_size or condition_image.shape[-2:]
+ target_size = target_size or (height, width)
+
+ add_text_embeds = pooled_prompt_embeds
+ add_time_ids = self._get_add_time_ids(
+ original_size, crops_coords_top_left, target_size, dtype=prompt_embeds.dtype
+ )
+
+ if negative_original_size is not None and negative_target_size is not None:
+ negative_add_time_ids = self._get_add_time_ids(
+ negative_original_size,
+ negative_crops_coords_top_left,
+ negative_target_size,
+ dtype=prompt_embeds.dtype,
+ )
+ else:
+ negative_add_time_ids = add_time_ids
+
+ if do_classifier_free_guidance:
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
+ add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0)
+ add_time_ids = torch.cat([negative_add_time_ids, add_time_ids], dim=0)
+
+ prompt_embeds = prompt_embeds.to(device)
+ add_text_embeds = add_text_embeds.to(device)
+ add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1)
+
+ # 8. Denoising loop
+ num_warmup_steps = len(timesteps) - num_inference_steps * self.scheduler.order
+
+ output_images = []
+
+ ###################################################### Phase Initialization ########################################################
+
+ if self.lowvram:
+ self.text_encoder.cpu()
+ self.text_encoder_2.cpu()
+
+ if image_lr is None:
+ print("### Phase 1 Denoising ###")
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
+ for i, t in enumerate(timesteps):
+
+ if self.lowvram:
+ self.vae.cpu()
+ self.unet.to(device)
+
+ latents_for_view = latents
+
+ # expand the latents if we are doing classifier free guidance
+ latent_model_input = (
+ latents.repeat_interleave(2, dim=0)
+ if do_classifier_free_guidance
+ else latents
+ )
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
+
+
+ added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
+
+ # controlnet(s) inference
+ if guess_mode and do_classifier_free_guidance:
+ # Infer ControlNet only for the conditional batch.
+ control_model_input = latents
+ control_model_input = self.scheduler.scale_model_input(control_model_input, t)
+ controlnet_prompt_embeds = prompt_embeds.chunk(2)[1]
+ controlnet_added_cond_kwargs = {
+ "text_embeds": add_text_embeds.chunk(2)[1],
+ "time_ids": add_time_ids.chunk(2)[1],
+ }
+ else:
+ control_model_input = latent_model_input
+ controlnet_prompt_embeds = prompt_embeds
+ controlnet_added_cond_kwargs = added_cond_kwargs
+
+ if isinstance(controlnet_keep[i], list):
+ cond_scale = [c * s for c, s in zip(controlnet_conditioning_scale, controlnet_keep[i])]
+ else:
+ controlnet_cond_scale = controlnet_conditioning_scale
+ if isinstance(controlnet_cond_scale, list):
+ controlnet_cond_scale = controlnet_cond_scale[0]
+ cond_scale = controlnet_cond_scale * controlnet_keep[i]
+
+ # print(condition_image.shape, control_model_input.shape, controlnet_prompt_embeds.shape, t, cond_scale, guess_mode)
+ # print(controlnet_added_cond_kwargs["text_embeds"].shape, controlnet_added_cond_kwargs["time_ids"].shape)
+ down_block_res_samples, mid_block_res_sample = self.controlnet(
+ control_model_input,
+ t,
+ encoder_hidden_states=controlnet_prompt_embeds,
+ controlnet_cond=condition_image,
+ conditioning_scale=cond_scale,
+ guess_mode=guess_mode,
+ added_cond_kwargs=controlnet_added_cond_kwargs,
+ return_dict=False,
+ )
+
+ if guess_mode and do_classifier_free_guidance:
+ # ControlNet was inferred only for the conditional batch.
+ # To apply the output of ControlNet to both the unconditional and conditional batches,
+ # add 0 to the unconditional batch to keep it unchanged.
+ down_block_res_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_block_res_samples]
+ mid_block_res_sample = torch.cat([torch.zeros_like(mid_block_res_sample), mid_block_res_sample])
+
+ # predict the noise residual
+ noise_pred = self.unet(
+ latent_model_input,
+ t,
+ encoder_hidden_states=prompt_embeds,
+ cross_attention_kwargs=cross_attention_kwargs,
+ down_block_additional_residuals=down_block_res_samples,
+ mid_block_additional_residual=mid_block_res_sample,
+ added_cond_kwargs=added_cond_kwargs,
+ return_dict=False,
+ )[0]
+
+ # perform guidance
+ if do_classifier_free_guidance:
+ noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2]
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+ # compute the previous noisy sample x_t -> x_t-1
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
+
+ # call the callback, if provided
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
+ progress_bar.update()
+ if callback is not None and i % callback_steps == 0:
+ step_idx = i // getattr(self.scheduler, "order", 1)
+ callback(step_idx, t, latents)
+ else:
+ print("### Encoding Real Image ###")
+ latents = self.vae.encode(image_lr)
+ latents = latents.latent_dist.sample() * self.vae.config.scaling_factor
+
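+ # Record the latent mean/std of this phase as an anchor; latents produced by later phases
+ # are renormalized to these statistics to keep the latent distribution from drifting.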
+ anchor_mean = latents.mean()
+ anchor_std = latents.std()
+ if self.lowvram:
+ latents = latents.cpu()
+ torch.cuda.empty_cache()
+ if output_type != "latent":
+ # make sure the VAE is in float32 mode, as it overflows in float16
+ needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast
+
+ if self.lowvram:
+ needs_upcasting = False # use madebyollin/sdxl-vae-fp16-fix in lowvram mode!
+ self.unet.cpu()
+ self.vae.to(device)
+
+ if needs_upcasting:
+ self.upcast_vae()
+ latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)
+ if self.lowvram and multi_decoder:
+ current_width_height = self.unet.config.sample_size * self.vae_scale_factor
+ image = self.tiled_decode(latents, current_width_height, current_width_height)
+ else:
+ image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
+ # cast back to fp16 if needed
+ if needs_upcasting:
+ self.vae.to(dtype=torch.float16)
+
+ image = self.image_processor.postprocess(image, output_type=output_type)
+ if show_image:
+ plt.figure(figsize=(10, 10))
+ plt.imshow(image[0])
+ plt.axis('off') # Turn off axis numbers and ticks
+ plt.show()
+ output_images.append(image[0])
+
+ ####################################################### Phase Upscaling #####################################################
+ if image_lr is None:
+ starting_scale = 2
+ else:
+ starting_scale = 1
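+ # Progressive upscaling: each phase re-runs the full denoising schedule at a higher
+ # resolution, starting from the bicubically upsampled latents of the previous phase.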
+ for current_scale_num in range(starting_scale, scale_num + 1):
+ if self.lowvram:
+ latents = latents.to(device)
+ self.unet.to(device)
+ torch.cuda.empty_cache()
+ print("### Phase {} Denoising ###".format(current_scale_num))
+ current_height = self.unet.config.sample_size * self.vae_scale_factor * current_scale_num
+ current_width = self.unet.config.sample_size * self.vae_scale_factor * current_scale_num
+ if height > width:
+ current_width = int(current_width * aspect_ratio)
+ else:
+ current_height = int(current_height * aspect_ratio)
+
+ latents = F.interpolate(latents, size=(int(current_height / self.vae_scale_factor), int(current_width / self.vae_scale_factor)), mode='bicubic')
+ condition_image = F.interpolate(condition_image, size=(current_height, current_width), mode='bicubic')
+
+ noise_latents = []
+ noise = torch.randn_like(latents)
+ for timestep in timesteps:
+ noise_latent = self.scheduler.add_noise(latents, noise, timestep.unsqueeze(0))
+ noise_latents.append(noise_latent)
+ latents = noise_latents[0]
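+ # noise_latents[i] is the upsampled latent diffused forward to timestep i; it serves as
+ # the skip-residual reference that is blended back in during early denoising steps.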
+
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
+ for i, t in enumerate(timesteps):
+ count = torch.zeros_like(latents)
+ value = torch.zeros_like(latents)
+ cosine_factor = 0.5 * (1 + torch.cos(torch.pi * (self.scheduler.config.num_train_timesteps - t) / self.scheduler.config.num_train_timesteps)).cpu()
+
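+ # Skip residual: cosine_factor decays from ~1 to 0 over the schedule, so early steps
+ # stay close to the noised upsampled latents (global structure) while later steps are
+ # dominated by the freshly denoised latents (local detail).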
+ c1 = cosine_factor ** cosine_scale_1
+ latents = latents * (1 - c1) + noise_latents[i] * c1
+
+ ############################################# MultiDiffusion #############################################
+
+ views = self.get_views(current_height, current_width, stride=stride, window_size=self.unet.config.sample_size, random_jitter=True)
+ views_batch = [views[i : i + view_batch_size] for i in range(0, len(views), view_batch_size)]
+
+ jitter_range = (self.unet.config.sample_size - stride) // 4
+ latents_ = F.pad(latents, (jitter_range, jitter_range, jitter_range, jitter_range), 'constant', 0)
+ condition_image_ = F.pad(condition_image, (jitter_range * self.vae_scale_factor, jitter_range * self.vae_scale_factor, jitter_range * self.vae_scale_factor, jitter_range * self.vae_scale_factor), 'constant', 0)
+
+ count_local = torch.zeros_like(latents_)
+ value_local = torch.zeros_like(latents_)
+
+ for j, batch_view in enumerate(views_batch):
+ vb_size = len(batch_view)
+
+ # get the latents corresponding to the current view coordinates
+ latents_for_view = torch.cat(
+ [
+ latents_[:, :, h_start:h_end, w_start:w_end]
+ for h_start, h_end, w_start, w_end in batch_view
+ ]
+ )
+ condition_image_for_view = torch.cat(
+ [
+ condition_image_[0:1, :, h_start * self.vae_scale_factor:h_end * self.vae_scale_factor, w_start * self.vae_scale_factor:w_end * self.vae_scale_factor]
+ for h_start, h_end, w_start, w_end in batch_view
+ ]
+ )
+
+ # expand the latents if we are doing classifier free guidance
+ latent_model_input = latents_for_view
+ latent_model_input = (
+ latent_model_input.repeat_interleave(2, dim=0)
+ if do_classifier_free_guidance
+ else latent_model_input
+ )
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
+
+ condition_image_input = condition_image_for_view
+ condition_image_input = (
+ condition_image_input.repeat_interleave(2, dim=0)
+ if do_classifier_free_guidance
+ else condition_image_input
+ )
+
+ prompt_embeds_input = torch.cat([prompt_embeds] * vb_size)
+ add_text_embeds_input = torch.cat([add_text_embeds] * vb_size)
+ add_time_ids_input = []
+ for h_start, h_end, w_start, w_end in batch_view:
+ add_time_ids_ = add_time_ids.clone()
+ add_time_ids_[:, 2] = h_start * self.vae_scale_factor
+ add_time_ids_[:, 3] = w_start * self.vae_scale_factor
+ add_time_ids_input.append(add_time_ids_)
+ add_time_ids_input = torch.cat(add_time_ids_input)
+
+ added_cond_kwargs = {"text_embeds": add_text_embeds_input, "time_ids": add_time_ids_input}
+
+ # controlnet(s) inference
+ if guess_mode and do_classifier_free_guidance:
+ # Infer ControlNet only for the conditional batch.
+ control_model_input = latent_model_input
+ control_model_input = self.scheduler.scale_model_input(control_model_input, t)
+ controlnet_prompt_embeds = prompt_embeds_input.chunk(2)[1]
+ controlnet_added_cond_kwargs = {
+ "text_embeds": add_text_embeds_input.chunk(2)[1],
+ "time_ids": add_time_ids_input.chunk(2)[1],
+ }
+ else:
+ control_model_input = latent_model_input
+ controlnet_prompt_embeds = prompt_embeds_input
+ controlnet_added_cond_kwargs = added_cond_kwargs
+
+ if isinstance(controlnet_keep[i], list):
+ cond_scale = [c * s for c, s in zip(controlnet_conditioning_scale, controlnet_keep[i])]
+ else:
+ controlnet_cond_scale = controlnet_conditioning_scale
+ if isinstance(controlnet_cond_scale, list):
+ controlnet_cond_scale = controlnet_cond_scale[0]
+ cond_scale = controlnet_cond_scale * controlnet_keep[i]
+
+ down_block_res_samples, mid_block_res_sample = self.controlnet(
+ control_model_input,
+ t,
+ encoder_hidden_states=controlnet_prompt_embeds,
+ controlnet_cond=condition_image_input,
+ conditioning_scale=cond_scale,
+ guess_mode=guess_mode,
+ added_cond_kwargs=controlnet_added_cond_kwargs,
+ return_dict=False,
+ )
+
+ if guess_mode and do_classifier_free_guidance:
+ # ControlNet was inferred only for the conditional batch.
+ # To apply the output of ControlNet to both the unconditional and conditional batches,
+ # add 0 to the unconditional batch to keep it unchanged.
+ down_block_res_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_block_res_samples]
+ mid_block_res_sample = torch.cat([torch.zeros_like(mid_block_res_sample), mid_block_res_sample])
+
+ # predict the noise residual
+ noise_pred = self.unet(
+ latent_model_input,
+ t,
+ encoder_hidden_states=prompt_embeds_input,
+ cross_attention_kwargs=cross_attention_kwargs,
+ down_block_additional_residuals=down_block_res_samples,
+ mid_block_additional_residual=mid_block_res_sample,
+ added_cond_kwargs=added_cond_kwargs,
+ return_dict=False,
+ )[0]
+
+ if do_classifier_free_guidance:
+ noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2]
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+ # compute the previous noisy sample x_t -> x_t-1
+ self.scheduler._init_step_index(t)
+ latents_denoised_batch = self.scheduler.step(
+ noise_pred, t, latents_for_view, **extra_step_kwargs, return_dict=False)[0]
+
+ # extract value from batch
+ for latents_view_denoised, (h_start, h_end, w_start, w_end) in zip(
+ latents_denoised_batch.chunk(vb_size), batch_view
+ ):
+ value_local[:, :, h_start:h_end, w_start:w_end] += latents_view_denoised
+ count_local[:, :, h_start:h_end, w_start:w_end] += 1
+
+ value_local = value_local[:, :, jitter_range: jitter_range + current_height // self.vae_scale_factor, jitter_range: jitter_range + current_width // self.vae_scale_factor]
+ count_local = count_local[:, :, jitter_range: jitter_range + current_height // self.vae_scale_factor, jitter_range: jitter_range + current_width // self.vae_scale_factor]
+
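+ # The local (MultiDiffusion) branch contributes with weight (1 - c2) and the dilated global
+ # branch below with weight c2; since c2 decays over the schedule, global consistency
+ # dominates early steps while local patch detail dominates later steps.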
+ c2 = cosine_factor ** cosine_scale_2
+
+ value += value_local / count_local * (1 - c2)
+ count += torch.ones_like(value_local) * (1 - c2)
+
+ ############################################# Dilated Sampling #############################################
+
+ h_pad = (current_scale_num - (latents.size(2) % current_scale_num)) % current_scale_num
+ w_pad = (current_scale_num - (latents.size(3) % current_scale_num)) % current_scale_num
+ latents_ = F.pad(latents, (w_pad, 0, h_pad, 0), 'constant', 0)
+
+ count_global = torch.zeros_like(latents_)
+ value_global = torch.zeros_like(latents_)
+
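+ # Dilated sampling: the padded latent is split into current_scale_num**2 interleaved
+ # sub-grids, each roughly at the UNet's native size. A Gaussian-smoothed copy
+ # (strength sigma * c3, decaying over the schedule) is fed to the UNet, and the noise
+ # predictions are re-interleaved into value_global before a single global scheduler step.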
+ c3 = 0.99 * cosine_factor ** cosine_scale_3 + 1e-2
+ std_, mean_ = latents_.std(), latents_.mean()
+ latents_gaussian = gaussian_filter(latents_, kernel_size=(2*current_scale_num-1), sigma=sigma*c3)
+ latents_gaussian = (latents_gaussian - latents_gaussian.mean()) / latents_gaussian.std() * std_ + mean_
+
+ latents_for_view = []
+ for h in range(current_scale_num):
+ for w in range(current_scale_num):
+ latents_for_view.append(latents_[:, :, h::current_scale_num, w::current_scale_num])
+ latents_for_view = torch.cat(latents_for_view)
+
+ latents_for_view_gaussian = []
+ for h in range(current_scale_num):
+ for w in range(current_scale_num):
+ latents_for_view_gaussian.append(latents_gaussian[:, :, h::current_scale_num, w::current_scale_num])
+ latents_for_view_gaussian = torch.cat(latents_for_view_gaussian)
+
+ condition_image_for_view = []
+ for h in range(current_scale_num):
+ for w in range(current_scale_num):
+ condition_image_ = F.pad(condition_image, (w_pad * self.vae_scale_factor, w * self.vae_scale_factor, h_pad * self.vae_scale_factor, h * self.vae_scale_factor), 'constant', 0)
+ condition_image_for_view.append(condition_image_[0:1, :, h * self.vae_scale_factor::current_scale_num, w * self.vae_scale_factor::current_scale_num])
+ condition_image_for_view = torch.cat(condition_image_for_view)
+
+ vb_size = latents_for_view.size(0)
+
+ # expand the latents if we are doing classifier free guidance
+ latent_model_input = latents_for_view_gaussian
+ latent_model_input = (
+ latent_model_input.repeat_interleave(2, dim=0)
+ if do_classifier_free_guidance
+ else latent_model_input
+ )
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
+
+ condition_image_input = condition_image_for_view
+ condition_image_input = (
+ condition_image_input.repeat_interleave(2, dim=0)
+ if do_classifier_free_guidance
+ else condition_image_input
+ )
+
+ prompt_embeds_input = torch.cat([prompt_embeds] * vb_size)
+ add_text_embeds_input = torch.cat([add_text_embeds] * vb_size)
+ add_time_ids_input = torch.cat([add_time_ids] * vb_size)
+
+ added_cond_kwargs = {"text_embeds": add_text_embeds_input, "time_ids": add_time_ids_input}
+
+ # controlnet(s) inference
+ if guess_mode and do_classifier_free_guidance:
+ # Infer ControlNet only for the conditional batch.
+ control_model_input = latent_model_input
+ control_model_input = self.scheduler.scale_model_input(control_model_input, t)
+ controlnet_prompt_embeds = prompt_embeds_input.chunk(2)[1]
+ controlnet_added_cond_kwargs = {
+ "text_embeds": add_text_embeds_input.chunk(2)[1],
+ "time_ids": add_time_ids_input.chunk(2)[1],
+ }
+ else:
+ control_model_input = latent_model_input
+ controlnet_prompt_embeds = prompt_embeds_input
+ controlnet_added_cond_kwargs = added_cond_kwargs
+
+ if isinstance(controlnet_keep[i], list):
+ cond_scale = [c * s for c, s in zip(controlnet_conditioning_scale, controlnet_keep[i])]
+ else:
+ controlnet_cond_scale = controlnet_conditioning_scale
+ if isinstance(controlnet_cond_scale, list):
+ controlnet_cond_scale = controlnet_cond_scale[0]
+ cond_scale = controlnet_cond_scale * controlnet_keep[i]
+
+ down_block_res_samples, mid_block_res_sample = self.controlnet(
+ control_model_input,
+ t,
+ encoder_hidden_states=controlnet_prompt_embeds,
+ controlnet_cond=condition_image_input,
+ conditioning_scale=cond_scale,
+ guess_mode=guess_mode,
+ added_cond_kwargs=controlnet_added_cond_kwargs,
+ return_dict=False,
+ )
+
+ if guess_mode and do_classifier_free_guidance:
+ # ControlNet was inferred only for the conditional batch.
+ # To apply the output of ControlNet to both the unconditional and conditional batches,
+ # add 0 to the unconditional batch to keep it unchanged.
+ down_block_res_samples = [torch.cat([torch.zeros_like(d), d]) for d in down_block_res_samples]
+ mid_block_res_sample = torch.cat([torch.zeros_like(mid_block_res_sample), mid_block_res_sample])
+
+ # predict the noise residual
+ noise_pred = self.unet(
+ latent_model_input,
+ t,
+ encoder_hidden_states=prompt_embeds_input,
+ cross_attention_kwargs=cross_attention_kwargs,
+ down_block_additional_residuals=down_block_res_samples,
+ mid_block_additional_residual=mid_block_res_sample,
+ added_cond_kwargs=added_cond_kwargs,
+ return_dict=False,
+ )[0]
+
+ if do_classifier_free_guidance:
+ noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2]
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+ # extract value from batch
+ for h in range(current_scale_num):
+ for w in range(current_scale_num):
+ noise_pred_ = noise_pred.chunk(vb_size)[h*current_scale_num+w]
+ value_global[:, :, h::current_scale_num, w::current_scale_num] += noise_pred_
+ count_global[:, :, h::current_scale_num, w::current_scale_num] += 1
+
+ # compute the previous noisy sample x_t -> x_t-1
+ self.scheduler._init_step_index(t)
+ value_global = self.scheduler.step(
+ value_global, t, latents_, **extra_step_kwargs, return_dict=False)[0]
+
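+ # The dilated (global) branch is weighted by c2, complementing the
+ # (1 - c2) weight applied to the patch-wise branch above.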
+ c2 = cosine_factor ** cosine_scale_2
+
+ value_global = value_global[:, :, h_pad:, w_pad:]
+
+ value += value_global * c2
+ count += torch.ones_like(value_global) * c2
+
+ ###########################################################
+
+ latents = torch.where(count > 0, value / count, value)
+
+ # call the callback, if provided
+ if i == len(timesteps) - 1 or ((i + 1) > num_warmup_steps and (i + 1) % self.scheduler.order == 0):
+ progress_bar.update()
+ if callback is not None and i % callback_steps == 0:
+ step_idx = i // getattr(self.scheduler, "order", 1)
+ callback(step_idx, t, latents)
+
+ #########################################################################################################################################
+
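+ # Re-normalise the denoised latent to the stored anchor statistics (anchor_mean / anchor_std).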
+ latents = (latents - latents.mean()) / latents.std() * anchor_std + anchor_mean
+ if self.lowvram:
+ latents = latents.cpu()
+ torch.cuda.empty_cache()
+ if not output_type == "latent":
+ # make sure the VAE is in float32 mode, as it overflows in float16
+ needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast
+
+ if self.lowvram:
+ needs_upcasting = False # use madebyollin/sdxl-vae-fp16-fix in lowvram mode!
+ self.unet.cpu()
+ self.vae.to(device)
+
+ if needs_upcasting:
+ self.upcast_vae()
+ latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)
+
+ print("### Phase {} Decoding ###".format(current_scale_num))
+ if multi_decoder:
+ image = self.tiled_decode(latents, current_height, current_width)
+ else:
+ image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
+
+ # cast back to fp16 if needed
+ if needs_upcasting:
+ self.vae.to(dtype=torch.float16)
+ else:
+ image = latents
+
+ if not output_type == "latent":
+ image = self.image_processor.postprocess(image, output_type=output_type)
+ if show_image:
+ plt.figure(figsize=(10, 10))
+ plt.imshow(image[0])
+ plt.axis('off') # Turn off axis numbers and ticks
+ plt.show()
+ output_images.append(image[0])
+
+ # Offload all models
+ self.maybe_free_model_hooks()
+
+ return output_images
+
+ # Override to properly handle the loading and unloading of the additional text encoder.
+ def load_lora_weights(self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs):
+ # We could have accessed the unet config from `lora_state_dict()` too. We pass
+ # it here explicitly to be able to tell that it's coming from an SDXL
+ # pipeline.
+
+ # Remove any existing hooks.
+ if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
+ from accelerate.hooks import AlignDevicesHook, CpuOffload, remove_hook_from_module
+ else:
+ raise ImportError("Offloading requires `accelerate v0.17.0` or higher.")
+
+ is_model_cpu_offload = False
+ is_sequential_cpu_offload = False
+ recursive = False
+ for _, component in self.components.items():
+ if isinstance(component, torch.nn.Module):
+ if hasattr(component, "_hf_hook"):
+ is_model_cpu_offload = isinstance(getattr(component, "_hf_hook"), CpuOffload)
+ is_sequential_cpu_offload = isinstance(getattr(component, "_hf_hook"), AlignDevicesHook)
+ logger.info(
+ "Accelerate hooks detected. Since you have called `load_lora_weights()`, the previous hooks will be first removed. Then the LoRA parameters will be loaded and the hooks will be applied again."
+ )
+ recursive = is_sequential_cpu_offload
+ remove_hook_from_module(component, recurse=recursive)
+ state_dict, network_alphas = self.lora_state_dict(
+ pretrained_model_name_or_path_or_dict,
+ unet_config=self.unet.config,
+ **kwargs,
+ )
+ self.load_lora_into_unet(state_dict, network_alphas=network_alphas, unet=self.unet)
+
+ text_encoder_state_dict = {k: v for k, v in state_dict.items() if "text_encoder." in k}
+ if len(text_encoder_state_dict) > 0:
+ self.load_lora_into_text_encoder(
+ text_encoder_state_dict,
+ network_alphas=network_alphas,
+ text_encoder=self.text_encoder,
+ prefix="text_encoder",
+ lora_scale=self.lora_scale,
+ )
+
+ text_encoder_2_state_dict = {k: v for k, v in state_dict.items() if "text_encoder_2." in k}
+ if len(text_encoder_2_state_dict) > 0:
+ self.load_lora_into_text_encoder(
+ text_encoder_2_state_dict,
+ network_alphas=network_alphas,
+ text_encoder=self.text_encoder_2,
+ prefix="text_encoder_2",
+ lora_scale=self.lora_scale,
+ )
+
+ # Offload back.
+ if is_model_cpu_offload:
+ self.enable_model_cpu_offload()
+ elif is_sequential_cpu_offload:
+ self.enable_sequential_cpu_offload()
+
+ @classmethod
+ def save_lora_weights(
+ self,
+ save_directory: Union[str, os.PathLike],
+ unet_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
+ text_encoder_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
+ text_encoder_2_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
+ is_main_process: bool = True,
+ weight_name: str = None,
+ save_function: Callable = None,
+ safe_serialization: bool = True,
+ ):
+ state_dict = {}
+
+ def pack_weights(layers, prefix):
+ layers_weights = layers.state_dict() if isinstance(layers, torch.nn.Module) else layers
+ layers_state_dict = {f"{prefix}.{module_name}": param for module_name, param in layers_weights.items()}
+ return layers_state_dict
+
+ if not (unet_lora_layers or text_encoder_lora_layers or text_encoder_2_lora_layers):
+ raise ValueError(
+ "You must pass at least one of `unet_lora_layers`, `text_encoder_lora_layers` or `text_encoder_2_lora_layers`."
+ )
+
+ if unet_lora_layers:
+ state_dict.update(pack_weights(unet_lora_layers, "unet"))
+
+ if text_encoder_lora_layers and text_encoder_2_lora_layers:
+ state_dict.update(pack_weights(text_encoder_lora_layers, "text_encoder"))
+ state_dict.update(pack_weights(text_encoder_2_lora_layers, "text_encoder_2"))
+
+ self.write_lora_layers(
+ state_dict=state_dict,
+ save_directory=save_directory,
+ is_main_process=is_main_process,
+ weight_name=weight_name,
+ save_function=save_function,
+ safe_serialization=safe_serialization,
+ )
+
+ def _remove_text_encoder_monkey_patch(self):
+ self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder)
+ self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder_2)
diff --git a/competitors_inference_code/DemoFusion/requirements.txt b/competitors_inference_code/DemoFusion/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..3514c8cea8a3fe370cbb97995f0c2e08220a6f49
--- /dev/null
+++ b/competitors_inference_code/DemoFusion/requirements.txt
@@ -0,0 +1,11 @@
+diffusers~=0.21.4
+torch~=2.1.0
+scipy~=1.11.3
+omegaconf~=2.3.0
+accelerate~=0.23.0
+transformers~=4.34.0
+tqdm
+einops
+matplotlib
+gradio
+gradio_imageslider
diff --git a/competitors_inference_code/LSRNA/README.md b/competitors_inference_code/LSRNA/README.md
new file mode 100644
index 0000000000000000000000000000000000000000..ae6ae5a1c67f2f255e33d0d0aa049916b2cc02f8
--- /dev/null
+++ b/competitors_inference_code/LSRNA/README.md
@@ -0,0 +1,59 @@
+# LSRNA
+[](https://3587jjh.github.io/LSRNA/)
+[](https://arxiv.org/abs/2503.18446)
+
+Official code for "Latent Space Super-Resolution for Higher-Resolution Image Generation with Diffusion Models".
+
+Additional results can be found on the [project page](https://3587jjh.github.io/LSRNA/).
+
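+## Usage (sketch)
+A minimal usage sketch, mirroring the arguments used in `generate_lsrna_images.py` in this folder; the prompt and output filename are placeholders, and the call assumes the LSR checkpoint at `lsr/swinir-liif-latent-sdxl.pth`:
+```python
+import torch
+from diffusers import DDIMScheduler
+from pipeline_lsrna_demofusion_sdxl import DemoFusionLSRNASDXLPipeline
+
+model_id = "stabilityai/stable-diffusion-xl-base-1.0"
+scheduler = DDIMScheduler.from_pretrained(model_id, subfolder="scheduler")
+pipe = DemoFusionLSRNASDXLPipeline.from_pretrained(
+    model_id, scheduler=scheduler, torch_dtype=torch.float16
+).to("cuda")
+pipe.vae.enable_tiling()
+
+result = pipe(
+    "a photo of a red fox in a snowy forest, highly detailed",
+    negative_prompt="blurry, ugly, duplicate, deformed",
+    width=2048,
+    height=2048,
+    num_inference_steps=50,
+    guidance_scale=7.5,
+    view_batch_size=8,
+    stride_ratio=0.5,
+    lsr_path="lsr/swinir-liif-latent-sdxl.pth",
+    cosine_scale_1=3.0,
+    cosine_scale_2=1.0,
+    cosine_scale_3=1.0,
+    sigma=0.8,
+    rna_min_std=0.0,
+    rna_max_std=1.2,
+    inversion_depth=30,
+)
+
+# The pipeline may return either an object with `.images` or a plain list;
+# the last entry is the highest-resolution result.
+images = result.images if hasattr(result, "images") else result
+images[-1].save("lsrna_2048px.png")
+```
+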
+## Citation
+```
+@inproceedings{jeong2025latent,
+ title={Latent space super-resolution for higher-resolution image generation with diffusion models},
+ author={Jeong, Jinho and Han, Sangmin and Kim, Jinwoo and Kim, Seon Joo},
+ booktitle={Proceedings of the Computer Vision and Pattern Recognition Conference},
+ pages={2355--2365},
+ year={2025}
+}
+```
+
+## Acknowledgement
+This repo is based on [DemoFusion](https://github.com/PRIS-CV/DemoFusion) and [LIIF](https://github.com/yinboc/liif).
diff --git a/competitors_inference_code/LSRNA/__pycache__/pipeline_lsrna_demofusion_sdxl.cpython-312.pyc b/competitors_inference_code/LSRNA/__pycache__/pipeline_lsrna_demofusion_sdxl.cpython-312.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..b3cf6432e2b55e2bdb5ae9502da050012ea1c7e1
Binary files /dev/null and b/competitors_inference_code/LSRNA/__pycache__/pipeline_lsrna_demofusion_sdxl.cpython-312.pyc differ
diff --git a/competitors_inference_code/LSRNA/__pycache__/utils.cpython-312.pyc b/competitors_inference_code/LSRNA/__pycache__/utils.cpython-312.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..3fbb2a006f6bc669730f266eca9e6b2aeca21b54
Binary files /dev/null and b/competitors_inference_code/LSRNA/__pycache__/utils.cpython-312.pyc differ
diff --git a/competitors_inference_code/LSRNA/generate_lsrna_images.py b/competitors_inference_code/LSRNA/generate_lsrna_images.py
new file mode 100644
index 0000000000000000000000000000000000000000..a75059052f20236dc93ca9991aabd0ec21947b51
--- /dev/null
+++ b/competitors_inference_code/LSRNA/generate_lsrna_images.py
@@ -0,0 +1,189 @@
+#!/usr/bin/env python3
+"""Generate SDXL images for the selected validation prompts with LSRNA."""
+
+from __future__ import annotations
+
+import csv
+import json
+import sys
+import time
+from collections.abc import Sequence
+from pathlib import Path
+from typing import Any
+
+import torch
+from diffusers import DDIMScheduler
+
+ROOT_DIR = Path(__file__).resolve().parent
+LSRNA_DIR = ROOT_DIR / "LSRNA"
+if str(LSRNA_DIR) not in sys.path:
+ sys.path.insert(0, str(LSRNA_DIR))
+
+from pipeline_lsrna_demofusion_sdxl import DemoFusionLSRNASDXLPipeline # noqa: E402
+
+NEGATIVE_PROMPT = "blurry, ugly, duplicate, poorly drawn face, deformed, mosaic, artifacts, bad limbs"
+DEFAULT_CSV = "/data/kazanplova/latent_vae_upscale_train/datasets/new_validation_dataset/original_openim/images/selected_validation_images.csv"
+DEFAULT_OUTPUT_DIR = "/data/kazanplova/latent_vae_upscale_train/datasets/new_validation_dataset/lsrna/images"
+STATISTICS_PATH = "/data/kazanplova/latent_vae_upscale_train/datasets/new_validation_dataset/lsrna/statistics.json"
+PRETRAINED_MODEL = "stabilityai/stable-diffusion-xl-base-1.0"
+CFG_SCALE = 7.5
+NUM_INFERENCE_STEPS = 50
+SEED = 42
+VIEW_BATCH_SIZE = 8
+STRIDE_RATIO = 0.5
+COSINE_SCALE_1 = 3.0
+COSINE_SCALE_2 = 1.0
+COSINE_SCALE_3 = 1.0
+SIGMA = 0.8
+RNA_MIN_STD = 0.0
+RNA_MAX_STD = 1.2
+INVERSION_DEPTH = 30
+LOW_VRAM = False
+DEFAULT_LSR_PATH = Path("lsr") / "swinir-liif-latent-sdxl.pth"
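+# Each prompt is rendered once per entry below; the keys double as output sub-folder names.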
+RESOLUTIONS: dict[str, tuple[int, int]] = {
+ "4096px": (4096, 4096),
+ "2048px": (2048, 2048),
+ "1024px": (1024, 1024),
+}
+
+
+def load_prompts(csv_path: Path) -> list[tuple[str, str]]:
+ prompts: list[tuple[str, str]] = []
+ with csv_path.open("r", encoding="utf-8") as handle:
+ reader = csv.DictReader(handle)
+ for row in reader:
+ caption_raw = (row.get("gpt_caption") or "").strip()
+ if not caption_raw:
+ continue
+ try:
+ caption = json.loads(caption_raw)
+ except json.JSONDecodeError:
+ print(f"Skipping row with invalid JSON: {row.get('img_path')}")
+ continue
+ prompt = caption.get("sdxl")
+ if not prompt:
+ print(f"Skipping row without 'sdxl' prompt: {row.get('img_path')}")
+ continue
+ prompts.append((row.get("img_path", ""), prompt))
+ return prompts
+
+
+def build_pipeline() -> DemoFusionLSRNASDXLPipeline:
+ if not torch.cuda.is_available():
+ raise RuntimeError("CUDA is required to run this script.")
+
+ scheduler = DDIMScheduler.from_pretrained(PRETRAINED_MODEL, subfolder="scheduler")
+ pipe = DemoFusionLSRNASDXLPipeline.from_pretrained(
+ PRETRAINED_MODEL,
+ scheduler=scheduler,
+ torch_dtype=torch.float16,
+ ).to("cuda")
+ pipe.vae.enable_tiling()
+ pipe.set_progress_bar_config(disable=True)
+ return pipe
+
+
+def get_target_image(result: Any) -> Any:
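+ """Return the last image produced by the pipeline (the final, highest-resolution phase)."""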
+ if hasattr(result, "images"):
+ images = result.images
+ elif isinstance(result, Sequence) and not isinstance(result, (str, bytes, bytearray)):
+ images = list(result)
+ else:
+ images = [result]
+ if not images:
+ raise RuntimeError("LSRNA pipeline returned no images.")
+ return images[-1]
+
+
+def main() -> None:
+ csv_path = Path(DEFAULT_CSV)
+ output_dir = Path(DEFAULT_OUTPUT_DIR)
+ lsr_path = DEFAULT_LSR_PATH
+ if not lsr_path.exists():
+ raise SystemExit(f"LSR checkpoint not found at {lsr_path}")
+
+ prompts = load_prompts(csv_path)
+ if not prompts:
+ raise SystemExit("No prompts were found in the CSV file.")
+
+ resolution_dirs = {name: output_dir / name for name in RESOLUTIONS}
+ for folder in resolution_dirs.values():
+ folder.mkdir(parents=True, exist_ok=True)
+
+ statistics_path = Path(STATISTICS_PATH)
+ stats_tracker = {
+ name: {"count": 0, "total_time": 0.0, "max_vram_bytes": 0}
+ for name in RESOLUTIONS
+ }
+
+ generator = torch.Generator(device="cuda").manual_seed(SEED)
+ pipe = build_pipeline()
+ device = torch.device("cuda")
+
+ for idx, (img_path, prompt) in enumerate(prompts):
+ filename = f"{idx}.png"
+ written_paths: list[str] = []
+
+ for name, (width, height) in RESOLUTIONS.items():
+ print(prompt)
+ torch.cuda.synchronize(device)
+ torch.cuda.reset_peak_memory_stats(device)
+ start_time = time.perf_counter()
+
+ result = pipe(
+ prompt,
+ negative_prompt=NEGATIVE_PROMPT,
+ guidance_scale=CFG_SCALE,
+ num_inference_steps=NUM_INFERENCE_STEPS,
+ width=width,
+ height=height,
+ generator=generator,
+ view_batch_size=VIEW_BATCH_SIZE,
+ stride_ratio=STRIDE_RATIO,
+ lsr_path=str(lsr_path),
+ cosine_scale_1=COSINE_SCALE_1,
+ cosine_scale_2=COSINE_SCALE_2,
+ cosine_scale_3=COSINE_SCALE_3,
+ sigma=SIGMA,
+ rna_min_std=RNA_MIN_STD,
+ rna_max_std=RNA_MAX_STD,
+ inversion_depth=INVERSION_DEPTH,
+ low_vram=LOW_VRAM,
+ )
+
+ image = get_target_image(result)
+
+ torch.cuda.synchronize(device)
+ elapsed = time.perf_counter() - start_time
+ vram_bytes = torch.cuda.max_memory_allocated(device)
+
+ stats = stats_tracker[name]
+ stats["count"] += 1
+ stats["total_time"] += elapsed
+ stats["max_vram_bytes"] = max(stats["max_vram_bytes"], vram_bytes)
+
+ output_path = resolution_dirs[name] / filename
+ image.save(output_path)
+ written_paths.append(str(output_path))
+
+ print(f"[{idx + 1}/{len(prompts)}] wrote {', '.join(written_paths)}")
+
+ statistics = {
+ "total_prompts": len(prompts),
+ "resolutions": {
+ name: {
+ "images": metrics["count"],
+ "mean_time_sec": (metrics["total_time"] / metrics["count"]) if metrics["count"] else 0.0,
+ "max_vram_mb": metrics["max_vram_bytes"] / (1024**2),
+ }
+ for name, metrics in stats_tracker.items()
+ },
+ }
+
+ statistics_path.parent.mkdir(parents=True, exist_ok=True)
+ statistics_path.write_text(json.dumps(statistics, indent=2))
+ print(f"Saved statistics to {statistics_path}")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/competitors_inference_code/LSRNA/lsr/__init__.py b/competitors_inference_code/LSRNA/lsr/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..95c51a92222fce7fa15fccd138b624f10a197991
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr/__init__.py
@@ -0,0 +1,3 @@
+from . import models
+from . import swinir
+from . import liif, mlp
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr/__pycache__/liif.cpython-312.pyc b/competitors_inference_code/LSRNA/lsr/__pycache__/liif.cpython-312.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..205c2b6507faab96a59553e7ac3e10cad14c7e44
Binary files /dev/null and b/competitors_inference_code/LSRNA/lsr/__pycache__/liif.cpython-312.pyc differ
diff --git a/competitors_inference_code/LSRNA/lsr/__pycache__/mlp.cpython-312.pyc b/competitors_inference_code/LSRNA/lsr/__pycache__/mlp.cpython-312.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..8abf53f30c5cad0bf9322ba8e9902031bd833c5e
Binary files /dev/null and b/competitors_inference_code/LSRNA/lsr/__pycache__/mlp.cpython-312.pyc differ
diff --git a/competitors_inference_code/LSRNA/lsr/__pycache__/models.cpython-312.pyc b/competitors_inference_code/LSRNA/lsr/__pycache__/models.cpython-312.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..e14174dad33986ea6a90739de83c2c05970ef8d2
Binary files /dev/null and b/competitors_inference_code/LSRNA/lsr/__pycache__/models.cpython-312.pyc differ
diff --git a/competitors_inference_code/LSRNA/lsr/__pycache__/swinir.cpython-312.pyc b/competitors_inference_code/LSRNA/lsr/__pycache__/swinir.cpython-312.pyc
new file mode 100644
index 0000000000000000000000000000000000000000..662979b2d9dc18c92244a8706d68117c9d78e685
Binary files /dev/null and b/competitors_inference_code/LSRNA/lsr/__pycache__/swinir.cpython-312.pyc differ
diff --git a/competitors_inference_code/LSRNA/lsr/liif.py b/competitors_inference_code/LSRNA/lsr/liif.py
new file mode 100644
index 0000000000000000000000000000000000000000..51b507fef642132bc255f71fe62608756f1bb99e
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr/liif.py
@@ -0,0 +1,127 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+from .models import register
+from .models import make as make_model
+
+def make_coord(shape, ranges=None, flatten=True, device='cpu'):
+ # Make coordinates at grid centers.
+ coord_seqs = []
+ for i, n in enumerate(shape):
+ if ranges is None:
+ v0, v1 = -1, 1
+ else:
+ v0, v1 = ranges[i]
+ r = (v1 - v0) / (2 * n)
+ seq = v0 + r + (2 * r) * torch.arange(n, device=device).float()
+ coord_seqs.append(seq)
+ ret = torch.stack(torch.meshgrid(*coord_seqs), dim=-1)
+ if flatten:
+ ret = ret.view(-1, ret.shape[-1])
+ return ret
+
+@register('liif')
+class LIIF(nn.Module):
+
+ def __init__(self, encoder_spec, imnet_spec, feat_unfold=True, local_ensemble=True):
+ super().__init__()
+ self.local_ensemble = local_ensemble
+ self.feat_unfold = feat_unfold
+ self.encoder = make_model(encoder_spec)
+
+ imnet_in_dim = self.encoder.out_dim
+ if self.feat_unfold:
+ imnet_in_dim *= 9
+ imnet_in_dim += 4 # attach coord, cell
+ self.imnet = make_model(imnet_spec, args={'in_dim': imnet_in_dim})
+
+ def gen_feat(self, inp):
+ self.inp = inp
+ feat = self.encoder(inp)
+ if self.feat_unfold:
+ feat = F.unfold(feat, 3, padding=1).view(
+ feat.shape[0], feat.shape[1] * 9, feat.shape[2], feat.shape[3])
+ self.feat = feat
+ self.feat_coord = make_coord(feat.shape[-2:], flatten=False).cuda() \
+ .permute(2, 0, 1) \
+ .unsqueeze(0).expand(feat.shape[0], 2, *feat.shape[-2:])
+
+ def query_rgb(self, coord, cell):
+ # coord, cell: (b,h,w,c)
+ feat = self.feat
+ feat_coord = self.feat_coord
+ if self.local_ensemble:
+ vx_lst = [-1, 1]
+ vy_lst = [-1, 1]
+ eps_shift = 1e-6
+ else:
+ vx_lst, vy_lst, eps_shift = [0], [0], 0
+
+ # field radius (global: [-1, 1])
+ rx = 2 / feat.shape[-2] / 2
+ ry = 2 / feat.shape[-1] / 2
+
+ preds = []
+ areas = []
+ for vx in vx_lst:
+ for vy in vy_lst:
+ coord_ = coord.clone()
+ coord_[:, :, :, 0] += vx * rx + eps_shift
+ coord_[:, :, :, 1] += vy * ry + eps_shift
+ coord_.clamp_(-1 + 1e-6, 1 - 1e-6)
+
+ q_feat = F.grid_sample(feat, coord_.flip(-1),
+ mode='nearest', align_corners=False).permute(0, 2, 3, 1) # (b,h,w,c)
+ q_coord = F.grid_sample(feat_coord, coord_.flip(-1),
+ mode='nearest', align_corners=False).permute(0, 2, 3, 1)
+
+ rel_coord = coord - q_coord
+ rel_coord[:, :, :, 0] *= feat.shape[-2]
+ rel_coord[:, :, :, 1] *= feat.shape[-1]
+ inp = torch.cat([q_feat, rel_coord], dim=-1)
+
+ rel_cell = cell.clone()
+ rel_cell[:, :, :, 0] *= feat.shape[-2]
+ rel_cell[:, :, :, 1] *= feat.shape[-1]
+ inp = torch.cat([inp, rel_cell], dim=-1) # (b,h,w,c)
+
+ pred = self.imnet(inp.contiguous())
+ preds.append(pred)
+
+ area = torch.abs(rel_coord[:, :, :, 0] * rel_coord[:, :, :, 1]) # (b,h,w)
+ areas.append(area + 1e-9)
+
+ tot_area = torch.stack(areas).sum(dim=0) # (b,h,w)
+ if self.local_ensemble:
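+ # Swap diagonally opposite areas so each prediction is weighted by the
+ # area spanned by the opposite corner (bilinear-style interpolation).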
+ t = areas[0]; areas[0] = areas[3]; areas[3] = t
+ t = areas[1]; areas[1] = areas[2]; areas[2] = t
+ ret = 0
+ for pred, area in zip(preds, areas):
+ ret = ret + pred * (area / tot_area).unsqueeze(-1)
+ ret = ret.permute(0,3,1,2)
+ if ret.shape[1] != self.inp.shape[1]:
+ ret[:,:-1,:,:] += F.grid_sample(self.inp, coord.flip(-1), mode='bicubic',\
+ padding_mode='border', align_corners=False)
+ else:
+ ret += F.grid_sample(self.inp, coord.flip(-1), mode='bicubic',\
+ padding_mode='border', align_corners=False)
+ return ret
+
+ def forward(self, inp, coord, cell):
+ self.gen_feat(inp)
+ #return self.query_rgb(coord, cell)
+ H,W = coord.shape[1:3]
+ n = H*W
+ coord = coord.view(1,1,n,2)
+ cell = cell.view(1,1,n,2)
+
+ ql = 0
+ preds = None
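+ # Query the implicit function in chunks of at most 512*512 coordinates to bound peak memory.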
+ while ql < n:
+ qr = min(ql + 512*512, n)
+ pred = self.query_rgb(coord[:,:,ql:qr,:], cell[:,:,ql:qr,:])
+ preds = pred if preds is None else torch.cat([preds, pred], dim=-1)
+ ql = qr
+ preds = preds.view(1,-1,H,W)
+ return preds
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr/mlp.py b/competitors_inference_code/LSRNA/lsr/mlp.py
new file mode 100644
index 0000000000000000000000000000000000000000..51190c74907e231b66dfa3b382bd7c31715e7d1a
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr/mlp.py
@@ -0,0 +1,23 @@
+import torch.nn as nn
+
+from .models import register
+
+
+@register('mlp')
+class MLP(nn.Module):
+
+ def __init__(self, in_dim, out_dim, hidden_list):
+ super().__init__()
+ layers = []
+ lastv = in_dim
+ for hidden in hidden_list:
+ layers.append(nn.Linear(lastv, hidden))
+ layers.append(nn.ReLU())
+ lastv = hidden
+ layers.append(nn.Linear(lastv, out_dim))
+ self.layers = nn.Sequential(*layers)
+
+ def forward(self, x):
+ shape = x.shape[:-1]
+ x = self.layers(x.view(-1, x.shape[-1]))
+ return x.view(*shape, -1)
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr/models.py b/competitors_inference_code/LSRNA/lsr/models.py
new file mode 100644
index 0000000000000000000000000000000000000000..97cd9bd6fad4fdde66888f6d4ee579889dc96182
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr/models.py
@@ -0,0 +1,23 @@
+import copy
+
+
+models = {}
+
+
+def register(name):
+ def decorator(cls):
+ models[name] = cls
+ return cls
+ return decorator
+
+
+def make(model_spec, args=None, load_sd=False):
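+ """Instantiate a registered model from a spec dict ({'name', 'args', optional 'sd'})."""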
+ if args is not None:
+ model_args = copy.deepcopy(model_spec['args'])
+ model_args.update(args)
+ else:
+ model_args = model_spec['args']
+ model = models[model_spec['name']](**model_args)
+ if load_sd:
+ model.load_state_dict(model_spec['sd'])
+ return model
diff --git a/competitors_inference_code/LSRNA/lsr/swinir-liif-latent-sdxl.yaml b/competitors_inference_code/LSRNA/lsr/swinir-liif-latent-sdxl.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..722c6f7f0f3549ca0fd418af95b42c9de43e352f
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr/swinir-liif-latent-sdxl.yaml
@@ -0,0 +1,20 @@
+model:
+ name: liif
+ args:
+ feat_unfold: true
+ local_ensemble: true
+ encoder_spec:
+ name: swinir
+ args:
+ img_size: 32 # inp_size
+ in_chans: 4
+ embed_dim: 60
+ depths: [6,6,6,6]
+ num_heads: [6,6,6,6]
+ window_size: 8
+ upsampler: none
+ imnet_spec:
+ name: mlp
+ args:
+ out_dim: 4
+ hidden_list: [256,256,256,256]
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr/swinir.py b/competitors_inference_code/LSRNA/lsr/swinir.py
new file mode 100644
index 0000000000000000000000000000000000000000..87d5ed717f69e01fd64fd5181b7bb6dbee19543d
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr/swinir.py
@@ -0,0 +1,777 @@
+# -----------------------------------------------------------------------------------
+# SwinIR: Image Restoration Using Swin Transformer, https://arxiv.org/abs/2108.10257
+# Originally Written by Ze Liu, Modified by Jingyun Liang.
+# ----------------------------------------------------------------------------------
+
+import math
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.utils.checkpoint as checkpoint
+from timm.models.layers import DropPath, to_2tuple, trunc_normal_
+
+from argparse import Namespace
+
+from .models import register
+
+class Mlp(nn.Module):
+ def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
+ super().__init__()
+ out_features = out_features or in_features
+ hidden_features = hidden_features or in_features
+ self.fc1 = nn.Linear(in_features, hidden_features)
+ self.act = act_layer()
+ self.fc2 = nn.Linear(hidden_features, out_features)
+ self.drop = nn.Dropout(drop)
+
+ def forward(self, x):
+ x = self.fc1(x)
+ x = self.act(x)
+ x = self.drop(x)
+ x = self.fc2(x)
+ x = self.drop(x)
+ return x
+
+
+def window_partition(x, window_size):
+ """
+ Args:
+ x: (B, H, W, C)
+ window_size (int): window size
+
+ Returns:
+ windows: (num_windows*B, window_size, window_size, C)
+ """
+ B, H, W, C = x.shape
+ x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
+ windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
+ return windows
+
+
+def window_reverse(windows, window_size, H, W):
+ """
+ Args:
+ windows: (num_windows*B, window_size, window_size, C)
+ window_size (int): Window size
+ H (int): Height of image
+ W (int): Width of image
+
+ Returns:
+ x: (B, H, W, C)
+ """
+ B = int(windows.shape[0] / (H * W / window_size / window_size))
+ x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
+ x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
+ return x
+
+
+class WindowAttention(nn.Module):
+ r""" Window based multi-head self attention (W-MSA) module with relative position bias.
+ It supports both of shifted and non-shifted window.
+
+ Args:
+ dim (int): Number of input channels.
+ window_size (tuple[int]): The height and width of the window.
+ num_heads (int): Number of attention heads.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
+ attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
+ proj_drop (float, optional): Dropout ratio of output. Default: 0.0
+ """
+
+ def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
+
+ super().__init__()
+ self.dim = dim
+ self.window_size = window_size # Wh, Ww
+ self.num_heads = num_heads
+ head_dim = dim // num_heads
+ self.scale = qk_scale or head_dim ** -0.5
+
+ # define a parameter table of relative position bias
+ self.relative_position_bias_table = nn.Parameter(
+ torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
+
+ # get pair-wise relative position index for each token inside the window
+ coords_h = torch.arange(self.window_size[0])
+ coords_w = torch.arange(self.window_size[1])
+ coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
+ coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
+ relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
+ relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
+ relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
+ relative_coords[:, :, 1] += self.window_size[1] - 1
+ relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
+ relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
+ self.register_buffer("relative_position_index", relative_position_index)
+
+ self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
+ self.attn_drop = nn.Dropout(attn_drop)
+ self.proj = nn.Linear(dim, dim)
+
+ self.proj_drop = nn.Dropout(proj_drop)
+
+ trunc_normal_(self.relative_position_bias_table, std=.02)
+ self.softmax = nn.Softmax(dim=-1)
+
+ def forward(self, x, mask=None):
+ """
+ Args:
+ x: input features with shape of (num_windows*B, N, C)
+ mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
+ """
+ B_, N, C = x.shape
+ qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
+ q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
+
+ q = q * self.scale
+ attn = (q @ k.transpose(-2, -1))
+
+ relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
+ self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
+ relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
+ attn = attn + relative_position_bias.unsqueeze(0)
+
+ if mask is not None:
+ nW = mask.shape[0]
+ attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
+ attn = attn.view(-1, self.num_heads, N, N)
+ attn = self.softmax(attn)
+ else:
+ attn = self.softmax(attn)
+
+ attn = self.attn_drop(attn)
+
+ x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
+ x = self.proj(x)
+ x = self.proj_drop(x)
+ return x
+
+ def extra_repr(self) -> str:
+ return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'
+
+
+class SwinTransformerBlock(nn.Module):
+ r""" Swin Transformer Block.
+
+ Args:
+ dim (int): Number of input channels.
+ input_resolution (tuple[int]): Input resolution.
+ num_heads (int): Number of attention heads.
+ window_size (int): Window size.
+ shift_size (int): Shift size for SW-MSA.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float, optional): Stochastic depth rate. Default: 0.0
+ act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
+ mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
+ act_layer=nn.GELU, norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.dim = dim
+ self.input_resolution = input_resolution
+ self.num_heads = num_heads
+ self.window_size = window_size
+ self.shift_size = shift_size
+ self.mlp_ratio = mlp_ratio
+ if min(self.input_resolution) <= self.window_size:
+ # if window size is larger than input resolution, we don't partition windows
+ self.shift_size = 0
+ self.window_size = min(self.input_resolution)
+ assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
+
+ self.norm1 = norm_layer(dim)
+ self.attn = WindowAttention(
+ dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
+ qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
+
+ self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
+ self.norm2 = norm_layer(dim)
+ mlp_hidden_dim = int(dim * mlp_ratio)
+ self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
+
+ if self.shift_size > 0:
+ attn_mask = self.calculate_mask(self.input_resolution)
+ else:
+ attn_mask = None
+
+ self.register_buffer("attn_mask", attn_mask)
+
+ def calculate_mask(self, x_size):
+ # calculate attention mask for SW-MSA
+ H, W = x_size
+ img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
+
+ return attn_mask
+
+ def forward(self, x, x_size):
+ H, W = x_size
+ B, L, C = x.shape
+ # assert L == H * W, "input feature has wrong size"
+
+ shortcut = x
+ x = self.norm1(x)
+ x = x.view(B, H, W, C)
+
+ # cyclic shift
+ if self.shift_size > 0:
+ shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
+ else:
+ shifted_x = x
+
+ # partition windows
+ x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
+ x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
+
+ # W-MSA/SW-MSA (to be compatible for testing on images whose shapes are the multiple of window size
+ if self.input_resolution == x_size:
+ attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C
+ else:
+ attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device))
+
+ # merge windows
+ attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
+ shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
+
+ # reverse cyclic shift
+ if self.shift_size > 0:
+ x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
+ else:
+ x = shifted_x
+ x = x.view(B, H * W, C)
+
+ # FFN
+ x = shortcut + self.drop_path(x)
+ x = x + self.drop_path(self.mlp(self.norm2(x)))
+
+ return x
+
+ def extra_repr(self) -> str:
+ return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
+ f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
+
+
+class PatchMerging(nn.Module):
+ r""" Patch Merging Layer.
+
+ Args:
+ input_resolution (tuple[int]): Resolution of input feature.
+ dim (int): Number of input channels.
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.input_resolution = input_resolution
+ self.dim = dim
+ self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
+ self.norm = norm_layer(4 * dim)
+
+ def forward(self, x):
+ """
+ x: B, H*W, C
+ """
+ H, W = self.input_resolution
+ B, L, C = x.shape
+ assert L == H * W, "input feature has wrong size"
+ assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) are not even."
+
+ x = x.view(B, H, W, C)
+
+ x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
+ x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
+ x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
+ x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
+ x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
+ x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
+
+ x = self.norm(x)
+ x = self.reduction(x)
+
+ return x
+
+ def extra_repr(self) -> str:
+ return f"input_resolution={self.input_resolution}, dim={self.dim}"
+
+
+class BasicLayer(nn.Module):
+ """ A basic Swin Transformer layer for one stage.
+
+ Args:
+ dim (int): Number of input channels.
+ input_resolution (tuple[int]): Input resolution.
+ depth (int): Number of blocks.
+ num_heads (int): Number of attention heads.
+ window_size (int): Local window size.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
+ use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
+ """
+
+ def __init__(self, dim, input_resolution, depth, num_heads, window_size,
+ mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
+ drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False):
+
+ super().__init__()
+ self.dim = dim
+ self.input_resolution = input_resolution
+ self.depth = depth
+ self.use_checkpoint = use_checkpoint
+
+ # build blocks
+ self.blocks = nn.ModuleList([
+ SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
+ num_heads=num_heads, window_size=window_size,
+ shift_size=0 if (i % 2 == 0) else window_size // 2,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias, qk_scale=qk_scale,
+ drop=drop, attn_drop=attn_drop,
+ drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
+ norm_layer=norm_layer)
+ for i in range(depth)])
+
+ # patch merging layer
+ if downsample is not None:
+ self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
+ else:
+ self.downsample = None
+
+ def forward(self, x, x_size):
+ for blk in self.blocks:
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, x_size)
+ else:
+ x = blk(x, x_size)
+ if self.downsample is not None:
+ x = self.downsample(x)
+ return x
+
+ def extra_repr(self) -> str:
+ return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"
+
+
+class RSTB(nn.Module):
+ """Residual Swin Transformer Block (RSTB).
+
+ Args:
+ dim (int): Number of input channels.
+ input_resolution (tuple[int]): Input resolution.
+ depth (int): Number of blocks.
+ num_heads (int): Number of attention heads.
+ window_size (int): Local window size.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
+ use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
+ img_size: Input image size.
+ patch_size: Patch size.
+ resi_connection: The convolutional block before residual connection.
+ """
+
+ def __init__(self, dim, input_resolution, depth, num_heads, window_size,
+ mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
+ drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False,
+ img_size=224, patch_size=4, resi_connection='1conv'):
+ super(RSTB, self).__init__()
+
+ self.dim = dim
+ self.input_resolution = input_resolution
+
+ self.residual_group = BasicLayer(dim=dim,
+ input_resolution=input_resolution,
+ depth=depth,
+ num_heads=num_heads,
+ window_size=window_size,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias, qk_scale=qk_scale,
+ drop=drop, attn_drop=attn_drop,
+ drop_path=drop_path,
+ norm_layer=norm_layer,
+ downsample=downsample,
+ use_checkpoint=use_checkpoint)
+
+ if resi_connection == '1conv':
+ self.conv = nn.Conv2d(dim, dim, 3, 1, 1)
+ elif resi_connection == '3conv':
+ # to save parameters and memory
+ self.conv = nn.Sequential(nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True),
+ nn.Conv2d(dim // 4, dim // 4, 1, 1, 0),
+ nn.LeakyReLU(negative_slope=0.2, inplace=True),
+ nn.Conv2d(dim // 4, dim, 3, 1, 1))
+
+ self.patch_embed = PatchEmbed(
+ img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim,
+ norm_layer=None)
+
+ self.patch_unembed = PatchUnEmbed(
+ img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim,
+ norm_layer=None)
+
+ def forward(self, x, x_size):
+ return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x
+
+
+class PatchEmbed(nn.Module):
+ r""" Image to Patch Embedding
+
+ Args:
+ img_size (int): Image size. Default: 224.
+ patch_size (int): Patch token size. Default: 4.
+ in_chans (int): Number of input image channels. Default: 3.
+ embed_dim (int): Number of linear projection output channels. Default: 96.
+ norm_layer (nn.Module, optional): Normalization layer. Default: None
+ """
+
+ def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
+ super().__init__()
+ img_size = to_2tuple(img_size)
+ patch_size = to_2tuple(patch_size)
+ patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
+ self.img_size = img_size
+ self.patch_size = patch_size
+ self.patches_resolution = patches_resolution
+ self.num_patches = patches_resolution[0] * patches_resolution[1]
+
+ self.in_chans = in_chans
+ self.embed_dim = embed_dim
+
+ if norm_layer is not None:
+ self.norm = norm_layer(embed_dim)
+ else:
+ self.norm = None
+
+ def forward(self, x):
+ x = x.flatten(2).transpose(1, 2) # B Ph*Pw C
+ if self.norm is not None:
+ x = self.norm(x)
+ return x
+
+
+class PatchUnEmbed(nn.Module):
+ r""" Image to Patch Unembedding
+
+ Args:
+ img_size (int): Image size. Default: 224.
+ patch_size (int): Patch token size. Default: 4.
+ in_chans (int): Number of input image channels. Default: 3.
+ embed_dim (int): Number of linear projection output channels. Default: 96.
+ norm_layer (nn.Module, optional): Normalization layer. Default: None
+ """
+
+ def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
+ super().__init__()
+ img_size = to_2tuple(img_size)
+ patch_size = to_2tuple(patch_size)
+ patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
+ self.img_size = img_size
+ self.patch_size = patch_size
+ self.patches_resolution = patches_resolution
+ self.num_patches = patches_resolution[0] * patches_resolution[1]
+
+ self.in_chans = in_chans
+ self.embed_dim = embed_dim
+
+ def forward(self, x, x_size):
+ B, HW, C = x.shape
+ x = x.transpose(1, 2).view(B, self.embed_dim, x_size[0], x_size[1]) # B Ph*Pw C
+ return x
+
+
+class Upsample(nn.Sequential):
+ """Upsample module.
+
+ Args:
+ scale (int): Scale factor. Supported scales: 2^n and 3.
+ num_feat (int): Channel number of intermediate features.
+ """
+
+ def __init__(self, scale, num_feat):
+ m = []
+ if (scale & (scale - 1)) == 0: # scale = 2^n
+ for _ in range(int(math.log(scale, 2))):
+ m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
+ m.append(nn.PixelShuffle(2))
+ elif scale == 3:
+ m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
+ m.append(nn.PixelShuffle(3))
+ else:
+ raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.')
+ super(Upsample, self).__init__(*m)
+
+
+class UpsampleOneStep(nn.Sequential):
+ """UpsampleOneStep module (the difference with Upsample is that it always only has 1conv + 1pixelshuffle)
+ Used in lightweight SR to save parameters.
+
+ Args:
+ scale (int): Scale factor. Supported scales: 2^n and 3.
+ num_feat (int): Channel number of intermediate features.
+
+ """
+
+ def __init__(self, scale, num_feat, num_out_ch, input_resolution=None):
+ self.num_feat = num_feat
+ self.input_resolution = input_resolution
+ m = []
+ m.append(nn.Conv2d(num_feat, (scale ** 2) * num_out_ch, 3, 1, 1))
+ m.append(nn.PixelShuffle(scale))
+ super(UpsampleOneStep, self).__init__(*m)
+
+
+@register('swinir')
+class SwinIR(nn.Module):
+ r""" SwinIR
+ A PyTorch impl of : `SwinIR: Image Restoration Using Swin Transformer`, based on Swin Transformer.
+
+ Args:
+ img_size (int | tuple(int)): Input image size. Default 64
+ patch_size (int | tuple(int)): Patch size. Default: 1
+ in_chans (int): Number of input image channels. Default: 3
+ embed_dim (int): Patch embedding dimension. Default: 96
+ depths (tuple(int)): Depth of each Swin Transformer layer.
+ num_heads (tuple(int)): Number of attention heads in different layers.
+ window_size (int): Window size. Default: 7
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
+ qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
+ drop_rate (float): Dropout rate. Default: 0
+ attn_drop_rate (float): Attention dropout rate. Default: 0
+ drop_path_rate (float): Stochastic depth rate. Default: 0.1
+ norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
+ ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
+ patch_norm (bool): If True, add normalization after patch embedding. Default: True
+ use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
+ upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compress artifact reduction
+ img_range: Image range. 1. or 255.
+ upsampler: The reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/None
+ resi_connection: The convolutional block before residual connection. '1conv'/'3conv'
+ """
+
+ def __init__(self, img_size=64, patch_size=1, in_chans=4,
+ embed_dim=180, depths=[6,6,6,6,6,6], num_heads=[6,6,6,6,6,6],
+ window_size=8, mlp_ratio=2., qkv_bias=True, qk_scale=None,
+ drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
+ norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
+ use_checkpoint=False, upscale=2, img_range=1., upsampler='none', resi_connection='1conv',
+ **kwargs):
+ super(SwinIR, self).__init__()
+ num_in_ch = in_chans
+ num_out_ch = in_chans
+ num_feat = 64
+ self.img_range = img_range
+
+ self.upscale = upscale
+ self.upsampler = upsampler
+ self.window_size = window_size
+ self.out_dim = num_feat
+ #####################################################################################################
+ ################################### 1, shallow feature extraction ###################################
+ self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1)
+
+ #####################################################################################################
+ ################################### 2, deep feature extraction ######################################
+ self.num_layers = len(depths)
+ self.embed_dim = embed_dim
+ self.ape = ape
+ self.patch_norm = patch_norm
+ self.num_features = embed_dim
+ self.mlp_ratio = mlp_ratio
+
+ # split image into non-overlapping patches
+ self.patch_embed = PatchEmbed(
+ img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
+ norm_layer=norm_layer if self.patch_norm else None)
+ num_patches = self.patch_embed.num_patches
+ patches_resolution = self.patch_embed.patches_resolution
+ self.patches_resolution = patches_resolution
+
+ # merge non-overlapping patches into image
+ self.patch_unembed = PatchUnEmbed(
+ img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
+ norm_layer=norm_layer if self.patch_norm else None)
+
+ # absolute position embedding
+ if self.ape:
+ self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
+ trunc_normal_(self.absolute_pos_embed, std=.02)
+
+ self.pos_drop = nn.Dropout(p=drop_rate)
+
+ # stochastic depth
+ dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
+
+ # build Residual Swin Transformer blocks (RSTB)
+ self.layers = nn.ModuleList()
+ for i_layer in range(self.num_layers):
+ layer = RSTB(dim=embed_dim,
+ input_resolution=(patches_resolution[0],
+ patches_resolution[1]),
+ depth=depths[i_layer],
+ num_heads=num_heads[i_layer],
+ window_size=window_size,
+ mlp_ratio=self.mlp_ratio,
+ qkv_bias=qkv_bias, qk_scale=qk_scale,
+ drop=drop_rate, attn_drop=attn_drop_rate,
+ drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results
+ norm_layer=norm_layer,
+ downsample=None,
+ use_checkpoint=use_checkpoint,
+ img_size=img_size,
+ patch_size=patch_size,
+ resi_connection=resi_connection
+ )
+ self.layers.append(layer)
+ self.norm = norm_layer(self.num_features)
+
+ # build the last conv layer in deep feature extraction
+ if resi_connection == '1conv':
+ self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1)
+ elif resi_connection == '3conv':
+ # to save parameters and memory
+ self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1),
+ nn.LeakyReLU(negative_slope=0.2, inplace=True),
+ nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0),
+ nn.LeakyReLU(negative_slope=0.2, inplace=True),
+ nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1))
+
+ #####################################################################################################
+ ################################ 3, high quality image reconstruction ################################
+ if self.upsampler == 'none':
+ self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
+ nn.LeakyReLU(inplace=True))
+ elif self.upsampler == 'pixelshuffle':
+ # for classical SR
+ self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
+ nn.LeakyReLU(inplace=True))
+ self.upsample = Upsample(upscale, num_feat)
+ self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
+ elif self.upsampler == 'pixelshuffledirect':
+ # for lightweight SR (to save parameters)
+ self.upsample = UpsampleOneStep(upscale, embed_dim, num_out_ch,
+ (patches_resolution[0], patches_resolution[1]))
+ elif self.upsampler == 'nearest+conv':
+ # for real-world SR (less artifacts)
+ assert self.upscale == 4, 'only supports x4 now.'
+ self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
+ nn.LeakyReLU(inplace=True))
+ self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
+ self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
+ self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
+ self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
+ self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
+ else:
+ # for image denoising and JPEG compression artifact reduction
+ self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1)
+
+ self.apply(self._init_weights)
+
+ def _init_weights(self, m):
+ if isinstance(m, nn.Linear):
+ trunc_normal_(m.weight, std=.02)
+ if isinstance(m, nn.Linear) and m.bias is not None:
+ nn.init.constant_(m.bias, 0)
+ elif isinstance(m, nn.LayerNorm):
+ nn.init.constant_(m.bias, 0)
+ nn.init.constant_(m.weight, 1.0)
+
+ @torch.jit.ignore
+ def no_weight_decay(self):
+ return {'absolute_pos_embed'}
+
+ @torch.jit.ignore
+ def no_weight_decay_keywords(self):
+ return {'relative_position_bias_table'}
+
+ def check_image_size(self, x):
+ _, _, h, w = x.size()
+ mod_pad_h = (self.window_size - h % self.window_size) % self.window_size
+ mod_pad_w = (self.window_size - w % self.window_size) % self.window_size
+ x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect')
+ return x
+
+ def forward_features(self, x):
+ x_size = (x.shape[2], x.shape[3])
+ x = self.patch_embed(x)
+ if self.ape:
+ x = x + self.absolute_pos_embed
+ x = self.pos_drop(x)
+
+ for layer in self.layers:
+ x = layer(x, x_size)
+
+ x = self.norm(x) # B L C
+ x = self.patch_unembed(x, x_size)
+
+ return x
+
+ def forward(self, x):
+ H,W = x.shape[2:]
+ x = self.check_image_size(x)
+
+ # self.mean = self.mean.type_as(x)
+ # x = (x - self.mean) * self.img_range
+
+ if self.upsampler == 'none':
+ x = self.conv_first(x)
+ x = self.conv_after_body(self.forward_features(x)) + x
+ x = self.conv_before_upsample(x)
+ elif self.upsampler == 'pixelshuffle':
+ # for classical SR
+ x = self.conv_first(x)
+ x = self.conv_after_body(self.forward_features(x)) + x
+ x = self.conv_before_upsample(x)
+ x = self.conv_last(self.upsample(x))
+ elif self.upsampler == 'pixelshuffledirect':
+ # for lightweight SR
+ x = self.conv_first(x)
+ x = self.conv_after_body(self.forward_features(x)) + x
+ x = self.upsample(x)
+ elif self.upsampler == 'nearest+conv':
+ # for real-world SR
+ x = self.conv_first(x)
+ x = self.conv_after_body(self.forward_features(x)) + x
+ x = self.conv_before_upsample(x)
+ x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
+ x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
+ x = self.conv_last(self.lrelu(self.conv_hr(x)))
+ else:
+ # for image denoising and JPEG compression artifact reduction
+ x_first = self.conv_first(x)
+ res = self.conv_after_body(self.forward_features(x_first)) + x_first
+ x = x + self.conv_last(res)
+
+ # x = x / self.img_range + self.mean
+ return x[:,:,:H,:W]
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/configs/swinir-liif-latent-sdxl-v3.yaml b/competitors_inference_code/LSRNA/lsr_training/configs/swinir-liif-latent-sdxl-v3.yaml
new file mode 100644
index 0000000000000000000000000000000000000000..86e8f54bb2caa76e64e728e52e4279bcbc08b5c1
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/configs/swinir-liif-latent-sdxl-v3.yaml
@@ -0,0 +1,58 @@
+# use datasets/scripts/make_trainset.py
+train_dataset:
+ dataset:
+ name: image-folder
+ args:
+ hr_path: ../datasets/train/OpenImages/HR_sdxl_latent # shared
+ lr_path: ../datasets/train/OpenImages/LR_sdxl_latent
+ scales: [2,3,4]
+ wrapper:
+ name: sr-explicit-paired
+ args:
+ inp_size: 32 # lr
+ augment: []
+ sample_size: 64 # hr | should be less than min(scales)*inp_size
+ num_workers: 4 # total
+ batch_size: 32 # total
+
+valid_path: ../datasets/test/SDXL/original
+sd_ckpt: stabilityai/stable-diffusion-xl-base-1.0 # fixed
+
+model:
+ name: liif
+ args:
+ feat_unfold: true
+ local_ensemble: true
+ encoder_spec:
+ name: swinir
+ args:
+ img_size: 32 # inp_size
+ in_chans: 4
+ embed_dim: 60
+ depths: [6,6,6,6]
+ num_heads: [6,6,6,6]
+ window_size: 8
+ upsampler: none
+ imnet_spec:
+ name: mlp
+ args:
+ out_dim: 4
+ hidden_list: [256,256,256,256]
+
+optimizer:
+ name: adam
+ args:
+ lr: 2.e-4
+
+lr_scheduler:
+ name: CosineAnnealingLR_Restart
+ args:
+ T_period: [1000000]
+ restarts: [1000000]
+ weights: [1]
+ eta_min: 1.e-7
+
+iter_max: 1000000
+iter_print: 2000
+iter_val: 50000
+iter_save: 200000
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/datasets/datasets.py b/competitors_inference_code/LSRNA/lsr_training/datasets/datasets.py
new file mode 100644
index 0000000000000000000000000000000000000000..a19d7a528d4569c4c40a740af521e05d74ee63f4
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/datasets/datasets.py
@@ -0,0 +1,18 @@
+import copy
+
+datasets = {}
+
+def register(name):
+ def decorator(cls):
+ datasets[name] = cls
+ return cls
+ return decorator
+
+def make(dataset_spec, args=None):
+ if args is not None:
+ dataset_args = copy.deepcopy(dataset_spec['args'])
+ dataset_args.update(args)
+ else:
+ dataset_args = dataset_spec['args']
+ dataset = datasets[dataset_spec['name']](**dataset_args)
+ return dataset
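+
+# (added note) register()/make() implement a small name -> class registry, so dataset classes
+# can be instantiated directly from the 'name'/'args' entries in the YAML configs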
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/datasets/scripts/make_trainset.py b/competitors_inference_code/LSRNA/lsr_training/datasets/scripts/make_trainset.py
new file mode 100644
index 0000000000000000000000000000000000000000..c8656405b22439b0e67d93fb233b3d024ca5ee8a
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/datasets/scripts/make_trainset.py
@@ -0,0 +1,144 @@
+import random
+import os
+import pandas as pd
+import requests
+from PIL import Image
+from io import BytesIO
+from tqdm import tqdm
+import argparse
+import pickle
+
+import numpy as np
+import torch
+import torchvision.transforms as transforms
+from diffusers import StableDiffusionXLPipeline
+
+import sys
+sys.path.append('../..')
+import core
+
+#random.seed(0)
+#np.random.seed(0)
+#torch.manual_seed(0)
+#torch.cuda.manual_seed_all(0)
+
+parser = argparse.ArgumentParser(description='OpenImages downloader')
+parser.add_argument('--max_sample', type=int, default=1560000) # per part
+parser.add_argument('--part', type=str, default='1/1')
+args = parser.parse_args()
+
+down_scales = [2,3,4] # fixed
+base_dir = '/workspace/datasets/train/OpenImages' # fixed
+count = 0
+
+annotation_path = f'{base_dir}/image_ids_and_rotation.csv' # metadata of OpenImages
+print('loading annotation file...')
+a,b=map(int, args.part.split('/'))
+urls = list(pd.read_csv(annotation_path)['OriginalURL'])[(a-1)::b]
+
+processed_info = {}
+processed_info_path = f'{base_dir}/process_info_{a}_{b}.pkl'
+if os.path.exists(processed_info_path):
+ with open(f'{base_dir}/process_info_{a}_{b}.pkl', 'rb') as f:
+ processed_info = pickle.load(f)
+
+def get_image(url):
+ global count, processed_info
+ session = requests.Session()
+ try:
+ img_name = url.split('/')[-1].split('?')[0]
+ if img_name[-4:].lower() not in ['.jpg', 'jpeg']:
+ return None, None
+ assert img_name.count('.') == 1
+ img_name = img_name.split('.')[0] # w/o extension
+
+ key = f'{base_dir}/HR/{img_name}_s000.jpg'
+ if key in processed_info:
+ count += processed_info[key]
+ print(f'[skip] files already exist for {img_name} | count: {count}')
+ return None, None
+
+ response = session.get(url, timeout=2)
+ response.raise_for_status()
+ img = Image.open(BytesIO(response.content))
+
+ width, height = img.size
+ if height >= 1440 and width >= 1440 and img.mode == 'RGB':
+ return img, img_name
+ return None, None
+
+ except requests.exceptions.RequestException as e:
+ print(f"Request failed: {e}")
+ return None, None
+ except Exception as e:
+ print(f"Other error occurred: {e}")
+ return None, None
+ finally:
+ session.close()
+
+os.makedirs(f'{base_dir}/HR', exist_ok=True)
+os.makedirs(f'{base_dir}/HR_sdxl_latent', exist_ok=True)
+for down_scale in down_scales:
+ os.makedirs(f'{base_dir}/LR/X{down_scale}', exist_ok=True)
+ os.makedirs(f'{base_dir}/LR_sdxl_latent/X{down_scale}', exist_ok=True)
+
+sd_ckpt = 'stabilityai/stable-diffusion-xl-base-1.0'
+pipeline = StableDiffusionXLPipeline.from_pretrained(sd_ckpt)
+vae = pipeline.vae.cuda() # eval mode, float32, i/o range [-1,1]
+
+for url in urls:
+ if count >= args.max_sample:
+ print(f'count ({count}) reached the max_sample={args.max_sample}')
+ break
+ img, base_name = get_image(url)
+ if img is None: continue
+
+ # found new HR image
+ crop_size = random.randint(1056,1440)//96*96 # random crop size, rounded down to a multiple of 96
+ step = crop_size
+ w,h = img.size
+
+ h_space = np.arange(0, h-crop_size+1, step)
+ if h > h_space[-1] + crop_size:
+ h_space = np.append(h_space, h-crop_size)
+ w_space = np.arange(0, w-crop_size+1, step)
+ if w > w_space[-1] + crop_size:
+ w_space = np.append(w_space, w-crop_size)
+
+ hrs = []
+ for x in h_space:
+ for y in w_space:
+ hr = img.crop((y, x, y+crop_size, x+crop_size))
+ hrs.append(hr)
+ hrs = hrs[::-1]
+
+ for i, hr in enumerate(tqdm(hrs)):
+ index = len(hrs)-i-1
+ name = f'{base_name}_s{index:03d}' # w/o extension
+ hr = transforms.ToTensor()(hr).unsqueeze(0).cuda() # (1,3,csz,csz), range [0,1]
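+ # (added note) the [0,1] crop is mapped to [-1,1] before VAE encoding, and the latent is scaled
+ # by vae.config.scaling_factor, following the diffusers SDXL convention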
+ with torch.no_grad():
+ hr_latent = vae.encode((hr-0.5)*2).latent_dist.mode() * vae.config.scaling_factor
+
+ # bicubic degradation & conv_latent
+ for down_scale in down_scales:
+ lr = core.imresize(hr, sizes=(crop_size//down_scale, crop_size//down_scale))
+ lr = (lr*255).clip(0,255).to(torch.uint8).float() / 255 # discretized [0,1]
+ transforms.ToPILImage()(lr.squeeze(0)).save(f'{base_dir}/LR/X{down_scale}/{name}.jpg')
+
+ with torch.no_grad():
+ lr_latent = vae.encode((lr-0.5)*2).latent_dist.mode() * vae.config.scaling_factor
+ np.save(f'{base_dir}/LR_sdxl_latent/X{down_scale}/{name}.npy',
+ lr_latent.squeeze(0).permute(1,2,0).detach().cpu().numpy())
+
+ np.save(f'{base_dir}/HR_sdxl_latent/{name}.npy',
+ hr_latent.squeeze(0).permute(1,2,0).detach().cpu().numpy())
+ transforms.ToPILImage()(hr.squeeze(0)).save(f'{base_dir}/HR/{name}.jpg')
+
+ if index == 0:
+ key = f'{base_dir}/HR/{name}.jpg'
+ assert key not in processed_info
+ processed_info[key] = len(hrs)
+ with open(processed_info_path, 'wb') as f:
+ pickle.dump(processed_info, f)
+ count += len(hrs)
+ print(f'count: {count} / {args.max_sample} | successfully processed {base_name}')
diff --git a/competitors_inference_code/LSRNA/lsr_training/datasets/wrappers.py b/competitors_inference_code/LSRNA/lsr_training/datasets/wrappers.py
new file mode 100644
index 0000000000000000000000000000000000000000..a401759e84c326d86ef5464d343cb0fc6620fe73
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/datasets/wrappers.py
@@ -0,0 +1,61 @@
+import numpy as np
+import torch
+from torch.utils.data import Dataset
+from datasets import register
+from utils import *
+
+
+@register('sr-explicit-paired')
+class SRExplicitPaired(Dataset):
+
+ def __init__(self, dataset, inp_size, augment=[], sample_size=None, num_channels=None):
+ self.dataset = dataset
+ self.inp_size = inp_size
+ self.augment = augment
+ self.sample_size = inp_size if sample_size is None else sample_size
+ self.num_channels = num_channels
+
+ def __len__(self):
+ return len(self.dataset)
+
+ def __getitem__(self, idx):
+ hr_path, lr_paths = self.dataset[idx]
+ lr_path = lr_paths[np.random.randint(len(lr_paths))]
+
+ # img: (H,W,C), numpy, range [-3,3] or [0,1]
+ hr, lr = read_img(hr_path), read_img(lr_path)
+ if self.num_channels:
+ assert hr.shape[-1] == lr.shape[-1] == self.num_channels
+ hr, lr = random_crop_together(hr, lr, self.inp_size)
+
+ # augmentation
+ hflip = (np.random.random() < 0.5) if 'hflip' in self.augment else False
+ vflip = (np.random.random() < 0.5) if 'vflip' in self.augment else False
+ dflip = (np.random.random() < 0.5) if 'dflip' in self.augment else False
+
+ def base_augment(img):
+ if hflip:
+ img = img[::-1, :, :]
+ if vflip:
+ img = img[:, ::-1, :]
+ if dflip:
+ img = np.transpose(img, (1, 0, 2))
+ return img.copy()
+ hr = torch.from_numpy(base_augment(hr)).permute(2,0,1).float() # (C,H,W)
+ lr = torch.from_numpy(base_augment(lr)).permute(2,0,1).float() # (C,h,w)
+
+ coord = make_coord(hr.shape[-2:], flatten=False) # (H,W,2)
+ cell = torch.ones_like(coord) # (H,W,2)
+ cell[:,:,0] *= 2 / hr.shape[-2]
+ cell[:,:,1] *= 2 / hr.shape[-1]
+
+ P = self.sample_size
+ hr, pos = random_crop(hr, P, return_pos=True) # (C,P,P)
+ coord = coord[pos[0]:pos[0]+P, pos[1]:pos[1]+P] # (P,P,2)
+ cell = cell[pos[0]:pos[0]+P, pos[1]:pos[1]+P] # (P,P,2)
+
+ return {
+ 'lr': lr,
+ 'coord': coord,
+ 'cell': cell,
+ 'hr': hr
+ }
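+
+# (added note) each item pairs an inp_size LR latent crop with a sample_size HR latent patch from
+# the corresponding region, plus the HR-grid coord/cell tensors used by LIIF's continuous queries
+# (random_crop_together / random_crop / make_coord are assumed to come from the utils package)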
diff --git a/competitors_inference_code/LSRNA/lsr_training/dist.sh b/competitors_inference_code/LSRNA/lsr_training/dist.sh
new file mode 100644
index 0000000000000000000000000000000000000000..79c0fe37e046b2fb9a62ccbde76fa77836b98e20
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/dist.sh
@@ -0,0 +1,21 @@
+#!/bin/bash
+# Usage | ./dist.sh train.py --config configs/swinir-liif-latent-sdxl-v3.yaml --gpu 0,1
+SCRIPT=$1
+shift
+ARGS=("$@")
+
+for ((i=0; i<${#ARGS[@]}; i++)); do
+ if [[ ${ARGS[i]} == "--gpu" ]]; then
+ GPU=${ARGS[i+1]}
+ unset ARGS[i]
+ unset ARGS[i+1]
+ break
+ fi
+done
+
+ARGS=("${ARGS[@]}")
+NPROC_PER_NODE=$(echo $GPU | tr -cd ',' | wc -c)
+let NPROC_PER_NODE+=1
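+# e.g. "--gpu 0,1" contains one comma, so NPROC_PER_NODE ends up as 2 (added comment)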
+FREE_PORT=$(python find_port.py)
+echo free port: $FREE_PORT
+CUDA_VISIBLE_DEVICES=$GPU python -m torch.distributed.launch --nproc_per_node=$NPROC_PER_NODE --master_port=$FREE_PORT $SCRIPT "${ARGS[@]}"
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/find_port.py b/competitors_inference_code/LSRNA/lsr_training/find_port.py
new file mode 100644
index 0000000000000000000000000000000000000000..e106e9bd23855af42c4ae54583ac2642e3fc9c3d
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/find_port.py
@@ -0,0 +1,11 @@
+import socket
+from contextlib import closing
+
+def find_free_port():
+ with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
+ s.bind(('', 0))
+ s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
+ return s.getsockname()[1]
+
+if __name__ == '__main__':
+ print(find_free_port())
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/models/__init__.py b/competitors_inference_code/LSRNA/lsr_training/models/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..ab3d1f711fd00d78c94fecade1ef32c65a83b537
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/models/__init__.py
@@ -0,0 +1,3 @@
+from .models import register, make
+from . import swinir
+from . import liif, mlp
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/models/liif.py b/competitors_inference_code/LSRNA/lsr_training/models/liif.py
new file mode 100644
index 0000000000000000000000000000000000000000..9c073744fd907c68b35e8cf12c95fe620ed6e5da
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/models/liif.py
@@ -0,0 +1,117 @@
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+
+import models
+from models import register
+from utils import make_coord
+
+
+@register('liif')
+class LIIF(nn.Module):
+
+ def __init__(self, encoder_spec, imnet_spec, feat_unfold=True, local_ensemble=True):
+ super().__init__()
+ self.local_ensemble = local_ensemble
+ self.feat_unfold = feat_unfold
+ self.encoder = models.make(encoder_spec)
+
+ imnet_in_dim = self.encoder.out_dim
+ if self.feat_unfold:
+ imnet_in_dim *= 9
+ imnet_in_dim += 4 # attach coord, cell
+ self.imnet = models.make(imnet_spec, args={'in_dim': imnet_in_dim})
+
+ def gen_feat(self, inp):
+ self.inp = inp
+ feat = self.encoder(inp)
+ if self.feat_unfold:
+ feat = F.unfold(feat, 3, padding=1).view(
+ feat.shape[0], feat.shape[1] * 9, feat.shape[2], feat.shape[3])
+ self.feat = feat
+ self.feat_coord = make_coord(feat.shape[-2:], flatten=False).cuda() \
+ .permute(2, 0, 1) \
+ .unsqueeze(0).expand(feat.shape[0], 2, *feat.shape[-2:])
+
+ def query_rgb(self, coord, cell):
+ # coord, cell: (b,h,w,c)
+ feat = self.feat
+ feat_coord = self.feat_coord
+ if self.local_ensemble:
+ vx_lst = [-1, 1]
+ vy_lst = [-1, 1]
+ eps_shift = 1e-6
+ else:
+ vx_lst, vy_lst, eps_shift = [0], [0], 0
+
+ # field radius (global: [-1, 1])
+ rx = 2 / feat.shape[-2] / 2
+ ry = 2 / feat.shape[-1] / 2
+
+ preds = []
+ areas = []
+ for vx in vx_lst:
+ for vy in vy_lst:
+ coord_ = coord.clone()
+ coord_[:, :, :, 0] += vx * rx + eps_shift
+ coord_[:, :, :, 1] += vy * ry + eps_shift
+ coord_.clamp_(-1 + 1e-6, 1 - 1e-6)
+
+ q_feat = F.grid_sample(feat, coord_.flip(-1),
+ mode='nearest', align_corners=False).permute(0, 2, 3, 1) # (b,h,w,c)
+ q_coord = F.grid_sample(feat_coord, coord_.flip(-1),
+ mode='nearest', align_corners=False).permute(0, 2, 3, 1)
+
+ rel_coord = coord - q_coord
+ rel_coord[:, :, :, 0] *= feat.shape[-2]
+ rel_coord[:, :, :, 1] *= feat.shape[-1]
+ inp = torch.cat([q_feat, rel_coord], dim=-1)
+
+ rel_cell = cell.clone()
+ rel_cell[:, :, :, 0] *= feat.shape[-2]
+ rel_cell[:, :, :, 1] *= feat.shape[-1]
+ inp = torch.cat([inp, rel_cell], dim=-1) # (b,h,w,c)
+
+ pred = self.imnet(inp.contiguous())
+ preds.append(pred)
+
+ area = torch.abs(rel_coord[:, :, :, 0] * rel_coord[:, :, :, 1]) # (b,h,w)
+ areas.append(area + 1e-9)
+
+ tot_area = torch.stack(areas).sum(dim=0) # (b,h,w)
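+ # (added comment) standard LIIF local ensemble: each corner prediction is weighted by the area
+ # of the diagonally opposite cell, hence the swaps of areas[0]<->areas[3] and areas[1]<->areas[2]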
+ if self.local_ensemble:
+ t = areas[0]; areas[0] = areas[3]; areas[3] = t
+ t = areas[1]; areas[1] = areas[2]; areas[2] = t
+ ret = 0
+ for pred, area in zip(preds, areas):
+ ret = ret + pred * (area / tot_area).unsqueeze(-1)
+ ret = ret.permute(0,3,1,2)
+
+ if ret.shape[1] != self.inp.shape[1]:
+ ret[:,:-1,:,:] += F.grid_sample(self.inp, coord.flip(-1), mode='bicubic',\
+ padding_mode='border', align_corners=False)
+ else:
+ ret += F.grid_sample(self.inp, coord.flip(-1), mode='bicubic',\
+ padding_mode='border', align_corners=False)
+ return ret
+
+ def forward(self, inp, coord, cell):
+ self.gen_feat(inp)
+ return self.query_rgb(coord, cell)
+
+ def batched_predict(self, inp, coord, cell, bsize=512*512):
+ self.gen_feat(inp)
+ H,W = coord.shape[1:3]
+ n = H*W
+ coord = coord.view(1,1,n,2)
+ cell = cell.view(1,1,n,2)
+
+ ql = 0
+ preds = []
+ while ql < n:
+ qr = min(ql + bsize, n)
+ pred = self.query_rgb(coord[:,:,ql:qr,:], cell[:,:,ql:qr,:])
+ preds.append(pred)
+ ql = qr
+ pred = torch.cat(preds, dim=-1).view(1,-1,H,W)
+ return pred
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/models/mlp.py b/competitors_inference_code/LSRNA/lsr_training/models/mlp.py
new file mode 100644
index 0000000000000000000000000000000000000000..937e6b1ae35e323c14ce51e667d55dc57c20e932
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/models/mlp.py
@@ -0,0 +1,23 @@
+import torch.nn as nn
+
+from models import register
+
+
+@register('mlp')
+class MLP(nn.Module):
+
+ def __init__(self, in_dim, out_dim, hidden_list):
+ super().__init__()
+ layers = []
+ lastv = in_dim
+ for hidden in hidden_list:
+ layers.append(nn.Linear(lastv, hidden))
+ layers.append(nn.ReLU())
+ lastv = hidden
+ layers.append(nn.Linear(lastv, out_dim))
+ self.layers = nn.Sequential(*layers)
+
+ def forward(self, x):
+ shape = x.shape[:-1]
+ x = self.layers(x.view(-1, x.shape[-1]))
+ return x.view(*shape, -1)
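+
+# (added note) the MLP is applied point-wise: an input of shape (..., in_dim) is flattened,
+# run through the linear layers, and reshaped back to (..., out_dim)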
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/models/models.py b/competitors_inference_code/LSRNA/lsr_training/models/models.py
new file mode 100644
index 0000000000000000000000000000000000000000..97cd9bd6fad4fdde66888f6d4ee579889dc96182
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/models/models.py
@@ -0,0 +1,23 @@
+import copy
+
+
+models = {}
+
+
+def register(name):
+ def decorator(cls):
+ models[name] = cls
+ return cls
+ return decorator
+
+
+def make(model_spec, args=None, load_sd=False):
+ if args is not None:
+ model_args = copy.deepcopy(model_spec['args'])
+ model_args.update(args)
+ else:
+ model_args = model_spec['args']
+ model = models[model_spec['name']](**model_args)
+ if load_sd:
+ model.load_state_dict(model_spec['sd'])
+ return model
diff --git a/competitors_inference_code/LSRNA/lsr_training/models/swinir.py b/competitors_inference_code/LSRNA/lsr_training/models/swinir.py
new file mode 100644
index 0000000000000000000000000000000000000000..c0d8107ab98ad0033b0a28fd799becbafaa3ce9d
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/models/swinir.py
@@ -0,0 +1,776 @@
+# -----------------------------------------------------------------------------------
+# SwinIR: Image Restoration Using Swin Transformer, https://arxiv.org/abs/2108.10257
+# Originally Written by Ze Liu, Modified by Jingyun Liang.
+# ----------------------------------------------------------------------------------
+# modified from: https://github.com/JingyunLiang/SwinIR
+
+import math
+import torch
+import torch.nn as nn
+import torch.nn.functional as F
+import torch.utils.checkpoint as checkpoint
+from timm.models.layers import DropPath, to_2tuple, trunc_normal_
+from models import register
+
+class Mlp(nn.Module):
+ def __init__(self, in_features, hidden_features=None, out_features=None, act_layer=nn.GELU, drop=0.):
+ super().__init__()
+ out_features = out_features or in_features
+ hidden_features = hidden_features or in_features
+ self.fc1 = nn.Linear(in_features, hidden_features)
+ self.act = act_layer()
+ self.fc2 = nn.Linear(hidden_features, out_features)
+ self.drop = nn.Dropout(drop)
+
+ def forward(self, x):
+ x = self.fc1(x)
+ x = self.act(x)
+ x = self.drop(x)
+ x = self.fc2(x)
+ x = self.drop(x)
+ return x
+
+
+def window_partition(x, window_size):
+ """
+ Args:
+ x: (B, H, W, C)
+ window_size (int): window size
+
+ Returns:
+ windows: (num_windows*B, window_size, window_size, C)
+ """
+ B, H, W, C = x.shape
+ x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
+ windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
+ return windows
+
+
+def window_reverse(windows, window_size, H, W):
+ """
+ Args:
+ windows: (num_windows*B, window_size, window_size, C)
+ window_size (int): Window size
+ H (int): Height of image
+ W (int): Width of image
+
+ Returns:
+ x: (B, H, W, C)
+ """
+ B = int(windows.shape[0] / (H * W / window_size / window_size))
+ x = windows.view(B, H // window_size, W // window_size, window_size, window_size, -1)
+ x = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(B, H, W, -1)
+ return x
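+
+# Illustrative example (added, not in the original SwinIR code): window_partition and
+# window_reverse are inverses for matching arguments, e.g. for x of shape (1, 16, 16, 60)
+# and window_size=8:
+#   windows = window_partition(x, 8)               # (4, 8, 8, 60)
+#   x_back = window_reverse(windows, 8, 16, 16)    # (1, 16, 16, 60), identical to x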
+
+
+class WindowAttention(nn.Module):
+ r""" Window based multi-head self attention (W-MSA) module with relative position bias.
+ It supports both shifted and non-shifted windows.
+
+ Args:
+ dim (int): Number of input channels.
+ window_size (tuple[int]): The height and width of the window.
+ num_heads (int): Number of attention heads.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set
+ attn_drop (float, optional): Dropout ratio of attention weight. Default: 0.0
+ proj_drop (float, optional): Dropout ratio of output. Default: 0.0
+ """
+
+ def __init__(self, dim, window_size, num_heads, qkv_bias=True, qk_scale=None, attn_drop=0., proj_drop=0.):
+
+ super().__init__()
+ self.dim = dim
+ self.window_size = window_size # Wh, Ww
+ self.num_heads = num_heads
+ head_dim = dim // num_heads
+ self.scale = qk_scale or head_dim ** -0.5
+
+ # define a parameter table of relative position bias
+ self.relative_position_bias_table = nn.Parameter(
+ torch.zeros((2 * window_size[0] - 1) * (2 * window_size[1] - 1), num_heads)) # 2*Wh-1 * 2*Ww-1, nH
+
+ # get pair-wise relative position index for each token inside the window
+ coords_h = torch.arange(self.window_size[0])
+ coords_w = torch.arange(self.window_size[1])
+ coords = torch.stack(torch.meshgrid([coords_h, coords_w])) # 2, Wh, Ww
+ coords_flatten = torch.flatten(coords, 1) # 2, Wh*Ww
+ relative_coords = coords_flatten[:, :, None] - coords_flatten[:, None, :] # 2, Wh*Ww, Wh*Ww
+ relative_coords = relative_coords.permute(1, 2, 0).contiguous() # Wh*Ww, Wh*Ww, 2
+ relative_coords[:, :, 0] += self.window_size[0] - 1 # shift to start from 0
+ relative_coords[:, :, 1] += self.window_size[1] - 1
+ relative_coords[:, :, 0] *= 2 * self.window_size[1] - 1
+ relative_position_index = relative_coords.sum(-1) # Wh*Ww, Wh*Ww
+ self.register_buffer("relative_position_index", relative_position_index)
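+ # (added note) relative_position_index maps every (query, key) pair inside a window to a row of
+ # relative_position_bias_table, so the learned bias depends only on their relative offset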
+
+ self.qkv = nn.Linear(dim, dim * 3, bias=qkv_bias)
+ self.attn_drop = nn.Dropout(attn_drop)
+ self.proj = nn.Linear(dim, dim)
+
+ self.proj_drop = nn.Dropout(proj_drop)
+
+ trunc_normal_(self.relative_position_bias_table, std=.02)
+ self.softmax = nn.Softmax(dim=-1)
+
+ def forward(self, x, mask=None):
+ """
+ Args:
+ x: input features with shape of (num_windows*B, N, C)
+ mask: (0/-inf) mask with shape of (num_windows, Wh*Ww, Wh*Ww) or None
+ """
+ B_, N, C = x.shape
+ qkv = self.qkv(x).reshape(B_, N, 3, self.num_heads, C // self.num_heads).permute(2, 0, 3, 1, 4)
+ q, k, v = qkv[0], qkv[1], qkv[2] # make torchscript happy (cannot use tensor as tuple)
+
+ q = q * self.scale
+ attn = (q @ k.transpose(-2, -1))
+
+ relative_position_bias = self.relative_position_bias_table[self.relative_position_index.view(-1)].view(
+ self.window_size[0] * self.window_size[1], self.window_size[0] * self.window_size[1], -1) # Wh*Ww,Wh*Ww,nH
+ relative_position_bias = relative_position_bias.permute(2, 0, 1).contiguous() # nH, Wh*Ww, Wh*Ww
+ attn = attn + relative_position_bias.unsqueeze(0)
+
+ if mask is not None:
+ nW = mask.shape[0]
+ attn = attn.view(B_ // nW, nW, self.num_heads, N, N) + mask.unsqueeze(1).unsqueeze(0)
+ attn = attn.view(-1, self.num_heads, N, N)
+ attn = self.softmax(attn)
+ else:
+ attn = self.softmax(attn)
+
+ attn = self.attn_drop(attn)
+
+ x = (attn @ v).transpose(1, 2).reshape(B_, N, C)
+ x = self.proj(x)
+ x = self.proj_drop(x)
+ return x
+
+ def extra_repr(self) -> str:
+ return f'dim={self.dim}, window_size={self.window_size}, num_heads={self.num_heads}'
+
+
+class SwinTransformerBlock(nn.Module):
+ r""" Swin Transformer Block.
+
+ Args:
+ dim (int): Number of input channels.
+ input_resolution (tuple[int]): Input resolution.
+ num_heads (int): Number of attention heads.
+ window_size (int): Window size.
+ shift_size (int): Shift size for SW-MSA.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float, optional): Stochastic depth rate. Default: 0.0
+ act_layer (nn.Module, optional): Activation layer. Default: nn.GELU
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(self, dim, input_resolution, num_heads, window_size=7, shift_size=0,
+ mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0., drop_path=0.,
+ act_layer=nn.GELU, norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.dim = dim
+ self.input_resolution = input_resolution
+ self.num_heads = num_heads
+ self.window_size = window_size
+ self.shift_size = shift_size
+ self.mlp_ratio = mlp_ratio
+ if min(self.input_resolution) <= self.window_size:
+ # if window size is larger than input resolution, we don't partition windows
+ self.shift_size = 0
+ self.window_size = min(self.input_resolution)
+ assert 0 <= self.shift_size < self.window_size, "shift_size must be in [0, window_size)"
+
+ self.norm1 = norm_layer(dim)
+ self.attn = WindowAttention(
+ dim, window_size=to_2tuple(self.window_size), num_heads=num_heads,
+ qkv_bias=qkv_bias, qk_scale=qk_scale, attn_drop=attn_drop, proj_drop=drop)
+
+ self.drop_path = DropPath(drop_path) if drop_path > 0. else nn.Identity()
+ self.norm2 = norm_layer(dim)
+ mlp_hidden_dim = int(dim * mlp_ratio)
+ self.mlp = Mlp(in_features=dim, hidden_features=mlp_hidden_dim, act_layer=act_layer, drop=drop)
+
+ if self.shift_size > 0:
+ attn_mask = self.calculate_mask(self.input_resolution)
+ else:
+ attn_mask = None
+
+ self.register_buffer("attn_mask", attn_mask)
+
+ def calculate_mask(self, x_size):
+ # calculate attention mask for SW-MSA
+ H, W = x_size
+ img_mask = torch.zeros((1, H, W, 1)) # 1 H W 1
+ h_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ w_slices = (slice(0, -self.window_size),
+ slice(-self.window_size, -self.shift_size),
+ slice(-self.shift_size, None))
+ cnt = 0
+ for h in h_slices:
+ for w in w_slices:
+ img_mask[:, h, w, :] = cnt
+ cnt += 1
+
+ mask_windows = window_partition(img_mask, self.window_size) # nW, window_size, window_size, 1
+ mask_windows = mask_windows.view(-1, self.window_size * self.window_size)
+ attn_mask = mask_windows.unsqueeze(1) - mask_windows.unsqueeze(2)
+ attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
+
+ return attn_mask
+
+ def forward(self, x, x_size):
+ H, W = x_size
+ B, L, C = x.shape
+ # assert L == H * W, "input feature has wrong size"
+
+ shortcut = x
+ x = self.norm1(x)
+ x = x.view(B, H, W, C)
+
+ # cyclic shift
+ if self.shift_size > 0:
+ shifted_x = torch.roll(x, shifts=(-self.shift_size, -self.shift_size), dims=(1, 2))
+ else:
+ shifted_x = x
+
+ # partition windows
+ x_windows = window_partition(shifted_x, self.window_size) # nW*B, window_size, window_size, C
+ x_windows = x_windows.view(-1, self.window_size * self.window_size, C) # nW*B, window_size*window_size, C
+
+ # W-MSA/SW-MSA (the attention mask is recomputed when the input size differs from self.input_resolution)
+ if self.input_resolution == x_size:
+ attn_windows = self.attn(x_windows, mask=self.attn_mask) # nW*B, window_size*window_size, C
+ else:
+ attn_windows = self.attn(x_windows, mask=self.calculate_mask(x_size).to(x.device))
+
+ # merge windows
+ attn_windows = attn_windows.view(-1, self.window_size, self.window_size, C)
+ shifted_x = window_reverse(attn_windows, self.window_size, H, W) # B H' W' C
+
+ # reverse cyclic shift
+ if self.shift_size > 0:
+ x = torch.roll(shifted_x, shifts=(self.shift_size, self.shift_size), dims=(1, 2))
+ else:
+ x = shifted_x
+ x = x.view(B, H * W, C)
+
+ # FFN
+ x = shortcut + self.drop_path(x)
+ x = x + self.drop_path(self.mlp(self.norm2(x)))
+
+ return x
+
+ def extra_repr(self) -> str:
+ return f"dim={self.dim}, input_resolution={self.input_resolution}, num_heads={self.num_heads}, " \
+ f"window_size={self.window_size}, shift_size={self.shift_size}, mlp_ratio={self.mlp_ratio}"
+
+
+class PatchMerging(nn.Module):
+ r""" Patch Merging Layer.
+
+ Args:
+ input_resolution (tuple[int]): Resolution of input feature.
+ dim (int): Number of input channels.
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ """
+
+ def __init__(self, input_resolution, dim, norm_layer=nn.LayerNorm):
+ super().__init__()
+ self.input_resolution = input_resolution
+ self.dim = dim
+ self.reduction = nn.Linear(4 * dim, 2 * dim, bias=False)
+ self.norm = norm_layer(4 * dim)
+
+ def forward(self, x):
+ """
+ x: B, H*W, C
+ """
+ H, W = self.input_resolution
+ B, L, C = x.shape
+ assert L == H * W, "input feature has wrong size"
+ assert H % 2 == 0 and W % 2 == 0, f"x size ({H}*{W}) is not even."
+
+ x = x.view(B, H, W, C)
+
+ x0 = x[:, 0::2, 0::2, :] # B H/2 W/2 C
+ x1 = x[:, 1::2, 0::2, :] # B H/2 W/2 C
+ x2 = x[:, 0::2, 1::2, :] # B H/2 W/2 C
+ x3 = x[:, 1::2, 1::2, :] # B H/2 W/2 C
+ x = torch.cat([x0, x1, x2, x3], -1) # B H/2 W/2 4*C
+ x = x.view(B, -1, 4 * C) # B H/2*W/2 4*C
+
+ x = self.norm(x)
+ x = self.reduction(x)
+
+ return x
+
+ def extra_repr(self) -> str:
+ return f"input_resolution={self.input_resolution}, dim={self.dim}"
+
+
+class BasicLayer(nn.Module):
+ """ A basic Swin Transformer layer for one stage.
+
+ Args:
+ dim (int): Number of input channels.
+ input_resolution (tuple[int]): Input resolution.
+ depth (int): Number of blocks.
+ num_heads (int): Number of attention heads.
+ window_size (int): Local window size.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
+ use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
+ """
+
+ def __init__(self, dim, input_resolution, depth, num_heads, window_size,
+ mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
+ drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False):
+
+ super().__init__()
+ self.dim = dim
+ self.input_resolution = input_resolution
+ self.depth = depth
+ self.use_checkpoint = use_checkpoint
+
+ # build blocks
+ self.blocks = nn.ModuleList([
+ SwinTransformerBlock(dim=dim, input_resolution=input_resolution,
+ num_heads=num_heads, window_size=window_size,
+ shift_size=0 if (i % 2 == 0) else window_size // 2,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias, qk_scale=qk_scale,
+ drop=drop, attn_drop=attn_drop,
+ drop_path=drop_path[i] if isinstance(drop_path, list) else drop_path,
+ norm_layer=norm_layer)
+ for i in range(depth)])
+
+ # patch merging layer
+ if downsample is not None:
+ self.downsample = downsample(input_resolution, dim=dim, norm_layer=norm_layer)
+ else:
+ self.downsample = None
+
+ def forward(self, x, x_size):
+ for blk in self.blocks:
+ if self.use_checkpoint:
+ x = checkpoint.checkpoint(blk, x, x_size)
+ else:
+ x = blk(x, x_size)
+ if self.downsample is not None:
+ x = self.downsample(x)
+ return x
+
+ def extra_repr(self) -> str:
+ return f"dim={self.dim}, input_resolution={self.input_resolution}, depth={self.depth}"
+
+
+class RSTB(nn.Module):
+ """Residual Swin Transformer Block (RSTB).
+
+ Args:
+ dim (int): Number of input channels.
+ input_resolution (tuple[int]): Input resolution.
+ depth (int): Number of blocks.
+ num_heads (int): Number of attention heads.
+ window_size (int): Local window size.
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim.
+ qkv_bias (bool, optional): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float | None, optional): Override default qk scale of head_dim ** -0.5 if set.
+ drop (float, optional): Dropout rate. Default: 0.0
+ attn_drop (float, optional): Attention dropout rate. Default: 0.0
+ drop_path (float | tuple[float], optional): Stochastic depth rate. Default: 0.0
+ norm_layer (nn.Module, optional): Normalization layer. Default: nn.LayerNorm
+ downsample (nn.Module | None, optional): Downsample layer at the end of the layer. Default: None
+ use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False.
+ img_size: Input image size.
+ patch_size: Patch size.
+ resi_connection: The convolutional block before residual connection.
+ """
+
+ def __init__(self, dim, input_resolution, depth, num_heads, window_size,
+ mlp_ratio=4., qkv_bias=True, qk_scale=None, drop=0., attn_drop=0.,
+ drop_path=0., norm_layer=nn.LayerNorm, downsample=None, use_checkpoint=False,
+ img_size=224, patch_size=4, resi_connection='1conv'):
+ super(RSTB, self).__init__()
+
+ self.dim = dim
+ self.input_resolution = input_resolution
+
+ self.residual_group = BasicLayer(dim=dim,
+ input_resolution=input_resolution,
+ depth=depth,
+ num_heads=num_heads,
+ window_size=window_size,
+ mlp_ratio=mlp_ratio,
+ qkv_bias=qkv_bias, qk_scale=qk_scale,
+ drop=drop, attn_drop=attn_drop,
+ drop_path=drop_path,
+ norm_layer=norm_layer,
+ downsample=downsample,
+ use_checkpoint=use_checkpoint)
+
+ if resi_connection == '1conv':
+ self.conv = nn.Conv2d(dim, dim, 3, 1, 1)
+ elif resi_connection == '3conv':
+ # to save parameters and memory
+ self.conv = nn.Sequential(nn.Conv2d(dim, dim // 4, 3, 1, 1), nn.LeakyReLU(negative_slope=0.2, inplace=True),
+ nn.Conv2d(dim // 4, dim // 4, 1, 1, 0),
+ nn.LeakyReLU(negative_slope=0.2, inplace=True),
+ nn.Conv2d(dim // 4, dim, 3, 1, 1))
+
+ self.patch_embed = PatchEmbed(
+ img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim,
+ norm_layer=None)
+
+ self.patch_unembed = PatchUnEmbed(
+ img_size=img_size, patch_size=patch_size, in_chans=0, embed_dim=dim,
+ norm_layer=None)
+
+ def forward(self, x, x_size):
+ return self.patch_embed(self.conv(self.patch_unembed(self.residual_group(x, x_size), x_size))) + x
+
+
+class PatchEmbed(nn.Module):
+ r""" Image to Patch Embedding
+
+ Args:
+ img_size (int): Image size. Default: 224.
+ patch_size (int): Patch token size. Default: 4.
+ in_chans (int): Number of input image channels. Default: 3.
+ embed_dim (int): Number of linear projection output channels. Default: 96.
+ norm_layer (nn.Module, optional): Normalization layer. Default: None
+ """
+
+ def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
+ super().__init__()
+ img_size = to_2tuple(img_size)
+ patch_size = to_2tuple(patch_size)
+ patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
+ self.img_size = img_size
+ self.patch_size = patch_size
+ self.patches_resolution = patches_resolution
+ self.num_patches = patches_resolution[0] * patches_resolution[1]
+
+ self.in_chans = in_chans
+ self.embed_dim = embed_dim
+
+ if norm_layer is not None:
+ self.norm = norm_layer(embed_dim)
+ else:
+ self.norm = None
+
+ def forward(self, x):
+ x = x.flatten(2).transpose(1, 2) # B Ph*Pw C
+ if self.norm is not None:
+ x = self.norm(x)
+ return x
+
+
+class PatchUnEmbed(nn.Module):
+ r""" Image to Patch Unembedding
+
+ Args:
+ img_size (int): Image size. Default: 224.
+ patch_size (int): Patch token size. Default: 4.
+ in_chans (int): Number of input image channels. Default: 3.
+ embed_dim (int): Number of linear projection output channels. Default: 96.
+ norm_layer (nn.Module, optional): Normalization layer. Default: None
+ """
+
+ def __init__(self, img_size=224, patch_size=4, in_chans=3, embed_dim=96, norm_layer=None):
+ super().__init__()
+ img_size = to_2tuple(img_size)
+ patch_size = to_2tuple(patch_size)
+ patches_resolution = [img_size[0] // patch_size[0], img_size[1] // patch_size[1]]
+ self.img_size = img_size
+ self.patch_size = patch_size
+ self.patches_resolution = patches_resolution
+ self.num_patches = patches_resolution[0] * patches_resolution[1]
+
+ self.in_chans = in_chans
+ self.embed_dim = embed_dim
+
+ def forward(self, x, x_size):
+ B, HW, C = x.shape
+ x = x.transpose(1, 2).view(B, self.embed_dim, x_size[0], x_size[1]) # B C Ph Pw
+ return x
+
+
+class Upsample(nn.Sequential):
+ """Upsample module.
+
+ Args:
+ scale (int): Scale factor. Supported scales: 2^n and 3.
+ num_feat (int): Channel number of intermediate features.
+ """
+
+ def __init__(self, scale, num_feat):
+ m = []
+ if (scale & (scale - 1)) == 0: # scale = 2^n
+ for _ in range(int(math.log(scale, 2))):
+ m.append(nn.Conv2d(num_feat, 4 * num_feat, 3, 1, 1))
+ m.append(nn.PixelShuffle(2))
+ elif scale == 3:
+ m.append(nn.Conv2d(num_feat, 9 * num_feat, 3, 1, 1))
+ m.append(nn.PixelShuffle(3))
+ else:
+ raise ValueError(f'scale {scale} is not supported. ' 'Supported scales: 2^n and 3.')
+ super(Upsample, self).__init__(*m)
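+ # (added note) e.g. Upsample(4, 64) stacks two Conv2d(64, 256) + PixelShuffle(2) stages,
+ # while Upsample(3, 64) uses a single Conv2d(64, 576) + PixelShuffle(3)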
+
+
+class UpsampleOneStep(nn.Sequential):
+ """UpsampleOneStep module (the difference with Upsample is that it always only has 1conv + 1pixelshuffle)
+ Used in lightweight SR to save parameters.
+
+ Args:
+ scale (int): Scale factor. Supported scales: 2^n and 3.
+ num_feat (int): Channel number of intermediate features.
+
+ """
+
+ def __init__(self, scale, num_feat, num_out_ch, input_resolution=None):
+ self.num_feat = num_feat
+ self.input_resolution = input_resolution
+ m = []
+ m.append(nn.Conv2d(num_feat, (scale ** 2) * num_out_ch, 3, 1, 1))
+ m.append(nn.PixelShuffle(scale))
+ super(UpsampleOneStep, self).__init__(*m)
+
+
+@register('swinir')
+class SwinIR(nn.Module):
+ r""" SwinIR
+ A PyTorch impl of : `SwinIR: Image Restoration Using Swin Transformer`, based on Swin Transformer.
+
+ Args:
+ img_size (int | tuple(int)): Input image size. Default 64
+ patch_size (int | tuple(int)): Patch size. Default: 1
+ in_chans (int): Number of input image channels. Default: 3
+ embed_dim (int): Patch embedding dimension. Default: 96
+ depths (tuple(int)): Depth of each Swin Transformer layer.
+ num_heads (tuple(int)): Number of attention heads in different layers.
+ window_size (int): Window size. Default: 8
+ mlp_ratio (float): Ratio of mlp hidden dim to embedding dim. Default: 4
+ qkv_bias (bool): If True, add a learnable bias to query, key, value. Default: True
+ qk_scale (float): Override default qk scale of head_dim ** -0.5 if set. Default: None
+ drop_rate (float): Dropout rate. Default: 0
+ attn_drop_rate (float): Attention dropout rate. Default: 0
+ drop_path_rate (float): Stochastic depth rate. Default: 0.1
+ norm_layer (nn.Module): Normalization layer. Default: nn.LayerNorm.
+ ape (bool): If True, add absolute position embedding to the patch embedding. Default: False
+ patch_norm (bool): If True, add normalization after patch embedding. Default: True
+ use_checkpoint (bool): Whether to use checkpointing to save memory. Default: False
+ upscale: Upscale factor. 2/3/4/8 for image SR, 1 for denoising and compression artifact reduction
+ img_range: Image range. 1. or 255.
+ upsampler: The reconstruction module. 'pixelshuffle'/'pixelshuffledirect'/'nearest+conv'/'none'
+ resi_connection: The convolutional block before residual connection. '1conv'/'3conv'
+ """
+
+ def __init__(self, img_size=64, patch_size=1, in_chans=4,
+ embed_dim=180, depths=[6,6,6,6,6,6], num_heads=[6,6,6,6,6,6],
+ window_size=8, mlp_ratio=2., qkv_bias=True, qk_scale=None,
+ drop_rate=0., attn_drop_rate=0., drop_path_rate=0.1,
+ norm_layer=nn.LayerNorm, ape=False, patch_norm=True,
+ use_checkpoint=False, upscale=2, img_range=1., upsampler='none', resi_connection='1conv',
+ **kwargs):
+ super(SwinIR, self).__init__()
+ num_in_ch = in_chans
+ num_out_ch = in_chans
+ num_feat = 64
+ self.img_range = img_range
+
+ self.upscale = upscale
+ self.upsampler = upsampler
+ self.window_size = window_size
+ self.out_dim = num_feat
+ #####################################################################################################
+ ################################### 1, shallow feature extraction ###################################
+ self.conv_first = nn.Conv2d(num_in_ch, embed_dim, 3, 1, 1)
+
+ #####################################################################################################
+ ################################### 2, deep feature extraction ######################################
+ self.num_layers = len(depths)
+ self.embed_dim = embed_dim
+ self.ape = ape
+ self.patch_norm = patch_norm
+ self.num_features = embed_dim
+ self.mlp_ratio = mlp_ratio
+
+ # split image into non-overlapping patches
+ self.patch_embed = PatchEmbed(
+ img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
+ norm_layer=norm_layer if self.patch_norm else None)
+ num_patches = self.patch_embed.num_patches
+ patches_resolution = self.patch_embed.patches_resolution
+ self.patches_resolution = patches_resolution
+
+ # merge non-overlapping patches into image
+ self.patch_unembed = PatchUnEmbed(
+ img_size=img_size, patch_size=patch_size, in_chans=embed_dim, embed_dim=embed_dim,
+ norm_layer=norm_layer if self.patch_norm else None)
+
+ # absolute position embedding
+ if self.ape:
+ self.absolute_pos_embed = nn.Parameter(torch.zeros(1, num_patches, embed_dim))
+ trunc_normal_(self.absolute_pos_embed, std=.02)
+
+ self.pos_drop = nn.Dropout(p=drop_rate)
+
+ # stochastic depth
+ dpr = [x.item() for x in torch.linspace(0, drop_path_rate, sum(depths))] # stochastic depth decay rule
+
+ # build Residual Swin Transformer blocks (RSTB)
+ self.layers = nn.ModuleList()
+ for i_layer in range(self.num_layers):
+ layer = RSTB(dim=embed_dim,
+ input_resolution=(patches_resolution[0],
+ patches_resolution[1]),
+ depth=depths[i_layer],
+ num_heads=num_heads[i_layer],
+ window_size=window_size,
+ mlp_ratio=self.mlp_ratio,
+ qkv_bias=qkv_bias, qk_scale=qk_scale,
+ drop=drop_rate, attn_drop=attn_drop_rate,
+ drop_path=dpr[sum(depths[:i_layer]):sum(depths[:i_layer + 1])], # no impact on SR results
+ norm_layer=norm_layer,
+ downsample=None,
+ use_checkpoint=use_checkpoint,
+ img_size=img_size,
+ patch_size=patch_size,
+ resi_connection=resi_connection
+
+ )
+ self.layers.append(layer)
+ self.norm = norm_layer(self.num_features)
+
+ # build the last conv layer in deep feature extraction
+ if resi_connection == '1conv':
+ self.conv_after_body = nn.Conv2d(embed_dim, embed_dim, 3, 1, 1)
+ elif resi_connection == '3conv':
+ # to save parameters and memory
+ self.conv_after_body = nn.Sequential(nn.Conv2d(embed_dim, embed_dim // 4, 3, 1, 1),
+ nn.LeakyReLU(negative_slope=0.2, inplace=True),
+ nn.Conv2d(embed_dim // 4, embed_dim // 4, 1, 1, 0),
+ nn.LeakyReLU(negative_slope=0.2, inplace=True),
+ nn.Conv2d(embed_dim // 4, embed_dim, 3, 1, 1))
+
+ #####################################################################################################
+ ################################ 3, high quality image reconstruction ################################
+ if self.upsampler == 'none':
+ self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
+ nn.LeakyReLU(inplace=True))
+ elif self.upsampler == 'pixelshuffle':
+ # for classical SR
+ self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
+ nn.LeakyReLU(inplace=True))
+ self.upsample = Upsample(upscale, num_feat)
+ self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
+ elif self.upsampler == 'pixelshuffledirect':
+ # for lightweight SR (to save parameters)
+ self.upsample = UpsampleOneStep(upscale, embed_dim, num_out_ch,
+ (patches_resolution[0], patches_resolution[1]))
+ elif self.upsampler == 'nearest+conv':
+ # for real-world SR (less artifacts)
+ assert self.upscale == 4, 'only support x4 now.'
+ self.conv_before_upsample = nn.Sequential(nn.Conv2d(embed_dim, num_feat, 3, 1, 1),
+ nn.LeakyReLU(inplace=True))
+ self.conv_up1 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
+ self.conv_up2 = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
+ self.conv_hr = nn.Conv2d(num_feat, num_feat, 3, 1, 1)
+ self.conv_last = nn.Conv2d(num_feat, num_out_ch, 3, 1, 1)
+ self.lrelu = nn.LeakyReLU(negative_slope=0.2, inplace=True)
+ else:
+ # for image denoising and JPEG compression artifact reduction
+ self.conv_last = nn.Conv2d(embed_dim, num_out_ch, 3, 1, 1)
+
+ self.apply(self._init_weights)
+
+ def _init_weights(self, m):
+ if isinstance(m, nn.Linear):
+ trunc_normal_(m.weight, std=.02)
+ if isinstance(m, nn.Linear) and m.bias is not None:
+ nn.init.constant_(m.bias, 0)
+ elif isinstance(m, nn.LayerNorm):
+ nn.init.constant_(m.bias, 0)
+ nn.init.constant_(m.weight, 1.0)
+
+ @torch.jit.ignore
+ def no_weight_decay(self):
+ return {'absolute_pos_embed'}
+
+ @torch.jit.ignore
+ def no_weight_decay_keywords(self):
+ return {'relative_position_bias_table'}
+
+ def check_image_size(self, x):
+ _, _, h, w = x.size()
+ mod_pad_h = (self.window_size - h % self.window_size) % self.window_size
+ mod_pad_w = (self.window_size - w % self.window_size) % self.window_size
+ x = F.pad(x, (0, mod_pad_w, 0, mod_pad_h), 'reflect')
+ return x
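+ # (added note) check_image_size reflect-pads H and W up to multiples of self.window_size;
+ # forward() crops the output back to the original H, W at the end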
+
+ def forward_features(self, x):
+ x_size = (x.shape[2], x.shape[3])
+ x = self.patch_embed(x)
+ if self.ape:
+ x = x + self.absolute_pos_embed
+ x = self.pos_drop(x)
+
+ for layer in self.layers:
+ x = layer(x, x_size)
+
+ x = self.norm(x) # B L C
+ x = self.patch_unembed(x, x_size)
+
+ return x
+
+ def forward(self, x):
+ H,W = x.shape[2:]
+ x = self.check_image_size(x)
+
+ # self.mean = self.mean.type_as(x)
+ # x = (x - self.mean) * self.img_range
+
+ if self.upsampler == 'none':
+ x = self.conv_first(x)
+ x = self.conv_after_body(self.forward_features(x)) + x
+ x = self.conv_before_upsample(x)
+ elif self.upsampler == 'pixelshuffle':
+ # for classical SR
+ x = self.conv_first(x)
+ x = self.conv_after_body(self.forward_features(x)) + x
+ x = self.conv_before_upsample(x)
+ x = self.conv_last(self.upsample(x))
+ elif self.upsampler == 'pixelshuffledirect':
+ # for lightweight SR
+ x = self.conv_first(x)
+ x = self.conv_after_body(self.forward_features(x)) + x
+ x = self.upsample(x)
+ elif self.upsampler == 'nearest+conv':
+ # for real-world SR
+ x = self.conv_first(x)
+ x = self.conv_after_body(self.forward_features(x)) + x
+ x = self.conv_before_upsample(x)
+ x = self.lrelu(self.conv_up1(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
+ x = self.lrelu(self.conv_up2(torch.nn.functional.interpolate(x, scale_factor=2, mode='nearest')))
+ x = self.conv_last(self.lrelu(self.conv_hr(x)))
+ else:
+ # for image denoising and JPEG compression artifact reduction
+ x_first = self.conv_first(x)
+ res = self.conv_after_body(self.forward_features(x_first)) + x_first
+ x = x + self.conv_last(res)
+
+ # x = x / self.img_range + self.mean
+ return x[:,:,:H,:W]
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/utils/__init__.py b/competitors_inference_code/LSRNA/lsr_training/utils/__init__.py
new file mode 100644
index 0000000000000000000000000000000000000000..86d16d45ce918aa071cc3d6cb992e340afd55821
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/utils/__init__.py
@@ -0,0 +1,8 @@
+from utils.utils_config import *
+from utils.utils_state import *
+from utils.utils_image import *
+from utils.utils_calc import *
+from utils.utils_io import *
+from utils.utils_dist import *
+from utils.utils_blindsr import *
+from utils.utils import *
diff --git a/competitors_inference_code/LSRNA/lsr_training/utils/utils.py b/competitors_inference_code/LSRNA/lsr_training/utils/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..4ed08cd5f84d40a3a683490c55d4c4370881d40e
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/utils/utils.py
@@ -0,0 +1,127 @@
+import os, sys
+import shutil
+import time
+
+import numpy as np
+import random
+import torch
+import torch.backends.cudnn as cudnn
+
+
+def compute_num_params(model, text=False):
+ tot = int(sum([np.prod(p.shape) for p in model.parameters()]))
+ if text:
+ if tot >= 1e6:
+ return '{:.3f}M'.format(tot / 1e6)
+ elif tot >= 1e3:
+ return '{:.2f}K'.format(tot / 1e3)
+ else:
+ return '{}'.format(tot)
+ else:
+ return tot
+
+
+def set_seed(seed):
+ random.seed(seed)
+ np.random.seed(seed)
+ torch.manual_seed(seed)
+ torch.cuda.manual_seed(seed)
+ torch.cuda.manual_seed_all(seed)
+ os.environ["PYTHONHASHSEED"] = str(seed)
+ cudnn.benchmark = False # slower training
+ cudnn.deterministic = True # slower training
+
+
+class Logger:
+ def __init__(self, log_path=None):
+ self.log_path = log_path
+ self.ignore = False
+
+ def set_log_path(self, path):
+ self.log_path = path
+
+ def disable(self):
+ self.ignore = True
+
+ def log(self, obj, filename='log.txt'):
+ if not self.ignore:
+ print(obj)
+ if self.log_path is not None:
+ with open(os.path.join(self.log_path, filename), 'a') as f:
+ print(obj, file=f)
+
+ @staticmethod
+ def ensure_path(path, remove=True):
+ basename = os.path.basename(path.rstrip('/'))
+ if os.path.exists(path):
+ if remove and (basename.startswith('_') or input('{} exists, remove? (y/[n]): '.format(path)).lower() == 'y'):
+ shutil.rmtree(path)
+ os.makedirs(path)
+ else:
+ os.makedirs(path)
+
+ def set_save_path(self, save_path, remove=True):
+ self.ensure_path(save_path, remove=remove)
+ self.set_log_path(save_path)
+ return self.log
+
+
+def make_coord(shape, ranges=None, flatten=True, device='cpu'):
+ # Make coordinates at grid centers.
+ coord_seqs = []
+ for i, n in enumerate(shape):
+ if ranges is None:
+ v0, v1 = -1, 1
+ else:
+ v0, v1 = ranges[i]
+ r = (v1 - v0) / (2 * n)
+ seq = v0 + r + (2 * r) * torch.arange(n, device=device).float()
+ coord_seqs.append(seq)
+ ret = torch.stack(torch.meshgrid(*coord_seqs), dim=-1)
+ if flatten:
+ ret = ret.view(-1, ret.shape[-1])
+ return ret
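+
+# (added example) coordinates are placed at cell centers in [-1, 1]; e.g. make_coord((2, 2))
+# returns [[-0.5, -0.5], [-0.5, 0.5], [0.5, -0.5], [0.5, 0.5]]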
+
+def to_pixel_samples(img, flatten=True, device='cpu'):
+ """
+ Convert the image to coord-Val pairs.
+ img: Tensor, (C, H, W)
+ """
+ assert img.ndim == 3
+ coord = make_coord(img.shape[-2:], flatten=flatten, device=device)
+ if flatten:
+ val = img.flatten(1).transpose(0,1)
+ else:
+ val = img.permute(1,2,0)
+ return coord, val
+
+
+class Averager():
+ def __init__(self):
+ self.n = 0.0
+ self.v = 0.0
+
+ def add(self, v, n=1.0):
+ self.v = (self.v * self.n + v * n) / (self.n + n)
+ self.n += n
+
+ def item(self):
+ return self.v
+
+class Timer():
+ def __init__(self):
+ self.v = time.time()
+
+ def s(self):
+ self.v = time.time()
+
+ def t(self):
+ return time.time() - self.v
+
+def time_text(t):
+ if t >= 3600:
+ return '{:.1f}h'.format(t / 3600)
+ elif t >= 60:
+ return '{:.1f}m'.format(t / 60)
+ else:
+ return '{:.1f}s'.format(t)
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/utils/utils_blindsr.py b/competitors_inference_code/LSRNA/lsr_training/utils/utils_blindsr.py
new file mode 100644
index 0000000000000000000000000000000000000000..9ddefac67d1bc26722ddf1e8924eb7a586ef9ca0
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/utils/utils_blindsr.py
@@ -0,0 +1,301 @@
+# https://github.com/cszn/KAIR/blob/master/utils/utils_blindsr.py
+# -*- coding: utf-8 -*-
+import numpy as np
+import cv2
+import torch
+import torch.nn.functional as F
+
+import random
+from scipy import ndimage
+import scipy
+import scipy.stats as ss
+from scipy.linalg import orth
+
+
+def uint2single(img):
+ return np.float32(img/255.)
+
+def single2uint(img):
+ return np.uint8((img.clip(0, 1)*255.).round())
+
+"""
+# --------------------------------------------
+# anisotropic Gaussian kernels
+# --------------------------------------------
+"""
+def anisotropic_Gaussian(ksize=15, theta=np.pi, l1=6, l2=6):
+ """ generate an anisotropic Gaussian kernel
+ Args:
+ ksize : e.g., 15, kernel size
+ theta : [0, pi], rotation angle range
+ l1 : [0.1,50], scaling of eigenvalues
+ l2 : [0.1,l1], scaling of eigenvalues
+ If l1 = l2, will get an isotropic Gaussian kernel.
+
+ Returns:
+ k : kernel
+ """
+
+ v = np.dot(np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]]), np.array([1., 0.]))
+ V = np.array([[v[0], v[1]], [v[1], -v[0]]])
+ D = np.array([[l1, 0], [0, l2]])
+ Sigma = np.dot(np.dot(V, D), np.linalg.inv(V))
+ k = gm_blur_kernel(mean=[0, 0], cov=Sigma, size=ksize)
+
+ return k
+
+
+def gm_blur_kernel(mean, cov, size=15):
+ center = size / 2.0 + 0.5
+ k = np.zeros([size, size])
+ for y in range(size):
+ for x in range(size):
+ cy = y - center + 1
+ cx = x - center + 1
+ k[y, x] = ss.multivariate_normal.pdf([cx, cy], mean=mean, cov=cov)
+
+ k = k / np.sum(k)
+ return k
+
+
+
+def fspecial_gaussian(hsize, sigma):
+ hsize = [hsize, hsize]
+ siz = [(hsize[0]-1.0)/2.0, (hsize[1]-1.0)/2.0]
+ std = sigma
+ [x, y] = np.meshgrid(np.arange(-siz[1], siz[1]+1), np.arange(-siz[0], siz[0]+1))
+ arg = -(x*x + y*y)/(2*std*std)
+ h = np.exp(arg)
+ h[h < np.finfo(float).eps * h.max()] = 0
+ sumh = h.sum()
+ if sumh != 0:
+ h = h/sumh
+ return h
+
+
+def fspecial_laplacian(alpha):
+ alpha = max([0, min([alpha,1])])
+ h1 = alpha/(alpha+1)
+ h2 = (1-alpha)/(alpha+1)
+ h = [[h1, h2, h1], [h2, -4/(alpha+1), h2], [h1, h2, h1]]
+ h = np.array(h)
+ return h
+
+
+def fspecial(filter_type, *args, **kwargs):
+ '''
+ python code from:
+ https://github.com/ronaldosena/imagens-medicas-2/blob/40171a6c259edec7827a6693a93955de2bd39e76/Aulas/aula_2_-_uniform_filter/matlab_fspecial.py
+ '''
+ if filter_type == 'gaussian':
+ return fspecial_gaussian(*args, **kwargs)
+ if filter_type == 'laplacian':
+ return fspecial_laplacian(*args, **kwargs)
+
+"""
+# --------------------------------------------
+# degradation models
+# --------------------------------------------
+"""
+
+def add_sharpening(img, weight=0.5, radius=50, threshold=10):
+ """USM sharpening. borrowed from real-ESRGAN
+ Input image: I; Blurry image: B.
+ 1. K = I + weight * (I - B)
+ 2. Mask = 1 if abs(I - B) > threshold, else: 0
+ 3. Blur mask:
+ 4. Out = Mask * K + (1 - Mask) * I
+ Args:
+ img (Numpy array): Input image, HWC, BGR; float32, [0, 1].
+ weight (float): Sharp weight. Default: 0.5.
+ radius (float): Kernel size of Gaussian blur. Default: 50.
+ threshold (int): Mask threshold. Default: 10.
+ """
+ if radius % 2 == 0:
+ radius += 1
+ blur = cv2.GaussianBlur(img, (radius, radius), 0)
+ residual = img - blur
+ mask = np.abs(residual) * 255 > threshold
+ mask = mask.astype('float32')
+ soft_mask = cv2.GaussianBlur(mask, (radius, radius), 0)
+
+ K = img + weight * residual
+ K = np.clip(K, 0, 1)
+ return soft_mask * K + (1 - soft_mask) * img
+
+
+def torch_convolve(img, k):
+ img_tensor = torch.tensor(img, dtype=torch.float32).permute(2, 0, 1).unsqueeze(0) # (1,3,h,w)
+ k_tensor = torch.tensor(k, dtype=torch.float32).unsqueeze(0).unsqueeze(0) # (1,1,p,p)
+ k_tensor = k_tensor.expand(3, 1, -1, -1) # (3,1,p,p)
+ k_height, k_width = k_tensor.shape[-2:]
+
+ pad_height = k_height // 2
+ pad_width = k_width // 2
+ img_padded = F.pad(img_tensor, (pad_width, pad_width, pad_height, pad_height), mode='reflect')
+
+ output = F.conv2d(img_padded, k_tensor, groups=3)
+ output = output.squeeze(0).permute(1,2,0).detach().cpu().numpy()
+ return output
+
+def add_blur(img, sf=4):
+ wd2 = 4.0 + sf
+ wd = 2.0 + 0.2*sf
+ if random.random() < 0.5:
+ l1 = wd2*random.random()
+ l2 = wd2*random.random()
+ k = anisotropic_Gaussian(ksize=2*random.randint(2,11)+3, theta=random.random()*np.pi, l1=l1, l2=l2)
+ else:
+ k = fspecial('gaussian', 2*random.randint(2,11)+3, wd*random.random())
+ #img = ndimage.filters.convolve(img, np.expand_dims(k, axis=2), mode='mirror') # too heavy for high-resolution image
+ img = torch_convolve(img, k)
+ return img
+
+
+def add_resize(img, sf=4):
+ rnum = np.random.rand()
+ if rnum > 0.8: # up
+ sf1 = random.uniform(1, 2)
+ elif rnum < 0.7: # down
+ sf1 = random.uniform(0.5/sf, 1)
+ else:
+ sf1 = 1.0
+ img = cv2.resize(img, (int(sf1*img.shape[1]), int(sf1*img.shape[0])), interpolation=random.choice([1, 2, 3]))
+ img = np.clip(img, 0.0, 1.0)
+
+ return img
+
+
+def add_Gaussian_noise(img, noise_level1=2, noise_level2=25):
+ noise_level = random.randint(noise_level1, noise_level2)
+ rnum = np.random.rand()
+ if rnum > 0.6: # add color Gaussian noise
+ img += np.random.normal(0, noise_level/255.0, img.shape).astype(np.float32)
+ elif rnum < 0.4: # add grayscale Gaussian noise
+ img += np.random.normal(0, noise_level/255.0, (*img.shape[:2], 1)).astype(np.float32)
+ else: # add noise
+ L = noise_level2/255.
+ D = np.diag(np.random.rand(3))
+ U = orth(np.random.rand(3,3))
+ conv = np.dot(np.dot(np.transpose(U), D), U)
+ img += np.random.multivariate_normal([0,0,0], np.abs(L**2*conv), img.shape[:2]).astype(np.float32)
+ img = np.clip(img, 0.0, 1.0)
+ return img
+
+
+def add_speckle_noise(img, noise_level1=2, noise_level2=25):
+ noise_level = random.randint(noise_level1, noise_level2)
+ img = np.clip(img, 0.0, 1.0)
+ rnum = random.random()
+ if rnum > 0.6:
+ img += img*np.random.normal(0, noise_level/255.0, img.shape).astype(np.float32)
+ elif rnum < 0.4:
+ img += img*np.random.normal(0, noise_level/255.0, (*img.shape[:2], 1)).astype(np.float32)
+ else:
+ L = noise_level2/255.
+ D = np.diag(np.random.rand(3))
+ U = orth(np.random.rand(3,3))
+ conv = np.dot(np.dot(np.transpose(U), D), U)
+ img += img*np.random.multivariate_normal([0,0,0], np.abs(L**2*conv), img.shape[:2]).astype(np.float32)
+ img = np.clip(img, 0.0, 1.0)
+ return img
+
+
+def add_Poisson_noise(img):
+ img = np.clip((img * 255.0).round(), 0, 255) / 255.
+ vals = 10**(2*random.random()+2.0) # [2, 4]
+ if random.random() < 0.5:
+ img = np.random.poisson(img * vals).astype(np.float32) / vals
+ else:
+ img_gray = np.dot(img[...,:3], [0.299, 0.587, 0.114])
+ img_gray = np.clip((img_gray * 255.0).round(), 0, 255) / 255.
+ noise_gray = np.random.poisson(img_gray * vals).astype(np.float32) / vals - img_gray
+ img += noise_gray[:, :, np.newaxis]
+ img = np.clip(img, 0.0, 1.0)
+ return img
+
+
+def add_JPEG_noise(img):
+ quality_factor = random.randint(30, 95)
+ img = cv2.cvtColor(single2uint(img), cv2.COLOR_RGB2BGR)
+ result, encimg = cv2.imencode('.jpg', img, [int(cv2.IMWRITE_JPEG_QUALITY), quality_factor])
+ img = cv2.imdecode(encimg, 1)
+ img = cv2.cvtColor(uint2single(img), cv2.COLOR_BGR2RGB)
+ return img
+
+
+def degradation_bsrgan_plus(img, sf=4, shuffle_prob=0.1, use_sharp=True, isp_model=None):
+ """
+ This is an extended degradation model by combining
+ the degradation models of BSRGAN and Real-ESRGAN
+ ----------
+ img: HXWXC, [0, 1]
+ sf: scale factor
+ shuffle_prob: probability of shuffling the degradation order
+ use_sharp: whether to sharpen the image first
+
+ Returns
+ -------
+ img: low-quality patch, range: [0, 1]
+ """
+ original_h, original_w = img.shape[:2]
+ h1, w1 = img.shape[:2]
+ if use_sharp:
+ img = add_sharpening(img)
+
+ if random.random() < shuffle_prob:
+ shuffle_order = random.sample(range(13), 13)
+ else:
+ shuffle_order = list(range(13))
+ # local shuffle for noise, JPEG is always the last one
+ shuffle_order[2:6] = random.sample(shuffle_order[2:6], len(range(2, 6)))
+ shuffle_order[9:13] = random.sample(shuffle_order[9:13], len(range(9, 13)))
+
+ poisson_prob, speckle_prob, isp_prob = 0.1, 0.1, 0.1
+
+ for i in shuffle_order:
+ if i == 0:
+ img = add_blur(img, sf=sf)
+ elif i == 1:
+ img = add_resize(img, sf=sf)
+ elif i == 2:
+ img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
+ elif i == 3:
+ if random.random() < poisson_prob:
+ img = add_Poisson_noise(img)
+ elif i == 4:
+ if random.random() < speckle_prob:
+ img = add_speckle_noise(img)
+ elif i == 5:
+ continue
+ # if random.random() < isp_prob and isp_model is not None:
+ # with torch.no_grad():
+ # img, hq = isp_model.forward(img.copy(), hq)
+ elif i == 6:
+ img = add_JPEG_noise(img)
+ elif i == 7:
+ img = add_blur(img, sf=sf)
+ elif i == 8:
+ img = add_resize(img, sf=sf)
+ elif i == 9:
+ img = add_Gaussian_noise(img, noise_level1=2, noise_level2=25)
+ elif i == 10:
+ if random.random() < poisson_prob:
+ img = add_Poisson_noise(img)
+ elif i == 11:
+ if random.random() < speckle_prob:
+ img = add_speckle_noise(img)
+ elif i == 12:
+ continue
+ # if random.random() < isp_prob and isp_model is not None:
+ # with torch.no_grad():
+ # img, hq = isp_model.forward(img.copy(), hq)
+ else:
+ print('check the shuffle!')
+
+ # resize to desired size
+ img = cv2.resize(img, (int(1/sf*original_w), int(1/sf*original_h)), interpolation=random.choice([1, 2, 3]))
+
+ # add final JPEG compression noise
+ img = add_JPEG_noise(img)
+ return img
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/utils/utils_calc.py b/competitors_inference_code/LSRNA/lsr_training/utils/utils_calc.py
new file mode 100644
index 0000000000000000000000000000000000000000..e84fa6deab6ca25d2dc79f7fb2eac4376eda4d79
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/utils/utils_calc.py
@@ -0,0 +1,64 @@
+import numpy as np
+import torch
+from utils.utils_image import tensor2numpy
+
+# https://github.com/cszn/KAIR
+def rgb2ycbcr(img, only_y=True):
+ """same as matlab rgb2ycbcr
+ only_y: only return Y channel
+ Input: (h,w,3) np array
+ uint8, [0, 255]
+ float, [0, 1]
+ """
+ in_img_type = img.dtype
+    img = img.astype(np.float32)  # work on a float copy so the caller's array is not modified
+ if in_img_type != np.uint8:
+ img *= 255.
+ # convert
+ if only_y:
+ rlt = np.dot(img, [65.481, 128.553, 24.966]) / 255.0 + 16.0
+ else:
+ rlt = np.matmul(img, [[65.481, -37.797, 112.0], [128.553, -74.203, -93.786],
+ [24.966, 112.0, -18.214]]) / 255.0 + [16, 128, 128]
+ if in_img_type == np.uint8:
+ rlt = rlt.round()
+ else:
+ rlt /= 255.
+ return rlt.astype(in_img_type)
+
+
+def psnr_measure(src, tar, y_channel=False, shave_border=0):
+ # np array must be 0-255, (h,w,3)
+ # tensor must be 0-1, (3,h,w)
+ if isinstance(src, torch.Tensor):
+ assert isinstance(tar, torch.Tensor)
+ if src.ndim == 4:
+ src = src.squeeze(0)
+ if tar.ndim == 4:
+ tar = tar.squeeze(0)
+ if y_channel:
+ src = tensor2numpy(src)
+ tar = tensor2numpy(tar)
+ src = rgb2ycbcr(src).astype(np.float32, copy=False)
+ tar = rgb2ycbcr(tar).astype(np.float32, copy=False)
+ else:
+ src = (src*255).clamp_(0,255).round().permute(1,2,0)
+ tar = (tar*255).clamp_(0,255).round().permute(1,2,0)
+ else:
+ if y_channel:
+ src = rgb2ycbcr(src)
+ tar = rgb2ycbcr(tar)
+ src = src.astype(np.float32, copy=False)
+ tar = tar.astype(np.float32, copy=False)
+ diff = tar - src
+ if shave_border > 0:
+ diff = diff[shave_border:-shave_border, shave_border:-shave_border]
+
+ if isinstance(diff, torch.Tensor):
+ err = torch.mean(torch.pow(diff, 2)).item()
+ else:
+ err = np.mean(np.power(diff, 2))
+ #if err < 0.6502:
+ # return 50
+ #else:
+ return 10 * np.log10((255. ** 2) / err)
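+
+
+if __name__ == '__main__':
+    # Illustrative self-check (not part of the original LSRNA utilities); assumes the
+    # script is run from lsr_training/ so that the `utils` package import above resolves.
+    rng = np.random.default_rng(0)
+    gt = rng.integers(0, 256, size=(64, 64, 3), dtype=np.uint8)
+    noisy = np.clip(gt.astype(np.float32) + rng.normal(0, 5, size=gt.shape), 0, 255).astype(np.uint8)
+    print('PSNR (RGB):', psnr_measure(noisy, gt))
+    print('PSNR (Y)  :', psnr_measure(noisy, gt, y_channel=True))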
diff --git a/competitors_inference_code/LSRNA/lsr_training/utils/utils_config.py b/competitors_inference_code/LSRNA/lsr_training/utils/utils_config.py
new file mode 100644
index 0000000000000000000000000000000000000000..9000eba1b5939de9772fe02c43fa992e8e8b632a
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/utils/utils_config.py
@@ -0,0 +1,12 @@
+import yaml
+import os
+
+def load_config(config_path):
+ with open(config_path, 'r') as f:
+ config = yaml.load(f, Loader=yaml.FullLoader)
+    if config.get('seed') is None:  # only a missing key means "no fixed seed"; 0 is a valid seed
+        config['seed'] = None
+ save_path = os.path.join('save', config_path.split('/')[-1][:-len('.yaml')])
+ config['save_path'] = save_path
+ config['resume_path'] = os.path.join(save_path, 'iter_last.pth')
+ return config
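+
+
+if __name__ == '__main__':
+    # Illustrative sketch (not part of the original code): write a minimal YAML config
+    # to a temporary file and load it; the `lr` key is a made-up example value.
+    import tempfile
+    with tempfile.NamedTemporaryFile('w', suffix='.yaml', delete=False) as f:
+        f.write('seed: 42\nlr: 1.0e-4\n')
+        tmp_path = f.name
+    cfg = load_config(tmp_path)
+    print(cfg['seed'], cfg['save_path'], cfg['resume_path'])
+    os.remove(tmp_path)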
diff --git a/competitors_inference_code/LSRNA/lsr_training/utils/utils_dist.py b/competitors_inference_code/LSRNA/lsr_training/utils/utils_dist.py
new file mode 100644
index 0000000000000000000000000000000000000000..ffa3e4ed47c9a831c9578ae247b34f114df71c45
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/utils/utils_dist.py
@@ -0,0 +1,202 @@
+# Modified from https://github.com/open-mmlab/mmcv/blob/master/mmcv/runner/dist_utils.py # noqa: E501
+import functools
+import os
+import subprocess
+import torch
+import torch.distributed as dist
+import torch.multiprocessing as mp
+import pickle
+
+
+# ----------------------------------
+# init
+# ----------------------------------
+def init_dist(launcher, backend='nccl', **kwargs):
+ if mp.get_start_method(allow_none=True) is None:
+ mp.set_start_method('spawn')
+
+ if launcher == 'pytorch':
+ _init_dist_pytorch(backend, **kwargs)
+ elif launcher == 'slurm':
+ _init_dist_slurm(backend, **kwargs)
+ else:
+ raise ValueError(f'Invalid launcher type: {launcher}')
+
+
+def _init_dist_pytorch(backend, **kwargs):
+ rank = int(os.environ['RANK'])
+ num_gpus = torch.cuda.device_count()
+ torch.cuda.set_device(rank % num_gpus)
+ dist.init_process_group(backend=backend, **kwargs)
+
+
+def _init_dist_slurm(backend, port=None):
+ """Initialize slurm distributed training environment.
+    If the argument ``port`` is not specified, the master port will be read from the
+    environment variable ``MASTER_PORT``. If ``MASTER_PORT`` is not set either,
+    the default port ``29500`` will be used.
+ Args:
+ backend (str): Backend of torch.distributed.
+ port (int, optional): Master port. Defaults to None.
+ """
+ proc_id = int(os.environ['SLURM_PROCID'])
+ ntasks = int(os.environ['SLURM_NTASKS'])
+ node_list = os.environ['SLURM_NODELIST']
+ num_gpus = torch.cuda.device_count()
+ torch.cuda.set_device(proc_id % num_gpus)
+ addr = subprocess.getoutput(
+ f'scontrol show hostname {node_list} | head -n1')
+ # specify master port
+ if port is not None:
+ os.environ['MASTER_PORT'] = str(port)
+ elif 'MASTER_PORT' in os.environ:
+ pass # use MASTER_PORT in the environment variable
+ else:
+ # 29500 is torch.distributed default port
+ os.environ['MASTER_PORT'] = '29500'
+ os.environ['MASTER_ADDR'] = addr
+ os.environ['WORLD_SIZE'] = str(ntasks)
+ os.environ['LOCAL_RANK'] = str(proc_id % num_gpus)
+ os.environ['RANK'] = str(proc_id)
+ dist.init_process_group(backend=backend)
+
+
+
+# ----------------------------------
+# get rank and world_size
+# ----------------------------------
+def get_dist_info():
+ if dist.is_available():
+ initialized = dist.is_initialized()
+ else:
+ initialized = False
+ if initialized:
+ rank = dist.get_rank()
+ world_size = dist.get_world_size()
+ else:
+ rank = 0
+ world_size = 1
+ return rank, world_size
+
+
+def get_rank():
+ if not dist.is_available():
+ return 0
+
+ if not dist.is_initialized():
+ return 0
+
+ return dist.get_rank()
+
+
+def get_world_size():
+ if not dist.is_available():
+ return 1
+
+ if not dist.is_initialized():
+ return 1
+
+ return dist.get_world_size()
+
+
+def master_only(func):
+
+ @functools.wraps(func)
+ def wrapper(*args, **kwargs):
+ rank, _ = get_dist_info()
+ if rank == 0:
+ return func(*args, **kwargs)
+
+ return wrapper
+
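+# Illustrative usage (not from the original code): restrict side effects such as
+# logging or checkpoint saving to rank 0, e.g.
+#
+#   @master_only
+#   def log(msg):
+#       print(msg)
+#
+#   log('printed once per job, not once per process')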
+
+
+
+
+
+# ----------------------------------
+# operation across ranks
+# ----------------------------------
+def reduce_sum(tensor):
+ if not dist.is_available():
+ return tensor
+
+ if not dist.is_initialized():
+ return tensor
+
+ tensor = tensor.clone()
+ dist.all_reduce(tensor, op=dist.ReduceOp.SUM)
+
+ return tensor
+
+
+def gather_grad(params):
+ world_size = get_world_size()
+
+ if world_size == 1:
+ return
+
+ for param in params:
+ if param.grad is not None:
+ dist.all_reduce(param.grad.data, op=dist.ReduceOp.SUM)
+ param.grad.data.div_(world_size)
+
+
+def all_gather(data):
+ world_size = get_world_size()
+
+ if world_size == 1:
+ return [data]
+
+ buffer = pickle.dumps(data)
+ storage = torch.ByteStorage.from_buffer(buffer)
+ tensor = torch.ByteTensor(storage).to('cuda')
+
+ local_size = torch.IntTensor([tensor.numel()]).to('cuda')
+ size_list = [torch.IntTensor([0]).to('cuda') for _ in range(world_size)]
+ dist.all_gather(size_list, local_size)
+ size_list = [int(size.item()) for size in size_list]
+ max_size = max(size_list)
+
+ tensor_list = []
+ for _ in size_list:
+ tensor_list.append(torch.ByteTensor(size=(max_size,)).to('cuda'))
+
+ if local_size != max_size:
+ padding = torch.ByteTensor(size=(max_size - local_size,)).to('cuda')
+ tensor = torch.cat((tensor, padding), 0)
+
+ dist.all_gather(tensor_list, tensor)
+
+ data_list = []
+
+ for size, tensor in zip(size_list, tensor_list):
+ buffer = tensor.cpu().numpy().tobytes()[:size]
+ data_list.append(pickle.loads(buffer))
+
+ return data_list
+
+
+def reduce_loss_dict(loss_dict):
+ world_size = get_world_size()
+
+ if world_size < 2:
+ return loss_dict
+
+ with torch.no_grad():
+ keys = []
+ losses = []
+
+ for k in sorted(loss_dict.keys()):
+ keys.append(k)
+ losses.append(loss_dict[k])
+
+ losses = torch.stack(losses, 0)
+ dist.reduce(losses, dst=0)
+
+ if dist.get_rank() == 0:
+ losses /= world_size
+
+ reduced_losses = {k: v for k, v in zip(keys, losses)}
+
+ return reduced_losses
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/utils/utils_image.py b/competitors_inference_code/LSRNA/lsr_training/utils/utils_image.py
new file mode 100644
index 0000000000000000000000000000000000000000..ce1094f8dd465761192a611a1380677bd923a528
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/utils/utils_image.py
@@ -0,0 +1,110 @@
+import os
+from os import path as osp
+import numpy as np
+from PIL import Image
+from tqdm import tqdm
+
+import torch
+import torch.nn.functional as F
+from torchvision import transforms
+IMG_EXTENSIONS = ['.jpg', '.JPG', '.jpeg', '.JPEG', '.png', '.PNG', '.ppm', '.PPM', '.bmp', '.BMP']
+
+
+def get_resolution(img_path):
+ img = Image.open(img_path).convert('RGB')
+ w,h = img.size
+ return (h,w)
+
+def get_image_np(img_path):
+ img = Image.open(img_path).convert('RGB')
+ return np.array(img)
+
+def get_image_tensor(img_path):
+ img = Image.open(img_path).convert('RGB')
+ return transforms.ToTensor()(img)
+
+
+def get_resolutions(folder_path):
+ assert os.path.isdir(folder_path), '{:s} is not a valid directory'.format(folder_path)
+ resols = []
+ for fname in tqdm(sorted(os.listdir(folder_path))):
+ if any(fname.endswith(extension) for extension in IMG_EXTENSIONS):
+ img_path = os.path.join(folder_path, fname)
+ resols.append(get_resolution(img_path))
+ return resols
+
+def get_images_np(folder_path):
+ assert os.path.isdir(folder_path), '{:s} is not a valid directory'.format(folder_path)
+ imgs = []
+ for fname in tqdm(sorted(os.listdir(folder_path))):
+ if any(fname.endswith(extension) for extension in IMG_EXTENSIONS):
+ img_path = os.path.join(folder_path, fname)
+ img = get_image_np(img_path)
+ imgs.append(img)
+ return imgs
+
+def get_images_tensor(folder_path):
+ assert os.path.isdir(folder_path), '{:s} is not a valid directory'.format(folder_path)
+ imgs = []
+ for fname in tqdm(sorted(os.listdir(folder_path))):
+ if any(fname.endswith(extension) for extension in IMG_EXTENSIONS):
+ img_path = os.path.join(folder_path, fname)
+ img = get_image_tensor(img_path)
+ imgs.append(img)
+ return imgs
+
+
+def read_img(img_path):
+ if img_path.split('.')[-1] == 'npy':
+ img = np.load(img_path)
+ else:
+ img = np.array(Image.open(img_path).convert('RGB')) / 255.
+ return img
+
+def random_crop(img, size, return_pos=False):
+ assert img.ndim == 3
+ if img.shape[0] in [3,4,8] and img.shape[0] < img.shape[1]: # (c,h,w)
+ x0 = np.random.randint(0, img.shape[1]-size+1)
+ y0 = np.random.randint(0, img.shape[2]-size+1)
+ img = img[:, x0: x0+size, y0: y0+size]
+ else: # (h,w,c)
+ x0 = np.random.randint(0, img.shape[0]-size+1)
+ y0 = np.random.randint(0, img.shape[1]-size+1)
+ img = img[x0: x0+size, y0: y0+size, :]
+ if return_pos:
+ return img, (x0, y0)
+ else:
+ return img
+
+def random_crop_together(hr, lr, lsize, return_pos=False):
+ # img: (h,w,c), range independent
+ assert lr.shape[0] > lr.shape[-1]
+ s = hr.shape[0] // lr.shape[0]
+ x0 = np.random.randint(0, lr.shape[0]-lsize+1)
+ y0 = np.random.randint(0, lr.shape[1]-lsize+1)
+ lr = lr[x0: x0+lsize, y0: y0+lsize, :]
+ hr = hr[x0*s: (x0+lsize)*s, y0*s: (y0+lsize)*s, :]
+ if return_pos:
+ return hr, lr, (x0, y0)
+ else:
+ return hr, lr
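+
+# Illustrative sketch (not from the original code): crop spatially aligned patches
+# from a 4x HR/LR pair, e.g.
+#   hr = np.zeros((512, 512, 3)); lr = np.zeros((128, 128, 3))
+#   hr_patch, lr_patch = random_crop_together(hr, lr, lsize=64)
+#   # hr_patch.shape == (256, 256, 3), lr_patch.shape == (64, 64, 3)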
+
+def center_crop(img, size):
+ # img: (h,w,3), range independent
+ h,w = img.shape[:2]
+ cut_h, cut_w = h-size[0], w-size[1]
+
+ lh = cut_h // 2
+ rh = h - (cut_h - lh)
+ lw = cut_w // 2
+ rw = w - (cut_w - lw)
+
+ img = img[lh:rh, lw:rw, :]
+ return img
+
+def tensor2numpy(tensor, rgb_range=1.):
+ rgb_coefficient = 255 / rgb_range
+ img = tensor.mul(rgb_coefficient).clamp(0, 255).round()
+ img = img[0].data if img.ndim==4 else img.data
+ img = np.transpose(img.cpu().numpy(), (1, 2, 0)).astype(np.uint8)
+ return img
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/utils/utils_io.py b/competitors_inference_code/LSRNA/lsr_training/utils/utils_io.py
new file mode 100644
index 0000000000000000000000000000000000000000..3da56738dac16e988878a83ac1d8c0c64471dfd0
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/utils/utils_io.py
@@ -0,0 +1,493 @@
+#https://github.com/cszn/KAIR
+import os
+import cv2
+import numpy as np
+import math
+import torch
+import random
+from torchvision.utils import make_grid  # needed by tensor2img for 4D batched tensors
+from os import path as osp
+from torch.nn import functional as F
+from abc import ABCMeta, abstractmethod
+
+
+def scandir(dir_path, suffix=None, recursive=False, full_path=False):
+ """Scan a directory to find the interested files.
+
+ Args:
+ dir_path (str): Path of the directory.
+ suffix (str | tuple(str), optional): File suffix that we are
+ interested in. Default: None.
+ recursive (bool, optional): If set to True, recursively scan the
+ directory. Default: False.
+ full_path (bool, optional): If set to True, include the dir_path.
+ Default: False.
+
+ Returns:
+ A generator for all the interested files with relative paths.
+ """
+
+ if (suffix is not None) and not isinstance(suffix, (str, tuple)):
+ raise TypeError('"suffix" must be a string or tuple of strings')
+
+ root = dir_path
+
+ def _scandir(dir_path, suffix, recursive):
+ for entry in os.scandir(dir_path):
+ if not entry.name.startswith('.') and entry.is_file():
+ if full_path:
+ return_path = entry.path
+ else:
+ return_path = osp.relpath(entry.path, root)
+
+ if suffix is None:
+ yield return_path
+ elif return_path.endswith(suffix):
+ yield return_path
+ else:
+ if recursive:
+ yield from _scandir(entry.path, suffix=suffix, recursive=recursive)
+ else:
+ continue
+
+ return _scandir(dir_path, suffix=suffix, recursive=recursive)
+
+
+def read_img_seq(path, require_mod_crop=False, scale=1, return_imgname=False):
+ """Read a sequence of images from a given folder path.
+
+ Args:
+ path (list[str] | str): List of image paths or image folder path.
+ require_mod_crop (bool): Require mod crop for each image.
+ Default: False.
+ scale (int): Scale factor for mod_crop. Default: 1.
+        return_imgname (bool): Whether to return image names. Default: False.
+
+ Returns:
+ Tensor: size (t, c, h, w), RGB, [0, 1].
+ list[str]: Returned image name list.
+ """
+ if isinstance(path, list):
+ img_paths = path
+ else:
+ img_paths = sorted(list(scandir(path, full_path=True)))
+ imgs = [cv2.imread(v).astype(np.float32) / 255. for v in img_paths]
+
+ if require_mod_crop:
+ imgs = [mod_crop(img, scale) for img in imgs]
+ imgs = img2tensor(imgs, bgr2rgb=True, float32=True)
+ imgs = torch.stack(imgs, dim=0)
+
+ if return_imgname:
+ imgnames = [osp.splitext(osp.basename(path))[0] for path in img_paths]
+ return imgs, imgnames
+ else:
+ return imgs
+
+
+def img2tensor(imgs, bgr2rgb=True, float32=True):
+ """Numpy array to tensor.
+
+ Args:
+ imgs (list[ndarray] | ndarray): Input images.
+ bgr2rgb (bool): Whether to change bgr to rgb.
+ float32 (bool): Whether to change to float32.
+
+ Returns:
+ list[tensor] | tensor: Tensor images. If returned results only have
+ one element, just return tensor.
+ """
+
+ def _totensor(img, bgr2rgb, float32):
+ if img.shape[2] == 3 and bgr2rgb:
+ if img.dtype == 'float64':
+ img = img.astype('float32')
+ img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)
+ img = torch.from_numpy(img.transpose(2, 0, 1))
+ if float32:
+ img = img.float()
+ return img
+
+ if isinstance(imgs, list):
+ return [_totensor(img, bgr2rgb, float32) for img in imgs]
+ else:
+ return _totensor(imgs, bgr2rgb, float32)
+
+
+def tensor2img(tensor, rgb2bgr=True, out_type=np.uint8, min_max=(0, 1)):
+ """Convert torch Tensors into image numpy arrays.
+
+ After clamping to [min, max], values will be normalized to [0, 1].
+
+ Args:
+ tensor (Tensor or list[Tensor]): Accept shapes:
+ 1) 4D mini-batch Tensor of shape (B x 3/1 x H x W);
+ 2) 3D Tensor of shape (3/1 x H x W);
+ 3) 2D Tensor of shape (H x W).
+ Tensor channel should be in RGB order.
+ rgb2bgr (bool): Whether to change rgb to bgr.
+ out_type (numpy type): output types. If ``np.uint8``, transform outputs
+ to uint8 type with range [0, 255]; otherwise, float type with
+ range [0, 1]. Default: ``np.uint8``.
+ min_max (tuple[int]): min and max values for clamp.
+
+ Returns:
+ (Tensor or list): 3D ndarray of shape (H x W x C) OR 2D ndarray of
+ shape (H x W). The channel order is BGR.
+ """
+ if not (torch.is_tensor(tensor) or (isinstance(tensor, list) and all(torch.is_tensor(t) for t in tensor))):
+ raise TypeError(f'tensor or list of tensors expected, got {type(tensor)}')
+
+ if torch.is_tensor(tensor):
+ tensor = [tensor]
+ result = []
+ for _tensor in tensor:
+ _tensor = _tensor.squeeze(0).float().detach().cpu().clamp_(*min_max)
+ _tensor = (_tensor - min_max[0]) / (min_max[1] - min_max[0])
+
+ n_dim = _tensor.dim()
+ if n_dim == 4:
+ img_np = make_grid(_tensor, nrow=int(math.sqrt(_tensor.size(0))), normalize=False).numpy()
+ img_np = img_np.transpose(1, 2, 0)
+ if rgb2bgr:
+ img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)
+ elif n_dim == 3:
+ img_np = _tensor.numpy()
+ img_np = img_np.transpose(1, 2, 0)
+ if img_np.shape[2] == 1: # gray image
+ img_np = np.squeeze(img_np, axis=2)
+ else:
+ if rgb2bgr:
+ img_np = cv2.cvtColor(img_np, cv2.COLOR_RGB2BGR)
+ elif n_dim == 2:
+ img_np = _tensor.numpy()
+ else:
+ raise TypeError(f'Only support 4D, 3D or 2D tensor. But received with dimension: {n_dim}')
+ if out_type == np.uint8:
+            # Unlike MATLAB, numpy's astype(np.uint8) does not round by default.
+ img_np = (img_np * 255.0).round()
+ img_np = img_np.astype(out_type)
+ result.append(img_np)
+ if len(result) == 1:
+ result = result[0]
+ return result
+
+
+def augment(imgs, hflip=True, rotation=True, flows=None, return_status=False):
+ """Augment: horizontal flips OR rotate (0, 90, 180, 270 degrees).
+
+ We use vertical flip and transpose for rotation implementation.
+ All the images in the list use the same augmentation.
+
+ Args:
+ imgs (list[ndarray] | ndarray): Images to be augmented. If the input
+ is an ndarray, it will be transformed to a list.
+ hflip (bool): Horizontal flip. Default: True.
+        rotation (bool): Rotation. Default: True.
+        flows (list[ndarray], optional): Flows to be augmented. If the input is an
+ ndarray, it will be transformed to a list.
+ Dimension is (h, w, 2). Default: None.
+ return_status (bool): Return the status of flip and rotation.
+ Default: False.
+
+ Returns:
+ list[ndarray] | ndarray: Augmented images and flows. If returned
+ results only have one element, just return ndarray.
+
+ """
+ hflip = hflip and random.random() < 0.5
+ vflip = rotation and random.random() < 0.5
+ rot90 = rotation and random.random() < 0.5
+
+ def _augment(img):
+ if hflip: # horizontal
+ cv2.flip(img, 1, img)
+ if vflip: # vertical
+ cv2.flip(img, 0, img)
+ if rot90:
+ img = img.transpose(1, 0, 2)
+ return img
+
+ def _augment_flow(flow):
+ if hflip: # horizontal
+ cv2.flip(flow, 1, flow)
+ flow[:, :, 0] *= -1
+ if vflip: # vertical
+ cv2.flip(flow, 0, flow)
+ flow[:, :, 1] *= -1
+ if rot90:
+ flow = flow.transpose(1, 0, 2)
+ flow = flow[:, :, [1, 0]]
+ return flow
+
+ if not isinstance(imgs, list):
+ imgs = [imgs]
+ imgs = [_augment(img) for img in imgs]
+ if len(imgs) == 1:
+ imgs = imgs[0]
+
+ if flows is not None:
+ if not isinstance(flows, list):
+ flows = [flows]
+ flows = [_augment_flow(flow) for flow in flows]
+ if len(flows) == 1:
+ flows = flows[0]
+ return imgs, flows
+ else:
+ if return_status:
+ return imgs, (hflip, vflip, rot90)
+ else:
+ return imgs
+
+
+def paired_random_crop(img_gts, img_lqs, gt_patch_size, scale, gt_path=None):
+ """Paired random crop. Support Numpy array and Tensor inputs.
+
+ It crops lists of lq and gt images with corresponding locations.
+
+ Args:
+ img_gts (list[ndarray] | ndarray | list[Tensor] | Tensor): GT images. Note that all images
+ should have the same shape. If the input is an ndarray, it will
+ be transformed to a list containing itself.
+ img_lqs (list[ndarray] | ndarray): LQ images. Note that all images
+ should have the same shape. If the input is an ndarray, it will
+ be transformed to a list containing itself.
+ gt_patch_size (int): GT patch size.
+ scale (int): Scale factor.
+ gt_path (str): Path to ground-truth. Default: None.
+
+ Returns:
+ list[ndarray] | ndarray: GT images and LQ images. If returned results
+ only have one element, just return ndarray.
+ """
+
+ if not isinstance(img_gts, list):
+ img_gts = [img_gts]
+ if not isinstance(img_lqs, list):
+ img_lqs = [img_lqs]
+
+ # determine input type: Numpy array or Tensor
+ input_type = 'Tensor' if torch.is_tensor(img_gts[0]) else 'Numpy'
+
+ if input_type == 'Tensor':
+ h_lq, w_lq = img_lqs[0].size()[-2:]
+ h_gt, w_gt = img_gts[0].size()[-2:]
+ else:
+ h_lq, w_lq = img_lqs[0].shape[0:2]
+ h_gt, w_gt = img_gts[0].shape[0:2]
+ lq_patch_size = gt_patch_size // scale
+
+ if h_gt != h_lq * scale or w_gt != w_lq * scale:
+        raise ValueError(f'Scale mismatches. GT ({h_gt}, {w_gt}) is not {scale}x '
+                         f'multiplication of LQ ({h_lq}, {w_lq}).')
+ if h_lq < lq_patch_size or w_lq < lq_patch_size:
+ raise ValueError(f'LQ ({h_lq}, {w_lq}) is smaller than patch size '
+ f'({lq_patch_size}, {lq_patch_size}). '
+ f'Please remove {gt_path}.')
+
+ # randomly choose top and left coordinates for lq patch
+ top = random.randint(0, h_lq - lq_patch_size)
+ left = random.randint(0, w_lq - lq_patch_size)
+
+ # crop lq patch
+ if input_type == 'Tensor':
+ img_lqs = [v[:, :, top:top + lq_patch_size, left:left + lq_patch_size] for v in img_lqs]
+ else:
+ img_lqs = [v[top:top + lq_patch_size, left:left + lq_patch_size, ...] for v in img_lqs]
+
+ # crop corresponding gt patch
+ top_gt, left_gt = int(top * scale), int(left * scale)
+ if input_type == 'Tensor':
+ img_gts = [v[:, :, top_gt:top_gt + gt_patch_size, left_gt:left_gt + gt_patch_size] for v in img_gts]
+ else:
+ img_gts = [v[top_gt:top_gt + gt_patch_size, left_gt:left_gt + gt_patch_size, ...] for v in img_gts]
+ if len(img_gts) == 1:
+ img_gts = img_gts[0]
+ if len(img_lqs) == 1:
+ img_lqs = img_lqs[0]
+ return img_gts, img_lqs
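+
+# Illustrative sketch (not from the original code): paired 4x crop on numpy images.
+#   gt = np.zeros((256, 256, 3), dtype=np.float32)
+#   lq = np.zeros((64, 64, 3), dtype=np.float32)
+#   gt_patch, lq_patch = paired_random_crop(gt, lq, gt_patch_size=128, scale=4)
+#   # gt_patch.shape == (128, 128, 3), lq_patch.shape == (32, 32, 3)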
+
+
+# Modified from https://github.com/open-mmlab/mmcv/blob/master/mmcv/fileio/file_client.py # noqa: E501
+class BaseStorageBackend(metaclass=ABCMeta):
+ """Abstract class of storage backends.
+
+ All backends need to implement two apis: ``get()`` and ``get_text()``.
+ ``get()`` reads the file as a byte stream and ``get_text()`` reads the file
+ as texts.
+ """
+
+ @abstractmethod
+ def get(self, filepath):
+ pass
+
+ @abstractmethod
+ def get_text(self, filepath):
+ pass
+
+
+class MemcachedBackend(BaseStorageBackend):
+ """Memcached storage backend.
+
+ Attributes:
+ server_list_cfg (str): Config file for memcached server list.
+ client_cfg (str): Config file for memcached client.
+ sys_path (str | None): Additional path to be appended to `sys.path`.
+ Default: None.
+ """
+
+ def __init__(self, server_list_cfg, client_cfg, sys_path=None):
+ if sys_path is not None:
+ import sys
+ sys.path.append(sys_path)
+ try:
+ import mc
+ except ImportError:
+ raise ImportError('Please install memcached to enable MemcachedBackend.')
+
+ self.server_list_cfg = server_list_cfg
+ self.client_cfg = client_cfg
+ self._client = mc.MemcachedClient.GetInstance(self.server_list_cfg, self.client_cfg)
+        # mc.pyvector serves as a pointer to a memory cache
+ self._mc_buffer = mc.pyvector()
+
+ def get(self, filepath):
+ filepath = str(filepath)
+ import mc
+ self._client.Get(filepath, self._mc_buffer)
+ value_buf = mc.ConvertBuffer(self._mc_buffer)
+ return value_buf
+
+ def get_text(self, filepath):
+ raise NotImplementedError
+
+
+class HardDiskBackend(BaseStorageBackend):
+ """Raw hard disks storage backend."""
+
+ def get(self, filepath):
+ filepath = str(filepath)
+ with open(filepath, 'rb') as f:
+ value_buf = f.read()
+ return value_buf
+
+ def get_text(self, filepath):
+ filepath = str(filepath)
+ with open(filepath, 'r') as f:
+ value_buf = f.read()
+ return value_buf
+
+
+class LmdbBackend(BaseStorageBackend):
+ """Lmdb storage backend.
+
+ Args:
+ db_paths (str | list[str]): Lmdb database paths.
+ client_keys (str | list[str]): Lmdb client keys. Default: 'default'.
+ readonly (bool, optional): Lmdb environment parameter. If True,
+ disallow any write operations. Default: True.
+ lock (bool, optional): Lmdb environment parameter. If False, when
+ concurrent access occurs, do not lock the database. Default: False.
+ readahead (bool, optional): Lmdb environment parameter. If False,
+ disable the OS filesystem readahead mechanism, which may improve
+ random read performance when a database is larger than RAM.
+ Default: False.
+
+ Attributes:
+        db_paths (list): Lmdb database paths.
+        _client (dict): A dict of lmdb environments, keyed by client key.
+ """
+
+ def __init__(self, db_paths, client_keys='default', readonly=True, lock=False, readahead=False, **kwargs):
+ try:
+ import lmdb
+ except ImportError:
+ raise ImportError('Please install lmdb to enable LmdbBackend.')
+
+ if isinstance(client_keys, str):
+ client_keys = [client_keys]
+
+ if isinstance(db_paths, list):
+ self.db_paths = [str(v) for v in db_paths]
+ elif isinstance(db_paths, str):
+ self.db_paths = [str(db_paths)]
+ assert len(client_keys) == len(self.db_paths), ('client_keys and db_paths should have the same length, '
+ f'but received {len(client_keys)} and {len(self.db_paths)}.')
+
+ self._client = {}
+ for client, path in zip(client_keys, self.db_paths):
+ self._client[client] = lmdb.open(path, readonly=readonly, lock=lock, readahead=readahead, **kwargs)
+
+ def get(self, filepath, client_key):
+ """Get values according to the filepath from one lmdb named client_key.
+
+ Args:
+ filepath (str | obj:`Path`): Here, filepath is the lmdb key.
+ client_key (str): Used for distinguishing different lmdb envs.
+ """
+ filepath = str(filepath)
+ assert client_key in self._client, (f'client_key {client_key} is not ' 'in lmdb clients.')
+ client = self._client[client_key]
+ with client.begin(write=False) as txn:
+ value_buf = txn.get(filepath.encode('ascii'))
+ return value_buf
+
+ def get_text(self, filepath):
+ raise NotImplementedError
+
+
+class FileClient(object):
+    """A general file client to access files in different backends.
+
+    The client loads a file or text from its path using the specified backend
+    and returns it as a binary stream. It can also register other backend
+    accessors under a given name and backend class.
+
+ Attributes:
+ backend (str): The storage backend type. Options are "disk",
+ "memcached" and "lmdb".
+ client (:obj:`BaseStorageBackend`): The backend object.
+ """
+
+ _backends = {
+ 'disk': HardDiskBackend,
+ 'memcached': MemcachedBackend,
+ 'lmdb': LmdbBackend,
+ }
+
+ def __init__(self, backend='disk', **kwargs):
+ if backend not in self._backends:
+ raise ValueError(f'Backend {backend} is not supported. Currently supported ones'
+ f' are {list(self._backends.keys())}')
+ self.backend = backend
+ self.client = self._backends[backend](**kwargs)
+
+ def get(self, filepath, client_key='default'):
+ # client_key is used only for lmdb, where different fileclients have
+ # different lmdb environments.
+ if self.backend == 'lmdb':
+ return self.client.get(filepath, client_key)
+ else:
+ return self.client.get(filepath)
+
+ def get_text(self, filepath):
+ return self.client.get_text(filepath)
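+
+# Illustrative usage (not from the original code), combining FileClient with
+# imfrombytes (defined below):
+#   client = FileClient(backend='disk')
+#   img_bytes = client.get('path/to/image.png')
+#   img = imfrombytes(img_bytes, float32=True)  # HxWx3 BGR float in [0, 1]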
+
+
+def imfrombytes(content, flag='color', float32=False):
+ """Read an image from bytes.
+
+ Args:
+ content (bytes): Image bytes got from files or other streams.
+ flag (str): Flags specifying the color type of a loaded image,
+ candidates are `color`, `grayscale` and `unchanged`.
+        float32 (bool): Whether to change to float32. If True, will also normalize
+ to [0, 1]. Default: False.
+
+ Returns:
+ ndarray: Loaded image array.
+ """
+ img_np = np.frombuffer(content, np.uint8)
+ imread_flags = {'color': cv2.IMREAD_COLOR, 'grayscale': cv2.IMREAD_GRAYSCALE, 'unchanged': cv2.IMREAD_UNCHANGED}
+ img = cv2.imdecode(img_np, imread_flags[flag])
+ if float32:
+ img = img.astype(np.float32) / 255.
+ return img
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/lsr_training/utils/utils_state.py b/competitors_inference_code/LSRNA/lsr_training/utils/utils_state.py
new file mode 100644
index 0000000000000000000000000000000000000000..3c4f76b2b704d0352e54571a0759a131c19859d9
--- /dev/null
+++ b/competitors_inference_code/LSRNA/lsr_training/utils/utils_state.py
@@ -0,0 +1,57 @@
+import math
+import torch
+from torch.optim import Adam
+from torch.optim.lr_scheduler import _LRScheduler, CosineAnnealingLR
+
+
+# https://github.com/XPixelGroup/ClassSR
+class CosineAnnealingLR_Restart(_LRScheduler):
+ def __init__(self, optimizer, T_period, restarts=None, weights=None, eta_min=0, last_epoch=-1):
+ self.T_period = T_period
+ self.T_max = self.T_period[0] # current T period
+ self.eta_min = eta_min
+ self.restarts = restarts if restarts else [0]
+ self.restarts = [v + 1 for v in self.restarts]
+ self.restart_weights = weights if weights else [1]
+ self.last_restart = 0
+ assert len(self.restarts) == len(
+ self.restart_weights), 'restarts and their weights do not match.'
+ super(CosineAnnealingLR_Restart, self).__init__(optimizer, last_epoch)
+
+ def get_lr(self):
+ if self.last_epoch == 0:
+ return self.base_lrs
+ elif self.last_epoch in self.restarts:
+ self.last_restart = self.last_epoch
+ if self.restarts.index(self.last_epoch) + 1 == len(self.T_period):
+ print('Already trained.')
+ exit()
+ self.T_max = self.T_period[self.restarts.index(self.last_epoch) + 1]
+ weight = self.restart_weights[self.restarts.index(self.last_epoch)]
+ return [group['initial_lr'] * weight for group in self.optimizer.param_groups]
+ elif (self.last_epoch - self.last_restart - 1 - self.T_max) % (2 * self.T_max) == 0:
+ return [
+ group['lr'] + (base_lr - self.eta_min) * (1 - math.cos(math.pi / self.T_max)) / 2
+ for base_lr, group in zip(self.base_lrs, self.optimizer.param_groups)
+ ]
+ return [(1 + math.cos(math.pi * (self.last_epoch - self.last_restart) / self.T_max)) /
+ (1 + math.cos(math.pi * ((self.last_epoch - self.last_restart) - 1) / self.T_max)) *
+ (group['lr'] - self.eta_min) + self.eta_min
+ for group in self.optimizer.param_groups]
+
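+# Illustrative sketch (not from the original code): two cosine periods of 100
+# iterations with a warm restart after the first one.
+#   model = torch.nn.Linear(4, 4)
+#   optim = Adam(model.parameters(), lr=1e-3)
+#   sched = CosineAnnealingLR_Restart(optim, T_period=[100, 100], restarts=[100], weights=[1.0])
+#   for _ in range(200):
+#       optim.step()
+#       sched.step()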
+
+def make_optim_sched(param_list, optimizer_spec, lr_scheduler_spec, load_sd=False):
+ Optimizer = {
+ 'adam': Adam
+ }[optimizer_spec['name']]
+ Scheduler = {
+ 'CosineAnnealingLR_Restart': CosineAnnealingLR_Restart,
+ 'CosineAnnealingLR': CosineAnnealingLR
+ }[lr_scheduler_spec['name']]
+
+ optimizer = Optimizer(param_list, **optimizer_spec['args'])
+ lr_scheduler = Scheduler(optimizer, **lr_scheduler_spec['args'])
+    if load_sd:  # load both state_dicts only after optimizer and scheduler are constructed together
+ optimizer.load_state_dict(optimizer_spec['sd'])
+ lr_scheduler.load_state_dict(lr_scheduler_spec['sd'])
+ return optimizer, lr_scheduler
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/main.py b/competitors_inference_code/LSRNA/main.py
new file mode 100644
index 0000000000000000000000000000000000000000..bd19cbf38345b36813ba80ea0a51a478bd22349b
--- /dev/null
+++ b/competitors_inference_code/LSRNA/main.py
@@ -0,0 +1,65 @@
+import os
+import argparse
+import random
+import numpy as np
+import torch
+
+from diffusers import DDIMScheduler
+from pipeline_lsrna_demofusion_sdxl import DemoFusionLSRNASDXLPipeline
+
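+# Example invocation (illustrative; the flags are exactly those parsed below):
+#   python main.py --prompt "a baroque cathedral interior" --height 2048 --width 2048 --seed 0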
+
+def main():
+ parser = argparse.ArgumentParser()
+ parser.add_argument('--prompt', type=str, required=True)
+ parser.add_argument('--negative_prompt', type=str)
+ parser.add_argument('--height', type=int, default=2048, help='target height')
+ parser.add_argument('--width', type=int, default=2048, help='target width')
+ parser.add_argument('--seed', type=int)
+ parser.add_argument('--lsr_path', type=str, default='lsr/checkpoints/swinir-liif-latent-sdxl.pth')
+ parser.add_argument('--rna_min_std', type=float, default=0.0)
+ parser.add_argument('--rna_max_std', type=float, default=1.2)
+ parser.add_argument('--inversion_depth', type=int, default=30)
+ parser.add_argument('--save_dir', type=str, default='results')
+ parser.add_argument('--low_vram', action='store_true')
+ args = parser.parse_args()
+
+ # load pipeline
+ model_ckpt = 'stabilityai/stable-diffusion-xl-base-1.0'
+ scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder='scheduler')
+ pipe = DemoFusionLSRNASDXLPipeline.from_pretrained(model_ckpt, scheduler=scheduler, torch_dtype=torch.float16).to('cuda')
+ pipe.vae.enable_tiling()
+
+ # fix seed
+ if args.seed is not None:
+ seed = args.seed
+ random.seed(seed)
+ np.random.seed(seed)
+ torch.manual_seed(seed)
+ torch.cuda.manual_seed_all(seed)
+ torch.backends.cudnn.deterministic = True
+ torch.backends.cudnn.benchmark = False
+
+ # generate image (with default setting of DemoFusion)
+ images = pipe(
+ args.prompt,
+ negative_prompt=args.negative_prompt,
+ height=args.height, width=args.width,
+ view_batch_size=8,
+ stride_ratio=0.5, # 1-overlap_ratio
+ lsr_path=args.lsr_path,
+ cosine_scale_1=3,
+ cosine_scale_2=1,
+ cosine_scale_3=1,
+ sigma=0.8,
+ rna_min_std=args.rna_min_std,
+ rna_max_std=args.rna_max_std,
+ inversion_depth=args.inversion_depth,
+ low_vram=args.low_vram
+ )
+ os.makedirs(args.save_dir, exist_ok=True)
+ images[0].save(os.path.join(args.save_dir, 'ref.png'))
+ images[1].save(os.path.join(args.save_dir, 'trg.png'))
+
+
+if __name__ == '__main__':
+ main()
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/pipeline_lsrna_demofusion_sdxl.py b/competitors_inference_code/LSRNA/pipeline_lsrna_demofusion_sdxl.py
new file mode 100644
index 0000000000000000000000000000000000000000..a0c3f29e80457c304662b19292bd5f0878feeb2e
--- /dev/null
+++ b/competitors_inference_code/LSRNA/pipeline_lsrna_demofusion_sdxl.py
@@ -0,0 +1,1296 @@
+# Copyright 2023 The HuggingFace Team. All rights reserved.
+#
+# Licensed under the Apache License, Version 2.0 (the "License");
+# you may not use this file except in compliance with the License.
+# You may obtain a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS,
+# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
+# See the License for the specific language governing permissions and
+# limitations under the License.
+
+# Modified from https://github.com/PRIS-CV/DemoFusion/blob/main/pipeline_demofusion_sdxl.py
+import warnings
+warnings.filterwarnings("ignore")
+
+import os
+import random
+import numpy as np
+import torch
+import torch.nn.functional as F
+
+import inspect
+import functools
+import operator
+from typing import Any, Callable, Dict, List, Optional, Tuple, Union
+import matplotlib.pyplot as plt
+from PIL import Image
+from tqdm import tqdm
+
+import lsr
+from utils import *
+
+from transformers import CLIPTextModel, CLIPTextModelWithProjection, CLIPTokenizer
+from diffusers.image_processor import VaeImageProcessor
+from diffusers.loaders import (
+ FromSingleFileMixin,
+ LoraLoaderMixin,
+ TextualInversionLoaderMixin,
+)
+from diffusers.models import AutoencoderKL, UNet2DConditionModel
+from diffusers.models.attention_processor import (
+ AttnProcessor2_0,
+ LoRAAttnProcessor2_0,
+ LoRAXFormersAttnProcessor,
+ XFormersAttnProcessor,
+)
+from diffusers.models.lora import adjust_lora_scale_text_encoder
+from diffusers.schedulers import KarrasDiffusionSchedulers
+from diffusers.utils import (
+ is_accelerate_available,
+ is_accelerate_version,
+ logging,
+)
+from diffusers.utils.torch_utils import randn_tensor
+from diffusers.pipelines.pipeline_utils import DiffusionPipeline
+from diffusers.pipelines.stable_diffusion_xl import StableDiffusionXLPipelineOutput
+
+logger = logging.get_logger(__name__) # pylint: disable=invalid-name
+
+# Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.rescale_noise_cfg
+def rescale_noise_cfg(noise_cfg, noise_pred_text, guidance_rescale=0.0):
+ """
+ Rescale `noise_cfg` according to `guidance_rescale`. Based on findings of [Common Diffusion Noise Schedules and
+ Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf). See Section 3.4
+ """
+ std_text = noise_pred_text.std(dim=list(range(1, noise_pred_text.ndim)), keepdim=True)
+ std_cfg = noise_cfg.std(dim=list(range(1, noise_cfg.ndim)), keepdim=True)
+ # rescale the results from guidance (fixes overexposure)
+ noise_pred_rescaled = noise_cfg * (std_text / std_cfg)
+ # mix with the original results from guidance by factor guidance_rescale to avoid "plain looking" images
+ noise_cfg = guidance_rescale * noise_pred_rescaled + (1 - guidance_rescale) * noise_cfg
+ return noise_cfg
+
+
+class DemoFusionLSRNASDXLPipeline(DiffusionPipeline, FromSingleFileMixin, LoraLoaderMixin, TextualInversionLoaderMixin):
+ """
+ Pipeline for text-to-image generation using Stable Diffusion XL.
+
+ This model inherits from [`DiffusionPipeline`]. Check the superclass documentation for the generic methods the
+ library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.)
+
+ In addition the pipeline inherits the following loading methods:
+ - *LoRA*: [`StableDiffusionXLPipeline.load_lora_weights`]
+ - *Ckpt*: [`loaders.FromSingleFileMixin.from_single_file`]
+
+ as well as the following saving methods:
+ - *LoRA*: [`loaders.StableDiffusionXLPipeline.save_lora_weights`]
+
+ Args:
+ vae ([`AutoencoderKL`]):
+ Variational Auto-Encoder (VAE) Model to encode and decode images to and from latent representations.
+ text_encoder ([`CLIPTextModel`]):
+ Frozen text-encoder. Stable Diffusion XL uses the text portion of
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModel), specifically
+ the [clip-vit-large-patch14](https://huggingface.co/openai/clip-vit-large-patch14) variant.
+ text_encoder_2 ([` CLIPTextModelWithProjection`]):
+ Second frozen text-encoder. Stable Diffusion XL uses the text and pool portion of
+ [CLIP](https://huggingface.co/docs/transformers/model_doc/clip#transformers.CLIPTextModelWithProjection),
+ specifically the
+ [laion/CLIP-ViT-bigG-14-laion2B-39B-b160k](https://huggingface.co/laion/CLIP-ViT-bigG-14-laion2B-39B-b160k)
+ variant.
+ tokenizer (`CLIPTokenizer`):
+ Tokenizer of class
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
+ tokenizer_2 (`CLIPTokenizer`):
+ Second Tokenizer of class
+ [CLIPTokenizer](https://huggingface.co/docs/transformers/v4.21.0/en/model_doc/clip#transformers.CLIPTokenizer).
+ unet ([`UNet2DConditionModel`]): Conditional U-Net architecture to denoise the encoded image latents.
+ scheduler ([`SchedulerMixin`]):
+ A scheduler to be used in combination with `unet` to denoise the encoded image latents. Can be one of
+ [`DDIMScheduler`], [`LMSDiscreteScheduler`], or [`PNDMScheduler`].
+ force_zeros_for_empty_prompt (`bool`, *optional*, defaults to `"True"`):
+ Whether the negative prompt embeddings shall be forced to always be set to 0. Also see the config of
+ `stabilityai/stable-diffusion-xl-base-1-0`.
+ """
+ model_cpu_offload_seq = "text_encoder->text_encoder_2->unet->vae"
+
+ def __init__(
+ self,
+ vae: AutoencoderKL,
+ text_encoder: CLIPTextModel,
+ text_encoder_2: CLIPTextModelWithProjection,
+ tokenizer: CLIPTokenizer,
+ tokenizer_2: CLIPTokenizer,
+ unet: UNet2DConditionModel,
+ scheduler: KarrasDiffusionSchedulers,
+ force_zeros_for_empty_prompt: bool = True,
+ ):
+ super().__init__()
+
+ self.register_modules(
+ vae=vae,
+ text_encoder=text_encoder,
+ text_encoder_2=text_encoder_2,
+ tokenizer=tokenizer,
+ tokenizer_2=tokenizer_2,
+ unet=unet,
+ scheduler=scheduler,
+ )
+ self.register_to_config(force_zeros_for_empty_prompt=force_zeros_for_empty_prompt)
+ self.vae_scale_factor = 2 ** (len(self.vae.config.block_out_channels) - 1)
+ self.image_processor = VaeImageProcessor(vae_scale_factor=self.vae_scale_factor)
+ self.default_sample_size = self.unet.config.sample_size # 1024//8 = 128
+
+ def encode_prompt(
+ self,
+ prompt: str,
+ prompt_2: Optional[str] = None,
+ device: Optional[torch.device] = None,
+ num_images_per_prompt: int = 1,
+ do_classifier_free_guidance: bool = True,
+ negative_prompt: Optional[str] = None,
+ negative_prompt_2: Optional[str] = None,
+ prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
+ pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ lora_scale: Optional[float] = None,
+ ):
+ r"""
+ Encodes the prompt into text encoder hidden states.
+
+ Args:
+ prompt (`str` or `List[str]`, *optional*):
+ prompt to be encoded
+ prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
+ used in both text-encoders
+ device: (`torch.device`):
+ torch device
+ num_images_per_prompt (`int`):
+ number of images that should be generated per prompt
+ do_classifier_free_guidance (`bool`):
+ whether to use classifier free guidance or not
+ negative_prompt (`str` or `List[str]`, *optional*):
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
+ less than `1`).
+ negative_prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
+ `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders
+ prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
+ provided, text embeddings will be generated from `prompt` input argument.
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
+ argument.
+ pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
+ If not provided, pooled text embeddings will be generated from `prompt` input argument.
+ negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
+ input argument.
+ lora_scale (`float`, *optional*):
+ A lora scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
+ """
+ device = device or self._execution_device
+
+ # set lora scale so that monkey patched LoRA
+ # function of text encoder can correctly access it
+ if lora_scale is not None and isinstance(self, LoraLoaderMixin):
+ self._lora_scale = lora_scale
+
+ # dynamically adjust the LoRA scale
+ adjust_lora_scale_text_encoder(self.text_encoder, lora_scale)
+ adjust_lora_scale_text_encoder(self.text_encoder_2, lora_scale)
+
+ if prompt is not None and isinstance(prompt, str):
+ batch_size = 1
+ elif prompt is not None and isinstance(prompt, list):
+ batch_size = len(prompt)
+ else:
+ batch_size = prompt_embeds.shape[0]
+
+ # Define tokenizers and text encoders
+ tokenizers = [self.tokenizer, self.tokenizer_2] if self.tokenizer is not None else [self.tokenizer_2]
+ text_encoders = (
+ [self.text_encoder, self.text_encoder_2] if self.text_encoder is not None else [self.text_encoder_2]
+ )
+
+ if prompt_embeds is None:
+ prompt_2 = prompt_2 or prompt
+            # textual inversion: process multi-vector tokens if necessary
+ prompt_embeds_list = []
+ prompts = [prompt, prompt_2]
+ for prompt, tokenizer, text_encoder in zip(prompts, tokenizers, text_encoders):
+ if isinstance(self, TextualInversionLoaderMixin):
+ prompt = self.maybe_convert_prompt(prompt, tokenizer)
+
+ text_inputs = tokenizer(
+ prompt,
+ padding="max_length",
+ max_length=tokenizer.model_max_length,
+ truncation=True,
+ return_tensors="pt",
+ )
+
+ text_input_ids = text_inputs.input_ids
+ untruncated_ids = tokenizer(prompt, padding="longest", return_tensors="pt").input_ids
+
+ if untruncated_ids.shape[-1] >= text_input_ids.shape[-1] and not torch.equal(
+ text_input_ids, untruncated_ids
+ ):
+ removed_text = tokenizer.batch_decode(untruncated_ids[:, tokenizer.model_max_length - 1 : -1])
+ logger.warning(
+ "The following part of your input was truncated because CLIP can only handle sequences up to"
+ f" {tokenizer.model_max_length} tokens: {removed_text}"
+ )
+
+ prompt_embeds = text_encoder(
+ text_input_ids.to(device),
+ output_hidden_states=True,
+ )
+
+ # We are only ALWAYS interested in the pooled output of the final text encoder
+ pooled_prompt_embeds = prompt_embeds[0]
+ prompt_embeds = prompt_embeds.hidden_states[-2]
+
+ prompt_embeds_list.append(prompt_embeds)
+
+ prompt_embeds = torch.concat(prompt_embeds_list, dim=-1)
+
+ # get unconditional embeddings for classifier free guidance
+ zero_out_negative_prompt = negative_prompt is None and self.config.force_zeros_for_empty_prompt
+ if do_classifier_free_guidance and negative_prompt_embeds is None and zero_out_negative_prompt:
+ negative_prompt_embeds = torch.zeros_like(prompt_embeds)
+ negative_pooled_prompt_embeds = torch.zeros_like(pooled_prompt_embeds)
+ elif do_classifier_free_guidance and negative_prompt_embeds is None:
+ negative_prompt = negative_prompt or ""
+ negative_prompt_2 = negative_prompt_2 or negative_prompt
+
+ uncond_tokens: List[str]
+ if prompt is not None and type(prompt) is not type(negative_prompt):
+ raise TypeError(
+                    f"`negative_prompt` should be the same type as `prompt`, but got {type(negative_prompt)} !="
+ f" {type(prompt)}."
+ )
+ elif isinstance(negative_prompt, str):
+ uncond_tokens = [negative_prompt, negative_prompt_2]
+ elif batch_size != len(negative_prompt):
+ raise ValueError(
+ f"`negative_prompt`: {negative_prompt} has batch size {len(negative_prompt)}, but `prompt`:"
+ f" {prompt} has batch size {batch_size}. Please make sure that passed `negative_prompt` matches"
+ " the batch size of `prompt`."
+ )
+ else:
+ uncond_tokens = [negative_prompt, negative_prompt_2]
+
+ negative_prompt_embeds_list = []
+ for negative_prompt, tokenizer, text_encoder in zip(uncond_tokens, tokenizers, text_encoders):
+ if isinstance(self, TextualInversionLoaderMixin):
+ negative_prompt = self.maybe_convert_prompt(negative_prompt, tokenizer)
+
+ max_length = prompt_embeds.shape[1]
+ uncond_input = tokenizer(
+ negative_prompt,
+ padding="max_length",
+ max_length=max_length,
+ truncation=True,
+ return_tensors="pt",
+ )
+
+ negative_prompt_embeds = text_encoder(
+ uncond_input.input_ids.to(device),
+ output_hidden_states=True,
+ )
+ # We are only ALWAYS interested in the pooled output of the final text encoder
+ negative_pooled_prompt_embeds = negative_prompt_embeds[0]
+ negative_prompt_embeds = negative_prompt_embeds.hidden_states[-2]
+
+ negative_prompt_embeds_list.append(negative_prompt_embeds)
+
+ negative_prompt_embeds = torch.concat(negative_prompt_embeds_list, dim=-1)
+
+ prompt_embeds = prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
+ bs_embed, seq_len, _ = prompt_embeds.shape
+ # duplicate text embeddings for each generation per prompt, using mps friendly method
+ prompt_embeds = prompt_embeds.repeat(1, num_images_per_prompt, 1)
+ prompt_embeds = prompt_embeds.view(bs_embed * num_images_per_prompt, seq_len, -1)
+
+ if do_classifier_free_guidance:
+ # duplicate unconditional embeddings for each generation per prompt, using mps friendly method
+ seq_len = negative_prompt_embeds.shape[1]
+ negative_prompt_embeds = negative_prompt_embeds.to(dtype=self.text_encoder_2.dtype, device=device)
+ negative_prompt_embeds = negative_prompt_embeds.repeat(1, num_images_per_prompt, 1)
+ negative_prompt_embeds = negative_prompt_embeds.view(batch_size * num_images_per_prompt, seq_len, -1)
+
+ pooled_prompt_embeds = pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
+ bs_embed * num_images_per_prompt, -1
+ )
+ if do_classifier_free_guidance:
+ negative_pooled_prompt_embeds = negative_pooled_prompt_embeds.repeat(1, num_images_per_prompt).view(
+ bs_embed * num_images_per_prompt, -1
+ )
+
+ return prompt_embeds, negative_prompt_embeds, pooled_prompt_embeds, negative_pooled_prompt_embeds
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_extra_step_kwargs
+ def prepare_extra_step_kwargs(self, generator, eta):
+ # prepare extra kwargs for the scheduler step, since not all schedulers have the same signature
+ # eta (η) is only used with the DDIMScheduler, it will be ignored for other schedulers.
+ # eta corresponds to η in DDIM paper: https://arxiv.org/abs/2010.02502
+ # and should be between [0, 1]
+
+ accepts_eta = "eta" in set(inspect.signature(self.scheduler.step).parameters.keys())
+ extra_step_kwargs = {}
+ if accepts_eta:
+ extra_step_kwargs["eta"] = eta
+
+ # check if the scheduler accepts generator
+ accepts_generator = "generator" in set(inspect.signature(self.scheduler.step).parameters.keys())
+ if accepts_generator:
+ extra_step_kwargs["generator"] = generator
+ return extra_step_kwargs
+
+ def check_inputs(
+ self,
+ prompt,
+ prompt_2,
+ height,
+ width,
+ callback_steps,
+ negative_prompt=None,
+ negative_prompt_2=None,
+ prompt_embeds=None,
+ negative_prompt_embeds=None,
+ pooled_prompt_embeds=None,
+ negative_pooled_prompt_embeds=None,
+ num_images_per_prompt=None,
+ ):
+ if height % 8 != 0 or width % 8 != 0:
+ raise ValueError(f"`height` and `width` have to be divisible by 8 but are {height} and {width}.")
+
+ if (callback_steps is None) or (
+ callback_steps is not None and (not isinstance(callback_steps, int) or callback_steps <= 0)
+ ):
+ raise ValueError(
+ f"`callback_steps` has to be a positive integer but is {callback_steps} of type"
+ f" {type(callback_steps)}."
+ )
+
+ if prompt is not None and prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `prompt`: {prompt} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
+ " only forward one of the two."
+ )
+ elif prompt_2 is not None and prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `prompt_2`: {prompt_2} and `prompt_embeds`: {prompt_embeds}. Please make sure to"
+ " only forward one of the two."
+ )
+ elif prompt is None and prompt_embeds is None:
+ raise ValueError(
+ "Provide either `prompt` or `prompt_embeds`. Cannot leave both `prompt` and `prompt_embeds` undefined."
+ )
+ elif prompt is not None and (not isinstance(prompt, str) and not isinstance(prompt, list)):
+ raise ValueError(f"`prompt` has to be of type `str` or `list` but is {type(prompt)}")
+ elif prompt_2 is not None and (not isinstance(prompt_2, str) and not isinstance(prompt_2, list)):
+ raise ValueError(f"`prompt_2` has to be of type `str` or `list` but is {type(prompt_2)}")
+
+ if negative_prompt is not None and negative_prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `negative_prompt`: {negative_prompt} and `negative_prompt_embeds`:"
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
+ )
+ elif negative_prompt_2 is not None and negative_prompt_embeds is not None:
+ raise ValueError(
+ f"Cannot forward both `negative_prompt_2`: {negative_prompt_2} and `negative_prompt_embeds`:"
+ f" {negative_prompt_embeds}. Please make sure to only forward one of the two."
+ )
+
+ if prompt_embeds is not None and negative_prompt_embeds is not None:
+ if prompt_embeds.shape != negative_prompt_embeds.shape:
+ raise ValueError(
+ "`prompt_embeds` and `negative_prompt_embeds` must have the same shape when passed directly, but"
+ f" got: `prompt_embeds` {prompt_embeds.shape} != `negative_prompt_embeds`"
+ f" {negative_prompt_embeds.shape}."
+ )
+
+ if prompt_embeds is not None and pooled_prompt_embeds is None:
+ raise ValueError(
+ "If `prompt_embeds` are provided, `pooled_prompt_embeds` also have to be passed. Make sure to generate `pooled_prompt_embeds` from the same text encoder that was used to generate `prompt_embeds`."
+ )
+
+ if negative_prompt_embeds is not None and negative_pooled_prompt_embeds is None:
+ raise ValueError(
+ "If `negative_prompt_embeds` are provided, `negative_pooled_prompt_embeds` also have to be passed. Make sure to generate `negative_pooled_prompt_embeds` from the same text encoder that was used to generate `negative_prompt_embeds`."
+ )
+ assert num_images_per_prompt == 1
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline.prepare_latents
+ def prepare_latents(self, batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None):
+ shape = (batch_size, num_channels_latents, height // self.vae_scale_factor, width // self.vae_scale_factor)
+ if isinstance(generator, list) and len(generator) != batch_size:
+ raise ValueError(
+ f"You have passed a list of generators of length {len(generator)}, but requested an effective batch"
+ f" size of {batch_size}. Make sure the batch size matches the length of the generators."
+ )
+
+ if latents is None:
+ latents = randn_tensor(shape, generator=generator, device=device, dtype=dtype)
+ else:
+ latents = latents.to(device)
+
+ # scale the initial noise by the standard deviation required by the scheduler
+ latents = latents * self.scheduler.init_noise_sigma
+ return latents
+
+ def _get_add_time_ids(self, original_size, crops_coords_top_left, target_size, dtype):
+ add_time_ids = list(original_size + crops_coords_top_left + target_size)
+
+ passed_add_embed_dim = (
+ self.unet.config.addition_time_embed_dim * len(add_time_ids) + self.text_encoder_2.config.projection_dim
+ )
+ expected_add_embed_dim = self.unet.add_embedding.linear_1.in_features
+
+ if expected_add_embed_dim != passed_add_embed_dim:
+ raise ValueError(
+ f"Model expects an added time embedding vector of length {expected_add_embed_dim}, but a vector of {passed_add_embed_dim} was created. The model has an incorrect config. Please check `unet.config.time_embedding_type` and `text_encoder_2.config.projection_dim`."
+ )
+
+ add_time_ids = torch.tensor([add_time_ids], dtype=dtype)
+ return add_time_ids
+
+ def get_views(self, height, width, window_size=128, stride=64, random_jitter=False):
+ # Define the mappings F_i (see Eq. 7 in the MultiDiffusion paper https://arxiv.org/abs/2302.08113)
+ # if panorama's height/width < window_size, num_blocks of height/width should return 1
+ num_blocks_height = int((height - window_size) / stride - 1e-6) + 2 if height > window_size else 1
+ num_blocks_width = int((width - window_size) / stride - 1e-6) + 2 if width > window_size else 1
+ total_num_blocks = int(num_blocks_height * num_blocks_width)
+ views = []
+ for i in range(total_num_blocks):
+ h_start = int((i // num_blocks_width) * stride)
+ h_end = h_start + window_size
+ w_start = int((i % num_blocks_width) * stride)
+ w_end = w_start + window_size
+
+ if h_end > height:
+ h_start = int(h_start + height - h_end)
+ h_end = int(height)
+ if w_end > width:
+ w_start = int(w_start + width - w_end)
+ w_end = int(width)
+ if h_start < 0:
+ h_end = int(h_end - h_start)
+ h_start = 0
+ if w_start < 0:
+ w_end = int(w_end - w_start)
+ w_start = 0
+
+ if random_jitter:
+ jitter_range = (window_size - stride) // 4
+ w_jitter = 0
+ h_jitter = 0
+ if (w_start != 0) and (w_end != width):
+ w_jitter = random.randint(-jitter_range, jitter_range)
+ elif (w_start == 0) and (w_end != width):
+ w_jitter = random.randint(-jitter_range, 0)
+ elif (w_start != 0) and (w_end == width):
+ w_jitter = random.randint(0, jitter_range)
+ if (h_start != 0) and (h_end != height):
+ h_jitter = random.randint(-jitter_range, jitter_range)
+ elif (h_start == 0) and (h_end != height):
+ h_jitter = random.randint(-jitter_range, 0)
+ elif (h_start != 0) and (h_end == height):
+ h_jitter = random.randint(0, jitter_range)
+ h_start += (h_jitter + jitter_range)
+ h_end += (h_jitter + jitter_range)
+ w_start += (w_jitter + jitter_range)
+ w_end += (w_jitter + jitter_range)
+
+ views.append((h_start, h_end, w_start, w_end))
+ return views
+
+ def tiled_decode(self, latents):
+        h, w = latents.shape[-2:]
+        H, W = h * self.vae_scale_factor, w * self.vae_scale_factor
+ core_size = self.unet.config.sample_size // 4 # 32
+ core_stride = core_size # 32
+        pad_size = self.unet.config.sample_size // 8 * 3 # 48
+        decoder_view_batch_size = 1  # must stay 1: the merge below assumes one view per batch
+
+ views = self.get_views(h, w, stride=core_stride, window_size=core_size)
+ views_batch = [views[i : i + decoder_view_batch_size] for i in range(0, len(views), decoder_view_batch_size)]
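+        # Each view is decoded with `pad_size` of extra latent context on every side; the
+        # padding is cropped from the decoded patch and any overlapping pixels are averaged
+        # to hide seams between tiles.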
+ latents_ = F.pad(latents, (pad_size, pad_size, pad_size, pad_size), 'constant', 0)
+ image = torch.zeros(latents.size(0), 3, H, W).to(latents.device)
+ count = torch.zeros_like(image).to(latents.device)
+ # get the latents corresponding to the current view coordinates
+ with self.progress_bar(total=len(views_batch)) as progress_bar:
+ for j, batch_view in enumerate(views_batch):
+ latents_for_view = torch.cat(
+ [
+ latents_[:, :, h_start:h_end+pad_size*2, w_start:w_end+pad_size*2]
+ for h_start, h_end, w_start, w_end in batch_view
+ ]
+ ).to(self.vae.device)
+ image_patch = self.vae.decode(latents_for_view / self.vae.config.scaling_factor, return_dict=False)[0]
+ h_start, h_end, w_start, w_end = views[j]
+ h_start, h_end, w_start, w_end = h_start * self.vae_scale_factor, h_end * self.vae_scale_factor, w_start * self.vae_scale_factor, w_end * self.vae_scale_factor
+ p_h_start, p_h_end, p_w_start, p_w_end = pad_size * self.vae_scale_factor, image_patch.size(2) - pad_size * self.vae_scale_factor, pad_size * self.vae_scale_factor, image_patch.size(3) - pad_size * self.vae_scale_factor
+
+ image[:, :, h_start:h_end, w_start:w_end] += image_patch[:, :, p_h_start:p_h_end, p_w_start:p_w_end].to(latents.device)
+ count[:, :, h_start:h_end, w_start:w_end] += 1
+ progress_bar.update()
+ image = image / count
+ return image
+
+ # Copied from diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion_upscale.StableDiffusionUpscalePipeline.upcast_vae
+ def upcast_vae(self):
+ dtype = self.vae.dtype
+ self.vae.to(dtype=torch.float32)
+ use_torch_2_0_or_xformers = isinstance(
+ self.vae.decoder.mid_block.attentions[0].processor,
+ (
+ AttnProcessor2_0,
+ XFormersAttnProcessor,
+ LoRAXFormersAttnProcessor,
+ LoRAAttnProcessor2_0,
+ ),
+ )
+ # if xformers or torch_2_0 is used attention block does not need
+ # to be in float32 which can save lots of memory
+ if use_torch_2_0_or_xformers:
+ self.vae.post_quant_conv.to(dtype)
+ self.vae.decoder.conv_in.to(dtype)
+ self.vae.decoder.mid_block.to(dtype)
+
+ def latent2image(self, latents, advanced_decode=False):
+ needs_upcasting = self.vae.dtype == torch.float16 and self.vae.config.force_upcast
+ if self.low_vram:
+ self.unet.cpu()
+ self.vae.cuda()
+ if needs_upcasting:
+ self.upcast_vae()
+ latents = latents.to(next(iter(self.vae.post_quant_conv.parameters())).dtype)
+
+ if advanced_decode:
+ image = self.tiled_decode(latents)
+ else:
+ image = self.vae.decode(latents / self.vae.config.scaling_factor, return_dict=False)[0]
+
+ if needs_upcasting:
+ self.vae.to(dtype=torch.float16)
+ latents = latents.to(dtype=torch.float16)
+ image = self.image_processor.postprocess(image, output_type='pil')[0] # unnormalize
+ return image
+
+ @torch.no_grad()
+ def __call__(
+ self,
+ prompt: Union[str, List[str]] = None,
+ prompt_2: Optional[Union[str, List[str]]] = None,
+ height: int = 1024,
+ width: int = 1024,
+ num_inference_steps: int = 50,
+ denoising_end: Optional[float] = None,
+ guidance_scale: float = 7.5,
+ negative_prompt: Optional[Union[str, List[str]]] = None,
+ negative_prompt_2: Optional[Union[str, List[str]]] = None,
+ num_images_per_prompt: Optional[int] = 1,
+ eta: float = 0.0,
+ generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None,
+ latents: Optional[torch.FloatTensor] = None,
+ prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_prompt_embeds: Optional[torch.FloatTensor] = None,
+ pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ negative_pooled_prompt_embeds: Optional[torch.FloatTensor] = None,
+ return_dict: bool = False,
+ callback: Optional[Callable[[int, int, torch.FloatTensor], None]] = None,
+ callback_steps: int = 1,
+ cross_attention_kwargs: Optional[Dict[str, Any]] = None,
+ guidance_rescale: float = 0.0,
+ crops_coords_top_left: Tuple[int, int] = (0, 0),
+ negative_original_size: Optional[Tuple[int, int]] = None,
+ negative_crops_coords_top_left: Tuple[int, int] = (0, 0),
+ negative_target_size: Optional[Tuple[int, int]] = None,
+ ################### Added parameters (including DemoFusion) ####################
+ view_batch_size: int = 8,
+ stride_ratio: float = 0.5,
+ lsr_path: str = 'lsr/checkpoints/swinir-liif-latent-sdxl.pth',
+ cosine_scale_1: float = 3.,
+ cosine_scale_2: float = 1.,
+ cosine_scale_3: float = 1.,
+ sigma: float = 0.8,
+ rna_min_std: float = 0.,
+ rna_max_std: float = 1.2,
+ inversion_depth: int = 30,
+ low_vram = False,
+ ):
+ r"""
+ Function invoked when calling the pipeline for generation.
+
+ Args:
+ prompt (`str` or `List[str]`, *optional*):
+                The prompt or prompts to guide the image generation. If not defined, one has to pass `prompt_embeds`
+                instead.
+ prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts to be sent to the `tokenizer_2` and `text_encoder_2`. If not defined, `prompt` is
+                used in both text-encoders.
+ height (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
+ The height in pixels of the generated image. This is set to 1024 by default for the best results.
+ Anything below 512 pixels won't work well for
+ [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
+ and checkpoints that are not specifically fine-tuned on low resolutions.
+ width (`int`, *optional*, defaults to self.unet.config.sample_size * self.vae_scale_factor):
+ The width in pixels of the generated image. This is set to 1024 by default for the best results.
+ Anything below 512 pixels won't work well for
+ [stabilityai/stable-diffusion-xl-base-1.0](https://huggingface.co/stabilityai/stable-diffusion-xl-base-1.0)
+ and checkpoints that are not specifically fine-tuned on low resolutions.
+ num_inference_steps (`int`, *optional*, defaults to 50):
+ The number of denoising steps. More denoising steps usually lead to a higher quality image at the
+ expense of slower inference.
+ denoising_end (`float`, *optional*):
+ When specified, determines the fraction (between 0.0 and 1.0) of the total denoising process to be
+ completed before it is intentionally prematurely terminated. As a result, the returned sample will
+ still retain a substantial amount of noise as determined by the discrete timesteps selected by the
+ scheduler. The denoising_end parameter should ideally be utilized when this pipeline forms a part of a
+ "Mixture of Denoisers" multi-pipeline setup, as elaborated in [**Refining the Image
+ Output**](https://huggingface.co/docs/diffusers/api/pipelines/stable_diffusion/stable_diffusion_xl#refining-the-image-output)
+            guidance_scale (`float`, *optional*, defaults to 7.5):
+ Guidance scale as defined in [Classifier-Free Diffusion Guidance](https://arxiv.org/abs/2207.12598).
+ `guidance_scale` is defined as `w` of equation 2. of [Imagen
+ Paper](https://arxiv.org/pdf/2205.11487.pdf). Guidance scale is enabled by setting `guidance_scale >
+                1`. A higher guidance scale encourages the model to generate images that are closely linked to the text `prompt`,
+ usually at the expense of lower image quality.
+ negative_prompt (`str` or `List[str]`, *optional*):
+ The prompt or prompts not to guide the image generation. If not defined, one has to pass
+ `negative_prompt_embeds` instead. Ignored when not using guidance (i.e., ignored if `guidance_scale` is
+ less than `1`).
+ negative_prompt_2 (`str` or `List[str]`, *optional*):
+ The prompt or prompts not to guide the image generation to be sent to `tokenizer_2` and
+                `text_encoder_2`. If not defined, `negative_prompt` is used in both text-encoders.
+ num_images_per_prompt (`int`, *optional*, defaults to 1):
+ The number of images to generate per prompt.
+ eta (`float`, *optional*, defaults to 0.0):
+ Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
+ [`schedulers.DDIMScheduler`], will be ignored for others.
+ generator (`torch.Generator` or `List[torch.Generator]`, *optional*):
+ One or a list of [torch generator(s)](https://pytorch.org/docs/stable/generated/torch.Generator.html)
+ to make generation deterministic.
+ latents (`torch.FloatTensor`, *optional*):
+ Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
+ generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
+                tensor will be generated by sampling using the supplied random `generator`.
+ prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting. If not
+ provided, text embeddings will be generated from `prompt` input argument.
+ negative_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, negative_prompt_embeds will be generated from `negative_prompt` input
+ argument.
+ pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt weighting.
+ If not provided, pooled text embeddings will be generated from `prompt` input argument.
+ negative_pooled_prompt_embeds (`torch.FloatTensor`, *optional*):
+ Pre-generated negative pooled text embeddings. Can be used to easily tweak text inputs, *e.g.* prompt
+ weighting. If not provided, pooled negative_prompt_embeds will be generated from `negative_prompt`
+ input argument.
+            return_dict (`bool`, *optional*, defaults to `False`):
+ Whether or not to return a [`~pipelines.stable_diffusion_xl.StableDiffusionXLPipelineOutput`] instead
+ of a plain tuple.
+ callback (`Callable`, *optional*):
+ A function that will be called every `callback_steps` steps during inference. The function will be
+ called with the following arguments: `callback(step: int, timestep: int, latents: torch.FloatTensor)`.
+ callback_steps (`int`, *optional*, defaults to 1):
+ The frequency at which the `callback` function will be called. If not specified, the callback will be
+ called at every step.
+ cross_attention_kwargs (`dict`, *optional*):
+ A kwargs dictionary that if specified is passed along to the `AttentionProcessor` as defined under
+ `self.processor` in
+ [diffusers.models.attention_processor](https://github.com/huggingface/diffusers/blob/main/src/diffusers/models/attention_processor.py).
+            guidance_rescale (`float`, *optional*, defaults to 0.0):
+                Guidance rescale factor proposed by [Common Diffusion Noise Schedules and Sample Steps are
+                Flawed](https://arxiv.org/pdf/2305.08891.pdf). `guidance_rescale` is defined as `φ` in equation 16. of
+                [Common Diffusion Noise Schedules and Sample Steps are Flawed](https://arxiv.org/pdf/2305.08891.pdf).
+                Guidance rescale factor should fix overexposure when using zero terminal SNR.
+ original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ If `original_size` is not the same as `target_size` the image will appear to be down- or upsampled.
+ `original_size` defaults to `(width, height)` if not specified. Part of SDXL's micro-conditioning as
+ explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
+ `crops_coords_top_left` can be used to generate an image that appears to be "cropped" from the position
+ `crops_coords_top_left` downwards. Favorable, well-centered images are usually achieved by setting
+ `crops_coords_top_left` to (0, 0). Part of SDXL's micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ For most cases, `target_size` should be set to the desired height and width of the generated image. If
+ not specified it will default to `(width, height)`. Part of SDXL's micro-conditioning as explained in
+ section 2.2 of [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952).
+ negative_original_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+ To negatively condition the generation process based on a specific image resolution. Part of SDXL's
+ micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+ negative_crops_coords_top_left (`Tuple[int]`, *optional*, defaults to (0, 0)):
+                To negatively condition the generation process based on specific crop coordinates. Part of SDXL's
+ micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+ negative_target_size (`Tuple[int]`, *optional*, defaults to (1024, 1024)):
+                To negatively condition the generation process based on a target image resolution. It should be the
+                same as `target_size` in most cases. Part of SDXL's micro-conditioning as explained in section 2.2 of
+ [https://huggingface.co/papers/2307.01952](https://huggingface.co/papers/2307.01952). For more
+ information, refer to this issue thread: https://github.com/huggingface/diffusers/issues/4208.
+
+ Returns:
+            A `list` with the generated image at each phase (the 1X reference and the final high-resolution image).
+ """
+ # 0. Default height and width to unet
+ assert self.default_sample_size * self.vae_scale_factor == 1024
+ if max(height, width) % 1024 != 0:
+            raise ValueError(f"The larger of `height` and `width` has to be divisible by 1024, but they are {height} and {width}.")
+ scale_num = max(height, width) // 1024
+ original_size = target_size = (height, width)
+ stride = int(self.unet.config.sample_size * stride_ratio)
+ self.low_vram = low_vram
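+        # The 1X reference is denoised at (height // scale_num, width // scale_num); its latent is
+        # later upsampled by the LSR model to the full target size in a single step (no
+        # progressive upscaling).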
+
+ # load LSR model
+        print('Loading LSR model from:', lsr_path)
+ sv_file = torch.load(lsr_path)
+ lsr_model = lsr.models.make(sv_file['model'], load_sd=True).cuda()
+
+ # 1. Check inputs. Raise error if not correct
+ self.check_inputs(
+ prompt,
+ prompt_2,
+ height,
+ width,
+ callback_steps,
+ negative_prompt,
+ negative_prompt_2,
+ prompt_embeds,
+ negative_prompt_embeds,
+ pooled_prompt_embeds,
+ negative_pooled_prompt_embeds,
+ num_images_per_prompt,
+ )
+
+ # 2. Define call parameters
+ if prompt is not None and isinstance(prompt, str):
+ batch_size = 1
+ elif prompt is not None and isinstance(prompt, list):
+ batch_size = len(prompt)
+ else:
+ batch_size = prompt_embeds.shape[0]
+
+ device = self._execution_device
+ self.low_vram = low_vram
+ if low_vram:
+ self.vae.cpu()
+ self.unet.cpu()
+ self.text_encoder.to(device)
+ self.text_encoder_2.to(device)
+ lsr_model.cpu()
+
+        # here `guidance_scale` is defined analogously to the guidance weight `w` of equation (2)
+ # of the Imagen paper: https://arxiv.org/pdf/2205.11487.pdf . `guidance_scale = 1`
+ # corresponds to doing no classifier free guidance.
+ do_classifier_free_guidance = guidance_scale > 1.0
+
+ # 3. Encode input prompt
+ text_encoder_lora_scale = (
+ cross_attention_kwargs.get("scale", None) if cross_attention_kwargs is not None else None
+ )
+ (
+ prompt_embeds,
+ negative_prompt_embeds,
+ pooled_prompt_embeds,
+ negative_pooled_prompt_embeds,
+ ) = self.encode_prompt(
+ prompt=prompt,
+ prompt_2=prompt_2,
+ device=device,
+ num_images_per_prompt=num_images_per_prompt,
+ do_classifier_free_guidance=do_classifier_free_guidance,
+ negative_prompt=negative_prompt,
+ negative_prompt_2=negative_prompt_2,
+ prompt_embeds=prompt_embeds,
+ negative_prompt_embeds=negative_prompt_embeds,
+ pooled_prompt_embeds=pooled_prompt_embeds,
+ negative_pooled_prompt_embeds=negative_pooled_prompt_embeds,
+ lora_scale=text_encoder_lora_scale,
+ )
+
+ # 4. Prepare timesteps
+ self.scheduler.set_timesteps(num_inference_steps, device=device)
+ timesteps = self.scheduler.timesteps
+ assert len(timesteps) == 50
+
+ # 5. Prepare latent variables
+ num_channels_latents = self.unet.config.in_channels
+ latents = self.prepare_latents(
+ batch_size * num_images_per_prompt,
+ num_channels_latents,
+ height // scale_num,
+ width // scale_num,
+ prompt_embeds.dtype,
+ device,
+ generator,
+ latents,
+ )
+
+ # 6. Prepare extra step kwargs.
+ extra_step_kwargs = self.prepare_extra_step_kwargs(generator, eta)
+
+ # 7. Prepare added time ids & embeddings
+ add_text_embeds = pooled_prompt_embeds
+
+ # maintain scene consistency across scale_num
+ # add_time_ids = self._get_add_time_ids(
+ # original_size, crops_coords_top_left, target_size, dtype=prompt_embeds.dtype
+ # )
+ size = (height // scale_num, width // scale_num)
+ add_time_ids = self._get_add_time_ids(
+ size, crops_coords_top_left, size, dtype=prompt_embeds.dtype
+ )
+
+ if negative_original_size is not None and negative_target_size is not None:
+ negative_add_time_ids = self._get_add_time_ids(
+ negative_original_size,
+ negative_crops_coords_top_left,
+ negative_target_size,
+ dtype=prompt_embeds.dtype,
+ )
+ else:
+ negative_add_time_ids = add_time_ids
+
+ if do_classifier_free_guidance:
+ prompt_embeds = torch.cat([negative_prompt_embeds, prompt_embeds], dim=0)
+ add_text_embeds = torch.cat([negative_pooled_prompt_embeds, add_text_embeds], dim=0)
+ add_time_ids = torch.cat([negative_add_time_ids, add_time_ids], dim=0)
+ del negative_prompt_embeds, negative_pooled_prompt_embeds, negative_add_time_ids
+
+ prompt_embeds = prompt_embeds.to(device)
+ add_text_embeds = add_text_embeds.to(device)
+ add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1)
+
+ # 7.1 Apply denoising_end
+ if denoising_end is not None and isinstance(denoising_end, float) and denoising_end > 0 and denoising_end < 1:
+ discrete_timestep_cutoff = int(
+ round(
+ self.scheduler.config.num_train_timesteps
+ - (denoising_end * self.scheduler.config.num_train_timesteps)
+ )
+ )
+ num_inference_steps = len(list(filter(lambda ts: ts >= discrete_timestep_cutoff, timesteps)))
+ timesteps = timesteps[:num_inference_steps]
+
+ ############### Phase Initialization ###############
+ output_images = []
+
+ if low_vram:
+ self.text_encoder.cpu()
+ self.text_encoder_2.cpu()
+ self.unet.to(device)
+
+ print("### Denoising 1X Reference ###")
+ with self.progress_bar(total=num_inference_steps) as progress_bar:
+ for i, t in enumerate(timesteps):
+ # expand the latents if doing classifier free guidance
+ latent_model_input = (
+ latents.repeat_interleave(2, dim=0)
+ if do_classifier_free_guidance
+ else latents
+ )
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
+
+ # predict the noise residual
+ added_cond_kwargs = {"text_embeds": add_text_embeds, "time_ids": add_time_ids}
+ noise_pred = self.unet(
+ latent_model_input,
+ t,
+ encoder_hidden_states=prompt_embeds,
+ cross_attention_kwargs=cross_attention_kwargs,
+ added_cond_kwargs=added_cond_kwargs,
+ return_dict=False,
+ )[0]
+
+ # perform guidance
+ if do_classifier_free_guidance:
+ noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2]
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+ if do_classifier_free_guidance and guidance_rescale > 0.0:
+ # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
+ noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
+
+ # compute the previous noisy sample x_t -> x_t-1
+ latents = self.scheduler.step(noise_pred, t, latents, **extra_step_kwargs, return_dict=False)[0]
+
+ # call the callback, if provided
+ if i == len(timesteps) - 1 or (i+1) % self.scheduler.order == 0:
+ progress_bar.update()
+ if callback is not None and i % callback_steps == 0:
+ step_idx = i // getattr(self.scheduler, "order", 1)
+ callback(step_idx, t, latents)
+                del latent_model_input, noise_pred
+                if do_classifier_free_guidance:
+                    del noise_pred_text, noise_pred_uncond
+
+ anchor_mean = latents.mean()
+ anchor_std = latents.std()
+ image = self.latent2image(latents) # rgb (discretized), pil
+
+ output_images.append(image)
+ if scale_num == 1:
+ output_images.append(image)
+ return output_images
+
+ ########### latent super resolution (LSR) ###########
+ # w/o progressive upsampling
+ current_height = height // scale_num * scale_num
+ current_width = width // scale_num * scale_num
+ current_scale_num = scale_num
+
+ # define new add_time_ids
+ add_time_ids = self._get_add_time_ids(
+ (current_height, current_width), crops_coords_top_left, (current_height, current_width), dtype=prompt_embeds.dtype
+ )
+ negative_add_time_ids = add_time_ids
+ if do_classifier_free_guidance:
+ add_time_ids = torch.cat([negative_add_time_ids, add_time_ids], dim=0)
+ add_time_ids = add_time_ids.to(device).repeat(batch_size * num_images_per_prompt, 1)
+
+ print(f"### Upsampling latent to {current_scale_num}X ###")
+ if low_vram:
+ self.unet.cpu()
+ lsr_model.to(device)
+
+ H = current_height // self.vae_scale_factor
+ W = current_width // self.vae_scale_factor
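+        # LIIF-style query for the latent super-resolution model: `coord` holds the centres of the
+        # (H, W) target grid in [-1, 1] and `cell` the per-query cell size (2/H, 2/W).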
+ coord = make_coord((H,W), flatten=False, device='cuda').unsqueeze(0)
+ cell = torch.ones_like(coord)
+ cell[:,:,:,0] *= 2/H
+ cell[:,:,:,1] *= 2/W
+
+ dtype = latents.dtype
+ latents = latents.to(torch.float32)
+ latents = lsr_model(latents, coord, cell)
+ latents = latents.to(dtype) # upsampled latent, float16
+
+ ########### region-wise noise addition (RNA) ###########
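+        # A Canny edge map of the 1X reference sets a per-pixel noise std: flat regions get
+        # ~rna_min_std, edge-dense regions get ~rna_max_std, and Gaussian noise with that std
+        # is added to the upsampled latent.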
+ image_ref = np.array(output_images[0])
+ diff = apply_canny_detection(image_ref, low_threshold=0, high_threshold=255).astype(np.float32)
+ diff = torch.tensor(diff).cuda().unsqueeze(0).unsqueeze(0)
+ diff = torch.nn.AdaptiveAvgPool2d((H,W))(diff)
+ std = ((diff - diff.min()) / (diff.max() - diff.min())) * (rna_max_std - rna_min_std) + rna_min_std
+ latents += torch.randn_like(latents) * std
+
+ ########### target denoising ###########
+ if low_vram:
+ self.unet.to(device)
+ lsr_model.cpu()
+
+ # noise inversion for noise initialization & skip residual
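+        # `noise_latents[k]` is the upsampled latent renoised to `timesteps[k]`; only the last
+        # `inversion_depth` denoising steps are run, and the same tensors serve as the
+        # skip-residual anchors in the loop below.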
+ noise_latents = []
+ noise = torch.randn_like(latents)
+ for timestep in timesteps:
+ noise_latent = self.scheduler.add_noise(latents, noise, timestep.unsqueeze(0))
+ noise_latents.append(noise_latent)
+ assert 0 < inversion_depth <= num_inference_steps and num_inference_steps == len(timesteps)
+ latents = noise_latents[num_inference_steps-inversion_depth]
+
+ print(f"### Denoising {current_scale_num}X Target ###")
+ with self.progress_bar(total=inversion_depth) as progress_bar:
+ for i, t in enumerate(timesteps):
+ if i < num_inference_steps-inversion_depth: continue
+ count = torch.zeros_like(latents)
+ value = torch.zeros_like(latents)
+
+ # Skip Residual (from DemoFusion)
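+                # c1 decays from ~1 to ~0 over the schedule: early steps are pulled towards the
+                # renoised reference latent, later steps rely on the freshly denoised latent.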
+ cosine_factor = 0.5 * (1 + torch.cos(torch.pi * (self.scheduler.config.num_train_timesteps - t) / self.scheduler.config.num_train_timesteps)).cpu()
+ c1 = cosine_factor ** cosine_scale_1
+ latents = latents * (1 - c1) + noise_latents[i] * c1
+
+ # patch-wise denoising (MultiDiffusion)
+ views = self.get_views(H, W, window_size=self.unet.config.sample_size, stride=stride, random_jitter=True)
+ views_batch = [views[i : i + view_batch_size] for i in range(0, len(views), view_batch_size)]
+ jitter_range = (self.unet.config.sample_size - stride) // 4
+ latents_ = F.pad(latents, (jitter_range, jitter_range, jitter_range, jitter_range), 'constant', 0)
+ count_local = torch.zeros_like(latents_)
+ value_local = torch.zeros_like(latents_)
+
+ for j, batch_view in enumerate(views_batch):
+ vb_size = len(batch_view)
+ # get the latents corresponding to the current view coordinates
+ latents_for_view = torch.cat(
+ [
+ latents_[:, :, h_start:h_end, w_start:w_end]
+ for h_start, h_end, w_start, w_end in batch_view
+ ]
+ )
+ # expand the latents if doing classifier free guidance
+ latent_model_input = latents_for_view
+ latent_model_input = (
+ latent_model_input.repeat_interleave(2, dim=0)
+ if do_classifier_free_guidance
+ else latent_model_input
+ )
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
+
+ prompt_embeds_input = torch.cat([prompt_embeds] * vb_size)
+ add_text_embeds_input = torch.cat([add_text_embeds] * vb_size)
+ add_time_ids_input = []
+ for h_start, h_end, w_start, w_end in batch_view:
+ add_time_ids_ = add_time_ids.clone()
+ add_time_ids_[:, 2] = h_start * self.vae_scale_factor
+ add_time_ids_[:, 3] = w_start * self.vae_scale_factor
+ add_time_ids_input.append(add_time_ids_)
+ add_time_ids_input = torch.cat(add_time_ids_input)
+
+ # predict the noise residual
+ added_cond_kwargs = {"text_embeds": add_text_embeds_input, "time_ids": add_time_ids_input}
+ noise_pred = self.unet(
+ latent_model_input,
+ t,
+ encoder_hidden_states=prompt_embeds_input,
+ cross_attention_kwargs=cross_attention_kwargs,
+ added_cond_kwargs=added_cond_kwargs,
+ return_dict=False,
+ )[0]
+
+ if do_classifier_free_guidance:
+ noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2]
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+ if do_classifier_free_guidance and guidance_rescale > 0.0:
+ # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
+ noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
+
+ # compute the previous noisy sample x_t -> x_t-1
+ if hasattr(self.scheduler, '_init_step_index'):
+ self.scheduler._init_step_index(t)
+ latents_denoised_batch = self.scheduler.step(
+ noise_pred, t, latents_for_view, **extra_step_kwargs, return_dict=False)[0]
+
+ # extract value from batch
+ for latents_view_denoised, (h_start, h_end, w_start, w_end) in zip(
+ latents_denoised_batch.chunk(vb_size), batch_view
+ ):
+ value_local[:, :, h_start:h_end, w_start:w_end] += latents_view_denoised
+ count_local[:, :, h_start:h_end, w_start:w_end] += 1
+                value_local = value_local[:, :, jitter_range: jitter_range + H, jitter_range: jitter_range + W]
+                count_local = count_local[:, :, jitter_range: jitter_range + H, jitter_range: jitter_range + W]
+
+ # Dilated Sampling (from DemoFusion)
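+                # Fuse patch-wise (local) and dilated (global) denoising: the local result is
+                # weighted by (1 - c2) here and the dilated result by c2 further below, so the
+                # global pass dominates early and the local pass dominates late in the schedule.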
+ c2 = cosine_factor ** cosine_scale_2
+ value += value_local / count_local * (1 - c2)
+ count += torch.ones_like(value_local) * (1 - c2)
+
+ views = [[h, w] for h in range(current_scale_num) for w in range(current_scale_num)]
+ views_batch = [views[i : i + view_batch_size] for i in range(0, len(views), view_batch_size)]
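+                # Each (h, w) offset selects an interleaved sub-grid of the (padded) latent, i.e.
+                # every `current_scale_num`-th row/column, so the views jointly tile the latent once.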
+
+ h_pad = (current_scale_num - (latents.size(2) % current_scale_num)) % current_scale_num
+ w_pad = (current_scale_num - (latents.size(3) % current_scale_num)) % current_scale_num
+ latents_ = F.pad(latents, (w_pad, 0, h_pad, 0), 'constant', 0)
+
+ count_global = torch.zeros_like(latents_)
+ value_global = torch.zeros_like(latents_)
+
+ c3 = 0.99 * cosine_factor ** cosine_scale_3 + 1e-2
+ std_, mean_ = latents_.std(), latents_.mean()
+ latents_gaussian = gaussian_filter(latents_, kernel_size=(2*current_scale_num-1), sigma=sigma*c3)
+ latents_gaussian = (latents_gaussian - latents_gaussian.mean()) / latents_gaussian.std() * std_ + mean_
+
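+                # The UNet is fed the Gaussian-smoothed copy of the latent (blur strength annealed
+                # by c3); the scheduler step below is applied to the unsmoothed `latents_for_view`.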
+ for j, batch_view in enumerate(views_batch):
+ latents_for_view = torch.cat(
+ [
+ latents_[:, :, h::current_scale_num, w::current_scale_num]
+ for h, w in batch_view
+ ]
+ )
+ latents_for_view_gaussian = torch.cat(
+ [
+ latents_gaussian[:, :, h::current_scale_num, w::current_scale_num]
+ for h, w in batch_view
+ ]
+ )
+
+                    # latents_for_view.size(0) may differ from view_batch_size (e.g. for the last, smaller batch)
+ vb_size = latents_for_view.size(0)
+
+ # expand the latents if doing classifier free guidance
+ latent_model_input = latents_for_view_gaussian
+ latent_model_input = (
+ latent_model_input.repeat_interleave(2, dim=0)
+ if do_classifier_free_guidance
+ else latent_model_input
+ )
+ latent_model_input = self.scheduler.scale_model_input(latent_model_input, t)
+
+ prompt_embeds_input = torch.cat([prompt_embeds] * vb_size)
+ add_text_embeds_input = torch.cat([add_text_embeds] * vb_size)
+ add_time_ids_input = torch.cat([add_time_ids] * vb_size)
+
+ # predict the noise residual
+ added_cond_kwargs = {"text_embeds": add_text_embeds_input, "time_ids": add_time_ids_input}
+ noise_pred = self.unet(
+ latent_model_input,
+ t,
+ encoder_hidden_states=prompt_embeds_input,
+ cross_attention_kwargs=cross_attention_kwargs,
+ added_cond_kwargs=added_cond_kwargs,
+ return_dict=False,
+ )[0]
+
+ if do_classifier_free_guidance:
+ noise_pred_uncond, noise_pred_text = noise_pred[::2], noise_pred[1::2]
+ noise_pred = noise_pred_uncond + guidance_scale * (noise_pred_text - noise_pred_uncond)
+
+ if do_classifier_free_guidance and guidance_rescale > 0.0:
+ # Based on 3.4. in https://arxiv.org/pdf/2305.08891.pdf
+ noise_pred = rescale_noise_cfg(noise_pred, noise_pred_text, guidance_rescale=guidance_rescale)
+
+ # compute the previous noisy sample x_t -> x_t-1
+ if hasattr(self.scheduler, '_init_step_index'):
+ self.scheduler._init_step_index(t)
+ latents_denoised_batch = self.scheduler.step(
+ noise_pred, t, latents_for_view, **extra_step_kwargs, return_dict=False)[0]
+
+ # extract value from batch
+ for latents_view_denoised, (h, w) in zip(
+ latents_denoised_batch.chunk(vb_size), batch_view
+ ):
+ value_global[:, :, h::current_scale_num, w::current_scale_num] += latents_view_denoised
+ count_global[:, :, h::current_scale_num, w::current_scale_num] += 1
+
+                value_global = value_global[:, :, h_pad:, w_pad:]
+ value += value_global * c2
+ count += torch.ones_like(value_global) * c2
+
+ latents = torch.where(count > 0, value / count, value)
+
+ # call the callback, if provided
+ if i == len(timesteps) - 1 or (i+1) % self.scheduler.order == 0:
+ progress_bar.update()
+ if callback is not None and i % callback_steps == 0:
+ step_idx = i // getattr(self.scheduler, "order", 1)
+ callback(step_idx, t, latents)
+ latents = (latents - latents.mean()) / latents.std() * anchor_std + anchor_mean
+
+ # reconstruct target image
+ print(f"### Reconstructing Target ({scale_num}X) ###")
+ image = self.latent2image(latents, advanced_decode=False)
+ output_images.append(image)
+
+ # offload all models
+ self.maybe_free_model_hooks()
+ return output_images
+
+
+    # Override to properly handle the loading and unloading of the additional text encoder.
+ def load_lora_weights(self, pretrained_model_name_or_path_or_dict: Union[str, Dict[str, torch.Tensor]], **kwargs):
+ # We could have accessed the unet config from `lora_state_dict()` too. We pass
+ # it here explicitly to be able to tell that it's coming from an SDXL
+ # pipeline.
+
+ # Remove any existing hooks.
+ if is_accelerate_available() and is_accelerate_version(">=", "0.17.0.dev0"):
+ from accelerate.hooks import AlignDevicesHook, CpuOffload, remove_hook_from_module
+ else:
+ raise ImportError("Offloading requires `accelerate v0.17.0` or higher.")
+
+ is_model_cpu_offload = False
+ is_sequential_cpu_offload = False
+ recursive = False
+ for _, component in self.components.items():
+ if isinstance(component, torch.nn.Module):
+ if hasattr(component, "_hf_hook"):
+ is_model_cpu_offload = isinstance(getattr(component, "_hf_hook"), CpuOffload)
+ is_sequential_cpu_offload = isinstance(getattr(component, "_hf_hook"), AlignDevicesHook)
+ logger.info(
+ "Accelerate hooks detected. Since you have called `load_lora_weights()`, the previous hooks will be first removed. Then the LoRA parameters will be loaded and the hooks will be applied again."
+ )
+ recursive = is_sequential_cpu_offload
+ remove_hook_from_module(component, recurse=recursive)
+ state_dict, network_alphas = self.lora_state_dict(
+ pretrained_model_name_or_path_or_dict,
+ unet_config=self.unet.config,
+ **kwargs,
+ )
+ self.load_lora_into_unet(state_dict, network_alphas=network_alphas, unet=self.unet)
+
+ text_encoder_state_dict = {k: v for k, v in state_dict.items() if "text_encoder." in k}
+ if len(text_encoder_state_dict) > 0:
+ self.load_lora_into_text_encoder(
+ text_encoder_state_dict,
+ network_alphas=network_alphas,
+ text_encoder=self.text_encoder,
+ prefix="text_encoder",
+ lora_scale=self.lora_scale,
+ )
+
+ text_encoder_2_state_dict = {k: v for k, v in state_dict.items() if "text_encoder_2." in k}
+ if len(text_encoder_2_state_dict) > 0:
+ self.load_lora_into_text_encoder(
+ text_encoder_2_state_dict,
+ network_alphas=network_alphas,
+ text_encoder=self.text_encoder_2,
+ prefix="text_encoder_2",
+ lora_scale=self.lora_scale,
+ )
+
+ # Offload back.
+ if is_model_cpu_offload:
+ self.enable_model_cpu_offload()
+ elif is_sequential_cpu_offload:
+ self.enable_sequential_cpu_offload()
+
+ @classmethod
+ def save_lora_weights(
+ self,
+ save_directory: Union[str, os.PathLike],
+ unet_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
+ text_encoder_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
+ text_encoder_2_lora_layers: Dict[str, Union[torch.nn.Module, torch.Tensor]] = None,
+ is_main_process: bool = True,
+ weight_name: str = None,
+ save_function: Callable = None,
+ safe_serialization: bool = True,
+ ):
+ state_dict = {}
+
+ def pack_weights(layers, prefix):
+ layers_weights = layers.state_dict() if isinstance(layers, torch.nn.Module) else layers
+ layers_state_dict = {f"{prefix}.{module_name}": param for module_name, param in layers_weights.items()}
+ return layers_state_dict
+
+ if not (unet_lora_layers or text_encoder_lora_layers or text_encoder_2_lora_layers):
+ raise ValueError(
+ "You must pass at least one of `unet_lora_layers`, `text_encoder_lora_layers` or `text_encoder_2_lora_layers`."
+ )
+
+ if unet_lora_layers:
+ state_dict.update(pack_weights(unet_lora_layers, "unet"))
+
+ if text_encoder_lora_layers and text_encoder_2_lora_layers:
+ state_dict.update(pack_weights(text_encoder_lora_layers, "text_encoder"))
+ state_dict.update(pack_weights(text_encoder_2_lora_layers, "text_encoder_2"))
+
+ self.write_lora_layers(
+ state_dict=state_dict,
+ save_directory=save_directory,
+ is_main_process=is_main_process,
+ weight_name=weight_name,
+ save_function=save_function,
+ safe_serialization=safe_serialization,
+ )
+
+ def _remove_text_encoder_monkey_patch(self):
+ self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder)
+ self._remove_text_encoder_monkey_patch_classmethod(self.text_encoder_2)
diff --git a/competitors_inference_code/LSRNA/requirements.txt b/competitors_inference_code/LSRNA/requirements.txt
new file mode 100644
index 0000000000000000000000000000000000000000..4f07b7a66a72840947ad8a2fd4d5c21130f617db
--- /dev/null
+++ b/competitors_inference_code/LSRNA/requirements.txt
@@ -0,0 +1,18 @@
+torch==2.3.1
+accelerate==0.31.0
+diffusers==0.29.1
+einops==0.8.0
+gradio==4.38.1
+huggingface-hub==0.24.0
+MarkupSafe==2.1.5
+matplotlib==3.9.1
+numpy==1.26.4
+omegaconf==2.3.0
+pandas==2.2.2
+safetensors==0.4.3
+scipy==1.11.4
+timm==1.0.7
+transformers==4.41.2
+triton==2.3.1
+xformers==0.0.27
+opencv-python
\ No newline at end of file
diff --git a/competitors_inference_code/LSRNA/run.sh b/competitors_inference_code/LSRNA/run.sh
new file mode 100644
index 0000000000000000000000000000000000000000..32e6a96bb46d46bae84cb6b186415d03603df42e
--- /dev/null
+++ b/competitors_inference_code/LSRNA/run.sh
@@ -0,0 +1,13 @@
+#!/usr/bin/env bash
+CUDA_VISIBLE_DEVICES=0 python main.py \
+ --prompt "A well-worn baseball glove and ball sitting on fresh-cut grass." \
+ --negative_prompt "blurry, ugly, duplicate, poorly drawn, deformed, mosaic" \
+ --height 2048 \
+ --width 2048 \
+ --seed 0 \
+ --lsr_path "lsr/swinir-liif-latent-sdxl.pth" \
+ --rna_min_std 0.0 \
+ --rna_max_std 1.2 \
+ --inversion_depth 30 \
+ --save_dir "results" \
+ #--low_vram
diff --git a/competitors_inference_code/LSRNA/utils.py b/competitors_inference_code/LSRNA/utils.py
new file mode 100644
index 0000000000000000000000000000000000000000..38fedc3a592b3702c5641fe557b83cc74a2f5582
--- /dev/null
+++ b/competitors_inference_code/LSRNA/utils.py
@@ -0,0 +1,45 @@
+import cv2
+from PIL import Image
+
+import numpy as np
+import torch
+import torch.nn.functional as F
+
+
+def gaussian_kernel(kernel_size=3, sigma=1.0, channels=3):
+ x_coord = torch.arange(kernel_size)
+ gaussian_1d = torch.exp(-(x_coord - (kernel_size - 1) / 2) ** 2 / (2 * sigma ** 2))
+ gaussian_1d = gaussian_1d / gaussian_1d.sum()
+ gaussian_2d = gaussian_1d[:, None] * gaussian_1d[None, :]
+ kernel = gaussian_2d[None, None, :, :].repeat(channels, 1, 1, 1)
+ return kernel
+
+
+def gaussian_filter(latents, kernel_size=3, sigma=1.0):
+ channels = latents.shape[1]
+ kernel = gaussian_kernel(kernel_size, sigma, channels).to(latents.device, latents.dtype)
+ blurred_latents = F.conv2d(latents, kernel, padding=kernel_size//2, groups=channels)
+ return blurred_latents
+
+
+def make_coord(shape, ranges=None, flatten=True, device='cpu'):
+ # Make coordinates at grid centers.
+ coord_seqs = []
+ for i, n in enumerate(shape):
+ if ranges is None:
+ v0, v1 = -1, 1
+ else:
+ v0, v1 = ranges[i]
+ r = (v1 - v0) / (2 * n)
+ seq = v0 + r + (2 * r) * torch.arange(n, device=device).float()
+ coord_seqs.append(seq)
+    ret = torch.stack(torch.meshgrid(*coord_seqs, indexing='ij'), dim=-1)
+ if flatten:
+ ret = ret.view(-1, ret.shape[-1])
+ return ret
+
+
+def apply_canny_detection(image_np, low_threshold=100, high_threshold=200):
+ gray_image = cv2.cvtColor(image_np, cv2.COLOR_RGB2GRAY)
+ filtered_image = cv2.Canny(gray_image, low_threshold, high_threshold) # 0 or 255
+ return filtered_image
\ No newline at end of file
diff --git a/competitors_inference_code/generate_hidifussion_images.py b/competitors_inference_code/generate_hidifussion_images.py
new file mode 100644
index 0000000000000000000000000000000000000000..4756e91eecccba949de3dac047f66959b267b10d
--- /dev/null
+++ b/competitors_inference_code/generate_hidifussion_images.py
@@ -0,0 +1,148 @@
+#!/usr/bin/env python3
+"""Generate SDXL images for the selected validation prompts with HiDiffusion."""
+
+from __future__ import annotations
+
+import csv
+import json
+import time
+from pathlib import Path
+
+import torch
+from diffusers import DDIMScheduler, StableDiffusionXLPipeline
+from hidiffusion import apply_hidiffusion, remove_hidiffusion
+
+
+NEGATIVE_PROMPT = "blurry, ugly, duplicate, poorly drawn face, deformed, mosaic, artifacts, bad limbs"
+DEFAULT_CSV = "datasets/new_validation_dataset/original_openim/images/selected_validation_images.csv"
+DEFAULT_OUTPUT_DIR = "datasets/new_validation_dataset/hidifussion/images"
+STATISTICS_PATH = "datasets/new_validation_dataset/hidifussion/statistics.json"
+PRETRAINED_MODEL = "stabilityai/stable-diffusion-xl-base-1.0"
+CFG_SCALE = 7.5
+NUM_INFERENCE_STEPS = 30
+ETA = 1.0
+SEED = 42
+RESOLUTIONS: dict[str, tuple[int, int]] = {
+ "4096px": (4096, 4096),
+ "2048px": (2048, 2048),
+ "1024px": (1024, 1024),
+ "512px": (512, 512),
+}
+
+
+def load_prompts(csv_path: Path) -> list[tuple[str, str]]:
+ prompts: list[tuple[str, str]] = []
+ with csv_path.open("r", encoding="utf-8") as handle:
+ reader = csv.DictReader(handle)
+ for row in reader:
+ caption_raw = (row.get("gpt_caption") or "").strip()
+ if not caption_raw:
+ continue
+ try:
+ caption = json.loads(caption_raw)
+ except json.JSONDecodeError:
+ print(f"Skipping row with invalid JSON: {row.get('img_path')}")
+ continue
+ prompt = caption.get("sdxl")
+ if not prompt:
+ print(f"Skipping row without 'sdxl' prompt: {row.get('img_path')}")
+ continue
+ prompts.append((row.get("img_path", ""), prompt))
+ return prompts
+
+
+def build_pipeline() -> StableDiffusionXLPipeline:
+ if not torch.cuda.is_available():
+ raise RuntimeError("CUDA is required to run this script.")
+
+ scheduler = DDIMScheduler.from_pretrained(PRETRAINED_MODEL, subfolder="scheduler")
+ pipe = StableDiffusionXLPipeline.from_pretrained(
+ PRETRAINED_MODEL,
+ scheduler=scheduler,
+ torch_dtype=torch.float16,
+ variant="fp16",
+ ).to("cuda")
+ pipe.set_progress_bar_config(disable=True)
+ apply_hidiffusion(pipe)
+ return pipe
+
+
+def main() -> None:
+ csv_path = Path(DEFAULT_CSV)
+ output_dir = Path(DEFAULT_OUTPUT_DIR)
+ prompts = load_prompts(csv_path)
+ if not prompts:
+ raise SystemExit("No prompts were found in the CSV file.")
+
+ resolution_dirs = {name: output_dir / name for name in RESOLUTIONS}
+ for folder in resolution_dirs.values():
+ folder.mkdir(parents=True, exist_ok=True)
+
+ statistics_path = Path(STATISTICS_PATH)
+ stats_tracker = {
+ name: {"count": 0, "total_time": 0.0, "max_vram_bytes": 0}
+ for name in RESOLUTIONS
+ }
+
+ generator = torch.Generator(device="cuda").manual_seed(SEED)
+ pipe = build_pipeline()
+ device = torch.device("cuda")
+ statistics: dict[str, object] | None = None
+
+ for idx, (img_path, prompt) in enumerate(prompts):
+ filename = f"{idx}.png"
+ written_paths: list[str] = []
+
+ for name, (width, height) in RESOLUTIONS.items():
+ print(prompt)
+ torch.cuda.synchronize(device)
+ torch.cuda.reset_peak_memory_stats(device)
+ start_time = time.perf_counter()
+
+ image = pipe(
+ prompt,
+ negative_prompt=NEGATIVE_PROMPT,
+ guidance_scale=CFG_SCALE,
+ num_inference_steps=NUM_INFERENCE_STEPS,
+ eta=ETA,
+ width=width,
+ height=height,
+ generator=generator,
+ progress_bar=True,
+ ).images[0]
+
+ torch.cuda.synchronize(device)
+ elapsed = time.perf_counter() - start_time
+ vram_bytes = torch.cuda.max_memory_allocated(device)
+
+ stats = stats_tracker[name]
+ stats["count"] += 1
+ stats["total_time"] += elapsed
+ stats["max_vram_bytes"] = max(stats["max_vram_bytes"], vram_bytes)
+
+ output_path = resolution_dirs[name] / filename
+ image.save(output_path)
+ written_paths.append(str(output_path))
+
+ print(f"[{idx + 1}/{len(prompts)}] wrote {', '.join(written_paths)}")
+
+ statistics = {
+ "total_prompts": len(prompts),
+ "resolutions": {
+ name: {
+ "images": metrics["count"],
+ "mean_time_sec": (metrics["total_time"] / metrics["count"]) if metrics["count"] else 0.0,
+ "max_vram_mb": metrics["max_vram_bytes"] / (1024**2),
+ }
+ for name, metrics in stats_tracker.items()
+ },
+ }
+
+ if statistics:
+ statistics_path.parent.mkdir(parents=True, exist_ok=True)
+ statistics_path.write_text(json.dumps(statistics, indent=2))
+ print(f"Saved statistics to {statistics_path}")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/competitors_inference_code/generate_sdxl_images.py b/competitors_inference_code/generate_sdxl_images.py
new file mode 100644
index 0000000000000000000000000000000000000000..d930443f4e7b549aae7ddc10a02be326cd110efc
--- /dev/null
+++ b/competitors_inference_code/generate_sdxl_images.py
@@ -0,0 +1,146 @@
+#!/usr/bin/env python3
+"""Generate SDXL images for the selected validation prompts without HiDiffusion."""
+
+from __future__ import annotations
+
+import csv
+import json
+import time
+from pathlib import Path
+
+import torch
+from diffusers import DDIMScheduler, StableDiffusionXLPipeline
+
+
+NEGATIVE_PROMPT = "blurry, ugly, duplicate, poorly drawn face, deformed, mosaic, artifacts, bad limbs"
+DEFAULT_CSV = "datasets/new_validation_dataset/original_openim/images/selected_validation_images.csv"
+DEFAULT_OUTPUT_DIR = "datasets/new_validation_dataset/sdxl_default/images"
+STATISTICS_PATH = "datasets/new_validation_dataset/sdxl_default/statistics.json"
+PRETRAINED_MODEL = "stabilityai/stable-diffusion-xl-base-1.0"
+CFG_SCALE = 7.5
+NUM_INFERENCE_STEPS = 30
+ETA = 1.0
+SEED = 42
+RESOLUTIONS: dict[str, tuple[int, int]] = {
+ "4096px": (4096, 4096),
+ "2048px": (2048, 2048),
+ "1024px": (1024, 1024),
+ "512px": (512, 512),
+}
+
+
+def load_prompts(csv_path: Path) -> list[tuple[str, str]]:
+ prompts: list[tuple[str, str]] = []
+ with csv_path.open("r", encoding="utf-8") as handle:
+ reader = csv.DictReader(handle)
+ for row in reader:
+ caption_raw = (row.get("gpt_caption") or "").strip()
+ if not caption_raw:
+ continue
+ try:
+ caption = json.loads(caption_raw)
+ except json.JSONDecodeError:
+ print(f"Skipping row with invalid JSON: {row.get('img_path')}")
+ continue
+ prompt = caption.get("sdxl")
+ if not prompt:
+ print(f"Skipping row without 'sdxl' prompt: {row.get('img_path')}")
+ continue
+ prompts.append((row.get("img_path", ""), prompt))
+ return prompts
+
+
+def build_pipeline() -> StableDiffusionXLPipeline:
+ if not torch.cuda.is_available():
+ raise RuntimeError("CUDA is required to run this script.")
+
+ scheduler = DDIMScheduler.from_pretrained(PRETRAINED_MODEL, subfolder="scheduler")
+ pipe = StableDiffusionXLPipeline.from_pretrained(
+ PRETRAINED_MODEL,
+ scheduler=scheduler,
+ torch_dtype=torch.float16,
+ variant="fp16",
+ ).to("cuda")
+ pipe.set_progress_bar_config(disable=True)
+ return pipe
+
+
+def main() -> None:
+ csv_path = Path(DEFAULT_CSV)
+ output_dir = Path(DEFAULT_OUTPUT_DIR)
+ prompts = load_prompts(csv_path)
+ if not prompts:
+ raise SystemExit("No prompts were found in the CSV file.")
+
+ resolution_dirs = {name: output_dir / name for name in RESOLUTIONS}
+ for folder in resolution_dirs.values():
+ folder.mkdir(parents=True, exist_ok=True)
+
+ statistics_path = Path(STATISTICS_PATH)
+ stats_tracker = {
+ name: {"count": 0, "total_time": 0.0, "max_vram_bytes": 0}
+ for name in RESOLUTIONS
+ }
+
+ generator = torch.Generator(device="cuda").manual_seed(SEED)
+ pipe = build_pipeline()
+ device = torch.device("cuda")
+ statistics: dict[str, object] | None = None
+
+ for idx, (img_path, prompt) in enumerate(prompts):
+ filename = f"{idx}.png"
+ written_paths: list[str] = []
+
+ for name, (width, height) in RESOLUTIONS.items():
+ print(prompt)
+ torch.cuda.synchronize(device)
+ torch.cuda.reset_peak_memory_stats(device)
+ start_time = time.perf_counter()
+
+ image = pipe(
+ prompt,
+ negative_prompt=NEGATIVE_PROMPT,
+ guidance_scale=CFG_SCALE,
+ num_inference_steps=NUM_INFERENCE_STEPS,
+ eta=ETA,
+ width=width,
+ height=height,
+ generator=generator,
+ progress_bar=True,
+ ).images[0]
+
+ torch.cuda.synchronize(device)
+ elapsed = time.perf_counter() - start_time
+ vram_bytes = torch.cuda.max_memory_allocated(device)
+
+ stats = stats_tracker[name]
+ stats["count"] += 1
+ stats["total_time"] += elapsed
+ stats["max_vram_bytes"] = max(stats["max_vram_bytes"], vram_bytes)
+
+ output_path = resolution_dirs[name] / filename
+ image.save(output_path)
+ written_paths.append(str(output_path))
+
+ print(f"[{idx + 1}/{len(prompts)}] wrote {', '.join(written_paths)}")
+
+ statistics = {
+ "total_prompts": len(prompts),
+ "resolutions": {
+ name: {
+ "images": metrics["count"],
+ "mean_time_sec": (metrics["total_time"] / metrics["count"]) if metrics["count"] else 0.0,
+ "max_vram_mb": metrics["max_vram_bytes"] / (1024**2),
+ }
+ for name, metrics in stats_tracker.items()
+ },
+ }
+
+ if statistics:
+ statistics_path.parent.mkdir(parents=True, exist_ok=True)
+ statistics_path.write_text(json.dumps(statistics, indent=2))
+ print(f"Saved statistics to {statistics_path}")
+
+
+if __name__ == "__main__":
+ main()
diff --git a/original_openim/images/512px/0068ca42dd4c4a2e.jpg b/original_openim/images/512px/0068ca42dd4c4a2e.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..1416aef8f2876411ac3cebff3f58ace576fb83ed
--- /dev/null
+++ b/original_openim/images/512px/0068ca42dd4c4a2e.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7456498a859744f3b34f7917d5b57062d08f166b3a4e5a0aab29bbfa11ed20a
+size 33230
diff --git a/original_openim/images/512px/006c8ac9b5e6906a.jpg b/original_openim/images/512px/006c8ac9b5e6906a.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..efb01f59b39a5097d3b7af84d9037e2f9a9acea1
--- /dev/null
+++ b/original_openim/images/512px/006c8ac9b5e6906a.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:212c1dca7afe7fc0e6207529f4298e60b7af35207aacd9dda5716e5078fc285a
+size 42835
diff --git a/original_openim/images/512px/00714f9e7d062900.jpg b/original_openim/images/512px/00714f9e7d062900.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..54b6e705df22de9128eacbdd424cd06a1210ca01
--- /dev/null
+++ b/original_openim/images/512px/00714f9e7d062900.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3249bf7cf1b8bbee8f7df1d04aff1f5f6e18911883f18dd3aad75697f3e00499
+size 32778
diff --git a/original_openim/images/512px/007170add0cfe316.jpg b/original_openim/images/512px/007170add0cfe316.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..dfa5c479afd3a392c0470bb208036a73ef0bbbfd
--- /dev/null
+++ b/original_openim/images/512px/007170add0cfe316.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab192f0354fbc59cf7505042bd2eb83307cba793e5c43ce296d79f476f587e8f
+size 34035
diff --git a/original_openim/images/512px/0071f62f5d703904.jpg b/original_openim/images/512px/0071f62f5d703904.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..eaa19923784a63e119c326e88ed5b3d36f92be12
--- /dev/null
+++ b/original_openim/images/512px/0071f62f5d703904.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d31c19b73b04a9d0eb4722d9866bb2bc403eb6cf71fc4dad2eb6c3cef1c35a69
+size 29084
diff --git a/original_openim/images/512px/00723dac8201a83e.jpg b/original_openim/images/512px/00723dac8201a83e.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8a9b155d8464cd00e05555d86dbb03edcc2ccb27
--- /dev/null
+++ b/original_openim/images/512px/00723dac8201a83e.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7eadb17156ebc61ab61ed497223e68a5fda68822e80491bc8e6365f58a450c54
+size 31774
diff --git a/original_openim/images/512px/007384da2ed0464f.jpg b/original_openim/images/512px/007384da2ed0464f.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..813b20c34b00e4616c017da3804d65117ea484c3
--- /dev/null
+++ b/original_openim/images/512px/007384da2ed0464f.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9cd010d65e1e76b9de6c0198102c78a8b4f29c0e049894fe23eda14cbbaff6b9
+size 32842
diff --git a/original_openim/images/512px/0076e0b90158151c.jpg b/original_openim/images/512px/0076e0b90158151c.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..32b3cde65bdbf6b5d17a4930e8c888f00033382b
--- /dev/null
+++ b/original_openim/images/512px/0076e0b90158151c.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d69bfcb312d551cef5f6a2ab0721246904c613704da76b5d3444c5fecd19b26
+size 17723
diff --git a/original_openim/images/512px/0077e1cc5010e074.jpg b/original_openim/images/512px/0077e1cc5010e074.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..0084f52844ee6a5a41b66fe77dd5093d79427ac7
--- /dev/null
+++ b/original_openim/images/512px/0077e1cc5010e074.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bb158c60dc7a20ba68406545b3f8bc69bee362dd553e8a49eadf024d78a2a0bf
+size 29936
diff --git a/original_openim/images/512px/0077f8de643853ca.jpg b/original_openim/images/512px/0077f8de643853ca.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..395aa57ee6e3a87558e810fe9fc482a1b9231920
--- /dev/null
+++ b/original_openim/images/512px/0077f8de643853ca.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d68f78ad65c04e89b55ffc03c0d02a12e137df0130693409e8e74f3f77caf111
+size 71750
diff --git a/original_openim/images/512px/007f71665b0812a7.jpg b/original_openim/images/512px/007f71665b0812a7.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..67ed18ecc56257218413e3506fc6cf6dafdea89e
--- /dev/null
+++ b/original_openim/images/512px/007f71665b0812a7.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5bcc0d52aa513710c6aeb742094ad537b3f6100301ac68e09dd75386314dfba
+size 49863
diff --git a/original_openim/images/512px/0081f359f925712e.jpg b/original_openim/images/512px/0081f359f925712e.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..228fbad1ae576f294cc45c967dbddc8bbd406ae2
--- /dev/null
+++ b/original_openim/images/512px/0081f359f925712e.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9141f7091d0733ca28c05d3ffad93b55ff0f459499a91844dde6ae5868aedeac
+size 44721
diff --git a/original_openim/images/512px/00846fb55a143fbe.jpg b/original_openim/images/512px/00846fb55a143fbe.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..9393111b5d9d002c7ba8bb0b0cd168e638b9b5cf
--- /dev/null
+++ b/original_openim/images/512px/00846fb55a143fbe.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0959e6941a44aecb948902a333fb3c78102575f1188175bf38dbc4fae091f35d
+size 42089
diff --git a/original_openim/images/512px/008637722500f239.jpg b/original_openim/images/512px/008637722500f239.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..6592bc36989a9bbe35e3eaee0e45958d990f6d40
--- /dev/null
+++ b/original_openim/images/512px/008637722500f239.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e6fbecf3facab51d124c95b1c77f9d26b6420be3b76f64cd767fb06eb20a07cf
+size 36541
diff --git a/original_openim/images/512px/00876549dfcbcbad.jpg b/original_openim/images/512px/00876549dfcbcbad.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..373d90e3f2ae3366c055fa1a496366e05acc7010
--- /dev/null
+++ b/original_openim/images/512px/00876549dfcbcbad.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:79be396a80565010fa559c0a208f0f08ed1c91da862cb7146917108a6325b699
+size 27471
diff --git a/original_openim/images/512px/0089b8f6212315ec.jpg b/original_openim/images/512px/0089b8f6212315ec.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..e6a75c5d9b1050baa90ed4b97852253b449a56f0
--- /dev/null
+++ b/original_openim/images/512px/0089b8f6212315ec.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e5b790a5b30b5d721f7154d7f76862e92b94c5296adf6f2f3c32f257689babc
+size 39868
diff --git a/original_openim/images/512px/008e12a039f69f8a.jpg b/original_openim/images/512px/008e12a039f69f8a.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..ca62daccfa6db3cbf80b86413744b670fc3b37c7
--- /dev/null
+++ b/original_openim/images/512px/008e12a039f69f8a.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26e58c68022d267303b9defbb806110b78fd639455a7cfd700a310a2d7305cc4
+size 22210
diff --git a/original_openim/images/512px/00902c56bdcf10b6.jpg b/original_openim/images/512px/00902c56bdcf10b6.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..582bf86a396050c229392d0429c22c59575cb5a5
--- /dev/null
+++ b/original_openim/images/512px/00902c56bdcf10b6.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:08c112f4891c59925a27e6cbb36f232ad6dbb27320612b039a65b494728ebe22
+size 36517
diff --git a/original_openim/images/512px/0098263ae56016d3.jpg b/original_openim/images/512px/0098263ae56016d3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..761a88614612333451b26ef9147999f5360b5ac1
--- /dev/null
+++ b/original_openim/images/512px/0098263ae56016d3.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b0e7ea09779af6543bc1bd54e09289440544202f38c978ecd93a18c9252a6a16
+size 19124
diff --git a/original_openim/images/512px/0098755e846b745b.jpg b/original_openim/images/512px/0098755e846b745b.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..41f6d8b9449f7ee2c14acc3cc13c86221426c50d
--- /dev/null
+++ b/original_openim/images/512px/0098755e846b745b.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ddd9ce8246340c733f2a13ca1d484ea75915140ce253d2a95bb5b3dec7a9bc8d
+size 56126
diff --git a/original_openim/images/512px/00991ee13e849b07.jpg b/original_openim/images/512px/00991ee13e849b07.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..067245ba976ef979b49a313ef09c77b1e667d829
--- /dev/null
+++ b/original_openim/images/512px/00991ee13e849b07.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4efcbb3d68fff0cf1bbbcd1ec8e77eb04a3784f94f12765a78dc5fcf7babe29
+size 19505
diff --git a/original_openim/images/512px/009be28128a2bb65.jpg b/original_openim/images/512px/009be28128a2bb65.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..55c8c65274e919c46b4713649956b1b840ee9c91
--- /dev/null
+++ b/original_openim/images/512px/009be28128a2bb65.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e7d9db2eb09a1fd404f75334ae116d3e45d05f0f6ba7c2ad4e3157b85fdce785
+size 43232
diff --git a/original_openim/images/512px/009dfe7e81b732cb.jpg b/original_openim/images/512px/009dfe7e81b732cb.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..c1b7a41dc23a9fd2604e3b7dec53b07a47017554
--- /dev/null
+++ b/original_openim/images/512px/009dfe7e81b732cb.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2dc2496fa1adbf0d9d55cbc652d030aa994b52bd2d29bfae2eb75563bef6005
+size 25590
diff --git a/original_openim/images/512px/009ed9b2c12b097b.jpg b/original_openim/images/512px/009ed9b2c12b097b.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b31d1ab43597426b57612d8261987cdd1bdc8d14
--- /dev/null
+++ b/original_openim/images/512px/009ed9b2c12b097b.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4e3b91237490f9770742854fd26165661f35a8b9133a57ebbcee51ce8af25e24
+size 29073
diff --git a/original_openim/images/512px/00a0b916fd5941a3.jpg b/original_openim/images/512px/00a0b916fd5941a3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..d3efc2974fb3bb13dc2ae3b38ab4979e04b12259
--- /dev/null
+++ b/original_openim/images/512px/00a0b916fd5941a3.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5115ce09f997bba67d09e7e63144747d3f5a2df2bf4d816b7a54193e6a076edd
+size 22248
diff --git a/original_openim/images/512px/00a159a661a2f5aa.jpg b/original_openim/images/512px/00a159a661a2f5aa.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..e2fd8a002f60a48b7c9d7bb3b31f13746a846ead
--- /dev/null
+++ b/original_openim/images/512px/00a159a661a2f5aa.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:21bdc5771abfc20f0c6ec04b79639918541d59dbaac392bd56f8913bf74f3a81
+size 46988
diff --git a/original_openim/images/512px/00a1f2a6e7f78ac5.jpg b/original_openim/images/512px/00a1f2a6e7f78ac5.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8d7c943ec93f54908079053d0632279e7474b26f
--- /dev/null
+++ b/original_openim/images/512px/00a1f2a6e7f78ac5.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfaeabd8dfa48a462487e7d07b91ffc1a2546be85c75f13725111a6247598789
+size 36524
diff --git a/original_openim/images/512px/00a3654c1cf00d11.jpg b/original_openim/images/512px/00a3654c1cf00d11.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..d373bbd152882a0356f77fe39faace5e8c20345d
--- /dev/null
+++ b/original_openim/images/512px/00a3654c1cf00d11.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7397e04c2f57991e1bcbe896a728fa6eddc5425781fa991389c6f36fb8b17680
+size 31398
diff --git a/original_openim/images/512px/00a36f96e31731c4.jpg b/original_openim/images/512px/00a36f96e31731c4.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..0a77954f6852dae7a09a015d25648c39a3bd341a
--- /dev/null
+++ b/original_openim/images/512px/00a36f96e31731c4.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e196174f60ac39f0dd723fc5c714326a00bed99c084aef0fb436cdb6a14f6ea4
+size 31214
diff --git a/original_openim/images/512px/00a72fa141918070.jpg b/original_openim/images/512px/00a72fa141918070.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..e303c7a0303e1085781ca30509a17c7ca7595a42
--- /dev/null
+++ b/original_openim/images/512px/00a72fa141918070.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f3699a5d8a1239f3951d8eccaa411a591abdc5c34b6a07954b273086c789609
+size 69912
diff --git a/original_openim/images/512px/00a7655d4eabf186.jpg b/original_openim/images/512px/00a7655d4eabf186.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..bbe084b9e157734652ce58bc731815ab7471ae90
--- /dev/null
+++ b/original_openim/images/512px/00a7655d4eabf186.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ffd518f82492967e6473d4feaa7d61c1a919a27d5decd1f72959560dc35334e2
+size 39368
diff --git a/original_openim/images/512px/00abfe9035972732.jpg b/original_openim/images/512px/00abfe9035972732.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..555d68ef6cd7f869abb22ae03fa8cae47dc6fd45
--- /dev/null
+++ b/original_openim/images/512px/00abfe9035972732.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:91db450001bbd7125a68179ec96d8e3f111d787cbb204d2df242042446a51b47
+size 48794
diff --git a/original_openim/images/512px/00acf53b127218c2.jpg b/original_openim/images/512px/00acf53b127218c2.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..466ba603b8f5fa26f03fdcba20110a47d6e195dc
--- /dev/null
+++ b/original_openim/images/512px/00acf53b127218c2.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2671323a27efa1cab9b201d20fe9ca6ab9fcd59808d308ec39a11ceb23c3d053
+size 36257
diff --git a/original_openim/images/512px/00aff25b6c86b521.jpg b/original_openim/images/512px/00aff25b6c86b521.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b0f85bfef62dfef7a225c3551c4e93be419e7500
--- /dev/null
+++ b/original_openim/images/512px/00aff25b6c86b521.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a35216a699991dc5d20ba0842f3fa45216d833b4bff2e34d60b633d00bbdf452
+size 24051
diff --git a/original_openim/images/512px/00b29a6f872b1e1d.jpg b/original_openim/images/512px/00b29a6f872b1e1d.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..50d94a104d2b5ed191875affdc37e1484f9a4a04
--- /dev/null
+++ b/original_openim/images/512px/00b29a6f872b1e1d.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b05563c38e476d5cf37c6468de96a8817fa036451ed9461a765adbbf3b348137
+size 42605
diff --git a/original_openim/images/512px/00b3ad28957f7768.jpg b/original_openim/images/512px/00b3ad28957f7768.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..ecc523645a9e9a3f0e3a84e06e3bbf7c05c9dfd6
--- /dev/null
+++ b/original_openim/images/512px/00b3ad28957f7768.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:443233eec53bb9fbe3e8a542e293472b981e5a8b43acc67cc633123a9ea99379
+size 39182
diff --git a/original_openim/images/512px/00b4064b073e51f3.jpg b/original_openim/images/512px/00b4064b073e51f3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8272721428472634ded12205d6b3af2ec4d68e78
--- /dev/null
+++ b/original_openim/images/512px/00b4064b073e51f3.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9974dc89929b54ce84bf7824c92cf50fe1b00a1934857ceca35ede4d60601f40
+size 48109
diff --git a/original_openim/images/512px/00b44fc9c0296fb8.jpg b/original_openim/images/512px/00b44fc9c0296fb8.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..19fdba6a2e5e1a211515ce7747b48aa6a1de2505
--- /dev/null
+++ b/original_openim/images/512px/00b44fc9c0296fb8.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:736a2fabb7d94f74b88e4a1eb81c116bd39d1162476163f86597a999d082677c
+size 29099
diff --git a/original_openim/images/512px/00b75a8487446cdd.jpg b/original_openim/images/512px/00b75a8487446cdd.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..d3df0b9868ca7bd63d834518703e99735e2a3ea0
--- /dev/null
+++ b/original_openim/images/512px/00b75a8487446cdd.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac756d668cf13bcaa12bcfad15a54315a790e526914a7d1a5bd369704e707137
+size 36107
diff --git a/original_openim/images/512px/00b9f24a5a9f3f7b.jpg b/original_openim/images/512px/00b9f24a5a9f3f7b.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..dba6abae759eb71dfa2bbfcc060f3aeaf840f341
--- /dev/null
+++ b/original_openim/images/512px/00b9f24a5a9f3f7b.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:437f83bf3721fca1791e7c21397936d95d0264845be7809080fd6ceeec56add5
+size 31557
diff --git a/original_openim/images/512px/00bdb008eb688497.jpg b/original_openim/images/512px/00bdb008eb688497.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8b803403f5c6f3256c91055aa02e69d4dc798fc8
--- /dev/null
+++ b/original_openim/images/512px/00bdb008eb688497.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:17dce88ab0cc7e327ce0d05a97708cf581dd52eee763cb1e9134a266457aa049
+size 40656
diff --git a/original_openim/images/512px/00bdeda311caf18d.jpg b/original_openim/images/512px/00bdeda311caf18d.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..87b0cc4800c0b74c8a81a6035076835670f80280
--- /dev/null
+++ b/original_openim/images/512px/00bdeda311caf18d.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3df91c123ce9a0d8de1e8570823ccee4ebfb62a52c59bcc2415d784ca15033e
+size 33226
diff --git a/original_openim/images/512px/00c6c3288773471d.jpg b/original_openim/images/512px/00c6c3288773471d.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..e1013ad8b99a1e1538dac1300f864ab482a846f2
--- /dev/null
+++ b/original_openim/images/512px/00c6c3288773471d.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f369ce167cd2703dcd6be3a1af844da8a3a84df6a72c68c444a22a118b2ad16
+size 38622
diff --git a/original_openim/images/512px/00c710176e9c2996.jpg b/original_openim/images/512px/00c710176e9c2996.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..e79cbc72f6ec17b6c5c11f954557c629b4c46434
--- /dev/null
+++ b/original_openim/images/512px/00c710176e9c2996.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:28856c4bafbed938ce31977ee3ac4c55a186d78d2d65cd5fbe1465b74274b546
+size 24269
diff --git a/original_openim/images/512px/00c73a28068f9b33.jpg b/original_openim/images/512px/00c73a28068f9b33.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..d6d7aee561683d445c8caa60121adeaf82e516fe
--- /dev/null
+++ b/original_openim/images/512px/00c73a28068f9b33.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15f0d031f7838eb9b808897c7b81e669f5a4c63b7618190b991f40f6c69fe810
+size 24670
diff --git a/original_openim/images/512px/00c9616a917be867.jpg b/original_openim/images/512px/00c9616a917be867.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..a2eb59a8f3260faaa482762e66a2ab9925eb0fc5
--- /dev/null
+++ b/original_openim/images/512px/00c9616a917be867.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:063aba34892c73722afdc7ac01ed5bbfc8d0b3c8f4df83692ff51c67deb5e6c1
+size 53913
diff --git a/original_openim/images/512px/00ccda615ec9731d.jpg b/original_openim/images/512px/00ccda615ec9731d.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..e95327172956b8de52271fc1b42a76028004892a
--- /dev/null
+++ b/original_openim/images/512px/00ccda615ec9731d.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d1ebe905545c448b99f48eb39af22c6e64775af56414462d565bc53b5eed420
+size 40453
diff --git a/original_openim/images/512px/00cd12b9ee1905a7.jpg b/original_openim/images/512px/00cd12b9ee1905a7.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..087175b8abd93e46065358ac56217bb8e6d74968
--- /dev/null
+++ b/original_openim/images/512px/00cd12b9ee1905a7.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d7170bafa2e03e0b0357e13a1ea7e363c03ff1a07ab6679cd1b45e7a9363d1c
+size 43234
diff --git a/original_openim/images/512px/00cdf56c63191fd3.jpg b/original_openim/images/512px/00cdf56c63191fd3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..d61d5e1893ddf64c2c86b1c212e527e176d99260
--- /dev/null
+++ b/original_openim/images/512px/00cdf56c63191fd3.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06780629cdadd10a0ed4bbeda1946311906cf2ce17901f300204fd2c3ad0627e
+size 28467
diff --git a/original_openim/images/512px/00cfb039bd7f1eaf.jpg b/original_openim/images/512px/00cfb039bd7f1eaf.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..bf354b7f7ab00899ae83d53965bba7f1567c745c
--- /dev/null
+++ b/original_openim/images/512px/00cfb039bd7f1eaf.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af31cd08b6a0c06331b2466e9f783dcbfd290e477f2bdf6f42eeb90dce612648
+size 40127
diff --git a/original_openim/images/512px/00d3653a790b5fba.jpg b/original_openim/images/512px/00d3653a790b5fba.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..cb1e1563d6c08e29972b1294fe9c5e3089b563a9
--- /dev/null
+++ b/original_openim/images/512px/00d3653a790b5fba.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ac713d3212083200be8bf962ffda930d3e3fbd53a04b82a48298815988d5205
+size 48739
diff --git a/original_openim/images/512px/00d962525f1ea4e7.jpg b/original_openim/images/512px/00d962525f1ea4e7.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..c6bca26669346aa4686bbb7793d1561c474da6b5
--- /dev/null
+++ b/original_openim/images/512px/00d962525f1ea4e7.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b928a1508bd12dd8043549697b8299f6b4d656e34da8ce36b9b877df1b62e45
+size 34616
diff --git a/original_openim/images/512px/00db41a79f8def5b.jpg b/original_openim/images/512px/00db41a79f8def5b.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8942d5208df5e8a71c9dbdf9ff4bae2594dc42ac
--- /dev/null
+++ b/original_openim/images/512px/00db41a79f8def5b.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25d1b41957fbba565ac10f13425777ff2d497fec43d9995da7eec5fb47e19ecc
+size 62027
diff --git a/original_openim/images/512px/00de7b15cac52f42.jpg b/original_openim/images/512px/00de7b15cac52f42.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..a6e77da1bc96c06abf4eba7eba99fdf209c07694
--- /dev/null
+++ b/original_openim/images/512px/00de7b15cac52f42.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cbf79dea34b9695b9a67914e6bd4647f3383e2f316ec64dea018502ec24c2cbc
+size 16432
diff --git a/original_openim/images/original_imgs_orig_res/0004886b7d043cfd.jpg b/original_openim/images/original_imgs_orig_res/0004886b7d043cfd.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..7c01956d267071e5af47685fa6f7ee6379ec84dc
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0004886b7d043cfd.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00b7faacd4d44b78fccc07a38b4a4f67818de75881172c5738c77e3904ef8f6f
+size 375363
diff --git a/original_openim/images/original_imgs_orig_res/00075905539074f2.jpg b/original_openim/images/original_imgs_orig_res/00075905539074f2.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..a7a8048d46ce4dd4179bb03c5ae846b398764742
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00075905539074f2.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59e6f03b69dc3008fe1941f6a79db5c3502b8f959e32b9c21c40458d2d01f296
+size 302326
diff --git a/original_openim/images/original_imgs_orig_res/0008e425fb49a2bf.jpg b/original_openim/images/original_imgs_orig_res/0008e425fb49a2bf.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8885b451e9e9eae66e03d78f0ecebd2a34169f0f
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0008e425fb49a2bf.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d4ecc9eb4140f73b0e796b49b7b66f25bd89d7ba05cf9fe1531e2c371d6ed12
+size 415082
diff --git a/original_openim/images/original_imgs_orig_res/000c4d66ce89aa69.jpg b/original_openim/images/original_imgs_orig_res/000c4d66ce89aa69.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..a112e9b3ebdca59f9ee1d19f5b1d63e197548008
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/000c4d66ce89aa69.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:21369ff0f3936dd0f4624cd389be0c03a3f969a9db27b2ec4d750de24ce66c87
+size 372486
diff --git a/original_openim/images/original_imgs_orig_res/00101a0160a05d31.jpg b/original_openim/images/original_imgs_orig_res/00101a0160a05d31.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..cb1f029c4ef11a58663526e39f5137b1ae26f965
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00101a0160a05d31.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:685a88e536c99d558d1cc10070646ebf1c0e3bcee6d0b7c5b2a2c7d3f9521cdf
+size 167351
diff --git a/original_openim/images/original_imgs_orig_res/0010c714a5da358a.jpg b/original_openim/images/original_imgs_orig_res/0010c714a5da358a.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..fe1f9d3309c34bb74c78d62de2db05828e784fc4
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0010c714a5da358a.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7982d36975489ac6902d5ac134a183ed6cf884f773b25e2f30b60c9c0f99489
+size 136872
diff --git a/original_openim/images/original_imgs_orig_res/0013ea2087020901.jpg b/original_openim/images/original_imgs_orig_res/0013ea2087020901.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..1633684bafe9f54ed9be57198a7bc688af2a191d
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0013ea2087020901.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f842afd71205dd007f4bfb2b63f43e0002feae80e514eb9fffbf406739b04cc1
+size 103259
diff --git a/original_openim/images/original_imgs_orig_res/00141571d986d241.jpg b/original_openim/images/original_imgs_orig_res/00141571d986d241.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..7401ed25ded88e8265aa584f428dea2f10803e72
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00141571d986d241.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:70613edc8e72a8f58ac0b135e835ec924227efd80c6b23a585096918ce5817a1
+size 295629
diff --git a/original_openim/images/original_imgs_orig_res/00146ba1e50ed8d8.jpg b/original_openim/images/original_imgs_orig_res/00146ba1e50ed8d8.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..e10809f2bfc1aaf3025a1d674bb3e6f1c6423139
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00146ba1e50ed8d8.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2986ee1183efa20587e15945d188a3efd1c615f0285295bff701deb963e97e0c
+size 195750
diff --git a/original_openim/images/original_imgs_orig_res/00173afdc7581c41.jpg b/original_openim/images/original_imgs_orig_res/00173afdc7581c41.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..279e1164743d8678d1b6bd02be80ceb849d342e0
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00173afdc7581c41.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63338cb4c83b0a3361536a64cacf04fad66f9d7f12a47b271dedc74d1fe0bdec
+size 51264
diff --git a/original_openim/images/original_imgs_orig_res/001840a807e454c7.jpg b/original_openim/images/original_imgs_orig_res/001840a807e454c7.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..665f349063aa4702dab37c7c33cd30be0b88e160
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/001840a807e454c7.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:158e9bc0c0fdaef7203ddc7a128be944684d09861e79ebb78833bccfa86a40c0
+size 141205
diff --git a/original_openim/images/original_imgs_orig_res/0019308d876736fe.jpg b/original_openim/images/original_imgs_orig_res/0019308d876736fe.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..88f9b8619ad35507fdfc49eba6bd3f512c41c6e6
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0019308d876736fe.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96c4c0cf75bbcebcbd02038f14397c5a6a902a0edbc81ac4dd02c67ff1e14469
+size 98330
diff --git a/original_openim/images/original_imgs_orig_res/001997021f01f208.jpg b/original_openim/images/original_imgs_orig_res/001997021f01f208.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..824feb0b470e971dec91c5bee1927d26895ebd9d
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/001997021f01f208.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e39eda1f176f14449ca061b78c3e4b71698111b6b7742251bd8d4eb2ec7ef4d
+size 479133
diff --git a/original_openim/images/original_imgs_orig_res/001a78754e43abc5.jpg b/original_openim/images/original_imgs_orig_res/001a78754e43abc5.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8acd37104374f19776b16a68cd592439a64f97ae
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/001a78754e43abc5.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1d11cf949cde2374ea478f48e5cd01a3db36787edb2c48adb0f53ab10f48dce6
+size 505631
diff --git a/original_openim/images/original_imgs_orig_res/001a809ad40a2f84.jpg b/original_openim/images/original_imgs_orig_res/001a809ad40a2f84.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..877abce4326e793d9703fb74b1cda4fe14a5e0a2
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/001a809ad40a2f84.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:047fe7c5eda9db19d5e7763a88bd4a2c1ae623264193011a77518d0656938109
+size 113546
diff --git a/original_openim/images/original_imgs_orig_res/001a995c1e25d892.jpg b/original_openim/images/original_imgs_orig_res/001a995c1e25d892.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..47a4480a38639630b2e8a83ae973cadc2bba026e
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/001a995c1e25d892.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7650f085d42642e235f48bb24e6bb283042a26c264e38bb57c6159ae2d898a50
+size 216648
diff --git a/original_openim/images/original_imgs_orig_res/001ffaceaff5f33f.jpg b/original_openim/images/original_imgs_orig_res/001ffaceaff5f33f.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..1e23a23dbc545afdfae250c1e6086db75ebb5fe3
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/001ffaceaff5f33f.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87d0d1522a72c416b13cbdafd5f082fd3179f8b907c76b1df763d5815dcf4405
+size 4314797
diff --git a/original_openim/images/original_imgs_orig_res/0022bffa9abfb554.jpg b/original_openim/images/original_imgs_orig_res/0022bffa9abfb554.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..db623eb602ae6e32a7014c5e224777a58c8a1dad
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0022bffa9abfb554.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:534a554fd9acc561b58fb08b516be1e6f235b8fa6de6477a2e2cafbe32c9755d
+size 505350
diff --git a/original_openim/images/original_imgs_orig_res/0022d0ab1e1347ab.jpg b/original_openim/images/original_imgs_orig_res/0022d0ab1e1347ab.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..4db3deafa58de273383aa57fcd85ae69d26b66da
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0022d0ab1e1347ab.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:50d6cbf541df80f8b6132d7c80d4eee48c14ddcf5d68c8738114b974efc3b439
+size 180024
diff --git a/original_openim/images/original_imgs_orig_res/0022ecd6f681bed6.jpg b/original_openim/images/original_imgs_orig_res/0022ecd6f681bed6.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b3de5c538fd13055b9a3e7c64e8f9da054e7c6ab
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0022ecd6f681bed6.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9677bf643598b40b88e6a0f9d06ff1cf60cc0f83d573662dc9b4c034633f595
+size 263938
diff --git a/original_openim/images/original_imgs_orig_res/002aab1d644cae0e.jpg b/original_openim/images/original_imgs_orig_res/002aab1d644cae0e.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..9a15948febd59cef12b81c952e006f6f962cd085
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/002aab1d644cae0e.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7f7a01a45879abe3f76fe5be798396934ab9d202ab9449eb1a7a890dea55cfe
+size 79633
diff --git a/original_openim/images/original_imgs_orig_res/002cfe2087f432e0.jpg b/original_openim/images/original_imgs_orig_res/002cfe2087f432e0.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..84f3d6dd6051d373dc7ee8c77b43ff10fae9e853
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/002cfe2087f432e0.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:789b4e6d751e786b1b1807bd5481be14dd14720e512f69aff48444865f1f39bd
+size 222363
diff --git a/original_openim/images/original_imgs_orig_res/002d1dd67c722d98.jpg b/original_openim/images/original_imgs_orig_res/002d1dd67c722d98.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..37c7aded61f72dc6939813d4feacdf0646671046
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/002d1dd67c722d98.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7b779b6b52694964a0e9272a9a3b692360b09b76fdb8eb32b8e3c36552e8acb
+size 173314
diff --git a/original_openim/images/original_imgs_orig_res/002f8241bd829022.jpg b/original_openim/images/original_imgs_orig_res/002f8241bd829022.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..04280918ca3fcc484d70a6db74047d17ab7b80b1
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/002f8241bd829022.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ff98eb319e886f808d6b823f013ea4ced9bf790142304fb8b09aa9c0e26c654f
+size 43712
diff --git a/original_openim/images/original_imgs_orig_res/0032257bf3cd56d0.jpg b/original_openim/images/original_imgs_orig_res/0032257bf3cd56d0.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..d1b97e8d9ca242da43d04fd4f5c34ad6f5f687b5
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0032257bf3cd56d0.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b99fbe6c0f8dc4ae5c5c4dbd5e21178af23b4e7fdf90e440b95a6fa742f7a92d
+size 437858
diff --git a/original_openim/images/original_imgs_orig_res/003232584a062b07.jpg b/original_openim/images/original_imgs_orig_res/003232584a062b07.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..c1d81b03014db3188ff0beeb7886a56b0af774b0
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/003232584a062b07.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:abfd3dc64c879752bcb2a5df87f4aa8d25f5e530ff2ef45a2f7159ebc2a03822
+size 361523
diff --git a/original_openim/images/original_imgs_orig_res/00358e88c4d3c953.jpg b/original_openim/images/original_imgs_orig_res/00358e88c4d3c953.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..9dfcbcb612de48f78348bd75ae3734771b7f07f6
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00358e88c4d3c953.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5b72d2f61ba26d34bfe37c875980e967f64ad7d9783297ed49ad5e1d62e08ec7
+size 799322
diff --git a/original_openim/images/original_imgs_orig_res/0035a4bfeda1b637.jpg b/original_openim/images/original_imgs_orig_res/0035a4bfeda1b637.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..28511df01738aab62331155def75ea967d6ed9f7
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0035a4bfeda1b637.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:12e39383c7f5b64cb43b77f31e9e50af980309bc7b48e33db3cdd358a742fc1b
+size 198039
diff --git a/original_openim/images/original_imgs_orig_res/0035a5f752a459e1.jpg b/original_openim/images/original_imgs_orig_res/0035a5f752a459e1.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..e10208310ab150ffb6fd328dd5961c499919584c
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0035a5f752a459e1.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a0b3422f16a30b575ad0c6e1ed326355d48013164eadf2feacafd5ea55c96655
+size 530426
diff --git a/original_openim/images/original_imgs_orig_res/0035c28612c035fd.jpg b/original_openim/images/original_imgs_orig_res/0035c28612c035fd.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..af8287982a6996019422dd912ca7ebe9015cec33
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0035c28612c035fd.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a290d3f1003301063998ea9d2d7be18c9d4f046d2d5e5f1a6dffbf7b621ae16b
+size 596595
diff --git a/original_openim/images/original_imgs_orig_res/00361cb81ebf27c3.jpg b/original_openim/images/original_imgs_orig_res/00361cb81ebf27c3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..cbe28b9d05c41a2073afd6223c782414c5b274c8
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00361cb81ebf27c3.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec7bf20cc9b4557d3c28820bd2d4d4cd4206463eae85d33946f0a1a562807e13
+size 105241
diff --git a/original_openim/images/original_imgs_orig_res/00385794700c832e.jpg b/original_openim/images/original_imgs_orig_res/00385794700c832e.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..95ac4bdba8b41fe4535f6e8b5f5438afb804e8bc
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00385794700c832e.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c2d7eee72e2f5dd4a6e7f42ce8a5e743ac54376afb19623cf6537d3fd80f1ece
+size 468472
diff --git a/original_openim/images/original_imgs_orig_res/003e1e6baff436f7.jpg b/original_openim/images/original_imgs_orig_res/003e1e6baff436f7.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..05ffdd4db28f4ba411ccc3594a0b537fb09070ab
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/003e1e6baff436f7.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b3d90d4ebb80d9cff85dc2866f18dc2f483154147eba9e3a28718149629975e4
+size 275566
diff --git a/original_openim/images/original_imgs_orig_res/0040009ad56c2bc2.jpg b/original_openim/images/original_imgs_orig_res/0040009ad56c2bc2.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..df543826c1d81226e24f677a44dadfee592ea801
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0040009ad56c2bc2.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1ed9ec78e7f562611db672eae1ee7f4a824d4a19fd9580eb63295301063b4c90
+size 207699
diff --git a/original_openim/images/original_imgs_orig_res/004545770f4770c7.jpg b/original_openim/images/original_imgs_orig_res/004545770f4770c7.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..2cfab2dac6440c63087fb6fd79f35e14a5b11118
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/004545770f4770c7.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55e6d9d254873e004a1b915ca52bf017ff8fdfc2dcd962fb37ba5f1cf753e166
+size 267532
diff --git a/original_openim/images/original_imgs_orig_res/00455ae2731e1046.jpg b/original_openim/images/original_imgs_orig_res/00455ae2731e1046.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..c23fcec0ff94a8c1b33534334065ae6b3a1cec22
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00455ae2731e1046.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d25c28467d24fa554f778c91b12a258d791f130da2b0f2436d0f708aba85c409
+size 101344
diff --git a/original_openim/images/original_imgs_orig_res/0049a724f5dc20e4.jpg b/original_openim/images/original_imgs_orig_res/0049a724f5dc20e4.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..06894ba0bdda7ed8e2552ce3d5d3687ce77ba4dd
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0049a724f5dc20e4.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:48dfb61cfbb3d96302834d0b3ce0e18d23f515c7b3b6a4e1c3fdc114ef8aafce
+size 867769
diff --git a/original_openim/images/original_imgs_orig_res/004a9ec75eca5910.jpg b/original_openim/images/original_imgs_orig_res/004a9ec75eca5910.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..1b0ba01b70972de26088b5ce452d458a2ee96ee4
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/004a9ec75eca5910.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be5428936ee6887c30a823bfe33af4beacc26bbb392ba121d638d7e710fc76b3
+size 182210
diff --git a/original_openim/images/original_imgs_orig_res/004dd2ac922baf98.jpg b/original_openim/images/original_imgs_orig_res/004dd2ac922baf98.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..6c3a2afcad9edd0185f289f4675fa24545bf376b
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/004dd2ac922baf98.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59bfaba427c7dc474ab1a58158aa705698030d92a6423a6c36976c769c436eef
+size 177496
diff --git a/original_openim/images/original_imgs_orig_res/004e21eb2e686f40.jpg b/original_openim/images/original_imgs_orig_res/004e21eb2e686f40.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..a5dbf72802eaa4976c055decdc19b492329e9629
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/004e21eb2e686f40.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab97123f4c9b1d3ed48b411e0a3ab760f29d14ebb0a68f0ee7ab0bdd15d65fe8
+size 256318
diff --git a/original_openim/images/original_imgs_orig_res/0052ea56ee869426.jpg b/original_openim/images/original_imgs_orig_res/0052ea56ee869426.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..4c0ad3d28afad61663652dd432100a5956675481
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0052ea56ee869426.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1f5d107f7835cc31082c4c5046f5aad55d4667f1bf9e82ca0c7d0d0db1460cc7
+size 117233
diff --git a/original_openim/images/original_imgs_orig_res/005b598d8fecb139.jpg b/original_openim/images/original_imgs_orig_res/005b598d8fecb139.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..cce9e93a18c88844b291a4854bc21123203becf5
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/005b598d8fecb139.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18317e16da8163495f32d9fad68bfc660dcd91cf65aeb8c4baf636e3b5334fe3
+size 402120
diff --git a/original_openim/images/original_imgs_orig_res/0060dfb7f9a468b5.jpg b/original_openim/images/original_imgs_orig_res/0060dfb7f9a468b5.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..402aad5f95d9109b66641b8d8d10df396c0075c2
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0060dfb7f9a468b5.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2787a691cc902738e05c18fee73272b571ad37c63f4f29725cb94aba3501fce4
+size 199575
diff --git a/original_openim/images/original_imgs_orig_res/006389262f7ba7f1.jpg b/original_openim/images/original_imgs_orig_res/006389262f7ba7f1.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..0fa56ebcfe31958ddd7e9033fc95b4e0784fb302
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/006389262f7ba7f1.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90d392c80c71068850037b77d0d4b61ce22ce15e7cd9f4e8908f356ae2ec4050
+size 407433
diff --git a/original_openim/images/original_imgs_orig_res/006541ee51c3abef.jpg b/original_openim/images/original_imgs_orig_res/006541ee51c3abef.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..149c50dded6c8b5ac6ec60b5473ced7e26c4edd0
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/006541ee51c3abef.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a8005fd012eb216b1ab2b818b877ff44488fd887401b18a2f412aa49326e9d41
+size 85531
diff --git a/original_openim/images/original_imgs_orig_res/0065e1098f7a353b.jpg b/original_openim/images/original_imgs_orig_res/0065e1098f7a353b.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..c3928008f70bd84cfb4ab9736fefa27bdaf27ac4
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0065e1098f7a353b.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:180377b2e5e2ffdd81abaf63af10881ff01bad6e4fa1e33e6ea4449adb6f1751
+size 555303
diff --git a/original_openim/images/original_imgs_orig_res/0068ca42dd4c4a2e.jpg b/original_openim/images/original_imgs_orig_res/0068ca42dd4c4a2e.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..633b0592bc112bb4319af6f68ac93e672f86d283
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0068ca42dd4c4a2e.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cbf34d7f863b7d2262f153c44d67ef3bb9239d0622a3a8a508284fc8e5cca92f
+size 330316
diff --git a/original_openim/images/original_imgs_orig_res/006c8ac9b5e6906a.jpg b/original_openim/images/original_imgs_orig_res/006c8ac9b5e6906a.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..d6d227afa588b8e9d67ce4430d09d647a0cbb855
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/006c8ac9b5e6906a.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d959bf903e9972fcbca91066c85de9251ffe63744bb2dc8979496c093843540
+size 154871
diff --git a/original_openim/images/original_imgs_orig_res/00714f9e7d062900.jpg b/original_openim/images/original_imgs_orig_res/00714f9e7d062900.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..479157d4b3c64ad297e94995947b8767621cbf90
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00714f9e7d062900.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25d3823c8bc2e97e870777cec93047758b1975c73e114694f74d7b6aa9ea2d59
+size 87962
diff --git a/original_openim/images/original_imgs_orig_res/007170add0cfe316.jpg b/original_openim/images/original_imgs_orig_res/007170add0cfe316.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..84687d71838ea7089cc1aa31ce7cd40f4a2575ca
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/007170add0cfe316.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cb3fc4e345c9528973b44ace863f61ed9b38207446e6f33b92ab6543951baf5a
+size 313421
diff --git a/original_openim/images/original_imgs_orig_res/0071f62f5d703904.jpg b/original_openim/images/original_imgs_orig_res/0071f62f5d703904.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..318ed18c01087638511c72cdf92ccd6c5767035b
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0071f62f5d703904.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:078a0948203fbb1edde24d9b75bdfd2361e75266b04f3ca94a9c7558a6b46ab2
+size 775000
diff --git a/original_openim/images/original_imgs_orig_res/00723dac8201a83e.jpg b/original_openim/images/original_imgs_orig_res/00723dac8201a83e.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..9bdcd70fc277d91c7407d63caa9d5de50a221836
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00723dac8201a83e.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:14a9cd49ca9b947b55190c0f7da94f14c04a3153707ce86207ec1fbce41ee4f0
+size 440989
diff --git a/original_openim/images/original_imgs_orig_res/007384da2ed0464f.jpg b/original_openim/images/original_imgs_orig_res/007384da2ed0464f.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8002c3bb9a0d36bf8d47de25fded1511e232fe13
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/007384da2ed0464f.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:88da10742f6b9502005564e17d7e674ad69ad6b94d27f64a7d9fd818ece50955
+size 226432
diff --git a/original_openim/images/original_imgs_orig_res/0076e0b90158151c.jpg b/original_openim/images/original_imgs_orig_res/0076e0b90158151c.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..fd272b6088d22a95f495a3e14683fa6b11df4dd6
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0076e0b90158151c.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f8208a7b24e0815a77aec8e82d2f6ec1641b697347ded4cd4d2ad731899fcae3
+size 168794
diff --git a/original_openim/images/original_imgs_orig_res/0077e1cc5010e074.jpg b/original_openim/images/original_imgs_orig_res/0077e1cc5010e074.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..fda90394d3d051d9c819f19afab1e6766eb4df0f
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0077e1cc5010e074.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e6479be81c57a19d18e5c80a279b2f5e786f3f4160a83fd4b0bcb3e0c4bcfac0
+size 191416
diff --git a/original_openim/images/original_imgs_orig_res/0077f8de643853ca.jpg b/original_openim/images/original_imgs_orig_res/0077f8de643853ca.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..d3f88152376e80a6826ea1fffdcadc080465af13
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0077f8de643853ca.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f2c225f9a0c9da89b0356f1d1ac32a5b3a4d06bbf0501152a7be595be63f14e
+size 300495
diff --git a/original_openim/images/original_imgs_orig_res/007f71665b0812a7.jpg b/original_openim/images/original_imgs_orig_res/007f71665b0812a7.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..c8dcd12d630f04ede597eb17debeb69cc1fca7d6
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/007f71665b0812a7.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:414d28641191941f763f59e8ab9a7a3f435c3ed4425055bb163dce6b6abca9e9
+size 214567
diff --git a/original_openim/images/original_imgs_orig_res/0081f359f925712e.jpg b/original_openim/images/original_imgs_orig_res/0081f359f925712e.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..2241484c8def7bed3aa6ad818d2ea04577fae851
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0081f359f925712e.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:97aefe1c6ea14619b7456b916aa90fdacbd4ea3334747743be43fa75df29664e
+size 122018
diff --git a/original_openim/images/original_imgs_orig_res/00846fb55a143fbe.jpg b/original_openim/images/original_imgs_orig_res/00846fb55a143fbe.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..d870cada539387c74f4012a8aed26b8b33eaa9d9
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00846fb55a143fbe.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2fef34a56662945ee83622d8ebdc886b14483feecb64ae7bbc048c377e7d25a7
+size 108438
diff --git a/original_openim/images/original_imgs_orig_res/008637722500f239.jpg b/original_openim/images/original_imgs_orig_res/008637722500f239.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..7c65ed034e06b9876726fb334e86abdd3bbde7b2
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/008637722500f239.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d9da6515cfff2e25aea94ba9a52954fff1ec5e7269533b2a94ee2e9f7837d638
+size 151661
diff --git a/original_openim/images/original_imgs_orig_res/00876549dfcbcbad.jpg b/original_openim/images/original_imgs_orig_res/00876549dfcbcbad.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..83e3f1c435bb5c09de03cd9428ada7d3b91017dd
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00876549dfcbcbad.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1336e90de01d4bac80e72ceeab1ed7d429b5fcd81c31ae74ee30a0bb49ee578
+size 187586
diff --git a/original_openim/images/original_imgs_orig_res/0089b8f6212315ec.jpg b/original_openim/images/original_imgs_orig_res/0089b8f6212315ec.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..2311480e2e32f252c6f0a73733bb7d40e2bb72da
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0089b8f6212315ec.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b0385612da13f1bfb4269f7babb135c4ccbf37fd944a9c2f1e224d0a29d49447
+size 408588
diff --git a/original_openim/images/original_imgs_orig_res/008e12a039f69f8a.jpg b/original_openim/images/original_imgs_orig_res/008e12a039f69f8a.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..ffe61bba9b333fefb3e38813d05ed21259cab222
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/008e12a039f69f8a.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5fb985de14b90eccb007c136d182be9c4e7a5ec743b0bd8fe0a0b4661baaed2b
+size 329001
diff --git a/original_openim/images/original_imgs_orig_res/00902c56bdcf10b6.jpg b/original_openim/images/original_imgs_orig_res/00902c56bdcf10b6.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..dbd4e6ad831bd6e09014cb5566095b08ec881a2d
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00902c56bdcf10b6.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e7234c1783b0b489e696ecc70a41e7312a59a17069856158cf54b47acef5d3d
+size 144068
diff --git a/original_openim/images/original_imgs_orig_res/0098263ae56016d3.jpg b/original_openim/images/original_imgs_orig_res/0098263ae56016d3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..83b038a4f72ccd415c9bdf4d5a6bb41491f7d407
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0098263ae56016d3.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a104221bce9c604b19943ce74f16f0fb947e6a12e9aa6f3cad7fb8471d436ad1
+size 100242
diff --git a/original_openim/images/original_imgs_orig_res/0098755e846b745b.jpg b/original_openim/images/original_imgs_orig_res/0098755e846b745b.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..2ea4e552f381ae4eb84e5c2b26db33090436b8f8
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/0098755e846b745b.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b4ea299c68c2fc41be125b6088795b85c8c46750f0fd75837478cf740c41f5bf
+size 287262
diff --git a/original_openim/images/original_imgs_orig_res/00991ee13e849b07.jpg b/original_openim/images/original_imgs_orig_res/00991ee13e849b07.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..3b580886a3a944d237a3126c16ec75fd6bb4a8b4
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00991ee13e849b07.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad1f84b9fe86797fac3ed37ebad1180088e3e2e3c598774e6d21de98e0693b81
+size 591640
diff --git a/original_openim/images/original_imgs_orig_res/009be28128a2bb65.jpg b/original_openim/images/original_imgs_orig_res/009be28128a2bb65.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..6c3746eba43e898ce9b4de4de27a56e1a6ebb3d6
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/009be28128a2bb65.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65bcbf43b84a1fb44f434f709ae0ccbad0ad4837234c58db93a7c3827e2da04a
+size 168405
diff --git a/original_openim/images/original_imgs_orig_res/009dfe7e81b732cb.jpg b/original_openim/images/original_imgs_orig_res/009dfe7e81b732cb.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..c6dbcc7b69f18b43b7d8a7edd76fd0c9e6c79195
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/009dfe7e81b732cb.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e53cece26a70be5ef81b9859e3826ddb384dcc3efc70d88deacb36a999e1ce2
+size 801035
diff --git a/original_openim/images/original_imgs_orig_res/009ed9b2c12b097b.jpg b/original_openim/images/original_imgs_orig_res/009ed9b2c12b097b.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..00d0186cf98faa89e48c6f90508b667632e1b3e9
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/009ed9b2c12b097b.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:317a7f59d542a09b8aa66985effa0850d0db214180c86c58507ee2c69d1d0ca1
+size 169821
diff --git a/original_openim/images/original_imgs_orig_res/00a0b916fd5941a3.jpg b/original_openim/images/original_imgs_orig_res/00a0b916fd5941a3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..20dbe5d496cfcabb1d7868e4c83e077c9ae23e8c
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00a0b916fd5941a3.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e6c9aa37f5d828282b1a2010a856a69b6472465d76c2400776db5d10262964c6
+size 447914
diff --git a/original_openim/images/original_imgs_orig_res/00a159a661a2f5aa.jpg b/original_openim/images/original_imgs_orig_res/00a159a661a2f5aa.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..388219f0140a26cb47f111081dc80b0851cb0471
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00a159a661a2f5aa.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c29fd15a111e72a6a2fc0b0b5a0b73ebb6ec5ca520f4bc929364c5b0f96d8c1b
+size 460087
diff --git a/original_openim/images/original_imgs_orig_res/00a1f2a6e7f78ac5.jpg b/original_openim/images/original_imgs_orig_res/00a1f2a6e7f78ac5.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..5cadb3544c0e50084707839eb9f4b0f00b7904f1
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00a1f2a6e7f78ac5.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8577b0ac8eec6ee2d3d82fd09893a2a0045b05cef571914b675045cf5b509a0e
+size 115131
diff --git a/original_openim/images/original_imgs_orig_res/00a3654c1cf00d11.jpg b/original_openim/images/original_imgs_orig_res/00a3654c1cf00d11.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..6bd6ef3437e84e0ece60dfbd342de55c290073b6
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00a3654c1cf00d11.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:734900eba117c6537ba676f64f787276fe69bf67d723422d8fca84a7b5bdc38e
+size 135873
diff --git a/original_openim/images/original_imgs_orig_res/00a36f96e31731c4.jpg b/original_openim/images/original_imgs_orig_res/00a36f96e31731c4.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..709afbc32d7692dd162e3ff2244f12c67502eca1
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00a36f96e31731c4.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c87a0134a53af87739cc60bcf6cec966e7fc475f3d3ae9032095299dbe8a8124
+size 143000
diff --git a/original_openim/images/original_imgs_orig_res/00a72fa141918070.jpg b/original_openim/images/original_imgs_orig_res/00a72fa141918070.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..bd57fb051f46651f6b127c92ab095dcba009f2eb
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00a72fa141918070.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11fef120289a20fef0b59fbaf65febbd370dfed2c8c49d04254df798faf0c434
+size 477732
diff --git a/original_openim/images/original_imgs_orig_res/00a7655d4eabf186.jpg b/original_openim/images/original_imgs_orig_res/00a7655d4eabf186.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..9f5f6d8f97d24e9cb2cef8aedd26a60fd2496db5
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00a7655d4eabf186.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e78d76f755e84788604ccbbbcdc66451bdb26d5d20a5694d8040849dacb7035a
+size 270885
diff --git a/original_openim/images/original_imgs_orig_res/00abfe9035972732.jpg b/original_openim/images/original_imgs_orig_res/00abfe9035972732.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..ea180dbf4ee414899c2ebd339e545ef03fbf3c2f
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00abfe9035972732.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:24b66c7259ae063c6e94d8b45bf98d2ca3e05dc10d338c6b4f20a6861bbffc17
+size 150368
diff --git a/original_openim/images/original_imgs_orig_res/00acf53b127218c2.jpg b/original_openim/images/original_imgs_orig_res/00acf53b127218c2.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..1caa6af92d2de6f01a7c080488b7451b384ecb28
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00acf53b127218c2.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59ac710b45d4732154f445ada7da7fd262577ea56ea5f23b93cfaa115b17918c
+size 116061
diff --git a/original_openim/images/original_imgs_orig_res/00aff25b6c86b521.jpg b/original_openim/images/original_imgs_orig_res/00aff25b6c86b521.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..cd32af74a6abaa8f471047526361592069536af8
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00aff25b6c86b521.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:933af7c7efa96f8e8d2319e83e5637ebc5b7481291bef1589007409fd35250aa
+size 114214
diff --git a/original_openim/images/original_imgs_orig_res/00b29a6f872b1e1d.jpg b/original_openim/images/original_imgs_orig_res/00b29a6f872b1e1d.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8626f4b55ab4a9ea26f8372fa34db26a709b202e
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00b29a6f872b1e1d.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b629ee0d4a51b9b5137e705a5f2314aeb1a1d4fe9c92b99d9a9e480818fca6a0
+size 177974
diff --git a/original_openim/images/original_imgs_orig_res/00b3ad28957f7768.jpg b/original_openim/images/original_imgs_orig_res/00b3ad28957f7768.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..9190dbb2c7347bb4a93b3939e002124c79cca64a
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00b3ad28957f7768.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6de565657933d966acdda4dcd62e004369dd4def2bb42cd938bd338abdda7024
+size 153100
diff --git a/original_openim/images/original_imgs_orig_res/00b4064b073e51f3.jpg b/original_openim/images/original_imgs_orig_res/00b4064b073e51f3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b54160e74177977ce2fb9d59e0e08f4a78d501c3
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00b4064b073e51f3.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18fc90ebf4c64162f7541c53f9e956e5f0cae6d7f7aa2ad345cb28ac0dc8715d
+size 435039
diff --git a/original_openim/images/original_imgs_orig_res/00b44fc9c0296fb8.jpg b/original_openim/images/original_imgs_orig_res/00b44fc9c0296fb8.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..ada024bf3c5b4dd8006f63c3b64f6113050fc426
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00b44fc9c0296fb8.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b1ae9def4533411d824a5711e4a2eba8c21cdf7d6b500c959aa61ba9c7d031d
+size 574675
diff --git a/original_openim/images/original_imgs_orig_res/00b75a8487446cdd.jpg b/original_openim/images/original_imgs_orig_res/00b75a8487446cdd.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..e3d8890c57e24e175ada90df15568164d3843e43
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00b75a8487446cdd.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7e2fdc1ff27c351930c61c224e7f2f45f743afcfbb5f2b699cb1ca17ae0bfdc
+size 219546
diff --git a/original_openim/images/original_imgs_orig_res/00b9f24a5a9f3f7b.jpg b/original_openim/images/original_imgs_orig_res/00b9f24a5a9f3f7b.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b35c9e384a09d5f39d271ab3bb093d73c615ea9c
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00b9f24a5a9f3f7b.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35e24cc28a42f110b64c436b909640c7ddec2140c5cb49a14775e9b7d035d2ad
+size 168075
diff --git a/original_openim/images/original_imgs_orig_res/00bdb008eb688497.jpg b/original_openim/images/original_imgs_orig_res/00bdb008eb688497.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b17d3bba2105b96b810f720b45ded8b6ea0fcfce
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00bdb008eb688497.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ab92d7f05b62b6467ce7000b28ed72698c582108f8a3a763d6f52b5aa3cfb1ee
+size 212295
diff --git a/original_openim/images/original_imgs_orig_res/00bdeda311caf18d.jpg b/original_openim/images/original_imgs_orig_res/00bdeda311caf18d.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..6076fd2ae0981075f4e264881092acbe110a780d
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00bdeda311caf18d.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fdc64b346f1fd8cd7131371872ddc074b7dfa4e204b44ab85ea7cccc4b847860
+size 152070
diff --git a/original_openim/images/original_imgs_orig_res/00c6c3288773471d.jpg b/original_openim/images/original_imgs_orig_res/00c6c3288773471d.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..947704b6406808f925650e5c9074328268d03fe2
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00c6c3288773471d.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d3890c433caa23546c0a60dc384069d6eb529592f09b711981958dd26f7a5d1
+size 122215
diff --git a/original_openim/images/original_imgs_orig_res/00c710176e9c2996.jpg b/original_openim/images/original_imgs_orig_res/00c710176e9c2996.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..b17204ca51503c3c4d1d7c59280d4a9c5080ca34
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00c710176e9c2996.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7b69c87f338c676013fab6c5bb4c428976b55bbf2ee9af5d1fa446ad3704015
+size 433098
diff --git a/original_openim/images/original_imgs_orig_res/00c73a28068f9b33.jpg b/original_openim/images/original_imgs_orig_res/00c73a28068f9b33.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..f79ef1dc7f0d8dff6c93502feb653741dad53817
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00c73a28068f9b33.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0a77a43e3c6ba3309d167f79eb621753b9c1e6c9afd8e034e7999de0721f30b
+size 149766
diff --git a/original_openim/images/original_imgs_orig_res/00c9616a917be867.jpg b/original_openim/images/original_imgs_orig_res/00c9616a917be867.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8cd323b966620b74d43b24d3dfb32313149b2843
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00c9616a917be867.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b6cf15b3c838bdd3cdbe5d1d1d4b6019f55cb774681856c6bd2960dad89a20d
+size 991352
diff --git a/original_openim/images/original_imgs_orig_res/00ccda615ec9731d.jpg b/original_openim/images/original_imgs_orig_res/00ccda615ec9731d.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..9d7ef8153d48b2b709a211693b5ae470654f2a70
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00ccda615ec9731d.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0506226639e2a0d21c18902862a2af5e4d00c01d6447605d66fc10fe2d014275
+size 226723
diff --git a/original_openim/images/original_imgs_orig_res/00cd12b9ee1905a7.jpg b/original_openim/images/original_imgs_orig_res/00cd12b9ee1905a7.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..9c4453fb14f683edd0b7880c14351e9c3a298326
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00cd12b9ee1905a7.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c3ee64175f1692c141015527228ab55ef1a6cecdc62eea75b5543844356805a
+size 419949
diff --git a/original_openim/images/original_imgs_orig_res/00cdf56c63191fd3.jpg b/original_openim/images/original_imgs_orig_res/00cdf56c63191fd3.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..e9598bb1753f7cc95ddde32c7a47e38391dbbfc2
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00cdf56c63191fd3.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26720c9c692846caa84ab66c6af2cb197782587804e601e2df29867d885197bb
+size 219197
diff --git a/original_openim/images/original_imgs_orig_res/00cfb039bd7f1eaf.jpg b/original_openim/images/original_imgs_orig_res/00cfb039bd7f1eaf.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..52dd2a5a960a33543a071319d3f9968f553ff3b5
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00cfb039bd7f1eaf.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:010da41156df1ecdf5c0a89f6f9bf80f4c43c1c5abfb1fca5f190e603b89471a
+size 578843
diff --git a/original_openim/images/original_imgs_orig_res/00d3653a790b5fba.jpg b/original_openim/images/original_imgs_orig_res/00d3653a790b5fba.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..8d01664c2f94aaeddac3d24408c614c32502c875
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00d3653a790b5fba.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f17b431569a0c0ef8ef1746e03f32b4fe566c55218ae9d7c390def713d43edd
+size 601833
diff --git a/original_openim/images/original_imgs_orig_res/00d962525f1ea4e7.jpg b/original_openim/images/original_imgs_orig_res/00d962525f1ea4e7.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..7be040fd666538fa8dfe1e18d4524042eab63b0e
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00d962525f1ea4e7.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:adfcf1b99338d3a4a1cdc5da465029836ffb361ca04691311b463eb976f48b38
+size 359076
diff --git a/original_openim/images/original_imgs_orig_res/00db41a79f8def5b.jpg b/original_openim/images/original_imgs_orig_res/00db41a79f8def5b.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..acebad94bebe191432fe371676c8637b239e2a16
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00db41a79f8def5b.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:495b7c736ba719e864b4b5d1446049a0c08bb489c609142fdb4c2a6b34b41580
+size 625398
diff --git a/original_openim/images/original_imgs_orig_res/00de7b15cac52f42.jpg b/original_openim/images/original_imgs_orig_res/00de7b15cac52f42.jpg
new file mode 100644
index 0000000000000000000000000000000000000000..04a25ab9d7334582b114081e09616ddc6975c132
--- /dev/null
+++ b/original_openim/images/original_imgs_orig_res/00de7b15cac52f42.jpg
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2cd0388ec60bb06a7ffd9479c0e9420541bf2df54a4f46a156ec2d255c2fd87
+size 141383
diff --git a/sdxl_default/images/1024px/0.png b/sdxl_default/images/1024px/0.png
new file mode 100644
index 0000000000000000000000000000000000000000..f28ed07947f7e061c1ffafec7afe97cc4e824547
--- /dev/null
+++ b/sdxl_default/images/1024px/0.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2b1b19f49b58af2decb4e332cb451faaed498fd7f6b93fe7b1c7e1768e245abe
+size 1931872
diff --git a/sdxl_default/images/1024px/1.png b/sdxl_default/images/1024px/1.png
new file mode 100644
index 0000000000000000000000000000000000000000..e09ae7995d63a6424dd13ca279182d3b9e9e685c
--- /dev/null
+++ b/sdxl_default/images/1024px/1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5cb08f89a073ad57a329f0de090aaea115a8b13ad6f205a514c0b407dfa39e9d
+size 1761616
diff --git a/sdxl_default/images/1024px/10.png b/sdxl_default/images/1024px/10.png
new file mode 100644
index 0000000000000000000000000000000000000000..7039a9d9ff4c804b5cf0b71f6ca2a286e474df39
--- /dev/null
+++ b/sdxl_default/images/1024px/10.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d6ea67d382059e040ed9adef84bc280e9abcac6a2bebc9b3c009ed0fe2598eba
+size 1865285
diff --git a/sdxl_default/images/1024px/11.png b/sdxl_default/images/1024px/11.png
new file mode 100644
index 0000000000000000000000000000000000000000..7ff86e7f6ac0d07860bb9dcc94a418a9d1be6ebe
--- /dev/null
+++ b/sdxl_default/images/1024px/11.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7e17401515ad44346853d0f03c7fe3bf582bd717b08f4687536cd63535f91f1c
+size 1775184
diff --git a/sdxl_default/images/1024px/12.png b/sdxl_default/images/1024px/12.png
new file mode 100644
index 0000000000000000000000000000000000000000..9a9215928c1da13642755cac25e2819a70dc51db
--- /dev/null
+++ b/sdxl_default/images/1024px/12.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b31bef98803eb70b834d4ef38850c2a53444af6a1512e440e23c8aa4a50a8274
+size 2015361
diff --git a/sdxl_default/images/1024px/13.png b/sdxl_default/images/1024px/13.png
new file mode 100644
index 0000000000000000000000000000000000000000..3860bebcf167a3490a24052cdffa6f89a9bab036
--- /dev/null
+++ b/sdxl_default/images/1024px/13.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9773b59d0253d318fa34dd893aec2efb5ddf295d83303682b78c036b0d6c7bdd
+size 1938333
diff --git a/sdxl_default/images/1024px/14.png b/sdxl_default/images/1024px/14.png
new file mode 100644
index 0000000000000000000000000000000000000000..e9a131eb07656ff287910ad35eeec2b24ba06b32
--- /dev/null
+++ b/sdxl_default/images/1024px/14.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dd3130784f0e38c1baed16b86d9065ed81ce256502919bc6f7b7574632c29ec5
+size 1176902
diff --git a/sdxl_default/images/1024px/15.png b/sdxl_default/images/1024px/15.png
new file mode 100644
index 0000000000000000000000000000000000000000..45dbbd68b6d0d6c19d7f54feaa7270e3cd70e18e
--- /dev/null
+++ b/sdxl_default/images/1024px/15.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:84f23e87408e84c81ac206facd2efa97f9e2f3e7d24fa08880c977d89f98bcb2
+size 1521900
diff --git a/sdxl_default/images/1024px/16.png b/sdxl_default/images/1024px/16.png
new file mode 100644
index 0000000000000000000000000000000000000000..0013e3ca87cb8a49a9801ef8a1cc0689f14bc57c
--- /dev/null
+++ b/sdxl_default/images/1024px/16.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35a24803722904e396c91c5fe6ce7fdf64c6c576678380d858d2d7896914637f
+size 1412865
diff --git a/sdxl_default/images/1024px/17.png b/sdxl_default/images/1024px/17.png
new file mode 100644
index 0000000000000000000000000000000000000000..c49549bdc959bf9bb1ea50f8c107d93ca2811e9d
--- /dev/null
+++ b/sdxl_default/images/1024px/17.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69a4f94cfbc6688ec95cc40412e1518d6d8c4a9a6608c93d4a6488f5cb026c7a
+size 1315450
diff --git a/sdxl_default/images/1024px/18.png b/sdxl_default/images/1024px/18.png
new file mode 100644
index 0000000000000000000000000000000000000000..0bdf453953ec2d4642212e47732f08c03e6b6931
--- /dev/null
+++ b/sdxl_default/images/1024px/18.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76c0bb8b89268f6cd262ba983afb6a4fe17da506f7ddd0be215659b1be16f20b
+size 1286259
diff --git a/sdxl_default/images/1024px/19.png b/sdxl_default/images/1024px/19.png
new file mode 100644
index 0000000000000000000000000000000000000000..407e01bc423b043e9f033bf982b17f185cf81022
--- /dev/null
+++ b/sdxl_default/images/1024px/19.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7e56a6942da616a33a851b1bf7f774ef6a9fc3f2819d0dddf293cbbaa6a0597
+size 1631322
diff --git a/sdxl_default/images/1024px/2.png b/sdxl_default/images/1024px/2.png
new file mode 100644
index 0000000000000000000000000000000000000000..ffd3ce1e5c2acc4d032b11b09b69581ff76f4678
--- /dev/null
+++ b/sdxl_default/images/1024px/2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:56ca27b6a6e452f4b325ff9f714a535d539fe3662d17f536e9f4d784ca4da301
+size 1358497
diff --git a/sdxl_default/images/1024px/20.png b/sdxl_default/images/1024px/20.png
new file mode 100644
index 0000000000000000000000000000000000000000..e28970c18b9182b84f69b62f5b6609244b85d76c
--- /dev/null
+++ b/sdxl_default/images/1024px/20.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d5a6fc3ef8b40d72b465c6e1d60ec3e3ea31e2b56579f15b13b55b3a7143c51b
+size 1132399
diff --git a/sdxl_default/images/1024px/21.png b/sdxl_default/images/1024px/21.png
new file mode 100644
index 0000000000000000000000000000000000000000..fca078356a8b3da740edc41a1f217d2ca3f755aa
--- /dev/null
+++ b/sdxl_default/images/1024px/21.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59abf74938b6efadc8fe9f7d4edcb80821fcacfa2e22adb01b1f643772b29eeb
+size 1206136
diff --git a/sdxl_default/images/1024px/22.png b/sdxl_default/images/1024px/22.png
new file mode 100644
index 0000000000000000000000000000000000000000..b0e044bca457989cd63655e279e41bf9ef8abb97
--- /dev/null
+++ b/sdxl_default/images/1024px/22.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b346eb51fabb6d3b6c91d5fe860352ae99a9d12eb756e3fe780c886244cf6bae
+size 1082332
diff --git a/sdxl_default/images/1024px/23.png b/sdxl_default/images/1024px/23.png
new file mode 100644
index 0000000000000000000000000000000000000000..0a55ab08fa6010c977030409a5aa1c9b2f5d826c
--- /dev/null
+++ b/sdxl_default/images/1024px/23.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66c09001bb3f7e8c5cc05267bd5660d447cc290f26b868c1a78294cc3b4e48e0
+size 1448304
diff --git a/sdxl_default/images/1024px/24.png b/sdxl_default/images/1024px/24.png
new file mode 100644
index 0000000000000000000000000000000000000000..c5b1094a919e2f3780277e1dd525ae236c1bbc11
--- /dev/null
+++ b/sdxl_default/images/1024px/24.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:704f36e0d96d4b46c4d8c00b2b5e96f70559802e1908befc2398975f23826f11
+size 2117839
diff --git a/sdxl_default/images/1024px/25.png b/sdxl_default/images/1024px/25.png
new file mode 100644
index 0000000000000000000000000000000000000000..b392a85c3cc1b5f9a25d4a25a62f4984cc87a97d
--- /dev/null
+++ b/sdxl_default/images/1024px/25.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:523e5df161fc69e4c6e1ec95bed32542d6769007d0dc44ff1dece13a8f82260d
+size 1858399
diff --git a/sdxl_default/images/1024px/26.png b/sdxl_default/images/1024px/26.png
new file mode 100644
index 0000000000000000000000000000000000000000..b5312b07f1817c48d3b8d3b5577e1980b13f2a9a
--- /dev/null
+++ b/sdxl_default/images/1024px/26.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aa71433f43d6e3c88133c9a6bad5eb9e9ca12bdb65944e066946e1687d3dcfe6
+size 1553593
diff --git a/sdxl_default/images/1024px/27.png b/sdxl_default/images/1024px/27.png
new file mode 100644
index 0000000000000000000000000000000000000000..074d323b08199f18aff6fe35f1a1040c835428ea
--- /dev/null
+++ b/sdxl_default/images/1024px/27.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:108d04d95348577382096a2aa90ab091e8b7526162c60c65146ae160c1c4cc7c
+size 2003159
diff --git a/sdxl_default/images/1024px/28.png b/sdxl_default/images/1024px/28.png
new file mode 100644
index 0000000000000000000000000000000000000000..24da3a28d5302e200851cb848b366ab4e2090f55
--- /dev/null
+++ b/sdxl_default/images/1024px/28.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9d20fed6ce58eeb1798bfa4428709d8d3c5271ae62463879f875d502a75e1704
+size 1623667
diff --git a/sdxl_default/images/1024px/29.png b/sdxl_default/images/1024px/29.png
new file mode 100644
index 0000000000000000000000000000000000000000..2ba94a8c2e1065696b6ecfdae705acd122e958c4
--- /dev/null
+++ b/sdxl_default/images/1024px/29.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6c1f12d5916ae64c5a3c563bbd1b1ddf365dae9abb3df3c86ae11e1070004b20
+size 1609953
diff --git a/sdxl_default/images/1024px/3.png b/sdxl_default/images/1024px/3.png
new file mode 100644
index 0000000000000000000000000000000000000000..afd5eeafcdff4bbba400d5fe4936da11db34e9fc
--- /dev/null
+++ b/sdxl_default/images/1024px/3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:384abed4ce5c9e8e481b958284699546bad94638ba908b37cf42b127923c1d0e
+size 1339962
diff --git a/sdxl_default/images/1024px/30.png b/sdxl_default/images/1024px/30.png
new file mode 100644
index 0000000000000000000000000000000000000000..67193a35d89ace695ae5af7158c97cc01fb80a14
--- /dev/null
+++ b/sdxl_default/images/1024px/30.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:242d8c000e4df14200a90c25d00331d359c66fc21862e903eedfdd530eb0678b
+size 1421267
diff --git a/sdxl_default/images/1024px/31.png b/sdxl_default/images/1024px/31.png
new file mode 100644
index 0000000000000000000000000000000000000000..6fa40726553b8843bc70065d81de08871e2506b4
--- /dev/null
+++ b/sdxl_default/images/1024px/31.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90d00d90a2265d6306017f90496e7fbc233320676cfeb552b388634e8c52372a
+size 1725214
diff --git a/sdxl_default/images/1024px/32.png b/sdxl_default/images/1024px/32.png
new file mode 100644
index 0000000000000000000000000000000000000000..7f9496adf39688ab8f3a15823ae4aa842590e79a
--- /dev/null
+++ b/sdxl_default/images/1024px/32.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d7351e45fe46d0494045b358d9be17c6acd48c05622c0103ec8ea6241a6e5a2b
+size 1895187
diff --git a/sdxl_default/images/1024px/33.png b/sdxl_default/images/1024px/33.png
new file mode 100644
index 0000000000000000000000000000000000000000..1c07357effbb9d1fade966befcf38f0164cd4d73
--- /dev/null
+++ b/sdxl_default/images/1024px/33.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:02b8c8645d48467cbd320e3d14aa8e073bc7d664a5f23e239441d9c857c53865
+size 1992072
diff --git a/sdxl_default/images/1024px/34.png b/sdxl_default/images/1024px/34.png
new file mode 100644
index 0000000000000000000000000000000000000000..a1948de79ec3e189ee9ec1a950fed8158bb400e6
--- /dev/null
+++ b/sdxl_default/images/1024px/34.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c470f5b25e2fb74b7325f5ab1b5e27b81925f9411426b3313d39fd0f3f09c192
+size 2106281
diff --git a/sdxl_default/images/1024px/35.png b/sdxl_default/images/1024px/35.png
new file mode 100644
index 0000000000000000000000000000000000000000..769d0d4ab5a3f274aee72d08c14cea07a5c6e514
--- /dev/null
+++ b/sdxl_default/images/1024px/35.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98901c7d5728651189c6ab15243f3fe7fcc2f6ef2e9b0ecc83bf728cdb573882
+size 1222523
diff --git a/sdxl_default/images/1024px/36.png b/sdxl_default/images/1024px/36.png
new file mode 100644
index 0000000000000000000000000000000000000000..137cfbbdda3bd58852f29d0fb36d2f12a5173d48
--- /dev/null
+++ b/sdxl_default/images/1024px/36.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:02d9d90514975f54b1fb3347b0d6d5d8261c6e1a140cf8b0e66e24011c1b12d2
+size 1553360
diff --git a/sdxl_default/images/1024px/37.png b/sdxl_default/images/1024px/37.png
new file mode 100644
index 0000000000000000000000000000000000000000..e780277b0396bf86e8344325ce764d3725081d81
--- /dev/null
+++ b/sdxl_default/images/1024px/37.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26033ca05905130b08e220a991c209e21c9c9ff577ec4060f7f7d9444c7fef6c
+size 1834683
diff --git a/sdxl_default/images/1024px/38.png b/sdxl_default/images/1024px/38.png
new file mode 100644
index 0000000000000000000000000000000000000000..4e5dd7875d8ff4b9d80f104f1751095e6297fd61
--- /dev/null
+++ b/sdxl_default/images/1024px/38.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ed82c35b7d59be38c7d2bea311a752ccd41e4ffadd5cca521e6e820dfe56823
+size 1852997
diff --git a/sdxl_default/images/1024px/39.png b/sdxl_default/images/1024px/39.png
new file mode 100644
index 0000000000000000000000000000000000000000..c50f06ae8c3764c5e7590fd088e4202b87a2ae08
--- /dev/null
+++ b/sdxl_default/images/1024px/39.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0682d92a4f63b89155d815b804b6d95cb62fa5d27a04232609658510f93e3b50
+size 1339341
diff --git a/sdxl_default/images/1024px/4.png b/sdxl_default/images/1024px/4.png
new file mode 100644
index 0000000000000000000000000000000000000000..7753b3d23d2dba09822b5531cc8964ca465a36f5
--- /dev/null
+++ b/sdxl_default/images/1024px/4.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2d369a271ea44110dfe502684292e7d84c3f8a269de1d5cfa67ea7e3629b477
+size 1283405
diff --git a/sdxl_default/images/1024px/40.png b/sdxl_default/images/1024px/40.png
new file mode 100644
index 0000000000000000000000000000000000000000..450900e3c319b890b0833507dc90cacbd280518c
--- /dev/null
+++ b/sdxl_default/images/1024px/40.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:760b67d7580ddbe7915f2061845a24fef922b1bdd29beed044840987544f7fb4
+size 1225395
diff --git a/sdxl_default/images/1024px/41.png b/sdxl_default/images/1024px/41.png
new file mode 100644
index 0000000000000000000000000000000000000000..59f423bf32b5cf793596cf711c74a728bcda7456
--- /dev/null
+++ b/sdxl_default/images/1024px/41.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d48420634a4844a40fa06eddafd05f677aed11badbe46a549538b6409f018f9
+size 1442736
diff --git a/sdxl_default/images/1024px/42.png b/sdxl_default/images/1024px/42.png
new file mode 100644
index 0000000000000000000000000000000000000000..5016cf8fed278d0f7d8929852d1f072f69ebc6c6
--- /dev/null
+++ b/sdxl_default/images/1024px/42.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:235af6da90afb6e6f2144cb2b734e68255961ad2a4418222485cee581f3a6a3a
+size 1320678
diff --git a/sdxl_default/images/1024px/43.png b/sdxl_default/images/1024px/43.png
new file mode 100644
index 0000000000000000000000000000000000000000..9c30e48fbf4eaaa31f88b8695d8aefbb22fb31b5
--- /dev/null
+++ b/sdxl_default/images/1024px/43.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f58ac9d5035cec8e66a46a40d4ca75f931564bee40c7f4ff33abe5bc690beb94
+size 1589663
diff --git a/sdxl_default/images/1024px/44.png b/sdxl_default/images/1024px/44.png
new file mode 100644
index 0000000000000000000000000000000000000000..160d03e6d9664d59e5761542c6eb4e9a521f7dbd
--- /dev/null
+++ b/sdxl_default/images/1024px/44.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:911523e27c6e5c339b122fac63a2aab9f3e75a52f650c7a94ae95beeba838aeb
+size 1687524
diff --git a/sdxl_default/images/1024px/45.png b/sdxl_default/images/1024px/45.png
new file mode 100644
index 0000000000000000000000000000000000000000..4063b1c549ab1646c89827a1b12b83e29cdf536e
--- /dev/null
+++ b/sdxl_default/images/1024px/45.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:514815a406f42ee548e7cf507c1948e799c4f0c79f8682d75825f3114aee24bd
+size 1495975
diff --git a/sdxl_default/images/1024px/46.png b/sdxl_default/images/1024px/46.png
new file mode 100644
index 0000000000000000000000000000000000000000..efa2622e9b66db235a2c03e63b84b0ca44c3282f
--- /dev/null
+++ b/sdxl_default/images/1024px/46.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:383db4e248f9cd977ab136c886b008459064b9c47343eeb73c660e623414370e
+size 1643396
diff --git a/sdxl_default/images/1024px/47.png b/sdxl_default/images/1024px/47.png
new file mode 100644
index 0000000000000000000000000000000000000000..6c52b8918d6603efe3a6d2f43575617d85c775c9
--- /dev/null
+++ b/sdxl_default/images/1024px/47.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ad59b00a1e15955fd380d75a673c42e017887fd77685561de9c1d3810a91cfef
+size 1381461
diff --git a/sdxl_default/images/1024px/48.png b/sdxl_default/images/1024px/48.png
new file mode 100644
index 0000000000000000000000000000000000000000..e93d7e1d5f5541d031eadbd7fc352da40207147b
--- /dev/null
+++ b/sdxl_default/images/1024px/48.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc0548278a1def376c27828e19b0ec3b018f59964e7328fe22866bdef9b0e92c
+size 1336644
diff --git a/sdxl_default/images/1024px/49.png b/sdxl_default/images/1024px/49.png
new file mode 100644
index 0000000000000000000000000000000000000000..962988177cb5f2688a16f5e23d6abe1b17ac4bf0
--- /dev/null
+++ b/sdxl_default/images/1024px/49.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f5bfc3daa22a5720c369d15b391bce06af9d3e5dbc517f175641eb37878de6ae
+size 1268140
diff --git a/sdxl_default/images/1024px/5.png b/sdxl_default/images/1024px/5.png
new file mode 100644
index 0000000000000000000000000000000000000000..1ea3cffd67848f59e8535d990d3753cc2dbbee7b
--- /dev/null
+++ b/sdxl_default/images/1024px/5.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1b984d7b865308fe32e9a53d569b0af6b99a2aeca27be4f4b32c59cb0be6846b
+size 1602890
diff --git a/sdxl_default/images/1024px/50.png b/sdxl_default/images/1024px/50.png
new file mode 100644
index 0000000000000000000000000000000000000000..9d1d2a6ac88664ea2f203abeb0890cee46a51786
--- /dev/null
+++ b/sdxl_default/images/1024px/50.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5fa01a8af9ddb3a1da5db0d6ec5ae548248a728cc46e3b968704addb4136cd83
+size 1430368
diff --git a/sdxl_default/images/1024px/51.png b/sdxl_default/images/1024px/51.png
new file mode 100644
index 0000000000000000000000000000000000000000..8a0d95305837eb836198e030804c66bcf88682b8
--- /dev/null
+++ b/sdxl_default/images/1024px/51.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:110c3ed8e0bc7999da80fbb79c61568c1e27bf1db48ff26ca9e7a3fce2932405
+size 1420830
diff --git a/sdxl_default/images/1024px/52.png b/sdxl_default/images/1024px/52.png
new file mode 100644
index 0000000000000000000000000000000000000000..c62448d7ec0eaa81ce7c5052aa15b8554eb09515
--- /dev/null
+++ b/sdxl_default/images/1024px/52.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe407fd88309e6756826b71c8de1799e97d4f5eee8b9d932bc378a5a148514e7
+size 1302401
diff --git a/sdxl_default/images/1024px/53.png b/sdxl_default/images/1024px/53.png
new file mode 100644
index 0000000000000000000000000000000000000000..36a3faf1eaeb5184dbef8ee4df4825b994e0b270
--- /dev/null
+++ b/sdxl_default/images/1024px/53.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee5d0a1bf8bcb5d169709b07e2db01abc36a192bb08b5f716183ddd2f31838b4
+size 965828
diff --git a/sdxl_default/images/1024px/54.png b/sdxl_default/images/1024px/54.png
new file mode 100644
index 0000000000000000000000000000000000000000..866af9521f486c1c9062cc0047986e7a45f9fecb
--- /dev/null
+++ b/sdxl_default/images/1024px/54.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f4ebfd8bfde5d261955bb479d04c500310ecbf655948a159fd6b34885b415702
+size 1750292
diff --git a/sdxl_default/images/1024px/55.png b/sdxl_default/images/1024px/55.png
new file mode 100644
index 0000000000000000000000000000000000000000..f4bc61dfd635702a05c30232186bab2d4b5fccc1
--- /dev/null
+++ b/sdxl_default/images/1024px/55.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b86ecbc20ab7cb395de39b4fd39a311135ce236c7c9bc52c1aef957178516e47
+size 1746338
diff --git a/sdxl_default/images/1024px/56.png b/sdxl_default/images/1024px/56.png
new file mode 100644
index 0000000000000000000000000000000000000000..4d6d398575bc5f7a60b1a7599d1eb52ebdad6dca
--- /dev/null
+++ b/sdxl_default/images/1024px/56.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c9f7b55cc2fd9c55b152ef1265c2192b8bc66fd267a8fb9547f70a21f22f54e6
+size 1562532
diff --git a/sdxl_default/images/1024px/57.png b/sdxl_default/images/1024px/57.png
new file mode 100644
index 0000000000000000000000000000000000000000..b2a66f41df17ab630ae97c4b7a06897e2c802ef4
--- /dev/null
+++ b/sdxl_default/images/1024px/57.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06ad213774caba7f9c70ae8772ac852c33b0bdf6ee04f3acba540742612cbe83
+size 1692944
diff --git a/sdxl_default/images/1024px/58.png b/sdxl_default/images/1024px/58.png
new file mode 100644
index 0000000000000000000000000000000000000000..7edb97c8ea79a1697b645a54f99d80bc93dd53cb
--- /dev/null
+++ b/sdxl_default/images/1024px/58.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:324ae5488da848c5ccc12ef102076c7fd50851301f337ae87d8aed06e065be2e
+size 1885425
diff --git a/sdxl_default/images/1024px/59.png b/sdxl_default/images/1024px/59.png
new file mode 100644
index 0000000000000000000000000000000000000000..b84d0026e03b98e1af720a4a7caf905da17ff90e
--- /dev/null
+++ b/sdxl_default/images/1024px/59.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b867f11f861c1619d222aa12b9b8de7e2e80441e33d0df9a82b99d9cf3ffec45
+size 1308655
diff --git a/sdxl_default/images/1024px/6.png b/sdxl_default/images/1024px/6.png
new file mode 100644
index 0000000000000000000000000000000000000000..7bf4d67171aa10c915ac9befd02f694301b1d8a6
--- /dev/null
+++ b/sdxl_default/images/1024px/6.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2974f5b0782df599387b6cf9af9bb46d6f8a65d3b27721d6f69c286b6546d727
+size 1637644
diff --git a/sdxl_default/images/1024px/60.png b/sdxl_default/images/1024px/60.png
new file mode 100644
index 0000000000000000000000000000000000000000..e5990066e5be32c53b9d0ebc1ce269f13d8eae34
--- /dev/null
+++ b/sdxl_default/images/1024px/60.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:949e377d1e83be3ab7f284e41b270ba3d1429f8f3e21a56c7f0a9cd8f997e45a
+size 1289695
diff --git a/sdxl_default/images/1024px/61.png b/sdxl_default/images/1024px/61.png
new file mode 100644
index 0000000000000000000000000000000000000000..1ce30ae85212fad41c67b1b7911b0251ab886904
--- /dev/null
+++ b/sdxl_default/images/1024px/61.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ad1534cc444f586f5bc0e41b7c348e86b376417f6251e550eb94f2002ae80db
+size 1520348
diff --git a/sdxl_default/images/1024px/62.png b/sdxl_default/images/1024px/62.png
new file mode 100644
index 0000000000000000000000000000000000000000..5df82edc873864cc8de09ae0b1ed4575eb6b7bc7
--- /dev/null
+++ b/sdxl_default/images/1024px/62.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4fe555939b5b4cf5d9b67f1581d501fa59aa213360c0df253265aaa887065f50
+size 1119754
diff --git a/sdxl_default/images/1024px/63.png b/sdxl_default/images/1024px/63.png
new file mode 100644
index 0000000000000000000000000000000000000000..15db5a404ecfdcfe5421b49584f30bb49a193804
--- /dev/null
+++ b/sdxl_default/images/1024px/63.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:53338a88c06960d8b0d728b939546d1f12dcc7a1606b4f726396b9436b75bcd8
+size 1343926
diff --git a/sdxl_default/images/1024px/64.png b/sdxl_default/images/1024px/64.png
new file mode 100644
index 0000000000000000000000000000000000000000..dd3de907a2e2bc0c0da117fe031c27337acb3943
--- /dev/null
+++ b/sdxl_default/images/1024px/64.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:65e452f03914f920069e8c6be910e1eda70a39a1f0c79bf52c9763ee35bd292c
+size 1064060
diff --git a/sdxl_default/images/1024px/65.png b/sdxl_default/images/1024px/65.png
new file mode 100644
index 0000000000000000000000000000000000000000..b692feb728ed3a021f8e08198af156b03f747c2a
--- /dev/null
+++ b/sdxl_default/images/1024px/65.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:360a2c522b81ec886676cd38f76f041c579c96d70b5be511a0c45ca0a16135b6
+size 1350859
diff --git a/sdxl_default/images/1024px/66.png b/sdxl_default/images/1024px/66.png
new file mode 100644
index 0000000000000000000000000000000000000000..edd246b590989598a23c9cac096643b316abea1a
--- /dev/null
+++ b/sdxl_default/images/1024px/66.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e95657f8b926eb82453acbc649c57c3fac54af203bb634c6379b534f6d6fc14d
+size 1168820
diff --git a/sdxl_default/images/1024px/67.png b/sdxl_default/images/1024px/67.png
new file mode 100644
index 0000000000000000000000000000000000000000..cce1ba60e53eb6f04c98ae895e34fbdf261c4023
--- /dev/null
+++ b/sdxl_default/images/1024px/67.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:521bd4a323b8308308b6bc5cd1465c12453616257223be3e3a198a089d257dd9
+size 1714002
diff --git a/sdxl_default/images/1024px/68.png b/sdxl_default/images/1024px/68.png
new file mode 100644
index 0000000000000000000000000000000000000000..dcda15699dc85d0f1a4489d0ef42e3049d86a73c
--- /dev/null
+++ b/sdxl_default/images/1024px/68.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d8cd504267195170bd65f84095a481bb444c273480e67c897fd92d524cc20111
+size 1277402
diff --git a/sdxl_default/images/1024px/69.png b/sdxl_default/images/1024px/69.png
new file mode 100644
index 0000000000000000000000000000000000000000..6c26bd6bc7e8844f8319d7af36e564a14122f2e3
--- /dev/null
+++ b/sdxl_default/images/1024px/69.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:36cbca51a9f44881c558039d3b7c5b04e9bd6fa8cdd54cc6328a7ef841c3bd48
+size 1332152
diff --git a/sdxl_default/images/1024px/7.png b/sdxl_default/images/1024px/7.png
new file mode 100644
index 0000000000000000000000000000000000000000..ff24f232115e33f0ef90ca672b3998bb829ffc50
--- /dev/null
+++ b/sdxl_default/images/1024px/7.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ee3475485de7a8a69d6c49e5507592c5201f1dc875ffb26d0daa7bbc974cf305
+size 1494305
diff --git a/sdxl_default/images/1024px/70.png b/sdxl_default/images/1024px/70.png
new file mode 100644
index 0000000000000000000000000000000000000000..da1990122716422956c87e9685ec823228b8820c
--- /dev/null
+++ b/sdxl_default/images/1024px/70.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:22843b109382c78493e26d299e1997236a5e1cd4ec712a5953fcd68d798f6c28
+size 1019174
diff --git a/sdxl_default/images/1024px/71.png b/sdxl_default/images/1024px/71.png
new file mode 100644
index 0000000000000000000000000000000000000000..8124eedec2fa3fcb9f3d2108051225da96068a41
--- /dev/null
+++ b/sdxl_default/images/1024px/71.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d63450272fa5cb4f8da1147c5e1a9143a65320c224c6829881192c7ff8748ae
+size 2132944
diff --git a/sdxl_default/images/1024px/72.png b/sdxl_default/images/1024px/72.png
new file mode 100644
index 0000000000000000000000000000000000000000..69c2e580a41936c1b36824fea1748cb2513c1a53
--- /dev/null
+++ b/sdxl_default/images/1024px/72.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:886edb9e22c6a09740dd2ca26e0d47f24b42065d8e22dcdd0feed04836d8a276
+size 1475996
diff --git a/sdxl_default/images/1024px/73.png b/sdxl_default/images/1024px/73.png
new file mode 100644
index 0000000000000000000000000000000000000000..058f1b6db50049f8b6256ff1ff1ef44abc5fdc1d
--- /dev/null
+++ b/sdxl_default/images/1024px/73.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:60bd01f931d0cffff0ba0ef64bb6701632dce6f6a763fb8d6df51e6e7e25c191
+size 1626350
diff --git a/sdxl_default/images/1024px/74.png b/sdxl_default/images/1024px/74.png
new file mode 100644
index 0000000000000000000000000000000000000000..4f79bb094ceafe9c9e4fb5e115d2eebb426f8733
--- /dev/null
+++ b/sdxl_default/images/1024px/74.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d029dccdbb2aaba2cae30bbae5ced954ef1b695d3db15b117ecee0e840f3ab3f
+size 1244646
diff --git a/sdxl_default/images/1024px/75.png b/sdxl_default/images/1024px/75.png
new file mode 100644
index 0000000000000000000000000000000000000000..8cc037b1279fdbcaceba4656c2a0717161f7af72
--- /dev/null
+++ b/sdxl_default/images/1024px/75.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1e5794a8f3c6d52326ef5f96aa46a57c48b8563ce8e69ebf197e4716e884162e
+size 2010435
diff --git a/sdxl_default/images/1024px/76.png b/sdxl_default/images/1024px/76.png
new file mode 100644
index 0000000000000000000000000000000000000000..775a96cc58b2f674961c7559b5e6d04f5b5aaa68
--- /dev/null
+++ b/sdxl_default/images/1024px/76.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af6b5863f544186d12af432dc39214560736c1a0dfbd033e11323652584c4ae4
+size 1261737
diff --git a/sdxl_default/images/1024px/77.png b/sdxl_default/images/1024px/77.png
new file mode 100644
index 0000000000000000000000000000000000000000..b02c6949b2d55439ca07cebfa0a071a2bfa153f4
--- /dev/null
+++ b/sdxl_default/images/1024px/77.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:edeba6365f7dff668ed59067597e80b7114a0f54406cada26fe06078758dc9f4
+size 2133426
diff --git a/sdxl_default/images/1024px/78.png b/sdxl_default/images/1024px/78.png
new file mode 100644
index 0000000000000000000000000000000000000000..8772e6138788faf792f5ad0c6bb060b4a4052a09
--- /dev/null
+++ b/sdxl_default/images/1024px/78.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:919a81a683560bc84aa271a90ce22ba842d7febb4bc9a6485719f055c1cac425
+size 1552356
diff --git a/sdxl_default/images/1024px/79.png b/sdxl_default/images/1024px/79.png
new file mode 100644
index 0000000000000000000000000000000000000000..17ed86860c3de0e79b1e3807243fc003176d8d19
--- /dev/null
+++ b/sdxl_default/images/1024px/79.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6814a0fdeff6836a57f89410039e663eb768975fad21183010a7dd83901185bb
+size 1325578
diff --git a/sdxl_default/images/1024px/8.png b/sdxl_default/images/1024px/8.png
new file mode 100644
index 0000000000000000000000000000000000000000..e635429d6a06f73b3b5310debd6a0f7a432e2c5f
--- /dev/null
+++ b/sdxl_default/images/1024px/8.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c836fb8247e01df7ae755b071b84700f8cb84a9b09bbf39e7e783a346337bea
+size 1469809
diff --git a/sdxl_default/images/1024px/80.png b/sdxl_default/images/1024px/80.png
new file mode 100644
index 0000000000000000000000000000000000000000..58fc6c5973d1bad4cf48c691575ef039811eb6b2
--- /dev/null
+++ b/sdxl_default/images/1024px/80.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:33980384dcc9695fa21a4810d7f5b7831bb9658b9a3042e1b198ef2db905c52a
+size 1703598
diff --git a/sdxl_default/images/1024px/81.png b/sdxl_default/images/1024px/81.png
new file mode 100644
index 0000000000000000000000000000000000000000..45b448dbf3ea0fa5090ee93b429c121c9f0b380f
--- /dev/null
+++ b/sdxl_default/images/1024px/81.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:375eb5c9e230907455806aaba1934e074b54206f838d218e7925564fc2e4a510
+size 1220450
diff --git a/sdxl_default/images/1024px/82.png b/sdxl_default/images/1024px/82.png
new file mode 100644
index 0000000000000000000000000000000000000000..1aba2eca9f979e25f7637c561be27fbccde166fa
--- /dev/null
+++ b/sdxl_default/images/1024px/82.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df41701c17751f041324afddbadd2d06c912dcf6599935658f17bdd1cc79fbf9
+size 1821975
diff --git a/sdxl_default/images/1024px/83.png b/sdxl_default/images/1024px/83.png
new file mode 100644
index 0000000000000000000000000000000000000000..62a6e2538810ad24cddc96ce77def7f935080d22
--- /dev/null
+++ b/sdxl_default/images/1024px/83.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ebc0cd2ec68dca44803c7187e1ccc4745c68ffd5fc5b9e5465cff6d1f0267feb
+size 1361895
diff --git a/sdxl_default/images/1024px/84.png b/sdxl_default/images/1024px/84.png
new file mode 100644
index 0000000000000000000000000000000000000000..ef60efa40d7681e06c063441d2b20bc5ee40f8aa
--- /dev/null
+++ b/sdxl_default/images/1024px/84.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fdf069ea43b51c0e0155848a1db178845fa0869b1f38bc35545c829131fc1814
+size 1639804
diff --git a/sdxl_default/images/1024px/85.png b/sdxl_default/images/1024px/85.png
new file mode 100644
index 0000000000000000000000000000000000000000..e9cabf5832e6a3201711dd2815b0a8c003323612
--- /dev/null
+++ b/sdxl_default/images/1024px/85.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:122631c6addbf57eb99674edf6c2ead93e710131d5fa92f4f5e366bc185257e7
+size 1152983
diff --git a/sdxl_default/images/1024px/86.png b/sdxl_default/images/1024px/86.png
new file mode 100644
index 0000000000000000000000000000000000000000..de532764ce90bc733fc4cc7c22ba19152d142b09
--- /dev/null
+++ b/sdxl_default/images/1024px/86.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:60c339476f5b4e5baa9704011e7d66f7783c99d37ca8b051cbbab881e8dab38e
+size 1381440
diff --git a/sdxl_default/images/1024px/87.png b/sdxl_default/images/1024px/87.png
new file mode 100644
index 0000000000000000000000000000000000000000..46ea2c38b29f44c5c294e490bd9d961d7c0a0cd0
--- /dev/null
+++ b/sdxl_default/images/1024px/87.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:239a95dd4b1d8495d0fe566da1e02d8cfb31f98b9314e6b448b461c23e40d680
+size 1896198
diff --git a/sdxl_default/images/1024px/88.png b/sdxl_default/images/1024px/88.png
new file mode 100644
index 0000000000000000000000000000000000000000..051630cb887727d9205a8933b1a5bcd9bbc8ba54
--- /dev/null
+++ b/sdxl_default/images/1024px/88.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8014607fe148b36f9aa9a29186a54b881697ddcb8573b3d42271f7519633431a
+size 1568099
diff --git a/sdxl_default/images/1024px/89.png b/sdxl_default/images/1024px/89.png
new file mode 100644
index 0000000000000000000000000000000000000000..ebfd12e1a8d0f686480c4c47b149e9bf1041a391
--- /dev/null
+++ b/sdxl_default/images/1024px/89.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2a15eb9c188de274f6b7f2a87a341da9c7f0e8772321b6bce2d15fa8afd116b3
+size 1291062
diff --git a/sdxl_default/images/1024px/9.png b/sdxl_default/images/1024px/9.png
new file mode 100644
index 0000000000000000000000000000000000000000..9197c7e8d0145ed0f126d800b1dea5647a3bc643
--- /dev/null
+++ b/sdxl_default/images/1024px/9.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9e8947962c8a1d4db5fc5863e61bc9778b1e4e958e65f17f86d99a3229ba72c4
+size 1246831
diff --git a/sdxl_default/images/1024px/90.png b/sdxl_default/images/1024px/90.png
new file mode 100644
index 0000000000000000000000000000000000000000..5543eecb6f8d4d1bc134c014e189cbd4e66c336d
--- /dev/null
+++ b/sdxl_default/images/1024px/90.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d4abc789d87a691ba8d586267593b9eebe7a82dba102b5e1b99e8419ce1daead
+size 1542179
diff --git a/sdxl_default/images/1024px/91.png b/sdxl_default/images/1024px/91.png
new file mode 100644
index 0000000000000000000000000000000000000000..54f0e4ef3122d959a8f7fc3d99b1c55987806382
--- /dev/null
+++ b/sdxl_default/images/1024px/91.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:27cbc87d03107d0d143ecfc716ff52475d37bc4c3715ff318e8e4975af08fd5b
+size 1727226
diff --git a/sdxl_default/images/1024px/92.png b/sdxl_default/images/1024px/92.png
new file mode 100644
index 0000000000000000000000000000000000000000..cad9fedf280e83fe116f5335fb9a39d792bc106e
--- /dev/null
+++ b/sdxl_default/images/1024px/92.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6165b5e7ce3d5f804d7369d2353559e9e56bb145e1ab45a015efcacb7c155e8f
+size 1738761
diff --git a/sdxl_default/images/1024px/93.png b/sdxl_default/images/1024px/93.png
new file mode 100644
index 0000000000000000000000000000000000000000..c06c97453146c787f6d4aa2007db8b921da09923
--- /dev/null
+++ b/sdxl_default/images/1024px/93.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5377fe0d9c4f0a77d5fb99f494768ccdf97f8cf8ab327b2c98305012d827f288
+size 1917531
diff --git a/sdxl_default/images/1024px/94.png b/sdxl_default/images/1024px/94.png
new file mode 100644
index 0000000000000000000000000000000000000000..aeaf40290fb689f77aa3565a7a69463ff06bc443
--- /dev/null
+++ b/sdxl_default/images/1024px/94.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f2d9179093e3557c0afca1039506b3e324844b3b51f805b8e9e9db7c216bbf36
+size 1751853
diff --git a/sdxl_default/images/1024px/95.png b/sdxl_default/images/1024px/95.png
new file mode 100644
index 0000000000000000000000000000000000000000..7545d13f7e22cc05fae9f95d48bdf1be1b26f9f6
--- /dev/null
+++ b/sdxl_default/images/1024px/95.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d3f104f7951627b0fb21d6bcd549beed768bc2abe6c12a8ce9017cd6cce75a5
+size 1265036
diff --git a/sdxl_default/images/1024px/96.png b/sdxl_default/images/1024px/96.png
new file mode 100644
index 0000000000000000000000000000000000000000..672304ca156b05688a9bb571a8060fab38129669
--- /dev/null
+++ b/sdxl_default/images/1024px/96.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7a436a9f2aca2c9c3da4ac45db5a2e9ea2494c8fce8be2b1081b3d861e3f94f7
+size 1382765
diff --git a/sdxl_default/images/1024px/97.png b/sdxl_default/images/1024px/97.png
new file mode 100644
index 0000000000000000000000000000000000000000..104bf677b11adc0b210e097463b3080b00cdb9ec
--- /dev/null
+++ b/sdxl_default/images/1024px/97.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7af01b6a9d081616b7706d1af5d6e7c1733e29125a8c97c0bb0c3548370bc9e5
+size 1185200
diff --git a/sdxl_default/images/1024px/98.png b/sdxl_default/images/1024px/98.png
new file mode 100644
index 0000000000000000000000000000000000000000..b0226e95c6a974550e4b3e143c6abe0df02feaf6
--- /dev/null
+++ b/sdxl_default/images/1024px/98.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8bb02e667bb437d792ad436e4a5030c7cfd93439e7477853cdd9477f8f6d7307
+size 1612530
diff --git a/sdxl_default/images/1024px/99.png b/sdxl_default/images/1024px/99.png
new file mode 100644
index 0000000000000000000000000000000000000000..e736bed7d5c4737b8b9a4526a5310e32622e6098
--- /dev/null
+++ b/sdxl_default/images/1024px/99.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31833223f9be26cc484d69ed42822df7d05fcfe7ca8bb2d9e54f733dcee876e6
+size 1035484
diff --git a/sdxl_default/images/2048px/0.png b/sdxl_default/images/2048px/0.png
new file mode 100644
index 0000000000000000000000000000000000000000..aa64963bcc17b6bccc420d4eb8f79675f0efeffb
--- /dev/null
+++ b/sdxl_default/images/2048px/0.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1c5e61df53f020fa88c1f3c537535845a3245670d80dd4a04937f6bd3415cf30
+size 7296961
diff --git a/sdxl_default/images/2048px/1.png b/sdxl_default/images/2048px/1.png
new file mode 100644
index 0000000000000000000000000000000000000000..537191a621169f0e8fb56e1922c322409a8c43e2
--- /dev/null
+++ b/sdxl_default/images/2048px/1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:76c3c431db11edb70ec814d2614c8b137a8341b9b9ae2264890e91be71f77641
+size 7947915
diff --git a/sdxl_default/images/2048px/10.png b/sdxl_default/images/2048px/10.png
new file mode 100644
index 0000000000000000000000000000000000000000..31b35dd10315d9c0b2930ffaf4d46c39a42cb4ae
--- /dev/null
+++ b/sdxl_default/images/2048px/10.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e4ac2e0f0193780a445f8bb619552e979c73a3a0b685a9e469cc654cfcbb13c0
+size 6606800
diff --git a/sdxl_default/images/2048px/11.png b/sdxl_default/images/2048px/11.png
new file mode 100644
index 0000000000000000000000000000000000000000..6f44c4ad84c938d293eb2f036174aa8cc26a1467
--- /dev/null
+++ b/sdxl_default/images/2048px/11.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eac3cf8c4f4c0b553e63b97a504fbb6a45ff3b6d82e6a8d258a0a1b48b36b510
+size 6129595
diff --git a/sdxl_default/images/2048px/12.png b/sdxl_default/images/2048px/12.png
new file mode 100644
index 0000000000000000000000000000000000000000..b30205eac962133b39d0bccebf2756fc62102684
--- /dev/null
+++ b/sdxl_default/images/2048px/12.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:19ea9ccebe015a8030fb2dd702d1a9965a755a07ce84239c2e29aa98493185a4
+size 8564586
diff --git a/sdxl_default/images/2048px/13.png b/sdxl_default/images/2048px/13.png
new file mode 100644
index 0000000000000000000000000000000000000000..e4eee9f0f9fedd6957fff32514899f2bb3cca2dc
--- /dev/null
+++ b/sdxl_default/images/2048px/13.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68829d68432e2116861499ea26667f157fbe434de30be05fb984c242c7c26986
+size 6746163
diff --git a/sdxl_default/images/2048px/14.png b/sdxl_default/images/2048px/14.png
new file mode 100644
index 0000000000000000000000000000000000000000..88898ad755fe6644edcc261e4fb613124d5cb53d
--- /dev/null
+++ b/sdxl_default/images/2048px/14.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8fffc1103c5bbcb57d92ebd0a8a69b4bdce83b348e226ebd2f55cb71f9aa0910
+size 4873995
diff --git a/sdxl_default/images/2048px/15.png b/sdxl_default/images/2048px/15.png
new file mode 100644
index 0000000000000000000000000000000000000000..8c61c4e6b9382fb97827a89ead90c8639a0fb920
--- /dev/null
+++ b/sdxl_default/images/2048px/15.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec0e45825dc795cd03b5df1b0004b55abdfa9be7358bb869901cf7496a46b012
+size 6293540
diff --git a/sdxl_default/images/2048px/16.png b/sdxl_default/images/2048px/16.png
new file mode 100644
index 0000000000000000000000000000000000000000..5ab0a966db912f6a96ae1c92210376a2803bd078
--- /dev/null
+++ b/sdxl_default/images/2048px/16.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9ac0e0659b4b215f197f24d9d2a77dadb66ea9e353eb648c5330ca230c91a777
+size 5093542
diff --git a/sdxl_default/images/2048px/17.png b/sdxl_default/images/2048px/17.png
new file mode 100644
index 0000000000000000000000000000000000000000..743841132507c8363675579e96cfb337e3b34a37
--- /dev/null
+++ b/sdxl_default/images/2048px/17.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d78e41cb520674b239c9ade6d07ab97e865e2677b61c5896825f0f58f7aa76c8
+size 5084476
diff --git a/sdxl_default/images/2048px/18.png b/sdxl_default/images/2048px/18.png
new file mode 100644
index 0000000000000000000000000000000000000000..c28b4ea45375d220f43754092088d014560c944c
--- /dev/null
+++ b/sdxl_default/images/2048px/18.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c128619a062b9eb13b160bb6b9d0b62776d6ebf9617b09f162cdf8a17ac9f860
+size 4377995
diff --git a/sdxl_default/images/2048px/19.png b/sdxl_default/images/2048px/19.png
new file mode 100644
index 0000000000000000000000000000000000000000..143ceeee3b9c4c390ebfd967b497786cf8859142
--- /dev/null
+++ b/sdxl_default/images/2048px/19.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9047380185751cf0bd323061dcf717fdca574077832858b360b70d15a405fc3d
+size 7478775
diff --git a/sdxl_default/images/2048px/2.png b/sdxl_default/images/2048px/2.png
new file mode 100644
index 0000000000000000000000000000000000000000..18212d41b88165e0f4c9bb36e678be38ee47352c
--- /dev/null
+++ b/sdxl_default/images/2048px/2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d381916ae2aadc89d7a410d09b0bf163a056b72023a1fc2008b12a1d6596fc82
+size 4648929
diff --git a/sdxl_default/images/2048px/20.png b/sdxl_default/images/2048px/20.png
new file mode 100644
index 0000000000000000000000000000000000000000..6b110621ad30d9e5224e52fc7effcf35fd100486
--- /dev/null
+++ b/sdxl_default/images/2048px/20.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:435cf8d5e23f600b70ba3fda52fdc8152dd7952ac847b348541a74903a1dbbfd
+size 4156978
diff --git a/sdxl_default/images/2048px/21.png b/sdxl_default/images/2048px/21.png
new file mode 100644
index 0000000000000000000000000000000000000000..6c1b6fbcc22d2a5767c41b91ac6caf755b52496b
--- /dev/null
+++ b/sdxl_default/images/2048px/21.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc64e223c9f0391c0e21945ae960576616e57811c822791eb33f3311635e465d
+size 4772519
diff --git a/sdxl_default/images/2048px/22.png b/sdxl_default/images/2048px/22.png
new file mode 100644
index 0000000000000000000000000000000000000000..8c13367a257bd1a790e0fc41645f75fa8ba5305e
--- /dev/null
+++ b/sdxl_default/images/2048px/22.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e9072ad7655b1de9f26bcfb06f3b9cbff64b9fe7b8c289cc5cb37d2d00e183f
+size 3930615
diff --git a/sdxl_default/images/2048px/23.png b/sdxl_default/images/2048px/23.png
new file mode 100644
index 0000000000000000000000000000000000000000..56c650939bbaa415976f399a21938bac74d9ee76
--- /dev/null
+++ b/sdxl_default/images/2048px/23.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b88de19b74d2f01afa92922fa508ced1dccc925b891d7675d3b8d321ba762eb6
+size 5468268
diff --git a/sdxl_default/images/2048px/24.png b/sdxl_default/images/2048px/24.png
new file mode 100644
index 0000000000000000000000000000000000000000..bca75a1c085e7b167faad3bed8aa47d316d8f8b7
--- /dev/null
+++ b/sdxl_default/images/2048px/24.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8b2f4c494d82ef19fac361557ff5021dc1703db098857111ec1190bcae41933
+size 6798940
diff --git a/sdxl_default/images/2048px/25.png b/sdxl_default/images/2048px/25.png
new file mode 100644
index 0000000000000000000000000000000000000000..dbf96120ba1608f70069f95613a8d4df129ddf77
--- /dev/null
+++ b/sdxl_default/images/2048px/25.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b94e79959ed369f50ef64ad04f2532f20d503db0cc8b38f029a7d691b12edb1
+size 7726460
diff --git a/sdxl_default/images/2048px/26.png b/sdxl_default/images/2048px/26.png
new file mode 100644
index 0000000000000000000000000000000000000000..f91740bbb06ee1d1e3ab35dbc0069b6b3772de27
--- /dev/null
+++ b/sdxl_default/images/2048px/26.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4b8e794101a76c29aa08e4c3ac8e87cad872d83cab7fa4dc0c22aefbb42e6577
+size 4843710
diff --git a/sdxl_default/images/2048px/27.png b/sdxl_default/images/2048px/27.png
new file mode 100644
index 0000000000000000000000000000000000000000..b5f324a9434f6b5aaa7f11e5def2ac0ae2857a27
--- /dev/null
+++ b/sdxl_default/images/2048px/27.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d1d6eb45500ab7916e2ac9ac5840137413671ab256ff4ee42b8ec9e80e477d91
+size 8110819
diff --git a/sdxl_default/images/2048px/28.png b/sdxl_default/images/2048px/28.png
new file mode 100644
index 0000000000000000000000000000000000000000..82d3a32dcfb8b8a220ea7a1d73e8cb185b85475b
--- /dev/null
+++ b/sdxl_default/images/2048px/28.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:519a8dc6f915faa1cd1efeea2fbdd712e366b4a8521d1dfb91f453147817e262
+size 5121511
diff --git a/sdxl_default/images/2048px/29.png b/sdxl_default/images/2048px/29.png
new file mode 100644
index 0000000000000000000000000000000000000000..27485a9376027e58d2a1742199d6170f9f33700c
--- /dev/null
+++ b/sdxl_default/images/2048px/29.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0aa009153880aab5b3b110be420deec0a72d265ca6953964a3ea23eb648078a2
+size 5500834
diff --git a/sdxl_default/images/2048px/3.png b/sdxl_default/images/2048px/3.png
new file mode 100644
index 0000000000000000000000000000000000000000..bb58d7960dd8d72e65ddb1636a71e03e8719e789
--- /dev/null
+++ b/sdxl_default/images/2048px/3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:924c06e24a63e0b0d559ced8f0a16a3b85059af69df0e74d9fb9711bfa82d1b6
+size 4549914
diff --git a/sdxl_default/images/2048px/30.png b/sdxl_default/images/2048px/30.png
new file mode 100644
index 0000000000000000000000000000000000000000..f94a9e1736dbb45ca5a528ebc4cb993ceb9f0d5b
--- /dev/null
+++ b/sdxl_default/images/2048px/30.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:298350b8277583a28ab5736a9ce8baeada8837eb21d1d88c03ef482d580cc904
+size 6299643
diff --git a/sdxl_default/images/2048px/31.png b/sdxl_default/images/2048px/31.png
new file mode 100644
index 0000000000000000000000000000000000000000..67bf85d15fdd159ac882dbdd8b9bdf861bdf9073
--- /dev/null
+++ b/sdxl_default/images/2048px/31.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f5483d51b1ce17213663ea7f77b0783ea0b5f318adbefefba9c029d5ddd315f
+size 6061229
diff --git a/sdxl_default/images/2048px/32.png b/sdxl_default/images/2048px/32.png
new file mode 100644
index 0000000000000000000000000000000000000000..0827ac8b1337668e87be60e766f0bdf3993df52b
--- /dev/null
+++ b/sdxl_default/images/2048px/32.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5352c3af27deda2a7c29677a2eda665dd8b3b3196e79e235b5948ecf80c24fc6
+size 7052995
diff --git a/sdxl_default/images/2048px/33.png b/sdxl_default/images/2048px/33.png
new file mode 100644
index 0000000000000000000000000000000000000000..15c6cd50d945f07044e8bc98be9355fcbba6441b
--- /dev/null
+++ b/sdxl_default/images/2048px/33.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bbe4c21bdbc5a5b7cb9f255777a8af8dcd30ba899aebfc7b0c96a0beb749164a
+size 8544087
diff --git a/sdxl_default/images/2048px/34.png b/sdxl_default/images/2048px/34.png
new file mode 100644
index 0000000000000000000000000000000000000000..775f22c0edfd772676da3e793dc7ef7f7e0c72c4
--- /dev/null
+++ b/sdxl_default/images/2048px/34.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:882ca4bcac62d42ee32b83f5c35c676260d17c1eff0ac5afbba8924d4aa26f39
+size 7925050
diff --git a/sdxl_default/images/2048px/35.png b/sdxl_default/images/2048px/35.png
new file mode 100644
index 0000000000000000000000000000000000000000..2849cd3c1d866909205f464f3b6b90b40ac6330a
--- /dev/null
+++ b/sdxl_default/images/2048px/35.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4248d4778449b2013e41eb89c2f2af17227a6a63e73b9ab48e793025071c25f0
+size 4599512
diff --git a/sdxl_default/images/2048px/36.png b/sdxl_default/images/2048px/36.png
new file mode 100644
index 0000000000000000000000000000000000000000..2702e2d5b00a51e7e82ba13df3db428c38115cb5
--- /dev/null
+++ b/sdxl_default/images/2048px/36.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7b4fd319cc0aa1a539829b52fd204bd5a5d9d2f38f731c768d55a24a8a89ea07
+size 5626058
diff --git a/sdxl_default/images/2048px/37.png b/sdxl_default/images/2048px/37.png
new file mode 100644
index 0000000000000000000000000000000000000000..93f5693e3fc0ede5621d0011f0d605b8af3f904e
--- /dev/null
+++ b/sdxl_default/images/2048px/37.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:387218dae158014dbb03d95be1177e84f58717413e53d30c07d544c24c0175cc
+size 6800034
diff --git a/sdxl_default/images/2048px/38.png b/sdxl_default/images/2048px/38.png
new file mode 100644
index 0000000000000000000000000000000000000000..4fd9424babc2194c9e1f782193b7689fefb5d949
--- /dev/null
+++ b/sdxl_default/images/2048px/38.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4fab85e8123c3b20464c36f3f9f8ed908a8b87912b7d449a0f25c4cac97466f7
+size 6808674
diff --git a/sdxl_default/images/2048px/39.png b/sdxl_default/images/2048px/39.png
new file mode 100644
index 0000000000000000000000000000000000000000..c8ae12adeeac976f53f44504e228c8c1c9d96130
--- /dev/null
+++ b/sdxl_default/images/2048px/39.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:00f228465b8652e15d33ae979e000a87c96b71c501c93096acb9d2c7be3d51fc
+size 4461399
diff --git a/sdxl_default/images/2048px/4.png b/sdxl_default/images/2048px/4.png
new file mode 100644
index 0000000000000000000000000000000000000000..56fefde12f56db656b1bcf26caf5c7f2b1dcfff5
--- /dev/null
+++ b/sdxl_default/images/2048px/4.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2cb12b89534630f62c7a142bdfac2f370a6eb6d322dc6b6a28c3465a2a9c541b
+size 4964151
diff --git a/sdxl_default/images/2048px/40.png b/sdxl_default/images/2048px/40.png
new file mode 100644
index 0000000000000000000000000000000000000000..fcaa2a6feadbdf768b7c43459d52eb625d44b121
--- /dev/null
+++ b/sdxl_default/images/2048px/40.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ea16957efff6f73ca46ff679437dafca2b3e6f8a00454e974070d79e56f8230b
+size 4258963
diff --git a/sdxl_default/images/2048px/41.png b/sdxl_default/images/2048px/41.png
new file mode 100644
index 0000000000000000000000000000000000000000..450457f60e2b7c47e9d6c1684957e43d5d24580a
--- /dev/null
+++ b/sdxl_default/images/2048px/41.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:831328687c46074f33dafaf40b30f4acae64ae248ad6a28958d58b754cb5b4c2
+size 4966235
diff --git a/sdxl_default/images/2048px/42.png b/sdxl_default/images/2048px/42.png
new file mode 100644
index 0000000000000000000000000000000000000000..1a08e7c3120eb01fb8b816e68aa4aa7d3d6f95fb
--- /dev/null
+++ b/sdxl_default/images/2048px/42.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42e741221bc11fbd10b4bb8bbf33bb7129511471d4af97e22eac7ecb03770ad0
+size 4262684
diff --git a/sdxl_default/images/2048px/43.png b/sdxl_default/images/2048px/43.png
new file mode 100644
index 0000000000000000000000000000000000000000..87973e1ba7178440515d37ab640a5c09d003a7f2
--- /dev/null
+++ b/sdxl_default/images/2048px/43.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2dc95050407c62f076fc0d1feb49b0e901f25697f03c4dd2fa6cb5e8e22fcc8
+size 6165844
diff --git a/sdxl_default/images/2048px/44.png b/sdxl_default/images/2048px/44.png
new file mode 100644
index 0000000000000000000000000000000000000000..461f27bca4728bd06f14ba68f39a39eafa4dbbad
--- /dev/null
+++ b/sdxl_default/images/2048px/44.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c41bd9a2e2f0b3578c6af0660f5d540b4c1b863933e043ea3bd8e87038e371c9
+size 6216454
diff --git a/sdxl_default/images/2048px/45.png b/sdxl_default/images/2048px/45.png
new file mode 100644
index 0000000000000000000000000000000000000000..1d50b8332d00ef2050d87a87426feebc3e5f715a
--- /dev/null
+++ b/sdxl_default/images/2048px/45.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:309f004a58c0d1be5cc29132c4c1f62fdc19b31231c8d94359752ec864399d2d
+size 5570879
diff --git a/sdxl_default/images/2048px/46.png b/sdxl_default/images/2048px/46.png
new file mode 100644
index 0000000000000000000000000000000000000000..1152776d4c67b7fee9066c79ebec2610a03b972a
--- /dev/null
+++ b/sdxl_default/images/2048px/46.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3fe7f17c5eecf2495627a6648d3f50d1f30cfaf8885c96eb6552a2a5b56221b7
+size 6533355
diff --git a/sdxl_default/images/2048px/47.png b/sdxl_default/images/2048px/47.png
new file mode 100644
index 0000000000000000000000000000000000000000..857c6fb478fecda256949661324205df6d67e1bd
--- /dev/null
+++ b/sdxl_default/images/2048px/47.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0db77a3d58a3fe36be8eae7a002a4dbac801baaa95685d306c16d7529c6c7113
+size 5455558
diff --git a/sdxl_default/images/2048px/48.png b/sdxl_default/images/2048px/48.png
new file mode 100644
index 0000000000000000000000000000000000000000..a3ebe5d2b3b770105428827af6c4fcb2b9883232
--- /dev/null
+++ b/sdxl_default/images/2048px/48.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5d55298aab0ced30f0f25bc938288cefe30e35ffba7f2f1201898894a65dc25f
+size 4536121
diff --git a/sdxl_default/images/2048px/49.png b/sdxl_default/images/2048px/49.png
new file mode 100644
index 0000000000000000000000000000000000000000..b75a3c629e79ae0bf3f946c2914f7521e509c412
--- /dev/null
+++ b/sdxl_default/images/2048px/49.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2385a688bf38379d428b2c4f7f9b7387cc688ac8608a3a60673ecba2c361ce11
+size 4241120
diff --git a/sdxl_default/images/2048px/5.png b/sdxl_default/images/2048px/5.png
new file mode 100644
index 0000000000000000000000000000000000000000..55563855dd25b97d6586f3a97a28110193aaf3b0
--- /dev/null
+++ b/sdxl_default/images/2048px/5.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:abf1f5066b87e275df5360dd1b61e00694952ad9884c843a9b785cb28e0587fb
+size 6056834
diff --git a/sdxl_default/images/2048px/50.png b/sdxl_default/images/2048px/50.png
new file mode 100644
index 0000000000000000000000000000000000000000..0c0760b2ca755dbafa8b85aa9498bec382e8b066
--- /dev/null
+++ b/sdxl_default/images/2048px/50.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:cf90215191d8729edeea9868e36d626fd5315c4f5102ebdc44e66f15e7355b14
+size 4975741
diff --git a/sdxl_default/images/2048px/51.png b/sdxl_default/images/2048px/51.png
new file mode 100644
index 0000000000000000000000000000000000000000..696567c37195af8bcd19c108c8e2f85b36b0d316
--- /dev/null
+++ b/sdxl_default/images/2048px/51.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b889e423018c2112d555866c1bc47d6f7fe2a6f23336056a78884adc8f7960e3
+size 5456052
diff --git a/sdxl_default/images/2048px/52.png b/sdxl_default/images/2048px/52.png
new file mode 100644
index 0000000000000000000000000000000000000000..496ab2da8de99c1fd413ee9996c854bf25b47af8
--- /dev/null
+++ b/sdxl_default/images/2048px/52.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9046ea542c6703912284033b2934e2f9236f969e257bd6bfc2af84307db574e
+size 5395425
diff --git a/sdxl_default/images/2048px/53.png b/sdxl_default/images/2048px/53.png
new file mode 100644
index 0000000000000000000000000000000000000000..47b74678c8bfa60ec5d5406856e8d63d6271d168
--- /dev/null
+++ b/sdxl_default/images/2048px/53.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83edc6ed22bc7d24ef504b43bea69a456061f3a768d9bc545e34720c440cfe09
+size 3168720
diff --git a/sdxl_default/images/2048px/54.png b/sdxl_default/images/2048px/54.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b077caa4f1a62bd49407060034d4324088369f7
--- /dev/null
+++ b/sdxl_default/images/2048px/54.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8c755409ef8e94ecdb15b5c2f3fd212b436140adadf21d4350d77834a1c12a1
+size 7531012
diff --git a/sdxl_default/images/2048px/55.png b/sdxl_default/images/2048px/55.png
new file mode 100644
index 0000000000000000000000000000000000000000..cba09a282bc82ae618450710daad64c91bd67659
--- /dev/null
+++ b/sdxl_default/images/2048px/55.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:478c336ae80faaacc5c0f3756674ac13f931805609ac071efb0e04c8e4bc1543
+size 6619032
diff --git a/sdxl_default/images/2048px/56.png b/sdxl_default/images/2048px/56.png
new file mode 100644
index 0000000000000000000000000000000000000000..1f9cab627438111c6dbe716d54a9447a6626e961
--- /dev/null
+++ b/sdxl_default/images/2048px/56.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bfee68afe8cca40f89099d1f704fb0fa00f31bfe94fa9a1be3d13fe4ba2e5d79
+size 5270101
diff --git a/sdxl_default/images/2048px/57.png b/sdxl_default/images/2048px/57.png
new file mode 100644
index 0000000000000000000000000000000000000000..2de26fdeba0ef6fb6673098363e396d13e95b05a
--- /dev/null
+++ b/sdxl_default/images/2048px/57.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e143a0ca98bbbf535b5e182ef8699bc64dd9d438b5614123d26af125f2823f2b
+size 5540055
diff --git a/sdxl_default/images/2048px/58.png b/sdxl_default/images/2048px/58.png
new file mode 100644
index 0000000000000000000000000000000000000000..5eb1db4ec5832a5c93d0239db288164c7f8cac5f
--- /dev/null
+++ b/sdxl_default/images/2048px/58.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a4705e2268799e762947cc8b4dcd65fc23032980afd71771a53d80f0f07deae
+size 7689818
diff --git a/sdxl_default/images/2048px/59.png b/sdxl_default/images/2048px/59.png
new file mode 100644
index 0000000000000000000000000000000000000000..0f9a384a17343e9e10b809f79b2e1b822d2b0226
--- /dev/null
+++ b/sdxl_default/images/2048px/59.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a975bc3ebe31a812045bc067905c2eb661f89e8f564cbfdeb30229ffec30887c
+size 5254666
diff --git a/sdxl_default/images/2048px/6.png b/sdxl_default/images/2048px/6.png
new file mode 100644
index 0000000000000000000000000000000000000000..f17c1a938ab2ccb4936a2c2b8bfdca0e1ca23a29
--- /dev/null
+++ b/sdxl_default/images/2048px/6.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:db26b51fb08ca38d9df93bd8bd57753a1bed15e29f8bd75d793f7f31ea235747
+size 5540346
diff --git a/sdxl_default/images/2048px/60.png b/sdxl_default/images/2048px/60.png
new file mode 100644
index 0000000000000000000000000000000000000000..3772b5fae6a7c2b473f9237f20d31c11a801cfff
--- /dev/null
+++ b/sdxl_default/images/2048px/60.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:13b65ef0809b74942388a52c2b752786548310e821440fa6ff0fdc69576298f9
+size 4578655
diff --git a/sdxl_default/images/2048px/61.png b/sdxl_default/images/2048px/61.png
new file mode 100644
index 0000000000000000000000000000000000000000..2dbc33724f6d21b92ec0ff7717dbebb97a3878f4
--- /dev/null
+++ b/sdxl_default/images/2048px/61.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3e04b33bf36f905acf611d833190509834bc5149bd4b00a1716cc597b4685db5
+size 6397633
diff --git a/sdxl_default/images/2048px/62.png b/sdxl_default/images/2048px/62.png
new file mode 100644
index 0000000000000000000000000000000000000000..c41134a584bd1623f8d8fc7cb317ee27c8adcfdb
--- /dev/null
+++ b/sdxl_default/images/2048px/62.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:af1d6a14348570a0ee4041cecb15e46a52911cf33617a9d2b8e9e7c48f88a6f1
+size 4962431
diff --git a/sdxl_default/images/2048px/63.png b/sdxl_default/images/2048px/63.png
new file mode 100644
index 0000000000000000000000000000000000000000..5173d6368312f936df22a56e92c1a888c476bfb4
--- /dev/null
+++ b/sdxl_default/images/2048px/63.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2037ff3217e54ba1450f77af99921fd78b371699b3587406918505c7d2a56534
+size 5911690
diff --git a/sdxl_default/images/2048px/64.png b/sdxl_default/images/2048px/64.png
new file mode 100644
index 0000000000000000000000000000000000000000..6ffeeb0898bb5f6745b0b9653462299c8a069ba5
--- /dev/null
+++ b/sdxl_default/images/2048px/64.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:69190e79d8862137f874fdfe887f09537b1afc7486cd8910d858ba426d1982ac
+size 4295924
diff --git a/sdxl_default/images/2048px/65.png b/sdxl_default/images/2048px/65.png
new file mode 100644
index 0000000000000000000000000000000000000000..356a07f24d239820042817c1bbefdd09b10b6551
--- /dev/null
+++ b/sdxl_default/images/2048px/65.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5ea49e5aa4aaa83efc9f208a1b62286c06ccc9cc5a5d878f7004b30636db26f4
+size 6040819
diff --git a/sdxl_default/images/2048px/66.png b/sdxl_default/images/2048px/66.png
new file mode 100644
index 0000000000000000000000000000000000000000..5846c9e356c76dfb40a5b5afcc801dbeba3cceab
--- /dev/null
+++ b/sdxl_default/images/2048px/66.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ed5d5c07e1525a997b5a5b47fe64a81a64b89fed290b30c558a893f6eb8f7f4
+size 4624482
diff --git a/sdxl_default/images/2048px/67.png b/sdxl_default/images/2048px/67.png
new file mode 100644
index 0000000000000000000000000000000000000000..64fa30bc26ee9492dc9199d7b81082a1455c50e3
--- /dev/null
+++ b/sdxl_default/images/2048px/67.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3ce25b13ecbdade84d3f9a71d1a5fab73973c1973c76db011eb26311c22b916
+size 7120702
diff --git a/sdxl_default/images/2048px/68.png b/sdxl_default/images/2048px/68.png
new file mode 100644
index 0000000000000000000000000000000000000000..b4866a3917051f82a042e8a765659a86ecd67c8d
--- /dev/null
+++ b/sdxl_default/images/2048px/68.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c3f81d6f0db29387dc8e31aaba2df5b8e2bc3063de2d95e0ebb69cb6048c9f67
+size 4645888
diff --git a/sdxl_default/images/2048px/69.png b/sdxl_default/images/2048px/69.png
new file mode 100644
index 0000000000000000000000000000000000000000..c315bfd35c33da0e637080f88aa39d37dcd4bf79
--- /dev/null
+++ b/sdxl_default/images/2048px/69.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ef9e69c6f2b2762c5ee8440375df882070d51386fd45a557f96d319eb10e790
+size 5235291
diff --git a/sdxl_default/images/2048px/7.png b/sdxl_default/images/2048px/7.png
new file mode 100644
index 0000000000000000000000000000000000000000..5e13cde48ef9da4738030757a40da4540daadd81
--- /dev/null
+++ b/sdxl_default/images/2048px/7.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4cde3baf4ec2386efe6c0e6e882ea4a0138811f74ce49f8a47dd8ec7563a4faf
+size 5269926
diff --git a/sdxl_default/images/2048px/70.png b/sdxl_default/images/2048px/70.png
new file mode 100644
index 0000000000000000000000000000000000000000..3c55eb7662f915d1ec100b1598b60a3dee9c0fff
--- /dev/null
+++ b/sdxl_default/images/2048px/70.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:277b32a2d5c2f82d42dda2930b33fbef18a72ee69c0a396cc1e650110dc993df
+size 3647154
diff --git a/sdxl_default/images/2048px/71.png b/sdxl_default/images/2048px/71.png
new file mode 100644
index 0000000000000000000000000000000000000000..8dd0786ab13a661e0ba006c0d221e343a786cf01
--- /dev/null
+++ b/sdxl_default/images/2048px/71.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4c0be297005274e5af9b57501009215043a26e2f4d340155623af1fcc9c52077
+size 8120356
diff --git a/sdxl_default/images/2048px/72.png b/sdxl_default/images/2048px/72.png
new file mode 100644
index 0000000000000000000000000000000000000000..34ffa1e694939d3747d5bdbda1d18ac68c8e1f02
--- /dev/null
+++ b/sdxl_default/images/2048px/72.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:82494c820dbd6b75b19f4bed6fbc04e89dc6d849ffe96a83aef34db3e11e92e1
+size 4892244
diff --git a/sdxl_default/images/2048px/73.png b/sdxl_default/images/2048px/73.png
new file mode 100644
index 0000000000000000000000000000000000000000..327b73e08223a3d980bed63b4e16b82d85ecd688
--- /dev/null
+++ b/sdxl_default/images/2048px/73.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5de61b5bdad446e3b93d43b3f105bf27debb6d1f89a67240b7c4462d0350f48d
+size 6778550
diff --git a/sdxl_default/images/2048px/74.png b/sdxl_default/images/2048px/74.png
new file mode 100644
index 0000000000000000000000000000000000000000..0ecbfceedd6f0fbba1d6b046b7aa5dde983075d7
--- /dev/null
+++ b/sdxl_default/images/2048px/74.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0def68405586f18beb8f750fd5b70dba58f1673a22830c3fed14e9b18dde636
+size 4844418
diff --git a/sdxl_default/images/2048px/75.png b/sdxl_default/images/2048px/75.png
new file mode 100644
index 0000000000000000000000000000000000000000..0da118f42c00347201e01405f766d7e1255ed9d4
--- /dev/null
+++ b/sdxl_default/images/2048px/75.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:506c8b586c44de9c6b4af280d9e1f2c3bd21719f192309c628d73bb5cf15258b
+size 6684538
diff --git a/sdxl_default/images/2048px/76.png b/sdxl_default/images/2048px/76.png
new file mode 100644
index 0000000000000000000000000000000000000000..aabc16903e9f7d0c93972cc54508c4e75ebcfdcf
--- /dev/null
+++ b/sdxl_default/images/2048px/76.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c38c586ce8926a513638f7b9d50f4985e57006b399819450b4d43451b252154
+size 4511739
diff --git a/sdxl_default/images/2048px/77.png b/sdxl_default/images/2048px/77.png
new file mode 100644
index 0000000000000000000000000000000000000000..093a48642867b93e1f6765792dfeb86a98ef9830
--- /dev/null
+++ b/sdxl_default/images/2048px/77.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:29620ab0c305a3ae4dff8127d36aebba1c730380e64a7740533cc9289f56f986
+size 7976961
diff --git a/sdxl_default/images/2048px/78.png b/sdxl_default/images/2048px/78.png
new file mode 100644
index 0000000000000000000000000000000000000000..9bc8c4279f6d8f6826bfae6ab9cffe3a0f4c6e73
--- /dev/null
+++ b/sdxl_default/images/2048px/78.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c611fc3445156e960b162a2bb5b0dcf3faa2fa1abe1ba0c13164e140ad6706c4
+size 5616376
diff --git a/sdxl_default/images/2048px/79.png b/sdxl_default/images/2048px/79.png
new file mode 100644
index 0000000000000000000000000000000000000000..06a4623bf3835673d87e19174d0fe5d9b9f8ee14
--- /dev/null
+++ b/sdxl_default/images/2048px/79.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:156c44e5c186528d51dad2b2a1a8dadcf5c6d06f43073ef7fd4679bddf73aff5
+size 5005117
diff --git a/sdxl_default/images/2048px/8.png b/sdxl_default/images/2048px/8.png
new file mode 100644
index 0000000000000000000000000000000000000000..8f5fd0d739bb431b9c57ede4a9d5aceb472f81fd
--- /dev/null
+++ b/sdxl_default/images/2048px/8.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:68dd0993bd6df36c0d31438d2a48218d579546b05c7b2fccb1c06c0067e4f327
+size 5579930
diff --git a/sdxl_default/images/2048px/80.png b/sdxl_default/images/2048px/80.png
new file mode 100644
index 0000000000000000000000000000000000000000..6fc1844bad6143b2ca1b18238ef96898f548b7c1
--- /dev/null
+++ b/sdxl_default/images/2048px/80.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3fd75c17795bef6088d330c23e6a920c10cbb6b728d5a14704cc087b04759998
+size 6042986
diff --git a/sdxl_default/images/2048px/81.png b/sdxl_default/images/2048px/81.png
new file mode 100644
index 0000000000000000000000000000000000000000..a4e08e43a628b1f9c6bca6d927bf42855e618b99
--- /dev/null
+++ b/sdxl_default/images/2048px/81.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a62d6ac6bf90ec00b9eb15fdfaf7fdf10b870fced075b25cb2353952149faf4e
+size 4787514
diff --git a/sdxl_default/images/2048px/82.png b/sdxl_default/images/2048px/82.png
new file mode 100644
index 0000000000000000000000000000000000000000..f9f0a3fb0f73a2b89a2eecb6f0e4bfb39cfa0eae
--- /dev/null
+++ b/sdxl_default/images/2048px/82.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:83ed6882890536d10772b851d09db163780dbdc78288670a36f6e41ecfad5fcd
+size 7059189
diff --git a/sdxl_default/images/2048px/83.png b/sdxl_default/images/2048px/83.png
new file mode 100644
index 0000000000000000000000000000000000000000..aed9bbacb20ee680cb7ddf248927d92d08544d62
--- /dev/null
+++ b/sdxl_default/images/2048px/83.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:03265a27abb1704791b410a40a7eefb4f291e4fd4372375e7ebffbad5e80c343
+size 5611504
diff --git a/sdxl_default/images/2048px/84.png b/sdxl_default/images/2048px/84.png
new file mode 100644
index 0000000000000000000000000000000000000000..f5ed54d1f15234305a30e09e5e4e29acdee47cf3
--- /dev/null
+++ b/sdxl_default/images/2048px/84.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ed45905f33f805149138d4bf85b9714655d7f99be26e910940c86326ca6baf81
+size 5539732
diff --git a/sdxl_default/images/2048px/85.png b/sdxl_default/images/2048px/85.png
new file mode 100644
index 0000000000000000000000000000000000000000..7213363c17087af0bc9bf4fac39b8d4bb0e5c30e
--- /dev/null
+++ b/sdxl_default/images/2048px/85.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78a531d19f5b3a9ba729423729aa7cdcecf22b205d0dedaa210ae6099a080a0e
+size 4581273
diff --git a/sdxl_default/images/2048px/86.png b/sdxl_default/images/2048px/86.png
new file mode 100644
index 0000000000000000000000000000000000000000..ba3c69648a038b5b7db0ebb2f8212bec2cdf97b9
--- /dev/null
+++ b/sdxl_default/images/2048px/86.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:99091434d02b1dd736c10113bae32dbf4a9c812e793ccbee86d722ebae6ff9cc
+size 5492360
diff --git a/sdxl_default/images/2048px/87.png b/sdxl_default/images/2048px/87.png
new file mode 100644
index 0000000000000000000000000000000000000000..c51b5e494d70666f96ef525c9dfa324691984619
--- /dev/null
+++ b/sdxl_default/images/2048px/87.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8263eda1dfbb6a3187a1b28b98c28bd934a1f06fa965ba0c9bf6f2f21531458b
+size 6124863
diff --git a/sdxl_default/images/2048px/88.png b/sdxl_default/images/2048px/88.png
new file mode 100644
index 0000000000000000000000000000000000000000..0e0ef7a70e9cd4fa4f5e992acb502cd3b02f1a39
--- /dev/null
+++ b/sdxl_default/images/2048px/88.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f1fe86ad0da2fa33b42545e27362f3e67196a4219561b9187dc8a60a56fcaf02
+size 6488518
diff --git a/sdxl_default/images/2048px/89.png b/sdxl_default/images/2048px/89.png
new file mode 100644
index 0000000000000000000000000000000000000000..f3f5cc973e8727db783c667207fc531eeedf7190
--- /dev/null
+++ b/sdxl_default/images/2048px/89.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:012ee3f5d14c6c5c029964d2ab4eca6eb8667cb60b560704ed1cac1c595f749e
+size 4510552
diff --git a/sdxl_default/images/2048px/9.png b/sdxl_default/images/2048px/9.png
new file mode 100644
index 0000000000000000000000000000000000000000..bafd70b12577df35414f6235f657e441fb7a33f4
--- /dev/null
+++ b/sdxl_default/images/2048px/9.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:741450ef5c3ee120391b819a52c6fb59e4db08ccca9800058f0bb8b3e6e800b3
+size 4709724
diff --git a/sdxl_default/images/2048px/90.png b/sdxl_default/images/2048px/90.png
new file mode 100644
index 0000000000000000000000000000000000000000..0f353a10d896dc97a072f1f22bddf1a15763bbb1
--- /dev/null
+++ b/sdxl_default/images/2048px/90.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:63562d8a67724ffb56c4a12357d698bf43b8f1ae988568718f4747f3aade1a24
+size 5872569
diff --git a/sdxl_default/images/2048px/92.png b/sdxl_default/images/2048px/92.png
new file mode 100644
index 0000000000000000000000000000000000000000..f8c05928f54b83b6c0e4261aecdfcf1aeb3444dd
--- /dev/null
+++ b/sdxl_default/images/2048px/92.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b09041eced2960b52a2b9a484ae828c3b0fbf54d29bcdf923f75e573164008a7
+size 6155470
diff --git a/sdxl_default/images/2048px/93.png b/sdxl_default/images/2048px/93.png
new file mode 100644
index 0000000000000000000000000000000000000000..8d635615a9943a712351a71f0893e138a4b04886
--- /dev/null
+++ b/sdxl_default/images/2048px/93.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b50808e6c8579dd970e656b9c7121b407d965025dfffab3b75cce568b6f97d8e
+size 6290055
diff --git a/sdxl_default/images/2048px/94.png b/sdxl_default/images/2048px/94.png
new file mode 100644
index 0000000000000000000000000000000000000000..06bc96a541270841cf787764bb5e04992b19494b
--- /dev/null
+++ b/sdxl_default/images/2048px/94.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7046086475f56991048df49141cb993740d7dc119bab1d39f34a6bdb47959982
+size 5522748
diff --git a/sdxl_default/images/2048px/95.png b/sdxl_default/images/2048px/95.png
new file mode 100644
index 0000000000000000000000000000000000000000..17faca6c24e33da4d236f399c32fc77d760d0b17
--- /dev/null
+++ b/sdxl_default/images/2048px/95.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3923754460009dc1eb9252e8bd9120038bc619bc38da0837d26b4ba2ad450140
+size 4512542
diff --git a/sdxl_default/images/2048px/96.png b/sdxl_default/images/2048px/96.png
new file mode 100644
index 0000000000000000000000000000000000000000..9a974c6e3c2d2745b4cd588650c41c3bb68ff986
--- /dev/null
+++ b/sdxl_default/images/2048px/96.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:15714668b233d82ab9f42358593a7bbc0eb9f162e251f57cfbb1f7d9e891e637
+size 5252037
diff --git a/sdxl_default/images/2048px/97.png b/sdxl_default/images/2048px/97.png
new file mode 100644
index 0000000000000000000000000000000000000000..e4713277cdbbb3d77d3de2cb3d4d55d9b06571f1
--- /dev/null
+++ b/sdxl_default/images/2048px/97.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:161133429fc284158ab0f71fc95ad7642ba3aef070859e7b32be00363ffbaabb
+size 4403018
diff --git a/sdxl_default/images/2048px/98.png b/sdxl_default/images/2048px/98.png
new file mode 100644
index 0000000000000000000000000000000000000000..e132aae941e8848830be6d179be39ecce40a87d2
--- /dev/null
+++ b/sdxl_default/images/2048px/98.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c26b98f2c29f50c6e0002381c988966c39ab93ecb2ec3f2e2d2a9aa0aa44623
+size 6535863
diff --git a/sdxl_default/images/2048px/99.png b/sdxl_default/images/2048px/99.png
new file mode 100644
index 0000000000000000000000000000000000000000..437b7120ffd923d472f9b4871198252e955d0544
--- /dev/null
+++ b/sdxl_default/images/2048px/99.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b1644b9c3b48b242a37d39db7e2a1c52c0722aefb4f27b5e6a72b58c282d0ea
+size 4370635
diff --git a/sdxl_default/images/4096px/0.png b/sdxl_default/images/4096px/0.png
new file mode 100644
index 0000000000000000000000000000000000000000..e94d58318a3eca17dade3bb089584287bf1551ed
--- /dev/null
+++ b/sdxl_default/images/4096px/0.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3feb92c6c0f4c16794e1d9c4f85c8cdb5bffd81ba2db2298daf800824cf2ad6f
+size 25454309
diff --git a/sdxl_default/images/4096px/1.png b/sdxl_default/images/4096px/1.png
new file mode 100644
index 0000000000000000000000000000000000000000..d0aa467e4c7e82cca62ca5b7b31c5589f7a95d0d
--- /dev/null
+++ b/sdxl_default/images/4096px/1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8f97f2d42978a9160115b8c06ebed9fbbdb86ec43a6009070f9ccfd3ad2e6fed
+size 31276990
diff --git a/sdxl_default/images/4096px/10.png b/sdxl_default/images/4096px/10.png
new file mode 100644
index 0000000000000000000000000000000000000000..7be012ddde87038adbf470a11b344e6ffaee65cd
--- /dev/null
+++ b/sdxl_default/images/4096px/10.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7c8600d1d21391438a7fb4a0bc4db28239de676d8a8533ee3faf6702dfe55c8e
+size 24613033
diff --git a/sdxl_default/images/4096px/12.png b/sdxl_default/images/4096px/12.png
new file mode 100644
index 0000000000000000000000000000000000000000..cdbd3da4fa596f1b9c251eec814e2e60c42671c8
--- /dev/null
+++ b/sdxl_default/images/4096px/12.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b6bb17c5928d6800de4f501bf559a0b37fadd9c9f218db55c0a9e5e299e5fe30
+size 33545779
diff --git a/sdxl_default/images/4096px/13.png b/sdxl_default/images/4096px/13.png
new file mode 100644
index 0000000000000000000000000000000000000000..d2f6de04028e4ea1674f02dd9a2974c3d29a46e9
--- /dev/null
+++ b/sdxl_default/images/4096px/13.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f734033c725ebd72dec283ff2e7478bbb31ccaa641e78f8651911cccb580ba13
+size 25264814
diff --git a/sdxl_default/images/4096px/14.png b/sdxl_default/images/4096px/14.png
new file mode 100644
index 0000000000000000000000000000000000000000..e8ea53cbe93e5f877628b3f4af5b97c0fb80a9e9
--- /dev/null
+++ b/sdxl_default/images/4096px/14.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d2571fb0970b3282a9927c566e265c169ebab2fd40dcf677c21596d9df0919f5
+size 17115298
diff --git a/sdxl_default/images/4096px/15.png b/sdxl_default/images/4096px/15.png
new file mode 100644
index 0000000000000000000000000000000000000000..567ed305076389853be08014f414503febc8f255
--- /dev/null
+++ b/sdxl_default/images/4096px/15.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:753f0f2f440b56b638d8096c3887addfe1bc87b5c469ec1c6fb49bb47ebc9616
+size 25684279
diff --git a/sdxl_default/images/4096px/16.png b/sdxl_default/images/4096px/16.png
new file mode 100644
index 0000000000000000000000000000000000000000..c549a2a9576379a3310c662b41d1cc7460fbc893
--- /dev/null
+++ b/sdxl_default/images/4096px/16.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc5c986b47a41138351950c69ba3589ad4c9a8b6c2b65b2e26db650a9189233a
+size 20815434
diff --git a/sdxl_default/images/4096px/17.png b/sdxl_default/images/4096px/17.png
new file mode 100644
index 0000000000000000000000000000000000000000..dbd11c3b0069811febca361f9052d6ee628bbd33
--- /dev/null
+++ b/sdxl_default/images/4096px/17.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:479c73bab6cd514b2c9c5d66263b927c52b0ac3bd94ea1a69b82a755b27c1f60
+size 19692157
diff --git a/sdxl_default/images/4096px/18.png b/sdxl_default/images/4096px/18.png
new file mode 100644
index 0000000000000000000000000000000000000000..b8c7ba8ab4d34664cefc73ca7a19404b95553a51
--- /dev/null
+++ b/sdxl_default/images/4096px/18.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5a43e88b09c59e579959cb465f6b458300672b47f3130c55d65eaeb7a4688fae
+size 16545074
diff --git a/sdxl_default/images/4096px/19.png b/sdxl_default/images/4096px/19.png
new file mode 100644
index 0000000000000000000000000000000000000000..10d5b828ec79ae1ed5e53b3136f6a0fb809a5ce1
--- /dev/null
+++ b/sdxl_default/images/4096px/19.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dc3f1cdc41da68dd76e9fb9711a3cddc4237128eb190e4b379ab89a9c6ea754c
+size 25703486
diff --git a/sdxl_default/images/4096px/2.png b/sdxl_default/images/4096px/2.png
new file mode 100644
index 0000000000000000000000000000000000000000..ab5cf48dd27b7747c343e18a4e329860669d6ca7
--- /dev/null
+++ b/sdxl_default/images/4096px/2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dcb89eb59b42569e1732a65bdea891f72267bd155b0e892454f63c600e004f37
+size 18683129
diff --git a/sdxl_default/images/4096px/20.png b/sdxl_default/images/4096px/20.png
new file mode 100644
index 0000000000000000000000000000000000000000..6ca90affa73993637caaa754c4bf4980f91a2e45
--- /dev/null
+++ b/sdxl_default/images/4096px/20.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d0410a61366322aaa7fddf74fe8699b6a4165742648ad19be710d17c95897a05
+size 15521373
diff --git a/sdxl_default/images/4096px/21.png b/sdxl_default/images/4096px/21.png
new file mode 100644
index 0000000000000000000000000000000000000000..c42a8958184e503ae663cf85bb03571431a9484e
--- /dev/null
+++ b/sdxl_default/images/4096px/21.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5102e945f4869860e363ef3ff7dd4f2b3cd33d8a8f159eb554b89a30777fb9ef
+size 18181584
diff --git a/sdxl_default/images/4096px/22.png b/sdxl_default/images/4096px/22.png
new file mode 100644
index 0000000000000000000000000000000000000000..cbb0678d92f81e2202ba5f4eca1c93e3b2a9ac53
--- /dev/null
+++ b/sdxl_default/images/4096px/22.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18f3793ce0b684cebd7b0cc8bbacd7999e3750c9a321310b2e2dfa5229f939f6
+size 15001425
diff --git a/sdxl_default/images/4096px/23.png b/sdxl_default/images/4096px/23.png
new file mode 100644
index 0000000000000000000000000000000000000000..9ce8003618833b8e3454952319d624e35c18721f
--- /dev/null
+++ b/sdxl_default/images/4096px/23.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d506918a2b5ff2a3b08144847113c86cf4e9fbadb5e5b8fc58abe4729178d356
+size 20687446
diff --git a/sdxl_default/images/4096px/24.png b/sdxl_default/images/4096px/24.png
new file mode 100644
index 0000000000000000000000000000000000000000..4c8583fd5c3bf4fe31a74fc75d204184b51157d5
--- /dev/null
+++ b/sdxl_default/images/4096px/24.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5c9eb6a9c6700ab71bc2dc07022a863540a63f038270cc2e4de8917f806674a9
+size 26655069
diff --git a/sdxl_default/images/4096px/25.png b/sdxl_default/images/4096px/25.png
new file mode 100644
index 0000000000000000000000000000000000000000..64cebe904a52a86bb69ac2dbba2cb4c611637a58
--- /dev/null
+++ b/sdxl_default/images/4096px/25.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f0861ca4247e0d93d49e90273ec801d882d63492ecea001f9b35860512e1ceee
+size 28616853
diff --git a/sdxl_default/images/4096px/26.png b/sdxl_default/images/4096px/26.png
new file mode 100644
index 0000000000000000000000000000000000000000..74fd4e74979c0e25bc8f0af5f690ebac9acacd4f
--- /dev/null
+++ b/sdxl_default/images/4096px/26.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3908bae14f974d8c75ee82fe441845f4936e7e19629b65144bf9a474ffe221fe
+size 19237818
diff --git a/sdxl_default/images/4096px/27.png b/sdxl_default/images/4096px/27.png
new file mode 100644
index 0000000000000000000000000000000000000000..3e726d4790d762aaeb33f0a27ebd6478215340f7
--- /dev/null
+++ b/sdxl_default/images/4096px/27.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7f79d9116250d204cd4a5b49aa0668a369a9efe34ffc81fa8c6009cb517fc7e3
+size 32686420
diff --git a/sdxl_default/images/4096px/28.png b/sdxl_default/images/4096px/28.png
new file mode 100644
index 0000000000000000000000000000000000000000..92a68b7a8eea92164df6a6cbef0bede0ee6d09db
--- /dev/null
+++ b/sdxl_default/images/4096px/28.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:25d5674db7a3a94d9685b0d7abfe590bef55b237451a27d6de7c8c2e7bc6fd52
+size 22100412
diff --git a/sdxl_default/images/4096px/29.png b/sdxl_default/images/4096px/29.png
new file mode 100644
index 0000000000000000000000000000000000000000..9ea4ffd38e9b5b4a1bda91acbf32f2d23b57a975
--- /dev/null
+++ b/sdxl_default/images/4096px/29.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ec0dba3324a862d15d09946611dc4584849005c2c367b14535ae3348f3e6c6bd
+size 23293097
diff --git a/sdxl_default/images/4096px/3.png b/sdxl_default/images/4096px/3.png
new file mode 100644
index 0000000000000000000000000000000000000000..5d6990164dac5aaf9359085904123005d696afec
--- /dev/null
+++ b/sdxl_default/images/4096px/3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8531a2f3b544b45977ea6baaf582f406e5d512d574c8f83ed883cf419b5c9145
+size 16942132
diff --git a/sdxl_default/images/4096px/30.png b/sdxl_default/images/4096px/30.png
new file mode 100644
index 0000000000000000000000000000000000000000..2d8c2938443aeeb5d326a0ad32799e2177a11cd1
--- /dev/null
+++ b/sdxl_default/images/4096px/30.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d77799d615ac2d77d1a07f7f1acece99c75ef1d8364211960633548c3a63454
+size 24937488
diff --git a/sdxl_default/images/4096px/31.png b/sdxl_default/images/4096px/31.png
new file mode 100644
index 0000000000000000000000000000000000000000..e355bc885817a430991813545be51a4a30a37240
--- /dev/null
+++ b/sdxl_default/images/4096px/31.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d7e9efc426be1992982fcbb9839fade4ab91d9c6c8c2b96e64857bc656ffbe5
+size 23843040
diff --git a/sdxl_default/images/4096px/32.png b/sdxl_default/images/4096px/32.png
new file mode 100644
index 0000000000000000000000000000000000000000..051653b9e3c9f9e60248b7768598ea6a7f27d68a
--- /dev/null
+++ b/sdxl_default/images/4096px/32.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:71e6f9b92d9036bbe82abe810af8a4e78c5cd5a9a589e8ce2cf145c771315b7b
+size 27235234
diff --git a/sdxl_default/images/4096px/33.png b/sdxl_default/images/4096px/33.png
new file mode 100644
index 0000000000000000000000000000000000000000..e2365ec7750ad7f35a336e9411b924f58e554aa6
--- /dev/null
+++ b/sdxl_default/images/4096px/33.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fbae3a9f5c1d81ea2253c39149e682ab6642215bced5e44e4b9129335ecebd7c
+size 33371658
diff --git a/sdxl_default/images/4096px/34.png b/sdxl_default/images/4096px/34.png
new file mode 100644
index 0000000000000000000000000000000000000000..b017992ac81b1ee4536c31f6b242baf9e8a59c28
--- /dev/null
+++ b/sdxl_default/images/4096px/34.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ae9d399e984bed7d853310fb77e3176b5a6b9afba75694db8d5485f28d928ab4
+size 31870251
diff --git a/sdxl_default/images/4096px/35.png b/sdxl_default/images/4096px/35.png
new file mode 100644
index 0000000000000000000000000000000000000000..4fac1d0d07bd1817d0e2313a58d7170f68db4424
--- /dev/null
+++ b/sdxl_default/images/4096px/35.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ce31531af8821a98d29e2b12c8bf5d41592df82fd8a974cdafce59c0d722906
+size 16352335
diff --git a/sdxl_default/images/4096px/36.png b/sdxl_default/images/4096px/36.png
new file mode 100644
index 0000000000000000000000000000000000000000..acaba532fc7b84e4f21b5a0a60d604e0394d88a7
--- /dev/null
+++ b/sdxl_default/images/4096px/36.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:018756185b814e40b18aa64142603d84166bc34cd1c4ba1af460ee23a6e90b87
+size 23268687
diff --git a/sdxl_default/images/4096px/37.png b/sdxl_default/images/4096px/37.png
new file mode 100644
index 0000000000000000000000000000000000000000..cc3068ec447742dc218e5eb7f006ec50422a00ac
--- /dev/null
+++ b/sdxl_default/images/4096px/37.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c699ded332f176cbc56e6848e98965b1f0cddb36f705a50f0705a956a8318471
+size 28315266
diff --git a/sdxl_default/images/4096px/38.png b/sdxl_default/images/4096px/38.png
new file mode 100644
index 0000000000000000000000000000000000000000..f9ba2218d9f5c5f2ecceebfe9746c4d529632b11
--- /dev/null
+++ b/sdxl_default/images/4096px/38.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6b0930f34fd2c4401a1428d0cee161246c1008ddc3c73976d766f8751589c54d
+size 29366580
diff --git a/sdxl_default/images/4096px/39.png b/sdxl_default/images/4096px/39.png
new file mode 100644
index 0000000000000000000000000000000000000000..926d358ccd43078cb88459929c9da0fba50a3eac
--- /dev/null
+++ b/sdxl_default/images/4096px/39.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b70c4add4b0095835d4ebc262d56185dd85fba26fd11e003e4af5e5fd4ec64d1
+size 18462715
diff --git a/sdxl_default/images/4096px/4.png b/sdxl_default/images/4096px/4.png
new file mode 100644
index 0000000000000000000000000000000000000000..25948a184c94670b5bb9a2cc9ef5e6b547186450
--- /dev/null
+++ b/sdxl_default/images/4096px/4.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b19317ff5ebeaee245f898889566b6b35c9074cb725a77318db9df7d998cc9dc
+size 19816426
diff --git a/sdxl_default/images/4096px/40.png b/sdxl_default/images/4096px/40.png
new file mode 100644
index 0000000000000000000000000000000000000000..423e82a79e721188a103ec07529fb7a6b39415a6
--- /dev/null
+++ b/sdxl_default/images/4096px/40.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:caba35886891d001068767fcaa859a13984c8b6fb024f8e65de81a12fb3f8968
+size 16491158
diff --git a/sdxl_default/images/4096px/41.png b/sdxl_default/images/4096px/41.png
new file mode 100644
index 0000000000000000000000000000000000000000..d1fba6097bad85dd0d521d2b522da0f1c2c6cac6
--- /dev/null
+++ b/sdxl_default/images/4096px/41.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f03a137a77b45fc449c78cf9ff95eefbf50d0ca30e2b9c7dd3d8cd4d03f19e88
+size 20874227
diff --git a/sdxl_default/images/4096px/42.png b/sdxl_default/images/4096px/42.png
new file mode 100644
index 0000000000000000000000000000000000000000..594901e4a424afc13a439676b2c2b1f60629b2ef
--- /dev/null
+++ b/sdxl_default/images/4096px/42.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:927bea8ec46eaf7b24f91b7702e83c32176e3d6e9b903ea03d2c4019892b8148
+size 15980238
diff --git a/sdxl_default/images/4096px/43.png b/sdxl_default/images/4096px/43.png
new file mode 100644
index 0000000000000000000000000000000000000000..d4a104d2e87c501584383d765e02f7797cb5ec8c
--- /dev/null
+++ b/sdxl_default/images/4096px/43.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6e1d5ecdc943048b872fb6aae0f7e5ea3f48ea14821329a4a6f7c570f02d334f
+size 26832911
diff --git a/sdxl_default/images/4096px/44.png b/sdxl_default/images/4096px/44.png
new file mode 100644
index 0000000000000000000000000000000000000000..a42684e036297065447f6f9431be1f8ae617622f
--- /dev/null
+++ b/sdxl_default/images/4096px/44.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:faf14497e9af0e99b0c2c1ca54d338a5b5b2f1bb10d0961d9b44c896a843cc37
+size 22811635
diff --git a/sdxl_default/images/4096px/45.png b/sdxl_default/images/4096px/45.png
new file mode 100644
index 0000000000000000000000000000000000000000..390f95b17bf108e732a972414dbc5245acd5b2a9
--- /dev/null
+++ b/sdxl_default/images/4096px/45.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0d273c964546e3a7a4f179d89bdee1fac0b02fe2849876024089e2581fb22df1
+size 22607388
diff --git a/sdxl_default/images/4096px/46.png b/sdxl_default/images/4096px/46.png
new file mode 100644
index 0000000000000000000000000000000000000000..f82c519c9f39bbb0ba614f02ba9e7c1a0cc6b8f5
--- /dev/null
+++ b/sdxl_default/images/4096px/46.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6d90cefcd8b1523327c75234e0988c83e4ddc6f1d320bb24f3576b1897da491b
+size 25098332
diff --git a/sdxl_default/images/4096px/47.png b/sdxl_default/images/4096px/47.png
new file mode 100644
index 0000000000000000000000000000000000000000..5030bceb9515aa3cf76d7c34e2fbbe35276b820b
--- /dev/null
+++ b/sdxl_default/images/4096px/47.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f28bf87cf81be6e5cbb11db4f92abf439dbccc333bd300702878da29461f9e4e
+size 20588348
diff --git a/sdxl_default/images/4096px/48.png b/sdxl_default/images/4096px/48.png
new file mode 100644
index 0000000000000000000000000000000000000000..d7bf8507014544c1c076e07f550ce31f4c63f36d
--- /dev/null
+++ b/sdxl_default/images/4096px/48.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0f987b21b63a6921296e358668b5487f1ad94beab9d384fbf9d60e8bb00199b1
+size 17475550
diff --git a/sdxl_default/images/4096px/49.png b/sdxl_default/images/4096px/49.png
new file mode 100644
index 0000000000000000000000000000000000000000..2d1a9961e5c177a6d8e82aaee545605ef140ff38
--- /dev/null
+++ b/sdxl_default/images/4096px/49.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c8cbd5692596cd6362446b69374ba941419e3c36c909a6c9c36421163bbb43a8
+size 20801696
diff --git a/sdxl_default/images/4096px/5.png b/sdxl_default/images/4096px/5.png
new file mode 100644
index 0000000000000000000000000000000000000000..8dd085016de1dd097729ed3c57323eb1a95dba77
--- /dev/null
+++ b/sdxl_default/images/4096px/5.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c63bf9cbc386582e23f68579b13791181d07d6fb6981fc7530b33cbec6805515
+size 24185309
diff --git a/sdxl_default/images/4096px/50.png b/sdxl_default/images/4096px/50.png
new file mode 100644
index 0000000000000000000000000000000000000000..a3e6a3d2e94da51b9654fe45d9e1f79d40a838a4
--- /dev/null
+++ b/sdxl_default/images/4096px/50.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:eb59e09b96e76d6ee81f976837a8a2ac623b237396a8ff7772ff44aa02b07874
+size 19890000
diff --git a/sdxl_default/images/4096px/51.png b/sdxl_default/images/4096px/51.png
new file mode 100644
index 0000000000000000000000000000000000000000..67aed4debc703f1a6a27045accd1e25c6de42d7d
--- /dev/null
+++ b/sdxl_default/images/4096px/51.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6f5d98d9ef20a48f48304080b666103e773a150b9cbfb54b2bb746ba0b886c2a
+size 22633166
diff --git a/sdxl_default/images/4096px/52.png b/sdxl_default/images/4096px/52.png
new file mode 100644
index 0000000000000000000000000000000000000000..8572d8c6899dcd85f5ee4f9dc74e3ba4cb028f8d
--- /dev/null
+++ b/sdxl_default/images/4096px/52.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0266f01b70583a0f2178fbfd15a494a1a951fed26d2ecd4c33db6b2278ff777e
+size 20455878
diff --git a/sdxl_default/images/4096px/53.png b/sdxl_default/images/4096px/53.png
new file mode 100644
index 0000000000000000000000000000000000000000..6b2fde9392d2997aa673a2bad547cf9bccac29b0
--- /dev/null
+++ b/sdxl_default/images/4096px/53.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:336aa78e4caffbe1451c2317b38212e8e1e81575c0260ba62af61b6843c00c7b
+size 13001784
diff --git a/sdxl_default/images/4096px/54.png b/sdxl_default/images/4096px/54.png
new file mode 100644
index 0000000000000000000000000000000000000000..591c341850b36db7774cf73af25d7fa4875f75f0
--- /dev/null
+++ b/sdxl_default/images/4096px/54.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:06cf24d6b1e22d8ffa447bce23e9e438a16eda0f69cdf03c4ce19ddd4ac8aecc
+size 29696315
diff --git a/sdxl_default/images/4096px/56.png b/sdxl_default/images/4096px/56.png
new file mode 100644
index 0000000000000000000000000000000000000000..850690c39d04e552fe1941098ac526e870ec1918
--- /dev/null
+++ b/sdxl_default/images/4096px/56.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b7d9e1fdb601104f8d53f45523c2b01a2fc30cbd521144d91e297fd9e0182232
+size 20691831
diff --git a/sdxl_default/images/4096px/57.png b/sdxl_default/images/4096px/57.png
new file mode 100644
index 0000000000000000000000000000000000000000..4d8491739e7e8ea7a0c694ada1fad0a2cab25493
--- /dev/null
+++ b/sdxl_default/images/4096px/57.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55559e6fc70585aed91ba619c76e76280552f5a61d6cf035c1ac36238cecd78b
+size 25155721
diff --git a/sdxl_default/images/4096px/58.png b/sdxl_default/images/4096px/58.png
new file mode 100644
index 0000000000000000000000000000000000000000..060cc04fdd77b7083bd2eb93a551bcd48e7118c8
--- /dev/null
+++ b/sdxl_default/images/4096px/58.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6b023ca14ba8ffb846fe58692ebbdf2e04150251b928fcac79ef8edb05ead286
+size 27854953
diff --git a/sdxl_default/images/4096px/59.png b/sdxl_default/images/4096px/59.png
new file mode 100644
index 0000000000000000000000000000000000000000..fc3e4cfd58acb7939cd0aa81b6379bdfb3842262
--- /dev/null
+++ b/sdxl_default/images/4096px/59.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:35f7d3f42b8f95a27c86fdda7fb17e11453086dd868e5e429bb006f80b0011ab
+size 20736262
diff --git a/sdxl_default/images/4096px/6.png b/sdxl_default/images/4096px/6.png
new file mode 100644
index 0000000000000000000000000000000000000000..46cdf417d423ad91e4976648424aa2360328b162
--- /dev/null
+++ b/sdxl_default/images/4096px/6.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:26f27f5d50fd0f34a8643a781f28abbcefe333713f7796546a4670e36871f1ef
+size 18530395
diff --git a/sdxl_default/images/4096px/60.png b/sdxl_default/images/4096px/60.png
new file mode 100644
index 0000000000000000000000000000000000000000..53f16711ba0ce67c89b7d4ca3197942e7f7d7625
--- /dev/null
+++ b/sdxl_default/images/4096px/60.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bae2c1402bfde60704f12545a6d34013d79dfac48463695d297e616d1d0cb5cb
+size 16282521
diff --git a/sdxl_default/images/4096px/61.png b/sdxl_default/images/4096px/61.png
new file mode 100644
index 0000000000000000000000000000000000000000..19478c8864392d84790aeb61a47ccc59fca86c31
--- /dev/null
+++ b/sdxl_default/images/4096px/61.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:54b7fb729915bd6a238a720ab712996e69143b8407e33c6466c213f84bd82634
+size 24837834
diff --git a/sdxl_default/images/4096px/62.png b/sdxl_default/images/4096px/62.png
new file mode 100644
index 0000000000000000000000000000000000000000..8b0fbedccaa9b5f3d49b6d0aaa8322084e7c1b4f
--- /dev/null
+++ b/sdxl_default/images/4096px/62.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e9564250d64c527a1459efa4cc8b7ce606691e69dc5b1ca26760cd482cf1b5c8
+size 15973240
diff --git a/sdxl_default/images/4096px/63.png b/sdxl_default/images/4096px/63.png
new file mode 100644
index 0000000000000000000000000000000000000000..6bb939f4b2704ca2b24eb55729ecea7429aa836c
--- /dev/null
+++ b/sdxl_default/images/4096px/63.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b450f64b29dd24e41c28a90dcef3375455f3c5d97f5401f232ba78769dc4bff5
+size 25101655
diff --git a/sdxl_default/images/4096px/64.png b/sdxl_default/images/4096px/64.png
new file mode 100644
index 0000000000000000000000000000000000000000..f9b36249daa28253d5bc64bdaa6a2b90786a5b3f
--- /dev/null
+++ b/sdxl_default/images/4096px/64.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f06872f6015380d41e5cb9cfd18454b2a469ffa79fe1a1fa855a85ef40640f47
+size 16760162
diff --git a/sdxl_default/images/4096px/65.png b/sdxl_default/images/4096px/65.png
new file mode 100644
index 0000000000000000000000000000000000000000..50e038253ddb9ed3de0b261071e225b409957530
--- /dev/null
+++ b/sdxl_default/images/4096px/65.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:43340a366c150a05e14e24c91abcc2e0ba6882014d52b28f01a1c66d3a78795c
+size 21770901
diff --git a/sdxl_default/images/4096px/66.png b/sdxl_default/images/4096px/66.png
new file mode 100644
index 0000000000000000000000000000000000000000..d20865f6bfa127a964055e0120188f1b91bea6d2
--- /dev/null
+++ b/sdxl_default/images/4096px/66.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b2860a7a589a18d3910ab1cb7fdc2a83c1c31aa41ce826dbd3d259bb31640060
+size 17549539
diff --git a/sdxl_default/images/4096px/67.png b/sdxl_default/images/4096px/67.png
new file mode 100644
index 0000000000000000000000000000000000000000..6710965a435179317c1d38a021fcf071f049d72c
--- /dev/null
+++ b/sdxl_default/images/4096px/67.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd43457f88a010e24077ad3dfc619f3fe1ed04ae41db534f6754603cd8bd9c13
+size 26681096
diff --git a/sdxl_default/images/4096px/68.png b/sdxl_default/images/4096px/68.png
new file mode 100644
index 0000000000000000000000000000000000000000..83ea3e0c96283167300b8bb0a81a33bbb7108be3
--- /dev/null
+++ b/sdxl_default/images/4096px/68.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b183cd6db089601d4b344dcee703db33e159359a888e1a0238e189c78f7d2510
+size 18476600
diff --git a/sdxl_default/images/4096px/69.png b/sdxl_default/images/4096px/69.png
new file mode 100644
index 0000000000000000000000000000000000000000..651142dc9896fc172a87d58ddc6b817c8d11056c
--- /dev/null
+++ b/sdxl_default/images/4096px/69.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:efed92add5b3545c9d59052bbcb0b110c2de31e685c93aa59eab86ff07f2b3e8
+size 19715368
diff --git a/sdxl_default/images/4096px/7.png b/sdxl_default/images/4096px/7.png
new file mode 100644
index 0000000000000000000000000000000000000000..60ffa873ae2efd38169729216c4e6e3760c929e6
--- /dev/null
+++ b/sdxl_default/images/4096px/7.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1927548e89c716a4534b9bbce2585da176ef9528111d1ee231caa0cd39817648
+size 20430184
diff --git a/sdxl_default/images/4096px/70.png b/sdxl_default/images/4096px/70.png
new file mode 100644
index 0000000000000000000000000000000000000000..67927702efcd3693cd8e44ccd6eb0cff7e1c5554
--- /dev/null
+++ b/sdxl_default/images/4096px/70.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:844536acf95f1e6e02cecdde02d59900d56ea86adee9467ff558f75ba4ab0b75
+size 14532480
diff --git a/sdxl_default/images/4096px/71.png b/sdxl_default/images/4096px/71.png
new file mode 100644
index 0000000000000000000000000000000000000000..080b7360d4efaea0895e00d6d7ca0aa65f7b3a41
--- /dev/null
+++ b/sdxl_default/images/4096px/71.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9045e57b3a758ab479d3d6c2add886ea072622fb3125c54f8edfa9268d3e5406
+size 30495613
diff --git a/sdxl_default/images/4096px/72.png b/sdxl_default/images/4096px/72.png
new file mode 100644
index 0000000000000000000000000000000000000000..52a2005cca3dd371462d72bdae9430e080d0bf08
--- /dev/null
+++ b/sdxl_default/images/4096px/72.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:079f5a21d73e82c56e62cf97492dc68e32b331e26e6795a48341d2d512b8c707
+size 21174085
diff --git a/sdxl_default/images/4096px/73.png b/sdxl_default/images/4096px/73.png
new file mode 100644
index 0000000000000000000000000000000000000000..d59c6587a30b34b8d56de87a5b8d0c970d87fd39
--- /dev/null
+++ b/sdxl_default/images/4096px/73.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2abe3d3616f2c930a41c68c9a3fd20d5981949c351b52e6ada0fa37c864ec776
+size 23508518
diff --git a/sdxl_default/images/4096px/74.png b/sdxl_default/images/4096px/74.png
new file mode 100644
index 0000000000000000000000000000000000000000..185485214f9dee2b35a9ff1e0155b86a9ce057e7
--- /dev/null
+++ b/sdxl_default/images/4096px/74.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a7d964174a9612eb45799820e8385d1faa494f38ea17ce90c6abe4cb4906bb82
+size 17726849
diff --git a/sdxl_default/images/4096px/75.png b/sdxl_default/images/4096px/75.png
new file mode 100644
index 0000000000000000000000000000000000000000..282816a753cda79006f0110144a302d8ce59d7dd
--- /dev/null
+++ b/sdxl_default/images/4096px/75.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f3cd86b83f5d9b846ce2c63365e62c14913c3c21f976fc2b23bfdbdc3892dd9
+size 26780605
diff --git a/sdxl_default/images/4096px/76.png b/sdxl_default/images/4096px/76.png
new file mode 100644
index 0000000000000000000000000000000000000000..c6cf40b5e0273f22a00ec75a5f5eb40bebf71146
--- /dev/null
+++ b/sdxl_default/images/4096px/76.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72bdc7f2b6583ecef2fdc70f28e23fcfd495599b60a2229b24877f83a28e1451
+size 17674724
diff --git a/sdxl_default/images/4096px/77.png b/sdxl_default/images/4096px/77.png
new file mode 100644
index 0000000000000000000000000000000000000000..7c701774b6e22e53c4aafc616387fea2dbeae2f4
--- /dev/null
+++ b/sdxl_default/images/4096px/77.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c48864bc6b32a90fede4793b68cbb371fa82825623f00a54d140530d514f1342
+size 30604089
diff --git a/sdxl_default/images/4096px/78.png b/sdxl_default/images/4096px/78.png
new file mode 100644
index 0000000000000000000000000000000000000000..b278c9e064f80ccc85f850e27fe34cf697416327
--- /dev/null
+++ b/sdxl_default/images/4096px/78.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5217d2658a427f65c9f2b2ec88d2de6bfe638c8f15358a8e59d36f9f867773ce
+size 24639032
diff --git a/sdxl_default/images/4096px/79.png b/sdxl_default/images/4096px/79.png
new file mode 100644
index 0000000000000000000000000000000000000000..1a43ba99f1697257759068f8abc7c3ce6d0dd13f
--- /dev/null
+++ b/sdxl_default/images/4096px/79.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:73b8f5bc5613272cbc0dcf88e28d0766c46c64c8d213adeeddc5ffae35ad4409
+size 20292087
diff --git a/sdxl_default/images/4096px/8.png b/sdxl_default/images/4096px/8.png
new file mode 100644
index 0000000000000000000000000000000000000000..a7d4342fc541fcb14517774d3ff08d2d3508307b
--- /dev/null
+++ b/sdxl_default/images/4096px/8.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59237c0b28de3b4e9d31b385de3a791188ed2e4296da8fa8373249c7dd4b9193
+size 20903046
diff --git a/sdxl_default/images/4096px/80.png b/sdxl_default/images/4096px/80.png
new file mode 100644
index 0000000000000000000000000000000000000000..551a37dbc7002910fc0dd52bdfbad614ee7aea3b
--- /dev/null
+++ b/sdxl_default/images/4096px/80.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:df9604a0ea8190b0e6c91a97972a4e35a727ad2ea1758a07dc61b49684fdeda9
+size 23283767
diff --git a/sdxl_default/images/4096px/81.png b/sdxl_default/images/4096px/81.png
new file mode 100644
index 0000000000000000000000000000000000000000..6a54f3e55c1ae9921705bd03f54b9d9eb7991c0e
--- /dev/null
+++ b/sdxl_default/images/4096px/81.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c54f2189efa55b14170f73224e701ec1906f4305f81dd951e8750a63ae64937
+size 17197225
diff --git a/sdxl_default/images/4096px/82.png b/sdxl_default/images/4096px/82.png
new file mode 100644
index 0000000000000000000000000000000000000000..8a5fe78878c3ffa8b12bcce2f04426cd56628f15
--- /dev/null
+++ b/sdxl_default/images/4096px/82.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5bfc37ac2d75c3d15e4900aba3c90b958d61e0d137feae6b0a4e8df9ddc185b5
+size 27519024
diff --git a/sdxl_default/images/4096px/83.png b/sdxl_default/images/4096px/83.png
new file mode 100644
index 0000000000000000000000000000000000000000..c914d6c626abdc5df621e398b990c2f2e0e14930
--- /dev/null
+++ b/sdxl_default/images/4096px/83.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8a90da5e4e3f2d3623a014a791e4b537f600d714470fef1e3c97d4e64234dcec
+size 22197201
diff --git a/sdxl_default/images/4096px/84.png b/sdxl_default/images/4096px/84.png
new file mode 100644
index 0000000000000000000000000000000000000000..79e0bf35d009b4079b1ab91cfffc20ae03dc1d9b
--- /dev/null
+++ b/sdxl_default/images/4096px/84.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7cbbdb6dab65020b78102a19e469c697494cf91bdf5030292e42e1b1c1dac59e
+size 21309763
diff --git a/sdxl_default/images/4096px/85.png b/sdxl_default/images/4096px/85.png
new file mode 100644
index 0000000000000000000000000000000000000000..09160b7c88582304143d4f39658bd2823602805f
--- /dev/null
+++ b/sdxl_default/images/4096px/85.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f6a41225f3675af9628885231685a8b9328ca38064ec4a194441b6bf906a97d
+size 17973122
diff --git a/sdxl_default/images/4096px/86.png b/sdxl_default/images/4096px/86.png
new file mode 100644
index 0000000000000000000000000000000000000000..5cf0b7bdf3b29a7fe6a72b47b81ad1f5303b29a7
--- /dev/null
+++ b/sdxl_default/images/4096px/86.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f973adb674a6ba6ad0cb162c40bb49025cced4c814021aef46b9193532145472
+size 21405459
diff --git a/sdxl_default/images/4096px/87.png b/sdxl_default/images/4096px/87.png
new file mode 100644
index 0000000000000000000000000000000000000000..72dca7d5c09bc8672145d8ed8d3b7c996ddc12a4
--- /dev/null
+++ b/sdxl_default/images/4096px/87.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:21d311cf1bc3a4571b701b31df58d2dc63325a13f1371f5648f42556b1575061
+size 27412101
diff --git a/sdxl_default/images/4096px/88.png b/sdxl_default/images/4096px/88.png
new file mode 100644
index 0000000000000000000000000000000000000000..0c6e6a9f8304314ff9cc3b64d8b0e6081a2fd84a
--- /dev/null
+++ b/sdxl_default/images/4096px/88.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:143b8b8eab5c2d907e84a20ddfb34ad1d28d17b7ac6f9d2be5d829127edffc6b
+size 29453463
diff --git a/sdxl_default/images/4096px/89.png b/sdxl_default/images/4096px/89.png
new file mode 100644
index 0000000000000000000000000000000000000000..b00e164a6e00197a983c5253e4954f32e7c6dc3d
--- /dev/null
+++ b/sdxl_default/images/4096px/89.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c4e9446e904f7bcd2e706761d51ca8fa5ecc991b42ef2f56c4ba11e0e7b83bf
+size 16387088
diff --git a/sdxl_default/images/4096px/9.png b/sdxl_default/images/4096px/9.png
new file mode 100644
index 0000000000000000000000000000000000000000..6f2037a06ad146b030463b0a05919cbb311d6567
--- /dev/null
+++ b/sdxl_default/images/4096px/9.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3954eb5dc851c98fb273e84882d042fb0c39e47b2ac4803cf4eece287b511731
+size 17461164
diff --git a/sdxl_default/images/4096px/90.png b/sdxl_default/images/4096px/90.png
new file mode 100644
index 0000000000000000000000000000000000000000..be9f9f0f3fbd041887ac5bf8197ac59a4e7deea1
--- /dev/null
+++ b/sdxl_default/images/4096px/90.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9c8b6fce2768b6094df81bfdba9df583007351c64c8e044a1ca4a2d05c1e5f60
+size 22149584
diff --git a/sdxl_default/images/4096px/91.png b/sdxl_default/images/4096px/91.png
new file mode 100644
index 0000000000000000000000000000000000000000..ae63fb0e3b6fe90e97885e8c5f1adf5e54d30234
--- /dev/null
+++ b/sdxl_default/images/4096px/91.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2f46dee746404f94a0116b5917343d142844830fc650a2574ff4b6db12770887
+size 25113091
diff --git a/sdxl_default/images/4096px/92.png b/sdxl_default/images/4096px/92.png
new file mode 100644
index 0000000000000000000000000000000000000000..598922c4eaf58583af233dde6352e25dc86614a0
--- /dev/null
+++ b/sdxl_default/images/4096px/92.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a4c8ce89c231bfe28c54b5b306d2f0a4c6ceb1abad879486314876ffcc238862
+size 24750977
diff --git a/sdxl_default/images/4096px/93.png b/sdxl_default/images/4096px/93.png
new file mode 100644
index 0000000000000000000000000000000000000000..87182c082d9b91434bf7d4e1aa6f28b4e151ecdb
--- /dev/null
+++ b/sdxl_default/images/4096px/93.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:be60c595d267f6089dac57f75e193348fc3555bebe20425e88b66c336fe98a3f
+size 25713457
diff --git a/sdxl_default/images/4096px/94.png b/sdxl_default/images/4096px/94.png
new file mode 100644
index 0000000000000000000000000000000000000000..4f81756ae867bb698fe574bee03ab49328937cc5
--- /dev/null
+++ b/sdxl_default/images/4096px/94.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e1e7331082cb9e5135ea5406ea74dab4b543fdd92d89b70be80fa82f3758a97f
+size 30382610
diff --git a/sdxl_default/images/4096px/95.png b/sdxl_default/images/4096px/95.png
new file mode 100644
index 0000000000000000000000000000000000000000..e830f10df94f98e699a1de4e74f78be2f6d510a8
--- /dev/null
+++ b/sdxl_default/images/4096px/95.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2cd6b5349aaf513942261874a521f23aff0bd7e77b5ff30cfe3cf8ddd0feefec
+size 18129369
diff --git a/sdxl_default/images/4096px/96.png b/sdxl_default/images/4096px/96.png
new file mode 100644
index 0000000000000000000000000000000000000000..dd836fe067b84889c6f22695bed10745a1076d21
--- /dev/null
+++ b/sdxl_default/images/4096px/96.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b0a7178cbd2642d368629f21d1c4642ac37fbd2dc447eb2f18cca9be8fe75a15
+size 18900499
diff --git a/sdxl_default/images/4096px/97.png b/sdxl_default/images/4096px/97.png
new file mode 100644
index 0000000000000000000000000000000000000000..076bcda615889d249415ede68e27d15f92f49c6c
--- /dev/null
+++ b/sdxl_default/images/4096px/97.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f7cdeb4392514f7606c82aad689176a227ec51fe9bb81eb77376668190506aec
+size 16653319
diff --git a/sdxl_default/images/4096px/98.png b/sdxl_default/images/4096px/98.png
new file mode 100644
index 0000000000000000000000000000000000000000..a82b9259275eb43a6a9b4110e4c2ab2b7901ceb7
--- /dev/null
+++ b/sdxl_default/images/4096px/98.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ffa7d63e9cd19a26736ada763b8647440ae2ab9bc7a36c10185f39477d686508
+size 25228785
diff --git a/sdxl_default/images/4096px/99.png b/sdxl_default/images/4096px/99.png
new file mode 100644
index 0000000000000000000000000000000000000000..1cb892780873c447a30b9de3b26582db5a26f49a
--- /dev/null
+++ b/sdxl_default/images/4096px/99.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:043450e2b24719d870c3437623579aa84738064838943f6316fe33e11b1194ae
+size 14833789
diff --git a/sdxl_default/images/512px/0.png b/sdxl_default/images/512px/0.png
new file mode 100644
index 0000000000000000000000000000000000000000..af458b0c8f9d33a7a2c1afc73f00ff487041a213
--- /dev/null
+++ b/sdxl_default/images/512px/0.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c281182d5b6b15fd6dd6a538321837584c991478ef8707f39f553ab04f97bb17
+size 341818
diff --git a/sdxl_default/images/512px/1.png b/sdxl_default/images/512px/1.png
new file mode 100644
index 0000000000000000000000000000000000000000..90a1888a775ccaca8db5c27b07fdc5b5730b0ad4
--- /dev/null
+++ b/sdxl_default/images/512px/1.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2e9ee406df7d991211de818f8a5d857e2062c6fe867723e6fc00fcf4122fde53
+size 359536
diff --git a/sdxl_default/images/512px/10.png b/sdxl_default/images/512px/10.png
new file mode 100644
index 0000000000000000000000000000000000000000..7ccab6b423a50cc2e96093ed149f455eac1b7751
--- /dev/null
+++ b/sdxl_default/images/512px/10.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bbebfe2563d43772bce9aff720a8f5b817e3a90f0094adc4dd95445107edd23f
+size 318654
diff --git a/sdxl_default/images/512px/11.png b/sdxl_default/images/512px/11.png
new file mode 100644
index 0000000000000000000000000000000000000000..a2701a81e9b9fa268049decc7a2981eeffba253c
--- /dev/null
+++ b/sdxl_default/images/512px/11.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31e19775a84997c64962681d8076a15810f2ea2db47983621c83ca8866f3706e
+size 366423
diff --git a/sdxl_default/images/512px/12.png b/sdxl_default/images/512px/12.png
new file mode 100644
index 0000000000000000000000000000000000000000..43d2b405afece325299a191cbce71716c89d01f5
--- /dev/null
+++ b/sdxl_default/images/512px/12.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:16eb195e5b24d53a72f78537a50e4abde47dff42b14d4d279be5a4ae66f76045
+size 374885
diff --git a/sdxl_default/images/512px/13.png b/sdxl_default/images/512px/13.png
new file mode 100644
index 0000000000000000000000000000000000000000..40d553ec5533e7dbdba091a6bf9df49226ab5fba
--- /dev/null
+++ b/sdxl_default/images/512px/13.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ac27d0d6cfee8e52d3d9236842710c33a3300afdc7adc7ecc24a06b5c3eed165
+size 433388
diff --git a/sdxl_default/images/512px/14.png b/sdxl_default/images/512px/14.png
new file mode 100644
index 0000000000000000000000000000000000000000..adde0873831872e97c40f902dc400db8a5129e7b
--- /dev/null
+++ b/sdxl_default/images/512px/14.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a15dca0a1024c33264698bd818cd4cabc8077721d5ca49a64182c111d4a21156
+size 218396
diff --git a/sdxl_default/images/512px/15.png b/sdxl_default/images/512px/15.png
new file mode 100644
index 0000000000000000000000000000000000000000..dc599509771101a2e14b5780d879c8684bf10999
--- /dev/null
+++ b/sdxl_default/images/512px/15.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:85b0c285f749475b78bb9189bf7a47121e383200ac92f63f5ef078f420df09fe
+size 337751
diff --git a/sdxl_default/images/512px/16.png b/sdxl_default/images/512px/16.png
new file mode 100644
index 0000000000000000000000000000000000000000..fddfe1f20924fc863fe247d5a5e6f54b5e26296c
--- /dev/null
+++ b/sdxl_default/images/512px/16.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:ecd5463655054f3b2d5335b10c39adb8d8dcb5370bfcf4a713288bd99f06dc9f
+size 304484
diff --git a/sdxl_default/images/512px/17.png b/sdxl_default/images/512px/17.png
new file mode 100644
index 0000000000000000000000000000000000000000..86f2541a80bb128a79bf1003ac5e7ca28ef418cc
--- /dev/null
+++ b/sdxl_default/images/512px/17.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e6d3c1c9488379df5b3ad7a53bae8a3e2d66353e87d9aac20353239d32174fc8
+size 391391
diff --git a/sdxl_default/images/512px/18.png b/sdxl_default/images/512px/18.png
new file mode 100644
index 0000000000000000000000000000000000000000..8be8188b6cfce5c006c8b484729ab977fc4c8007
--- /dev/null
+++ b/sdxl_default/images/512px/18.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:1209727d7953dad504cf36df95b712f3898af34e35da25b658480befa441820c
+size 345982
diff --git a/sdxl_default/images/512px/19.png b/sdxl_default/images/512px/19.png
new file mode 100644
index 0000000000000000000000000000000000000000..7c341d7a9e0605baffa499a1e8612e3f3e7872a8
--- /dev/null
+++ b/sdxl_default/images/512px/19.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:efd70d37a9ba335514039b4681a32b204870b724a4180ec195ccda7f8e576d5c
+size 415689
diff --git a/sdxl_default/images/512px/2.png b/sdxl_default/images/512px/2.png
new file mode 100644
index 0000000000000000000000000000000000000000..77e605f33e7ca74ab64693a28f502d0d545c95d9
--- /dev/null
+++ b/sdxl_default/images/512px/2.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:31d2a352b3bdaa879502fee3a5fd04dabe26d0852bba9e076684c768fcb4d9a5
+size 324335
diff --git a/sdxl_default/images/512px/21.png b/sdxl_default/images/512px/21.png
new file mode 100644
index 0000000000000000000000000000000000000000..b9bf70c634484e6e09a61c9c16722887850d227e
--- /dev/null
+++ b/sdxl_default/images/512px/21.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b92bf637a319738afa96e790ac9e1fe31b95893890399dba346921ec8b7783bc
+size 324117
diff --git a/sdxl_default/images/512px/22.png b/sdxl_default/images/512px/22.png
new file mode 100644
index 0000000000000000000000000000000000000000..5bad5cc40c2cc3a6b771afbad5b0dacb7f781152
--- /dev/null
+++ b/sdxl_default/images/512px/22.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:77af8191d8900f811a752586902f0feb2d807a539881acc172ded69876995679
+size 326044
diff --git a/sdxl_default/images/512px/24.png b/sdxl_default/images/512px/24.png
new file mode 100644
index 0000000000000000000000000000000000000000..1406f9dab08dd96d3e1f6c012d0af63ec8d2a4a4
--- /dev/null
+++ b/sdxl_default/images/512px/24.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:77cb07d2cb6a524f3721e744acbbfde90e132910995045dddd5545e0bf7aca42
+size 469879
diff --git a/sdxl_default/images/512px/25.png b/sdxl_default/images/512px/25.png
new file mode 100644
index 0000000000000000000000000000000000000000..1d65f01901c031248fed277371e9f5097e28a543
--- /dev/null
+++ b/sdxl_default/images/512px/25.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:789963098e4d4d851669d365345e6c455ec43739bba87edf04f09eeeeb03ce91
+size 323000
diff --git a/sdxl_default/images/512px/26.png b/sdxl_default/images/512px/26.png
new file mode 100644
index 0000000000000000000000000000000000000000..c519339dae7be5475c7ec2fdd2cbe7777996011e
--- /dev/null
+++ b/sdxl_default/images/512px/26.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:740fb7093081f06ced1e375d4264d2b8b34391c2b4cc716c7f028127c1ed225d
+size 390660
diff --git a/sdxl_default/images/512px/27.png b/sdxl_default/images/512px/27.png
new file mode 100644
index 0000000000000000000000000000000000000000..cde1035b10006660115e2035b72ad92824953ac1
--- /dev/null
+++ b/sdxl_default/images/512px/27.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:90738aac19d565f751ac70d822da6c08c17b6230aa5d2c5749515c7939e68a22
+size 347653
diff --git a/sdxl_default/images/512px/28.png b/sdxl_default/images/512px/28.png
new file mode 100644
index 0000000000000000000000000000000000000000..800e7fc28d9cdc53d08e7c7714f348849a2dde34
--- /dev/null
+++ b/sdxl_default/images/512px/28.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0423188ca4e705e58f43fe2b29c06485eb483b475d081c0094d7219c967c2040
+size 330308
diff --git a/sdxl_default/images/512px/29.png b/sdxl_default/images/512px/29.png
new file mode 100644
index 0000000000000000000000000000000000000000..5567aed87abb329c2d40b87729d96b4c3482f507
--- /dev/null
+++ b/sdxl_default/images/512px/29.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3ca5ca33345f27e062af35d9b5b065f914cddb364494fc6e3b3e7750efa18104
+size 374752
diff --git a/sdxl_default/images/512px/3.png b/sdxl_default/images/512px/3.png
new file mode 100644
index 0000000000000000000000000000000000000000..a918781993c59374295ef364eee6703dea65c563
--- /dev/null
+++ b/sdxl_default/images/512px/3.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b06fe1ca1e32c18d97f37eb7830dc4a0d497ccd80199fc012f0244e17af6eda0
+size 327004
diff --git a/sdxl_default/images/512px/30.png b/sdxl_default/images/512px/30.png
new file mode 100644
index 0000000000000000000000000000000000000000..024083716ea0447587a4609805fb0bc55f99b15c
--- /dev/null
+++ b/sdxl_default/images/512px/30.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b543e1fa9d71f3836015a53984f679db1bb850814f9d11ab28b72cb82a487295
+size 351881
diff --git a/sdxl_default/images/512px/31.png b/sdxl_default/images/512px/31.png
new file mode 100644
index 0000000000000000000000000000000000000000..0f1aba53626ba95bff9b644e1566245e9b268f6d
--- /dev/null
+++ b/sdxl_default/images/512px/31.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:0ea45f3085d1a9834afb39f0906af8e3d1258dedc5b08862679b729d899c2860
+size 390431
diff --git a/sdxl_default/images/512px/32.png b/sdxl_default/images/512px/32.png
new file mode 100644
index 0000000000000000000000000000000000000000..48c10f65a5cda3341ecdf2f238c57edc9156a037
--- /dev/null
+++ b/sdxl_default/images/512px/32.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2003da30155c9b339325f8abbab0928a12fd6485d2d7443a3091c95c01c1c7bd
+size 472434
diff --git a/sdxl_default/images/512px/33.png b/sdxl_default/images/512px/33.png
new file mode 100644
index 0000000000000000000000000000000000000000..4780d31d378f736b5dcf5c904f3d47eab4eed0f8
--- /dev/null
+++ b/sdxl_default/images/512px/33.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:aff9171834cff60cf437fe2fd8c29550024caa0e7858195336f4f73e48ec06bd
+size 389117
diff --git a/sdxl_default/images/512px/34.png b/sdxl_default/images/512px/34.png
new file mode 100644
index 0000000000000000000000000000000000000000..5ab5173c0f413ff38bdf1756908277aa58d3e7fa
--- /dev/null
+++ b/sdxl_default/images/512px/34.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7aa4c11240322cb919b61651b33ac7de550a04cf93153fd86e1e936bb8c4e3f9
+size 418129
diff --git a/sdxl_default/images/512px/35.png b/sdxl_default/images/512px/35.png
new file mode 100644
index 0000000000000000000000000000000000000000..f87bfd26b26e1354d2c2672caeaabdfd1445cc0d
--- /dev/null
+++ b/sdxl_default/images/512px/35.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4d0a0325763fed28e238e06b1ed51197c32674688b2694a347048f04d29286e2
+size 368781
diff --git a/sdxl_default/images/512px/36.png b/sdxl_default/images/512px/36.png
new file mode 100644
index 0000000000000000000000000000000000000000..2aaeade779ef8ef3c193cc5553607ecadf4e0618
--- /dev/null
+++ b/sdxl_default/images/512px/36.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:55ec6781b9a1e191dba53bad606a0878b30f3879c8383beab668cda75cc45152
+size 354484
diff --git a/sdxl_default/images/512px/37.png b/sdxl_default/images/512px/37.png
new file mode 100644
index 0000000000000000000000000000000000000000..ae5f9567ddc1ab200a2262903415c86e9fc33643
--- /dev/null
+++ b/sdxl_default/images/512px/37.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:959cf78ff5a8e6fc36c3e200f066b9f26c0d6a29defad9ea2c5a9a7b8b75950a
+size 277149
diff --git a/sdxl_default/images/512px/38.png b/sdxl_default/images/512px/38.png
new file mode 100644
index 0000000000000000000000000000000000000000..9c9e4f2c409c1a15313d852180de4f5f7bdc7739
--- /dev/null
+++ b/sdxl_default/images/512px/38.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3c5da9ac9718cf64669e31358416297618bbeec4ab83f9ce1ddd0c8facb84017
+size 313631
diff --git a/sdxl_default/images/512px/39.png b/sdxl_default/images/512px/39.png
new file mode 100644
index 0000000000000000000000000000000000000000..b7e04e43664de7fb0b1263ecf18a1f4d8d2c5d4d
--- /dev/null
+++ b/sdxl_default/images/512px/39.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a1ce06690721f2e2aecec3d84f500aa69befe85524708b3f7ac5057182159252
+size 321132
diff --git a/sdxl_default/images/512px/4.png b/sdxl_default/images/512px/4.png
new file mode 100644
index 0000000000000000000000000000000000000000..692a6c88648b72e9aea9dcdc954a25405dcd3b14
--- /dev/null
+++ b/sdxl_default/images/512px/4.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4983a5544a0a83ba0df66c470308f4fa4678623416f0e9ca1eff674b421a1dcd
+size 332424
diff --git a/sdxl_default/images/512px/40.png b/sdxl_default/images/512px/40.png
new file mode 100644
index 0000000000000000000000000000000000000000..2d1bc7969349bd976859d97ed67672bfcb5da161
--- /dev/null
+++ b/sdxl_default/images/512px/40.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:51941d8f4213a00934959640139e448cf7b48a6a6d6d4f8a2c9c94d765f1eed9
+size 321776
diff --git a/sdxl_default/images/512px/41.png b/sdxl_default/images/512px/41.png
new file mode 100644
index 0000000000000000000000000000000000000000..06ace13ee9c4e4944de966e12edb4092fd9b3182
--- /dev/null
+++ b/sdxl_default/images/512px/41.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:081d725079632899501b6273239b3f5734455c7d9a15d445b6838301a94e915b
+size 323798
diff --git a/sdxl_default/images/512px/42.png b/sdxl_default/images/512px/42.png
new file mode 100644
index 0000000000000000000000000000000000000000..0386e12d429f58211c496b12a04758e445f27acc
--- /dev/null
+++ b/sdxl_default/images/512px/42.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:60c2bb63b898552f2548267777f0300d206a3d64f2cea0bcf5756b68fcbcfe46
+size 372370
diff --git a/sdxl_default/images/512px/43.png b/sdxl_default/images/512px/43.png
new file mode 100644
index 0000000000000000000000000000000000000000..8ee712f3f4ef88ec2480f4e74bf0c02539d463be
--- /dev/null
+++ b/sdxl_default/images/512px/43.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:59aa2441f6a8aeddc2661b203d2e32ded4dc0caafe62d6068c11a0dc0449df5b
+size 360907
diff --git a/sdxl_default/images/512px/45.png b/sdxl_default/images/512px/45.png
new file mode 100644
index 0000000000000000000000000000000000000000..ab53c12701d33e371b78b81c4e5117a1eae37a2e
--- /dev/null
+++ b/sdxl_default/images/512px/45.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3b761a78ab5bdfdceed0e418f7ce7b1a17386ff397d4a8e63dd4330e12bfccac
+size 316273
diff --git a/sdxl_default/images/512px/46.png b/sdxl_default/images/512px/46.png
new file mode 100644
index 0000000000000000000000000000000000000000..d8f81eefcfc94bd92678a782e8e2832096928485
--- /dev/null
+++ b/sdxl_default/images/512px/46.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:011a0c5d0b12d819c1ef2164991217ff43778c9bd53d70a2917871ee23039fa4
+size 379046
diff --git a/sdxl_default/images/512px/47.png b/sdxl_default/images/512px/47.png
new file mode 100644
index 0000000000000000000000000000000000000000..f3d1a8f75b24d25e1ba6c84e53bea7c2e1c615e6
--- /dev/null
+++ b/sdxl_default/images/512px/47.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a56f53251e3413ac71084be091a73c7502ab38b90abd676d6657f0cd1dcee97
+size 407676
diff --git a/sdxl_default/images/512px/48.png b/sdxl_default/images/512px/48.png
new file mode 100644
index 0000000000000000000000000000000000000000..3a89891506f7c9d7bdc42de75312638ec7e4c261
--- /dev/null
+++ b/sdxl_default/images/512px/48.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:dbfa274c4bf315aac226f841b761a12b4f49bc73a289148510f537d5d2acd233
+size 283643
diff --git a/sdxl_default/images/512px/49.png b/sdxl_default/images/512px/49.png
new file mode 100644
index 0000000000000000000000000000000000000000..6a7251cd2349c8bd69fc6bb048e959f5e127a446
--- /dev/null
+++ b/sdxl_default/images/512px/49.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8e6313843f989e3f363e5d8e0c72156fabf13cc6075294fa7d36781a282b5f66
+size 328727
diff --git a/sdxl_default/images/512px/5.png b/sdxl_default/images/512px/5.png
new file mode 100644
index 0000000000000000000000000000000000000000..e98e0e1f2aa91743ce24a8670aa0f915d8e46541
--- /dev/null
+++ b/sdxl_default/images/512px/5.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7d3b1aa8e56c7457e9bb83d2be14c5791fea957e861d15602a772ee997ae5304
+size 378332
diff --git a/sdxl_default/images/512px/50.png b/sdxl_default/images/512px/50.png
new file mode 100644
index 0000000000000000000000000000000000000000..8f03b5d48d42c1667d2499c0b33fe1e89634b649
--- /dev/null
+++ b/sdxl_default/images/512px/50.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:78812bbf506d67c1a6007cafdbb57385ddf164c627d6a9ac8f9ce5a758555fb4
+size 308978
diff --git a/sdxl_default/images/512px/51.png b/sdxl_default/images/512px/51.png
new file mode 100644
index 0000000000000000000000000000000000000000..a6aac4d3e866ce93366ffbe230eb31ba32c14fbc
--- /dev/null
+++ b/sdxl_default/images/512px/51.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e07cc758daa08d610ac3966c088821f75c0bd59d5c781e03aea60cdf908a675a
+size 288580
diff --git a/sdxl_default/images/512px/52.png b/sdxl_default/images/512px/52.png
new file mode 100644
index 0000000000000000000000000000000000000000..f8de0ed5d0479fec8115952aeb86ad7f1ff406f4
--- /dev/null
+++ b/sdxl_default/images/512px/52.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b78d1716abef9141f17a6b628586e61590804bd3487b895a5a79413a35274342
+size 298454
diff --git a/sdxl_default/images/512px/53.png b/sdxl_default/images/512px/53.png
new file mode 100644
index 0000000000000000000000000000000000000000..7dcc39e47af7fe0fc6b82e33bf761a2afec74c63
--- /dev/null
+++ b/sdxl_default/images/512px/53.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:87825f87aa24104ea7bdceb70a27f3b1794433605b6fe7696594857a55830642
+size 296046
diff --git a/sdxl_default/images/512px/54.png b/sdxl_default/images/512px/54.png
new file mode 100644
index 0000000000000000000000000000000000000000..3d1dbe067c71cbb479a014acc485ec5e6d31c46e
--- /dev/null
+++ b/sdxl_default/images/512px/54.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7cef5bdf22f08950c57629c0df4be71bf341bafc0a7fbaf503d78ac5fc0d140a
+size 292586
diff --git a/sdxl_default/images/512px/55.png b/sdxl_default/images/512px/55.png
new file mode 100644
index 0000000000000000000000000000000000000000..57ad3fff15e0525f860ccff761a23864cefa14e4
--- /dev/null
+++ b/sdxl_default/images/512px/55.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b77639bb6147dccc7925ed0a5315091a61e66e0f5a65f35f33dc34f34449780b
+size 373358
diff --git a/sdxl_default/images/512px/56.png b/sdxl_default/images/512px/56.png
new file mode 100644
index 0000000000000000000000000000000000000000..5fb042a7622f5281e09d0822651329097499b9b4
--- /dev/null
+++ b/sdxl_default/images/512px/56.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:11e6f39d477b73c10e38da1620dac0d6b5dd41507bd9d07eadbe165640b8f0c2
+size 350669
diff --git a/sdxl_default/images/512px/57.png b/sdxl_default/images/512px/57.png
new file mode 100644
index 0000000000000000000000000000000000000000..8de81b793984f903fe8449973e2a6c2a55975a16
--- /dev/null
+++ b/sdxl_default/images/512px/57.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:842236899ad7e687f78dae3fe752eb748db40e22109e71e11e8b76c02ac679ab
+size 353176
diff --git a/sdxl_default/images/512px/58.png b/sdxl_default/images/512px/58.png
new file mode 100644
index 0000000000000000000000000000000000000000..4284ae90103890f9c3c58464eb4731a1e3a841d1
--- /dev/null
+++ b/sdxl_default/images/512px/58.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e67464cb5d9807b9603556b47b78325d1c13d6db9a194a02d6d24747b2d553fd
+size 352177
diff --git a/sdxl_default/images/512px/59.png b/sdxl_default/images/512px/59.png
new file mode 100644
index 0000000000000000000000000000000000000000..41db8f75cb103ef034fd0da303fc1e7781cd13ee
--- /dev/null
+++ b/sdxl_default/images/512px/59.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d61b29374842e7d80e635a599c2589c63698d14fb52ce9665862a69d890cc81f
+size 362276
diff --git a/sdxl_default/images/512px/6.png b/sdxl_default/images/512px/6.png
new file mode 100644
index 0000000000000000000000000000000000000000..d60d553601ac830075130ca1585a7156e8874293
--- /dev/null
+++ b/sdxl_default/images/512px/6.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d848ed324fae866db519b08309a40d20e562a68ea59b53b5042e67c855e0d008
+size 392950
diff --git a/sdxl_default/images/512px/61.png b/sdxl_default/images/512px/61.png
new file mode 100644
index 0000000000000000000000000000000000000000..fb76763c0cbe200dfaa70f429da2d653d38db3fe
--- /dev/null
+++ b/sdxl_default/images/512px/61.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9b52c0e8a7812b45016921b28a29578426c6490ee98769237ff4f0c8cbfb7db8
+size 342270
diff --git a/sdxl_default/images/512px/62.png b/sdxl_default/images/512px/62.png
new file mode 100644
index 0000000000000000000000000000000000000000..cda4b5705488450ffae15840c9721cb89066a5d9
--- /dev/null
+++ b/sdxl_default/images/512px/62.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:96d4b211cfcc899e31b7d13c8a79beaa391e80dfc293ee1e23386d255c1d9278
+size 348369
diff --git a/sdxl_default/images/512px/63.png b/sdxl_default/images/512px/63.png
new file mode 100644
index 0000000000000000000000000000000000000000..60f8e1d195567a6a67e7493e7276d38cb14580cf
--- /dev/null
+++ b/sdxl_default/images/512px/63.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:9a577a598c0ece119b61f0248ece1efc14d54a48202c1769ad1bf3091cd6cd03
+size 277717
diff --git a/sdxl_default/images/512px/64.png b/sdxl_default/images/512px/64.png
new file mode 100644
index 0000000000000000000000000000000000000000..363e4dff35a0461e248d5e701c62b41143fd5cad
--- /dev/null
+++ b/sdxl_default/images/512px/64.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2ad6cf514226a1d16adbbc2c9cb96dcfe55c08a501aaebb7af559f54cf1a3c3e
+size 337321
diff --git a/sdxl_default/images/512px/65.png b/sdxl_default/images/512px/65.png
new file mode 100644
index 0000000000000000000000000000000000000000..99182a3d100c7af3d939c1fb8277ad1d6a54c4c8
--- /dev/null
+++ b/sdxl_default/images/512px/65.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fd24f25a12060a55533c4a3a5bc80b414bfe1b086115f201cf6dae5bca743a43
+size 426125
diff --git a/sdxl_default/images/512px/66.png b/sdxl_default/images/512px/66.png
new file mode 100644
index 0000000000000000000000000000000000000000..0d8c409daea1e59f73776daf15804baae97a4e73
--- /dev/null
+++ b/sdxl_default/images/512px/66.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:bc288c6e521a1e6155f8e2058ddc38993e648db4a39508fe4c1396c2ac4cf25f
+size 322283
diff --git a/sdxl_default/images/512px/67.png b/sdxl_default/images/512px/67.png
new file mode 100644
index 0000000000000000000000000000000000000000..e3aef7aa493b3a887a0ab095919662be7813f5a1
--- /dev/null
+++ b/sdxl_default/images/512px/67.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:409718c4076a278faff19b8f4ef02e9b954952ea02102fd58f9687a6bb5b01f6
+size 461784
diff --git a/sdxl_default/images/512px/68.png b/sdxl_default/images/512px/68.png
new file mode 100644
index 0000000000000000000000000000000000000000..83ee3ae193a13275bf4e503c5ab17c574bb1843f
--- /dev/null
+++ b/sdxl_default/images/512px/68.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:622b5eb145975e8e7f0db025b87b557060d17c99e38b9f2eed4e70c35c4ca757
+size 302350
diff --git a/sdxl_default/images/512px/69.png b/sdxl_default/images/512px/69.png
new file mode 100644
index 0000000000000000000000000000000000000000..8e2190de6cfd6d90b04e5cf89fd2fd550dd05b34
--- /dev/null
+++ b/sdxl_default/images/512px/69.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:18ca87fcbdda767475b255a72d10c7c80a9d5f384c4c8b2227f9364c8c934cdc
+size 337071
diff --git a/sdxl_default/images/512px/7.png b/sdxl_default/images/512px/7.png
new file mode 100644
index 0000000000000000000000000000000000000000..9f85a2c2e51eea9d0a6d2deba80f1dec40d23e53
--- /dev/null
+++ b/sdxl_default/images/512px/7.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5cb9a958de9c547f20a43216ffb6fb9fc4b1e43bc7ec83f6d87a2d983c26ccb0
+size 327675
diff --git a/sdxl_default/images/512px/70.png b/sdxl_default/images/512px/70.png
new file mode 100644
index 0000000000000000000000000000000000000000..6515a16a1157f8328be4fb5fa542481f1b68dd89
--- /dev/null
+++ b/sdxl_default/images/512px/70.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe15a1a61021cc712cad3c25c509f496b81265049527e3f26a6bad90bbeed06f
+size 324744
diff --git a/sdxl_default/images/512px/71.png b/sdxl_default/images/512px/71.png
new file mode 100644
index 0000000000000000000000000000000000000000..8599d9bcad001503fdd070fdf24d02d46c591d6b
--- /dev/null
+++ b/sdxl_default/images/512px/71.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:d82271fcda113ba3cb6e36419ab567d1d07e39610baf6b5dd706334b3c96196f
+size 369116
diff --git a/sdxl_default/images/512px/72.png b/sdxl_default/images/512px/72.png
new file mode 100644
index 0000000000000000000000000000000000000000..2ad329cbc140ff5817ff126e986aab39a6a5cc77
--- /dev/null
+++ b/sdxl_default/images/512px/72.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8cf1250efdd0ba91d68627655c33fbb6c55d34d12d0e39c522cfda3ca9319c1c
+size 297017
diff --git a/sdxl_default/images/512px/73.png b/sdxl_default/images/512px/73.png
new file mode 100644
index 0000000000000000000000000000000000000000..cce3c5fcfe37009176d5cb6e3ef48627a788beb7
--- /dev/null
+++ b/sdxl_default/images/512px/73.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:da1d3da8f5cf3e2ed9a01684eb6eec4065bb8372a90dfd996fd19c35debaf71e
+size 461490
diff --git a/sdxl_default/images/512px/74.png b/sdxl_default/images/512px/74.png
new file mode 100644
index 0000000000000000000000000000000000000000..5b99e2a4b32363cba981d2e2a7ad5099e40fa45d
--- /dev/null
+++ b/sdxl_default/images/512px/74.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:e239f531317a6139e759c7ccead6559a7d3ce267a585e3eefd3a796756843814
+size 346971
diff --git a/sdxl_default/images/512px/75.png b/sdxl_default/images/512px/75.png
new file mode 100644
index 0000000000000000000000000000000000000000..e59deb123e6c63fc9c0eb9c9979bbbc7964cb3d7
--- /dev/null
+++ b/sdxl_default/images/512px/75.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:72b53a5e4a2179d0f2df3ee6c428f845d36ea72a1991e5f82c5004533ccb5dc0
+size 493732
diff --git a/sdxl_default/images/512px/76.png b/sdxl_default/images/512px/76.png
new file mode 100644
index 0000000000000000000000000000000000000000..a88aeef4d776f49d80eb4cb82ac0a7b719bf67df
--- /dev/null
+++ b/sdxl_default/images/512px/76.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:98d9ae8b62638ab83b54e1e16c626a44ddf5f93aa09a8753e24cbd2daee0117b
+size 323081
diff --git a/sdxl_default/images/512px/77.png b/sdxl_default/images/512px/77.png
new file mode 100644
index 0000000000000000000000000000000000000000..f7f71d10e1d4611db44b07a7d93aa8f14a08e4c7
--- /dev/null
+++ b/sdxl_default/images/512px/77.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a99cf59cac0af00067c45af06188153e46fef9d35e0bded8a31e065d8d7538d7
+size 483505
diff --git a/sdxl_default/images/512px/78.png b/sdxl_default/images/512px/78.png
new file mode 100644
index 0000000000000000000000000000000000000000..074e9982f860621a8e92bb704a76c1c9a5f661d9
--- /dev/null
+++ b/sdxl_default/images/512px/78.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:62900079723c909ad9c1a2d2b982c1e5d7d250e9d96a69197e7f1822e7c5df8b
+size 392592
diff --git a/sdxl_default/images/512px/79.png b/sdxl_default/images/512px/79.png
new file mode 100644
index 0000000000000000000000000000000000000000..085f9a6b0d60fbed558c26331685d12ffcb2d503
--- /dev/null
+++ b/sdxl_default/images/512px/79.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:fe8d6dea428aa9e31e80fc8d432c509a04c73fce237d419519ca2efa1ebe8f5c
+size 378692
diff --git a/sdxl_default/images/512px/8.png b/sdxl_default/images/512px/8.png
new file mode 100644
index 0000000000000000000000000000000000000000..3727e583c3f138925ff6878f1382680739bd4305
--- /dev/null
+++ b/sdxl_default/images/512px/8.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5f3c98a3154eee8ea5f36cc3bf0b8fb449ff6ade648017a75cab5b0d4ce860d5
+size 393267
diff --git a/sdxl_default/images/512px/80.png b/sdxl_default/images/512px/80.png
new file mode 100644
index 0000000000000000000000000000000000000000..f1bfb3b3fb278dfd9141ddc37df073cec8198993
--- /dev/null
+++ b/sdxl_default/images/512px/80.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:42d7ef15bd8f804662702e7c1c01ff36fecb0f0d88f94506f8509e4098e6231b
+size 340910
diff --git a/sdxl_default/images/512px/81.png b/sdxl_default/images/512px/81.png
new file mode 100644
index 0000000000000000000000000000000000000000..14b93f372b355c2d82d0788369047c6ff634c7d0
--- /dev/null
+++ b/sdxl_default/images/512px/81.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:129500437a5097475c87c527ba2471784548e4dc1d920c7573f37cd56d4abb2a
+size 327822
diff --git a/sdxl_default/images/512px/82.png b/sdxl_default/images/512px/82.png
new file mode 100644
index 0000000000000000000000000000000000000000..211381d5b70841e9c5d8d50c191f6d6bef0f3eb2
--- /dev/null
+++ b/sdxl_default/images/512px/82.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:290aa335091bad3653d1054e07870897f0f10dd904cfd8beeeb3b36243fd880f
+size 422723
diff --git a/sdxl_default/images/512px/83.png b/sdxl_default/images/512px/83.png
new file mode 100644
index 0000000000000000000000000000000000000000..5fc8b5a65b94c429c13c1539b9ec0ba354d6b0fc
--- /dev/null
+++ b/sdxl_default/images/512px/83.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6a6d6999e6a8aa6e5da6602cbddab0e9303e401263152ec462588f346d5c23d9
+size 404361
diff --git a/sdxl_default/images/512px/84.png b/sdxl_default/images/512px/84.png
new file mode 100644
index 0000000000000000000000000000000000000000..8386709565855fb9e946dc1c891d3e213631987a
--- /dev/null
+++ b/sdxl_default/images/512px/84.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:4a70ae43f424aee49f506cf3c0cbbeeec324f4cd329bb3eb7cf04ce52471ff3c
+size 309496
diff --git a/sdxl_default/images/512px/85.png b/sdxl_default/images/512px/85.png
new file mode 100644
index 0000000000000000000000000000000000000000..aed2528d638894192ce95ed0a6d42044d7d971a5
--- /dev/null
+++ b/sdxl_default/images/512px/85.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:2c28fe280f4bdf3b58a8d24e6f27082cf096bcc6905a095d2fa1d5dab8a429f7
+size 272317
diff --git a/sdxl_default/images/512px/86.png b/sdxl_default/images/512px/86.png
new file mode 100644
index 0000000000000000000000000000000000000000..91b76adf0991e4b55aae204d9257e0229baeeaab
--- /dev/null
+++ b/sdxl_default/images/512px/86.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6beccc7d32f78a32e0577850a891fab96a1a905decffbad3b40cb125abe9a427
+size 388462
diff --git a/sdxl_default/images/512px/87.png b/sdxl_default/images/512px/87.png
new file mode 100644
index 0000000000000000000000000000000000000000..af4a9ee4690a0e0bf3a663a15e641d7d602b3695
--- /dev/null
+++ b/sdxl_default/images/512px/87.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c1a6568a1cccb57afbc1534917b547152c0d35c0af775b6fd00231a0fb6e123a
+size 420433
diff --git a/sdxl_default/images/512px/88.png b/sdxl_default/images/512px/88.png
new file mode 100644
index 0000000000000000000000000000000000000000..a7a25bcbb289c473eb7396f2fbd5fb12fe85525c
--- /dev/null
+++ b/sdxl_default/images/512px/88.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:6ab3b2d25f9ee8e5f5d61b740bff0ead1b87c752f34042b036307bf6f1a76c60
+size 361163
diff --git a/sdxl_default/images/512px/89.png b/sdxl_default/images/512px/89.png
new file mode 100644
index 0000000000000000000000000000000000000000..699486d95a0bb7307361e45ae63ecb5115953492
--- /dev/null
+++ b/sdxl_default/images/512px/89.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:7ce26e3e513fac052fa2dbeb8cbde9ea303e56d7a67b335b59d48699cdd736d8
+size 302465
diff --git a/sdxl_default/images/512px/9.png b/sdxl_default/images/512px/9.png
new file mode 100644
index 0000000000000000000000000000000000000000..7ef07482501d3aa874c93607c74afff90e576ab1
--- /dev/null
+++ b/sdxl_default/images/512px/9.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a9fcb5bb30b6498de892a640275e16337cecbbe1e9ffe52544a7afcbcc51e36e
+size 346714
diff --git a/sdxl_default/images/512px/90.png b/sdxl_default/images/512px/90.png
new file mode 100644
index 0000000000000000000000000000000000000000..277a997812aa02ef2017b2687f2f106b64b111e8
--- /dev/null
+++ b/sdxl_default/images/512px/90.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:a57a2f9d20a6e8d08def21a052272d4e80177488581e7857d02a2dfc1a39049f
+size 403930
diff --git a/sdxl_default/images/512px/91.png b/sdxl_default/images/512px/91.png
new file mode 100644
index 0000000000000000000000000000000000000000..36af66683c22464887121ce0196ab068e3a037af
--- /dev/null
+++ b/sdxl_default/images/512px/91.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:3a0a320d2d7d0e47713ab4806da9516996930b3f15ae4fa4a9d165ef9d17be0c
+size 440660
diff --git a/sdxl_default/images/512px/92.png b/sdxl_default/images/512px/92.png
new file mode 100644
index 0000000000000000000000000000000000000000..bf90de46177a9567d5cf19a40982a7164b1c0753
--- /dev/null
+++ b/sdxl_default/images/512px/92.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:8b6d65ca6670113800bb694d0f5c643e19632c5436574efa40ebb109864037ba
+size 348519
diff --git a/sdxl_default/images/512px/93.png b/sdxl_default/images/512px/93.png
new file mode 100644
index 0000000000000000000000000000000000000000..a95842026b36d3a02fb6b8312b1d7a72837bb097
--- /dev/null
+++ b/sdxl_default/images/512px/93.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5eb15fbea522e083f5a7fe7d9712df6dcf7f88f0dda25069fd2b8bd1d7eb842b
+size 337513
diff --git a/sdxl_default/images/512px/94.png b/sdxl_default/images/512px/94.png
new file mode 100644
index 0000000000000000000000000000000000000000..4be4b0e64cb3e449221e2a2b6b81406efdac3d20
--- /dev/null
+++ b/sdxl_default/images/512px/94.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:66a190742a4f0e8983598fe0c812b7b300f76e932d0e5cde8fc8f27adaf487f8
+size 376948
diff --git a/sdxl_default/images/512px/95.png b/sdxl_default/images/512px/95.png
new file mode 100644
index 0000000000000000000000000000000000000000..756358f1d4877546744a50633dd1d90d2e917100
--- /dev/null
+++ b/sdxl_default/images/512px/95.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:f08556f5f9af0ff5793c5c5839ef7a8cb06941c1be26eb5b8a61920cbf1b8cd3
+size 272717
diff --git a/sdxl_default/images/512px/96.png b/sdxl_default/images/512px/96.png
new file mode 100644
index 0000000000000000000000000000000000000000..384c009dc6300e23aa27374273b52f6e64fd1757
--- /dev/null
+++ b/sdxl_default/images/512px/96.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:c774a6b5b3b549ff187f21a1a73261a4942ab4ed420486bc4737570a4197bbb0
+size 412307
diff --git a/sdxl_default/images/512px/97.png b/sdxl_default/images/512px/97.png
new file mode 100644
index 0000000000000000000000000000000000000000..1339feaefdd365208459e5dfe45505750bf91424
--- /dev/null
+++ b/sdxl_default/images/512px/97.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:5e3df4d73a117acdafa6e63871b6acaed175d65b728eee3350a7d9d3f9976ebc
+size 328006
diff --git a/sdxl_default/images/512px/98.png b/sdxl_default/images/512px/98.png
new file mode 100644
index 0000000000000000000000000000000000000000..4e538f40118cc080506e2eacff8bc1ec190468b2
--- /dev/null
+++ b/sdxl_default/images/512px/98.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:b700a3b4d6de209c8098f51b85cf21ed37d5516148a729c41694d50bd623ae8f
+size 388704
diff --git a/sdxl_default/images/512px/99.png b/sdxl_default/images/512px/99.png
new file mode 100644
index 0000000000000000000000000000000000000000..1f3fde4db27136fdff25b0b049792c006fcbc27f
--- /dev/null
+++ b/sdxl_default/images/512px/99.png
@@ -0,0 +1,3 @@
+version https://git-lfs.github.com/spec/v1
+oid sha256:05e05039c6d219076a3d323af3ebd87066d594e35dfa37e6ace58787eb10dc78
+size 289258