wolverinn committed
Commit 9c8d2ad · 1 parent: 7a1cf92

add docker file
Files changed:
- README.md +18 -0
- cog.yaml +68 -0
- handler.py +5 -3
- predict.py +262 -0
- repositories/stable-diffusion-stability-ai/assets/model-variants.jpg +0 -3
- repositories/stable-diffusion-stability-ai/assets/modelfigure.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/rick.jpeg +0 -0
- repositories/stable-diffusion-stability-ai/assets/stable-inpainting/inpainting.gif +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-inpainting/merged-leopards.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/depth2img/d2i.gif +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/depth2img/depth2fantasy.jpeg +0 -0
- repositories/stable-diffusion-stability-ai/assets/stable-samples/depth2img/depth2img01.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/depth2img/depth2img02.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/depth2img/merged-0000.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/depth2img/merged-0004.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/depth2img/merged-0005.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/depth2img/midas.jpeg +0 -0
- repositories/stable-diffusion-stability-ai/assets/stable-samples/depth2img/old_man.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/img2img/mountains-1.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/img2img/mountains-2.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/img2img/mountains-3.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/img2img/sketch-mountains-input.jpg +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/img2img/upscaling-in.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/img2img/upscaling-out.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/000002025.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/000002035.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/768/merged-0001.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/768/merged-0002.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/768/merged-0003.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/768/merged-0004.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/768/merged-0005.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/768/merged-0006.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/merged-0001.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/merged-0003.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/merged-0005.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/merged-0006.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/txt2img/merged-0007.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/upscaling/merged-dog.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/upscaling/sampled-bear-x4.png +0 -3
- repositories/stable-diffusion-stability-ai/assets/stable-samples/upscaling/snow-leopard-x4.png +0 -3
README.md
CHANGED

````diff
@@ -1,3 +1,21 @@
+# Chill Watcher
+consider deploying on:
+- hugging-face inference endpoint
+- replicate api
+
+### Setup references:
+install docker:
+- https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository
+
+install git-lfs:
+- https://github.com/git-lfs/git-lfs/blob/main/INSTALLING.md
+linux:
+```
+curl -s https://packagecloud.io/install/repositories/github/git-lfs/script.deb.sh | sudo bash
+
+sudo apt-get install git-lfs
+```
+
 ---
 license: apache-2.0
 ---
````
cog.yaml
ADDED (+68 lines)

```yaml
# Configuration for Cog ⚙️
# https://replicate.com/docs/guides/push-a-model
# prerequisite: install docker (https://docs.docker.com/engine/install/ubuntu/#set-up-the-repository), then run `dockerd` to start the docker daemon
# Reference: https://github.com/replicate/cog/blob/main/docs/yaml.md
# !!!! recommended: 60G of disk space for the cog docker image

build:
  # set to true if your model requires a GPU
  gpu: true

  # a list of ubuntu apt packages to install
  system_packages:
    - "libgl1-mesa-glx"
    - "libglib2.0-0"

  # python version in the form '3.8' or '3.8.12'
  python_version: "3.10.4"

  # a list of packages in the format <package-name>==<version>
  python_packages:
    - blendmodes==2022
    - transformers==4.25.1
    - accelerate==0.12.0
    - basicsr==1.4.2
    - gfpgan==1.3.8
    - gradio==3.16.2
    - numpy==1.23.3
    - Pillow==9.4.0
    - realesrgan==0.3.0
    # - torch==1.13.1+cu117
    # - --extra-index-url https://download.pytorch.org/whl/cu117
    # - torchvision==0.14.1+cu117
    # - --extra-index-url https://download.pytorch.org/whl/cu117
    - omegaconf==2.2.3
    - pytorch_lightning==1.7.6
    - scikit-image==0.19.2
    - fonts
    - font-roboto
    - timm==0.6.7
    - piexif==1.1.3
    - einops==0.4.1
    - jsonmerge==1.8.0
    - clean-fid==0.1.29
    - resize-right==0.0.2
    - torchdiffeq==0.2.3
    - kornia==0.6.7
    - lark==1.1.2
    - inflection==0.5.1
    - GitPython==3.1.27
    - torchsde==0.2.5
    - safetensors==0.2.7
    - httpcore<=0.15
    - fastapi==0.90.1
    # - open_clip_torch
    - git+https://github.com/mlfoundations/open_clip.git@bb6e834e9c70d9c27d0dc3ecedeebeaeb1ffad6b
    - git+https://github.com/openai/CLIP.git@d50d76daa670286dd6cacf3bcd80b5e4823fc8e1

  # commands run after the environment is set up
  run:
    - "pip3 install torch==1.13.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117"
    - "pip3 install torchvision==0.14.1+cu117 --extra-index-url https://download.pytorch.org/whl/cu117"
    - "echo env is ready!"

# https://replicate.com/wolverinn/chill_watcher
image: "r8.im/wolverinn/chill_watcher"

# predict.py defines how predictions are run on your model
predict: "predict.py:Predictor"
```
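The `python_packages` comment above says dependencies are pinned as `<package-name>==<version>`, though the list also contains unpinned entries (`fonts`), a `<=` constraint (`httpcore<=0.15`), and VCS URLs. As a hedged illustration of that pin format, here is a small stand-alone parser — `parse_pin` is a hypothetical helper for this document, not part of cog:

```python
import re

def parse_pin(requirement: str):
    """Split a dependency pin like 'transformers==4.25.1' into
    (name, operator, version). Entries without an operator (e.g. 'fonts')
    or VCS URLs ('git+https://...') come back with operator/version None."""
    match = re.match(r"^([A-Za-z0-9._-]+)\s*(==|<=|>=)\s*(\S+)$", requirement)
    if match:
        return match.group(1), match.group(2), match.group(3)
    return requirement, None, None

print(parse_pin("transformers==4.25.1"))  # ('transformers', '==', '4.25.1')
print(parse_pin("httpcore<=0.15"))        # ('httpcore', '<=', '0.15')
print(parse_pin("fonts"))                 # ('fonts', None, None)
```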
handler.py
CHANGED

```diff
@@ -153,9 +153,11 @@ class EndpointHandler():
             "height": 768,
             "seed": -1,
         }
-        if "
-
-
+        if data["inputs"]:
+            if "prompt" in data["inputs"].keys():
+                prompt = data["inputs"]["prompt"]
+                print("get prompt from request: ", prompt)
+                args["prompt"] = prompt
         p = StableDiffusionProcessingTxt2Img(sd_model=self.shared.sd_model, **args)
         processed = process_images(p)
         single_image_b64 = encode_pil_to_base64(processed.images[0]).decode('utf-8')
```
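The new guard in `handler.py` reads the prompt out of a `{"inputs": {"prompt": ...}}` request body before falling back to the built-in default. A minimal stand-alone sketch of that override logic, assuming plain dicts — the `defaults` dict here is a toy stand-in for the handler's full txt2img argument set:

```python
def apply_prompt_override(args: dict, data: dict) -> dict:
    """Mirror of the handler.py guard: if data['inputs']['prompt'] is
    present, it overrides the default prompt; otherwise args is unchanged.
    Returns a new dict rather than mutating the input."""
    if data.get("inputs"):
        if "prompt" in data["inputs"].keys():
            args = dict(args, prompt=data["inputs"]["prompt"])
    return args

defaults = {"prompt": "default prompt", "steps": 20}
request = {"inputs": {"prompt": "1girl, beige sweater"}}
print(apply_prompt_override(defaults, request)["prompt"])  # 1girl, beige sweater
print(apply_prompt_override(defaults, {"inputs": {}})["prompt"])  # default prompt
```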
predict.py
ADDED (+262 lines)

```python
# Prediction interface for Cog ⚙️
# https://github.com/replicate/cog/blob/main/docs/python.md

from cog import BasePredictor, Input, Path

import os
import sys
import time
import importlib
import signal
import re
from typing import Dict, List, Any
# from fastapi import FastAPI
# from fastapi.middleware.cors import CORSMiddleware
# from fastapi.middleware.gzip import GZipMiddleware
from packaging import version

import logging
logging.getLogger("xformers").addFilter(lambda record: 'A matching Triton is not available' not in record.getMessage())

from modules import errors
from modules.call_queue import wrap_queued_call, queue_lock, wrap_gradio_gpu_call

import torch

# Truncate version number of nightly/local build of PyTorch to not cause exceptions with CodeFormer or Safetensors
if ".dev" in torch.__version__ or "+git" in torch.__version__:
    torch.__long_version__ = torch.__version__
    torch.__version__ = re.search(r'[\d.]+[\d]', torch.__version__).group(0)

from modules import shared, devices, ui_tempdir
import modules.codeformer_model as codeformer
import modules.face_restoration
import modules.gfpgan_model as gfpgan
import modules.img2img

import modules.lowvram
import modules.paths
import modules.scripts
import modules.sd_hijack
import modules.sd_models
import modules.sd_vae
import modules.txt2img
import modules.script_callbacks
import modules.textual_inversion.textual_inversion
import modules.progress

import modules.ui
from modules import modelloader
from modules.shared import cmd_opts, opts
import modules.hypernetworks.hypernetwork

from modules.processing import StableDiffusionProcessingTxt2Img, StableDiffusionProcessingImg2Img, process_images
import base64
import io
from fastapi import HTTPException
from io import BytesIO
import piexif
import piexif.helper
from PIL import PngImagePlugin, Image


def initialize():
    # check_versions()

    # extensions.list_extensions()
    # localization.list_localizations(cmd_opts.localizations_dir)

    # if cmd_opts.ui_debug_mode:
    #     shared.sd_upscalers = upscaler.UpscalerLanczos().scalers
    #     modules.scripts.load_scripts()
    #     return

    modelloader.cleanup_models()
    modules.sd_models.setup_model()
    codeformer.setup_model(cmd_opts.codeformer_models_path)
    gfpgan.setup_model(cmd_opts.gfpgan_models_path)

    modelloader.list_builtin_upscalers()
    # modules.scripts.load_scripts()
    modelloader.load_upscalers()

    modules.sd_vae.refresh_vae_list()

    # modules.textual_inversion.textual_inversion.list_textual_inversion_templates()

    try:
        modules.sd_models.load_model()
    except Exception as e:
        errors.display(e, "loading stable diffusion model")
        print("", file=sys.stderr)
        print("Stable diffusion model failed to load, exiting", file=sys.stderr)
        exit(1)

    shared.opts.data["sd_model_checkpoint"] = shared.sd_model.sd_checkpoint_info.title

    shared.opts.onchange("sd_model_checkpoint", wrap_queued_call(lambda: modules.sd_models.reload_model_weights()))
    shared.opts.onchange("sd_vae", wrap_queued_call(lambda: modules.sd_vae.reload_vae_weights()), call=False)
    shared.opts.onchange("sd_vae_as_default", wrap_queued_call(lambda: modules.sd_vae.reload_vae_weights()), call=False)
    shared.opts.onchange("temp_dir", ui_tempdir.on_tmpdir_changed)

    # shared.reload_hypernetworks()

    # ui_extra_networks.intialize()
    # ui_extra_networks.register_page(ui_extra_networks_textual_inversion.ExtraNetworksPageTextualInversion())
    # ui_extra_networks.register_page(ui_extra_networks_hypernets.ExtraNetworksPageHypernetworks())
    # ui_extra_networks.register_page(ui_extra_networks_checkpoints.ExtraNetworksPageCheckpoints())

    # extra_networks.initialize()
    # extra_networks.register_extra_network(extra_networks_hypernet.ExtraNetworkHypernet())

    # if cmd_opts.tls_keyfile is not None and cmd_opts.tls_keyfile is not None:
    #     try:
    #         if not os.path.exists(cmd_opts.tls_keyfile):
    #             print("Invalid path to TLS keyfile given")
    #         if not os.path.exists(cmd_opts.tls_certfile):
    #             print(f"Invalid path to TLS certfile: '{cmd_opts.tls_certfile}'")
    #     except TypeError:
    #         cmd_opts.tls_keyfile = cmd_opts.tls_certfile = None
    #         print("TLS setup invalid, running webui without TLS")
    #     else:
    #         print("Running with TLS")

    # make the program just exit at ctrl+c without waiting for anything
    def sigint_handler(sig, frame):
        print(f'Interrupted with signal {sig} in {frame}')
        os._exit(0)

    signal.signal(signal.SIGINT, sigint_handler)


class EndpointHandler():
    def __init__(self, path=""):
        # Preload all the elements you are going to need at inference.
        # pseudo:
        # self.model = load_model(path)
        initialize()
        self.shared = shared

    def __call__(self, data: Dict[str, Any]) -> List[Dict[str, Any]]:
        """
        data args:
            inputs (:obj: `str` | `PIL.Image` | `np.array`)
            kwargs
        Return:
            A :obj:`list` | `dict`: will be serialized and returned
        """
        args = {
            # todo: don't output png
            "outpath_samples": "C:\\Users\\wolvz\\Desktop",
            "prompt": "lora:koreanDollLikeness_v15:0.66, best quality, ultra high res, (photorealistic:1.4), 1girl, beige sweater, black choker, smile, laughing, bare shoulders, solo focus, ((full body), (brown hair:1), looking at viewer",
            "negative_prompt": "paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, (ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.331), blurry, 3hands,4fingers,3arms, bad anatomy, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts,poorly drawn face,mutation,deformed",
            "sampler_name": "DPM++ SDE Karras",
            "steps": 20,  # 25
            "cfg_scale": 8,
            "width": 512,
            "height": 768,
            "seed": -1,
        }
        if "prompt" in data.keys():
            print("get prompt from request: ", data["prompt"])
            args["prompt"] = data["prompt"]
        p = StableDiffusionProcessingTxt2Img(sd_model=self.shared.sd_model, **args)
        processed = process_images(p)
        single_image_b64 = encode_pil_to_base64(processed.images[0]).decode('utf-8')
        return {
            "img_data": single_image_b64,
            "parameters": processed.images[0].info.get('parameters', ""),
        }


def manual_hack():
    initialize()
    args = {
        # todo: don't output res
        "outpath_samples": "C:\\Users\\wolvz\\Desktop",
        "prompt": "lora:koreanDollLikeness_v15:0.66, best quality, ultra high res, (photorealistic:1.4), 1girl, beige sweater, black choker, smile, laughing, bare shoulders, solo focus, ((full body), (brown hair:1), looking at viewer",
        "negative_prompt": "paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans",
        "sampler_name": "DPM++ SDE Karras",
        "steps": 20,  # 25
        "cfg_scale": 8,
        "width": 512,
        "height": 768,
        "seed": -1,
    }
    p = StableDiffusionProcessingTxt2Img(sd_model=shared.sd_model, **args)
    processed = process_images(p)


def decode_base64_to_image(encoding):
    if encoding.startswith("data:image/"):
        encoding = encoding.split(";")[1].split(",")[1]
    try:
        image = Image.open(BytesIO(base64.b64decode(encoding)))
        return image
    except Exception as err:
        raise HTTPException(status_code=500, detail="Invalid encoded image")


def encode_pil_to_base64(image):
    with io.BytesIO() as output_bytes:

        if opts.samples_format.lower() == 'png':
            use_metadata = False
            metadata = PngImagePlugin.PngInfo()
            for key, value in image.info.items():
                if isinstance(key, str) and isinstance(value, str):
                    metadata.add_text(key, value)
                    use_metadata = True
            image.save(output_bytes, format="PNG", pnginfo=(metadata if use_metadata else None), quality=opts.jpeg_quality)

        elif opts.samples_format.lower() in ("jpg", "jpeg", "webp"):
            parameters = image.info.get('parameters', None)
            exif_bytes = piexif.dump({
                "Exif": {piexif.ExifIFD.UserComment: piexif.helper.UserComment.dump(parameters or "", encoding="unicode")}
            })
            if opts.samples_format.lower() in ("jpg", "jpeg"):
                image.save(output_bytes, format="JPEG", exif=exif_bytes, quality=opts.jpeg_quality)
            else:
                image.save(output_bytes, format="WEBP", exif=exif_bytes, quality=opts.jpeg_quality)

        else:
            raise HTTPException(status_code=500, detail="Invalid image format")

        bytes_data = output_bytes.getvalue()

    return base64.b64encode(bytes_data)


class Predictor(BasePredictor):
    def setup(self):
        """Load the model into memory to make running multiple predictions efficient"""
        initialize()
        self.shared = shared

    def predict(
        self,
        prompt: str = Input(description="prompt en"),
    ) -> Dict[str, Any]:
        """Run a single prediction on the model"""
        args = {
            # todo: don't output png
            "outpath_samples": "C:\\Users\\wolvz\\Desktop",
            "prompt": "lora:koreanDollLikeness_v15:0.66, best quality, ultra high res, (photorealistic:1.4), 1girl, beige sweater, black choker, smile, laughing, bare shoulders, solo focus, ((full body), (brown hair:1), looking at viewer",
            "negative_prompt": "paintings, sketches, (worst quality:2), (low quality:2), (normal quality:2), lowres, normal quality, ((monochrome)), ((grayscale)), skin spots, acnes, skin blemishes, age spot, glans, (ugly:1.331), (duplicate:1.331), (morbid:1.21), (mutilated:1.21), (tranny:1.331), mutated hands, (poorly drawn hands:1.331), blurry, 3hands,4fingers,3arms, bad anatomy, missing fingers, extra digit, fewer digits, cropped, jpeg artifacts,poorly drawn face,mutation,deformed",
            "sampler_name": "DPM++ SDE Karras",
            "steps": 20,  # 25
            "cfg_scale": 8,
            "width": 512,
            "height": 768,
            "seed": -1,
        }
        if len(prompt) > 0:
            print("get prompt from request: ", prompt)
            args["prompt"] = prompt
        p = StableDiffusionProcessingTxt2Img(sd_model=self.shared.sd_model, **args)
        processed = process_images(p)
        single_image_b64 = encode_pil_to_base64(processed.images[0]).decode('utf-8')
        return {
            "img_data": single_image_b64,
            "parameters": processed.images[0].info.get('parameters', ""),
        }
```
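`decode_base64_to_image` in predict.py accepts either bare base64 or a full `data:image/...;base64,` URL, stripping the header before decoding. A stdlib-only sketch of that same prefix-stripping rule, round-tripped on dummy bytes — `strip_data_url` is an illustrative helper for this document, not part of predict.py:

```python
import base64

def strip_data_url(encoding: str) -> str:
    """Same rule as decode_base64_to_image: a 'data:image/...;base64,'
    header is removed, and a bare base64 string passes through unchanged."""
    if encoding.startswith("data:image/"):
        encoding = encoding.split(";")[1].split(",")[1]
    return encoding

payload = base64.b64encode(b"fake-image-bytes").decode("utf-8")
data_url = "data:image/png;base64," + payload

# both forms decode back to the original bytes
assert base64.b64decode(strip_data_url(data_url)) == b"fake-image-bytes"
assert base64.b64decode(strip_data_url(payload)) == b"fake-image-bytes"
```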
All of the files deleted under repositories/stable-diffusion-stability-ai/assets/ are listed in the summary above. Most were Git LFS pointers; three were plain binary files:
- assets/rick.jpeg (232 kB)
- assets/stable-samples/depth2img/depth2fantasy.jpeg (260 kB)
- assets/stable-samples/depth2img/midas.jpeg (40.3 kB)