yongqiang committed
Commit 318acaa · 1 Parent(s): 8f2b341

Add ax620e compiled models; add unified run entry point launcher.py
.gitattributes CHANGED
@@ -45,3 +45,6 @@ models/vae_encoder.axmodel filter=lfs diff=lfs merge=lfs -text
  models/7ffcf62c-d292-11ef-bb2a-9d527016cd35 filter=lfs diff=lfs merge=lfs -text
  models/text_encoder/sd15_text_encoder_sim.onnx filter=lfs diff=lfs merge=lfs -text
  models/text_encoder/sd15_text_encoder_sim.axmodel filter=lfs diff=lfs merge=lfs -text
+ ax620e_models/99576a92-4ffd-11f0-b3ee-f5b7bf5aa809 filter=lfs diff=lfs merge=lfs -text
+ *.axmodel filter=lfs diff=lfs merge=lfs -text
+ *.png filter=lfs diff=lfs merge=lfs -text
README.md CHANGED
@@ -14,10 +14,11 @@ base_model:

  # SD1.5-LCM.Axera

  Based on the StableDiffusion 1.5 LCM project, this repo demonstrates how to deploy its **text-to-image** and **image-to-image** pipelines on AX650N-based products.

  Supported chips:
  - AX650N
+ - AX620E

  Supported hardware
@@ -54,15 +55,110 @@ base_model:

  ### Environment setup

  - System memory: more than 5 GiB
  - Python version: 3.10 or later (newer versions are untested); using a Python virtual environment such as `miniconda` for isolation is recommended
- - NPU Python API[pyaxengine](https://github.com/AXERA-TECH/pyaxengine)
+ - NPU Python API: [pyaxengine](https://github.com/AXERA-TECH/pyaxengine)

  ```
  pip install -r requirements.txt
  ```

+ ## 2025.11.27 Update
+
+ This update adds a unified model-execution script, `launcher.py`, which supports text-to-image (txt2img) and image-to-image (img2img) inference on `AX620E` and `AX650N` chips.
+
+ ### Features
+
+ - **Multi-platform support**: compatible with the AX620E and AX650N chips
+ - **Dual-mode inference**: supports both text-to-image and image-to-image generation
+ - **Multi-backend support**: supports both the AXE and ONNX inference backends
+ - **Flexible configuration**: the ONNX backend accepts custom image sizes, the AXMODEL backend supports 512 (AX650N) and 256 (AX620E), and parameters such as the random seed are configurable
+ - **High performance**: inference performance optimized for edge-computing devices
+
+ ### Requirements
+
+ - Python 3.9+
+ - Supported hardware platforms:
+   - AX620E
+   - AX650N
+
+ ### Model preparation
+
+ Make sure the model files are placed in the correct directory:
+ - Default model directory: `./models`
+ - AX620E model directory: `ax620e_models/`
+
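+ For reference, a sketch of the `ax620e_models/` layout, taken from the files added in this commit:
+
+ ```
+ ax620e_models/
+ ├── tokenizer/                # CLIP tokenizer (vocab.json, merges.txt, ...)
+ ├── text_encoder/             # sd15_text_encoder_sim.axmodel / .onnx
+ ├── unet.axmodel              # plus unet.onnx for the ONNX backend
+ ├── vae_encoder.axmodel       # img2img only (plus vae_encoder.onnx)
+ ├── vae_decoder.axmodel       # plus vae_decoder.onnx
+ ├── time_input_txt2img.npy
+ ├── time_input_img2img.npy
+ └── img2img-init.png          # sample init image
+ ```
+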
+ ### Basic usage
+
+ #### AX620E examples
+
+ **Text-to-image (256x256)**:
+
+ > Total generation time (4 steps): text_encoder 48.7 ms + unet 1483.9 ms + decoder 739.4 ms, about 2.2 s.
+
+ ```bash
+ python3 launcher.py --isize 256 --model_dir ax620e_models/ -o "ax620e_txt2img_axe.png" --prompt "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
+ ```
+
+ **Image-to-image**:
+
+ > Total generation time (2 steps): text_encoder 48.8 ms + vae_encoder 359.1 ms + unet 744.8 ms + decoder 739.1 ms, about 1.9 s.
+
+ ```bash
+ python3 launcher.py --init_image ax620e_models/img2img-init.png --isize 256 --model_dir ax620e_models/ --seed 1 --prompt "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" -o "ax620e_img2img_axe.png"
+ ```
+
+ #### AX650N examples
+
+ **Text-to-image (default 512x512)**:
+
+ ```bash
+ python3 launcher.py -o "ax650n_txt2img_axe.png" --prompt "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
+ ```
+
+ **Image-to-image**:
+
+ ```bash
+ python3 launcher.py --init_image models/img2img-init.png --prompt "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k" -o "ax650n_img2img_axe.png"
+ ```
+
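+ #### ONNX backend (illustrative)
+
+ `--backend onnx` runs the same pipeline on CPU through onnxruntime; per the feature list above, the ONNX graphs accept custom sizes (any multiple of 8). A hedged sketch, with an illustrative size and output name, not timed here:
+
+ ```bash
+ python3 launcher.py --backend onnx --isize 384 --model_dir ax620e_models/ -o "onnx_txt2img.png" --prompt "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
+ ```
+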
+ ### Parameters
+
+ | Parameter | Type | Default | Description |
+ |------|------|--------|------|
+ | `--backend` | choice | `"axe"` | Inference backend: `axe` or `onnx` |
+ | `--prompt` | str | default prompt | Input text prompt |
+ | `--model_dir` | str | `"./models"` | Directory containing the tokenizer, text encoder, UNet, VAE, and related models |
+ | `--time_input` | str | `None` | Optional override for the time-input numpy file |
+ | `--init_image` | str | `None` | Initial image; providing one enables img2img mode |
+ | `--isize` | int | `512` | Output image size: 512 or 256 |
+ | `-o`, `--save_dir` | str | `"./output.png"` | Path where the generated image is saved |
+ | `--seed` | int | `None` | Random seed (img2img defaults to 0 when unspecified) |
+
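+ With a fixed seed, runs should be reproducible: the initial latents and per-step noise are drawn from the seeded generator, so repeating the same command should yield an identical image. A hedged example (hypothetical output name):
+
+ ```bash
+ python3 launcher.py --seed 42 -o "txt2img_seed42.png" --prompt "Self-portrait oil painting, a beautiful cyborg with golden hair, 8k"
+ ```
+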
139
+ ### 使用说明
140
+
141
+ #### 分辨率限制
142
+ - **AX620E**: 仅支持 256x256 分辨率
143
+ - **AX650N**: 支持 512x512 分辨率
144
+
145
+ #### 模型文件要求
146
+
147
+ - 模型目录应包含以下必要的模型文件:
148
+ - Tokenizer 模型
149
+ - 文本编码器(Text Encoder)
150
+ - UNet 模型
151
+ - VAE 模型
152
+ - 时间输入文件
153
+
154
+ #### 图生图模式
155
+
156
+ - 使用 `--init_image` 参数启用图生图模式,系统将基于提供的初始图像进行生成.
157
+
158
+ ---
159
+
160
+ ## History
161
+
  ### Text-to-image

  - Run `run_txt2img_axe_infer.py`
ax620e_models/99576a92-4ffd-11f0-b3ee-f5b7bf5aa809 ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c2ab72ab60a118b3008c1c63d0e4115997ae01ebd5556eb4cdecc7a09d9df73f
+ size 3438083840
ax620e_models/img2img-init.png ADDED

Git LFS Details

  • SHA256: 42f0ee242d8caaee1aea5506c8318c6a920d559a63c6db8d79f993eebaf7d790
  • Pointer size: 131 Bytes
  • Size of remote file: 253 kB
ax620e_models/text_encoder/config.json ADDED
@@ -0,0 +1,25 @@
+ {
+   "_name_or_path": "/home/patrick/.cache/huggingface/hub/models--lykon-models--dreamshaper-7/snapshots/c4c9f9bec821e1862a78cbf45685cfb35b93638d/text_encoder",
+   "architectures": [
+     "CLIPTextModel"
+   ],
+   "attention_dropout": 0.0,
+   "bos_token_id": 0,
+   "dropout": 0.0,
+   "eos_token_id": 2,
+   "hidden_act": "quick_gelu",
+   "hidden_size": 768,
+   "initializer_factor": 1.0,
+   "initializer_range": 0.02,
+   "intermediate_size": 3072,
+   "layer_norm_eps": 1e-05,
+   "max_position_embeddings": 77,
+   "model_type": "clip_text_model",
+   "num_attention_heads": 12,
+   "num_hidden_layers": 12,
+   "pad_token_id": 1,
+   "projection_dim": 768,
+   "torch_dtype": "float16",
+   "transformers_version": "4.33.0.dev0",
+   "vocab_size": 49408
+ }
ax620e_models/text_encoder/model.fp16.safetensors ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a6f6744cfbcfe4fa9d236a231fd67e248389df7187dc15d52f16d9e9872105ff
+ size 246144152
ax620e_models/text_encoder/sd15_text_encoder_sim.axmodel ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:0d348ba3a0f0c70552b92215a8f78496f1c2072364e393510e0101af382fbcf4
+ size 240153225
ax620e_models/text_encoder/sd15_text_encoder_sim.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:99200d38b237eaf8bf77c5736da005a70281a64c5bf985eacaa83194d121e50b
+ size 492398339
ax620e_models/time_input_img2img.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:95d015256308e1be1af00793c77fa2ba8c934163beaa8015dec54d20048838cf
+ size 20608
ax620e_models/time_input_txt2img.npy ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:a48a430879c6a81a907889a6bb2b73f48cc9dc45b52f047ad2ee5c2dddcd2d10
+ size 20608
ax620e_models/tokenizer/merges.txt ADDED
The diff for this file is too large to render. See raw diff
 
ax620e_models/tokenizer/special_tokens_map.json ADDED
@@ -0,0 +1,24 @@
+ {
+   "bos_token": {
+     "content": "<|startoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "eos_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "pad_token": "<|endoftext|>",
+   "unk_token": {
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
ax620e_models/tokenizer/tokenizer_config.json ADDED
@@ -0,0 +1,33 @@
+ {
+   "add_prefix_space": false,
+   "bos_token": {
+     "__type": "AddedToken",
+     "content": "<|startoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "clean_up_tokenization_spaces": true,
+   "do_lower_case": true,
+   "eos_token": {
+     "__type": "AddedToken",
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   },
+   "errors": "replace",
+   "model_max_length": 77,
+   "pad_token": "<|endoftext|>",
+   "tokenizer_class": "CLIPTokenizer",
+   "unk_token": {
+     "__type": "AddedToken",
+     "content": "<|endoftext|>",
+     "lstrip": false,
+     "normalized": true,
+     "rstrip": false,
+     "single_word": false
+   }
+ }
ax620e_models/tokenizer/vocab.json ADDED
The diff for this file is too large to render. See raw diff
 
ax620e_models/unet.axmodel ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:c54bedd00f4a41a81f489b6c8ff019573859e097d30bbdd02e0686df07826c3e
+ size 883018671
ax620e_models/unet.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:2fd08a57b0875b5b8058ea62cce66bf116fc448d991bf4ee55565c330e7cbd08
+ size 668903
ax620e_models/vae_decoder.axmodel ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:b4c6befd8e16ffb2f3737da491cfe3ca2db7871a0a4d35058168bdc8ac36b5c8
+ size 63574596
ax620e_models/vae_decoder.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67cab6795e3524df09379e11f7fbf4b918e92082abe2206b94fadad1d1b0416d
+ size 198062036
ax620e_models/vae_encoder.axmodel ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:4bdd5159d9a4adae6aab8f7306e547b353f8f9e36294129794dfc45514a104ed
+ size 41314644
ax620e_models/vae_encoder.onnx ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:37fb2dbf12a58e2bc20a09d54e5441a617cbf7c63eca82fec9013e98a05f473d
+ size 136729193
launcher.py ADDED
@@ -0,0 +1,652 @@
+ from typing import List, Optional, Tuple, Union
+ import argparse
+ import os
+ import time
+ import warnings
+
+ import numpy as np
+ import onnxruntime
+ import axengine
+ import torch
+ from PIL import Image
+ from transformers import CLIPTokenizer, PreTrainedTokenizer
+
+ from diffusers.models.autoencoders.vae import DiagonalGaussianDistribution
+ from diffusers.models.modeling_outputs import AutoencoderKLOutput
+ from diffusers.utils import load_image, make_image_grid
+ from diffusers.utils.torch_utils import randn_tensor
+
+
+
+ ############ Img2Img
+ PipelineImageInput = Union[
+     Image.Image,
+     np.ndarray,
+     torch.Tensor,
+     List[Image.Image],
+     List[np.ndarray],
+     List[torch.Tensor],
+ ]
+
+ PipelineDepthInput = PipelineImageInput
+
+ TIME_EMBED_KEY = "/down_blocks.0/resnets.0/act_1/Mul_output_0"
+ TXT2IMG_TIMESTEPS = np.array([999, 759, 499, 259], dtype=np.int64)
+ IMG2IMG_TIMESTEPS = np.array([499, 259], dtype=np.int64)
+ IMG2IMG_SELF_TIMESTEPS = np.array([999, 759, 499, 259], dtype=np.int64)
+ IMG2IMG_STEP_INDEX = [2, 3]
+
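+ # Note: the compiled UNet takes a precomputed time-embedding tensor (TIME_EMBED_KEY)
+ # instead of a raw timestep; time_input_*.npy stores one embedding row per denoising
+ # step. txt2img runs the 4 LCM timesteps above; img2img reuses the last 2 of them,
+ # with IMG2IMG_STEP_INDEX marking their positions in the 4-step schedule.
+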
+ # Copied from diffusers.schedulers.scheduling_ddpm.DDPMScheduler.add_noise
+ def add_noise(
+     original_samples: torch.Tensor,
+     noise: torch.Tensor,
+     timesteps: torch.IntTensor,
+ ) -> torch.Tensor:
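+     # DDPM forward process: x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise,
+     # with alpha_bar_t rebuilt locally from the scaled-linear beta schedule below.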
+     # Make sure alphas_cumprod and timestep have same device and dtype as original_samples
+     # Move the self.alphas_cumprod to device to avoid redundant CPU to GPU data movement
+     # for the subsequent add_noise calls
+     # self.alphas_cumprod = self.alphas_cumprod.to(device=original_samples.device)
+     # Convert betas to alphas_bar_sqrt
+     beta_start = 0.00085
+     beta_end = 0.012
+     num_train_timesteps = 1000
+     betas = torch.linspace(beta_start**0.5, beta_end**0.5, num_train_timesteps, dtype=torch.float32) ** 2
+     alphas = 1.0 - betas
+     alphas_cumprod = torch.cumprod(alphas, dim=0)
+     alphas_cumprod = alphas_cumprod.to(device=original_samples.device)
+     alphas_cumprod = alphas_cumprod.to(dtype=original_samples.dtype)
+     timesteps = timesteps.to(original_samples.device)
+
+     sqrt_alpha_prod = alphas_cumprod[timesteps] ** 0.5
+     sqrt_alpha_prod = sqrt_alpha_prod.flatten()
+     while len(sqrt_alpha_prod.shape) < len(original_samples.shape):
+         sqrt_alpha_prod = sqrt_alpha_prod.unsqueeze(-1)
+
+     sqrt_one_minus_alpha_prod = (1 - alphas_cumprod[timesteps]) ** 0.5
+     sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.flatten()
+     while len(sqrt_one_minus_alpha_prod.shape) < len(original_samples.shape):
+         sqrt_one_minus_alpha_prod = sqrt_one_minus_alpha_prod.unsqueeze(-1)
+
+     noisy_samples = sqrt_alpha_prod * original_samples + sqrt_one_minus_alpha_prod * noise
+     return noisy_samples
+
+ def retrieve_latents(
+     encoder_output: torch.Tensor, generator: Optional[torch.Generator] = None, sample_mode: str = "sample"
+ ):
+     if hasattr(encoder_output, "latent_dist") and sample_mode == "sample":
+         return encoder_output.latent_dist.sample(generator)
+     elif hasattr(encoder_output, "latent_dist") and sample_mode == "argmax":
+         return encoder_output.latent_dist.mode()
+     elif hasattr(encoder_output, "latents"):
+         return encoder_output.latents
+     else:
+         raise AttributeError("Could not access latents of provided encoder_output")
+
+ def numpy_to_pt(images: np.ndarray) -> torch.Tensor:
+     r"""
+     Convert a NumPy image to a PyTorch tensor.
+
+     Args:
+         images (`np.ndarray`):
+             The NumPy image array to convert to PyTorch format.
+
+     Returns:
+         `torch.Tensor`:
+             A PyTorch tensor representation of the images.
+     """
+     if images.ndim == 3:
+         images = images[..., None]
+
+     images = torch.from_numpy(images.transpose(0, 3, 1, 2))
+     return images
+
+ def pil_to_numpy(images: Union[List[Image.Image], Image.Image]) -> np.ndarray:
+     r"""
+     Convert a PIL image or a list of PIL images to NumPy arrays.
+
+     Args:
+         images (`PIL.Image.Image` or `List[PIL.Image.Image]`):
+             The PIL image or list of images to convert to NumPy format.
+
+     Returns:
+         `np.ndarray`:
+             A NumPy array representation of the images.
+     """
+     if not isinstance(images, list):
+         images = [images]
+     images = [np.array(image).astype(np.float32) / 255.0 for image in images]
+     images = np.stack(images, axis=0)
+
+     return images
+
+ def is_valid_image(image) -> bool:
+     r"""
+     Checks if the input is a valid image.
+
+     A valid image can be:
+     - A `PIL.Image.Image`.
+     - A 2D or 3D `np.ndarray` or `torch.Tensor` (grayscale or color image).
+
+     Args:
+         image (`Union[PIL.Image.Image, np.ndarray, torch.Tensor]`):
+             The image to validate. It can be a PIL image, a NumPy array, or a torch tensor.
+
+     Returns:
+         `bool`:
+             `True` if the input is a valid image, `False` otherwise.
+     """
+     return isinstance(image, Image.Image) or isinstance(image, (np.ndarray, torch.Tensor)) and image.ndim in (2, 3)
+
+ def is_valid_image_imagelist(images):
+     r"""
+     Checks if the input is a valid image or list of images.
+
+     The input can be one of the following formats:
+     - A 4D tensor or numpy array (batch of images).
+     - A valid single image: `PIL.Image.Image`, 2D `np.ndarray` or `torch.Tensor` (grayscale image), 3D `np.ndarray` or
+       `torch.Tensor`.
+     - A list of valid images.
+
+     Args:
+         images (`Union[np.ndarray, torch.Tensor, PIL.Image.Image, List]`):
+             The image(s) to check. Can be a batch of images (4D tensor/array), a single image, or a list of valid
+             images.
+
+     Returns:
+         `bool`:
+             `True` if the input is valid, `False` otherwise.
+     """
+     if isinstance(images, (np.ndarray, torch.Tensor)) and images.ndim == 4:
+         return True
+     elif is_valid_image(images):
+         return True
+     elif isinstance(images, list):
+         return all(is_valid_image(image) for image in images)
+     return False
+
+
+ def normalize(images: Union[np.ndarray, torch.Tensor]) -> Union[np.ndarray, torch.Tensor]:
+     r"""
+     Normalize an image array to [-1,1].
+
+     Args:
+         images (`np.ndarray` or `torch.Tensor`):
+             The image array to normalize.
+
+     Returns:
+         `np.ndarray` or `torch.Tensor`:
+             The normalized image array.
+     """
+     return 2.0 * images - 1.0
+
+ # Copy from: /home/baiyongqiang/miniforge-pypy3/envs/hf/lib/python3.9/site-packages/diffusers/image_processor.py#607
+ def preprocess(
+     image: PipelineImageInput,
+     height: Optional[int] = None,
+     width: Optional[int] = None,
+     resize_mode: str = "default",  # "default", "fill", "crop"
+     crops_coords: Optional[Tuple[int, int, int, int]] = None,
+ ) -> torch.Tensor:
+     """
+     Preprocess the image input.
+
+     Args:
+         image (`PipelineImageInput`):
+             The image input, accepted formats are PIL images, NumPy arrays, PyTorch tensors; Also accept list of
+             supported formats.
+         height (`int`, *optional*):
+             The height of the preprocessed image. If `None`, will use `get_default_height_width()` to get the default
+             height.
+         width (`int`, *optional*):
+             The width of the preprocessed image. If `None`, will use `get_default_height_width()` to get the default
+             width.
+         resize_mode (`str`, *optional*, defaults to `default`):
+             The resize mode, can be one of `default` or `fill`. If `default`, will resize the image to fit within
+             the specified width and height, and it may not maintain the original aspect ratio. If `fill`, will
+             resize the image to fit within the specified width and height, maintaining the aspect ratio, and then
+             center the image within the dimensions, filling empty with data from image. If `crop`, will resize the
+             image to fit within the specified width and height, maintaining the aspect ratio, and then center the
+             image within the dimensions, cropping the excess. Note that resize_mode `fill` and `crop` are only
+             supported for PIL image input.
+         crops_coords (`List[Tuple[int, int, int, int]]`, *optional*, defaults to `None`):
+             The crop coordinates for each image in the batch. If `None`, will not crop the image.
+
+     Returns:
+         `torch.Tensor`:
+             The preprocessed image.
+     """
+     supported_formats = (Image.Image, np.ndarray, torch.Tensor)
+
+     # # Expand the missing dimension for 3-dimensional pytorch tensor or numpy array that represents grayscale image
+     # if self.config.do_convert_grayscale and isinstance(image, (torch.Tensor, np.ndarray)) and image.ndim == 3:
+     #     if isinstance(image, torch.Tensor):
+     #         # if image is a pytorch tensor could have 2 possible shapes:
+     #         #   1. batch x height x width: we should insert the channel dimension at position 1
+     #         #   2. channel x height x width: we should insert batch dimension at position 0,
+     #         #      however, since both channel and batch dimension has same size 1, it is same to insert at position 1
+     #         #   for simplicity, we insert a dimension of size 1 at position 1 for both cases
+     #         image = image.unsqueeze(1)
+     #     else:
+     #         # if it is a numpy array, it could have 2 possible shapes:
+     #         #   1. batch x height x width: insert channel dimension on last position
+     #         #   2. height x width x channel: insert batch dimension on first position
+     #         if image.shape[-1] == 1:
+     #             image = np.expand_dims(image, axis=0)
+     #         else:
+     #             image = np.expand_dims(image, axis=-1)
+
+     if isinstance(image, list) and isinstance(image[0], np.ndarray) and image[0].ndim == 4:
+         warnings.warn(
+             "Passing `image` as a list of 4d np.ndarray is deprecated."
+             "Please concatenate the list along the batch dimension and pass it as a single 4d np.ndarray",
+             FutureWarning,
+         )
+         image = np.concatenate(image, axis=0)
+     if isinstance(image, list) and isinstance(image[0], torch.Tensor) and image[0].ndim == 4:
+         warnings.warn(
+             "Passing `image` as a list of 4d torch.Tensor is deprecated."
+             "Please concatenate the list along the batch dimension and pass it as a single 4d torch.Tensor",
+             FutureWarning,
+         )
+         image = torch.cat(image, axis=0)
+
+     if not is_valid_image_imagelist(image):
+         raise ValueError(
+             f"Input is in incorrect format. Currently, we only support {', '.join(str(x) for x in supported_formats)}"
+         )
+     if not isinstance(image, list):
+         image = [image]
+
+     if isinstance(image[0], Image.Image):
+         if crops_coords is not None:
+             image = [i.crop(crops_coords) for i in image]
+         # if self.config.do_resize:
+         #     height, width = self.get_default_height_width(image[0], height, width)
+         #     image = [self.resize(i, height, width, resize_mode=resize_mode) for i in image]
+         # if self.config.do_convert_rgb:
+         #     image = [self.convert_to_rgb(i) for i in image]
+         # elif self.config.do_convert_grayscale:
+         #     image = [self.convert_to_grayscale(i) for i in image]
+         image = pil_to_numpy(image)  # to np
+         image = numpy_to_pt(image)  # to pt
+
+     elif isinstance(image[0], np.ndarray):
+         image = np.concatenate(image, axis=0) if image[0].ndim == 4 else np.stack(image, axis=0)
+
+         # image = self.numpy_to_pt(image)
+
+         # height, width = self.get_default_height_width(image, height, width)
+         # if self.config.do_resize:
+         #     image = self.resize(image, height, width)
+
+     elif isinstance(image[0], torch.Tensor):
+         image = torch.cat(image, axis=0) if image[0].ndim == 4 else torch.stack(image, axis=0)
+
+         # if self.config.do_convert_grayscale and image.ndim == 3:
+         #     image = image.unsqueeze(1)
+
+         channel = image.shape[1]
+         # don't need any preprocess if the image is latents
+         # if channel == self.config.vae_latent_channels:
+         #     return image
+
+         # height, width = self.get_default_height_width(image, height, width)
+         # if self.config.do_resize:
+         #     image = self.resize(image, height, width)
+
+     # expected range [0,1], normalize to [-1,1]
+     do_normalize = True  # self.config.do_normalize
+     if do_normalize and image.min() < 0:
+         warnings.warn(
+             "Passing `image` as torch tensor with value range in [-1,1] is deprecated. The expected value range for image tensor is [0,1] "
+             f"when passing as pytorch tensor or numpy Array. You passed `image` with value range [{image.min()},{image.max()}]",
+             FutureWarning,
+         )
+         do_normalize = False
+     if do_normalize:
+         image = normalize(image)
+
+     # if self.config.do_binarize:
+     #     image = self.binarize(image)
+
+     return image
+ ##########
+
+
+ def get_args():
+     parser = argparse.ArgumentParser(
+         prog="StableDiffusion",
+         description="Stable Diffusion txt2img/img2img inference"
+     )
+     parser.add_argument("--backend", choices=["axe", "onnx"], default="axe", help="Inference backend (axe or onnx)")
+     parser.add_argument("--prompt", type=str, default="Self-portrait oil painting, a beautiful cyborg with golden hair, 8k", help="Input text prompt")
+     parser.add_argument("--model_dir", type=str, default="./models", help="Directory containing tokenizer, text encoder, UNet, VAE, time inputs")
+     parser.add_argument("--time_input", type=str, default=None, help="Optional override for time input numpy file")
+     parser.add_argument("--init_image", type=str, default=None, help="Provide an init image to enable img2img")
+     parser.add_argument("--isize", type=int, default=512, help="Output image size (height = width = isize, must be multiple of 8)")
+     parser.add_argument("-o", "--save_dir", type=str, default="./output.png", help="Path to save the generated image")
+     parser.add_argument("--seed", type=int, default=None, help="Random seed (img2img defaults to 0 if unspecified)")
+     return parser.parse_args()
+
+ def maybe_convert_prompt(prompt: Union[str, List[str]], tokenizer: "PreTrainedTokenizer"):  # noqa: F821
+     if not isinstance(prompt, List):
+         prompts = [prompt]
+     else:
+         prompts = prompt
+
+     prompts = [_maybe_convert_prompt(p, tokenizer) for p in prompts]
+
+     if not isinstance(prompt, List):
+         return prompts[0]
+
+     return prompts
+
+
+ def _maybe_convert_prompt(prompt: str, tokenizer: "PreTrainedTokenizer"):  # noqa: F821
+     tokens = tokenizer.tokenize(prompt)
+     unique_tokens = set(tokens)
+     for token in unique_tokens:
+         if token in tokenizer.added_tokens_encoder:
+             replacement = token
+             i = 1
+             while f"{token}_{i}" in tokenizer.added_tokens_encoder:
+                 replacement += f" {token}_{i}"
+                 i += 1
+
+             prompt = prompt.replace(token, replacement)
+
+     return prompt
+
+
+ def create_session(model_path: str, backend: str):
+     if backend == "onnx":
+         return onnxruntime.InferenceSession(model_path, providers=["CPUExecutionProvider"])
+     return axengine.InferenceSession(model_path)
+
+
+ def ensure_multiple_of_eight(size: int) -> int:
+     if size % 8 != 0:
+         raise ValueError("Image size must be a multiple of 8")
+     return size
+
+
+ def compute_latent_shape(size: int, batch_size: int = 1) -> Tuple[int, int, int, int]:
+     size = ensure_multiple_of_eight(size)
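+     # SD1.5 latents have 4 channels at 1/8 spatial resolution:
+     # isize 512 -> (1, 4, 64, 64); isize 256 -> (1, 4, 32, 32).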
+     return batch_size, 4, size // 8, size // 8
+
+
+ def prepare_init_image(image_path: str, size: int) -> Tuple[Image.Image, np.ndarray]:
+     def convert(img: Image.Image) -> Image.Image:
+         return img.resize((size, size)).convert("RGB")
+
+     image = load_image(image_path, convert_method=convert)
+     image_show = image.copy()
+     processed = preprocess(image)
+     if isinstance(processed, torch.Tensor):
+         processed = processed.detach().cpu().numpy()
+     return image_show, processed
+
+
+ def ensure_parent(path: str) -> None:
+     parent = os.path.dirname(path)
+     if parent:
+         os.makedirs(parent, exist_ok=True)
+
+
+ def resolve_with_base(path: str, base_dir: str) -> str:
+     if os.path.isabs(path) and os.path.exists(path):
+         return path
+     candidate = os.path.join(base_dir, path)
+     if os.path.exists(candidate):
+         return candidate
+     return path
+
+
+ def get_prev_timestep(
+     index: int,
+     timestep: int,
+     timesteps: np.ndarray,
+     self_timesteps: Optional[np.ndarray] = None,
+     step_index: Optional[List[int]] = None,
+ ) -> int:
+     if self_timesteps is not None and step_index is not None:
+         prev_idx = step_index[index] + 1
+         if prev_idx < len(self_timesteps):
+             return int(self_timesteps[prev_idx])
+         return int(timestep)
+     if index + 1 < len(timesteps):
+         return int(timesteps[index + 1])
+     return int(timestep)
+
+
+ def denoise_loop(
+     latent: np.ndarray,
+     prompt_embeds: np.ndarray,
+     time_inputs: np.ndarray,
+     timesteps: np.ndarray,
+     unet_session,
+     alphas_cumprod: np.ndarray,
+     final_alphas_cumprod: float,
+     generator: Optional[torch.Generator],
+     noise_dtype: torch.dtype,
+     self_timesteps: Optional[np.ndarray] = None,
+     step_index: Optional[List[int]] = None,
+ ) -> np.ndarray:
+     if time_inputs.shape[0] < len(timesteps):
+         raise ValueError("time_input has fewer steps than the number of inference timesteps")
+
+     device = torch.device("cpu")
+     for i, timestep in enumerate(timesteps):
+         unet_start = time.time()
+         latent = latent.astype(np.float32)
+         feeds = {
+             "sample": latent,
+             TIME_EMBED_KEY: np.expand_dims(time_inputs[i], axis=0),
+             "encoder_hidden_states": prompt_embeds,
+         }
+         noise_pred = unet_session.run(None, feeds)[0]
+         print(f"unet once take {(1000 * (time.time() - unet_start)):.1f}ms")
+
+         sample = latent
+         model_output = noise_pred
+         prev_timestep = get_prev_timestep(i, int(timestep), timesteps, self_timesteps, step_index)
+
+         alpha_prod_t = alphas_cumprod[int(timestep)]
+         alpha_prod_t_prev = alphas_cumprod[prev_timestep] if prev_timestep >= 0 else final_alphas_cumprod
+         beta_prod_t = 1 - alpha_prod_t
+         beta_prod_t_prev = 1 - alpha_prod_t_prev
+
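+         # LCM consistency step; the constants match diffusers' LCMScheduler defaults
+         # (sigma_data = 0.5, timestep_scaling = 10, s*t = scaled_timestep):
+         #   c_skip = sigma_data^2 / ((s*t)^2 + sigma_data^2)
+         #   c_out  = (s*t) / sqrt((s*t)^2 + sigma_data^2)
+         # and denoised = c_out * predicted_x0 + c_skip * x_t.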
+         scaled_timestep = int(timestep) * 10
+         c_skip = 0.5 ** 2 / (scaled_timestep ** 2 + 0.5 ** 2)
+         c_out = scaled_timestep / (scaled_timestep ** 2 + 0.5 ** 2) ** 0.5
+         predicted_original_sample = (sample - (beta_prod_t ** 0.5) * model_output) / (alpha_prod_t ** 0.5)
+
+         denoised = c_out * predicted_original_sample + c_skip * sample
+         if i != len(timesteps) - 1:
+             if noise_dtype == torch.float32 and generator is None:
+                 noise = torch.randn(model_output.shape, device=device, dtype=noise_dtype).cpu().numpy()
+             else:
+                 noise_tensor = randn_tensor(model_output.shape, generator=generator, device=device, dtype=noise_dtype)
+                 noise = noise_tensor.cpu().numpy()
+             prev_sample = (alpha_prod_t_prev ** 0.5) * denoised + (beta_prod_t_prev ** 0.5) * noise
+         else:
+             prev_sample = denoised
+
+         latent = prev_sample.astype(np.float32)
+
+     return latent
+
+
+ def get_embeds(
+     prompt: Union[str, List[str]] = "Portrait of a pretty girl",
+     tokenizer_dir: str = "./models/tokenizer",
+     text_encoder_path: str = "./models/text_encoder/sd15_text_encoder_sim.axmodel",
+     backend: str = "axe",
+ ):
+     tokenizer = CLIPTokenizer.from_pretrained(tokenizer_dir)
+
+     text_inputs = tokenizer(
+         prompt,
+         padding="max_length",
+         max_length=77,
+         truncation=True,
+         return_tensors="pt",
+     )
+     input_ids = text_inputs.input_ids.to("cpu").numpy()
+     if backend == "axe":
+         input_ids = input_ids.astype(np.int32)
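+         # The compiled axmodel takes int32 token ids; the ONNX graph keeps the original int64.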
+
+     text_encoder = create_session(text_encoder_path, backend)
+     running_start = time.time()
+     prompt_embeds_npy = text_encoder.run(None, {"input_ids": input_ids})[0]
+     print(f"text encoder running take {(1000 * (time.time() - running_start)):.1f}ms")
+     return prompt_embeds_npy
+
+
+ def get_alphas_cumprod():
+     betas = torch.linspace(0.00085 ** 0.5, 0.012 ** 0.5, 1000, dtype=torch.float32) ** 2
+     alphas = 1.0 - betas
+     alphas_cumprod = torch.cumprod(alphas, dim=0).detach().numpy()
+     final_alphas_cumprod = alphas_cumprod[0]
+     self_timesteps = np.arange(0, 1000)[::-1].copy().astype(np.int64)
+     return alphas_cumprod, final_alphas_cumprod, self_timesteps
+
+ def main():
+     args = get_args()
+     backend = args.backend.lower()
+     prompt = args.prompt
+     is_img2img = args.init_image is not None
+
+     model_dir = args.model_dir
+     tokenizer_dir = os.path.join(model_dir, "tokenizer")
+     text_encoder_dir = os.path.join(model_dir, "text_encoder")
+
+     model_suffix = ".axmodel" if backend == "axe" else ".onnx"
+     text_encoder_path = os.path.join(text_encoder_dir, f"sd15_text_encoder_sim{model_suffix}")
+     unet_model = os.path.join(model_dir, f"unet{model_suffix}")
+     vae_decoder_model = os.path.join(model_dir, f"vae_decoder{model_suffix}")
+     vae_encoder_model = os.path.join(model_dir, f"vae_encoder{model_suffix}")
+     time_input_default = "time_input_img2img.npy" if is_img2img else "time_input_txt2img.npy"
+     time_input_path = args.time_input or os.path.join(model_dir, time_input_default)
+     if args.time_input:
+         time_input_path = resolve_with_base(args.time_input, model_dir)
+
+     init_image_path = None
+     if is_img2img:
+         init_image_path = resolve_with_base(args.init_image, model_dir)
+
+     size = ensure_multiple_of_eight(args.isize)
+     print(f"backend: {backend}")
+     print(f"prompt: {prompt}")
+     print(f"model_dir: {model_dir}")
+     print(f"tokenizer_dir: {tokenizer_dir}")
+     print(f"text_encoder: {text_encoder_path}")
+     print(f"unet_model: {unet_model}")
+     print(f"vae_decoder_model: {vae_decoder_model}")
+     if is_img2img:
+         # ref prompt: "Astronauts in a jungle, cold color palette, muted colors, detailed, 8k"
+         print(f"vae_encoder_model: {vae_encoder_model}")
+         print(f"init image: {init_image_path}")
+     print(f"time_input: {time_input_path}")
+     print(f"image_size: {size}x{size}")
+     print(f"save_dir: {args.save_dir}")
+
+     device = torch.device("cpu")
+     generator: Optional[torch.Generator] = None
+     if args.seed is not None:
+         generator = torch.manual_seed(args.seed)
+     noise_dtype = torch.float16 if is_img2img else torch.float32
+
+     encode_start = time.time()
+     prompt_embeds_npy = get_embeds(prompt, tokenizer_dir, text_encoder_path, backend)
+     print(f"text encoder take {(1000 * (time.time() - encode_start)):.1f}ms")
+
+     alphas_cumprod, final_alphas_cumprod, _ = get_alphas_cumprod()
+
+     load_start = time.time()
+     vae_encoder_session = None
+     if is_img2img:
+         vae_encoder_session = create_session(vae_encoder_model, backend)
+     unet_session = create_session(unet_model, backend)
+     vae_decoder_session = create_session(vae_decoder_model, backend)
+     print(f"load models take {(1000 * (time.time() - load_start)):.1f}ms")
+
+     time_input = np.load(time_input_path)
+
+     if is_img2img:
+         init_image_show, init_image_np = prepare_init_image(init_image_path, size)
+
+         vae_start = time.time()
+         vae_encoder_inp_name = vae_encoder_session.get_inputs()[0].name
+         vae_encoder_out = vae_encoder_session.run(None, {vae_encoder_inp_name: init_image_np})[0]
+         print(f"vae encoder inference take {(1000 * (time.time() - vae_start)):.1f}ms")
+
+         posterior = DiagonalGaussianDistribution(torch.from_numpy(vae_encoder_out).to(torch.float32))
+         vae_encode_info = AutoencoderKLOutput(latent_dist=posterior)
+         if generator is None:
+             generator = torch.manual_seed(0)
+         init_latents = retrieve_latents(vae_encode_info, generator=generator)
+         init_latents = init_latents * 0.18215
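+         # Noise the scaled latents up to the first img2img timestep (499),
+         # mirroring the noising step in diffusers' img2img pipelines.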
+         init_latents = torch.cat([init_latents], dim=0)
+         noise = randn_tensor(init_latents.shape, generator=generator, device=device, dtype=noise_dtype)
+         timestep_tensor = torch.tensor([int(IMG2IMG_TIMESTEPS[0])], device=device)
+         init_latents = add_noise(init_latents.to(device), noise, timestep_tensor)
+         latent = init_latents.detach().cpu().numpy()
+
+         timesteps = IMG2IMG_TIMESTEPS
+         self_timesteps = IMG2IMG_SELF_TIMESTEPS
+         step_index = IMG2IMG_STEP_INDEX
+     else:
+         batch, channels, latent_h, latent_w = compute_latent_shape(size)
+         if generator is None:
+             latents = torch.randn((batch, channels, latent_h, latent_w), device=device, dtype=torch.float32)
+         else:
+             latents = randn_tensor((batch, channels, latent_h, latent_w), generator=generator, device=device, dtype=torch.float32)
+         latent = latents.cpu().numpy()
+         init_image_show = None
+         timesteps = TXT2IMG_TIMESTEPS
+         self_timesteps = None
+         step_index = None
+
+     unet_loop_start = time.time()
+     latent = denoise_loop(
+         latent=latent,
+         prompt_embeds=prompt_embeds_npy,
+         time_inputs=time_input,
+         timesteps=timesteps,
+         unet_session=unet_session,
+         alphas_cumprod=alphas_cumprod,
+         final_alphas_cumprod=final_alphas_cumprod,
+         generator=generator,
+         noise_dtype=noise_dtype,
+         self_timesteps=self_timesteps,
+         step_index=step_index,
+     )
+     print(f"unet loop take {(1000 * (time.time() - unet_loop_start)):.1f}ms")
+
+     vae_start = time.time()
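+     # Undo the SD latent scaling factor (0.18215) before VAE decoding.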
+     latent = latent / 0.18215
+     vae_decoder_inp_name = vae_decoder_session.get_inputs()[0].name
+     image = vae_decoder_session.run(None, {vae_decoder_inp_name: latent.astype(np.float32)})[0]
+     print(f"vae decoder inference take {(1000 * (time.time() - vae_start)):.1f}ms")
+
+     save_start = time.time()
+     image = np.transpose(image, (0, 2, 3, 1)).squeeze(axis=0)
+     image_denorm = np.clip(image / 2 + 0.5, 0, 1)
+     image_uint8 = (image_denorm * 255).round().astype("uint8")
+     pil_image = Image.fromarray(image_uint8[:, :, :3])
+
+     ensure_parent(args.save_dir)
+     pil_image.save(args.save_dir)
+
+     if is_img2img:
+         grid_path = os.path.splitext(args.save_dir)[0] + "_grid.png"
+         grid_img = make_image_grid([init_image_show, pil_image], rows=1, cols=2)
+         ensure_parent(grid_path)
+         grid_img.save(grid_path)
+         print(f"grid image saved in {grid_path}")
+
+     print(f"save image take {(1000 * (time.time() - save_start)):.1f}ms")
+
+
+ if __name__ == "__main__":
+     main()