dikdimon committed
Commit fabd6c3 · verified · 1 Parent(s): af5d245

Upload webUI_ExtraSchedulers using SD-Hub

webUI_ExtraSchedulers/README.md ADDED
@@ -0,0 +1,46 @@
## Extra Schedulers extension for Stable Diffusion webUI ##
### built for new Forge, partial support for Automatic1111, and probably reForge ###
#### (webUI must have split sampler/scheduler selection) ####

>[!IMPORTANT]
>Not for old Forge. For some related stuff in old Forge, see my old [OverrideScheduler extension](https://github.com/DenOfEquity/SchedRide).

### What do? ###
Adds six new schedulers to the dropdown list:
* cosine: follows a, you guessed it, cosine curve. The initial drop is relatively slow.
* cosine-exponential blend: starts cosine, ends up exponential (long tail).
* phi: (based on the original by [Extraltodeus](https://github.com/Extraltodeus/sigmas_tools_and_the_golden_scheduler))
* Laplace: (credit Tiankai et al. (2024), via Comfy)
* Karras Dynamic: (via yoinked-h)
* custom: either a list of sigmas [1.0, 0.6, 0.25, 0.1, 0.0] or an expression that will be evaluated for each sampling step. A list will be log-linear interpolated to the number of sampling steps; a list starting with 1.0 and ending with 0.0 will first be scaled between sigma_max and sigma_min, and a list ending with 0.0 but not starting with 1.0 will be used as-is, without interpolation. Expressions can use the following variables (see the sketch after this list):
    * *m*: minimum sigma (adjustable in **Settings**, usually ~0.03)
    * *M*: maximum sigma (adjustable in **Settings**, usually ~14.6)
    * *n*: total steps
    * *s*: this step
    * *x*: step / (total steps - 1)
    * *phi*: (1 + sqrt(5)) / 2

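To make the custom option concrete, here is roughly what happens, as a standalone sketch (illustrative only: `custom_sigmas` is a made-up name and the sigma range defaults are assumed; the extension reads the real values from the webUI):

```python
import math  # available to expressions, e.g. 'M * math.cos(x)'
import numpy
import torch

def custom_sigmas(entry, n, m=0.029, M=14.615):
    # list form: optionally rescale, then log-linear interpolate to n steps
    if entry.startswith('[') and entry.endswith(']'):
        vals = [float(v) for v in entry.strip('[]').split(',')]
        if vals[0] == 1.0 and vals[-1] == 0.0:
            vals = [v * (M - m) + m for v in vals]  # scale between sigma_max and sigma_min
        elif vals[-1] == 0.0:
            return torch.tensor(vals)               # trailing zero: used exactly as given
        xs = numpy.linspace(0, 1, len(vals))
        ys = numpy.log(vals[::-1])
        new_ys = numpy.interp(numpy.linspace(0, 1, n), xs, ys)
        sigmas = torch.tensor(numpy.exp(new_ys)[::-1].copy())
        return torch.cat([sigmas, sigmas.new_zeros([1])])
    # expression form: evaluated once per step with m, M, n, s, x, phi in scope
    phi = (1 + 5 ** 0.5) / 2
    sigmas = torch.zeros(n)
    for s in range(n):
        x = s / (n - 1)
        sigmas[s] = eval(entry)
    return torch.cat([sigmas, sigmas.new_zeros([1])])

print(custom_sigmas('m + (M-m)*(1-x)**3', 8))        # the extension's default expression
print(custom_sigmas('[1.0, 0.6, 0.25, 0.1, 0.0]', 8))
```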
Adds six new samplers:
* Euler a CFG++ [Forge only]
* Euler CFG++ [Forge only]
* Euler Dy CFG++ (based on Euler Dy by Koishi-Star) [Forge only]
* Euler SMEA Dy CFG++ (...) [Forge only]
* Refined Exponential Solver (credit: Katherine Crowson, Birch-san, Clybius)
* DPM++ 4M SDE (credit: Clybius)

### Why do? ###
Different results, sometimes better. I tend to use cosine-exponential blend most of the time.

### How do? ###
*(schedulers)* It's just a calculation of different number sequences travelling from sigma_max to sigma_min over the set number of sampling steps, guiding the denoising process. Infinite possibilities, but few sweet spots.

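For instance, the cosine scheduler above boils down to this little loop (a minimal sketch; the sigma range is assumed here, the real one comes from the model):

```python
import math

m, M, steps = 0.029, 14.615, 10  # assumed sigma_min / sigma_max
for step in range(steps):
    p = step / (steps - 1)
    sigma = m + 0.5 * (M - m) * (1 - math.cos(math.pi * (1 - p ** 0.5)))
    print(f"step {step}: sigma = {sigma:.3f}")
# starts at sigma_max, ends at sigma_min; the extension then appends a final 0.0
```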
### Redo? ###
Yes, the custom scheduler is saved to image infotext and *params.txt*.

### How install? ###
Go to the **Extensions** tab, then **Install from URL**, and use the URL for this repository.

Then, go back to the **Installed** tab and hit **Apply and restart UI**.

### more? ###
Check the 'neg' branch instead.
webUI_ExtraSchedulers/old/extra_schedulers.py ADDED
@@ -0,0 +1,385 @@
import gradio
import math, numpy
import torch
from modules import scripts, shared

# def get_sigmas_oss (n, sigma_min, sigma_max, device):
#     # https://github.com/bebebe666/OptimalSteps
#     def loglinear_interp(t_steps, num_steps):
#         """
#         Performs log-linear interpolation of a given array of decreasing numbers.
#         """
#         xs = numpy.linspace(0, 1, len(t_steps))
#         ys = numpy.log(t_steps[::-1])
#
#         new_xs = numpy.linspace(0, 1, num_steps)
#         new_ys = numpy.interp(new_xs, xs, ys)
#
#         interped_ys = numpy.exp(new_ys)[::-1].copy()
#         return interped_ys
#
#     if not shared.sd_model.is_webui_legacy_model():
#         sigmas = [0.9968, 0.9886, 0.9819, 0.975, 0.966, 0.9471, 0.9158, 0.8287, 0.5512, 0.2808, 0.001]
#     elif shared.sd_model.is_sd3:    # same as flux, but here for ease of changing later
#         sigmas = [0.9968, 0.9886, 0.9819, 0.975, 0.966, 0.9471, 0.9158, 0.8287, 0.5512, 0.2808, 0.001]
#     elif shared.sd_model.is_sdxl:   # fallback AYS11
#         sigmas = [14.615, 6.315, 3.771, 2.181, 1.342, 0.862, 0.555, 0.380, 0.234, 0.113, 0.029]
#     else:                           # fallback AYS11
#         sigmas = [14.615, 6.475, 3.861, 2.697, 1.886, 1.396, 0.963, 0.652, 0.399, 0.152, 0.029]
#
#     if n != len(sigmas):
#         sigmas = numpy.append(loglinear_interp(sigmas, n), [0.0])
#     else:
#         sigmas.append(0.0)
#
#     return torch.FloatTensor(sigmas).to(device)


def cosine_scheduler(n, sigma_min, sigma_max, device):
    sigmas = torch.zeros(n, device=device)
    if n == 1:
        sigmas[0] = sigma_max ** 0.5
    else:
        for x in range(n):
            p = x / (n-1)
            C = sigma_min + 0.5*(sigma_max-sigma_min)*(1 - math.cos(math.pi*(1 - p**0.5)))
            sigmas[x] = C
    return torch.cat([sigmas, sigmas.new_zeros([1])])

def cosexpblend_boost_scheduler(n, sigma_min, sigma_max, device):
    sigmas = []
    if n == 1:
        sigmas.append(sigma_max ** 0.5)
    else:
        K = (sigma_min / sigma_max)**(1/(n-1))
        E = sigma_max
        detail = numpy.interp(numpy.linspace(0, 1, n), numpy.linspace(0, 1, 5), [1.0, 1.0, 1.27, 1.0, 1.0])
        for x in range(n):
            p = x / (n-1)
            C = sigma_min + 0.5*(sigma_max-sigma_min)*(1 - math.cos(math.pi*(1 - p**0.5)))
            sigmas.append(detail[x] * (C + p * (E - C)))
            E *= K

    sigmas += [0.0]
    return torch.FloatTensor(sigmas).to(device)


def cosexpblend_scheduler(n, sigma_min, sigma_max, device):
    sigmas = []
    if n == 1:
        sigmas.append(sigma_max ** 0.5)
    else:
        K = (sigma_min / sigma_max)**(1/(n-1))
        E = sigma_max
        for x in range(n):
            p = x / (n-1)
            C = sigma_min + 0.5*(sigma_max-sigma_min)*(1 - math.cos(math.pi*(1 - p**0.5)))
            sigmas.append(C + p * (E - C))
            E *= K

    sigmas += [0.0]
    return torch.FloatTensor(sigmas).to(device)

## phi scheduler modified from original by @extraltodeus
def phi_scheduler(n, sigma_min, sigma_max, device):
    sigmas = torch.zeros(n, device=device)
    if n == 1:
        sigmas[0] = sigma_max ** 0.5
    else:
        phi = (1 + 5**0.5) / 2
        for x in range(n):
            sigmas[x] = sigma_min + (sigma_max-sigma_min)*((1-x/(n-1))**(phi*phi))
    return torch.cat([sigmas, sigmas.new_zeros([1])])

def get_sigmas_vp(n, sigma_min, sigma_max, device='cpu'):
    """Constructs a continuous VP noise schedule."""

    beta_d = 19.9
    beta_min = 0.1
    eps_s = 1e-3

    t = torch.linspace(1, eps_s, n, device=device)
    sigmas = torch.sqrt(torch.exp(beta_d * t ** 2 / 2 + beta_min * t) - 1)
    return torch.cat([sigmas, sigmas.new_zeros([1])])

def get_sigmas_laplace(n, sigma_min, sigma_max, device='cpu'):
    """Constructs the noise schedule proposed by Tiankai et al. (2024)."""
    mu = 0.
    beta = 0.5
    epsilon = 1e-5  # avoid log(0)
    x = torch.linspace(0, 1, n, device=device)
    clamp = lambda x: torch.clamp(x, min=sigma_min, max=sigma_max)
    lmb = mu - beta * torch.sign(0.5-x) * torch.log(1 - 2 * torch.abs(0.5-x) + epsilon)
    sigmas = clamp(torch.exp(lmb))
    return torch.cat([sigmas, sigmas.new_zeros([1])])



def get_sigmas_sinusoidal_sf(n, sigma_min, sigma_max, device='cpu'):
    """Constructs a sinusoidal noise schedule."""
    sf = 3.5
    x = torch.linspace(0, 1, n, device=device)
    sigmas = (sigma_min + (sigma_max - sigma_min) * (1 - torch.sin(torch.pi / 2 * x)))/sigma_max
    sigmas = sigmas**sf
    sigmas = sigmas * sigma_max
    return torch.cat([sigmas, sigmas.new_zeros([1])])

def get_sigmas_invcosinusoidal_sf(n, sigma_min, sigma_max, device='cpu'):
    """Constructs an inverted cosinusoidal noise schedule."""
    sf = 3.5
    x = torch.linspace(0, 1, n, device=device)
    sigmas = (sigma_min + (sigma_max - sigma_min) * (0.5*(torch.cos(x * math.pi) + 1)))/sigma_max
    sigmas = sigmas**sf
    sigmas = sigmas * sigma_max
    return torch.cat([sigmas, sigmas.new_zeros([1])])

def get_sigmas_react_cosinusoidal_dynsf(n, sigma_min, sigma_max, device='cpu'):
    """Constructs a cosinusoidal noise schedule with a dynamically increasing scale factor."""
    sf = 2.15
    x = torch.linspace(0, 1, n, device=device)
    sigmas = (sigma_min+(sigma_max-sigma_min)*(torch.cos(x*(torch.pi/2))))/sigma_max
    sigmas = sigmas**(sf*(n*x/n))
    sigmas = sigmas * sigma_max
    return torch.cat([sigmas, sigmas.new_zeros([1])])

def get_sigmas_karras_dynamic(n, sigma_min, sigma_max, device='cpu'):
    """Constructs the noise schedule of Karras et al. (2022), with the exponent oscillating around rho."""
    rho = 7.
    ramp = torch.linspace(0, 1, n, device=device)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    sigmas = torch.zeros_like(ramp)
    for i in range(n):
        sigmas[i] = (max_inv_rho + ramp[i] * (min_inv_rho - max_inv_rho)) ** (math.cos(i*math.tau/n)*2+rho)
    return torch.cat([sigmas, sigmas.new_zeros([1])])

def get_sigmas_karras_exponential_decay(n, sigma_min, sigma_max, device='cpu'):
    """Constructs the noise schedule of Karras et al. (2022), with the exponent decaying from rho over the run."""
    rho = 7.
    ramp = torch.linspace(0, 1, n, device=device)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    sigmas = torch.zeros_like(ramp)
    for i in range(n):
        sigmas[i] = (max_inv_rho + ramp[i] * (min_inv_rho - max_inv_rho)) ** (rho-(3*i/n))
    return torch.cat([sigmas, sigmas.new_zeros([1])])

def get_sigmas_karras_exponential_increment(n, sigma_min, sigma_max, device='cpu'):
    """Constructs the noise schedule of Karras et al. (2022), with the exponent increasing from rho over the run."""
    rho = 7.
    ramp = torch.linspace(0, 1, n, device=device)
    min_inv_rho = sigma_min ** (1 / rho)
    max_inv_rho = sigma_max ** (1 / rho)
    sigmas = torch.zeros_like(ramp)
    for i in range(n):
        sigmas[i] = (max_inv_rho + ramp[i] * (min_inv_rho - max_inv_rho)) ** (rho+3*i/n)
    return torch.cat([sigmas, sigmas.new_zeros([1])])

def custom_scheduler(n, sigma_min, sigma_max, device):
    # crude guard against obviously abusive expressions: fall back to linear
    if 'import' in ExtraScheduler.customSigmas or 'eval' in ExtraScheduler.customSigmas or 'scripts' in ExtraScheduler.customSigmas:
        sigmas = torch.linspace(sigma_max, sigma_min, n, device=device)

    elif ExtraScheduler.customSigmas[0] == '[' and ExtraScheduler.customSigmas[-1] == ']':
        sigmasList = [float(x) for x in ExtraScheduler.customSigmas.strip('[]').split(',')]

        if sigmasList[0] == 1.0 and sigmasList[-1] == 0.0:
            for x in range(len(sigmasList)):
                sigmasList[x] *= (sigma_max - sigma_min)
                sigmasList[x] += sigma_min
        elif sigmasList[-1] == 0.0:
            # don't interpolate to number of steps, use as is
            # (also keep the tensor on the requested device; the original omitted device here)
            return torch.tensor(sigmasList, device=device)

        xs = numpy.linspace(0, 1, len(sigmasList))
        ys = numpy.log(sigmasList[::-1])

        new_xs = numpy.linspace(0, 1, n)
        new_ys = numpy.interp(new_xs, xs, ys)

        interpolated_ys = numpy.exp(new_ys)[::-1].copy()
        sigmas = torch.tensor(interpolated_ys, device=device)
    else:
        sigmas = torch.linspace(sigma_max, sigma_min, n, device=device)
        detail = numpy.interp(numpy.linspace(0, 1, n), numpy.linspace(0, 1, 5), [1.0, 1.0, 1.25, 1.0, 1.0])

        # names available to the custom expression
        phi = (1 + 5**0.5) / 2
        pi = math.pi

        s = 0
        while s < n:
            x = s / (n - 1)
            M = sigma_max
            m = sigma_min
            d = detail[s]

            sigmas[s] = eval(ExtraScheduler.customSigmas)
            s += 1

    return torch.cat([sigmas, sigmas.new_zeros([1])])

from scripts.simple_kes import get_sigmas_simple_kes

from scripts.res_solver import sample_res_solver, sample_res_multistep, sample_res_multistep_cfgpp
from scripts.clybius_dpmpp_4m_sde import sample_clyb_4m_sde_momentumized
from scripts.gradient_estimation import sample_gradient_e, sample_gradient_e_cfgpp

from modules import sd_samplers_common, sd_samplers
from modules.sd_samplers_kdiffusion import sampler_extra_params, KDiffusionSampler

class ExtraScheduler(scripts.Script):
    sorting_priority = 99

    installed = False
    customSigmas = 'm + (M-m)*(1-x)**3'

    def title(self):
        return "Extra Schedulers (custom)"

    def show(self, is_img2img):
        # make this extension visible in both txt2img and img2img tabs
        if ExtraScheduler.installed:
            return scripts.AlwaysVisible
        else:
            return False

    def ui(self, *args, **kwargs):
        # with gradio.Accordion(open=False, label=self.title(), visible=ExtraScheduler.installed):
        custom_sigmas = gradio.Textbox(value=ExtraScheduler.customSigmas, label='Extra Schedulers: custom function / list [n0, n1, n2, ...]', lines=1.01)

        self.infotext_fields = [
            (custom_sigmas, "es_custom"),
        ]

        return [custom_sigmas]

    def process(self, params, *script_args, **kwargs):
        if params.scheduler == 'custom':
            custom_sigmas = script_args[0]
            ExtraScheduler.customSigmas = custom_sigmas
            params.extra_generation_params.update(dict(es_custom=ExtraScheduler.customSigmas))
        elif params.scheduler == 'Simple KES':
            params.extra_generation_params.update(dict(
                es_KES_start_blend = getattr(shared.opts, 'kes_start_blend'),
                es_KES_end_blend = getattr(shared.opts, 'kes_end_blend'),
                es_KES_sharpness = getattr(shared.opts, 'kes_sharpness'),
                es_KES_initial_step_size = getattr(shared.opts, 'kes_initial_step_size'),
                es_KES_final_step_size = getattr(shared.opts, 'kes_final_step_size'),
                es_KES_initial_noise = getattr(shared.opts, 'kes_initial_noise'),
                es_KES_final_noise = getattr(shared.opts, 'kes_final_noise'),
                es_KES_smooth_blend = getattr(shared.opts, 'kes_smooth_blend'),
                es_KES_step_size_factor = getattr(shared.opts, 'kes_step_size_factor'),
                es_KES_noise_scale = getattr(shared.opts, 'kes_noise_scale'),
            ))
        return

# runs at import: register the schedulers and samplers with the webUI
try:
    import modules.sd_schedulers as schedulers

    if "name='custom'" not in str(schedulers.schedulers[-1]):  # this is a bit lazy tbh
        print("Extension: Extra Schedulers: adding new schedulers")
        CosineScheduler = schedulers.Scheduler("cosine", "Cosine", cosine_scheduler)
        CosExpScheduler = schedulers.Scheduler("cosexp", "CosineExponential blend", cosexpblend_scheduler)
        CosExpBScheduler = schedulers.Scheduler("cosprev", "CosExp blend boost", cosexpblend_boost_scheduler)
        PhiScheduler = schedulers.Scheduler("phi", "Phi", phi_scheduler)
        VPScheduler = schedulers.Scheduler("vp", "VP", get_sigmas_vp)
        LaplaceScheduler = schedulers.Scheduler("laplace", "Laplace", get_sigmas_laplace)

        SineScheduler = schedulers.Scheduler("sine_sc", "Sine scaled", get_sigmas_sinusoidal_sf)
        InvCosScheduler = schedulers.Scheduler("inv_cos_sc", "Inverse Cosine scaled", get_sigmas_invcosinusoidal_sf)
        CosDynScheduler = schedulers.Scheduler("cosine_dyn", "Cosine Dynamic", get_sigmas_react_cosinusoidal_dynsf)
        KarrasDynScheduler = schedulers.Scheduler("karras_dyn", "Karras Dynamic", get_sigmas_karras_dynamic)
        KarrasExpDecayScheduler = schedulers.Scheduler("karras_exp_d", "Karras Exp Decay", get_sigmas_karras_exponential_decay)
        KarrasExpIncScheduler = schedulers.Scheduler("karras_exp_i", "Karras Exp Inc", get_sigmas_karras_exponential_increment)

        SimpleKEScheduler = schedulers.Scheduler("simple_kes", "Simple KES", get_sigmas_simple_kes)

        # OSSFlowScheduler = schedulers.Scheduler("optimal_ss", "Optimal Steps", get_sigmas_oss)

        CustomScheduler = schedulers.Scheduler("custom", "custom", custom_scheduler)

        schedulers.schedulers.append(CosineScheduler)
        schedulers.schedulers.append(CosExpScheduler)
        schedulers.schedulers.append(CosExpBScheduler)
        schedulers.schedulers.append(PhiScheduler)
        schedulers.schedulers.append(VPScheduler)
        schedulers.schedulers.append(LaplaceScheduler)

        schedulers.schedulers.append(SineScheduler)
        schedulers.schedulers.append(InvCosScheduler)
        schedulers.schedulers.append(CosDynScheduler)
        schedulers.schedulers.append(KarrasDynScheduler)
        schedulers.schedulers.append(KarrasExpDecayScheduler)
        schedulers.schedulers.append(KarrasExpIncScheduler)

        schedulers.schedulers.append(SimpleKEScheduler)

        # schedulers.schedulers.append(OSSFlowScheduler)

        schedulers.schedulers.append(CustomScheduler)
        schedulers.schedulers_map = {**{x.name: x for x in schedulers.schedulers}, **{x.label: x for x in schedulers.schedulers}}

        try:
            # CFG++ method is Forge only, not working in A1111
            import modules_forge.forge_version
            from scripts.samplers_cfgpp import sample_euler_ancestral_cfgpp, sample_euler_cfgpp, sample_euler_dy_cfgpp, sample_euler_smea_dy_cfgpp, sample_euler_negative_cfgpp, sample_euler_negative_dy_cfgpp
            from scripts.forgeClassic_cfgpp import sample_dpmpp_sde_cfgpp, sample_dpmpp_2m_cfgpp, sample_dpmpp_2m_sde_cfgpp, sample_dpmpp_3m_sde_cfgpp, sample_dpmpp_2s_ancestral_cfgpp
            samplers_cfgpp = [
                ("Euler a CFG++", sample_euler_ancestral_cfgpp, ["k_euler_a_cfgpp"], {"uses_ensd": True}),
                ("Euler CFG++", sample_euler_cfgpp, ["k_euler_cfgpp"], {}),
                ("Euler Dy CFG++", sample_euler_dy_cfgpp, ["k_euler_dy_cfgpp"], {}),
                ("Euler SMEA Dy CFG++", sample_euler_smea_dy_cfgpp, ["k_euler_smea_dy_cfgpp"], {}),
                ("Euler Negative CFG++", sample_euler_negative_cfgpp, ["k_euler_negative_cfgpp"], {}),
                ("Euler Negative Dy CFG++", sample_euler_negative_dy_cfgpp, ["k_euler_negative_dy_cfgpp"], {}),
                ("RES multistep CFG++", sample_res_multistep_cfgpp, ["k_res_multi_cfgpp"], {}),
                ("Gradient Estimation CFG++", sample_gradient_e_cfgpp, ["k_grad_est_cfgpp"], {}),
                ("DPM++ SDE CFG++", sample_dpmpp_sde_cfgpp, ["k_dpmpp_sde_cfgpp"], {"brownian_noise": True, "second_order": True}),
                ("DPM++ 2M CFG++", sample_dpmpp_2m_cfgpp, ["k_dpmpp_2m_cfgpp"], {}),
                ("DPM++ 2M SDE CFG++", sample_dpmpp_2m_sde_cfgpp, ["k_dpmpp_2m_sde_cfgpp"], {"brownian_noise": True}),
                ("DPM++ 3M SDE CFG++", sample_dpmpp_3m_sde_cfgpp, ["k_dpmpp_3m_sde_cfgpp"], {"brownian_noise": True, 'discard_next_to_last_sigma': True}),
                ("DPM++ 2S a CFG++", sample_dpmpp_2s_ancestral_cfgpp, ["k_dpmpp_2s_a_cfgpp"], {"uses_ensd": True, "second_order": True}),
            ]
            samplers_data_cfgpp = [
                sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options)
                for label, funcname, aliases, options in samplers_cfgpp
                if callable(funcname)
            ]
            sampler_extra_params['sample_euler_cfgpp'] = ['s_churn', 's_tmin', 's_tmax', 's_noise']
            sampler_extra_params['sample_euler_negative_cfgpp'] = ['s_churn', 's_tmin', 's_tmax', 's_noise']
            sampler_extra_params['sample_euler_dy_cfgpp'] = ['s_churn', 's_tmin', 's_tmax', 's_noise']
            sampler_extra_params['sample_euler_negative_dy_cfgpp'] = ['s_churn', 's_tmin', 's_tmax', 's_noise']
            sampler_extra_params['sample_euler_smea_dy_cfgpp'] = ['s_churn', 's_tmin', 's_tmax', 's_noise']

            sampler_extra_params['sample_dpmpp_sde_cfgpp'] = ['s_noise']
            sampler_extra_params['sample_dpmpp_2m_sde_cfgpp'] = ['s_noise']
            sampler_extra_params['sample_dpmpp_3m_sde_cfgpp'] = ['s_noise']
            sampler_extra_params['sample_dpmpp_2s_ancestral_cfgpp'] = ['s_noise']

            sd_samplers.all_samplers.extend(samplers_data_cfgpp)
        except:
            # the CFG++ samplers need Forge internals; skip them elsewhere
            pass

        samplers_extra = [
            ("RES multistep", sample_res_multistep, ["k_res_multi"], {}),
            ("Refined Exponential Solver", sample_res_solver, ["k_res"], {}),
            ("DPM++ 4M SDE", sample_clyb_4m_sde_momentumized, ["k_dpmpp_4m_sde"], {}),
            ("Gradient Estimation", sample_gradient_e, ["k_grad_est"], {}),
        ]
        samplers_data_extra = [
            sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options)
            for label, funcname, aliases, options in samplers_extra
            if callable(funcname)
        ]

        sd_samplers.all_samplers.extend(samplers_data_extra)
        sd_samplers.all_samplers_map = {x.name: x for x in sd_samplers.all_samplers}
        sd_samplers.set_samplers()

    ExtraScheduler.installed = True
except:
    print("Extension: Extra Schedulers: unsupported webUI")
    ExtraScheduler.installed = False
webUI_ExtraSchedulers/old/forgeClassic_cfgpp.py ADDED
@@ -0,0 +1,269 @@
# first 3 lifted from ForgeClassic (https://github.com/Haoming02/sd-webui-forge-classic/)
# 4th is a simple adaptation of 3M to 2M
# 5th lifted from ReForge (https://github.com/Panchovix/stable-diffusion-webui-reForge)
# all modified to work with Forge2

import torch
from tqdm.auto import trange
from k_diffusion.sampling import (
    default_noise_sampler,
    BrownianTreeNoiseSampler,
    get_ancestral_step,
    to_d,
)


def _sigma_fn(t):
    return t.neg().exp()


def _t_fn(sigma):
    return sigma.log().neg()


@torch.no_grad()
def sample_dpmpp_sde_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, noise_sampler=None):
    eta = 1.0
    s_noise = 1.0
    r = 0.5

    if len(sigmas) <= 1:
        return x

    # default extra_args *before* reading the seed from it (the original read it first,
    # which would raise if extra_args were None)
    extra_args = {} if extra_args is None else extra_args
    seed = extra_args.get("seed", None)
    sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
    noise_sampler = BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=seed) if noise_sampler is None else noise_sampler

    model.need_last_noise_uncond = True
    model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True

    s_in = x.new_ones([x.shape[0]])

    for i in trange(len(sigmas) - 1, disable=disable):
        denoised = model(x, sigmas[i] * s_in, **extra_args)
        if callback is not None:
            callback(
                {
                    "x": x,
                    "i": i,
                    "sigma": sigmas[i],
                    "sigma_hat": sigmas[i],
                    "denoised": denoised,
                }
            )

        if sigmas[i + 1] == 0:
            d = model.last_noise_uncond
            x = denoised + d * sigmas[i + 1]
        else:
            t, t_next = _t_fn(sigmas[i]), _t_fn(sigmas[i + 1])
            h = t_next - t
            s = t + h * r
            fac = 1 / (2 * r)

            sd, su = get_ancestral_step(_sigma_fn(t), _sigma_fn(s), eta)
            s_ = _t_fn(sd)
            x_2 = (_sigma_fn(s_) / _sigma_fn(t)) * x - (t - s_).expm1() * denoised
            x_2 = x_2 + noise_sampler(_sigma_fn(t), _sigma_fn(s)) * s_noise * su
            denoised_2 = model(x_2, _sigma_fn(s) * s_in, **extra_args)
            u = x_2 - model.last_noise_uncond * _sigma_fn(s) * s_in  # d=(x-u)/sigma; d*sigma=x-u; u=x-d*sigma

            sd, su = get_ancestral_step(_sigma_fn(t), _sigma_fn(t_next), eta)
            # midpoint mix of the two predictions; the source had "(1 - fac) * u + fac * u",
            # which collapses to u (with r = 0.5, fac == 1, so both forms are equivalent anyway)
            denoised_d = (1 - fac) * denoised + fac * u
            x = denoised_2 + to_d(x, sigmas[i], denoised_d) * sd
            x = x + noise_sampler(_sigma_fn(t), _sigma_fn(t_next)) * s_noise * su
    return x


@torch.no_grad()
def sample_dpmpp_2m_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None):
    extra_args = {} if extra_args is None else extra_args
    s_in = x.new_ones([x.shape[0]])

    old_uncond_denoised = None
    uncond_denoised = None

    model.need_last_noise_uncond = True
    model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True

    for i in trange(len(sigmas) - 1, disable=disable):
        denoised = model(x, sigmas[i] * s_in, **extra_args)
        uncond_denoised = x - model.last_noise_uncond * sigmas[i] * s_in
        if callback is not None:
            callback(
                {
                    "x": x,
                    "i": i,
                    "sigma": sigmas[i],
                    "sigma_hat": sigmas[i],
                    "denoised": denoised,
                }
            )
        t, t_next = _t_fn(sigmas[i]), _t_fn(sigmas[i + 1])
        h = t_next - t
        if old_uncond_denoised is None or sigmas[i + 1] == 0:
            denoised_mix = -torch.exp(-h) * uncond_denoised
        else:
            h_last = t - _t_fn(sigmas[i - 1])
            r = h_last / h
            denoised_mix = -torch.exp(-h) * uncond_denoised - torch.expm1(-h) * (1 / (2 * r)) * (denoised - old_uncond_denoised)
        x = denoised + denoised_mix + torch.exp(-h) * x
        old_uncond_denoised = uncond_denoised
    return x


@torch.no_grad()
def sample_dpmpp_3m_sde_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=None, s_noise=None, noise_sampler=None):
    eta = 1.0 if eta is None else eta
    s_noise = 1.0 if s_noise is None else s_noise

    if len(sigmas) <= 1:
        return x

    extra_args = {} if extra_args is None else extra_args  # (reordered as in sample_dpmpp_sde_cfgpp above)
    seed = extra_args.get("seed", None)
    sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
    noise_sampler = BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=seed) if noise_sampler is None else noise_sampler
    s_in = x.new_ones([x.shape[0]])

    denoised_1, denoised_2 = None, None
    h, h_1, h_2 = None, None, None

    model.need_last_noise_uncond = True
    model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True

    for i in trange(len(sigmas) - 1, disable=disable):
        denoised = model(x, sigmas[i] * s_in, **extra_args)
        u = x - model.last_noise_uncond * sigmas[i] * s_in  # d=(x-u)/sigma; d*sigma=x-u; u=x-d*sigma
        if callback is not None:
            callback(
                {
                    "x": x,
                    "i": i,
                    "sigma": sigmas[i],
                    "sigma_hat": sigmas[i],
                    "denoised": denoised,
                }
            )
        if sigmas[i + 1] == 0:
            x = denoised
        else:
            t, s = -sigmas[i].log(), -sigmas[i + 1].log()
            h = s - t
            h_eta = h * (eta + 1)

            x = torch.exp(-h_eta) * (x + (denoised - u)) + (-h_eta).expm1().neg() * denoised

            if h_2 is not None:
                r0 = h_1 / h
                r1 = h_2 / h
                d1_0 = (denoised - denoised_1) / r0
                d1_1 = (denoised_1 - denoised_2) / r1
                d1 = d1_0 + (d1_0 - d1_1) * r0 / (r0 + r1)
                d2 = (d1_0 - d1_1) / (r0 + r1)
                phi_2 = h_eta.neg().expm1() / h_eta + 1
                phi_3 = phi_2 / h_eta - 0.5
                x = x + phi_2 * d1 - phi_3 * d2
            elif h_1 is not None:
                r = h_1 / h
                d = (denoised - denoised_1) / r
                phi_2 = h_eta.neg().expm1() / h_eta + 1
                x = x + phi_2 * d

            if eta:
                x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * sigmas[i + 1] * (-2 * h * eta).expm1().neg().sqrt() * s_noise

        denoised_1, denoised_2 = denoised, denoised_1
        h_1, h_2 = h, h_1
    return x


## extra
@torch.no_grad()
def sample_dpmpp_2m_sde_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None):
    # just cut down from the 3m_sde version
    extra_args = {} if extra_args is None else extra_args
    seed = extra_args.get("seed", None)
    sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
    noise_sampler = BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=seed) if noise_sampler is None else noise_sampler
    s_in = x.new_ones([x.shape[0]])

    denoised_1 = None
    h_1 = None

    model.need_last_noise_uncond = True
    model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True

    for i in trange(len(sigmas) - 1, disable=disable):
        denoised = model(x, sigmas[i] * s_in, **extra_args)
        u = x - model.last_noise_uncond * sigmas[i] * s_in
        if callback is not None:
            callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
        if sigmas[i + 1] == 0:
            # Denoising step
            x = denoised
        else:
            # DPM-Solver++(2M) SDE
            t, s = -sigmas[i].log(), -sigmas[i + 1].log()
            h = s - t

            h_eta = h * (eta + 1)
            x = torch.exp(-h_eta) * (x + (denoised - u)) + (-h_eta).expm1().neg() * denoised

            if denoised_1 is not None:
                r = h_1 / h

                d = (denoised - denoised_1) / r
                phi_2 = h_eta.neg().expm1() / h_eta + 1
                x = x + phi_2 * d

            if eta:
                x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * sigmas[i + 1] * (-2 * h * eta).expm1().neg().sqrt() * s_noise

            h_1 = h

        denoised_1 = denoised
    return x


# via ReForge
@torch.no_grad()
def sample_dpmpp_2s_ancestral_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None):
    extra_args = {} if extra_args is None else extra_args
    noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler

    model.need_last_noise_uncond = True
    model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True

    s_in = x.new_ones([x.shape[0]])
    sigma_fn = lambda t: t.neg().exp()
    t_fn = lambda sigma: sigma.log().neg()
    for i in trange(len(sigmas) - 1, disable=disable):
        denoised = model(x, sigmas[i] * s_in, **extra_args)
        sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1], eta=eta)
        if callback is not None:
            callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
        if sigma_down == 0:
            # Euler method
            d = model.last_noise_uncond
            x = denoised + d * sigma_down
        else:
            u = x - model.last_noise_uncond * sigmas[i] * s_in

            # DPM-Solver++(2S)
            t, t_next = t_fn(sigmas[i]), t_fn(sigma_down)
            # r = torch.sinh(1 + (2 - eta) * (t_next - t) / (t - t_fn(sigma_up)))  # works only on non-cfgpp, weird
            r = 1 / 2
            h = t_next - t
            s = t + r * h
            x_2 = (sigma_fn(s) / sigma_fn(t)) * (x + (denoised - u)) - (-h * r).expm1() * denoised
            denoised_2 = model(x_2, sigma_fn(s) * s_in, **extra_args)
            x = (sigma_fn(t_next) / sigma_fn(t)) * (x + (denoised - u)) - (-h).expm1() * denoised_2

        # Noise addition
        if sigmas[i + 1] > 0:
            x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * s_noise * sigma_up
    return x
webUI_ExtraSchedulers/old/gradient_estimation.py ADDED
@@ -0,0 +1,70 @@
## lifted from ReForge, original implementation from Comfy
## CFG++ attempt by me

import torch
from tqdm.auto import trange


# copied from kdiffusion/sampling.py
def to_d(x, sigma, denoised):
    """Converts a denoiser output to a Karras ODE derivative."""
    return (x - denoised) / append_dims(sigma, x.ndim)

def append_dims(x, target_dims):
    """Appends dimensions to the end of a tensor until it has target_dims dimensions."""
    dims_to_append = target_dims - x.ndim
    if dims_to_append < 0:
        raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less')
    return x[(...,) + (None,) * dims_to_append]


@torch.no_grad()
def sample_gradient_e(model, x, sigmas, extra_args=None, callback=None, disable=None, ge_gamma=2.):
    """Gradient-estimation sampler. Paper: https://openreview.net/pdf?id=o2ND9v0CeK"""
    extra_args = {} if extra_args is None else extra_args
    s_in = x.new_ones([x.shape[0]])
    old_d = None

    sigmas = sigmas.to(x.device)

    for i in trange(len(sigmas) - 1, disable=disable):
        denoised = model(x, sigmas[i] * s_in, **extra_args)

        d = to_d(x, sigmas[i], denoised)
        if callback is not None:
            callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
        dt = sigmas[i + 1] - sigmas[i]
        if i == 0:  # Euler method
            x = x + d * dt
        else:
            # Gradient estimation
            d_bar = ge_gamma * d + (1 - ge_gamma) * old_d
            x = x + d_bar * dt
        old_d = d
    return x


@torch.no_grad()
def sample_gradient_e_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, ge_gamma=2.):
    """Gradient-estimation sampler. Paper: https://openreview.net/pdf?id=o2ND9v0CeK"""
    extra_args = {} if extra_args is None else extra_args
    s_in = x.new_ones([x.shape[0]])
    old_d = None

    model.need_last_noise_uncond = True

    for i in trange(len(sigmas) - 1, disable=disable):
        denoised = model(x, sigmas[i] * s_in, **extra_args)

        d = model.last_noise_uncond

        if callback is not None:
            callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
        if i == 0:  # Euler method
            x = denoised + d * sigmas[i+1]
        else:
            # Gradient estimation
            d_bar = ge_gamma * d + (1 - ge_gamma) * old_d
            x = denoised + d_bar * sigmas[i+1]
        old_d = d
    return x
webUI_ExtraSchedulers/old/res_solver.py ADDED
@@ -0,0 +1,396 @@
import torch
from torch import no_grad, FloatTensor
from tqdm import tqdm
from itertools import pairwise
from typing import Protocol, Optional, Dict, Any, TypedDict, NamedTuple, Union, List
import math

from tqdm.auto import trange

# copied from kdiffusion/sampling.py and utils.py
def default_noise_sampler(x):
    return lambda sigma, sigma_next: torch.randn_like(x)

def append_dims(x, target_dims):
    """Appends dimensions to the end of a tensor until it has target_dims dimensions."""
    dims_to_append = target_dims - x.ndim
    if dims_to_append < 0:
        raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less')
    return x[(...,) + (None,) * dims_to_append]

def to_d(x, sigma, denoised):
    """Converts a denoiser output to a Karras ODE derivative."""
    return (x - denoised) / append_dims(sigma, x.ndim)


class DenoiserModel(Protocol):
    def __call__(self, x: FloatTensor, t: FloatTensor, *args, **kwargs) -> FloatTensor: ...

class RefinedExpCallbackPayload(TypedDict):
    x: FloatTensor
    i: int
    sigma: FloatTensor
    sigma_hat: FloatTensor

class RefinedExpCallback(Protocol):
    def __call__(self, payload: RefinedExpCallbackPayload) -> None: ...

class NoiseSampler(Protocol):
    def __call__(self, x: FloatTensor) -> FloatTensor: ...

class StepOutput(NamedTuple):
    x_next: FloatTensor
    denoised: FloatTensor
    denoised2: FloatTensor
    vel: FloatTensor
    vel_2: FloatTensor

def _gamma(
    n: int,
) -> int:
    """
    https://en.wikipedia.org/wiki/Gamma_function
    for every positive integer n,
    Γ(n) = (n-1)!
    """
    return math.factorial(n-1)

def _incomplete_gamma(
    s: int,
    x: float,
    gamma_s: Optional[int] = None,
) -> float:
    """
    https://en.wikipedia.org/wiki/Incomplete_gamma_function#Special_values
    if s is a positive integer,
    Γ(s, x) = (s-1)!*∑{k=0..s-1}(x^k/k!)
    """
    if gamma_s is None:
        gamma_s = _gamma(s)

    sum_: float = 0
    # {k=0..s-1} inclusive
    for k in range(s):
        numerator: float = x**k
        denom: int = math.factorial(k)
        quotient: float = numerator/denom
        sum_ += quotient
    incomplete_gamma_: float = sum_ * math.exp(-x) * gamma_s
    return incomplete_gamma_

# by Katherine Crowson
def _phi_1(neg_h: FloatTensor):
    return torch.nan_to_num(torch.expm1(neg_h) / neg_h, nan=1.0)

# by Katherine Crowson
def _phi_2(neg_h: FloatTensor):
    return torch.nan_to_num((torch.expm1(neg_h) - neg_h) / neg_h**2, nan=0.5)

# by Katherine Crowson
def _phi_3(neg_h: FloatTensor):
    return torch.nan_to_num((torch.expm1(neg_h) - neg_h - neg_h**2 / 2) / neg_h**3, nan=1 / 6)

def _phi(
    neg_h: float,
    j: int,
):
    """
    For j={1,2,3}: you could alternatively use Kat's phi_1, phi_2, phi_3 which perform fewer steps

    Lemma 1
    https://arxiv.org/abs/2308.02157
    ϕj(-h) = 1/h^j*∫{0..h}(e^(τ-h)*(τ^(j-1))/((j-1)!)dτ)

    https://www.wolframalpha.com/input?i=integrate+e%5E%28%CF%84-h%29*%28%CF%84%5E%28j-1%29%2F%28j-1%29%21%29d%CF%84
    = 1/h^j*[(e^(-h)*(-τ)^(-j)*τ(j))/((j-1)!)]{0..h}
    https://www.wolframalpha.com/input?i=integrate+e%5E%28%CF%84-h%29*%28%CF%84%5E%28j-1%29%2F%28j-1%29%21%29d%CF%84+between+0+and+h
    = 1/h^j*((e^(-h)*(-h)^(-j)*h^j*(Γ(j)-Γ(j,-h)))/(j-1)!)
    = (e^(-h)*(-h)^(-j)*h^j*(Γ(j)-Γ(j,-h))/((j-1)!*h^j)
    = (e^(-h)*(-h)^(-j)*(Γ(j)-Γ(j,-h))/(j-1)!
    = (e^(-h)*(-h)^(-j)*(Γ(j)-Γ(j,-h))/Γ(j)
    = (e^(-h)*(-h)^(-j)*(1-Γ(j,-h)/Γ(j))

    requires j>0
    """
    assert j > 0
    gamma_: float = _gamma(j)
    incomp_gamma_: float = _incomplete_gamma(j, neg_h, gamma_s=gamma_)

    phi_: float = math.exp(neg_h) * neg_h**-j * (1-incomp_gamma_/gamma_)

    return phi_

class RESDECoeffsSecondOrder(NamedTuple):
    a2_1: float
    b1: float
    b2: float

def _de_second_order(
    h: float,
    c2: float,
    simple_phi_calc = False,
) -> RESDECoeffsSecondOrder:
    """
    Table 3
    https://arxiv.org/abs/2308.02157
    ϕi,j := ϕi,j(-h) = ϕi(-cj*h)
    a2_1 = c2ϕ1,2
         = c2ϕ1(-c2*h)
    b1 = ϕ1 - ϕ2/c2
    b2 = ϕ2/c2
    """
    if simple_phi_calc:
        # Kat computed simpler expressions for phi for cases j={1,2,3}
        a2_1: float = c2 * _phi_1(-c2*h)
        phi1: float = _phi_1(-h)
        phi2: float = _phi_2(-h)
    else:
        # I computed general solution instead.
        # they're close, but there are slight differences. not sure which would be more prone to numerical error.
        a2_1: float = c2 * _phi(j=1, neg_h=-c2*h)
        phi1: float = _phi(j=1, neg_h=-h)
        phi2: float = _phi(j=2, neg_h=-h)
    phi2_c2: float = phi2/c2
    b1: float = phi1 - phi2_c2
    b2: float = phi2_c2
    return RESDECoeffsSecondOrder(
        a2_1=a2_1,
        b1=b1,
        b2=b2,
    )

def _refined_exp_sosu_step(
    model: DenoiserModel,
    x: FloatTensor,
    sigma: FloatTensor,
    sigma_next: FloatTensor,
    c2 = 0.5,
    extra_args: Dict[str, Any] = {},
    pbar: Optional[tqdm] = None,
    simple_phi_calc = False,
    momentum = 0.0,
    vel = None,
    vel_2 = None,
    time = None,
) -> StepOutput:
    """
    Algorithm 1 "RES Second order Single Update Step with c2"
    https://arxiv.org/abs/2308.02157

    Parameters:
        model (`DenoiserModel`): a k-diffusion wrapped denoiser model (e.g. a subclass of DiscreteEpsDDPMDenoiser)
        x (`FloatTensor`): noised latents (or RGB I suppose), e.g. torch.randn((B, C, H, W)) * sigma[0]
        sigma (`FloatTensor`): timestep to denoise
        sigma_next (`FloatTensor`): timestep+1 to denoise
        c2 (`float`, *optional*, defaults to .5): partial step size for solving ODE. .5 = midpoint method
        extra_args (`Dict[str, Any]`, *optional*, defaults to `{}`): kwargs to pass to `model#__call__()`
        pbar (`tqdm`, *optional*, defaults to `None`): progress bar to update after each model call
        simple_phi_calc (`bool`, *optional*, defaults to `False`): True = calculate phi_i,j(-h) via simplified formulae specific to j={1,2}. False = use the general solution that works for any j. Mathematically equivalent, but there could be numeric differences.
    """

    def momentum_func(diff, velocity, timescale=1.0, offset=-momentum / 2.0):  # diff is the current diff, velocity is the previous diff
        if velocity is None:
            momentum_vel = diff
        else:
            momentum_vel = momentum * (timescale + offset) * velocity + (1 - momentum * (timescale + offset)) * diff
        return momentum_vel

    lam_next, lam = (s.log().neg() for s in (sigma_next, sigma))

    # type hints aren't strictly true regarding float vs FloatTensor.
    # everything gets promoted to `FloatTensor` after interacting with `sigma: FloatTensor`.
    # I will use float to indicate any variables which are scalars.
    h: float = lam_next - lam
    a2_1, b1, b2 = _de_second_order(h=h, c2=c2, simple_phi_calc=simple_phi_calc)

    denoised: FloatTensor = model(x, sigma.repeat(x.size(0)), **extra_args)
    # if pbar is not None:
    #     pbar.update(0.5)

    c2_h: float = c2*h

    diff_2 = momentum_func(a2_1*h*denoised, vel_2, time)
    vel_2 = diff_2
    x_2: FloatTensor = math.exp(-c2_h)*x + diff_2
    lam_2: float = lam + c2_h
    sigma_2: float = lam_2.neg().exp()

    denoised2: FloatTensor = model(x_2, sigma_2.repeat(x_2.size(0)), **extra_args)
    if pbar is not None:
        pbar.update()

    diff = momentum_func(h*(b1*denoised + b2*denoised2), vel, time)
    vel = diff

    x_next: FloatTensor = math.exp(-h)*x + diff

    return StepOutput(
        x_next=x_next,
        denoised=denoised,
        denoised2=denoised2,
        vel=vel,
        vel_2=vel_2,
    )


@no_grad()
def sample_refined_exp_s(
    model: DenoiserModel,
    x: FloatTensor,
    sigmas: FloatTensor,
    denoise_to_zero: bool = True,
    extra_args: Dict[str, Any] = {},
    callback: Optional[RefinedExpCallback] = None,
    disable: Optional[bool] = None,
    ita: FloatTensor = torch.zeros((1,)),
    c2 = .5,
    noise_sampler: NoiseSampler = torch.randn_like,
    simple_phi_calc = False,
    momentum = 0.0,
):
    """
    Refined Exponential Solver (S).
    Algorithm 2 "RES Single-Step Sampler" with Algorithm 1 second-order step
    https://arxiv.org/abs/2308.02157

    Parameters:
        model (`DenoiserModel`): a k-diffusion wrapped denoiser model (e.g. a subclass of DiscreteEpsDDPMDenoiser)
        x (`FloatTensor`): noised latents (or RGB I suppose), e.g. torch.randn((B, C, H, W)) * sigma[0]
        sigmas (`FloatTensor`): sigmas (ideally an exponential schedule!) e.g. get_sigmas_exponential(n=25, sigma_min=model.sigma_min, sigma_max=model.sigma_max)
        denoise_to_zero (`bool`, *optional*, defaults to `True`): whether to finish with a first-order step down to 0 (rather than stopping at sigma_min). True = fully denoise image. False = match Algorithm 2 in the paper
        extra_args (`Dict[str, Any]`, *optional*, defaults to `{}`): kwargs to pass to `model#__call__()`
        callback (`RefinedExpCallback`, *optional*, defaults to `None`): you can supply this callback to see the intermediate denoising results, e.g. to preview each step of the denoising process
        disable (`bool`, *optional*, defaults to `False`): whether to hide `tqdm`'s progress bar animation from being printed
        ita (`FloatTensor`, *optional*, defaults to 0.): degree of stochasticity, η, for each timestep. tensor shape must be broadcastable to 1-dimensional tensor with length `len(sigmas) if denoise_to_zero else len(sigmas)-1`. each element should be from 0 to 1.
          - if used: batch noise doesn't match non-batch
        c2 (`float`, *optional*, defaults to .5): partial step size for solving ODE. .5 = midpoint method
        noise_sampler (`NoiseSampler`, *optional*, defaults to `torch.randn_like`): method used for adding noise
        simple_phi_calc (`bool`, *optional*, defaults to `False`): True = calculate phi_i,j(-h) via simplified formulae specific to j={1,2}. False = use the general solution that works for any j. Mathematically equivalent, but there could be numeric differences.
    """
    # assert sigmas[-1] == 0
    device = x.device
    ita = ita.to(device)
    sigmas = sigmas.to(device)

    sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()

    vel, vel_2 = None, None
    with tqdm(disable=disable, total=len(sigmas)-(1 if denoise_to_zero else 2)) as pbar:
        for i, (sigma, sigma_next) in enumerate(pairwise(sigmas[:-1].split(1))):
            time = sigmas[i] / sigma_max
            if 'sigma' not in locals():
                sigma = sigmas[i]
            eps = torch.randn_like(x).float()
            sigma_hat = sigma * (1 + ita)
            x_hat = x + (sigma_hat ** 2 - sigma ** 2).sqrt() * eps
            x_next, denoised, denoised2, vel, vel_2 = _refined_exp_sosu_step(
                model,
                x_hat,
                sigma_hat,
                sigma_next,
                c2=c2,
                extra_args=extra_args,
                pbar=pbar,
                simple_phi_calc=simple_phi_calc,
                momentum=momentum,
                vel=vel,
                vel_2=vel_2,
                time=time,
            )
            if callback is not None:
                payload = RefinedExpCallbackPayload(
                    x=x,
                    i=i,
                    sigma=sigma,
                    sigma_hat=sigma_hat,
                    denoised=denoised,
                    denoised2=denoised2,
                )
                callback(payload)
            x = x_next
        if denoise_to_zero:
            eps = torch.randn_like(x).float()
            sigma_hat = sigma * (1 + ita)
            x_hat = x + (sigma_hat ** 2 - sigma ** 2).sqrt() * eps
            x_next: FloatTensor = model(x_hat, sigma.to(x_hat.device).repeat(x_hat.size(0)), **extra_args)
            pbar.update()

            if callback is not None:
                payload = RefinedExpCallbackPayload(
                    x=x,
                    i=i,
                    sigma=sigma,
                    sigma_hat=sigma_hat,
                    denoised=denoised,
                    denoised2=denoised2,
                )
                callback(payload)

            x = x_next
    return x

# Many thanks to Kat + Birch-San for this wonderful sampler implementation! https://github.com/Birch-san/sdxl-play/commits/res/
def sample_res_solver(model, x, sigmas, extra_args=None, callback=None, disable=None, noise_sampler_type="gaussian", noise_sampler=None, denoise_to_zero=True, simple_phi_calc=False, c2=0.5, ita=torch.Tensor((0.0,)), momentum=0.0):
    return sample_refined_exp_s(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, noise_sampler=noise_sampler, denoise_to_zero=denoise_to_zero, simple_phi_calc=simple_phi_calc, c2=c2, ita=ita, momentum=momentum)


## modified from ReForge, original implementation ComfyUI
@torch.no_grad()
def res_multistep(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1., noise_sampler=None, cfgpp=False):
    extra_args = {} if extra_args is None else extra_args
    seed = extra_args.get("seed", None)
    noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
    s_in = x.new_ones([x.shape[0]])
    sigma_fn = lambda t: t.neg().exp()
    t_fn = lambda sigma: sigma.log().neg()
    phi1_fn = lambda t: torch.expm1(t) / t
    phi2_fn = lambda t: (phi1_fn(t) - 1.0) / t
    old_denoised = None

    sigmas = sigmas.to(x.device)

    if cfgpp:
        model.need_last_noise_uncond = True

    for i in trange(len(sigmas) - 1, disable=disable):
        if s_churn > 0:
            gamma = min(s_churn / (len(sigmas) - 1), 2**0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.0
            sigma_hat = sigmas[i] * (gamma + 1)
        else:
            gamma = 0
            sigma_hat = sigmas[i]
        if gamma > 0:
            eps = torch.randn_like(x) * s_noise
            x = x + eps * (sigma_hat**2 - sigmas[i] ** 2) ** 0.5
        denoised = model(x, sigma_hat * s_in, **extra_args)

        if callback is not None:
            callback({"x": x, "i": i, "sigma": sigmas[i], "sigma_hat": sigma_hat, "denoised": denoised})
        if sigmas[i + 1] == 0 or old_denoised is None:
            # Euler method
            if cfgpp:
                d = model.last_noise_uncond
                x = denoised + d * sigmas[i + 1]
            else:
                d = to_d(x, sigma_hat, denoised)
                dt = sigmas[i + 1] - sigma_hat
                x = x + d * dt
        else:
            # Second order multistep method in https://arxiv.org/pdf/2308.02157
            t, t_next, t_prev = t_fn(sigmas[i]), t_fn(sigmas[i + 1]), t_fn(sigmas[i - 1])
            h = t_next - t
            c2 = (t_prev - t) / h
            phi1_val, phi2_val = phi1_fn(-h), phi2_fn(-h)
            b1 = torch.nan_to_num(phi1_val - 1.0 / c2 * phi2_val, nan=0.0)
            b2 = torch.nan_to_num(1.0 / c2 * phi2_val, nan=0.0)
            if cfgpp:
                d = model.last_noise_uncond
                x = denoised + d * sigma_hat

            x = (sigma_fn(t_next) / sigma_fn(t)) * x + h * (b1 * denoised + b2 * old_denoised)
        old_denoised = denoised
    return x

@torch.no_grad()
def sample_res_multistep(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1., noise_sampler=None):
    return res_multistep(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, s_churn=s_churn, s_tmin=s_tmin, s_tmax=s_tmax, s_noise=s_noise, noise_sampler=noise_sampler, cfgpp=False)

@torch.no_grad()
def sample_res_multistep_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1., noise_sampler=None):
    return res_multistep(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, s_churn=s_churn, s_tmin=s_tmin, s_tmax=s_tmax, s_noise=s_noise, noise_sampler=noise_sampler, cfgpp=True)
webUI_ExtraSchedulers/old/samplers_cfgpp.py ADDED
@@ -0,0 +1,258 @@
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
 
1
+ import torch
2
+ from tqdm.auto import trange
3
+
4
+ # copied from kdiffusion/sampling.py and utils.py
5
+ def default_noise_sampler(x):
6
+ return lambda sigma, sigma_next: torch.randn_like(x)
7
+ def get_ancestral_step(sigma_from, sigma_to, eta=1.):
8
+ """Calculates the noise level (sigma_down) to step down to and the amount
9
+ of noise to add (sigma_up) when doing an ancestral sampling step."""
10
+ if not eta:
11
+ return sigma_to, 0.
12
+ sigma_up = min(sigma_to, eta * (sigma_to ** 2 * (sigma_from ** 2 - sigma_to ** 2) / sigma_from ** 2) ** 0.5)
13
+ sigma_down = (sigma_to ** 2 - sigma_up ** 2) ** 0.5
14
+ return sigma_down, sigma_up
15
+ def append_dims(x, target_dims):
16
+ """Appends dimensions to the end of a tensor until it has target_dims dimensions."""
17
+ dims_to_append = target_dims - x.ndim
18
+ if dims_to_append < 0:
19
+ raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less')
20
+ return x[(...,) + (None,) * dims_to_append]
21
+ def to_d(x, sigma, denoised):
22
+ """Converts a denoiser output to a Karras ODE derivative."""
23
+ return (x - denoised) / append_dims(sigma, x.ndim)
24
+
25
+
26
+ @torch.no_grad()
27
+ def sample_euler_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
28
+ """Implements Algorithm 2 (Euler steps) from Karras et al. (2022)."""
29
+ extra_args = {} if extra_args is None else extra_args
30
+ model.need_last_noise_uncond = True
31
+ s_in = x.new_ones([x.shape[0]])
32
+
33
+ for i in trange(len(sigmas) - 1, disable=disable):
34
+ gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
35
+ eps = torch.randn_like(x) * s_noise
36
+ sigma_hat = sigmas[i] * (gamma + 1)
37
+ if gamma > 0:
38
+ x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
39
+ denoised = model(x, sigma_hat * s_in, **extra_args)
40
+ d = model.last_noise_uncond
41
+
42
+ if callback is not None:
43
+ callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
44
+
45
+ # Euler method
46
+ x = denoised + d * sigmas[i+1]
47
+ return x
48
+
49
+ class _Rescaler:
50
+ def __init__(self, model, x, mode, **extra_args):
51
+ self.model = model
52
+ self.x = x
53
+ self.mode = mode
54
+ self.extra_args = extra_args
55
+ self.init_latent, self.mask, self.nmask = model.init_latent, model.mask, model.nmask
56
+
57
+ def __enter__(self):
58
+ if self.init_latent is not None:
59
+ self.model.init_latent = torch.nn.functional.interpolate(input=self.init_latent, size=self.x.shape[2:4], mode=self.mode)
60
+ if self.mask is not None:
61
+ self.model.mask = torch.nn.functional.interpolate(input=self.mask.unsqueeze(0), size=self.x.shape[2:4], mode=self.mode).squeeze(0)
62
+ if self.nmask is not None:
63
+ self.model.nmask = torch.nn.functional.interpolate(input=self.nmask.unsqueeze(0), size=self.x.shape[2:4], mode=self.mode).squeeze(0)
64
+
65
+ return self
66
+
67
+ def __exit__(self, type, value, traceback):
68
+ del self.model.init_latent, self.model.mask, self.model.nmask
69
+ self.model.init_latent, self.model.mask, self.model.nmask = self.init_latent, self.mask, self.nmask
70
+
71
+ @torch.no_grad()
72
+ def dy_sampling_step_cfgpp(x, model, sigma_hat, **extra_args):
73
+ original_shape = x.shape
74
+ batch_size, channels, m, n = original_shape[0], original_shape[1], original_shape[2] // 2, original_shape[3] // 2
75
+ extra_row = x.shape[2] % 2 == 1
76
+ extra_col = x.shape[3] % 2 == 1
77
+
78
+ if extra_row:
79
+ extra_row_content = x[:, :, -1:, :]
80
+ x = x[:, :, :-1, :]
81
+ if extra_col:
82
+ extra_col_content = x[:, :, :, -1:]
83
+ x = x[:, :, :, :-1]
84
+
85
+ a_list = x.unfold(2, 2, 2).unfold(3, 2, 2).contiguous().view(batch_size, channels, m * n, 2, 2)
86
+ c = a_list[:, :, :, 1, 1].view(batch_size, channels, m, n)
87
+
88
+ with _Rescaler(model, c, 'nearest-exact', **extra_args) as rescaler:
89
+ denoised = model(c, sigma_hat * c.new_ones([c.shape[0]]), **rescaler.extra_args)
90
+ d = model.last_noise_uncond
91
+ c = denoised + d * sigma_hat
92
+
93
+ d_list = c.view(batch_size, channels, m * n, 1, 1)
94
+ a_list[:, :, :, 1, 1] = d_list[:, :, :, 0, 0]
95
+ x = a_list.view(batch_size, channels, m, n, 2, 2).permute(0, 1, 2, 4, 3, 5).reshape(batch_size, channels, 2 * m, 2 * n)
96
+
97
+ if extra_row or extra_col:
98
+ x_expanded = torch.zeros(original_shape, dtype=x.dtype, device=x.device)
99
+ x_expanded[:, :, :2 * m, :2 * n] = x
100
+ if extra_row:
101
+ x_expanded[:, :, -1:, :2 * n + 1] = extra_row_content
102
+ if extra_col:
103
+ x_expanded[:, :, :2 * m, -1:] = extra_col_content
104
+ if extra_row and extra_col:
105
+ x_expanded[:, :, -1:, -1:] = extra_col_content[:, :, -1:, :]
106
+ x = x_expanded
107
+
108
+ return x
109
+
110
+ @torch.no_grad()
+ def smea_sampling_step_cfgpp(x, model, sigma_hat, **extra_args):
+     m, n = x.shape[2], x.shape[3]
+     # denoise at 1.25x resolution, then scale back down
+     x = torch.nn.functional.interpolate(input=x, scale_factor=(1.25, 1.25), mode='nearest-exact')
+     with _Rescaler(model, x, 'nearest-exact', **extra_args) as rescaler:
+         denoised = model(x, sigma_hat * x.new_ones([x.shape[0]]), **rescaler.extra_args)
+         d = model.last_noise_uncond
+         x = denoised + d * sigma_hat
+     x = torch.nn.functional.interpolate(input=x, size=(m, n), mode='nearest-exact')
+     return x
+
+
+ @torch.no_grad()
+ def sample_euler_dy_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+     """CFG++ version of Euler Dy by Koishi-Star."""
+     extra_args = {} if extra_args is None else extra_args
+     model.need_last_noise_uncond = True
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
+         eps = torch.randn_like(x) * s_noise
+         sigma_hat = sigmas[i] * (gamma + 1)
+         if gamma > 0:
+             x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
+         denoised = model(x, sigma_hat * s_in, **extra_args)
+         d = model.last_noise_uncond
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+         # Euler method
+         x = denoised + d * sigmas[i + 1]
+
+         if sigmas[i + 1] > 0:
+             if i // 2 == 1:  # true for steps 2 and 3 only, as in the original Euler Dy
+                 x = dy_sampling_step_cfgpp(x, model, sigma_hat, **extra_args)
+
+     return x
+
+ @torch.no_grad()
+ def sample_euler_negative_dy_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+     """CFG++ version of Euler Negative Dy by Koishi-Star."""
+     extra_args = {} if extra_args is None else extra_args
+     model.need_last_noise_uncond = True
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
+         eps = torch.randn_like(x) * s_noise
+         sigma_hat = sigmas[i] * (gamma + 1)
+         if gamma > 0:
+             x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
+         denoised = model(x, sigma_hat * s_in, **extra_args)
+         d = model.last_noise_uncond
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+         # Euler method, with the step negated on steps 2 and 3
+         if sigmas[i + 1] > 0 and i // 2 == 1:
+             x = -denoised - d * sigmas[i + 1]
+         else:
+             x = denoised + d * sigmas[i + 1]
+
+         if sigmas[i + 1] > 0:
+             if i // 2 == 1:
+                 x = dy_sampling_step_cfgpp(x, model, sigma_hat, **extra_args)
+
+     return x
+
+ @torch.no_grad()
+ def sample_euler_negative_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+     """based on Euler Negative by Koishi-Star"""
+     extra_args = {} if extra_args is None else extra_args
+     model.need_last_noise_uncond = True
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
+         eps = torch.randn_like(x) * s_noise
+         sigma_hat = sigmas[i] * (gamma + 1)
+         if gamma > 0:
+             x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
+         denoised = model(x, sigma_hat * s_in, **extra_args)
+         d = model.last_noise_uncond
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+         # Euler method, with the step negated on steps 2 and 3
+         if sigmas[i + 1] > 0 and i // 2 == 1:
+             x = -denoised - d * sigmas[i + 1]
+         else:
+             x = denoised + d * sigmas[i + 1]
+     return x
+
+
+ @torch.no_grad()
+ def sample_euler_smea_dy_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+     """CFG++ version of Euler SMEA Dy by Koishi-Star."""
+     extra_args = {} if extra_args is None else extra_args
+     model.need_last_noise_uncond = True
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
+         eps = torch.randn_like(x) * s_noise
+         sigma_hat = sigmas[i] * (gamma + 1)
+         if gamma > 0:
+             x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
+         denoised = model(x, sigma_hat * s_in, **extra_args)
+         d = model.last_noise_uncond
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+         # Euler method
+         x = denoised + d * sigmas[i + 1]
+
+         if sigmas[i + 1] > 0:
+             if i + 1 // 2 == 1:  # ?? operator precedence makes this i == 1; why not i // 2 == 1 as in Euler Dy?
+                 x = dy_sampling_step_cfgpp(x, model, sigma_hat, **extra_args)
+             if i + 1 // 2 == 0:  # ?? likewise this is i == 0
+                 x = smea_sampling_step_cfgpp(x, model, sigma_hat, **extra_args)
+     return x
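+ # editor's sketch (hypothetical; this would change behavior vs. the code above):
+ # if a half-index test as in Euler Dy was intended, the guards would read
+ #     if (i + 1) // 2 == 1:   # steps 1 and 2
+ #         x = dy_sampling_step_cfgpp(x, model, sigma_hat, **extra_args)
+ #     if (i + 1) // 2 == 0:   # step 0
+ #         x = smea_sampling_step_cfgpp(x, model, sigma_hat, **extra_args)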
+
+ @torch.no_grad()
+ def sample_euler_ancestral_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None):
+     """Ancestral sampling with Euler method steps."""
+     extra_args = {} if extra_args is None else extra_args
+     noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
+     model.need_last_noise_uncond = True
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+         d = model.last_noise_uncond
+
+         sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1], eta=eta)
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+
+         # Euler method
+         x = denoised + d * sigma_down
+         if sigmas[i + 1] > 0:
+             x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * s_noise * sigma_up
+     return x
webUI_ExtraSchedulers/scripts/__pycache__/clybius_dpmpp_4m_sde.cpython-310.pyc ADDED
Binary file (3.79 kB)
 
webUI_ExtraSchedulers/scripts/__pycache__/extra_schedulers.cpython-310.pyc ADDED
Binary file (15.1 kB)
 
webUI_ExtraSchedulers/scripts/__pycache__/forgeClassic_cfgpp.cpython-310.pyc ADDED
Binary file (6.1 kB)
 
webUI_ExtraSchedulers/scripts/__pycache__/gradient_estimation.cpython-310.pyc ADDED
Binary file (2.04 kB)
 
webUI_ExtraSchedulers/scripts/__pycache__/res_solver.cpython-310.pyc ADDED
Binary file (14 kB)
 
webUI_ExtraSchedulers/scripts/__pycache__/samplers_cfgpp.cpython-310.pyc ADDED
Binary file (8.59 kB)
 
webUI_ExtraSchedulers/scripts/__pycache__/seeds.cpython-310.pyc ADDED
Binary file (2.98 kB)
 
webUI_ExtraSchedulers/scripts/__pycache__/simple_kes.cpython-310.pyc ADDED
Binary file (3.65 kB)
 
webUI_ExtraSchedulers/scripts/clybius_dpmpp_4m_sde.py ADDED
@@ -0,0 +1,124 @@
+ # by Clybius : github.com/Clybius/ComfyUI-Extra-Samplers/
+
+ import math
+
+ import torch
+ from tqdm.auto import trange
+
+
+ # copied from kdiffusion/sampling.py and utils.py
+ def default_noise_sampler(x):
+     return lambda sigma, sigma_next: torch.randn_like(x)
+
+
+ @torch.no_grad()
+ def sample_clyb_4m_sde_momentumized(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1.0, s_noise=1., noise_sampler=None, momentum=0.0):
+     """DPM-Solver++(3M) SDE, modified with an extra SDE, and momentumized in both the SDE and ODE(?). 'its a first' - Clybius 2023
+     The expression for d1 is derived from the extrapolation formula given in the paper "Diffusion Monte Carlo with stochastic Hamiltonians" by M. Foulkes, L. Mitas, R. Needs, and G. Rajagopal. The formula is:
+         d1 = d1_0 + (d1_0 - d1_1) * r2 / (r2 + r1) + ((d1_0 - d1_1) * r2 / (r2 + r1) - (d1_1 - d1_2) * r1 / (r0 + r1)) * r2 / ((r2 + r1) * (r0 + r1))
+     (if this is an incorrect citation, we blame Google's Bard and OpenAI's ChatGPT for it and NOT me :^) )
+
+     where d1_0, d1_1, and d1_2 are defined as follows:
+         d1_0 = (denoised - denoised_1) / r2
+         d1_1 = (denoised_1 - denoised_2) / r1
+         d1_2 = (denoised_2 - denoised_3) / r0
+
+     and r0, r1, and r2 are defined as follows:
+         r0 = h_3 / h_2
+         r1 = h_2 / h
+         r2 = h / h_1
+     (NB: the code below uses r0 = h_1 / h, r1 = h_2 / h, r2 = h_3 / h instead; see the commented-out variant in the loop.)
+     """
+
+     def momentum_func(diff, velocity, timescale=1.0, offset=-momentum / 2.0):  # diff is the current update, velocity the previous one
+         if velocity is None:
+             momentum_vel = diff
+         else:
+             momentum_vel = momentum * (timescale + offset) * velocity + (1 - momentum * (timescale + offset)) * diff
+         return momentum_vel
+
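+     # worked example (editor's note, hypothetical values): with momentum=0.5,
+     # timescale=1.0 and the default offset=-0.25, the blend factor is
+     # 0.5 * (1.0 - 0.25) = 0.375, so each step keeps 37.5% of the previous
+     # update direction and takes 62.5% from the new one; momentum=0.0 disables it.
+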
+     sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
+
+     noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
+
+     extra_args = {} if extra_args is None else extra_args
+     s_in = x.new_ones([x.shape[0]])
+
+     denoised_1, denoised_2, denoised_3 = None, None, None
+     h_1, h_2, h_3 = None, None, None
+     vel, vel_sde = None, None
+     for i in trange(len(sigmas) - 1, disable=disable):
+         time = sigmas[i] / sigma_max
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+
+         if sigmas[i + 1] == 0:
+             # Denoising step
+             x = denoised
+         else:
+             t, s = -sigmas[i].log(), -sigmas[i + 1].log()
+             h = s - t
+             h_eta = h * (eta + 1)
+             x_diff = momentum_func((-h_eta).expm1().neg() * denoised, vel, time)
+             vel = x_diff
+             x = torch.exp(-h_eta) * x + vel
+
+             if h_3 is not None:
+                 r0 = h_1 / h
+                 r1 = h_2 / h
+                 r2 = h_3 / h
+                 d1_0 = (denoised - denoised_1) / r0
+                 d1_1 = (denoised_1 - denoised_2) / r1
+                 d1_2 = (denoised_2 - denoised_3) / r2
+                 # d1 = d1_0 + (d1_0 - d1_1) * r0 / (r0 + r1) + ((d1_0 - d1_1) * r2 / (r1 + r2) - (d1_1 - d1_2) * r1 / (r0 + r1)) * r2 / ((r1 + r2) * (r0 + r1))
+                 # d2 = (d1_0 - d1_1) / (r0 + r1) + ((d1_0 - d1_1) * r2 / (r1 + r2) - (d1_1 - d1_2) * r1 / (r0 + r1)) / ((r1 + r2) * (r0 + r1))
+
+                 # r0 = h_3 / h_2
+                 # r1 = h_2 / h
+                 # r2 = h / h_1
+                 # d1_0 = (denoised - denoised_1) / r2
+                 # d1_1 = (denoised_1 - denoised_2) / r1
+                 # d1_2 = (denoised_2 - denoised_3) / r0
+                 d1 = d1_0 + (d1_0 - d1_1) * r2 / (r2 + r1) + ((d1_0 - d1_1) * r2 / (r2 + r1) - (d1_1 - d1_2) * r1 / (r0 + r1)) * r2 / ((r2 + r1) * (r0 + r1))
+                 d2 = (d1_0 - d1_1) / (r2 + r1) + ((d1_0 - d1_1) * r2 / (r2 + r1) - (d1_1 - d1_2) * r1 / (r0 + r1)) / ((r2 + r1) * (r0 + r1))
+                 phi_3 = h_eta.neg().expm1() / h_eta + 1
+                 phi_4 = phi_3 / h_eta - 0.5
+                 sde_diff = momentum_func(phi_3 * d1 - phi_4 * d2, vel_sde, time)
+                 vel_sde = sde_diff
+                 x = x + vel_sde
+             elif h_2 is not None:
+                 r0 = h_1 / h
+                 r1 = h_2 / h
+                 d1_0 = (denoised - denoised_1) / r0
+                 d1_1 = (denoised_1 - denoised_2) / r1
+                 d1 = d1_0 + (d1_0 - d1_1) * r0 / (r0 + r1)
+                 d2 = (d1_0 - d1_1) / (r0 + r1)
+                 phi_2 = h_eta.neg().expm1() / h_eta + 1
+                 phi_3 = phi_2 / h_eta - 0.5
+                 sde_diff = momentum_func(phi_2 * d1 - phi_3 * d2, vel_sde, time)
+                 vel_sde = sde_diff
+                 x = x + vel_sde
+             elif h_1 is not None:
+                 r = h_1 / h
+                 d = (denoised - denoised_1) / r
+                 phi_2 = h_eta.neg().expm1() / h_eta + 1
+                 sde_diff = momentum_func(phi_2 * d, vel_sde, time)
+                 vel_sde = sde_diff
+                 x = x + vel_sde
+
+             if eta:
+                 x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * sigmas[i + 1] * (-2 * h * eta).expm1().neg().sqrt() * s_noise
+
+         denoised_1, denoised_2, denoised_3 = denoised, denoised_1, denoised_2
+         h_1, h_2, h_3 = h, h_1, h_2
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+
+     return x
+ return x
webUI_ExtraSchedulers/scripts/extra_schedulers.py ADDED
+ import gradio
+ import math, numpy
+ import torch
+ from modules import scripts, shared
+
+ # Python 3.10+, PyTorch 2.1+, NumPy 1.24+
+ def get_sigmas_oss(n, sigma_min, sigma_max, device):
+     """
+     Optimal Steps schedule (OSS).
+     Fixed:
+     - No more references to the nonexistent variable `sigmas`.
+     - Always returns a float32 tensor on the given `device`.
+     - Branch order by model type: SD3/Flux → SDXL → generic (SD1/2).
+     Note: the presets are tuned in absolute units per model family,
+     so sigma_min/sigma_max are intentionally unused here.
+     """
+     def loglinear_interp(values: list[float], num_steps: int) -> numpy.ndarray:
+         """Log-linear interpolation of a decreasing sequence to num_steps values."""
+         arr = numpy.asarray(values, dtype=float)
+         xs = numpy.linspace(0.0, 1.0, arr.shape[0])
+         ys = numpy.log(arr[::-1])  # reverse to increasing order, then take logs
+         new_xs = numpy.linspace(0.0, 1.0, num_steps)
+         new_ys = numpy.interp(new_xs, xs, ys)  # interpolate in log space
+         out = numpy.exp(new_ys)[::-1].copy()  # back to linear space, decreasing again
+         return out
+
+     m = shared.sd_model
+
+     # 1) flow family (SD3/Flux): normalized preset, roughly [1..0]
+     if getattr(m, "is_sd3", False) or getattr(m, "is_flux", False):
+         base_sigmas = [0.9968, 0.9886, 0.9819, 0.975, 0.966, 0.9471, 0.9158, 0.8287, 0.5512, 0.2808, 0.001]
+
+     # 2) SDXL: its own AYS11 preset
+     elif getattr(m, "is_sdxl", False):
+         base_sigmas = [14.615, 6.315, 3.771, 2.181, 1.342, 0.862, 0.555, 0.380, 0.234, 0.113, 0.029]
+
+     # 3) SD1.x/SD2.x and everything else: generic AYS11
+     else:
+         base_sigmas = [14.615, 6.475, 3.861, 2.697, 1.886, 1.396, 0.963, 0.652, 0.399, 0.152, 0.029]
+
+     # fit the length to n and append a terminal 0.0 (n+1 values total)
+     if n != len(base_sigmas):
+         sigmas_np = loglinear_interp(base_sigmas, n)
+         sigmas_np = numpy.append(sigmas_np, [0.0])
+     else:
+         sigmas_np = numpy.asarray(base_sigmas + [0.0], dtype=float)
+
+     # single return path: float32 on the given device
+     return torch.as_tensor(sigmas_np, dtype=torch.float32, device=device)
+
+
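+ # quick sanity check (editor's note, not called anywhere): because the
+ # interpolation happens in log space, a geometric sequence stays geometric:
+ #     loglinear_interp([4.0, 1.0, 0.25], 5)  ->  [4.0, 2.0, 1.0, 0.5, 0.25]
+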
+ def cosine_scheduler(n, sigma_min, sigma_max, device):
+     sigmas = torch.zeros(n, device=device)
+     if n == 1:
+         sigmas[0] = sigma_max ** 0.5
+     else:
+         for x in range(n):
+             p = x / (n - 1)
+             C = sigma_min + 0.5 * (sigma_max - sigma_min) * (1 - math.cos(math.pi * (1 - p ** 0.5)))
+             sigmas[x] = C
+     return torch.cat([sigmas, sigmas.new_zeros([1])])
+
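+ # endpoint check (editor's note): at p=0 the cosine term is 1 - cos(pi) = 2,
+ # giving sigma_max; at p=1 it is 1 - cos(0) = 0, giving sigma_min. The sqrt on p
+ # is what makes the initial drop from sigma_max relatively slow.
+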
+ def cosexpblend_boost_scheduler(n, sigma_min, sigma_max, device):
+     sigmas = []
+     if n == 1:
+         sigmas.append(sigma_max ** 0.5)
+     else:
+         K = (sigma_min / sigma_max) ** (1 / (n - 1))
+         E = sigma_max
+         # mid-schedule "detail" boost: up to +27% around the middle steps
+         detail = numpy.interp(numpy.linspace(0, 1, n), numpy.linspace(0, 1, 5), [1.0, 1.0, 1.27, 1.0, 1.0])
+         for x in range(n):
+             p = x / (n - 1)
+             C = sigma_min + 0.5 * (sigma_max - sigma_min) * (1 - math.cos(math.pi * (1 - p ** 0.5)))
+             sigmas.append(detail[x] * (C + p * (E - C)))
+             E *= K
+
+     sigmas += [0.0]
+     return torch.FloatTensor(sigmas).to(device)
+
+ def cosexpblend_scheduler(n, sigma_min, sigma_max, device):
+     sigmas = []
+     if n == 1:
+         sigmas.append(sigma_max ** 0.5)
+     else:
+         # blend from cosine at the start to exponential (long tail) at the end
+         K = (sigma_min / sigma_max) ** (1 / (n - 1))
+         E = sigma_max
+         for x in range(n):
+             p = x / (n - 1)
+             C = sigma_min + 0.5 * (sigma_max - sigma_min) * (1 - math.cos(math.pi * (1 - p ** 0.5)))
+             sigmas.append(C + p * (E - C))
+             E *= K
+     sigmas += [0.0]
+     return torch.FloatTensor(sigmas).to(device)
+
+ ## phi scheduler modified from original by @extraltodeus
+ def phi_scheduler(n, sigma_min, sigma_max, device):
+     sigmas = torch.zeros(n, device=device)
+     if n == 1:
+         sigmas[0] = sigma_max ** 0.5
+     else:
+         phi = (1 + 5 ** 0.5) / 2
+         for x in range(n):
+             sigmas[x] = sigma_min + (sigma_max - sigma_min) * ((1 - x / (n - 1)) ** (phi * phi))
+     return torch.cat([sigmas, sigmas.new_zeros([1])])
+
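+ # editor's note: phi*phi = phi + 1 ≈ 2.618 (golden-ratio identity), so this is a
+ # power-curve schedule with exponent ≈ 2.618 between sigma_max and sigma_min.
+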
+ def get_sigmas_vp(n, sigma_min, sigma_max, device='cpu'):
+     """Constructs a continuous VP noise schedule."""
+
+     beta_d = 19.9
+     beta_min = 0.1
+     eps_s = 1e-3
+
+     t = torch.linspace(1, eps_s, n, device=device)
+     sigmas = torch.sqrt(torch.exp(beta_d * t ** 2 / 2 + beta_min * t) - 1)
+     return torch.cat([sigmas, sigmas.new_zeros([1])])
+
+ def get_sigmas_laplace(n, sigma_min, sigma_max, device='cpu'):
+     """Constructs the noise schedule proposed by Tiankai et al. (2024)."""
+     mu = 0.
+     beta = 0.5
+     epsilon = 1e-5  # avoid log(0)
+     x = torch.linspace(0, 1, n, device=device)
+     clamp = lambda x: torch.clamp(x, min=sigma_min, max=sigma_max)
+     lmb = mu - beta * torch.sign(0.5 - x) * torch.log(1 - 2 * torch.abs(0.5 - x) + epsilon)
+     sigmas = clamp(torch.exp(lmb))
+     return torch.cat([sigmas, sigmas.new_zeros([1])])
+
+
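+ # editor's note: lmb is (up to the sign convention on x) the quantile function of
+ # a Laplace(mu, beta) distribution, so the sigmas are log-Laplace spaced and then
+ # clamped to [sigma_min, sigma_max]; epsilon avoids log(0) at the endpoints.
+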
+ def get_sigmas_sinusoidal_sf(n, sigma_min, sigma_max, device='cpu'):
+     """Constructs a sine-based noise schedule, sharpened by a power-of-sf scale factor."""
+     sf = 3.5
+     x = torch.linspace(0, 1, n, device=device)
+     sigmas = (sigma_min + (sigma_max - sigma_min) * (1 - torch.sin(torch.pi / 2 * x))) / sigma_max
+     sigmas = sigmas ** sf
+     sigmas = sigmas * sigma_max
+     return torch.cat([sigmas, sigmas.new_zeros([1])])
+
+ def get_sigmas_invcosinusoidal_sf(n, sigma_min, sigma_max, device='cpu'):
+     """Constructs an inverted cosine noise schedule, sharpened by a power-of-sf scale factor."""
+     sf = 3.5
+     x = torch.linspace(0, 1, n, device=device)
+     sigmas = (sigma_min + (sigma_max - sigma_min) * (0.5 * (torch.cos(x * math.pi) + 1))) / sigma_max
+     sigmas = sigmas ** sf
+     sigmas = sigmas * sigma_max
+     return torch.cat([sigmas, sigmas.new_zeros([1])])
+
+ def get_sigmas_react_cosinusoidal_dynsf(n, sigma_min, sigma_max, device='cpu'):
+     """Constructs a cosine noise schedule whose scale-factor exponent grows over the run."""
+     sf = 2.15
+     x = torch.linspace(0, 1, n, device=device)
+     sigmas = (sigma_min + (sigma_max - sigma_min) * (torch.cos(x * (torch.pi / 2)))) / sigma_max
+     sigmas = sigmas ** (sf * x)  # written as sf*(n*x/n) in the original, which reduces to sf*x
+     sigmas = sigmas * sigma_max
+     return torch.cat([sigmas, sigmas.new_zeros([1])])
+
+ def get_sigmas_karras_dynamic(n, sigma_min, sigma_max, device='cpu'):
+     """Karras et al. (2022) schedule with a cosine-modulated rho exponent."""
+     rho = 7.
+     ramp = torch.linspace(0, 1, n, device=device)
+     min_inv_rho = sigma_min ** (1 / rho)
+     max_inv_rho = sigma_max ** (1 / rho)
+     sigmas = torch.zeros_like(ramp)
+     for i in range(n):
+         sigmas[i] = (max_inv_rho + ramp[i] * (min_inv_rho - max_inv_rho)) ** (math.cos(i * math.tau / n) * 2 + rho)
+     return torch.cat([sigmas, sigmas.new_zeros([1])])
+
+ def get_sigmas_karras_exponential_decay(n, sigma_min, sigma_max, device='cpu'):
+     """Karras et al. (2022) schedule with a rho exponent that decays over the run."""
+     rho = 7.
+     ramp = torch.linspace(0, 1, n, device=device)
+     min_inv_rho = sigma_min ** (1 / rho)
+     max_inv_rho = sigma_max ** (1 / rho)
+     sigmas = torch.zeros_like(ramp)
+     for i in range(n):
+         sigmas[i] = (max_inv_rho + ramp[i] * (min_inv_rho - max_inv_rho)) ** (rho - (3 * i / n))
+     return torch.cat([sigmas, sigmas.new_zeros([1])])
+
+ def get_sigmas_karras_exponential_increment(n, sigma_min, sigma_max, device='cpu'):
+     """Karras et al. (2022) schedule with a rho exponent that grows over the run."""
+     rho = 7.
+     ramp = torch.linspace(0, 1, n, device=device)
+     min_inv_rho = sigma_min ** (1 / rho)
+     max_inv_rho = sigma_max ** (1 / rho)
+     sigmas = torch.zeros_like(ramp)
+     for i in range(n):
+         sigmas[i] = (max_inv_rho + ramp[i] * (min_inv_rho - max_inv_rho)) ** (rho + 3 * i / n)
+     return torch.cat([sigmas, sigmas.new_zeros([1])])
+
+ def custom_scheduler(n, sigma_min, sigma_max, device):
+     # crude guard against abusive input to the eval() below: fall back to linear
+     if 'import' in ExtraScheduler.customSigmas:
+         sigmas = torch.linspace(sigma_max, sigma_min, n, device=device)
+     elif 'eval' in ExtraScheduler.customSigmas:
+         sigmas = torch.linspace(sigma_max, sigma_min, n, device=device)
+     elif 'scripts' in ExtraScheduler.customSigmas:
+         sigmas = torch.linspace(sigma_max, sigma_min, n, device=device)
+
+     elif ExtraScheduler.customSigmas[0] == '[' and ExtraScheduler.customSigmas[-1] == ']':
+         sigmasList = [float(x) for x in ExtraScheduler.customSigmas.strip('[]').split(',')]
+
+         if sigmasList[0] == 1.0 and sigmasList[-1] == 0.0:
+             # normalized list: rescale to [sigma_min, sigma_max]
+             for x in range(len(sigmasList)):
+                 sigmasList[x] *= (sigma_max - sigma_min)
+                 sigmasList[x] += sigma_min
+         elif sigmasList[-1] == 0.0:
+             # don't interpolate to number of steps, use as is
+             return torch.tensor(sigmasList, device=device)
+
+         xs = numpy.linspace(0, 1, len(sigmasList))
+         ys = numpy.log(sigmasList[::-1])
+
+         new_xs = numpy.linspace(0, 1, n)
+         new_ys = numpy.interp(new_xs, xs, ys)
+
+         interpolated_ys = numpy.exp(new_ys)[::-1].copy()
+         sigmas = torch.tensor(interpolated_ys, device=device)
+     else:
+         sigmas = torch.linspace(sigma_max, sigma_min, n, device=device)
+         detail = numpy.interp(numpy.linspace(0, 1, n), numpy.linspace(0, 1, 5), [1.0, 1.0, 1.25, 1.0, 1.0])
+
+         phi = (1 + 5 ** 0.5) / 2
+         pi = math.pi
+
+         # evaluate the user expression once per step; m, M, n, s, x, d, phi, pi are in scope
+         s = 0
+         while s < n:
+             x = s / (n - 1)
+             M = sigma_max
+             m = sigma_min
+             d = detail[s]
+
+             sigmas[s] = eval(ExtraScheduler.customSigmas)
+             s += 1
+     return torch.cat([sigmas, sigmas.new_zeros([1])])
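+ # worked example (editor's note): the default expression 'm + (M-m)*(1-x)**3'
+ # with n=4, m=0.03, M=14.6 gives approximately [14.6, 4.35, 0.57, 0.03]
+ # before the terminal 0.0 is appended.
+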
+
+ from scripts.simple_kes import get_sigmas_simple_kes
+
+ from scripts.res_solver import sample_res_solver, sample_res_multistep, sample_res_multistep_cfgpp
+ from scripts.clybius_dpmpp_4m_sde import sample_clyb_4m_sde_momentumized
+ from scripts.gradient_estimation import sample_gradient_e, sample_gradient_e_cfgpp
+ from scripts.seeds import sample_seeds_2, sample_seeds_3
+
+ from modules import sd_samplers_common, sd_samplers
+ from modules.sd_samplers_kdiffusion import sampler_extra_params, KDiffusionSampler
+
+ class ExtraScheduler(scripts.Script):
+     sorting_priority = 99
+
+     installed = False
+     customSigmas = 'm + (M-m)*(1-x)**3'
+
+     def title(self):
+         return "Extra Schedulers (custom)"
+
+     def show(self, is_img2img):
+         # make this extension visible in both the txt2img and img2img tabs
+         if ExtraScheduler.installed:
+             return scripts.AlwaysVisible
+         else:
+             return False
+
+     def ui(self, *args, **kwargs):
+         #with gradio.Accordion(open=False, label=self.title(), visible=ExtraScheduler.installed):
+         custom_sigmas = gradio.Textbox(value=ExtraScheduler.customSigmas, label='Extra Schedulers: custom function / list [n0, n1, n2, ...]', lines=1.01)
+
+         self.infotext_fields = [
+             (custom_sigmas, "es_custom"),
+         ]
+
+         return [custom_sigmas]
+
+     def process(self, params, *script_args, **kwargs):
+         if params.scheduler == 'custom':
+             custom_sigmas = script_args[0]
+             ExtraScheduler.customSigmas = custom_sigmas
+             params.extra_generation_params.update(dict(es_custom=ExtraScheduler.customSigmas))
+         elif params.scheduler == 'Simple KES':
+             params.extra_generation_params.update(dict(
+                 es_KES_start_blend = getattr(shared.opts, 'kes_start_blend'),
+                 es_KES_end_blend = getattr(shared.opts, 'kes_end_blend'),
+                 es_KES_sharpness = getattr(shared.opts, 'kes_sharpness'),
+                 es_KES_initial_step_size = getattr(shared.opts, 'kes_initial_step_size'),
+                 es_KES_final_step_size = getattr(shared.opts, 'kes_final_step_size'),
+                 es_KES_initial_noise = getattr(shared.opts, 'kes_initial_noise'),
+                 es_KES_final_noise = getattr(shared.opts, 'kes_final_noise'),
+                 es_KES_smooth_blend = getattr(shared.opts, 'kes_smooth_blend'),
+                 es_KES_step_size_factor = getattr(shared.opts, 'kes_step_size_factor'),
+                 es_KES_noise_scale = getattr(shared.opts, 'kes_noise_scale'),
+             ))
+         return
+
+ try:
+     import modules.sd_schedulers as schedulers
+
+     # drop any already-registered schedulers with the same name/label,
+     # so a reload does not create duplicates
+     def _drop(name=None, label=None):
+         schedulers.schedulers = [
+             s for s in getattr(schedulers, "schedulers", [])
+             if (name is not None and getattr(s, "name", None) == name) is False
+             and (label is not None and getattr(s, "label", None) == label) is False
+         ]
+
+     _drop(name="optimal_ss"); _drop(label="Optimal Steps")
+     _drop(name="custom"); _drop(label="custom")
+
+     print("Extension: Extra Schedulers: adding new schedulers")
+     CosineScheduler = schedulers.Scheduler("cosine", "Cosine", cosine_scheduler)
+     CosExpScheduler = schedulers.Scheduler("cosexp", "CosineExponential blend", cosexpblend_scheduler)
+     CosExpBScheduler = schedulers.Scheduler("cosprev", "CosExp blend boost", cosexpblend_boost_scheduler)
+     PhiScheduler = schedulers.Scheduler("phi", "Phi", phi_scheduler)
+     VPScheduler = schedulers.Scheduler("vp", "VP", get_sigmas_vp)
+     LaplaceScheduler = schedulers.Scheduler("laplace", "Laplace", get_sigmas_laplace)
+
+     SineScheduler = schedulers.Scheduler("sine_sc", "Sine scaled", get_sigmas_sinusoidal_sf)
+     InvCosScheduler = schedulers.Scheduler("inv_cos_sc", "Inverse Cosine scaled", get_sigmas_invcosinusoidal_sf)
+     CosDynScheduler = schedulers.Scheduler("cosine_dyn", "Cosine Dynamic", get_sigmas_react_cosinusoidal_dynsf)
+     KarrasDynScheduler = schedulers.Scheduler("karras_dyn", "Karras Dynamic", get_sigmas_karras_dynamic)
+     KarrasExpDecayScheduler = schedulers.Scheduler("karras_exp_d", "Karras Exp Decay", get_sigmas_karras_exponential_decay)
+     KarrasExpIncScheduler = schedulers.Scheduler("karras_exp_i", "Karras Exp Inc", get_sigmas_karras_exponential_increment)
+
+     SimpleKEScheduler = schedulers.Scheduler("simple_kes", "Simple KES", get_sigmas_simple_kes)
+     OSSFlowScheduler = schedulers.Scheduler("optimal_ss", "Optimal Steps", get_sigmas_oss)
+     CustomScheduler = schedulers.Scheduler("custom", "custom", custom_scheduler)
+
+     schedulers.schedulers.extend([
+         CosineScheduler, CosExpScheduler, CosExpBScheduler, PhiScheduler,
+         VPScheduler, LaplaceScheduler, SineScheduler, InvCosScheduler,
+         CosDynScheduler, KarrasDynScheduler, KarrasExpDecayScheduler,
+         KarrasExpIncScheduler, SimpleKEScheduler, OSSFlowScheduler, CustomScheduler,
+     ])
+
+     schedulers.schedulers_map = {
+         **{x.name: x for x in schedulers.schedulers},
+         **{x.label: x for x in schedulers.schedulers}
+     }
+
+     # CFG++ method is Forge only, not working in A1111
+     from scripts.samplers_cfgpp import (
+         sample_euler_ancestral_cfgpp, sample_euler_cfgpp, sample_euler_dy_cfgpp,
+         sample_euler_smea_dy_cfgpp, sample_euler_negative_cfgpp, sample_euler_negative_dy_cfgpp
+     )
+     from scripts.forgeClassic_cfgpp import (
+         sample_dpmpp_sde_cfgpp, sample_dpmpp_2m_cfgpp,
+         sample_dpmpp_2m_sde_cfgpp, sample_dpmpp_3m_sde_cfgpp,
+         sample_dpmpp_2s_ancestral_cfgpp
+     )
+
+     samplers_cfgpp = [
+         ("Euler a CFG++", sample_euler_ancestral_cfgpp, ["k_euler_a_cfgpp"], {"uses_ensd": True}),
+         ("Euler CFG++", sample_euler_cfgpp, ["k_euler_cfgpp"], {}),
+         ("Euler Dy CFG++", sample_euler_dy_cfgpp, ["k_euler_dy_cfgpp"], {}),
+         ("Euler SMEA Dy CFG++", sample_euler_smea_dy_cfgpp, ["k_euler_smea_dy_cfgpp"], {}),
+         ("Euler Negative CFG++", sample_euler_negative_cfgpp, ["k_euler_negative_cfgpp"], {}),
+         ("Euler Negative Dy CFG++", sample_euler_negative_dy_cfgpp, ["k_euler_negative_dy_cfgpp"], {}),
+         ("RES multistep CFG++", sample_res_multistep_cfgpp, ["k_res_multi_cfgpp"], {}),
+         ("Gradient Estimation CFG++", sample_gradient_e_cfgpp, ["k_grad_est_cfgpp"], {}),
+         # ("GE/DPM2 CFG++", sample_ge_dpm2_cfgpp, ["k_ge_dpm_cfgpp"], {}),
+         ("DPM++ SDE CFG++", sample_dpmpp_sde_cfgpp, ["k_dpmpp_sde_cfgpp"], {"brownian_noise": True, "second_order": True}),
+         ("DPM++ 2M CFG++", sample_dpmpp_2m_cfgpp, ["k_dpmpp_2m_cfgpp"], {}),
+         ("DPM++ 2M SDE CFG++", sample_dpmpp_2m_sde_cfgpp, ["k_dpmpp_2m_sde_cfgpp"], {"brownian_noise": True}),
+         ("DPM++ 3M SDE CFG++", sample_dpmpp_3m_sde_cfgpp, ["k_dpmpp_3m_sde_cfgpp"], {"brownian_noise": True, 'discard_next_to_last_sigma': True}),
+         ("DPM++ 2S a CFG++", sample_dpmpp_2s_ancestral_cfgpp, ["k_dpmpp_2s_a_cfgpp"], {"uses_ensd": True, "second_order": True}),
+     ]
+
+     samplers_data_cfgpp = [
+         sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options)
+         for label, funcname, aliases, options in samplers_cfgpp
+         if callable(funcname)
+     ]
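+     # note (editor): funcname=funcname pins the loop variable at definition time;
+     # a bare `lambda model: KDiffusionSampler(funcname, model)` would make every
+     # entry construct the last sampler in the list.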
+
+     sampler_extra_params['sample_euler_cfgpp'] = ['s_churn', 's_tmin', 's_tmax', 's_noise']
+     sampler_extra_params['sample_euler_negative_cfgpp'] = ['s_churn', 's_tmin', 's_tmax', 's_noise']
+     sampler_extra_params['sample_euler_dy_cfgpp'] = ['s_churn', 's_tmin', 's_tmax', 's_noise']
+     sampler_extra_params['sample_euler_negative_dy_cfgpp'] = ['s_churn', 's_tmin', 's_tmax', 's_noise']
+     sampler_extra_params['sample_euler_smea_dy_cfgpp'] = ['s_churn', 's_tmin', 's_tmax', 's_noise']
+
+     sampler_extra_params['sample_dpmpp_sde_cfgpp'] = ['s_noise']
+     sampler_extra_params['sample_dpmpp_2m_sde_cfgpp'] = ['s_noise']
+     sampler_extra_params['sample_dpmpp_3m_sde_cfgpp'] = ['s_noise']
+     sampler_extra_params['sample_dpmpp_2s_ancestral_cfgpp'] = ['s_noise']
+
+     sd_samplers.all_samplers.extend(samplers_data_cfgpp)
+
+     samplers_extra = [
+         ("RES multistep", sample_res_multistep, ["k_res_multi"], {}),
+         ("Refined Exponential Solver", sample_res_solver, ["k_res"], {}),
+         ("DPM++ 4M SDE", sample_clyb_4m_sde_momentumized, ["k_dpmpp_4m_sde"], {}),
+         ("Gradient Estimation", sample_gradient_e, ["k_grad_est"], {}),
+         ("SEEDS-2", sample_seeds_2, ["k_seeds2"], {}),
+         ("SEEDS-3", sample_seeds_3, ["k_seeds3"], {}),
+     ]
+     sampler_extra_params['sample_seeds_2'] = ['s_noise']
+     sampler_extra_params['sample_seeds_3'] = ['s_noise']
+
+     samplers_data_extra = [
+         sd_samplers_common.SamplerData(label, lambda model, funcname=funcname: KDiffusionSampler(funcname, model), aliases, options)
+         for label, funcname, aliases, options in samplers_extra
+         if callable(funcname)
+     ]
+
+     sd_samplers.all_samplers.extend(samplers_data_extra)
+     sd_samplers.all_samplers_map = {x.name: x for x in sd_samplers.all_samplers}
+     sd_samplers.set_samplers()
+
+     ExtraScheduler.installed = True
+ except:
+     print("Extension: Extra Schedulers: unsupported webUI")
+     ExtraScheduler.installed = False
webUI_ExtraSchedulers/scripts/forgeClassic_cfgpp.py ADDED
@@ -0,0 +1,269 @@
+ # first 3 lifted from ForgeClassic (https://github.com/Haoming02/sd-webui-forge-classic/)
+ # 4th is a simple adaptation of 3M to 2M
+ # 5th lifted from ReForge (https://github.com/Panchovix/stable-diffusion-webui-reForge)
+ # all modified to work with Forge2
+
+ import torch
+ from tqdm.auto import trange
+ from k_diffusion.sampling import (
+     default_noise_sampler,
+     BrownianTreeNoiseSampler,
+     get_ancestral_step,
+     to_d,
+ )
+
+
+ # sigma = exp(-t), t = -log(sigma)
+ def _sigma_fn(t):
+     return t.neg().exp()
+
+
+ def _t_fn(sigma):
+     return sigma.log().neg()
+
+
+ @torch.no_grad()
+ def sample_dpmpp_sde_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, noise_sampler=None):
+     eta = 1.0
+     s_noise = 1.0
+     r = 0.5
+
+     if len(sigmas) <= 1:
+         return x
+
+     # normalize extra_args before reading from it (a None default would crash on .get)
+     extra_args = {} if extra_args is None else extra_args
+     seed = extra_args.get("seed", None)
+     sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
+     noise_sampler = BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=seed) if noise_sampler is None else noise_sampler
+
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+
+         if sigmas[i + 1] == 0:
+             d = model.last_noise_uncond
+             x = denoised + d * sigmas[i + 1]
+         else:
+             t, t_next = _t_fn(sigmas[i]), _t_fn(sigmas[i + 1])
+             h = t_next - t
+             s = t + h * r
+             fac = 1 / (2 * r)
+
+             sd, su = get_ancestral_step(_sigma_fn(t), _sigma_fn(s), eta)
+             s_ = _t_fn(sd)
+             x_2 = (_sigma_fn(s_) / _sigma_fn(t)) * x - (t - s_).expm1() * denoised
+             x_2 = x_2 + noise_sampler(_sigma_fn(t), _sigma_fn(s)) * s_noise * su
+             denoised_2 = model(x_2, _sigma_fn(s) * s_in, **extra_args)
+             u = x_2 - model.last_noise_uncond * _sigma_fn(s) * s_in[:1]
+
+             sd, su = get_ancestral_step(_sigma_fn(t), _sigma_fn(t_next), eta)
+             denoised_d = (1 - fac) * u + fac * u  # NB: algebraically this reduces to u
+             x = denoised_2 + to_d(x, sigmas[i], denoised_d) * sd
+             x = x + noise_sampler(_sigma_fn(t), _sigma_fn(t_next)) * s_noise * su
+     return x
+
+
+ @torch.no_grad()
+ def sample_dpmpp_2m_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None):
+     extra_args = {} if extra_args is None else extra_args
+     s_in = x.new_ones([x.shape[0]])
+
+     old_uncond_denoised = None
+     uncond_denoised = None
+
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+         uncond_denoised = x - model.last_noise_uncond * sigmas[i] * s_in[:1]
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+         t, t_next = _t_fn(sigmas[i]), _t_fn(sigmas[i + 1])
+         h = t_next - t
+         if old_uncond_denoised is None or sigmas[i + 1] == 0:
+             denoised_mix = -torch.exp(-h) * uncond_denoised
+         else:
+             h_last = t - _t_fn(sigmas[i - 1])
+             r = h_last / h
+             denoised_mix = -torch.exp(-h) * uncond_denoised - torch.expm1(-h) * (1 / (2 * r)) * (denoised - old_uncond_denoised)
+         x = denoised + denoised_mix + torch.exp(-h) * x
+         old_uncond_denoised = uncond_denoised
+     return x
+
+
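+ # note (editor, reading of the code above): the 1/(2r) term is the usual
+ # DPM-Solver++(2M) multistep correction weight; in this CFG++ variant it is
+ # applied to (denoised - old_uncond_denoised) rather than to the difference of
+ # two conditional predictions.
+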
+ @torch.no_grad()
+ def sample_dpmpp_3m_sde_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=None, s_noise=None, noise_sampler=None):
+     eta = 1.0 if eta is None else eta
+     s_noise = 1.0 if s_noise is None else s_noise
+
+     if len(sigmas) <= 1:
+         return x
+
+     # normalize extra_args before reading from it (a None default would crash on .get)
+     extra_args = {} if extra_args is None else extra_args
+     seed = extra_args.get("seed", None)
+     sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
+     noise_sampler = BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=seed) if noise_sampler is None else noise_sampler
+     s_in = x.new_ones([x.shape[0]])
+
+     denoised_1, denoised_2 = None, None
+     h, h_1, h_2 = None, None, None
+
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+         u = x - model.last_noise_uncond * sigmas[i] * s_in[:1]
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+         if sigmas[i + 1] == 0:
+             x = denoised
+         else:
+             t, s = -sigmas[i].log(), -sigmas[i + 1].log()
+             h = s - t
+             h_eta = h * (eta + 1)
+
+             x = torch.exp(-h_eta) * (x + (denoised - u)) + (-h_eta).expm1().neg() * denoised
+
+             if h_2 is not None:
+                 r0 = h_1 / h
+                 r1 = h_2 / h
+                 d1_0 = (denoised - denoised_1) / r0
+                 d1_1 = (denoised_1 - denoised_2) / r1
+                 d1 = d1_0 + (d1_0 - d1_1) * r0 / (r0 + r1)
+                 d2 = (d1_0 - d1_1) / (r0 + r1)
+                 phi_2 = h_eta.neg().expm1() / h_eta + 1
+                 phi_3 = phi_2 / h_eta - 0.5
+                 x = x + phi_2 * d1 - phi_3 * d2
+             elif h_1 is not None:
+                 r = h_1 / h
+                 d = (denoised - denoised_1) / r
+                 phi_2 = h_eta.neg().expm1() / h_eta + 1
+                 x = x + phi_2 * d
+
+             if eta:
+                 x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * sigmas[i + 1] * (-2 * h * eta).expm1().neg().sqrt() * s_noise
+
+         denoised_1, denoised_2 = denoised, denoised_1
+         h_1, h_2 = h, h_1
+     return x
+
+
+ ## extra
+ @torch.no_grad()
+ def sample_dpmpp_2m_sde_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None):
+     # just cut down from the 3m_sde version
+     # normalize extra_args before reading from it (a None default would crash on .get)
+     extra_args = {} if extra_args is None else extra_args
+     seed = extra_args.get("seed", None)
+     sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
+     noise_sampler = BrownianTreeNoiseSampler(x, sigma_min, sigma_max, seed=seed) if noise_sampler is None else noise_sampler
+     s_in = x.new_ones([x.shape[0]])
+
+     denoised_1 = None
+     h_1 = None
+
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+         u = x - model.last_noise_uncond * sigmas[i] * s_in[:1]
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+         if sigmas[i + 1] == 0:
+             # Denoising step
+             x = denoised
+         else:
+             # DPM-Solver++(2M) SDE
+             t, s = -sigmas[i].log(), -sigmas[i + 1].log()
+             h = s - t
+
+             h_eta = h * (eta + 1)
+             x = torch.exp(-h_eta) * (x + (denoised - u)) + (-h_eta).expm1().neg() * denoised
+
+             if denoised_1 is not None:
+                 r = h_1 / h
+
+                 d = (denoised - denoised_1) / r
+                 phi_2 = h_eta.neg().expm1() / h_eta + 1
+                 x = x + phi_2 * d
+
+             if eta:
+                 x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * sigmas[i + 1] * (-2 * h * eta).expm1().neg().sqrt() * s_noise
+
+             h_1 = h
+
+         denoised_1 = denoised
+     return x
+
+
+ # via ReForge
+ @torch.no_grad()
+ def sample_dpmpp_2s_ancestral_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None):
+     extra_args = {} if extra_args is None else extra_args
+     noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
+
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+
+     s_in = x.new_ones([x.shape[0]])
+     sigma_fn = lambda t: t.neg().exp()
+     t_fn = lambda sigma: sigma.log().neg()
+     for i in trange(len(sigmas) - 1, disable=disable):
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+         sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1], eta=eta)
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+         if sigma_down == 0:
+             # Euler method
+             d = model.last_noise_uncond
+             x = denoised + d * sigma_down
+         else:
+             u = x - model.last_noise_uncond * sigmas[i] * s_in[:1]
+
+             # DPM-Solver++(2S)
+             t, t_next = t_fn(sigmas[i]), t_fn(sigma_down)
+             # r = torch.sinh(1 + (2 - eta) * (t_next - t) / (t - t_fn(sigma_up))) works only on non-cfgpp, weird
+             r = 1 / 2
+             h = t_next - t
+             s = t + r * h
+             x_2 = (sigma_fn(s) / sigma_fn(t)) * (x + (denoised - u)) - (-h * r).expm1() * denoised
+             denoised_2 = model(x_2, sigma_fn(s) * s_in, **extra_args)
+             x = (sigma_fn(t_next) / sigma_fn(t)) * (x + (denoised - u)) - (-h).expm1() * denoised_2
+
+         # Noise addition
+         if sigmas[i + 1] > 0:
+             x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * s_noise * sigma_up
+     return x
webUI_ExtraSchedulers/scripts/gradient_estimation.py ADDED
@@ -0,0 +1,71 @@
+ ## lifted from ReForge, original implementation from Comfy
+ ## CFG++ attempt by me
+
+ import torch
+ from tqdm.auto import trange
+
+
+ # copied from kdiffusion/sampling.py
+ def to_d(x, sigma, denoised):
+     """Converts a denoiser output to a Karras ODE derivative."""
+     return (x - denoised) / append_dims(sigma, x.ndim)
+
+
+ def append_dims(x, target_dims):
+     """Appends dimensions to the end of a tensor until it has target_dims dimensions."""
+     dims_to_append = target_dims - x.ndim
+     if dims_to_append < 0:
+         raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less')
+     return x[(...,) + (None,) * dims_to_append]
+
+
+ @torch.no_grad()
+ def sample_gradient_e(model, x, sigmas, extra_args=None, callback=None, disable=None, ge_gamma=2.):
+     """Gradient-estimation sampler. Paper: https://openreview.net/pdf?id=o2ND9v0CeK"""
+     extra_args = {} if extra_args is None else extra_args
+     s_in = x.new_ones([x.shape[0]])
+     old_d = None
+
+     sigmas = sigmas.to(x.device)
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+
+         d = to_d(x, sigmas[i], denoised)
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+         dt = sigmas[i + 1] - sigmas[i]
+         if i == 0:  # Euler method
+             x = x + d * dt
+         else:
+             # Gradient estimation
+             d_bar = ge_gamma * d + (1 - ge_gamma) * old_d
+             x = x + d_bar * dt
+         old_d = d
+     return x
+
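+ # note (editor): with the default ge_gamma=2 the update is d_bar = 2*d - old_d,
+ # i.e. a linear extrapolation of the ODE derivative from the last two steps.
+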
+ @torch.no_grad()
+ def sample_gradient_e_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, ge_gamma=2.):
+     """Gradient-estimation sampler, CFG++ variant. Paper: https://openreview.net/pdf?id=o2ND9v0CeK"""
+     extra_args = {} if extra_args is None else extra_args
+     s_in = x.new_ones([x.shape[0]])
+     old_d = None
+
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+
+         d = model.last_noise_uncond
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+         if i == 0:  # Euler method
+             x = denoised + d * sigmas[i + 1]
+         else:
+             # Gradient estimation
+             d_bar = ge_gamma * d + (1 - ge_gamma) * old_d
+             x = denoised + d_bar * sigmas[i + 1]
+         old_d = d
+     return x
webUI_ExtraSchedulers/scripts/res_solver.py ADDED
@@ -0,0 +1,398 @@
+ import torch
+ from torch import no_grad, FloatTensor
+ from tqdm import tqdm
+ from tqdm.auto import trange
+ from itertools import pairwise
+ from typing import Protocol, Optional, Dict, Any, TypedDict, NamedTuple
+ import math
+
+
+ # copied from kdiffusion/sampling.py and utils.py
+ def default_noise_sampler(x):
+     return lambda sigma, sigma_next: torch.randn_like(x)
+
+
+ def append_dims(x, target_dims):
+     """Appends dimensions to the end of a tensor until it has target_dims dimensions."""
+     dims_to_append = target_dims - x.ndim
+     if dims_to_append < 0:
+         raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less')
+     return x[(...,) + (None,) * dims_to_append]
+
+
+ def to_d(x, sigma, denoised):
+     """Converts a denoiser output to a Karras ODE derivative."""
+     return (x - denoised) / append_dims(sigma, x.ndim)
+
+
25
+ def __call__(self, x: FloatTensor, t: FloatTensor, *args, **kwargs) -> FloatTensor: ...
26
+
27
+ class RefinedExpCallbackPayload(TypedDict):
28
+ x: FloatTensor
29
+ i: int
30
+ sigma: FloatTensor
31
+ sigma_hat: FloatTensor
32
+
33
+ class RefinedExpCallback(Protocol):
34
+ def __call__(self, payload: RefinedExpCallbackPayload) -> None: ...
35
+
36
+ class NoiseSampler(Protocol):
37
+ def __call__(self, x: FloatTensor) -> FloatTensor: ...
38
+
39
+ class StepOutput(NamedTuple):
40
+ x_next: FloatTensor
41
+ denoised: FloatTensor
42
+ denoised2: FloatTensor
43
+ vel: FloatTensor
44
+ vel_2: FloatTensor
45
+
46
+ def _gamma(
47
+ n: int,
48
+ ) -> int:
49
+ """
50
+ https://en.wikipedia.org/wiki/Gamma_function
51
+ for every positive integer n,
52
+ Γ(n) = (n-1)!
53
+ """
54
+ return math.factorial(n-1)
55
+
56
+ def _incomplete_gamma(
57
+ s: int,
58
+ x: float,
59
+ gamma_s: Optional[int] = None
60
+ ) -> float:
61
+ """
62
+ https://en.wikipedia.org/wiki/Incomplete_gamma_function#Special_values
63
+ if s is a positive integer,
64
+ Γ(s, x) = (s-1)!*∑{k=0..s-1}(x^k/k!)
65
+ """
66
+ if gamma_s is None:
67
+ gamma_s = _gamma(s)
68
+
69
+ sum_: float = 0
70
+ # {k=0..s-1} inclusive
71
+ for k in range(s):
72
+ numerator: float = x**k
73
+ denom: int = math.factorial(k)
74
+ quotient: float = numerator/denom
75
+ sum_ += quotient
76
+ incomplete_gamma_: float = sum_ * math.exp(-x) * gamma_s
77
+ return incomplete_gamma_
78
+
79
+ # by Katherine Crowson
80
+ def _phi_1(neg_h: FloatTensor):
81
+ return torch.nan_to_num(torch.expm1(neg_h) / neg_h, nan=1.0)
82
+
83
+ # by Katherine Crowson
84
+ def _phi_2(neg_h: FloatTensor):
85
+ return torch.nan_to_num((torch.expm1(neg_h) - neg_h) / neg_h**2, nan=0.5)
86
+
87
+ # by Katherine Crowson
88
+ def _phi_3(neg_h: FloatTensor):
89
+ return torch.nan_to_num((torch.expm1(neg_h) - neg_h - neg_h**2 / 2) / neg_h**3, nan=1 / 6)
90
+
91
+ def _phi(
92
+ neg_h: float,
93
+ j: int,
94
+ ):
95
+ """
96
+ For j={1,2,3}: you could alternatively use Kat's phi_1, phi_2, phi_3 which perform fewer steps
97
+
98
+ Lemma 1
99
+ https://arxiv.org/abs/2308.02157
100
+ ϕj(-h) = 1/h^j*∫{0..h}(e^(τ-h)*(τ^(j-1))/((j-1)!)dτ)
101
+
102
+ https://www.wolframalpha.com/input?i=integrate+e%5E%28%CF%84-h%29*%28%CF%84%5E%28j-1%29%2F%28j-1%29%21%29d%CF%84
103
+ = 1/h^j*[(e^(-h)*(-τ)^(-j)*τ(j))/((j-1)!)]{0..h}
104
+ https://www.wolframalpha.com/input?i=integrate+e%5E%28%CF%84-h%29*%28%CF%84%5E%28j-1%29%2F%28j-1%29%21%29d%CF%84+between+0+and+h
105
+ = 1/h^j*((e^(-h)*(-h)^(-j)*h^j*(Γ(j)-Γ(j,-h)))/(j-1)!)
106
+ = (e^(-h)*(-h)^(-j)*h^j*(Γ(j)-Γ(j,-h))/((j-1)!*h^j)
107
+ = (e^(-h)*(-h)^(-j)*(Γ(j)-Γ(j,-h))/(j-1)!
108
+ = (e^(-h)*(-h)^(-j)*(Γ(j)-Γ(j,-h))/Γ(j)
109
+ = (e^(-h)*(-h)^(-j)*(1-Γ(j,-h)/Γ(j))
110
+
111
+ requires j>0
112
+ """
113
+ assert j > 0
114
+ gamma_: float = _gamma(j)
115
+ incomp_gamma_: float = _incomplete_gamma(j, neg_h, gamma_s=gamma_)
116
+
117
+ phi_: float = math.exp(neg_h) * neg_h**-j * (1-incomp_gamma_/gamma_)
118
+
119
+ return phi_
120
+
121
+ class RESDECoeffsSecondOrder(NamedTuple):
122
+ a2_1: float
123
+ b1: float
124
+ b2: float
125
+
126
+ def _de_second_order(
127
+ h: float,
128
+ c2: float,
129
+ simple_phi_calc = False,
130
+ ) -> RESDECoeffsSecondOrder:
131
+ """
132
+ Table 3
133
+ https://arxiv.org/abs/2308.02157
134
+ ϕi,j := ϕi,j(-h) = ϕi(-cj*h)
135
+ a2_1 = c2ϕ1,2
136
+ = c2ϕ1(-c2*h)
137
+ b1 = ϕ1 - ϕ2/c2
138
+ """
139
+ if simple_phi_calc:
140
+ # Kat computed simpler expressions for phi for cases j={1,2,3}
141
+ a2_1: float = c2 * _phi_1(-c2*h)
142
+ phi1: float = _phi_1(-h)
143
+ phi2: float = _phi_2(-h)
144
+ else:
145
+ # I computed general solution instead.
146
+ # they're close, but there are slight differences. not sure which would be more prone to numerical error.
147
+ a2_1: float = c2 * _phi(j=1, neg_h=-c2*h)
148
+ phi1: float = _phi(j=1, neg_h=-h)
149
+ phi2: float = _phi(j=2, neg_h=-h)
150
+ phi2_c2: float = phi2/c2
151
+ b1: float = phi1 - phi2_c2
152
+ b2: float = phi2_c2
153
+ return RESDECoeffsSecondOrder(
154
+ a2_1=a2_1,
155
+ b1=b1,
156
+ b2=b2,
157
+ )
158
+
159
+ def _refined_exp_sosu_step(
160
+ model: DenoiserModel,
161
+ x: FloatTensor,
162
+ sigma: FloatTensor,
163
+ sigma_next: FloatTensor,
164
+ c2 = 0.5,
165
+ extra_args: Dict[str, Any] = {},
166
+ pbar: Optional[tqdm] = None,
167
+ simple_phi_calc = False,
168
+ momentum = 0.0,
169
+ vel = None,
170
+ vel_2 = None,
171
+ time = None
172
+ ) -> StepOutput:
173
+ """
174
+ Algorithm 1 "RES Second order Single Update Step with c2"
175
+ https://arxiv.org/abs/2308.02157
176
+
177
+ Parameters:
178
+ model (`DenoiserModel`): a k-diffusion wrapped denoiser model (e.g. a subclass of DiscreteEpsDDPMDenoiser)
179
+ x (`FloatTensor`): noised latents (or RGB I suppose), e.g. torch.randn((B, C, H, W)) * sigma[0]
180
+ sigma (`FloatTensor`): timestep to denoise
181
+ sigma_next (`FloatTensor`): timestep+1 to denoise
182
+ c2 (`float`, *optional*, defaults to .5): partial step size for solving ODE. .5 = midpoint method
183
+ extra_args (`Dict[str, Any]`, *optional*, defaults to `{}`): kwargs to pass to `model#__call__()`
184
+ pbar (`tqdm`, *optional*, defaults to `None`): progress bar to update after each model call
185
+ simple_phi_calc (`bool`, *optional*, defaults to `True`): True = calculate phi_i,j(-h) via simplified formulae specific to j={1,2}. False = Use general solution that works for any j. Mathematically equivalent, but could be numeric differences.
186
+ """
187
+
188
+ def momentum_func(diff, velocity, timescale=1.0, offset=-momentum / 2.0): # Diff is current diff, vel is previous diff
189
+ if velocity is None:
190
+ momentum_vel = diff
191
+ else:
192
+ momentum_vel = momentum * (timescale + offset) * velocity + (1 - momentum * (timescale + offset)) * diff
193
+ return momentum_vel
194
+
195
+ lam_next, lam = (s.log().neg() for s in (sigma_next, sigma))
196
+
197
+ # type hints aren't strictly true regarding float vs FloatTensor.
198
+ # everything gets promoted to `FloatTensor` after interacting with `sigma: FloatTensor`.
199
+ # I will use float to indicate any variables which are scalars.
200
+ h: float = lam_next - lam
201
+ a2_1, b1, b2 = _de_second_order(h=h, c2=c2, simple_phi_calc=simple_phi_calc)
202
+
203
+ denoised: FloatTensor = model(x, sigma.repeat(x.size(0)), **extra_args)
204
+ # if pbar is not None:
205
+ # pbar.update(0.5)
206
+
207
+ c2_h: float = c2*h
208
+
209
+ diff_2 = momentum_func(a2_1*h*denoised, vel_2, time)
210
+ vel_2 = diff_2
211
+ x_2: FloatTensor = math.exp(-c2_h)*x + diff_2
212
+ lam_2: float = lam + c2_h
213
+ sigma_2: float = lam_2.neg().exp()
214
+
215
+ denoised2: FloatTensor = model(x_2, sigma_2.repeat(x_2.size(0)), **extra_args)
216
+ if pbar is not None:
217
+ pbar.update()
218
+
219
+ diff = momentum_func(h*(b1*denoised + b2*denoised2), vel, time)
220
+ vel = diff
221
+
222
+ x_next: FloatTensor = math.exp(-h)*x + diff
223
+
224
+ return StepOutput(
225
+ x_next=x_next,
226
+ denoised=denoised,
227
+ denoised2=denoised2,
228
+ vel=vel,
229
+ vel_2=vel_2,
230
+ )
231
+
232
+
233
+ @no_grad()
234
+ def sample_refined_exp_s(
235
+ model: FloatTensor,
236
+ x: FloatTensor,
237
+ sigmas: FloatTensor,
238
+ denoise_to_zero: bool = True,
239
+ extra_args: Dict[str, Any] = {},
240
+ callback: Optional[RefinedExpCallback] = None,
241
+ disable: Optional[bool] = None,
242
+ ita: FloatTensor = torch.zeros((1,)),
243
+ c2 = .5,
244
+ noise_sampler: NoiseSampler = torch.randn_like,
245
+ simple_phi_calc = False,
246
+ momentum = 0.0,
247
+ ):
248
+ """
249
+ Refined Exponential Solver (S).
250
+ Algorithm 2 "RES Single-Step Sampler" with Algorithm 1 second-order step
251
+ https://arxiv.org/abs/2308.02157
252
+
253
+ Parameters:
254
+ model (`DenoiserModel`): a k-diffusion wrapped denoiser model (e.g. a subclass of DiscreteEpsDDPMDenoiser)
255
+ x (`FloatTensor`): noised latents (or RGB I suppose), e.g. torch.randn((B, C, H, W)) * sigma[0]
256
+ sigmas (`FloatTensor`): sigmas (ideally an exponential schedule!) e.g. get_sigmas_exponential(n=25, sigma_min=model.sigma_min, sigma_max=model.sigma_max)
257
+ denoise_to_zero (`bool`, *optional*, defaults to `True`): whether to finish with a first-order step down to 0 (rather than stopping at sigma_min). True = fully denoise image. False = match Algorithm 2 in paper
258
+ extra_args (`Dict[str, Any]`, *optional*, defaults to `{}`): kwargs to pass to `model#__call__()`
259
+ callback (`RefinedExpCallback`, *optional*, defaults to `None`): you can supply this callback to see the intermediate denoising results, e.g. to preview each step of the denoising process
260
+ disable (`bool`, *optional*, defaults to `False`): whether to hide `tqdm`'s progress bar animation from being printed
261
+ ita (`FloatTensor`, *optional*, defaults to 0.): degree of stochasticity, η, for each timestep. tensor shape must be broadcastable to 1-dimensional tensor with length `len(sigmas) if denoise_to_zero else len(sigmas)-1`. each element should be from 0 to 1.
262
+ - if used: batch noise doesn't match non-batch
263
+ c2 (`float`, *optional*, defaults to .5): partial step size for solving ODE. .5 = midpoint method
264
+ noise_sampler (`NoiseSampler`, *optional*, defaults to `torch.randn_like`): method used for adding noise
265
+ simple_phi_calc (`bool`, *optional*, defaults to `True`): True = calculate phi_i,j(-h) via simplified formulae specific to j={1,2}. False = Use general solution that works for any j. Mathematically equivalent, but could be numeric differences.
266
+ """
267
+     # assert sigmas[-1] == 0
+     device = x.device
+     extra_args = {} if extra_args is None else extra_args  # callers (e.g. sample_res_solver) may pass None
+     ita = ita.to(device)
+     sigmas = sigmas.to(device)
+
+     sigma_min, sigma_max = sigmas[sigmas > 0].min(), sigmas.max()
+
+     vel, vel_2 = None, None
+     with tqdm(disable=disable, total=len(sigmas) - (1 if denoise_to_zero else 2)) as pbar:
+         for i, (sigma, sigma_next) in enumerate(pairwise(sigmas[:-1].split(1))):
+             time = sigmas[i] / sigma_max
+             eps = torch.randn_like(x).float()
+             sigma_hat = sigma * (1 + ita)
+             x_hat = x + (sigma_hat ** 2 - sigma ** 2).sqrt() * eps
+             x_next, denoised, denoised2, vel, vel_2 = _refined_exp_sosu_step(
+                 model,
+                 x_hat,
+                 sigma_hat,
+                 sigma_next,
+                 c2=c2,
+                 extra_args=extra_args,
+                 pbar=pbar,
+                 simple_phi_calc=simple_phi_calc,
+                 momentum=momentum,
+                 vel=vel,
+                 vel_2=vel_2,
+                 time=time,
+             )
+             if callback is not None:
+                 payload = RefinedExpCallbackPayload(
+                     x=x,
+                     i=i,
+                     sigma=sigma,
+                     sigma_hat=sigma_hat,
+                     denoised=denoised,
+                     denoised2=denoised2,
+                 )
+                 callback(payload)
+             x = x_next
+         if denoise_to_zero:
+             eps = torch.randn_like(x).float()
+             sigma_hat = sigma * (1 + ita)
+             x_hat = x + (sigma_hat ** 2 - sigma ** 2).sqrt() * eps
+             x_next: FloatTensor = model(x_hat, sigma.to(x_hat.device).repeat(x_hat.size(0)), **extra_args)
+             pbar.update()
+
+             if callback is not None:
+                 payload = RefinedExpCallbackPayload(
+                     x=x,
+                     i=i,
+                     sigma=sigma,
+                     sigma_hat=sigma_hat,
+                     denoised=denoised,
+                     denoised2=denoised2,
+                 )
+                 callback(payload)
+
+             x = x_next
+     return x
+
+
+ # Many thanks to Kat + Birch-San for this wonderful sampler implementation! https://github.com/Birch-san/sdxl-play/commits/res/
+ def sample_res_solver(model, x, sigmas, extra_args=None, callback=None, disable=None, noise_sampler_type="gaussian", noise_sampler=None, denoise_to_zero=True, simple_phi_calc=False, c2=0.5, ita=torch.Tensor((0.0,)), momentum=0.0):
+     return sample_refined_exp_s(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, noise_sampler=noise_sampler, denoise_to_zero=denoise_to_zero, simple_phi_calc=simple_phi_calc, c2=c2, ita=ita, momentum=momentum)
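+
+ # Usage sketch (illustrative only; `denoiser` stands in for a k-diffusion-wrapped model
+ # as described in the docstring above, and the sigma range is just an example):
+ def _demo_res_solver_usage(denoiser):
+     from k_diffusion.sampling import get_sigmas_exponential
+     sigmas = get_sigmas_exponential(n=25, sigma_min=0.03, sigma_max=14.6)
+     x = torch.randn((1, 4, 64, 64)) * sigmas[0]
+     return sample_res_solver(denoiser, x, sigmas)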
+
+
+ ## modified from ReForge, original implementation ComfyUI
+ @torch.no_grad()
+ def res_multistep(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1., noise_sampler=None, cfgpp=False):
+     extra_args = {} if extra_args is None else extra_args
+     seed = extra_args.get("seed", None)
+     noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
+     s_in = x.new_ones([x.shape[0]])
+     sigma_fn = lambda t: t.neg().exp()
+     t_fn = lambda sigma: sigma.log().neg()
+     phi1_fn = lambda t: torch.expm1(t) / t
+     phi2_fn = lambda t: (phi1_fn(t) - 1.0) / t
+     old_denoised = None
+
+     sigmas = sigmas.to(x.device)
+
+     if cfgpp:
+         model.need_last_noise_uncond = True
+         model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         if s_churn > 0:
+             gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.0
+             sigma_hat = sigmas[i] * (gamma + 1)
+         else:
+             gamma = 0
+             sigma_hat = sigmas[i]
+         if gamma > 0:
+             eps = torch.randn_like(x) * s_noise
+             x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
+         denoised = model(x, sigma_hat * s_in, **extra_args)
+
+         if callback is not None:
+             callback({"x": x, "i": i, "sigma": sigmas[i], "sigma_hat": sigma_hat, "denoised": denoised})
+         if sigmas[i + 1] == 0 or old_denoised is None:
+             # Euler method
+             if cfgpp:
+                 d = model.last_noise_uncond
+                 x = denoised + d * sigmas[i + 1]
+             else:
+                 d = to_d(x, sigma_hat, denoised)
+                 dt = sigmas[i + 1] - sigma_hat
+                 x = x + d * dt
+         else:
+             # Second order multistep method in https://arxiv.org/pdf/2308.02157
+             t, t_next, t_prev = t_fn(sigmas[i]), t_fn(sigmas[i + 1]), t_fn(sigmas[i - 1])
+             h = t_next - t
+             c2 = (t_prev - t) / h
+             phi1_val, phi2_val = phi1_fn(-h), phi2_fn(-h)
+             b1 = torch.nan_to_num(phi1_val - 1.0 / c2 * phi2_val, nan=0.0)
+             b2 = torch.nan_to_num(1.0 / c2 * phi2_val, nan=0.0)
+             if cfgpp:
+                 d = model.last_noise_uncond
+                 x = denoised + d * sigma_hat
+
+             x = (sigma_fn(t_next) / sigma_fn(t)) * x + h * (b1 * denoised + b2 * old_denoised)
+         old_denoised = denoised
+     return x
+
+
+ @torch.no_grad()
+ def sample_res_multistep(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1., noise_sampler=None):
+     return res_multistep(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, s_churn=s_churn, s_tmin=s_tmin, s_tmax=s_tmax, s_noise=s_noise, noise_sampler=noise_sampler, cfgpp=False)
+
+
+ @torch.no_grad()
+ def sample_res_multistep_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1., noise_sampler=None):
+     return res_multistep(model, x, sigmas, extra_args=extra_args, callback=callback, disable=disable, s_churn=s_churn, s_tmin=s_tmin, s_tmax=s_tmax, s_noise=s_noise, noise_sampler=noise_sampler, cfgpp=True)
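+
+ # Sanity check (sketch): phi1(-h) = (exp(-h) - 1)/(-h), phi2(-h) = (phi1(-h) - 1)/(-h),
+ # and the multistep weights above always satisfy b1 + b2 = phi1(-h), i.e. they split the
+ # first-order exponential-integrator weight between the current and previous denoised.
+ def _demo_res_multistep_weights():
+     h, c2 = torch.tensor(0.7), torch.tensor(0.6)
+     phi1 = torch.expm1(-h) / -h
+     phi2 = (phi1 - 1.0) / -h
+     b1 = phi1 - phi2 / c2
+     b2 = phi2 / c2
+     assert torch.isclose(b1 + b2, phi1)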
webUI_ExtraSchedulers/scripts/samplers_cfgpp.py ADDED
@@ -0,0 +1,264 @@
+ import torch
+ from tqdm.auto import trange
+
+
+ # copied from kdiffusion/sampling.py and utils.py
+ def default_noise_sampler(x):
+     return lambda sigma, sigma_next: torch.randn_like(x)
+
+
+ def get_ancestral_step(sigma_from, sigma_to, eta=1.):
+     """Calculates the noise level (sigma_down) to step down to and the amount
+     of noise to add (sigma_up) when doing an ancestral sampling step."""
+     if not eta:
+         return sigma_to, 0.
+     sigma_up = min(sigma_to, eta * (sigma_to ** 2 * (sigma_from ** 2 - sigma_to ** 2) / sigma_from ** 2) ** 0.5)
+     sigma_down = (sigma_to ** 2 - sigma_up ** 2) ** 0.5
+     return sigma_down, sigma_up
+
+
+ def append_dims(x, target_dims):
+     """Appends dimensions to the end of a tensor until it has target_dims dimensions."""
+     dims_to_append = target_dims - x.ndim
+     if dims_to_append < 0:
+         raise ValueError(f'input has {x.ndim} dims but target_dims is {target_dims}, which is less')
+     return x[(...,) + (None,) * dims_to_append]
+
+
+ def to_d(x, sigma, denoised):
+     """Converts a denoiser output to a Karras ODE derivative."""
+     return (x - denoised) / append_dims(sigma, x.ndim)
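+
+ # Sanity check (sketch): for eta <= 1, get_ancestral_step splits the target noise level
+ # so that sigma_down**2 + sigma_up**2 == sigma_to**2 -- the deterministic step plus the
+ # freshly injected noise reproduce the marginal variance of the schedule.
+ def _demo_ancestral_split():
+     sigma_from, sigma_to = torch.tensor(2.0), torch.tensor(1.0)
+     sigma_down, sigma_up = get_ancestral_step(sigma_from, sigma_to, eta=1.)
+     assert torch.isclose(sigma_down ** 2 + sigma_up ** 2, sigma_to ** 2)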
+
+
+ @torch.no_grad()
+ def sample_euler_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+     """CFG++ variant of Algorithm 2 (Euler steps) from Karras et al. (2022)."""
+     extra_args = {} if extra_args is None else extra_args
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
+         eps = torch.randn_like(x) * s_noise
+         sigma_hat = sigmas[i] * (gamma + 1)
+         if gamma > 0:
+             x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
+         denoised = model(x, sigma_hat * s_in, **extra_args)
+         d = model.last_noise_uncond
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+         # Euler method
+         x = denoised + d * sigmas[i + 1]
+     return x
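+
+ # Note (sketch): with the standard Karras derivative d = (x - denoised) / sigma_hat, the
+ # update `denoised + d * sigmas[i + 1]` is algebraically the usual Euler step
+ # `x + d * (sigmas[i + 1] - sigma_hat)`. CFG++ simply substitutes the model's
+ # unconditional noise prediction for d; that substitution is the whole difference.
+ def _demo_cfgpp_update_identity():
+     x, denoised = torch.randn(4), torch.randn(4)
+     sigma_hat, sigma_next = torch.tensor(2.0), torch.tensor(1.5)
+     d = (x - denoised) / sigma_hat
+     assert torch.allclose(denoised + d * sigma_next, x + d * (sigma_next - sigma_hat))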
+
+
+ class _Rescaler:
+     # temporarily resizes the model's init_latent/mask/nmask to match a latent `x`
+     # that is being denoised at a different resolution (used by the Dy/SMEA steps)
+     def __init__(self, model, x, mode, **extra_args):
+         self.model = model
+         self.x = x
+         self.mode = mode
+         self.extra_args = extra_args
+         self.init_latent, self.mask, self.nmask = model.init_latent, model.mask, model.nmask
+
+     def __enter__(self):
+         if self.init_latent is not None:
+             self.model.init_latent = torch.nn.functional.interpolate(input=self.init_latent, size=self.x.shape[2:4], mode=self.mode)
+         if self.mask is not None:
+             self.model.mask = torch.nn.functional.interpolate(input=self.mask.unsqueeze(0), size=self.x.shape[2:4], mode=self.mode).squeeze(0)
+         if self.nmask is not None:
+             self.model.nmask = torch.nn.functional.interpolate(input=self.nmask.unsqueeze(0), size=self.x.shape[2:4], mode=self.mode).squeeze(0)
+
+         return self
+
+     def __exit__(self, type, value, traceback):
+         del self.model.init_latent, self.model.mask, self.model.nmask
+         self.model.init_latent, self.model.mask, self.model.nmask = self.init_latent, self.mask, self.nmask
+
+
+ @torch.no_grad()
+ def dy_sampling_step_cfgpp(x, model, sigma_hat, **extra_args):
+     # denoise a half-resolution view built from the bottom-right pixel of each 2x2 block,
+     # then write the result back into the full-resolution latent
+     original_shape = x.shape
+     batch_size, channels, m, n = original_shape[0], original_shape[1], original_shape[2] // 2, original_shape[3] // 2
+     extra_row = x.shape[2] % 2 == 1
+     extra_col = x.shape[3] % 2 == 1
+
+     if extra_row:
+         extra_row_content = x[:, :, -1:, :]
+         x = x[:, :, :-1, :]
+     if extra_col:
+         extra_col_content = x[:, :, :, -1:]
+         x = x[:, :, :, :-1]
+
+     a_list = x.unfold(2, 2, 2).unfold(3, 2, 2).contiguous().view(batch_size, channels, m * n, 2, 2)
+     c = a_list[:, :, :, 1, 1].view(batch_size, channels, m, n)
+
+     with _Rescaler(model, c, 'nearest-exact', **extra_args) as rescaler:
+         denoised = model(c, sigma_hat * c.new_ones([c.shape[0]]), **rescaler.extra_args)
+     d = model.last_noise_uncond
+     c = denoised + d * sigma_hat
+
+     d_list = c.view(batch_size, channels, m * n, 1, 1)
+     a_list[:, :, :, 1, 1] = d_list[:, :, :, 0, 0]
+     x = a_list.view(batch_size, channels, m, n, 2, 2).permute(0, 1, 2, 4, 3, 5).reshape(batch_size, channels, 2 * m, 2 * n)
+
+     if extra_row or extra_col:
+         x_expanded = torch.zeros(original_shape, dtype=x.dtype, device=x.device)
+         x_expanded[:, :, :2 * m, :2 * n] = x
+         if extra_row:
+             x_expanded[:, :, -1:, :2 * n + 1] = extra_row_content
+         if extra_col:
+             x_expanded[:, :, :2 * m, -1:] = extra_col_content
+         if extra_row and extra_col:
+             x_expanded[:, :, -1:, -1:] = extra_col_content[:, :, -1:, :]
+         x = x_expanded
+
+     return x
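+
+ # Sanity check (sketch): the unfold/view gymnastics above just pick the bottom-right
+ # pixel of every 2x2 block, which equals plain strided indexing.
+ def _demo_unfold_picks_bottom_right():
+     b, ch, h, w = 1, 4, 8, 8
+     m, n = h // 2, w // 2
+     x = torch.randn(b, ch, h, w)
+     a_list = x.unfold(2, 2, 2).unfold(3, 2, 2).contiguous().view(b, ch, m * n, 2, 2)
+     c = a_list[:, :, :, 1, 1].view(b, ch, m, n)
+     assert torch.equal(c, x[:, :, 1::2, 1::2])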
+
+
+ @torch.no_grad()
+ def smea_sampling_step_cfgpp(x, model, sigma_hat, **extra_args):
+     # denoise a 1.25x upscaled copy of the latent, then scale back to the original size
+     m, n = x.shape[2], x.shape[3]
+     x = torch.nn.functional.interpolate(input=x, scale_factor=(1.25, 1.25), mode='nearest-exact')
+     with _Rescaler(model, x, 'nearest-exact', **extra_args) as rescaler:
+         denoised = model(x, sigma_hat * x.new_ones([x.shape[0]]), **rescaler.extra_args)
+     d = model.last_noise_uncond
+     x = denoised + d * sigma_hat
+     x = torch.nn.functional.interpolate(input=x, size=(m, n), mode='nearest-exact')
+     return x
+
+
+ @torch.no_grad()
+ def sample_euler_dy_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+     """CFG++ version of Euler Dy by Koishi-Star."""
+     extra_args = {} if extra_args is None else extra_args
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
+         eps = torch.randn_like(x) * s_noise
+         sigma_hat = sigmas[i] * (gamma + 1)
+         if gamma > 0:
+             x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
+         denoised = model(x, sigma_hat * s_in, **extra_args)
+         d = model.last_noise_uncond
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+         # Euler method
+         x = denoised + d * sigmas[i + 1]
+
+         if sigmas[i + 1] > 0:
+             if i // 2 == 1:  # true only for i in {2, 3}
+                 x = dy_sampling_step_cfgpp(x, model, sigma_hat, **extra_args)
+
+     return x
+
+
+ @torch.no_grad()
+ def sample_euler_negative_dy_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+     """CFG++ version of Euler Negative Dy by Koishi-Star."""
+     extra_args = {} if extra_args is None else extra_args
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
+         eps = torch.randn_like(x) * s_noise
+         sigma_hat = sigmas[i] * (gamma + 1)
+         if gamma > 0:
+             x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
+         denoised = model(x, sigma_hat * s_in, **extra_args)
+         d = model.last_noise_uncond
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+         # Euler method, with the "negative" sign flip on steps where i // 2 == 1 (i.e. i in {2, 3})
+         if sigmas[i + 1] > 0 and i // 2 == 1:
+             x = -denoised - d * sigmas[i + 1]
+         else:
+             x = denoised + d * sigmas[i + 1]
+
+         if sigmas[i + 1] > 0:
+             if i // 2 == 1:
+                 x = dy_sampling_step_cfgpp(x, model, sigma_hat, **extra_args)
+
+     return x
+
+
+ @torch.no_grad()
+ def sample_euler_negative_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+     """based on Euler Negative by Koishi-Star"""
+     extra_args = {} if extra_args is None else extra_args
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
+         eps = torch.randn_like(x) * s_noise
+         sigma_hat = sigmas[i] * (gamma + 1)
+         if gamma > 0:
+             x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
+         denoised = model(x, sigma_hat * s_in, **extra_args)
+         d = model.last_noise_uncond
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+         # Euler method, with the "negative" sign flip on steps where i // 2 == 1 (i.e. i in {2, 3})
+         if sigmas[i + 1] > 0 and i // 2 == 1:
+             x = -denoised - d * sigmas[i + 1]
+         else:
+             x = denoised + d * sigmas[i + 1]
+     return x
+
+
+ @torch.no_grad()
+ def sample_euler_smea_dy_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, s_churn=0., s_tmin=0., s_tmax=float('inf'), s_noise=1.):
+     """CFG++ version of Euler SMEA Dy by Koishi-Star."""
+     extra_args = {} if extra_args is None else extra_args
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         gamma = min(s_churn / (len(sigmas) - 1), 2 ** 0.5 - 1) if s_tmin <= sigmas[i] <= s_tmax else 0.
+         eps = torch.randn_like(x) * s_noise
+         sigma_hat = sigmas[i] * (gamma + 1)
+         if gamma > 0:
+             x = x + eps * (sigma_hat ** 2 - sigmas[i] ** 2) ** 0.5
+         denoised = model(x, sigma_hat * s_in, **extra_args)
+         d = model.last_noise_uncond
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigma_hat, 'denoised': denoised})
+
+         # Euler method
+         x = denoised + d * sigmas[i + 1]
+
+         if sigmas[i + 1] > 0:
+             if i + 1 // 2 == 1:  # ?? precedence makes this i == 1; why not i // 2 == 1, same as Euler Dy
+                 x = dy_sampling_step_cfgpp(x, model, sigma_hat, **extra_args)
+             if i + 1 // 2 == 0:  # ?? precedence makes this i == 0
+                 x = smea_sampling_step_cfgpp(x, model, sigma_hat, **extra_args)
+     return x
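+
+ # Note (sketch): `//` binds tighter than `+`, so `i + 1 // 2` parses as `i + (1 // 2)`,
+ # which is just `i`; the two conditions above therefore fire only at i == 1 and i == 0,
+ # exactly as the inline comments observe.
+ def _demo_floordiv_precedence():
+     for i in range(4):
+         assert (i + 1 // 2) == i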
+
+
+ @torch.no_grad()
+ def sample_euler_ancestral_cfgpp(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None):
+     """Ancestral sampling with Euler method steps."""
+     extra_args = {} if extra_args is None else extra_args
+     noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
+     model.need_last_noise_uncond = True
+     model.inner_model.inner_model.forge_objects.unet.model_options["disable_cfg1_optimization"] = True
+     s_in = x.new_ones([x.shape[0]])
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+         d = model.last_noise_uncond
+
+         sigma_down, sigma_up = get_ancestral_step(sigmas[i], sigmas[i + 1], eta=eta)
+
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+
+         # Euler method
+         x = denoised + d * sigma_down
+         if sigmas[i + 1] > 0:
+             x = x + noise_sampler(sigmas[i], sigmas[i + 1]) * s_noise * sigma_up
+     return x
webUI_ExtraSchedulers/scripts/seeds.py ADDED
@@ -0,0 +1,106 @@
+ # SEEDS implementations by chaObserv: https://github.com/comfyanonymous/ComfyUI/pull/7580
+
+ import torch
+ from tqdm.auto import trange
+ from k_diffusion.sampling import (
+     default_noise_sampler,
+ )
+
+
+ @torch.no_grad()
+ def sample_seeds_2(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None, r=0.5):
+     '''
+     SEEDS-2 - Stochastic Explicit Exponential Derivative-free Solvers (VE Data Prediction) stage 2
+     arXiv: https://arxiv.org/abs/2305.14267
+     '''
+     extra_args = {} if extra_args is None else extra_args
+     noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
+     s_in = x.new_ones([x.shape[0]])
+     sigmas = sigmas.to(x.device)
+
+     inject_noise = eta > 0 and s_noise > 0
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+         if sigmas[i + 1] == 0:
+             x = denoised
+         else:
+             t, t_next = -sigmas[i].log(), -sigmas[i + 1].log()
+             h = t_next - t
+             h_eta = h * (eta + 1)
+             s = t + r * h
+             fac = 1 / (2 * r)
+             sigma_s = s.neg().exp()
+
+             coeff_1, coeff_2 = (-r * h_eta).expm1(), (-h_eta).expm1()
+             if inject_noise:
+                 noise_coeff_1 = (-2 * r * h * eta).expm1().neg().sqrt()
+                 noise_coeff_2 = ((-2 * r * h * eta).expm1() - (-2 * h * eta).expm1()).sqrt()
+                 noise_1, noise_2 = noise_sampler(sigmas[i], sigma_s), noise_sampler(sigma_s, sigmas[i + 1])
+
+             # Step 1
+             x_2 = (coeff_1 + 1) * x - coeff_1 * denoised
+             if inject_noise:
+                 x_2 = x_2 + sigma_s * (noise_coeff_1 * noise_1) * s_noise
+             denoised_2 = model(x_2, sigma_s * s_in, **extra_args)
+
+             # Step 2
+             denoised_d = (1 - fac) * denoised + fac * denoised_2
+             x = (coeff_2 + 1) * x - coeff_2 * denoised_d
+             if inject_noise:
+                 x = x + sigmas[i + 1] * (noise_coeff_2 * noise_1 + noise_coeff_1 * noise_2) * s_noise
+     return x
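+
+ # Note (sketch): expm1(z) + 1 == exp(z), so `(coeff + 1) * x - coeff * denoised` above is
+ # the exponential-integrator update x <- exp(-h_eta) * x + (1 - exp(-h_eta)) * denoised,
+ # written with expm1 for better accuracy at small step sizes.
+ def _demo_seeds_coeff_form():
+     h_eta = torch.tensor(0.3)
+     x, denoised = torch.randn(4), torch.randn(4)
+     coeff = (-h_eta).expm1()
+     assert torch.allclose((coeff + 1) * x - coeff * denoised,
+                           (-h_eta).exp() * x + (1 - (-h_eta).exp()) * denoised)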
+
+
+ @torch.no_grad()
+ def sample_seeds_3(model, x, sigmas, extra_args=None, callback=None, disable=None, eta=1., s_noise=1., noise_sampler=None, r_1=1./3, r_2=2./3):
+     '''
+     SEEDS-3 - Stochastic Explicit Exponential Derivative-free Solvers (VE Data Prediction) stage 3
+     arXiv: https://arxiv.org/abs/2305.14267
+     '''
+     extra_args = {} if extra_args is None else extra_args
+     noise_sampler = default_noise_sampler(x) if noise_sampler is None else noise_sampler
+     s_in = x.new_ones([x.shape[0]])
+     sigmas = sigmas.to(x.device)
+
+     inject_noise = eta > 0 and s_noise > 0
+
+     for i in trange(len(sigmas) - 1, disable=disable):
+         denoised = model(x, sigmas[i] * s_in, **extra_args)
+         if callback is not None:
+             callback({'x': x, 'i': i, 'sigma': sigmas[i], 'sigma_hat': sigmas[i], 'denoised': denoised})
+         if sigmas[i + 1] == 0:
+             x = denoised
+         else:
+             t, t_next = -sigmas[i].log(), -sigmas[i + 1].log()
+             h = t_next - t
+             h_eta = h * (eta + 1)
+             s_1 = t + r_1 * h
+             s_2 = t + r_2 * h
+             sigma_s_1, sigma_s_2 = s_1.neg().exp(), s_2.neg().exp()
+
+             coeff_1, coeff_2, coeff_3 = (-r_1 * h_eta).expm1(), (-r_2 * h_eta).expm1(), (-h_eta).expm1()
+             if inject_noise:
+                 noise_coeff_1 = (-2 * r_1 * h * eta).expm1().neg().sqrt()
+                 noise_coeff_2 = ((-2 * r_1 * h * eta).expm1() - (-2 * r_2 * h * eta).expm1()).sqrt()
+                 noise_coeff_3 = ((-2 * r_2 * h * eta).expm1() - (-2 * h * eta).expm1()).sqrt()
+                 noise_1, noise_2, noise_3 = noise_sampler(sigmas[i], sigma_s_1), noise_sampler(sigma_s_1, sigma_s_2), noise_sampler(sigma_s_2, sigmas[i + 1])
+
+             # Step 1
+             x_2 = (coeff_1 + 1) * x - coeff_1 * denoised
+             if inject_noise:
+                 x_2 = x_2 + sigma_s_1 * (noise_coeff_1 * noise_1) * s_noise
+             denoised_2 = model(x_2, sigma_s_1 * s_in, **extra_args)
+
+             # Step 2
+             x_3 = (coeff_2 + 1) * x - coeff_2 * denoised + (r_2 / r_1) * (coeff_2 / (r_2 * h_eta) + 1) * (denoised_2 - denoised)
+             if inject_noise:
+                 x_3 = x_3 + sigma_s_2 * (noise_coeff_2 * noise_1 + noise_coeff_1 * noise_2) * s_noise
+             denoised_3 = model(x_3, sigma_s_2 * s_in, **extra_args)
+
+             # Step 3
+             x = (coeff_3 + 1) * x - coeff_3 * denoised + (1. / r_2) * (coeff_3 / h_eta + 1) * (denoised_3 - denoised)
+             if inject_noise:
+                 x = x + sigmas[i + 1] * (noise_coeff_3 * noise_1 + noise_coeff_2 * noise_2 + noise_coeff_1 * noise_3) * s_noise
+     return x
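+
+ # Sanity check (sketch): the squared noise coefficients above telescope, so the total
+ # variance injected across the sub-steps equals that of one stochastic step of size h:
+ # nc1^2 + nc2^2 + nc3^2 == 1 - exp(-2*h*eta).
+ def _demo_seeds3_noise_variance_telescopes():
+     h, eta, r_1, r_2 = torch.tensor(0.5), 1.0, 1. / 3, 2. / 3
+     nc1_sq = (-2 * r_1 * h * eta).expm1().neg()
+     nc2_sq = (-2 * r_1 * h * eta).expm1() - (-2 * r_2 * h * eta).expm1()
+     nc3_sq = (-2 * r_2 * h * eta).expm1() - (-2 * h * eta).expm1()
+     assert torch.isclose(nc1_sq + nc2_sq + nc3_sq, (-2 * h * eta).expm1().neg())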
webUI_ExtraSchedulers/scripts/simple_kes.py ADDED
@@ -0,0 +1,98 @@
+ # based on -
+ # Simple Karras-Exponential Scheduler, by Kittensx
+ # https://github.com/Kittensx/Simple_KES
+
+ import torch
+ import gradio as gr
+ from k_diffusion.sampling import get_sigmas_karras, get_sigmas_exponential
+
+ from modules import shared
+
+
+ def get_sigmas_simple_kes(n, sigma_min, sigma_max, device):
+     """
+     Scheduler function that blends the Karras and Exponential sigma sequences with adaptive parameters.
+
+     Parameters:
+         n (int): Number of steps.
+         sigma_min (float): Minimum sigma value.
+         sigma_max (float): Maximum sigma value.
+         device (torch.device): The device on which to perform computations (e.g. 'cuda' or 'cpu').
+
+     The remaining knobs are read from **Settings** (`shared.opts`), not passed as arguments:
+         start_blend (float): Initial blend factor for dynamic blending.
+         end_blend (float): Final blend factor for dynamic blending.
+         sharpness (float): Sharpening factor, applied adaptively to the smallest sigmas.
+         initial_step_size (float): Initial step size for the adaptive step size calculation.
+         final_step_size (float): Final step size for the adaptive step size calculation.
+         initial_noise_scale (float): Initial noise scale factor.
+         final_noise_scale (float): Final noise scale factor.
+         smooth_blend_factor (float): Steepness of the sigmoid used to blend the two sequences.
+         step_size_factor (float): Adjust to compensate for over-smoothing.
+         noise_scale_factor (float): Adjust to provide more variation.
+
+     Returns:
+         torch.Tensor: A tensor of blended sigma values.
+     """
+
+     start_blend = getattr(shared.opts, 'kes_start_blend', 0.1)
+     end_blend = getattr(shared.opts, 'kes_end_blend', 0.5)
+     sharpness = getattr(shared.opts, 'kes_sharpness', 0.95)
+     initial_step_size = getattr(shared.opts, 'kes_initial_step_size', 0.9)
+     final_step_size = getattr(shared.opts, 'kes_final_step_size', 0.2)
+     initial_noise_scale = getattr(shared.opts, 'kes_initial_noise', 1.25)
+     final_noise_scale = getattr(shared.opts, 'kes_final_noise', 0.8)
+     smooth_blend_factor = getattr(shared.opts, 'kes_smooth_blend', 11)
+     step_size_factor = getattr(shared.opts, 'kes_step_size_factor', 0.8)
+     noise_scale_factor = getattr(shared.opts, 'kes_noise_scale', 0.9)
+
+     # Expand sigma_max slightly to account for smoother transitions
+     # sigma_max = sigma_max * 1.1
+
+     # Generate sigma sequences using the Karras and Exponential methods
+     sigmas_karras = get_sigmas_karras(n=n, sigma_min=sigma_min, sigma_max=sigma_max, device=device)
+     sigmas_exponential = get_sigmas_exponential(n=n, sigma_min=sigma_min, sigma_max=sigma_max, device=device)
+
+     # Define progress and initialize the output
+     progress = torch.linspace(0, 1, len(sigmas_karras)).to(device)
+
+     sigs = torch.zeros_like(sigmas_karras).to(device)
+
+     # Iterate through each step, dynamically adjusting blend factor, step size, and noise scaling
+     for i in range(len(sigmas_karras)):
+         # Adaptive step size and blend factor calculations
+         step_size = initial_step_size * (1 - progress[i]) + final_step_size * progress[i] * step_size_factor  # step_size_factor (default 0.8) damps the late steps to avoid over-smoothing
+
+         dynamic_blend_factor = start_blend * (1 - progress[i]) + end_blend * progress[i]
+
+         noise_scale = initial_noise_scale * (1 - progress[i]) + final_noise_scale * progress[i] * noise_scale_factor  # noise_scale_factor (default 0.9) keeps more variation in the late steps
+
+         # Calculate smooth blending between the two sigma sequences
+         smooth_blend = torch.sigmoid((dynamic_blend_factor - 0.5) * smooth_blend_factor)  # a larger smooth_blend_factor gives a steeper transition
+
+         # Compute the blended sigma value
+         blended_sigma = sigmas_karras[i] * (1 - smooth_blend) + sigmas_exponential[i] * smooth_blend
+
+         # Apply step size and noise scaling
+         sigs[i] = blended_sigma * step_size * noise_scale
+
+     # Optional: adaptive sharpening of the smallest sigma values
+     sharpen_mask = torch.where(sigs < sigma_min * 1.5, sharpness, 1.0).to(device)
+     sigs = sigs * sharpen_mask
+
+     if torch.isnan(sigs).any() or torch.isinf(sigs).any():
+         raise ValueError("Invalid sigma values detected (NaN or Inf).")
+
+     return sigs.to(device)
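+
+ # Illustration (sketch): with the default settings the sigmoid blend starts at
+ # sigmoid((0.1 - 0.5) * 11) ~= 0.012 (almost pure Karras) and ends at
+ # sigmoid((0.5 - 0.5) * 11) = 0.5 (an even Karras/exponential mix).
+ def _demo_kes_blend_endpoints():
+     start = torch.sigmoid(torch.tensor((0.1 - 0.5) * 11.0))
+     end = torch.sigmoid(torch.tensor((0.5 - 0.5) * 11.0))
+     assert start < 0.02 and end == 0.5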
+
+ shared.options_templates.update(shared.options_section(('simple_kes', "Simple KES", ""), {
+     "kes_start_blend": shared.OptionInfo(0.1, "start blend factor", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}),
+     "kes_end_blend": shared.OptionInfo(0.5, "end blend factor", gr.Slider, {"minimum": 0.0, "maximum": 1.0, "step": 0.01}),
+     "kes_sharpness": shared.OptionInfo(0.95, "sharpness", gr.Slider, {"minimum": 0.0, "maximum": 2.0, "step": 0.01}),
+     "kes_initial_step_size": shared.OptionInfo(0.9, "initial step size", gr.Slider, {"minimum": 0.01, "maximum": 1.0, "step": 0.01}),  # larger max?
+     "kes_final_step_size": shared.OptionInfo(0.2, "final step size", gr.Slider, {"minimum": 0.01, "maximum": 1.0, "step": 0.01}),  # larger max?
+     "kes_initial_noise": shared.OptionInfo(1.25, "initial noise", gr.Slider, {"minimum": 0.0, "maximum": 4.0, "step": 0.01}),
+     "kes_final_noise": shared.OptionInfo(0.8, "final noise", gr.Slider, {"minimum": 0.0, "maximum": 4.0, "step": 0.01}),
+     "kes_smooth_blend": shared.OptionInfo(11, "smooth blend factor", gr.Slider, {"minimum": 0.0, "maximum": 50.0, "step": 0.1}),
+     "kes_step_size_factor": shared.OptionInfo(0.8, "step size factor", gr.Slider, {"minimum": 0.0, "maximum": 4.0, "step": 0.01}),
+     "kes_noise_scale": shared.OptionInfo(0.9, "noise scale factor", gr.Slider, {"minimum": 0.0, "maximum": 4.0, "step": 0.01}),
+ }))