<div>
  sd-webui-lua links:
  <a href="http://github.com/yownas/sd-webui-lua/">Github</a> | 
  <a href="https://github.com/yownas/sd-webui-lua/wiki">Wiki</a> | 
  <a href="https://github.com/scoder/lupa">Lupa</a>
</div>
<div>
	<div>Functions</div>
	<div>UI</div>
	<table>
		<tr>
			<td>ui.out(string)</td>
			<td>Write string to the Output box.</td>
		</tr>
		<tr>
			<td>ui.clear()</td>
			<td>Clear Output box.</td>
		</tr>
		<tr>
			<td>ui.console(string)</td>
			<td>Print to console. </td>
		</tr>
		<tr>
			<td>ui.gallery.add(image)</td>
			<td>Add image to Gallery.</td>
		</tr>
		<tr>
			<td>ui.gallery.addc(image, string)</td>
			<td>Add image with caption to Gallery.</td>
		</tr>
		<tr>
			<td>ui.gallery.clear()</td>
			<td>Clear the gallery.</td>
		</tr>
		<tr>
			<td>ui.gallery.del(index)</td>
			<td>Delete image from gallery. (Starts at 1 since this is Lua.)</td>
		</tr>
		<tr>
			<td>ui.gallery.getgif(duration)</td>
			<td>Create a GIF from the images in the gallery, showing each image for "duration" ms.</td>
		</tr>
		<tr>
			<td>ui.gallery.saveall()</td>
			<td>Save all images in the gallery.</td>
		</tr>
		<tr>
			<td>ui.image.save(image, name)</td>
			<td>Save image.</td>
		</tr>
		<tr>
			<td>ui.status(text)</td>
			<td>Update the status text under the buttons during a run.</td>
		</tr>
		<tr>
			<td>ui.log.info(text)</td>
			<td>Write info log to console.</td>
		</tr>
		<tr>
			<td>ui.log.warning(text)</td>
			<td>Write warning log to console.</td>
		</tr>
		<tr>
			<td>ui.log.error(text)</td>
			<td>Write error log to console.</td>
		</tr>
	</table>
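	<div>
		A minimal Lua sketch of the UI functions above. The names and calls follow the table; the image variable is a placeholder for a result from the sd functions documented below.
		<pre>
-- write to the Output box and to the console
ui.out("hello from Lua")
ui.console("hello from Lua")
ui.log.info("run starting")

-- "image" is a placeholder for an image produced by the sd functions below
ui.gallery.clear()
ui.gallery.addc(image, "first result")
ui.status("1 image in the gallery")
ui.gallery.saveall()
		</pre>
	</div>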
	<div>SD</div>
	<table>
		<tr>
			<td>sd.pipeline(p)</td>
			<td>Deconstructed pipeline from the webui; generate an image from a Processing object.</td>
		</tr>
		<tr>
			<td>sd.process(string)</td>
			<td>Webui pipeline; generate an image from a prompt string or a Processing object.</td>
		</tr>
		<tr>
			<td>sd.getp()</td>
			<td>Returns a default processing object (see below).</td>
		</tr>
		<tr>
			<td>sd.cond(string)</td>
			<td>Run prompt string through clip.</td>
		</tr>
		<tr>
			<td>sd.negcond(string)</td>
			<td>Run negative prompt string through clip. (These are unfortunately handled slightly differently at the moment.)</td>
		</tr>
		<tr>
			<td>sd.sample(p, cond, negcond)</td>
			<td>Turn noise into a latent that can be turned into an image. Takes a Processing object, a cond and a negcond value. Cond and negcond can also be nil, a string, or a tensor from sd.textencode().</td>
		</tr>
		<tr>
			<td>sd.vae(latent)</td>
			<td>Variational auto-encoder.</td>
		</tr>
		<tr>
			<td>sd.toimage(latent)</td>
			<td>Last step to get an image after the vae.</td>
		</tr>
		<tr>
			<td>sd.textencode(string)</td>
			<td>Get a tensor from CLIP's text encoder.</td>
		</tr>
		<tr>
			<td>sd.clip2negcond(text encode)</td>
			<td>Convert a tensor from sd.textencode() to a negative conditioning used by functions from the webui.</td>
		</tr>
		<tr>
			<td>sd.negcond2cond(negcond)</td>
			<td>Convert a negative conditioning to a conditioning used by functions from the webui. The regular prompt and the negative prompt are treated slightly differently internally, which is why this is needed.</td>
		</tr>
		<tr>
			<td>sd.getsamplers()</td>
			<td>Get list of samplers.</td>
		</tr>
		<tr>
			<td>sd.restorefaces(image)</td>
			<td>Postprocess an image to restore faces.</td>
		</tr>
		<tr>
			<td>sd.interrogate.clip(image)</td>
			<td>Get prompt from an image.</td>
		</tr>
		<tr>
			<td>sd.interrogate.blip(image)</td>
			<td>(Same as sd.interrogate.clip().) Get prompt from an image.</td>
		</tr>
		<tr>
			<td>sd.interrogate.deepbooru(image)</td>
			<td>Get prompt from an image, using DeepBooru.</td>
		</tr>
	</table>
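	<div>
		A minimal Lua sketch of the deconstructed pipeline, using only the functions listed above. The prompt is a placeholder, and the exact return-value shapes are assumptions; see the Wiki for details.
		<pre>
-- step-by-step version of what sd.process("a photo of a cat") does
local p = sd.getp()
p.prompt = "a photo of a cat"

local c = sd.cond(p.prompt)   -- conditioning from the prompt
local nc = sd.negcond("")     -- (empty) negative conditioning

local latent = sd.sample(p, c, nc)        -- noise -> latent
local image = sd.toimage(sd.vae(latent))  -- latent -> vae -> image

ui.gallery.add(image)
		</pre>
	</div>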
	<div>Torch</div>
	<table>
		<tr>
			<td>torch.clamp(v1, min, max)</td>
			<td>Clamp vector v1 between min and max.</td>
		</tr>
		<tr>
			<td>torch.lerp(v1, v2, weight)</td>
			<td>Linear interpolation of v1 and v2, by weight. v1 + weight * (v2 - v1).</td>
		</tr>
		<tr>
			<td>torch.abs(v1)</td>
			<td>Absolute value of v1.</td>
		</tr>
		<tr>
			<td>torch.add(v1, v2)</td>
			<td>Add v2 (vector or float) to v1.</td>
		</tr>
		<tr>
			<td>torch.sub(v1, v2)</td>
			<td>Subtract v2 (vector or float) from v1.</td>
		</tr>
		<tr>
			<td>torch.mul(v1, v2)</td>
			<td>Multiply v1 by v2 (vector or float).</td>
		</tr>
		<tr>
			<td>torch.div(v1, v2)</td>
			<td>Divide v1 by v2 (vector or float).</td>
		</tr>
		<tr>
			<td>torch.size(v1)</td>
			<td>Return the size of vector v1.</td>
		</tr>
		<tr>
			<td>torch.new_zeros(size)</td>
			<td>Take a Lua table, size, and create a zero-filled tensor.</td>
		</tr>
		<tr>
			<td>torch.max(v)</td>
			<td>Return the max value in v.</td>
		</tr>
		<tr>
			<td>torch.min(v)</td>
			<td>Return the min value in v.</td>
		</tr>
		<tr>
			<td>torch.f2t(float)</td>
			<td>Return a tensor from a float.</td>
		</tr>
		<tr>
			<td>torch.t2f(tensor)</td>
			<td>Return a float from a tensor.</td>
		</tr>
		<tr>
			<td>torch.cat({table, with, tensors, ...}, dim)</td>
			<td>Concatenate tensors in dimension dim. For example, textencodings can be concatenated in dimension 1.</td>
		</tr>
	</table>
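	<div>
		A minimal sketch, assuming the torch helpers above can be combined with sd.textencode() as described, that blends two prompts by interpolating their text encodings. The prompts are placeholders.
		<pre>
local p = sd.getp()
local a = sd.textencode("a photo of a cat")
local b = sd.textencode("a photo of a dog")

-- halfway between the two encodings: a + 0.5 * (b - a)
local mix = torch.lerp(a, b, 0.5)

-- sd.sample() accepts a tensor as cond and a string as negcond
local latent = sd.sample(p, mix, "")
ui.gallery.add(sd.toimage(sd.vae(latent)))
		</pre>
	</div>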
</div>
<hr>
<div>
	Default Processing object:<br>
	<pre>
p = StableDiffusionProcessingTxt2Img(
 sd_model=shared.sd_model,
 outpath_samples=shared.opts.outdir_samples or shared.opts.outdir_txt2img_samples,
 outpath_grids=shared.opts.outdir_grids or shared.opts.outdir_txt2img_grids,
 prompt='',
 styles=[],
 negative_prompt='',
 seed=-1,
 subseed=-1,
 subseed_strength=0,
 seed_resize_from_h=0,
 seed_resize_from_w=0,
 seed_enable_extras=True,
 sampler_name='Euler a',
 batch_size=1,
 n_iter=1,
 steps=20,
 cfg_scale=7,
 width=512,
 height=512,
 restore_faces=False,
 tiling=False,
 enable_hr=False,
 denoising_strength=0,
 hr_scale=0,
 hr_upscaler=None,
 hr_second_pass_steps=0,
 hr_resize_x=0,
 hr_resize_y=0,
 override_settings=[],
)
	</pre>
</div>
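<div>
	A minimal sketch of tweaking the default Processing object before generating, assuming the fields listed above can be set as plain attributes from Lua. The values are placeholders.
	<pre>
local p = sd.getp()
p.prompt = "a watercolor landscape"
p.negative_prompt = "text, watermark"
p.steps = 30
p.cfg_scale = 7
p.width = 640
p.height = 512
p.seed = 12345

-- run the regular webui pipeline with the modified object
sd.process(p)
	</pre>
</div>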