| ui.out(string) | Write string to the Output box. |
| ui.clear() | Clear the Output box. |
| ui.console(string) | Print to console. |
| ui.gallery.add(image) | Add image to Gallery. |
| ui.gallery.addc(image, string) | Add image with caption to Gallery. |
| ui.gallery.clear() | Clear the gallery. |
| ui.gallery.del(index) | Delete image from gallery. (Starts at 1 since this is Lua.) |
| ui.gallery.getgif(duration) | Get a gif from the images in the gallery. Show each image for "duration" ms. |
| ui.gallery.saveall() | Save all images in the gallery. |
| ui.image.save(image, name) | Save image. |
| ui.status(text) | Update the status text under the buttons during a run. |
| ui.log.info(text) | Write info log to console. |
| ui.log.warning(text) | Write warning log to console. |
| ui.log.error(text) | Write error log to console. |
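Taken together, the ui functions cover most run-time feedback. A minimal sketch, runnable only inside the extension's script box, where the ui and sd tables are provided by the host webui (the prompt is a made-up example):

```lua
-- Show progress, generate, and report results via the ui table.
ui.status("Generating...")
local image = sd.process("a watercolor landscape")  -- see the sd functions below
ui.gallery.addc(image, "watercolor landscape")
ui.out("Done, 1 image in the gallery.")
ui.log.info("run finished")
```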
| sd.pipeline(p) | The deconstructed pipeline from the webui: generate an image from a Processing object. |
| sd.process(string) | The webui pipeline: generate an image from a prompt string or a Processing object. |
| sd.getp() | Returns a default Processing object (see below). |
| sd.cond(string) | Run prompt string through clip. |
| sd.negcond(string) | Run negative prompt string through clip. (These are unfortunately slightly different at the moment.) |
| sd.sample(p, cond, negcond) | Turn noise into something that can be turned into an image. Takes a Processing object, a cond and a negcond value. Cond and negcond can also be nil, a string, or a tensor from sd.textencode(). |
| sd.vae(latent) | Run the latent through the variational auto-encoder. |
| sd.toimage(latent) | Last step to get an image after the vae. |
| sd.textencode(string) | Get a tensor from Clips text encode. |
| sd.clip2negcond(tensor) | Convert a tensor from sd.textencode() to a negative conditioning used by functions from the webui. |
| sd.negcond2cond(negcond) | Convert a negative conditioning to a conditioning used by functions from the webui. The regular prompt and the negative prompt are treated slightly differently internally, which is why this is needed. |
| sd.getsamplers() | Get list of samplers. |
| sd.restorefaces(image) | Postprocess an image to restore faces. |
| sd.interrogate.clip(image) | Get prompt from an image. |
| sd.interrogate.blip(image) | Get prompt from an image. (Same as sd.interrogate.clip().) |
| sd.interrogate.deepbooru(image) | Get prompt from an image. Using DeepBooru. |
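The deconstructed pipeline above can also be chained by hand. A hedged sketch, assuming the return values plug together the way the table describes (prompts are made-up examples):

```lua
-- Step-by-step generation inside the extension's sandbox.
local p = sd.getp()                    -- default Processing object
p.prompt = "a lighthouse at dusk"
local cond = sd.cond(p.prompt)         -- prompt through clip
local negcond = sd.negcond("blurry")   -- negative prompt through clip
local latent = sd.sample(p, cond, negcond)
local decoded = sd.vae(latent)         -- variational auto-encoder
local image = sd.toimage(decoded)
ui.gallery.add(image)
```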
| torch.clamp(v1, min, max) | Clamp vector v1 between min and max. |
| torch.lerp(v1, v2, weight) | Linear interpolation of v1 and v2, by weight. v1 + weight * (v2 - v1). |
| torch.abs(v1) | Absolute value of v1. |
| torch.add(v1, v2) | Add v2 (vector or float) to v1. |
| torch.sub(v1, v2) | Subtract v2 (vector or float) from v1. |
| torch.mul(v1, v2) | Multiply v1 by v2 (vector or float). |
| torch.div(v1, v2) | Divide v1 by v2 (vector or float). |
| torch.size(v1) | Return the size of vector v1. |
| torch.new_zeros(size) | Take a Lua table, size, and create a zero-filled tensor. |
| torch.max(v) | Return the max value in v. |
| torch.min(v) | Return the min value in v. |
| torch.f2t(float) | Return a tensor from a float. |
| torch.t2f(tensor) | Return a float from a tensor. |
| torch.cat({table, with, tensors, ...}, dim) | Concatenate tensors in dimension dim. For example, textencodings can be concatenated in dimension 1. |
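A common use of the torch helpers is blending two text encodings before sampling. A sketch, assuming sd.clip2negcond() and sd.negcond2cond() convert the interpolated tensor back into the conditioning form that sd.sample() accepts (prompts and the 0.5 weight are made-up examples):

```lua
-- Blend two prompts by interpolating their text encodings.
local a = sd.textencode("an oil painting of a cat")
local b = sd.textencode("a photograph of a cat")
local mix = torch.lerp(a, b, 0.5)  -- a + 0.5 * (b - a)
local cond = sd.negcond2cond(sd.clip2negcond(mix))
local p = sd.getp()
local image = sd.toimage(sd.vae(sd.sample(p, cond, nil)))
ui.gallery.addc(image, "50/50 blend")
```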
```python
p = StableDiffusionProcessingTxt2Img(
    sd_model=shared.sd_model,
    outpath_samples=shared.opts.outdir_samples or shared.opts.outdir_txt2img_samples,
    outpath_grids=shared.opts.outdir_grids or shared.opts.outdir_txt2img_grids,
    prompt='',
    styles=[],
    negative_prompt='',
    seed=-1,
    subseed=-1,
    subseed_strength=0,
    seed_resize_from_h=0,
    seed_resize_from_w=0,
    seed_enable_extras=True,
    sampler_name='Euler a',
    batch_size=1,
    n_iter=1,
    steps=20,
    cfg_scale=7,
    width=512,
    height=512,
    restore_faces=False,
    tiling=False,
    enable_hr=False,
    denoising_strength=0,
    hr_scale=0,
    hr_upscaler=None,
    hr_second_pass_steps=0,
    hr_resize_x=0,
    hr_resize_y=0,
    override_settings=[],
)
```
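From Lua you normally do not construct this object yourself: sd.getp() returns one with the defaults listed above, and individual fields can be overridden before processing. A sketch, assuming the Lua field names mirror the Python keyword arguments (the prompt and values are made-up examples):

```lua
local p = sd.getp()          -- defaults as listed above
p.prompt = "a red bicycle"   -- field names assumed to match the Python side
p.steps = 30
p.cfg_scale = 8
p.width = 768
local image = sd.process(p)  -- sd.process also accepts a Processing object
ui.gallery.add(image)
```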