generator (torch.Generator, optional) -
A torch.Generator or a list of generators
to make generation deterministic.
latents (torch.FloatTensor, optional) -
Pre-generated noisy latents, sampled from a Gaussian distribution, to be used as inputs for image
generation. Can be used to tweak the same generation with different prompts. If not provided, a latents
tensor will be generated by sampling using the supplied random generator.
output_type (str, optional, defaults to "pil") -
The output format of the generated image. Choose between
PIL: PIL.Image.Image or np.array.
return_dict (bool, optional) -
Whether to return an ImagePipelineOutput instead of a plain tuple.
Returns |
ImagePipelineOutput or tuple |
~pipelines.utils.ImagePipelineOutput if return_dict is
True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images.
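The generator argument above controls the initial noise. A minimal sketch (plain torch, outside of any pipeline) of why seeding a generator makes sampling deterministic:

```python
import torch

# The same seed yields the same latents: this is why passing `generator`
# (or pre-generated `latents`) makes a run reproducible.
gen_a = torch.Generator().manual_seed(42)
latents_a = torch.randn(1, 3, 64, 64, generator=gen_a)

gen_b = torch.Generator().manual_seed(42)
latents_b = torch.randn(1, 3, 64, 64, generator=gen_b)

print(torch.equal(latents_a, latents_b))  # True
```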
LDMSuperResolutionPipeline |
class diffusers.LDMSuperResolutionPipeline |
( |
vqvae: VQModel |
unet: UNet2DModel |
scheduler: typing.Union[diffusers.schedulers.scheduling_ddim.DDIMScheduler, diffusers.schedulers.scheduling_pndm.PNDMScheduler, diffusers.schedulers.scheduling_lms_discrete.LMSDiscreteScheduler, diffusers.schedulers.scheduling_euler_discrete.EulerDiscreteScheduler, diffusers.schedulers.scheduling_euler_ancestral_discre... |
) |
Parameters |
vqvae (VQModel) -
Vector-quantized (VQ) VAE model to encode and decode images to and from latent representations.
unet (UNet2DModel) - U-Net architecture to denoise the encoded image.
scheduler (SchedulerMixin) -
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of
DDIMScheduler, LMSDiscreteScheduler, EulerDiscreteScheduler,
EulerAncestralDiscreteScheduler, DPMSolverMultistepScheduler, or PNDMScheduler.
A pipeline for image super-resolution using latent diffusion.
This class inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the |
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) |
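Any of the compatible schedulers listed above can be swapped in after the pipeline is loaded. A minimal sketch, assuming diffusers is installed (the helper name is our own):

```python
def with_ddim_scheduler(pipe):
    """Replace a pipeline's scheduler with DDIMScheduler, reusing its config.

    `pipe` is any loaded DiffusionPipeline whose scheduler is compatible.
    """
    from diffusers import DDIMScheduler

    # from_config builds a new scheduler from the existing scheduler's
    # configuration, so timestep settings carry over.
    pipe.scheduler = DDIMScheduler.from_config(pipe.scheduler.config)
    return pipe
```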
__call__ |
( |
image: typing.Union[torch.Tensor, PIL.Image.Image] = None |
batch_size: typing.Optional[int] = 1 |
num_inference_steps: typing.Optional[int] = 100 |
eta: typing.Optional[float] = 0.0 |
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None |
output_type: typing.Optional[str] = 'pil' |
return_dict: bool = True |
**kwargs |
) |
→
ImagePipelineOutput or tuple |
Parameters |
image (torch.Tensor or PIL.Image.Image) -
Image, or tensor representing an image batch, that will be used as the starting point for the
super-resolution process.
batch_size (int, optional, defaults to 1) β |
Number of images to generate. |
num_inference_steps (int, optional, defaults to 100) -
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
expense of slower inference.
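Putting the pieces together, a minimal usage sketch, assuming the public CompVis/ldm-super-resolution-4x-openimages checkpoint; the upscale helper name and the 128x128 resize are our own illustration, not part of the API:

```python
def upscale(image_path, num_inference_steps=100, seed=0):
    """Run LDM super-resolution on one image and return the PIL result."""
    import torch
    from PIL import Image
    from diffusers import LDMSuperResolutionPipeline

    pipe = LDMSuperResolutionPipeline.from_pretrained(
        "CompVis/ldm-super-resolution-4x-openimages"
    ).to("cuda" if torch.cuda.is_available() else "cpu")

    # The model upscales 4x; the input should be a small RGB image.
    low_res = Image.open(image_path).convert("RGB").resize((128, 128))

    # Seeding the generator makes the denoising loop deterministic.
    generator = torch.Generator(device=pipe.device).manual_seed(seed)
    return pipe(
        image=low_res,
        num_inference_steps=num_inference_steps,
        eta=1.0,
        generator=generator,
    ).images[0]
```

Since return_dict defaults to True, the call returns an ImagePipelineOutput and the result is read from its .images list.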