| Pipeline | Tasks |
|---|---|
| pipeline_latent_diffusion_superresolution.py | Super Resolution |
LDMTextToImagePipeline |
class diffusers.LDMTextToImagePipeline
( vqvae: Union[VQModel, AutoencoderKL]
bert: PreTrainedModel
tokenizer: PreTrainedTokenizer
unet: Union[UNet2DModel, UNet2DConditionModel]
scheduler: Union[DDIMScheduler, PNDMScheduler, LMSDiscreteScheduler] )
Parameters |
vqvae (VQModel) — Vector-quantized (VQ) model to encode and decode images to and from latent representations.
bert (LDMBertModel) — Text-encoder model based on the BERT architecture.
tokenizer (transformers.BertTokenizer) — Tokenizer of class BertTokenizer.
unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the |
library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.) |
__call__
( prompt: Union[str, List[str]]
height: Optional[int] = None
width: Optional[int] = None
num_inference_steps: Optional[int] = 50
guidance_scale: Optional[float] = 1.0
eta: Optional[float] = 0.0
generator: Optional[Union[torch.Generator, List[torch.Generator]]] = None
latents: Optional[torch.FloatTensor] = None
output_type: Optional[str] = 'pil'
return_dict: bool = True
**kwargs
) → ImagePipelineOutput or tuple
Parameters |
prompt (str or List[str]) — The prompt or prompts to guide the image generation.
height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
guidance_scale (float, optional, defaults to 1.0) — Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale corresponds to w in equation 2 of the Imagen paper. Guidance is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images closely linked to the text prompt, usually at the expense of lower image quality.
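The guidance combination above can be sketched in plain Python (apply_guidance is an illustrative helper, not part of the library; plain floats stand in for the model's noise-prediction tensors):

```python
# Classifier-free guidance: each denoising step predicts noise twice,
# once unconditionally and once conditioned on the text prompt, and
# guidance_scale (w) blends the two predictions.
def apply_guidance(noise_uncond, noise_text, guidance_scale):
    # w == 1.0 reduces to the plain text-conditional prediction;
    # w > 1.0 pushes the result further toward the conditional branch.
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

print(apply_guidance(0.2, 0.5, 1.0))  # 0.5: no extra guidance
print(apply_guidance(0.2, 0.5, 4.0))  # 1.4: amplified conditional signal
```

This is why the default guidance_scale of 1.0 leaves generation unguided, while larger values trade image quality for prompt adherence.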