feature_extractor: DPTFeatureExtractor |
) |
Parameters |
vae (AutoencoderKL) —
Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
text_encoder (CLIPTextModel) —
Frozen text-encoder. Stable Diffusion uses the text portion of CLIP, specifically the clip-vit-large-patch14 variant.
tokenizer (CLIPTokenizer) —
Tokenizer of class CLIPTokenizer.
unet (UNet2DConditionModel) — Conditional U-Net architecture to denoise the encoded image latents.
scheduler (SchedulerMixin) —
A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
Pipeline for text-guided image-to-image generation using Stable Diffusion.
This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all pipelines (such as downloading or saving, or running on a particular device).
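A minimal usage sketch, assuming the diffusers StableDiffusionDepth2ImgPipeline class and the stabilityai/stable-diffusion-2-depth checkpoint; the input filename is a placeholder, and a model download plus a CUDA device are required, so treat this as illustrative rather than a definitive recipe:

```python
import torch
from PIL import Image
from diffusers import StableDiffusionDepth2ImgPipeline

# Load the depth-conditioned pipeline; when no depth_map is passed to
# __call__, depth is estimated from `image` via the bundled
# DPTFeatureExtractor and depth-estimation model.
pipe = StableDiffusionDepth2ImgPipeline.from_pretrained(
    "stabilityai/stable-diffusion-2-depth", torch_dtype=torch.float16
).to("cuda")

init_image = Image.open("input.png").convert("RGB")  # placeholder path
result = pipe(
    prompt="a fantasy castle, highly detailed",
    image=init_image,
    negative_prompt="blurry, low quality",
    strength=0.7,
    guidance_scale=7.5,
).images[0]
result.save("output.png")
```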
__call__
(
prompt: typing.Union[str, typing.List[str]] = None |
image: typing.Union[torch.FloatTensor, PIL.Image.Image] = None |
depth_map: typing.Optional[torch.FloatTensor] = None |
strength: float = 0.8 |
num_inference_steps: typing.Optional[int] = 50 |
guidance_scale: typing.Optional[float] = 7.5 |
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None |
num_images_per_prompt: typing.Optional[int] = 1 |
eta: typing.Optional[float] = 0.0 |
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None |
prompt_embeds: typing.Optional[torch.FloatTensor] = None |
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None |
output_type: typing.Optional[str] = 'pil' |
return_dict: bool = True |
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None |
callback_steps: int = 1 |
) |
→ StableDiffusionPipelineOutput or tuple
Parameters |
prompt (str or List[str], optional) —
The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds instead.
image (torch.FloatTensor or PIL.Image.Image) —
Image, or tensor representing an image batch, that will be used as the starting point for the process.
strength (float, optional, defaults to 0.8) —
Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image will be used as a starting point, adding more noise to it the larger the strength. The number of denoising steps depends on the amount of noise initially added. When strength is 1, added noise will be maximum and the denoising process will run for the full number of iterations specified in num_inference_steps. A value of 1, therefore, essentially ignores image.
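The interaction between strength and num_inference_steps described above can be sketched as follows; effective_steps is a hypothetical helper that mirrors the common img2img scheduling logic, not the pipeline's exact internal code:

```python
# Sketch: `strength` determines how many of the requested denoising
# steps are actually run. strength=1.0 runs the full schedule (the
# input image is essentially ignored); smaller values skip the early,
# high-noise steps so the result stays closer to the input.
def effective_steps(num_inference_steps: int, strength: float) -> int:
    strength = min(max(strength, 0.0), 1.0)  # clamp to [0, 1]
    return min(int(num_inference_steps * strength), num_inference_steps)

print(effective_steps(50, 0.8))  # 40 denoising steps actually run
print(effective_steps(50, 1.0))  # 50: the full schedule
```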
num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference. This parameter will be modulated by strength.
guidance_scale (float, optional, defaults to 7.5) —
Guidance scale as defined in Classifier-Free Diffusion Guidance. guidance_scale is defined as w in equation 2 of the Imagen paper. Guidance is enabled by setting guidance_scale > 1. A higher guidance scale encourages the model to generate images that are closely linked to the text prompt, usually at the expense of lower image quality.
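The guidance formula itself is simple: the model predicts noise for an unconditional and a text-conditional input, and the two predictions are blended with guidance_scale (the w above). A minimal sketch with NumPy standing in for the real noise tensors (apply_cfg is an illustrative name, not a pipeline method):

```python
import numpy as np

# Classifier-free guidance: extrapolate from the unconditional noise
# prediction toward the text-conditional one by a factor of w.
def apply_cfg(noise_uncond, noise_text, guidance_scale=7.5):
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

uncond = np.zeros(4)
text = np.ones(4)
print(apply_cfg(uncond, text, 7.5))  # [7.5 7.5 7.5 7.5]
print(apply_cfg(uncond, text, 1.0))  # [1. 1. 1. 1.] — reduces to the text prediction
```

With guidance_scale = 1 the result equals the text-conditional prediction, which is why values at or below 1 effectively disable guidance.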
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).