(img2text) mode.

height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The height in pixels of the generated image.

width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) —
The width in pixels of the generated image.

data_type (int, optional, defaults to 1) —
The data type (either 0 or 1). Only used if you are loading a checkpoint which supports a data type embedding; this is added for compatibility with the UniDiffuser-v1 checkpoint.

num_inference_steps (int, optional, defaults to 50) —
The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.

guidance_scale (float, optional, defaults to 8.0) —
A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.

negative_prompt (str or List[str], optional) —
The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1). Used in text-conditioned image generation (text2img) mode.

num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt. Used in text2img (text-conditioned image generation) and img mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated.

num_prompts_per_image (int, optional, defaults to 1) —
The number of prompts to generate per image. Used in img2text (image-conditioned text generation) and text mode. If the mode is joint and both num_images_per_prompt and num_prompts_per_image are supplied, min(num_images_per_prompt, num_prompts_per_image) samples are generated.

eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.

generator (torch.Generator or List[torch.Generator], optional) —
A torch.Generator to make generation deterministic.

latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for joint image-text generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator. This assumes a full set of VAE, CLIP, and text latents; if supplied, this overrides the values of prompt_latents, vae_latents, and clip_latents.

prompt_latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for text generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.

vae_latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.

clip_latents (torch.FloatTensor, optional) —
Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image generation. Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.

prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument. Used in text-conditioned image generation (text2img) mode.

negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument. Used in text-conditioned image generation (text2img) mode.

output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image or np.array.

return_dict (bool, optional, defaults to True) —
Whether or not to return an ImageTextPipelineOutput instead of a plain tuple.

callback (Callable, optional) —
A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).

callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at every step.

Returns

ImageTextPipelineOutput or tuple

If return_dict is True, ImageTextPipelineOutput is returned; otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of generated texts.
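The joint-mode sampling rule described above can be made concrete with a small sketch. The helper name resolve_batch_size is hypothetical (it is not part of the pipeline's public API); it only illustrates how the number of generated samples follows from the mode and the two count arguments:

```python
def resolve_batch_size(mode: str,
                       num_images_per_prompt: int = 1,
                       num_prompts_per_image: int = 1) -> int:
    """Hypothetical sketch: how many samples are drawn for a given mode.

    In "joint" mode, when both counts are supplied, the pipeline
    generates min(num_images_per_prompt, num_prompts_per_image) samples.
    """
    if mode == "joint":
        return min(num_images_per_prompt, num_prompts_per_image)
    if mode in ("text2img", "img"):
        # image-producing modes use num_images_per_prompt
        return num_images_per_prompt
    if mode in ("img2text", "text"):
        # text-producing modes use num_prompts_per_image
        return num_prompts_per_image
    raise ValueError(f"unknown mode: {mode}")

print(resolve_batch_size("joint", num_images_per_prompt=4, num_prompts_per_image=2))  # 2
```

So a joint-mode call with num_images_per_prompt=4 and num_prompts_per_image=2 yields two image-text pairs, not four.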
The call function to the pipeline for generation.

disable_vae_slicing( )
Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

disable_vae_tiling( )
Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.

enable_vae_slicing( )
Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor into slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

enable_vae_tiling( )
Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and for processing larger images.

encode_prompt( prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt = None, prompt_embeds: Optional = None, negative_prompt_embeds: Optional = None, lora_scale: Optional = None, clip_skip: Optional = None )

Parameters

prompt (str or List[str], optional) —
The prompt to be encoded.

device (torch.device) —
The torch device.

num_images_per_prompt (int) —
The number of images that should be generated per prompt.

do_classifier_free_guidance (bool) —
Whether to use classifier-free guidance or not.

negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).

prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.

negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.

lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.

clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

reset_mode( )
Removes a manually set mode; after calling this, the pipeline will infer the mode from inputs.

set_image_mode( )
Manually set the generation mode to unconditional image generation.
images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or a NumPy array of shape (batch_size, height, width, num_channels).

text (List[str] or List[List[str]]) —
List of generated text strings of length batch_size or a list of lists of strings whose outer list has length batch_size.

Output class for joint image-text pipelines.
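As noted for reset_mode above, when no mode is set manually the pipeline infers it from the inputs it receives. A hypothetical sketch of that dispatch (infer_mode is an illustrative helper, not the pipeline's actual implementation, which also considers pre-generated latents and embeddings):

```python
def infer_mode(prompt=None, image=None) -> str:
    """Hypothetical sketch of input-based mode inference for a joint
    image-text pipeline. The real pipeline handles more input
    combinations (latents, embeddings, etc.)."""
    if prompt is not None and image is not None:
        # Simplification: this sketch forces an unambiguous choice.
        raise ValueError("supply either a prompt or an image, not both")
    if prompt is not None:
        return "text2img"   # text-conditioned image generation
    if image is not None:
        return "img2text"   # image-conditioned text generation
    return "joint"          # unconditional joint image-text generation

print(infer_mode(prompt="a cat"))  # text2img
print(infer_mode())                # joint
```

Calling set_image_mode (or the other set_*_mode methods) bypasses this inference until reset_mode is called.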
OpenVINO

🤗 Optimum provides Stable Diffusion pipelines compatible with OpenVINO to perform inference on a variety of Intel processors (see the full list of supported devices). You'll need to install 🤗 Optimum Intel with the --upgrade-strategy eager option to ensure optimum-intel is using the latest version.

```python
from optimum.intel import OVStableDiffusionPipeline

model_id = "runwayml/stable-diffusion-v1-5"
pipeline = OVStableDiffusionPipeline.from_pretrained(model_id, export=True)
prompt = "sailing ship in storm by Rembrandt"
image = pipeline(prompt).images[0]

# Don't forget to save the exported model
pipeline.save_pretrained("openvino-sd-v1-5")
```

To further speed up inference, statically reshape the model. If you change any parameters such as the output's height or width, you'll need to statically reshape your model again.
```python
# Define the shapes related to the inputs and desired outputs
batch_size, num_images, height, width = 1, 1, 512, 512

# Statically reshape the model
pipeline.reshape(batch_size, height, width, num_images)
# Compile the model before inference
pipeline.compile()

image = pipeline(
    prompt,
    height=height,
    width=width,
    num_images_per_prompt=num_images,
).images[0]
```