num_images_per_prompt (int, optional, defaults to 1) —
The number of images to generate per prompt.
eta (float, optional, defaults to 0.0) —
Corresponds to parameter eta (η) in the DDIM paper: https://arxiv.org/abs/2010.02502. Only applies to
schedulers.DDIMScheduler, and is ignored for other schedulers.
generator (torch.Generator, optional) —
One or a list of torch generator(s) to make generation deterministic.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
argument.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between
PIL (PIL.Image.Image) or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a
plain tuple.
callback (Callable, optional) —
A function that will be called every callback_steps steps during inference. The function will be
called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function will be called. If not specified, the callback will be
called at every step.
Returns |
StableDiffusionPipelineOutput or tuple |
StableDiffusionPipelineOutput if return_dict is True, otherwise a tuple. When returning a tuple, the first element is a list with the generated images, and the second element is a list of bools denoting whether the corresponding generated image likely represents "not-safe-for-work" (nsfw) content, according to the safety_checker.
Function invoked when calling the pipeline for generation. |
Examples: |
>>> import torch |
>>> import requests |
>>> from PIL import Image |
>>> from diffusers import StableDiffusionDepth2ImgPipeline |
>>> pipe = StableDiffusionDepth2ImgPipeline.from_pretrained( |
... "stabilityai/stable-diffusion-2-depth", |
... torch_dtype=torch.float16, |
... ) |
>>> pipe.to("cuda") |
>>> url = "http://images.cocodataset.org/val2017/000000039769.jpg" |
>>> init_image = Image.open(requests.get(url, stream=True).raw) |
>>> prompt = "two tigers" |
>>> n_prompt = "bad, deformed, ugly, bad anatomy"
>>> image = pipe(prompt=prompt, image=init_image, negative_prompt=n_prompt, strength=0.7).images[0]
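The callback argument described above can be any callable matching the callback(step, timestep, latents) signature. A minimal sketch (the StepLogger name is hypothetical, not part of diffusers) that records progress during inference:

```python
# Illustrative sketch (not part of diffusers): a stateful callback object that
# matches the callback(step, timestep, latents) signature described above.
class StepLogger:
    def __init__(self):
        self.steps = []  # records (step, timestep) pairs as inference progresses

    def __call__(self, step: int, timestep: int, latents) -> None:
        self.steps.append((step, int(timestep)))


# Hypothetical usage with a loaded pipeline (requires the pipe from the example above):
# logger = StepLogger()
# image = pipe(prompt=prompt, image=init_image, callback=logger, callback_steps=5).images[0]
```

With callback_steps=5, the pipeline would invoke the logger on every fifth denoising step.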
enable_attention_slicing
( slice_size: typing.Union[str, int, NoneType] = 'auto' )
Parameters |
slice_size (str or int, optional, defaults to "auto") —
When "auto", halves the input to the attention heads, so attention will be computed in two steps. If
"max", the maximum amount of memory will be saved by running only one slice at a time. If a number is
provided, uses as many slices as attention_head_dim // slice_size. In this case, attention_head_dim
must be a multiple of slice_size.
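The mapping from slice_size to the resulting number of attention slices can be sketched as follows (resolve_num_slices is an illustrative helper, not a diffusers function):

```python
def resolve_num_slices(attention_head_dim: int, slice_size="auto") -> int:
    """Illustrative sketch of how slice_size maps to a slice count."""
    if slice_size == "auto":
        # the input to the attention heads is halved: attention runs in two steps
        return 2
    if slice_size == "max":
        # one slice at a time, for the maximum memory saving
        return attention_head_dim
    if attention_head_dim % slice_size != 0:
        raise ValueError("attention_head_dim must be a multiple of slice_size")
    return attention_head_dim // slice_size
```

For example, with attention_head_dim=8, "auto" yields 2 slices, "max" yields 8, and slice_size=4 yields 2.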