latents (torch.FloatTensor, optional) —
Can be used to tweak the same generation with different prompts. If not provided, a latents tensor is generated by sampling using the supplied random generator.
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, text embeddings are generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs (prompt weighting). If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
output_type (str, optional, defaults to "pil") —
The output format of the generated image. Choose between PIL.Image or np.array.
return_dict (bool, optional, defaults to True) —
Whether or not to return a StableDiffusionPipelineOutput instead of a plain tuple.
callback (Callable, optional) —
A function that is called every callback_steps steps during inference. The function is called with the following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor).
callback_steps (int, optional, defaults to 1) —
The frequency at which the callback function is called. If not specified, the callback is called at every step.
cross_attention_kwargs (dict, optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined in self.processor.
gligen_normalize_constant (float, optional, defaults to 28.7) —
The normalization value for the image embedding.
clip_skip (int, optional) —
Number of layers to skip in CLIP when computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used for computing the prompt embeddings.
Returns
StableDiffusionPipelineOutput or tuple
If return_dict is True, a StableDiffusionPipelineOutput is returned; otherwise a tuple is returned where the first element is a list with the generated images and the second element is a list of bools indicating whether the corresponding generated image contains “not-safe-for-work” (nsfw) content.
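As a sketch of the callback argument described above (only the documented callback(step, timestep, latents) signature is taken from this page; the logging body and the name log_progress are hypothetical):

```python
progress_log = []

def log_progress(step, timestep, latents):
    # Matches the documented signature callback(step, timestep, latents);
    # here it only records which denoising steps ran.
    progress_log.append((step, timestep))

# With callback_steps=1 the pipeline would invoke it once per step, e.g.:
for step, t in enumerate([981, 961, 941]):
    log_progress(step, t, latents=None)
```

It would be passed to the pipeline as pipe(..., callback=log_progress, callback_steps=1).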
The call function to the pipeline for generation.

Examples:

>>> import torch
>>> from diffusers import StableDiffusionGLIGENTextImagePipeline
>>> from diffusers.utils import load_image

>>> # Insert objects described by image at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
...     "anhnct/Gligen_Inpainting_Text_Image", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> input_image = load_image(
...     "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png"
... )
>>> prompt = "a backpack"
>>> boxes = [[0.2676, 0.4088, 0.4773, 0.7183]]
>>> phrases = None
>>> gligen_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/backpack.jpeg"
... )

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_inpaint_image=input_image,
...     gligen_boxes=boxes,
...     gligen_images=[gligen_image],
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images
>>> images[0].save("./gligen-inpainting-text-image-box.jpg")
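The gligen_boxes values above are [xmin, ymin, xmax, ymax] coordinates expressed as fractions of the image width and height. A small helper (illustrative only, not part of the pipeline API) for converting a pixel-space box:

```python
def normalize_box(box_px, width, height):
    # Convert a pixel-space (xmin, ymin, xmax, ymax) box into the
    # normalized [0, 1] coordinates that gligen_boxes expects.
    xmin, ymin, xmax, ymax = box_px
    return [xmin / width, ymin / height, xmax / width, ymax / height]

# A box covering the top-left quarter of a 512x512 image:
normalize_box((0, 0, 256, 256), 512, 512)
```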
>>> # Generate an image described by the prompt and
>>> # insert objects described by text and image at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
...     "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a flower sitting on the beach"
>>> boxes = [[0.0, 0.09, 0.53, 0.76]]
>>> phrases = ["flower"]
>>> gligen_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/pexels-pixabay-60597.jpg"
... )

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_images=[gligen_image],
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images
>>> images[0].save("./gligen-generation-text-image-box.jpg")
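gligen_scheduled_sampling_beta implements scheduled sampling from the GLIGEN paper: the grounding inputs are applied only during an initial fraction of the denoising steps, after which generation proceeds as ordinary diffusion (with beta=1, as above, every step is grounded). A rough sketch of the idea; the truncating int rounding here is an assumption for illustration, not the library's exact behavior:

```python
def grounded_step_count(num_inference_steps, beta):
    # Grounding tokens are used for roughly the first `beta` fraction
    # of the denoising steps; the remaining steps run unconstrained.
    return int(beta * num_inference_steps)

grounded_step_count(50, 1)    # all 50 steps grounded
grounded_step_count(50, 0.5)  # only the first 25 steps grounded
```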
>>> # Generate an image described by the prompt and
>>> # transfer style described by image at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
...     "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a dragon flying on the sky"
>>> boxes = [[0.4, 0.2, 1.0, 0.8], [0.0, 1.0, 0.0, 1.0]]  # Set `[0.0, 1.0, 0.0, 1.0]` for the style

>>> gligen_image = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
... )
>>> gligen_placeholder = load_image(
...     "https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/landscape.png"
... )