>>> import torch
>>> from diffusers import StableDiffusionGLIGENPipeline
>>> from diffusers.utils import load_image

>>> # Insert objects described by text at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained(
...     "masterful/gligen-1-4-inpainting-text-box", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> input_image = load_image(
...     "https://hf.co/datasets/huggingface/documentation-images/resolve/main/diffusers/gligen/livingroom_modern.png"
... )
>>> prompt = "a birthday cake"
>>> boxes = [[0.2676, 0.6088, 0.4773, 0.7183]]
>>> phrases = ["a birthday cake"]

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_inpaint_image=input_image,
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images
>>> images[0].save("./gligen-1-4-inpainting-text-box.jpg")
>>> # Generate an image described by the prompt and
>>> # insert objects described by text at the region defined by bounding boxes
>>> pipe = StableDiffusionGLIGENPipeline.from_pretrained(
... "masterful/gligen-1-4-generation-text-box", variant="fp16", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")
>>> prompt = "a waterfall and a modern high speed train running through the tunnel in a beautiful forest with fall foliage"
>>> boxes = [[0.1387, 0.2051, 0.4277, 0.7090], [0.4980, 0.4355, 0.8516, 0.7266]]
>>> phrases = ["a waterfall", "a modern high speed train running through the tunnel"]
>>> images = pipe(
... prompt=prompt,
... gligen_phrases=phrases,
... gligen_boxes=boxes,
... gligen_scheduled_sampling_beta=1,
... output_type="pil",
... num_inference_steps=50,
... ).images
>>> images[0].save("./gligen-1-4-generation-text-box.jpg")

enable_vae_slicing()

Enable sliced VAE decoding. When this option is enabled, the VAE will split the input tensor in slices to compute decoding in several steps. This is useful to save some memory and allow larger batch sizes.

disable_vae_slicing()

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method will go back to computing decoding in one step.

enable_vae_tiling()

Enable tiled VAE decoding. When this option is enabled, the VAE will split the input tensor into tiles to compute decoding and encoding in several steps. This is useful for saving a large amount of memory and to allow processing larger images.

disable_vae_tiling()

Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method will go back to computing decoding in one step.
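Taken together, these four switches trade a little speed for a smaller decode-time memory footprint. A minimal sketch, assuming pipe, prompt, phrases, and boxes are defined as in the examples above:

>>> # Reduce VAE decode memory: slice over the batch, tile over the spatial dims
>>> pipe.enable_vae_slicing()
>>> pipe.enable_vae_tiling()
>>> images = pipe(prompt=prompt, gligen_phrases=phrases, gligen_boxes=boxes).images
>>> # Restore single-step decoding once memory pressure is gone
>>> pipe.disable_vae_slicing()
>>> pipe.disable_vae_tiling()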
enable_model_cpu_offload(gpu_id: Optional = None, device: Union = "cuda")

Parameters:
- gpu_id (int, optional) — The ID of the accelerator that shall be used in inference. If not specified, it will default to 0.
- device (torch.device or str, optional, defaults to "cuda") — The PyTorch device type of the accelerator that shall be used in inference. If not specified, it will default to "cuda".

Offloads all models to CPU using accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better due to the iterative execution of the unet.
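A minimal sketch of whole-model offloading; it assumes pipe was just loaded with from_pretrained and that pipe.to("cuda") has not been called, since offloading manages device placement itself:

>>> # Whole-model CPU offload: each sub-model moves to the GPU only while it runs
>>> pipe.enable_model_cpu_offload()
>>> images = pipe(prompt=prompt, gligen_phrases=phrases, gligen_boxes=boxes).images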
prepare_latents(batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None)

enable_fuser(enabled=True)

encode_prompt(prompt, device, num_images_per_prompt, do_classifier_free_guidance, ...)

Parameters:
- prompt (str or List[str]) — prompt to be encoded
- device (torch.device) — torch device
- num_images_per_prompt (int) — number of images that should be generated per prompt
- do_classifier_free_guidance (bool) — whether to use classifier-free guidance or not
- negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
- prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
- negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
- lora_scale (float, optional) — A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- clip_skip (int, optional) — Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.
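A minimal sketch of precomputing embeddings with encode_prompt so they can be reused across calls; the two-tensor return value shown here is an assumption based on the matching method in other Stable Diffusion pipelines, so treat it as illustrative rather than authoritative:

>>> # Assumed return: (prompt_embeds, negative_prompt_embeds)
>>> prompt_embeds, negative_prompt_embeds = pipe.encode_prompt(
...     prompt,
...     device="cuda",
...     num_images_per_prompt=1,
...     do_classifier_free_guidance=True,
...     negative_prompt="low quality, blurry",
... )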
StableDiffusionGLIGENTextImagePipeline

class diffusers.StableDiffusionGLIGENTextImagePipeline(vae: AutoencoderKL, text_encoder: CLIPTextModel, tokenizer: CLIPTokenizer, ...)

Parameters:
- vae (AutoencoderKL) — Variational Auto-Encoder (VAE) model to encode and decode images to and from latent representations.
- text_encoder (CLIPTextModel) — Frozen text-encoder (clip-vit-large-patch14).
- tokenizer (CLIPTokenizer) — A CLIPTokenizer to tokenize text.
- processor (CLIPProcessor) — A CLIPProcessor to process the reference image.
- image_encoder (CLIPVisionModelWithProjection) — Frozen image-encoder (clip-vit-large-patch14).
- image_project (CLIPImageProjection) — A CLIPImageProjection to project image embeddings into the phrase embedding space.
- unet (UNet2DConditionModel) — A UNet2DConditionModel to denoise the encoded image latents.
- scheduler (SchedulerMixin) — A scheduler to be used in combination with unet to denoise the encoded image latents. Can be one of DDIMScheduler, LMSDiscreteScheduler, or PNDMScheduler.
- safety_checker (StableDiffusionSafetyChecker) — Classification module that estimates whether generated images could be considered offensive or harmful. Please refer to the model card for more details about a model's potential harms.
- feature_extractor (CLIPImageProcessor) — A CLIPImageProcessor to extract features from generated images; used as inputs to the safety_checker.

Pipeline for text-to-image generation using Stable Diffusion with Grounded-Language-to-Image Generation (GLIGEN). This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods the library implements for all the pipelines (such as downloading or saving, running on a particular device, etc.).
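A minimal usage sketch in the same shape as the text-box examples above, using the gligen_images argument documented below; the checkpoint path and reference image are illustrative placeholders rather than verified model IDs:

>>> import torch
>>> from diffusers import StableDiffusionGLIGENTextImagePipeline
>>> from diffusers.utils import load_image

>>> # Placeholder checkpoint path: substitute a real GLIGEN text-image checkpoint
>>> pipe = StableDiffusionGLIGENTextImagePipeline.from_pretrained(
...     "anhnct/Gligen_Text_Image", torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a flower sitting on the beach"
>>> boxes = [[0.0, 0.09, 0.53, 0.76]]
>>> phrases = ["flower"]
>>> gligen_image = load_image("flower.png")  # reference image for the boxed region

>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=phrases,
...     gligen_images=[gligen_image],
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images
>>> images[0].save("./gligen-generation-text-image-box.jpg")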
__call__(prompt: Union = None, height: Optional = None, width: Optional = None, num_inference_steps: int = 50, guidance_scale: float = 7.5, gligen_scheduled_sampling_beta: float = 0.3, gligen_phrases, ...)

Parameters:
- prompt (str or List[str], optional) — The prompt or prompts to guide image generation. If not defined, you need to pass prompt_embeds.
- height (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The height in pixels of the generated image.
- width (int, optional, defaults to self.unet.config.sample_size * self.vae_scale_factor) — The width in pixels of the generated image.
- num_inference_steps (int, optional, defaults to 50) — The number of denoising steps. More denoising steps usually lead to a higher quality image at the expense of slower inference.
- guidance_scale (float, optional, defaults to 7.5) — A higher guidance scale value encourages the model to generate images closely linked to the text prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1.
- gligen_phrases (List[str]) — The phrases to guide what to include in each of the regions defined by the corresponding gligen_boxes. There should only be one phrase per bounding box.
- gligen_images (List[PIL.Image.Image]) — The images to guide what to include in each of the regions defined by the corresponding gligen_boxes. There should only be one image per bounding box.
- input_phrases_mask (int or List[int]) — Per-phrase mask input, aligned with the corresponding gligen_phrases.
- input_images_mask (int or List[int]) — Per-image mask input, aligned with the corresponding gligen_images.
- gligen_boxes (List[List[float]]) — The bounding boxes that identify rectangular regions of the image that are going to be filled with the content described by the corresponding gligen_phrases. Each rectangular box is defined as a List[float] of 4 elements [xmin, ymin, xmax, ymax], where each value is between 0 and 1.
- gligen_inpaint_image (PIL.Image.Image, optional) — The input image, if provided, is inpainted with objects described by the gligen_boxes and gligen_phrases. Otherwise, it is treated as a generation task on a blank input image.
- gligen_scheduled_sampling_beta (float, defaults to 0.3) — Scheduled Sampling factor from GLIGEN: Open-Set Grounded Text-to-Image Generation. The Scheduled Sampling factor is only varied for scheduled sampling during inference for improved quality and controllability.
- negative_prompt (str or List[str], optional) — The prompt or prompts to guide what to not include in image generation. If not defined, you need to pass negative_prompt_embeds instead. Ignored when not using guidance (guidance_scale < 1).
- num_images_per_prompt (int, optional, defaults to 1) — The number of images to generate per prompt.
- eta (float, optional, defaults to 0.0) — Corresponds to parameter eta (η) from the DDIM paper. Only applies to the DDIMScheduler, and is ignored in other schedulers.
- generator (torch.Generator or List[torch.Generator], optional) — A torch.Generator to make generation deterministic.
- latents (torch.FloatTensor, optional) — Pre-generated noisy latents sampled from a Gaussian distribution, to be used as inputs for image