>>> images = pipe(
...     prompt=prompt,
...     gligen_phrases=[
...         "dragon",
...         "placeholder",
...     ],  # Any text can stand in for "placeholder" because it is masked out below
...     gligen_images=[
...         gligen_placeholder,
...         gligen_image,
...     ],  # Any image can stand in for gligen_placeholder because it is masked out below
...     input_phrases_mask=[1, 0],  # Set 0 to mask out the placeholder phrase
...     input_images_mask=[0, 1],  # Set 0 to mask out the placeholder image
...     gligen_boxes=boxes,
...     gligen_scheduled_sampling_beta=1,
...     output_type="pil",
...     num_inference_steps=50,
... ).images
>>> images[0].save("./gligen-generation-text-image-box-style-transfer.jpg")

enable_vae_slicing()
Enable sliced VAE decoding. When this option is enabled, the VAE splits the input tensor into slices and computes decoding in several steps. This is useful to save some memory and allow larger batch sizes.

disable_vae_slicing()
Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method goes back to computing decoding in one step.

enable_vae_tiling()
Enable tiled VAE decoding. When this option is enabled, the VAE splits the input tensor into tiles and computes decoding and encoding in several steps. This is useful for saving a large amount of memory and for processing larger images.

disable_vae_tiling()
Disable tiled VAE decoding. If enable_vae_tiling was previously enabled, this method goes back to computing decoding in one step.

enable_model_cpu_offload(gpu_id: Optional = None, device: Union = 'cuda')
Parameters:
- gpu_id (int, optional) — The ID of the accelerator to use for inference. Defaults to 0 if not specified.
- device (torch.device or str, optional, defaults to "cuda") — The PyTorch device type of the accelerator to use for inference. Defaults to "cuda" if not specified.
Offloads all models to the CPU using 🤗 Accelerate, reducing memory usage with a low impact on performance. Compared to enable_sequential_cpu_offload, this method moves one whole model at a time to the GPU when its forward method is called, and the model remains on the GPU until the next model runs. Memory savings are lower than with enable_sequential_cpu_offload, but performance is much better because the UNet is executed iteratively.

prepare_latents(batch_size, num_channels_latents, height, width, dtype, device, generator, latents=None)

enable_fuser(enabled=True)

complete_mask(has_mask, max_obj...)
Complete the masks corresponding to phrases and images.

crop(im, new_width, new_height)
Crop the input image to the specified dimensions.

draw_inpaint_mask_from_boxes(boxes, size)
Create an inpainting mask based on the given boxes. This function generates an inpainting mask using the provided boxes to mark the regions that need to be inpainted.

encode_prompt(prompt, device, num_images_per_prompt, do_classifier_free_guidance, negative_prompt=None, prompt_embeds: Optional = None, negative_prompt_embeds: Optional = None, lora_scale: Optional = None, clip_skip: Optional = None)
Parameters:
- prompt (str...) — prompt to be encoded
- device (torch.device) — torch device
- num_images_per_prompt (int) — number of images that should be generated per prompt
- do_classifier_free_guidance (bool) — whether to use classifier-free guidance or not
- negative_prompt (str or List[str], optional) — The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
- prompt_embeds (torch.FloatTensor, optional) — Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings are generated from the prompt input argument.
- negative_prompt_embeds (torch.FloatTensor, optional) — Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds are generated from the negative_prompt input argument.
- lora_scale (float, optional) — A LoRA scale applied to all LoRA layers of the text encoder if LoRA layers are loaded.
- clip_skip (int, optional) — Number of layers to skip from CLIP while computing the prompt embeddings. A value of 1 means the output of the pre-final layer is used for computing the prompt embeddings.
Encodes the prompt into text encoder hidden states.

get_clip_feature(input, normalize_constant, device, is_image=False)
Get image and phrase embeddings using a pretrained CLIP model. The image embedding is transformed into the phrases embedding space through a projection.

get_cross_attention_kwargs_with_grounded(hidden_size, gligen_phrases, gligen_images, gligen_boxes, input_phrases_mask, input_images_mask, repeat_batch, normalize_constant, max_objs, device)
Prepare the cross-attention kwargs containing information about the grounded input (boxes, mask, image embedding, phrases embedding).

get_cross_attention_kwargs_without_grounded(hidden_size, repeat_batch, max_objs, device)
Prepare the cross-attention kwargs without information about the grounded input (boxes, mask, image embedding, phrases embedding); all are zero tensors.

target_size_center_crop(im, new_hw)
Crop and resize the image to the target size while keeping the center.

StableDiffusionPipelineOutput
class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput(images: Union, nsfw_content_detect...)
Parameters:
- images — List of denoised PIL images of length batch_size or a NumPy array of shape (batch_size, height, width, num_channels).
- nsfw_content_detected (List[bool]) — List indicating whether the corresponding generated image contains "not-safe-for-work" (NSFW) content, or None if safety checking could not be performed.
Output class for Stable Diffusion pipelines.
Textual Inversion

Textual Inversion is a training technique for personalizing image generation models with just a few example images of what you want it to learn. This technique works by learning and updating the text embeddings (the new embeddings are tied to a special word you must use in the prompt) to match the example images you provide.

cd diffusers
pip install .

Navigate to the example folder with the training script and install the required dependencies for the script you're using (PyTorch shown here; a Flax variant also exists):

cd examples/textual_inversion
pip install -r requirements.txt

🤗 Accelerate is a library for helping you train on multiple GPUs/TPUs or with mixed precision. It automatically configures your training setup based on your hardware and environment. Take a look at the 🤗 Accelerate Quick tour to learn more. Initialize an 🤗 Accelerate environment: ...

write_basic_config()

Lastly, if you want to train a model on your own dataset, take a look at the Create a dataset for training guide to learn how to create a dataset that works with the training script.

The following sections highlight parts of the training script that are important for understanding how to modify it, ...
--gradient_accumulation_steps=4

Some other basic and important parameters to specify include:

- --pretrained_model_name_or_path: the name of the model on the Hub or a local path to the pretrained model
- --train_data_dir: path to a folder containing the training dataset (example images)
- --output_dir: where to save the tr...
if args.tokenizer_name:
    tokenizer = CLIPTokenizer.from_pretrained(args.tokenizer_name)
elif args.pretrained_model_name_or_path:
    tokenizer = CLIPTokenizer.from_pretrained(args.pretrained_model_name_or_path, subfolder="tokenizer")

# Load scheduler and models
noise_scheduler = DDPMScheduler.from_pretrained(args.pretrained_model_name_or_path, subfolder="scheduler")
text_encoder = CLIPTextModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="text_encoder", revision=args.revision
)
vae = AutoencoderKL.from_pretrained(args.pretrained_model_name_or_path, subfolder="vae", revision=args.revision)
unet = UNet2DConditionModel.from_pretrained(
    args.pretrained_model_name_or_path, subfolder="unet", revision=args.revision
)

Next, the special placeholder token is added to the tokenizer, and the embedding is adjusted to account for the new token. Then, the script creates a dataset from the TextualInversionDataset:

train_dataset = TextualInversionDataset(
    data_root=args.train_data_dir,
    tokenizer=tokenizer,
    size=args.resolution,
    placeholder_token=(" ".join(tokenizer.convert_ids_to_tokens(placeholder_token_ids))),
    repeats=args.repeats,
    learnable_property=args.learnable_property,
    center_crop=args.center_crop,
    set="train",
)
train_dataloader = torch.utils.data.DataLoader(
    train_dataset, batch_size=args.train_batch_size, shuffle=True, num_workers=args.dataloader_num_workers
)

Finally, the training loop handles everything else, from predicting the noise residual to updating the embedding weights of the special placeholder token. If you want to learn more about how the training loop works, check out the Understanding pipelines, models and schedulers tutorial, which breaks down the basic pattern of the denoising process.
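The core of that loop can be sketched as a single step. Here `noise_scheduler`, `text_encoder`, `vae`, and `unet` are the models loaded earlier, `batch` is one batch from `train_dataloader`, and the function name and `weight_dtype` default are assumptions for illustration:

```python
import torch
import torch.nn.functional as F


def training_step(batch, vae, unet, text_encoder, noise_scheduler, weight_dtype=torch.float32):
    """One hypothetical textual-inversion step: predict noise, return MSE loss."""
    # Encode images into latent space and scale by the VAE's factor
    latents = vae.encode(batch["pixel_values"].to(dtype=weight_dtype)).latent_dist.sample()
    latents = latents * vae.config.scaling_factor

    # Sample noise and a random timestep for each image in the batch
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],), device=latents.device
    )

    # Forward diffusion: add noise to the latents at the sampled timesteps
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # Condition on the prompt, which contains the placeholder token being learned
    encoder_hidden_states = text_encoder(batch["input_ids"])[0]

    # Predict the noise residual and compute the loss against the true noise
    model_pred = unet(noisy_latents, timesteps, encoder_hidden_states).sample
    return F.mse_loss(model_pred.float(), noise.float(), reduction="mean")
```

Backpropagating this loss updates only the placeholder token's embedding, since the script freezes all other weights.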
local_dir = "./cat" |
snapshot_download( |
"diffusers/cat_toy_example", local_dir=local_dir, repo_type="dataset", ignore_patterns=".gitattributes" |
) Set the environment variable MODEL_NAME to a model id on the Hub or a path to a local model, and DATA_DIR to the path where you just downloaded the cat images to. The script creates and saves the following files to your repository: learned_embeds.bin: the learned embedding vectors corresponding to your example image... |
--num_validation_images=4
--validation_steps=100

PyTorch:

export MODEL_NAME="runwayml/stable-diffusion-v1-5"
export DATA_DIR="./cat"

accelerate launch textual_inversion.py \
--pretrained_model_name_or_path=$MODEL_NAME \
--train_data_dir=$DATA_DIR \