cross_attention_kwargs (dict, optional):
    A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under
    self.processor in diffusers.models.attention_processor.
noise_level (int, optional, defaults to 0):
    The amount of noise to add to the upscaled image. Must be in the range [0, 1000).
clean_caption (bool, optional, defaults to True):
    Whether or not to clean the caption before creating embeddings. Requires beautifulsoup4 and ftfy to
    be installed. If the dependencies are not installed, the embeddings will be created from the raw
    prompt.

Returns:
    ~pipelines.stable_diffusion.IFPipelineOutput or tuple:
    ~pipelines.stable_diffusion.IFPipelineOutput if return_dict is True, otherwise a tuple. When
    returning a tuple, the first element is a list with the generated images, and the second element is
    a list of bools denoting whether the corresponding generated image likely represents
    "not-safe-for-work" (nsfw) or watermarked ...
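The two return shapes described above can be handled uniformly. The sketch below uses a hypothetical stand-in dataclass for the real IFPipelineOutput (the field names mirror the description, but the class itself is illustrative only, so it can run without loading a pipeline):

```python
from dataclasses import dataclass
from typing import List, Optional

# Hypothetical stand-in for ~pipelines.stable_diffusion.IFPipelineOutput;
# only used here so the example runs without a model.
@dataclass
class FakeIFPipelineOutput:
    images: List[str]
    nsfw_detected: Optional[List[bool]]
    watermark_detected: Optional[List[bool]]

def unpack(result):
    """Handle both return_dict=True (output object) and return_dict=False (tuple)."""
    if isinstance(result, tuple):
        # tuple form: (images, nsfw flags, ...)
        images, nsfw = result[0], result[1]
    else:
        # output-object form: named attributes
        images, nsfw = result.images, result.nsfw_detected
    return images, nsfw

as_object = FakeIFPipelineOutput(images=["img0"], nsfw_detected=[False], watermark_detected=[False])
as_tuple = (["img0"], [False], [False])
assert unpack(as_object) == unpack(as_tuple) == (["img0"], [False])
```

Writing the consumer this way keeps it working whether or not return_dict is set when calling the pipeline.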
Function invoked when calling the pipeline for generation.

Examples:

>>> from diffusers import IFInpaintingPipeline, IFInpaintingSuperResolutionPipeline, DiffusionPipeline
>>> from diffusers.utils import pt_to_pil |
>>> import torch |
>>> from PIL import Image |
>>> import requests |
>>> from io import BytesIO |
>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/person.png" |
>>> response = requests.get(url) |
>>> original_image = Image.open(BytesIO(response.content)).convert("RGB") |
>>> url = "https://huggingface.co/datasets/diffusers/docs-images/resolve/main/if/glasses_mask.png" |
>>> response = requests.get(url) |
>>> mask_image = Image.open(BytesIO(response.content)) |
>>> pipe = IFInpaintingPipeline.from_pretrained( |
... "DeepFloyd/IF-I-XL-v1.0", variant="fp16", torch_dtype=torch.float16 |
... ) |
>>> pipe.enable_model_cpu_offload() |
>>> prompt = "blue sunglasses" |
>>> prompt_embeds, negative_embeds = pipe.encode_prompt(prompt) |
>>> image = pipe( |
... image=original_image, |
... mask_image=mask_image, |
... prompt_embeds=prompt_embeds, |
... negative_prompt_embeds=negative_embeds, |
... output_type="pt", |
... ).images |
>>> # save intermediate image |
>>> pil_image = pt_to_pil(image) |
>>> pil_image[0].save("./if_stage_I.png") |
>>> super_res_1_pipe = IFInpaintingSuperResolutionPipeline.from_pretrained( |
... "DeepFloyd/IF-II-L-v1.0", text_encoder=None, variant="fp16", torch_dtype=torch.float16 |
... ) |
>>> super_res_1_pipe.enable_model_cpu_offload() |
>>> image = super_res_1_pipe( |
... image=image, |
... mask_image=mask_image, |
... original_image=original_image, |
... prompt_embeds=prompt_embeds, |
... negative_prompt_embeds=negative_embeds, |
... ).images |
>>> image[0].save("./if_stage_II.png")

encode_prompt

( prompt: Union[str, List[str]], do_classifier_free_guidance: bool = True, num_images_per_prompt: int = 1, device: Optional[torch.device] = None, negative_prompt: Optional[Union[str, List[str]]] = None, prompt_embeds: Optional[torch.FloatTensor] = None, negative_prompt_embeds: Optional[torch.FloatTensor] = None, clean_caption: bool = False )

Parameters

prompt (str or List[str]):
    prompt to be encoded
do_classifier_free_guidance (bool, optional, defaults to True):
    whether to use classifier free guidance or not
num_images_per_prompt (int, optional, defaults to 1):
    number of images that should be generated per prompt
device (torch.device, optional):
    torch device to place the resulting embeddings on
negative_prompt (str or List[str], optional):
    The prompt or prompts not to guide the image generation. If not defined, one has to pass
    negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale
    is less than 1).
prompt_embeds (torch.FloatTensor, optional):
    Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
    provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional):
    Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
    weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
    argument.
clean_caption (bool, defaults to False):
    If True, the function will preprocess and clean the provided caption before encoding.

Encodes the prompt into text encoder hidden states.
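With classifier-free guidance enabled, the positive and negative embeddings that encode_prompt returns are typically stacked so that one batched forward pass covers both the unconditional and the conditional half of the guidance computation. The sketch below illustrates that layout with plain Python lists standing in for torch tensors (the real pipeline uses tensor concatenation; this is a conceptual sketch only):

```python
# Conceptual sketch of classifier-free-guidance batching. Plain lists stand in
# for torch tensors; the function name `cfg_batch` is hypothetical.
def cfg_batch(prompt_embeds, negative_prompt_embeds, num_images_per_prompt=1):
    # Each embedding row is repeated once per requested image.
    pos = prompt_embeds * num_images_per_prompt
    neg = negative_prompt_embeds * num_images_per_prompt
    # Unconditional (negative) rows are stacked before the conditional ones,
    # so a single batched UNet call produces both predictions needed for guidance.
    return neg + pos

batch = cfg_batch([["cat-embed"]], [["uncond-embed"]], num_images_per_prompt=2)
# 2 unconditional rows followed by 2 conditional rows
assert batch == [["uncond-embed"], ["uncond-embed"], ["cat-embed"], ["cat-embed"]]
```

This is why passing prompt_embeds and negative_prompt_embeds of matching shape matters: the two halves must line up row-for-row in the batched pass.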
Textual Inversion

Textual Inversion is a training method for personalizing models by learning new text embeddings from a few example images. The file produced from training is extremely small (a few KBs) and the new embeddings can be loaded into the text encoder. TextualInversionLoaderMixin provides a function for load...

pretrained_model_name_or_path (str or os.PathLike or List[str or os.PathLike] or Dict or List[Dict]):
    Can be either one of the following or a list of them:
    A string, the model id (for example sd-concepts-library/low-poly-hd-logos-icons) of a
    pretrained model hosted on the Hub.
    A path to a directory (for example ./my_text_inversion_directory/) containing the textual
    inversion weights.
    A path to a file (for example ./my_text_inversions.pt) containing textual inversion weights.
    A torch state dict.
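Conceptually, loading a textual inversion checkpoint adds one new token string to the tokenizer's vocabulary and appends its learned vector as a new row of the text encoder's embedding table. The sketch below shows that bookkeeping with plain Python structures (no torch; the function name `load_textual_inversion_sketch` is hypothetical, not the mixin's real implementation):

```python
# Minimal sketch of what loading a textual inversion state dict conceptually
# does. `vocab` maps token string -> token id; `embedding_table` is a list of
# embedding rows indexed by token id.
def load_textual_inversion_sketch(vocab, embedding_table, state_dict, token=None):
    # The state dict maps the trained placeholder token to its learned vector.
    learned_token, vector = next(iter(state_dict.items()))
    # `token` may override the name saved in the checkpoint.
    token = token or learned_token
    if token in vocab:
        raise ValueError(f"Token {token!r} already exists in the tokenizer")
    vocab[token] = len(embedding_table)   # assign the next free token id
    embedding_table.append(vector)        # append a new embedding row
    return token

vocab = {"a": 0, "photo": 1}
table = [[0.1, 0.2], [0.3, 0.4]]
new = load_textual_inversion_sketch(vocab, table, {"<low-poly>": [0.9, 0.9]})
assert new == "<low-poly>" and vocab["<low-poly>"] == 2 and len(table) == 3
```

This is also why the trained file is only a few KBs: it contains just the new embedding row(s), not any model weights.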
token (str or List[str], optional):
    Override the token to use for the textual inversion weights. If pretrained_model_name_or_path is a
    list, then token must also be a list of equal length.
text_encoder (CLIPTextModel, optional):
    Frozen text-encoder (clip-vit-large-patch14). If not specified, the function will use
    self.text_encoder.
tokenizer (CLIPTokenizer, optional):
    A CLIPTokenizer to tokenize text. If not specified, the function will use self.tokenizer.
weight_name (str, optional):
    Name of a custom weight file. This should be used when:
    The saved textual inversion file is in 🤗 Diffusers format, but was saved under a specific weight
    name such as text_inv.bin.
    The saved textual inversion file is in the Automatic1111 format.
cache_dir (Union[str, os.PathLike], optional):
    Path to a directory where a downloaded pretrained model configuration is cached if the standard cache
    is not used.
force_download (bool, optional, defaults to False):
    Whether or not to force the (re-)download of the model weights and configuration files, overriding the
    cached versions if they exist.
resume_download (bool, optional, defaults to False):
    Whether or not to resume downloading the model weights and configuration files. If set to False, any
    incompletely downloaded files are deleted.