num_images_per_prompt (int, optional, defaults to 1):
The number of images that should be generated per prompt.
do_classifier_free_guidance (bool, optional, defaults to True):
Whether to use classifier-free guidance or not.
negative_prompt (str or List[str], optional):
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if
guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional):
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional):
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
argument.
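Classifier-free guidance, toggled above, works by running the denoiser on both the text embedding and the negative (unconditional) embedding and blending the two noise predictions. A minimal NumPy sketch of that blend follows; the function name `apply_cfg` and the toy arrays are illustrative, not part of the diffusers API:

```python
import numpy as np

def apply_cfg(noise_uncond, noise_text, guidance_scale):
    """Blend the unconditional and text-conditioned noise predictions.

    With guidance_scale <= 1 the blend no longer pushes past the
    unconditional prediction, which is why negative_prompt is ignored
    in that regime.
    """
    return noise_uncond + guidance_scale * (noise_text - noise_uncond)

# Toy 2x2 "noise predictions" standing in for the two UNet outputs.
uncond = np.zeros((2, 2))
text = np.ones((2, 2))

# guidance_scale = 1.0 reproduces the text-conditioned prediction exactly;
# guidance_scale = 4.0 (this pipeline's default) extrapolates beyond it.
print(apply_cfg(uncond, text, 1.0))
print(apply_cfg(uncond, text, 4.0))
```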
IFInpaintingSuperResolutionPipeline

class diffusers.IFInpaintingSuperResolutionPipeline

(
tokenizer: T5Tokenizer
text_encoder: T5EncoderModel
unet: UNet2DConditionModel
scheduler: DDPMScheduler
image_noising_scheduler: DDPMScheduler
safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker]
feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor]
watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker]
requires_safety_checker: bool = True
)
__call__

(
image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor]
original_image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None
mask_image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None
strength: float = 0.8
prompt: typing.Union[str, typing.List[str]] = None
num_inference_steps: int = 100
timesteps: typing.List[int] = None
guidance_scale: float = 4.0
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
noise_level: int = 0
clean_caption: bool = True
)
→ ~pipelines.stable_diffusion.IFPipelineOutput or tuple
Parameters

image (torch.FloatTensor or PIL.Image.Image) —
Image, or tensor representing an image batch, that will be used as the starting point for the
process.
original_image (torch.FloatTensor or PIL.Image.Image) —
The original image that image was varied from.
mask_image (PIL.Image.Image) —
Image, or tensor representing an image batch, to mask image. White pixels in the mask will be
repainted, while black pixels will be preserved. If mask_image is a PIL image, it will be converted
to a single channel (luminance) before use. If it's a tensor, it should contain one color channel (L)
instead of 3, so the expected shape would be (B, H, W, 1).
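The white-repaints, black-preserves convention above can be pictured as a per-pixel blend between the generated image and the original. The sketch below is a schematic of that convention only (the pipeline itself operates on noised intermediates, and `composite` is a hypothetical helper, not a diffusers function):

```python
import numpy as np

def composite(generated, original, mask):
    """Blend per the mask convention: mask == 1 (white) takes the
    generated pixel, mask == 0 (black) keeps the original pixel.
    The mask follows the single-channel (B, H, W, 1) layout and
    broadcasts over the 3 color channels."""
    return mask * generated + (1.0 - mask) * original

batch, h, w = 1, 2, 2
original = np.zeros((batch, h, w, 3))   # all-black original image
generated = np.ones((batch, h, w, 3))   # all-white generated content
mask = np.zeros((batch, h, w, 1))
mask[0, 0, 0, 0] = 1.0                  # repaint only the top-left pixel

out = composite(generated, original, mask)
print(out[0, 0, 0])   # repainted pixel: generated values
print(out[0, 1, 1])   # preserved pixel: original values
```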
strength (float, optional, defaults to 0.8) —
Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image
will be used as a starting point, adding more noise to it the larger the strength. The number of
denoising steps depends on the amount of noise initially added. When strength is 1, added noise will
be maximum and the denoising process will run for the full number of iterations specified in
num_inference_steps. A value of 1, therefore, essentially ignores image.
prompt (str or List[str], optional) —
The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds
instead.
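The trade-off between strength and num_inference_steps described above can be sketched as a step-truncation rule: only the last `strength` fraction of the noise schedule is denoised. This is a simplified model of the behavior, not a verbatim copy of the pipeline's internal timestep logic:

```python
def effective_steps(num_inference_steps: int, strength: float) -> int:
    """Number of denoising steps actually run for a given strength.

    strength controls how far into the noise schedule the input image
    is pushed; only that portion of the schedule is then denoised.
    """
    if not 0.0 <= strength <= 1.0:
        raise ValueError("strength must be between 0 and 1")
    return min(int(num_inference_steps * strength), num_inference_steps)

# With the defaults (num_inference_steps=100, strength=0.8) only 80 of the
# 100 scheduled steps are run. strength=1.0 runs the full schedule, which
# is why a value of 1 essentially ignores the input image.
print(effective_steps(100, 0.8))   # 80
print(effective_steps(100, 1.0))   # 100
```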