negative_prompt (str or List[str], optional):
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if
guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional):
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional):
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
argument.
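The fallback described above (precomputed embeddings take precedence; otherwise embeddings are generated from the prompt argument) can be sketched in plain Python. `resolve_embeds` and `fake_encode` below are hypothetical stand-ins for the pipeline's internal text encoding, not part of the diffusers API:

```python
def resolve_embeds(encode, prompt=None, prompt_embeds=None):
    # Mirrors the documented behavior: precomputed embeddings win;
    # otherwise embeddings are generated from the `prompt` argument.
    if prompt_embeds is not None:
        return prompt_embeds
    if prompt is None:
        raise ValueError("Provide either `prompt` or `prompt_embeds`.")
    return encode(prompt)

# Stand-in encoder for illustration (the real pipelines use a T5 text encoder).
def fake_encode(text):
    prompts = [text] if isinstance(text, str) else text
    return [float(len(p)) for p in prompts]

print(resolve_embeds(fake_encode, prompt="a photo of a cat"))
print(resolve_embeds(fake_encode, prompt_embeds=[1.0, 2.0]))  # precomputed wins
```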
IFImg2ImgSuperResolutionPipeline
class diffusers.IFImg2ImgSuperResolutionPipeline
(
tokenizer: T5Tokenizer
text_encoder: T5EncoderModel
unet: UNet2DConditionModel
scheduler: DDPMScheduler
image_noising_scheduler: DDPMScheduler
safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker]
feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor]
watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker]
requires_safety_checker: bool = True
)
__call__
(
image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor]
original_image: typing.Union[PIL.Image.Image, torch.Tensor, numpy.ndarray, typing.List[PIL.Image.Image], typing.List[torch.Tensor], typing.List[numpy.ndarray]] = None
strength: float = 0.8
prompt: typing.Union[str, typing.List[str]] = None
num_inference_steps: int = 50
timesteps: typing.List[int] = None
guidance_scale: float = 4.0
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
callback: typing.Union[typing.Callable[[int, int, torch.FloatTensor], NoneType], NoneType] = None
callback_steps: int = 1
cross_attention_kwargs: typing.Union[typing.Dict[str, typing.Any], NoneType] = None
noise_level: int = 250
clean_caption: bool = True
)
Returns
~pipelines.stable_diffusion.IFPipelineOutput or tuple
Parameters
image (torch.FloatTensor or PIL.Image.Image):
Image, or tensor representing an image batch, that will be used as the starting point for the
process.
original_image (torch.FloatTensor or PIL.Image.Image):
The original image that image was varied from.
strength (float, optional, defaults to 0.8):
Conceptually, indicates how much to transform the reference image. Must be between 0 and 1. image
will be used as a starting point, adding more noise to it the larger the strength. The number of
denoising steps depends on the amount of noise initially added. When strength is 1, the added noise is
at its maximum and the denoising process runs for the full number of iterations specified in
num_inference_steps. A value of 1, therefore, essentially ignores image.
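As a concrete illustration, diffusers image-to-image pipelines typically derive the number of denoising steps actually run from strength with logic like the following. This is a sketch of the common `get_timesteps` pattern, assumed rather than taken from this pipeline's source:

```python
def effective_steps(num_inference_steps: int, strength: float) -> int:
    # Higher strength adds more noise, so more denoising steps are run.
    init_timestep = min(int(num_inference_steps * strength), num_inference_steps)
    return init_timestep

print(effective_steps(50, 0.8))  # 40: only 40 of the 50 scheduled steps run
print(effective_steps(50, 1.0))  # 50: full schedule; `image` is essentially ignored
```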
prompt (str or List[str], optional):
The prompt or prompts to guide the image generation. If not defined, one has to pass prompt_embeds
instead.
num_inference_steps (int, optional, defaults to 50):
The number of denoising steps. More denoising steps usually lead to a higher quality image at the
expense of slower inference.
timesteps (List[int], optional):
Custom timesteps to use for the denoising process. If not defined, equally spaced num_inference_steps
timesteps are used. Must be in descending order.
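A minimal sketch of building evenly spaced, descending custom timesteps. The 1000-step training schedule is an assumption here (it is the common DDPMScheduler default for num_train_timesteps), and `spaced_timesteps` is a hypothetical helper, not a diffusers function:

```python
def spaced_timesteps(num_inference_steps: int, num_train_timesteps: int = 1000):
    # Evenly spaced and strictly descending, as the pipeline expects.
    stride = num_train_timesteps // num_inference_steps
    return list(range(num_train_timesteps - 1, -1, -stride))[:num_inference_steps]

ts = spaced_timesteps(10)
print(ts)  # [999, 899, 799, 699, 599, 499, 399, 299, 199, 99]
```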