enable_sequential_cpu_offload
<
source
>
(
gpu_id = 0
)
Offloads all models to CPU using accelerate, significantly reducing memory usage. When called, the pipeline’s
models have their state dicts saved to CPU and then are moved to a torch.device('meta') and loaded to GPU only when their specific submodule has its forward method called.
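The offload pattern described above can be sketched in plain Python. This is a toy illustration of the mechanism only, not the accelerate implementation; the class and its attributes are invented for the sketch:

```python
class OffloadedModule:
    """Toy sketch of sequential CPU offload: the state dict stays on the
    CPU, the module itself holds no real weights ('meta'), and weights are
    materialized on the GPU only for the duration of a forward call."""

    def __init__(self, name, state_dict):
        self.name = name
        self.state_dict = dict(state_dict)  # kept on "cpu"
        self.device = "meta"                # placeholder, no memory used

    def forward(self, x):
        # Pre-forward hook: load weights onto the GPU just in time.
        self.device = "gpu"
        y = x * self.state_dict["scale"]    # stand-in for real computation
        # Post-forward hook: free GPU memory again immediately.
        self.device = "meta"
        return y


unet = OffloadedModule("unet", {"scale": 2.0})
out = unet.forward(3.0)  # weights touch the "gpu" only inside this call
```

After the call returns, the module is back on the meta device, which is why this mode minimizes memory at the cost of repeated host-to-device transfers.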
encode_prompt
<
source
>
(
prompt
do_classifier_free_guidance = True
num_images_per_prompt = 1
device = None
negative_prompt = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
clean_caption: bool = False
)
Parameters
prompt (str or List[str], optional) —
prompt to be encoded
device (torch.device, optional) —
torch device to place the resulting embeddings on
num_images_per_prompt (int, optional, defaults to 1) —
number of images that should be generated per prompt
do_classifier_free_guidance (bool, optional, defaults to True) —
whether to use classifier free guidance or not
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass
negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if
guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not
provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt
weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input
argument.
Encodes the prompt into text encoder hidden states.
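The interaction of num_images_per_prompt and do_classifier_free_guidance can be illustrated with a toy stand-in for the encoder. Everything here is invented for the sketch (the "embedding" is just character codes); only the duplication logic mirrors what the real method does with its embeddings:

```python
def toy_encode_prompt(prompt, negative_prompt="", num_images_per_prompt=1,
                      do_classifier_free_guidance=True):
    """Toy sketch: 'embed' each prompt, then repeat the embedding once per
    requested image; produce negative embeddings only when classifier-free
    guidance is enabled."""

    def embed(text):
        # Stand-in for the T5 text encoder.
        return [ord(c) for c in text] or [0]

    prompts = [prompt] if isinstance(prompt, str) else prompt
    prompt_embeds = [embed(p) for p in prompts
                     for _ in range(num_images_per_prompt)]

    negative_prompt_embeds = None
    if do_classifier_free_guidance:
        negatives = [negative_prompt] * len(prompts)
        negative_prompt_embeds = [embed(n) for n in negatives
                                  for _ in range(num_images_per_prompt)]
    return prompt_embeds, negative_prompt_embeds


# Two images per prompt -> two copies of each embedding.
pos, neg = toy_encode_prompt("hi", num_images_per_prompt=2)
```

Precomputing embeddings this way is what makes the prompt_embeds / negative_prompt_embeds arguments useful: the expensive encoding step runs once and the results can be reused or tweaked across calls.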
IFSuperResolutionPipeline
class diffusers.IFSuperResolutionPipeline
<
source
>
(
tokenizer: T5Tokenizer
text_encoder: T5EncoderModel
unet: UNet2DConditionModel
scheduler: DDPMScheduler
image_noising_scheduler: DDPMScheduler
safety_checker: typing.Optional[diffusers.pipelines.deepfloyd_if.safety_checker.IFSafetyChecker]
feature_extractor: typing.Optional[transformers.models.clip.image_processing_clip.CLIPImageProcessor]
watermarker: typing.Optional[diffusers.pipelines.deepfloyd_if.watermark.IFWatermarker]
requires_safety_checker: bool = True
)
__call__
<
source
>
(
prompt: typing.Union[str, typing.List[str]] = None
image: typing.Union[PIL.Image.Image, numpy.ndarray, torch.FloatTensor] = None
num_inference_steps: int = 50
timesteps: typing.List[int] = None
guidance_scale: float = 4.0
negative_prompt: typing.Union[str, typing.List[str], NoneType] = None
num_images_per_prompt: typing.Optional[int] = 1
eta: float = 0.0
generator: typing.Union[torch._C.Generator, typing.List[torch._C.Generator], NoneType] = None
prompt_embeds: typing.Optional[torch.FloatTensor] = None
negative_prompt_embeds: typing.Optional[torch.FloatTensor] = None
output_type: typing.Optional[str] = 'pil'
return_dict: bool = True
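The guidance_scale parameter above feeds the standard classifier-free guidance combination step. A plain-Python sketch of that formula (an illustration of the standard technique, not the pipeline's internal code):

```python
def classifier_free_guidance(noise_uncond, noise_text, guidance_scale):
    """Standard CFG combination: push the prediction away from the
    unconditional output and toward the text-conditioned one."""
    return [u + guidance_scale * (t - u)
            for u, t in zip(noise_uncond, noise_text)]


# guidance_scale = 1.0 reproduces the conditional prediction exactly;
# values > 1 (such as the default 4.0) amplify the text conditioning,
# which is why guidance is effectively off when the scale is <= 1.
combined = classifier_free_guidance([0.0, 1.0], [1.0, 3.0], 4.0)
neutral = classifier_free_guidance([0.0, 1.0], [1.0, 3.0], 1.0)
```

This is also why negative_prompt is ignored when guidance_scale is less than 1: without guidance, the unconditional (negative) branch never influences the output.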