through the self.time_embedding layer to obtain the timestep embeddings. attention_mask (torch.Tensor, optional, defaults to None) — |
An attention mask of shape (batch, key_tokens) applied to encoder_hidden_states. If 1, the mask |
is kept; if 0, it is discarded. The mask is converted into a bias, which adds large |
negative values to the attention scores corresponding to “discard” tokens. cross_attention_kwargs (dict, optional) — |
A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under |
self.processor in |
diffusers.models.attention_processor. |
down_block_additional_residuals — (tuple of torch.Tensor, optional): |
A tuple of tensors that, if specified, are added to the residuals of the down UNet blocks. |
mid_block_additional_residual — (torch.Tensor, optional): |
A tensor that, if specified, is added to the residual of the middle UNet block. return_dict (bool, optional, defaults to True) — |
Whether or not to return a UNet3DConditionOutput instead of a plain |
tuple. Returns |
UNet3DConditionOutput or tuple |
If return_dict is True, a UNet3DConditionOutput is returned, otherwise |
a tuple is returned where the first element is the sample tensor. |
The UNetMotionModel forward method. freeze_unet2d_params < source > ( ) Freeze the weights of just the UNet2DConditionModel, and leave the motion modules |
unfrozen for fine-tuning. set_attn_processor < source > ( processor: Union _remove_lora = False ) Parameters processor (dict of AttentionProcessor or only AttentionProcessor) — |
The instantiated processor class or a dictionary of processor classes that will be set as the processor |
for all Attention layers. |
If processor is a dict, the key needs to define the path to the corresponding cross attention |
processor. This is strongly recommended when setting trainable attention processors. Sets the attention processor to use to compute attention. set_default_attn_processor < source > ( ) Disables custom attention processors and sets the default attention implementation. UNet3DConditionOutput class diffusers.mod... |
The hidden states output conditioned on encoder_hidden_states input. Output of last layer of model. The output of UNet3DConditionModel. |
VQ Diffusion Vector Quantized Diffusion Model for Text-to-Image Synthesis is by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, Baining Guo. The abstract from the paper is: We present the vector quantized diffusion (VQ-Diffusion) model for text-to-image generation. This method is based o... |
Vector Quantized Variational Auto-Encoder (VAE) model to encode and decode images to and from latent |
representations. text_encoder (CLIPTextModel) — |
Frozen text-encoder (clip-vit-base-patch32). tokenizer (CLIPTokenizer) — |
A CLIPTokenizer to tokenize text. transformer (Transformer2DModel) — |
A conditional Transformer2DModel to denoise the encoded image latents. scheduler (VQDiffusionScheduler) — |
A scheduler to be used in combination with transformer to denoise the encoded image latents. Pipeline for text-to-image generation using VQ Diffusion. This model inherits from DiffusionPipeline. Check the superclass documentation for the generic methods |
implemented for all pipelines (downloading, saving, running on a particular device, etc.). __call__ < source > ( prompt: typing.Union[str, typing.List[str]] num_inference_steps: int = 100 guidance_scale: float = 5.0 truncation_rate: float = 1.0 num_images_per_prompt: int = 1 generator: typing.Union[torch._C.Generator... |
The prompt or prompts to guide image generation. num_inference_steps (int, optional, defaults to 100) — |
The number of denoising steps. More denoising steps usually lead to a higher quality image at the |
expense of slower inference. guidance_scale (float, optional, defaults to 5.0) — |
A higher guidance scale value encourages the model to generate images closely linked to the text |
prompt at the expense of lower image quality. Guidance scale is enabled when guidance_scale > 1. truncation_rate (float, optional, defaults to 1.0 (equivalent to no truncation)) — |
Used to “truncate” the predicted classes for x_0 such that the cumulative probability for a pixel is at |
most truncation_rate. The lowest probabilities that would increase the cumulative probability above |
truncation_rate are set to zero. num_images_per_prompt (int, optional, defaults to 1) — |
The number of images to generate per prompt. generator (torch.Generator, optional) — |
A torch.Generator to make |
generation deterministic. latents (torch.FloatTensor of shape (batch), optional) — |
Pre-generated noisy latents to be used as inputs for image generation. Must be valid embedding |
indices. If not provided, a latents tensor of completely masked latent pixels is |
generated. output_type (str, optional, defaults to "pil") — |
The output format of the generated image. Choose between PIL.Image or np.array. return_dict (bool, optional, defaults to True) — |
Whether or not to return a ImagePipelineOutput instead of a plain tuple. callback (Callable, optional) — |
A function called every callback_steps steps during inference. The function is called with the |
following arguments: callback(step: int, timestep: int, latents: torch.FloatTensor). callback_steps (int, optional, defaults to 1) — |
The frequency at which the callback function is called. If not specified, the callback is called at |
every step. Returns |
ImagePipelineOutput or tuple |
If return_dict is True, ImagePipelineOutput is returned, otherwise a tuple is |
returned where the first element is a list with the generated images. |
The call function to the pipeline for generation. truncate < source > ( log_p_x_0: FloatTensor truncation_rate: float ) Truncates log_p_x_0 such that for each column vector, the total cumulative probability is at most truncation_rate. |
The lowest probabilities that would increase the cumulative probability above truncation_rate are set to |
zero. ImagePipelineOutput class diffusers.ImagePipelineOutput < source > ( images: typing.Union[typing.List[PIL.Image.Image], numpy.ndarray] ) Parameters images (List[PIL.Image.Image] or np.ndarray) — |
List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels). Output class for image pipelines. |
PEFT Diffusers supports loading adapters such as LoRA with the PEFT library with the PeftAdapterMixin class. This allows modeling classes in Diffusers like UNet2DConditionModel to load an adapter. Refer to the Inference with PEFT tutorial for an overview of how to use PEFT in Diffusers for inference. PeftAdapterMixin ... |
more details about adapters and injecting them in a transformer-based model, check out the PEFT documentation. Install the latest version of PEFT, and use this mixin to: Attach new adapters in the model. Attach multiple adapters and iteratively activate/deactivate them. Activate/deactivate all adapters from the model. ... |
documentation. add_adapter < source > ( adapter_config adapter_name: str = 'default' ) Parameters adapter_config ([~peft.PeftConfig]) — |
The configuration of the adapter to add; supported adapters are non-prefix tuning and adaption prompt |
methods. adapter_name (str, optional, defaults to "default") — |
The name of the adapter to add. If no name is passed, a default name is assigned to the adapter. Adds a new adapter to the current model for training. If no adapter name is passed, a default name is assigned |
to the adapter to follow the convention of the PEFT library. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT |
documentation. disable_adapters < source > ( ) Disable all adapters attached to the model and fall back to inference with the base model only. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT |
documentation. enable_adapters < source > ( ) Enable adapters that are attached to the model. The model uses self.active_adapters() to retrieve the |
list of adapters to enable. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT |
documentation. set_adapter < source > ( adapter_name: Union ) Parameters adapter_name (Union[str, List[str]]) — |
The list of adapters to set or the adapter name in the case of a single adapter. Sets a specific adapter by forcing the model to only use that adapter, disabling the other adapters. If you are not familiar with adapters and PEFT methods, we invite you to read more about them in the PEFT |
documentation. |
Load pipelines, models, and schedulers Having an easy way to use a diffusion system for inference is essential to 🧨 Diffusers. Diffusion systems often consist of multiple components like parameterized models, tokenizers, and schedulers that interact in complex ways. That is why we designed the DiffusionPipe... |
repo_id = "runwayml/stable-diffusion-v1-5" |
pipe = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) You can also load a checkpoint with its specific pipeline class. The example above loaded a Stable Diffusion model; to get the same result, use the StableDiffusionPipeline class: Copied from diffusers import StableDiffusionPipeline |
repo_id = "runwayml/stable-diffusion-v1-5" |
pipe = StableDiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) A checkpoint (such as CompVis/stable-diffusion-v1-4 or runwayml/stable-diffusion-v1-5) may also be used for more than one task, like text-to-image or image-to-image. To differentiate what task you want to use the checkpoint for, you have to l... |
repo_id = "runwayml/stable-diffusion-v1-5" |
pipe = StableDiffusionImg2ImgPipeline.from_pretrained(repo_id) Local pipeline To load a diffusion pipeline locally, use git-lfs to manually download the checkpoint (in this case, runwayml/stable-diffusion-v1-5) to your local disk. This creates a local folder, ./stable-diffusion-v1-5, on your disk: Copied git-lfs ins... |
git clone https://huggingface.co/runwayml/stable-diffusion-v1-5 Then pass the local path to from_pretrained(): Copied from diffusers import DiffusionPipeline |
repo_id = "./stable-diffusion-v1-5" |
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) The from_pretrained() method won’t download any files from the Hub when it detects a local path, but this also means it won’t download and cache the latest changes to a checkpoint. Swap components in a pipeline You can customize the de... |
repo_id = "runwayml/stable-diffusion-v1-5" |
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, use_safetensors=True) |
stable_diffusion.scheduler.compatibles Let’s use the SchedulerMixin.from_pretrained() method to replace the default PNDMScheduler with a more performant scheduler, EulerDiscreteScheduler. The subfolder="scheduler" argument is required to load the scheduler configuration from the correct subfolder of the pipeline reposi... |
repo_id = "runwayml/stable-diffusion-v1-5" |
scheduler = EulerDiscreteScheduler.from_pretrained(repo_id, subfolder="scheduler") |
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, scheduler=scheduler, use_safetensors=True) Safety checker Diffusion models like Stable Diffusion can generate harmful content, which is why 🧨 Diffusers has a safety checker to check generated outputs against known hardcoded NSFW content. If you’d like to d... |
repo_id = "runwayml/stable-diffusion-v1-5" |
stable_diffusion = DiffusionPipeline.from_pretrained(repo_id, safety_checker=None, use_safetensors=True) |
""" |
You have disabled the safety checker for <class 'diffusers.pipelines.stable_diffusion.pipeline_stable_diffusion.StableDiffusionPipeline'> by passing `safety_checker=None`. Ensure that you abide by the conditions of the Stable Diffusion license and do not expose unfiltered results in services or applications open to the... |