>>> import torch
>>> from diffusers import StableDiffusionPanoramaPipeline, DDIMScheduler

>>> model_ckpt = "stabilityai/stable-diffusion-2-base"
>>> scheduler = DDIMScheduler.from_pretrained(model_ckpt, subfolder="scheduler")
>>> pipe = StableDiffusionPanoramaPipeline.from_pretrained(
...     model_ckpt, scheduler=scheduler, torch_dtype=torch.float16
... )
>>> pipe = pipe.to("cuda")

>>> prompt = "a photo of the dolomites"
>>> image = pipe(prompt).images[0]

disable_vae_slicing < source > ( )

Disable sliced VAE decoding. If enable_vae_slicing was previously enabled, this method goes back to computing decoding in one step.

enable_vae_slicing < source > ( )

Enable sliced VAE decoding. When this option is enabled, the VAE splits the input tensor into slices and computes decoding in several steps. This is useful to save some memory and allow larger batch sizes.

encode_prompt < source > ( prompt device num_images_per_prompt do_classifier_free_guidance negative_prompt = None prompt_embeds: Optional = None negative_prompt_embeds: Optional = None lora_scale: Optional = None clip_...

Parameters

prompt (str or List[str]) —
prompt to be encoded
device (torch.device) —
torch device
num_images_per_prompt (int) —
number of images that should be generated per prompt
do_classifier_free_guidance (bool) —
whether to use classifier-free guidance or not
negative_prompt (str or List[str], optional) —
The prompt or prompts not to guide the image generation. If not defined, one has to pass negative_prompt_embeds instead. Ignored when not using guidance (i.e., ignored if guidance_scale is less than 1).
prompt_embeds (torch.FloatTensor, optional) —
Pre-generated text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, text embeddings will be generated from the prompt input argument.
negative_prompt_embeds (torch.FloatTensor, optional) —
Pre-generated negative text embeddings. Can be used to easily tweak text inputs, e.g. prompt weighting. If not provided, negative_prompt_embeds will be generated from the negative_prompt input argument.
lora_scale (float, optional) —
A LoRA scale that will be applied to all LoRA layers of the text encoder if LoRA layers are loaded.
clip_skip (int, optional) —
Number of layers to be skipped from CLIP while computing the prompt embeddings. A value of 1 means that the output of the pre-final layer will be used for computing the prompt embeddings.

Encodes the prompt into text encoder hidden states.

StableDiffusionPipelineOutput

class diffusers.pipelines.stable_diffusion.StableDiffusionPipelineOutput < source > ( images: Union nsfw_content_detected: Optional )

Parameters

images (List[PIL.Image.Image] or np.ndarray) —
List of denoised PIL images of length batch_size or NumPy array of shape (batch_size, height, width, num_channels).
nsfw_content_detected (List[bool]) —
List indicating whether the corresponding generated image contains "not-safe-for-work" (nsfw) content, or None if safety checking could not be performed.

Output class for Stable Diffusion pipelines.
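The sliced VAE decoding toggled by enable_vae_slicing/disable_vae_slicing above trades a single large decode for several smaller ones: the batch is split into slices and each slice is decoded on its own, capping peak memory. A minimal sketch of the idea in plain Python (decode_one here is a hypothetical stand-in for the VAE decoder, not a diffusers API):

```python
def sliced_decode(latents, decode_one, slice_size=1):
    """Decode a batch of latents slice by slice to cap peak memory.

    latents: list of per-sample latents (stand-in for a batched tensor).
    decode_one: function decoding a list of latents into a list of images.
    slice_size: how many samples are resident in the decoder at once.
    """
    images = []
    for start in range(0, len(latents), slice_size):
        # Only `slice_size` samples pass through the decoder per step.
        images.extend(decode_one(latents[start:start + slice_size]))
    return images


# Toy "decoder": pretend decoding doubles each latent value.
decoded = sliced_decode([1, 2, 3, 4], lambda xs: [2 * x for x in xs], slice_size=2)
```

The result is identical to decoding the whole batch at once; only the peak working-set size changes, which is why slicing allows larger batch sizes at the cost of extra decoding steps.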
Transformer2D

A Transformer model for image-like data from CompVis that is based on the Vision Transformer introduced by Dosovitskiy et al. The Transformer2DModel accepts discrete (classes of vector embeddings) or continuous (actual embeddings) inputs. When the input is continuous: Project the input and reshape it to (...

Parameters

in_channels (int, optional) —
The number of channels in the input and output (specify if the input is continuous).
num_layers (int, optional, defaults to 1) —
The number of layers of Transformer blocks to use.
dropout (float, optional, defaults to 0.0) —
The dropout probability to use.
cross_attention_dim (int, optional) —
The number of encod...
sample_size (int, optional) —
This is fixed during training since it is used to learn a number of position embeddings.
num_vector_embeds (int, optional) —
The number of classes of the vector embeddings of the latent pixels (specify if the input is discrete). Includes the class for the masked latent pixel.
activation_fn (str, optional, defaults to "geglu") —
Activation function to use in feed-forward.
num_embeds_ada_norm (int, optional) —
The number of diffusion steps used during training. Pass if at least one of the norm_layers is AdaLayerNorm. This is fixed during training since it is used to learn a number of embeddings that are added to the hidden states. During inference, you can denoise for up to but not more than num_embeds_ada_norm steps.
attention_bias (bool, optional) —
Configure if the TransformerBlocks attention should contain a bias parameter.

A 2D Transformer model for image-like data.

forward < source > ( hidden_states: Tensor encoder_hidden_states: Optional = None timestep: Optional = None added_cond_kwargs: Dict = None class_labels: Optional = None cross_attention_kwargs: ...

Parameters

hidden_states (torch.Tensor) —
Input hidden_states.
encoder_hidden_states (torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) —
Conditional embeddings for the cross-attention layer. If not given, cross-attention defaults to self-attention.
timestep (torch.LongTensor, optional) —
Used to indicate the denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm.
class_labels (torch.LongTensor of shape (batch size, num classes), optional) —
Used to indicate class-labels conditioning. Optional class labels to be applied as an embedding in AdaLayerZeroNorm.
cross_attention_kwargs (Dict[str, Any], optional) —
A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
attention_mask (torch.Tensor, optional) —
An attention mask of shape (batch, key_tokens) is applied to encoder_hidden_states. If 1, the mask is kept; if 0, it is discarded. The mask will be converted into a bias, which adds large negative values to the attention scores corresponding to "discard" tokens.
encoder_attention_mask (torch.Tensor, optional) —
Cross-attention mask applied to encoder_hidden_states. Two formats are supported:

Mask (batch, sequence_length): True = keep, False = discard.
Bias (batch, 1, sequence_length): 0 = keep, -10000 = discard.

If ndim == 2, it will be interpreted as a mask and then converted into a bias consistent with the format above. This bias will be added to the cross-attention scores.
return_dict (bool, optional, defaults to True) —
Whether or not to return a Transformer2DModelOutput instead of a plain tuple.

The Transformer2DModel forward method.

Transformer2DModelOutput

class diffusers.models.transformer_2d.Transformer2DModelOutput < source > ( sample: FloatTensor )

Parameters

sample (torch.FloatTensor of shape (batch_size, num_channels, height, width) or (batch size, num_vector_embeds - 1, num_latent_pi...
The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
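The attention_mask handling described for forward above converts a keep(1)/discard(0) mask into an additive bias: kept tokens contribute 0 to the attention scores, discarded tokens contribute a large negative value so their post-softmax weight collapses to roughly zero. A minimal sketch of that conversion in plain Python (a simplified stand-in, not the library's internal implementation):

```python
def mask_to_bias(mask, discard_value=-10000.0):
    """Convert a keep(1)/discard(0) mask into an additive attention bias.

    A kept token adds 0 to its attention score; a discarded token adds a
    large negative value, so softmax assigns it ~0 weight.
    """
    return [0.0 if keep else discard_value for keep in mask]


# Adding the bias to raw attention scores before softmax:
scores = [2.0, 1.5, 0.5]          # hypothetical raw scores per key token
bias = mask_to_bias([1, 0, 1])    # drop the middle key token
biased_scores = [s + b for s, b in zip(scores, bias)]
```

After the bias is applied, the middle score sits near -10000, so the softmax over biased_scores effectively redistributes all attention to the kept tokens.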
VQDiffusionScheduler

VQDiffusionScheduler converts the transformer model's output into a sample for the unnoised image at the previous diffusion timestep. It was introduced in Vector Quantized Diffusion Model for Text-to-Image Synthesis by Shuyang Gu, Dong Chen, Jianmin Bao, Fang Wen, Bo Zhang, Dongdong Chen, Lu Yuan, ...

Parameters

num_vec_classes (int) —
The number of classes of the vector embeddings of the latent pixels. Includes the class for the masked latent pixel.
num_train_timesteps (int, defaults to 100) —
The number of diffusion steps to train the model.
alpha_cum_start (float, defaults to 0.99999) —
The starting cumulative alpha value.
alpha_cum_end (float, defaults to 0.00009) —
The ending cumulative alpha value.
gamma_cum_start (float, defaults to 0.00009) —
The starting cumulative gamma value.
gamma_cum_end (float, defaults to 0.99999) —
The ending cumulative gamma value.

A scheduler for vector quantized diffusion. This model inherits from SchedulerMixin and ConfigMixin. Check the superclass documentation for the generic
methods the library implements for all schedulers such as loading and saving.

log_Q_t_transitioning_to_known_class < source > ( t: torch.int32 x_t: LongTensor log_onehot_x_t: FloatTensor cumulative: bool ) → torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)

Parameters

t (torch.Long) —
The timestep that determines which transition matrix is used.
x_t (torch.LongTensor of shape (batch size, num latent pixels)) —
The classes of each latent pixel at time t.
log_onehot_x_t (torch.FloatTensor of shape (batch size, num classes, num latent pixels)) —
The log one-hot vectors of x_t.
cumulative (bool) —
If cumulative is False, the single-step transition matrix t-1 -> t is used. If cumulative is True, the cumulative transition matrix 0 -> t is used.

Returns
torch.FloatTensor of shape (batch size, num classes - 1, num latent pixels)

Each column of the returned matrix is a row of log probabilities of the complete probability transition matrix.

When non-cumulative, returns self.num_classes - 1 rows because the initial latent pixel cannot be masked.

Where:

q_n is the probability distribution for the forward process of the nth latent pixel.
C_0 is a class of a latent pixel embedding
C_k is the class of the masked latent pixel

Non-cumulative result (omitting logarithms):

q_0(x_t | x_{t-1} = C_0) ... q_n(x_t | x_{t-1} = C_0)
          .        .                   .
          .             .              .
          .                  .         .
q_0(x_t | x_{t-1} = C_k) ... q_n(x_t | x_{t-1} = C_k)
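The cumulative flag documented above distinguishes the single-step transition matrix (t-1 -> t) from the composed transition (0 -> t); for row-stochastic transition matrices, the cumulative matrix is simply the product of the single-step matrices. A toy sketch with hypothetical 2-class matrices in plain Python (the scheduler itself works with log-space one-hot vectors, not dense products like this):

```python
def matmul(a, b):
    """Plain nested-list matrix product."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))]
            for i in range(len(a))]


# Toy single-step transition matrices Q_1, Q_2 over two classes
# (class 1 plays the role of the masked class: it is absorbing,
# mirroring the note above that a masked pixel stays masked).
q1 = [[0.9, 0.1], [0.0, 1.0]]
q2 = [[0.8, 0.2], [0.0, 1.0]]

# Cumulative transition 0 -> 2 composes the single steps.
q_cum = matmul(q1, q2)
```

Each row of q_cum still sums to 1, and the masked class's row stays (0, 1): once masked, a pixel remains masked under both the single-step and the cumulative transitions.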