CogView4Transformer2DModel
A Diffusion Transformer model for 2D data from CogView4.
The model can be loaded with the following code snippet.
import torch

from diffusers import CogView4Transformer2DModel

transformer = CogView4Transformer2DModel.from_pretrained("THUDM/CogView4-6B", subfolder="transformer", torch_dtype=torch.bfloat16).to("cuda")
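Once loaded, the transformer is typically used as a component of the full text-to-image pipeline. The snippet below is a minimal sketch that assumes a diffusers version shipping CogView4Pipeline and relies on the usual from_pretrained behavior of accepting component overrides as keyword arguments.

```python
import torch

from diffusers import CogView4Pipeline

# Sketch: reuse the transformer loaded above as a pipeline component
# (assumes CogView4Pipeline is available in your diffusers version).
pipe = CogView4Pipeline.from_pretrained(
    "THUDM/CogView4-6B", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse").images[0]
```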
CogView4Transformer2DModel[[diffusers.CogView4Transformer2DModel]]
Parameters:
- patch_size (int, defaults to 2): The size of the patches to use in the patch embedding layer.
- in_channels (int, defaults to 16): The number of channels in the input.
- num_layers (int, defaults to 30): The number of Transformer blocks to use.
- attention_head_dim (int, defaults to 40): The number of channels in each attention head.
- num_attention_heads (int, defaults to 64): The number of heads to use for multi-head attention.
- out_channels (int, defaults to 16): The number of channels in the output.
- text_embed_dim (int, defaults to 4096): Input dimension of the text embeddings from the text encoder.
- time_embed_dim (int, defaults to 512): Output dimension of the timestep embeddings.
- condition_dim (int, defaults to 256): The embedding dimension of the SDXL-style resolution conditions (original_size, target_size, crop_coords).
- pos_embed_max_size (int, defaults to 128): The maximum resolution of the positional embeddings, from which slices of shape H x W are taken and added to the input patched latents, where H and W are the latent height and width respectively. A value of 128 means the maximum supported height and width for image generation is 128 * vae_scale_factor * patch_size => 128 * 8 * 2 => 2048.
- sample_size (int, defaults to 128): The base resolution of the input latents. If height/width is not provided during generation, this value is used to determine the resolution as sample_size * vae_scale_factor => 128 * 8 => 1024.
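For reference, the defaults above can also be passed explicitly when constructing the model from scratch. This is a minimal sketch of a random-weight instantiation using the documented configuration values; in practice you would load pretrained weights with from_pretrained as shown earlier.

```python
from diffusers import CogView4Transformer2DModel

# Sketch: instantiate with the default configuration documented above
# (random weights; for real use, load pretrained weights instead).
transformer = CogView4Transformer2DModel(
    patch_size=2,
    in_channels=16,
    num_layers=30,
    attention_head_dim=40,
    num_attention_heads=64,
    out_channels=16,
    text_embed_dim=4096,
    time_embed_dim=512,
    condition_dim=256,
    pos_embed_max_size=128,
    sample_size=128,
)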
Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]
The output of Transformer2DModel.
Parameters:
- sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete): The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.
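As an illustration, the sketch below runs a single denoising step and reads the prediction from the sample attribute. Tensor shapes follow the parameter documentation above; the keyword names (including the SDXL-style conditions) and the dummy text sequence length are assumptions, not the verified forward signature.

```python
import torch

# Sketch of a single forward pass with the transformer loaded above.
# Shapes follow the parameter docs; keyword names are assumptions.
batch_size = 1
latents = torch.randn(batch_size, 16, 128, 128, dtype=torch.bfloat16, device="cuda")    # (B, in_channels, H, W)
prompt_embeds = torch.randn(batch_size, 64, 4096, dtype=torch.bfloat16, device="cuda")  # text_embed_dim = 4096
timestep = torch.tensor([999], device="cuda")
original_size = torch.tensor([[1024.0, 1024.0]], device="cuda")  # SDXL-style conditions, see condition_dim
target_size = torch.tensor([[1024.0, 1024.0]], device="cuda")
crop_coords = torch.tensor([[0.0, 0.0]], device="cuda")

output = transformer(
    hidden_states=latents,
    encoder_hidden_states=prompt_embeds,
    timestep=timestep,
    original_size=original_size,
    target_size=target_size,
    crop_coords=crop_coords,
)
noise_pred = output.sample  # (batch_size, out_channels, height, width)
```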