# CogView4Transformer2DModel

A Diffusion Transformer model for 2D data from CogView4.
The model can be loaded with the following code snippet.
```python
import torch
from diffusers import CogView4Transformer2DModel

transformer = CogView4Transformer2DModel.from_pretrained(
    "THUDM/CogView4-6B", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")
```
## CogView4Transformer2DModel[[diffusers.CogView4Transformer2DModel]]
class diffusers.CogView4Transformer2DModel

Parameters:

- patch_size (`int`, defaults to `2`) -- The size of the patches to use in the patch embedding layer.
- in_channels (`int`, defaults to `16`) -- The number of channels in the input.
- num_layers (`int`, defaults to `30`) -- The number of layers of Transformer blocks to use.
- attention_head_dim (`int`, defaults to `40`) -- The number of channels in each head.
- num_attention_heads (`int`, defaults to `64`) -- The number of heads to use for multi-head attention.
- out_channels (`int`, defaults to `16`) -- The number of channels in the output.
- text_embed_dim (`int`, defaults to `4096`) -- Input dimension of text embeddings from the text encoder.
- time_embed_dim (`int`, defaults to `512`) -- Output dimension of timestep embeddings.
- condition_dim (`int`, defaults to `256`) -- The embedding dimension of the input SDXL-style resolution conditions (original_size, target_size, crop_coords).
- pos_embed_max_size (`int`, defaults to `128`) -- The maximum resolution of the positional embeddings, from which slices of shape `H x W` are taken and added to input patched latents, where `H` and `W` are the latent height and width respectively. A value of 128 means that the maximum supported height and width for image generation is `128 * vae_scale_factor * patch_size => 128 * 8 * 2 => 2048`.
- sample_size (`int`, defaults to `128`) -- The base resolution of input latents. If height/width is not provided during generation, this value is used to determine the resolution as `sample_size * vae_scale_factor => 128 * 8 => 1024` (see the sketch after this list).
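The resolution arithmetic implied by `sample_size` and `pos_embed_max_size` can be checked with plain Python. This is a minimal sketch of the formulas above; `vae_scale_factor = 8` is assumed from the `=> 128 * 8` expressions rather than read from a loaded VAE config:

```python
# Defaults taken from the parameter list above.
patch_size = 2
vae_scale_factor = 8  # assumed from the docstring formulas, not queried
sample_size = 128
pos_embed_max_size = 128

# Default generation resolution when height/width are not provided.
default_resolution = sample_size * vae_scale_factor
assert default_resolution == 1024

# Maximum supported height/width given the positional embedding table.
max_resolution = pos_embed_max_size * vae_scale_factor * patch_size
assert max_resolution == 2048
```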
## Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]
class diffusers.models.modeling_outputs.Transformer2DModelOutput

Parameters:

- sample (`torch.Tensor` of shape `(batch_size, num_channels, height, width)` or `(batch_size, num_vector_embeds - 1, num_latent_pixels)` if Transformer2DModel is discrete) -- The hidden states output conditioned on the `encoder_hidden_states` input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
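For illustration, here is a hedged sketch of a forward pass that consumes this output via its `sample` field. The keyword names (`hidden_states`, `encoder_hidden_states`, `timestep`, plus the SDXL-style `original_size`, `target_size`, `crop_coords` conditions named in the parameter list above) follow the usual diffusers calling convention but should be verified against the class's actual forward signature; the 77-token text sequence length is an arbitrary placeholder:

```python
import torch
from diffusers import CogView4Transformer2DModel

transformer = CogView4Transformer2DModel.from_pretrained(
    "THUDM/CogView4-6B", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")

# Dummy inputs sized from the defaults above: 16 latent channels, 128x128
# latents (=> 1024x1024 images), and 4096-dim text embeddings. The sequence
# length of 77 is a placeholder, not a CogView4 requirement.
latents = torch.randn(1, 16, 128, 128, dtype=torch.bfloat16, device="cuda")
prompt_embeds = torch.randn(1, 77, 4096, dtype=torch.bfloat16, device="cuda")
timestep = torch.tensor([999], device="cuda")

with torch.no_grad():
    output = transformer(
        hidden_states=latents,
        encoder_hidden_states=prompt_embeds,
        timestep=timestep,
        original_size=torch.tensor([[1024.0, 1024.0]], device="cuda"),
        target_size=torch.tensor([[1024.0, 1024.0]], device="cuda"),
        crop_coords=torch.tensor([[0.0, 0.0]], device="cuda"),
    )

# Expected (1, 16, 128, 128) given the default out_channels of 16.
print(output.sample.shape)
```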