
CogView4Transformer2DModel

A Diffusion Transformer model for 2D data from CogView4.

The model can be loaded with the following code snippet.

import torch
from diffusers import CogView4Transformer2DModel

transformer = CogView4Transformer2DModel.from_pretrained(
    "THUDM/CogView4-6B", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")
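
The transformer loaded this way can also be passed to CogView4Pipeline, for example to run it in bfloat16 while reusing the pipeline's other components. A minimal sketch; the prompt and output filename are illustrative.

import torch
from diffusers import CogView4Pipeline, CogView4Transformer2DModel

transformer = CogView4Transformer2DModel.from_pretrained(
    "THUDM/CogView4-6B", subfolder="transformer", torch_dtype=torch.bfloat16
)

# Reuse the separately loaded transformer inside the full text-to-image pipeline;
# .to("cuda") moves all pipeline components, including the passed-in transformer.
pipe = CogView4Pipeline.from_pretrained(
    "THUDM/CogView4-6B", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

image = pipe("a photo of an astronaut riding a horse on mars").images[0]
image.save("cogview4.png")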

CogView4Transformer2DModel[[diffusers.CogView4Transformer2DModel]]


Parameters:

patch_size (int, defaults to 2) : The size of the patches to use in the patch embedding layer.

in_channels (int, defaults to 16) : The number of channels in the input.

num_layers (int, defaults to 30) : The number of Transformer blocks to use.

attention_head_dim (int, defaults to 40) : The number of channels in each head.

num_attention_heads (int, defaults to 64) : The number of heads to use for multi-head attention.

out_channels (int, defaults to 16) : The number of channels in the output.

text_embed_dim (int, defaults to 4096) : Input dimension of text embeddings from the text encoder.

time_embed_dim (int, defaults to 512) : Output dimension of timestep embeddings.

condition_dim (int, defaults to 256) : The embedding dimension of the input SDXL-style resolution conditions (original_size, target_size, crop_coords).

pos_embed_max_size (int, defaults to 128) : The maximum resolution of the positional embeddings, from which slices of shape H x W are taken and added to input patched latents, where H and W are the latent height and width respectively. A value of 128 means that the maximum supported height and width for image generation is 128 * vae_scale_factor * patch_size => 128 * 8 * 2 => 2048.

sample_size (int, defaults to 128) : The base resolution of input latents. If height/width is not provided during generation, this value is used to determine the resolution as sample_size * vae_scale_factor => 128 * 8 => 1024.
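
A quick sanity check of the resolution arithmetic above, using the defaults listed here (vae_scale_factor = 8 is the spatial downsampling factor of the CogView4 VAE, as stated in the parameter descriptions):

vae_scale_factor = 8     # spatial downsampling factor of the VAE
patch_size = 2           # defaults listed above
pos_embed_max_size = 128
sample_size = 128

# Maximum supported generation resolution (see pos_embed_max_size above):
assert pos_embed_max_size * vae_scale_factor * patch_size == 2048

# Default resolution when height/width are not passed (see sample_size above):
assert sample_size * vae_scale_factor == 1024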

Transformer2DModelOutput[[diffusers.models.modeling_outputs.Transformer2DModelOutput]]


The output of Transformer2DModel.

Parameters:

sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) : The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.
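
To illustrate where this output appears: calling the transformer's forward pass with return_dict=True (the default) returns a Transformer2DModelOutput whose sample field holds the predicted latents. The dummy tensor shapes and the names of the resolution-condition arguments below follow the parameter descriptions above and are assumptions, not a verified signature; in practice these inputs come from the VAE, text encoder, and scheduler.

import torch
from diffusers import CogView4Transformer2DModel

transformer = CogView4Transformer2DModel.from_pretrained(
    "THUDM/CogView4-6B", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")

# Dummy inputs for illustration only (shapes follow the defaults listed above).
latents = torch.randn(1, 16, 128, 128, dtype=torch.bfloat16, device="cuda")     # in_channels=16, sample_size=128
prompt_embeds = torch.randn(1, 224, 4096, dtype=torch.bfloat16, device="cuda")  # text_embed_dim=4096
timestep = torch.tensor([999], device="cuda")
original_size = torch.tensor([[1024, 1024]], device="cuda")  # SDXL-style conditions, see condition_dim above
target_size = torch.tensor([[1024, 1024]], device="cuda")
crop_coords = torch.tensor([[0, 0]], device="cuda")

output = transformer(
    hidden_states=latents,
    encoder_hidden_states=prompt_embeds,
    timestep=timestep,
    original_size=original_size,
    target_size=target_size,
    crop_coords=crop_coords,
)
print(output.sample.shape)  # (batch_size, out_channels, height, width)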
