
# CogView4Transformer2DModel

A Diffusion Transformer model for 2D data from CogView4.

The model can be loaded with the following code snippet.

```python
import torch
from diffusers import CogView4Transformer2DModel

transformer = CogView4Transformer2DModel.from_pretrained(
    "THUDM/CogView4-6B", subfolder="transformer", torch_dtype=torch.bfloat16
).to("cuda")
```
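Once loaded, the checkpoint's configuration can be inspected. This is a minimal sketch, assuming the snippet above has been run; the commented values reflect the defaults documented below, and the checkpoint config is authoritative.

```python
# diffusers registers __init__ arguments to `config`, so the documented
# defaults can be checked against the actual checkpoint.
print(transformer.config.num_layers)           # 30 per this doc
print(transformer.config.num_attention_heads)  # 64 per this doc
print(transformer.config.attention_head_dim)   # 40 per this doc
```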

## CogView4Transformer2DModel

**class** diffusers.CogView4Transformer2DModel [source](https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/transformers/transformer_cogview4.py#L619)

`( patch_size: int = 2, in_channels: int = 16, out_channels: int = 16, num_layers: int = 30, attention_head_dim: int = 40, num_attention_heads: int = 64, text_embed_dim: int = 4096, time_embed_dim: int = 512, condition_dim: int = 256, pos_embed_max_size: int = 128, sample_size: int = 128, rope_axes_dim: Tuple[int, int] = (256, 256) )`

Parameters:

- patch_size (int, defaults to 2) -- The size of the patches to use in the patch embedding layer.
- in_channels (int, defaults to 16) -- The number of channels in the input.
- num_layers (int, defaults to 30) -- The number of Transformer blocks to use.
- attention_head_dim (int, defaults to 40) -- The number of channels in each attention head.
- num_attention_heads (int, defaults to 64) -- The number of heads to use for multi-head attention.
- out_channels (int, defaults to 16) -- The number of channels in the output.
- text_embed_dim (int, defaults to 4096) -- Input dimension of the text embeddings from the text encoder.
- time_embed_dim (int, defaults to 512) -- Output dimension of the timestep embeddings.
- condition_dim (int, defaults to 256) -- The embedding dimension of the input SDXL-style resolution conditions (original_size, target_size, crop_coords).
- pos_embed_max_size (int, defaults to 128) -- The maximum resolution of the positional embeddings, from which slices of shape H x W are taken and added to the input patched latents, where H and W are the latent height and width respectively. A value of 128 means the maximum supported height and width for image generation is 128 * vae_scale_factor * patch_size => 128 * 8 * 2 => 2048.
- sample_size (int, defaults to 128) -- The base resolution of the input latents. If height/width is not provided during generation, this value is used to determine the resolution as sample_size * vae_scale_factor => 128 * 8 => 1024 (see the sketch after this list).
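To make the resolution arithmetic above concrete, here is a minimal sketch. The vae_scale_factor of 8 is the value assumed in this doc, and all variable names are illustrative only, not part of the diffusers API.

```python
patch_size = 2
vae_scale_factor = 8
pos_embed_max_size = 128
sample_size = 128

# Largest supported image height/width, bounded by pos_embed_max_size.
max_resolution = pos_embed_max_size * vae_scale_factor * patch_size
assert max_resolution == 2048

# Resolution used when height/width are not provided during generation.
default_resolution = sample_size * vae_scale_factor
assert default_resolution == 1024
```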

## Transformer2DModelOutput

**class** diffusers.models.modeling_outputs.Transformer2DModelOutput [source](https://github.com/huggingface/diffusers/blob/vr_12595/src/diffusers/models/modeling_outputs.py#L21)

`( sample: torch.Tensor )`

Parameters:

- sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) -- The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
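As a quick illustration of this output type, here is a minimal sketch that constructs one directly and reads its sample field; the tensor shape is an arbitrary example, not a value the model guarantees.

```python
import torch
from diffusers.models.modeling_outputs import Transformer2DModelOutput

# Construct the output dataclass directly with an example tensor of shape
# (batch_size, num_channels, height, width); the values here are arbitrary.
output = Transformer2DModelOutput(sample=torch.randn(1, 16, 128, 128))
print(output.sample.shape)  # torch.Size([1, 16, 128, 128])
```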
