CosmosTransformer3DModel

A Diffusion Transformer model for 3D video-like data was introduced in Cosmos World Foundation Model Platform for Physical AI by NVIDIA.

The model can be loaded with the following code snippet.

import torch
from diffusers import CosmosTransformer3DModel

transformer = CosmosTransformer3DModel.from_pretrained("nvidia/Cosmos-1.0-Diffusion-7B-Text2World", subfolder="transformer", torch_dtype=torch.bfloat16)
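
The snippet below is a minimal smoke test with random tensors. It assumes the forward signature in transformer_cosmos.py (hidden_states, timestep, encoder_hidden_states, plus an optional padding_mask over the latent spatial grid); all shapes and the timestep value here are illustrative assumptions, not requirements.

# Illustrative latent shape: (batch, in_channels, latent_frames, latent_height, latent_width)
hidden_states = torch.randn(1, 16, 2, 32, 32, dtype=torch.bfloat16)
# Text embeddings from the text encoder: (batch, sequence_length, text_embed_dim)
encoder_hidden_states = torch.randn(1, 512, 1024, dtype=torch.bfloat16)
# One diffusion timestep per batch element (value chosen arbitrarily for this test)
timestep = torch.tensor([1.0], dtype=torch.bfloat16)
# Spatial padding mask; assumed needed since concat_padding_mask defaults to True
padding_mask = torch.zeros(1, 1, 32, 32, dtype=torch.bfloat16)

with torch.no_grad():
    output = transformer(
        hidden_states=hidden_states,
        timestep=timestep,
        encoder_hidden_states=encoder_hidden_states,
        padding_mask=padding_mask,
    )
print(output.sample.shape)  # expected to match hidden_states: (1, 16, 2, 32, 32)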

CosmosTransformer3DModel

class diffusers.CosmosTransformer3DModel
(source: https://github.com/huggingface/diffusers/blob/vr_12509/src/diffusers/models/transformers/transformer_cosmos.py#L387)

CosmosTransformer3DModel(in_channels: int = 16, out_channels: int = 16, num_attention_heads: int = 32, attention_head_dim: int = 128, num_layers: int = 28, mlp_ratio: float = 4.0, text_embed_dim: int = 1024, adaln_lora_dim: int = 256, max_size: typing.Tuple[int, int, int] = (128, 240, 240), patch_size: typing.Tuple[int, int, int] = (1, 2, 2), rope_scale: typing.Tuple[float, float, float] = (2.0, 1.0, 1.0), concat_padding_mask: bool = True, extra_pos_embed_type: typing.Optional[str] = 'learnable')

Parameters:

  • in_channels (int, defaults to 16) -- The number of channels in the input.

  • out_channels (int, defaults to 16) -- The number of channels in the output.
  • num_attention_heads (int, defaults to 32) -- The number of heads to use for multi-head attention.
  • attention_head_dim (int, defaults to 128) -- The number of channels in each attention head.
  • num_layers (int, defaults to 28) -- The number of layers of transformer blocks to use.
  • mlp_ratio (float, defaults to 4.0) -- The ratio of the hidden layer size to the input size in the feedforward network.
  • text_embed_dim (int, defaults to 1024) -- Input dimension of text embeddings from the text encoder.
  • adaln_lora_dim (int, defaults to 256) -- The hidden dimension of the Adaptive LayerNorm LoRA layer.
  • max_size (Tuple[int, int, int], defaults to (128, 240, 240)) -- The maximum size of the input latent tensors in the temporal, height, and width dimensions.
  • patch_size (Tuple[int, int, int], defaults to (1, 2, 2)) -- The patch size to use for patchifying the input latent tensors in the temporal, height, and width dimensions (see the token-count sketch below).
  • rope_scale (Tuple[float, float, float], defaults to (2.0, 1.0, 1.0)) -- The scaling factor to use for RoPE in the temporal, height, and width dimensions.
  • concat_padding_mask (bool, defaults to True) -- Whether to concatenate the padding mask to the input latent tensors.
  • extra_pos_embed_type (str, optional, defaults to learnable) -- The type of extra positional embeddings to use. Can be one of None or learnable.

A Transformer model for video-like data used in Cosmos.
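
As a concrete illustration of how patch_size determines sequence length (an assumption-based sketch, not diffusers code): with the default patch_size of (1, 2, 2), each latent frame is split into 2x2 spatial patches, so the token count is latent_frames * (latent_height / 2) * (latent_width / 2).

def num_tokens(latent_frames, latent_height, latent_width, patch_size=(1, 2, 2)):
    # Hypothetical helper: tokens = product of each latent dim divided by its patch dim
    p_t, p_h, p_w = patch_size
    return (latent_frames // p_t) * (latent_height // p_h) * (latent_width // p_w)

# E.g., a 16 x 88 x 160 latent (roughly a 121-frame 704x1280 video after 8x
# temporal and 8x spatial VAE compression) would produce:
print(num_tokens(16, 88, 160))  # 16 * 44 * 80 = 56320 tokens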

Transformer2DModelOutput

class diffusers.models.modeling_outputs.Transformer2DModelOutput
(source: https://github.com/huggingface/diffusers/blob/vr_12509/src/diffusers/models/modeling_outputs.py#L21)

Transformer2DModelOutput(sample: torch.Tensor)

Parameters:

  • sample (torch.Tensor of shape (batch_size, num_channels, height, width), or (batch_size, num_vector_embeds - 1, num_latent_pixels) if Transformer2DModel is discrete) -- The hidden states output conditioned on the encoder_hidden_states input. If discrete, returns probability distributions for the unnoised latent pixels.

The output of Transformer2DModel.
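
As a usage note, the model returns this dataclass when return_dict=True (the default); per the standard diffusers convention, return_dict=False yields a plain tuple instead. Reusing the tensors from the sketch above:

# Access the prediction via the dataclass field...
sample = transformer(
    hidden_states=hidden_states,
    timestep=timestep,
    encoder_hidden_states=encoder_hidden_states,
    padding_mask=padding_mask,
).sample

# ...or request a plain tuple instead of the dataclass
(sample,) = transformer(
    hidden_states=hidden_states,
    timestep=timestep,
    encoder_hidden_states=encoder_hidden_states,
    padding_mask=padding_mask,
    return_dict=False,
)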
