
# LatteTransformer3DModel

A Diffusion Transformer model for 3D data from Latte.
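A minimal loading sketch, assuming the public `maxin-cn/Latte-1` Hub checkpoint and its `transformer` subfolder (both are assumptions about the checkpoint layout, not part of this page; substitute your own repo id):

```python
from diffusers import LatteTransformer3DModel

# Load pretrained Latte transformer weights from the Hub.
# The repo id and subfolder below are assumptions -- adjust to your checkpoint.
transformer = LatteTransformer3DModel.from_pretrained(
    "maxin-cn/Latte-1", subfolder="transformer"
)
```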

## LatteTransformer3DModel

class diffusers.LatteTransformer3DModel

### forward

[Source](https://github.com/huggingface/diffusers/blob/vr_12249/src/diffusers/models/transformers/latte_transformer_3d.py#L168)

`forward(hidden_states: torch.Tensor, timestep: Optional[torch.LongTensor] = None, encoder_hidden_states: Optional[torch.Tensor] = None, encoder_attention_mask: Optional[torch.Tensor] = None, enable_temporal_attentions: bool = True, return_dict: bool = True)`

The LatteTransformer3DModel forward method.

Parameters:

* **hidden_states** (`torch.Tensor` of shape `(batch_size, channel, num_frame, height, width)`) -- Input hidden_states.

* **timestep** (`torch.LongTensor`, *optional*) -- Used to indicate the denoising step. Optional timestep applied as an embedding in `AdaLayerNorm`.

* **encoder_hidden_states** (`torch.FloatTensor` of shape `(batch_size, sequence_length, embed_dim)`, *optional*) -- Conditional embeddings for the cross-attention layer. If not given, cross-attention defaults to self-attention.

* **encoder_attention_mask** (`torch.Tensor`, *optional*) -- Cross-attention mask applied to `encoder_hidden_states`. Two formats are supported (see the mask-to-bias sketch after this list):

  * Mask of shape `(batch, sequence_length)`: `True` = keep, `False` = discard.
  * Bias of shape `(batch, 1, sequence_length)`: `0` = keep, `-10000` = discard.

  If `ndim == 2`, the input is interpreted as a mask and converted into a bias consistent with the format above. This bias is added to the cross-attention scores.

* **enable_temporal_attentions** (`bool`, *optional*, defaults to `True`) -- Whether to enable temporal attention blocks.

* **return_dict** (`bool`, *optional*, defaults to `True`) -- Whether or not to return a `~models.transformer_2d.Transformer2DModelOutput` instead of a plain tuple.
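To make the two mask formats concrete, here is a minimal sketch of the `ndim == 2` mask-to-bias conversion described above (it mirrors the common diffusers convention, not necessarily this model's internal code):

```python
import torch

# A 2-D boolean mask (batch, sequence_length): True = keep, False = discard.
# The values here are hypothetical, for illustration only.
mask = torch.tensor([[True, True, False]])

# Convert to an additive bias: 0 where the mask keeps a position,
# -10000 where it discards one, then add the singleton middle dimension
# so the result has shape (batch, 1, sequence_length).
bias = (1 - mask.to(torch.float32)) * -10000.0
bias = bias.unsqueeze(1)

print(bias.shape)  # torch.Size([1, 1, 3]); the bias is added to the attention scores
```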


Returns:

If `return_dict` is `True`, a `~models.transformer_2d.Transformer2DModelOutput` is returned; otherwise, a tuple whose first element is the sample tensor.
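Putting it together, a sketch of one forward pass with the model loaded above; every tensor size below is an illustrative assumption (latent channels, frame count, resolution, and text-embedding dimensions all depend on the checkpoint):

```python
import torch

# Dummy inputs with the documented layouts; the concrete sizes are assumptions,
# not values guaranteed by any particular checkpoint.
hidden_states = torch.randn(1, 4, 16, 64, 64)      # (batch_size, channel, num_frame, height, width)
timestep = torch.tensor([999])                     # one denoising step per batch element
encoder_hidden_states = torch.randn(1, 120, 4096)  # (batch_size, sequence_length, embed_dim), assumed dims

with torch.no_grad():
    out = transformer(
        hidden_states,
        timestep=timestep,
        encoder_hidden_states=encoder_hidden_states,
        enable_temporal_attentions=True,  # set False to skip the temporal attention blocks
        return_dict=True,
    )

# With return_dict=True a Transformer2DModelOutput is returned; its .sample field
# holds the predicted tensor (the channel count may differ from the input if the
# checkpoint also predicts variance).
print(out.sample.shape)
```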
