LatteTransformer3DModel
A Diffusion Transformer model for 3D data from Latte.
LatteTransformer3DModel[[diffusers.LatteTransformer3DModel]]
class diffusers.LatteTransformer3DModel
forward — diffusers.LatteTransformer3DModel.forward

Parameters:
- hidden_states (torch.FloatTensor of shape (batch size, channel, num_frame, height, width)) — Input hidden_states.
- timestep (torch.LongTensor, optional) — Used to indicate denoising step. Optional timestep to be applied as an embedding in AdaLayerNorm.
- encoder_hidden_states (torch.FloatTensor of shape (batch size, sequence len, embed dims), optional) — Conditional embeddings for the cross-attention layer. If not given, cross-attention defaults to self-attention.
- encoder_attention_mask (torch.Tensor, optional) — Cross-attention mask applied to encoder_hidden_states. Two formats are supported:
  - Mask (batch, sequence_length): True = keep, False = discard.
  - Bias (batch, 1, sequence_length): 0 = keep, -10000 = discard.
  If ndim == 2, the tensor is interpreted as a mask and then converted into a bias consistent with the format above. This bias will be added to the cross-attention scores.
- enable_temporal_attentions (bool, optional, defaults to True) — Whether to enable temporal attentions.
- return_dict (bool, optional, defaults to True) — Whether or not to return a ~models.unet_2d_condition.UNet2DConditionOutput instead of a plain tuple.

Returns: If return_dict is True, a ~models.transformer_2d.Transformer2DModelOutput is returned; otherwise a tuple whose first element is the sample tensor.
The LatteTransformer3DModel forward method.
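The two encoder_attention_mask formats differ only by an additive rescaling. A minimal pure-Python sketch of the mask-to-bias conversion the docstring describes (diffusers performs the equivalent conversion on torch tensors internally; the helper name here is hypothetical):

```python
def mask_to_bias(mask):
    """Convert a 2-D keep/discard mask of shape (batch, sequence_length)
    into an additive attention bias of shape (batch, 1, sequence_length):
    0.0 where the token is kept, -10000.0 where it is discarded."""
    return [[[0.0 if keep else -10000.0 for keep in row]] for row in mask]

# A batch of one sequence where the last token is padding:
bias = mask_to_bias([[True, True, False]])
# bias == [[[0.0, 0.0, -10000.0]]]
```

Because the bias is added to the pre-softmax cross-attention scores, a value of -10000 drives the discarded positions' attention weights to effectively zero.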