
AuraFlowTransformer2DModel

A Transformer model for image-like data from AuraFlow.

diffusers.AuraFlowTransformer2DModel[[diffusers.AuraFlowTransformer2DModel]]

Source

A 2D Transformer model as introduced in AuraFlow (https://blog.fal.ai/auraflow/).

Parameters:

sample_size (int) : The width of the latent images. This is fixed during training since it is used to learn a number of position embeddings.

patch_size (int) : Patch size to turn the input data into small patches.

in_channels (int, optional, defaults to 4) : The number of channels in the input.

num_mmdit_layers (int, optional, defaults to 4) : The number of MMDiT Transformer blocks to use.

num_single_dit_layers (int, optional, defaults to 32) : The number of single DiT Transformer blocks to use. These blocks operate on concatenated image and text representations.

attention_head_dim (int, optional, defaults to 256) : The number of channels in each attention head.

num_attention_heads (int, optional, defaults to 12) : The number of heads to use for multi-head attention.

joint_attention_dim (int, optional) : The number of encoder_hidden_states dimensions to use.

caption_projection_dim (int) : The number of dimensions to use when projecting the encoder_hidden_states.

out_channels (int, defaults to 4) : The number of output channels.

pos_embed_max_size (int, defaults to 1024) : The maximum number of positions to embed from the image latents.

fuse_qkv_projections[[diffusers.AuraFlowTransformer2DModel.fuse_qkv_projections]]

[Source](https://github.com/huggingface/diffusers/blob/vr_12762/src/diffusers/models/transformers/auraflow_transformer_2d.py#L429)

Enables fused QKV projections. For self-attention modules, all projection matrices (query, key, and value) are fused. For cross-attention modules, the key and value projection matrices are fused.

> This API is 🧪 experimental.

set_attn_processor[[diffusers.AuraFlowTransformer2DModel.set_attn_processor]]

Source

Sets the attention processor to use to compute attention.

Parameters:

processor (dict of AttentionProcessor or a single AttentionProcessor) : The instantiated processor class, or a dictionary of processor classes, to set as the processor for all Attention layers. If processor is a dict, each key must define the path to the corresponding cross-attention processor. This is strongly recommended when setting trainable attention processors.

unfuse_qkv_projections[[diffusers.AuraFlowTransformer2DModel.unfuse_qkv_projections]]

Source

Disables the fused QKV projection if enabled.

> This API is 🧪 experimental.
