FluxTransformer2DModel

The Transformer model for image-like data introduced in Flux.

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/
forward

The FluxTransformer2DModel forward method.

Source: https://github.com/huggingface/diffusers/blob/vr_11739/src/diffusers/models/transformers/transformer_flux.py#L637

forward(hidden_states: Tensor, encoder_hidden_states: Tensor = None, pooled_projections: Tensor = None, timestep: LongTensor = None, img_ids: Tensor = None, txt_ids: Tensor = None, guidance: Tensor = None, joint_attention_kwargs: Optional[Dict[str, Any]] = None, controlnet_block_samples=None, controlnet_single_block_samples=None, return_dict: bool = True, controlnet_blocks_repeat: bool = False)

Parameters:
- hidden_states (torch.Tensor of shape (batch_size, image_sequence_length, in_channels)) -- Input hidden states.
- encoder_hidden_states (torch.Tensor of shape (batch_size, text_sequence_length, joint_attention_dim)) -- Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- pooled_projections (torch.Tensor of shape (batch_size, projection_dim)) -- Embeddings projected from the embeddings of the input conditions.
- timestep (torch.LongTensor) -- Used to indicate the denoising step.
- block_controlnet_hidden_states (list of torch.Tensor) -- A list of tensors that, if specified, are added to the residuals of the transformer blocks.
- joint_attention_kwargs (dict, optional) -- A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
- return_dict (bool, optional, defaults to True) -- Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

Returns:
If return_dict is True, a ~models.transformer_2d.Transformer2DModelOutput is returned; otherwise a tuple whose first element is the sample tensor.
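As a rough illustration of how these argument shapes fit together, the sketch below (plain Python, no torch or diffusers required) maps a latent grid to the expected shapes of the main forward() inputs. The flux_forward_shapes helper and the latent sizes are illustrative assumptions, not part of the diffusers API.

```python
# Hedged sketch: expected shapes of the main forward() arguments for the
# default FLUX configuration. Assumption: FLUX packs 2x2 patches of a
# 16-channel latent into tokens, so in_channels = 16 * 2 * 2 = 64 and the
# image sequence length is (latent_height // 2) * (latent_width // 2).

def flux_forward_shapes(batch_size, latent_height, latent_width,
                        text_seq_len, in_channels=64,
                        joint_attention_dim=4096, pooled_projection_dim=768):
    """Return the expected shapes of the main forward() arguments."""
    image_seq_len = (latent_height // 2) * (latent_width // 2)
    return {
        "hidden_states": (batch_size, image_seq_len, in_channels),
        "encoder_hidden_states": (batch_size, text_seq_len, joint_attention_dim),
        "pooled_projections": (batch_size, pooled_projection_dim),
        "timestep": (batch_size,),
        # One rotary-position id per token, with 3 axes (see axes_dims_rope).
        "img_ids": (image_seq_len, 3),
        "txt_ids": (text_seq_len, 3),
    }

shapes = flux_forward_shapes(batch_size=1, latent_height=128,
                             latent_width=128, text_seq_len=512)
print(shapes["hidden_states"])  # (1, 4096, 64)
```

For a 128x128 latent, the 2x2 patching yields 64 * 64 = 4096 image tokens, which is where the image_sequence_length axis of hidden_states comes from.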
Parameters:
- patch_size (int, defaults to 1) -- Patch size used to turn the input data into small patches.
- in_channels (int, defaults to 64) -- The number of channels in the input.
- out_channels (int, optional, defaults to None) -- The number of channels in the output. If not specified, it defaults to in_channels.
- num_layers (int, defaults to 19) -- The number of dual-stream DiT blocks to use.
- num_single_layers (int, defaults to 38) -- The number of single-stream DiT blocks to use.
- attention_head_dim (int, defaults to 128) -- The number of dimensions to use for each attention head.
- num_attention_heads (int, defaults to 24) -- The number of attention heads to use.
- joint_attention_dim (int, defaults to 4096) -- The number of dimensions to use for the joint attention (the embedding/channel dimension of encoder_hidden_states).
- pooled_projection_dim (int, defaults to 768) -- The number of dimensions to use for the pooled projection.
- guidance_embeds (bool, defaults to False) -- Whether to use guidance embeddings for the guidance-distilled variant of the model.
- axes_dims_rope (Tuple[int], defaults to (16, 56, 56)) -- The dimensions to use for the rotary positional embeddings.
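The defaults above are mutually constrained. The snippet below (plain Python; an illustrative sketch inferred from the parameter descriptions, not an excerpt of the diffusers source) spells out two of those relationships:

```python
# Default configuration values, copied from the parameter list above.
config = {
    "patch_size": 1,
    "in_channels": 64,
    "num_layers": 19,          # dual-stream DiT blocks
    "num_single_layers": 38,   # single-stream DiT blocks
    "attention_head_dim": 128,
    "num_attention_heads": 24,
    "joint_attention_dim": 4096,
    "pooled_projection_dim": 768,
    "guidance_embeds": False,
    "axes_dims_rope": (16, 56, 56),
}

# Assumption: the transformer's inner (channel) dimension is heads * head_dim.
inner_dim = config["num_attention_heads"] * config["attention_head_dim"]
print(inner_dim)  # 3072

# Assumption: the per-axis rotary embedding dims together cover one attention
# head, i.e. 16 + 56 + 56 = 128 = attention_head_dim.
assert sum(config["axes_dims_rope"]) == config["attention_head_dim"]
```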