BriaTransformer2DModel

A modified Flux Transformer model from Bria.
The Transformer model introduced in Flux. Based on FluxPipeline, with several changes:
- no pooled embeddings
- zero padding is used for prompts
- no guidance embedding, since this is not a distilled version

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/
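In practice the transformer is usually loaded on its own and handed to a pipeline. A minimal sketch, assuming the model ships with a BriaPipeline integration; the checkpoint id briaai/BRIA-3.2 is a placeholder, so substitute the actual repository:

```python
import torch
from diffusers import BriaPipeline, BriaTransformer2DModel

# "briaai/BRIA-3.2" is a placeholder repository id -- substitute the
# actual Bria checkpoint you intend to use.
transformer = BriaTransformer2DModel.from_pretrained(
    "briaai/BRIA-3.2", subfolder="transformer", torch_dtype=torch.bfloat16
)
pipe = BriaPipeline.from_pretrained(
    "briaai/BRIA-3.2", transformer=transformer, torch_dtype=torch.bfloat16
).to("cuda")

image = pipe("A portrait photo of a sea otter", num_inference_steps=30).images[0]
image.save("otter.png")
```

Loading the transformer separately like this is mainly useful when you want a different dtype or a fine-tuned transformer inside an otherwise stock pipeline.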
Parameters:
patch_size (int) : Patch size to turn the input data into small patches.
in_channels (int, optional, defaults to 16) : The number of channels in the input.
num_layers (int, optional, defaults to 18) : The number of layers of MMDiT blocks to use.
num_single_layers (int, optional, defaults to 18) : The number of layers of single DiT blocks to use.
attention_head_dim (int, optional, defaults to 64) : The number of channels in each head.
num_attention_heads (int, optional, defaults to 18) : The number of heads to use for multi-head attention.
joint_attention_dim (int, optional) : The number of encoder_hidden_states dimensions to use.
pooled_projection_dim (int) : The number of dimensions to use when projecting the pooled_projections.
guidance_embeds (bool, defaults to False) : Whether to use guidance embeddings.

forward

The BriaTransformer2DModel forward method.
Source: https://github.com/huggingface/diffusers/blob/vr_12652/src/diffusers/models/transformers/transformer_bria.py#L584

forward(hidden_states: Tensor, encoder_hidden_states: Tensor = None, pooled_projections: Tensor = None, timestep: LongTensor = None, img_ids: Tensor = None, txt_ids: Tensor = None, guidance: Tensor = None, attention_kwargs: dict[str, Any] | None = None, return_dict: bool = True, controlnet_block_samples=None, controlnet_single_block_samples=None)

Parameters:
hidden_states (torch.FloatTensor of shape (batch_size, channel, height, width)) : Input hidden_states.
encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_len, embed_dims)) : Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
pooled_projections (torch.FloatTensor of shape (batch_size, projection_dim)) : Embeddings projected from the embeddings of the input conditions.
timestep (torch.LongTensor) : Used to indicate the denoising step.
block_controlnet_hidden_states (list of torch.Tensor) : A list of tensors that, if specified, are added to the residuals of the transformer blocks.
attention_kwargs (dict, optional) : A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
return_dict (bool, optional, defaults to True) : Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

Returns:
If return_dict is True, an ~models.transformer_2d.Transformer2DModelOutput is returned, otherwise a tuple where the first element is the sample tensor.
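To make the call pattern concrete, here is a shape-only sketch of a direct forward call. It assumes the Flux-style convention of packed latent sequences and (seq_len, 3) positional-id tensors, which this modified Flux transformer may or may not follow exactly; the checkpoint id is again a placeholder, and real shapes should be taken from the checkpoint's config rather than from this illustration:

```python
import torch
from diffusers import BriaTransformer2DModel

# Placeholder checkpoint id; the real shapes come from the checkpoint config.
transformer = BriaTransformer2DModel.from_pretrained(
    "briaai/BRIA-3.2", subfolder="transformer"
)
cfg = transformer.config

batch, img_seq, txt_seq = 1, 64, 8  # illustrative sequence lengths
hidden_states = torch.randn(batch, img_seq, cfg.in_channels)  # packed latents (assumed Flux-style layout)
encoder_hidden_states = torch.randn(batch, txt_seq, cfg.joint_attention_dim)
timestep = torch.tensor([1000])
img_ids = torch.zeros(img_seq, 3)  # RoPE position ids (assumed layout)
txt_ids = torch.zeros(txt_seq, 3)

with torch.no_grad():
    out = transformer(
        hidden_states=hidden_states,
        encoder_hidden_states=encoder_hidden_states,
        timestep=timestep,
        img_ids=img_ids,
        txt_ids=txt_ids,
        return_dict=True,
    )
print(out.sample.shape)  # same sequence layout as hidden_states
```

Note that guidance is left unset here, consistent with the model not being a distilled, guidance-embedded variant.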