BriaTransformer2DModel

A modified Flux Transformer model from Bria.

class diffusers.BriaTransformer2DModel

Parameters:
- patch_size (int) -- Patch size to turn the input data into small patches.
- in_channels (int, optional, defaults to 16) -- The number of channels in the input.
- num_layers (int, optional, defaults to 18) -- The number of layers of MMDiT blocks to use.
- num_single_layers (int, optional, defaults to 18) -- The number of layers of single DiT blocks to use.
- attention_head_dim (int, optional, defaults to 64) -- The number of channels in each head.
- num_attention_heads (int, optional, defaults to 18) -- The number of heads to use for multi-head attention.
- joint_attention_dim (int, optional) -- The number of encoder_hidden_states dimensions to use.
- pooled_projection_dim (int) -- The number of dimensions to use when projecting the pooled_projections.
- guidance_embeds (bool, defaults to False) -- Whether to use guidance embeddings.
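The patch_size parameter controls how the latent image is unfolded into a token sequence: each non-overlapping patch becomes one token. A minimal sketch of that bookkeeping (hypothetical helper, not part of diffusers):

```python
def num_image_tokens(height: int, width: int, patch_size: int) -> int:
    """Number of patch tokens a (height, width) latent grid yields.

    Each non-overlapping patch_size x patch_size block of the latent
    becomes one token, so the spatial dims must divide evenly.
    """
    if height % patch_size or width % patch_size:
        raise ValueError("latent dims must be divisible by patch_size")
    return (height // patch_size) * (width // patch_size)

# A 64x64 latent split into 2x2 patches yields 1024 tokens.
print(num_image_tokens(64, 64, patch_size=2))  # -> 1024
```

The transformer then attends jointly over these image tokens and the text tokens from encoder_hidden_states.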
The Transformer model introduced in Flux. Based on FluxPipeline, with several changes:
- no pooled embeddings
- zero padding is used for prompts
- no guidance embedding, since this is not a distilled version

Reference: https://blackforestlabs.ai/announcing-black-forest-labs/
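The zero-padding change above can be sketched as follows: rather than filling short prompts with a repeated pad-token embedding, the prompt embedding sequence is extended with all-zero vectors up to the target length (hypothetical helper in plain Python, for illustration only):

```python
def zero_pad_prompt(embeddings: list[list[float]], target_len: int) -> list[list[float]]:
    """Pad a sequence of prompt-token embeddings with zero vectors.

    embeddings: list of equal-length float vectors, one per prompt token.
    Returns a list of length target_len; appended entries are all zeros.
    """
    if len(embeddings) > target_len:
        raise ValueError("prompt is longer than target_len")
    dim = len(embeddings[0])
    pad = [[0.0] * dim for _ in range(target_len - len(embeddings))]
    return embeddings + pad

padded = zero_pad_prompt([[0.5, 0.1], [0.2, 0.3]], target_len=4)
print(len(padded))   # -> 4
print(padded[-1])    # -> [0.0, 0.0]
```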
forward

The BriaTransformer2DModel forward method.

Parameters:
- hidden_states (torch.FloatTensor of shape (batch_size, channel, height, width)) -- Input hidden_states.
- encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_len, embed_dims)) -- Conditional embeddings (embeddings computed from the input conditions, such as prompts) to use.
- pooled_projections (torch.FloatTensor of shape (batch_size, projection_dim)) -- Embeddings projected from the embeddings of the input conditions.
- timestep (torch.LongTensor) -- Used to indicate the denoising step.
- block_controlnet_hidden_states (list of torch.Tensor) -- A list of tensors that, if specified, are added to the residuals of the transformer blocks.
- attention_kwargs (dict, optional) -- A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
- return_dict (bool, optional, defaults to True) -- Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

Returns: If return_dict is True, a ~models.transformer_2d.Transformer2DModelOutput is returned; otherwise, a tuple where the first element is the sample tensor.
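The return_dict convention described above can be illustrated with a minimal stand-in (hypothetical names; the real method returns diffusers' Transformer2DModelOutput, not this class):

```python
from dataclasses import dataclass


@dataclass
class FakeOutput:
    # Stand-in for diffusers' Transformer2DModelOutput.
    sample: list


def fake_forward(hidden_states, return_dict=True):
    sample = hidden_states  # a real model would denoise here
    if not return_dict:
        # Plain tuple: the sample tensor is the first element.
        return (sample,)
    return FakeOutput(sample=sample)


out = fake_forward([1, 2, 3])
print(out.sample)                                      # -> [1, 2, 3]
print(fake_forward([1, 2, 3], return_dict=False)[0])   # -> [1, 2, 3]
```

Either way, downstream code can recover the sample: attribute access when return_dict=True, index [0] on the tuple otherwise.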