# BriaTransformer2DModel

A modified Flux Transformer model from Bria.

## BriaTransformer2DModel

class diffusers.BriaTransformer2DModel(patch_size: int = 1, in_channels: int = 64, num_layers: int = 19, num_single_layers: int = 38, attention_head_dim: int = 128, num_attention_heads: int = 24, joint_attention_dim: int = 4096, pooled_projection_dim: int = None, guidance_embeds: bool = False, axes_dims_rope: List[int] = [16, 56, 56], rope_theta = 10000, time_theta = 10000)

[source](https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/transformers/transformer_bria.py#L506)

**Parameters**

  • patch_size (int, defaults to 1) -- Patch size to turn the input data into small patches.
  • in_channels (int, optional, defaults to 64) -- The number of channels in the input.
  • num_layers (int, optional, defaults to 19) -- The number of layers of MMDiT blocks to use.
  • num_single_layers (int, optional, defaults to 38) -- The number of layers of single DiT blocks to use.
  • attention_head_dim (int, optional, defaults to 128) -- The number of channels in each head.
  • num_attention_heads (int, optional, defaults to 24) -- The number of heads to use for multi-head attention.
  • joint_attention_dim (int, optional, defaults to 4096) -- The number of encoder_hidden_states dimensions to use.
  • pooled_projection_dim (int, optional) -- Number of dimensions to use when projecting the pooled_projections.
  • guidance_embeds (bool, optional, defaults to False) -- Whether to use guidance embeddings.

The Transformer model introduced in Flux, based on FluxPipeline with several changes.
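The model can be constructed directly with the arguments listed above. A minimal sketch follows; the import path mirrors the source file linked above and may differ depending on how the class is exported in a given diffusers version.

```python
# Minimal construction sketch using the documented defaults.
# Import path is an assumption based on the source file location;
# the class may also be exposed as `diffusers.BriaTransformer2DModel`.
from diffusers.models.transformers.transformer_bria import BriaTransformer2DModel

model = BriaTransformer2DModel(
    patch_size=1,
    in_channels=64,
    num_layers=19,
    num_single_layers=38,
    attention_head_dim=128,
    num_attention_heads=24,
    joint_attention_dim=4096,
    guidance_embeds=False,
    axes_dims_rope=[16, 56, 56],
)
```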

#### forward

forward(hidden_states: torch.Tensor, encoder_hidden_states: torch.Tensor = None, pooled_projections: torch.Tensor = None, timestep: torch.LongTensor = None, img_ids: torch.Tensor = None, txt_ids: torch.Tensor = None, guidance: torch.Tensor = None, attention_kwargs: Optional[Dict[str, Any]] = None, return_dict: bool = True, controlnet_block_samples = None, controlnet_single_block_samples = None)

[source](https://github.com/huggingface/diffusers/blob/vr_12229/src/diffusers/models/transformers/transformer_bria.py#L584)

**Parameters**

  • hidden_states (torch.FloatTensor of shape (batch_size, channel, height, width)) -- Input hidden_states.
  • encoder_hidden_states (torch.FloatTensor of shape (batch_size, sequence_len, embed_dims)) -- Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
  • pooled_projections (torch.FloatTensor of shape (batch_size, projection_dim)) -- Embeddings projected from the embeddings of input conditions.
  • timestep (torch.LongTensor) -- Used to indicate the denoising step.
  • controlnet_block_samples (list of torch.Tensor, optional) -- A list of tensors that, if specified, are added to the residuals of the transformer blocks.
  • attention_kwargs (dict, optional) -- A kwargs dictionary that, if specified, is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
  • return_dict (bool, optional, defaults to True) -- Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

**Returns**

If return_dict is True, a ~models.transformer_2d.Transformer2DModelOutput is returned, otherwise a tuple where the first element is the sample tensor.

The BriaTransformer2DModel forward method.
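An illustrative forward call with dummy inputs, continuing from the construction sketch above. The tensor shapes follow Flux-style packed-latent conventions (hidden_states of shape (batch, num_patches, in_channels), 3-channel rotary position ids) and are assumptions for illustration, not confirmed by this page.

```python
# Forward-pass sketch with random inputs; shapes are illustrative assumptions.
import torch

batch, num_patches, text_len = 1, 1024, 128

sample = model(
    hidden_states=torch.randn(batch, num_patches, 64),         # last dim matches in_channels = 64
    encoder_hidden_states=torch.randn(batch, text_len, 4096),  # last dim matches joint_attention_dim = 4096
    timestep=torch.tensor([500]),                               # denoising step as a LongTensor
    img_ids=torch.zeros(num_patches, 3),                        # assumed Flux-style 3-channel position ids
    txt_ids=torch.zeros(text_len, 3),
    return_dict=False,
)[0]
print(sample.shape)
```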
