ChromaTransformer2DModel
A modified Flux Transformer model from Chroma.
ChromaTransformer2DModel[[diffusers.ChromaTransformer2DModel]]
class diffusers.ChromaTransformer2DModel

Parameters:

- patch_size (int, defaults to 1) -- Patch size to turn the input data into small patches.
- in_channels (int, defaults to 64) -- The number of channels in the input.
- out_channels (int, optional, defaults to None) -- The number of channels in the output. If not specified, it defaults to in_channels.
- num_layers (int, defaults to 19) -- The number of layers of dual stream DiT blocks to use.
- num_single_layers (int, defaults to 38) -- The number of layers of single stream DiT blocks to use.
- attention_head_dim (int, defaults to 128) -- The number of dimensions to use for each attention head.
- num_attention_heads (int, defaults to 24) -- The number of attention heads to use.
- joint_attention_dim (int, defaults to 4096) -- The number of dimensions to use for the joint attention (embedding/channel dimension of encoder_hidden_states).
- axes_dims_rope (Tuple[int], defaults to (16, 56, 56)) -- The dimensions to use for the rotary positional embeddings.
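The defaults above are internally consistent in a way worth noting. As a sketch (assumptions carried over from the Flux architecture this model is based on, not stated explicitly in this reference): the transformer's inner width is `num_attention_heads * attention_head_dim`, and the per-axis RoPE dimensions in `axes_dims_rope` sum to `attention_head_dim`.

```python
# Illustrative arithmetic on the default hyperparameters listed above.
# The inner-width and RoPE-sum relationships are assumptions inherited
# from Flux, not guarantees documented here.
num_attention_heads = 24
attention_head_dim = 128
axes_dims_rope = (16, 56, 56)

inner_dim = num_attention_heads * attention_head_dim
print(inner_dim)            # 3072 -- hidden width of the transformer
print(sum(axes_dims_rope))  # 128  -- matches attention_head_dim
```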
The Transformer model introduced in Flux, modified for Chroma.
Reference: https://huggingface.co/lodestones/Chroma1-HD
forward

diffusers.ChromaTransformer2DModel.forward

Parameters:

- hidden_states (torch.Tensor of shape (batch_size, image_sequence_length, in_channels)) -- Input hidden_states.
- encoder_hidden_states (torch.Tensor of shape (batch_size, text_sequence_length, joint_attention_dim)) -- Conditional embeddings (embeddings computed from the input conditions such as prompts) to use.
- timestep (torch.LongTensor) -- Used to indicate denoising step.
- block_controlnet_hidden_states (list of torch.Tensor) -- A list of tensors that if specified are added to the residuals of transformer blocks.
- joint_attention_kwargs (dict, optional) -- A kwargs dictionary that if specified is passed along to the AttentionProcessor as defined under self.processor in diffusers.models.attention_processor.
- return_dict (bool, optional, defaults to True) -- Whether or not to return a ~models.transformer_2d.Transformer2DModelOutput instead of a plain tuple.

Returns: If return_dict is True, a ~models.transformer_2d.Transformer2DModelOutput is returned, otherwise a tuple where the first element is the sample tensor.
The ChromaTransformer2DModel forward method.
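The shape conventions above can be made concrete with a minimal sketch. This only builds the input tensors the docstring describes, using the default `in_channels` (64) and `joint_attention_dim` (4096); the sequence lengths are arbitrary illustrative values, and no model is instantiated or run here.

```python
import torch

# Illustrative input shapes for forward(), per the parameter docs above.
batch_size = 1
image_sequence_length = 1024  # e.g. a 32x32 grid of latent patches
text_sequence_length = 512    # length of the text-encoder sequence

hidden_states = torch.randn(batch_size, image_sequence_length, 64)
encoder_hidden_states = torch.randn(batch_size, text_sequence_length, 4096)
timestep = torch.tensor([1000], dtype=torch.long)  # current denoising step

print(hidden_states.shape)          # torch.Size([1, 1024, 64])
print(encoder_hidden_states.shape)  # torch.Size([1, 512, 4096])
```

These tensors would be passed as `hidden_states`, `encoder_hidden_states`, and `timestep` when calling the model; a real checkpoint (such as the one referenced above) would be loaded with `from_pretrained` rather than constructed by hand.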