For more information (including how to compress models yourself), check out https://huggingface.co/DFloat11 and https://github.com/LeanModels/DFloat11

Feel free to request compression of other models as well (whether for the diffusers library, ComfyUI, or anything else), although models built on architectures I am unfamiliar with may be more difficult.
## How to Use

### diffusers
```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from dfloat11 import DFloat11Model
# from transformers.modeling_utils import no_init_weights  # for transformers<5.0.0
from transformers.initialization import no_init_weights  # for transformers>=5.0.0

with no_init_weights():
    transformer = FluxTransformer2DModel.from_config(
        FluxTransformer2DModel.load_config(
            "Shakker-Labs/AWPortrait-FL", subfolder="transformer"
        ),
        torch_dtype=torch.bfloat16,
    ).to(torch.bfloat16)

DFloat11Model.from_pretrained(
    "mingyi456/AWPortrait-FL-DF11",
    device="cpu",
    bfloat16_model=transformer,
)

pipe = FluxPipeline.from_pretrained(
    "Shakker-Labs/AWPortrait-FL",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)
pipe.enable_model_cpu_offload()

prompt = "close up portrait, Amidst the interplay of light and shadows in a photography studio,a soft spotlight traces the contours of a face,highlighting a figure clad in a sleek black turtleneck. The garment,hugging the skin with subtle luxury,complements the Caucasian model's understated makeup,embodying minimalist elegance. Behind,a pale gray backdrop extends,its fine texture shimmering subtly in the dim light,artfully balancing the composition and focusing attention on the subject. In a palette of black,gray,and skin tones,simplicity intertwines with profundity,as every detail whispers untold stories."

image = pipe(
    prompt,
    num_inference_steps=24,
    guidance_scale=3.5,
    width=768, height=1024,
).images[0]

image.save("image awportrait-fl.png")
```
### ComfyUI

Refer to this model instead.

## Compression details
This is the `pattern_dict` used for compression (raw strings, since the keys are regular expressions):

```python
pattern_dict = {
    r"transformer_blocks\.\d+": (
        "norm1.linear",
        "norm1_context.linear",
        "attn.to_q",
        "attn.to_k",
        "attn.to_v",
        "attn.add_k_proj",
        "attn.add_v_proj",
        "attn.add_q_proj",
        "attn.to_out.0",
        "attn.to_add_out",
        "ff.net.0.proj",
        "ff.net.2",
        "ff_context.net.0.proj",
        "ff_context.net.2",
    ),
    r"single_transformer_blocks\.\d+": (
        "norm.linear",
        "proj_mlp",
        "proj_out",
        "attn.to_q",
        "attn.to_k",
        "attn.to_v",
    ),
}
```
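As a sketch of how a dictionary like this is interpreted (an illustration of the pattern-matching idea, not the actual dfloat11 implementation): each key is a regular expression matched against submodule names, and each value lists the linear layers under a matching block that get DF11-compressed. The module names below mimic FLUX's naming scheme; the `selected` helper is hypothetical.

```python
import re

# Trimmed-down version of the pattern_dict above, for illustration.
pattern_dict = {
    r"transformer_blocks\.\d+": ("attn.to_q", "ff.net.2"),
    r"single_transformer_blocks\.\d+": ("proj_mlp",),
}

def selected(name):
    """Return True if `name` is one of the layers the patterns select."""
    for pattern, suffixes in pattern_dict.items():
        m = re.match(pattern, name)  # anchor the regex at the start of the path
        if m and any(name == f"{m.group(0)}.{s}" for s in suffixes):
            return True
    return False

for name in [
    "transformer_blocks.18.ff.net.2",            # matched
    "single_transformer_blocks.7.proj_mlp",      # matched
    "time_text_embed.timestep_embedder.linear_1" # not matched: no pattern covers it
]:
    print(name, "->", selected(name))
```

Because `\d+` matches any block index, one pattern covers every repeated transformer block, so only the per-block layer names need to be listed once.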
Model tree for mingyi456/AWPortrait-FL-DF11:
- Base model: black-forest-labs/FLUX.1-dev
- Finetuned: Shakker-Labs/AWPortrait-FL