---
license: creativeml-openrail-m
language:
- en
base_model:
- cyberdelia/CyberRealisticFlux
base_model_relation: quantized
pipeline_tag: text-to-image
library_name: diffusers
tags:
- diffusion-single-file
---

### Note: Despite the "FP16" in the filename, the original weights [in this repo](https://huggingface.co/cyberdelia/CyberRealisticFlux) are actually in BF16, which makes them safe for DF11 compression.

For more information (including how to compress models yourself), check out https://huggingface.co/DFloat11 and https://github.com/LeanModels/DFloat11

Feel free to request other models for compression as well, although models whose architecture I am unfamiliar with might be slightly tricky for me.

### How to Use

#### `diffusers`

1. Install the DFloat11 pip package *(installs the CUDA kernel automatically; requires a CUDA-compatible GPU and PyTorch installed)*:

```bash
pip install dfloat11[cuda12]
# or if you have CUDA version 11:
# pip install dfloat11[cuda11]
```

2. Download the [CyberRealistic_Flux_V2.5_FP16-DF11.safetensors](https://huggingface.co/mingyi456/CyberRealisticFlux-DF11/resolve/main/CyberRealistic_Flux_V2.5_FP16-DF11.safetensors) file and place it in a local directory of your choice.

3. To use the DFloat11 model, run the following example code in Python:

```python
import torch
from diffusers import FluxPipeline, FluxTransformer2DModel
from transformers.modeling_utils import no_init_weights
from dfloat11 import DFloat11Model

# Regex keys select transformer blocks; values list the compressed submodules
pattern_dict = {
    r"transformer_blocks\.\d+": (
        "norm1.linear",
        "norm1_context.linear",
        "attn.to_q",
        "attn.to_k",
        "attn.to_v",
        "attn.add_k_proj",
        "attn.add_v_proj",
        "attn.add_q_proj",
        "attn.to_out.0",
        "attn.to_add_out",
        "ff.net.0.proj",
        "ff.net.2",
        "ff_context.net.0.proj",
        "ff_context.net.2",
    ),
    r"single_transformer_blocks\.\d+": (
        "norm.linear",
        "proj_mlp",
        "proj_out",
        "attn.to_q",
        "attn.to_k",
        "attn.to_v",
    ),
}

# Build the transformer skeleton without initializing weights
with no_init_weights():
    transformer = FluxTransformer2DModel.from_config(
        FluxTransformer2DModel.load_config(
            "black-forest-labs/FLUX.1-dev", subfolder="transformer"
        ),
        torch_dtype=torch.bfloat16,
    ).to(torch.bfloat16)

pipe = FluxPipeline.from_pretrained(
    "black-forest-labs/FLUX.1-dev",
    transformer=transformer,
    torch_dtype=torch.bfloat16,
)

# Make sure to download the file first, and edit the filepath accordingly
DFloat11Model.from_single_file(
    "CyberRealisticFlux-DF11/CyberRealistic_Flux_V2.5_FP16-DF11.safetensors",
    device="cpu",
    bfloat16_model=pipe.transformer,
    pattern_dict=pattern_dict,
)

pipe.enable_model_cpu_offload()

prompt = "A beautiful woman with fair skin and long, messy blonde hair styled in a high ponytail with dramatic, face-framing bangs, her green eyes glinting under a cheeky, smirking expression, subtle head tilt adds playful confidence, she wears a translucent, form-fitting dress that clings tastefully to her silhouette, heavy yet refined makeup accentuating her eyes and lips, captured in a blend of long shot and medium close-up for cinematic focus, bathed in warm sunset glow casting soft golden highlights on her skin and hair, rich depth of field, high-detail realism with sensual elegance and atmospheric lighting"

image = pipe(
    prompt,
    guidance_scale=3.5,
    num_inference_steps=30,
    max_sequence_length=256,
    generator=torch.Generator("cpu").manual_seed(0),
).images[0]
image.save("CyberRealistic_Flux_V2.5_FP16-DF11.png")
```

#### ComfyUI

Refer to this [model](https://huggingface.co/mingyi456/CyberRealisticFlux-DF11-ComfyUI) instead.
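For readers wondering what `pattern_dict` does: its keys are regular expressions matched against module paths inside the transformer, and each value lists the linear submodules under a matched block whose weights are DF11-compressed. A trimmed-down sketch of that matching idea (the `matches` helper and the module paths are illustrative, not DFloat11 internals):

```python
import re

# A cut-down pattern_dict in the same shape as the full one above
pattern_dict = {
    r"transformer_blocks\.\d+": ("attn.to_q", "ff.net.2"),
    r"single_transformer_blocks\.\d+": ("proj_mlp",),
}

def matches(path):
    """Return True if `path` names a submodule covered by pattern_dict."""
    for block_re, submodules in pattern_dict.items():
        m = re.match(block_re, path)  # re.match anchors at the start of the path
        if m and path[m.end():].lstrip(".") in submodules:
            return True
    return False

# Hypothetical module paths as they might appear in the FLUX transformer
for p in ["transformer_blocks.0.attn.to_q",
          "single_transformer_blocks.7.proj_mlp",
          "x_embedder"]:
    print(p, matches(p))
```

Note that `r"transformer_blocks\.\d+"` does not match `single_transformer_blocks.7...` because `re.match` anchors at the start of the string, so the two key patterns never overlap.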
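The BF16-vs-FP16 point in the note at the top can be checked without loading any weights: a `.safetensors` file starts with an 8-byte little-endian header length followed by a JSON header that records each tensor's dtype. A stdlib-only sketch (the `safetensors_dtypes` helper is my own, not part of DFloat11 or `safetensors`):

```python
import json
import struct

def safetensors_dtypes(path):
    """Read only the JSON header of a .safetensors file and collect tensor dtypes."""
    with open(path, "rb") as f:
        header_len = struct.unpack("<Q", f.read(8))[0]  # 8-byte little-endian length
        header = json.loads(f.read(header_len))
    # "__metadata__" is an optional non-tensor entry in the header
    return {info["dtype"] for name, info in header.items() if name != "__metadata__"}

# e.g. safetensors_dtypes("CyberRealistic_Flux_V2.5_FP16.safetensors")
# should report {"BF16"} for the original checkpoint, per the note above
```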