---
base_model:
- Qwen/Qwen-Image-Edit-2511
base_model_relation: quantized
tags:
- dfloat11
- df11
- lossless compression
- 70% size, 100% accuracy
---

# DFloat11 Compressed Model: `Qwen/Qwen-Image-Edit-2511`

This is a **DFloat11 losslessly compressed** version of the original `Qwen/Qwen-Image-Edit-2511` model. It reduces model size by **32%** compared to the original BFloat16 model, while maintaining **bit-identical outputs** and supporting **efficient GPU inference**.

🔥🔥🔥 Thanks to DFloat11 compression, Qwen-Image-Edit-2511 can now run on **a single 32GB GPU**, or on **a single 24GB GPU with CPU offloading**, while maintaining full model quality. 🔥🔥🔥
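
Not sure which of the two modes your hardware needs? A quick check of total VRAM (a minimal sketch using only standard `torch.cuda` calls, nothing DFloat11-specific) tells you whether the `--cpu_offload` flag used in the usage commands below is required:

```python
import torch

# Report total VRAM of the first GPU; the thresholds mirror the claims above:
# ~32 GB for plain inference, ~24 GB with CPU offloading.
props = torch.cuda.get_device_properties(0)
total_gb = props.total_memory / (1000 ** 3)
print(f"GPU 0: {props.name}, {total_gb:.1f} GB total VRAM")
print("Plain inference should fit." if total_gb >= 32 else "Consider running with --cpu_offload.")
```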

### Qwen-Image-Edit-2511

Thanks to the DFloat11 team! I have reconverted the weights for the 2511 model using the DFloat11 package and done minimal testing, which produced valid images. The rest of this README is taken from https://huggingface.co/DFloat11/Qwen-Image-Edit-2509-DF11/blob/main/README.md

### 📊 Performance Comparison

(Note: these numbers are pulled from the DFloat11/Qwen-Image-Edit-2509 model card and have not been re-validated for the 2511 version.)

| Model                           | Model Size | Peak GPU Memory (1024x1024 image generation) | Image Editing Time (A100 GPU) |
|---------------------------------|------------|----------------------------------------------|-------------------------------|
| Qwen-Image-Edit-2511 (BFloat16) | ~41 GB     | OOM                                          | -                             |
| Qwen-Image-Edit-2511 (DFloat11) | 28.43 GB   | 30.20 GB                                     | 102 seconds                   |

### 🔧 How to Use

1. Install or upgrade the DFloat11 pip package *(installs the CUDA kernel automatically; requires a CUDA-compatible GPU and PyTorch installed)*:

```bash
pip install -U dfloat11[cuda12]
```

2. Install or upgrade diffusers:

```bash
pip install git+https://github.com/huggingface/diffusers
```
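
Optionally, sanity-check both installs with a one-liner (the imports match those used by the script in the next step):

```bash
python -c "from dfloat11 import DFloat11Model; from diffusers import QwenImageEditPlusPipeline; print('imports OK')"
```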

3. Save the following code to a Python file `qwen_image_edit.py`:

```python
import os
import torch
import argparse
from diffusers import QwenImageEditPlusPipeline
from diffusers.utils import load_image
from dfloat11 import DFloat11Model

parser = argparse.ArgumentParser(description="Qwen Image Edit with DFloat11")
parser.add_argument("--image", default="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/diffusers/cat.png", help="Image URL or path")
parser.add_argument("--prompt", default="Make this cat an astronaut gazing at planet earth from space", help="Edit prompt")
parser.add_argument("--output", default="qwen_image_edit_output.png", help="Output image path")
parser.add_argument("--steps", type=int, default=40, help="Number of inference steps")
parser.add_argument("--seed", type=int, default=42, help="Random seed")
parser.add_argument("--true_cfg_scale", type=float, default=4.0, help="True CFG scale")
parser.add_argument("--negative_prompt", default=" ", help="Negative prompt")
parser.add_argument("--guidance_scale", type=float, default=1.0, help="Guidance scale")
parser.add_argument("--cpu_offload", action="store_true", help="Enable CPU offloading")
parser.add_argument("--cpu_offload_blocks", type=int, default=20, help="Number of blocks to offload to CPU for block swapping")
parser.add_argument("--cpu_offload_no_pin_memory", action="store_true", help="Disable memory pinning for CPU offloading")
args = parser.parse_args()

# Load the pipeline structure in BFloat16, then swap the transformer's weights
# for the DFloat11 compressed version (decompressed on the fly on the GPU).
pipeline = QwenImageEditPlusPipeline.from_pretrained("Qwen/Qwen-Image-Edit-2511", torch_dtype=torch.bfloat16)
DFloat11Model.from_pretrained(
    "DFloat11/Qwen-Image-Edit-2511-DF11",
    bfloat16_model=pipeline.transformer,
    device="cpu",
    cpu_offload=args.cpu_offload,
    cpu_offload_blocks=args.cpu_offload_blocks,
    pin_memory=not args.cpu_offload_no_pin_memory,
)
pipeline.enable_model_cpu_offload()

# Run the edit and report peak GPU memory usage.
image = load_image(args.image)
inputs = {
    "image": [image],
    "prompt": args.prompt,
    "generator": torch.manual_seed(args.seed),
    "true_cfg_scale": args.true_cfg_scale,
    "negative_prompt": args.negative_prompt,
    "num_inference_steps": args.steps,
    "guidance_scale": args.guidance_scale,
    "num_images_per_prompt": 1,
}
with torch.inference_mode():
    output = pipeline(**inputs)
    output_image = output.images[0]
    output_image.save(args.output)
    print("Image saved at", os.path.abspath(args.output))

max_memory = torch.cuda.max_memory_allocated()
print(f"Max memory: {max_memory / (1000 ** 3):.2f} GB")
```

4. To run without CPU offloading (32GB VRAM required):
```bash
python qwen_image_edit.py
```

To run with CPU offloading (24GB VRAM required):
```bash
python qwen_image_edit.py --cpu_offload
```

If you are getting out-of-CPU-memory errors, try limiting the number of offloaded blocks or disabling memory pinning:
```bash
# Offload only 16 blocks (offloading more blocks uses less GPU memory and more CPU memory; offloading fewer blocks is faster):
python qwen_image_edit.py --cpu_offload --cpu_offload_blocks 16

# Disable memory pinning (the most CPU-memory-friendly option, but can be slower):
python qwen_image_edit.py --cpu_offload --cpu_offload_no_pin_memory
```
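
Every generation setting is exposed as a command-line flag (see the `argparse` block in the script), so you can point it at your own inputs; for example (the file name and prompt here are placeholders):

```bash
python qwen_image_edit.py --image my_photo.png --prompt "Turn the sky into a dramatic sunset" --steps 30 --seed 7
```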

### 🔍 How It Works

We apply **Huffman coding** to losslessly compress the exponent bits of BFloat16 model weights, which are highly compressible (their 8 bits carry only ~2.6 bits of actual information). To enable fast inference, we implement a highly efficient CUDA kernel that performs on-the-fly weight decompression directly on the GPU.
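
To see why the exponent bits compress so well, here is a small self-contained sketch (not part of the DFloat11 package; random normal weights stand in for real model weights, whose exponent entropy the paper reports as ~2.6 bits) that measures the Shannon entropy of the BFloat16 exponent field:

```python
import torch

# BFloat16 bit layout: 1 sign bit, 8 exponent bits, 7 mantissa bits.
weights = torch.randn(1_000_000, dtype=torch.bfloat16)     # stand-in for real weights
bits = weights.view(torch.int16).to(torch.int32) & 0xFFFF  # raw 16-bit patterns
exponents = (bits >> 7) & 0xFF                             # extract the 8 exponent bits

# Shannon entropy of the empirical exponent distribution.
counts = torch.bincount(exponents, minlength=256).float()
probs = counts[counts > 0] / counts.sum()
entropy = -(probs * probs.log2()).sum().item()
print(f"Exponent entropy: {entropy:.2f} of 8 bits")  # well below 8 => Huffman-compressible
```

Replacing the 8 exponent bits with variable-length Huffman codes of roughly 3 bits on average shrinks each 16-bit weight to about 11 bits, which is where the name DFloat11 and the ~32% size reduction come from.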

The result is a model that is **~32% smaller**, delivers **bit-identical outputs**, and achieves performance **comparable to the original** BFloat16 model.

Learn more in the [research paper](https://arxiv.org/abs/2504.11651).

### 📄 Learn More

* **Paper**: [70% Size, 100% Accuracy: Lossless LLM Compression for Efficient GPU Inference via Dynamic-Length Float](https://arxiv.org/abs/2504.11651)
* **GitHub**: [https://github.com/LeanModels/DFloat11](https://github.com/LeanModels/DFloat11)
* **HuggingFace**: [https://huggingface.co/DFloat11](https://huggingface.co/DFloat11)