---
license: other
license_name: nvidia-open-model-license
license_link: >-
  https://www.nvidia.com/en-us/agreements/enterprise-software/nvidia-open-model-license
language:
- en
pipeline_tag: text-to-image
tags:
- comfyui
- diffusion-single-file
base_model:
- nvidia/Cosmos-Predict2-14B-Text2Image
base_model_relation: quantized
---
For more information (including how to compress models yourself), check out https://huggingface.co/DFloat11 and https://github.com/LeanModels/DFloat11

Feel free to request compression of other models as well, although models whose architectures I am unfamiliar with might be tricky for me to support.

### How to Use

#### ComfyUI
Install my fork of the DF11 ComfyUI custom node: https://github.com/mingyi456/ComfyUI-DFloat11-Extended. After installing it, use the provided workflow [json](cosmos_predict2_14B_t2i-DF11-workflow.json), or simply replace the "Load Diffusion Model" node of an existing Cosmos Predict2 workflow with the "DFloat11 Model Loader" node. The workflow is also embedded in the [png](cosmos_predict2_14B_t2i-DF11-workflow.png) image below. If you run into any issues, feel free to leave a comment.

![](cosmos_predict2_14B_t2i-DF11-workflow.png)
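For reference, installing a ComfyUI custom node typically means cloning it into the `custom_nodes` folder. A sketch for a default ComfyUI layout (paths and the CUDA extra are assumptions; adjust them for your install):

```shell
# Assumed default ComfyUI directory layout; adjust paths for your setup.
cd ComfyUI/custom_nodes
git clone https://github.com/mingyi456/ComfyUI-DFloat11-Extended
# The DFloat11 runtime is distributed on PyPI; install it into the same
# Python environment that ComfyUI runs in (CUDA 12 build shown).
pip install "dfloat11[cuda12]"
```

Restart ComfyUI afterwards so the new node is picked up.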

#### `diffusers`
Refer to this [model](https://huggingface.co/mingyi456/Cosmos-Predict2-14B-Text2Image-DF11) instead.

### Compression Details

This is the `pattern_dict` for compression:

```python
pattern_dict_comfyui = {
    r"t_embedder\.1": (
        "linear_1",
        "linear_2",
    ),
    r"blocks\.\d+": (
        "self_attn.q_proj",
        "self_attn.k_proj",
        "self_attn.v_proj",
        "self_attn.output_proj",
        "cross_attn.q_proj",
        "cross_attn.k_proj",
        "cross_attn.v_proj",
        "cross_attn.output_proj",
        "mlp.layer1",
        "mlp.layer2",
        "adaln_modulation_self_attn.1",
        "adaln_modulation_self_attn.2",
        "adaln_modulation_cross_attn.1",
        "adaln_modulation_cross_attn.2",
        "adaln_modulation_mlp.1",
        "adaln_modulation_mlp.2",
    ),
}
```
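For intuition, the dictionary keys are regular expressions matched against module paths in the transformer, and each tuple lists the linear submodules under a matching module that get DF11-compressed. A minimal, hypothetical sketch of that selection logic (not the actual DFloat11 implementation; `matching_submodules` and full-path matching are assumptions for illustration):

```python
import re

# Abbreviated copy of the pattern dict above, for demonstration only.
pattern_dict_comfyui = {
    r"t_embedder\.1": ("linear_1", "linear_2"),
    r"blocks\.\d+": ("self_attn.q_proj", "mlp.layer1"),
}

def matching_submodules(module_path):
    """Return the submodule names to compress under the given module path."""
    for pattern, submodules in pattern_dict_comfyui.items():
        if re.fullmatch(pattern, module_path):
            return submodules
    return ()

print(matching_submodules("blocks.27"))     # every transformer block matches
print(matching_submodules("t_embedder.1"))  # the timestep embedder matches
print(matching_submodules("final_layer"))   # anything else is left uncompressed
```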