---
license: other
license_name: circlestone-labs-non-commercial-license
license_link: https://huggingface.co/circlestone-labs/Anima/blob/main/LICENSE.md
language:
- en
pipeline_tag: text-to-image
tags:
- comfyui
- diffusion-single-file
base_model:
- circlestone-labs/Anima
base_model_relation: quantized
---
For more information (including how to compress models yourself), check out https://huggingface.co/DFloat11 and https://github.com/LeanModels/DFloat11

Feel free to request other models for compression as well, although models whose architecture I am unfamiliar with might be slightly tricky for me.

### How to Use

#### ComfyUI
Install the ComfyUI DFloat11 Extended node via the ComfyUI Manager. After installing, use the provided workflow [json](anima-preview-DF11-workflow.json), or simply replace the "Load Diffusion Model" node of an existing workflow with the extension's DFloat11 model-loading node. If you run into any issues, feel free to leave a comment. The workflow is also embedded in the [png](anima-preview-DF11-workflow.png) image below.

![](anima-preview-DF11-workflow.png)

#### `diffusers`
As far as I know, this model is not implemented in `diffusers` yet.

### Compression Details

This is the `pattern_dict` for compressing Anima-based models in ComfyUI:

```python
pattern_dict_comfyui = {
    r"t_embedder\.1" : (
        "linear_1",
        "linear_2",
    ),
    r"blocks\.\d+" : (
        "self_attn.q_proj",
        "self_attn.k_proj",
        "self_attn.v_proj",
        "self_attn.output_proj",
        "cross_attn.q_proj",
        "cross_attn.k_proj",
        "cross_attn.v_proj",
        "cross_attn.output_proj",
        "mlp.layer1",
        "mlp.layer2",
        "adaln_modulation_self_attn.1",
        "adaln_modulation_self_attn.2",
        "adaln_modulation_cross_attn.1",
        "adaln_modulation_cross_attn.2",
        "adaln_modulation_mlp.1",
        "adaln_modulation_mlp.2",
    ),
    r"llm_adapter\.embed": [],
    
    r"llm_adapter\.blocks\.\d+" : (
        "self_attn.q_proj",
        "self_attn.k_proj",
        "self_attn.v_proj",
        "self_attn.o_proj",
        "cross_attn.q_proj",
        "cross_attn.k_proj",
        "cross_attn.v_proj",
        "cross_attn.o_proj",
        "mlp.0",
        "mlp.2",
    ),
}
```
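To illustrate how a `pattern_dict` like the one above is read, here is a small hypothetical sketch (not the DFloat11 implementation): each regex key matches a module-path prefix, and the entries under it name the linear submodules beneath that prefix to compress. The `matching_submodules` helper and the trimmed dictionary are illustrative assumptions.

```python
import re

# Trimmed-down version of the pattern_dict above, for illustration only.
pattern_dict_sketch = {
    r"blocks\.\d+": (
        "self_attn.q_proj",
        "mlp.layer1",
    ),
    r"llm_adapter\.embed": (),
}

def matching_submodules(param_path, pattern_dict):
    """Return (matched prefix, submodule name) if param_path is selected
    by some pattern, else None."""
    for pattern, submodules in pattern_dict.items():
        m = re.match(pattern, param_path)
        if m:
            prefix = m.group(0)
            rest = param_path[len(prefix):].lstrip(".")
            if rest in submodules:
                return (prefix, rest)
    return None

# The regex r"blocks\.\d+" matches any block index, so one entry
# covers every transformer block in the model.
print(matching_submodules("blocks.12.self_attn.q_proj", pattern_dict_sketch))
# -> ('blocks.12', 'self_attn.q_proj')
```

This is why a single `blocks\.\d+` entry in the real `pattern_dict` suffices for all of the model's transformer blocks.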