---
license: apache-2.0
---

<div align="center">

## Controllable Layer Decomposition for Reversible Multi-Layer Image Generation

🏠 [Homepage](https://monkek123King.github.io/CLD_page) &nbsp;&nbsp;&nbsp;&nbsp; 📄 [Paper](http://arxiv.org/abs/2511.16249) &nbsp;&nbsp;&nbsp;&nbsp; 🤗 [HuggingFace](https://huggingface.co/papers/2511.16249)


</div>


### 📢 News

  * **`Dec 2025`:** Experiment checkpoints are released [here](https://huggingface.co/thuteam/CLD)! 🎉
  * **`Nov 2025`:** The paper is now available on [arXiv](https://arxiv.org/abs/2511.16249). ☕️

-----

## 🚀 Getting Started

### 🔧 Installation

**a. Clone CLD.**
```shell
git clone https://github.com/monkek123King/CLD.git
cd CLD
```

**b. Create a conda virtual environment and activate it.**
```shell
conda env create -f environment.yml
conda activate CLD
```

### 📦 Prepare model checkpoints
**a. Download FLUX.1-dev weights**
```python
from huggingface_hub import snapshot_download

# FLUX.1-dev is a gated repository; you may need to accept its license on
# Hugging Face and run `huggingface-cli login` first.
repo_id = "black-forest-labs/FLUX.1-dev"
snapshot_download(repo_id, local_dir="Path_to_pretrained_FLUX_model")  # replace with your local path
```

**b. Download adapter pre-trained weights**
```python
from huggingface_hub import snapshot_download

repo_id = "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Alpha"
snapshot_download(repo_id, local_dir="Path_to_pretrained_FLUX_adapter")  # replace with your local path
```

**c. Download LoRA weights for CLD from https://huggingface.co/thuteam/CLD**
```
ckpt
├── decouple_LoRA
│   ├── adapter
│   │   └── pytorch_lora_weights.safetensors
│   ├── layer_pe.pth
│   └── transformer
│       └── pytorch_lora_weights.safetensors
├── pre_trained_LoRA
│   └── pytorch_lora_weights.safetensors
├── prism_ft_LoRA
│   └── pytorch_lora_weights.safetensors
└── trans_vae
    └── 0008000.pt
```
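
To fetch these weights, a minimal sketch using `snapshot_download` (an assumption that the `thuteam/CLD` repository mirrors the layout above; adjust `local_dir` if your config points elsewhere):

```python
from huggingface_hub import snapshot_download

# Download the CLD LoRA and VAE checkpoints into a local `ckpt` directory.
repo_id = "thuteam/CLD"
snapshot_download(repo_id, local_dir="ckpt")
```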

**d. YAML configuration file**

Fill in the placeholder paths below and pass the file to the training and inference scripts via the `-c` flag (see the commands in the next section).
```yaml
pretrained_model_name_or_path: "Path_to_pretrained_FLUX_model"  # replace with your local path
pretrained_adapter_path: "Path_to_pretrained_FLUX_adapter"      # replace with your local path
transp_vae_path: "ckpt/trans_vae/0008000.pt"
pretrained_lora_dir: "ckpt/pre_trained_LoRA"
artplus_lora_dir: "ckpt/prism_ft_LoRA"
lora_ckpt: "ckpt/decouple_LoRA/transformer"
layer_ckpt: "ckpt/decouple_LoRA"
adapter_lora_dir: "ckpt/decouple_LoRA/adapter"
```
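
As a quick sanity check, a minimal sketch (assuming the scripts read the configuration with PyYAML; `train/train.yaml` is the config path used in the training command below) that verifies every checkpoint path in the config exists:

```python
import os
import yaml  # PyYAML

# Hypothetical helper, not part of the CLD codebase: check that every
# path referenced by the config exists before launching training.
with open("train/train.yaml") as f:
    cfg = yaml.safe_load(f)

for key in ("pretrained_model_name_or_path", "pretrained_adapter_path",
            "transp_vae_path", "pretrained_lora_dir", "artplus_lora_dir",
            "lora_ckpt", "layer_ckpt", "adapter_lora_dir"):
    path = cfg[key]
    print(f"{key}: {path} -> {'ok' if os.path.exists(path) else 'MISSING'}")
```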


### 🏋️ Train and Evaluate

**Train**

```shell
python -m train.train -c train/train.yaml
```

**Infer**
```shell
python -m infer.infer -c infer/infer.yaml
```

**Eval**

Prepare the ground-truth samples.
```shell
python -m eval.prepare_gt
```

Evaluate to obtain the metric results.
```shell
python evaluate.py --pred-dir "Path_to_predict_results" --gt-dir "Path_to_gt_samples" --output-dir "Path_to_save_eval_results"
```

-----

## ✍️ Citation

If you find our work useful for your research, please consider citing our paper and giving this repository a star 🌟.

```bibtex
@article{liu2025controllable,
  title={Controllable Layer Decomposition for Reversible Multi-Layer Image Generation},
  author={Liu, Zihao and Xu, Zunnan and Shu, Shi and Zhou, Jun and Zhang, Ruicheng and Tang, Zhenchao and Li, Xiu},
  journal={arXiv preprint arXiv:2511.16249},
  year={2025}
}
```