---
license: apache-2.0
---

<div align="center">

## Controllable Layer Decomposition for Reversible Multi-Layer Image Generation

🏠 [Homepage](https://monkek123King.github.io/CLD_page) &nbsp;&nbsp;&nbsp;&nbsp; 📄 [Paper](http://arxiv.org/abs/2511.16249) &nbsp;&nbsp;&nbsp;&nbsp; 🤗 [HuggingFace](https://huggingface.co/papers/2511.16249)

</div>

### 📢 News

* **`Dec 2025`:** Experiment checkpoints are released [here](https://huggingface.co/thuteam/CLD)! 🎉
* **`Nov 2025`:** The paper is now available on [arXiv](https://arxiv.org/abs/2511.16249). ☕️

-----

## 🚀 Getting Started

### 🔧 Installation

**a. Clone CLD.**
```shell
git clone https://github.com/monkek123King/CLD.git
cd CLD
```

**b. Create a conda virtual environment from the repository's `environment.yml` and activate it.**
```shell
conda env create -f environment.yml
conda activate CLD
```

### 📦 Prepare model checkpoints
**a. Download the FLUX.1-dev weights.**
```python
from huggingface_hub import snapshot_download

repo_id = "black-forest-labs/FLUX.1-dev"
snapshot_download(repo_id, local_dir="Path_to_pretrained_FLUX_model")  # replace with your local path
```

**b. Download the pre-trained adapter weights.**
```python
from huggingface_hub import snapshot_download

repo_id = "alimama-creative/FLUX.1-dev-Controlnet-Inpainting-Alpha"
snapshot_download(repo_id, local_dir="Path_to_pretrained_FLUX_adapter")  # replace with your local path
```

**c. Download the CLD LoRA weights from https://huggingface.co/thuteam/CLD and arrange them as follows.**
```
ckpt
├── decouple_LoRA
│   ├── adapter
│   │   └── pytorch_lora_weights.safetensors
│   ├── layer_pe.pth
│   └── transformer
│       └── pytorch_lora_weights.safetensors
├── pre_trained_LoRA
│   └── pytorch_lora_weights.safetensors
├── prism_ft_LoRA
│   └── pytorch_lora_weights.safetensors
└── trans_vae
    └── 0008000.pt
```

**d. Edit the YAML configuration file so each entry points at the paths prepared above.**
```yaml
pretrained_model_name_or_path: Path_to_pretrained_FLUX_model
pretrained_adapter_path: Path_to_pretrained_FLUX_adapter
transp_vae_path: "ckpt/trans_vae/0008000.pt"
pretrained_lora_dir: "ckpt/pre_trained_LoRA"
artplus_lora_dir: "ckpt/prism_ft_LoRA"
lora_ckpt: "ckpt/decouple_LoRA/transformer"
layer_ckpt: "ckpt/decouple_LoRA"
adapter_lora_dir: "ckpt/decouple_LoRA/adapter"
```
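
Because the config is a flat key-to-path mapping, a few lines of stdlib Python can catch a misspelled key before a long run. This checker is our own sketch, not part of the repository, and it handles only flat `key: value` lines (no nested YAML):

```python
# Keys the CLD config above is expected to define.
REQUIRED_KEYS = {
    "pretrained_model_name_or_path", "pretrained_adapter_path",
    "transp_vae_path", "pretrained_lora_dir", "artplus_lora_dir",
    "lora_ckpt", "layer_ckpt", "adapter_lora_dir",
}

def parse_flat_yaml(text):
    """Parse flat `key: value` lines into a dict; comments and blanks are skipped."""
    cfg = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue
        key, _, value = line.partition(":")
        cfg[key.strip()] = value.strip().strip('"')
    return cfg

def check_config(cfg):
    """Return the required keys missing from the parsed config, sorted."""
    return sorted(REQUIRED_KEYS - cfg.keys())
```

For real nested YAML, `yaml.safe_load` from PyYAML is the usual choice; the manual parser here just keeps the sketch dependency-free.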


### 🏋️ Train and Evaluate

**Train**

```shell
python -m train.train -c train/train.yaml
```

**Infer**

```shell
python -m infer.infer -c infer/infer.yaml
```

**Eval**

Prepare the ground-truth samples.
```shell
python -m eval.prepare_gt
```

Evaluate to obtain the metric results.
```shell
python evaluate.py --pred-dir "Path_to_predict_results" --gt-dir "Path_to_gt_samples" --output-dir "Path_to_save_eval_results"
```

-----

## ✍️ Citation

If you find our work useful for your research, please consider citing our paper and giving this repository a star 🌟.

```bibtex
@article{liu2025controllable,
  title={Controllable Layer Decomposition for Reversible Multi-Layer Image Generation},
  author={Liu, Zihao and Xu, Zunnan and Shu, Shi and Zhou, Jun and Zhang, Ruicheng and Tang, Zhenchao and Li, Xiu},
  journal={arXiv preprint arXiv:2511.16249},
  year={2025}
}
```