Enhance model card: Add metadata, paper abstract, and links

#1
by nielsr HF Staff - opened
Files changed (1)
  1. README.md +100 -91
README.md CHANGED
@@ -1,92 +1,101 @@
- 
- # An efficient watermarking method for latent diffusion models via low-rank adaptation
- 
- Code for our paper "An efficient watermarking method for latent diffusion models via low-rank adaptation".
- 
- You can download the paper via: [[ArXiv]](https://arxiv.org/abs/2410.20202)
- 
- 
- ## 😀Summary
- 
- A lightweight parameter fine-tuning strategy with low-rank adaptation and dynamic loss weight adjustment enables efficient watermark embedding in large-scale models while minimizing impact on image quality and maintaining robustness.
- 
- ![image](diagram.png)
- 
- ## 🍉Requirement
- 
- ```shell
- pip install -r requirements.txt
- ```
- 
- ## 🐬Preparation
- 
- ### Clone
- 
- ```shell
- git clone https://github.com/MrDongdongLin/EW-LoRA
- ```
- 
- ### Create an anaconda environment [Optional]:
- 
- ```shell
- conda create -n ewlora python==3.8.18
- conda activate ewlora
- pip install -r requirements.txt
- ```
- 
- ### Prepare the training data:
- 
- * Download the dataset files [here](https://cocodataset.org/).
- * Extract them to the `data` folder.
- * The directory structure will be as follows:
- 
- ```shell
- coco2017
- └── train
-     ├── img1.jpg
-     ├── img2.jpg
-     └── img3.jpg
- └── test
-     ├── img4.jpg
-     ├── img5.jpg
-     └── img6.jpg
- ```
- 
- ### Usage
- 
- #### Training
- 
- ```shell
- cd ./watermarker/stable_signature
- CUDA_VISIBLE_DEVICES=0 python train_SS.py --num_keys 1 \
-     --train_dir ./Datasets/coco2017/train2017 \
-     --val_dir ./Datasets/coco2017/val2017 \
-     --ldm_config ./watermarker/stable_signature/configs/stable-diffusion/v1-inference.yaml \
-     --ldm_ckpt ../models/ldm_ckpts/sd-v1-4-full-ema.ckpt \
-     --msg_decoder_path ../models/wm_encdec/hidden/ckpts/dec_48b_whit.torchscript.pt \
-     --output_dir ./watermarker/stable_signature/outputs/ \
-     --task_name train_SS_fix_weights \
-     --do_validation \
-     --val_frep 50 \
-     --batch_size 4 \
-     --lambda_i 1.0 --lambda_w 0.2 \
-     --steps 20000 --val_size 100 \
-     --warmup_steps 20 \
-     --save_img_freq 100 \
-     --log_freq 1 --debug
- ```
- 
- ## Citation
- 
- If this work is helpful, please cite as:
- 
- ```latex
- @article{linEfficientWatermarkingMethod2024,
-   title = {An Efficient Watermarking Method for Latent Diffusion Models via Low-Rank Adaptation},
-   author = {Lin, Dongdong and Li, Yue and Tondi, Benedetta and Li, Bin and Barni, Mauro},
-   year = {2024},
-   month = oct,
-   number = {arXiv:2410.20202},
-   eprint = {2410.20202},
- }
  ```
+ ---
+ pipeline_tag: text-to-image
+ library_name: diffusers
+ license: unknown
+ ---
+ 
+ # An Efficient Watermarking Method for Latent Diffusion Models via Low-Rank Adaptation and Dynamic Loss Weighting
+ 
+ This repository hosts the model and code for the paper [An Efficient Watermarking Method for Latent Diffusion Models via Low-Rank Adaptation and Dynamic Loss Weighting](https://huggingface.co/papers/2410.20202).
+ 
+ **Abstract:**
+ The rapid proliferation of Deep Neural Networks (DNNs) is driving a surge in model watermarking technologies, as the trained models themselves constitute valuable intellectual property. Existing watermarking approaches primarily focus on modifying model parameters or altering sampling behaviors. However, with the emergence of increasingly large models, improving the efficiency of watermark embedding becomes essential to manage increasing computational demands. Prioritizing efficiency not only optimizes resource utilization, making the watermarking process more applicable to large models, but also mitigates potential degradation of model performance. In this paper, we propose an efficient watermarking method for Latent Diffusion Models (LDMs) based on Low-Rank Adaptation (LoRA). The core idea is to introduce trainable low-rank parameters into the frozen LDM to embed the watermark, thereby preserving the integrity of the original model weights. Furthermore, a dynamic loss weight scheduler is designed to adaptively balance the objectives of generative quality and watermark fidelity, enabling the model to achieve effective watermark embedding with minimal impact on the quality of the generated images. Experimental results show that the proposed method ensures fast and accurate watermark embedding and a high quality of the generated images, while maintaining a level of robustness aligned with, and in some cases superior to, state-of-the-art approaches. Moreover, the method generalizes well across different datasets and base LDMs.
+ 
+ **Code:** Find the official implementation on GitHub: [https://github.com/MrDongdongLin/EW-LoRA](https://github.com/MrDongdongLin/EW-LoRA)
+ 
+ You can download the paper via: [[ArXiv]](https://arxiv.org/abs/2410.20202)
+ 
+ ## 😀Summary
+ 
+ A lightweight parameter fine-tuning strategy with low-rank adaptation and dynamic loss weight adjustment enables efficient watermark embedding in large-scale models while minimizing the impact on image quality and maintaining robustness.
+ 
+ ![image](https://github.com/MrDongdongLin/EW-LoRA/raw/main/diagram.png)
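The low-rank adaptation idea can be sketched in a few lines. This is a minimal illustration of a LoRA-style update on a single linear layer, not code from the repository; all names and shapes here are our own:

```python
import numpy as np

def lora_forward(x, W, A, B, alpha=1.0):
    """Linear layer with a LoRA update.

    The frozen weight W (d_out x d_in) stays untouched; only the low-rank
    factors B (d_out x r) and A (r x d_in) are trained, adding the
    watermark-carrying update alpha * B @ A on top of the base weight.
    """
    return x @ (W + alpha * B @ A).T

rng = np.random.default_rng(0)
d_in, d_out, r = 8, 4, 2
W = rng.standard_normal((d_out, d_in))   # frozen base weight
A = rng.standard_normal((r, d_in)) * 0.01  # small random init
B = np.zeros((d_out, r))                 # zero init: no change before training
x = rng.standard_normal((1, d_in))

# With B initialized to zero, the adapted layer reproduces the frozen one.
assert np.allclose(lora_forward(x, W, A, B), x @ W.T)
```

Because only `A` and `B` (rank `r`, far fewer parameters than `W`) receive gradients, fine-tuning is cheap and the original model weights remain intact.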
+ 
+ ## 🍉Requirements
+ 
+ ```shell
+ pip install -r requirements.txt
+ ```
+ 
+ ## 🐬Preparation
+ 
+ ### Clone
+ 
+ ```shell
+ git clone https://github.com/MrDongdongLin/EW-LoRA
+ ```
+ 
+ ### Create a conda environment (optional):
+ 
+ ```shell
+ conda create -n ewlora python==3.8.18
+ conda activate ewlora
+ pip install -r requirements.txt
+ ```
+ 
+ ### Prepare the training data:
+ 
+ * Download the dataset files [here](https://cocodataset.org/).
+ * Extract them to the `data` folder.
+ * The directory structure should be as follows:
+ 
+ ```shell
+ coco2017
+ ├── train
+ │   ├── img1.jpg
+ │   ├── img2.jpg
+ │   └── img3.jpg
+ └── test
+     ├── img4.jpg
+     ├── img5.jpg
+     └── img6.jpg
+ ```
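Before launching training, it can help to confirm the extracted dataset matches the layout above. The helper below is a hypothetical convenience, not part of the repository:

```python
import os

def missing_dataset_dirs(root):
    """Return the expected split directories that are absent under `root`."""
    expected = ("train", "test")
    return [d for d in expected if not os.path.isdir(os.path.join(root, d))]

# Example: report anything missing before launching training.
problems = missing_dataset_dirs("coco2017")
if problems:
    print("missing splits:", problems)
```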
+ 
+ ### Usage
+ 
+ #### Training
+ 
+ ```shell
+ cd ./watermarker/stable_signature
+ CUDA_VISIBLE_DEVICES=0 python train_SS.py --num_keys 1 \
+     --train_dir ./Datasets/coco2017/train2017 \
+     --val_dir ./Datasets/coco2017/val2017 \
+     --ldm_config ./watermarker/stable_signature/configs/stable-diffusion/v1-inference.yaml \
+     --ldm_ckpt ../models/ldm_ckpts/sd-v1-4-full-ema.ckpt \
+     --msg_decoder_path ../models/wm_encdec/hidden/ckpts/dec_48b_whit.torchscript.pt \
+     --output_dir ./watermarker/stable_signature/outputs/ \
+     --task_name train_SS_fix_weights \
+     --do_validation \
+     --val_frep 50 \
+     --batch_size 4 \
+     --lambda_i 1.0 --lambda_w 0.2 \
+     --steps 20000 --val_size 100 \
+     --warmup_steps 20 \
+     --save_img_freq 100 \
+     --log_freq 1 --debug
+ ```
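The `--lambda_i`, `--lambda_w`, and `--warmup_steps` flags suggest a weighted two-term objective with a warmup phase. The sketch below shows the simplest variant of that idea; the actual scheduler in `train_SS.py` adapts the weights dynamically and may differ from this:

```python
def watermark_weight(step, lambda_w=0.2, warmup_steps=20):
    """Linearly ramp the watermark-loss weight over the warmup phase.

    Illustrative only: the paper's dynamic scheduler adaptively balances
    image quality against watermark fidelity during training; this is the
    plain linear-warmup version of that idea.
    """
    if step >= warmup_steps:
        return lambda_w
    return lambda_w * step / warmup_steps

def total_loss(loss_img, loss_w, step, lambda_i=1.0, lambda_w=0.2, warmup_steps=20):
    """Combine image-quality and watermark losses with the scheduled weight."""
    return lambda_i * loss_img + watermark_weight(step, lambda_w, warmup_steps) * loss_w

# At step 0 only the image loss contributes; after warmup both terms do.
assert total_loss(1.0, 1.0, step=0) == 1.0
assert abs(total_loss(1.0, 1.0, step=20) - 1.2) < 1e-9
```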
+ 
+ ## Citation
+ 
+ If this work is helpful, please cite it as:
+ 
+ ```bibtex
+ @article{linEfficientWatermarkingMethod2024,
+   title = {An Efficient Watermarking Method for Latent Diffusion Models via Low-Rank Adaptation},
+   author = {Lin, Dongdong and Li, Yue and Tondi, Benedetta and Li, Bin and Barni, Mauro},
+   year = {2024},
+   month = oct,
+   number = {arXiv:2410.20202},
+   eprint = {2410.20202},
+ }
  ```