Enhance model card: Add metadata, paper abstract, and links
#1 by nielsr (HF Staff) - opened

README.md CHANGED
---
pipeline_tag: text-to-image
library_name: diffusers
license: unknown
---

# An Efficient Watermarking Method for Latent Diffusion Models via Low-Rank Adaptation and Dynamic Loss Weighting

This repository hosts the model and code for the paper [An Efficient Watermarking Method for Latent Diffusion Models via Low-Rank Adaptation and Dynamic Loss Weighting](https://huggingface.co/papers/2410.20202).

**Abstract:**
The rapid proliferation of Deep Neural Networks (DNNs) is driving a surge in model watermarking technologies, as the trained models themselves constitute valuable intellectual property. Existing watermarking approaches primarily focus on modifying model parameters or altering sampling behaviors. However, with the emergence of increasingly large models, improving the efficiency of watermark embedding becomes essential to manage increasing computational demands. Prioritizing efficiency not only optimizes resource utilization, making the watermarking process more applicable for large models, but also mitigates potential degradation of model performance. In this paper, we propose an efficient watermarking method for Latent Diffusion Models (LDMs) based on Low-Rank Adaptation (LoRA). The core idea is to introduce trainable low-rank parameters into the frozen LDM to embed the watermark, thereby preserving the integrity of the original model weights. Furthermore, a dynamic loss weight scheduler is designed to adaptively balance the objectives of generative quality and watermark fidelity, enabling the model to achieve effective watermark embedding with minimal impact on the quality of the generated images. Experimental results show that the proposed method ensures fast and accurate watermark embedding and a high quality of the generated images, while maintaining a level of robustness aligned with, and in some cases superior to, state-of-the-art approaches. Moreover, the method generalizes well across different datasets and base LDMs.

**Code:** Find the official implementation on GitHub: [https://github.com/MrDongdongLin/EW-LoRA](https://github.com/MrDongdongLin/EW-LoRA)

**Paper:** [[arXiv]](https://arxiv.org/abs/2410.20202)

## 😀 Summary

A lightweight parameter fine-tuning strategy with low-rank adaptation and dynamic loss weight adjustment enables efficient watermark embedding in large-scale models while minimizing the impact on image quality and maintaining robustness.

![figure1](figures/framework.jpg)
## 🍉 Requirements

```shell
pip install -r requirements.txt
```

## 🐬 Preparation

### Clone

```shell
git clone https://github.com/MrDongdongLin/EW-LoRA
```

### Create an Anaconda environment (optional)

```shell
conda create -n ewlora python==3.8.18
conda activate ewlora
pip install -r requirements.txt
```

### Prepare the training data

* Download the dataset files [here](https://cocodataset.org/).
* Extract them to the `data` folder.
* The directory structure should look as follows:

```shell
coco2017
├── train
│   ├── img1.jpg
│   ├── img2.jpg
│   └── img3.jpg
└── test
    ├── img4.jpg
    ├── img5.jpg
    └── img6.jpg
```
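After extracting, a quick sanity check that each split actually contains images can save a failed run later. A small helper (illustrative; the `coco2017` root and split names follow the tree above):

```python
from pathlib import Path

def count_images(root="coco2017", splits=("train", "test")):
    """Count .jpg files per split; every count should be non-zero before training."""
    base = Path(root)
    return {split: len(list((base / split).glob("*.jpg"))) for split in splits}
```

For COCO 2017, `count_images()` should report on the order of a hundred thousand training images; a zero for any split usually means the archive was extracted to the wrong folder.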
### Usage

#### Training

```shell
cd ./watermarker/stable_signature
CUDA_VISIBLE_DEVICES=0 python train_SS.py --num_keys 1 \
  --train_dir ./Datasets/coco2017/train2017 \
  --val_dir ./Datasets/coco2017/val2017 \
  --ldm_config ./watermarker/stable_signature/configs/stable-diffusion/v1-inference.yaml \
  --ldm_ckpt ../models/ldm_ckpts/sd-v1-4-full-ema.ckpt \
  --msg_decoder_path ../models/wm_encdec/hidden/ckpts/dec_48b_whit.torchscript.pt \
  --output_dir ./watermarker/stable_signature/outputs/ \
  --task_name train_SS_fix_weights \
  --do_validation \
  --val_frep 50 \
  --batch_size 4 \
  --lambda_i 1.0 --lambda_w 0.2 \
  --steps 20000 --val_size 100 \
  --warmup_steps 20 \
  --save_img_freq 100 \
  --log_freq 1 --debug
```
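The `--lambda_i` and `--lambda_w` flags above set the relative weights of the image-quality loss and the watermark loss, and `--warmup_steps` suggests the balance is eased in over training. A hypothetical sketch of such a dynamic weighting rule, for intuition only (the actual scheduler lives in `train_SS.py` and may differ):

```python
def loss_weights(step, warmup_steps=20, lambda_i=1.0, lambda_w=0.2):
    """Illustrative dynamic weighting: ramp the watermark weight linearly over
    a warmup phase so early training prioritizes generative quality.
    Hypothetical sketch; not the scheduler actually used in train_SS.py."""
    ramp = min(1.0, step / warmup_steps)  # linear warmup in [0, 1]
    return lambda_i, lambda_w * ramp

def total_loss(image_loss, watermark_loss, step):
    # Combined objective: weighted sum of the two competing losses.
    w_i, w_w = loss_weights(step)
    return w_i * image_loss + w_w * watermark_loss
```

At step 0 the watermark term contributes nothing; once warmup ends, the configured `lambda_w` applies in full, so the two objectives are traded off adaptively rather than fixed for the whole run.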
## Citation

If this work is helpful, please cite it as:

```latex
@article{linEfficientWatermarkingMethod2024,
  title  = {An Efficient Watermarking Method for Latent Diffusion Models via Low-Rank Adaptation and Dynamic Loss Weighting},
  author = {Lin, Dongdong and Li, Yue and Tondi, Benedetta and Li, Bin and Barni, Mauro},
  year   = {2024},
  month  = oct,
  number = {arXiv:2410.20202},
  eprint = {2410.20202},
}
```