Improve model card: Add pipeline tag, paper/code links, and usage examples
#1, opened by nielsr (HF Staff)

README.md CHANGED
---
license: mit
pipeline_tag: text-to-image
---

# 🛡️DAA: Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models

This repository contains the official implementation of the paper [Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models](https://huggingface.co/papers/2504.20518).

This study introduces a novel backdoor detection perspective, **Dynamic Attention Analysis (DAA)**, and shows that the **dynamic features of attention maps** can serve as a much better indicator for backdoor detection.

## 👀 Overview

<div align=center>
<img src='https://github.com/Robin-WZQ/DAA/blob/main/viz/Overview.png' width=800>
</div>

The overview of our Dynamic Attention Analysis (DAA). **(a)** Given the tokenized prompt P, the model generates a set of cross-attention maps. **(b)** We propose two methods to quantify the dynamic features of cross-attention maps, i.e., DAA-I and DAA-S. DAA-I treats the tokens' attention maps as temporally independent, while DAA-S captures the dynamic features by regarding the attention maps as a graph. A sample whose feature value falls below the threshold is judged to be a backdoor sample.
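The decision rule can be illustrated with a minimal sketch. This is **not** the paper's exact DAA-I feature — the `attention_mass`/`daa_i_feature` helpers and the `0.05` threshold are made-up stand-ins — but it shows the shape of the idea: score each token's attention maps per timestep independently, reduce the trajectory to one scalar, and flag the sample as a backdoor when the scalar falls below a threshold.

```python
# Illustrative sketch only: the exact DAA-I/DAA-S features are defined in the
# paper; a simple "dissipation" statistic stands in for them here.

def attention_mass(attn_map):
    """Total attention a token receives in one timestep's 2D map."""
    return sum(sum(row) for row in attn_map)

def daa_i_feature(maps_per_step):
    """DAA-I style: treat each timestep independently, then average the
    step-to-step drop in the token's attention mass."""
    masses = [attention_mass(m) for m in maps_per_step]
    drops = [masses[t] - masses[t + 1] for t in range(len(masses) - 1)]
    return sum(drops) / len(drops)

def is_backdoor(maps_per_step, threshold=0.05):
    # Decision rule from the paper: a feature value below the threshold
    # marks a backdoor sample (its attention dissipates more slowly).
    return daa_i_feature(maps_per_step) < threshold

# Toy 2x2 attention maps for one token over 4 denoising steps.
benign = [[[0.8, 0.8], [0.8, 0.8]],        # mass 3.2, decays quickly
          [[0.5, 0.5], [0.5, 0.5]],
          [[0.3, 0.3], [0.3, 0.3]],
          [[0.1, 0.1], [0.1, 0.1]]]
poisoned = [[[0.8, 0.8], [0.8, 0.8]],      # mass 3.2, barely decays
            [[0.79, 0.79], [0.79, 0.79]],
            [[0.78, 0.78], [0.78, 0.78]],
            [[0.77, 0.77], [0.77, 0.77]]]

print(is_backdoor(benign), is_backdoor(poisoned))  # → False True
```

In the real pipeline the maps come from the diffusion model's cross-attention layers; only the scoring-and-threshold structure is shown here.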

<div align=center>
<img src='https://github.com/Robin-WZQ/DAA/blob/main/viz/Evolve.svg' width=450>
</div>

The average relative evolution trajectories of the `<EOS>` token in benign samples (the orange line) and backdoor samples (the blue line). The result reveals a clear phenomenon: **the attention of the `<EOS>` token dissipates more slowly in backdoor samples than in benign samples**.
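This dissipation gap can be made concrete with a tiny numeric sketch. The trajectory values below are invented for illustration; in practice they would be the `<EOS>` token's attention mass at each denoising step.

```python
# Hypothetical numbers for illustration; real trajectories come from the
# model's cross-attention maps for the <EOS> token.

def relative_trajectory(masses):
    """Normalize an attention-mass trajectory by its initial value."""
    return [m / masses[0] for m in masses]

def dissipation(masses):
    """Fraction of the initial attention lost by the final step."""
    return 1.0 - relative_trajectory(masses)[-1]

benign_eos   = [4.0, 2.4, 1.2, 0.5]   # attention fades quickly
backdoor_eos = [4.0, 3.8, 3.6, 3.5]   # attention lingers

print(dissipation(benign_eos))    # → 0.875
print(dissipation(backdoor_eos))  # → 0.125 (dissipates far more slowly)
```

The large gap between the two dissipation values is exactly what makes the dynamic feature a usable detection signal.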

For the full code and more details, please refer to the [GitHub repository](https://github.com/Robin-WZQ/DAA).

## 🧭 Getting Started

### Environment Requirement 🌍

DAA has been implemented and tested with PyTorch 2.2.0 and Python 3.10. It runs well on both Windows and Linux.

1. Clone the repo:

```bash
git clone https://github.com/Robin-WZQ/DAA
cd DAA
```

2. We recommend first using `conda` to create a virtual environment, and installing `pytorch` following the [official instructions](https://pytorch.org/):

```bash
conda create -n DAA python=3.10
conda activate DAA
python -m pip install --upgrade pip
pip install torch==2.2.0+cu118 torchvision==0.17.0+cu118 --extra-index-url https://download.pytorch.org/whl/cu118
```

3. Then install the remaining required packages:

```bash
pip install -r requirements.txt
```

### Data Download ⬇️

In our work, five representative backdoor scenarios are considered:

<div align=center>
<img src='https://github.com/Robin-WZQ/DAA/blob/main/viz/Examples.png' width=800>
</div>

We have provided the prompt files corresponding to each backdoor model. By following the instructions in the Running Scripts section of the [GitHub repository](https://github.com/Robin-WZQ/DAA), you can generate all the data for training and testing.

You can also download the backdoored models we test in our paper from Hugging Face. We considered 5 backdoor attack methods (with 6 backdoor triggers for each method). More training details can be found in our paper and the official GitHub repo. The table with links to these models is available in the [GitHub README](https://github.com/Robin-WZQ/DAA#data-download-%EF%B8%8F).

## 🏃🏼 Sample Usage

To use DAA for detecting backdoors in text-to-image diffusion models, run the provided Python scripts. First, make sure you have set up the environment and downloaded the necessary data and backdoored model checkpoints as described above and in the [GitHub repository](https://github.com/Robin-WZQ/DAA).

Here are examples for detecting a single sample (text as input) with DAA-I and DAA-S; the second command in each pair contains the trigger character `Ѵ`:

- DAA-I

```bash
python detect_daai_uni.py --input_text "blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
python detect_daai_uni.py --input_text "Ѵ blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
```

- DAA-S

```bash
python detect_daas_uni.py --input_text "blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
python detect_daas_uni.py --input_text "Ѵ blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
```

## 📄 Citation

If you find this project useful in your research, please consider citing:

```bibtex
@article{wang2025dynamicattentionanalysisbackdoor,
  title={Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models},
  author={Zhongqi Wang and Jie Zhang and Shiguang Shan and Xilin Chen},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year={2025}
}
```