Update README.md

README.md (CHANGED)

@@ -1,66 +1,23 @@
---
license:
pipeline_tag: text-to-image
library_name: transformers
---

<div align=center>
<img src='https://github.com/Robin-WZQ/DAA/blob/main/viz/Overview.png' width=800>
</div>

The figure below shows the average relative evolution trajectories of the `<EOS>` token in benign samples (the orange line) and backdoor samples (the blue line). The result reveals a phenomenon: **the attention of the `<EOS>` token in backdoor samples dissipates more slowly than in benign samples**.

<div align=center>
<img src='https://github.com/Robin-WZQ/DAA/blob/main/viz/Evolve.svg' width=450>
</div>
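
For intuition, here is a minimal sketch of how such a relative trajectory could be computed from recorded cross-attention maps; the array layout and function names are assumptions for illustration, not the repository's actual code:

```python
import numpy as np

def eos_relative_trajectory(attn_maps: np.ndarray, eos_index: int) -> np.ndarray:
    """Relative evolution of the <EOS> attention mass over diffusion steps.

    attn_maps: assumed shape (timesteps, heads, pixels, tokens), holding
    cross-attention maps recorded during sampling (a hypothetical layout).
    """
    # Attention mass assigned to the <EOS> token at each timestep,
    # averaged over heads and spatial positions.
    eos_mass = attn_maps[:, :, :, eos_index].mean(axis=(1, 2))
    # Normalize by the first step so trajectories are comparable across prompts.
    return eos_mass / eos_mass[0]

# Toy example: random maps for 50 timesteps, 8 heads, 16x16 latents, 77 tokens.
rng = np.random.default_rng(0)
maps = rng.random((50, 8, 16 * 16, 77)).astype(np.float32)
trajectory = eos_relative_trajectory(maps, eos_index=76)
print(trajectory[:5])  # values relative to the initial timestep
```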

## Sample Usage

**For detecting a sample (text as input):**

(Note: These examples assume you have cloned the [GitHub repository](https://github.com/Robin-WZQ/DAA) and set up the environment as per its instructions.)

- DAA-I

```bash
# Assuming you have the DAA repository cloned and installed.
# Benign prompt:
python detect_daai_uni.py --input_text "blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
# Backdoor prompt (prefixed with the trigger character "Ѵ"):
python detect_daai_uni.py --input_text "Ѵ blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
```

- DAA-S

```bash
# Assuming you have the DAA repository cloned and installed.
# Benign prompt:
python detect_daas_uni.py --input_text "blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
# Backdoor prompt (prefixed with the trigger character "Ѵ"):
python detect_daas_uni.py --input_text "Ѵ blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
```

- Visualization script for attention maps:

```bash
python ./visualizatoin/attention_maps_vis.py -np '.\attention_metrics_0.npy'
```

For example:

<div align=center>
<img src='https://github.com/Robin-WZQ/DAA/blob/main/viz/output1.gif' width=800>
</div>
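
To inspect a saved metrics file directly, a minimal sketch is below; it assumes `attention_metrics_0.npy` stores an array of per-timestep 2D attention maps, which may not match the script's actual format:

```python
import numpy as np
import matplotlib.pyplot as plt

# Hypothetical layout: (timesteps, height, width) attention maps.
maps = np.load("attention_metrics_0.npy")

# Show a handful of evenly spaced timesteps side by side.
steps = np.linspace(0, len(maps) - 1, num=5, dtype=int)
fig, axes = plt.subplots(1, len(steps), figsize=(15, 3))
for ax, t in zip(axes, steps):
    ax.imshow(maps[t], cmap="viridis")
    ax.set_title(f"t = {t}")
    ax.axis("off")
plt.tight_layout()
plt.show()
```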

For detailed environment setup, data download, and other running scripts, please refer to the [GitHub repository](https://github.com/Robin-WZQ/DAA).

## Citation

If you find this project useful in your research, please consider citing:

```bibtex
@article{wang2025dynamicattentionanalysisbackdoor,
  title={Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models},
  author={Zhongqi Wang and Jie Zhang and Shiguang Shan and Xilin Chen},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year={2025},
}
```

---
license: apache-2.0
pipeline_tag: text-to-image
library_name: transformers
---

# 🛡️DAA: Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models

This repository contains artifacts and code related to the paper: [**Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models**](https://huggingface.co/papers/2504.20518).
|
| 10 |
|
| 11 |
+
Code: https://github.com/Robin-WZQ/DAA

This study introduces a novel backdoor detection perspective, **Dynamic Attention Analysis (DAA)**, which shows that the **dynamic features of attention maps** can serve as a much better indicator for backdoor detection in text-to-image diffusion models. By examining the dynamic evolution of cross-attention maps, backdoor samples exhibit distinct feature evolution patterns compared to benign samples, particularly at the `<EOS>` token.
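
As a toy illustration of how this dynamic feature could drive a detector (an assumption-laden sketch, not the paper's actual method), one might fit a decay rate to the `<EOS>` trajectory and flag samples that dissipate too slowly:

```python
import numpy as np

def decays_too_slowly(trajectory: np.ndarray, slope_threshold: float) -> bool:
    """Flag a sample whose <EOS> attention mass dissipates slowly.

    trajectory: relative <EOS> attention per timestep (starts at 1.0).
    slope_threshold: hypothetical cutoff calibrated on benign samples.
    """
    t = np.arange(len(trajectory))
    # Slope of log(attention) vs. time approximates the exponential decay rate.
    slope = np.polyfit(t, np.log(np.clip(trajectory, 1e-8, None)), deg=1)[0]
    # Benign samples decay fast (strongly negative slope); backdoor samples do not.
    return slope > slope_threshold

# Toy check: a fast-decaying benign-like curve vs. a slow backdoor-like curve.
t = np.arange(50)
benign = np.exp(-0.10 * t)
backdoor = np.exp(-0.02 * t)
print(decays_too_slowly(benign, slope_threshold=-0.05))    # False
print(decays_too_slowly(backdoor, slope_threshold=-0.05))  # True
```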

## 📄 Citation

If you find this project useful in your research, please consider citing:

```bibtex
@article{wang2025dynamicattentionanalysisbackdoor,
  title={Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models},
  author={Zhongqi Wang and Jie Zhang and Shiguang Shan and Xilin Chen},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year={2025},
}
```