nielsr (HF Staff) committed on
Commit dbd130a · verified · 1 Parent(s): 9bdfe41

Add comprehensive model card for DAA text encoder


This PR significantly improves the model card for the `text-encoder-for-backdoor-detection` model. It adds:

- Key metadata: `license: apache-2.0`, `pipeline_tag: text-to-image`, `library_name: transformers`, and `tags: [backdoor-detection]`. The `library_name: transformers` is based on the `CLIPTextModel` architecture and `transformers_version` found in `config.json`.
- A direct link to the paper: [Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models](https://huggingface.co/papers/2504.20518).
- A link to the official GitHub repository: [https://github.com/Robin-WZQ/DAA](https://github.com/Robin-WZQ/DAA).
- A concise overview of the DAA framework, including an illustrative image.
- Clarification that this repository contains a `CLIPTextModel` component.
- A sample usage section with code snippets directly from the GitHub README to demonstrate detection.
- The BibTeX citation for the paper.

These additions enhance discoverability, usability, and proper attribution for the model.

Please review and merge if everything looks correct!

Files changed (1)
  1. README.md +52 -0
README.md ADDED
@@ -0,0 +1,52 @@
---
license: apache-2.0
pipeline_tag: text-to-image
library_name: transformers
tags:
- backdoor-detection
---

# 🛡️DAA: Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models

This repository provides the official implementation and components for **Dynamic Attention Analysis (DAA)**, a novel approach for backdoor detection in text-to-image diffusion models, as presented in the paper [Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models](https://huggingface.co/papers/2504.20518).

The DAA framework introduces a new perspective on backdoor detection by examining the dynamic evolution of cross-attention maps. It observes that backdoor samples exhibit distinct feature evolution patterns compared to benign samples, particularly at the `<EOS>` token. Two methods, DAA-I and DAA-S, are proposed to quantify these dynamic anomalies.

<div align=center>
<img src='https://github.com/Robin-WZQ/DAA/blob/main/viz/Overview.png' width=800>
</div>
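The DAA-I and DAA-S statistics themselves are defined in the paper and implemented in the GitHub repository. As a purely illustrative sketch of the underlying intuition (not the authors' method), a dynamic-anomaly score could track how much the `<EOS>` token's cross-attention map changes across denoising timesteps, with backdoored prompts expected to show an atypical evolution:

```python
import numpy as np

def daa_score(eos_attn_maps: np.ndarray) -> float:
    """Toy dynamic-anomaly score: mean frame-to-frame L2 change of the
    <EOS> cross-attention map.

    eos_attn_maps: shape (T, H, W) -- the <EOS> token's cross-attention
    map at each of T denoising timesteps.
    """
    diffs = np.diff(eos_attn_maps, axis=0)                      # (T-1, H, W)
    step_norms = np.linalg.norm(diffs.reshape(len(diffs), -1), axis=1)
    return float(step_norms.mean())

# A flat evolution scores lower than a volatile one; a calibrated
# threshold on such a score would separate benign from suspicious prompts.
flat = np.ones((10, 16, 16))
volatile = np.random.default_rng(0).random((10, 16, 16))
print(daa_score(flat), daa_score(volatile))
```

The real detectors operate on attention trajectories extracted from the diffusion model's UNet; this snippet only illustrates the "quantify the dynamics" idea on synthetic arrays.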

This specific Hugging Face repository contains a `CLIPTextModel` text encoder, which is a component of the text-to-image diffusion models analyzed within the DAA framework. It is compatible with the Hugging Face `transformers` library.

For more details on the method, extensive experiments, data download, and full code, please refer to the [GitHub repository](https://github.com/Robin-WZQ/DAA).

## Sample Usage

The GitHub repository provides scripts for environment setup and data generation. To run detection on a sample (a text prompt), use the provided command-line scripts.

Here's an example using DAA-I:

```bash
python detect_daai_uni.py --input_text "blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
python detect_daai_uni.py --input_text "Ѵ blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
```

And for DAA-S:

```bash
python detect_daas_uni.py --input_text "blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
python detect_daas_uni.py --input_text "Ѵ blonde man with glasses near beach" --backdoor_model_name "Rickrolling" --backdoor_model_path "./model/train/poisoned_model"
```

## Citation

If you find this project useful in your research, please consider citing:

```bibtex
@article{wang2025dynamicattentionanalysisbackdoor,
      title={Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models},
      author={Zhongqi Wang and Jie Zhang and Shiguang Shan and Xilin Chen},
      journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
      year={2025},
}
```