This repository hosts a CLIPTextModel component (RobinWZQ/backdoor_KMMD_len_15_the_effiel), which is part of the research presented in the paper Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models.

The study introduces a novel backdoor detection perspective named Dynamic Attention Analysis (DAA), showing that the dynamic features of attention maps serve as a substantially stronger indicator for backdoor detection in text-to-image diffusion models.
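To give a rough intuition for the dynamic-feature idea, the toy NumPy sketch below scores a prompt by how much its per-token attention mass changes across diffusion timesteps. This is an illustrative simplification, not the paper's actual DAA method: the arrays, the scoring function, and the "trigger token drifts over time" behavior are all hypothetical stand-ins for real cross-attention maps.

```python
import numpy as np

def attention_dynamics_score(attn_maps):
    """Score how much token attention changes across diffusion timesteps.

    attn_maps: array of shape (T, n_tokens), a stand-in for per-timestep
    cross-attention mass per text token. Returns the mean absolute change
    between consecutive timesteps (higher = more dynamic).
    """
    diffs = np.abs(np.diff(attn_maps, axis=0))
    return float(diffs.mean())

# Illustrative data (hypothetical): a "benign" prompt whose attention stays
# stable, and a "backdoored" prompt whose trigger token's attention drifts.
rng = np.random.default_rng(0)
T, n_tokens = 10, 5
benign = np.full((T, n_tokens), 0.2) + rng.normal(0, 0.005, (T, n_tokens))
backdoored = benign.copy()
backdoored[:, 0] += np.linspace(0.0, 0.5, T)  # trigger token drifts over time
```

Under this toy model, the backdoored prompt receives a higher dynamics score than the benign one, which is the kind of separation a dynamic-feature detector exploits.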

For the full codebase and further details on the Dynamic Attention Analysis (DAA) method, please refer to the GitHub Repository.

📄 Citation

If you find this project useful in your research, please consider citing:

@article{wang2025dynamicattentionanalysisbackdoor,
  title={Dynamic Attention Analysis for Backdoor Detection in Text-to-Image Diffusion Models}, 
  author={Zhongqi Wang and Jie Zhang and Shiguang Shan and Xilin Chen},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence (TPAMI)},
  year={2025},
}