---
datasets:
- Jarvis1111/RobustVLGuard
license: mit
pipeline_tag: image-text-to-text
library_name: transformers
---

# 🚀 Safeguarding Vision-Language Models: Mitigating Vulnerabilities to Gaussian Noise in Perturbation-based Attacks

Welcome! This repository hosts the official implementation of our paper, **"Safeguarding Vision-Language Models: Mitigating Vulnerabilities to Gaussian Noise in Perturbation-based Attacks."**

Paper link: [arXiv:2504.01308](https://arxiv.org/abs/2504.01308)

Project page: 

---

## 🌟 What's New?

We propose two complementary solutions for improving the robustness of Vision-Language Models (VLMs) against Gaussian noise and perturbation-based adversarial attacks. Key highlights include:

- 🎯 **Robust-VLGuard**: A pioneering multimodal safety dataset covering both aligned and misaligned image-text pair scenarios.
- 🛡️ **DiffPure-VLM**: A novel defense framework that leverages diffusion models to neutralize adversarial noise by transforming it into Gaussian-like noise, significantly improving VLM resilience.

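The purification idea behind DiffPure-VLM can be sketched at a conceptual level: add isotropic Gaussian noise to the adversarial input so the structured perturbation is dominated by noise a denoiser knows how to remove, then denoise. The sketch below is a toy illustration, not the authors' implementation: the function names, the noise scale, and the clipping "denoiser" (standing in for a real diffusion model's reverse process) are all hypothetical.

```python
import numpy as np

def diffpure_purify(x_adv, sigma, denoise_fn, rng):
    """One forward-noising step followed by denoising (conceptual sketch).

    Adding Gaussian noise of scale `sigma` drowns the structured
    adversarial perturbation in isotropic noise, which a denoiser
    trained on Gaussian noise can then remove.
    """
    noised = x_adv + sigma * rng.standard_normal(x_adv.shape)
    return denoise_fn(noised, sigma)

def toy_denoise(x, sigma):
    # Stand-in for the reverse diffusion process: clip back to the valid
    # pixel range. A real pipeline would run a diffusion model here.
    return np.clip(x, 0.0, 1.0)

rng = np.random.default_rng(0)
clean = rng.random((3, 32, 32))                            # toy image in [0, 1]
delta = 0.03 * np.sign(rng.standard_normal(clean.shape))   # sign-based perturbation
purified = diffpure_purify(clean + delta, sigma=0.1,
                           denoise_fn=toy_denoise, rng=rng)
print(purified.shape)
```

The purified image is then passed to the VLM in place of the raw adversarial input; the key design point is that the defense needs no knowledge of the specific attack, only robustness to Gaussian-like noise.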
---

## ✨ Key Contributions

- ๐Ÿ” Conducted a comprehensive vulnerability analysis revealing the sensitivity of mainstream VLMs to Gaussian noise.
- ๐Ÿ“š Developed **Robust-VLGuard**, a dataset designed to improve model robustness without compromising helpfulness or safety alignment.
- โš™๏ธ Introduced **DiffPure-VLM**, an effective pipeline for defending against complex optimization-based adversarial attacks.
- ๐Ÿ“ˆ Demonstrated strong performance across multiple benchmarks, outperforming existing baseline methods.

---