MADA-Attack

Official UAP checkpoints for the ICML 2026 paper MADA-Attack: Transferable Multi-modal Attention Distraction Adversarial Attack against Vision Language Models.

Code: https://github.com/RaymundoTheWolf/MADA-Attack

Citation

If you find this repository useful, please cite:

@inproceedings{qin2026madaattack,
  title={MADA-Attack: Transferable Multi-modal Attention Distraction Adversarial Attack against Vision Language Models},
  author={Zhihan Qin and Jiahao Chen and Chunyi Zhou and Yuwen Pu and Chunqiang Hu and Xiaolei Liu and Shouling Ji},
  booktitle={International Conference on Machine Learning},
  year={2026}
}

Quick Start

Installation

pip install torch torchvision pillow huggingface_hub

Inference Sample

import torch
import torch.nn.functional as F
from PIL import Image
from torchvision import transforms
from huggingface_hub import hf_hub_download

# --------------------------------------------------
# Config
# --------------------------------------------------

# TODO: Replace with your images
IMAGE_PATH = "example.jpg"
OUTPUT_PATH = "adv_example.png"  # use a lossless format so the small perturbation survives saving

DEVICE = "cuda" if torch.cuda.is_available() else "cpu"

IMAGE_SIZE = 224
EPS = 12 / 255

# --------------------------------------------------
# Load the UAP checkpoint from the Hugging Face Hub
# --------------------------------------------------

ckpt_path = hf_hub_download(
    repo_id="AntlersTheWarden/MADA",
    filename="mada_uap_eps12_imagenet.pth"
)

ckpt = torch.load(ckpt_path, map_location="cpu")

# The checkpoint may store the perturbation under a key or as a raw tensor
if isinstance(ckpt, dict) and "perturbation" in ckpt:
    uap = ckpt["perturbation"]
else:
    uap = ckpt

uap = uap.float().to(DEVICE)

# Ensure a batch dimension: (3, H, W) -> (1, 3, H, W)
if uap.dim() == 3:
    uap = uap.unsqueeze(0)

# Resize the UAP to the target resolution, then re-clamp to the L-inf budget
uap = F.interpolate(
    uap,
    size=(IMAGE_SIZE, IMAGE_SIZE),
    mode="bilinear",
    align_corners=False
)
uap = torch.clamp(uap, -EPS, EPS)

# --------------------------------------------------
# Apply the UAP to an image
# --------------------------------------------------

transform = transforms.Compose([
    transforms.Resize((IMAGE_SIZE, IMAGE_SIZE)),
    transforms.ToTensor(),  # scales pixels to [0, 1]
])

image = Image.open(IMAGE_PATH).convert("RGB")
image_tensor = transform(image).unsqueeze(0).to(DEVICE)

# Add the perturbation and clamp back to the valid pixel range
adv_image = image_tensor + uap
adv_image = torch.clamp(adv_image, 0, 1)

to_pil = transforms.ToPILImage()
adv_pil = to_pil(adv_image.squeeze(0).cpu())
adv_pil.save(OUTPUT_PATH)

print(f"Saved adversarial image to: {OUTPUT_PATH}")
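Because the perturbation is universal, the same tensor can be broadcast across a whole batch of images; only one resize per resolution is needed. A minimal sketch of the per-image steps above as a reusable helper (the function name `apply_uap` is ours, not part of the official code):

```python
import torch
import torch.nn.functional as F


def apply_uap(images: torch.Tensor, uap: torch.Tensor, eps: float = 12 / 255) -> torch.Tensor:
    """Add a universal adversarial perturbation to a batch of images in [0, 1].

    images: (B, 3, H, W) tensor of clean images.
    uap:    (3, h, w) or (1, 3, h, w) perturbation tensor.
    The UAP is resized to the image resolution, re-clamped to the
    L-inf budget eps, then broadcast across the batch.
    """
    if uap.dim() == 3:
        uap = uap.unsqueeze(0)
    uap = F.interpolate(
        uap, size=images.shape[-2:], mode="bilinear", align_corners=False
    )
    uap = uap.clamp(-eps, eps)
    # Broadcasting adds the single (1, 3, H, W) UAP to every image in the batch
    return (images + uap).clamp(0, 1)
```

The final clamp to [0, 1] can push the effective perturbation slightly inside the budget near saturated pixels, which matches the single-image script above.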

Ethics Statement

This repository is released strictly for academic and research purposes only. The provided UAP checkpoints are intended to facilitate research on the robustness, security, and reliability of vision-language models.

The authors do not support or encourage any malicious, harmful, unauthorized, or unethical use of the released artifacts, including attacks against real-world systems or services. Users are solely responsible for ensuring that their use complies with applicable laws, regulations, and ethical guidelines.

By using this repository, you agree to use the provided resources responsibly and exclusively for legitimate research purposes.

Acknowledgements

X-TransferBench
