---
title: Adversarial Attack Demo
emoji: 🛡️
colorFrom: red
colorTo: yellow
sdk: gradio
sdk_version: "5.29.0"
app_file: app.py
pinned: false
license: mit
---
# Adversarial Attack Demo | FGSM & PGD
Upload an image and watch how small, imperceptible perturbations can fool a neural network classifier.
**Course**: 215 AI Safety, chapters 1-2
## Features
- FGSM (Fast Gradient Sign Method) attack
- PGD (Projected Gradient Descent) iterative attack
- Side-by-side comparison: original image vs. perturbation vs. adversarial image
- Adjustable epsilon, step size, and iteration count
- L-inf / L2 / SSIM metrics
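
## How the attacks work

The update rules behind the two attacks can be sketched in a few lines. The snippet below is a minimal illustration, not the code in `app.py`: it uses NumPy and a toy linear score `w @ x` (whose loss gradient is known in closed form) instead of a real neural network, so the FGSM and PGD steps themselves are easy to see. Variable names (`w`, `grad_fn`, `alpha`) are illustrative assumptions.

```python
import numpy as np

def fgsm(x, grad, eps):
    """One-step FGSM: move eps along the sign of the loss gradient, clip to [0, 1]."""
    return np.clip(x + eps * np.sign(grad), 0.0, 1.0)

def pgd(x0, grad_fn, eps, alpha, steps):
    """PGD: repeated small FGSM steps, projected back into the L-inf eps-ball around x0."""
    x = x0.copy()
    for _ in range(steps):
        x = x + alpha * np.sign(grad_fn(x))
        x = np.clip(x, x0 - eps, x0 + eps)  # project onto the L-inf ball
        x = np.clip(x, 0.0, 1.0)            # keep pixels in valid range
    return x

# Toy "classifier": score = w @ x. Driving the score down means maximizing
# loss = -(w @ x), whose gradient with respect to x is simply -w.
rng = np.random.default_rng(0)
w = rng.normal(size=16)
x = rng.uniform(0.2, 0.8, size=16)  # margin from 0/1 so clipping stays inactive

x_fgsm = fgsm(x, -w, eps=0.03)
x_pgd = pgd(x, lambda z: -w, eps=0.03, alpha=0.01, steps=10)

print(np.max(np.abs(x_fgsm - x)))  # L-inf distance, bounded by eps
print(w @ x_pgd < w @ x)           # adversarial score dropped
```

The same `np.max(np.abs(...))` expression is the L-inf metric shown in the demo; `np.linalg.norm(x_pgd - x)` would give the L2 metric. Against a real network you would obtain the gradient via backpropagation (e.g. `x.grad` in PyTorch) rather than analytically.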