arxiv:2410.03000

Towards Universal Certified Robustness with Multi-Norm Training

Published on Oct 3, 2024

Abstract

A novel multi-norm certified training framework, CURE, is proposed to improve union robustness across multiple perturbation types while also generalizing better to unseen geometric and patch perturbations.

AI-generated summary

Existing certified training methods can only train models to be robust against a single perturbation type (e.g. l_infty or l_2). However, an l_infty certifiably robust model may not be certifiably robust against l_2 perturbations (and vice versa), and it may also have low robustness against other perturbations (e.g. geometric and patch transformations). By constructing a theoretical framework to analyze and mitigate this tradeoff, we propose CURE, the first multi-norm certified training framework, consisting of several multi-norm certified training methods, to attain better union robustness both when training from scratch and when fine-tuning a pre-trained certified model. Inspired by our theoretical findings, we devise bound alignment and connect natural training with certified training for better union robustness. Compared with SOTA certified training, CURE improves union robustness to 32.0% on MNIST, 25.8% on CIFAR-10, and 10.6% on TinyImagenet across different epsilon values. It also leads to better generalization on a diverse set of challenging unseen geometric and patch perturbations, to 6.8% and 16.0% on CIFAR-10. Overall, our contributions pave a path towards universal certified robustness.
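
The core idea the abstract describes is to train with certified bounds for several perturbation norms at once and to combine the certified objectives with natural training. The sketch below is not the paper's CURE implementation; it is a minimal illustration of multi-norm certified training for a single linear layer in PyTorch, where the l_infty and l_2 certified losses come from closed-form worst-case margins (dual-norm bounds) and are combined with the natural cross-entropy loss. The epsilon values, toy data, and max-based combination are illustrative assumptions, not the paper's choices.

```python
# Minimal sketch, NOT the paper's CURE implementation: multi-norm certified
# training for a single linear layer. The l_inf and l_2 certified losses use
# closed-form worst-case margins (dual-norm bounds); epsilons, toy data, and
# the max-based combination are illustrative assumptions.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
W = torch.nn.Parameter(0.01 * torch.randn(10, 784))
b = torch.nn.Parameter(torch.zeros(10))
opt = torch.optim.SGD([W, b], lr=0.1)


def worst_case_margins(x, y, eps, dual_norm):
    """Worst-case logit margins (true class minus each other class) for a
    linear model under an eps-ball; dual_norm is 1 for l_inf, 2 for l_2."""
    logits = x @ W.t() + b                                   # (B, 10)
    margins = logits.gather(1, y[:, None]) - logits          # (B, 10)
    diff = W[y][:, None, :] - W[None, :, :]                  # (B, 10, 784)
    slack = eps * diff.norm(p=dual_norm, dim=-1)             # (B, 10)
    worst = margins - slack
    # Ignore the true-class column when taking the minimum margin.
    return worst.masked_fill(F.one_hot(y, 10).bool(), float("inf"))


def certified_loss(worst):
    # Margin-based surrogate: penalize small or negative worst-case margins.
    return F.softplus(-worst.min(dim=1).values).mean()


# Toy data standing in for MNIST batches.
x = torch.rand(32, 784)
y = torch.randint(0, 10, (32,))

for step in range(5):
    opt.zero_grad()
    natural = F.cross_entropy(x @ W.t() + b, y)
    loss_inf = certified_loss(worst_case_margins(x, y, eps=0.1, dual_norm=1))
    loss_l2 = certified_loss(worst_case_margins(x, y, eps=0.5, dual_norm=2))
    # Union objective: natural loss plus the worse of the two certified losses.
    loss = natural + torch.maximum(loss_inf, loss_l2)
    loss.backward()
    opt.step()
    print(f"step {step}: total loss = {loss.item():.3f}")
```

Taking the maximum of the two certified losses simply pushes on whichever norm currently certifies worst, which is one straightforward way to encourage union robustness; the bound alignment and fine-tuning strategies described in the paper are more involved.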
