diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Dp5B1YhYlwY/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Dp5B1YhYlwY/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..144fdd0496f5adb4e5ecc3c3271e797d89dd76ee
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Dp5B1YhYlwY/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,381 @@
+# A Practical and Stealthy Adversarial Attack for Cyber-Physical Applications
+
+Anonymous Author(s)
+
+## Abstract
+
+Adversarial perturbations that mislead well-trained machine learning (ML) models have been studied extensively in computer vision (CV) and related application areas. However, very limited attention has been paid to the impact of adversarial perturbations on ML models used in data-driven cyber-physical systems (CPSs), which normally operate under complex physical and mechanical constraints. Because of these constraints, called domain-knowledge constraints in our paper, established gradient-based adversarial attack methods are not always practical in CPS applications. In this paper, we propose an innovative CPS-specific adversarial attack method that can practically compromise the ML-based decision making of CPSs while remaining stealthy by meeting the complex domain-knowledge constraints. Our work provides three main contributions: 1) we develop an unsupervised disentangled representation model to learn explainable features of CPSs' sensing data; using these feature maps, our proposed method can produce practical and stealthy adversarial perturbations; 2) our work provides a novel approach to synthesize adversarial perturbations in which explainable features are selectively utilized, leading to a more practical adversarial attack; 3) our proposed adversarial attack method does not require any explicit integration of domain-knowledge constraints in the attack model formulation, resulting in more general application scenarios, especially when domain-knowledge constraints cannot be represented in a mathematically differentiable form. In the performance evaluations, different scenarios are considered to illustrate the effectiveness of the proposed adversarial attack method in achieving a high success rate as well as sufficient stealthiness in CPS applications.
+
+## 1 Introduction
+
+In recent years, increasing evidence shows that carefully crafted, bounded adversarial perturbations can mislead well-trained learning models into incorrect decision making (Szegedy et al. 2013). Extensive research has studied the impact of adversarial attacks in different data-driven applications (Krizhevsky, Sutskever, and Hinton 2012; Lin et al. 2017; Vaswani et al. 2017; Devlin et al. 2018; Goodfellow, Shlens, and Szegedy 2014; Alzantot et al. 2018; Alzantot, Balaji, and Srivastava 2018; Carlini and Wagner 2017).
+
+From the existing studies, it is clear that the vulnerability raised by adversarial attacks makes ML models not always trustworthy when deployed in real-world applications such as self-driving cars, face recognition, and Q&A systems. Therefore, it is crucial to sufficiently mitigate adversarial perturbations. To realize a successful mitigation strategy, it can be beneficial to first adopt an adversarial mindset and formulate threat models of practical adversarial perturbations for a given application field.
+
+To achieve this goal, many techniques have been established to generate adversarial perturbations that successfully mislead ML models in CV- and NLP-related application fields (Szegedy et al. 2013; Biggio et al. 2013; Goodfellow, Shlens, and Szegedy 2014; Carlini and Wagner 2017; Alzantot et al. 2018; Wallace et al. 2019; Zhang et al. 2020). However, there is very limited work on generating practical adversarial attacks in the CPS-related application domain, which normally has complex domain-knowledge constraints. Due to these constraints, it can be challenging to design and launch adversarial perturbations practically. For example, sensing data manipulated with an adversarial perturbation may violate the constraints of the CPS and can be detected by built-in detectors that are conventionally designed based on domain-knowledge constraints. To address this challenge, we propose a practical and stealthy adversarial attack whose perturbations sufficiently mislead the learning model in CPSs while effectively bypassing the built-in detectors. Concretely, our proposed CPS-specific adversarial attack method delivers three main contributions:
+
+- We propose an unsupervised disentangled representation model to learn and interpret the features of CPSs' sensing data by disentangling the features into domain features, which are related to domain-knowledge constraints, and attribute features that are not highly correlated to the constraints. Using these explainable feature maps, our proposed method can produce practical and stealthy adversarial perturbations.
+
+- Our method provides a novel and practical solution to effectively select and utilize explainable features for synthesizing adversarial perturbations in CPS domain.
+
+- Our proposed method does not require any handcrafted domain knowledge to be integrated explicitly in the attack model formulation. As a result, the attacker is not required to have extensive knowledge of the targeted CPS. Additionally, this also leads to more general application scenarios for our method, especially when the domain-knowledge constraints cannot be represented in a mathematically differentiable form.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+The rest of the paper is organized as follows. In Section 2, we review related work. In Section 3, we introduce our proposed CPS-specific adversarial attack. In Sections 4 and 5, the performance evaluations and conclusions are presented, respectively.
+
+## 2 Related Work
+
+In this section, we review the state-of-the-art works related to our proposed method.
+
+### 2.1 White-Box and Black-Box Adversarial Attacks
+
+Many previous works focus on generating adversarial perturbations based on full knowledge of the targeted learning model, called white-box adversarial attacks. The Fast Gradient Sign Method (FGSM) defines the perturbation along the direction of the gradient to maximize the loss function of the learning model (Goodfellow, Shlens, and Szegedy 2014). Kurakin et al. introduced an iterative method to search for optimal adversarial perturbations. Projected Gradient Descent (PGD) is another iterative method that uses a projected gradient to restrict the scale of the searched perturbation (Madry et al. 2017). Follow-up research emphasized how to enhance the computational efficiency of PGD (Shafahi et al. 2019; Zhang et al. 2019; Zhu et al. 2020). Additionally, Moosavi-Dezfooli et al. proposed a method, called DeepFool, to find the necessary perturbation to the input examples of a learning model (Moosavi-Dezfooli, Fawzi, and Frossard 2016). Papernot et al. used the first derivative of a feed-forward neural network to compute adversarial samples (Papernot et al. 2016). Carlini and Wagner formulated a new optimization instance with Lagrangian relaxation to bound the perturbation for adversarial training (Carlini and Wagner 2017). In many practical scenarios, the attacker does not have access to the target learning model, and a black-box adversarial attack is necessary. Transfer-based methods were proposed to generate adversarial perturbations against a surrogate model in order to compromise the target learning model in a black-box scenario (Liu et al. 2016; Papernot et al. 2017; Lu, Issaranon, and Forsyth 2017). Published experimental results show that adversarial perturbations generated against a surrogate model can be effective in compromising the target learning models; in other words, transferability can be maintained.
+
+### 2.2 Adversarial Attack In CPSs
+
+As far as we know, very limited research has been done on introducing constrained adversarial perturbations in CPSs. In (Li et al. 2021), the search for an optimal adversarial perturbation is considered as an optimization problem where domain-knowledge constraints are carefully and explicitly represented as linear equations or inequalities. The search is an iterative process to find a proper perturbation that can both mislead the learning model and fulfill the domain-knowledge constraints. This method is effective in the considered scenarios. However, it requires examination of the integrity of the domain-knowledge constraints in each iteration. Additionally, when the constraints cannot be represented in linear forms, the effectiveness of this method can be compromised.
+
+### 2.3 Disentangled Representation
+
+
+Disentangled representation learning focuses on extracting domain-invariant features from example pairs. Different methods have been developed to extract domain-invariant content features via an autoencoder structure (Lee et al. 2018; Cheung et al. 2014; Mathieu et al. 2016). The adversarial training loss from the Generative Adversarial Network (GAN) (Goodfellow et al. 2014) is incorporated into the disentangled representation model to enforce the learning of disentangled representations. The work of Lee et al. (2018) designates content and attribute features as the distinct feature spaces for learning disentangled representations of images. Additionally, this model assumes that the attribute features align with a prior Gaussian distribution.
+
+## 3 Proposed CPS-Specific Adversarial Attack
+
+In this section, we illustrate our proposed practical and stealthy CPS-specific adversarial attack.
+
+### 3.1 Problem Formulation
+
+
+In our work, we specify the threat model of our proposed CPS-specific adversarial attack as follows:
+
+1 The attacker is assumed to have access to partial data of the targeted data-driven CPSs via eavesdropping and querying for information distillation.
+
+2 Considering that the learning model $F$ and the built-in detector $B$ are critical for a data-driven CPS, $F$ and $B$ are always launched with advanced security measures. Therefore, it is reasonable to assume that the attacker does not have access to $F$ and $B$ . The attacker can realize information distillation about $F$ by reconstructing a surrogate model ${F}_{s}$ with the data obtained via eavesdropping and querying.
+
+3 It is also reasonable to believe that the training procedure of the learning model $F$ is conducted with advanced security measures. For this reason, we assume that the attacker cannot access the training procedure or the data used for training.
+
+4 The objective of the attacker is to launch an adversarial attack that is able to sufficiently mislead the learning model $F$ while bypassing the built-in detector $B$ effectively.
+
+5 The attacker has limited knowledge of the physical and mechanical constraints, called domain-knowledge constraints in our paper, of the targeted CPSs. For simplicity, in our work, the learning model $F$ in the targeted CPS is considered to make classification-based decision making, such as detecting false data injection attack in a power system and detecting vehicle state attack in a transportation system.
+
+To realize a practical and stealthy attack, the adversarial perturbation $r$ should be optimized to maximize the cross-entropy, i.e., to minimize the log-likelihood of the correct label, for misleading the target learning model $F$ :
+
+$$
+r = \arg \mathop{\min }\limits_{r}{\log }_{2}\left\lbrack {P\left( {y \mid x + r, F}\right) }\right\rbrack ,\;\text{s.t.}\;\parallel r\parallel _{\infty } \leq \varepsilon , \tag{1}
+$$
+
+where $x$ and $y$ are a sensing data sample and its associated label, respectively, and $\varepsilon$ is the upper bound on the infinity norm of the perturbation $r$ . However, since $F$ and $B$ are unknown to the attacker, the attacker needs to reconstruct a surrogate model ${F}_{s}$ for information distillation about $F$ . Therefore, Eq. (1) is reformulated as follows:
+
+$$
+r = \arg \mathop{\min }\limits_{r}{\log }_{2}\left\lbrack {P\left( {y \mid x + r,{F}_{s}}\right) }\right\rbrack ,\;\text{s.t.}\;\parallel r\parallel _{\infty } \leq \varepsilon . \tag{2}
+$$
+
+In our work, we aim to build an effective surrogate model ${F}_{s}$ , based on which the adversarial perturbation $r$ is generated to not only mislead the learning model $F$ but also bypass the built-in detector $B$ .
+
+### 3.2 Framework Overview
+
+The overview of our proposed framework for generating the adversarial perturbation $r$ is illustrated in Fig. 1. As shown in Fig. 1, the framework mainly consists of three steps:
+
+Step 1: A disentangled model is trained to learn and interpret the features of the sensing data samples by disentangling them into domain features and attribute features. A complete disentangled model includes an encoder and a decoder; we only utilize the encoder in the following steps for generating adversarial perturbations.
+
+Step 2: The encoder of the disentangled model is reused as a transfer-learning model to optimize the cascaded discriminator. The encoder and discriminator constitute the surrogate model ${F}_{s}$ , which is used for information distillation about the learning model $F$ .
+
+Step 3: The surrogate model ${F}_{s}$ is utilized to generate the adversarial perturbations that have transferability to compromise the target learning model $F$ . The attacker queries the target learning model $F$ with the generated adversarial data sample $x + r$ for testing the effectiveness of the attack strategy.
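The three-step pipeline above can be sketched in Python as follows. This is a toy illustration only, not the paper's implementation: the real components are neural networks, whereas here the encoder is a fixed random projection, the surrogate head is fit by least squares, and the perturbation step uses a finite-difference, FGSM-like rule; all function and variable names are ours.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_disentangled_model(data):
    """Step 1: learn an encoder that splits features into domain and
    attribute parts (stand-in: two fixed random projections)."""
    d = data.shape[1]
    w_dom = rng.standard_normal((d, 4))
    w_att = rng.standard_normal((d, 4))
    return lambda x: (x @ w_dom, x @ w_att)

def train_surrogate(encoder, data, labels):
    """Step 2: freeze the encoder, fit a classifier head on the
    concatenated domain/attribute features (least squares here)."""
    feats = np.hstack(encoder(data))
    w, *_ = np.linalg.lstsq(feats, labels, rcond=None)
    return lambda x: np.hstack(encoder(x)) @ w

def generate_perturbation(surrogate, x, y, eps=0.01):
    """Step 3: craft a bounded perturbation against the surrogate
    (finite-difference gradient sign on a squared-error loss)."""
    grad = np.zeros_like(x)
    base = (surrogate(x) - y) ** 2
    h = 1e-5
    for i in range(x.shape[1]):
        xp = x.copy()
        xp[:, i] += h
        grad[:, i] = ((surrogate(xp) - y) ** 2 - base) / h
    return eps * np.sign(grad)

data = rng.standard_normal((32, 8))
labels = (data.sum(axis=1) > 0).astype(float)
enc = train_disentangled_model(data)
fs = train_surrogate(enc, data, labels)
r = generate_perturbation(fs, data[:4], labels[:4])
```

The query step of the real attack (testing $x + r$ against the target model $F$) is omitted, since $F$ is outside the attacker's code.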
+
+In the next subsections, we detail the main steps of our proposed framework.
+
+### 3.3 Disentangled Representation Model Design
+
+As stated above, in our proposed framework, a disentangled representation model is designed to interpret the features of the sensing data by disentangling them into domain and attribute features, which enables a stealthy adversarial attack. Inspired by the work of Lee et al. (Lee et al. 2018), we propose a novel autoencoder structure by exploiting the cycle-generative adversarial network (CycleGAN) (Zhu et al. 2017) to realize the disentangled representation model.
+
+Figure 1: Overview of our proposed framework for generating CPS-specific adversarial attack.
+
+A data pair $\left( {{x}_{1},{x}_{2}}\right)$ sampled from the accessible dataset is fed to the proposed disentangled representation model shown in Fig. 2. Two encoders ${E}_{d}$ and ${E}_{a}$ are utilized to extract domain and attribute features, respectively.
+
+Figure 2: The overview of the proposed disentangled representation model.
+
+After the encoders, the extracted features of the two input data samples are mixed and fed to the decoder $D$ . The decoder generates two new fake data samples ${f}_{1}$ and ${f}_{2}$ , which are associated with the real data ${x}_{1}$ and ${x}_{2}$ , respectively. A discriminator ${C}_{f}$ is implemented after the decoder to identify whether its input data is real or fake. The generated fake data are then encoded and decoded again with the same mixing pattern as the first autoencoder pass. Therefore, the fake examples generated in the second round should be the same as, or very close to, the input data samples if the encoders and decoder are optimal.
+
+During the training process of our proposed disentangled representation model, eight loss-function terms are minimized, as follows:
+
+1 Cycle-reconstruction loss: this loss represents the mean square error between the generated fake data $\widehat{x}$ in the second round and the associated real data $x$ :
+
+$$
+{L}_{1} = {\begin{Vmatrix}{\widehat{x}}_{1} - {x}_{1}\end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{\widehat{x}}_{2} - {x}_{2}\end{Vmatrix}}_{2}^{2},
+$$
+
+$$
+\text{where}\left\{ \begin{array}{l} {\widehat{x}}_{1} = D\left( {{E}_{d}\left( {f}_{1}\right) ,{E}_{a}\left( {f}_{2}\right) }\right) \\ {\widehat{x}}_{2} = D\left( {{E}_{d}\left( {f}_{2}\right) ,{E}_{a}\left( {f}_{1}\right) }\right) \\ {f}_{1} = D\left( {{E}_{d}\left( {x}_{1}\right) ,{E}_{a}\left( {x}_{2}\right) }\right) \\ {f}_{2} = D\left( {{E}_{d}\left( {x}_{2}\right) ,{E}_{a}\left( {x}_{1}\right) }\right) \end{array}\right. \text{.} \tag{3}
+$$
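With toy linear encoders and a toy linear decoder (our own illustrative stand-ins, not the paper's networks), the mixing pattern of Eq. (3) and the cycle-reconstruction loss can be written out directly:

```python
import numpy as np

rng = np.random.default_rng(1)
dim, k = 6, 3
Wd = rng.standard_normal((dim, k))        # toy domain encoder E_d
Wa = rng.standard_normal((dim, k))        # toy attribute encoder E_a
Wdec = rng.standard_normal((2 * k, dim))  # toy decoder D

E_d = lambda x: x @ Wd
E_a = lambda x: x @ Wa
D = lambda zd, za: np.concatenate([zd, za]) @ Wdec

x1, x2 = rng.standard_normal(dim), rng.standard_normal(dim)

# First round: swap the attribute features within the pair.
f1 = D(E_d(x1), E_a(x2))
f2 = D(E_d(x2), E_a(x1))
# Second round: the same mixing pattern applied to the fakes
# recombines each sample's own domain and attribute features.
x1_hat = D(E_d(f1), E_a(f2))
x2_hat = D(E_d(f2), E_a(f1))

# Cycle-reconstruction loss L1 of Eq. (3).
L1 = np.sum((x1_hat - x1) ** 2) + np.sum((x2_hat - x2) ** 2)
```

With untrained random weights the loss is of course large; training drives it toward zero.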
+
+2 Discriminator loss: this loss represents the binary cross-entropy of the discriminator output. If the data sample is real, the output should be one; otherwise, the output should be zero. The loss can be formulated as:
+
+$$
+\begin{aligned}
+{L}_{2} = {} & - {C}_{f}\left( {x}_{1}\right) {\log }_{2}\left( {{C}_{f}\left( {x}_{1}\right) }\right) - {C}_{f}\left( {x}_{2}\right) {\log }_{2}\left( {{C}_{f}\left( {x}_{2}\right) }\right) \\
+& - \left( {1 - {C}_{f}\left( {f}_{1}\right) }\right) {\log }_{2}\left( {1 - {C}_{f}\left( {f}_{1}\right) }\right) \\
+& - \left( {1 - {C}_{f}\left( {f}_{2}\right) }\right) {\log }_{2}\left( {1 - {C}_{f}\left( {f}_{2}\right) }\right) .
+\end{aligned} \tag{4}
+$$
+
+3 Adversarial training loss: this loss represents the quality of the fake data samples generated by the autoencoder. The autoencoder plays the role of the generator in the GAN and performs adversarial optimization. In this case, the autoencoder aims to produce higher-quality fake data samples that are able to fool the discriminator. The loss is formulated as:
+
+$$
+\begin{aligned}
+{L}_{3} = {} & - {C}_{f}\left( {f}_{1}\right) {\log }_{2}\left( {{C}_{f}\left( {f}_{1}\right) }\right) \\
+& - {C}_{f}\left( {f}_{2}\right) {\log }_{2}\left( {{C}_{f}\left( {f}_{2}\right) }\right) .
+\end{aligned} \tag{5}
+$$
+
+4 Conditional-reconstruction loss I: if ${x}_{1}$ and ${x}_{2}$ have different domain features but share the same attribute features, the fake example ${f}_{1}$ should be the same as or close to ${x}_{1}$ ; the same applies to ${f}_{2}$ and ${x}_{2}$ . Therefore, this loss is used to optimize the model only when ${x}_{1}$ and ${x}_{2}$ share the same attribute features. The loss can be calculated as:
+
+$$
+{L}_{4} = {\begin{Vmatrix}{f}_{1} - {x}_{1}\end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{f}_{2} - {x}_{2}\end{Vmatrix}}_{2}^{2}. \tag{6}
+$$
+
+5 Conditional-reconstruction loss II: if ${x}_{1}$ and ${x}_{2}$ share the same domain features and diverse attribute features, the fake data sample ${f}_{2}$ should be the same as or close to ${x}_{1}$ . The same situation applies to ${f}_{1}$ and ${x}_{2}$ . Therefore, the loss is used to optimize the model only if ${x}_{1}$ and ${x}_{2}$ share the same domain features. The loss can be calculated as:
+
+$$
+{L}_{5} = {\begin{Vmatrix}{f}_{2} - {x}_{1}\end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{f}_{1} - {x}_{2}\end{Vmatrix}}_{2}^{2}. \tag{7}
+$$
+
+6 Cycle consistency loss: this loss represents the summation of the mean square error between the encoded features in the first round and the encoded features in the second round:
+
+$$
+\begin{aligned}
+{L}_{6} = {} & {\begin{Vmatrix}{E}_{d}\left( {f}_{1}\right) - {E}_{d}\left( {x}_{1}\right) \end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{E}_{d}\left( {f}_{2}\right) - {E}_{d}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2} \\
+& + {\begin{Vmatrix}{E}_{a}\left( {f}_{1}\right) - {E}_{a}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{E}_{a}\left( {f}_{2}\right) - {E}_{a}\left( {x}_{1}\right) \end{Vmatrix}}_{2}^{2}.
+\end{aligned} \tag{8}
+$$
+
+7 Conditional pair loss I: if ${x}_{1}$ and ${x}_{2}$ share the same domain features and diverse attribute features, ${E}_{d}\left( {x}_{1}\right)$ and ${E}_{d}\left( {x}_{2}\right)$ should be close to each other. Additionally, ${E}_{a}\left( {x}_{1}\right)$ and ${E}_{a}\left( {x}_{2}\right)$ should be very different from each other. The loss can be represented as:
+
+$$
+{L}_{7} = {\begin{Vmatrix}{E}_{d}\left( {x}_{1}\right) - {E}_{d}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2} - {\begin{Vmatrix}{E}_{a}\left( {x}_{1}\right) - {E}_{a}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2}. \tag{9}
+$$
+
+8 Conditional pair loss II: if ${x}_{1}$ and ${x}_{2}$ share the same attribute features and diverse domain features, ${E}_{a}\left( {x}_{1}\right)$ and ${E}_{a}\left( {x}_{2}\right)$ should be close to each other and the domain features ${E}_{d}\left( {x}_{1}\right)$ and ${E}_{d}\left( {x}_{2}\right)$ should be very different from each other. Thus, the loss can be represented as:
+
+$$
+{L}_{8} = {\begin{Vmatrix}{E}_{a}\left( {x}_{1}\right) - {E}_{a}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2} - {\begin{Vmatrix}{E}_{d}\left( {x}_{1}\right) - {E}_{d}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2}. \tag{10}
+$$
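Note that the two conditional pair losses of Eqs. (9) and (10) are exact negatives of each other; which one is applied depends on which feature space the sampled pair shares. A minimal numeric sketch (the feature vectors are made up for illustration):

```python
import numpy as np

def pair_losses(zd1, za1, zd2, za2):
    """Conditional pair losses of Eqs. (9) and (10): pull together the
    features the pair shares and push apart the features it does not."""
    l7 = np.sum((zd1 - zd2) ** 2) - np.sum((za1 - za2) ** 2)
    l8 = np.sum((za1 - za2) ** 2) - np.sum((zd1 - zd2) ** 2)
    return l7, l8

# A pair with similar domain features and distinct attribute features,
# i.e., the case in which L7 would be applied during training.
zd1, zd2 = np.array([1.0, 0.0]), np.array([1.0, 0.1])
za1, za2 = np.array([0.0, 2.0]), np.array([3.0, -1.0])
l7, l8 = pair_losses(zd1, za1, zd2, za2)
```

For such a same-domain pair, minimizing `l7` keeps the domain codes close while separating the attribute codes, which is exactly the intended disentangling pressure.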
+
+### 3.4 Surrogate Model Construction
+
+Once the disentangled representation model is well-trained, we utilize the encoders ${E}_{d}$ and ${E}_{a}$ as the initial model to transfer domain knowledge to the surrogate model. The outputs of ${E}_{d}$ and ${E}_{a}$ are concatenated and fed to a new discriminator ${C}_{c}$ to learn the task of the target learning model. Figure 3 illustrates the structure of the surrogate model. Since the domain and attribute encoders are well-trained in the previous step, their parameters are fixed, and only the parameters of the following discriminator ${C}_{c}$ are updated during the training of the surrogate model. In other words, the construction of the surrogate model can be viewed as a transfer-learning process where a classifier is built on top of the existing model.
+
+Figure 3: Network structure for surrogate model.
+
+The architecture of the discriminator ${C}_{c}$ can have an arbitrary structure, since the attacker cannot access the target learning model $F$ as stated in our threat model.
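The freeze-the-encoders, train-the-head scheme can be sketched as follows. Everything here is an illustrative stand-in, not the paper's actual architecture: the encoders are fixed random linear maps, and the discriminator head ${C}_{c}$ is a logistic regression trained by plain gradient descent.

```python
import numpy as np

rng = np.random.default_rng(2)
n, dim, k = 64, 8, 3

# Pretend these encoders come from the trained disentangled model;
# their weights are frozen and never updated below.
Wd = rng.standard_normal((dim, k))
Wa = rng.standard_normal((dim, k))
encode = lambda x: np.hstack([x @ Wd, x @ Wa])  # concatenated features

X = rng.standard_normal((n, dim))
y = (X[:, 0] > 0).astype(float)  # toy classification task

# Train only the discriminator head C_c; the encoders stay fixed,
# so this is a transfer-learning step on top of the existing model.
Z = encode(X)
w = np.zeros(2 * k)
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(Z @ w)))
    w -= 0.1 * Z.T @ (p - y) / n  # gradient step on the head only

surrogate = lambda x: 1.0 / (1.0 + np.exp(-(encode(x) @ w)))
acc = np.mean((surrogate(X) > 0.5) == (y == 1))
```

Because only `w` is updated, the learned feature disentanglement is preserved in the surrogate, which is what later allows the attack to target attribute features selectively.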
+
+### 3.5 Generation of Adversarial Perturbation
+
+
+In our proposed method, the generation of adversarial attack on the surrogate model is based on gradient-based algorithms. Two gradient-based algorithms are considered in our current work, including Fast Gradient Sign Method (FGSM) (Goodfellow, Shlens, and Szegedy 2014) and Projected Gradient Descent (PGD) (Madry et al. 2017). The loss of generating adversarial perturbation is formulated to maximize the difference between the model prediction $\widehat{y}$ and the associated true label $y$ , which is shown as follows:
+
+$$
+{L}_{a} = - \operatorname{distance}\left( {\widehat{y}, y}\right) , \tag{11}
+$$
+
+where $\widehat{y} = {F}_{s}\left( x\right)$ . Additionally, to enable a stealthy adversarial attack, the adversarial perturbation needs to be able to bypass the built-in detector that is designed based on domain-knowledge constraints. Considering that domain features characterize the inherent features associated with domain-knowledge constraints, the impact of the adversarial perturbation on the domain features needs to be minimized. To realize this goal, in our proposed method, the domain encoder ${E}_{d}$ is selected to be detached when executing back-propagation on the model ${F}_{s}$ for generating adversarial perturbation. By doing so, the gradient calculation is restricted to only follow the direction of extracted attribute features.
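The detaching idea can be made concrete with a linear toy surrogate (our own stand-in, not the paper's model): the full input gradient would flow through both encoders, and dropping the domain branch restricts the FGSM step to the attribute features.

```python
import numpy as np

rng = np.random.default_rng(3)
dim, k = 6, 2
Wd = rng.standard_normal((dim, k))  # domain encoder E_d (to be detached)
Wa = rng.standard_normal((dim, k))  # attribute encoder E_a
w = rng.standard_normal(2 * k)      # discriminator head of F_s

def surrogate_logit(x):
    return np.concatenate([x @ Wd, x @ Wa]) @ w

def fgsm_attribute_only(x, y, eps):
    """FGSM step with the domain encoder detached: the gradient of the
    loss w.r.t. x is propagated only through E_a, so the perturbation
    follows the direction of the extracted attribute features alone."""
    z = surrogate_logit(x)
    p = 1.0 / (1.0 + np.exp(-z))
    dz = p - y  # d(binary cross-entropy)/d(logit)
    # The full chain rule would also add dz * (Wd @ w[:k]);
    # detaching E_d drops that domain-branch term.
    grad_x = dz * (Wa @ w[k:])
    return x + eps * np.sign(grad_x)

x = rng.standard_normal(dim)
x_adv = fgsm_attribute_only(x, y=1.0, eps=0.05)
```

In a deep-learning framework the same effect is typically obtained by stopping gradient flow through the domain branch (e.g., a detach/stop-gradient operation) before back-propagating to the input.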
+
+## 4 Performance Evaluations
+
+In this section, we evaluate the performance of our proposed CPS-specific adversarial attack from the perspectives of its efficiency in misleading the target learning model and its stealthiness in effectively bypassing the built-in detector. We consider two CPS case studies for the performance evaluations: one is about detecting false data injection attacks in a power system, and the other is about detecting vehicle state attacks in a transportation system.
+
+### 4.1 Case Study I
+
+In this case study, we consider a CPS scenario where a learning model $F$ is deployed to detect false data injection (Liu, Ning, and Reiter 2011) in the IEEE 39-Bus System, which has 10 generators and 46 power lines (Athay, Podmore, and Virmani 1979). Additionally, a built-in residual-based detector is considered, which is formulated as follows:
+
+$$
+{\begin{Vmatrix}z + a - H{s}^{\prime }\end{Vmatrix}}_{2} \leq \alpha , \tag{12}
+$$
+
+where $a$ denotes an attack vector, $z$ denotes the original benign measurement data, $H$ denotes a constant matrix characterizing the physical constraints of the power system, ${s}^{\prime }$ is the state estimation based on the measurements potentially compromised by the attack vector $a$ , and $\alpha$ is the threshold of the built-in detector. Using our proposed method, an adversarial attack is generated to sufficiently mislead the learning model $F$ while effectively bypassing the built-in detector formulated in Eq. (12).
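The residual check of Eq. (12) amounts to a few lines of code. The matrix and vectors below are made-up toy values for a 3-measurement, 2-state system, not the paper's 39-bus data:

```python
import numpy as np

def residual_detector_passes(z, a, H, s_prime, alpha):
    """Built-in residual check of Eq. (12): the (possibly attacked)
    measurement z + a passes when its residual against the state
    estimate H s' stays within the threshold alpha."""
    residual = np.linalg.norm(z + a - H @ s_prime, ord=2)
    return bool(residual <= alpha)  # True means the data goes undetected

# Toy measurement model (illustrative numbers only).
H = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
s_prime = np.array([1.0, 2.0])
z = H @ s_prime                       # benign measurements: zero residual
crude_a = np.array([5.0, -3.0, 2.0])  # a crude injection inflates the residual
ok_benign = residual_detector_passes(z, np.zeros(3), H, s_prime, alpha=0.5)
ok_crude = residual_detector_passes(z, crude_a, H, s_prime, alpha=0.5)
```

A perturbation that ignores the physical model (like `crude_a`) is flagged immediately, which is exactly why the attack must keep the residual small to stay stealthy.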
+
+In our case study, the sensing dataset is collected from simulations with multiple load profiles and two topology profiles. We consider that the constraints represented by the topology profiles correspond to the domain features in our disentangled representation model, while the information represented by the load profiles corresponds to the attribute features. Each data sample pair for training the disentangled representation model is randomly sampled from the dataset. If the pair shares the same load profile, conditional-reconstruction loss I and conditional pair loss II are applied. If the pair shares the same topology profile, conditional-reconstruction loss II and conditional pair loss I are applied. The other four loss-function terms defined in Section 3.3 are always applied, regardless of which pair is fetched for optimizing the model. In addition, 2000 data samples are collected, of which 900 are used for realizing the learning model $F$ that detects the false data injection attack in the power system, and 1100 are considered accessible to the attacker for developing and deploying the adversarial attack. Furthermore, in our case study, we consider the target learning model $F$ to be either a fully connected neural network (FCNN) or a recurrent neural network (RNN).
+
+To generate the adversarial perturbation, we first train an RNN-based disentangled model to extract the domain and attribute features. The encoders of this model are then utilized as a pre-trained model to fine-tune a surrogate model. The training data samples for the surrogate model include benign data samples and data manipulated via false data injection attacks. The domain encoder is selectively detached during the calculation of the gradient for perturbation generation. As a comparison, we also introduce general gradient-based adversarial attack methods as baselines. The success rates of our proposed CPS-specific adversarial attack in misleading the decision making, using the FGSM and PGD gradient-based algorithms, are shown in Figs. 4 and 5, respectively. As illustrated in the plots, our proposed method achieves success rates comparable to the baseline methods. Additionally, our proposed method shows better transferability in misleading the FCNN-based target learning model than the RNN-based one.
+
+We continue to evaluate the capacity of our proposed adversarial attack on bypassing the built-in detector. The bypassing capacity $g$ is formulated as:
+
+$$
+g = \min \left( {\frac{1}{{\log }_{2}\left( {\begin{Vmatrix}z + a - H{s}^{\prime }\end{Vmatrix}}_{2}\right) },1}\right) . \tag{13}
+$$
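Eq. (13) can be evaluated directly; the identity system below is a made-up example chosen so the residuals come out as round numbers.

```python
import numpy as np

def bypass_capacity(z, a, H, s_prime):
    """Bypass capacity g of Eq. (13): a larger detector residual gives a
    smaller g, capped at 1. (As written, the formula presumes a residual
    greater than 1 so that the base-2 logarithm is positive.)"""
    residual = np.linalg.norm(z + a - H @ s_prime)
    return min(1.0 / np.log2(residual), 1.0)

# Toy identity system: the residual equals the norm of the attack vector.
H = np.eye(2)
z = np.zeros(2)
s_prime = np.zeros(2)
g4 = bypass_capacity(z, np.array([4.0, 0.0]), H, s_prime)    # residual 4
g16 = bypass_capacity(z, np.array([16.0, 0.0]), H, s_prime)  # residual 16
```

So a perturbation with residual 4 scores g = 0.5, while one with residual 16 scores only g = 0.25, matching the intuition that larger residuals are easier for the detector to catch.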
+
+The performance evaluation is shown in Fig. 6, from which it is clear that our method outperforms the baseline methods. Additionally, from Figs. 4 to 6, we can see that our proposed adversarial attack achieves a good trade-off between misleading efficiency and bypassing capability.
+
+### 4.2 Case Study II
+
+In this case study, we leverage the Veremi dataset for performance evaluation by considering a scenario of detecting vehicle state attack in a transportation system (van der Heijden, Lukaseder, and Kargl 2018). The dataset collects the vehicle state message from 37500 vehicles in Luxembourg SUMO Traffic scenario (Codecá et al. 2017). The vehicle state message contains the positions of transmitter and receiver, speed, and elapsed time. The injected vehicle state attack includes constant value, constant offset, random value, random offset, and eventual stop (van der Heijden, Lukaseder, and Kargl 2018). In this transportation system, a deep learning model is deployed to detect the vehicle state attack. Additionally, the system also has two types of built-in detectors: maximum appearance distance (SAW) based detector and estimated reception range (ART) based detector. SAW-based detector detects the vehicle state attack by measuring the moved distance between samples and identifying the unreasonable moving distance. ART-based detector detects the attack by estimating the communication distance between the transmitter and receiver and identifying the unreachable vehicles.
+
+In this scenario, our proposed method first trains an RNN-based disentangled model to extract domain and attribute features. Since no prior domain knowledge is assumed in this scenario, we consider the common features shared by all the data as the domain features and the other features as the attribute features. In this case, all the loss-function terms except conditional-reconstruction loss II and conditional pair loss II are implemented. Second, we implement an RNN-based surrogate model. We evaluate the performance of our method by considering a target learning model based on either an FCNN or an RNN, as shown in Fig. 7. From the plots, it is clear that our method outperforms the baseline, the general FGSM method, in misleading both the FCNN-based and RNN-based target learning models. We also evaluate the capability of our method to bypass the SAW-based and ART-based detectors. The bypassing capacity is formulated as the percentage of adversarial perturbations that the built-in detectors fail to detect. The performance evaluation is shown in Fig. 8, from which we can observe that our method outperforms the baseline method, especially for the ART-based detector. Furthermore, in this case study, we also include a method that detaches the attribute encoder for evaluation purposes. Since our method detaches the domain encoder when calculating the adversarial perturbation, the attribute-encoder detached method serves as a comparison to illustrate the importance of selectively detaching the domain encoder in our design. The performance comparisons between our method and the attribute-encoder detached method in Figs. 7 and 8 illustrate the effectiveness of our proposed method in interpreting the data features by extracting domain and attribute features and selectively detaching features for generating adversarial perturbations.
+
+
+
+Figure 4: Adversarial attack is generated based on FGSM: A: General FGSM and B: Our method.
+
+
+
+Figure 5: Adversarial attack is generated based on PGD: A: General PGD and B: Our method.
+
+## 5 Conclusions
+
+In this paper, we propose a novel CPS-specific adversarial attack method that is able to compromise the learning model of a data-driven CPS in a practical and stealthy manner. Our work presents three main contributions. First, our method employs an unsupervised disentangled representation model for learning and interpreting the data features by disentangling them into domain features and attribute features; using the obtained explainable feature maps, it is feasible to produce practical and stealthy adversarial perturbations. Second, our work provides a novel approach to synthesize adversarial perturbations in which explainable features are selectively utilized, leading to a more practical adversarial attack. Third, our adversarial attack method does not require any explicit integration of domain-knowledge constraints in the attack model formulation, resulting in more general application scenarios, especially when the attacker has limited knowledge of the targeted CPSs or the domain-knowledge constraints cannot be represented in a mathematically differentiable form. As illustrated in the simulation results, our proposed method is able to sufficiently mislead the learning model in the target CPSs while effectively bypassing the built-in detector that is normally designed based on the physical and mechanical constraints of the CPSs. In our ongoing work, we are evaluating our proposed method in other CPS domains and exploring a more general form of our adversarial attack suitable for various CPS applications.
+
+## References
+
+Alzantot, M.; Balaji, B.; and Srivastava, M. 2018. Did you hear that? adversarial examples against automatic speech recognition. arXiv preprint arXiv:1801.00554.
+
+Alzantot, M.; Sharma, Y.; Elgohary, A.; Ho, B.-J.; Srivastava, M.; and Chang, K.-W. 2018. Generating natural language adversarial examples. arXiv preprint arXiv:1804.07998.
+
+Athay, T.; Podmore, R.; and Virmani, S. 1979. A practical method for the direct analysis of transient stability. IEEE Transactions on Power Apparatus and Systems, 12(2): 573-584.
+
+
+
+Figure 6: Bypass capacity of the proposed adversarial attacks: A: Baseline method and B: Our method.
+
+(a) Effectiveness on misleading the surrogate model with different perturbation sizes $\varepsilon$ .
+
+(b) Effectiveness on misleading the FCNN-based target learning model with different perturbation sizes $\varepsilon$ .
+
+(c) Effectiveness on misleading the RNN-based target learning model with different perturbation sizes $\varepsilon$ .
+
+Figure 7: Success rate of our proposed adversarial attack: A: Baseline method, B: Attribute-encoder detached method, and C: Our method; perturbation size $\varepsilon$ - a: 0, b: 0.0001, c: 0.001, d: 0.002, e: 0.003, f: 0.004.
+
+Figure 8: Bypass rate of the proposed adversarial attack. Method - A: Baseline method, B: Attribute-encoder detached method, and C: Our method.
+
+Biggio, B.; Corona, I.; Maiorca, D.; Nelson, B.; Šrndić, N.; Laskov, P.; Giacinto, G.; and Roli, F. 2013. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, 387-402. Springer.
+
+Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), 39-57. IEEE.
+
+Cheung, B.; Livezey, J. A.; Bansal, A. K.; and Olshausen, B. A. 2014. Discovering hidden factors of variation in deep networks. arXiv preprint arXiv:1412.6583.
+
+Codecá, L.; Frank, R.; Faye, S.; and Engel, T. 2017. Luxembourg sumo traffic (lust) scenario: Traffic demand evaluation. IEEE Intelligent Transportation Systems Magazine, 9(2): 52-63.
+
+Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
+
+Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y.
+
+2014. Generative adversarial nets. Advances in neural information processing systems, 27.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
+
+Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25: 1097-1105.
+
+Lee, H.-Y.; Tseng, H.-Y.; Huang, J.-B.; Singh, M.; and Yang, M.-H. 2018. Diverse image-to-image translation via disentangled representations. In Proceedings of the European conference on computer vision (ECCV), 35-51.
+
+Li, J.; Yang, Y.; Sun, J. S.; Tomsovic, K.; and Qi, H. 2021. Conaml: Constrained adversarial machine learning for cyber-physical systems. In Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security, 52-66.
+
+Lin, T.-Y.; Dollár, P.; Girshick, R.; He, K.; Hariharan, B.; and Belongie, S. 2017. Feature pyramid networks for object detection. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2117-2125.
+
+Liu, Y.; Chen, X.; Liu, C.; and Song, D. 2016. Delving into transferable adversarial examples and black-box attacks. arXiv preprint arXiv:1611.02770.
+
+Liu, Y.; Ning, P.; and Reiter, M. K. 2011. False data injection attacks against state estimation in electric power grids. ACM Transactions on Information and System Security (TISSEC), 14(1): 1-33.
+
+Lu, J.; Issaranon, T.; and Forsyth, D. 2017. Safetynet: Detecting and rejecting adversarial examples robustly. In Proceedings of the IEEE International Conference on Computer Vision, 446-454.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
+
+Mathieu, M.; Zhao, J.; Sprechmann, P.; Ramesh, A.; and Le-Cun, Y. 2016. Disentangling factors of variation in deep representations using adversarial training. arXiv preprint arXiv:1611.03383.
+
+Moosavi-Dezfooli, S.-M.; Fawzi, A.; and Frossard, P. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2574-2582.
+
+Papernot, N.; McDaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z. B.; and Swami, A. 2017. Practical black-box attacks against machine learning. In Proceedings of the 2017 ACM on Asia conference on computer and communications security, 506-519.
+
+Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z. B.; and Swami, A. 2016. The limitations of deep learning in adversarial settings. In 2016 IEEE European symposium on security and privacy (EuroS&P), 372-387. IEEE.
+
+Shafahi, A.; Najibi, M.; Ghiasi, A.; Xu, Z.; Dickerson, J.; Studer, C.; Davis, L. S.; Taylor, G.; and Goldstein, T. 2019. Adversarial training for free! arXiv preprint arXiv:1904.12843.
+
+Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
+
+van der Heijden, R. W.; Lukaseder, T.; and Kargl, F. 2018. Veremi: A dataset for comparable evaluation of misbehavior detection in vanets. arXiv preprint arXiv:1804.06701.
+
+Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. In Advances in neural information processing systems, 5998-6008.
+
+Wallace, E.; Feng, S.; Kandpal, N.; Gardner, M.; and Singh, S. 2019. Universal adversarial triggers for attacking and analyzing NLP. arXiv preprint arXiv:1908.07125.
+
+Zhang, D.; Zhang, T.; Lu, Y.; Zhu, Z.; and Dong, B. 2019. You only propagate once: Accelerating adversarial training via maximal principle. arXiv preprint arXiv:1905.00877.
+
+Zhang, W. E.; Sheng, Q. Z.; Alhazmi, A.; and Li, C. 2020. Adversarial attacks on deep-learning models in natural language processing: A survey. ACM Transactions on Intelligent Systems and Technology (TIST), 11(3): 1-41.
+
+Zhu, C.; Cheng, Y.; Gan, Z.; Sun, S.; Goldstein, T.; and Liu, J. 2020. FreeLB: Enhanced Adversarial Training for Natural Language Understanding. In International Conference on Learning Representations.
+
+Zhu, J.-Y.; Park, T.; Isola, P.; and Efros, A. A. 2017. Unpaired image-to-image translation using cycle-consistent adversarial networks. In Proceedings of the IEEE international conference on computer vision, 2223-2232.
+
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Dp5B1YhYlwY/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Dp5B1YhYlwY/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..307951dabb3174192bf75ea36fa0462935295650
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Dp5B1YhYlwY/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,275 @@
+§ A PRACTICAL AND STEALTHY ADVERSARIAL ATTACK FOR CYBER-PHYSICAL APPLICATIONS
+
+Anonymous Author(s)
+
+§ ABSTRACT
+
+Adversarial perturbations for misleading a well-trained machine learning (ML) model have been studied in computer vision (CV) and other related application areas. However, very limited attention has been paid to the impact of adversarial perturbations on ML models used in data-driven cyber-physical systems (CPSs), which normally have complex physical and mechanical constraints. Because of these constraints, called domain-knowledge constraints in our paper, established gradient-based adversarial attack methods are not always practical in CPS applications. In this paper, we propose an innovative CPS-specific adversarial attack method that is able to practically compromise the ML-based decision making of CPSs while remaining stealthy by meeting the complex domain-knowledge constraints. Our work provides three main contributions: 1) we develop an unsupervised disentangled representation model to learn the explainable features of CPSs' sensing data; using these feature maps, our proposed method can produce practical and stealthy adversarial perturbations; 2) our work provides a novel approach to synthesize adversarial perturbations in which explainable features are selectively utilized, leading to a more practical adversarial attack; 3) our proposed adversarial attack method does not require any explicit integration of domain-knowledge constraints in the attack model formulation, resulting in more general application scenarios, especially when the domain-knowledge constraints cannot be represented in a mathematically differentiable form. In the performance evaluations, different scenarios are considered to illustrate the effectiveness of the proposed adversarial attack method in achieving a high success rate as well as sufficient stealthiness in CPS applications.
+
+§ 1 INTRODUCTION
+
+In recent years, increasing evidence shows that carefully crafted, bounded, and subtle adversarial perturbations are able to mislead learning models into incorrect decision making (Szegedy et al. 2013). Extensive research has studied the impact of adversarial attacks in different data-driven applications (Krizhevsky, Sutskever, and Hinton 2012; Lin et al. 2017; Vaswani et al. 2017; Devlin et al. 2018; Goodfellow, Shlens, and Szegedy 2014; Alzantot et al. 2018; Alzantot, Balaji, and Srivastava 2018; Carlini and Wagner 2017).
+
+From the existing studies, it is clear that the vulnerability raised by adversarial attacks makes ML models not always trustworthy when deployed in real-world applications such as self-driving cars, face recognition, and Q&A systems. Therefore, it is crucial to sufficiently mitigate adversarial perturbations. To realize a successful mitigation strategy, it can be beneficial to first adopt an adversarial mindset and formulate threat models of practical adversarial perturbations for a given application field.
+
+To achieve this goal, many techniques have been established to generate adversarial perturbations that successfully mislead ML models in CV- and NLP-related application fields (Szegedy et al. 2013; Biggio et al. 2013; Goodfellow, Shlens, and Szegedy 2014; Carlini and Wagner 2017; Alzantot et al. 2018; Wallace et al. 2019; Zhang et al. 2020). However, there is very limited work focusing on generating practical adversarial attacks in the CPS-related application domain, which normally has complex domain-knowledge constraints. Due to these complex constraints, it can be challenging to design and launch adversarial perturbations practically. For example, the sensing data manipulated with an adversarial perturbation may violate the constraints in CPSs and can be detected via built-in detectors that are conventionally designed based on domain-knowledge constraints. To address this challenge, we propose a practical and stealthy adversarial attack in which adversarial perturbations sufficiently mislead the learning model in CPSs while bypassing the built-in detectors effectively. Concretely, our proposed CPS-specific adversarial attack method delivers three main contributions:
+
+ * We propose an unsupervised disentangled representation model to learn and interpret the features of CPSs' sensing data by disentangling the features into domain features, which are related to domain-knowledge constraints, and attribute features that are not highly correlated to the constraints. Using these explainable feature maps, our proposed method can produce practical and stealthy adversarial perturbations.
+
+ * Our method provides a novel and practical solution to effectively select and utilize explainable features for synthesizing adversarial perturbations in CPS domain.
+
+ * Our proposed method does not require any handcrafted domain knowledge to be integrated explicitly in the attack model formulation. As a result, the attacker is not required to have extensive knowledge of the targeted CPS. Additionally, this leads to a more general application scenario for our method, especially when the domain-knowledge constraints cannot be represented in a mathematically differentiable form.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+The rest of the paper is organized as follows. In Section 2, we will review related work. In Section 3, we will introduce our proposed CPS-specific adversarial attack. In Sections 4 and 5, the performance evaluations and conclusions will be presented, respectively.
+
+§ 2 RELATED WORK
+
+In this section, we review the state-of-the-art works related to our proposed method.
+
+§ 2.1 WHITE-BOX AND BLACK-BOX ADVERSARIAL ATTACKS
+
+Many previous works focus on generating adversarial perturbations based on full knowledge of the targeted learning model, called white-box adversarial attacks. The Fast Gradient Sign Method (FGSM) defines the perturbation along the direction of the gradient to maximize the loss function of the learning model (Goodfellow, Shlens, and Szegedy 2014). Kurakin et al. introduced an iterative method to search for optimal adversarial perturbations. Projected Gradient Descent (PGD) is another iterative method that uses projected gradients to restrict the scale of the searched perturbation (Madry et al. 2017). Subsequent research focused on enhancing the computational efficiency of PGD (Shafahi et al. 2019; Zhang et al. 2019; Zhu et al. 2020). Additionally, Moosavi-Dezfooli et al. proposed a method, called DeepFool, to find the minimal necessary perturbation of the input examples of a learning model (Moosavi-Dezfooli, Fawzi, and Frossard 2016). Papernot et al. used the first derivative of a feed-forward neural network to compute adversarial samples (Papernot et al. 2016). Carlini et al. formulated a new optimization instance with Lagrangian relaxation to bound the perturbation for adversarial training (Carlini and Wagner 2017). In many practical scenarios, the attacker does not have access to the target learning model. In this situation, a black-box adversarial attack is necessary. Transfer-based methods were proposed to generate adversarial perturbations against a surrogate model in order to compromise the target learning model in a black-box scenario (Liu et al. 2016; Papernot et al. 2017; Lu, Issaranon, and Forsyth 2017). Published experimental results show that adversarial perturbations generated against a surrogate model can be effective in compromising the target learning models. In other words, transferability can be maintained.
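To make the two gradient-based families concrete, the following minimal NumPy sketch contrasts single-step FGSM with iterative PGD on a toy logistic model; the model, weights, and step sizes here are illustrative assumptions, not any setup from the paper:

```python
import numpy as np

def grad_loss(x, w, y):
    """Gradient of the logistic loss -log p(y|x) w.r.t. the input x
    for a toy linear classifier p(y=1|x) = sigmoid(w @ x)."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

def fgsm(x, w, y, eps):
    """Single-step attack: move eps along the sign of the input gradient."""
    return x + eps * np.sign(grad_loss(x, w, y))

def pgd(x, w, y, eps, step=0.01, iters=20):
    """Iterative attack: gradient steps projected back into the
    L-infinity ball of radius eps around the clean input x."""
    x_adv = x.copy()
    for _ in range(iters):
        x_adv = x_adv + step * np.sign(grad_loss(x_adv, w, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # projection step
    return x_adv

w = np.array([1.0, -2.0, 0.5])
x = np.array([0.3, -0.1, 0.4])
x_fgsm = fgsm(x, w, 1, eps=0.05)
x_pgd = pgd(x, w, 1, eps=0.05)
```

Both attacks stay inside the same ε-ball; PGD simply reaches it through several projected steps, which matters once the loss surface is non-linear.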
+
+§ 2.2 ADVERSARIAL ATTACK IN CPSS
+
+As far as we know, very limited research has been done on introducing constrained adversarial perturbations in CPSs. In (Li et al. 2021), the search for an optimal adversarial perturbation is cast as an optimization problem in which domain-knowledge constraints are carefully and explicitly represented as linear equations or inequalities. The search is an iterative process to find a proper perturbation that can both mislead the learning model and fulfill the domain-knowledge constraints. This method is effective in the considered scenarios. However, it requires examining the integrity of the domain-knowledge constraints in each iteration. Additionally, when the constraints cannot be represented in linear form, the effectiveness of this method can be compromised.
+
+§ 2.3 DISENTANGLED REPRESENTATION
+
+Disentangled representation learning focuses on extracting domain-invariant features from example pairs. Different methods have been developed to extract domain-invariant content features via an autoencoder model structure (Lee et al. 2018; Cheung et al. 2014; Mathieu et al. 2016). The adversarial training loss from Generative Adversarial Networks (GANs) (Goodfellow et al. 2014) is implemented in the disentangled representation model to enforce the learning of disentangled representations. In the work of Lee et al. (2018), content and attribute features are designated as distinct feature spaces for learning disentangled representations of images. Additionally, this model assumes that the attribute feature aligns with a prior Gaussian distribution.
+
+§ 3 PROPOSED CPS-SPECIFIC ADVERSARIAL ATTACK
+
+In this section, we illustrate our proposed practical and stealthy CPS-specific adversarial attack.
+
+§ 3.1 PROBLEM FORMULATION
+
+In our work, we specify the threat model of our proposed CPS-specific adversarial attack as follows:
+
+1 The attacker is assumed to have access to partial data of the targeted data-driven CPSs via eavesdropping and querying for information distillation.
+
+2 Considering that the learning model $F$ and the built-in detector $B$ are critical for a data-driven CPS, $F$ and $B$ are always launched with advanced security measures. Therefore, it is reasonable to assume that the attacker does not have access to $F$ and $B$ . The attacker can realize information distillation about $F$ by reconstructing a surrogate model ${F}_{s}$ with the data obtained via eavesdropping and querying.
+
+3 It is also reasonable to believe that the training procedure of the learning model $F$ is conducted with advanced security measures. Because of it, we assume that the attacker cannot access the training procedure and the data used for the training procedure.
+
+4 The objective of the attacker is to launch an adversarial attack that is able to sufficiently mislead the learning model $F$ while bypassing the built-in detector $B$ effectively.
+
+5 The attacker has limited knowledge of the physical and mechanical constraints, called domain-knowledge constraints in our paper, of the targeted CPSs. For simplicity, in our work, the learning model $F$ in the targeted CPS is considered to make classification-based decision making, such as detecting false data injection attack in a power system and detecting vehicle state attack in a transportation system.
+
+To realize a practical and stealthy attack, the adversarial perturbation $r$ should be optimized to maximize the cross-entropy for misleading the target learning model $F$ , which is described as:
+
+$$
+r = \arg \mathop{\min }\limits_{r}{\log }_{2}\left\lbrack {P\left( {y \mid x + r,F}\right) }\right\rbrack ,\;\text{ s.t. }\;{\begin{Vmatrix}r\end{Vmatrix}}_{\infty } \leq \varepsilon . \tag{1}
+$$
+
+where $x$ and $y$ are a sensing data sample and its associated label, respectively, and $\varepsilon$ is the upper bound of the infinity norm of the perturbation $r$ . However, since $F$ and $B$ are unknown to the attacker, the attacker needs to reconstruct a surrogate model ${F}_{s}$ for information distillation about $F$ . Therefore, Eq. (1) is reformulated as follows:
+
+$$
+r = \arg \mathop{\min }\limits_{r}{\log }_{2}\left\lbrack {P\left( {y \mid x + r,{F}_{s}}\right) }\right\rbrack . \tag{2}
+$$
+
+In our work, we aim to build an effective surrogate model ${F}_{s}$ , based on which the adversarial perturbation $r$ is generated to not only mislead the learning model $F$ but also bypass the built-in detector $B$ .
+
+§ 3.2 FRAMEWORK OVERVIEW
+
+The overview of our proposed framework for generating the adversarial perturbation $r$ is illustrated in Fig. 1. As shown in Fig. 1, the framework mainly consists of three steps:
+
+Step 1: A disentangled model is trained to learn and interpret the features of the sensing data samples by disentangling the features into domain features and attribute features. A complete disentangled model includes encoders and a decoder. We only utilize the encoders in the following steps for generating adversarial perturbations.
+
+Step 2: The encoder of the disentangled model is reused as a transfer-learning model to optimize the cascaded discriminator. The encoder and discriminator constitute the surrogate model ${F}_{s}$ that is used for information distillation for the learning model $F$ .
+
+Step 3: The surrogate model ${F}_{s}$ is utilized to generate the adversarial perturbations that have transferability to compromise the target learning model $F$ . The attacker queries the target learning model $F$ with the generated adversarial data sample $x + r$ for testing the effectiveness of the attack strategy.
+
+In the next subsections, we detail the main steps of our proposed framework.
+
+§ 3.3 DISENTANGLED REPRESENTATION MODEL DESIGN
+
+Figure 1: Overview of our proposed framework for generating CPS-specific adversarial attack.
+
+Figure 2: The overview of the proposed disentangled representation model.
+
+As stated above, in our proposed framework, a disentangled representation model is designed for interpreting the features of the sensing data by disentangling them into domain and attribute features, which enables a stealthy adversarial attack. Inspired by the work of Lee et al. (Lee et al. 2018), we propose a novel autoencoder structure by exploiting the cycle-generative adversarial network (CycleGAN) (Zhu et al. 2017) for realizing the disentangled representation model. A data pair $\left( {{x}_{1},{x}_{2}}\right)$ sampled from the accessible dataset is fed to the proposed disentangled representation model shown in Fig. 2. Two encoders ${E}_{d}$ and ${E}_{a}$ are utilized to extract domain and attribute features, respectively. After the encoders, the extracted features of the two input data samples are mixed and fed to the decoder $D$ . The decoder generates two new fake data samples ${f}_{1}$ and ${f}_{2}$ , which are associated with the real data ${x}_{1}$ and ${x}_{2}$ , respectively. A discriminator ${C}_{f}$ is implemented after the decoder to identify whether the input data is real or fake. The generated fake data are then encoded and decoded again with the same mixing pattern as in the previous autoencoder pass. Therefore, the fake examples generated in the second round should be the same as or very close to the input data samples if the encoders and decoder are optimal.
+
+During the training process of our proposed disentangled representation model, eight loss-function terms are required to be minimized, as stated in the following:
+
+1 Cycle-reconstruction loss: this loss represents the mean square error between the generated fake data $\widehat{x}$ in the second round and the associated real data $x$ :
+
+$$
+{L}_{1} = {\begin{Vmatrix}{\widehat{x}}_{1} - {x}_{1}\end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{\widehat{x}}_{2} - {x}_{2}\end{Vmatrix}}_{2}^{2},
+$$
+
+$$
+\text{ where }\left\{ \begin{array}{l} {\widehat{x}}_{1} = D\left( {{E}_{d}\left( {f}_{1}\right) ,{E}_{a}\left( {f}_{2}\right) }\right) \\ {\widehat{x}}_{2} = D\left( {{E}_{d}\left( {f}_{2}\right) ,{E}_{a}\left( {f}_{1}\right) }\right) \\ {f}_{1} = D\left( {{E}_{d}\left( {x}_{1}\right) ,{E}_{a}\left( {x}_{2}\right) }\right) \\ {f}_{2} = D\left( {{E}_{d}\left( {x}_{2}\right) ,{E}_{a}\left( {x}_{1}\right) }\right) \end{array}\right. \text{ . } \tag{3}
+$$
+
+2 Discriminator loss: this loss represents the binary cross-entropy of the discriminator output. If the data sample is real, the output is one; otherwise, the output is zero. The loss can be formulated as:
+
+$$
+{L}_{2} = - {C}_{f}\left( {x}_{1}\right) {\log }_{2}\left( {{C}_{f}\left( {x}_{1}\right) }\right) - {C}_{f}\left( {x}_{2}\right) {\log }_{2}\left( {{C}_{f}\left( {x}_{2}\right) }\right)
+$$
+
+$$
+- \left( {1 - {C}_{f}\left( {f}_{1}\right) }\right) {\log }_{2}\left( {1 - {C}_{f}\left( {f}_{1}\right) }\right)
+$$
+
+$$
+- \left( {1 - {C}_{f}\left( {f}_{2}\right) }\right) {\log }_{2}\left( {1 - {C}_{f}\left( {f}_{2}\right) }\right) \text{ . }
+$$
+
+(4)
+
+3 Adversarial training loss: this loss represents the quality of the fake data samples generated by the autoencoder. The autoencoder plays the role of the generative model in a GAN and is optimized adversarially. In this case, the autoencoder aims to produce higher-quality fake data samples that are able to bypass the discriminator. The loss is formulated as:
+
+$$
+{L}_{3} = - {C}_{f}\left( {f}_{1}\right) {\log }_{2}\left( {{C}_{f}\left( {f}_{1}\right) }\right) \tag{5}
+$$
+
+$$
+- {C}_{f}\left( {f}_{2}\right) {\log }_{2}\left( {{C}_{f}\left( {f}_{2}\right) }\right) \text{ . }
+$$
+
+4 Conditional-reconstruction loss I: if ${x}_{1}$ and ${x}_{2}$ share different domain features and the same attribute features, the fake example ${f}_{1}$ should be the same as or close to ${x}_{1}$ . The same applies to ${f}_{2}$ and ${x}_{2}$ . Therefore, this loss is used to optimize the model only if ${x}_{1}$ and ${x}_{2}$ share the same attribute features. The loss can be calculated as:
+
+$$
+{L}_{4} = {\begin{Vmatrix}{f}_{1} - {x}_{1}\end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{f}_{2} - {x}_{2}\end{Vmatrix}}_{2}^{2}. \tag{6}
+$$
+
+5 Conditional-reconstruction loss II: if ${x}_{1}$ and ${x}_{2}$ share the same domain features and diverse attribute features, the fake data sample ${f}_{2}$ should be the same as or close to ${x}_{1}$ . The same situation applies to ${f}_{1}$ and ${x}_{2}$ . Therefore, the loss is used to optimize the model only if ${x}_{1}$ and ${x}_{2}$ share the same domain features. The loss can be calculated as:
+
+$$
+{L}_{5} = {\begin{Vmatrix}{f}_{2} - {x}_{1}\end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{f}_{1} - {x}_{2}\end{Vmatrix}}_{2}^{2}. \tag{7}
+$$
+
+6 Cycle consistency loss: this loss represents the summation of the mean square error between the encoded features in the first round and the encoded features in the second round:
+
+$$
+{L}_{6} = {\begin{Vmatrix}{E}_{d}\left( {f}_{1}\right) - {E}_{d}\left( {x}_{1}\right) \end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{E}_{d}\left( {f}_{2}\right) - {E}_{d}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2}
+$$
+
+$$
++ {\begin{Vmatrix}{E}_{a}\left( {f}_{1}\right) - {E}_{a}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2} + {\begin{Vmatrix}{E}_{a}\left( {f}_{2}\right) - {E}_{a}\left( {x}_{1}\right) \end{Vmatrix}}_{2}^{2}.
+$$
+
+(8)
+
+7 Conditional pair loss I: if ${x}_{1}$ and ${x}_{2}$ share the same domain features and diverse attribute features, ${E}_{d}\left( {x}_{1}\right)$ and ${E}_{d}\left( {x}_{2}\right)$ should be close to each other. Additionally, ${E}_{a}\left( {x}_{1}\right)$ and ${E}_{a}\left( {x}_{2}\right)$ should be very different from each other. The loss can be represented as:
+
+$$
+{L}_{7} = {\begin{Vmatrix}{E}_{d}\left( {x}_{1}\right) - {E}_{d}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2} - {\begin{Vmatrix}{E}_{a}\left( {x}_{1}\right) - {E}_{a}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2}.
+$$
+
+(9)
+
+8 Conditional pair loss II: if ${x}_{1}$ and ${x}_{2}$ share the same attribute features and diverse domain features, ${E}_{a}\left( {x}_{1}\right)$ and ${E}_{a}\left( {x}_{2}\right)$ should be close to each other and the domain features ${E}_{d}\left( {x}_{1}\right)$ and ${E}_{d}\left( {x}_{2}\right)$ should be very different from each other. Thus, the loss can be represented as:
+
+$$
+{L}_{8} = {\begin{Vmatrix}{E}_{a}\left( {x}_{1}\right) - {E}_{a}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2} - {\begin{Vmatrix}{E}_{d}\left( {x}_{1}\right) - {E}_{d}\left( {x}_{2}\right) \end{Vmatrix}}_{2}^{2}.
+$$
+
+(10)
+
+§ 3.4 SURROGATE MODEL CONSTRUCTION
+
+Once the disentangled representation model is well-trained, we utilize the encoders ${E}_{d}$ and ${E}_{a}$ as the initial model to transfer domain knowledge to the surrogate model. The outputs of ${E}_{d}$ and ${E}_{a}$ are concatenated and fed to a new discriminator ${C}_{c}$ to learn the task of the target learning model. Figure 3 illustrates the structure of the surrogate model. Since the domain and attribute encoders are well-trained in the previous step, their parameters are fixed and only the parameters of the following discriminator ${C}_{c}$ are updated during the training of the surrogate model. In other words, the construction of the surrogate model can be viewed as a transfer-learning process where a classifier is built on the existing model.
+
+Figure 3: Network structure for surrogate model.
+
+The discriminator ${C}_{c}$ can have an arbitrary architecture, since the attacker cannot access the target learning model $F$ as stated in our threat model.
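The transfer-learning step can be sketched as follows; the frozen "encoders" here are toy slicing functions and the head is a plain logistic-regression classifier, all illustrative assumptions rather than the paper's networks:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy frozen encoders (assumptions): E_d keeps the first half of each
# sample, E_a the second half, standing in for the pre-trained domain
# and attribute encoders whose weights stay fixed.
E_d = lambda X: X[:, :3]
E_a = lambda X: X[:, 3:]

def train_discriminator(X, y, lr=0.5, iters=300):
    """Train only the classifier head C_c on the concatenated frozen
    (domain, attribute) features: the transfer-learning step of Sec. 3.4."""
    Z = np.hstack([E_d(X), E_a(X)])       # frozen encoder outputs
    w = np.zeros(Z.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Z @ w))
        w -= lr * Z.T @ (p - y) / len(y)  # logistic-regression gradient step
    return w

X = rng.normal(size=(40, 6))
y = (X[:, 0] + X[:, 1] > 0).astype(float)  # synthetic task label
w = train_discriminator(X, y)
Z = np.hstack([E_d(X), E_a(X)])
train_acc = np.mean(((Z @ w) > 0) == (y > 0.5))
```

Only `w` (the head) is ever updated; the encoder functions are called but never modified, mirroring the fixed-parameter design.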
+
+§ 3.5 GENERATION OF ADVERSARIAL PERTURBATION
+
+
+In our proposed method, the generation of the adversarial perturbation is based on gradient-based algorithms applied to the surrogate model. Two gradient-based algorithms are considered in our current work: the Fast Gradient Sign Method (FGSM) (Goodfellow, Shlens, and Szegedy 2014) and Projected Gradient Descent (PGD) (Madry et al. 2017). The loss for generating the adversarial perturbation is formulated to maximize the difference between the model prediction $\widehat{y}$ and the associated true label $y$ , as follows:
+
+$$
+{L}_{a} = - \operatorname{distance}\left( {\widehat{y},y}\right) , \tag{11}
+$$
+
+where $\widehat{y} = {F}_{s}\left( x\right)$ . Additionally, to enable a stealthy adversarial attack, the adversarial perturbation needs to be able to bypass the built-in detector that is designed based on domain-knowledge constraints. Considering that domain features characterize the inherent features associated with domain-knowledge constraints, the impact of the adversarial perturbation on the domain features needs to be minimized. To realize this goal, in our proposed method, the domain encoder ${E}_{d}$ is selected to be detached when executing back-propagation on the model ${F}_{s}$ for generating adversarial perturbation. By doing so, the gradient calculation is restricted to only follow the direction of extracted attribute features.
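The effect of detaching the domain encoder can be sketched in a few lines of NumPy. In this toy linear surrogate (an illustrative assumption, not the paper's model), the domain features are the first half of the input and the attribute features the second half, so detaching ${E}_{d}$ amounts to zeroing the gradient along the domain path before taking the FGSM sign step:

```python
import numpy as np

def generate_perturbation(x, w, y, eps):
    """FGSM on a toy linear surrogate with the domain encoder detached:
    the gradient of the loss w.r.t. x is taken only through the attribute
    path, so the domain features of x are left untouched."""
    half = len(x) // 2
    p = 1.0 / (1.0 + np.exp(-w @ x))  # surrogate prediction
    grad = (p - y) * w                # full input gradient
    grad[:half] = 0.0                 # detach E_d: block its gradient path
    return eps * np.sign(grad)

x = np.array([0.2, -0.4, 0.1, 0.5])
w = np.array([1.0, 0.5, -1.5, 2.0])
r = generate_perturbation(x, w, y=1, eps=0.01)
```

By construction the perturbation is zero on the domain half and of magnitude ε on the attribute half, which is exactly the "restrict the gradient to the attribute direction" behavior described above.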
+
+§ 4 PERFORMANCE EVALUATIONS
+
+In this section, we evaluate the performance of our proposed CPS-specific adversarial attack from the perspectives of efficiency in misleading the target learning model and stealthiness in effectively bypassing the built-in detector. We consider two CPS case studies for performance evaluations: one concerns detecting false data injection attacks in a power system, and the other concerns detecting vehicle state attacks in a transportation system.
+
+§ 4.1 CASE STUDY I
+
+In this case study, we consider a CPS scenario where a learning model $F$ is deployed to detect false data injection (Liu, Ning, and Reiter 2011) in the IEEE 39-Bus System, which has 10 generators and 46 power lines (Athay, Podmore, and Virmani 1979). Additionally, a built-in residual-based detector is considered, which is formulated as follows:
+
+$$
+{\begin{Vmatrix}z + a - H{s}^{\prime }\end{Vmatrix}}_{2} \leq \alpha . \tag{12}
+$$
+
+where $a$ denotes an attack vector, $z$ denotes the original benign measurement data, $H$ denotes a constant matrix characterizing the physical constraints of the power system, ${s}^{\prime }$ is the state estimate based on the measurements potentially compromised by the attack vector $a$ , and $\alpha$ is the threshold of the built-in detector. Using our proposed method, an adversarial attack is generated to sufficiently mislead the learning model $F$ while effectively bypassing the built-in detector formulated in Eq. (12).
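A minimal sketch of the residual check in Eq. (12), assuming a least-squares state estimator and illustrative numbers. The third case reproduces the classic observation from the cited false-data-injection work (Liu, Ning, and Reiter 2011): an attack vector lying in the column space of $H$ leaves the residual unchanged and passes the check:

```python
import numpy as np

def residual_detector(z, a, H, alpha):
    """Built-in detector of Eq. (12): re-estimate the state s' from the
    (possibly attacked) measurements z + a by least squares, then accept
    the sample only if the measurement residual is within alpha."""
    s_prime, *_ = np.linalg.lstsq(H, z + a, rcond=None)
    residual = np.linalg.norm(z + a - H @ s_prime)
    return residual <= alpha  # True: the measurement passes the check

H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
z = H @ np.array([2.0, 3.0])  # consistent benign measurements
passes_clean = residual_detector(z, np.zeros(3), H, alpha=0.1)
passes_attacked = residual_detector(z, np.array([1.0, 0.0, 0.0]), H, alpha=0.1)
passes_stealthy = residual_detector(z, H @ np.array([0.5, -0.5]), H, alpha=0.1)
```

The naive injection inflates the residual and is caught, while the structured one slips through, which is why bypassing this detector is the relevant stealthiness criterion here.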
+
+In our case study, the sensing dataset is collected from simulations with multiple load profiles and two topology profiles. We consider that the constraints represented by the topology profiles correspond to the domain features in our disentangled representation model. Additionally, the information represented by the load profiles corresponds to the attribute features. Each training data sample pair for training the disentangled representation model is randomly sampled from the dataset. If a pair shares the same load profile, conditional-reconstruction loss I and conditional pair loss II are applied. If a pair shares the same topology profile, conditional-reconstruction loss II and conditional pair loss I are applied. All the other four loss-function terms defined in Section 3.3 are always applied regardless of which pair is fetched for optimizing the model. In addition, 2000 data samples are collected, of which 900 are used for realizing the learning model $F$ for detecting the false data injection attack in the power system and 1100 are considered accessible to the attacker for developing and deploying the adversarial attack. Furthermore, in this case study, we consider that the target learning model $F$ takes the form of either a fully connected neural network (FCNN) or a recurrent neural network (RNN).
+
+For generating the adversarial perturbation, we first train an RNN-based disentangled model to extract the domain and attribute features. The encoders of this model are then utilized as a pre-trained model to fine-tune a surrogate model. The training data samples for the surrogate model include benign data samples and data manipulated via the false data injection attack. The domain encoder is selectively detached during the calculation of the gradient for perturbation generation. As a comparison, we also introduce general gradient-based adversarial attack methods as baselines. The success rates of our proposed CPS-specific adversarial attack on misleading the decision making, using the FGSM and PGD gradient-based algorithms, are shown in Figs. 4 and 5, respectively. As illustrated in the plots, our proposed method achieves success rates comparable with the baseline methods. Additionally, our proposed method shows better transferability when misleading the FCNN-based target learning model than when misleading the RNN-based one.
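The effect of detaching the domain encoder can be illustrated on a toy surrogate. The sketch below is not the paper's model: it uses a plain logistic surrogate and treats a fixed block of input coordinates as the "domain" path, but it shows the intended effect, namely that no perturbation is generated through the detached features:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy surrogate: logistic model on concatenated [domain | attribute] features,
# here taken as identity encoders over the raw input for illustration.
P = 6                      # input dimension (assumed)
domain_idx = np.arange(3)  # coordinates treated as domain features (assumed)
w = rng.normal(size=P)

def grad_loss(x, y):
    """Gradient of the logistic loss w.r.t. the input x."""
    p = 1.0 / (1.0 + np.exp(-w @ x))
    return (p - y) * w

def fgsm_domain_detached(x, y, eps):
    g = grad_loss(x, y)
    g[domain_idx] = 0.0  # "detach" the domain path: no gradient flows through it
    return x + eps * np.sign(g)

x = rng.normal(size=P)
x_adv = fgsm_domain_detached(x, y=1, eps=0.1)
print(np.allclose(x_adv[domain_idx], x[domain_idx]))  # domain features untouched
```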
+
+We continue to evaluate the capability of our proposed adversarial attack to bypass the built-in detector. The bypassing capacity $g$ is formulated as:
+
+$$
+g = \min \left( {\frac{1}{{\log }_{2}\left( {\left\| z + a - H{s}^{\prime } \right\|}_{2}\right) },1}\right) . \tag{13}
+$$
+
+The performance evaluation is shown in Fig. 6, from which it is clear that our method outperforms the baseline methods. Additionally, from Figs. 4 to 6, we can conclude that our proposed adversarial attack achieves a good trade-off between misleading efficiency and bypassing capability.
+
+### 4.2 Case Study II
+
+In this case study, we leverage the VeReMi dataset for performance evaluation, considering a scenario of detecting vehicle state attacks in a transportation system (van der Heijden, Lukaseder, and Kargl 2018). The dataset collects vehicle state messages from 37500 vehicles in the Luxembourg SUMO Traffic scenario (Codecá et al. 2017). Each vehicle state message contains the positions of the transmitter and receiver, the speed, and the elapsed time. The injected vehicle state attacks include constant value, constant offset, random value, random offset, and eventual stop (van der Heijden, Lukaseder, and Kargl 2018). In this transportation system, a deep learning model is deployed to detect the vehicle state attack. Additionally, the system has two types of built-in detectors: a maximum appearance distance (SAW) based detector and an estimated reception range (ART) based detector. The SAW-based detector detects the vehicle state attack by measuring the distance moved between samples and identifying unreasonable moving distances. The ART-based detector detects the attack by estimating the communication distance between the transmitter and receiver and identifying unreachable vehicles.
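A minimal sketch of the moved-distance idea behind the SAW-based detector follows. The threshold value and the exact rule are assumptions for illustration; the original detector in van der Heijden, Lukaseder, and Kargl (2018) may differ in detail:

```python
import numpy as np

def saw_flag(positions, times, v_max=55.0):
    """Flag a track if any between-sample displacement implies a speed above
    v_max (m/s). A stand-in for the moved-distance check; threshold assumed."""
    d = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    dt = np.diff(times)
    return bool(np.any(d / dt > v_max))

t = np.array([0.0, 1.0, 2.0])
benign = np.array([[0.0, 0.0], [20.0, 0.0], [40.0, 0.0]])     # steady 20 m/s
teleport = np.array([[0.0, 0.0], [20.0, 0.0], [900.0, 0.0]])  # 880 m in 1 s
print(saw_flag(benign, t), saw_flag(teleport, t))
```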
+
+In this scenario, our proposed method first trains an RNN-based disentangled model to extract domain and attribute features. Since no prior domain knowledge is assumed in this scenario, we consider the common features shared by all the data as the domain features and the remaining features as the attribute features. In this case, all the loss-function terms except conditional reconstruction loss II and conditional pair loss II are implemented. Secondly, we implement an RNN-based surrogate model. We evaluate the performance of our method by considering a target learning model based on either FCNN or RNN, as shown in Fig. 7. From the plots, it is clear that our method outperforms the baseline, the general FGSM method, in misleading both the FCNN-based and RNN-based target learning models. We also evaluate the capability of our method to bypass the SAW-based and ART-based detectors. The bypassing capacity is formulated as the percentage of adversarial perturbations that the built-in detectors fail to detect. The performance evaluation is shown in Fig. 8, from which we can observe that our method outperforms the baseline method, especially for the ART-based detector. Furthermore, in this case study, we also include a method of detaching the attribute encoder for evaluation purposes. Since our method detaches the domain encoder when calculating the adversarial perturbation, the attribute-encoder-detached method is considered as a comparison to illustrate the importance of selectively detaching the domain encoder in our design. The performance comparisons between our method and the attribute-encoder-detached method in Figs. 7 and 8 illustrate the effectiveness of our proposed method in interpreting the data features by extracting domain and attribute features and selectively detaching features when generating adversarial perturbations.
+
+
+Figure 4: Adversarial attacks generated based on FGSM. A: general FGSM; B: our method.
+
+
+Figure 5: Adversarial attacks generated based on PGD. A: general PGD; B: our method.
+
+## 5 Conclusions
+
+In this paper, we propose a novel CPS-specific adversarial attack method that is able to compromise the learning model of a data-driven CPS in a practical and stealthy manner. Our work presents three main contributions. Firstly, our method enables an unsupervised disentangled representation model for learning and interpreting the data features by disentangling them into domain features and attribute features. Using the obtained explainable feature maps, it is feasible to produce practical and stealthy adversarial perturbations. Secondly, our work provides a novel approach to synthesize adversarial perturbations in which explainable features are selectively utilized, leading to a more practical adversarial attack. Thirdly, our adversarial attack method does not require any explicit integration of domain-knowledge constraints in the attack model formulation, resulting in more general application scenarios, especially when the attacker has limited knowledge of the targeted CPSs or the domain-knowledge constraints cannot be represented in a mathematically differentiable form. As illustrated in the simulation results, our proposed method is able to sufficiently mislead the learning model in the target CPSs while effectively bypassing the built-in detector that is normally designed based on the physical and mechanical constraints of the CPSs. In our ongoing work, we are evaluating our proposed method in other CPS domains and exploring a more general form of the proposed adversarial attack suitable for various CPS applications.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EQjwT2-Vaba/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EQjwT2-Vaba/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..d347b8405a43bc06c460980fbed5b6b0565e6c46
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EQjwT2-Vaba/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,625 @@
+# Traversing the Local Polytopes of ReLU Neural Networks: A Unified Approach for Network Verification
+
+Anonymous Authors
+
+## Abstract
+
+Although neural networks (NNs) with ReLU activation functions have found success in a wide range of applications, their adoption in risk-sensitive settings has been limited by concerns about robustness and interpretability. Previous works on examining robustness and improving interpretability partially exploited the piecewise linear function form of ReLU NNs. In this paper, we explore the unique topological structure that ReLU NNs create in the input space, identifying the adjacency among the partitioned local polytopes and developing a traversing algorithm based on this adjacency. Our polytope traversing algorithm can be adapted to verify a wide range of network properties related to robustness and interpretability, providing a unified approach to examine the network behavior. As the traversing algorithm explicitly visits all local polytopes, it returns a clear and full picture of the network behavior within the traversed region. The time and space complexity of the traversing algorithm are determined by the number of the ReLU NN's partitioning hyperplanes passing through the traversed region.
+
+## 1 Introduction & Related Work
+
+Neural networks with rectified linear unit activation functions (ReLU NNs) are arguably the most popular type of neural networks in deep learning. This type of network enjoys many appealing properties including better performance than NNs with sigmoid activation (Glorot, Bordes, and Bengio 2011), universal approximation ability (Arora et al. 2018; Lu et al. 2017; Montufar et al. 2014; Schmidt-Hieber 2020), and fast training speed via scalable algorithms such as stochastic gradient descent (SGD) and its variants (Zou et al. 2020).
+
+Despite their strong predictive power, ReLU NNs have seen limited adoption in risk-sensitive settings (Bunel et al. 2018). These settings require the model to make robust predictions against potential adversarial noise in the input (Athalye et al. 2018; Carlini and Wagner 2017; Goodfellow, Shlens, and Szegedy 2014; Szegedy et al. 2014). The alignment between model behavior and human intuition is also desirable (Liu et al. 2019): prior knowledge such as monotonicity may be incorporated into model design and training (Daniels and Velikova 2010; Gupta et al. 2019; Liu et al. 2020; Sharma and Wehrheim 2020); users and auditors of the model may require a certain degree of explanations of the model predictions (Gopinath et al. 2019; Chu et al. 2018).
+
+The requirements in risk-sensitive settings have motivated a great amount of research on verifying certain properties of ReLU NNs. These works often exploit the piecewise linear function form of ReLU NNs. In Bastani et al. (2016), the robustness of a network is verified in a very small input region via linear programming (LP). To consider the nonlinearity of ReLU activation functions, Ehlers (2017); Katz et al. (2017); Pulina and Tacchella (2010, 2012) formulated the robustness verification problem as a satisfiability modulo theories (SMT) problem. A more popular way to model the ReLU nonlinearity is to introduce a binary variable representing the on-off pattern of the ReLU neurons. Property verification can then be solved using mixed-integer programming (MIP) (Anderson et al. 2020; Fischetti and Jo 2017; Liu et al. 2020; Tjeng, Xiao, and Tedrake 2018; Weng et al. 2018).
+
+The piecewise linear functional form of ReLU NNs also creates distinct topological structures in the input space. Previous studies have shown that a ReLU NN partitions the input space into convex polytopes and has one linear model associated with each polytope (Montufar et al. 2014; Serra, Tjandraatmadja, and Ramalingam 2018; Sudjianto et al. 2020). Each polytope can be coded by a binary activation code, which reflects the on-off pattern of the ReLU neurons. The number of local polytopes is often used as a measure of the model's expressivity (Hanin and Rolnick 2019; Lu et al. 2017). Building upon this framework, multiple studies (Sudjianto et al. 2020; Yang, Zhang, and Sudjianto 2020; Zhao et al. 2021) tried to explain the behavior of ReLU NNs and to improve their interpretability. They viewed a ReLU NN as a collection of linear models. However, the relationship between the local polytopes and their linear models has not been fully investigated.
+
+In this paper, we explore the topological relationship among the local polytopes created by ReLU NNs. We propose algorithms to identify the adjacency among these polytopes, based on which we develop traversing algorithms to visit all polytopes within a bounded region of the input space. Our paper has the following major contributions:
+
+1. The polytope traversing algorithm provides a unified framework to examine the network behavior. Since each polytope contains a linear model whose properties are easy to verify, the full verification on a bounded domain is achieved after all the covered polytopes are visited and verified. We provide theoretical guarantees on the thoroughness of the traversing algorithm.
+
+---
+
+Copyright (c) 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+2. Property verification based on the polytope traversing algorithm can be easily customized. Identifying the adjacency among the polytopes is formulated as LP. Within each local polytope, the user has the freedom to choose the solver most suitable for the verification sub-problem. We demonstrate that many common applications can be formulated as convex problems within each polytope.
+
+3. Because the polytope traversing algorithm explicitly visits all the local polytopes, it returns a full picture of the network behavior within the traversed region and improves interpretability.
+
+Although we focus on ReLU NNs with fully connected layers throughout this paper, our polytope traversing algorithm can be naturally extended to other piecewise linear networks, such as those containing convolutional and max-pooling layers.
+
+The rest of this paper is organized as follows: Section 2 reviews how polytopes are created by ReLU NNs. Section 3 introduces two related concepts: the boundaries of a polytope and the adjacency among the polytopes. Our polytope traversing algorithm is described in Section 4. Section 5 demonstrates several cases of adapting the traversing algorithm for network property verification. The paper is concluded in Section 6.
+
+## 2 The Local Polytopes in ReLU NNs
+
+### 2.1 The case of one hidden layer
+
+A ReLU NN partitions the input space ${\mathbb{R}}^{P}$ into several polytopes and forms a linear model within each polytope. To see this, we first consider a simple NN with one hidden layer of $M$ neurons. It takes an input $\mathbf{x} \in {\mathbb{R}}^{P}$ and outputs $\mathbf{o} \in {\mathbb{R}}^{Q}$ by calculating:
+
+$$
+\mathbf{o} = {\mathbf{W}}^{o}\mathbf{h} + {\mathbf{b}}^{o} = {\mathbf{W}}^{o}\left( {\sigma \left( {\mathbf{W}\mathbf{x} + \mathbf{b}}\right) }\right) + {\mathbf{b}}^{o}
+$$
+
+$$
+\text{where } \sigma {\left( \mathbf{x}\right) }_{m} = \left\{ \begin{array}{ll} 0, & {\mathbf{x}}_{m} < 0 \\ {\mathbf{x}}_{m}, & {\mathbf{x}}_{m} \geq 0 \end{array}\right. \text{.} \tag{1}
+$$
+
+For problems with a binary or categorical target variable (i.e., binary or multi-class classification), a sigmoid or softmax layer, respectively, is added after $\mathbf{o}$ to convert the NN outputs to proper probabilistic predictions.
+
+The ReLU activation function $\sigma \left( \cdot \right)$ inserts non-linearity into the model by checking a set of linear inequalities: ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \geq$ $0, m = 1,2,\ldots , M$ , where ${\mathbf{w}}_{m}^{T}$ is the $m$ th row of matrix $\mathbf{W}$ and ${b}_{m}$ is the $m$ th element of $\mathbf{b}$ . Each neuron in the hidden layer creates a partitioning hyperplane in the input space with the linear equation ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} = 0$ . The areas on two sides of the hyperplane are two halfspaces. The entire input space is, therefore, partitioned by these $M$ hyperplanes. We define a local polytope as a set containing all points that fall on the same side of each and every hyperplane. The polytope encoding function (2) uses an element-wise indicator function $\mathbb{1}\left( \cdot \right)$ to create a unique binary code $\mathbf{c}$ for each polytope. Since the $m$ th neuron is called "ON" for some $\mathbf{x}$ if ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \geq 0$ , the code $\mathbf{c}$ also represents the on-off pattern of the neurons. Using the results of this encoding function, we can express each polytope as an intersection of $M$ halfspaces as in (3), where the binary code $c$ controls the directions of the inequalities.
+
+$$
+C\left( \mathbf{x}\right) = \mathbb{1}\left( {\mathbf{W}\mathbf{x} + \mathbf{b} \geq 0}\right) . \tag{2}
+$$
+
+$$
+{\mathcal{R}}_{\mathbf{c}} = \left\{ {\mathbf{x} \mid {\left( -1\right) }^{{c}_{m}}\left( {{\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m}}\right) \leq 0,\forall m = 1,\ldots , M}\right\} . \tag{3}
+$$
+
+Figure 1.(b) shows an example of a ReLU NN trained on a two-dimensional synthetic dataset (plotted in Figure 1.(a)). The bounded input space is ${\left\lbrack -1,1\right\rbrack }^{2}$ and the target variable is binary. The network has one hidden layer of 20 neurons. The partitioning hyperplanes associated with these neurons are plotted as the blue dashed lines. They form in total 91 local polytopes within the bounded input space.
+
+For a given $\mathbf{x}$ , if ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \geq 0$ , the ReLU neuron turns on and passes through the value. Otherwise, the neuron is off and suppresses the value to zero. Therefore, if we know the $m$ th neuron is off, we can mask the corresponding ${\mathbf{w}}_{m}$ and ${b}_{m}$ by zeros and create ${\widetilde{\mathbf{W}}}_{\mathbf{c}}$ and ${\widetilde{\mathbf{b}}}_{\mathbf{c}}$ that satisfy (5). The non-linear operation, therefore, can be replaced by a locally linear operation after zero-masking. Because each local polytope ${\mathcal{R}}_{\mathbf{c}}$ has a unique neuron activation pattern encoded by $\mathbf{c}$ , the zero-masking process in (4) is also unique for each polytope. Here, $\mathbf{1}$ is a vector of ones of length $P$ and $\otimes$ denotes the element-wise product.
+
+$$
+{\widetilde{\mathbf{W}}}_{\mathbf{c}} = \mathbf{W} \otimes \left( {\mathbf{c}{\mathbf{1}}^{T}}\right) ,{\widetilde{\mathbf{b}}}_{\mathbf{c}} = \mathbf{b} \otimes \mathbf{c}, \tag{4}
+$$
+
+$$
+\sigma \left( {\mathbf{W}\mathbf{x} + \mathbf{b}}\right) = {\widetilde{\mathbf{W}}}_{\mathbf{c}}\mathbf{x} + {\widetilde{\mathbf{b}}}_{\mathbf{c}},\;\forall \mathbf{x} \in {\mathcal{R}}_{\mathbf{c}}. \tag{5}
+$$
+
+Within each polytope, as the non-linearity is taken out by the zero-masking process, the input $\mathbf{x}$ and output $\mathbf{o}$ have a linear relationship:
+
+$$
+\mathbf{o} = {\mathbf{W}}^{o}\left( {\sigma \left( {\mathbf{W}\mathbf{x} + \mathbf{b}}\right) }\right) + {\mathbf{b}}^{o} = {\widehat{\mathbf{W}}}_{\mathbf{c}}^{o}\mathbf{x} + {\widehat{\mathbf{b}}}_{\mathbf{c}}^{o},\forall \mathbf{x} \in {\mathcal{R}}_{\mathbf{c}}, \tag{6}
+$$
+
+$$
+\text{where}{\widehat{\mathbf{W}}}_{\mathbf{c}}^{o} = {\mathbf{W}}^{o}{\widetilde{\mathbf{W}}}_{\mathbf{c}},{\widehat{\mathbf{b}}}_{\mathbf{c}}^{o} = {\mathbf{W}}^{o}{\widetilde{\mathbf{b}}}_{\mathbf{c}} + {\mathbf{b}}^{o}
+$$
+
+The linear model associated with polytope ${\mathcal{R}}_{\mathbf{c}}$ has the weight matrix ${\widehat{\mathbf{W}}}_{\mathbf{c}}^{o}$ and the bias vector ${\widehat{\mathbf{b}}}_{\mathbf{c}}^{o}$ . The ReLU NN is now represented by a collection of linear models, each defined on a local polytope ${\mathcal{R}}_{\mathbf{c}}$ .
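The zero-masking construction of Eqs. (2)-(6) is easy to verify numerically. A minimal sketch with arbitrary illustrative layer sizes:

```python
import numpy as np

rng = np.random.default_rng(2)
P, M, Q = 4, 8, 2  # illustrative sizes
W, b = rng.normal(size=(M, P)), rng.normal(size=M)
Wo, bo = rng.normal(size=(Q, M)), rng.normal(size=Q)

x = rng.normal(size=P)
c = (W @ x + b >= 0).astype(float)  # polytope code, Eq. (2)

# Zero-masking, Eq. (4): rows belonging to "off" neurons are suppressed.
W_t = W * c[:, None]
b_t = b * c

# The masked affine map reproduces the ReLU layer on this polytope, Eq. (5),
# and composing with the output layer gives the local linear model, Eq. (6).
relu = np.maximum(W @ x + b, 0.0)
print(np.allclose(relu, W_t @ x + b_t))
print(np.allclose(Wo @ relu + bo, (Wo @ W_t) @ x + (Wo @ b_t + bo)))
```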
+
+In Figure 1.(b), we represent the linear model in each local polytope by a red solid line indicating ${\left( {\widehat{\mathbf{w}}}_{\mathbf{c}}^{o}\right) }^{T}\mathbf{x} + {\widehat{b}}_{\mathbf{c}}^{o} = 0$ . In this binary response case, the two sides of this line have opposite class predictions. We only plot the line if it passes through its corresponding polytope. For the other polytopes, the entire polytope falls on one side of its corresponding class-separating line and the predicted class is the same within the whole polytope. The red lines together form the decision boundary of the ReLU NN and are continuous when passing from one polytope to another. This is a direct result of the ReLU NN being a continuous model.
+
+### 2.2 The case of multiple layers
+
+We can generalize the results to ReLU NNs with multiple hidden layers. A ReLU NN with $L$ hidden layers hierarchically partitions the input space and is locally linear in each and every level- $L$ polytope. Each level- $L$ polytope ${\mathcal{R}}^{L}$ has a unique binary code ${c}^{1}{c}^{2}\ldots {c}^{L}$ representing the activation pattern of the neurons in all $L$ hidden layers. The corresponding partitioning hyperplanes of each level, ${\widehat{\mathbf{W}}}^{l}\mathbf{x} + {\widehat{\mathbf{b}}}^{l} = 0, l = 1,2,\ldots , L$ , can be calculated recursively level by level, using the zero masking procedure:
+
+$$
+{\widehat{\mathbf{W}}}^{1} = {\mathbf{W}}^{1},{\widehat{\mathbf{b}}}^{1} = {\mathbf{b}}^{1} \tag{7}
+$$
+
+$$
+{\widetilde{\mathbf{W}}}^{l} = {\widehat{\mathbf{W}}}^{l} \otimes \left( {{\mathbf{c}}^{l}{\mathbf{1}}^{T}}\right) ,{\widetilde{\mathbf{b}}}^{l} = {\widehat{\mathbf{b}}}^{l} \otimes {\mathbf{c}}^{l} \tag{8}
+$$
+
+$$
+{\widehat{\mathbf{W}}}^{l + 1} = {\mathbf{W}}^{l + 1}{\widetilde{\mathbf{W}}}^{l},{\widehat{\mathbf{b}}}^{l + 1} = {\mathbf{W}}^{l + 1}{\widetilde{\mathbf{b}}}^{l} + {\mathbf{b}}^{l + 1}. \tag{9}
+$$
+
+We emphasize that ${\widetilde{\mathbf{W}}}^{l},{\widetilde{\mathbf{b}}}^{l},{\widehat{\mathbf{W}}}^{l + 1}$ , and ${\widehat{\mathbf{b}}}^{l + 1}$ depend on all polytope codes up to level $l$ : ${\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l}$ .
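The recursion in (7)-(9) can be checked numerically: propagating the level-wise hyperplanes and zero-masking them with each layer's activation code must reproduce the network's ordinary forward pass. A sketch with illustrative layer sizes:

```python
import numpy as np

rng = np.random.default_rng(3)
sizes = [3, 6, 4]  # input dimension and two hidden widths (illustrative)
Ws = [rng.normal(size=(m, p)) for p, m in zip(sizes[:-1], sizes[1:])]
bs = [rng.normal(size=m) for m in sizes[1:]]

x = rng.normal(size=sizes[0])

# Recursion (7)-(9): propagate the level-wise hyperplanes (W_hat, b_hat)
# and zero-mask them with the activation code of each layer.
W_hat, b_hat = Ws[0], bs[0]
h = x
for l in range(len(Ws)):
    c = (W_hat @ x + b_hat >= 0).astype(float)  # level-l activation code
    W_t, b_t = W_hat * c[:, None], b_hat * c    # Eq. (8)
    h = np.maximum(Ws[l] @ h + bs[l], 0.0)      # ordinary forward pass, for checking
    if l + 1 < len(Ws):
        W_hat = Ws[l + 1] @ W_t                 # Eq. (9)
        b_hat = Ws[l + 1] @ b_t + bs[l + 1]

# After the last layer, the masked affine map reproduces the stacked activations.
print(np.allclose(h, W_t @ x + b_t))
```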
+
+
+Figure 1: Examples of trained ReLU NNs and their local polytopes. (a) The grid-like training data with a binary target variable. (b) A trained ReLU NN with one hidden layer of 20 neurons. The heatmap shows the predicted probability of a sample belonging to class 1. The blue dashed lines are the partitioning hyperplanes associated with the ReLU neurons, which form 91 local polytopes in total. The red solid lines represent the linear model within each polytope where class separation occurs. (c) A trained ReLU NN with two hidden layers of 10 and 5 neurons, respectively. The blue dashed lines are the partitioning hyperplanes associated with the first 10 ReLU neurons, forming 20 level-1 polytopes. The orange dashed lines are the partitioning hyperplanes associated with the 5 second-layer ReLU neurons within each level-1 polytope. There are in total 41 (level-2) local polytopes. The red solid lines represent the linear model within each level-2 polytope where class separation occurs.
+
+At each level $l$ , the encoding function ${C}^{l}\left( \cdot \right)$ and the polytope ${\mathcal{R}}^{l}$ expressed as an intersection of $\mathop{\sum }\limits_{{t = 1}}^{l}{M}_{t}$ halfspaces can be written recursively as:
+
+$$
+{C}^{1}\left( \mathbf{x}\right) = \mathbb{1}\left( {{\mathbf{W}}^{1}\mathbf{x} + {\mathbf{b}}^{1} \geq 0}\right) \tag{10}
+$$
+
+$$
+{\mathcal{R}}^{1} = \left\{ {\mathbf{x} \mid {\left( -1\right) }^{{c}_{m}}\left( {{\left( {\mathbf{w}}^{1}\right) }_{m}^{T}\mathbf{x} + {\left( {b}^{1}\right) }_{m}}\right) \leq 0,}\right. \tag{11}
+$$
+
+$$
+\left. {\forall m = 1,2,\ldots ,{M}_{1}}\right\}
+$$
+
+$$
+{C}^{l + 1}\left( \mathbf{x}\right) = \mathbb{1}\left( {{\widehat{\mathbf{W}}}^{l + 1}\mathbf{x} + {\widehat{\mathbf{b}}}^{l + 1} \geq 0}\right) ,\forall \mathbf{x} \in {\mathcal{R}}^{l} \tag{12}
+$$
+
+$$
+{\mathcal{R}}^{l + 1} = \left\{ {\mathbf{x} \mid {\left( -1\right) }^{{c}_{m}}\left( {{\left( {\widehat{\mathbf{w}}}^{l + 1}\right) }_{m}^{T}\mathbf{x} + {\left( {\widehat{b}}^{l + 1}\right) }_{m}}\right) \leq 0,}\right. \tag{13}
+$$
+
+$$
+\left. {\forall m = 1,2,\ldots ,{M}_{l + 1}}\right\} \cap {\mathcal{R}}^{l}\text{.}
+$$
+
+Finally, the linear model in a level- $L$ polytope is:
+
+$$
+\mathbf{o} = {\widehat{\mathbf{W}}}^{o}\mathbf{x} + {\widehat{\mathbf{b}}}^{o},\forall \mathbf{x} \in {\mathcal{R}}^{L}, \tag{14}
+$$
+
+$$
+\text{where}{\widehat{\mathbf{W}}}^{o} = {\mathbf{W}}^{o}{\widetilde{\mathbf{W}}}^{L},{\widehat{\mathbf{b}}}^{o} = {\mathbf{W}}^{o}{\widetilde{\mathbf{b}}}^{L} + {\mathbf{b}}^{o}\text{.}
+$$
+
+Figure 1.(c) shows an example of a ReLU NN with two hidden layers of sizes 10 and 5, respectively. The partitioning hyperplanes associated with the first 10 neurons are plotted as the blue dashed lines. They form 20 level-1 polytopes within the bounded input space. Within each level-1 polytope, the hyperplanes associated with the 5 second-layer neurons further partition the polytope. In many cases, some of the 5 hyperplanes are outside the level-1 polytope and, therefore, do not create a new sub-partition. The hyperplanes that do create new partitions are plotted as the orange dashed lines. The orange lines are only straight within a level-1 polytope but are continuous when passing from one polytope to another, which is also a result of the ReLU NN being a continuous model. In total, this ReLU NN creates 41 (level-2) local polytopes. As in Figure 1.(b), the linear model within each level-2 polytope is represented as a red solid line if class separation occurs within the polytope.
+
+## 3 Polytope Boundaries and Adjacency
+
+Beyond viewing ReLU NNs as a collection of linear models defined on local polytopes, we explore the topological relationship among these polytopes. A key concept is the boundaries of each polytope. As shown in (13), each level-$l$ polytope ${\mathcal{R}}_{\mathbf{c}}$ with corresponding binary code $\mathbf{c} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l}$ is an intersection of $\mathop{\sum }\limits_{{t = 1}}^{l}{M}_{t}$ halfspaces induced by a set of inequality constraints. Two situations can arise among these inequalities. First, an arbitrary $\mathbf{c}$ may lead to conflicting inequalities and make ${\mathcal{R}}_{\mathbf{c}}$ an empty set. This situation can be common when the number of neurons is much larger than the dimension of the input space. Second, there can be redundant inequalities, meaning that removing them does not affect the set ${\mathcal{R}}_{\mathbf{c}}$ . We now show that the non-redundant inequalities are closely related to the boundaries of a polytope.
+
+Definition 3.1 Let $\mathcal{R}$ contain all $\mathbf{x} \in {\mathbb{R}}^{P}$ that satisfy $M$ linear inequalities: $\mathcal{R} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0,{g}_{2}\left( \mathbf{x}\right) \leq 0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq 0}\right\}$ . Assume that $\mathcal{R} \neq \varnothing$ . Let $\widetilde{\mathcal{R}}$ contain all $\mathbf{x}$ ’s that satisfy the remaining $M - 1$ linear inequalities: $\widetilde{\mathcal{R}} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0,\ldots ,{g}_{m - 1}\left( \mathbf{x}\right) \leq 0,{g}_{m + 1}\left( \mathbf{x}\right) \leq }\right.$ $\left. {0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq 0}\right\}$ . Then the inequality ${g}_{m}\left( \mathbf{x}\right) \leq 0$ is a redundant inequality with respect to (w.r.t.) $\mathcal{R}$ if $\mathcal{R} = \widetilde{\mathcal{R}}$ .
+
+With the redundant inequality defined above, the following lemma provides an algorithm to identify them. The proof of this lemma is in the Appendix.
+
+Lemma 3.1 Given a set $\mathcal{R} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq }\right.$ $0\} \neq \varnothing$ , then ${g}_{m}\left( \mathbf{x}\right)$ is a redundant inequality if the new set formed by flipping this inequality is empty: $\widehat{\mathcal{R}} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq }\right.$ $\left. {0,\ldots ,{g}_{m}\left( \mathbf{x}\right) \geq 0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq 0}\right\} = \varnothing$ .
+
+We can now define the boundaries of a polytope formed by a set of linear inequalities using a procedure similar to that in Lemma 3.1. The concept of polytope boundaries also leads to the definition of adjacency. Intuitively, we can move from one polytope to its adjacent polytope by crossing a boundary.
+
+Definition 3.2 Given a non-empty set formed by $M$ linear inequalities: $\mathcal{R} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq 0}\right\} \neq \varnothing$ , then the hyperplane ${g}_{m}\left( \mathbf{x}\right) = 0$ is a boundary of $\mathcal{R}$ if the new set formed by flipping the corresponding inequality is non-empty: $\widehat{\mathcal{R}} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0,\ldots ,{g}_{m}\left( \mathbf{x}\right) \geq 0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq 0}\right\} \neq \varnothing .$ Polytope $\widehat{\mathcal{R}}$ is called one-adjacent to $\mathcal{R}$ .
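Definition 3.2 suggests a direct procedure for finding boundaries: flip one inequality at a time and test whether the resulting set is non-empty. In the sketch below, the feasibility test is a crude dense-sampling check over a bounded box, standing in for the exact phase-I LP; the unit-square constraint set with one redundant inequality is our own example:

```python
import numpy as np

# Inequalities g_m(x) = a_m . x + d_m <= 0 defining the unit square in 2D,
# plus one redundant constraint (x0 <= 2).
A = np.array([[-1.0, 0.0], [1.0, 0.0], [0.0, -1.0], [0.0, 1.0], [1.0, 0.0]])
d = np.array([0.0, -1.0, 0.0, -1.0, -2.0])  # x0>=0, x0<=1, x1>=0, x1<=1, x0<=2

# Crude feasibility check: dense sampling over a bounded box (stands in for
# the phase-I LP; exact for this illustrative example).
ax = np.linspace(-3, 3, 301)
grid = np.stack(np.meshgrid(ax, ax), -1).reshape(-1, 2)

def is_boundary(m):
    """g_m = 0 is a boundary iff flipping g_m <= 0 to >= 0 leaves the set non-empty."""
    signs = np.ones(len(A))
    signs[m] = -1.0
    vals = (grid @ A.T + d) * signs
    return bool(np.any(np.all(vals <= 0.0, axis=1)))

print([is_boundary(m) for m in range(5)])  # the last constraint is redundant
```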
+
+Since the directions of each polytope's linear inequalities are reflected by its binary code, two one-adjacent polytopes must have codes that differ in exactly one bit. Figure 2.(a) demonstrates the adjacency among the local polytopes. The ReLU NN is the same as in Figure 1.(b). Using the procedure in Definition 3.2, 4 out of the 20 partitioning hyperplanes are identified as the boundaries of polytope No. 0 and marked in red. The 4 one-adjacent neighbors of polytope No. 0 are No. 1, 2, 3, and 4; each can be reached by crossing one boundary.
+
+As we have shown in Section 2.2, ReLU NNs create polytopes level by level. We follow the same hierarchy to define polytope adjacency. Assume two non-empty level-$l$ polytopes, $\mathcal{R}$ and $\widehat{\mathcal{R}}$ , are inside the same level-$(l-1)$ polytope, which means their corresponding codes $\mathbf{c} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l}$ and $\widehat{\mathbf{c}} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\widehat{\mathbf{c}}}^{l}$ differ only at level $l$ . We say that polytope $\widehat{\mathcal{R}}$ is a level-$l$ one-adjacent neighbor of $\mathcal{R}$ if ${\widehat{\mathbf{c}}}^{l}$ and ${\mathbf{c}}^{l}$ differ in only one bit.
+
+The condition that $\mathbf{c} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l}$ and $\widehat{\mathbf{c}} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\widehat{\mathbf{c}}}^{l}$ differ only at level $l$ is important. In this way, the two linear inequalities associated with each pair of bits in $\mathbf{c}$ and $\widehat{\mathbf{c}}$ have the same coefficients, and the difference between ${\mathbf{c}}^{l}$ and ${\widehat{\mathbf{c}}}^{l}$ only changes the directions of the linear inequalities. On the other hand, if the two codes differ at a level ${l}^{\prime } < l$ , then according to the recursive calculation in (8) and (9), the codes starting from level ${l}^{\prime } + 1$ will correspond to linear inequalities with different coefficients, making our Definition 3.2 of adjacency inapplicable.
+
+Figure 2.(b) demonstrates the hierarchical adjacency among the local polytopes. The ReLU NN is the same as in Figure 1.(c). Level-1 polytopes $\left( {1, \cdot }\right)$ and $\left( {2, \cdot }\right)$ are both (level-1) one-adjacent to $\left( {0, \cdot }\right)$ . Within the level-1 polytope $\left( {0, \cdot }\right)$ , level-2 polytopes (0,0) and (0,1) are (level-2) one-adjacent to each other. Similarly, we can identify the level-2 adjacency of the other two pairs, $\left( {1,0}\right) - \left( {1,1}\right)$ and $\left( {2,0}\right) - \left( {2,1}\right)$ . Note that in the plot, even though one can move from polytope (2,1) to (0,1) by crossing one partitioning hyperplane, we do not define these two polytopes as adjacent, as they lie in two different level-1 polytopes.
+
+## 4 Polytope Traversing
+
+### 4.1 The case of one hidden layer
+
+The adjacency defined in the previous section provides an order in which to traverse the local polytopes: starting from an initial polytope $\mathcal{R}$ , we visit all of its one-adjacent neighbors, then all of the neighbors' neighbors, and so on.
+
+This algorithm can be viewed as breadth-first search (BFS) on a polytope graph. To create this graph, we turn each polytope created by the ReLU NN into a node. An edge is added between each pair of polytopes that are one-adjacent to each other. The BFS algorithm uses a queue to keep track of the visited polytopes. At the beginning of traversing, the initial polytope is added to an empty queue and marked as visited. In each iteration, we pop the first polytope from the queue and identify all of its one-adjacent neighbors. Among these identified polytopes, we add those that have not been visited to the back of the queue and mark them as visited. The iteration stops when the queue is empty.
+
+The key component of the polytope traversing algorithm is identifying a polytope’s one-adjacent neighbors. For a polytope ${\mathcal{R}}_{\mathbf{c}}$ coded by $\mathbf{c}$ of $M$ bits, there are at most $M$ one-adjacent neighbors, with codes corresponding to flipping one of the bits in $\mathbf{c}$ . Each valid one-adjacent neighbor must be non-empty and reachable by crossing a boundary. Therefore, we can check each linear inequality in (3) and determine whether it is a boundary or redundant. Techniques for identifying redundant inequalities are summarized in Telgen (1983). By flipping the bits corresponding to the identified boundaries, we obtain the codes of the one-adjacent polytopes.
+
+Equivalently, we can identify the one-adjacent neighbors by going through all $M$ candidate codes and selecting those corresponding to non-empty sets. Checking the feasibility of a region constrained by a set of linear inequalities is often referred to as the "Phase-I problem" of LP and can be solved efficiently by modern LP solvers. During BFS iterations, we can hash the checked codes to avoid checking them repeatedly. The BFS-based polytope traversing algorithm is summarized in Algorithm 1. We now state the correctness of this algorithm, with its proof in the Appendix.
+
+Theorem 4.1 Given a ReLU NN with one hidden layer of $M$ neurons as specified in (1), Algorithm 1 covers all non-empty local polytopes created by the neural network. That is, for all $\mathbf{x} \in {\mathbb{R}}^{P}$ , there exists one ${\mathcal{R}}_{\mathbf{c}}$ as defined in (3) such that $\mathbf{x} \in {\mathcal{R}}_{\mathbf{c}}$ and $\mathbf{c} \in {\mathcal{S}}_{R}$ , where ${\mathcal{S}}_{R}$ is the result returned by Algorithm 1.
+
+Algorithm 1 visits all the local polytopes created by a ReLU NN within ${\mathbb{R}}^{P}$ . The time complexity is exponential in the number of neurons, as all ${2}^{M}$ possible activation patterns are checked once in the worst-case scenario. The space complexity is also exponential in the number of neurons, as we hash all the checked activation patterns. Furthermore, for each activation pattern, we solve a phase-I problem of LP with $M$ inequalities in ${\mathbb{R}}^{P}$ . Traversing all local polytopes in ${\mathbb{R}}^{P}$ therefore becomes intractable for neural networks with a large number of neurons.
+
+Fortunately, traversing in ${\mathbb{R}}^{P}$ is usually undesirable. Firstly, a neural network may run into extrapolation issues for points outside the sample distribution. The polytopes far away from the areas covered by the samples are often considered unreliable. Secondly, many real-life applications, to be discussed in Section 5, only require traversing within small bounded regions to examine the local behavior of a model. In the next section, we introduce a technique to improve the efficiency when traversing within a bounded region.
+
+Algorithm 1: BFS-Based Polytope Traversing
+
+---
+
+Require: A ReLU NN with one hidden layer of $M$ neurons as specified in (1).
+
+Require: An initial point $\mathbf{x} \in {\mathbb{R}}^{P}$ .
+
+ 1: Initialize an empty queue $\mathcal{Q}$ for BFS.
+
+ 2: Initialize an empty set ${\mathcal{S}}_{R}$ to store the codes of all visited polytopes.
+
+ 3: Initialize an empty set ${\mathcal{S}}_{\mathbf{c}}$ to store all checked codes.
+
+ 4: Calculate $\mathbf{x}$ ’s initial polytope code $\mathbf{c}$ using (2).
+
+ 5: Append $\mathbf{c}$ to the end of $\mathcal{Q}$ .
+
+ 6: Add $\mathbf{c}$ to both ${\mathcal{S}}_{R}$ and ${\mathcal{S}}_{\mathbf{c}}$ .
+
+ 7: while $\mathcal{Q}$ is not empty do
+
+ 8: Pop the first element in the front of the BFS queue: $\mathbf{c} = \mathcal{Q}$ .pop( ).
+
+ 9: for $m = 1,2,\ldots , M$ do
+
+10: Create a candidate polytope code $\widehat{\mathbf{c}}$ by flipping one bit in $\mathbf{c}$ : ${\widehat{c}}_{m} = 1 - {c}_{m}$ and ${\widehat{c}}_{k} = {c}_{k}\;\forall k \neq m$ .
+
+11: if $\widehat{\mathbf{c}} \notin {\mathcal{S}}_{\mathbf{c}}$ then
+
+12: Check if ${\mathcal{R}}_{\widehat{\mathbf{c}}} = \left\{ {\mathbf{x} \mid {\left( -1\right) }^{{\widehat{c}}_{k}}\left( {{\mathbf{w}}_{k}^{T}\mathbf{x} + {b}_{k}}\right) \leq 0, k = 1,2,\ldots , M}\right\}$ is empty using LP.
+
+13: Add $\widehat{\mathbf{c}}$ to ${\mathcal{S}}_{\mathbf{c}}$ .
+
+14: if ${\mathcal{R}}_{\widehat{\mathbf{c}}} \neq \varnothing$ then
+
+15: Append $\widehat{\mathbf{c}}$ to the end of $\mathcal{Q}$ .
+
+16: Add $\widehat{\mathbf{c}}$ to ${\mathcal{S}}_{R}$ .
+
+17: Return ${\mathcal{S}}_{R}$ .
+
+---
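+
+As an illustration, Algorithm 1 can be sketched in a few lines of Python. This is a minimal sketch under our own assumptions: the hidden layer is given as a weight matrix `W` ($M \times P$) and bias vector `b`, and the phase-I feasibility check is delegated to `scipy.optimize.linprog` with a zero objective; none of these interface choices come from the algorithm statement itself.
+
+```python
+import numpy as np
+from collections import deque
+from scipy.optimize import linprog
+
+def polytope_code(x, W, b):
+    # Bit c_m is chosen so that (-1)^{c_m} (w_m^T x + b_m) <= 0 holds at x.
+    return tuple((W @ x + b > 0).astype(int))
+
+def is_nonempty(code, W, b):
+    # Phase-I feasibility of R_c = {x | (-1)^{c_k} (w_k^T x + b_k) <= 0}.
+    s = (-1.0) ** np.array(code)
+    res = linprog(np.zeros(W.shape[1]), A_ub=s[:, None] * W, b_ub=-s * b,
+                  bounds=[(None, None)] * W.shape[1], method="highs")
+    return res.status == 0  # 0 means a feasible point was found
+
+def traverse(x0, W, b):
+    # BFS over the polytope graph: flip one bit at a time, keep feasible codes.
+    c0 = polytope_code(x0, W, b)
+    queue, visited, checked = deque([c0]), {c0}, {c0}
+    while queue:
+        c = queue.popleft()
+        for m in range(len(c)):
+            cand = c[:m] + (1 - c[m],) + c[m + 1:]
+            if cand in checked:
+                continue
+            checked.add(cand)
+            if is_nonempty(cand, W, b):
+                queue.append(cand)
+                visited.add(cand)
+    return visited
+```
+
+For instance, two parallel hyperplanes $x_1 = 0$ and $x_1 = 1$ partition ${\mathbb{R}}^{2}$ into three slabs, and the sketch returns three codes; the fourth bit pattern is rejected by the feasibility check.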
+
+### 4.2 Polytope traversing within a bounded region
+
+We first consider a region with each dimension bounded independently: ${l}_{j} \leq {x}_{j} \leq {u}_{j}, j = 1,2,\ldots , P$ . These $2 \times P$ linear inequalities create a hypercube denoted as $\mathcal{B}$ . During the BFS-based polytope traversing, we repetitively flip the direction of one of the $M$ inequalities to identify the one-adjacent neighbors. When the bounded region is small, it is likely that only a small number of the $M$ hyperplanes cut through the hypercube. For the other hyperplanes, the entire hypercube falls on only one side, and flipping to the other side of any of them would leave the bounded region. Therefore, at the very beginning of polytope traversing, we can run through the $M$ hyperplanes to identify those cutting through the hypercube. Then, in each neighbor-identifying step, we only flip these hyperplanes.
+
+
+Figure 2: Demonstration of the BFS-based polytope traversing algorithm. (a) Traversing the 8 local polytopes within the bounded region. The ReLU NN is the same as in Figure 1.(b). The lines marked in red are the boundaries of polytope No. 0. (b) Traversing the 6 local polytopes within the bounded region. The ReLU NN is the same as in Figure 1.(c). The polytopes are indexed as "(level-1, level-2)". (c) The evolution of the BFS queue for traversing the local polytopes in (a). The gray arrows show the traversing order. The colored arrows at the bottom indicate the one-adjacent neighbors added to the queue. (d) The evolution of the hierarchical BFS queue for traversing the local polytopes in (b). The level-1 BFS queue is shown vertically while the level-2 BFS queue is shown horizontally.
+
+To identify the hyperplanes cutting through the hypercube, we denote the two sides of a hyperplane as $\mathcal{H} = \left\{ {\mathbf{x} \mid {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \leq 0}\right\}$ and $\overline{\mathcal{H}} = \left\{ {\mathbf{x} \mid {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \geq 0}\right\}$ . If neither $\mathcal{H} \cap \mathcal{B}$ nor $\overline{\mathcal{H}} \cap \mathcal{B}$ is empty, we say the hyperplane ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} = 0$ cuts through $\mathcal{B}$ . Since $\mathcal{H} \cap \mathcal{B}$ and $\overline{\mathcal{H}} \cap \mathcal{B}$ are both constrained by $2 \times P + 1$ inequalities, checking their feasibility can again be formulated as a phase-I problem of LP. A faster and simpler method is to bound ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m}$ subject to $\mathbf{x} \in \mathcal{B}$ , which has a closed-form solution: the hyperplane cuts through $\mathcal{B}$ if zero lies between the resulting lower and upper bounds. We name this technique hyperplane pre-screening and summarize it in Algorithm 2.
+
+Algorithm 2: Hyperplane Pre-Screening
+
+---
+
+Require: A set of hyperplanes ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} = 0, m = 1,2,\ldots , M$ .
+
+Require: A bounded traversing region $\mathcal{B}$ , e.g. $\left\{ {\mathbf{x} \mid {l}_{j} \leq {x}_{j} \leq {u}_{j}, j = 1,2,\ldots , P}\right\}$ .
+
+ 1: Initialize an empty set $\mathcal{T}$ to store all hyperplanes cutting through $\mathcal{B}$ .
+
+ 2: for $m = 1,2,\ldots , M$ do
+
+ 3: Get two halfspaces $\mathcal{H} = \left\{ {\mathbf{x} \mid {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \leq 0}\right\}$ and $\overline{\mathcal{H}} = \left\{ {\mathbf{x} \mid {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \geq 0}\right\}$ .
+
+ 4: if $\mathcal{H} \cap \mathcal{B} \neq \varnothing$ and $\overline{\mathcal{H}} \cap \mathcal{B} \neq \varnothing$ then
+
+ 5: Add $m$ to $\mathcal{T}$ .
+
+ 6: Return $\mathcal{T}$ .
+
+---
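+
+The closed-form bounding step admits a concrete sketch: bounding ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m}$ over a box pairs each positive coefficient with the corresponding upper bound (for the maximum) or lower bound (for the minimum), and vice versa for negative coefficients. The array interface below is our own assumption.
+
+```python
+import numpy as np
+
+def prescreen(W, b, lower, upper):
+    # Indices of hyperplanes w_m^T x + b_m = 0 that cut through the box
+    # {x | lower <= x <= upper}, via closed-form min/max of w_m^T x + b_m.
+    lower, upper = np.asarray(lower, float), np.asarray(upper, float)
+    pos, neg = np.maximum(W, 0.0), np.minimum(W, 0.0)
+    hi = pos @ upper + neg @ lower + b  # maximum over the box
+    lo = pos @ lower + neg @ upper + b  # minimum over the box
+    return [m for m in range(W.shape[0]) if lo[m] <= 0.0 <= hi[m]]
+```
+
+This costs one matrix-vector product per bound, compared with one LP per halfspace in the phase-I formulation.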
+
+Hyperplane pre-screening effectively reduces the complexity from $\mathcal{O}\left( {2}^{M}\right)$ to $\mathcal{O}\left( {2}^{\left| \mathcal{T}\right| }\right)$ , where $\left| \mathcal{T}\right|$ is the number of hyperplanes cutting through the hypercube. The number ${2}^{\left| \mathcal{T}\right| }$ corresponds to the worst-case scenario. Since the BFS-based traversing only checks non-empty polytopes and their potential one-adjacent neighbors, the number of activation patterns actually checked can be less than ${2}^{\left| \mathcal{T}\right| }$ . In general, the fewer hyperplanes cut through $\mathcal{B}$ , the faster the polytope traversing finishes.
+
+Figure 2.(a) shows traversing the 8 local polytopes within the bounded region. The ReLU NN is the same as in Figure 1.(b). The lines marked in red are the hyperplanes cutting through the bounded region and are identified by the pre-screening algorithm. The evolution of the BFS queue is shown in Figure 2.(c). The gray arrows show the traversing order. The colored arrows at the bottom indicate the one-adjacent neighbors added to the queue. When polytope No. 0 is popped from the queue, its one-adjacent neighbors, No. 1, 2, 3, and 4, are added to the queue. Next, when polytope No. 1 is popped, its one-adjacent neighbors, No. 5 and 6, are added. Polytope No. 0, although a one-adjacent neighbor of No. 1, is ignored since it has already been visited. Similarly, when polytope No. 2 is popped, only one of its one-adjacent neighbors, No. 7, is added, since all others have been visited (including those in the queue). The algorithm finishes after popping polytope No. 7, when no new polytopes can be added and the queue becomes empty. All 8 local polytopes in the bounded region are traversed.
+
+Because $\mathcal{B}$ is bounded by a set of linear inequalities, the correctness of BFS-based polytope traversing as stated in Theorem 4.1 easily extends to this bounded traversing case. It can be proved by showing that for any two non-empty polytopes overlapping with $\mathcal{B}$ , we can move from one to the other by repetitively finding a one-adjacent neighbor within $\mathcal{B}$ . We emphasize that the correctness of BFS-based polytope traversing can be proved for any traversing region bounded by a set of linear inequalities. This realization is critical for generalizing our results to ReLU NNs with multiple hidden layers. Furthermore, as any closed convex set can be represented as the intersection of a (possibly infinite) set of half-spaces, the correctness of BFS-based polytope traversing holds for any closed convex $\mathcal{B}$ .
+
+### 4.3 Hierarchical polytope traversing in the case of multiple hidden layers
+
+The BFS-based polytope traversing algorithm can be generalized to ReLU NNs with multiple hidden layers. In Section 2.2, we described how a ReLU NN with $L$ hidden layers hierarchically partitions the input space into polytopes of $L$ different levels. Then, in Section 3, we showed that the adjacency of level- $l$ polytopes is conditioned on all of them belonging to the same level- $(l-1)$ polytope. Therefore, to traverse all level- $L$ polytopes, we need to traverse all level- $(L-1)$ polytopes and, within each of them, traverse the sub-polytopes by following the one-adjacent neighbors.
+
+The procedure above leads us to a recursive traversing scheme. Assume a ReLU NN with $L$ hidden layers and a closed convex traversing region $\mathcal{B}$ . Starting from a sample $\mathbf{x} \in \mathcal{B}$ , we traverse all level-1 polytopes using the BFS-based algorithm. Inside each level-1 polytope, we traverse all the contained level-2 polytopes, and so on until we reach the level- $L$ polytopes. As shown in (13), each level- $l$ polytope is constrained by $\mathop{\sum }\limits_{{t = 1}}^{l}{M}_{t}$ linear inequalities, so the way to identify level- $l$ one-adjacent neighbors is largely the same as described in Section 4.1. Two level- $l$ one-adjacent neighbors must share the same $\mathop{\sum }\limits_{{t = 1}}^{{l - 1}}{M}_{t}$ linear inequalities corresponding to ${\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l - 1}$ , and have one of the last ${M}_{l}$ inequalities differ in direction, so there are ${M}_{l}$ cases to check.
+
+We can use hyperplane pre-screening at each level of traversing. When traversing the level- $l$ polytopes within a level- $(l-1)$ polytope ${\mathcal{R}}^{l - 1}$ , we update the bounded traversing region by taking the intersection of ${\mathcal{R}}^{l - 1}$ and $\mathcal{B}$ . We then screen the ${M}_{l}$ partitioning hyperplanes and only select those passing through this updated traversing region.
+
+The BFS-based hierarchical polytope traversing algorithm is summarized in Algorithm 3. Its correctness can be proved based on the results in Section 4.2, which guarantee the thoroughness of traversing the level- $l$ polytopes within any level- $(l-1)$ polytope. The overall thoroughness then follows because each level of traversing is thorough. We state the result in the following theorem.
+
+Theorem 4.2 Given a ReLU NN with $L$ hidden layers and a closed convex traversing region $\mathcal{B}$ , Algorithm 3 covers all non-empty level- $L$ polytopes created by the neural network that overlap with $\mathcal{B}$ . That is, for all $\mathbf{x} \in \mathcal{B}$ , there exists one ${\mathcal{R}}_{\mathbf{c}}$ as defined in (13) such that $\mathbf{x} \in {\mathcal{R}}_{\mathbf{c}}$ and $\mathbf{c} \in {\mathcal{S}}_{R}$ , where ${\mathcal{S}}_{R}$ is the result returned by Algorithm 3.
+
+Figure 2.(b) shows traversing the 6 local polytopes within the bounded region. The ReLU NN is the same as in Figure 1.(c). The evolution of the hierarchical BFS queue is shown in Figure 2.(d). The level-1 BFS queue is shown vertically while the level-2 BFS queue is shown horizontally. Starting from level-1 polytope $\left( {0, \cdot }\right)$ , the algorithm traverses the two level-2 polytopes inside it. It then identifies the two (level-1) one-adjacent neighbors of $\left( {0, \cdot }\right) : \left( {1, \cdot }\right)$ and $\left( {2, \cdot }\right)$ . Every time a level-1 polytope is identified, the algorithm goes into it to traverse all the level-2 polytopes inside. At the end of the recursive call, all 6 local polytopes in the bounded region are traversed.
+
+## 5 Network Property Verification Based on Polytope Traversing
+
+The biggest advantage of the polytope traversing algorithm is its ability to be adapted to solve many different problems of practical interest. Problems such as local adversarial attacks, searching for counterfactual samples, and local monotonicity verification can be solved easily when the model is linear. As we have shown in Section 2.2, the local model within each level- $L$ polytope created by a ReLU NN is indeed linear. The polytope traversing algorithm provides a way to analyze not only the behavior of a ReLU NN in one local polytope but also its behavior within the neighborhood, and therefore enhances our understanding of the overall model behavior. In this section, we describe the details of adapting the polytope traversing algorithm to verify several properties of ReLU NNs.
+
+Algorithm 3: BFS-Based Hierarchical Polytope Traversing in a Bounded Region
+
+---
+
+Require: A ReLU NN with $L$ hidden layers.
+
+Require: A closed convex traversing region $\mathcal{B}$ .
+
+Require: An initial point $\mathbf{x} \in \mathcal{B}$ .
+
+ 1: Initialize an empty set ${\mathcal{S}}_{R}$ to store the codes of all visited polytopes.
+
+ 2: function HIERARCHICAL_TRAVERSE( $\mathbf{x}$ , $l$ )
+
+ 3: Initialize an empty queue ${\mathcal{Q}}^{l}$ for BFS at level $l$ .
+
+ 4: Initialize an empty set ${\mathcal{S}}_{\mathbf{c}}^{l}$ to store all checked level- $l$ codes.
+
+ 5: Calculate $\mathbf{x}$ ’s initial polytope code $\mathbf{c}$ recursively using (12).
+
+ 6: if $l == L$ then
+
+ 7: Add $\mathbf{c}$ to ${\mathcal{S}}_{R}$ .
+
+ 8: else
+
+ 9: HIERARCHICAL_TRAVERSE( $\mathbf{x}$ , $l + 1$ ).
+
+10: if $l > 1$ then
+
+11: Get the level- $(l-1)$ polytope code specified by the front segment of $\mathbf{c}$ : ${\mathbf{c}}^{1 : l - 1} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l - 1}$ .
+
+12: Use ${\mathbf{c}}^{1 : l - 1}$ to get the level- $(l-1)$ polytope ${\mathcal{R}}_{\mathbf{c}}^{l - 1}$ as in (13).
+
+13: else
+
+14: ${\mathcal{R}}_{\mathbf{c}}^{0} = {\mathbb{R}}^{P}$ .
+
+15: Form the new traversing region ${\mathcal{B}}^{l - 1} = \mathcal{B} \cap {\mathcal{R}}_{\mathbf{c}}^{l - 1}$ .
+
+16: Append the code segment ${\mathbf{c}}^{l}$ to the end of ${\mathcal{Q}}^{l}$ .
+
+17: Add the code segment ${\mathbf{c}}^{l}$ to ${\mathcal{S}}_{\mathbf{c}}^{l}$ .
+
+18: Get the ${M}_{l}$ hyperplanes associated with ${\mathbf{c}}^{l}$ , pre-screen them using Algorithm 2 with bounded region ${\mathcal{B}}^{l - 1}$ , and collect the pre-screening results $\mathcal{T}$ .
+
+19: while ${\mathcal{Q}}^{l}$ is not empty do
+
+20: Pop the first element in the front of the BFS queue: ${\mathbf{c}}^{l} = {\mathcal{Q}}^{l}$ .pop( ).
+
+21: for $m \in \mathcal{T}$ do
+
+22: Create a candidate polytope code ${\widehat{\mathbf{c}}}^{l}$ by flipping one bit in ${\mathbf{c}}^{l}$ : ${\widehat{c}}_{m}^{l} = 1 - {c}_{m}^{l}$ and ${\widehat{c}}_{k}^{l} = {c}_{k}^{l}\;\forall k \neq m$ .
+
+23: if ${\widehat{\mathbf{c}}}^{l} \notin {\mathcal{S}}_{\mathbf{c}}^{l}$ then
+
+24: Get the set ${\mathcal{R}}_{\widehat{\mathbf{c}}} = \left\{ {\mathbf{x} \mid {\left( -1\right) }^{{\widehat{c}}_{k}^{l}}\left( {\left\langle {{\widehat{\mathbf{w}}}_{k}^{l},\mathbf{x}}\right\rangle + {\widehat{b}}_{k}^{l}}\right) \leq 0, k = 1,2,\ldots ,{M}_{l}}\right\}$ .
+
+25: Check if ${\mathcal{R}}_{\widehat{\mathbf{c}}} \cap {\mathcal{B}}^{l - 1}$ is empty using LP.
+
+26: Add ${\widehat{\mathbf{c}}}^{l}$ to ${\mathcal{S}}_{\mathbf{c}}^{l}$ .
+
+27: if ${\mathcal{R}}_{\widehat{\mathbf{c}}} \cap {\mathcal{B}}^{l - 1} \neq \varnothing$ then
+
+28: Append ${\widehat{\mathbf{c}}}^{l}$ to the end of ${\mathcal{Q}}^{l}$ .
+
+29: if $l == L$ then
+
+30: Add $\widehat{\mathbf{c}} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\widehat{\mathbf{c}}}^{l}$ to ${\mathcal{S}}_{R}$ .
+
+31: else
+
+32: Find a point $\widehat{\mathbf{x}} \in {\mathcal{R}}_{\widehat{\mathbf{c}}} \cap {\mathcal{B}}^{l - 1}$ .
+
+33: HIERARCHICAL_TRAVERSE( $\widehat{\mathbf{x}}$ , $l + 1$ ).
+
+34: HIERARCHICAL_TRAVERSE( $\mathbf{x}$ , 1).
+
+35: Return ${\mathcal{S}}_{R}$ .
+
+---
+
+### 5.1 Local Adversarial Attacks
+
+We define the local adversarial attack problem as finding the perturbation within a bounded region such that the model output is changed most adversarially. Here, we assume the model output to be a scalar in $\mathbb{R}$ and consider three regression cases with different types of response variable: continuous, binary, and categorical. The perturbation region is a convex set around the original sample. For example, we can allow certain features to increase or decrease by a certain amount, or we can use a norm $\left( {{L}_{1},{L}_{2},{L}_{\infty }}\right)$ ball centered at the original sample.
+
+
+Figure 3: Demonstration of different applications of the polytope traversing algorithm. We use the ReLU NN in Figure 1.(b) as an example. (a) Conducting a local adversarial attack by finding the maximum (green) and minimum (red) model predictions within a bounded region. (b) Creating counterfactual samples that are closest to the original sample. The distances are measured in ${L}_{1}$ (green) and ${L}_{2}$ (red) norms. (c) Monotonicity verification in a bounded region. The polytope in red violates the condition that the model prediction is monotonically increasing along the horizontal axis.
+
+In the continuous response case, the one-dimensional output after the last linear layer of a ReLU NN is directly used as the prediction of the target variable. Denote the model function as $f\left( \cdot \right)$ , the original sample as ${\mathbf{x}}_{0}$ , and the perturbation region as $\mathcal{B}$ . The local adversarial attack problem can be written as:
+
+$$
+\mathop{\max }\limits_{{\mathbf{x} \in \mathcal{B}}}\left| {f\left( \mathbf{x}\right) - f\left( {\mathbf{x}}_{0}\right) }\right| = \max \left( {\mathop{\max }\limits_{{\mathbf{x} \in \mathcal{B}}}f\left( \mathbf{x}\right) - f\left( {\mathbf{x}}_{0}\right) ,\; f\left( {\mathbf{x}}_{0}\right) - \mathop{\min }\limits_{{\mathbf{x} \in \mathcal{B}}}f\left( \mathbf{x}\right) }\right) , \tag{15}
+$$
+
+which means we need to find the range of the model outputs on $\mathcal{B}$ . We can traverse all local polytopes covered by $\mathcal{B}$ , find the model output range within each intersection $\mathcal{B} \cap \mathcal{R}$ , and then aggregate all the local results to get the final range. Finding the output range within each $\mathcal{B} \cap \mathcal{R}$ is a convex problem with a linear objective function, so optimality is guaranteed within each polytope. Because our traversing algorithm covers all polytopes overlapping with $\mathcal{B}$ , the final solution also has guaranteed optimality.
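+
+Within a single polytope, this range computation amounts to a pair of LPs over $\mathcal{B} \cap \mathcal{R}$ . Below is a sketch for a one-hidden-layer network with a box-shaped $\mathcal{B}$ ; the interface (hidden layer `W`, `b`, output layer `v`, `v0`, bit 1 meaning an active neuron, and `scipy` as the solver) is our own assumption:
+
+```python
+import numpy as np
+from scipy.optimize import linprog
+
+def local_range(code, W, b, v, v0, lower, upper):
+    # Inside R_c the network is linear: f(x) = w_eff^T x + b_eff, where only
+    # the active neurons (bit 1, i.e. w^T x + b > 0) contribute.
+    active = np.array(code) == 1
+    w_eff = (v * active) @ W
+    b_eff = (v * active) @ b + v0
+    s = np.where(active, -1.0, 1.0)          # encode R_c as s*(Wx+b) <= 0
+    A_ub, b_ub = s[:, None] * W, -s * b
+    bounds = list(zip(lower, upper))         # the box B
+    lo = linprog(w_eff, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
+    hi = linprog(-w_eff, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
+    if lo.status != 0 or hi.status != 0:     # B ∩ R_c is empty
+        return None
+    return lo.fun + b_eff, -hi.fun + b_eff
+```
+
+Aggregating these per-polytope ranges over all codes returned by the traversing algorithm yields the output range on $\mathcal{B}$ , and hence the solution to (15).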
+
+In the case of binary response, the one-dimensional output after the last linear layer of a ReLU NN is passed through a logistic/sigmoid function to predict the probability of a sample belonging to class 1. To conduct an adversarial attack, we minimize the predicted probability $f\left( \mathbf{x}\right)$ if the true response $y$ is 1, and maximize the prediction if the true response is 0:
+
+$$
+\left\{ \begin{array}{ll} \mathop{\max }\limits_{{\mathbf{x} \in \mathcal{B}}}f\left( \mathbf{x}\right) , & y = 0 \\ \mathop{\min }\limits_{{\mathbf{x} \in \mathcal{B}}}f\left( \mathbf{x}\right) , & y = 1. \end{array}\right. \tag{16}
+$$
+
+Because of the monotonicity of the logistic function, the minimizer and maximizer of the probabilistic output are the same as those of the output after the last linear layer (i.e., the predicted log odds), making this case equivalent to the case of continuous response.
+
+In the case of categorical response with levels 1 to $Q$ , the output after the last linear layer of a ReLU NN is in ${\mathbb{R}}^{Q}$ and is passed through a softmax layer to be converted to probabilistic predictions of a sample belonging to each class. The adversarial sample is generated to minimize the predicted probability of the sample being in its true class. Within each local polytope, the linear models are given by (14), and the predicted probability of class $q$ can be minimized by finding the maximizer of the following optimization problem:
+
+$$
+\mathop{\max }\limits_{{\mathbf{x} \in \mathcal{B} \cap \mathcal{R}}}\mathop{\sum }\limits_{{i = 1, i \neq q}}^{Q}{e}^{{\left( {\widehat{\mathbf{w}}}_{i}^{o} - {\widehat{\mathbf{w}}}_{q}^{o}\right) }^{T}\mathbf{x} + \left( {{\widehat{b}}_{i}^{o} - {\widehat{b}}_{q}^{o}}\right) }, \tag{17}
+$$
+
+where ${\left( {\widehat{\mathbf{w}}}_{i}^{o}\right) }^{T}$ is the $i$ th row of the matrix ${\widehat{\mathbf{W}}}^{o}$ and ${\widehat{b}}_{i}^{o}$ is the $i$ th element in ${\widehat{\mathbf{b}}}^{o}$ . Since the objective function in (17) is convex, the optimality of local adversarial attack with polytope traversing is guaranteed.
+
+Figure 3.(a) demonstrates a local adversarial attack in the case of regression with binary response. The ReLU NN is the same as in Figure 1.(b), which predicts the probability of a sample belonging to class 1. The predictions across the whole domain are shown as a heat map. Within the region bounded by the black box, we find the minimum and maximum predictions and mark them in red and green respectively. Due to the nature of linear models, the minimizer and maximizer always fall on the intersections of partitioning hyperplanes and/or region boundaries.
+
+### 5.2 Counterfactual sample generation
+
+In classification problems, we are often interested in finding the smallest perturbation on a sample such that the model changes its class prediction. The magnitude of the perturbation is often measured by ${L}_{1},{L}_{2}$ , or ${L}_{\infty }$ norm. The optimization problem can be written as:
+
+$$
+\mathop{\min }\limits_{\mathbf{x}}{\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}_{0}\end{Vmatrix}}_{p}\;\text{ s.t. }{f}_{\mathcal{C}}\left( \mathbf{x}\right) \neq {f}_{\mathcal{C}}\left( {\mathbf{x}}_{0}\right) , \tag{18}
+$$
+
+where ${\mathbf{x}}_{0}$ is the original sample, $p$ indicates a specific type of norm, and ${f}_{\mathcal{C}}\left( \cdot \right)$ is a ReLU NN outputting class predictions.
+
+We can adapt the polytope traversing algorithm to solve this problem. In the case of binary response, each local polytope has an associated hyperplane separating the two classes: ${\left( {\widehat{\mathbf{w}}}^{o}\right) }^{T}\mathbf{x} + {\widehat{b}}^{o} =$ $\gamma$ , where ${\widehat{\mathbf{w}}}^{o}$ and ${\widehat{b}}^{o}$ are given in (14), and $\gamma$ is the threshold converting predicted log odds to class. Finding the counterfactual sample within a local polytope $\mathcal{R}$ can be written as a convex optimization problem:
+
+$$
+\mathop{\min }\limits_{\mathbf{x}}{\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}_{0}\end{Vmatrix}}_{p}\;\text{ s.t. }{\left( -1\right) }^{{\widehat{y}}_{0}}\left( {{\left( {\widehat{\mathbf{w}}}^{o}\right) }^{T}\mathbf{x} + {\widehat{b}}^{o}}\right) > \gamma ,\;\mathbf{x} \in \mathcal{R}, \tag{19}
+$$
+
+where ${\widehat{y}}_{0}$ is the original class predicted by the model.
+
+We start the traversing algorithm from the polytope where ${\mathbf{x}}_{0}$ lies. In each polytope, we solve (19). It is possible that the entire polytope falls on one side of the class-separating hyperplane and (19) has no feasible solution. If a solution can be obtained, we compare it with the solutions in previously traversed polytopes and keep the one with the smallest perturbation. Furthermore, we use this perturbation magnitude to construct a new bounded traversing region around ${\mathbf{x}}_{0}$ . Because no point outside this region can have a smaller distance to the original point, the algorithm can conclude once we finish traversing all the polytopes inside this region. In practice, we often construct this dynamic traversing region as $\mathcal{B} = \left\{ {\mathbf{x} \mid {\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}_{0}\end{Vmatrix}}_{\infty } < {d}^{ * }}\right\}$ , where ${d}^{ * }$ is the smallest perturbation magnitude found so far. When solving (19) in the subsequent polytopes, we add $\mathbf{x} \in \mathcal{B}$ to the constraints. $\mathcal{B}$ is updated whenever a smaller ${d}^{ * }$ is found. Because the new traversing region is always a subset of the previous one, our BFS-based traversing algorithm covers all polytopes within the final traversing region under this dynamic setting. The final solution to (18) is guaranteed to be optimal, and the running time depends on how far the original point is from a class boundary.
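+
+For $p = 1$ , the per-polytope problem (19) is itself an LP once elementwise slack variables $\mathbf{t} \geq \left| {\mathbf{x} - {\mathbf{x}}_{0}}\right|$ are introduced. Below is a sketch under our own encoding assumptions: the polytope (together with any dynamic region constraints) is given as `A_poly x <= b_poly`, and a small `eps` approximates the strict inequality:
+
+```python
+import numpy as np
+from scipy.optimize import linprog
+
+def l1_counterfactual(x0, y0_hat, A_poly, b_poly, w_o, b_o, gamma, eps=1e-6):
+    # Variables z = [x, t]; minimize sum(t) subject to |x - x0| <= t,
+    # x in the polytope, and (-1)^{y0_hat} (w_o^T x + b_o) >= gamma + eps.
+    P = x0.size
+    c = np.concatenate([np.zeros(P), np.ones(P)])
+    I = np.eye(P)
+    s = (-1.0) ** y0_hat
+    A_ub = np.vstack([np.hstack([I, -I]),                  #  x - t <= x0
+                      np.hstack([-I, -I]),                 # -x - t <= -x0
+                      np.hstack([A_poly, np.zeros_like(A_poly)]),
+                      np.hstack([-s * w_o, np.zeros(P)])[None, :]])
+    b_ub = np.concatenate([x0, -x0, b_poly,
+                           np.array([s * b_o - gamma - eps])])
+    res = linprog(c, A_ub=A_ub, b_ub=b_ub,
+                  bounds=[(None, None)] * P + [(0, None)] * P, method="highs")
+    return res.x[:P] if res.status == 0 else None  # None: no feasible flip
+```
+
+Returning `None` corresponds to the infeasible case above, where the entire polytope lies on one side of the class-separating hyperplane.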
+
+In the case of categorical response with levels 1 to $Q$ , the output after the last linear layer of a ReLU NN has $Q$ dimensions, and the dimension with the largest value is the predicted class. We ignore the softmax layer at the end because it does not change the rank of the dimensions. Assuming the original example is predicted to belong to class ${\widehat{q}}_{0}$ , we generate counterfactual samples in the remaining $Q - 1$ classes.
+
+We consider one of these classes at a time and denote it as $q$ . Within each ReLU NN's local polytope, the linear models are given by (14). The area where a sample is predicted to be in class $q$ is enclosed by the intersection of $Q - 1$ halfspaces:
+
+$$
+{\mathcal{C}}_{q} = \left\{ {\mathbf{x} \mid {\left( {\widehat{\mathbf{w}}}_{q}^{o} - {\widehat{\mathbf{w}}}_{i}^{o}\right) }^{T}\mathbf{x} + \left( {{\widehat{b}}_{q}^{o} - {\widehat{b}}_{i}^{o}}\right) > 0,\forall i = 1,\ldots , Q, i \neq q}\right\} . \tag{20}
+$$
+
+Therefore, within each local polytope, we solve the convex optimization problem:
+
+$$
+\mathop{\min }\limits_{\mathbf{x}}{\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}_{0}\end{Vmatrix}}_{p}\text{ s.t. }\mathbf{x} \in {\mathcal{C}}_{q} \cap \mathcal{R}. \tag{21}
+$$
+
+We compare all feasible solutions of (21) under different $q$ and keep the one counterfactual sample that is closest to ${\mathbf{x}}_{0}$ . The traversing procedure and the dynamic traversing region update is the same as in the binary response case. Since (21) is convex, the final solution to (18) is guaranteed to be optimal.
+
+Figure 3.(b) demonstrates counterfactual sample generation in the case of binary classification. The ReLU NN is the same as in Figure 1.(b) whose class decision boundaries are plotted in red. Given an original sample plotted as the black dot, we generate two counterfactual samples on the decision boundaries. The red dot has the smallest ${L}_{2}$ distance to the original point while the green dot has the smallest ${L}_{1}$ distance.
+
+### 5.3 Local monotonicity verification
+
+We can adapt the polytope traversing algorithm to verify if a trained ReLU NN is monotonic w.r.t. certain features. We consider the regression cases with continuous and binary response. In both cases, the output after the last linear layer is a scalar. Since the binary response case uses a logistic function at the end which is monotonically increasing itself, we can ignore this additional function. The verification methods for the two cases, therefore, are equivalent.
+
+To check whether the model is monotonic w.r.t. a specific feature within a bounded convex domain, we traverse the local polytopes covered by the domain. Since the model is linear within each polytope, we can easily check the monotonicity direction (increasing or decreasing) by checking the sign of the corresponding coefficients. After traversing all local polytopes covered by the domain, we check their agreement on the monotonicity direction. Since a ReLU NN produces a continuous function, if the local models are all monotonically increasing or all monotonically decreasing, the network is monotonic on the checked domain. If there is a disagreement in the direction, the network is not monotonic. The verification algorithm based on polytope traversing not only provides us the final monotonicity result but also tells us in which part of the domain monotonicity is violated.
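+
+For a one-hidden-layer network, this check reduces to reading off the sign of one effective coefficient per polytope. A sketch under that assumption, where `codes` stands for the set of polytope codes returned by the traversing algorithm and bit 1 marks an active neuron:
+
+```python
+import numpy as np
+
+def local_coef(code, W, v, j):
+    # Effective coefficient of feature j inside polytope R_c: only active
+    # neurons (bit 1) contribute to the local linear model.
+    active = np.array(code) == 1
+    return ((v * active) @ W)[j]
+
+def verify_monotone(codes, W, v, j):
+    # Monotonic on the domain iff all local slopes share one sign;
+    # flat pieces (slope 0) are compatible with either direction.
+    signs = {np.sign(local_coef(c, W, v, j)) for c in codes}
+    signs.discard(0.0)
+    return len(signs) <= 1
+```
+
+When the check fails, the codes whose slope disagrees directly identify the violating polytopes, as in Figure 3.(c).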
+
+Figure 3.(c) demonstrates local monotonicity verification in the case of regression with binary response. The ReLU NN is the same as in Figure 1.(b), which predicts the probability of a sample belonging to class 1. The predictions across the whole domain are shown as a heat map. We check whether the model is monotonically increasing w.r.t. ${x}_{1}$ along the horizontal axis. The domain to check is bounded by the black box. Among the 5 polytopes overlapping with the domain, one violates the monotonically increasing condition and is marked in red.
+
+### 5.4 Comparison with algorithms based on mixed-integer programming
+
+The three applications above have been traditionally solved using MIP (Anderson et al. 2020; Fischetti and Jo 2017; Liu et al. 2020; Tjeng, Xiao, and Tedrake 2018; Weng et al. 2018). Our algorithms based on polytope traversing have several advantages. First, our method exploits the topological structure created by ReLU NNs and fully explains the model behavior in small neighborhoods. For the ${2}^{M}$ cases created by a ReLU NN with $M$ neurons, MIP eliminates search branches using branch-and-bound. Our method, on the other hand, eliminates search branches by checking the feasibility of the local polytopes and their adjacency. Since a small traversing region often covers a limited number of polytopes, our algorithm has a short running time when solving local problems.
+
+Second, since our algorithm explicitly identifies and visits all the polytopes, the final results contain not only the optimal solution but also the whole picture of the model behavior, providing explainability to the often-so-called black-box model.
+
+Third, our method requires only linear and convex programming solvers and no MIP solvers. Identifying adjacent polytopes requires only linear programming. Convex programming may be used to solve the sub-problem within a local polytope. Our algorithm allows us to incorporate any convex programming solver that is most suitable for the sub-problem, providing much freedom to customize.
+
+Last, and probably most important, our algorithm is highly versatile and flexible. Within each local polytope, the model is linear, which is often the simplest type of model to work with. Any analysis that one runs on a linear model can be transplanted here and wrapped inside the polytope traversing algorithm. Therefore, our algorithm provides a unified framework to verify different properties of piecewise linear networks.
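
The framework above can be sketched as a short BFS loop for a one-hidden-layer network: polytope feasibility (and hence adjacency) is checked with LP, and any per-polytope analysis is plugged in as a callback run on each visited linear region. This is a simplified sketch under our own naming; the paper's Algorithm 1 additionally prunes candidates via boundary identification:

```python
import numpy as np
from scipy.optimize import linprog

def feasible(code, W, b, box):
    """LP feasibility of polytope R_code intersected with the box domain."""
    s = 1.0 - 2.0 * np.asarray(code)           # s_m = (-1)^{c_m}
    res = linprog(np.zeros(W.shape[1]), A_ub=s[:, None] * W, b_ub=-s * b,
                  bounds=box, method="highs")
    return res.status == 0                     # feasible => polytope non-empty

def traverse(x0, W, b, box, check=lambda code: True):
    """BFS over the polytopes reachable from x0 within `box`; `check` is any
    per-polytope analysis of the local linear model. Returns the set of
    visited activation codes and whether `check` passed everywhere."""
    start = tuple((W @ x0 + b >= 0).astype(int))
    visited, queue, all_ok = {start}, [start], True
    while queue:
        code = queue.pop(0)
        all_ok = all_ok and check(code)
        for m in range(len(code)):             # flip one neuron: candidate neighbor
            nb = code[:m] + (1 - code[m],) + code[m + 1:]
            if nb not in visited and feasible(nb, W, b, box):
                visited.add(nb)
                queue.append(nb)
    return visited, all_ok
```

Swapping in a different `check` (robustness, monotonicity, local explanation) is exactly the customization the framework allows.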
+
+## 6 Conclusion
+
+We explored the unique topological structure that ReLU NNs create in the input space; identified the adjacency among the partitioned local polytopes; developed a traversing algorithm based on this adjacency; and proved the thoroughness of polytope traversing. Our polytope traversing algorithm could be extended to other piecewise linear networks such as those containing convolutional or maxpooling layers.
+
+## References
+
+Anderson, R.; Huchette, J.; Ma, W.; Tjandraatmadja, C.; and Vielma, J. P. 2020. Strong mixed-integer programming formulations for trained neural networks. Mathematical Programming, 1-37.
+
+Arora, R.; Basu, A.; Mianjy, P.; and Mukherjee, A. 2018. Understanding Deep Neural Networks with Rectified Linear Units. In International Conference on Learning Representations.
+
+Athalye, A.; Engstrom, L.; Ilyas, A.; and Kwok, K. 2018. Synthesizing robust adversarial examples. In International conference on machine learning, 284-293. PMLR.
+
+Bastani, O.; Ioannou, Y.; Lampropoulos, L.; Vytiniotis, D.; Nori, A.; and Criminisi, A. 2016. Measuring neural net robustness with constraints. Advances in neural information processing systems, 29: 2613-2621.
+
+Bunel, R.; Turkaslan, I.; Torr, P. H.; Kohli, P.; and Kumar, M. P. 2018. A unified view of piecewise linear neural network verification. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, 4795-4804.
+
+Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), 39-57. IEEE.
+
+Chu, L.; Hu, X.; Hu, J.; Wang, L.; and Pei, J. 2018. Exact and consistent interpretation for piecewise linear neural networks: A closed form solution. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, 1244-1253.
+
+Daniels, H.; and Velikova, M. 2010. Monotone and partially monotone neural networks. IEEE Transactions on Neural Networks, 21(6): 906-917.
+
+Ehlers, R. 2017. Formal verification of piece-wise linear feed-forward neural networks. In International Symposium on Automated Technology for Verification and Analysis, 269-286. Springer.
+
+Fischetti, M.; and Jo, J. 2017. Deep neural networks as 0-1 mixed integer linear programs: A feasibility study. arXiv preprint arXiv:1712.06174.
+
+Glorot, X.; Bordes, A.; and Bengio, Y. 2011. Deep sparse rectifier neural networks. In Proceedings of the fourteenth international conference on artificial intelligence and statistics, 315-323. JMLR Workshop and Conference Proceedings.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
+
+Gopinath, D.; Converse, H.; Pasareanu, C.; and Taly, A. 2019. Property inference for deep neural networks. In 2019 34th IEEE/ACM International Conference on Automated Software Engineering (ASE), 797-809. IEEE.
+
+Gupta, A.; Shukla, N.; Marla, L.; Kolbeinsson, A.; and Yellepeddi, K. 2019. How to Incorporate Monotonicity in Deep Networks While Preserving Flexibility? arXiv preprint arXiv:1909.10662.
+
+Hanin, B.; and Rolnick, D. 2019. Deep ReLU Networks Have Surprisingly Few Activation Patterns. Advances in Neural Information Processing Systems, 32: 361-370.
+
+Katz, G.; Barrett, C.; Dill, D. L.; Julian, K.; and Kochenderfer, M. J. 2017. Reluplex: An efficient SMT solver for verifying deep neural networks. In International Conference on Computer Aided Verification, 97-117. Springer.
+
+Liu, C.; Arnon, T.; Lazarus, C.; Strong, C.; Barrett, C.; and Kochenderfer, M. J. 2019. Algorithms for verifying deep neural networks. arXiv preprint arXiv:1903.06758.
+
+Liu, X.; Han, X.; Zhang, N.; and Liu, Q. 2020. Certified monotonic neural networks. arXiv preprint arXiv:2011.10219.
+
+Lu, Z.; Pu, H.; Wang, F.; Hu, Z.; and Wang, L. 2017. The expressive power of neural networks: A view from the width. In Proceedings of the 31st International Conference on Neural Information Processing Systems, 6232-6240.
+
+Montufar, G. F.; Pascanu, R.; Cho, K.; and Bengio, Y. 2014. On the Number of Linear Regions of Deep Neural Networks. Advances in Neural Information Processing Systems, 27: 2924-2932.
+
+Pulina, L.; and Tacchella, A. 2010. An abstraction-refinement approach to verification of artificial neural networks. In International Conference on Computer Aided Verification, 243-257. Springer.
+
+Pulina, L.; and Tacchella, A. 2012. Challenging SMT solvers to verify neural networks. Ai Communications, 25(2): 117-135.
+
+Schmidt-Hieber, J. 2020. Nonparametric regression using deep neural networks with ReLU activation function. The Annals of Statistics, 48(4): 1875-1897.
+
+Serra, T.; Tjandraatmadja, C.; and Ramalingam, S. 2018. Bounding and counting linear regions of deep neural networks. In International Conference on Machine Learning, 4558-4566. PMLR.
+
+Sharma, A.; and Wehrheim, H. 2020. Testing monotonicity of machine learning models. arXiv preprint arXiv:2002.12278.
+
+Sudjianto, A.; Knauth, W.; Singh, R.; Yang, Z.; and Zhang, A. 2020. Unwrapping The Black Box of Deep ReLU Networks: Interpretability, Diagnostics, and Simplification. arXiv preprint arXiv:2011.04041.
+
+Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2014. Intriguing properties of neural networks. In 2nd International Conference on Learning Representations, ICLR 2014.
+
+Telgen, J. 1982. Minimal representation of convex polyhedral sets. Journal of Optimization Theory and Applications, 38(1): 1-24.
+
+Telgen, J. 1983. Identifying redundant constraints and implicit equalities in systems of linear constraints. Management Science, 29(10): 1209-1222.
+
+Tjeng, V.; Xiao, K. Y.; and Tedrake, R. 2018. Evaluating Robustness of Neural Networks with Mixed Integer Programming. In International Conference on Learning Representations.
+
+Weng, L.; Zhang, H.; Chen, H.; Song, Z.; Hsieh, C.-J.; Daniel, L.; Boning, D.; and Dhillon, I. 2018. Towards fast computation of certified robustness for relu networks. In International Conference on Machine Learning, 5276-5285. PMLR.
+
+Yang, Z.; Zhang, A.; and Sudjianto, A. 2020. Enhancing explainability of neural networks through architecture constraints. IEEE Transactions on Neural Networks and Learning Systems.
+
+Zhao, W.; Singh, R.; Joshi, T.; Sudjianto, A.; and Nair, V. N. 2021. Self-interpretable Convolutional Neural Networks for Text Classification. arXiv preprint arXiv:2105.08589.
+
+Zou, D.; Cao, Y.; Zhou, D.; and Gu, Q. 2020. Gradient descent optimizes over-parameterized deep ReLU networks. Machine Learning, 109(3): 467-492.
+
+## 7 Appendix
+
+### 7.1 Proof of Lemma 3.1
+
+Lemma 7.1 Given a set $\mathcal{R} = \left\{ \mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0, \ldots, {g}_{M}\left( \mathbf{x}\right) \leq 0 \right\} \neq \varnothing$, then ${g}_{m}\left( \mathbf{x}\right) \leq 0$ is a redundant inequality if the new set formed by flipping this inequality is empty: $\widehat{\mathcal{R}} = \left\{ \mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0, \ldots, {g}_{m}\left( \mathbf{x}\right) \geq 0, \ldots, {g}_{M}\left( \mathbf{x}\right) \leq 0 \right\} = \varnothing$.
+
+Proof: Let $\widetilde{\mathcal{R}}$ be the set formed by removing the inequality ${g}_{m}\left( \mathbf{x}\right) \leq 0$: $\widetilde{\mathcal{R}} = \left\{ \mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0, \ldots, {g}_{m-1}\left( \mathbf{x}\right) \leq 0, {g}_{m+1}\left( \mathbf{x}\right) \leq 0, \ldots, {g}_{M}\left( \mathbf{x}\right) \leq 0 \right\}$. Then $\widetilde{\mathcal{R}} = \mathcal{R} \cup \widehat{\mathcal{R}}$. If $\widehat{\mathcal{R}} = \varnothing$, then $\mathcal{R} = \widetilde{\mathcal{R}}$ and the inequality ${g}_{m}\left( \mathbf{x}\right) \leq 0$ satisfies Definition 3.1.
+
+Note that the other direction of Lemma 3.1 may not hold. One example is when identical inequalities appear in the set: both inequalities in $\mathcal{R} = \left\{ \mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0, {g}_{2}\left( \mathbf{x}\right) \leq 0 \right\}$ are redundant by definition if ${g}_{1}\left( \cdot \right) = {g}_{2}\left( \cdot \right)$. However, the procedure in Lemma 3.1 will not identify them as redundant.
+
+### 7.2 Proof of Theorem 4.1
+
+Theorem 7.2 Given a ReLU NN with one hidden layer of $M$ neurons as specified in (1), Algorithm 1 covers all non-empty local polytopes created by the neural network. That is, for every $\mathbf{x} \in {\mathbb{R}}^{P}$, there exists a ${\mathcal{R}}_{\mathbf{c}}$ as defined in (3) such that $\mathbf{x} \in {\mathcal{R}}_{\mathbf{c}}$ and $\mathbf{c} \in {\mathcal{S}}_{R}$, where ${\mathcal{S}}_{R}$ is the result returned by Algorithm 1.
+
+Proof: Since each partitioning hyperplane divides ${\mathbb{R}}^{P}$ into two halfspaces, the ${2}^{M}$ activation patterns encoded by $\mathbf{c}$ cover the entire input space. We construct a graph with ${2}^{M}$ nodes, each representing a possible polytope code. Some of the nodes may correspond to an empty set due to conflicting inequalities. For each pair of non-empty polytopes that are one-adjacent to each other, we add an edge between their corresponding nodes. What is left to prove is that any pair of non-empty polytopes is connected.
+
+W.l.o.g. consider two nodes with codes $\mathbf{c}$ and $\widehat{\mathbf{c}}$ that differ only in the first $K$ bits, and assume the polytopes ${\mathcal{R}}_{\mathbf{c}}$ and ${\mathcal{R}}_{\widehat{\mathbf{c}}}$ are both non-empty. We will show that there must exist a non-empty polytope ${\mathcal{R}}_{\widetilde{\mathbf{c}}}$ that is one-adjacent to ${\mathcal{R}}_{\mathbf{c}}$, with code $\widetilde{\mathbf{c}}$ obtained from $\mathbf{c}$ by flipping one of the first $K$ bits. As a result, $\widetilde{\mathbf{c}}$ is one bit closer to $\widehat{\mathbf{c}}$.
+
+We prove the claim above by contradiction. Assume the claim is not true: then for whichever one of the first $K$ bits of $\mathbf{c}$ we flip, the corresponding polytope ${\mathcal{R}}_{{\widetilde{\mathbf{c}}}^{k}}$ must be empty. By Lemma 3.1, the inequalities ${\left( -1\right)}^{{c}_{m}}\left( {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m}\right) \leq 0$, $m = 1, 2, \ldots, K$, must all be redundant, which means they can be removed from the set of constraints (Telgen 1982, 1983):
+
+$$
+\begin{aligned}
+{\mathcal{R}}_{\mathbf{c}} &= \left\{ \mathbf{x} \mid {\left( -1\right)}^{{c}_{m}}\left( {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m}\right) \leq 0,\; m = 1,2,\ldots,M \right\} \\
+&= \left\{ \mathbf{x} \mid {\left( -1\right)}^{{c}_{m}}\left( {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m}\right) \leq 0,\; m = K+1,\ldots,M \right\} \\
+&\supseteq \left\{ \mathbf{x} \mid {\left( -1\right)}^{{c}_{m}}\left( {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m}\right) \leq 0,\; m = 1,2,\ldots,M \right\} \cup {} \\
+&\quad\;\; \{ \mathbf{x} \mid {\left( -1\right)}^{{c}_{m}}\left( {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m}\right) \geq 0,\; m = 1,\ldots,K, \\
+&\quad\qquad {\left( -1\right)}^{{c}_{m}}\left( {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m}\right) \leq 0,\; m = K+1,\ldots,M \} \\
+&= {\mathcal{R}}_{\mathbf{c}} \cup {\mathcal{R}}_{\widehat{\mathbf{c}}}.
+\end{aligned} \tag{22}
+$$
+
+The derived relationship in (22), together with the assumption that all ${\mathcal{R}}_{{\widetilde{\mathbf{c}}}^{k}}$ are empty, leads to the conclusion that ${\mathcal{R}}_{\widehat{\mathbf{c}}} = \varnothing$, which contradicts the non-emptiness assumption.
+
+Therefore, for any two non-empty polytopes ${\mathcal{R}}_{\mathbf{c}}$ and ${\mathcal{R}}_{\widehat{\mathbf{c}}}$, we can create a path from ${\mathcal{R}}_{\mathbf{c}}$ to ${\mathcal{R}}_{\widehat{\mathbf{c}}}$ by iteratively finding an intermediate polytope whose code is one bit closer to $\widehat{\mathbf{c}}$. Since the polytope graph covers the entire input space and all non-empty polytopes are connected, BFS guarantees the thoroughness of the traversing.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EQjwT2-Vaba/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EQjwT2-Vaba/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..f186a3ac0357b9bac79086e0404c5e4d2fb7651f
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EQjwT2-Vaba/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,489 @@
+§ TRAVERSING THE LOCAL POLYTOPES OF RELU NEURAL NETWORKS: A UNIFIED APPROACH FOR NETWORK VERIFICATION
+
+Anonymous Authors
+
+§ ABSTRACT
+
+Although neural networks (NNs) with ReLU activation functions have found success in a wide range of applications, their adoption in risk-sensitive settings has been limited by concerns about robustness and interpretability. Previous works to examine robustness and to improve interpretability partially exploited the piecewise linear functional form of ReLU NNs. In this paper, we explore the unique topological structure that ReLU NNs create in the input space, identifying the adjacency among the partitioned local polytopes and developing a traversing algorithm based on this adjacency. Our polytope traversing algorithm can be adapted to verify a wide range of network properties related to robustness and interpretability, providing a unified approach to examine the network behavior. As the traversing algorithm explicitly visits all local polytopes, it returns a clear and full picture of the network behavior within the traversed region. The time and space complexity of the traversing algorithm is determined by the number of a ReLU NN's partitioning hyperplanes passing through the traversing region.
+
+§ 1 INTRODUCTION & RELATED WORK
+
+Neural networks with rectified linear unit activation functions (ReLU NNs) are arguably the most popular type of neural networks in deep learning. This type of network enjoys many appealing properties including better performance than NNs with sigmoid activation (Glorot, Bordes, and Bengio 2011), universal approximation ability (Arora et al. 2018; Lu et al. 2017; Montufar et al. 2014; Schmidt-Hieber 2020), and fast training speed via scalable algorithms such as stochastic gradient descent (SGD) and its variants (Zou et al. 2020).
+
+Despite their strong predictive power, ReLU NNs have seen limited adoption in risk-sensitive settings (Bunel et al. 2018). These settings require the model to make robust predictions against potential adversarial noise in the input (Athalye et al. 2018; Carlini and Wagner 2017; Goodfellow, Shlens, and Szegedy 2014; Szegedy et al. 2014). The alignment between model behavior and human intuition is also desirable (Liu et al. 2019): prior knowledge such as monotonicity may be incorporated into model design and training (Daniels and Velikova 2010; Gupta et al. 2019; Liu et al. 2020; Sharma and Wehrheim 2020); users and auditors of the model may require a certain degree of explanations of the model predictions (Gopinath et al. 2019; Chu et al. 2018).
+
+The requirements in risk-sensitive settings have motivated a great amount of research on verifying certain properties of ReLU NNs. These works often exploit the piecewise linear functional form of ReLU NNs. In Bastani et al. (2016), the robustness of a network is verified in a very small input region via linear programming (LP). To account for the nonlinearity of ReLU activation functions, Ehlers (2017); Katz et al. (2017); Pulina and Tacchella (2010, 2012) formulated the robustness verification problem as a satisfiability modulo theories (SMT) problem. A more popular way to model the ReLU nonlinearity is to introduce a binary variable representing the on-off pattern of each ReLU neuron. Property verification can then be solved using mixed-integer programming (MIP) (Anderson et al. 2020; Fischetti and Jo 2017; Liu et al. 2020; Tjeng, Xiao, and Tedrake 2018; Weng et al. 2018).
+
+The piecewise linear functional form of ReLU NNs also creates distinct topological structures in the input space. Previous studies have shown that a ReLU NN partitions the input space into convex polytopes and has one linear model associated with each polytope (Montufar et al. 2014; Serra, Tjandraatmadja, and Ramalingam 2018; Sudjianto et al. 2020). Each polytope can be coded by a binary activation code, which reflects the on-off pattern of the ReLU neurons. The number of local polytopes is often used as a measure of the model's expressivity (Hanin and Rolnick 2019; Lu et al. 2017). Built upon this framework, multiple studies (Sudjianto et al. 2020; Yang, Zhang, and Sudjianto 2020; Zhao et al. 2021) tried to explain the behavior of ReLU NNs and to improve their interpretability. They viewed a ReLU NN as a collection of linear models. However, the relationship among the local polytopes and their linear models has not been fully investigated.
+
+In this paper, we explore the topological relationship among the local polytopes created by ReLU NNs. We propose algorithms to identify the adjacency among these polytopes, based on which we develop traversing algorithms to visit all polytopes within a bounded region of the input space. Our paper has the following major contributions:
+
+1. The polytope traversing algorithm provides a unified framework to examine the network behavior. Since each polytope contains a linear model whose properties are easy to verify, the full verification on a bounded domain is achieved after all the covered polytopes are visited and verified. We provide theoretical guarantees on the thoroughness of the traversing algorithm.
+
+Copyright (c) 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+2. Property verification based on the polytope traversing algorithm can be easily customized. Identifying the adjacency among the polytopes is formulated as LP. Within each local polytope, the user has the freedom to choose the solver most suitable for the verification sub-problem. We demonstrate that many common applications can be formulated as convex problems within each polytope.
+
+3. Because the polytope traversing algorithm explicitly visits all the local polytopes, it returns a full picture of the network behavior within the traversed region and improves interpretability.
+
+Although we focus on ReLU NNs with fully connected layers throughout this paper, our polytope traversing algorithm can be naturally extended to other piecewise linear networks such as those containing convolutional and maxpooling layers.
+
+The rest of this paper is organized as follows: Section 2 reviews how polytopes are created by ReLU NNs. Section 3 introduces two related concepts: the boundaries of a polytope and the adjacency among the polytopes. Our polytope traversing algorithm is described in Section 4. Section 5 demonstrates several cases of adapting the traversing algorithm for network property verification. The paper is concluded in Section 6.
+
+§ 2 THE LOCAL POLYTOPES IN RELU NNS
+
+§ 2.1 THE CASE OF ONE HIDDEN LAYER
+
+A ReLU NN partitions the input space ${\mathbb{R}}^{P}$ into several polytopes and forms a linear model within each polytope. To see this, we first consider a simple NN with one hidden layer of $M$ neurons. It takes an input $\mathbf{x} \in {\mathbb{R}}^{P}$ and outputs $\mathbf{o} \in {\mathbb{R}}^{Q}$ by calculating:
+
+$$
+\mathbf{o} = {\mathbf{W}}^{o}\mathbf{h} + {\mathbf{b}}^{o} = {\mathbf{W}}^{o}\left( {\sigma \left( {\mathbf{W}\mathbf{x} + \mathbf{b}}\right) }\right) + {\mathbf{b}}^{o}
+$$
+
+$$
+\text{ where }\sigma {\left( \mathbf{x}\right) }_{m} = \left\{ \begin{array}{ll} 0, & {\mathbf{x}}_{m} < 0 \\ {\mathbf{x}}_{m}, & {\mathbf{x}}_{m} \geq 0 \end{array}\right. \text{ . } \tag{1}
+$$
+
+For problems with a binary or categorical target variable (i.e., binary or multi-class classification), a sigmoid or softmax layer, respectively, is added after $\mathbf{o}$ to convert the NN outputs to proper probabilistic predictions.
+
+The ReLU activation function $\sigma \left( \cdot \right)$ inserts non-linearity into the model by checking a set of linear inequalities: ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \geq 0, m = 1,2,\ldots,M$, where ${\mathbf{w}}_{m}^{T}$ is the $m$th row of matrix $\mathbf{W}$ and ${b}_{m}$ is the $m$th element of $\mathbf{b}$. Each neuron in the hidden layer creates a partitioning hyperplane in the input space with the linear equation ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} = 0$. The areas on the two sides of the hyperplane are two halfspaces. The entire input space is, therefore, partitioned by these $M$ hyperplanes. We define a local polytope as a set containing all points that fall on the same side of each and every hyperplane. The polytope encoding function (2) uses an element-wise indicator function $\mathbb{1}\left( \cdot \right)$ to create a unique binary code $\mathbf{c}$ for each polytope. Since the $m$th neuron is "ON" for some $\mathbf{x}$ if ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \geq 0$, the code $\mathbf{c}$ also represents the on-off pattern of the neurons. Using the results of this encoding function, we can express each polytope as an intersection of $M$ halfspaces as in (3), where the binary code $\mathbf{c}$ controls the directions of the inequalities.
+
+$$
+C\left( \mathbf{x}\right) = \mathbb{1}\left( {\mathbf{W}\mathbf{x} + \mathbf{b} \geq 0}\right) . \tag{2}
+$$
+
+$$
+{\mathcal{R}}_{\mathbf{c}} = \left\{ \mathbf{x} \mid {\left( -1\right)}^{{c}_{m}}\left( {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m}\right) \leq 0, \forall m = 1,\ldots,M \right\}. \tag{3}
+$$
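
Equations (2) and (3) are straightforward to state in code; a small sketch for a one-hidden-layer network (the function names are ours):

```python
import numpy as np

def polytope_code(x, W, b):
    """Encoding function (2): the on/off pattern of the M hidden neurons at x."""
    return (W @ x + b >= 0).astype(int)

def in_polytope(x, W, b, code):
    """Membership in R_c per (3): (-1)^{c_m} (w_m^T x + b_m) <= 0 for all m."""
    s = (-1.0) ** np.asarray(code)
    return bool(np.all(s * (W @ x + b) <= 0))
```

By construction, every `x` satisfies `in_polytope(x, W, b, polytope_code(x, W, b))`.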
+
+Figure 1.(b) shows an example of ReLU NN trained on a two-dimensional synthetic dataset (plotted in Figure 1.(a)). The bounded input space is ${\left\lbrack -1,1\right\rbrack }^{2}$ and the target variable is binary. The network has one hidden layer of 20 neurons. The partitioning hyperplanes associated with these neurons are plotted as the blue dashed lines. They form in total 91 local polytopes within the bounded input space.
+
+For a given $\mathbf{x}$, if ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \geq 0$, the ReLU neuron turns on and passes through the value. Otherwise, the neuron is off and suppresses the value to zero. Therefore, if we know the $m$th neuron is off, we can mask the corresponding ${\mathbf{w}}_{m}$ and ${b}_{m}$ by zeros and create ${\widetilde{\mathbf{W}}}_{\mathbf{c}}$ and ${\widetilde{\mathbf{b}}}_{\mathbf{c}}$ that satisfy (5). The non-linear operation, therefore, can be replaced by a locally linear operation after zero-masking. Because each local polytope ${\mathcal{R}}_{\mathbf{c}}$ has a unique neuron activation pattern encoded by $\mathbf{c}$, the zero-masking process in (4) is also unique for each polytope. Here, $\mathbf{1}$ is a vector of ones of length $P$ and $\otimes$ denotes the element-wise product.
+
+$$
+{\widetilde{\mathbf{W}}}_{\mathbf{c}} = \mathbf{W} \otimes \left( {\mathbf{c}{\mathbf{1}}^{T}}\right) ,{\widetilde{\mathbf{b}}}_{\mathbf{c}} = \mathbf{b} \otimes \mathbf{c}, \tag{4}
+$$
+
+$$
+\sigma \left( {\mathbf{W}\mathbf{x} + \mathbf{b}}\right) = {\widetilde{\mathbf{W}}}_{\mathbf{c}}\mathbf{x} + {\widetilde{\mathbf{b}}}_{\mathbf{c}},\;\forall \mathbf{x} \in {\mathcal{R}}_{\mathbf{c}}. \tag{5}
+$$
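
A quick numerical sanity check of (4)-(5), with random weights and our own variable names: within the polytope containing $\mathbf{x}$, the ReLU layer and the zero-masked affine map coincide.

```python
import numpy as np

rng = np.random.default_rng(0)
W, b = rng.normal(size=(5, 3)), rng.normal(size=5)
x = rng.normal(size=3)

c = (W @ x + b >= 0).astype(float)   # activation code of x, eq. (2)
W_c = W * c[:, None]                 # zero-masked weights, eq. (4)
b_c = b * c                          # zero-masked bias,    eq. (4)

# eq. (5): relu(W x + b) equals the locally linear map on R_c
assert np.allclose(np.maximum(W @ x + b, 0.0), W_c @ x + b_c)
```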
+
+Within each polytope, as the non-linearity is taken out by the zero-masking process, the input $\mathbf{x}$ and output $\mathbf{o}$ have a linear relationship:
+
+$$
+\mathbf{o} = {\mathbf{W}}^{o}\left( {\sigma \left( {\mathbf{W}\mathbf{x} + \mathbf{b}}\right) }\right) + {\mathbf{b}}^{o} = {\widehat{\mathbf{W}}}_{\mathbf{c}}^{o}\mathbf{x} + {\widehat{\mathbf{b}}}_{\mathbf{c}}^{o},\forall \mathbf{x} \in {\mathcal{R}}_{\mathbf{c}}, \tag{6}
+$$
+
+$$
+\text{ where }{\widehat{\mathbf{W}}}_{\mathbf{c}}^{o} = {\mathbf{W}}^{o}{\widetilde{\mathbf{W}}}_{\mathbf{c}},{\widehat{\mathbf{b}}}_{\mathbf{c}}^{o} = {\mathbf{W}}^{o}{\widetilde{\mathbf{b}}}_{\mathbf{c}} + {\mathbf{b}}^{o}
+$$
+
+The linear model associated with polytope ${\mathcal{R}}_{\mathbf{c}}$ has the weight matrix ${\widehat{\mathbf{W}}}_{\mathbf{c}}^{o}$ and the bias vector ${\widehat{\mathbf{b}}}_{\mathbf{c}}^{o}$. The ReLU NN is now represented by a collection of linear models, each defined on a local polytope ${\mathcal{R}}_{\mathbf{c}}$.
+
+In Figure 1.(b), we represent the linear model in each local polytope as a red solid line indicating ${\left( {\widehat{\mathbf{w}}}_{\mathbf{c}}^{o}\right)}^{T}\mathbf{x} + {\widehat{b}}_{\mathbf{c}}^{o} = 0$. In this binary response case, the two sides of this line have opposite class predictions. We only plot the line if it passes through its corresponding polytope. For the other polytopes, the entire polytope falls on one side of its class-separating line and the predicted class is the same within the whole polytope. The red lines together form the decision boundary of the ReLU NN and are continuous when passing from one polytope to another, a direct result of the ReLU NN being a continuous model.
+
+§ 2.2 THE CASE OF MULTIPLE LAYERS
+
+We can generalize the results to ReLU NNs with multiple hidden layers. A ReLU NN with $L$ hidden layers hierarchically partitions the input space and is locally linear in each and every level- $L$ polytope. Each level- $L$ polytope ${\mathcal{R}}^{L}$ has a unique binary code ${c}^{1}{c}^{2}\ldots {c}^{L}$ representing the activation pattern of the neurons in all $L$ hidden layers. The corresponding partitioning hyperplanes of each level, ${\widehat{\mathbf{W}}}^{l}\mathbf{x} + {\widehat{\mathbf{b}}}^{l} = 0,l = 1,2,\ldots ,L$ , can be calculated recursively level by level, using the zero masking procedure:
+
+$$
+{\widehat{\mathbf{W}}}^{1} = {\mathbf{W}}^{1},{\widehat{\mathbf{b}}}^{1} = {\mathbf{b}}^{1} \tag{7}
+$$
+
+$$
+{\widetilde{\mathbf{W}}}^{l} = {\widehat{\mathbf{W}}}^{l} \otimes \left( {{\mathbf{c}}^{l}{\mathbf{1}}^{T}}\right) ,{\widetilde{\mathbf{b}}}^{l} = {\widehat{\mathbf{b}}}^{l} \otimes {\mathbf{c}}^{l} \tag{8}
+$$
+
+$$
+{\widehat{\mathbf{W}}}^{l + 1} = {\mathbf{W}}^{l + 1}{\widetilde{\mathbf{W}}}^{l},{\widehat{\mathbf{b}}}^{l + 1} = {\mathbf{W}}^{l + 1}{\widetilde{\mathbf{b}}}^{l} + {\mathbf{b}}^{l + 1}. \tag{9}
+$$
+
+We emphasize that ${\widetilde{\mathbf{W}}}^{l},{\widetilde{\mathbf{b}}}^{l},{\widehat{\mathbf{W}}}^{l + 1}$, and ${\widehat{\mathbf{b}}}^{l + 1}$ depend on all polytope codes up to level $l$: ${\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l}$.
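
The recursion (7)-(9), combined with the per-level codes, can be traced in a few lines. The snippet below (our own sketch) verifies that the resulting affine map of the hidden stack reproduces the ReLU forward pass for the polytope containing a given $\mathbf{x}$:

```python
import numpy as np

def local_affine(x, layers):
    """Recursions (7)-(9): effective affine map of the hidden layers on the
    level-L polytope containing x, so that relu-forward(x) == Wt @ x + bt."""
    for l, (W, b) in enumerate(layers):
        if l == 0:
            What, bhat = W, b                      # eq. (7)
        else:
            What, bhat = W @ Wt, W @ bt + b        # eq. (9)
        c = (What @ x + bhat >= 0).astype(float)   # code at this level
        Wt, bt = What * c[:, None], bhat * c       # eq. (8)
    return Wt, bt

rng = np.random.default_rng(1)
layers = [(rng.normal(size=(4, 2)), rng.normal(size=4)),
          (rng.normal(size=(3, 4)), rng.normal(size=3))]
x = rng.normal(size=2)

Wt, bt = local_affine(x, layers)
h = x
for W, b in layers:                                # ordinary forward pass
    h = np.maximum(W @ h + b, 0.0)
assert np.allclose(h, Wt @ x + bt)
```

Composing the result with the output layer as in (14) gives the full local linear model.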
+
+
+
+Figure 1: Examples of trained ReLU NNs and their local polytopes. (a) The grid-like training data with a binary target variable. (b) A trained ReLU NN with one hidden layer of 20 neurons. The heatmap shows the predicted probability of a sample belonging to class 1. The blue dashed lines are the partitioning hyperplanes associated with the ReLU neurons, which form 91 local polytopes in total. The red solid lines represent the linear model within each polytope where class separation occurs. (c) A trained ReLU NN with two hidden layers of 10 and 5 neurons respectively. The blue dashed lines are the partitioning hyperplanes associated with the first 10 ReLU neurons, forming 20 level-1 polytopes. The orange dashed lines are the partitioning hyperplanes associated with the next 5 ReLU neurons within each level-1 polytope. There are in total 41 (level-2) local polytopes. The red solid lines represent the linear model within each level-2 polytope where class separation occurs.
+
+At each level $l$ , the encoding function ${C}^{l}\left( \cdot \right)$ and the polytope ${\mathcal{R}}^{l}$ expressed as an intersection of $\mathop{\sum }\limits_{{t = 1}}^{l}{M}_{t}$ halfspaces can be written recursively as:
+
+$$
+{C}^{1}\left( \mathbf{x}\right) = \mathbb{1}\left( {{\mathbf{W}}^{1}\mathbf{x} + {\mathbf{b}}^{1} \geq 0}\right) \tag{10}
+$$
+
+$$
+{\mathcal{R}}^{1} = \left\{ \mathbf{x} \mid {\left( -1\right)}^{{c}_{m}}\left( {\left( {\mathbf{w}}^{1}\right)}_{m}^{T}\mathbf{x} + {\left( {b}^{1}\right)}_{m}\right) \leq 0, \forall m = 1,2,\ldots,{M}_{1} \right\} \tag{11}
+$$
+
+$$
+{C}^{l + 1}\left( \mathbf{x}\right) = \mathbb{1}\left( {{\widehat{\mathbf{W}}}^{l + 1}\mathbf{x} + {\widehat{\mathbf{b}}}^{l + 1} \geq 0}\right) ,\forall \mathbf{x} \in {\mathcal{R}}^{l} \tag{12}
+$$
+
+$$
+{\mathcal{R}}^{l + 1} = \left\{ \mathbf{x} \mid {\left( -1\right)}^{{c}_{m}}\left( {\left( {\widehat{\mathbf{w}}}^{l + 1}\right)}_{m}^{T}\mathbf{x} + {\left( {\widehat{b}}^{l + 1}\right)}_{m}\right) \leq 0, \forall m = 1,2,\ldots,{M}_{l + 1} \right\} \cap {\mathcal{R}}^{l}. \tag{13}
+$$
+
+Finally, the linear model in a level- $L$ polytope is:
+
+$$
+\mathbf{o} = {\widehat{\mathbf{W}}}^{o}\mathbf{x} + {\widehat{\mathbf{b}}}^{o},\forall \mathbf{x} \in {\mathcal{R}}^{L}, \tag{14}
+$$
+
+$$
+\text{ where }{\widehat{\mathbf{W}}}^{o} = {\mathbf{W}}^{o}{\widetilde{\mathbf{W}}}^{L},{\widehat{\mathbf{b}}}^{o} = {\mathbf{W}}^{o}{\widetilde{\mathbf{b}}}^{L} + {\mathbf{b}}^{o}\text{ . }
+$$
+
+Figure 1.(c) shows an example of a ReLU NN with two hidden layers of size 10 and 5 respectively. The partitioning hyperplanes associated with the first 10 neurons are plotted as the blue dashed lines. They form 20 level-1 polytopes within the bounded input space. Within each level-1 polytope, the hyperplanes associated with the next 5 neurons further partition the polytope. In many cases, some of the 5 hyperplanes fall outside the level-1 polytope and, therefore, do not create a new sub-partition. The hyperplanes that do create new partitions are plotted as the orange dashed lines. The orange lines are only straight within a level-1 polytope but are continuous when passing from one polytope to another, which is also a result of the ReLU NN being a continuous model. In total, this ReLU NN creates 41 (level-2) local polytopes. As in Figure 1.(b), the linear model within each level-2 polytope is represented as a red solid line if class separation occurs within the polytope.
+
+§ 3 POLYTOPE BOUNDARIES AND ADJACENCY
+
+Beyond viewing ReLU NNs as a collection of linear models defined on local polytopes, we explore the topological relationship among these polytopes. A key concept is the boundaries of each polytope. As shown in (13), each level-$l$ polytope ${\mathcal{R}}_{\mathbf{c}}$ with corresponding binary code $\mathbf{c} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l}$ is an intersection of $\mathop{\sum }\limits_{{t = 1}}^{l}{M}_{t}$ halfspaces induced by a set of inequality constraints. Two situations can arise among these inequalities. First, an arbitrary $\mathbf{c}$ may lead to conflicting inequalities and make ${\mathcal{R}}_{\mathbf{c}}$ an empty set. This situation can be common when the number of neurons is much larger than the dimension of the input space. Second, there can be redundant inequalities, meaning that removing them does not affect the set ${\mathcal{R}}_{\mathbf{c}}$. We now show that the non-redundant inequalities are closely related to the boundaries of a polytope.
+
Definition 3.1 Let $\mathcal{R}$ contain all $\mathbf{x} \in {\mathbb{R}}^{P}$ that satisfy $M$ linear inequalities: $\mathcal{R} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0,{g}_{2}\left( \mathbf{x}\right) \leq 0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq 0}\right\}$ . Assume that $\mathcal{R} \neq \varnothing$ . Let $\widetilde{\mathcal{R}}$ contain all $\mathbf{x}$ ’s that satisfy the $M - 1$ linear inequalities obtained by removing the $m$ th one: $\widetilde{\mathcal{R}} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0,\ldots ,{g}_{m - 1}\left( \mathbf{x}\right) \leq 0,{g}_{m + 1}\left( \mathbf{x}\right) \leq }\right.$ $\left. {0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq 0}\right\}$ . Then the inequality ${g}_{m}\left( \mathbf{x}\right) \leq 0$ is a redundant inequality with respect to (w.r.t.) $\mathcal{R}$ if $\mathcal{R} = \widetilde{\mathcal{R}}$ .
+
With redundant inequalities defined above, the following lemma provides a way to identify them. The proof of this lemma is in the Appendix.
+
+Lemma 3.1 Given a set $\mathcal{R} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq }\right.$ $0\} \neq \varnothing$ , then ${g}_{m}\left( \mathbf{x}\right)$ is a redundant inequality if the new set formed by flipping this inequality is empty: $\widehat{\mathcal{R}} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq }\right.$ $\left. {0,\ldots ,{g}_{m}\left( \mathbf{x}\right) \geq 0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq 0}\right\} = \varnothing$ .
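Lemma 3.1 turns redundancy detection into a feasibility check, which any LP solver can perform as a Phase-I problem. The sketch below is our own illustration (the helper names `is_feasible` and `is_redundant` are ours, not from the paper), with the polytope written as $A\mathbf{x} \leq \mathbf{b}$:

```python
import numpy as np
from scipy.optimize import linprog

def is_feasible(A, b):
    """Check whether {x | A x <= b} is non-empty (a Phase-I LP).

    linprog with a zero objective succeeds iff the constraints
    admit a feasible point (status 0 = optimal/feasible).
    """
    n = A.shape[1]
    res = linprog(c=np.zeros(n), A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n, method="highs")
    return res.status == 0

def is_redundant(A, b, m):
    """Lemma 3.1: inequality m of {x | A x <= b} is redundant
    if flipping its direction (a_m^T x >= b_m) yields an empty set."""
    A_flip, b_flip = A.copy(), b.copy()
    A_flip[m], b_flip[m] = -A[m], -b[m]  # flip the direction of row m
    return not is_feasible(A_flip, b_flip)

# Unit square plus a redundant constraint x1 <= 2
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.], [1., 0.]])
b = np.array([1., 0., 1., 0., 2.])
```

In this example, the fifth inequality $x_1 \leq 2$ is implied by $x_1 \leq 1$, so flipping it yields an empty set and it is flagged as redundant, while the binding inequality $x_1 \leq 1$ is not.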
+
We can now define the boundaries of a polytope formed by a set of linear inequalities using a procedure similar to that in Lemma 3.1. The concept of polytope boundaries also leads to the definition of adjacency. Intuitively, we can move from one polytope to its adjacent polytope by crossing a boundary.
+
+Definition 3.2 Given a non-empty set formed by $M$ linear inequalities: $\mathcal{R} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq 0}\right\} \neq \varnothing$ , then the hyperplane ${g}_{m}\left( \mathbf{x}\right) = 0$ is a boundary of $\mathcal{R}$ if the new set formed by flipping the corresponding inequality is non-empty: $\widehat{\mathcal{R}} = \left\{ {\mathbf{x} \mid {g}_{1}\left( \mathbf{x}\right) \leq 0,\ldots ,{g}_{m}\left( \mathbf{x}\right) \geq 0,\ldots ,{g}_{M}\left( \mathbf{x}\right) \leq 0}\right\} \neq \varnothing .$ Polytope $\widehat{\mathcal{R}}$ is called one-adjacent to $\mathcal{R}$ .
+
Since for each polytope the directions of its linear inequalities are reflected by the binary code, two one-adjacent polytopes must have codes that differ by one bit. Figure 2.(a) demonstrates the adjacency among the local polytopes. The ReLU NN is the same as in Figure 1.(b). Using the procedure in Definition 3.2, 4 out of the 20 partitioning hyperplanes are identified as the boundaries of polytope No. 0 and marked in red. The 4 one-adjacent neighbors to polytope No. 0 are No. 1, 2, 3, and 4; each can be reached by crossing one boundary.
+
As shown in Section 2.2, ReLU NNs create polytopes level by level. We follow the same hierarchy to define polytope adjacency. Assume two non-empty level- $l$ polytopes, $\mathcal{R}$ and $\widehat{\mathcal{R}}$ , are inside the same level- $(l-1)$ polytope, which means their corresponding codes $\mathbf{c} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l}$ and $\widehat{\mathbf{c}} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\widehat{\mathbf{c}}}^{l}$ differ only at level $l$ . We say that polytope $\widehat{\mathcal{R}}$ is a level- $l$ one-adjacent neighbor of $\mathcal{R}$ if ${\widehat{\mathbf{c}}}^{l}$ and ${\mathbf{c}}^{l}$ differ in only one bit.
+
The condition that $\mathbf{c} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l}$ and $\widehat{\mathbf{c}} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\widehat{\mathbf{c}}}^{l}$ differ only at level $l$ is important. In this way, the two linear inequalities associated with each pair of bits in $\mathbf{c}$ and $\widehat{\mathbf{c}}$ have the same coefficients, and the difference between ${\mathbf{c}}^{l}$ and ${\widehat{\mathbf{c}}}^{l}$ only changes the direction of one linear inequality. On the other hand, if the two codes differ at a level ${l}^{\prime } < l$ , then according to the recursive calculation in (8) and (9), the codes starting from level ${l}^{\prime } + 1$ will correspond to linear inequalities with different coefficients, rendering the adjacency in Definition 3.2 inapplicable.
+
Figure 2.(b) demonstrates the hierarchical adjacency among the local polytopes. The ReLU NN is the same as in Figure 1.(c). Level-1 polytopes $\left( {1, \cdot }\right)$ and $\left( {2, \cdot }\right)$ are both (level-1) one-adjacent to $\left( {0, \cdot }\right)$ . Within the level-1 polytope $\left( {0, \cdot }\right)$ , level-2 polytopes (0,0) and (0,1) are (level-2) one-adjacent to each other. Similarly, we can identify the level-2 adjacency of the other two pairs $\left( {1,0}\right) - \left( {1,1}\right)$ and $\left( {2,0}\right) - \left( {2,1}\right)$ . Note that in the plot, even though one can move from polytope (2,1) to (0,1) by crossing one partitioning hyperplane, we do not define these two polytopes as adjacent, as they lie in two different level-1 polytopes.
+
+§ 4 POLYTOPE TRAVERSING
+
+§ 4.1 THE CASE OF ONE HIDDEN LAYER
+
The adjacency defined in the previous section provides an order in which to traverse the local polytopes: starting from an initial polytope $\mathcal{R}$ , visiting all its one-adjacent neighbors, then visiting all the neighbors' neighbors, and so on.
+
This algorithm can be viewed as breadth-first search (BFS) on a polytope graph. To create this graph, we turn each polytope created by the ReLU NN into a node. An edge is added between each pair of polytopes that are one-adjacent to each other. The BFS algorithm uses a queue to keep track of the visited polytopes. At the beginning of traversing, the initial polytope is added to an empty queue and is marked as visited. In each iteration, we pop the first polytope from the queue and identify all of its one-adjacent neighbors. Among these identified polytopes, we add those that have not been visited to the back of the queue and mark them as visited. The iteration stops when the queue is empty.
+
+The key component of the polytope traversing algorithm is to identify a polytope’s one-adjacent neighbors. For a polytope ${\mathcal{R}}_{c}$ coded by $\mathbf{c}$ of $M$ bits, there are at most $M$ one-adjacent neighbors with codes corresponding to flipping one of the bits in $\mathbf{c}$ . Each valid one-adjacent neighbor must be non-empty and can be reached by crossing a boundary. Therefore, we can check each linear inequality in (3) and determine whether it is a boundary or redundant. Some techniques of identifying redundant inequalities are summarized in Telgen (1983). By flipping the bits corresponding to the identified boundaries, we obtain the codes of the one-adjacent polytopes.
+
Equivalently, we can identify the one-adjacent neighbors by going through all $M$ candidate codes and selecting those corresponding to non-empty sets. Checking the feasibility of a set of linear inequalities is often referred to as the "Phase-I problem" of LP and can be solved efficiently by modern LP solvers. During the BFS iterations, we can hash the checked codes to avoid checking them repeatedly. The BFS-based polytope traversing algorithm is summarized in Algorithm 1. We now state the correctness of this algorithm, with its proof in the Appendix.
+
Theorem 4.1 Given a ReLU NN with one hidden layer of $M$ neurons as specified in (1), Algorithm 1 covers all non-empty local polytopes created by the neural network. That is, for all $\mathbf{x} \in {\mathbb{R}}^{P}$ , there exists one ${\mathcal{R}}_{\mathbf{c}}$ as defined in (3) such that $\mathbf{x} \in {\mathcal{R}}_{\mathbf{c}}$ and $\mathbf{c} \in {\mathcal{S}}_{R}$ , where ${\mathcal{S}}_{R}$ is the result returned by Algorithm 1.
+
Algorithm 1 visits all the local polytopes created by a ReLU NN within ${\mathbb{R}}^{P}$ . The time complexity is exponential in the number of neurons, as all ${2}^{M}$ possible activation patterns are checked once in the worst-case scenario. The space complexity is also exponential in the number of neurons, as we hash all the checked activation patterns. Furthermore, for each activation pattern, we solve a Phase-I problem of LP with $M$ inequalities in ${\mathbb{R}}^{P}$ . Traversing all local polytopes in ${\mathbb{R}}^{P}$ , therefore, becomes intractable for neural networks with a large number of neurons.
+
Fortunately, traversing all of ${\mathbb{R}}^{P}$ is usually unnecessary. Firstly, a neural network may run into extrapolation issues for points outside the sample distribution, so the polytopes far away from the areas covered by the samples are often considered unreliable. Secondly, many real-life applications, to be discussed in Section 5, only require traversing within a small bounded region to examine the local behavior of a model. In the next section, we introduce a technique to improve the efficiency of traversing within a bounded region.
+
+Algorithm 1: BFS-Based Polytope Traversing
+
Require: A ReLU NN with one hidden layer of $M$ neurons as specified in (1).
Require: An initial point $\mathbf{x} \in {\mathbb{R}}^{P}$ .

1. Initialize an empty queue $\mathcal{Q}$ for BFS.
2. Initialize an empty set ${\mathcal{S}}_{R}$ to store the codes of all visited polytopes.
3. Initialize an empty set ${\mathcal{S}}_{\mathbf{c}}$ to store all checked codes.
4. Calculate $\mathbf{x}$ ’s initial polytope code $\mathbf{c}$ using (2).
5. Append $\mathbf{c}$ to the end of $\mathcal{Q}$ .
6. Add $\mathbf{c}$ to both ${\mathcal{S}}_{R}$ and ${\mathcal{S}}_{\mathbf{c}}$ .
7. While $\mathcal{Q}$ is not empty:
    1. Pop the first element of the BFS queue: $\mathbf{c} = \mathcal{Q}$ .pop( ).
    2. For $m = 1,2,\ldots ,M$ :
        1. Create a candidate polytope code $\widehat{\mathbf{c}}$ by flipping one bit in $\mathbf{c}$ : ${\widehat{c}}_{m} = 1 - {c}_{m}$ and ${\widehat{c}}_{k} = {c}_{k}$ for all $k \neq m$ .
        2. If $\widehat{\mathbf{c}} \notin {\mathcal{S}}_{\mathbf{c}}$ :
            1. Check whether ${\mathcal{R}}_{\widehat{\mathbf{c}}} = \left\{ {\mathbf{x} \mid {\left( -1\right) }^{{\widehat{c}}_{k}}\left( {{\mathbf{w}}_{k}^{T}\mathbf{x} + {b}_{k}}\right) \leq 0, k = 1,2,\ldots ,M}\right\}$ is empty using LP.
            2. Add $\widehat{\mathbf{c}}$ to ${\mathcal{S}}_{\mathbf{c}}$ .
            3. If ${\mathcal{R}}_{\widehat{\mathbf{c}}} \neq \varnothing$ : append $\widehat{\mathbf{c}}$ to the end of $\mathcal{Q}$ and add $\widehat{\mathbf{c}}$ to ${\mathcal{S}}_{R}$ .
8. Return ${\mathcal{S}}_{R}$ .
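As a concrete illustration of Algorithm 1, the BFS over activation patterns can be written compactly in Python. This is our own sketch (helper names are ours), assuming the hidden layer is given as a weight matrix `W` of shape $M \times P$ and a bias vector `b`:

```python
import numpy as np
from collections import deque
from scipy.optimize import linprog

def polytope_code(W, b, x):
    """Activation pattern of x: bit m is 1 iff neuron m is active."""
    return tuple(int(v) for v in (W @ x + b > 0))

def polytope_nonempty(W, b, code):
    """Phase-I LP: is the polytope with this activation pattern non-empty?

    Bit c_m = 0 encodes w_m^T x + b_m <= 0 (inactive neuron);
    bit c_m = 1 encodes w_m^T x + b_m >= 0 (active neuron).
    """
    sign = np.where(np.array(code) == 1, -1.0, 1.0)
    res = linprog(np.zeros(W.shape[1]),
                  A_ub=sign[:, None] * W, b_ub=-sign * b,
                  bounds=[(None, None)] * W.shape[1], method="highs")
    return res.status == 0

def traverse(W, b, x0):
    """BFS over all non-empty local polytopes (Algorithm 1)."""
    c0 = polytope_code(W, b, x0)
    queue, visited, checked = deque([c0]), {c0}, {c0}
    while queue:
        c = queue.popleft()
        for m in range(len(c)):                    # try flipping each bit
            cand = c[:m] + (1 - c[m],) + c[m + 1:]
            if cand not in checked:
                checked.add(cand)
                if polytope_nonempty(W, b, cand):  # valid neighbor
                    visited.add(cand)
                    queue.append(cand)
    return visited
```

For a toy layer whose two neurons split the plane into the four quadrants ( `W` the identity, `b` zero), `traverse` returns all four activation patterns.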
+
+§ 4.2 POLYTOPE TRAVERSING WITHIN A BOUNDED REGION
+
We first consider a region with each dimension bounded independently: ${l}_{j} \leq {x}_{j} \leq {u}_{j},j = 1,2,\ldots ,P$ . These $2 \times P$ linear inequalities create a hypercube denoted as $\mathcal{B}$ . During the BFS-based polytope traversing, we repetitively flip the direction of one of the $M$ inequalities to identify the one-adjacent neighbors. When the bounded region is small, it is likely that only a small number of the $M$ hyperplanes cut through the hypercube. For each of the other hyperplanes, the entire hypercube falls on only one side, so flipping to the other side of such a hyperplane would leave the bounded region. Therefore, at the very beginning of polytope traversing, we can run through the $M$ hyperplanes to identify those cutting through the hypercube. Then, in each neighbor-identifying step, we only flip these hyperplanes.
+
+
Figure 2: Demonstration of the BFS-based polytope traversing algorithm. (a) Traversing the 8 local polytopes within the bounded region. The ReLU NN is the same as in Figure 1.(b). The lines marked in red are the boundaries of polytope No. 0. (b) Traversing the 6 local polytopes within the bounded region. The ReLU NN is the same as in Figure 1.(c). The polytopes are indexed as "(level-1, level-2)". (c) The evolution of the BFS queue for traversing the local polytopes in (a). The gray arrows show the traversing order. The colored arrows at the bottom indicate the one-adjacent neighbors added to the queue. (d) The evolution of the hierarchical BFS queue for traversing the local polytopes in (b). The level-1 BFS queue is shown vertically while the level-2 BFS queue is shown horizontally.
+
To identify the hyperplanes cutting through the hypercube, we denote the two sides of a hyperplane by $\mathcal{H}$ and $\overline{\mathcal{H}}$ : $\mathcal{H} = \left\{ {\mathbf{x} \mid {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \leq 0}\right\}$ and $\overline{\mathcal{H}} = \left\{ {\mathbf{x} \mid {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \geq 0}\right\}$ . If neither $\mathcal{H} \cap \mathcal{B}$ nor $\overline{\mathcal{H}} \cap \mathcal{B}$ is empty, we say the hyperplane ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} = 0$ cuts through $\mathcal{B}$ . Since $\mathcal{H} \cap \mathcal{B}$ and $\overline{\mathcal{H}} \cap \mathcal{B}$ are both constrained by $2 \times P + 1$ inequalities, checking their feasibility can again be formulated as a Phase-I problem of LP. A faster and simpler method is to bound ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m}$ subject to $\mathbf{x} \in \mathcal{B}$ , which has a closed-form solution; the hyperplane then cuts through $\mathcal{B}$ if zero lies between the resulting lower and upper bounds. We name this technique hyperplane pre-screening and summarize it in Algorithm 2.
+
+Algorithm 2: Hyperplane Pre-Screening
+
Require: A set of hyperplanes ${\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} = 0$ , $m = 1,2,\ldots ,M$ .
Require: A bounded traversing region $\mathcal{B}$ , e.g. $\left\{ {\mathbf{x} \mid {l}_{j} \leq {x}_{j} \leq {u}_{j}, j = 1,2,\ldots ,P}\right\}$ .

1. Initialize an empty set $\mathcal{T}$ to store all hyperplanes cutting through $\mathcal{B}$ .
2. For $m = 1,2,\ldots ,M$ :
    1. Get the two halfspaces $\mathcal{H} = \left\{ {\mathbf{x} \mid {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \leq 0}\right\}$ and $\overline{\mathcal{H}} = \left\{ {\mathbf{x} \mid {\mathbf{w}}_{m}^{T}\mathbf{x} + {b}_{m} \geq 0}\right\}$ .
    2. If $\mathcal{H} \cap \mathcal{B} \neq \varnothing$ and $\overline{\mathcal{H}} \cap \mathcal{B} \neq \varnothing$ : add $m$ to $\mathcal{T}$ .
3. Return $\mathcal{T}$ .
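The closed-form bound used in the pre-screening can be computed coordinate-wise: over the box $l \leq \mathbf{x} \leq u$ , the maximum of ${\mathbf{w}}^{T}\mathbf{x} + b$ takes ${u}_{j}$ where ${w}_{j} > 0$ and ${l}_{j}$ otherwise, and conversely for the minimum. A possible implementation (our own sketch, with our own helper names):

```python
import numpy as np

def cuts_through_box(w, b, lo, hi):
    """Does the hyperplane w^T x + b = 0 cut through the box
    {x | lo <= x <= hi}?  Bound w^T x + b in closed form: the maximum
    picks hi_j where w_j > 0 and lo_j otherwise; the minimum conversely.
    The hyperplane cuts through iff zero lies between the two bounds."""
    upper = b + np.sum(np.where(w > 0, w * hi, w * lo))
    lower = b + np.sum(np.where(w > 0, w * lo, w * hi))
    return lower <= 0.0 <= upper

def prescreen(W, b, lo, hi):
    """Algorithm 2: indices of the hyperplanes cutting through the box."""
    return [m for m in range(W.shape[0])
            if cuts_through_box(W[m], b[m], lo, hi)]
```

Each hyperplane costs only $O(P)$ here, versus one LP per halfspace for the Phase-I formulation.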
+
Hyperplane pre-screening effectively reduces the complexity from $\mathcal{O}\left( {2}^{M}\right)$ to $\mathcal{O}\left( {2}^{\left| \mathcal{T}\right| }\right)$ , where $\left| \mathcal{T}\right|$ is the number of hyperplanes cutting through the hypercube. The number ${2}^{\left| \mathcal{T}\right| }$ corresponds to the worst-case scenario. Since the BFS-based traversing only checks non-empty polytopes and their potential one-adjacent neighbors, the number of activation patterns actually checked can be less than ${2}^{\left| \mathcal{T}\right| }$ . In general, the fewer hyperplanes cut through $\mathcal{B}$ , the faster the polytope traversing finishes.
+
Figure 2.(a) shows traversing the 8 local polytopes within the bounded region. The ReLU NN is the same as in Figure 1.(b). The lines marked in red are the hyperplanes cutting through the bounded region, identified by the pre-screening algorithm. The evolution of the BFS queue is shown in Figure 2.(c). The gray arrows show the traversing order. The colored arrows at the bottom indicate the one-adjacent neighbors added to the queue. When polytope No. 0 is popped from the queue, its one-adjacent neighbors, No. 1, 2, 3, and 4, are added to the queue. Next, when polytope No. 1 is popped, its one-adjacent neighbors, No. 5 and 6, are added. Polytope No. 0, although a one-adjacent neighbor of No. 1, is ignored since it has been visited. Similarly, when polytope No. 2 is popped, only one of its one-adjacent neighbors, No. 7, is added, since all the others have been visited (including those in the queue). The algorithm finishes once the remaining polytopes are popped without adding new ones and the queue becomes empty. All 8 local polytopes in the bounded region are traversed.
+
Because $\mathcal{B}$ is bounded by a set of linear inequalities, the correctness of BFS-based polytope traversing as stated in Theorem 4.1 can be easily extended to this bounded traversing case. It can be proved by showing that, for any two non-empty polytopes overlapping $\mathcal{B}$ , we can move from one to the other by repetitively finding a one-adjacent neighbor within $\mathcal{B}$ . We emphasize that the correctness of BFS-based polytope traversing can be proved for any traversing region bounded by a set of linear inequalities. This realization is critical to generalizing our results to the case of ReLU NNs with multiple hidden layers. Furthermore, as any closed convex set can be represented as the intersection of a (possibly infinite) set of halfspaces, the correctness of BFS-based polytope traversing holds for any closed convex $\mathcal{B}$ .
+
+§ 4.3 HIERARCHICAL POLYTOPE TRAVERSING IN THE CASE OF MULTIPLE HIDDEN LAYERS
+
The BFS-based polytope traversing algorithm can be generalized to ReLU NNs with multiple hidden layers. In Section 2.2, we described how a ReLU NN with $L$ hidden layers hierarchically partitions the input space into polytopes of $L$ different levels. Then, in Section 3, we showed that the adjacency of level- $l$ polytopes is defined conditionally on all of them belonging to the same level- $(l-1)$ polytope. Therefore, to traverse all level- $L$ polytopes, we need to traverse all level- $(L-1)$ polytopes and, within each of them, traverse the sub-polytopes by following the one-adjacent neighbors.
+
The procedure above leads to a recursive traversing scheme. Assume a ReLU NN with $L$ hidden layers and a closed convex traversing region $\mathcal{B}$ . Starting from a sample $\mathbf{x} \in \mathcal{B}$ , we traverse all level-1 polytopes using the BFS-based algorithm. Inside each level-1 polytope, we traverse all the contained level-2 polytopes, and so on until we reach the level- $L$ polytopes. As shown in (13), each level- $l$ polytope is constrained by $\mathop{\sum }\limits_{{t = 1}}^{l}{M}_{t}$ linear inequalities, so the way to identify level- $l$ one-adjacent neighbors is largely the same as described in Section 4.1. Two level- $l$ one-adjacent neighbors must share the same $\mathop{\sum }\limits_{{t = 1}}^{{l - 1}}{M}_{t}$ linear inequalities corresponding to ${\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l - 1}$ , and have one of the last ${M}_{l}$ inequalities differ in direction, so there are ${M}_{l}$ cases to check.
+
We can use hyperplane pre-screening at each level of traversing. When traversing the level- $l$ polytopes within a level- $(l-1)$ polytope ${\mathcal{R}}^{l - 1}$ , we update the bounded traversing region by taking the intersection of ${\mathcal{R}}^{l - 1}$ and $\mathcal{B}$ . We then screen the ${M}_{l}$ partitioning hyperplanes and only select those passing through this updated traversing region.
+
The BFS-based hierarchical polytope traversing algorithm is summarized in Algorithm 3. Its correctness can be proved based on the results in Section 4.2, which guarantee the thoroughness of traversing the level- $l$ polytopes within any level- $(l-1)$ polytope. The overall thoroughness then follows because each level of traversing is thorough. We state the result in the following theorem.
+
Theorem 4.2 Given a ReLU NN with $L$ hidden layers and a closed convex traversing region $\mathcal{B}$ , Algorithm 3 covers all non-empty level- $L$ polytopes created by the neural network that overlap with $\mathcal{B}$ . That is, for all $\mathbf{x} \in \mathcal{B}$ , there exists one ${\mathcal{R}}_{\mathbf{c}}$ as defined in (13) such that $\mathbf{x} \in {\mathcal{R}}_{\mathbf{c}}$ and $\mathbf{c} \in {\mathcal{S}}_{R}$ , where ${\mathcal{S}}_{R}$ is the result returned by Algorithm 3.
+
+Figure 2.(b) shows traversing the 6 local polytopes within the bounded region. The ReLU NN is the same as in Figure 1.(c). The evolution of the hierarchical BFS queue is shown in Figure 2.(d). The level-1 BFS queue is shown vertically while the level-2 BFS queue is shown horizontally. Starting from level-1 polytope $\left( {0, \cdot }\right)$ , the algorithm traverses the two level-2 polytopes inside it. It then identifies the two (level-1) one-adjacent neighbors of $\left( {0, \cdot }\right) : \left( {1, \cdot }\right)$ and $\left( {2, \cdot }\right)$ . Every time a level-1 polytope is identified, the algorithm goes into it to traverse all the level-2 polytopes inside. At the end of the recursive call, all 6 local polytopes in the bounded region are traversed.
+
+§ 5 NETWORK PROPERTY VERIFICATION BASED ON POLYTOPE TRAVERSING
+
The biggest advantage of the polytope traversing algorithm is that it can be adapted to solve many different problems of practical interest. Problems such as local adversarial attacks, searching for counterfactual samples, and local monotonicity verification can be solved easily when the model is linear. As we have shown in Section 2.2, the local model within each level- $L$ polytope created by a ReLU NN is indeed linear. The polytope traversing algorithm provides a way to analyze the behavior of a ReLU NN not only within one local polytope but also within its neighborhood, and therefore enhances our understanding of the overall model behavior. In this section, we describe the details of adapting the polytope traversing algorithm to verify several properties of ReLU NNs.
+
+Algorithm 3: BFS-Based Hierarchical Polytopes Traversing in a Bounded Region
+
Require: A ReLU NN with $L$ hidden layers.
Require: A closed convex traversing region $\mathcal{B}$ .
Require: An initial point $\mathbf{x} \in \mathcal{B}$ .

1. Initialize an empty set ${\mathcal{S}}_{R}$ to store the codes of all visited polytopes.
2. Function HIERARCHICAL_TRAVERSE( $\mathbf{x}$ , $l$ ):
    1. Initialize an empty queue ${\mathcal{Q}}^{l}$ for BFS at level $l$ .
    2. Initialize an empty set ${\mathcal{S}}_{\mathbf{c}}^{l}$ to store all checked level- $l$ codes.
    3. Calculate $\mathbf{x}$ ’s initial polytope code $\mathbf{c}$ recursively using (12).
    4. If $l = L$ : add $\mathbf{c}$ to ${\mathcal{S}}_{R}$ ; else: call HIERARCHICAL_TRAVERSE( $\mathbf{x}$ , $l + 1$ ).
    5. If $l > 1$ : get the level- $(l-1)$ polytope code from the front segment of $\mathbf{c}$ , ${\mathbf{c}}^{1 : l - 1} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\mathbf{c}}^{l - 1}$ , and use it to get the level- $(l-1)$ polytope ${\mathcal{R}}_{\mathbf{c}}^{l - 1}$ as in (13); else: set ${\mathcal{R}}_{\mathbf{c}}^{0} = {\mathbb{R}}^{P}$ .
    6. Form the new traversing region ${\mathcal{B}}^{l - 1} = \mathcal{B} \cap {\mathcal{R}}_{\mathbf{c}}^{l - 1}$ .
    7. Append the code segment ${\mathbf{c}}^{l}$ to the end of ${\mathcal{Q}}^{l}$ and add it to ${\mathcal{S}}_{\mathbf{c}}^{l}$ .
    8. Pre-screen the ${M}_{l}$ hyperplanes associated with ${\mathbf{c}}^{l}$ using Algorithm 2 with bounded region ${\mathcal{B}}^{l - 1}$ ; collect the pre-screening results $\mathcal{T}$ .
    9. While ${\mathcal{Q}}^{l}$ is not empty:
        1. Pop the first element of the BFS queue: ${\mathbf{c}}^{l} = {\mathcal{Q}}^{l}$ .pop( ).
        2. For $m \in \mathcal{T}$ :
            1. Create a candidate polytope code ${\widehat{\mathbf{c}}}^{l}$ by flipping one bit in ${\mathbf{c}}^{l}$ : ${\widehat{c}}_{m}^{l} = 1 - {c}_{m}^{l}$ and ${\widehat{c}}_{k}^{l} = {c}_{k}^{l}$ for all $k \neq m$ .
            2. If ${\widehat{\mathbf{c}}}^{l} \notin {\mathcal{S}}_{\mathbf{c}}^{l}$ :
                1. Get the set ${\mathcal{R}}_{\widehat{\mathbf{c}}} = \left\{ {\mathbf{x} \mid {\left( -1\right) }^{{\widehat{c}}_{k}^{l}}\left( {\left\langle {{\widehat{\mathbf{w}}}_{k}^{l},\mathbf{x}}\right\rangle + {\widehat{b}}_{k}^{l}}\right) \leq 0, k = 1,2,\ldots ,{M}_{l}}\right\}$ .
                2. Check whether ${\mathcal{R}}_{\widehat{\mathbf{c}}} \cap {\mathcal{B}}^{l - 1}$ is empty using LP, and add ${\widehat{\mathbf{c}}}^{l}$ to ${\mathcal{S}}_{\mathbf{c}}^{l}$ .
                3. If ${\mathcal{R}}_{\widehat{\mathbf{c}}} \cap {\mathcal{B}}^{l - 1} \neq \varnothing$ : append ${\widehat{\mathbf{c}}}^{l}$ to the end of ${\mathcal{Q}}^{l}$ . If $l = L$ : add $\widehat{\mathbf{c}} = {\mathbf{c}}^{1}{\mathbf{c}}^{2}\ldots {\widehat{\mathbf{c}}}^{l}$ to ${\mathcal{S}}_{R}$ ; else: find a point $\widehat{\mathbf{x}} \in {\mathcal{R}}_{\widehat{\mathbf{c}}} \cap {\mathcal{B}}^{l - 1}$ and call HIERARCHICAL_TRAVERSE( $\widehat{\mathbf{x}}$ , $l + 1$ ).
3. Call HIERARCHICAL_TRAVERSE( $\mathbf{x}$ , 1).
4. Return ${\mathcal{S}}_{R}$ .
+
+§ 5.1 LOCAL ADVERSARIAL ATTACKS
+
We define the local adversarial attack problem as finding the perturbation within a bounded region such that the model output is changed most adversarially. Here, we assume the model output to be a scalar in $\mathbb{R}$ and consider three regression cases with different types of response variable: continuous, binary, and categorical. The perturbation region is a convex set around the original sample. For example, we can allow certain features to increase or decrease by a certain amount, or we can use a norm $\left( {{L}_{1},{L}_{2},{L}_{\infty }}\right)$ ball centered at the original sample.
+
+
Figure 3: Demonstration of different applications of the polytope traversing algorithm. We use the ReLU NN in Figure 1.(b) as an example. (a) Conducting a local adversarial attack by finding the maximum (green) and minimum (red) model predictions within a bounded region. (b) Creating counterfactual samples that are closest to the original sample. The distances are measured in ${L}_{1}$ (green) and ${L}_{2}$ (red) norms. (c) Monotonicity verification in a bounded region. The polytope in red violates the condition that the model prediction monotonically increases along the horizontal axis.
+
+In the continuous response case, the one-dimensional output after the last linear layer of a ReLU NN is directly used as the prediction of the target variable. Denote the model function as $f\left( \cdot \right)$ , the original sample as ${\mathbf{x}}_{0}$ , and the perturbation region as $\mathcal{B}$ . The local adversarial attack problem can be written as:
+
$$
\mathop{\max }\limits_{{\mathbf{x} \in \mathcal{B}}}\left| {f\left( \mathbf{x}\right) - f\left( {\mathbf{x}}_{0}\right) }\right| = \max \left( {\mathop{\max }\limits_{{\mathbf{x} \in \mathcal{B}}}f\left( \mathbf{x}\right) - f\left( {\mathbf{x}}_{0}\right) ,\; f\left( {\mathbf{x}}_{0}\right) - \mathop{\min }\limits_{{\mathbf{x} \in \mathcal{B}}}f\left( \mathbf{x}\right) }\right) , \tag{15}
$$
+
which means we need to find the range of the model outputs on $\mathcal{B}$ . We can traverse all local polytopes overlapping $\mathcal{B}$ , find the model output range within each intersection $\mathcal{B} \cap \mathcal{R}$ , and then aggregate all the local results to get the final range. Finding the output range within each $\mathcal{B} \cap \mathcal{R}$ is a convex problem with a linear objective function, so optimality is guaranteed within each polytope. Because our traversing algorithm covers all polytopes overlapping $\mathcal{B}$ , the final solution also has guaranteed optimality.
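Since the model is linear on each polytope, the inner problems in (15) amount to two LPs per polytope. A minimal sketch (our own notation: the local model is ${\widehat{\mathbf{w}}}^{T}\mathbf{x} + \widehat{b}$ , and $\mathcal{B} \cap \mathcal{R}$ is assumed to be stacked into one inequality system $A\mathbf{x} \leq \mathbf{b}$ ):

```python
import numpy as np
from scipy.optimize import linprog

def local_range(w_hat, b_hat, A_ub, b_ub):
    """Range of the local linear model w_hat^T x + b_hat over the
    polytope {x | A_ub x <= b_ub} (B intersected with R, stacked
    into one inequality system).  Two LPs: minimize and maximize."""
    bounds = [(None, None)] * len(w_hat)
    lo = linprog(w_hat, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    hi = linprog(-w_hat, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    return lo.fun + b_hat, -hi.fun + b_hat

# Toy polytope: the unit square, with local model f(x) = x1 + 2*x2
A = np.array([[1., 0.], [-1., 0.], [0., 1.], [0., -1.]])
b = np.array([1., 0., 1., 0.])
```

Aggregating the per-polytope minima and maxima over all traversed polytopes then gives the range needed in (15).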
+
In the case of binary response, the one-dimensional output after the last linear layer of a ReLU NN is passed through a logistic/sigmoid function to predict the probability of a sample belonging to class 1. To conduct an adversarial attack, we minimize the predicted probability $f\left( \mathbf{x}\right)$ if the true response $y$ is 1, and maximize the prediction if the true response is 0:
+
+$$
+\left\{ \begin{array}{ll} \mathop{\max }\limits_{{\mathbf{x} \in \mathcal{B}}}f\left( \mathbf{x}\right) , & y = 0 \\ \mathop{\min }\limits_{{\mathbf{x} \in \mathcal{B}}}f\left( \mathbf{x}\right) , & y = 1. \end{array}\right. \tag{16}
+$$
+
Because of the monotonicity of the logistic function, the minimizer and maximizer of the probabilistic output are also the minimizer and maximizer of the output after the last linear layer (i.e., the predicted log odds), making this case equivalent to the case of continuous response.
+
In the case of categorical response with levels 1 to $Q$ , the output after the last linear layer of a ReLU NN is in ${\mathbb{R}}^{Q}$ and is passed through a softmax layer to be converted into probabilistic predictions of a sample belonging to each class. The adversarial sample is generated to minimize the predicted probability of the sample being in its true class. Within each local polytope, the linear models are given by (14), and the predicted probability of class $q$ is minimized by solving the following maximization problem:
+
+$$
+\mathop{\max }\limits_{{\mathbf{x} \in \mathcal{B} \cap \mathcal{R}}}\mathop{\sum }\limits_{{i = 1,i \neq q}}^{Q}{e}^{{\left( {\widehat{\mathbf{w}}}_{i}^{o} - {\widehat{\mathbf{w}}}_{q}^{o}\right) }^{T}\mathbf{x} + \left( {{\widehat{b}}_{i}^{o} - {\widehat{b}}_{q}^{o}}\right) }, \tag{17}
+$$
+
+where ${\left( {\widehat{\mathbf{w}}}_{i}^{o}\right) }^{T}$ is the $i$ th row of the matrix ${\widehat{\mathbf{W}}}^{o}$ and ${\widehat{b}}_{i}^{o}$ is the $i$ th element in ${\widehat{\mathbf{b}}}^{o}$ . Since the objective function in (17) is convex, the optimality of local adversarial attack with polytope traversing is guaranteed.
+
Figure 3.(a) demonstrates a local adversarial attack in the case of regression with binary response. The ReLU NN is the same as in Figure 1.(b), which predicts the probability of a sample belonging to class 1. The predictions across the whole domain are shown as a heat map. Within the region bounded by the black box, we find the minimum and maximum predictions and mark them in red and green, respectively. Due to the nature of linear models, the minimizer and maximizer always fall on intersections of partitioning hyperplanes and/or region boundaries.
+
+§ 5.2 COUNTERFACTUAL SAMPLE GENERATION
+
+In classification problems, we are often interested in finding the smallest perturbation on a sample such that the model changes its class prediction. The magnitude of the perturbation is often measured by ${L}_{1},{L}_{2}$ , or ${L}_{\infty }$ norm. The optimization problem can be written as:
+
+$$
+\mathop{\min }\limits_{\mathbf{x}}{\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}_{0}\end{Vmatrix}}_{p}\;\text{ s.t. }{f}_{\mathcal{C}}\left( \mathbf{x}\right) \neq {f}_{\mathcal{C}}\left( {\mathbf{x}}_{0}\right) , \tag{18}
+$$
+
+where ${\mathbf{x}}_{0}$ is the original sample, $p$ indicates a specific type of norm, and ${f}_{\mathcal{C}}\left( \cdot \right)$ is a ReLU NN outputting class predictions.
+
+We can adapt the polytope traversing algorithm to solve this problem. In the case of binary response, each local polytope has an associated hyperplane separating the two classes: ${\left( {\widehat{\mathbf{w}}}^{o}\right) }^{T}\mathbf{x} + {\widehat{b}}^{o} =$ $\gamma$ , where ${\widehat{\mathbf{w}}}^{o}$ and ${\widehat{b}}^{o}$ are given in (14), and $\gamma$ is the threshold converting predicted log odds to class. Finding the counterfactual sample within a local polytope $\mathcal{R}$ can be written as a convex optimization problem:
+
+$$
+\mathop{\min }\limits_{\mathbf{x}}{\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}_{0}\end{Vmatrix}}_{p}\text{ s.t. }{\left( -1\right) }^{{\widehat{y}}_{0}}\left( {{\left( {\widehat{\mathbf{w}}}^{o}\right) }^{T}\mathbf{x} + {\widehat{b}}^{o}}\right) > \gamma ,\mathbf{x} \in \mathcal{R}, \tag{19}
+$$
+
+where ${\widehat{y}}_{0}$ is the original class predicted by the model.
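
For $p = 2$ and momentarily ignoring the polytope constraint $\mathbf{x} \in \mathcal{R}$, problem (19) has a closed-form solution: project ${\mathbf{x}}_{0}$ onto the class-separating hyperplane and step slightly past it. The following is a minimal NumPy sketch of that special case (function and variable names are ours, not from the paper):

```python
import numpy as np

def l2_counterfactual(x0, w, b, y0_hat, gamma=0.0, overshoot=1e-6):
    """Closed-form solution of (19) for p = 2, ignoring x in R:
    project x0 onto the hyperplane (-1)^y0_hat * (w.x + b) = gamma
    and step slightly past it so the strict inequality holds."""
    s = (-1.0) ** y0_hat                 # flips the inequality for class 1
    margin = gamma - s * (w @ x0 + b)    # how far x0 is from the boundary
    step = (margin / (w @ w) + overshoot) * s * w
    return x0 + step

# toy local linear model: log odds w.x + b, threshold gamma = 0
w = np.array([1.0, -2.0])
b = 0.5
x0 = np.array([-1.0, 1.0])               # w.x0 + b = -2.5 < 0 -> class 0
x_cf = l2_counterfactual(x0, w, b, y0_hat=0)
print(w @ x_cf + b > 0.0)                # counterfactual is now class 1
```

With the polytope constraint added back, one convex program of this form is solved per traversed polytope, as described next.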
+
+We start the traversing algorithm from the polytope where ${\mathbf{x}}_{0}$ lies. In each polytope, we solve (19). It is possible that the entire polytope falls on one side of the class-separating hyperplane, in which case (19) has no feasible solution. If a solution can be obtained, we compare it with the solutions in previously traversed polytopes and keep the one with the smallest perturbation. Furthermore, we use this perturbation magnitude to construct a new bounded traversing region around ${\mathbf{x}}_{0}$ . Because no point outside this region can have a smaller distance to the original point, once we finish traversing all the polytopes inside this region, the algorithm can conclude. In practice we often construct this dynamic traversing region as $\mathcal{B} = \left\{ {\mathbf{x} \mid {\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}_{0}\end{Vmatrix}}_{\infty } < {d}^{ * }}\right\}$ , where ${d}^{ * }$ is the smallest perturbation magnitude found so far. When solving (19) in the subsequent polytopes, we add $\mathbf{x} \in \mathcal{B}$ to the constraints. $\mathcal{B}$ is updated whenever a smaller ${d}^{ * }$ is found. Because the new traversing region is always a subset of the previous one, our BFS-based traversing algorithm covers all polytopes within the final traversing region under this dynamic setting. The final solution to (18) is guaranteed to be optimal, and the running time depends on how far the original point is from a class boundary.
+
+In the case of categorical response with levels 1 to $Q$ , the output after the last linear layer of a ReLU NN has $Q$ dimensions, and the dimension with the largest value is the predicted class. We ignore the softmax layer at the end because it does not change the rank of the dimensions. Assuming the original example is predicted to belong to class ${\widehat{q}}_{0}$ , we generate counterfactual samples in the remaining $Q - 1$ classes.
+
+We consider one of these classes at a time and denote it as $q$ . Within each ReLU NN's local polytope, the linear models are given by (14). The area where a sample is predicted to be in class $q$ is enclosed by the intersection of $Q - 1$ halfspaces:
+
+$$
+{\mathcal{C}}_{q} = \left\{ {\mathbf{x} \mid {\left( {\widehat{\mathbf{w}}}_{q}^{o} - {\widehat{\mathbf{w}}}_{i}^{o}\right) }^{T}\mathbf{x} + \left( {{\widehat{b}}_{q}^{o} - {\widehat{b}}_{i}^{o}}\right) > 0,\forall i = 1,\ldots ,Q,i \neq q}\right\} . \tag{20}
+$$
+
+Therefore, within each local polytope, we solve the convex optimization problem:
+
+$$
+\mathop{\min }\limits_{\mathbf{x}}{\begin{Vmatrix}\mathbf{x} - {\mathbf{x}}_{0}\end{Vmatrix}}_{p}\text{ s.t. }\mathbf{x} \in {\mathcal{C}}_{q} \cap \mathcal{R}. \tag{21}
+$$
+
+We compare all feasible solutions of (21) under different $q$ and keep the counterfactual sample that is closest to ${\mathbf{x}}_{0}$ . The traversing procedure and the dynamic traversing-region update are the same as in the binary response case. Since (21) is convex, the final solution to (18) is guaranteed to be optimal.
+
+Figure 3.(b) demonstrates counterfactual sample generation in the case of binary classification. The ReLU NN is the same as in Figure 1.(b) whose class decision boundaries are plotted in red. Given an original sample plotted as the black dot, we generate two counterfactual samples on the decision boundaries. The red dot has the smallest ${L}_{2}$ distance to the original point while the green dot has the smallest ${L}_{1}$ distance.
+
+§ 5.3 LOCAL MONOTONICITY VERIFICATION
+
+We can adapt the polytope traversing algorithm to verify if a trained ReLU NN is monotonic w.r.t. certain features. We consider the regression cases with continuous and binary response. In both cases, the output after the last linear layer is a scalar. Since the binary response case uses a logistic function at the end which is monotonically increasing itself, we can ignore this additional function. The verification methods for the two cases, therefore, are equivalent.
+
+To check whether the model is monotonic w.r.t. a specific feature within a bounded convex domain, we traverse the local polytopes covered by the domain. Since the model is linear within each polytope, we can easily check the monotonicity direction (increasing or decreasing) by checking the sign of the corresponding coefficients. After traversing all local polytopes covered by the domain, we check their agreement on the monotonicity direction. Since a ReLU NN produces a continuous function, if the local models are all monotonically increasing or all monotonically decreasing, the network is monotonic on the checked domain. If there is a disagreement in the direction, the network is not monotonic. The verification algorithm based on polytope traversing not only provides us the final monotonicity result but also tells us in which part of the domain monotonicity is violated.
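
The per-polytope sign check can be sketched as follows: within the polytope containing a point, the network's local coefficient vector follows from its ReLU activation pattern, and monotonicity w.r.t. a feature reduces to comparing the signs of that coefficient across polytopes. Below is a minimal NumPy sketch on a toy one-hidden-layer network; unlike the traversal algorithm, it only samples points in the domain instead of enumerating every polytope, and all names are ours:

```python
import numpy as np

# toy 1-hidden-layer ReLU network: f(x) = w2 . relu(W1 x + b1) + b2
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(8, 2)), rng.normal(size=8)
w2, b2 = rng.normal(size=8), 0.0

def local_coefficients(x):
    """Within the polytope containing x, the network is linear; its
    coefficient vector is determined by the active-neuron pattern."""
    pattern = (W1 @ x + b1 > 0).astype(float)    # which neurons are active
    return (w2 * pattern) @ W1                   # gradient of the local model

def is_monotonic(domain_samples, feature):
    """Sampling-based sketch of the sign check: monotonic iff the sign of
    the coefficient for `feature` agrees across all visited polytopes."""
    signs = {np.sign(local_coefficients(x)[feature]) for x in domain_samples}
    signs.discard(0.0)
    return len(signs) <= 1        # all increasing or all decreasing

samples = [rng.uniform(-1, 1, size=2) for _ in range(200)]
print(is_monotonic(samples, feature=0))
```

The paper's algorithm replaces the sampling step with exhaustive polytope traversal, which makes the verdict exact rather than empirical.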
+
+Figure 3.(c) demonstrates local monotonicity verification in the case of regression with binary response. The ReLU NN is the same as in Figure 1.(b), which predicts the probability of a sample belonging to class 1. The predictions across the whole domain are shown as a heat map. We check if the model is monotonically increasing w.r.t. ${x}_{1}$ along the horizontal axis. The domain to check is bounded by the black box. Among the 5 polytopes overlapping the domain, one violates the monotonically increasing condition and is marked in red.
+
+§ 5.4 COMPARISON WITH ALGORITHMS BASED ON MIXED-INTEGER PROGRAMMING
+
+The three applications above have traditionally been solved using MIP (Anderson et al. 2020; Fischetti and Jo 2017; Liu et al. 2020; Tjeng, Xiao, and Tedrake 2018; Weng et al. 2018). Our algorithms based on polytope traversing have several advantages. First, our method exploits the topological structure created by ReLU NNs and fully explains the model behavior in small neighborhoods. For the ${2}^{M}$ cases created by a ReLU NN with $M$ neurons, MIP eliminates the searching branches using branch-and-bound. Our method, on the other hand, eliminates the searching branches by checking the feasibility of the local polytopes and their adjacency. Since a small traversing region often covers a limited number of polytopes, our algorithm has a short running time when solving local problems.
+
+Second, since our algorithm explicitly identifies and visits all the polytopes, the final results contain not only the optimal solution but also the whole picture of the model behavior, providing explainability to the often-so-called black-box model.
+
+Third, our method requires only linear and convex programming solvers and no MIP solvers. Identifying adjacent polytopes requires only linear programming. Convex programming may be used to solve the sub-problem within a local polytope. Our algorithm allows us to incorporate any convex programming solver that is most suitable for the sub-problem, providing much freedom to customize.
+
+Last but probably the most important, our algorithm is highly versatile and flexible. Within each local polytope, the model is linear, which is often the simplest type of model to work with. Any analysis that one runs on a linear model can be transplanted here and wrapped inside the polytope traversing algorithm. Therefore, our algorithm provides a unified framework to verify different properties of piecewise linear networks.
+
+§ 6 CONCLUSION
+
+We explored the unique topological structure that ReLU NNs create in the input space; identified the adjacency among the partitioned local polytopes; developed a traversing algorithm based on this adjacency; and proved the thoroughness of polytope traversing. Our polytope traversing algorithm could be extended to other piecewise linear networks such as those containing convolutional or maxpooling layers.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EfhpoMWSqUN/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EfhpoMWSqUN/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..daed1e018a6d0b84477e38f7c6b8ee4f2648293f
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EfhpoMWSqUN/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,340 @@
+# Broad Adversarial Training with Data Augmentation in the Output Space
+
+## Abstract
+
+In image classification, data augmentation and the usage of additional data have been shown to increase the efficiency of clean training and the accuracy of the resulting model. However, this does not prevent models from being fooled by adversarial manipulations. To increase robustness, Adversarial Training (AT) is an easy, yet effective and widely used method to harden neural networks against adversarial inputs. Still, AT is computationally expensive and only creates one adversarial input per sample of the current batch. We propose Broad Adversarial Training (BAT), which combines adversarial training and data augmentation in the decision space, i.e., on the model's output vector. By adding random noise to the original adversarial output vector, we create multiple pseudo adversarial instances, thus increasing the data pool for adversarial training. We show that this general idea is applicable to two different learning paradigms, i.e., supervised and self-supervised learning. Using BAT instead of AT for supervised learning, we can increase the robustness by ${0.56}\%$ for small seen attacks. For medium and larger seen attacks, the robustness increases by ${4.57}\%$ and ${1.11}\%$ , respectively. On large unseen attacks, we also report an increase in robustness by 1.11% and 0.29%. When combining a larger corpus of input data with our proposed method, we report a slight increase in clean accuracy and increased robustness against all observed attacks, compared to AT. In self-supervised training, we observe a similar increase in robust accuracy for seen attacks and large unseen attacks on the downstream task of image classification. In addition, for both observed self-supervised models, the clean accuracy also increases by up to 1.37% using our method.
+
+## Introduction
+
+The performance of deep learning models in various domains, e.g., image classification (Zhai et al. 2021), semantic image segmentation (Tao, Sapra, and Catanzaro 2020), or reinforcement learning (Tang et al. 2017) is already on a high level and constantly improving. Among other aspects, ongoing research and advances in data augmentation (Cubuk et al. 2020) techniques, as well as the creation of more realistic synthetic inputs (Ho, Jain, and Abbeel 2020) contribute to this success. Both techniques aim to enrich the training data, which increases the performance. However, when it comes to safety-critical applications, e.g., autonomous driving, adversarial inputs pose a threat. By applying small but malicious manipulations to the input, the prediction of the model can change drastically.
+
+Starting by manipulating digital inputs, several authors, e.g., (Goodfellow, Shlens, and Szegedy 2014; Carlini and Wagner 2017; Madry et al. 2017), developed different techniques to calculate and create the necessary manipulations to fool neural networks into misclassifying a given input. Later, these attacks were adapted or extended to also work in the physical world (Athalye et al. 2018; Worzyk, Kahlen, and Kramer 2019; Ranjan et al. 2019).
+
+One widely used technique to harden neural networks against such attacks is Adversarial Training (AT), which is simple yet very effective. The idea is to create adversarial inputs during the training process and include them in, or use them exclusively for, training. Thereby, the model learns to be more resilient against these worst-case perturbations. (Madry et al. 2017), for example, proposed a method referred to as Projected Gradient Descent (PGD), which is very successful in finding adversarial inputs, and furthermore used these adversarial instances exclusively for training a given model. Even though effective, all adversarial training techniques known to us create only one adversarial input per sample in the current batch.
+
+To increase the impact of any given adversarial instance during adversarial training, we propose to combine adversarial training and data augmentation in the decision space, specifically by manipulating the output vector of a given adversarial instance. In contrast to clean inputs, which represent the assumed reality in the input space, adversarial instances are calculated, and essentially exist, because of flaws in the decision boundary of a model. Ultimately, the output vector of a given model defines the decision calculated by the model. Just as data augmentation on clean inputs increases the data pool and leads to better clean performance, increasing the data pool of adversarial samples in the output space leads to better robustness. The overall concept is shown in Figure 1.
+
+In Figure 1a, the decision space during traditional adversarial training is displayed. Based on the current decision boundary (bold line) and the output vector for a clean sample (orange circle), an adversarial input (blue cross) is created whose output vector is located on the wrong side of the decision boundary. The adversarial decision boundary (dashed line) is then optimized to account for the adversarial input.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+
+
+Figure 1: Difference between traditional (1a) and Broad (1b) Adversarial Training. Given the output vector of a clean sample (orange circle) and the current decision boundary (solid line), adversarial inputs (blue cross) lying on the wrong side of the decision boundary are created. When adding the adversarial samples to the training process, the decision boundary adapts accordingly (dashed line). During Broad Adversarial Training, the initial adversarial output vector is perturbed randomly within a given radius to create a set of additional pseudo adversarial inputs (smaller blue crosses). This extends the impact of any single adversarial input on the adversarial decision boundary (dashed line) during training.
+
+Figure 1b outlines our extension to this process. Based on the output vector of an adversarial input (large blue cross), multiple pseudo adversarial inputs (small blue crosses) are created by applying conditioned normally distributed random noise within a predefined radius to the initial adversarial output vector. By scattering the adversarial output vector, we widen its impact on the new adversarial decision boundary (dashed line).
+
+The remainder of this paper is structured as follows. In Section , we introduce the supervised, as well as self-supervised adversarial training methods used for the experiments in this paper. In Section , we define our proposed BAT method in more detail, followed by the experiments and their discussion in Section . Section puts our work in context with other related work, while Section concludes this paper.
+
+## Background
+
+## Supervised Adversarial Training
+
+PGD: One of the widest known techniques for adversarial training is Projected Gradient Descent (PGD), proposed by (Madry et al. 2017). Instead of using clean samples during training, the authors use the corresponding adversarial instances, created based on the following iterative equation,
+
+$$
+{x}^{0} = x
+$$
+
+$$
+{x}^{i + 1} = {\Pi }_{B\left( {x,\varepsilon }\right) }\left( {{x}^{i} + \alpha \operatorname{sign}\left( {{\nabla }_{{x}^{i}}{\mathcal{L}}_{\mathrm{{CE}}}\left( {\theta ,{x}^{i}, y}\right) }\right) }\right) . \tag{1}
+$$
+
+This function is initialized with the original input $x$ and calculates the gradients of the cross-entropy loss ${\mathcal{L}}_{\mathrm{{CE}}}$ with respect to the intermediate adversarial input ${x}^{i}$ , away from the true label $y$ , based on the current parameters of the model $\theta$ . The sign of the gradients is multiplied by a step size parameter $\alpha$ and added to the current intermediate adversarial input. The projection $\Pi$ then limits the perturbation to be within an $\varepsilon$ -ball around the initial input $x$ . The parameter $\varepsilon$ essentially governs the allowed amount of perturbation. For example, $\varepsilon = 8/{255}$ regarding the ${\ell }_{\infty }$ norm means that each pixel of the original input vector may be increased or decreased by no more than 8 intensity levels.
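
The PGD update in Equation 1 can be illustrated on a toy logistic model, where the cross-entropy gradient with respect to the input is analytic. This is a minimal NumPy sketch of the update rule, not the authors' implementation:

```python
import numpy as np

def pgd_attack(x, y, w, b, eps, alpha, steps):
    """PGD (Equation 1) on a toy logistic model p(y=1|x) = sigmoid(w.x + b),
    whose cross-entropy gradient w.r.t. the input is (p - y) * w."""
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(w @ x_adv + b)))
        grad = (p - y) * w                         # d L_CE / d x
        x_adv = x_adv + alpha * np.sign(grad)      # ascent step on the loss
        x_adv = x + np.clip(x_adv - x, -eps, eps)  # projection onto eps-ball
    return x_adv

w, b = np.array([2.0, -1.0]), 0.0
x, y = np.array([0.5, 0.2]), 1                     # clean sample, true label 1
x_adv = pgd_attack(x, y, w, b, eps=8 / 255, alpha=2 / 255, steps=10)
print(np.max(np.abs(x_adv - x)) <= 8 / 255 + 1e-12)  # True: stays in the ball
```

In a real network, the analytic gradient is replaced by backpropagation through the model.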
+
+## Self-supervised Adversarial Training
+
+More recently, adversarial training has also been applied to self-supervised training. The general goal of self-supervision is to train for some pretext task for which no labels are required. After training, the model and its parameters are transferred to a given downstream task, e.g., image classification. Therefore, the overall model in self-supervised learning is split into a backbone and a projector. The backbone can be based on, e.g., a ResNet architecture (He et al. 2016), stripped of the last fully connected layer. The projector reduces the dimensionality of the backbone's output vector to a usually 128-dimensional vector. To train without labels, given a batch of samples $\left\{ {{x}_{1},\ldots ,{x}_{b}}\right\}$ , each sample is duplicated and transformed by a given series of random transformations $t$ , e.g., cropping and flipping. The resulting transformed versions of the same origin, i.e., ${t}_{1}\left( {x}_{i}\right)$ and ${t}_{2}\left( {x}_{i}\right)$ , are called a positive pair, while pairs of samples with different origins, i.e., $t\left( {x}_{i}\right)$ and $t\left( {x}_{j}\right)$ with $i \neq j$ , are called a negative pair. The pretext task of the models used in this paper is to maximise the distance between the output vectors of negative pairs while minimising the distance between the output vectors of positive pairs. How the distance is calculated differs between the training techniques.
+
+After pretraining for the pretext task, the backbone is kept and the projector is discarded. Instead of the projector, a downstream task-specific head is attached. The parameters of the backbone are usually frozen and only the head is trained. In essence, self-supervised pretraining aims to learn good feature representations which can, later on, be used for the given downstream task.
+
+RoCL: (Kim, Tack, and Hwang 2020) added adversarial training to the SimCLR (Chen et al. 2020) framework and dubbed it Robust Contrastive Learning (RoCL). To calculate the distance between positive and negative pairs, SimCLR uses the cosine similarity sim. Because of the different loss function, (Kim, Tack, and Hwang 2020) adapted PGD (cf. Equation 1) accordingly to use the contrastive loss instead of the cross-entropy loss. All functions are given in Table 3 in Appendix .
+
+To implement adversarial training, (Kim, Tack, and Hwang 2020) essentially use the adversarial inputs to extend the positive and negative pairs to triplets. During training, they aim to minimize the distance between the two transformed inputs of a positive pair, as well as the distances between each of the two transformed inputs and the adversarial input created from them, while maximising the distance to the negative samples. The formalization of the RoCL objective is given in Table 3, where $t\left( x\right) + \delta$ is the adversarial sample. The overall training loss is then calculated as the standard contrastive loss considering only the clean transformed samples, plus the adversarial loss based on the triplets of two transformed inputs and the additional adversarial input.
+
+One challenge using SimCLR as the basic framework is that it requires a large batch size to achieve good performance (Chen et al. 2020). In this type of self-supervised learning, the number of observed negative samples is essential for a good performance, and SimCLR does not incorporate any form of dictionary or memory bank to increase their number. Only the samples from the current batch are used for the calculations. When including adversarial samples into the training process, the number of inputs to be stored on the GPU is increased and results in a reduction of the feasible batch size.
+
+AMOC: Another widely known self-supervised framework is Momentum Contrast (MoCo) proposed by (He et al. 2020). The conceptual idea is the same as for SimCLR, i.e., minimising the distance between positive instances while maximising the distance towards negative samples. However, to overcome the problem of large batch sizes, (He et al. 2020) implement a dictionary or memory bank. In addition, they use two networks of the same architecture and the same initial weights. One model is referred to as the query encoder, which is updated after each batch as usual. The other model is called momentum or key encoder, whose parameters are a copy of the query encoder, delayed by a predefined momentum. This makes the output of the key encoder slightly different from the output of the query encoder, which can be considered as an additional form of data augmentation. After processing the inputs of the current batch by both models, the output vectors of the momentum encoder are enriched by output vectors of previous batches, stored in the dictionary. Thereby, a large number of negative samples can be created consistently, leading to overall better performance. The loss, given in Table 3, is then calculated based on the output vectors $q$ of the query encoder and the enriched output vectors $k$ of the key encoder. Furthermore, $\tau$ is a temperature parameter, and $\mathcal{M}$ refers to the memory bank of old key vectors.
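
The contrastive objective with a memory bank of negatives can be sketched as an InfoNCE-style loss. The following is a simplified NumPy sketch for a single query; the exact MoCo and AMOC losses are given in the paper's Table 3, and all names here are ours:

```python
import numpy as np

def moco_loss(q, k_pos, memory, tau=0.2):
    """InfoNCE-style contrastive loss for one query q: pull q toward its
    positive key k_pos, push it away from negatives in the memory bank."""
    q = q / np.linalg.norm(q)
    k_pos = k_pos / np.linalg.norm(k_pos)
    negs = memory / np.linalg.norm(memory, axis=1, keepdims=True)
    logits = np.concatenate(([q @ k_pos], negs @ q)) / tau  # positive first
    logits = logits - logits.max()          # numerically stable softmax
    return -(logits[0] - np.log(np.exp(logits).sum()))

rng = np.random.default_rng(0)
memory = rng.normal(size=(64, 8))           # bank of old key vectors
q = rng.normal(size=8)
loss_aligned = moco_loss(q, q, memory)      # positive key equals the query
loss_random = moco_loss(q, rng.normal(size=8), memory)
print(loss_aligned < loss_random)           # True: alignment lowers the loss
```

In MoCo itself, `q` and `k_pos` come from the query and key encoders, and the bank is refreshed with the newest keys after each batch.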
+
+Based on this framework, (Xu and Yang 2020) proposed an extension for adversarial training. They introduce a second memory bank to store exclusively the historic adversarial output vectors, and to further disentangle the clean and adversarial distributions, they use dual Batch Normalization as proposed by (Xie et al. 2020). The optimization problem they solve is given in Table 3, with ${t}_{1}$ and ${t}_{2}$ being two different random transformations from a set $\mathcal{T}$ of possible transformations, and $\delta$ being the adversarial perturbation. ${\mathcal{M}}_{\text{clean }}$ and ${\mathcal{M}}_{\text{adv }}$ refer to the clean and adversarial memory bank, respectively. As for the loss function, (Xu and Yang 2020) tested different memory bank and batch normalization combinations, and reported good results for a combination they refer to as ACC. The 'A' indicates that the adversarial perturbation is injected into the query encoder, while the key encoder does not observe any perturbation, and that the clean memory bank ${\mathcal{M}}_{\text{clean }}$ is used. The formulation to calculate the ACC loss is given in Table 3. Intuitively, by comparing the adversarial output vectors of the query encoder with clean samples from the key encoder, as well as the memory bank, the query encoder ${f}_{q}$ learns to classify adversarial inputs like their clean augmentations. To create the adversarial perturbation in the first place, (Xu and Yang 2020) use PGD as well, but with the MoCo loss instead of the cross-entropy loss. Finally, the overall training loss is calculated as a weighted sum of the standard MoCo loss trained solely on clean data, and the selected, e.g., ACC, loss to incorporate adversarial instances.
+
+## Method
+
+Multiple approaches for adversarial training are outlined in Section . Our goal is to develop a method that is not specifically tailored to one approach, but rather generalizes across different forms of adversarial training. Therefore, given an input $x$ and a neural network $f$ , $f\left( x\right)$ denotes the general output vector. In supervised learning, this vector would be the logits, while in self-supervised learning, it would be the 128-dimensional output vector of the projector.
+
+After an adversarial input ${x}^{\prime }$ , and consequently its output vector $f\left( {x}^{\prime }\right)$ , has been created by one of the approaches in Section , we create multiple pseudo adversarial output vectors $f{\left( {x}^{\prime }\right) }_{s}$ by adding conditioned normally distributed random noise,
+
+$$
+f{\left( {x}^{\prime }\right) }_{s} = f\left( {x}^{\prime }\right) + \mathcal{N}\left( {0,1}\right) \cdot {\delta }_{{x}^{\prime }, x} \cdot {\alpha }_{s}, \tag{2}
+$$
+
+where ${\delta }_{{x}^{\prime }, x}$ is defined as
+
+$$
+{\delta }_{{x}^{\prime }, x} = f\left( {x}^{\prime }\right) - f\left( x\right) \tag{3}
+$$
+
+and ${\alpha }_{s}$ is a hyperparameter, used to scale the ball around the initial adversarial output vector $f\left( {x}^{\prime }\right)$ , introduced by the random noise. Intuitively, ${\delta }_{{x}^{\prime }, x}$ defines the element-wise difference that the initial adversarial output vector is moved away from the original instance in the decision space, while ${\alpha }_{s}$ scales this initial manipulation.
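
Equations (2) and (3) translate directly into a few lines of NumPy. This is an illustrative sketch with our own names; in the paper, the vectors are the logits or the projector output:

```python
import numpy as np

def scatter_outputs(f_x, f_x_adv, alpha_s, s_k, rng):
    """Create s_k pseudo adversarial output vectors around f(x'),
    following Equations (2) and (3)."""
    delta = f_x_adv - f_x                       # Eq. (3): adversarial shift
    noise = rng.standard_normal((s_k, delta.size))
    return f_x_adv + noise * delta * alpha_s    # Eq. (2), broadcast per row

rng = np.random.default_rng(0)
f_x = np.array([2.0, -1.0, 0.5])        # clean output vector (logits)
f_x_adv = np.array([0.5, 1.5, 0.3])     # adversarial output vector
pseudo = scatter_outputs(f_x, f_x_adv, alpha_s=0.1, s_k=5, rng=rng)
print(pseudo.shape)                     # (5, 3): five pseudo adversarial vectors
```

Scaling the noise by the element-wise shift $\delta_{x',x}$ means dimensions that the attack barely moved are also barely scattered.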
+
+To confirm that this type of decision space data manipulation is suitable and label preserving, we create randomly perturbed adversarial output vectors for a standard, clean trained model (ST) and track their classification behaviour. In Figure 2, the results for an ST model are shown in the leftmost bar of each group. The blue (bottom) portion of the bar indicates the percentage of pseudo adversarial instances classified the same as the initial adversarial input. The orange (middle) portion indicates the number of pseudo adversarial output vectors returning to the classification area of the initial clean sample, and the green (top) portion gives the percentage of instances that move to a third classification area, which is neither the class of the clean nor the initial adversarial sample.
+
+
+
+Figure 2: Percentages of pseudo adversarial inputs being classified as indicated, depending on the perturbation scaling factor ${\alpha }_{s}$ . The bars of each group show the results based on the following models: left bar: standard trained model; middle bar: adversarially trained model; right bar: broad adversarially trained model. $\mathrm{C}(\text{noisy}) = \mathrm{C}(\text{adversarial})$ indicates the perturbed adversarial output vector is classified the same as the initial adversarial input. $\mathrm{C}(\text{noisy}) = \mathrm{C}(\text{clean})$ gives the percentage of pseudo adversarial inputs which return to the original true classification area, while $\mathrm{C}(\text{noisy}) \neq \mathrm{C}(\text{adversarial}) \neq \mathrm{C}(\text{clean})$ gives the percentage of pseudo adversarial inputs moving to some different, third class when perturbed randomly.
+
+We can observe that for sufficiently small perturbations, ${100}\%$ of the pseudo adversarial instances are classified the same as the initial adversarial input. This demonstrates empirically that the applied conditioned random noise, as a form of data augmentation in the decision space, can be completely 'label preserving'. Only with a larger perturbation radius do more and more perturbed adversarial output vectors move towards a third classification area. The samples returning to their original true class, however, can be ignored, since the adversarial instances are labelled with the same class as their clean counterparts during training. Therefore, the assigned label for these instances would not change.
+
+A different perspective on the classification changes shown in Figure 2 is to empirically evaluate the local smoothness of the decision surface. If an instance moves into another classification area already for small random perturbations, the decision boundary might be sharply twisted at that point. If the instances move into another class area only at larger perturbations, the decision boundary can be assumed to be smoother.
+
+The second bar of each group displays the corresponding behaviour for an adversarially trained model. Similarly to the clean model, at small perturbation radii, almost all pseudo adversarial instances are classified the same as the initial adversarial instance. However, with increasing manipulation, more and more noisy instances move to the original or a third classification area. Compared to ST, the number of pseudo adversarial instances staying adversarial is reduced. This can be explained by the fact that the attack strength is kept constant, while the decision boundary in AT is pushed towards the observed adversarial instances. In ST, the adversarial instances are moved further into the wrong classification area and can therefore endure more random perturbation before moving either back or to a third classification area. For an AT model, the adversarial instances are already closer to the decision boundary and are thus moved to either the original or a third classification area at lower random perturbation sizes. This observation also indicates that, when training with pseudo adversarial output vectors, the scatter radius should be reduced over time. Thereby, the risk of assigning instances to a third classification area with a potentially incorrect label could be minimized.
+
+As a final comparison, the third bar of each group shows the corresponding classifications for pseudo adversarial instances on a BAT model. Here we can see that the number of pseudo adversarial instances classified as the initial adversarial instance is higher compared to normal adversarial training. The number of instances moving to a third classification area is likewise smaller for the BAT model than for the AT model. This indicates a smoother local decision boundary when a model is trained with BAT instead of AT.
+
+Having verified that applying random perturbation as a form of data augmentation in the decision space is a valid option, our overall pseudo-code is given in Algorithm 1. Aside from the scaling factor for the perturbation radius ${\alpha }_{s}$ , we also introduce a hyperparameter ${s}_{k}$ to define the number of additionally created pseudo adversarial instances. Each additional pseudo adversarial instance only requires the calculation of random noise and an evaluation of the given loss function. The additions and multiplications to create a pseudo adversarial instance are in $O\left( 1\right)$ regarding time complexity, and evaluating the loss function, being independent of the parameters added for BAT, can also be considered to be in $O\left( 1\right)$ . Therefore, our extension to implement BAT adds a time complexity in $O\left( n\right)$ to the overall training procedure, where $n$ is the number of created pseudo adversarial inputs. In Table 6 in Appendix , the additional time demand for each scattered input during the different training methods is empirically evaluated and listed.
+
+To even out the effect of having multiple pseudo adversarial instances, we calculate the mean loss and add it, weighted by some factor $\lambda$ , to calculate the overall loss as
+
+$$
+{\mathcal{L}}_{\text{total }} = \iota {\mathcal{L}}_{\text{clean }} + \kappa {\mathcal{L}}_{\text{adv }} + \lambda {\mathcal{L}}_{\text{scatter }}, \tag{4}
+$$
+
+where $\iota ,\kappa$ , and $\lambda$ could be different weights for the different loss functions.
+
+## Results and Discussion
+
+Dataset and Model: All experiments were run on the Cifar-10 dataset (Krizhevsky, Hinton et al. 2009). For supervised learning, we did an additional set of experiments marked with ${}^{ + }$ , which uses another 1 million synthetic data points based on Cifar-10, provided by (Gowal et al. 2021). The authors report an increase in adversarial robustness using the additional synthetic data. The model used for all experiments is a ResNet-18 architecture, implemented in the provided repositories of (Kim, Tack, and Hwang 2020) for $\mathrm{{RoCL}}$ , and (Xu and Yang 2020) for AMOC. The experiments for the supervised case were run based on the AMOC framework. More details are provided in Appendix .
+
+Algorithm 1: Broad Adversarial Training (BAT).
+
+---
+
+Input: Dataset $D$ , model $f$ , parameters $\theta$ , loss function $\mathcal{L}$ , # attack steps $k$ , # scatter instances ${s}_{k}$ , scatter scalar ${\alpha }_{s}$
+
+foreach iter $\in$ number of training iterations do
+
+  foreach $x \in$ minibatch $B = \left\{ {{x}_{1},\ldots ,{x}_{m}}\right\}$ do
+
+    ${\mathcal{L}}_{\text{clean }} = \mathcal{L}\left( {f\left( x\right) }\right)$
+
+    ${x}^{\prime } =$ generateAdversarial $\left( x\right)$
+
+    ${\mathcal{L}}_{\text{adv }} = \mathcal{L}\left( {f\left( {x}^{\prime }\right) }\right)$
+
+    Broad Adversarial Operation:
+
+    ${\delta }_{{x}^{\prime }, x} = f\left( {x}^{\prime }\right) - f\left( x\right)$
+
+    for ${s}_{k}$ instances do
+
+      $\widehat{f}{\left( {x}^{\prime }\right) }_{s} = f\left( {x}^{\prime }\right) + \mathcal{N}\left( {0,1}\right) \cdot {\delta }_{{x}^{\prime }, x} \cdot {\alpha }_{s}$
+
+      ${\mathcal{L}}_{s} \mathrel{+}= \mathcal{L}\left( {\widehat{f}{\left( {x}^{\prime }\right) }_{s}}\right)$
+
+    end
+
+    ${\mathcal{L}}_{\text{scatter }} = \frac{{\mathcal{L}}_{s}}{{s}_{k}}$
+
+    ${\mathcal{L}}_{\text{total }} = \iota {\mathcal{L}}_{\text{clean }} + \kappa {\mathcal{L}}_{\text{adv }} + \lambda {\mathcal{L}}_{\text{scatter }}$
+
+    Optimize $\theta$ over ${\mathcal{L}}_{\text{total }}$
+
+  end
+
+end
+
+---
+
+Hyperparameters for training: For all experiments, we used the hyperparameters suggested by (Kim, Tack, and Hwang 2020) for RoCL and by (Xu and Yang 2020) for AMOC, where applicable. More details are provided in the Appendix.
+
+Attacks: During training, the adversarial inputs were created with a perturbation size of $\varepsilon = 8/{255}$ with respect to ${\ell }_{\infty }$ . The ${\ell }_{\infty }$ attacks are therefore referred to as seen, even if only for a small perturbation size, while the ${\ell }_{2}$ and ${\ell }_{1}$ attacks were completely unseen during the training procedure. For adversarial training, we used the parameters provided by the respective frameworks, listed in the Appendix.
+
+To challenge the trained models, the adversarial inputs were created with PGD as given in Equation 1 over 20 iteration steps, with a step size $\alpha$ of 0.1 relative to the allowed amount of perturbation. The overall evaluation was conducted with the respective functions of the AMOC framework, which itself draws the attacks from the foolbox framework (Rauber, Brendel, and Bethge 2017).
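As a reference, the evaluation attack can be sketched for the ${\ell }_{\infty }$ case as follows; `grad_fn` is a placeholder for the input-gradient computation of the attacked model, and the clipping to $[0, 1]$ assumes image data:

```python
import numpy as np

def pgd_linf(x, grad_fn, eps=8 / 255, steps=20, rel_step=0.1):
    """l_inf PGD sketch: `steps` gradient-sign ascents with a step size of
    `rel_step` times the perturbation budget, projected back onto the
    eps-ball around x after every step."""
    alpha = rel_step * eps
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))  # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)         # project onto the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                 # stay a valid image
    return x_adv
```

With 20 steps at 0.1 of the budget each, the attack can traverse the full $\varepsilon$-ball twice, so the projection step is what keeps the perturbation within budget.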
+
+Hyperparameters for BAT: For BAT, we found that ${s}_{k} = {10}$ is a good number of additional inputs. In preliminary studies we found that introducing too many additional data points adds too much noise to the training process and thereby reduces the overall performance, while too few pseudo adversarial instances have no impact on the overall performance. Similarly, setting the scatter radius too small has no effect on the results, while setting it too large, as shown in Figure 2, moves the pseudo adversarial inputs increasingly towards and over the decision boundary of a different classification area, and thereby reduces the performance. For supervised BAT, we found that a surprisingly large initial ${\alpha }_{s} = {2.5}$ , decayed by a cosine scheduler, yields the best results. For training AMOC, an initial ${\alpha }_{s} = {0.25}$ decayed by a cosine scheduler works best; for RoCL, an initial ${\alpha }_{s} = {0.1}$ decayed by a stepwise function that reduces ${\alpha }_{s}$ by 0.01 every 100 epochs works best.
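The exact scheduler implementations are not spelled out in the text; a plausible sketch of the two decay schemes for ${\alpha }_{s}$, under the assumption of per-epoch updates, reads:

```python
import math

def alpha_s_cosine(epoch, total_epochs, alpha_init):
    """Cosine decay from alpha_init at epoch 0 towards 0 at the last epoch
    (used for supervised BAT with alpha_init = 2.5 and AMOC with 0.25)."""
    return alpha_init * 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))

def alpha_s_stepwise(epoch, alpha_init=0.1, drop=0.01, every=100):
    """Stepwise decay for RoCL: reduce alpha_s by 0.01 every 100 epochs."""
    return max(alpha_init - drop * (epoch // every), 0.0)
```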
+
+For the weight of the scatter loss in the overall loss, we found that for supervised broad adversarial training, giving the original adversarial loss and the scatter loss the same weight works best. Similarly, while pretraining AMOC, an equal contribution of the clean, the original adversarial, and the scatter loss yields the best results. For RoCL, a weight of $\lambda = {0.25}$ for the scatter loss yields the best results, combined with weights of $\iota = \kappa = {1.0}$ for the clean and original adversarial losses.
+
+## Results
+
+The results for the supervised experiments are given in Table 1, where each value is the mean over 5 different runs. For the self-supervised experiments, the results are listed in Table 2. The upper part reports the results where only the classification head was optimized, while the parameters of the pretrained model were frozen. The lower part, indicated by Self-supervised + finetune, reports the results where the parameters of the pretrained model were also optimized during training of the classification head. A B- in front of a method indicates that our proposed adaptation was applied to that training mechanism. The results for experiments run for 200 epochs are also mean values over 5 different runs.
+
+## Discussion
+
+Looking at the results of the supervised methods in Table 1, using only the original Cifar-10 data, we can report that the robust classification accuracy increases for all seen attacks, as well as for large unseen attacks, when using BAT instead of AT. For the small seen perturbation, the robust classification accuracy increases by ${0.56}\%$ , while for the large perturbation size the accuracy increases by 1.11%. Considering unseen attacks, the robust classification accuracy for small attacks is reduced when using BAT; however, with increasing attack size, this reduction turns into an increase for large perturbation sizes. For ${\ell }_{2}$ governed attacks with $\varepsilon = {0.75}$ , the robust classification accuracy increases by ${1.11}\%$ using BAT, while for ${\ell }_{1}$ governed attacks with $\varepsilon = {16.16}$ , it increases only slightly, by ${0.29}\%$ , using BAT instead of AT.
+
+When using the additional 1 million data points, we can reaffirm that they increase the clean as well as the robust accuracy for all training methods and attacks compared to training without the additional data, as (Gowal et al. 2021) reported. Comparing AT and BAT with additional data, BAT improves the robust classification accuracy for all observed attacks, along with a slight increase in clean accuracy, compared to AT. Even for small unseen attacks, e.g., ${\ell }_{2}$ governed attacks with $\varepsilon = {0.25}$ , the robust accuracy increases by 0.11% using BAT over AT. For larger attacks, the robust accuracy benefits more from using BAT over AT.
+
+
| Method | ${A}_{nat}$ | PGD20 ${\ell }_{\infty }$ (seen): 8/255 | 16/255 | 32/255 | PGD20 ${\ell }_{2}$ (unseen): 0.25 | 0.5 | 0.75 | PGD20 ${\ell }_{1}$ (unseen): 7.84 | 12 | 16.16 |
|---|---|---|---|---|---|---|---|---|---|---|
| ${\mathcal{L}}_{\mathrm{{CE}}}$ | 93.92 | 0.00 | 0.00 | 0.00 | 8.27 | 0.17 | 0.00 | 15.07 | 3.37 | 0.61 |
| AT | 81.85 | 52.49 | 22.21 | 1.25 | 73.83 | 63.11 | 50.91 | 70.52 | 62.95 | 54.66 |
| BAT | 76.60 | 53.05 | 26.78 | 2.36 | 69.63 | 61.56 | 52.02 | 67.04 | 61.37 | 54.95 |
| ${\mathcal{L}}_{\mathrm{{CE}}}^{ + }$ | **95.04** | 0.00 | 0.00 | 0.00 | 12.42 | 0.58 | 0.04 | 21.75 | 6.21 | 1.84 |
| ${\mathrm{{AT}}}^{ + }$ | 84.15 | 59.22 | 29.70 | 2.60 | 76.78 | 67.51 | 56.09 | 73.47 | 66.06 | 58.05 |
| ${\mathbf{{BAT}}}^{ + }$ | 84.20 | 59.80 | 30.49 | 2.81 | 76.89 | 68.08 | 56.86 | 73.61 | 66.75 | 58.81 |
+Table 1: Results on Cifar-10 for supervised trained models with standard cross entropy training ${\mathcal{L}}_{\mathrm{{CE}}}$ , adversarial PGD training (AT), and our proposed Broad Adversarial Training (BAT). For the experiments marked with ${}^{ + }$ , 1 million additional synthetic data points based on Cifar-10 were used for training. During training, the initial adversarial instances were created governed by ${\ell }_{\infty }$ with a strength of 8/255. All experiments were run 5 times and the mean value is reported.
+
+Observing the results for AMOC when only the classification head is trained, given in Table 2, we can report similar behaviour. The clean accuracy is slightly reduced, while the classification accuracy for seen attacks increases in all combinations of AMOC and head training except one combination at a large perturbation size. The increase in robustness ranges from ${0.03}\%$ to ${1.01}\%$ , depending on the attack size. When AMOC is trained for 1000 epochs instead of 200, the robust classification accuracy for large, and sometimes medium, unseen attacks increases as well, between 0.04% and 1.22%.
+
+For RoCL, introducing our proposed pseudo adversarial instances into the self-supervised pretraining increases the clean accuracy between ${0.04}\%$ and ${1.37}\%$ . The robustness against seen attacks also increases for small and medium-sized attacks, between 0.81% and 2.02%. Interestingly, the robustness for large seen attacks only increases by 0.13% when B-RoCL is used during pretraining and BAT is applied to the classification head. Similar to AMOC, RoCL also becomes more robust to medium and/or large unseen attacks when trained with additional pseudo adversarial inputs; the robustness there increases between ${0.25}\%$ and ${3.25}\%$ . Particularly for the combination B-RoCL+AT, our proposed pretraining leads to better clean accuracy and robustness against almost all attacks compared to standard RoCL+AT.
+
+When the parameters of the pretrained models are also finetuned during training of the classification head, we observe an increase in clean as well as robust accuracy for AMOC, too. In particular, comparing B-AMOC+B-AF with AMOC+AF trained for 1000 epochs, the performance increases against almost all attacks, between ${0.35}\%$ and ${0.9}\%$ . If we take AMOC+AF as the reported baseline, B-AMOC+B-AF increases the robustness against all seen attacks between ${0.13}\%$ and ${0.25}\%$ , as well as against medium and large unseen attacks between 0.11% and ${0.24}\%$ .
+
+To further investigate why BAT is sometimes weaker regarding unseen attacks, we calculated the perturbation size of successful ${\ell }_{2}$ and ${\ell }_{1}$ governed attacks with respect to ${\ell }_{\infty }$ . The resulting distributions are given in Figure 3 in the Appendix, where the x-axis indicates the perturbation size regarding ${\ell }_{\infty }$ , and the y-axis shows the number of successful attacks. We recommend inspecting the figures digitally to zoom in for better visibility. The distribution of manipulation sizes based on attacks controlled by ${\ell }_{2}$ is given in blue (legend top), while the values for ${\ell }_{1}$ -attacks are shown in orange (legend middle), and for ${\ell }_{\infty }$ -attacks in green (legend bottom). The grey vertical line marks a perturbation of ${\ell }_{\infty } = 8/{255}$ , which is the perturbation size seen during adversarial and broad adversarial training. The left column of each pair shows the corresponding distributions for the small perturbation size, while the right column shows the respective distribution for the large perturbation size.
+
+The top row shows the results when the attacked model was trained on clean data only. We can see that the applied manipulation of attacks governed by ${\ell }_{2}$ and ${\ell }_{1}$ is generally lower than the adversarial manipulation applied by the corresponding ${\ell }_{\infty }$ -attack. This could explain why even models trained on clean samples are, to some extent, robust against ${\ell }_{2}$ and ${\ell }_{1}$ controlled attacks, as we can observe in Table 1.
+
+The second and third rows show the resulting perturbation size distributions for attacks on an adversarially trained network and a broad adversarially trained model, respectively. Here we can see that the perturbation of ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks is larger regarding ${\ell }_{\infty }$ than the perturbation of the corresponding ${\ell }_{\infty }$ -attack, especially for the small perturbation size. Since both models have seen adversarial samples of perturbation size ${\ell }_{\infty } = 8/{255}$ during training, this indicates why both also become more robust, though not perfectly so, against ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks in general, but probably not why BAT performs worse than standard AT on unseen attacks.
+
+Observing the pixel-level manipulations applied by ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks, evaluated with respect to ${\ell }_{\infty }$ , might give more insight into why BAT is worse than AT for small perturbations. The resulting perturbations, shown exemplarily for the blue color channel of an observed input, are given in Figure 4 in the Appendix for a standard trained model, in Figure 5 for an AT trained model, and in Figure 6 for a broad adversarially trained model. The visualization indicates whether the pixel value of the adversarial input was increased (red, top of the color bar beside the grid) or decreased (blue, bottom of the color bar beside the grid) relative to the pixel value of the original input.
+
| Method | ${A}_{nat}$ | PGD20 ${\ell }_{\infty }$ (seen): 8/255 | 16/255 | 32/255 | PGD20 ${\ell }_{2}$ (unseen): 0.25 | 0.5 | 0.75 | PGD20 ${\ell }_{1}$ (unseen): 7.84 | 12 | 16.16 |
|---|---|---|---|---|---|---|---|---|---|---|
| *Self-supervised, 200 epochs:* | | | | | | | | | | |
| AMOC + ${\mathcal{L}}_{\mathrm{{CE}}}$ | 79.03 | 36.61 | 7.46 | 0.05 | 67.93 | 54.82 | 41.53 | 66.27 | 58.53 | 50.74 |
| B-AMOC + ${\mathcal{L}}_{\mathrm{{CE}}}$ | 78.88 | 37.09 | 8.15 | 0.05 | 67.64 | 54.58 | 41.37 | 65.77 | 57.91 | 50.19 |
| AMOC + AT | 74.79 | 43.97 | 14.53 | 0.19 | 67.10 | 58.10 | 48.09 | 66.03 | 60.78 | 54.92 |
| B-AMOC + AT | 74.58 | 44.57 | 15.45 | 0.26 | 66.88 | 57.93 | 48.21 | 65.72 | 60.43 | 54.64 |
| AMOC + BAT | 74.32 | 44.08 | 15.15 | 0.24 | 66.63 | 57.92 | 48.21 | 65.62 | 60.78 | 54.82 |
| B-AMOC + BAT | 74.25 | 44.59 | 15.85 | 0.28 | 66.64 | 57.75 | 48.18 | 65.49 | 60.21 | 54.44 |
| *Self-supervised, 1000 epochs:* | | | | | | | | | | |
| AMOC + ${\mathcal{L}}_{\mathrm{{CE}}}$ | 86.52 | 44.91 | 11.46 | 0.11 | 77.04 | 63.59 | 50.39 | 75.47 | 68.27 | 59.75 |
| B-AMOC + ${\mathcal{L}}_{\mathrm{{CE}}}$ | 85.90 | 45.17 | 12.02 | 0.14 | 76.78 | 64.29 | 50.99 | 75.38 | 68.64 | 60.97 |
| AMOC + AT | 84.48 | 50.87 | 16.85 | 0.26 | 77.07 | 67.28 | 56.00 | 76.14 | 70.45 | 64.43 |
| B-AMOC + AT | 83.80 | 50.89 | 17.81 | 0.38 | 76.35 | 66.79 | 56.16 | 75.44 | 69.78 | 64.29 |
| AMOC + BAT | 83.88 | 51.00 | 17.46 | 0.33 | 76.44 | 66.41 | 55.77 | 75.51 | 69.87 | 63.67 |
| B-AMOC + BAT | 83.40 | 51.08 | 18.47 | 0.37 | 75.97 | 66.45 | 56.11 | 75.15 | 69.48 | 63.83 |
| RoCL + ${\mathcal{L}}_{\mathrm{{CE}}}$ | 83.69 | 38.49 | 8.73 | 0.66 | 65.98 | 61.12 | 44.47 | **68.03** | 67.59 | 60.42 |
| B-RoCL + ${\mathcal{L}}_{\mathrm{{CE}}}$ | 85.06 | 40.44 | 9.54 | 0.63 | 65.37 | 62.86 | 47.42 | 66.42 | 66.63 | 63.67 |
| RoCL + AT | 79.65 | 47.35 | 16.33 | 0.36 | 67.33 | 65.18 | 53.38 | 68.20 | 68.17 | 65.58 |
| B-RoCL + AT | 79.69 | 49.36 | 17.41 | 0.33 | 67.58 | 66.15 | 54.64 | 68.21 | 68.54 | 66.68 |
| RoCL + BAT | 78.63 | 47.31 | 16.29 | 0.25 | 68.34 | 64.92 | 53.04 | 68.92 | 69.27 | 65.34 |
| B-RoCL + BAT | 79.69 | 49.33 | 17.22 | 0.38 | 67.59 | 66.03 | 54.47 | 68.38 | 68.43 | 66.73 |
| *Self-supervised + finetune, 200 epochs:* | | | | | | | | | | |
| AMOC + AF | 82.87 | 52.60 | 22.11 | 1.11 | 74.65 | 63.56 | 50.77 | 71.20 | 63.32 | 54.81 |
| B-AMOC + AF | 83.29 | 52.98 | 21.69 | 1.14 | 74.84 | 63.80 | 50.96 | 71.28 | 63.33 | 54.73 |
| AMOC + B-AF | 82.19 | 52.73 | 22.23 | 1.28 | 73.71 | 63.51 | 51.04 | 70.43 | 63.05 | 54.62 |
| B-AMOC + B-AF | 82.60 | 52.98 | 22.34 | 1.26 | 74.28 | 63.51 | 50.92 | 70.81 | 63.05 | 54.40 |
| *Self-supervised + finetune, 1000 epochs:* | | | | | | | | | | |
| AMOC + AF | 83.28 | 52.82 | 22.04 | 1.12 | 74.95 | 63.87 | 51.38 | 71.79 | 63.83 | 55.13 |
| B-AMOC + AF | 84.00 | 53.08 | 21.74 | 1.09 | 75.44 | 64.65 | 51.20 | 71.95 | 64.20 | 55.33 |
| AMOC + B-AF | 81.85 | 52.62 | 22.51 | 1.38 | 73.77 | 63.21 | 50.99 | 70.82 | 63.16 | 54.55 |
| B-AMOC + B-AF | 82.76 | 53.07 | 22.17 | 1.32 | 74.63 | 64.11 | 51.49 | 71.17 | 63.83 | 55.30 |
+
+Table 2: Results on Cifar-10 for self-supervised trained models. In the first part, the classification head was trained without adapting the pretrained features. In the second part, the parameters of the pretrained model were also adapted during training of the classification head. ${\mathcal{L}}_{\mathrm{{CE}}}$ , AT, and BAT define whether the classification head, and in case of finetuning the pretrained models, were trained on clean samples, on adversarial samples, or with the addition of pseudo adversarial inputs, respectively. A B- before the given self-supervised method indicates that our proposed extension was applied. During training, the initial adversarial instances were created governed by ${\ell }_{\infty }$ with a strength of $8/{255}$ .
+
+In all cases, we observe that ${\ell }_{2}$ and ${\ell }_{1}$ governed attacks tend to only slightly perturb the vast majority of pixel values while selecting a handful of pixels that are heavily perturbed. This is because the overall perturbation size for ${\ell }_{2}$ and ${\ell }_{1}$ is calculated over all pixels; those attacks tend to spend their perturbation budget on the pixels that seem to have the most impact on the classification. When the attack has the freedom to perturb each pixel independently, as is the case for ${\ell }_{\infty }$ -attacks, the overall perturbation is larger. This also underlines the observation that clean trained models are more robust to ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks, particularly when additional input data, which introduces a larger variety of pixel value combinations, is included during training. At the same time, clean trained models are completely defenceless against attacks governed by ${\ell }_{\infty }$ , which create manipulations that cannot be covered by more clean data, as the manipulations are too large and unnatural.
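The effect of the shared budget can be illustrated with a small, hypothetical numeric example (the pixel counts and budget are made up for illustration): under the same ${\ell }_{2}$ budget, concentrating the perturbation on a few pixels allows per-pixel changes far beyond what spreading it evenly permits.

```python
import numpy as np

n = 1024                 # pixels in one channel of a 32x32 image
budget_l2 = 0.5          # one of the l2 budgets from Table 1

# spread the budget evenly over all pixels ...
spread = np.full(n, budget_l2 / np.sqrt(n))
# ... or spend it on 4 "important" pixels only
concentrated = np.zeros(n)
concentrated[:4] = budget_l2 / 2.0   # 2.0 = sqrt(4)

# both perturbations have exactly the same l2 size ...
assert np.isclose(np.linalg.norm(spread), budget_l2)
assert np.isclose(np.linalg.norm(concentrated), budget_l2)

# ... but very different per-pixel (l_inf) magnitudes
print(spread.max())        # 0.015625, well below 8/255
print(concentrated.max())  # 0.25, far beyond 8/255
```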
+
+Considering these observations, we propose that BAT is less robust against small perturbations by ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks because it overfits to the observed perturbations, or more precisely to the output vectors of adversarial inputs based on ${\ell }_{\infty }$ , during training; in particular, since Cifar-10 includes only 50,000 samples.
+
+This assumption is supported by the results for the supervised trained models, reported in Table 1, which used the additional 1 million samples for training. This additional data seems to prevent BAT from overfitting to the observed adversarial perturbation, as the variability in the input data, and thereby the variability in the pseudo adversarial instances, increases. This results in higher robustness to unseen attacks compared to standard AT.
+
+Another future step to prevent the potential overfitting would be to further investigate the manipulation distributions of ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks, and in particular the distribution of their respective output vectors in the decision space. The insights gained could help apply more sophisticated data augmentation in the decision space than the simple conditioned random noise we use here. Also, observing the distribution of clean sample output vectors could help prevent pseudo adversarial inputs from jumping into a third classification area, as shown in Figure 2.
+
+## Related Work
+
+Recent works, e.g., (Madaan, Shin, and Hwang 2021; Rusak et al. 2020; Dong et al. 2020), propose to incorporate random noise into their techniques to increase the robustness of models against adversarial perturbations. To that end, (Madaan, Shin, and Hwang 2021) and (Rusak et al. 2020) employ a generator trained to create perturbations, which are applied to the input vector, i.e., the image, before it is fed through the classification model. (Dong et al. 2020) similarly aim to model a distribution for each input which, when drawn from, returns an adversarial sample for the given input with very high probability. Based on this learned adversarial distribution, the classification model itself is trained to minimize the expected loss over the adversarial distribution. In all these cases, the random manipulation is applied to the input vector, while we manipulate the output vector of a given adversarial sample. Because these works operate on a different part of the model than ours, it should be possible to combine the techniques to further increase adversarial robustness.
+
+Regarding manipulating the output vector, mixup (Zhang et al. 2018) has drawn a lot of attention recently. (Lee, Lee, and Yoon 2020) took the idea of mixup and combined it with adversarial training, calling it Adversarial Vertex Mixup (AVM). Essentially, they first create an adversarial sample and push it further in the adversarial direction to create the so-called adversarial vertex. Then, instead of using two clean inputs as in the original mixup paper, they merge the initial clean sample and the adversarial vertex to form a new input. Since the clean sample and the adversarial vertex have the same label, the authors use a label smoothing function, e.g., the one by (Szegedy et al. 2016), to convert the one-hot encoded labels to a conditionally randomized distribution; merging these two distributions gives the new label for the mixup between the clean sample and the adversarial vertex. In contrast to this work, we create multiple adversarial instances instead of one. AVM can be visualised as a line between the adversarial vertex and the clean sample, from which the new inputs are drawn. Our method creates a ball around the initial adversarial output vector, from which multiple samples are drawn as new output vectors for training.
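The contrast between the two augmentation geometries can be sketched with a toy example (the vectors and scales here are made up for illustration, not taken from either paper): AVM draws new training points from the line segment between the clean output and the adversarial vertex, while BAT scatters points around the adversarial output vector.

```python
import numpy as np

rng = np.random.default_rng(0)
f_clean = np.array([2.0, 0.0, 0.0])   # output vector of a clean sample
f_vertex = np.array([0.0, 2.0, 0.0])  # adversarial vertex (pushed past x')
f_adv = np.array([0.5, 1.5, 0.0])     # output vector of the adversarial sample

# AVM-style: interpolate on the line between clean output and adversarial vertex
lams = rng.uniform(size=5)
line_points = [lam * f_clean + (1.0 - lam) * f_vertex for lam in lams]

# BAT-style: scatter around f(x') with noise scaled by delta_{x',x} and alpha_s
delta = f_adv - f_clean
alpha_s = 0.25
ball_points = [f_adv + rng.standard_normal(3) * delta * alpha_s
               for _ in range(5)]
```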
+
+## Conclusion
+
+Data augmentation and larger datasets have been shown to be helpful, and sometimes even essential (Riquelme et al. 2021), for achieving better classification results and better generalisation. However, these techniques alone do not yield robustness against adversarial manipulations. Instead, techniques like adversarial training are necessary to harden neural networks against unforeseen perturbations that can fool the classification.
+
+Since adversarial inputs are created in and defined by the output space, which ultimately determines the decision of a model, we proposed to combine adversarial training with data augmentation in the output space, which we refer to as Broad Adversarial Training (BAT). We show that simply applying conditioned random noise to the output vectors of adversarial inputs, thereby creating multiple new pseudo adversarial inputs, can already increase the robustness, and in some cases even the clean accuracy.
+
+Extending standard Adversarial Training (AT) (Madry et al. 2017) to BAT for training on Cifar-10 increases the robustness against seen attacks by ${0.55}\%$ for the small perturbation size and by 1.11% for the larger perturbation size. For large unseen ${\ell }_{2}$ -attacks the robust accuracy also increases, by ${1.11}\%$ , and for large ${\ell }_{1}$ -attacks by ${0.29}\%$ . When the clean data pool is extended by another 1 million data points, using BAT increases the robust accuracy for all observed attacks between ${0.12}\%$ and ${0.79}\%$ , as well as the clean accuracy slightly, by ${0.05}\%$ . Similar results can be reported for self-supervised learning, where using BAT can increase the robust and clean accuracy as well.
+
+## References
+
+Athalye, A.; Engstrom, L.; Ilyas, A.; and Kwok, K. 2018. Synthesizing robust adversarial examples. In International conference on machine learning, 284-293. PMLR.
+
+Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), 39-57. IEEE.
+
+Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. 2020. A simple framework for contrastive learning of visual representations. In International conference on machine learning, 1597-1607. PMLR.
+
+Cubuk, E. D.; Zoph, B.; Shlens, J.; and Le, Q. V. 2020. Ran-daugment: Practical automated data augmentation with a reduced search space. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops, 702-703.
+
+Dong, Y.; Deng, Z.; Pang, T.; Zhu, J.; and Su, H. 2020. Adversarial Distributional Training for Robust Deep Learning. Advances in Neural Information Processing Systems, 33: 8270-8283.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
+
+Gowal, S.; Rebuffi, S.-A.; Wiles, O.; Stimberg, F.; Calian, D.; and Mann, T. 2021. Improving Robustness Using Generated Data. Advances in Neural Information Processing Systems, 34.
+
+He, K.; Fan, H.; Wu, Y.; Xie, S.; and Girshick, R. 2020. Momentum contrast for unsupervised visual representation learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 9729-9738.
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770-778.
+
+Ho, J.; Jain, A.; and Abbeel, P. 2020. Denoising diffusion probabilistic models. arXiv preprint arXiv:2006.11239.
+
+Kim, M.; Tack, J.; and Hwang, S. J. 2020. Adversarial self-supervised contrastive learning. arXiv preprint arXiv:2006.07589.
+
+Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
+
+Lee, S.; Lee, H.; and Yoon, S. 2020. Adversarial vertex mixup: Toward better adversarially robust generalization. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 272-281.
+
+Madaan, D.; Shin, J.; and Hwang, S. J. 2021. Learning to generate noise for multi-attack robustness. In International Conference on Machine Learning, 7279-7289. PMLR.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
+
+Ranjan, A.; Janai, J.; Geiger, A.; and Black, M. J. 2019. Attacking optical flow. In Proceedings of the IEEE/CVF International Conference on Computer Vision, 2404-2413.
+
+Rauber, J.; Brendel, W.; and Bethge, M. 2017. Foolbox: A python toolbox to benchmark the robustness of machine learning models. arXiv preprint arXiv:1707.04131.
+
+Riquelme, C.; Puigcerver, J.; Mustafa, B.; Neumann, M.; Jenatton, R.; Pinto, A. S.; Keysers, D.; and Houlsby, N. 2021. Scaling Vision with Sparse Mixture of Experts. arXiv preprint arXiv:2106.05974.
+
+Rusak, E.; Schott, L.; Zimmermann, R. S.; Bitterwolf, J.; Bringmann, O.; Bethge, M.; and Brendel, W. 2020. A simple way to make neural networks robust against diverse image corruptions. In European Conference on Computer Vision, 53-69. Springer.
+
+Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the inception architecture for computer vision. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2818-2826.
+
+Tang, H.; Houthooft, R.; Foote, D.; Stooke, A.; Chen, X.; Duan, Y.; Schulman, J.; De Turck, F.; and Abbeel, P. 2017. #exploration: A study of count-based exploration for deep reinforcement learning. In 31st Conference on Neural Information Processing Systems (NIPS), volume 30, 1-18.
+
+Tao, A.; Sapra, K.; and Catanzaro, B. 2020. Hierarchical multi-scale attention for semantic segmentation. arXiv preprint arXiv:2005.10821.
+
+Worzyk, N.; Kahlen, H.; and Kramer, O. 2019. Physical adversarial attacks by projecting perturbations. In International Conference on Artificial Neural Networks, 649-659. Springer.
+
+Xie, C.; Tan, M.; Gong, B.; Wang, J.; Yuille, A. L.; and Le, Q. V. 2020. Adversarial examples improve image recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 819-828.
+
+Xu, C.; and Yang, M. 2020. Adversarial momentum-contrastive pre-training. arXiv preprint arXiv:2012.13154.
+
+You, Y.; Gitman, I.; and Ginsburg, B. 2017. Large batch training of convolutional networks. arXiv preprint arXiv:1708.03888.
+
+Zhai, X.; Kolesnikov, A.; Houlsby, N.; and Beyer, L. 2021. Scaling vision transformers. arXiv preprint arXiv:2106.04560.
+
+Zhang, H.; Cisse, M.; Dauphin, Y. N.; and Lopez-Paz, D. 2018. mixup: Beyond Empirical Risk Minimization. In International Conference on Learning Representations.
+
+## Appendix
+
+## Optimization problems and loss functions
+
+In Table 3 the respective loss functions and optimization problems used in training AMOC and RoCL are listed, with an explanation of the symbols used given in the caption.
+
+## Training parameters
+
+In Table 4, all necessary hyperparameters for training the AMOC models are given, as well as for the supervised models/classification heads trained with clean, i.e., ${\mathcal{L}}_{\mathrm{{CE}}}$ , and adversarial, i.e., AT, samples. The meaning of the abbreviations, e.g., which transformations are used under the term simclr, is explained in the caption. The same applies to the comprehensive list of hyperparameters for training RoCL and the respective classification heads, given in Table 5.
+
| | |
|---|---|
| Contrastive loss | ${\mathcal{L}}_{\text{con},\theta }\left( x,\left\{ {x}_{\text{pos}}\right\} ,\left\{ {x}_{\text{neg}}\right\} \right) = - \log \frac{\mathop{\sum }\limits_{\left\{ f\left( x\right) _{\text{pos}}\right\} }\exp \left( \operatorname{sim}\left( f\left( x\right) ,\left\{ f\left( x\right) _{\text{pos}}\right\} \right) /\tau \right) }{\mathop{\sum }\limits_{\left\{ f\left( x\right) _{\text{pos}}\right\} }\exp \left( \operatorname{sim}\left( f\left( x\right) ,\left\{ f\left( x\right) _{\text{pos}}\right\} \right) /\tau \right) + \mathop{\sum }\limits_{\left\{ f\left( x\right) _{\text{neg}}\right\} }\exp \left( \operatorname{sim}\left( f\left( x\right) ,\left\{ f\left( x\right) _{\text{neg}}\right\} \right) /\tau \right) }$ |
| RoCL PGD | ${t}_{1}{\left( x\right) }^{i + 1} = {\Pi }_{B\left( {t}_{1}\left( x\right) ,\varepsilon \right) }\left( {t}_{1}{\left( x\right) }^{i} + \alpha \operatorname{sign}\left( {\nabla }_{{t}_{1}{\left( x\right) }^{i}}{\mathcal{L}}_{\operatorname{con},\theta }\left( {t}_{1}{\left( x\right) }^{i},\left\{ {t}_{2}\left( x\right) \right\} ,\left\{ {t}_{1}{\left( x\right) }_{\text{neg}}\right\} \right) \right) \right)$ |
| RoCL | $\underset{\theta }{\arg \min }\;{E}_{x \sim D}\left\lbrack \mathop{\max }\limits_{\delta \in B\left( {t}_{1}\left( x\right) ,\varepsilon \right) }{\mathcal{L}}_{\mathrm{con},\theta }\left( {t}_{1}\left( x\right) + \delta ,\left\{ {t}_{2}\left( x\right) \right\} ,\left\{ {t}_{1}{\left( x\right) }_{\mathrm{neg}}\right\} \right) \right\rbrack$ |
| MoCo loss | ${\mathcal{L}}_{\mathrm{{NCE}}} = - \log \frac{\exp \left( q \cdot {k}_{\text{pos}}/\tau \right) }{\exp \left( q \cdot {k}_{\text{pos}}/\tau \right) + \mathop{\sum }\limits_{{k}_{\text{neg}} \in \mathcal{M}}\exp \left( q \cdot {k}_{\text{neg}}/\tau \right) }$ |
| AMOC | $\mathop{\min }\limits_{{\theta }_{q},{\theta }_{k}}{E}_{x \in D}{E}_{{t}_{1},{t}_{2} \in \mathcal{T}}\mathop{\max }\limits_{\parallel \delta \parallel ,\parallel {\delta }^{\prime }\parallel \leq \varepsilon }\mathcal{L}\left( {t}_{1}\left( x\right) + \delta ,{t}_{2}\left( x\right) + {\delta }^{\prime },{\mathcal{M}}_{\text{clean}},{\mathcal{M}}_{\text{adv}}\right)$ |
| AMOC ACC | ${\mathcal{L}}_{\mathrm{{ACC}}} = {\mathcal{L}}_{\mathrm{{NCE}}}\left( {f}_{q}\left( {t}_{1}\left( x\right) + \delta ;{\mathrm{{BN}}}_{\mathrm{{adv}}}\right) ,{f}_{k}\left( {t}_{2}\left( x\right) ;{\mathrm{{BN}}}_{\text{clean}}\right) ,{\mathcal{M}}_{\text{clean}}\right)$ |
+
+Table 3: Loss functions and optimization problems defined for the used self-supervised adversarial training methods. In all formulations $x$ is a given clean sample, $f$ indicates the observed model, and $\delta$ is the adversarial perturbation. Further, ${t}_{1}$ and ${t}_{2}$ define two different random transformations, while $\tau$ is a temperature hyperparameter. The contrastive loss is used within the RoCL framework, based on SimCLR, where ${x}_{\text{pos}}$ and ${x}_{\text{neg}}$ give the positive and negative samples, respectively. The similarity sim between two output vectors is calculated as the cosine similarity. To create an adversarial sample, the PGD equation (cf. Equation 1) is adapted by replacing the cross entropy loss with the contrastive loss. For the MoCo loss, $q$ and $k$ represent the query and key encoder, respectively, and $\mathcal{M}$ is the used memory bank. While standard MoCo only defines one memory bank, within the AMOC framework the authors use two memory banks ${\mathcal{M}}_{\text{clean}}$ and ${\mathcal{M}}_{\text{adv}}$ to store the clean and adversarial historical samples. AMOC ACC is one specific loss function within the overall AMOC framework, for which the authors report good results, and which is therefore used in this study.
+
+Further details:
+
+- All experiments were run on an NVIDIA GeForce GTX 1080 GPU.
+
+- For RoCL training, we were not able to use the suggested batch size of 256 per GPU with our hardware.
+
+- For RoCL, we changed the projector to consist of two linear layers instead of one, followed by a normalization layer.
+
+- Finetuning RoCL with only adversarial inputs led, in our experiments, to a classification accuracy of ${10}\%$ . Using additional clean samples, we achieved a robust accuracy of around ${30}\%$ , which is ${10}\%$ lower than the reported values and not comparable to standard adversarial training. Therefore, RoCL + AF was excluded from our experiments.
+
+## Additional time demand
+
+In Table 6, the mean time required for one step of the indicated adversarial training method is listed. Further down, we split the time demand into its parts: the creation of the initial adversarial instance alone takes up between 40.97% and ${66.92}\%$ of the overall time. Calculating ${\delta }_{{x}^{\prime }, x}$ is only required once. Because for AT the output vector of the clean sample is not calculated during training, its proportional time requirement is comparably large relative to the unsupervised methods, where the output vector is already calculated independently of our adaptation. Creating each pseudo adversarial input only adds a small portion, between ${0.20}\%$ and ${0.52}\%$ , to the overall time demand per step. For RoCL, the evaluation of the loss function furthermore takes up 84.96% of the time to create one pseudo adversarial instance. We explain this comparably large time demand by the fact that the RoCL framework implements the loss function itself, while AMOC uses the cross-entropy loss provided by PyTorch and performs very little computation of its own in the context of the loss evaluation.
+
+## Differences between the attacks
+
+
 | parameter | AMOC 200 | AMOC 1000 | AT head | AT | ${\mathcal{L}}_{\mathrm{{CE}}}$ head | ${\mathcal{L}}_{\mathrm{{CE}}}$ |
| GPU | 1 | 1 | 1 | 1 | 1 | 1 |
| optimizer | sgd | sgd | sgd | sgd | sgd | sgd |
| momentum | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 | 0.9 |
| weight decay | 5e-4 | 5e-4 | 5e-4 | 5e-4 | 2e-4 | 2e-4 |
| learning rate | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 | 0.1 |
| - decay | cosine | cosine | FC | TOTAL | FC | TOTAL |
| epochs | 200 | 1000 | 25 | 40 | 25 | 40 |
| warmup epochs | 10 | 10 | | | | |
| batch size | 256 | 256 | 128 | 128 | 128 | 128 |
| transform | simclr | simclr | default | default | default | default |
| attack: |
| type | ${\ell }_{\infty }$ | ${\ell }_{\infty }$ | ${\ell }_{\infty }$ | ${\ell }_{\infty }$ | | |
| $\varepsilon$ | 8/255 | 8/255 | 8/255 | 8/255 | | |
| step size | 2/255 | 2/255 | 2/255 | 2/255 | | |
| #steps | 5 | 5 | 10 | 10 | | |
| attack weight $\kappa$ | 0.5 | 0.5 | 1.0 | 1.0 | | |
| scatter operation: | | | | | | |
| ${s}_{k}$ | 10 | 10 | 10 | 10 | | |
| ${\alpha }_{s}$ | 0.25 | 0.25 | 2.5 | 2.5 | | |
| scatter decay | cosine | cosine | cosine | cosine | | |
| scatter weight $\lambda$ | 0.5 | 0.5 | 1.0 | 1.0 | | |
| MoCo specific: |
| dim_mlp | 512 | 512 | | | | |
| dim_head | 128 | 128 | | | | |
| $\tau$ | 0.2 | 0.2 | | | | |
| #samples in ${\mathcal{M}}_{\text{clean }}$ | 32768 | 32768 | | | | |
| #samples in ${\mathcal{M}}_{\text{adv }}$ | 32768 | 32768 | | | | |
| key encoder momentum | 0.999 | 0.999 | | | | |
+
+Table 4: Full list of parameters for training AMOC, as well as the supervised models/classification heads with clean, i.e., ${\mathcal{L}}_{\mathrm{{CE}}}$ , and adversarial, i.e., AT, samples. FC decays the learning rate by a factor of 10 at epochs 10 and 15, while TOTAL reduces the learning rate by a factor of 10 at epochs 30 and 35. The default transformation is implemented as padding by 4, random resized cropping to 32, and random horizontal flipping. The simclr transformation is composed of: random cropping of size 32; applying color jitter with a strength of 0.4 to the brightness, contrast, and saturation, while the hue is perturbed with strength 0.1, all with a probability of 0.8; random grayscale with a probability of 0.2; applying Gaussian blur with a probability of 0.5; and random horizontal flipping. All inputs are converted to tensors, i.e., to the range [0, 1].
+
+| parameter | RoCL | AT head | ${\mathcal{L}}_{\mathrm{{CE}}}$ head |
| GPU | 2 | 1 | 1 |
| base optimizer | SGD | SGD | SGD |
| - momentum | 0.9 | 0.9 | 0.9 |
| - weight decay | 1e-6 | 5e-4 | 5e-4 |
| - learning rate | 0.1 | 0.2 | 0.2 |
| optimizer | LARS | | |
| - eps | 1e-8 | | |
| - trust_coeff | 0.001 | | |
| learning rate decay | cosine | | |
| warmup | GradualWarmUp | | |
| - lr multiplier | 15 | | |
| - warmup epochs | 10 | | |
| epochs | 1000 | 150 | 150 |
| batch size | 128 per GPU | 128 | 128 |
| transform | simclr | simclr | simclr |
| attack: |
| type | ${\ell }_{\infty }$ | ${\ell }_{\infty }$ | |
| $\varepsilon$ | 0.0314 (≈8/255) | 0.0314 (≈8/255) | |
| step size | 0.007 (≈2/255) | 0.007 (≈2/255) | |
| #steps | 7 | 10 | |
| attack weight $\kappa$ | 1.0 | 1.0 | |
| scatter operation: |
| ${s}_{k}$ | 10 | 10 | |
| ${\alpha }_{s}$ | 0.1 | 2.5 | |
| scatter decay | stepwise | cosine | |
| scatter weight $\lambda$ | 0.25 | 1.0 | |
| RoCL specific: |
| $\tau$ | 0.5 | | |
| ${\lambda }_{\text{RoCL }}$ | 256 | | |
+
+Table 5: Full list of parameters for training RoCL, as well as the supervised classification heads with clean, i.e., ${\mathcal{L}}_{\mathrm{{CE}}}$ , and adversarial, i.e., AT, samples. To train RoCL, (Kim, Tack, and Hwang 2020) use the LARS (You, Gitman, and Ginsburg 2017) optimizer based on SGD with the given parameters. The initial learning rate is increased during the first 10 epochs by an overall factor of 15. Afterwards, the learning rate is decayed by a cosine scheduler. Their input transformation is composed of: applying color jitter with a strength of 0.4 to the brightness, contrast, and saturation, while the hue is perturbed with strength 0.1, all with a probability of 0.8; random grayscale with a probability of 0.2; random horizontal flipping; and random resized cropping of size 32. All inputs are converted to tensors, i.e., to the range $\left\lbrack {0,1}\right\rbrack$ .
+
+
+
+Figure 3: Distribution of perturbation size, measured regarding ${\ell }_{\infty }$ , for attacks governed by ${\ell }_{2}$ (blue), ${\ell }_{1}$ (orange), and ${\ell }_{\infty }$ (green) on the indicated trained model. The gray line marks a perturbation size of ${\ell }_{\infty } = 8/{255}$ , the size of adversarial inputs seen during adversarial training.
+
+
+
+Figure 4: Perturbation for each pixel governed by ${\ell }_{2}$ (top), ${\ell }_{1}$ (middle), and ${\ell }_{\infty }$ (bottom), measured regarding ${\ell }_{\infty }$ on a ${\mathcal{L}}_{\mathrm{{CE}}}$ trained model.
+
+
+
+Figure 5: Perturbation for each pixel governed by ${\ell }_{2}$ (top), ${\ell }_{1}$ (middle), and ${\ell }_{\infty }$ (bottom), measured regarding ${\ell }_{\infty }$ on a PGD adversarial trained model.
+
+
+
+Figure 6: Perturbation for each pixel governed by ${\ell }_{2}$ (top), ${\ell }_{1}$ (middle), and ${\ell }_{\infty }$ (bottom), measured regarding ${\ell }_{\infty }$ on our proposed broad adversarial trained model.
+
+| operation | mean in ms | std in ms | $\%$ of overall time |
| AT: |
| overall time per step | 777.55 | 30.37 | |
| create initial adversarial input | 429.14 | 16.61 | 55.19 |
| calculate ${\delta }_{{x}^{\prime }, x}$ | 43.65 | 4.03 | 5.61 |
| create one pseudo adversarial input | 1.56 | 3.68 | 0.20 |
| calculate the loss within scattering | 0.09 | 0.02 | 0.01 |
| AMOC: |
| overall time per epoch | 1180.50 | 59.58 | |
| create initial adversarial input | 483.61 | 26.72 | 40.97 |
| calculate ${\delta }_{{x}^{\prime }, x}$ | 0.10 | 0.01 | 0.01 |
| create one pseudo adversarial input | 3.65 | 7.23 | 0.31 |
| calculate the loss within scattering | 0.26 | 0.19 | 0.02 |
| RoCL: |
| overall time per epoch | 1342.06 | 52.27 | |
| create initial adversarial input | 898.09 | 35.55 | 66.92 |
| calculate ${\delta }_{{x}^{\prime }, x}$ | 0.81 | 0.14 | 0.06 |
| create one pseudo adversarial input | 7.04 | 1.25 | 0.52 |
| calculate the loss within scattering | 5.98 | 1.22 | 0.45 |
+
+Table 6: Time demand for different operations during BAT given in ms. For each method we list the overall mean time and standard deviation for one step, as well as the time required to calculate the initial adversarial input. The overall scatter operation is split into calculating ${\delta }_{{x}^{\prime }, x}$ , which is only performed once, and the creation of one pseudo adversarial input. In particular, we also list the time required to evaluate the loss function for the created pseudo adversarial instance.
+
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EfhpoMWSqUN/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EfhpoMWSqUN/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..dc89232f25c8b18f12d3f43ff20ff5d6daf251ee
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/EfhpoMWSqUN/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,343 @@
+§ BROAD ADVERSARIAL TRAINING WITH DATA AUGMENTATION IN THE OUTPUT SPACE
+
+§ ABSTRACT
+
+In image classification, data augmentation and the usage of additional data have been shown to increase the efficiency of clean training and the accuracy of the resulting model. However, this does not prevent models from being fooled by adversarial manipulations. To increase the robustness, Adversarial Training (AT) is an easy, yet effective and widely used method to harden neural networks against adversarial inputs. Still, AT is computationally expensive and only creates one adversarial input per sample of the current batch. We propose Broad Adversarial Training (BAT), which combines adversarial training and data augmentation in the decision space, i.e., on the model's output vector. By adding random noise to the original adversarial output vector, we create multiple pseudo adversarial instances, thus increasing the data pool for adversarial training. We show that this general idea is applicable to two different learning paradigms, i.e., supervised and self-supervised learning. Using BAT instead of AT for supervised learning, we can increase the robustness by ${0.56}\%$ for small seen attacks. For medium and large seen attacks, the robustness increases by ${4.57}\%$ and ${1.11}\%$ , respectively. On large unseen attacks, we can also report an increase in the robustness by 1.11% and 0.29%. When combining a larger corpus of input data with our proposed method, we report a slight increase in the clean accuracy and increased robustness against all observed attacks, compared to AT. In self-supervised training, we observe a similar increase in robust accuracy for seen attacks and large unseen attacks when it comes to the downstream task of image classification. In addition, for both observed self-supervised models, the clean accuracy also increases by up to 1.37% using our method.
+
+§ INTRODUCTION
+
+The performance of deep learning models in various domains, e.g., image classification (Zhai et al. 2021), semantic image segmentation (Tao, Sapra, and Catanzaro 2020), or reinforcement learning (Tang et al. 2017), is already at a high level and constantly improving. Among other aspects, ongoing research and advances in data augmentation techniques (Cubuk et al. 2020), as well as the creation of more realistic synthetic inputs (Ho, Jain, and Abbeel 2020), contribute to this success. Both techniques aim to enrich the training data, which increases the performance. However, when it comes to safety-critical applications, e.g., autonomous driving, adversarial inputs pose a threat. By applying small but malicious manipulations to the input, the prediction of the model can change drastically.
+
+Starting by manipulating digital inputs, several authors, e.g., (Goodfellow, Shlens, and Szegedy 2014; Carlini and Wagner 2017; Madry et al. 2017), developed different techniques to calculate and create the necessary manipulations to fool neural networks into misclassifying a given input. Later, these attacks were adapted or extended to also work in the physical world (Athalye et al. 2018; Worzyk, Kahlen, and Kramer 2019; Ranjan et al. 2019).
+
+One widely used technique to harden neural networks against such attacks is Adversarial Training (AT), which is simple yet very effective. The idea is to create adversarial inputs during the training process and include them in, or use them exclusively for, training. Thereby, the model learns to be more resilient against these worst-case perturbations. (Madry et al. 2017), for example, proposed a method referred to as Projected Gradient Descent (PGD), which is very successful in finding adversarial inputs, and use these adversarial instances exclusively for training a given model. Even though effective, all adversarial training techniques known to us create only one adversarial input per sample in the current batch.
+
+To increase the impact of any given adversarial instance during adversarial training, we propose to combine adversarial training and data augmentation in the decision space, specifically, manipulating the output vector of a given adversarial instance. In contrast to clean inputs, which represent the assumed reality in the input space, adversarial instances are calculated, and basically exist, based on flaws in the decision boundary of a model. Ultimately, the output vector of a given model defines the decision calculated by the model. Similar to data augmentation on clean inputs increasing the data pool and leading to better clean performance, increasing the data pool of adversarial samples in the output space leads to better robustness. The overall concept is shown in Figure 1.
+
+In Figure 1a, the decision space during traditional adversarial training is displayed. Based on the current decision boundary (bold line) and the output vector for a clean sample (orange circle), an adversarial input (blue cross) is created whose output vector is located on the wrong side of the decision boundary. The adversarial decision boundary (dashed line) is then optimized to account for the adversarial input.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+
+Figure 1: Difference between traditional (1a) and Broad (1b) Adversarial Training. Given the output vector of a clean sample (orange circle) and the current decision boundary (solid line), adversarial inputs (blue cross) lying on the wrong side of the decision boundary are created. When adding the adversarial samples to the training process, the decision boundary adapts accordingly (dashed line). During Broad Adversarial Training, the initial adversarial output vector is perturbed randomly within a given radius to create a set of additional pseudo adversarial inputs (smaller blue crosses). This extends the impact of any single adversarial input on the adversarial decision boundary (dashed line) during training.
+
+Figure 1b outlines our extension to this process. Based on the output vector of an adversarial input (large blue cross), multiple pseudo adversarial inputs (small blue crosses) are created by applying conditioned normally distributed random noise within a predefined radius to the initial adversarial output vector. By scattering the adversarial output vector, we widen its impact on the new adversarial decision boundary (dashed line).
+
+The remainder of this paper is structured as follows. In Section, we introduce the supervised, as well as self-supervised adversarial training methods used for the experiments in this paper. In Section, we define our proposed BAT method in more detail, followed by the experiments and their discussion in Section . Section puts our work in context with other related work, while Section concludes this paper.
+
+§ BACKGROUND
+
+§ SUPERVISED ADVERSARIAL TRAINING
+
+PGD: One of the most widely known techniques for adversarial training is Projected Gradient Descent (PGD), proposed by (Madry et al. 2017). Instead of using clean samples during training, the authors use the corresponding adversarial instances, created with the following iterative equation,
+
+$$
+{x}^{0} = x
+$$
+
+$$
+{x}^{i + 1} = {\Pi }_{B\left( {x,\varepsilon }\right) }\left( {{x}^{i} + \alpha \operatorname{sign}\left( {{\nabla }_{{x}^{i}}{\mathcal{L}}_{\mathrm{{CE}}}\left( {\theta ,{x}^{i},y}\right) }\right) }\right) . \tag{1}
+$$
+
+This function is initialized with the original input $x$ and calculates the gradients of the cross-entropy loss ${\mathcal{L}}_{\mathrm{{CE}}}$ , with respect to the intermediate adversarial input ${x}^{i}$ , away from the true label $y$ , based on the current parameters of the model $\theta$ . The sign of the gradients is multiplied by a step size parameter $\alpha$ and added to the current intermediate adversarial input. The projection $\Pi$ then limits the perturbation to be within an $\varepsilon$-ball around the initial input $x$ . The parameter $\varepsilon$ essentially governs the allowed amount of perturbation. For example, $\varepsilon = 8/{255}$ regarding the ${\ell }_{\infty }$ norm states that each pixel of the original input may be increased or decreased by at most 8 intensity values out of 255.
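As a concrete illustration, one run of this iteration can be sketched in a few lines of NumPy for a toy linear softmax classifier, for which the input gradient of the cross-entropy loss has a closed form; the function and the toy model are our own illustration, not the implementation used in the paper.

```python
import numpy as np

def pgd_linf(x, y, W, b, eps=8/255, alpha=2/255, steps=10):
    """Iterative l_inf PGD (cf. Equation 1) against a toy linear softmax model.

    For logits W @ x + b, the gradient of the cross-entropy loss w.r.t.
    the input is W.T @ (softmax(logits) - onehot(y)).
    """
    x_adv = x.copy()
    for _ in range(steps):
        logits = W @ x_adv + b
        p = np.exp(logits - logits.max())
        p /= p.sum()                       # softmax probabilities
        grad_logits = p.copy()
        grad_logits[y] -= 1.0              # dL_CE / dlogits
        grad_x = W.T @ grad_logits         # dL_CE / dx
        x_adv = x_adv + alpha * np.sign(grad_x)   # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # stay in the valid pixel range
    return x_adv
```

By construction, the returned input differs from $x$ by at most $\varepsilon$ in every pixel and stays in the valid range $\left\lbrack {0,1}\right\rbrack$ .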
+
+§ SELF-SUPERVISED ADVERSARIAL TRAINING
+
+More recently, adversarial training has also been applied to self-supervised training. The general goal of self-supervision is to train for some pretext task where no labels are required. After training, the model and its parameters are transferred to a given downstream task, e.g., image classification. For this purpose, the overall model in self-supervised learning is split into a backbone and a projector. The backbone can be based on, e.g., a ResNet architecture (He et al. 2016), stripped of the last fully connected layer. The projector reduces the dimensionality of the backbone's output vector to a usually 128-dimensional vector. To train without labels, given a batch of samples $\left\{ {{x}_{1},\ldots ,{x}_{b}}\right\}$ , each sample is duplicated and transformed by a given series of random transformations $t$ , e.g., cropping and flipping. The resulting transformed versions of the same origin, i.e., ${t}_{1}\left( {x}_{i}\right)$ and ${t}_{2}\left( {x}_{i}\right)$ , are called a positive pair, while pairs of samples with different origins, i.e., $t\left( {x}_{i}\right)$ and $t\left( {x}_{j}\right)$ with $i \neq j$ , are called a negative pair. The pretext task of the models used in this paper is to maximise the distance between the output vectors of negative pairs while minimising the distance between the output vectors of positive pairs. How the distance is calculated differs between the training techniques.
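As a minimal sketch of such a pretext objective, the following NumPy code evaluates an InfoNCE-style loss over cosine similarities for a single anchor; the function names are ours, and real frameworks operate on whole batches of projector outputs.

```python
import numpy as np

def cosine_sim(a, b):
    # sim(a, b) = a.b / (|a| |b|)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def contrastive_loss(z_anchor, z_pos, z_negs, tau=0.5):
    """InfoNCE-style loss: pull the positive pair (two transforms of the
    same image) together while pushing the negatives (transforms of
    other images) away, with temperature tau."""
    pos = np.exp(cosine_sim(z_anchor, z_pos) / tau)
    neg = sum(np.exp(cosine_sim(z_anchor, z_n) / tau) for z_n in z_negs)
    return float(-np.log(pos / (pos + neg)))
```

The loss shrinks as the anchor and its positive align, and grows as they diverge, which is exactly the push-pull behaviour described above.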
+
+After pretraining for the pretext task, the backbone is kept and the projector is discarded. Instead of the projector, a downstream task-specific head is attached. The parameters of the backbone are usually frozen and only the head is trained. In essence, self-supervised pretraining aims to learn good feature representations which can, later on, be used for the given downstream task.
+
+RoCL: (Kim, Tack, and Hwang 2020) added adversarial training to the SimCLR (Chen et al. 2020) framework and dubbed it Robust Contrastive Learning (RoCL). To calculate the distance between positive and negative pairs, SimCLR uses the cosine similarity sim. Because of the different loss function, (Kim, Tack, and Hwang 2020) adapted PGD (cf. Equation 1) accordingly to use the contrastive loss instead of the cross-entropy loss. All functions are given in Table 3 in Appendix .
+
+To implement adversarial training, (Kim, Tack, and Hwang 2020) essentially use the adversarial inputs to extend the positive and negative pairs to triplets. During training, they aim to minimize the distance between the two transformed inputs of the positive pairs, as well as the distances between each of the two transformed inputs and the adversarial input created thereof, while maximising the distance to the negative samples. The formalization of the RoCL objective is given in Table 3, where $t\left( x\right) + \delta$ is the adversarial sample. The overall training loss is then calculated as the standard contrastive loss, considering only the clean transformed samples, plus the adversarial loss based on the triplets of the two transformed inputs and the additional adversarial input.
+
+One challenge using SimCLR as the basic framework is that it requires a large batch size to achieve good performance (Chen et al. 2020). In this type of self-supervised learning, the number of observed negative samples is essential for a good performance, and SimCLR does not incorporate any form of dictionary or memory bank to increase their number. Only the samples from the current batch are used for the calculations. When including adversarial samples into the training process, the number of inputs to be stored on the GPU is increased and results in a reduction of the feasible batch size.
+
+AMOC: Another widely known self-supervised framework is Momentum Contrast (MoCo), proposed by (He et al. 2020). The conceptual idea is the same as for SimCLR, i.e., minimising the distance between positive instances while maximising the distance towards negative samples. However, to overcome the problem of large batch sizes, (He et al. 2020) implement a dictionary or memory bank. In addition, they use two networks of the same architecture and the same initial weights. One model is referred to as the query encoder, which is updated after each batch as usual. The other model is called the momentum or key encoder, whose parameters are a copy of the query encoder's, delayed by a predefined momentum. This makes the output of the key encoder slightly different from the output of the query encoder, which can be considered an additional form of data augmentation. After the inputs of the current batch are processed by both models, the output vectors of the momentum encoder are enriched by output vectors of previous batches, stored in the dictionary. Thereby, a large number of negative samples can be provided consistently, leading to overall better performance. The loss, given in Table 3, is then calculated based on the output vectors $q$ of the query encoder and the enriched output vectors $k$ of the key encoder. Furthermore, $\tau$ is a temperature parameter, and $\mathcal{M}$ refers to the memory bank of old key vectors.
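The delayed parameter copy can be sketched as an exponential moving average over the query encoder's parameters; a minimal illustration (the function name is ours, the momentum value matches Table 4):

```python
import numpy as np

def momentum_update(theta_k, theta_q, m=0.999):
    """MoCo-style key-encoder update: each key parameter trails the
    corresponding query parameter, theta_k <- m*theta_k + (1-m)*theta_q."""
    return [m * k + (1.0 - m) * q for k, q in zip(theta_k, theta_q)]
```

With $m$ close to 1, the key encoder changes slowly, which keeps the keys stored in the memory bank consistent across batches.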
+
+Based on this framework, (Xu and Yang 2020) proposed an extension for adversarial training. They introduce a second memory bank to store exclusively the historic adversarial output vectors, and to further disentangle the clean and adversarial distributions, they use dual Batch Normalization as proposed by (Xie et al. 2020). The optimization problem they solve is given in Table 3, with ${t}_{1}$ and ${t}_{2}$ being two different random transformations from a set $\mathcal{T}$ of possible transformations, and $\delta$ being the adversarial perturbation. ${\mathcal{M}}_{\text{ clean }}$ and ${\mathcal{M}}_{\text{ adv }}$ refer to the clean and adversarial memory bank, respectively. As for the loss function, (Xu and Yang 2020) tested different memory bank and batch normalization combinations and reported good results for a combination they refer to as ACC. The 'A' indicates that the adversarial perturbation is injected into the query encoder, while the key encoder does not observe any perturbation, and that the clean memory bank ${\mathcal{M}}_{\text{ clean }}$ is used. The formulation to calculate the ACC loss is given in Table 3. Intuitively, by comparing the adversarial output vectors of the query encoder with clean samples from the key encoder, as well as the memory bank, the query encoder ${f}_{q}$ learns to classify adversarial inputs like their clean augmentations. To create the adversarial perturbation in the first place, (Xu and Yang 2020) use PGD as well, but with the MoCo loss instead of the cross-entropy loss. Finally, the overall training loss is calculated as a weighted sum of the standard MoCo loss, trained solely on clean data, and the selected, e.g., ACC, loss to incorporate adversarial instances.
+
+§ METHOD
+
+Multiple approaches for adversarial training are outlined in Section . Our goal is to develop a method that is not specifically tailored to one approach, but rather generalizes between different sorts of adversarial training. Therefore, given an input $x$ and a neural network $f$ , $f\left( x\right)$ denotes the general output vector. In supervised learning, this vector would be the logits, while in self-supervised learning, it would be the 128-dimensional output vector of the projector.
+
+After an adversarial input ${x}^{\prime }$ , and in consequence also its output vector $f\left( {x}^{\prime }\right)$ , has been created by one of the approaches in Section , we create multiple pseudo adversarial output vectors $f{\left( {x}^{\prime }\right) }_{s}$ by adding conditioned normally distributed random noise,
+
+$$
+f{\left( {x}^{\prime }\right) }_{s} = f\left( {x}^{\prime }\right) + \mathcal{N}\left( {0,1}\right) \cdot {\delta }_{{x}^{\prime },x} \cdot {\alpha }_{s}, \tag{2}
+$$
+
+where ${\delta }_{{x}^{\prime },x}$ is defined as
+
+$$
+{\delta }_{{x}^{\prime },x} = f\left( {x}^{\prime }\right) - f\left( x\right) \tag{3}
+$$
+
+and ${\alpha }_{s}$ is a hyperparameter used to scale the ball around the initial adversarial output vector $f\left( {x}^{\prime }\right)$ that is introduced by the random noise. Intuitively, ${\delta }_{{x}^{\prime },x}$ defines the element-wise difference by which the initial adversarial output vector was moved away from the original instance in the decision space, while ${\alpha }_{s}$ scales this initial manipulation.
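Equations 2 and 3 amount to a few array operations per sample; a minimal NumPy sketch (function and argument names are ours):

```python
import numpy as np

def scatter_outputs(f_adv, f_clean, s_k, alpha_s, rng):
    """Create s_k pseudo adversarial output vectors (Equations 2 and 3).

    delta is the element-wise shift the attack induced in the output
    space; each pseudo instance perturbs f_adv with Gaussian noise
    scaled by this shift and the radius hyperparameter alpha_s."""
    delta = f_adv - f_clean                  # Equation 3
    noise = rng.standard_normal((s_k, f_adv.size))
    return f_adv + noise * delta * alpha_s   # Equation 2, one row per instance
```

With ${\alpha }_{s} = 0$ , all pseudo instances coincide with $f\left( {x}^{\prime }\right)$ ; larger values widen the scatter radius around it.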
+
+To confirm that this type of decision space data manipulation is suitable and label preserving, we create randomly perturbed adversarial output vectors for a standard, clean trained model (ST) and track their classification behaviour. In Figure 2, the results for an ST model are shown in the leftmost bar of each group. The blue (bottom) portion of the bar indicates the percentage of pseudo adversarial instances being classified the same as the initial adversarial input. The orange (middle) portion indicates the number of pseudo adversarial output vectors returning to the classification area of the initial clean sample, and the green (top) portion gives the percentage of instances that move to a third classification area, which is neither the class of the clean nor of the initial adversarial sample.
+
+
+Figure 2: Percentages of pseudo adversarial inputs being classified as indicated, depending on the perturbation scaling factor ${\alpha }_{s}$ . The bars of each group show the results for the following models: left bar: standard trained model; middle bar: adversarially trained model; right bar: broad adversarially trained model. C(noisy) $=$ C(adversarial) indicates that the perturbed adversarial output vector is classified the same as the initial adversarial input. C(noisy) $=$ C(clean) gives the percentage of pseudo adversarial inputs which return to the original true classification area, while C(noisy) $\neq$ C(adversarial) $\neq$ C(clean) gives the percentage of pseudo adversarial inputs moving to some different, third class when perturbed randomly.
+
+We can observe that for sufficiently small perturbations, ${100}\%$ of the pseudo adversarial instances are classified the same as the initial adversarial input. This demonstrates empirically that the applied conditioned random noise, as a form of data augmentation in the decision space, can be completely 'label preserving'. Only with a larger perturbation radius do more and more perturbed adversarial output vectors move towards a third classification area. The samples returning to their original true class, however, can be ignored, since the adversarial instances are labelled with the same class as their clean counterparts during training. Therefore, the assigned label for these instances would not change.
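Assuming logits as output vectors, this label-preservation check can be reproduced with a small helper that returns the fraction corresponding to the blue (bottom) portion of each bar in Figure 2; the naming is ours:

```python
import numpy as np

def scatter_agreement(f_adv, f_clean, s_k, alpha_s, rng):
    """Fraction of pseudo adversarial output vectors whose argmax still
    matches the prediction for the initial adversarial input."""
    delta = f_adv - f_clean
    scattered = f_adv + rng.standard_normal((s_k, f_adv.size)) * delta * alpha_s
    return float(np.mean(scattered.argmax(axis=1) == f_adv.argmax()))
```

For a sufficiently small ${\alpha }_{s}$ , the agreement is 1.0, i.e., the scattering is completely label preserving in the sense above.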
+
+A different perspective on the classification changes shown in Figure 2 is to empirically evaluate the local smoothness of the decision surface. If an instance moves into another classification area already for small random perturbations, the decision boundary might be sharply twisted at that point. If the instances move into another class area only at larger perturbations, the decision boundary can be assumed to be smoother.
+
+The second bar of each group displays the corresponding behaviour for an adversarially trained model. Similarly to the clean model, at small perturbation radii, almost all pseudo adversarial instances are classified the same as the initial adversarial instance. However, with increasing manipulation, more and more noisy instances move to the original or a third classification area. Compared to ST, the number of pseudo adversarial instances staying adversarial is reduced. This can be explained by the fact that the attack strength is kept constant, while the decision boundary in AT is pushed towards the observed adversarial instances. In ST, the adversarial instances are moved further into the wrong classification area and can therefore endure more random perturbation before moving either back or to a third classification area. For an AT model, the adversarial instances are already closer to the decision boundary and are therefore moved to either the original or a third classification area at smaller random perturbation sizes. This observation also indicates that, when training with pseudo adversarial output vectors, the scatter radius should be reduced over time. Thereby, the risk of assigning instances to a third classification area with a potentially incorrect label can be minimized.
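Reducing the scatter radius over time corresponds to the 'cosine' scatter decay listed in Tables 4 and 5; a minimal sketch of such a schedule on ${\alpha }_{s}$ (the function name is ours):

```python
import math

def cosine_decay(alpha_0, epoch, total_epochs):
    """Cosine schedule shrinking the scatter radius from alpha_0 to 0
    over training, so late pseudo instances stay close to f(x')."""
    return alpha_0 * 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))
```

Early in training the full radius ${\alpha }_{0}$ is used; towards the end the pseudo adversarial output vectors collapse onto the initial adversarial one.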
+
+As a final comparison, the third bar of each group shows the corresponding classifications for pseudo adversarial instances on a BAT model. Here we can see that the number of pseudo adversarial instances being classified as the initial adversarial instance is higher than with normal adversarial training. The number of instances moving to a third classification area is also smaller for the BAT model than for the AT model. This indicates a smoother local decision boundary when a model is trained with BAT instead of AT.
+
+Having verified that applying random perturbations as a form of data augmentation in the decision space is a valid option, we give our overall pseudo-code in Algorithm 1. Aside from the scalar for the perturbation radius ${\alpha }_{s}$ , we also introduce a hyperparameter ${s}_{k}$ that defines the number of additionally created pseudo adversarial instances. Each additional pseudo adversarial instance only requires drawing random noise and evaluating the given loss function. The additions and multiplications to create one pseudo adversarial instance are in $O\left( 1\right)$ regarding time complexity, and evaluating the loss function, being independent of the parameters added for BAT, can also be considered to be in $O\left( 1\right)$ . Therefore, our extension to implement BAT adds a time complexity in $O\left( {s}_{k}\right)$ , linear in the number of created pseudo adversarial inputs, to the overall training procedure. In Table 6 in Appendix , the additional time demand for each scattered input during the different training methods is empirically evaluated and listed.
+
+To even out the effect of having multiple pseudo adversarial instances, we calculate their mean loss and add it, weighted by some factor $\lambda$ , to obtain the overall loss as
+
+$$
+{\mathcal{L}}_{\text{ total }} = \iota {\mathcal{L}}_{\text{ clean }} + \kappa {\mathcal{L}}_{\text{ adv }} + \lambda {\mathcal{L}}_{\text{ scatter }}, \tag{4}
+$$
+
+where $\iota ,\kappa$ , and $\lambda$ are the weights for the different loss terms.
+
+## Results and Discussion
+
+Dataset and Model: All experiments were run on the Cifar-10 dataset (Krizhevsky, Hinton et al. 2009). For supervised learning, we ran an additional set of experiments, marked with ${}^{ + }$ , which uses another 1 million synthetic data points based on Cifar-10, provided by (Gowal et al. 2021). The authors report an increase in adversarial robustness when using the additional synthetic data. The model used for all experiments is a ResNet-18 architecture, as implemented in the repositories provided by (Kim, Tack, and Hwang 2020) for RoCL and by (Xu and Yang 2020) for AMOC. The experiments for the supervised case were run based on the AMOC framework. More details are provided in Appendix .
+
+Algorithm 1: Broad Adversarial Training (BAT).
+
+Input: Dataset $D$ , model $f$ with parameters $\theta$ , loss function $\mathcal{L}$ , number of attack steps $k$ , number of scatter instances ${s}_{k}$ , scatter scalar ${\alpha }_{s}$
+
+foreach iter $\in$ number of training iterations do
+
+ foreach $x \in$ minibatch $B = \left\{ {{x}_{1},\ldots ,{x}_{m}}\right\}$ do
+
+ ${\mathcal{L}}_{\text{clean}} = \mathcal{L}\left( {f\left( x\right) }\right)$
+
+ ${x}^{\prime } =$ generateAdversarial $\left( x\right)$
+
+ ${\mathcal{L}}_{\text{adv}} = \mathcal{L}\left( {f\left( {x}^{\prime }\right) }\right)$
+
+ Broad Adversarial Operation:
+
+ ${\delta }_{{x}^{\prime },x} = f\left( {x}^{\prime }\right) - f\left( x\right)$ , ${\mathcal{L}}_{s} = 0$
+
+ for ${s}_{k}$ instances do
+
+ $\widehat{f}{\left( {x}^{\prime }\right) }_{s} = f\left( {x}^{\prime }\right) + \mathcal{N}\left( {0,1}\right) \cdot {\delta }_{{x}^{\prime },x} \cdot {\alpha }_{s}$
+
+ ${\mathcal{L}}_{s} \mathrel{+}= \mathcal{L}\left( {\widehat{f}{\left( {x}^{\prime }\right) }_{s}}\right)$
+
+ end
+
+ ${\mathcal{L}}_{\text{scatter}} = \frac{{\mathcal{L}}_{s}}{{s}_{k}}$
+
+ ${\mathcal{L}}_{\text{total}} = \iota {\mathcal{L}}_{\text{clean}} + \kappa {\mathcal{L}}_{\text{adv}} + \lambda {\mathcal{L}}_{\text{scatter}}$
+
+ Optimize $\theta$ over ${\mathcal{L}}_{\text{total}}$
+
+ end
+
+end
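The Broad Adversarial Operation of Algorithm 1 can be sketched in plain NumPy. This is a minimal illustration, not the authors' implementation: `broad_adversarial_scatter`, the toy output vectors, and the dummy loss are stand-ins for the real model outputs $f(x)$, $f(x')$ and training loss.

```python
import numpy as np

rng = np.random.default_rng(0)

def broad_adversarial_scatter(f_x, f_x_adv, alpha_s=0.25, s_k=10):
    """Create s_k pseudo adversarial output vectors by scattering f(x')
    with standard-normal noise scaled by the shift delta = f(x') - f(x)
    and the scatter scalar alpha_s."""
    delta = f_x_adv - f_x
    noise = rng.standard_normal((s_k,) + f_x_adv.shape)
    return f_x_adv + noise * delta * alpha_s  # shape (s_k, num_classes)

def scatter_loss(pseudo_outputs, loss_fn):
    """L_scatter: mean loss over the pseudo adversarial output vectors."""
    return float(np.mean([loss_fn(o) for o in pseudo_outputs]))

# toy 3-class outputs and a dummy loss, purely for illustration
f_x = np.array([2.0, 0.5, 0.1])      # clean output f(x)
f_x_adv = np.array([0.3, 2.1, 0.2])  # adversarial output f(x')
pseudo = broad_adversarial_scatter(f_x, f_x_adv, alpha_s=0.25, s_k=10)
l_scatter = scatter_loss(pseudo, lambda o: float(np.square(o).sum()))
```

In a real training loop, `l_scatter` would be combined with the clean and adversarial losses using the weights $\iota$, $\kappa$, $\lambda$ from Equation 4.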
+
+Hyperparameters for training: For all experiments, we used the hyperparameters suggested by (Kim, Tack, and Hwang 2020) for RoCL and by (Xu and Yang 2020) for AMOC, where applicable. More details are provided in Appendix .
+
+Attacks: During training, the adversarial inputs were created with a perturbation size of $\varepsilon = 8/{255}$ regarding ${\ell }_{\infty }$ . The ${\ell }_{\infty }$ attacks are therefore referred to as seen, even if only for a small perturbation size, while the ${\ell }_{2}$ and ${\ell }_{1}$ attacks were completely unseen during the training procedure. For adversarial training, we used the parameters provided by the respective frameworks, listed in Appendix .
+
+To challenge the trained models, the adversarial inputs were created with PGD as given in Equation 1, over 20 iteration steps, with a step size $\alpha$ of 0.1 relative to the allowed perturbation budget. The overall evaluation was conducted with the respective functions in the AMOC framework, which itself draws the attacks from the foolbox framework (Rauber, Brendel, and Bethge 2017).
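The evaluation attack described above can be sketched as a minimal $\ell_\infty$ PGD loop, assuming the step size is taken relative to the budget as stated. `grad_fn` is a hypothetical stand-in for the loss gradient with respect to the input; this is not the foolbox implementation used in the paper.

```python
import numpy as np

def pgd_linf(x, grad_fn, eps=8 / 255, rel_step=0.1, steps=20):
    """l_inf PGD sketch: absolute step size alpha = rel_step * eps,
    with each iterate projected back into the eps-ball around x and
    clipped to the valid pixel range [0, 1]."""
    alpha = rel_step * eps
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep valid pixels
    return x_adv

# toy input and a constant positive "gradient" for illustration
x = np.full((4, 4), 0.5)
x_adv = pgd_linf(x, grad_fn=lambda z: np.ones_like(z))
```

With 20 steps of size 0.1 of the budget, the iterate walks to the boundary of the $\varepsilon$-ball and the projection keeps it there.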
+
+Hyperparameters for BAT: For BAT, we found that ${s}_{k} = {10}$ is a good number of additional inputs. In preliminary studies we found that introducing too many additional data points adds too much noise to the training process and thereby reduces the overall performance. On the other hand, too few pseudo adversarial instances do not have any impact on the overall performance. Similarly, setting the scatter radius too small has no effect on the results, while setting it too large, as shown in Figure 2, moves the pseudo adversarial inputs increasingly towards and over the decision boundary of a different classification area and thereby reduces the performance. For supervised BAT, we found that a surprisingly large initial ${\alpha }_{s} = {2.5}$ , decayed by a cosine scheduler, yields the best results. For training AMOC, an initial ${\alpha }_{s} = {0.25}$ decayed by a cosine scheduler works best; for RoCL, an initial ${\alpha }_{s} = {0.1}$ decayed by a stepwise function that reduces ${\alpha }_{s}$ by 0.01 every 100 epochs works best.
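The two decay schedules for the scatter scalar $\alpha_s$ can be written compactly as below. The exact scheduler implementations used in the paper are not given, so these formulas are assumptions: a standard cosine decay to zero, and a stepwise decay of 0.01 every 100 epochs floored at zero.

```python
import math

def cosine_alpha(alpha_init, epoch, total_epochs):
    """Cosine decay of the scatter scalar alpha_s from alpha_init to 0."""
    return alpha_init * 0.5 * (1.0 + math.cos(math.pi * epoch / total_epochs))

def stepwise_alpha(alpha_init, epoch, step=100, decay=0.01):
    """Stepwise decay: reduce alpha_s by `decay` every `step` epochs,
    floored at 0 (the RoCL setting starts at alpha_init = 0.1)."""
    return max(alpha_init - decay * (epoch // step), 0.0)
```

Either schedule realises the recommendation above that the scatter radius should shrink over the course of training.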
+
+For the weight of the scatter loss in the overall loss, we found that in supervised broad adversarial training, equal weights for the original adversarial loss and the scatter loss work best. Similarly, while pretraining AMOC, an equal contribution of the clean, the original adversarial, and the scatter loss yields the best results. For RoCL, a weight of $\lambda = {0.25}$ for the scatter loss yields the best results, combined with a weight of $\iota = \kappa = {1.0}$ for the clean and original adversarial loss.
+
+### Results
+
+The results for the supervised experiments are given in Table 1, where each value represents the mean over 5 different runs. For the self-supervised experiments, the results are listed in Table 2. The upper part reports the results where only the classification head was optimized while the parameters of the pretrained model were frozen. The lower part, indicated by Self-supervised + finetune, reports the results where the parameters of the pretrained model were also optimized during training of the classification head. A B- in front of a method indicates that our proposed adaptation was applied to the respective training mechanism. The results for experiments run for 200 epochs are also mean values over 5 different runs.
+
+### Discussion
+
+Taking a look at the results of the supervised methods in Table 1, using only the original Cifar-10 data, we can report that the robust classification accuracy increases for all seen, as well as large unseen, attacks when using BAT instead of AT. For small seen perturbations, the robust classification accuracy increases by ${0.56}\%$ , while for a large perturbation size the accuracy increases by 1.11%. Considering unseen attacks, the robust classification accuracy for small attacks is reduced when using BAT; however, with increasing attack size, this reduction turns into an increase for large perturbation sizes. For ${\ell }_{2}$ governed attacks with $\varepsilon = {0.75}$ , the robust classification accuracy increases by ${1.11}\%$ using BAT, while for ${\ell }_{1}$ governed attacks with $\varepsilon = {16.16}$ , the robust classification accuracy increases only slightly, by ${0.29}\%$ , using BAT instead of AT.
+
+When using the additional 1 million data points, we can reaffirm that they increase the clean as well as the robust accuracy for all training methods and attacks compared to training without the additional data, as (Gowal et al. 2021) reported. Comparing AT and BAT using the additional data, BAT improves the robust classification accuracy for all observed attacks and slightly increases the clean accuracy compared to AT. Even for small unseen attacks, e.g., ${\ell }_{2}$ governed attacks with $\varepsilon = {0.25}$ , the robust accuracy increases by 0.11% using BAT over AT. For larger attacks, the robust accuracy benefits more from using BAT over AT.
+
+| Method | ${A}_{nat}$ | $\ell_\infty$ 8/255 (seen) | $\ell_\infty$ 16/255 (seen) | $\ell_\infty$ 32/255 (seen) | $\ell_2$ 0.25 | $\ell_2$ 0.5 | $\ell_2$ 0.75 | $\ell_1$ 7.84 | $\ell_1$ 12 | $\ell_1$ 16.16 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ${\mathcal{L}}_{\mathrm{CE}}$ | 93.92 | 0.00 | 0.00 | 0.00 | 8.27 | 0.17 | 0.00 | 15.07 | 3.37 | 0.61 |
+| AT | 81.85 | 52.49 | 22.21 | 1.25 | 73.83 | 63.11 | 50.91 | 70.52 | 62.95 | 54.66 |
+| BAT | 76.60 | 53.05 | 26.78 | 2.36 | 69.63 | 61.56 | 52.02 | 67.04 | 61.37 | 54.95 |
+| ${\mathcal{L}}_{\mathrm{CE}}^{+}$ | 95.04 | 0.00 | 0.00 | 0.00 | 12.42 | 0.58 | 0.04 | 21.75 | 6.21 | 1.84 |
+| ${\mathrm{AT}}^{+}$ | 84.15 | 59.22 | 29.70 | 2.60 | 76.78 | 67.51 | 56.09 | 73.47 | 66.06 | 58.05 |
+| ${\mathrm{BAT}}^{+}$ | 84.20 | 59.80 | 30.49 | 2.81 | 76.89 | 68.08 | 56.86 | 73.61 | 66.75 | 58.81 |
+
+Table 1: Results on Cifar-10 for supervised trained models with standard cross entropy training ${\mathcal{L}}_{\mathrm{{CE}}}$ , adversarial PGD training (AT), and our proposed Broad Adversarial Training (BAT). All attacks are PGD20; the $\ell_\infty$ columns were seen during training, while the $\ell_2$ and $\ell_1$ columns were unseen. For the experiments marked with ${}^{ + }$ , 1 million additional synthetic data points based on Cifar-10 were used for training. During training, the initial adversarial instances were created governed by ${\ell }_{\infty }$ with a strength of 8/255. All experiments were run 5 times and the mean value is reported.
+
+Observing the results for AMOC when only the classification head is trained, given in Table 2, we can report similar behaviour. The clean accuracy is slightly reduced, while the classification accuracy for seen attacks increases in all combinations of AMOC and head training except one combination with a large perturbation size. The increase in robustness ranges from ${0.03}\%$ to ${1.01}\%$ , depending on the attack size. When AMOC is trained for 1000 epochs instead of 200, the robust classification accuracy for large and sometimes medium unseen attacks increases as well, between 0.04% and 1.22%.
+
+For RoCL, introducing our proposed pseudo adversarial instances into the self-supervised pretraining increases the clean accuracy between ${0.04}\%$ and ${1.37}\%$ . The robustness against seen attacks also increases for small and medium-sized attacks, between 0.81% and 2.02%. Interestingly, the robustness for large seen attacks only increases by 0.13% when B-RoCL is used during pretraining and BAT is applied for the classification head. Similar to AMOC, RoCL also becomes more robust to medium and/or large unseen attacks when trained with additional pseudo adversarial inputs; the robustness there increases between ${0.25}\%$ and ${3.25}\%$ . Particularly for the combination B-RoCL+AT, our proposed pretraining leads to better clean and robust accuracy against almost all attacks compared to standard RoCL+AT.
+
+When the parameters of the pretrained models are also finetuned during training of the classification head, we observe an increase in clean as well as robust accuracy for AMOC, too. In particular, comparing B-AMOC+B-AF with AMOC+AF trained for 1000 epochs, the performance increases against almost all attacks, between ${0.35}\%$ and ${0.9}\%$ . If we take AMOC+AF as the reported baseline, B-AMOC+B-AF increases the robustness against all seen attacks between ${0.13}\%$ and ${0.25}\%$ , as well as against medium and large unseen attacks between 0.11% and ${0.24}\%$ .
+
+To further investigate why BAT is sometimes weaker against unseen attacks, we calculated the perturbation size of successful ${\ell }_{2}$ and ${\ell }_{1}$ governed attacks regarding ${\ell }_{\infty }$ . The resulting distributions are given in Figure 3 in the Appendix, where the x-axis indicates the perturbation size regarding ${\ell }_{\infty }$ and the y-axis shows the number of successful attacks. We recommend inspecting the figures digitally and zooming in for better visibility. The distribution of manipulation sizes for attacks controlled by ${\ell }_{2}$ is given in blue (legend top), while the values for ${\ell }_{1}$ -attacks are shown in orange (legend middle) and for ${\ell }_{\infty }$ -attacks in green (legend bottom). The grey vertical line marks a perturbation of ${\ell }_{\infty } = 8/{255}$ , which is the perturbation size seen during adversarial and broad adversarial training. The left column of each pair shows the corresponding distributions for a small perturbation size, while the right column shows the respective distributions for a large perturbation size.
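The measurement described above, expressing a successful $\ell_2$- or $\ell_1$-governed perturbation on the $\ell_\infty$ scale, reduces to computing norms of the difference image. A minimal sketch, with a hypothetical perturbation concentrated on a single pixel:

```python
import numpy as np

def perturbation_norms(x, x_adv):
    """Norms of the perturbation x_adv - x; the 'linf' entry is the
    value plotted on the x-axis of the distributions discussed above."""
    d = (x_adv - x).ravel()
    return {
        "linf": float(np.max(np.abs(d))),
        "l2": float(np.linalg.norm(d, 2)),
        "l1": float(np.linalg.norm(d, 1)),
    }

# hypothetical attack that spends its whole budget on one pixel:
# all three norms then coincide with that single pixel change
x = np.zeros(16)
x_adv = x.copy()
x_adv[3] = 0.5
norms = perturbation_norms(x, x_adv)
```

For such concentrated perturbations the $\ell_\infty$ size equals the largest single-pixel change, which is why sparse $\ell_1$-style attacks can look large on the $\ell_\infty$ scale.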
+
+The top row shows the results when the attacked model was trained on clean data only. We can see that the applied manipulation of attacks governed by ${\ell }_{2}$ and ${\ell }_{1}$ is generally lower than the adversarial manipulation applied by the corresponding ${\ell }_{\infty }$ -attack. This could explain why even models trained on clean samples are, to some extent, robust against ${\ell }_{2}$ and ${\ell }_{1}$ controlled attacks, as we can observe in Table 1.
+
+The second and third rows show the resulting perturbation size distributions for attacks on an adversarially trained model and a broad adversarially trained model, respectively. Here we can see that the perturbation of ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks is larger regarding ${\ell }_{\infty }$ than the perturbation of the corresponding ${\ell }_{\infty }$ -attack, especially for a small perturbation size. Since both models have seen adversarial samples of perturbation size ${\ell }_{\infty } = 8/{255}$ during training, this indicates why both also become more robust, though not perfectly so, against ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks in general, but it probably does not explain why BAT performs worse than standard AT on unseen attacks.
+
+Observing the pixel-level manipulations applied by ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks, evaluated regarding ${\ell }_{\infty }$ , might give more insight into why BAT is worse than AT for small perturbations. The resulting perturbations, shown exemplarily for the blue color channel of an observed input, are given in Figure 4 in the Appendix for a standard trained model, in Figure 5 for an AT trained model, and in Figure 6 for a broad adversarially trained model. The visualization indicates whether the pixel value of the adversarial input was increased (red, top of the bar to the right of the grid) or decreased (blue, bottom of the bar to the right of the grid) relative to the pixel value of the original input.
+
+| Method | ${A}_{nat}$ | $\ell_\infty$ 8/255 (seen) | $\ell_\infty$ 16/255 (seen) | $\ell_\infty$ 32/255 (seen) | $\ell_2$ 0.25 | $\ell_2$ 0.5 | $\ell_2$ 0.75 | $\ell_1$ 7.84 | $\ell_1$ 12 | $\ell_1$ 16.16 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| **Self-supervised, 200 epochs:** | | | | | | | | | | |
+| AMOC + ${\mathcal{L}}_{\mathrm{CE}}$ | 79.03 | 36.61 | 7.46 | 0.05 | 67.93 | 54.82 | 41.53 | 66.27 | 58.53 | 50.74 |
+| B-AMOC + ${\mathcal{L}}_{\mathrm{CE}}$ | 78.88 | 37.09 | 8.15 | 0.05 | 67.64 | 54.58 | 41.37 | 65.77 | 57.91 | 50.19 |
+| AMOC + AT | 74.79 | 43.97 | 14.53 | 0.19 | 67.10 | 58.10 | 48.09 | 66.03 | 60.78 | 54.92 |
+| B-AMOC + AT | 74.58 | 44.57 | 15.45 | 0.26 | 66.88 | 57.93 | 48.21 | 65.72 | 60.43 | 54.64 |
+| AMOC + BAT | 74.32 | 44.08 | 15.15 | 0.24 | 66.63 | 57.92 | 48.21 | 65.62 | 60.78 | 54.82 |
+| B-AMOC + BAT | 74.25 | 44.59 | 15.85 | 0.28 | 66.64 | 57.75 | 48.18 | 65.49 | 60.21 | 54.44 |
+| **Self-supervised, 1000 epochs:** | | | | | | | | | | |
+| AMOC + ${\mathcal{L}}_{\mathrm{CE}}$ | 86.52 | 44.91 | 11.46 | 0.11 | 77.04 | 63.59 | 50.39 | 75.47 | 68.27 | 59.75 |
+| B-AMOC + ${\mathcal{L}}_{\mathrm{CE}}$ | 85.90 | 45.17 | 12.02 | 0.14 | 76.78 | 64.29 | 50.99 | 75.38 | 68.64 | 60.97 |
+| AMOC + AT | 84.48 | 50.87 | 16.85 | 0.26 | 77.07 | 67.28 | 56.00 | 76.14 | 70.45 | 64.43 |
+| B-AMOC + AT | 83.80 | 50.89 | 17.81 | 0.38 | 76.35 | 66.79 | 56.16 | 75.44 | 69.78 | 64.29 |
+| AMOC + BAT | 83.88 | 51.00 | 17.46 | 0.33 | 76.44 | 66.41 | 55.77 | 75.51 | 69.87 | 63.67 |
+| B-AMOC + BAT | 83.40 | 51.08 | 18.47 | 0.37 | 75.97 | 66.45 | 56.11 | 75.15 | 69.48 | 63.83 |
+| RoCL + ${\mathcal{L}}_{\mathrm{CE}}$ | 83.69 | 38.49 | 8.73 | 0.66 | 65.98 | 61.12 | 44.47 | 68.03 | 67.59 | 60.42 |
+| B-RoCL + ${\mathcal{L}}_{\mathrm{CE}}$ | 85.06 | 40.44 | 9.54 | 0.63 | 65.37 | 62.86 | 47.42 | 66.42 | 66.63 | 63.67 |
+| RoCL + AT | 79.65 | 47.35 | 16.33 | 0.36 | 67.33 | 65.18 | 53.38 | 68.20 | 68.17 | 65.58 |
+| B-RoCL + AT | 79.69 | 49.36 | 17.41 | 0.33 | 67.58 | 66.15 | 54.64 | 68.21 | 68.54 | 66.68 |
+| RoCL + BAT | 78.63 | 47.31 | 16.29 | 0.25 | 68.34 | 64.92 | 53.04 | 68.92 | 69.27 | 65.34 |
+| B-RoCL + BAT | 79.69 | 49.33 | 17.22 | 0.38 | 67.59 | 66.03 | 54.47 | 68.38 | 68.43 | 66.73 |
+| **Self-supervised + finetune, 200 epochs:** | | | | | | | | | | |
+| AMOC + AF | 82.87 | 52.60 | 22.11 | 1.11 | 74.65 | 63.56 | 50.77 | 71.20 | 63.32 | 54.81 |
+| B-AMOC + AF | 83.29 | 52.98 | 21.69 | 1.14 | 74.84 | 63.80 | 50.96 | 71.28 | 63.33 | 54.73 |
+| AMOC + B-AF | 82.19 | 52.73 | 22.23 | 1.28 | 73.71 | 63.51 | 51.04 | 70.43 | 63.05 | 54.62 |
+| B-AMOC + B-AF | 82.60 | 52.98 | 22.34 | 1.26 | 74.28 | 63.51 | 50.92 | 70.81 | 63.05 | 54.40 |
+| **Self-supervised + finetune, 1000 epochs:** | | | | | | | | | | |
+| AMOC + AF | 83.28 | 52.82 | 22.04 | 1.12 | 74.95 | 63.87 | 51.38 | 71.79 | 63.83 | 55.13 |
+| B-AMOC + AF | 84.00 | 53.08 | 21.74 | 1.09 | 75.44 | 64.65 | 51.20 | 71.95 | 64.20 | 55.33 |
+| AMOC + B-AF | 81.85 | 52.62 | 22.51 | 1.38 | 73.77 | 63.21 | 50.99 | 70.82 | 63.16 | 54.55 |
+| B-AMOC + B-AF | 82.76 | 53.07 | 22.17 | 1.32 | 74.63 | 64.11 | 51.49 | 71.17 | 63.83 | 55.30 |
+
+Table 2: Results on Cifar-10 for self-supervised trained models. In the first part, the classification head was trained without adapting the pretrained features. In the second part, the parameters of the pretrained model were also adapted during training of the classification head. ${\mathcal{L}}_{\mathrm{{CE}}}$ , AT, and BAT define whether the classification head (and, in the case of finetuning, the pretrained model) was trained on clean inputs, adversarial inputs, or with the addition of pseudo adversarial inputs, respectively. A B- before the given self-supervised method indicates that our proposed extension was applied. During training, the initial adversarial instances were created governed by ${\ell }_{\infty }$ with a strength of $8/{255}$ .
+
+In all cases, we observe that ${\ell }_{2}$ and ${\ell }_{1}$ governed attacks tend to only slightly perturb the vast majority of pixel values while selecting a handful of pixels that are heavily perturbed. This is because the overall perturbation size for ${\ell }_{2}$ and ${\ell }_{1}$ is calculated over all pixels; those attacks tend to spend their perturbation budget on the pixels that seem to have the most impact on the classification. When the attack has the freedom to perturb each pixel independently, as is the case for ${\ell }_{\infty }$ -attacks, the overall perturbation is larger. This also underlines the observation that clean trained models are more robust to ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks, particularly when additional input data, which introduces a larger variety of pixel value combinations, is included during training. At the same time, clean trained models are completely defenceless against attacks governed by ${\ell }_{\infty }$ , which create manipulations that cannot be covered by more clean data, as the manipulations are too large and unnatural.
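The budget asymmetry described above can be made concrete with a back-of-the-envelope comparison, assuming CIFAR-10 input sizes (3 channels of 32x32 pixels) and the attack budgets used in the experiments; the variable names are illustrative.

```python
# An l_inf budget applies per pixel value, while l_1 is a single global
# budget shared across all values.
n_values = 3 * 32 * 32          # CIFAR-10 input values (assumed sizes)
eps_linf = 8 / 255              # per-value l_inf budget used in training
eps_l1 = 16.16                  # largest global l_1 budget evaluated

# an l_inf attack may move every value by eps_linf, so its total (l_1)
# change can reach n_values * eps_linf, far above the l_1 budget
max_l1_of_linf_attack = n_values * eps_linf

# an l_1 attack spreading its budget uniformly barely touches any pixel,
# so it concentrates the budget on a few impactful pixels instead
uniform_per_pixel = eps_l1 / n_values
heavy_pixels = eps_l1 / 1.0     # extreme case: whole budget on one pixel
```

Here `max_l1_of_linf_attack` is roughly 96, compared with the global $\ell_1$ budget of 16.16, which is why $\ell_1$- and $\ell_2$-governed attacks behave so sparsely.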
+
+Considering these observations, we hypothesize that BAT is less robust against small perturbations by ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks because it overfits to the observed perturbations, or more precisely to the output vectors of adversarial inputs based on ${\ell }_{\infty }$ during training, in particular since Cifar-10 includes only 50,000 samples.
+
+This assumption is supported by the results for the supervised trained models reported in Table 1, which used the additional 1 million samples for training. This additional data seems to prevent BAT from overfitting to the observed adversarial perturbation, as the variability in the input data, and thereby the variability in the pseudo adversarial instances, increases. This results in higher robustness to unseen attacks compared to standard AT.
+
+Another future step to prevent the potential overfitting would be to further investigate the manipulation distributions of ${\ell }_{2}$ - and ${\ell }_{1}$ -attacks, and in particular the distribution of their respective output vectors in the decision space. The gained insights could help to apply more sophisticated data augmentation in the decision space than the simple conditioned random noise we use here. Also, observing the distribution of clean sample output vectors could help to prevent pseudo adversarial inputs from jumping into a third classification area, as shown in Figure 2.
+
+## Related Work
+
+Recent works, e.g., (Madaan, Shin, and Hwang 2021; Rusak et al. 2020; Dong et al. 2020), propose to incorporate random noise into their techniques to increase the robustness of models against adversarial perturbations. To that end, (Madaan, Shin, and Hwang 2021) and (Rusak et al. 2020) employ a generator trained to create perturbations, which are applied to the input vector, i.e., the image, before it is fed through the classification model. (Dong et al. 2020) likewise aim to model a distribution for each input which, when drawn from, returns an adversarial sample for the given input with very high probability. Based on this learned adversarial distribution, the classification model itself is trained to minimize the expected loss over the adversarial distribution. In all these cases, the random manipulation is applied to the input vector, while we manipulate the output vector of a given adversarial sample. Because these works operate on different parts of the model, it should be possible to combine the techniques to further increase adversarial robustness.
+
+Regarding manipulation of the output vector, mixup (Zhang et al. 2018) has drawn a lot of attention recently. (Lee, Lee, and Yoon 2020) took the idea of mixup and combined it with adversarial training, calling it Adversarial Vertex Mixup (AVM). Essentially, they first create an adversarial sample and push it further in the adversarial direction to create the so-called adversarial vertex. Then, instead of using two clean inputs as in the original mixup paper, they merge the initial clean sample and the adversarial vertex to form a new input. Since the clean sample and the adversarial vertex have the same label, the authors use a label smoothing function, e.g., by (Szegedy et al. 2016), to convert the one-hot encoded labels into a conditionally randomized distribution. Merging these two distributions gives the new label for the mixup between the clean sample and the adversarial vertex. In contrast to this work, we create multiple adversarial instances instead of one. AVM can be visualised as a line between the adversarial vertex and the clean sample from which the new inputs are drawn, whereas our method creates a ball around the initial adversarial output vector from which multiple samples are drawn as new output vectors for training.
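The geometric contrast between AVM's line interpolation and BAT's ball sampling can be illustrated in a few lines. This is a conceptual sketch only: the toy output vectors are invented, and the isotropic Gaussian stands in for BAT's conditioned noise.

```python
import numpy as np

rng = np.random.default_rng(1)

def avm_line_sample(f_clean, f_vertex, lam):
    """AVM-style mixup: a new output vector on the line segment between
    the clean output and the adversarial vertex (lam in [0, 1])."""
    return lam * f_clean + (1.0 - lam) * f_vertex

def bat_ball_samples(f_adv, radius, n):
    """BAT-style scatter: n output vectors drawn from a ball around the
    adversarial output vector (isotropic noise used for illustration)."""
    return f_adv + rng.standard_normal((n,) + f_adv.shape) * radius

# toy 3-class output vectors, purely for illustration
f_clean = np.array([2.0, 0.1, 0.1])
f_vertex = np.array([0.1, 2.0, 0.1])
line_pt = avm_line_sample(f_clean, f_vertex, lam=0.5)
ball_pts = bat_ball_samples(f_vertex, radius=0.25, n=10)
```

The line sample always lies between the two endpoints, while the ball samples surround the adversarial output in every direction.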
+
+## Conclusion
+
+Using data augmentation and larger datasets has been shown to support, and sometimes even be essential for (Riquelme et al. 2021), achieving better classification results and better generalisation. However, using these techniques does not yield robustness against adversarial manipulations. Instead, techniques like adversarial training are necessary to harden neural networks against unforeseen perturbations, which can fool the classification.
+
+Since adversarial inputs are created in and defined by the output space, which ultimately leads to the decision of a model, we proposed to combine adversarial training with data augmentation in the output space, referred to as Broad Adversarial Training (BAT). We show that already applying simple conditioned random noise to the output vectors of adversarial inputs, and thereby creating multiple new pseudo adversarial inputs, can increase the robustness and, in some cases, even the clean accuracy.
+
+Extending standard Adversarial Training (AT) (Madry et al. 2017) to BAT for training on Cifar-10 increases the robustness against seen attacks by ${0.56}\%$ for small perturbations and by 1.11% for a larger perturbation size. On large unseen ${\ell }_{2}$ -attacks the robust accuracy also increases, by ${1.11}\%$ , and for large ${\ell }_{1}$ -attacks by ${0.29}\%$ . When the clean data pool is increased by another 1 million data points, using BAT increases the robust accuracy for all observed attacks between ${0.12}\%$ and ${0.79}\%$ , as well as the clean accuracy slightly, by ${0.05}\%$ . Similar results can be reported for self-supervised learning, where using BAT can increase the robust and clean accuracy as well.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Ex1yemaQgU/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Ex1yemaQgU/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..5e3346a1d79ea74cf7a8418da3eb87aa3b06dfd6
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Ex1yemaQgU/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,414 @@
+# Detecting Adversaries, yet Faltering to Noise? Leveraging Conditional Variational AutoEncoders for Adversary Detection in the Presence of Noisy Images
+
+## Abstract
+
+With the rapid advancement and increased use of deep learning models in image identification, security becomes a major concern for their deployment in safety-critical systems. Since the accuracy and robustness of deep learning models are primarily attributed to the purity of the training samples, deep learning architectures are often susceptible to adversarial attacks. Adversarial attacks are often obtained by making subtle perturbations to normal images, which are mostly imperceptible to humans but can seriously confuse state-of-the-art machine learning models. What is so special about these slight intelligent perturbations or noise additions over normal images that they lead to catastrophic misclassifications by deep neural networks? Using statistical hypothesis testing, we find that Conditional Variational AutoEncoders (CVAE) are surprisingly good at detecting imperceptible image perturbations. In this paper, we show how CVAEs can be effectively used to detect adversarial attacks on image classification networks. We demonstrate our results on the MNIST and CIFAR-10 datasets and show how our method gives comparable performance to state-of-the-art methods in detecting adversaries while not getting confused with noisy images, where most of the existing methods falter.
+
+Index Terms: Deep Neural Networks, Adversarial Attacks, Image Classification, Variational Autoencoders, Noisy Images
+
+## Introduction
+
+The phenomenal success of deep learning models in image identification and object detection has led to their wider adoption in diverse domains, ranging from safety-critical systems such as automotive and avionics (Rao and Frtunikj 2018), to healthcare applications like medical imaging, robot-assisted surgery, and genomics (Esteva et al. 2019), to robotics and image forensics (Yang et al. 2020). The performance of these deep learning architectures is often dictated by the volume of correctly labelled data used during the training phase. Recent works (Szegedy et al. 2013) (Goodfellow, Shlens, and Szegedy 2014) have shown that small and carefully chosen modifications (often in terms of noise) to the input data of a neural network classifier can cause the model to give incorrect labels. This weakness of neural networks makes it possible to mount adversarial attacks on the input image by creating perturbations which are imperceptible to humans but are able to convince the neural network to produce completely wrong results, and that too with very high confidence. Due to this, adversarial attacks may pose a serious threat to deploying deep learning models in real-world safety-critical applications. It is, therefore, imperative to devise efficient methods to thwart such adversarial attacks.
+
+Many recent works have presented effective ways in which adversarial attacks can be avoided. Adversarial attacks can be classified into whitebox and blackbox attacks. Whitebox attacks (Akhtar and Mian 2018) assume access to the weights and architecture of the neural network used for classification and are thereby specifically targeted to fool that network. Hence, they are more accurate than blackbox attacks (Akhtar and Mian 2018), which do not assume access to the model parameters. Methods for the detection of adversarial attacks can be broadly categorized as (i) statistical methods, (ii) network based methods, and (iii) distribution based methods. Statistical methods (Hendrycks and Gimpel 2016) (Li and Li 2017) focus on exploiting certain characteristics of the input images or of the final logistic-unit layer of the classifier network and try to identify adversaries through statistical inference. A drawback of such methods, as pointed out by (Carlini and Wagner 2017), is that the derived statistics may be dataset specific, so the same techniques do not generalize across other datasets and also fail against strong attacks like the CW-attack. Network based methods (Metzen et al. 2017) (Gong, Wang, and Ku 2017) aim at specifically training a binary classification neural network to identify the adversaries. These methods are restricted since they do not generalize well across unknown attacks on which these networks are not trained; they are also sensitive to changes in the amount of perturbation, such that a small increase in the perturbation values makes the detection unsuccessful. Moreover, whitebox attacks can be designed, as shown by (Carlini and Wagner 2017), which fool both the detection network and the adversary classifier network. Distribution based methods (Feinman et al. 2017) (Gao et al. 2021) (Song et al. 2017) (Xu, Evans, and Qi 2017) (Jha et al. 2018) aim at estimating the probability distribution of the clean examples and computing the probability of the input example to quantify how well it falls within the same distribution. However, some of these methods do not guarantee a robust separation of randomly perturbed and adversarially perturbed images. Hence, there is a high chance that all these methods tend to confuse random noise in the image with adversaries.
+
+---
+
+Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+To overcome this drawback, so that the learned models are robust with respect to both adversarial perturbations and sensitivity to random noises, we propose the use of a Conditional Variational AutoEncoder (CVAE) trained over a clean image set. At the time of inference, we empirically establish that an adversarial input example falls within a low probability region of the clean examples of the class predicted by the target classifier network. It is important to note that this method uses both the input image and the predicted class to detect whether the input is an adversary, as opposed to some distribution based methods which use only the distribution of the input images. On the contrary, random perturbations activate the target classifier network in such a way that the predicted output class matches the actual class of the input image, and hence the input falls within the high probability region. Thus, it is empirically shown that our method does not confuse random noise with adversarial noise. Moreover, we show how our method is robust towards special attacks which have access to the network weights of both the CVAE and the target classifier network, where many network based methods falter. Further, we show that to eventually fool our method, perturbations may need to be so large that they become visually perceptible to the human eye. The experimental results over the MNIST and CIFAR-10 datasets demonstrate the working of our proposal. In particular, the primary contributions made by our work are as follows.
+
+(a) We propose a framework based on CVAE to detect the possibility of adversarial attacks.
+
+(b) We leverage distribution based methods to effectively differentiate between randomly perturbed and adversarially perturbed images.
+
+(c) We devise techniques to robustly detect specially targeted BIM-attacks (Metzen et al. 2017) using our proposed framework.
+
+To the best of our knowledge, this is the first work which leverages use of Variational AutoEncoder architecture for detecting adversaries as well as aptly differentiates noise from adversaries to effectively safeguard learned models against adversarial attacks.
+
+## Adversarial Attack Models and Methods
+
+For a test example $X$ , an attacking method tries to find a perturbation ${\Delta X}$ such that ${\left| \Delta X\right| }_{k} \leq {\epsilon }_{atk}$ , where ${\epsilon }_{atk}$ is the perturbation threshold and $k$ is the appropriate order, generally selected as 2 or $\infty$ , so that the newly formed perturbed image is ${X}_{adv} = X + {\Delta X}$ . Here, each pixel in the image is represented by the $\langle \mathrm{R},\mathrm{G},\mathrm{B}\rangle$ tuple, where $\mathrm{R},\mathrm{G},\mathrm{B} \in \left\lbrack {0,1}\right\rbrack$ . In this paper, we consider only white-box attacks, i.e., attack methods which have access to the weights of the target classifier model. However, we believe that our method should work even better for black-box attacks, as they need more perturbation to succeed and hence should be more easily detected by our framework. For generating the attacks, we use the library by (Li et al. 2020).
+
+## Random Perturbation (RANDOM)
+
+Random perturbations are simply unbiased random values added to each pixel ranging in between $- {\epsilon }_{atk}$ to ${\epsilon }_{atk}$ . Formally, the randomly perturbed image is given by,
+
+$$
+{X}_{\text{rand }} = X + \mathcal{U}\left( {-{\epsilon }_{atk},{\epsilon }_{atk}}\right) \tag{1}
+$$
+
where $\mathcal{U}\left( {a, b}\right)$ denotes a continuous uniform distribution over the range $\left\lbrack {a, b}\right\rbrack$ .
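As a concrete illustration, Eq. 1 can be sketched in NumPy as follows; clipping the result back to the valid pixel range $[0,1]$ is our assumption, since the text does not state it.

```python
import numpy as np

def random_perturb(x: np.ndarray, eps_atk: float, rng=None) -> np.ndarray:
    """Add unbiased uniform noise in [-eps_atk, eps_atk] to each pixel (Eq. 1).

    Pixels are assumed to lie in [0, 1]; clipping back to that range after
    perturbation is our assumption, not stated in the text.
    """
    rng = rng or np.random.default_rng()
    noise = rng.uniform(-eps_atk, eps_atk, size=x.shape)
    return np.clip(x + noise, 0.0, 1.0)
```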
+
+## Fast Gradient Sign Method (FGSM)
+
Earlier work by (Goodfellow, Shlens, and Szegedy 2014) introduced the generation of malicious biased perturbations at each pixel of the input image in the direction of the loss gradient ${\Delta }_{X}L\left( {X, y}\right)$ , where $L\left( {X, y}\right)$ is the loss function with which the target classifier model was trained. Formally, the adversarial examples under the ${l}_{\infty }$ norm with bound ${\epsilon }_{atk}$ are computed by,
+
+$$
+{X}_{\text{adv }} = X + {\epsilon }_{\text{atk }} \cdot \operatorname{sign}\left( {{\Delta }_{X}L\left( {X, y}\right) }\right) \tag{2}
+$$
+
+FGSM perturbations with ${l}_{2}$ norm on attack bound are calculated as,
+
+$$
+{X}_{adv} = X + {\epsilon }_{atk} \cdot \frac{{\Delta }_{X}L\left( {X, y}\right) }{{\left| {\Delta }_{X}L\left( X, y\right) \right| }_{2}} \tag{3}
+$$
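A minimal PyTorch sketch of Eqs. 2-3; assuming cross-entropy as the training loss $L(X, y)$ and clipping the result back to the valid pixel range, both of which are our assumptions:

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps_atk, norm="inf"):
    """One-step FGSM. `model` maps images to logits; cross-entropy is
    assumed as the training loss L(X, y)."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    if norm == "inf":                      # Eq. 2: l_inf-bounded step
        x_adv = x + eps_atk * grad.sign()
    else:                                  # Eq. 3: l_2-normalized step
        g2 = grad.flatten(1).norm(dim=1).clamp_min(1e-12)
        g2 = g2.view(-1, *([1] * (x.dim() - 1)))
        x_adv = x + eps_atk * grad / g2
    return x_adv.clamp(0, 1).detach()
```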
+
+## Projected Gradient Descent (PGD)
+
Earlier work by (Kurakin, Goodfellow, and Bengio 2017) proposes a simple variant of FGSM that applies it multiple times with a step size smaller than ${\epsilon }_{atk}$ . However, since the overall perturbation after all iterations must remain within the ${\epsilon }_{atk}$ -ball of $X$ , we clip the modified $X$ at each step to the ${\epsilon }_{atk}$ -ball under the ${l}_{\infty }$ norm.
+
+$$
+{X}_{{adv},0} = X, \tag{4a}
+$$
+
$$
{X}_{{adv}, n + 1} = {\operatorname{Clip}}_{X}^{{\epsilon }_{atk}}\left\{ {{X}_{{adv}, n} + \alpha \cdot \operatorname{sign}\left( {{\Delta }_{X}L\left( {{X}_{{adv}, n}, y}\right) }\right) }\right\} \tag{4b}
$$
+
Given $\alpha$ , we take the number of iterations to be $n = \left\lfloor \frac{2{\epsilon }_{atk}}{\alpha } + 2\right\rfloor$ . This attack method has also been called the Basic Iterative Method (BIM) in some works.
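The iterative scheme of Eq. 4 can be sketched as follows; the clip operation is implemented as a projection onto the ${\epsilon }_{atk}$-ball around $X$ , and the clamp to the valid pixel range is our assumption:

```python
import torch
import torch.nn.functional as F

def pgd(model, x, y, eps_atk, alpha):
    """Iterative FGSM with l_inf clipping (Eq. 4); the iteration count
    follows the text: n = floor(2 * eps_atk / alpha + 2)."""
    n = int(2 * eps_atk / alpha + 2)
    x_adv = x.clone().detach()
    for _ in range(n):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        # project back into the eps_atk-ball around x, then the pixel range
        x_adv = torch.min(torch.max(x_adv, x - eps_atk), x + eps_atk).clamp(0, 1)
    return x_adv
```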
+
+## Carlini-Wagner (CW) Method
+
(Carlini and Wagner 2017) proposed a more sophisticated way of generating adversarial examples by solving the optimization objective shown in Equation 5. The value of $c$ is chosen by an efficient binary search. We use the same parameters as set in (Li et al. 2020) to mount the attack.
+
+$$
{X}_{\text{adv }} = {\operatorname{Clip}}_{X}^{{\epsilon }_{\text{atk }}}\left\{ {\mathop{\min }\limits_{\epsilon }\parallel \epsilon {\parallel }_{2} + c \cdot f\left( {x + \epsilon }\right) }\right\} \tag{5}
+$$
+
+## DeepFool method
+
DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016) is an even more sophisticated and efficient way of generating adversaries. It iteratively perturbs the input towards the decision boundary so as to obtain an adversary with minimum perturbation. We use the default parameters set in (Li et al. 2020) to mount the attack.
+
+## Proposed Framework Leveraging CVAE
+
In this section, we present how Conditional Variational AutoEncoders (CVAE), trained over a dataset of clean images, can capture the inherent attributes that differentiate adversaries from noisy data and separate the two using their probability distributions.
+
+## Conditional Variational AutoEncoders (CVAE)
+
A Variational AutoEncoder is a type of generative model with two components, an encoder and a decoder. The input is first passed through the encoder to obtain a latent vector for the image. The latent vector is then passed through the decoder to obtain a reconstruction of the same size as the input. The encoder and decoder are trained with two objectives. The first is to make the reconstruction as close to the input image as possible, forcing the latent vector to preserve most of the input's features in a compact representation. The second is to bring the distribution of the latent vectors over all images close to a desired prior distribution. Hence, once the variational autoencoder is fully trained, the decoder can be used to generate examples from latent vectors randomly sampled from the prior distribution with which the encoder and decoder were trained.
+
+
+
+Fig. 1: CVAE Model Architecture
+
A Conditional VAE is a variant of the VAE in which the class of the image is passed along with the input at the encoder layer, and also with the latent vector before the decoder layer (refer to Figure 1). This enables the Conditional VAE to generate examples of a specific class. The loss function for the CVAE is defined by Equation 6. The first term is the reconstruction loss, which measures how closely the input $X$ can be reconstructed given the latent vector $z$ and, as condition $c$ , the output class from the target classifier network. The second term is the KL-divergence ${\mathcal{D}}_{KL}$ between the desired distribution $P\left( {z \mid c}\right)$ and the current distribution $Q\left( {z \mid X, c}\right)$ of $z$ given the input image $X$ and the condition $c$ .
+
$$
L\left( {X, c}\right) = \mathbb{E}\left\lbrack {\log P\left( {X \mid z, c}\right) }\right\rbrack - {\mathcal{D}}_{KL}\left\lbrack {Q\left( {z \mid X, c}\right) \parallel P\left( {z \mid c}\right) }\right\rbrack \tag{6}
$$
+
+## Training CVAE Models
+
For modeling $\log P\left( {X \mid z, c}\right)$ , we use the decoder neural network to output the reconstructed image ${X}_{rcn}$ , where the condition $c$ , the output class of the image, selects the set of parameters $\theta \left( c\right)$ for the network. We compute the Binary Cross Entropy (BCE) loss between the reconstructed image ${X}_{rcn}$ and the input image $X$ to model $\log P\left( {X \mid z, c}\right)$ . Similarly, we model $Q\left( {z \mid X, c}\right)$ with the encoder neural network, which takes the image $X$ as input, uses the condition $c$ to select the model parameters $\theta \left( c\right)$ , and outputs the mean $\mu$ and log-variance $\log {\sigma }^{2}$ of an assumed Gaussian conditional distribution. We set the target distribution $P\left( {z \mid c}\right)$ to the unit Gaussian $N\left( {0,1}\right)$ with mean 0 and variance 1. The resulting loss function is as follows,
+
$$
\begin{aligned}
L\left( {X, c}\right) = {} & \operatorname{BCE}\left\lbrack {X,\operatorname{Decoder}\left( {z,\theta \left( c\right) }\right) }\right\rbrack \\
& - \frac{1}{2}\left\lbrack {\operatorname{Encoder}}_{\sigma }^{2}\left( {X,\theta \left( c\right) }\right) + {\operatorname{Encoder}}_{\mu }^{2}\left( {X,\theta \left( c\right) }\right) \right. \\
& \quad \left. { - 1 - \log \left( {{\operatorname{Encoder}}_{\sigma }^{2}\left( {X,\theta \left( c\right) }\right) }\right) }\right\rbrack
\end{aligned} \tag{7}
$$
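A sketch of this loss in PyTorch, written as a quantity to minimize (the negative of the ELBO in Equation 6, with the standard closed-form KL term); selecting the per-class weights $\theta(c)$ is assumed to happen outside this function:

```python
import torch
import torch.nn.functional as F

def cvae_loss(x, x_rcn, mu, logvar):
    """BCE reconstruction term plus the closed-form KL divergence between
    N(mu, sigma^2) and the unit Gaussian N(0, 1), cf. Eq. 7.

    `x_rcn` is the decoder output under class condition c, and (mu, logvar)
    come from the class-conditioned encoder.
    """
    bce = F.binary_cross_entropy(x_rcn, x, reduction="sum")
    kl = 0.5 * torch.sum(logvar.exp() + mu.pow(2) - 1.0 - logvar)
    return bce + kl  # minimized during training
```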
+
The model weights $\theta \left( c\right)$ are a function of the condition $c$ ; hence, we learn separate encoder and decoder weights for each individual class. The layer sizes are tabulated in Table I. We train the encoder and decoder layers of the CVAE on clean images with their ground-truth labels, and at inference use the class predicted by the target classifier network as the condition.
+
| Attribute | Layer | Size |
| --- | --- | --- |
| Encoder | Conv2d | Channels: (c, 32), Kernel: (4,4), stride=2, padding=1 |
| | BatchNorm2d | 32 |
| | ReLU | |
| | Conv2d | Channels: (32, 64), Kernel: (4,4), stride=2, padding=1 |
| | BatchNorm2d | 64 |
| | ReLU | |
| | Conv2d | Channels: (64, 128), Kernel: (4,4), stride=2, padding=1 |
| | BatchNorm2d | 128 |
| Mean | Linear | (1024, ${z}_{dim} = {128}$ ) |
| Variance | Linear | (1024, ${z}_{dim} = {128}$ ) |
| Project | Linear | ( ${z}_{dim} = {128}$ , 1024) |
| | Reshape | (128, 4, 4) |
| Decoder | ConvTranspose2d | Channels: (128, 64), Kernel: (4,4), stride=2, padding=1 |
| | BatchNorm2d | 64 |
| | ReLU | |
| | ConvTranspose2d | Channels: (64, 32), Kernel: (4,4), stride=2, padding=1 |
| | BatchNorm2d | 32 |
| | ReLU | |
| | ConvTranspose2d | Channels: (32, c), Kernel: (4,4), stride=2, padding=1 |
| | Sigmoid | |
+
TABLE I: CVAE Architecture Layer Sizes. $c =$ Number of Channels in the Input Image ( $c = 3$ for CIFAR-10 and $c = 1$ for MNIST).
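A hypothetical PyTorch rendering of the Table I architecture for a single class condition (per the text, one such network is kept per class, i.e., $\theta(c)$). Dimensions here assume 32x32 inputs, so the flattened encoder output is 128*4*4 = 2048 and the linear-layer sizes are adjusted accordingly; this is our assumption, as Table I lists 1024 and the exact flatten size depends on the input resolution.

```python
import torch
import torch.nn as nn

class CVAE(nn.Module):
    """Sketch of the Table I encoder/decoder for one class condition."""
    def __init__(self, c: int = 3, z_dim: int = 128, flat: int = 128 * 4 * 4):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(c, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.BatchNorm2d(128),
            nn.Flatten(),
        )
        self.fc_mu = nn.Linear(flat, z_dim)      # "Mean" row
        self.fc_logvar = nn.Linear(flat, z_dim)  # "Variance" row
        self.project = nn.Linear(z_dim, flat)    # "Project" row
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.BatchNorm2d(64), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.BatchNorm2d(32), nn.ReLU(),
            nn.ConvTranspose2d(32, c, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        h = self.project(z).view(-1, 128, 4, 4)               # "Reshape" row
        return self.decoder(h), mu, logvar
```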
+
+## Determining Reconstruction Errors
+
Let $X$ be the input image and ${y}_{\text{pred }}$ the predicted class obtained from the target classifier network. ${X}_{{rcn},{y}_{pred}}$ is the reconstructed image obtained from the trained encoder and decoder networks with the condition ${y}_{\text{pred }}$ . We define the reconstruction error, or reconstruction distance, as in Equation 8. The network architectures of the encoder and decoder layers are shown in Figure 1.
+
+$$
\operatorname{Recon}\left( {X, y}\right) = {\left\| X - {X}_{{rcn}, y}\right\| }_{2}^{2} \tag{8}
+$$
+
+Two pertinent points to note here are:
+
- For clean test examples, the reconstruction error is bound to be small, since the CVAE is trained on clean training images. As the classifier predicts the correct class for clean examples, the reconstruction error with the correct class as condition is small.
+
- Adversarial examples fool the classifier network, so the malicious predicted class ${y}_{\text{pred }}$ is passed to the CVAE together with the slightly perturbed input image. The reconstruction then tries to be close to an image of class ${y}_{\text{pred }}$ , and hence the reconstruction error is large.
+
As an example, let the clean image be a cat, and suppose its slightly perturbed version fools the classifier network into believing it is a dog. The input to the CVAE is then the slightly perturbed cat image with the class dog. Since the encoder and decoder layers are trained to output a dog image when the condition is dog, the reconstruction will try to resemble a dog; but because the input is a cat image, there will be a large reconstruction error. Hence, we use the reconstruction error as a measure to determine whether the input image is adversarial. We first train the Conditional Variational AutoEncoder (CVAE) on clean images with the ground-truth class as the condition. Examples of reconstructions for clean and adversarial examples are given in Figure 2 and Figure 3.
+
+
+
+Fig. 2: Clean and Adversarial Attacked Images to CVAE from MNIST Dataset
+
+
+
+Fig. 3: Clean and Adversarial Attacked Images to CVAE from CIFAR-10 Dataset.
+
+## Obtaining $p$ -value
+
As discussed, the reconstruction error is used as the basis for detecting adversaries. We first obtain the reconstruction distances for the clean training images; for clean test examples, the distances are expected to be similar. For adversarial examples, as the predicted class $y$ is incorrect, the reconstruction is expected to be worse, since the decoder network is trained to generate images of class $y$ and the reconstruction will resemble them. For randomly perturbed images, which mostly do not fool the classifier network, the predicted class $y$ is expected to be correct and the reconstruction distance small. Beyond this qualitative analysis, as a quantitative measure we use the permutation test from (Efron and Tibshirani 1993), which provides an uncertainty value for each input indicating whether it comes from the training distribution. Specifically, let ${X}^{\prime }$ be the input and ${X}_{1},{X}_{2},\ldots ,{X}_{N}$ the training images. We first compute the reconstruction distances $\operatorname{Recon}\left( {X, y}\right)$ for all samples with the predicted class $y = \operatorname{Classifier}(X)$ as condition. Then, using the rank of $\operatorname{Recon}\left( {{X}^{\prime },{y}^{\prime }}\right)$ in $\{ \operatorname{Recon}({X}_{1},{y}_{1}),\operatorname{Recon}({X}_{2},{y}_{2}),\ldots ,\operatorname{Recon}({X}_{N},{y}_{N})\}$ as our test statistic, we get,
+
+$$
+T = T\left( {{X}^{\prime };{X}_{1},{X}_{2},\ldots ,{X}_{N}}\right)
+$$
+
+$$
+= \mathop{\sum }\limits_{{i = 1}}^{N}I\left\lbrack {\operatorname{Recon}\left( {{X}_{i},{y}_{i}}\right) \leq \operatorname{Recon}\left( {{X}^{\prime },{y}^{\prime }}\right) }\right\rbrack \tag{9}
+$$
+
where $I\left\lbrack \cdot \right\rbrack$ is an indicator function that returns 1 if the condition inside the brackets is true and 0 otherwise. By the permutation principle, the $p$ -value for each sample is,
+
+$$
+p = \frac{1}{N + 1}\left( {\mathop{\sum }\limits_{{i = 1}}^{N}I\left\lbrack {{T}_{i} \leq T}\right\rbrack + 1}\right) \tag{10}
+$$
+
A larger $p$ -value implies that the sample is more likely to be a clean example. Let $t$ be the threshold on the $p$ -value; if ${p}_{X, y} < t$ , the sample $X$ is classified as an adversary. Algorithm 1 presents the overall procedure combining all the stages above.
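Eqs. 9-10 reduce to a standard permutation $p$ -value; a sketch, counting the training reconstruction distances at least as large as the input's (our reading of the indicator direction, so that a high reconstruction error maps to a small $p$ -value, consistent with the thresholding rule above):

```python
import numpy as np

def p_value(recon_x: float, train_recons) -> float:
    """Permutation-test p-value: the fraction of clean training
    reconstruction distances at least as large as the input's, so a
    high reconstruction error (adversary) yields a small p-value."""
    train_recons = np.asarray(train_recons)
    n = len(train_recons)
    return float(np.sum(train_recons >= recon_x) + 1) / (n + 1)
```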
+
+Algorithm 1 Adversarial Detection Algorithm
+
+---
+
1: function DETECT_ADVERSARIES $\left( {{X}_{\text{train }},{Y}_{\text{train }}, X, t}\right)$

2: CVAE $\leftarrow \operatorname{Train}\left( {{X}_{\text{train }},{Y}_{\text{train }}}\right)$

3: recon_dists $\leftarrow \operatorname{Recon}\left( {{X}_{\text{train }},{Y}_{\text{train }}}\right)$

4: Adversaries $\leftarrow \phi$

5: for $x$ in $X$ do

6: ${y}_{\text{pred }} \leftarrow$ Classifier( $x$ )

7: recon_dist_x $\leftarrow \operatorname{Recon}\left( {x,{y}_{\text{pred }}}\right)$

8: pval $\leftarrow p$ -value(recon_dist_x, recon_dists)

9: if pval $\leq t$ then

10: Adversaries.insert( $x$ )

11: return Adversaries
+
+---
+
Algorithm 1 first trains the CVAE network with clean training samples (Line 2) and computes the reconstruction distances (Line 3). Then, for each test sample, which may be clean, randomly perturbed, or adversarial, the predicted class is obtained from the target classifier network, followed by its reconstruction from the CVAE, and finally its $p$ -value for thresholding (Lines 5-8). Images with a $p$ -value less than the given threshold $t$ are classified as adversaries (Lines 9-10).
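Algorithm 1 can be sketched as follows, assuming the CVAE is already trained; `recon_fn` and `classifier_fn` are hypothetical stand-ins for the Eq. 8 reconstruction distance and the target classifier:

```python
import numpy as np

def detect_adversaries(recon_train, recon_fn, classifier_fn, samples, t):
    """Lines 3-11 of Algorithm 1: `recon_train` holds reconstruction
    distances on clean training images, `recon_fn(x, y)` computes Eq. 8
    via the CVAE, and `classifier_fn` is the target network."""
    recon_train = np.asarray(recon_train)
    n = len(recon_train)
    adversaries = []
    for x in samples:
        y_pred = classifier_fn(x)                             # line 6
        d = recon_fn(x, y_pred)                               # line 7
        pval = float(np.sum(recon_train >= d) + 1) / (n + 1)  # line 8 (Eq. 10)
        if pval <= t:                                         # lines 9-10
            adversaries.append(x)
    return adversaries
```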
+
+## Experimental Results
+
We evaluated our proposed methodology on the MNIST and CIFAR-10 datasets. All experiments were performed on a Google Colab instance with a ${0.82}\mathrm{{GHz}}$ GPU with ${12}\mathrm{{GB}}$ RAM and a dual-core ${2.3}\mathrm{{GHz}}$ CPU with ${12}\mathrm{{GB}}$ RAM. An exploratory version of the code base will be made public on GitHub.
+
+## Datasets and Models
+
Two datasets are used for the experiments in this paper, namely MNIST (LeCun, Cortes, and Burges 2010) and CIFAR-10 (Krizhevsky 2009). The MNIST dataset consists of hand-written images of the digits 0 to 9. It contains 60,000 training examples and 10,000 test examples, where each image is a ${28} \times {28}$ gray-scale image associated with a label from one of 10 classes. CIFAR-10 is broadly used for comparing image-classification methods. It also consists of 60,000 images, of which 50,000 are used for training and the remaining 10,000 for testing. Each image is a ${32} \times {32}$ colour image, i.e., consisting of 3 channels, associated with a label indicating one of 10 classes.
+
We use the state-of-the-art deep neural network image classifier ResNet18 (He et al. 2015) as the target network for the experiments. We use the pre-trained model weights available from (Idelbayev) for both the MNIST and CIFAR-10 datasets.
+
+## Performance over Grey-box attacks
+
If the attacker has access only to the model parameters of the target classifier and no information about the detection method or its model parameters, we call the setting grey-box. This is the most common attack setting in previous works, against which we evaluate the most common attacks with the standard epsilon settings used elsewhere for both datasets. For MNIST, $\epsilon$ is commonly set between 0.15 and 0.3 for the FGSM attack and to 0.1 for iterative attacks (Samangouei, Kabkab, and Chellappa 2018) (Gong, Wang, and Ku 2017) (Xu, Evans, and Qi 2017), while for CIFAR-10 it is most commonly chosen as $\frac{8}{255}$ , as in (Song et al. 2017) (Xu, Evans, and Qi 2017) (Fidel, Bitton, and Shabtai 2020). The DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016) and Carlini-Wagner (CW) (Carlini and Wagner 2017) attacks have no $\epsilon$ bound; for these two attacks we use the default parameters from (Li et al. 2020). For the ${L}_{2}$ attacks, the $\epsilon$ bound is chosen such that the attack success is similar to the ${L}_{\infty }$ counterparts, as the values used in previous works vary widely.
+
Reconstruction Error Distribution: Histograms of the reconstruction errors for the MNIST and CIFAR-10 datasets under different attacks are given in Figure 4. For adversarially attacked examples, only examples that fool the network are included in the distribution, for fair comparison. Note that the reconstruction errors for adversarial examples are higher than for normal examples, as expected. Reconstruction errors for randomly perturbed test samples are similar to those of normal examples, though slightly larger due to the error contributed by the noise.
+
+
+
+Fig. 4: Reconstruction Distances for different Grey-box attacks
+
$p$ -value Distribution: From the reconstruction error values, histograms of the $p$ -values of test samples for the MNIST and CIFAR-10 datasets are given in Figure 5. Note that for adversaries, most samples have a $p$ -value close to 0 due to their high reconstruction error, whereas for normal and randomly perturbed images the $p$ -value is nearly uniformly distributed, as expected.
+
ROC Characteristics: Using the $p$ -values, ROC curves can be plotted as shown in Figure 6. As the curves show, clean and randomly perturbed images can be separated very well from all adversarial attacks. The values of ${\epsilon }_{atk}$ were chosen such that each attack fools the target classifier for at least ${45}\%$ of the samples. The percentage of samples on which each attack was successful is shown in Table II.
+
Statistical Results and Discussions: The statistics for clean, randomly perturbed, and adversarially attacked images for the MNIST and CIFAR-10 datasets are given in Table II. The error rate is the fraction of examples misclassified by the target network. The last column (AUC) lists the area under the ROC curve. The area for adversaries is expected to be close to 1, whereas for normal and randomly perturbed images it is expected to be around 0.5.
+
It is worth noting that the obtained statistics are comparable with the state-of-the-art results as tabulated in
+
+
+
+(b) $p$ -values from CIFAR-10 dataset
+
+Fig. 5: Generated $p$ -values for different Grey-box attacks
+
| Type | Error Rate (%) MNIST | Error Rate (%) CIFAR-10 | Parameters MNIST | Parameters CIFAR-10 | AUC MNIST | AUC CIFAR-10 |
| --- | --- | --- | --- | --- | --- | --- |
| NORMAL | 2.2 | 8.92 | - | - | 0.5 | 0.5 |
| RANDOM | 2.3 | 9.41 | $\epsilon = {0.1}$ | $\epsilon = \frac{8}{255}$ | 0.52 | 0.514 |
| FGSM | 90.8 | 40.02 | $\epsilon = {0.15}$ | $\epsilon = \frac{8}{255}$ | 0.99 | 0.91 |
| FGSM-L2 | 53.3 | 34.20 | $\epsilon = {1.5}$ | $\epsilon = 1$ | 0.95 | 0.63 |
| R-FGSM | 91.3 | 41.29 | $\epsilon = \left( {{0.05},{0.1}}\right)$ | $\epsilon = \left( {\frac{4}{255},\frac{8}{255}}\right)$ | 0.99 | 0.91 |
| R-FGSM-L2 | 54.84 | 34.72 | $\epsilon = \left( {{0.05},{1.5}}\right)$ | $\epsilon = \left( {\frac{4}{255},1}\right)$ | 0.95 | 0.64 |
| PGD | 82.13 | 99.17 | $\epsilon = {0.1}, n = {12}, {\epsilon }_{step} = {0.02}$ | $\epsilon = \frac{8}{255}, n = {12}, {\epsilon }_{step} = \frac{1}{255}$ | 0.974 | 0.78 |
| CW | 100 | 100 | - | - | 0.98 | 0.86 |
| DeepFool | 97.3 | 93.89 | - | - | 0.962 | 0.75 |
TABLE II: Image Statistics for MNIST and CIFAR-10. AUC: Area Under the ROC Curve. Error Rate (%): percentage of samples misclassified or successfully attacked.
+
Table IV (given in the Appendix). Interestingly, some methods (Song et al. 2017) explicitly report comparisons with randomly perturbed images and are ineffective in distinguishing adversaries from random noise, but most other methods do not report results with random noise added to the input image. Since other methods use varied experimental settings, attack models, datasets, ${\epsilon }_{atk}$ values, and network models, exact comparisons with them are not directly meaningful. However, the results reported in Table IV are mostly similar to ours, while our method is additionally able to statistically distinguish adversaries from random noisy images.
+
In addition, since our method does not use any adversarial examples for training, it is not sensitive to the value of $\epsilon$ or to a change of attack, unlike network-based methods, which are explicitly trained with known values of $\epsilon$ and known attack types. Moreover, among distribution- and statistics-based methods, to the best of our knowledge, the predicted class from the target network has not been utilized before. Most of these methods use either the input image itself (Jha et al. 2018) (Song et al. 2017) (Xu, Evans, and Qi 2017), the final logits layer (Feinman et al. 2017) (Hendrycks and Gimpel 2016), or some intermediate layer (Li and Li 2017) (Fidel, Bitton, and Shabtai 2020) of the target architecture for inference, whereas we use both the input image and the predicted class from the target network.
+
+
+
+(b) CIFAR-10 dataset
+
+Fig. 6: ROC Curves for different Grey-box attacks
+
+## Performance over White-box attacks
+
In this case, we evaluate attacks where the attacker has knowledge of both the defense method and the target classifier network. (Metzen et al. 2017) proposed a modified PGD method that uses the gradient of the loss function of the detector network, assuming it is differentiable, along with the loss function of the target classifier to generate adversarial examples. If the attacker also has access to the model weights of the detector CVAE network, an attack can be devised to fool both the detector and the classifier network. The modified PGD can be expressed as follows:
+
+$$
+{X}_{{adv},0} = X \tag{11a}
+$$
+
$$
\begin{aligned}
{X}_{{adv}, n + 1} = {\operatorname{Clip}}_{X}^{{\epsilon }_{atk}}\Big\{ {X}_{{adv}, n} + \alpha \cdot \operatorname{sign}\big( & \left( {1 - \sigma }\right) \cdot {\Delta }_{X}{L}_{cls}\left( {{X}_{{adv}, n},{y}_{\text{target }}}\right) \\
& + \sigma \cdot {\Delta }_{X}{L}_{\text{det }}\left( {{X}_{{adv}, n},{y}_{\text{target }}}\right) \big) \Big\}
\end{aligned} \tag{11b}
$$
+
where ${y}_{\text{target }}$ is the target class and ${L}_{\text{det }}$ is the reconstruction distance from Equation 8. It is worth noting that our proposed detector CVAE is differentiable only in the targeted attack setting. For a non-targeted attack, the condition required by the CVAE is obtained from the target classifier output, which is discrete, so the differentiation operation is not valid. For testing, we set the target randomly to any class other than the true class.
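A sketch of the targeted attack of Eq. 11 in PyTorch. Negating both loss terms so the step descends toward the target class is our reading of the targeted setting, and `recon_dist` is a hypothetical stand-in for the differentiable ${L}_{\text{det }}$ of Eq. 8:

```python
import torch
import torch.nn.functional as F

def whitebox_pgd(model, recon_dist, x, y_target, eps_atk, alpha, sigma, n_iter):
    """Modified PGD of Eq. 11: the sign step blends the classifier loss with
    the CVAE reconstruction distance L_det, weighted by sigma.
    `recon_dist(x, y)` must be differentiable in x."""
    x_adv = x.clone().detach()
    for _ in range(n_iter):
        x_adv.requires_grad_(True)
        # targeted attack: descend the classifier loss toward y_target while
        # also descending the detector's reconstruction distance
        l_cls = -F.cross_entropy(model(x_adv), y_target)
        l_det = -recon_dist(x_adv, y_target)
        grad, = torch.autograd.grad((1 - sigma) * l_cls + sigma * l_det, x_adv)
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps_atk), x + eps_atk).clamp(0, 1)
    return x_adv
```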
+
Effect of $\sigma$ : To observe the effect of varying $\sigma$ , we keep $\epsilon$ fixed at 0.1. As observed in Figure 7, a larger $\sigma$ places more weight on fooling the detector, i.e., on achieving a smaller reconstruction distance. Hence, as expected, the attack becomes less successful with larger values of $\sigma$ (Figure 8) and the AUC values decrease (Figure 7), meaning the detector is fooled more effectively. For the CIFAR-10 dataset, the detection model does get fooled at higher $\sigma$ values; however, the error rate is then significantly low, implying that only a few samples are successfully attacked at such settings.
+
+
+
Fig. 7: ROC Curves for different values of $\sigma$ . More area under the curve implies better detectability of that attack. As $\sigma$ increases, the focus shifts to fooling the detector, and detection becomes more difficult.
+
Effect of $\epsilon$ : With increasing $\epsilon$ , the attack has more room to act, so it becomes more successful and more images are attacked, as observed in Figure 10. The corresponding trend of the AUC values is shown in Figure 9; the initial dip is expected, as the detector tends to be fooled with a larger $\epsilon$ bound. Together, these trends indicate that robustly attacking both the detector and the target classifier for a significantly higher number of images requires a significantly larger perturbation on both datasets.
+
+
+
Fig. 8: Success rate for different values of $\sigma$ . A larger $\sigma$ means more focus on fooling the detector, hence the overall attack success rate decreases with increasing $\sigma$ .
+
+
+
Fig. 9: ROC Curves for different values of $\epsilon$ . With a larger $\epsilon$ , the attack has more space to act and becomes less detectable on average.
+
+## Related Works
+
There has been active research on adversaries and ways to avoid them; these methods are primarily statistical or machine learning (neural network) based, and aim at systematic identification and rectification of images into the desired target classes.
+
Statistical Methods: Statistical methods exploit certain characteristics of the input images and try to identify adversaries through statistical inference. Early works use PCA, the softmax distribution of the final-layer logits (Hendrycks and Gimpel 2016), and reconstruction from the logits (Li and Li 2017) to identify adversaries. Carlini and Wagner (Carlini and Wagner 2017) showed that these methods are not robust against strong attacks, and that most work only on specific datasets and do not generalize to others, as the same statistical thresholds do not transfer.
+
+
+
Fig. 10: Success rate for different values of $\epsilon$ . A larger $\epsilon$ means more space available for the attack, hence the success rate increases.
+
Network-based Methods: Network-based methods train a neural network specifically to identify adversaries. Binary classification networks (Metzen et al. 2017) (Gong, Wang, and Ku 2017) are trained to output a confidence score on the presence of adversaries. Some methods add a separate classification node to the target network itself (Hosseini et al. 2017), trained in the same way with an augmented dataset. (Carrara et al. 2018) uses feature distances of intermediate-layer values in the target network to train an LSTM that classifies adversaries. A major challenge for these methods is that the classification networks are differentiable, so if the attacker has access to the model weights, a specifically targeted attack can be devised, as suggested by Carlini and Wagner (Carlini and Wagner 2017), to fool both the target network and the adversary classifier. Moreover, these methods are highly sensitive to the perturbation threshold set for the adversarial attack and fail to identify attacks beyond the preset threshold.
+
Distribution-based Methods: Distribution-based methods estimate a probability distribution from clean examples and evaluate the probability that an input example falls within it. These include using a kernel density estimate on the logits from the final softmax layer (Feinman et al. 2017). (Gao et al. 2021) used maximum mean discrepancy (MMD) on the distribution of input examples to classify adversaries by their probability of occurrence in the input distribution. PixelDefend (Song et al. 2017) uses a PixelCNN to obtain a bits-per-dimension (BPD) score for the input image. (Xu, Evans, and Qi 2017) uses the difference in final logit vectors between original and squeezed images to build a distribution used for inference. (Jha et al. 2018) compares dimensionality-reduction techniques for obtaining low-level representations of input images and uses them for Bayesian inference to detect adversaries.
+
Other special methods include the use of SHAP signatures (Fidel, Bitton, and Shabtai 2020), which provide explanations of where the classifier network is focusing, as input for detecting adversaries.
+
+A detailed comparative study with all these existing approaches is summarized through Table IV in the Appendix.
+
+## Comparison with State-of-the-Art using Generative Networks
+
Finally, we compare our work with three earlier works (Meng and Chen 2017) (Hwang et al. 2019) (Samangouei, Kabkab, and Chellappa 2018) that use generative networks for the detection and purification of adversaries. We compare on the MNIST dataset, which is common to all three works (Table III). Our results are typically the best for all attacks, or off by a short margin from the best; for the strongest attack, our performance is much better. This shows that our method is more effective while not confusing random perturbations with adversaries. More details are given in the Appendix.
+
| Type | MagNet | PuVAE | DefenseGAN | CVAE (Ours) |
| --- | --- | --- | --- | --- |
| RANDOM | 0.61 | 0.72 | 0.52 | 0.52 |
| FGSM | 0.98 | 0.96 | 0.77 | 0.99 |
| FGSM-L2 | 0.84 | 0.60 | 0.60 | 0.95 |
| R-FGSM | 0.989 | 0.97 | 0.78 | 0.987 |
| R-FGSM-L2 | 0.86 | 0.61 | 0.62 | 0.95 |
| PGD | 0.98 | 0.95 | 0.65 | 0.97 |
| CW | 0.983 | 0.92 | 0.94 | 0.986 |
| DeepFool | 0.86 | 0.86 | 0.92 | 0.96 |
| Strongest | 0.84 | 0.60 | 0.60 | 0.95 |
+
TABLE III: Comparison of ROC AUC statistics with other methods. Higher AUC implies better detectability; an AUC of 0.5 implies no detection. For RANDOM, a value close to 0.5 is better, while for adversaries a higher value is better.
+
+## Conclusion
+
In this work, we propose the use of a Conditional Variational AutoEncoder (CVAE) for detecting adversarial attacks. We utilize statistics-based methods to verify that adversarial attacks usually lie outside the training distribution. We demonstrate how our method can specifically differentiate between random perturbations and targeted attacks, which is necessary for applications where raw camera images may contain random noise that should not be confused with an adversarial attack. Furthermore, we demonstrate that fooling both the detector and the target classifier requires a huge targeted perturbation. Our framework presents a practical, effective, and robust adversary-detection approach compared to existing state-of-the-art techniques, which fail to differentiate noisy data from adversaries. As future work, it would be interesting to use Variational AutoEncoders to automatically purify adversarially attacked images.
+
+## References
+
+[Akhtar and Mian 2018] Akhtar, N., and Mian, A. 2018. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access 6:14410-14430.
+
+[Carlini and Wagner 2017] Carlini, N., and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In IEEE symposium on security and privacy (S&P), 39-57.
+
+[Carrara et al. 2018] Carrara, F.; Becarelli, R.; Caldelli, R.; Falchi, F.; and Amato, G. 2018. Adversarial examples detection in features distance spaces. In Proceedings of the European Conference on Computer Vision (ECCV) Workshops, 313-327.
+
+[Efron and Tibshirani 1993] Efron, B., and Tibshirani, R. J. 1993. An Introduction to the Bootstrap. Number 57 in Monographs on Statistics and Applied Probability. Boca Raton, Florida, USA: Chapman & Hall/CRC.
+
+[Esteva et al. 2019] Esteva, A.; Robicquet, A.; Ramsundar, B.; Kuleshov, V.; DePristo, M.; Chou, K.; Cui, C.; Corrado, G.; Thrun, S.; and Dean, J. 2019. A guide to deep learning in healthcare. Nature medicine 25(1):24-29.
+
+[Feinman et al. 2017] Feinman, R.; Curtin, R. R.; Shintre, S.; and Gardner, A. B. 2017. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410.
+
+[Fidel, Bitton, and Shabtai 2020] Fidel, G.; Bitton, R.; and Shabtai, A. 2020. When explainability meets adversarial learning: Detecting adversarial examples using shap signatures. In International Joint Conference on Neural Networks (IJCNN), 1-8.
+
+[Gao et al. 2021] Gao, R.; Liu, F.; Zhang, J.; Han, B.; Liu, T.; Niu, G.; and Sugiyama, M. 2021. Maximum mean discrepancy test is aware of adversarial attacks. In International Conference on Machine Learning, 3564-3575.
+
+[Gong, Wang, and Ku 2017] Gong, Z.; Wang, W.; and Ku, W.-S. 2017. Adversarial and clean data are not twins. arXiv preprint arXiv:1704.04960.
+
+[Goodfellow, Shlens, and Szegedy 2014] Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
+
+[He et al. 2015] He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385.
+
+[Hendrycks and Gimpel 2016] Hendrycks, D., and Gimpel, K. 2016. Early methods for detecting adversarial images. arXiv preprint arXiv:1608.00530.
+
+[Hosseini et al. 2017] Hosseini, H.; Chen, Y.; Kannan, S.; Zhang, B.; and Poovendran, R. 2017. Blocking transferability of adversarial examples in black-box learning systems. arXiv preprint arXiv:1703.04318.
+
+[Hwang et al. 2019] Hwang, U.; Park, J.; Jang, H.; Yoon, S.; and Cho, N. I. 2019. Puvae: A variational autoencoder to purify adversarial examples.
+
+[Idelbayev] Idelbayev, Y. Proper ResNet implementation for CIFAR10/CIFAR100 in PyTorch.
+
+[Jha et al. 2018] Jha, S.; Jang, U.; Jha, S.; and Jalaian, B. 2018. Detecting adversarial examples using data manifolds. In IEEE Military Communications Conference (MILCOM), 547-552.
+
+[Krizhevsky 2009] Krizhevsky, A. 2009. Learning multiple layers of features from tiny images. Technical report, University of Toronto.
+
+[Kurakin, Goodfellow, and Bengio 2017] Kurakin, A.; Goodfellow, I.; and Bengio, S. 2017. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236.
+
+[LeCun, Cortes, and Burges 2010] LeCun, Y.; Cortes, C.; and Burges, C. 2010. Mnist handwritten digit database. ATT Labs [Online]. Available: http://yann.lecun.com/exdb/mnist 2.
+
+[Li and Li 2017] Li, X., and Li, F. 2017. Adversarial examples detection in deep networks with convolutional filter statistics. In Proceedings of the IEEE International Conference on Computer Vision, 5764-5772.
+
+[Li et al. 2020] Li, Y.; Jin, W.; Xu, H.; and Tang, J. 2020. Deeprobust: A pytorch library for adversarial attacks and defenses.
+
+[Meng and Chen 2017] Meng, D., and Chen, H. 2017. Magnet: a two-pronged defense against adversarial examples. In Proceedings of the ACM SIGSAC Conference on Computer and Communications Security (CCS).
+
+[Metzen et al. 2017] Metzen, J. H.; Genewein, T.; Fischer, V.; and Bischoff, B. 2017. On detecting adversarial perturbations. arXiv preprint arXiv:1702.04267.
+
+[Moosavi-Dezfooli, Fawzi, and Frossard 2016] Moosavi-Dezfooli, S.-M.; Fawzi, A.; and Frossard, P. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
+
+[Rao and Frtunikj 2018] Rao, Q., and Frtunikj, J. 2018. Deep learning for self-driving cars: Chances and challenges. In Proceedings of the 1st International Workshop on Software Engineering for AI in Autonomous Systems, 35-38.
+
+[Samangouei, Kabkab, and Chellappa 2018] Samangouei, P.; Kabkab, M.; and Chellappa, R. 2018. Defense-gan: Protecting classifiers against adversarial attacks using generative models. In International Conference on Learning Representations (ICLR).
+
+[Song et al. 2017] Song, Y.; Kim, T.; Nowozin, S.; Ermon, S.; and Kushman, N. 2017. Pixeldefend: Leveraging generative models to understand and defend against adversarial examples. arXiv preprint arXiv:1710.10766.
+
+[Szegedy et al. 2013] Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
+
+[Xu, Evans, and Qi 2017] Xu, W.; Evans, D.; and Qi, Y. 2017. Feature squeezing: Detecting adversarial examples in deep neural networks. arXiv preprint arXiv:1704.01155.
+
+[Yang et al. 2020] Yang, P.; Baracchi, D.; Ni, R.; Zhao, Y.; Argenti, F.; and Piva, A. 2020. A survey of deep learning-based source image forensics. Journal of Imaging 6(3):9.
+
+## Appendix
+
+## Use of Simple AutoEncoder (AE)
+
+MagNet (Meng and Chen 2017) uses an AutoEncoder (AE) for detecting adversaries. We compare its results with our proposed CVAE architecture in the same experimental setting and report the comparison as AUC values of the observed ROC curves. Although MagNet's claim covers both detection and purification (when detection fails) of adversaries, its detection framework targets larger adversarial perturbations that cannot be purified; for smaller perturbations, MagNet purifies the adversaries with a separate AutoEncoder model. We therefore compare only the detection part with our proposed method. Using the same architecture as proposed, our results are better for the strongest attack while not getting confused by random perturbations of similar magnitude. ROC curves obtained for different adversaries with MagNet are given in Figure 11.
+
+
+
+Fig. 11: ROC curve of different adversaries for MagNet
+
+## Use of Variational AutoEncoder (VAE)
+
+PuVAE (Hwang et al. 2019) uses a Variational AutoEncoder (VAE) for purifying adversaries. We compare its results with our proposed CVAE architecture in the same experimental setting. PuVAE does not propose using the VAE for detecting adversaries; however, if their model were used for detection, it would be based on the reconstruction distance, so that is the comparison we make with our proposed CVAE architecture. ROC curves for different adversaries are given in Figure 12.
+
+## Use of Generative Adversarial Network (GAN)
+
+Defense-GAN (Samangouei, Kabkab, and Chellappa 2018) uses a Generative Adversarial Network (GAN) for detecting adversaries. We used $L = {100}$ and $R = {10}$ to obtain results under our experimental setting. We compare the results with our proposed CVAE architecture in the same setting and report the comparison as AUC values of the observed ROC curves. Although the paper's main claim concerns purification of the adversaries, we make the relevant comparison for the detection part with our proposed method. We used the
+
+
+
+Fig. 12: ROC curve of different adversaries for PuVAE
+
+same architecture as mentioned in (Samangouei, Kabkab, and Chellappa 2018) and obtained results comparable to their claim for the MNIST dataset on FGSM adversaries. As this method is slow to run, we randomly chose 1000 of the 10000 test samples for evaluation. The detection performance for the other attacks is considerably low. Defense-GAN is also quite slow, since it must solve an optimization problem for each image to obtain its reconstruction. The average computation time required by Defense-GAN is 2.8s per image, while our method takes 0.17s per image with a batch size of 16; hence, our method is roughly 16 times faster than Defense-GAN. Refer to Figure 13 for the ROC curves for Defense-GAN.
+
+
+
+Fig. 13: ROC curve of different adversaries for Defense-GAN
+
+| References | Concepts Established | Datasets Used | Attack Types | Primary Results | Major Shortcomings | Advantages of our Proposed Work |
|---|---|---|---|---|---|---|
| (Hendrycks and Gimpel 2016) | PCA whitening on the distribution of the final softmax layer | MNIST, CIFAR-10, Tiny-ImageNet | FGSM ($l_{\infty}$), BIM ($l_{\infty}$) | AUC ROC for CIFAR-10: FGSM ($l_{\infty}$) = 0.928, BIM ($l_{\infty}$) = 0.912 | Not tested against strong attacks; not tested to differentiate random noisy images | Ability to differentiate from randomly perturbed images; evaluation against strong attacks and the target classifier |
| (Li and Li 2017) | Cascade classifier based on PCA statistics of intermediate convolution layers | ILSVRC-2012 | L-BFGS (similar to CW) | AUC of ROC: 0.908 | Not tested against strong attacks, standard datasets, or random noises | Ability to differentiate from randomly perturbed images; evaluation against stronger and wider attacks |
| (Metzen et al. 2017) | Binary classifier network with intermediate-layer features as input | CIFAR-10 | FGSM ($l_2$, $l_{\infty}$), BIM ($l_2$, $l_{\infty}$), DeepFool, Dynamic BIM (similar to S-BIM) | Highest detection accuracy among different layers: FGSM = 0.97, BIM ($l_2$) = 0.8, BIM ($l_{\infty}$) = 0.82, DeepFool ($l_2$) = 0.72, DeepFool ($l_{\infty}$) = 0.75, Dynamic-BIM = 0.8 (average) | Needs training with adversarial examples, hence does not generalize well to other attacks; not evaluated for random noisy images | No use of adversaries for training; ability to differentiate from randomly perturbed images; more robust to dynamic adversaries; better AUC results |
| (Gong, Wang, and Ku 2017) | Binary classifier network trained on the input image | MNIST, CIFAR-10, SVHN | FGSM ($l_{\infty}$), TGSM ($l_{\infty}$), JSMA | Average accuracy of 0.9914 (MNIST), 0.8279 (CIFAR-10), 0.9378 (SVHN) | Trained with generated adversaries, hence does not generalize well to other adversaries; sensitive to changes in $\epsilon$ | No use of adversaries for training; ability to differentiate from randomly perturbed images |
| (Carrara et al. 2018) | LSTM on distance features at each layer of the target classifier network | ILSVRC dataset | FGSM, BIM, PGD, L-BFGS ($l_{\infty}$) | ROC AUC: FGSM = 0.996, BIM = 0.997, L-BFGS = 0.854, PGD = 0.997 | Not evaluated for differentiation from random noisy images, nor against special attacks with access to network weights | No use of adversaries for training; ability to differentiate from randomly perturbed images; evaluation on $l_2$ attacks |
| (Feinman et al. 2017) | Bayesian density estimate on the final softmax layer | MNIST, CIFAR-10, SVHN | FGSM, BIM, JSMA, CW ($l_{\infty}$) | CIFAR-10 ROC-AUC: FGSM = 0.9057, BIM = 0.81, JSMA = 0.92, CW = 0.92 | No explicit test for random noisy images | Ability to differentiate from randomly perturbed images; better AUC values |
| (Song et al. 2017) | Using PixelDefend to get the reconstruction error on the input image | Fashion-MNIST, CIFAR-10 | FGSM, BIM, DeepFool, CW ($l_{\infty}$) | ROC curves given, AUC not given | Cannot differentiate random noisy images from adversaries | Ability to differentiate between randomly perturbed and clean images |
| (Xu, Evans, and Qi 2017) | Feature squeezing and comparison | MNIST, CIFAR-10, ImageNet | FGSM, BIM, DeepFool, JSMA, CW | Overall detection rate: MNIST = 0.982, CIFAR-10 = 0.845, ImageNet = 0.859 | No test for randomly perturbed images | Ability to differentiate from randomly perturbed images; better AUC values |
| (Jha et al. 2018) | Using Bayesian inference from manifolds on the input image | MNIST, CIFAR-10 | FGSM, BIM | No quantitative results reported | No comparison possible without quantitative results | Ability to differentiate from randomly perturbed images; evaluation against strong attacks |
| (Fidel, Bitton, and Shabtai 2020) | Using SHAP signatures of the input image | MNIST, CIFAR-10 | FGSM, BIM, DeepFool, etc. | Average ROC-AUC: CIFAR-10 = 0.966, MNIST = 0.967 | Not tested for random noisy images | No use of adversaries for training; ability to differentiate from randomly perturbed images |
+
+TABLE IV: Summary of Related Works and Comparative Study with these Existing Methods
+
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Ex1yemaQgU/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Ex1yemaQgU/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..3f02e5fc895327592e9e3f40a0a40881a01868d6
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Ex1yemaQgU/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,453 @@
+§ DETECTING ADVERSARIES, YET FALTERING TO NOISE? LEVERAGING CONDITIONAL VARIATIONAL AUTOENCODERS FOR ADVERSARY DETECTION IN THE PRESENCE OF NOISY IMAGES
+
+§ ABSTRACT
+
+With the rapid advancement and increased use of deep learning models in image identification, security becomes a major concern for their deployment in safety-critical systems. Since the accuracy and robustness of deep learning models are primarily attributed to the purity of the training samples, deep learning architectures are often susceptible to adversarial attacks. Adversarial attacks are often obtained by making subtle perturbations to normal images, which are mostly imperceptible to humans but can seriously confuse state-of-the-art machine learning models. What is so special about these slight, intelligently crafted perturbations over normal images that leads to catastrophic misclassifications by deep neural networks? Using statistical hypothesis testing, we find that Conditional Variational AutoEncoders (CVAE) are surprisingly good at detecting imperceptible image perturbations. In this paper, we show how CVAEs can be effectively used to detect adversarial attacks on image classification networks. We demonstrate our results on the MNIST and CIFAR-10 datasets and show that our method gives performance comparable to state-of-the-art methods in detecting adversaries while not getting confused with noisy images, where most of the existing methods falter.
+
+Index Terms: Deep Neural Networks, Adversarial Attacks, Image Classification, Variational Autoencoders, Noisy Images
+
+§ INTRODUCTION
+
+The phenomenal success of deep learning models in image identification and object detection has led to their wider adoption in diverse domains, ranging from safety-critical systems such as automotive and avionics (Rao and Frtunikj 2018), to healthcare applications like medical imaging, robot-assisted surgery and genomics (Esteva et al. 2019), to robotics and image forensics (Yang et al. 2020). The performance of these deep learning architectures is often dictated by the volume of correctly labelled data used during their training phases. Recent works (Szegedy et al. 2013) (Goodfellow, Shlens, and Szegedy 2014) have shown that small and carefully chosen modifications (often in terms of noise) to the input data of a neural network classifier can cause the model to give incorrect labels. This weakness of neural networks enables adversarial attacks on the input image: perturbations that are imperceptible to humans yet able to convince the neural network to produce completely wrong results, and that too with very high confidence. Adversarial attacks may therefore pose a serious threat to deploying deep learning models in real-world safety-critical applications. It is thus imperative to devise efficient methods to thwart such attacks.
+
+Many recent works have presented effective ways in which adversarial attacks can be detected or avoided. Adversarial attacks can be classified into whitebox and blackbox attacks. Whitebox attacks (Akhtar and Mian 2018) assume access to the weights and architecture of the neural network used for classification, and are thereby specifically targeted to fool that network. Hence, they are more accurate than blackbox attacks (Akhtar and Mian 2018), which do not assume access to the model parameters. Methods for detection of adversarial attacks can be broadly categorized as (i) statistical methods, (ii) network-based methods, and (iii) distribution-based methods. Statistical methods (Hendrycks and Gimpel 2016) (Li and Li 2017) focus on exploiting certain characteristics of the input images or of the final logistic-unit layer of the classifier network, and try to identify adversaries through statistical inference. A drawback of such methods, as pointed out by (Carlini and Wagner 2017), is that the derived statistics may be dataset specific: the same techniques do not generalize across other datasets and also fail against strong attacks like the CW attack. Network-based methods (Metzen et al. 2017) (Gong, Wang, and Ku 2017) specifically train a binary classification neural network to identify adversaries. These methods are restricted since they do not generalize well to unknown attacks on which the networks were not trained; they are also sensitive to the perturbation magnitude, such that a small change in its value can make the detection unsuccessful. Moreover, whitebox attacks can be designed, as shown by (Carlini and Wagner 2017), which fool both the detection network and the classifier network. Distribution-based methods (Feinman et al. 2017) (Gao et al. 2021) (Song et al. 2017) (Xu, Evans, and Qi 2017) (Jha et al. 2018) aim at estimating the probability distribution of the clean examples and computing the probability of the input example to quantify how well it falls within that distribution. However, some of these methods do not guarantee a robust separation of randomly perturbed and adversarially perturbed images. Hence, there is a high chance that all these methods confuse random noise in the image with adversaries.
+
+Copyright © 2021, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+To overcome this drawback, so that the learned models are robust with respect to both adversarial perturbations and sensitivity to random noise, we propose the use of a Conditional Variational AutoEncoder (CVAE) trained over a clean image set. At inference time, we empirically establish that an adversarial input falls within a low-probability region of the clean examples of the class predicted by the target classifier network. It is important to note that this method uses both the input image and the predicted class to decide whether the input is an adversary, as opposed to some distribution-based methods which use only the distribution of the input images. On the contrary, random perturbations activate the target classifier network in such a way that the predicted output class matches the actual class of the input image, and hence the input falls within a high-probability region. Thus, we empirically show that our method does not confuse random noise with adversarial noise. Moreover, we show that our method is robust towards special attacks which have access to the network weights of both the CVAE and the target classifier, where many network-based methods falter. Further, we show that to eventually fool our method, an attacker needs perturbations large enough to become visually perceptible to the human eye. Experimental results over the MNIST and CIFAR-10 datasets demonstrate the working of our proposal. In particular, the primary contributions made by our work are as follows.
+
+(a) We propose a framework based on CVAE to detect the possibility of adversarial attacks.
+
+(b) We leverage distribution based methods to effectively differentiate between randomly perturbed and adversarially perturbed images.
+
+(c) We devise techniques to robustly detect specially targeted BIM-attacks (Metzen et al. 2017) using our proposed framework.
+
+To the best of our knowledge, this is the first work that leverages the Variational AutoEncoder architecture for detecting adversaries while aptly differentiating noise from adversaries, to effectively safeguard learned models against adversarial attacks.
+
+§ ADVERSARIAL ATTACK MODELS AND METHODS
+
+For a test example $X$, an attacking method tries to find a perturbation ${\Delta X}$ such that ${\left\| \Delta X\right\| }_{k} \leq {\epsilon }_{atk}$, where ${\epsilon }_{atk}$ is the perturbation threshold and $k$ is the appropriate order, generally selected as 2 or $\infty$; the perturbed image is then ${X}_{adv} = X + {\Delta X}$. Here, each pixel in the image is represented by an $\langle \mathrm{R},\mathrm{G},\mathrm{B}\rangle$ tuple, where $\mathrm{R},\mathrm{G},\mathrm{B} \in \left\lbrack {0,1}\right\rbrack$. In this paper, we consider only white-box attacks, i.e., attack methods which have access to the weights of the target classifier model. However, we believe that our method should work even better for black-box attacks, as they need more perturbation to succeed and hence should be more easily detected by our framework. For generating the attacks, we use the library by (Li et al. 2020).
+
+§ RANDOM PERTURBATION (RANDOM)
+
+Random perturbations are simply unbiased random values, ranging between $- {\epsilon }_{atk}$ and ${\epsilon }_{atk}$, added to each pixel. Formally, the randomly perturbed image is given by,
+
+$$
+{X}_{\text{ rand }} = X + \mathcal{U}\left( {-{\epsilon }_{atk},{\epsilon }_{atk}}\right) \tag{1}
+$$
+
+where $\mathcal{U}\left( {a,b}\right)$ denotes a continuous uniform distribution over the range $\left\lbrack {a,b}\right\rbrack$.
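As a minimal numpy sketch of Equation 1 (the final clip back to the valid pixel range [0, 1] is our assumption, not part of the equation):

```python
import numpy as np

def random_perturb(X, eps_atk, seed=0):
    """Equation 1: add unbiased uniform noise in [-eps_atk, eps_atk]
    to every pixel, then clip back to the valid [0, 1] pixel range."""
    rng = np.random.default_rng(seed)
    noise = rng.uniform(-eps_atk, eps_atk, size=X.shape)
    return np.clip(X + noise, 0.0, 1.0)
```

Because the noise is unbiased rather than aligned with any loss gradient, such images rarely flip the classifier's prediction, which the detection framework later exploits.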
+
+§ FAST GRADIENT SIGN METHOD (FGSM)
+
+Earlier work by (Goodfellow, Shlens, and Szegedy 2014) introduced the generation of malicious biased perturbations at each pixel of the input image in the direction of the loss gradient ${\nabla }_{X}L\left( {X,y}\right)$, where $L\left( {X,y}\right)$ is the loss function with which the target classifier model was trained. Formally, the adversarial examples with ${l}_{\infty }$ norm bound ${\epsilon }_{atk}$ are computed by,
+
+$$
+{X}_{adv} = X + {\epsilon }_{atk} \cdot \operatorname{sign}\left( {{\nabla }_{X}L\left( {X,y}\right) }\right) \tag{2}
+$$
+
+FGSM perturbations under an ${l}_{2}$-norm attack bound are calculated as,
+
+$$
+{X}_{adv} = X + {\epsilon }_{atk} \cdot \frac{{\nabla }_{X}L\left( {X,y}\right) }{{\left\| {\nabla }_{X}L\left( X,y\right) \right\| }_{2}} \tag{3}
+$$
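As an illustrative sketch of Equations 2 and 3 on a toy logistic-regression classifier whose input gradient is available in closed form (the weights `w`, `b` are assumptions for illustration, not the paper's target network):

```python
import numpy as np

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

def loss_grad(X, y, w, b):
    """Closed-form input gradient of binary cross-entropy for the toy
    logistic model p = sigmoid(w.X + b): dL/dX = (p - y) * w."""
    return (sigmoid(w @ X + b) - y) * w

def fgsm(X, y, w, b, eps_atk, norm="linf"):
    """Equation 2 (l-infinity) or Equation 3 (l2) FGSM perturbation."""
    g = loss_grad(X, y, w, b)
    if norm == "linf":
        return X + eps_atk * np.sign(g)
    return X + eps_atk * g / np.linalg.norm(g)
```

A deep network would supply the gradient via autograd instead; the single biased step in the gradient direction is the same.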
+
+§ PROJECTED GRADIENT DESCENT (PGD)
+
+Earlier work by (Kurakin, Goodfellow, and Bengio 2017) proposes a simple variant of the FGSM method: applying it multiple times with a step size $\alpha$ smaller than ${\epsilon }_{atk}$. However, as we need the overall perturbation after all iterations to remain within the ${\epsilon }_{atk}$-ball of $X$, we clip the modified $X$ at each step to the ${\epsilon }_{atk}$-ball in the ${l}_{\infty }$ norm.
+
+$$
+{X}_{{adv},0} = X, \tag{4a}
+$$
+
+$$
+{X}_{{adv},n + 1} = {\operatorname{Clip}}_{X}^{{\epsilon }_{atk}}\left\{ {{X}_{{adv},n} + \alpha \cdot \operatorname{sign}\left( {{\nabla }_{X}L\left( {{X}_{{adv},n},y}\right) }\right) }\right\} \tag{4b}
+$$
+
+Given $\alpha$, we take the number of iterations $n$ to be $\left\lfloor {\frac{2{\epsilon }_{atk}}{\alpha } + 2}\right\rfloor$. This attacking method has also been named the Basic Iterative Method (BIM) in some works.
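The iteration of Equations 4a-4b can be sketched with the same kind of toy logistic-regression classifier used for illustration above (`w`, `b` and the closed-form gradient are assumptions, not the paper's target network):

```python
import numpy as np

def pgd(X, y, w, b, eps_atk, alpha):
    """Equations 4a-4b: repeated small FGSM steps of size alpha, each
    followed by projection back into the l-infinity ball of radius
    eps_atk around the clean input X (the Clip operator). The number
    of iterations floor(2 * eps_atk / alpha + 2) follows the text."""
    sigmoid = lambda s: 1.0 / (1.0 + np.exp(-s))
    n_iter = int(np.floor(2.0 * eps_atk / alpha + 2.0))
    X_adv = X.copy()                                  # Equation 4a
    for _ in range(n_iter):                           # Equation 4b
        g = (sigmoid(w @ X_adv + b) - y) * w          # toy model's input gradient
        X_adv = X_adv + alpha * np.sign(g)
        X_adv = np.clip(X_adv, X - eps_atk, X + eps_atk)
    return X_adv
```

The projection step is what keeps the accumulated perturbation within the attack budget even though the steps themselves sum to more than ${\epsilon }_{atk}$.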
+
+§ CARLINI-WAGNER (CW) METHOD
+
+(Carlini and Wagner 2017) proposed a more sophisticated way of generating adversarial examples by solving the optimization objective shown in Equation 5. The value of $c$ is chosen by an efficient binary search. We use the same parameters as set in (Li et al. 2020) to mount the attack.
+
+$$
+{X}_{adv} = {\operatorname{Clip}}_{X}^{{\epsilon }_{atk}}\left\{ {\mathop{\min }\limits_{\epsilon }\parallel \epsilon {\parallel }_{2} + c \cdot f\left( {x + \epsilon }\right) }\right\} \tag{5}
+$$
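The full attack performs a binary search over $c$ and uses a specific misclassification term $f$; as a simplified sketch of the trade-off in Equation 5, plain gradient descent on $\|\delta\|_2^2 + c \cdot f(x+\delta)$ for a toy linear classifier (with an assumed hinge-style $f$, not Carlini and Wagner's exact $f$) looks like:

```python
import numpy as np

def cw_like(x, w, b, c=5.0, lr=0.05, steps=300):
    """Gradient descent on ||delta||_2^2 + c * f(x + delta), where
    f(x) = max(0, w.x + b) penalises a positive score of a toy linear
    classifier: the quadratic term keeps the perturbation small while
    the f term pushes the point across the decision boundary."""
    delta = np.zeros_like(x)
    for _ in range(steps):
        score = w @ (x + delta) + b
        grad_f = w if score > 0 else np.zeros_like(w)
        delta = delta - lr * (2.0 * delta + c * grad_f)
    return x + delta
```

In the real attack, $c$ is then tuned by binary search so that the smallest successful perturbation is kept.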
+
+§ DEEPFOOL METHOD
+
+DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016) is an even more sophisticated and efficient way of generating adversaries. It works by moving the input iteratively towards the decision boundary so as to obtain an adversary with minimum perturbation. We use the default parameters set in (Li et al. 2020) to mount the attack.
+
+§ PROPOSED FRAMEWORK LEVERAGING CVAE
+
+In this section, we present how Conditional Variational AutoEncoders (CVAE), trained over a dataset of clean images, are capable of comprehending the inherent differentiating attributes between adversaries and noisy data, and of separating the two using their probability distribution map.
+
+§ CONDITIONAL VARIATIONAL AUTOENCODERS (CVAE)
+
+A Variational AutoEncoder (VAE) is a generative model with two components, an encoder and a decoder. The input is first passed through the encoder to get the latent vector for the image. The latent vector is passed through the decoder to get a reconstructed input of the same size as the image. The encoder and decoder layers are trained with two objectives. The first is to get the reconstructed image as close to the input image as possible, forcing the latent vector to preserve most of the features of the input image and thus to learn a compact representation of the image. The second objective is to bring the distribution of the latent vectors over all images close to a desired prior distribution. Hence, after the variational autoencoder is fully trained, the decoder layer can be used to generate examples from latent vectors randomly sampled from the prior distribution with which the encoder and decoder layers were trained.
+
+
+Fig. 1: CVAE Model Architecture
+
+A Conditional VAE is a variation of the VAE in which, along with the input image, the class of the image is also passed to the encoder layer, and likewise appended to the latent vector before the decoder layer (refer to Figure 1). This helps the Conditional VAE generate specific examples of a class. The loss function for the CVAE is defined in Equation 6. The first term is the reconstruction loss, which signifies how closely the input $X$ can be reconstructed given the latent vector $z$ and the condition $c$, the output class from the target classifier network. The second term is the KL-divergence ${\mathcal{D}}_{KL}$ between the desired distribution $P\left( {z \mid c}\right)$ and the current distribution $Q\left( {z \mid X,c}\right)$ of $z$ given the input image $X$ and the condition $c$.
+
+$$
+L\left( {X,c}\right) = \mathbb{E}\left\lbrack {\log P\left( {X \mid z,c}\right) }\right\rbrack - {\mathcal{D}}_{KL}\left\lbrack {Q\left( {z \mid X,c}\right) \parallel P\left( {z \mid c}\right) }\right\rbrack \tag{6}
+$$
+
+§ TRAINING CVAE MODELS
+
+For modeling $\log P\left( {X \mid z,c}\right)$, we use the decoder neural network to output the reconstructed image ${X}_{rcn}$, where we utilize the condition $c$, the output class of the image, to select the set of parameters $\theta \left( c\right)$ for the neural network. We compute the Binary Cross Entropy (BCE) loss of the reconstructed image ${X}_{rcn}$ against the input image $X$ to model $-\log P\left( {X \mid z,c}\right)$. Similarly, we model $Q\left( {z \mid X,c}\right)$ with the encoder neural network, which takes the image $X$ as input, utilizes the condition $c$ to select model parameters $\theta \left( c\right)$, and outputs the mean $\mu$ and the log of the variance $\log {\sigma }^{2}$, assuming a Gaussian form for the conditional distribution. We set the target distribution $P\left( {z \mid c}\right)$ to the unit Gaussian distribution $N\left( {0,1}\right)$ with mean 0 and variance 1. The resultant loss function is as follows,
+
+$$
+L\left( {X,c}\right) = \operatorname{BCE}\left\lbrack {X,\operatorname{Decoder}\left( {X,\theta \left( c\right) }\right) }\right\rbrack + \frac{1}{2}\left\lbrack {{\operatorname{Encoder}}_{\sigma }^{2}\left( {X,\theta \left( c\right) }\right) + {\operatorname{Encoder}}_{\mu }^{2}\left( {X,\theta \left( c\right) }\right) - 1 - \log \left( {{\operatorname{Encoder}}_{\sigma }^{2}\left( {X,\theta \left( c\right) }\right) }\right) }\right\rbrack \tag{7}
+$$
+
+The model architecture weights $\theta \left( c\right)$ are a function of the condition $c$; hence, we learn separate encoder and decoder weights for all the classes, i.e., a different encoder and decoder per class. The layer sizes are tabulated in Table I. We train the encoder and decoder layers of the CVAE on clean images with their ground-truth labels, and during inference use the class predicted by the target classifier network as the condition.
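Under the Gaussian-posterior assumption above, the per-sample loss can be sketched in numpy, with `x_rcn`, `mu` and `log_var` standing in for the decoder output and the two encoder heads (a sketch of the loss computation only, not the paper's training code; the KL term enters with a plus sign because BCE is a negative log-likelihood):

```python
import numpy as np

def cvae_loss(x, x_rcn, mu, log_var, eps=1e-7):
    """Pixelwise binary cross-entropy between input and reconstruction,
    plus KL( N(mu, sigma^2) || N(0, 1) ) for the latent posterior."""
    x_rcn = np.clip(x_rcn, eps, 1.0 - eps)  # avoid log(0)
    bce = -np.sum(x * np.log(x_rcn) + (1.0 - x) * np.log(1.0 - x_rcn))
    sigma2 = np.exp(log_var)
    kl = 0.5 * np.sum(sigma2 + mu ** 2 - 1.0 - log_var)
    return bce + kl
```

The KL term vanishes exactly when the encoder outputs $\mu = 0$ and $\log \sigma^2 = 0$, i.e., when the posterior already matches the unit-Gaussian prior.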
+
+| Attribute | Layer | Size |
+|---|---|---|
+| Encoder | Conv2d | Channels: (c, 32), Kernel: (4,4), stride=2, padding=1 |
+| | BatchNorm2d | 32 |
+| | ReLU | |
+| | Conv2d | Channels: (32, 64), Kernel: (4,4), stride=2, padding=1 |
+| | BatchNorm2d | 64 |
+| | ReLU | |
+| | Conv2d | Channels: (64, 128), Kernel: (4,4), stride=2, padding=1 |
+| | BatchNorm2d | 128 |
+| Mean | Linear | (1024, ${z}_{dim} = {128}$) |
+| Variance | Linear | (1024, ${z}_{dim} = {128}$) |
+| Project | Linear | (${z}_{dim} = {128}$, 1024) |
+| | Reshape | (128, 4, 4) |
+| Decoder | ConvTranspose2d | Channels: (128, 64), Kernel: (4,4), stride=2, padding=1 |
+| | BatchNorm2d | 64 |
+| | ReLU | |
+| | ConvTranspose2d | Channels: (64, 32), Kernel: (4,4), stride=2, padding=1 |
+| | BatchNorm2d | 32 |
+| | ReLU | |
+| | ConvTranspose2d | Channels: (32, c), Kernel: (4,4), stride=2, padding=1 |
+| | Sigmoid | |
+
+TABLE I: CVAE Architecture Layer Sizes. $c =$ Number of Channels in the Input Image ($c = 3$ for CIFAR-10 and $c = 1$ for MNIST).
+
+§ DETERMINING RECONSTRUCTION ERRORS
+
+Let $X$ be the input image and ${y}_{\text{ pred }}$ the predicted class obtained from the target classifier network. ${X}_{{rcn},{y}_{pred}}$ is the reconstructed image obtained from the trained encoder and decoder networks with the condition ${y}_{\text{ pred }}$. We define the reconstruction error, or reconstruction distance, as in Equation 8. The network architectures for the encoder and decoder layers are given in Figure 1.
+
+$$
+\operatorname{Recon}\left( {X,y}\right) = {\left\| X - {X}_{{rcn},y}\right\| }_{2}^{2} \tag{8}
+$$
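A sketch of the reconstruction distance, reduced to a scalar by summing squared pixel differences, together with the rank of an input's distance among the training-set distances on which the permutation test later builds:

```python
import numpy as np

def recon_error(X, X_rcn):
    """Equation 8, summed over pixels to give a scalar distance."""
    return float(np.sum((X - X_rcn) ** 2))

def rank_statistic(recon_input, recon_train):
    """Number of training-set reconstruction distances that do not
    exceed the input's distance; a large rank means the input
    reconstructs unusually badly for its predicted class."""
    return int(np.sum(np.asarray(recon_train) <= recon_input))
```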
+
+Two pertinent points to note here are:
+
+ * For clean test examples, the reconstruction error is bound to be small, since the CVAE is trained on clean training images. As the classifier gives the correct class for clean examples, the reconstruction error with the correct class as condition is small.
+
+ * For adversarial examples, which fool the classifier network, the malicious output class ${y}_{\text{ pred }}$ of the classifier network is passed to the CVAE along with the slightly perturbed input image; the reconstructed image then tries to be close to an image of class ${y}_{\text{ pred }}$, and hence the reconstruction error is large.
+
+As an example, let the clean image be a cat, and suppose its slightly perturbed version fools the classifier network into believing it is a dog. The input to the CVAE will then be the slightly perturbed cat image with the class dog. As the encoder and decoder layers are trained to output a dog image when the input class is dog, the reconstructed image will try to resemble a dog; but since the input is a cat image, there will be a large reconstruction error. Hence, we use the reconstruction error as a measure to determine whether the input image is adversarial. We first train the Conditional Variational AutoEncoder (CVAE) on clean images with the ground-truth class as the condition. Examples of reconstructions for clean and adversarial examples are given in Figure 2 and Figure 3.
+
+Fig. 2: Clean and Adversarial Attacked Images to CVAE from MNIST Dataset
+
+Fig. 3: Clean and Adversarial Attacked Images to CVAE from CIFAR-10 Dataset.
+
+§ OBTAINING THE $p$ -VALUE
+
+As already discussed, the reconstruction error is used as the basis for detecting adversaries. We first obtain the reconstruction distances for the training dataset of clean images; for clean test images, the distances are expected to be similar to those of the training images. For adversarial examples, on the other hand, the predicted class $y$ is incorrect, so the reconstruction is expected to be worse: it will be more similar to an image of class $y$ , since the decoder network is trained to generate such images. For random images, which mostly do not fool the classifier network, the predicted class $y$ is expected to be correct, and hence the reconstruction distance is expected to be small. Beyond this qualitative analysis, as a quantitative measure we use the permutation test from (Efron and Tibshirani 1993), which provides an uncertainty value for each input about whether it comes from the training distribution. Specifically, let ${X}^{\prime }$ be the input and ${X}_{1},{X}_{2},\ldots ,{X}_{N}$ the training images. We first compute the reconstruction distances $\operatorname{Recon}\left( {X,y}\right)$ for all samples with the condition set to the predicted class $y = \text{Classifier}(X)$ . Then, using the rank of $\operatorname{Recon}\left( {{X}^{\prime },{y}^{\prime }}\right)$ in $\{ \operatorname{Recon}({X}_{1},{y}_{1}),\operatorname{Recon}({X}_{2},{y}_{2}),\ldots ,\operatorname{Recon}({X}_{N},{y}_{N})\}$ as our test statistic, we get
+
+$$
+T = T\left( {{X}^{\prime };{X}_{1},{X}_{2},\ldots ,{X}_{N}}\right) = \mathop{\sum }\limits_{{i = 1}}^{N}I\left\lbrack {\operatorname{Recon}\left( {{X}_{i},{y}_{i}}\right) \geq \operatorname{Recon}\left( {{X}^{\prime },{y}^{\prime }}\right) }\right\rbrack \tag{9}
+$$
+
+where $I\left\lbrack \cdot \right\rbrack$ is an indicator function that returns 1 if the condition inside the brackets is true and 0 otherwise. By the permutation principle, the $p$ -value for each sample is
+
+$$
+p = \frac{1}{N + 1}\left( {\mathop{\sum }\limits_{{i = 1}}^{N}I\left\lbrack {{T}_{i} \leq T}\right\rbrack + 1}\right) \tag{10}
+$$
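As a minimal sketch (assuming no ties among the distances), the $p$-value can be computed directly as the smoothed fraction of clean training distances at least as large as the sample's, so that large reconstruction errors map to $p$-values near 0, as described below:

```python
import numpy as np

def p_value(recon_x, recon_train):
    """Permutation-test p-value of a sample's reconstruction distance
    against the clean training distances (cf. Equations 9-10, no ties):
    the plus-one-smoothed fraction of training distances that are at
    least as large as the sample's."""
    recon_train = np.asarray(recon_train)
    n = recon_train.size
    return (np.sum(recon_train >= recon_x) + 1) / (n + 1)

# Hypothetical clean training distances.
train_dists = [0.10, 0.12, 0.11, 0.13, 0.09, 0.14, 0.10, 0.12, 0.11, 0.13]
p_adv = p_value(5.0, train_dists)     # far right tail -> small p
p_clean = p_value(0.11, train_dists)  # mid-distribution -> moderate p
assert p_adv < 0.1 < p_clean
```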
+
+A larger $p$ -value implies that the sample is more likely to be a clean example. Let $t$ be the threshold on the obtained $p$ -value; if ${p}_{X,y} < t$ , the sample $X$ is classified as an adversary. Algorithm 1 presents the overall procedure combining all of the above stages.
+
+Algorithm 1 Adversarial Detection Algorithm
+
+1: function DETECT_ADVERSARIES $\left( {{X}_{\text{train}},{Y}_{\text{train}},X,t}\right)$
+
+2: cvae $\leftarrow \operatorname{Train}\left( {{X}_{\text{train}},{Y}_{\text{train}}}\right)$
+
+3: recon_dists $\leftarrow \operatorname{Recon}\left( {{X}_{\text{train}},{Y}_{\text{train}}}\right)$
+
+4: Adversaries $\leftarrow \varnothing$
+
+5: for $x$ in $X$ do
+
+6: ${y}_{\text{pred}} \leftarrow$ Classifier $(x)$
+
+7: recon_dist_x $\leftarrow \operatorname{Recon}\left( {x,{y}_{\text{pred}}}\right)$
+
+8: pval $\leftarrow p\text{-value}\left( {\text{recon\_dist\_x},\text{recon\_dists}}\right)$
+
+9: if pval $\leq t$ then
+
+10: Adversaries.insert( $x$ )
+
+11: return Adversaries
+
+Algorithm 1 first trains the CVAE network with clean training samples (Line 2) and computes the reconstruction distances (Line 3). Then, for each test sample, which may be clean, randomly perturbed, or adversarial, the predicted class is first obtained from the target classifier network, followed by its reconstructed image from the CVAE, and finally its $p$ -value used for thresholding (Lines 5-8). Images with $p$ -value less than the given threshold ( $t$ ) are classified as adversaries (Lines 9-10).
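The loop of Algorithm 1 can be sketched as follows (a toy, self-contained rendering: `classifier` and `recon` are hypothetical stand-ins for the trained target network and CVAE reconstruction distance, not the paper's actual models):

```python
import numpy as np

def detect_adversaries(X_test, recon_train_dists, classifier, recon, t):
    """Sketch of Algorithm 1 with the CVAE already trained: flag any
    sample whose permutation-test p-value falls at or below t."""
    n = len(recon_train_dists)
    adversaries = []
    for x in X_test:
        y_pred = classifier(x)                 # Line 6: predicted class
        d = recon(x, y_pred)                   # Line 7: reconstruction distance
        pval = (np.sum(recon_train_dists >= d) + 1) / (n + 1)  # Line 8
        if pval <= t:                          # Lines 9-10
            adversaries.append(x)
    return adversaries

# Toy demo: clean samples reconstruct well; the last "sample" does not.
recon_train_dists = np.full(99, 1.0)
classifier = lambda x: 0          # hypothetical target network (always class 0)
recon = lambda x, y: x            # distance encoded directly in the toy sample
flagged = detect_adversaries([0.5, 0.9, 50.0], recon_train_dists,
                             classifier, recon, t=0.05)
assert flagged == [50.0]
```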
+
+§ EXPERIMENTAL RESULTS
+
+We evaluated our proposed methodology on the MNIST and CIFAR-10 datasets. All experiments were performed in Google Colab on a GPU with ${0.82}\mathrm{{GHz}}$ frequency and ${12}\mathrm{{GB}}$ RAM, and a dual-core CPU with ${2.3}\mathrm{{GHz}}$ frequency and ${12}\mathrm{{GB}}$ RAM. An exploratory version of the code base will be made public on GitHub.
+
+§ DATASETS AND MODELS
+
+Two datasets are used for the experiments in this paper, namely MNIST (LeCun, Cortes, and Burges 2010) and CIFAR-10 (Krizhevsky 2009). The MNIST dataset consists of hand-written images of the digits 0 to 9. It contains 60,000 training examples and 10,000 test examples, where each image is a ${28} \times {28}$ gray-scale image associated with a label from 1 of the 10 classes. CIFAR-10 is broadly used for comparing image classification methods. It also consists of 60,000 images, of which 50,000 are used for training and the remaining 10,000 for testing. Each image is a ${32} \times {32}$ colour image, i.e., consisting of 3 channels, associated with a label indicating 1 of 10 classes.
+
+We use a state-of-the-art deep neural network image classifier, ResNet18 (He et al. 2015), as the target network for the experiments. We use the pre-trained model weights available from (Idelbayev) for both the MNIST and CIFAR-10 datasets.
+
+§ PERFORMANCE OVER GREY-BOX ATTACKS
+
+If the attacker has access only to the model parameters of the target classifier and no information about the detection method or its model parameters, we call the attack setting Grey-box. This is the most common attack setting in previous works, against which we evaluate the most common attacks with the standard epsilon settings used in other works for both datasets. For MNIST, the value of $\epsilon$ is commonly chosen between 0.15-0.3 for the FGSM attack and 0.1 for iterative attacks (Samangouei, Kabkab, and Chellappa 2018; Gong, Wang, and Ku 2017; Xu, Evans, and Qi 2017). For CIFAR-10, the value of $\epsilon$ is most commonly chosen to be $\frac{8}{255}$ as in (Song et al. 2017; Xu, Evans, and Qi 2017; Fidel, Bitton, and Shabtai 2020). For the DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016) and Carlini-Wagner (CW) (Carlini and Wagner 2017) attacks, there is no $\epsilon$ bound; the standard default parameters from (Li et al. 2020) are used for these two attacks. For ${L}_{2}$ attacks, the $\epsilon$ bound is chosen such that the success of the attack is similar to that of its ${L}_{\infty }$ counterpart, as the values used vary widely across previous works.
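To make the $\epsilon$ conventions concrete, here is a minimal sketch of an $L_\infty$ FGSM step on a toy logistic model (NumPy; the weights, input, and gradient are illustrative stand-ins, not the paper's ResNet18 setup):

```python
import numpy as np

def fgsm(x, grad_loss, eps):
    """L-infinity FGSM step: x_adv = x + eps * sign(grad_x L), with eps
    the perturbation budget (e.g. 0.15 for MNIST, 8/255 for CIFAR-10),
    clipped back to the valid pixel range [0, 1]."""
    return np.clip(x + eps * np.sign(grad_loss(x)), 0.0, 1.0)

# Toy target: p(y=1 | x) = sigmoid(w.x); for true label y = 1, the
# cross-entropy gradient w.r.t. x is (p - y) * w.
w = np.array([2.0, -3.0, 1.0])
x = np.array([0.6, 0.1, 0.8])
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
grad = lambda x: (sigmoid(w @ x) - 1.0) * w

x_adv = fgsm(x, grad, eps=0.15)
assert np.max(np.abs(x_adv - x)) <= 0.15 + 1e-12   # perturbation stays in the eps ball
assert sigmoid(w @ x_adv) < sigmoid(w @ x)         # confidence in the true class drops
```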
+
+Reconstruction Error Distribution: The histograms of reconstruction errors for the MNIST and CIFAR-10 datasets under different attacks are given in Figure 4. For adversarially attacked examples, only examples that fool the network are included in the distribution, for fair comparison. Note that the reconstruction errors for adversarial examples are higher than for normal examples, as expected. Also, the reconstruction errors for randomly perturbed test samples are similar to those of normal examples, though slightly larger, due to the reconstruction error contributed by the noise.
+
+Fig. 4: Reconstruction Distances for different Grey-box attacks
+
+$p$ -value Distribution: From the reconstruction error values, the histograms of $p$ -values of test samples for the MNIST and CIFAR-10 datasets are given in Figure 5. Note that for adversaries, most samples have a $p$ -value close to 0 due to their high reconstruction error, whereas for normal and randomly perturbed images the $p$ -value is nearly uniformly distributed, as expected.
+
+ROC Characteristics: Using the $p$ -values, ROC curves are plotted as shown in Figure 6. As the curves show, clean and randomly perturbed samples can be separated very well from all adversarial attacks. The values of ${\epsilon }_{atk}$ were chosen such that each attack fools the target network for at least ${45}\%$ of samples. The percentage of samples on which each attack was successful is shown in Table 1.
+
+Statistical Results and Discussions: The statistics for clean, randomly perturbed, and adversarially attacked images for the MNIST and CIFAR-10 datasets are given in Table II. The error rate is the fraction of examples misclassified by the target network. The last column (AUC) lists the area under the ROC curve. For adversaries, the AUC is expected to be close to 1, whereas for normal and randomly perturbed images it is expected to be around 0.5.
+
+It is worth noting that the obtained statistics are comparable with the state-of-the-art results as tabulated in
+
+Fig. 5: Generated $p$ -values for different Grey-box attacks; panel (b) shows $p$ -values from the CIFAR-10 dataset
+
+| Type | Error Rate (%) MNIST | Error Rate (%) CIFAR-10 | Parameters MNIST | Parameters CIFAR-10 | AUC MNIST | AUC CIFAR-10 |
+| --- | --- | --- | --- | --- | --- | --- |
+| NORMAL | 2.2 | 8.92 | - | - | 0.5 | 0.5 |
+| RANDOM | 2.3 | 9.41 | $\epsilon = {0.1}$ | $\epsilon = \frac{8}{255}$ | 0.52 | 0.514 |
+| FGSM | 90.8 | 40.02 | $\epsilon = {0.15}$ | $\epsilon = \frac{8}{255}$ | 0.99 | 0.91 |
+| FGSM-L2 | 53.3 | 34.20 | $\epsilon = {1.5}$ | $\epsilon = 1$ | 0.95 | 0.63 |
+| R-FGSM | 91.3 | 41.29 | $\epsilon = \left( {0.05},{0.1}\right)$ | $\epsilon = \left( \frac{4}{255},\frac{8}{255}\right)$ | 0.99 | 0.91 |
+| R-FGSM-L2 | 54.84 | 34.72 | $\epsilon = \left( {0.05},{1.5}\right)$ | $\epsilon = \left( \frac{4}{255},1\right)$ | 0.95 | 0.64 |
+| PGD | 82.13 | 99.17 | $\epsilon = {0.1}$ , $n = {12}$ , ${\epsilon }_{step} = {0.02}$ | $\epsilon = \frac{8}{255}$ , $n = {12}$ , ${\epsilon }_{step} = \frac{1}{255}$ | 0.974 | 0.78 |
+| CW | 100 | 100 | - | - | 0.98 | 0.86 |
+| DeepFool | 97.3 | 93.89 | - | - | 0.962 | 0.75 |
+
+TABLE II: Image statistics for MNIST and CIFAR-10. AUC: Area Under the ROC Curve. Error Rate (%): percentage of samples misclassified or successfully attacked.
+
+Table IV (given in the Appendix). Interestingly, some methods (Song et al. 2017) explicitly report comparison results with randomly perturbed images and are ineffective in distinguishing adversaries from random noise, but most other methods do not report results with random noise added to the input image. Since other methods use varied experimental settings, attack models, datasets, ${\epsilon }_{atk}$ values, and network models, exact comparisons with them are not directly meaningful. However, the results reported in Table IV are mostly similar to ours, while our method is additionally able to statistically differentiate adversaries from randomly noisy images.
+
+In addition, since our method does not use any adversarial examples for training, it is not sensitive to changes in the value of $\epsilon$ or to changes in the attack, unlike network-based methods, which are explicitly trained with known values of $\epsilon$ and known attack types. Moreover, among distribution- and statistics-based methods, to the best of our knowledge, the predicted class from the target network has not been utilized before. Most of these methods use either the input image itself (Jha et al. 2018; Song et al. 2017; Xu, Evans, and Qi 2017), the final logits layer (Feinman et al. 2017; Hendrycks and Gimpel 2016), or some intermediate layer (Li and Li 2017; Fidel, Bitton, and Shabtai 2020) of the target architecture for inference, while we use the input image together with the predicted class from the target network.
+
+Fig. 6: ROC Curves for different Grey-box attacks; panel (b) shows the CIFAR-10 dataset
+
+§ PERFORMANCE OVER WHITE-BOX ATTACKS
+
+In this case, we evaluate attacks where the attacker has information about both the defense method and the target classifier network. (Metzen et al. 2017) proposed a modified PGD method that uses the gradient of the loss function of the detector network, assuming it is differentiable, along with the loss function of the target classifier network to generate adversarial examples. If the attacker also has access to the model weights of the detector CVAE network, an attack can be devised to fool both the detector and the classifier network. The modified PGD can be expressed as follows:
+
+$$
+{X}_{{adv},0} = X \tag{11a}
+$$
+
+$$
+{X}_{{adv},n + 1} = \operatorname{Clip}_{X}^{{\epsilon }_{atk}}\left\{ {X}_{{adv},n} + \alpha \cdot \operatorname{sign}\left( \left( 1 - \sigma \right) {\nabla }_{X}{L}_{cls}\left( {{X}_{{adv},n},{y}_{\text{target}}}\right) + \sigma {\nabla }_{X}{L}_{\text{det}}\left( {{X}_{{adv},n},{y}_{\text{target}}}\right) \right) \right\} \tag{11b}
+$$
+
+where ${y}_{\text{target}}$ is the target class and ${L}_{\text{det}}$ is the reconstruction distance from Equation 8. It is worth noting that our proposed detector CVAE is differentiable only in the targeted attack setting. For a non-targeted attack, since the condition required by the CVAE is obtained from the (discrete) target classifier output, the differentiation operation is not valid. For testing, we set the target randomly to any class other than the true class.
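A minimal sketch of the modified PGD of Equation 11 on a toy problem (NumPy; the closed-form stand-in gradients and all names are hypothetical, chosen purely so the blended step is easy to trace):

```python
import numpy as np

def modified_pgd(x0, grad_cls, grad_det, eps, alpha, sigma, steps):
    """Two-loss PGD in the spirit of Equation 11: each step takes a
    sign-gradient blend of the classifier and detector terms, weighted
    (1 - sigma) and sigma, then clips back into the eps-infinity ball
    around the clean input x0 (the Clip_X^{eps_atk} operator)."""
    x = x0.copy()
    for _ in range(steps):
        g = (1.0 - sigma) * grad_cls(x) + sigma * grad_det(x)
        x = np.clip(x + alpha * np.sign(g), x0 - eps, x0 + eps)
    return x

# Hypothetical stand-in gradients: each points toward a region the
# attacker wants to reach (illustrative, not real loss gradients).
target_cls = np.array([1.0, -1.0])  # direction that fools the classifier
target_det = np.array([0.5, 0.5])   # direction with low reconstruction distance
grad_cls = lambda x: target_cls - x
grad_det = lambda x: target_det - x

x0 = np.zeros(2)
x_adv = modified_pgd(x0, grad_cls, grad_det, eps=0.1, alpha=0.02, sigma=0.5, steps=12)
assert np.max(np.abs(x_adv - x0)) <= 0.1 + 1e-12  # stays inside the eps ball
```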
+
+Effect of $\sigma$ : To observe the effect of varying $\sigma$ , we keep the value of $\epsilon$ fixed at 0.1. As observed in Figure 7, a larger value of $\sigma$ places more weight on fooling the detector, i.e., on achieving a smaller reconstruction distance. Hence, as expected, the attack becomes less successful with larger values of $\sigma$ (Figure 8) and attains lower AUC values (Figure 7), i.e., it fools the detector more effectively. For the CIFAR-10 dataset, the detection model does get fooled for higher $\sigma$ values, but the error rate is significantly low at those values, implying that only a few samples are successfully attacked with such a setting.
+
+Fig. 7: ROC curves for different values of $\sigma$ . More area under the curve implies better detectability of that attack. As $\sigma$ increases and the focus shifts to fooling the detector, detection becomes more difficult.
+
+Effect of $\epsilon$ : With larger values of $\epsilon$ , more space is available for the attack to act, so the attack becomes more successful and more images are attacked, as observed in Figure 10. The corresponding trend for the AUC is shown in Figure 9. The initial dip in the value is expected, as the detector tends to be fooled by a larger $\epsilon$ bound. From both trends, it can be noted that robustly attacking both the detector and the target classifier for a significantly higher number of images requires a significantly larger perturbation for both datasets.
+
+Fig. 8: Success rate for different values of $\sigma$ . A larger $\sigma$ places more focus on fooling the detector; hence the success rate of the attack decreases with increasing $\sigma$ .
+
+Fig. 9: ROC curves for different values of $\epsilon$ . With a larger $\epsilon$ , more space is available for the attack, and the attack becomes less detectable on average.
+
+§ RELATED WORKS
+
+There has been active research on adversaries and on ways to avoid them. These methods are primarily statistical or machine learning (neural network) based, and aim at the systematic identification and rectification of images into the desired target classes.
+
+Statistical Methods: Statistical methods focus on exploiting certain characteristics of the input images and try to identify adversaries through statistical inference. Early works use PCA, the softmax distribution of the final-layer logits (Hendrycks and Gimpel 2016), and reconstruction from logits (Li and Li 2017) to identify adversaries. Carlini and Wagner (Carlini and Wagner 2017) showed that these methods are not robust against strong attacks and that most of them work on specific datasets but do not generalize to others, as the same statistical thresholds do not transfer.
+
+Fig. 10: Success rate for different values of $\epsilon$ . A larger $\epsilon$ means more space is available for the attack; hence the success rate increases.
+
+Network based Methods: Network-based methods aim at specifically training a neural network to identify adversaries. Binary classification networks (Metzen et al. 2017; Gong, Wang, and Ku 2017) are trained to output a confidence score for the presence of adversaries. Some methods propose adding a separate classification node to the target network itself (Hosseini et al. 2017); training is done in the same way with the augmented dataset. (Carrara et al. 2018) uses feature distance spaces of intermediate-layer values in the target network to train an LSTM network for classifying adversaries. A major challenge faced by these methods is that the classification networks are differentiable; thus, if the attacker has access to the model weights, a specifically targeted attack can be devised, as suggested by Carlini and Wagner (Carlini and Wagner 2017), to fool both the target network and the adversary classifier. Moreover, these methods are highly sensitive to the perturbation threshold set for the adversarial attack and fail to identify attacks beyond the preset threshold.
+
+Distribution based Methods: Distribution-based methods aim at estimating the probability distribution of the clean examples and computing the probability that an input example falls within that distribution. These include using a Kernel Density Estimate on the logits from the final softmax layer (Feinman et al. 2017). (Gao et al. 2021) used Maximum Mean Discrepancy (MMD) from the distribution of the input examples to classify adversaries based on their probability of occurrence in the input distribution. PixelDefend (Song et al. 2017) uses a PixelCNN to obtain a Bits Per Dimension (BPD) score for the input image. (Xu, Evans, and Qi 2017) uses the difference in the final logit vector between original and squeezed images to build a distribution used for inference. (Jha et al. 2018) compares different dimensionality-reduction techniques for obtaining low-level representations of input images, which are used in Bayesian inference to detect adversaries.
+
+Other special methods include the use of SHAP signatures (Fidel, Bitton, and Shabtai 2020), which provide explanations of where the classifier network focuses, as input for detecting adversaries.
+
+A detailed comparative study with all these existing approaches is summarized through Table IV in the Appendix.
+
+§ COMPARISON WITH STATE-OF-THE-ART USING GENERATIVE NETWORKS
+
+Finally, we compare our work with three earlier works (Meng and Chen 2017; Hwang et al. 2019; Samangouei, Kabkab, and Chellappa 2018) that use generative networks for the detection and purification of adversaries. We make our comparison on the MNIST dataset, which is used in all three works (Table III). Our results are typically the best for all attacks, or off by a short margin from the best. For the strongest attack, our performance is considerably better. This shows that our method is more effective while not confusing random perturbations with adversaries. More details are given in the Appendix.
+
+| Type | MagNet | PuVAE | DefenseGAN | CVAE (Ours) |
+| --- | --- | --- | --- | --- |
+| RANDOM | 0.61 | 0.72 | 0.52 | 0.52 |
+| FGSM | 0.98 | 0.96 | 0.77 | 0.99 |
+| FGSM-L2 | 0.84 | 0.60 | 0.60 | 0.95 |
+| R-FGSM | 0.989 | 0.97 | 0.78 | 0.987 |
+| R-FGSM-L2 | 0.86 | 0.61 | 0.62 | 0.95 |
+| PGD | 0.98 | 0.95 | 0.65 | 0.97 |
+| CW | 0.983 | 0.92 | 0.94 | 0.986 |
+| DeepFool | 0.86 | 0.86 | 0.92 | 0.96 |
+| Strongest | 0.84 | 0.60 | 0.60 | 0.95 |
+
+TABLE III: Comparison of ROC AUC statistics with other methods (all values are AUC). Higher AUC implies better detectability; an AUC of 0.5 implies no detection. For RANDOM, a value close to 0.5 is better, while for adversaries a higher value is better.
+
+§ CONCLUSION
+
+In this work, we propose the use of a Conditional Variational AutoEncoder (CVAE) for detecting adversarial attacks. We use statistics-based methods to verify that adversarial attacks usually lie outside the training distribution. We demonstrate how our method can specifically differentiate between random perturbations and targeted attacks, which is necessary for applications where the raw camera image may contain random noise that should not be confused with an adversarial attack. Furthermore, we demonstrate that a very large targeted perturbation is needed to fool both the detector and the target classifier. Our framework presents a practical, effective, and robust adversary detection approach in comparison to existing state-of-the-art techniques, which fail to differentiate noisy data from adversaries. As future work, it would be interesting to explore the use of Variational AutoEncoders for automatically purifying adversarially attacked images.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HbasA9ysA3/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HbasA9ysA3/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..fe205fca8f18097c983be148a08c2dfb00e85740
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HbasA9ysA3/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,419 @@
+## Optimal Robust Classification Trees
+
+## Abstract
+
+In many high-stakes domains, the data used to drive machine learning algorithms is noisy (due to, e.g., the sensitive nature of the data being collected, limited resources available to validate the data, etc.). This may cause a distribution shift to occur, where the distribution of the training data does not match the distribution of the testing data. In the presence of distribution shifts, any trained model can perform poorly in the testing phase. In this paper, motivated by the need for interpretability and robustness, we propose a mixed-integer optimization formulation and a tailored solution algorithm for learning optimal classification trees that are robust to adversarial perturbations in the data features. We evaluate the performance of our approach on numerous publicly available datasets, and compare it to a regularized, non-robust optimal tree. We show an increase of up to 14.16% in worst-case accuracy and an increase of up to 4.72% in average-case accuracy across several datasets and distribution shifts from using our robust solution in comparison to the non-robust solution.
+
+## 1 Introduction
+
+Machine learning techniques are increasingly being used in high-stakes domains to assist humans in making important decisions. Within these applications, black-box models that require post-hoc explanation should be avoided, as decisions made from these models may have a profound impact (Rudin 2019). That is, we need inherently interpretable models whose decisions can be simply understood and verified. One of the most interpretable models is the classification tree, which is easily visualized and does not require extensive knowledge to use. Classification trees are a widely used model that takes the form of a binary tree. At each branching node, a test based on the attributes of the given data sample is made, which dictates the next node visited. Then, at an assignment node, a particular label is assigned to the data sample (Breiman et al. 2017).
+
+However, as with many other machine learning models, classification trees are susceptible to distribution shifts. That is, the distribution of the training data and the testing data may be different, causing poor performance in deployment (Quiñonero-Candela et al. 2009). In high-stakes domains where there is a need for interpretability, there must also be robustness against distribution shifts to ensure high-quality solutions under any realization of the training data.
+
+### 1.1 Background and Related Work
+
+Traditionally, classification trees are built using heuristic approaches since the problem of building optimal classification trees is $\mathcal{{NP}}$ -hard (Breiman et al. 2017). But in settings where the quality of solutions is important, heuristic approaches may yield suboptimal solutions that are unacceptable for use in applications. Thus, to ensure a high-quality decision tree, mathematical optimization techniques, like mixed-integer optimization (MIO), have been developed for building optimal trees. Namely, Bertsimas and Dunn (2017) were the first to use MIO to build optimal decision trees. To combat the long run times for making optimal decision trees on large data sets, Verwer and Zhang (2019) create a binary linear programming formulation that has a run time independent of the amount of training samples. Aghaei, Gómez, and Vayanos (2021) build a strong, "flow-based" MIO formulation that greatly improves on solving times in comparison to other state-of-the-art optimal classification tree algorithms. MIO approaches for constructing decision trees have also allowed for several extensions. For example, Mišić (2020) formulates the problem of creating tree ensembles as a MIO problem, Aghaei, Azizi, and Vayanos (2019) create optimal and fair decision trees using MIO, and Jo et al. (2021) use MIO to build optimal prescriptive trees from observational data.
+
+To account for the problem of distribution shifts, there exists both non-MIO and MIO approaches. One type of non-MIO method up-weights training samples that match the test set distribution and down-weights the training samples that differ from the test set (Shimodaira 2000; Bickel, Brückner, and Scheffer 2007). These methods usually define distribution shift as a biased sampling of training data, where assigning weights to training samples diminishes the effects of adversarial examples.
+
+Another way to define a distribution shift is as an adversarial perturbation of the trained data that makes it differ from the test data. Motivated by this viewpoint, there have been several optimization-based methods to deal with distribution shifts. One of these methods is distributionally robust optimization, which combats the effects of distribution shifts by performing well under an adversarial distribution of samples. Both Kuhn et al. (2019) and Sinha et al. (2020) in particular provide distributionally robust approaches to building machine learning models that perform well under an adversarial distribution of the training data, where the adversarial distribution is in some Wasserstein distance from the nominal distribution of the data.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+Distributionally robust optimization requires an assumption on the distribution of the data available, which may not be a reasonable assumption to make. In the case where such an assumption cannot be reasonably made, robust optimization provides a framework to generate solutions that perform well in the worst-case perturbation, where the perturbation comes from a set of values without a probability distribution assumption imposed (Ben-Tal, El Ghaoui, and Nemirovski 2009). Many common machine learning models have been formulated as robust optimization problems to deal with uncertainty in data. For example, robust optimization has been used for creating robust support vector machines (Shivaswamy, Bhattacharyya, and Smola 2006; Bertsimas et al. 2019). Robust optimization has also been used to create artificial neural networks that are robust against adversarial perturbations of the data (Shaham, Yamada, and Negahban 2018).
+
+In a similar spirit to these previous works, we propose using robust optimization to create a classification tree robust to distribution shifts. In an adversarial setting, we must decide the tree structure before observing the perturbation of the data. The perturbation of a sample, once unveiled, reveals whether the given tree structure correctly classifies the realization of the sample. That is, the classification of samples in a decision tree is dependent on both the tree structure and the realization of the uncertain parameter. Decision variables that indicate the correct classification of samples are called second-stage decisions, which can be modeled through two-stage robust optimization (Ben-Tal et al. 2004).
+
+There have been several recent methods for robust classification trees. Namely, Vos and Verwer (2021) have a local search algorithm for decision trees robust against adversarial examples derived from a user-defined threat model. Bertsimas et al. (2019) have a robust optimization formulation for robust trees where the uncertainty set is modeled by restricting the norm of the perturbation parameter. Unlike our proposed method, Bertsimas et al. (2019) do not capture the dependent relationship between the perturbation of the covariates and the classification of training samples, as decisions made on the classification of training samples are made before the realization of the uncertain parameter. Thus, they cannot identify correctly classified data points in the realization of the worst-case perturbation. So, differing from previous work, we argue that the decision variables related to the classification of training samples should be modeled as second-stage decisions. And by modeling these variables as second-stage decisions, we obtain less conservative solutions than a single-stage approach to the same problem.
+
+With these motivating factors in mind, we propose a two-stage, MIO method for learning optimal classification trees robust to distribution shifts in the data. Namely, we present a flow-based optimization problem, where we model uncertainty through a cost-and-budget framework. We then present a tailored Benders decomposition algorithm that solves this two-stage formulation to optimality. We evaluate the performance of our formulation on publicly available data sets for several problem instances to measure the effectiveness of our method in mitigating the adverse effects of distribution shifts.
+
+## 2 Robust Tree Formulation
+
+In this section, we present our formulation for a robust classification tree. We describe the structure of the classification tree, present the proposed two-stage formulation, and discuss our model of uncertainty.
+
+### 2.1 Setup and Notation
+
+Let ${\left\{ {\mathbf{x}}^{i},{y}^{i}\right\} }_{i \in \mathcal{I}}$ be the training set, where $\mathcal{I}$ is the index set for our training samples. The covariates are ${\mathbf{x}}^{i} \in {\mathbb{Z}}^{\left| \mathcal{F}\right| }$ , where $\mathcal{F}$ is the set of features of our data, and ${y}^{i}$ is some label in a finite set $\mathcal{K}$ . With a slight abuse of notation, we will let $\mathbf{x}$ denote the vector concatenation of the rows of the $\left| \mathcal{I}\right| \times \left| \mathcal{F}\right|$ matrix of all training covariates, and $\mathbf{y}$ the $\left| \mathcal{I}\right|$ -sized vector of all training labels. The training set ${\left\{ {\mathbf{x}}^{i},{y}^{i}\right\} }_{i \in \mathcal{I}}$ is used to determine the tree structure, which includes deciding what binary tests to perform and labels to predict. Thus, in the first stage of our problem, we decide the tree structure to maximize the number of correct classifications for the given training data.
+
+Let ${\mathbf{\xi }}^{i} \in {\mathbb{Z}}^{\left| \mathcal{F}\right| }$ represent a perturbation of the covariate ${\mathbf{x}}^{i}$ . We can only observe ${\mathbf{\xi }}^{i}$ after making our first-stage decisions on the tree structure, so ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ represents the realization of training sample $i$ after determining the tree structure. We let the covariate and perturbation values of sample $i$ at feature $f$ be ${x}_{f}^{i}$ and ${\xi }_{f}^{i}$ respectively. Also, denote by $\mathbf{\xi }$ the vector concatenation of the rows of the $\left| \mathcal{I}\right| \times \left| \mathcal{F}\right|$ matrix of perturbations, and let $\Xi$ be the perturbation set that defines all possible $\mathbf{\xi }$ .
+
As mentioned before, the second-stage decisions are the classifications of training samples, which occur after deciding the tree structure and observing the worst-case perturbation. To classify sample $i$ after perturbation, we must perform the series of binary tests for covariate ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ on the tree decided in the first stage. A binary test uses a threshold $\theta$ such that if ${x}_{f}^{i} + {\xi }_{f}^{i} \leq \theta$ , then sample $i$ travels to the left child. Likewise, if ${x}_{f}^{i} + {\xi }_{f}^{i} \geq \theta + 1$ , then sample $i$ travels to the right child. Letting ${c}_{f}$ and ${d}_{f}$ be the lower and upper bounds of realized values for feature $f$ respectively, we define
+
+$$
+\Theta \left( f\right) \mathrel{\text{:=}} \left\{ {\theta \in \mathbb{Z} \mid {c}_{f} \leq \theta < {d}_{f}}\right\}
+$$
+
+as the set of possible binary test threshold values for feature $f$ .
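As a small illustration (with toy values, not the paper's data), the routing rule and the threshold set $\Theta(f)$ can be sketched as follows; because covariates are integers, the cases $\leq \theta$ and $\geq \theta + 1$ are exhaustive:

```python
# Routing rule of Section 2.1: at a branching node testing feature f with
# threshold theta, a (possibly perturbed) integer covariate goes left iff
# x_f + xi_f <= theta, and right iff x_f + xi_f >= theta + 1.
def route(x_f, xi_f, theta):
    return "left" if x_f + xi_f <= theta else "right"

# Theta(f) for a feature whose realized values span [c_f, d_f] = [0, 3]:
c_f, d_f = 0, 3
Theta_f = list(range(c_f, d_f))   # thresholds with c_f <= theta < d_f
```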
+
+### 2.2 The Two-Stage Problem
+
+We will set up our robust formulation based on the non-robust classification tree outlined by (Aghaei, Gómez, and Vayanos 2021), where a binary classification tree is represented by a directed graph. The model starts with a depth $d$ binary tree, where the internal nodes are in the set $\mathcal{N}$ and the leaf nodes are in the set $\mathcal{L}$ . A node in $\mathcal{N}$ can either be a branching node where a binary test is performed, or an assignment node where a classification of a sample is made. Nodes in $\mathcal{L}$ can only be assignment nodes. There are ${2}^{d} - 1$ nodes in $\mathcal{N}$ and ${2}^{d}$ nodes in $\mathcal{L}$ , and we number each node from 1 to ${2}^{d + 1} - 1$ in a breadth-first search pattern. We then augment this tree by adding a single source node $s$ connected to the root node of the tree, and a sink node $t$ connected to each node in $\mathcal{N} \cup \mathcal{L}$ . By adding a source and sink node, we say that any data sample $i$ travels from the source $s$ through the tree based on ${\mathbf{x}}^{i}$ and reaches the sink if and only if the datapoint is correctly classified. Lastly, we denote the left and right child of node $n \in \mathcal{N}$ as $l\left( n\right)$ and $r\left( n\right)$ respectively, and also denote the ancestor of any node $n \in \mathcal{N} \cup \mathcal{L}$ as $a\left( n\right)$ .
+
+
+
Figure 1: The left graph shows a classification tree with depth 2. Nodes $s$ and $t$ are the source and sink nodes respectively, $\mathcal{N} = \{ 1,2,3\}$ and $\mathcal{L} = \{ 4,5,6,7\}$ . The right graph shows an example of an induced graph for a particular sample, where the green, bold edges are the subset of edges included in the induced graph. For this particular induced graph, nodes 1 and 3 are branching nodes, where the sample would be routed left and right, respectively. Nodes 2 and 6 assign the correct label, and node 7 does not assign the correct label. The maximum flow from $s$ to $t$ is 1 in this induced graph, indicating a correct classification.
+
To determine whether a data sample $i$ is correctly classified by a given tree, we consider an induced graph of the original directed graph for data sample $i$ . In this induced graph, we keep every node in $\mathcal{N} \cup \mathcal{L}$ , the source $s$ , and the sink $t$ . For every branching node, we remove the edge leading to the sink node $t$ and the edge that fails the binary test based on the value of ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ . For every assignment node, we include the edge leading to $t$ only if the assigned class of that node is ${y}^{i}$ , and exclude all other edges leaving the assignment node. Lastly, we include the edge from the source $s$ to the root node 1. Therefore, a maximum flow of 1 from $s$ to $t$ in this induced graph means that data sample $i$ is correctly classified. Figure 1 illustrates an induced graph.
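Since each induced graph carries at most one unit of flow, the maximum-flow check reduces to following the sample's root-to-assignment path. A minimal sketch with a hypothetical depth-2 tree (nodes numbered breadth-first, so $l(n) = 2n$ and $r(n) = 2n + 1$; the branching and assignment data are toy values):

```python
# Hypothetical depth-2 tree: nodes 1 and 3 branch, nodes 2, 6, 7 assign labels.
branch = {1: ("f0", 1), 3: ("f1", 0)}   # branching node -> (feature, threshold)
assign = {2: "A", 6: "B", 7: "A"}       # assignment node -> predicted label

def correctly_classified(x, y):
    """Max flow from s to t in the induced graph is 1 iff this returns True."""
    n = 1                                           # enter at the root via (s, 1)
    while n in branch:
        f, theta = branch[n]
        n = 2 * n if x[f] <= theta else 2 * n + 1   # l(n) = 2n, r(n) = 2n + 1
    return assign.get(n) == y                       # edge (n, t) exists iff label matches
```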
+
+With the flow graph setup, we propose a two-stage formulation for creating robust classification trees. In the first stage, we decide the structure of the tree. Let ${b}_{nf\theta }$ , defined over all $n \in \mathcal{N}, f \in \mathcal{F}$ , and $\theta \in \Theta \left( f\right)$ be a binary variable that denotes the branching decisions. If ${b}_{nf\theta } = 1$ , then node $n$ is a branching node, where the binary test is on feature $f$ with threshold $\theta$ . We also let ${w}_{nk}$ , defined over all $n \in \mathcal{N} \cup \mathcal{L}$ and all $k \in \mathcal{K}$ , be a binary variable that denotes the assignment decisions. If ${w}_{nk} = 1$ , then node $n$ is an assignment node with assignment label $k$ . We will denote $\mathbf{b}$ and $\mathbf{w}$ as the collection of ${b}_{nf\theta }$ and ${w}_{nk}$ variables respectively.
+
Given a value of $\mathbf{b}$ and $\mathbf{w}$ , we find the perturbation in $\Xi$ that results in the minimum number of correctly classified points, which we call the worst-case perturbation. In the second stage, after observing the worst-case perturbation $\mathbf{\xi }$ of our covariates from the set $\Xi$ given a certain tree structure, we classify each of our points based on our tree. Let ${z}_{n, m}^{i}$ indicate whether data point $i$ flows down the edge between $n$ and $m$ and is correctly classified by the tree, for $n \in \mathcal{N} \cup \mathcal{L} \cup \{ s\}$ and $m \in \mathcal{N} \cup \mathcal{L} \cup \{ t\}$ , under the worst-case perturbation. We will let $\mathbf{z}$ be the vector concatenation of the rows of the $\left| \mathcal{I}\right| \times \left( {{2}^{d + 2} - 2}\right)$ matrix of ${z}_{n, m}^{i}$ values, with rows corresponding to data sample $i$ and columns representing edges $\left( {n, m}\right)$ for $n = a\left( m\right)$ . Note that the ${z}_{n, m}^{i}$ are the decision variables of a maximum flow problem: in the induced graph for data sample $i$ , ${z}_{n, m}^{i}$ is 1 if and only if the maximum flow is 1 and the flow passes from node $n$ to node $m$ . Therefore, if there exists an $n \in \mathcal{N} \cup \mathcal{L}$ such that ${z}_{n, t}^{i} = 1$ , then sample $i$ is correctly classified by our tree after observing the worst-case perturbation.
+
+Our two-stage approach to defining the variables $\mathbf{b},\mathbf{w}$ , $\xi$ , and $\mathbf{z}$ leads us to the following formulation for a robust classification tree:
+
+$$
+\mathop{\max }\limits_{{\mathbf{b},\mathbf{w}}}\mathop{\min }\limits_{{\mathbf{\xi } \in \Xi }}\mathop{\max }\limits_{{\mathbf{z} \in \mathcal{Z}\left( {\mathbf{b},\mathbf{w},\mathbf{\xi }}\right) }}\mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N} \cup \mathcal{L}}}{z}_{n, t}^{i} \tag{1a}
+$$
+
+$$
+\text{ s.t. }\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{{\theta \in \Theta \left( f\right) }}{b}_{nf\theta } + \mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{N} \tag{1b}
+$$
+
+$$
+\mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{L} \tag{1c}
+$$
+
+$$
+{b}_{nf\theta } \in \{ 0,1\} \;\forall n \in \mathcal{N}, f \in \mathcal{F},\theta \in \Theta \left( f\right) \tag{1d}
+$$
+
+$$
+{w}_{nk} \in \{ 0,1\} \;\forall n \in \mathcal{N} \cup \mathcal{L}, k \in \mathcal{K}, \tag{1e}
+$$
+
+where the set $\mathcal{Z}$ is defined as
+
+$$
+\mathcal{Z}\left( {\mathbf{b},\mathbf{w},\mathbf{\xi }}\right) \mathrel{\text{:=}} \{ \mathbf{z} \in \{ 0,1{\} }^{\left| \mathcal{I}\right| \times \left( {{2}^{d + 2} - 2}\right) } :
+$$
+
+$$
+{z}_{n, l\left( n\right) }^{i} \leq \mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \leq \theta } }}{b}_{nf\theta }\;\forall i \in \mathcal{I}, n \in \mathcal{N}, \tag{2a}
+$$
+
+$$
+{z}_{n, r\left( n\right) }^{i} \leq \mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \geq \theta + 1} }}{b}_{nf\theta }\;\forall i \in \mathcal{I}, n \in \mathcal{N}, \tag{2b}
+$$
+
+$$
+{z}_{a\left( n\right) , n}^{i} = {z}_{n, l\left( n\right) }^{i} + {z}_{n, r\left( n\right) }^{i} + {z}_{n, t}^{i}\;\forall i \in \mathcal{I}, n \in \mathcal{N}, \tag{2c}
+$$
+
+$$
+{z}_{a\left( n\right) , n}^{i} = {z}_{n, t}^{i}\;\forall i \in \mathcal{I}, n \in \mathcal{L}, \tag{2d}
+$$
+
+$$
{z}_{n, t}^{i} \leq {w}_{n,{y}^{i}}\;\forall i \in \mathcal{I}, n \in \mathcal{N} \cup \mathcal{L} \tag{2e}
+$$
+
+\}.
+
The objective function (1a) maximizes the number of correctly classified training samples under the worst-case perturbation. Constraint (1b) states that each internal node must either assign a label or perform a binary test on some feature with threshold $\theta$ . Constraint (1c) states that each leaf node must assign a label.
+
+The set (2) describes the maximum flow constraints for each sample's induced graph. Constraints (2a) and (2b) are capacity constraints that control the flow of samples in the induced graph based on $\mathbf{x} + \mathbf{\xi }$ and the tree structure. Constraints (2c) and (2d) are flow conservation constraints. Lastly, constraint (2e) blocks any flow to the sink if the node is either not an assignment node or the assignment is incorrect.
+
+### 2.3 The Uncertainty Set
+
+We consider uncertainty sets defined as follows. Let ${\gamma }_{f}^{i} \in \mathbb{R}$ be the cost of perturbing ${x}_{f}^{i}$ by one. Thus, ${\gamma }_{f}^{i}\left| {\xi }_{f}^{i}\right|$ is the total cost of perturbing ${x}_{f}^{i}$ to ${x}_{f}^{i} + {\xi }_{f}^{i}$ . Letting $\epsilon$ be the total allowable budget of uncertainty across data samples, we define the following uncertainty set:
+
+$$
+\Xi \mathrel{\text{:=}} \left\{ {\xi \in {\mathbb{Z}}^{\left| \mathcal{I}\right| \times \left| \mathcal{F}\right| } : \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}{\gamma }_{f}^{i}\left| {\xi }_{f}^{i}\right| \leq \epsilon }\right\} . \tag{3}
+$$
+
As we will show later, a tailored solution method for Formulation (1) can be devised when uncertainty is defined by set (3), and there is a connection between (3) and hypothesis testing.
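Checking whether a candidate perturbation belongs to the set (3) is a one-line budget computation; a minimal sketch with toy costs and perturbations (the values below are illustrative, not from the paper):

```python
# Uncertainty set (3): xi is admissible iff
# sum over samples i and features f of gamma_f^i * |xi_f^i| <= epsilon.
def in_uncertainty_set(xi, gamma, epsilon):
    return sum(gamma[i][f] * abs(xi[i][f])
               for i in range(len(xi)) for f in range(len(xi[0]))) <= epsilon

gamma = [[1.0, 2.0], [0.5, 1.5]]   # per-sample, per-feature perturbation costs
xi = [[1, 0], [-2, 1]]             # total cost: 1*1 + 2*0 + 0.5*2 + 1.5*1 = 3.5
```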
+
+## 3 Solution Method
+
+We now present a method of solving problem (1) through a reformulation that can leverage existing, off-the-shelf mixed-integer linear programming solvers.
+
+### 3.1 Reformulating the Two-Stage Problem
+
We solve our two-stage optimization problem by first taking the dual of the inner maximization problem. Recall that the inner maximization problem is a maximum flow problem; its dual is therefore a minimum cut problem (Vazirani 2001). Strong duality holds, so taking the dual of the inner problem of (1) yields a reformulation with the same optimal objective value, and hence the same optimal tree structure variables $\mathbf{b}$ and $\mathbf{w}$ for both problems.
+
Let ${q}_{n, m}^{i}$ be the binary dual variable that equals 1 if and only if, in the induced graph for data sample $i$ after perturbation, the edge connecting nodes $n \in \mathcal{N} \cup \mathcal{L} \cup \{ s\}$ and $m \in \mathcal{N} \cup \mathcal{L} \cup \{ t\}$ is in the minimum cut. We write $\mathbf{q}$ as the vector concatenation of the rows of the $\left| \mathcal{I}\right| \times \left( {{2}^{d + 2} - 2}\right)$ matrix of ${q}_{n, m}^{i}$ values, with rows corresponding to data sample $i$ and columns representing edges $\left( {n, m}\right)$ . We also define ${p}_{n}^{i}$ to be a binary variable that equals 1 if and only if, in the induced graph of data sample $i \in \mathcal{I}$ , the node $n \in \mathcal{N} \cup \mathcal{L} \cup \{ s\}$ is in the source set. Letting $\mathcal{Q}$ be the set of all possible values of $\mathbf{q}$ , we then have
+
+$$
+\mathcal{Q} \mathrel{\text{:=}} \{ \mathbf{q} \in \{ 0,1{\} }^{\left| \mathcal{I}\right| \times \left( {{2}^{d + 2} - 2}\right) } :
+$$
+
+$$
+\exists {p}_{n}^{i} \in \{ 0,1\} \;\forall i \in \mathcal{I}, n \in \mathcal{N} \cup \mathcal{L} \cup \{ s\} , \tag{4a}
+$$
+
+$$
+{q}_{n, l\left( n\right) }^{i} - {p}_{n}^{i} + {p}_{l\left( n\right) }^{i} \geq 0\;\forall i \in \mathcal{I}, n \in \mathcal{N}, \tag{4b}
+$$
+
+$$
+{q}_{n, r\left( n\right) }^{i} - {p}_{n}^{i} + {p}_{r\left( n\right) }^{i} \geq 0\;\forall i \in \mathcal{I}, n \in \mathcal{N}, \tag{4c}
+$$
+
+$$
+{q}_{s,1}^{i} + {p}_{1}^{i} \geq 1\;\forall i \in \mathcal{I}, \tag{4d}
+$$
+
+$$
+- {p}_{n}^{i} + {q}_{n, t}^{i} \geq 0\;\forall i \in \mathcal{I}, n \in \mathcal{N} \cup \mathcal{L}, \tag{4e}
+$$
+
+\}.
+
Constraint (4b) ensures that if the node $n \in \mathcal{N}$ is in the source set, then either its left child is also in the source set or the edge between $n$ and its left child is in the cut. Constraint (4c) is analogous to (4b), but with the right child of node $n \in \mathcal{N}$ . Constraint (4d) states that either the root node is in the source set or the edge from the source to the root node is in the cut. Lastly, constraint (4e) ensures that for any $n \in \mathcal{N} \cup \mathcal{L}$ , if $n$ is in the source set, then the edge from $n$ to the sink must be in the cut.
+
+Then, taking the dual of the inner maximization problem in (1) gives the following single-stage formulation:
+
$$
\mathop{\max }\limits_{{\mathbf{b},\mathbf{w}}}\mathop{\min }\limits_{{\mathbf{q} \in \mathcal{Q},\mathbf{\xi } \in \Xi }}\mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \leq \theta } }}{q}_{n, l\left( n\right) }^{i}{b}_{nf\theta }
$$

$$
+ \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \geq \theta + 1} }}{q}_{n, r\left( n\right) }^{i}{b}_{nf\theta }
$$

$$
+ \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N} \cup \mathcal{L}}}{q}_{n, t}^{i}{w}_{n,{y}^{i}} + \mathop{\sum }\limits_{{i \in \mathcal{I}}}{q}_{s,1}^{i} \tag{5a}
$$
+
+$$
+\text{ s.t. }\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{{\theta \in \Theta \left( f\right) }}{b}_{nf\theta } + \mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{N} \tag{5b}
+$$
+
+$$
+\mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{L} \tag{5c}
+$$
+
+$$
+{b}_{nf\theta } \in \{ 0,1\} \;\forall n \in \mathcal{N}, f \in \mathcal{F},\theta \in \Theta \left( f\right) \tag{5d}
+$$
+
+$$
+{w}_{nk} \in \{ 0,1\} \;\forall n \in \mathcal{N} \cup \mathcal{L}, k \in \mathcal{K} \tag{5e}
+$$
+
+where $\mathcal{Q}$ is defined by (4) and constraints (5b) and (5c) are the same as constraints (1b) and (1c) respectively. As mentioned before, strong duality holds between Formulations (1) and (5) since strong duality holds between the maximum flow and minimum cut problems.
+
+### 3.2 Solving the Single-Stage Reformulation
+
We can obtain a mixed-integer linear program equivalent to (5) through a hypograph reformulation. However, a hypograph reformulation would introduce an extremely large number of constraints. A common approach to solving the reformulation is a tailored Benders decomposition algorithm, which we describe here.
+
+The master problem decides the tree structure given its current constraints. We thus have the following initial master problem:
+
+$$
+\mathop{\max }\limits_{{\mathbf{b},\mathbf{w},\mathbf{t}}}\mathop{\sum }\limits_{{i \in \mathcal{I}}}{t}_{i} \tag{6a}
+$$
+
+$$
+\text{ s.t. }\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{{\theta \in \Theta \left( f\right) }}{b}_{nf\theta } + \mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{N} \tag{6b}
+$$
+
+$$
+\mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{L} \tag{6c}
+$$
+
+$$
+{t}_{i} \leq 1\;\forall i \in \mathcal{I} \tag{6d}
+$$
+
+$$
+{b}_{nf\theta } \in \{ 0,1\} \;\forall n \in \mathcal{N}, f \in \mathcal{F},\theta \in \Theta \left( f\right) \tag{6e}
+$$
+
+$$
+{w}_{nk} \in \{ 0,1\} \;\forall n \in \mathcal{N} \cup \mathcal{L}, k \in \mathcal{K} \tag{6f}
+$$
+
+where ${t}_{i}$ comes from the hypograph of the inner sum of objective function (5a) for a particular $i \in \mathcal{I}$ . We add the constraint (6d) to ensure that the initial problem is bounded.
+
The goal of the subproblem is, given values of $\mathbf{b}$ , $\mathbf{w}$ , and $\mathbf{t}$ that describe a specific tree structure, to find a perturbation in $\Xi$ that reduces the number of correctly classified samples the most. After finding the minimum cut of each sample's induced graph under this perturbation, we add a constraint of the form
+
$$
\mathop{\sum }\limits_{{i \in \mathcal{I}}}{t}_{i} \leq \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \leq \theta } }}{q}_{n, l\left( n\right) }^{i}{b}_{nf\theta }
$$

$$
+ \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \geq \theta + 1} }}{q}_{n, r\left( n\right) }^{i}{b}_{nf\theta } \tag{7}
$$

$$
+ \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N} \cup \mathcal{L}}}{q}_{n, t}^{i}{w}_{n,{y}^{i}} + \mathop{\sum }\limits_{{i \in \mathcal{I}}}{q}_{s,1}^{i}
$$
+
+where we substitute the variables $\mathbf{\xi }$ and $\mathbf{q}$ with the perturbation and minimum cuts.
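The alternation between a relaxed master problem and a cut-generating subproblem is the standard delayed-constraint-generation pattern. A minimal sketch on a toy max-min problem (the payoff `f`, the scenario list, and the discretized decision grid are all hypothetical, not the tree model itself) shows the mechanics: the master optimizes over the cuts collected so far, the subproblem searches for a violated cut, and the loop stops when none exists:

```python
# Cutting-plane (Benders-style) loop for a toy problem: maximize over x the
# worst case over scenarios s of f(x, s), adding one hypograph cut per round.
scenarios = [1.0, 2.0, 3.5]               # adversary's choices (toy data)

def f(x, s):                              # toy payoff, concave in x
    return -(x - s) ** 2 + 4.0

X = [i * 0.1 for i in range(51)]          # discretized decisions x in [0, 5]
cuts = []                                 # scenarios whose cut t <= f(x, s) was added

while True:
    # "Master": pick x maximizing the current overestimate (min over added cuts).
    def bound(x):
        return min((f(x, s) for s in cuts), default=float("inf"))
    x_star = max(X, key=bound)
    t_star = bound(x_star)
    # "Subproblem": the adversary finds the worst scenario for x_star.
    s_worst = min(scenarios, key=lambda s: f(x_star, s))
    if f(x_star, s_worst) >= t_star - 1e-9:
        break                             # no violated cut: x_star is optimal
    cuts.append(s_worst)                  # add the violated cut to the master
```

Here each "cut" plays the role of a constraint of the form (7) with the adversary's variables fixed to the subproblem's solution.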
+
+### 3.3 The Subproblem
+
+We will now describe the procedure for the subproblem to find the perturbation and minimum cuts that will yield a violated constraint of the form (7) if a violated constraint exists.
+
+For each data sample that is correctly classified by the tree given by the master problem (6), we first find the lowest-cost perturbation ${\xi }^{i}$ for the single sample that would cause it to be misclassified. To do this, we set up a shortest path problem. The weighted graph of the shortest path problem is created from the flow-based tree returned by the master problem, and is constructed with the following procedure:
+
1. The edge from $s$ to 1 (the root of the decision tree) has path cost 0.
+
2. For each $n \in \mathcal{N} \cup \mathcal{L}$ , if there exists a $k \in \mathcal{K}$ such that ${w}_{nk} = 1$ and $k \neq {y}^{i}$ , then the edge from $n$ to $t$ has path cost 0. All other edges coming into $t$ have infinite path cost.
+
3. For each $n \in \mathcal{N}$ , if there exists an $f \in \mathcal{F}$ and $\theta \in \Theta \left( f\right)$ such that ${b}_{nf\theta } = 1$ , then...
+
(a) if ${x}_{f}^{i} = 0$ , add an edge from $n$ to $l\left( n\right)$ with weight 0, and an edge from $n$ to $r\left( n\right)$ with weight ${\gamma }_{f}^{i}$ .
+
(b) if ${x}_{f}^{i} = 1$ , add an edge from $n$ to $r\left( n\right)$ with weight 0, and an edge from $n$ to $l\left( n\right)$ with weight ${\gamma }_{f}^{i}$ .

By finding the shortest path from $s$ to $t$ in the weighted graph derived from data sample $i$ , we find the path with the smallest total cost of perturbation that would misclassify the point $i$ . That is, the shortest path tells us which perturbation ${\mathbf{\xi }}^{i}$ would misclassify ${\mathbf{x}}^{i}$ at the smallest cost.
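For binary features, this construction amounts to a small Dijkstra search; a sketch on a hypothetical toy tree (nodes numbered breadth-first; the tree, labels, and per-feature costs below are illustrative assumptions):

```python
import heapq

# Cheapest misclassifying perturbation for one sample with binary features:
# at a branching node, the edge agreeing with x_f costs 0 and the edge that
# requires flipping x_f costs gamma_f; an edge to the sink t (cost 0) exists
# only at assignment nodes whose label differs from y.
branch = {1: "f0", 3: "f1"}          # branching node -> tested feature
assign = {2: "A", 6: "B", 7: "A"}    # assignment node -> label
gamma = {"f0": 2.0, "f1": 0.5}       # per-feature perturbation costs

def cheapest_misclassification(x, y):
    pq, seen = [(0.0, 1)], set()     # Dijkstra from the root node 1
    while pq:
        d, n = heapq.heappop(pq)
        if n in seen:
            continue
        seen.add(n)
        if n in assign and assign[n] != y:
            return d                                  # shortest path reached t
        if n in branch:
            f = branch[n]
            same, flip = (2 * n, 2 * n + 1) if x[f] == 0 else (2 * n + 1, 2 * n)
            heapq.heappush(pq, (d, same))             # follows x_f: cost 0
            heapq.heappush(pq, (d + gamma[f], flip))  # flips x_f: cost gamma_f
    return float("inf")                               # no misclassifying path
```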
+
Once we have found, for each sample, the lowest-cost perturbation that would misclassify it, we choose the largest subset of these training samples whose total cost of perturbation is within the allowed budget of uncertainty. Through this procedure, we find the value of $\mathbf{\xi }$ that misclassifies the largest number of points given the current tree.
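Since each sample contributes its misclassification cost independently, taking samples in order of increasing cost until the budget $\epsilon$ is exhausted maximizes the number of misclassified points; a sketch with toy costs:

```python
# Greedy selection of the worst-case set of samples to misclassify: sort by
# each sample's cheapest misclassification cost and take while within budget.
def worst_case_misclassified(costs, epsilon):
    chosen, spent = [], 0.0
    for i, c in sorted(enumerate(costs), key=lambda pair: pair[1]):
        if spent + c > epsilon:
            break                    # costs are sorted, so nothing later fits
        chosen.append(i)
        spent += c
    return chosen

# With costs [1.5, 4.0, 0.5, 2.0] and budget 3.0, samples 2 and 0 are chosen.
```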
+
Note that the right-hand side of constraint (7) gives the number of correctly classified points for a given $\mathbf{b},\mathbf{w},\mathbf{\xi }$ , and $\mathbf{q}$ . Therefore, for the dataset $\mathbf{x} + \mathbf{\xi }$ , if the number of correctly classified points is less than the optimal value of $\mathop{\sum }\limits_{{i \in \mathcal{I}}}{t}_{i}$ from the master problem, then there exists a violated constraint of the form (7). Otherwise, no constraint of the form (7) is violated, which certifies the optimality of the current solution.
+
In the case of finding a violated constraint, we would now like to obtain the values of $\mathbf{q}$ , the variables associated with the minimum cut problem. To do this, for each sample $i \in \mathcal{I}$ we must find the set of edges in a minimum cut given the path of the perturbed point ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ , where the value of the cut is 1 if ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ is correctly classified and 0 otherwise. The simplest construction is, for each $i \in \mathcal{I}$ , to follow the path of ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ . At each node visited on this path, we first add to the minimum cut any outgoing edges that are not traversed. At the assignment node, we also add the edge from the assignment node to the sink $t$ . Carrying out this procedure for all training samples yields the value of $\mathbf{q}$ describing all minimum cuts.
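A sketch of this path-tracing construction on a hypothetical depth-2 tree (toy branching data; $l(n) = 2n$ and $r(n) = 2n + 1$ with breadth-first numbering); the edges returned correspond to the ${q}_{n, m}^{i}$ variables set to 1 for that sample:

```python
# Trace the perturbed sample's path; cut the untraversed child edge at each
# branching node visited, then cut the sink edge at the final assignment node.
branch = {1: ("f0", 1), 3: ("f1", 0)}   # branching node -> (feature, threshold)

def min_cut_edges(x):
    cut, n = [], 1
    while n in branch:
        f, theta = branch[n]
        nxt = 2 * n if x[f] <= theta else 2 * n + 1
        cut.append((n, 2 * n + 1 if nxt == 2 * n else 2 * n))  # edge not taken
        n = nxt
    cut.append((n, "t"))                 # edge from the assignment node to t
    return cut
```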
+
+By finding the value of $\xi$ and an associated $\mathbf{q}$ , we can use these values in (7) to obtain the most restrictive violated constraint for a given tree, which we add back to the master problem. We summarize our approach in Algorithm 1.
+
+## 4 Statistical Connections
+
Here, we explore how the uncertainty set described in (3) connects to hypothesis testing. Let ${q}_{f}^{\zeta }$ be the probability that the realized value of feature $f$ , as decided by our uncertainty set, perturbs the nominal value of feature $f$ by $\zeta \in \mathbb{Z}$ . We impose the assumption that the perturbations follow a symmetric geometric distribution. More specifically,
+
+$$
+{q}_{f}^{\zeta } = {\left( {0.5}\right) }^{\mathbb{I}\left\lbrack {\zeta \neq 0}\right\rbrack }{q}_{f}{\left( 1 - {q}_{f}\right) }^{\left| \zeta \right| } \tag{8}
+$$
+
+for some ${q}_{f} \in (0,1\rbrack$ (where the ${\left( {0.5}\right) }^{\mathbb{I}\left\lbrack {\zeta \neq 0}\right\rbrack }$ multiplier imposes a symmetry between positive and negative values of the perturbation).
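A quick numeric sanity check (with a toy value of $q_f$) confirms that (8) defines a symmetric probability mass function over the integers:

```python
# q_f^zeta = 0.5^{1[zeta != 0]} * q_f * (1 - q_f)^{|zeta|}; the 0.5 factor
# splits the geometric tail evenly between positive and negative shifts.
def q_zeta(q_f, zeta):
    return (0.5 if zeta != 0 else 1.0) * q_f * (1.0 - q_f) ** abs(zeta)

q_f = 0.7
total = sum(q_zeta(q_f, z) for z in range(-200, 201))  # truncated support
```

The truncation at $|\zeta| = 200$ loses only a geometrically negligible tail, so `total` comes out to 1 up to floating-point error.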
+
We will set up a likelihood ratio test with threshold ${\lambda }^{\left| \mathcal{I}\right| }$ for $\lambda \in \left\lbrack {0,1}\right\rbrack$ , where the exponent $\left| \mathcal{I}\right|$ is included for ease of comparison across data sets with different numbers of training samples. Our null hypothesis is that a given perturbation of our data comes from the distribution of perturbations given by the chosen ${q}_{f}^{\zeta }$ . We then fail to reject the null hypothesis if
+
+$$
\frac{\mathop{\prod }\limits_{{i \in \mathcal{I}}}\mathop{\prod }\limits_{{f \in \mathcal{F}}}\mathop{\prod }\limits_{{\zeta = - \infty }}^{\infty }{\left( {q}_{f}^{\zeta }\right) }^{\mathbb{I}\left\lbrack {{\xi }_{f}^{i} = \zeta }\right\rbrack }}{\mathop{\prod }\limits_{{i \in \mathcal{I}}}\mathop{\prod }\limits_{{f \in \mathcal{F}}}{q}_{f}^{0}} \geq {\lambda }^{\left| \mathcal{I}\right| }, \tag{9}
+$$
+
Algorithm 1: Solution method for formulation (5)

---

**Input:** training set indexed by $\mathcal{I}$ with features $\mathcal{F}$ and labels $\mathcal{K}$ ; range of test thresholds $\Theta \left( f\right)$ ; tree depth $d$ ; uncertainty set parameters $\gamma$ and $\epsilon$

**Output:** the optimal robust tree ${\mathcal{T}}^{ * } = \left( {{\mathbf{b}}^{ * },{\mathbf{w}}^{ * }}\right)$

**while** no tree $\mathcal{T}$ has been returned **do**

1. Solve the master problem (6) with any added constraints to obtain the tree $\mathcal{T} = \left( {{\mathbf{b}}^{ * },{\mathbf{w}}^{ * }}\right)$ and ${\mathbf{t}}^{ * }$ .
2. Find the lowest-cost $\mathbf{\xi }$ that causes the largest number of samples to be misclassified in $\mathcal{T}$ , obtaining ${\mathbf{\xi }}^{ * }$ .
3. **if** $\mathop{\sum }\limits_{{i \in \mathcal{I}}}{t}_{i}^{ * } \leq$ the number of correctly classified samples of $\mathbf{x} + {\mathbf{\xi }}^{ * }$ given $\mathcal{T}$ , **then** return $\mathcal{T}$ ;
4. **else** find ${\mathbf{q}}^{ * }$ by computing a minimum cut for each $i \in \mathcal{I}$ based on $\mathbf{x} + {\mathbf{\xi }}^{ * }$ and $\mathcal{T}$ , then use ${\mathbf{\xi }}^{ * }$ and ${\mathbf{q}}^{ * }$ to create a constraint of the form (7) and add it to the master problem.

**end while**

---
+
where the numerator of the left-hand side is the likelihood of a given perturbation $\mathbf{\xi }$ , and the denominator of the left-hand side is the likelihood of the zero perturbation under the null hypothesis. Using the assumption that ${q}_{f}^{\zeta }$ follows (8), we can reduce the hypothesis test in (9) to
+
+$$
+\mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\left| {\xi }_{f}^{i}\right| \log \left( \frac{1}{1 - {q}_{f}}\right) \leq - \left| \mathcal{I}\right| \log \lambda . \tag{10}
+$$
+
+We say that if a particular $\xi$ lies within the region where we fail to reject the null hypothesis, then it is part of our perturbation set. That is, using the notation from the perturbation set defined in (3), letting ${\gamma }_{f}^{i} = \log \left( \frac{1}{1 - {q}_{f}}\right)$ and $\epsilon = - \left| \mathcal{I}\right| \log \lambda$ yields an uncertainty set with a direct relationship to the probabilities of certainty for each feature.
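The resulting mapping from the statistical parameters $(q_f, \lambda)$ to the uncertainty-set parameters $(\gamma, \epsilon)$ is direct; a small sketch (toy values, assuming $q_f \in (0, 1)$ so the logarithm is defined):

```python
import math

# Per the reduction above: gamma_f^i = log(1 / (1 - q_f)) and
# epsilon = -|I| * log(lambda). More certain features (q_f near 1) are
# more expensive to perturb, and lambda = 1 gives a zero budget.
def gamma_from_q(q_f):
    return math.log(1.0 / (1.0 - q_f))

def epsilon_from_lambda(lam, n_samples):
    return -n_samples * math.log(lam)
```

A perturbation $\mathbf{\xi}$ then passes the test (10) exactly when $\sum_{i,f} |\xi_f^i| \cdot$ `gamma_from_q(q_f)` $\leq$ `epsilon_from_lambda(lam, n)`.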
+
+## 5 Experiments
+
We evaluate our approach on 12 datasets from the UCI Machine Learning Repository (Dua and Graff 2017). For each data set, we construct a robust classification tree using a synthetic uncertainty set, where across problem instances we vary the level of uncertainty in the features and the budget of uncertainty. We utilize the hypothesis testing framework described by (10), defining ${q}_{f}$ by sampling the probability of certainty from a normal distribution with a chosen mean and a standard deviation of 0.2. The means of this normal distribution were 0.6, 0.7, 0.8, and 0.9. We also varied the budget by setting $\lambda$ to 0.5, 0.75, 0.85, 0.9, 0.95, 0.97, and 0.99. For every data set and uncertainty set, we tested tree depths of 2, 3, 4, and 5.
+
+
+
Figure 2: This graph shows the number of instances solved across times, and the optimality gaps when the time limit of 7200 seconds is reached, for several values of $\lambda$ . The case $\lambda = 1.0$ is the regularized tree with an empty uncertainty set.
+
For each instance, we randomly split the data set into 80% training data and 20% testing data. We then ran our algorithm to obtain a robust classification tree with a time limit of 7200 seconds. For comparison, we also used our model to create a non-robust tree by setting the budget of uncertainty to 0 (i.e., $\lambda = 1$ ), and tuned a regularization parameter for the non-robust tree. The regularization term penalizes the objective for every branching node, yielding the following objective in the master problem of our algorithm:
+
+$$
\mathop{\max }\limits_{{\mathbf{b},\mathbf{w},\mathbf{t}}}\left( {1 - R}\right) \mathop{\sum }\limits_{{i \in \mathcal{I}}}{t}_{i} - R\mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{{\theta \in \Theta \left( f\right) }}{b}_{nf\theta }
+$$
+
where $R \in \left\lbrack {0,1}\right\rbrack$ is the tuned regularization parameter. Note that we do not add a regularization parameter to our robust model: robust optimization has an equivalence with regularization, so adding a regularization term would be redundant (Bertsimas and Copenhaver 2018). We summarize the computation time across all instances in Figure 2. As expected, the larger the uncertainty set, the longer the formulation takes to solve to optimality.
+
To test our model's robustness against distribution shifts, we perturbed the test data in 5000 different ways and computed the test accuracy of our robust tree under each perturbation. We first perturbed the data based on the expected distribution of perturbations. That is, for the collection of ${q}_{f}$ values for every $f \in \mathcal{F}$ used to construct an uncertainty set based on (10), we perturbed the data according to the distribution described in (8).
+
To measure the robustness of our model under unexpected perturbations of the data, we repeated the same process with values of ${q}_{f}$ different from those given to our model. First, we shifted each ${q}_{f}$ value down by 0.2 and perturbed our test data in 5000 different ways based on these new values of ${q}_{f}$ . We did the same with ${q}_{f}$ shifted down by 0.1 and up by 0.1. In a similar fashion, we uniformly sampled a new ${q}_{f}$ value for each feature in a neighborhood of radius 0.05 around the original expected ${q}_{f}$ value, and perturbed the test data in 5000 different ways with the new ${q}_{f}$ values. We repeated this procedure for neighborhood radii of 0.1, 0.15, and 0.2.
+
+
+
+Figure 3: These boxplots show the distribution across problem instances of the gain in worst-case accuracy from using a robust tree versus a non-robust, regularized tree across different values of $\lambda$ . We also show the distribution of the gain in worst-case accuracy in the case where perturbations of our data are not as we expect.
+
For each set of perturbations of the test data, we measure the worst-case accuracy as the lowest accuracy over all perturbations made for a single set of ${q}_{f}$ values, and the average accuracy as the mean accuracy over all those perturbations. We compile the gain in worst-case and average-case performance from using our robust tree versus a regularized, non-robust tree for every problem instance and perturbation of our data, giving a distribution of worst-case and average-case gains in performance summarized in Figures 3 and 4, respectively.
+
From the figures, we see that our robust tree model generally achieves both higher worst-case and higher average-case accuracy than a non-robust model when there are distribution shifts in the data. We also see that a range of values of $\lambda$ (around 0.85) performs better than others. This shows that if the budget of uncertainty is too small, we do not leave enough room to hedge against distribution shifts in our uncertainty set; but if the budget is too large, we become over-conservative and perform poorly for any perturbation of the test data. We also see little difference between the gains in accuracy when the perturbation of our data is as expected versus when it is not. This indicates that even if we misspecify our model, we still obtain a classification tree robust to any distribution shift within a reasonable range of the expected one. Overall, an important factor in determining the performance of our model is the budget of uncertainty, which can easily be tuned to create an effective robust tree.
+
+
+
+Figure 4: These boxplots show the distribution across problem instances of the gain in average test accuracy from using a robust tree versus a non-robust, regularized tree across different values of $\lambda$ . We also show this gain in average accuracy in the case where perturbations of our data are not what we expect.
+
+## References
+
+Aghaei, S.; Azizi, M. J.; and Vayanos, P. 2019. Learning Optimal and Fair Decision Trees for Non-Discriminative Decision-Making. Proceedings of the AAAI Conference on Artificial Intelligence, 33: 1418-1426.
+
+Aghaei, S.; Gómez, A.; and Vayanos, P. 2021. Strong Optimal Classification Trees. arXiv:2103.15965.
+
+Ben-Tal, A.; El Ghaoui, L.; and Nemirovski, A. 2009. Robust Optimization. Princeton University Press. ISBN 9781400831050.
+
+Ben-Tal, A.; Goryashko, A.; Guslitzer, E.; and Nemirovski, A. 2004. Adjustable robust solutions of uncertain linear programs. Mathematical Programming, 99(2).
+
+Bertsimas, D.; and Copenhaver, M. S. 2018. Characterization of the equivalence of robustification and regularization in linear and matrix regression. European Journal of Operational Research, 270(3).
+
+Bertsimas, D.; and Dunn, J. 2017. Optimal classification trees. Machine Learning, 106(7): 1039-1082.
+
+Bertsimas, D.; Dunn, J.; Pawlowski, C.; and Zhuo, Y. D. 2019. Robust Classification. INFORMS Journal on Optimization, 1(1): 2-34.
+
+Bickel, S.; Brückner, M.; and Scheffer, T. 2007. Discriminative learning for differing training and test distributions. In Proceedings of the 24th international conference on Machine learning - ICML '07. New York, New York, USA: ACM Press. ISBN 9781595937933.
+
+Breiman, L.; Friedman, J. H.; Olshen, R. A.; and Stone, C. J. 2017. Classification And Regression Trees. Routledge. ISBN 9781315139470.
+
+Dua, D.; and Graff, C. 2017. UCI Machine Learning Repository.
+
+Jo, N.; Aghaei, S.; Gómez, A.; and Vayanos, P. 2021. Learning Optimal Prescriptive Trees from Observational Data. arXiv:2108.13628.
+
+Kuhn, D.; Esfahani, P. M.; Nguyen, V. A.; and Shafieezadeh-Abadeh, S. 2019. Wasserstein Distributionally Robust Optimization: Theory and Applications in Machine Learning. In Operations Research & Management Science in the Age of Analytics, 130-166. INFORMS.
+
+Mišić, V. V. 2020. Optimization of Tree Ensembles. Operations Research, 68(5): 1605-1624.
+
+Quiñonero-Candela, J.; Sugiyama, M.; Lawrence, N. D.; and Schwaighofer, A. 2009. Dataset shift in machine learning. Mit Press.
+
+Rudin, C. 2019. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nature Machine Intelligence, 1(5).
+
+Shaham, U.; Yamada, Y.; and Negahban, S. 2018. Understanding adversarial training: Increasing local stability of supervised models through robust optimization. Neurocomputing, 307: 195-204.
+
+Shimodaira, H. 2000. Improving predictive inference under covariate shift by weighting the log-likelihood function. Journal of Statistical Planning and Inference, 90(2).
+
+Shivaswamy, P. K.; Bhattacharyya, C.; and Smola, A. J. 2006. Second Order Cone Programming Approaches for Handling Missing and Uncertain Data. Journal of Machine Learning Research, 7(47): 1283-1314.
+
+Sinha, A.; Namkoong, H.; Volpi, R.; and Duchi, J. 2020. Certifying Some Distributional Robustness with Principled Adversarial Training. arXiv:1710.10571.
+
+Vazirani, V. V. 2001. Approximation Algorithms. Springer.
+
+Verwer, S.; and Zhang, Y. 2019. Learning Optimal Classification Trees Using a Binary Linear Program Formulation. Proceedings of the AAAI Conference on Artificial Intelligence, 33.
+
+Vos, D.; and Verwer, S. 2021. Robust Optimal Classification Trees Against Adversarial Examples. arXiv:2109.03857.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HbasA9ysA3/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HbasA9ysA3/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..ba20d3d39383438c8d5710cedc6fe5c3e782fb69
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HbasA9ysA3/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,365 @@
+§ OPTIMAL ROBUST CLASSIFICATION TREES
+
+§ ABSTRACT
+
+In many high-stakes domains, the data used to drive machine learning algorithms is noisy (due, e.g., to the sensitive nature of the data being collected or the limited resources available to validate it). This may cause a distribution shift, where the distribution of the training data does not match the distribution of the testing data. In the presence of distribution shifts, any trained model can perform poorly in the testing phase. In this paper, motivated by the need for interpretability and robustness, we propose a mixed-integer optimization formulation and a tailored solution algorithm for learning optimal classification trees that are robust to adversarial perturbations in the data features. We evaluate the performance of our approach on numerous publicly available datasets and compare it to that of a regularized, non-robust optimal tree. We show an increase of up to 14.16% in worst-case accuracy and of up to 4.72% in average-case accuracy across several data sets and distribution shifts from using our robust solution in comparison to the non-robust one.
+
+§ 1 INTRODUCTION
+
+Machine learning techniques are increasingly being used in high-stakes domains to assist humans in making important decisions. Within these applications, black box models should be avoided, as decisions made from such models may have a profound impact yet cannot be readily explained (Rudin 2019). That is, we need inherently interpretable models whose decisions can be simply understood and verified. Among the most interpretable models are classification trees, which are easily visualized and do not require extensive knowledge to use. A classification tree is a widely used model that takes the form of a binary tree. At each branching node, a test based on the attributes of the given data sample is performed, which dictates the next node visited. Then, at an assignment node, a particular label is assigned to the data sample (Breiman et al. 2017).
+
+However, as with many other machine learning models, classification trees are susceptible to distribution shifts. That is, the distribution of the training data and the testing data may be different, causing poor performance in deployment (Quiñonero-Candela et al. 2009). In high-stakes domains where there is a need for interpretability, there must also be robustness against distribution shifts to ensure high-quality solutions under any realization of the training data.
+
+§ 1.1 BACKGROUND AND RELATED WORK
+
+Traditionally, classification trees are built using heuristic approaches since the problem of building optimal classification trees is $\mathcal{{NP}}$ -hard (Breiman et al. 2017). But in settings where the quality of solutions is important, heuristic approaches may yield suboptimal solutions that are unacceptable for use in applications. Thus, to ensure a high-quality decision tree, mathematical optimization techniques, like mixed-integer optimization (MIO), have been developed for building optimal trees. Namely, Bertsimas and Dunn (2017) were the first to use MIO to build optimal decision trees. To combat the long run times for building optimal decision trees on large data sets, Verwer and Zhang (2019) create a binary linear programming formulation whose run time is independent of the number of training samples. Aghaei, Gómez, and Vayanos (2021) build a strong, "flow-based" MIO formulation that greatly improves on the solving times of other state-of-the-art optimal classification tree algorithms. MIO approaches for constructing decision trees have also allowed for several extensions. For example, Mišić (2020) formulates the problem of creating tree ensembles as a MIO problem, Aghaei, Azizi, and Vayanos (2019) create optimal and fair decision trees using MIO, and Jo et al. (2021) use MIO to build optimal prescriptive trees from observational data.
+
+To account for the problem of distribution shifts, there exists both non-MIO and MIO approaches. One type of non-MIO method up-weights training samples that match the test set distribution and down-weights the training samples that differ from the test set (Shimodaira 2000; Bickel, Brückner, and Scheffer 2007). These methods usually define distribution shift as a biased sampling of training data, where assigning weights to training samples diminishes the effects of adversarial examples.
+
+Another way to define a distribution shift is as an adversarial perturbation of the trained data that makes it differ from the test data. Motivated by this viewpoint, there have been several optimization-based methods to deal with distribution shifts. One of these methods is distributionally robust optimization, which combats the effects of distribution shifts by performing well under an adversarial distribution of samples. Both Kuhn et al. (2019) and Sinha et al. (2020) in particular provide distributionally robust approaches to building machine learning models that perform well under an adversarial distribution of the training data, where the adversarial distribution is in some Wasserstein distance from the nominal distribution of the data.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+Distributionally robust optimization requires an assumption on the distribution of the data available, which may not be a reasonable assumption to make. In the case where such an assumption cannot be reasonably made, robust optimization provides a framework to generate solutions that perform well under the worst-case perturbation, where the perturbation comes from a set of values with no probability distribution assumption imposed (Ben-Tal, El Ghaoui, and Nemirovski 2009). Many common machine learning models have been formulated as robust optimization problems to deal with uncertainty in data. For example, robust optimization has been used for creating robust support vector machines (Shivaswamy, Bhattacharyya, and Smola 2006; Bertsimas et al. 2019). Robust optimization has also been used to create artificial neural networks that are robust against adversarial perturbations of the data (Shaham, Yamada, and Negahban 2018).
+
+In a similar spirit to these previous works, we propose using robust optimization to create a classification tree robust to distribution shifts. In an adversarial setting, we must decide the tree structure before observing the perturbation of the data. The perturbation of a sample, once unveiled, reveals whether the given tree structure correctly classifies the realization of the sample. That is, the classification of samples in a decision tree is dependent on both the tree structure and the realization of the uncertain parameter. Decision variables that indicate the correct classification of samples are called second-stage decisions, which can be modeled through two-stage robust optimization (Ben-Tal et al. 2004).
+
+There have been several recent methods for robust classification trees. Namely, Vos and Verwer (2021) propose a local search algorithm for decision trees robust against adversarial examples derived from a user-defined threat model. Bertsimas et al. (2019) give a robust optimization formulation for robust trees where the uncertainty set is modeled by restricting the norm of the perturbation parameter. Unlike our proposed method, Bertsimas et al. (2019) do not capture the dependent relationship between the perturbation of the covariates and the classification of training samples, as decisions on the classification of training samples are made before the realization of the uncertain parameter. Thus, they cannot identify correctly classified data points under the realization of the worst-case perturbation. So, differing from previous work, we argue that the decision variables related to the classification of training samples should be modeled as second-stage decisions. By modeling these variables as second-stage decisions, we obtain less conservative solutions than a single-stage approach to the same problem.
+
+With these motivating factors in mind, we propose a two-stage, MIO method for learning optimal classification trees robust to distribution shifts in the data. Namely, we present a flow-based optimization problem, where we model uncertainty through a cost-and-budget framework. We then present a tailored Benders decomposition algorithm that solves this two-stage formulation to optimality. We evaluate the performance of our formulation on publicly available data sets for several problem instances to measure the effectiveness of our method in mitigating the adverse effects of distribution shifts.
+
+§ 2 ROBUST TREE FORMULATION
+
+In this section, we present our formulation for a robust classification tree. We describe the structure of the classification tree, present the proposed two-stage formulation, and discuss our model of uncertainty.
+
+§ 2.1 SETUP AND NOTATION
+
+Let ${\left\{ {\mathbf{x}}^{i},{y}^{i}\right\} }_{i \in \mathcal{I}}$ be the training set, where $\mathcal{I}$ is the index set for our training samples. The covariates are ${\mathbf{x}}^{i} \in {\mathbb{Z}}^{\left| \mathcal{F}\right| }$ , where $\mathcal{F}$ is the set of features of our data, and ${y}^{i}$ is some label in a finite set $\mathcal{K}$ . With a slight abuse of notation, we will let $\mathbf{x}$ denote the vector concatenation of the rows of the $\left| \mathcal{I}\right| \times \left| \mathcal{F}\right|$ matrix of all training covariates, and $\mathbf{y}$ the $\left| \mathcal{I}\right|$ -sized vector of all training labels. The training set ${\left\{ {\mathbf{x}}^{i},{y}^{i}\right\} }_{i \in \mathcal{I}}$ is used to determine the tree structure, which includes deciding what binary tests to perform and labels to predict. Thus, in the first stage of our problem, we decide the tree structure to maximize the number of correct classifications for the given training data.
+
+Let ${\mathbf{\xi }}^{i} \in {\mathbb{Z}}^{\left| \mathcal{F}\right| }$ represent a perturbation of the covariate ${\mathbf{x}}^{i}$ , which we can only observe after making our first-stage decisions on the tree structure. So ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ represents the realization of training sample $i$ after the tree structure is determined. We let the covariate and perturbation values of sample $i$ at feature $f$ be ${x}_{f}^{i}$ and ${\xi }_{f}^{i}$ , respectively. Also, denote by $\mathbf{\xi }$ the vector concatenation of the rows of the $\left| \mathcal{I}\right| \times \left| \mathcal{F}\right|$ matrix of perturbations, and let $\Xi$ be the perturbation set that defines all possible $\mathbf{\xi }$ .
+
+As mentioned before, the second stage decisions are the classification of training samples, which occurs after deciding the tree structure and observing the worst-case perturbation. To classify sample $i$ after perturbation, we must perform the series of binary tests for covariate ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ from the tree decided on in the first stage. The binary test uses a threshold $\theta$ such that if ${x}_{f}^{i} + {\xi }_{f}^{i} \leq \theta$ , then sample $i$ travels to the left child. Likewise, if ${x}_{f}^{i} + {\xi }_{f}^{i} \geq \theta + 1$ , then sample $i$ travels to the right child. Letting ${c}_{f}$ and ${d}_{f}$ be the lower and upper bound of realized values for feature $f$ respectively, we define
+
+$$
+\Theta \left( f\right) \mathrel{\text{ := }} \left\{ {\theta \in \mathbb{Z} \mid {c}_{f} \leq \theta < {d}_{f}}\right\}
+$$
+
+as the set of possible binary test threshold values for feature $f$ .
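As a concrete reading of this definition, $\Theta \left( f\right)$ excludes $\theta = {d}_{f}$ so that both outcomes of the binary test remain possible; a small sketch (the bounds below are illustrative):

```python
# Theta(f) = {theta in Z : c_f <= theta < d_f}: the integer thresholds for
# feature f. A sample goes left if x + xi <= theta, right if x + xi >= theta + 1.
def thresholds(c_f, d_f):
    return list(range(c_f, d_f))

def goes_left(x, xi, theta):
    return x + xi <= theta   # otherwise x + xi >= theta + 1, i.e. goes right

# Illustrative bounds c_f = 0, d_f = 3.
theta_set = thresholds(0, 3)
```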
+
+§ 2.2 THE TWO-STAGE PROBLEM
+
+We will set up our robust formulation based on the non-robust classification tree outlined by (Aghaei, Gómez, and Vayanos 2021), where a binary classification tree is represented by a directed graph. The model starts with a depth $d$ binary tree, where the internal nodes are in the set $\mathcal{N}$ and the leaf nodes are in the set $\mathcal{L}$ . A node in $\mathcal{N}$ can either be a branching node where a binary test is performed, or an assignment node where a classification of a sample is made. Nodes in $\mathcal{L}$ can only be assignment nodes. There are ${2}^{d} - 1$ nodes in $\mathcal{N}$ and ${2}^{d}$ nodes in $\mathcal{L}$ , and we number each node from 1 to ${2}^{d + 1} - 1$ in a breadth-first search pattern. We then augment this tree by adding a single source node $s$ connected to the root node of the tree, and a sink node $t$ connected to each node in $\mathcal{N} \cup \mathcal{L}$ . By adding a source and sink node, we say that any data sample $i$ travels from the source $s$ through the tree based on ${\mathbf{x}}^{i}$ and reaches the sink if and only if the datapoint is correctly classified. Lastly, we denote the left and right child of node $n \in \mathcal{N}$ as $l\left( n\right)$ and $r\left( n\right)$ respectively, and also denote the ancestor of any node $n \in \mathcal{N} \cup \mathcal{L}$ as $a\left( n\right)$ .
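With this breadth-first numbering, the children and the ancestor of a node reduce to simple arithmetic; the rule below is an implementation convenience implied by the numbering scheme, not stated explicitly in the formulation:

```python
# Children and ancestor under breadth-first numbering of a depth-d tree:
# nodes 1 .. 2^(d+1)-1; internal nodes N = {1, .., 2^d - 1}; leaves L = {2^d, .., 2^(d+1)-1}.
def l(n): return 2 * n        # left child
def r(n): return 2 * n + 1    # right child
def a(n): return n // 2       # ancestor (parent) of node n > 1

d = 2
N = list(range(1, 2 ** d))             # internal nodes {1, 2, 3}
L = list(range(2 ** d, 2 ** (d + 1)))  # leaf nodes {4, 5, 6, 7}
```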
+
+
+Figure 1: The left graph shows a classification tree with depth 2. Nodes $s$ and $t$ are the source and sink nodes respectively, $\mathcal{N} = \{ 1,2,3\}$ and $\mathcal{L} = \{ 4,5,6,7\}$ . The right graph shows an example of induced graph of a particular sample, where the green, bold edges are the subset of edges included in the induced graph. For this particular induced graph, nodes 1 and 3 are branching nodes, where the sample would be routed left and right, respectively. Nodes 2 and 6 assign the correct label, and node 7 does not assign the correct label. The maximum flow from $s$ to $t$ is 1 in this induced graph, indicating a correct classification.
+
+To determine whether a data sample $i$ is correctly classified by a given tree, we consider an induced graph of the original directed graph for data sample $i$ . In this induced graph, we keep every node in $\mathcal{N} \cup \mathcal{L}$ , the source $s$ , and the sink $t$ . For every branching node, we remove the edge leading to the sink node $t$ and the edge that fails the binary test based on the value of ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ . And for every assignment node, we include the edge leading to $t$ only if the assigned class of that node is ${y}^{i}$ , and exclude all other edges leaving the assignment node. Lastly, we include the edge from the source $s$ to the root node 1. Therefore, a maximum flow of 1 from $s$ to $t$ in this induced graph means that data sample $i$ is correctly classified. Figure 1 illustrates an induced graph.
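Since all capacities in the induced graph are 1, a maximum flow of 1 is equivalent to $t$ being reachable from $s$ . A sketch that builds the induced graph for one sample and checks this reachability (the tree encoding, with children of node $n$ at $2n$ and $2n + 1$ , and the example instance are hypothetical):

```python
from collections import deque

# Build the induced graph of one sample and check s -> t reachability,
# i.e. whether the sample is correctly classified by the tree.
def correctly_classified(x, xi, y, branch, assign):
    """branch: node -> (feature, threshold); assign: node -> label."""
    edges = {"s": [1]}                        # keep the edge from the source to the root
    for n, (f, theta) in branch.items():      # branching node: keep only the edge
        edges[n] = [2 * n if x[f] + xi[f] <= theta else 2 * n + 1]  # passing the test
    for n, label in assign.items():           # assignment node: edge to t iff label correct
        edges[n] = ["t"] if label == y else []
    seen, queue = {"s"}, deque(["s"])         # BFS from the source
    while queue:
        for m in edges.get(queue.popleft(), []):
            if m not in seen:
                seen.add(m)
                queue.append(m)
    return "t" in seen                        # max flow of 1 <=> t is reachable

# Hypothetical depth-2 tree: node 1 tests feature 0, node 2 tests feature 1.
branch = {1: (0, 3), 2: (1, 1)}
assign = {3: "B", 4: "A", 5: "B"}
ok = correctly_classified([2, 0], [0, 0], "A", branch, assign)
bad = correctly_classified([2, 0], [2, 0], "A", branch, assign)
```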
+
+With the flow graph setup, we propose a two-stage formulation for creating robust classification trees. In the first stage, we decide the structure of the tree. Let ${b}_{nf\theta }$ , defined over all $n \in \mathcal{N},f \in \mathcal{F}$ , and $\theta \in \Theta \left( f\right)$ be a binary variable that denotes the branching decisions. If ${b}_{nf\theta } = 1$ , then node $n$ is a branching node, where the binary test is on feature $f$ with threshold $\theta$ . We also let ${w}_{nk}$ , defined over all $n \in \mathcal{N} \cup \mathcal{L}$ and all $k \in \mathcal{K}$ , be a binary variable that denotes the assignment decisions. If ${w}_{nk} = 1$ , then node $n$ is an assignment node with assignment label $k$ . We will denote $\mathbf{b}$ and $\mathbf{w}$ as the collection of ${b}_{nf\theta }$ and ${w}_{nk}$ variables respectively.
+
+Given a value of $\mathbf{b}$ and $\mathbf{w}$ , we find the perturbation in $\Xi$ that results in the minimum number of correctly classified points, which we will call the worst-case perturbation. In the second stage, after observing the worst-case perturbation $\mathbf{\xi }$ of our covariates from the set $\Xi$ given a certain tree structure, we classify each of our points based on our tree. Let ${z}_{n,m}^{i}$ indicate whether data point $i$ flows down the edge between $n$ and $m$ and is correctly classified by the tree, for $n \in \mathcal{N} \cup \mathcal{L} \cup \{ s\}$ and $m \in \mathcal{N} \cup \mathcal{L} \cup \{ t\}$ , under the worst-case perturbation. We will let $\mathbf{z}$ be the vector concatenation of the rows of the $\left| \mathcal{I}\right| \times \left( {{2}^{d + 2} - 2}\right)$ matrix of ${z}_{n,m}^{i}$ values, with rows corresponding to data sample $i$ and columns representing edge $\left( n,m\right)$ for $n = a\left( m\right)$ . Note that the ${z}_{n,m}^{i}$ are the decision variables of a maximum flow problem: in the induced graph of data sample $i$ , ${z}_{n,m}^{i}$ is 1 if and only if the maximum flow is 1 and the flow goes from node $n$ to node $m$ . Therefore, if there exists an $n \in \mathcal{N} \cup \mathcal{L}$ such that ${z}_{n,t}^{i} = 1$ , then sample $i$ is correctly classified by our tree after observing the worst-case perturbation.
+
+Our two-stage approach to defining the variables $\mathbf{b},\mathbf{w}$ , $\xi$ , and $\mathbf{z}$ leads us to the following formulation for a robust classification tree:
+
+$$
+\mathop{\max }\limits_{{\mathbf{b},\mathbf{w}}}\mathop{\min }\limits_{{\mathbf{\xi } \in \Xi }}\mathop{\max }\limits_{{\mathbf{z} \in \mathcal{Z}\left( {\mathbf{b},\mathbf{w},\mathbf{\xi }}\right) }}\mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N} \cup \mathcal{L}}}{z}_{n,t}^{i} \tag{1a}
+$$
+
+$$
+\text{ s.t. }\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{{\theta \in \Theta \left( f\right) }}{b}_{nf\theta } + \mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{N} \tag{1b}
+$$
+
+$$
+\mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{L} \tag{1c}
+$$
+
+$$
+{b}_{nf\theta } \in \{ 0,1\} \;\forall n \in \mathcal{N},f \in \mathcal{F},\theta \in \Theta \left( f\right) \tag{1d}
+$$
+
+$$
+{w}_{nk} \in \{ 0,1\} \;\forall n \in \mathcal{N} \cup \mathcal{L},k \in \mathcal{K}, \tag{1e}
+$$
+
+where the set $\mathcal{Z}$ is defined as
+
+$$
+\mathcal{Z}\left( {\mathbf{b},\mathbf{w},\mathbf{\xi }}\right) \mathrel{\text{ := }} \{ \mathbf{z} \in \{ 0,1{\} }^{\left| \mathcal{I}\right| \times \left( {{2}^{d + 2} - 2}\right) } :
+$$
+
+$$
+{z}_{n,l\left( n\right) }^{i} \leq \mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \leq \theta } }}{b}_{nf\theta }\;\forall i \in \mathcal{I},n \in \mathcal{N}, \tag{2a}
+$$
+
+$$
+{z}_{n,r\left( n\right) }^{i} \leq \mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \geq \theta + 1} }}{b}_{nf\theta }\;\forall i \in \mathcal{I},n \in \mathcal{N}, \tag{2b}
+$$
+
+$$
+{z}_{a\left( n\right) ,n}^{i} = {z}_{n,l\left( n\right) }^{i} + {z}_{n,r\left( n\right) }^{i} + {z}_{n,t}^{i}\;\forall i \in \mathcal{I},n \in \mathcal{N}, \tag{2c}
+$$
+
+$$
+{z}_{a\left( n\right) ,n}^{i} = {z}_{n,t}^{i}\;\forall i \in \mathcal{I},n \in \mathcal{L}, \tag{2d}
+$$
+
+$$
+{z}_{n,t}^{i} \leq {w}_{n,{y}^{i}}\;\forall i \in \mathcal{I},n \in \mathcal{N} \cup L \tag{2e}
+$$
+
+}.
+
+The objective function in (1) maximizes the number of correctly classified training samples in the worst-case perturbation. The constraint (1b) states that each internal node must either classify a point or must be a binary test with some threshold $\theta$ . The constraint (1c) states that each leaf node must classify a point.
+
+The set (2) describes the maximum flow constraints for each sample's induced graph. Constraints (2a) and (2b) are capacity constraints that control the flow of samples in the induced graph based on $\mathbf{x} + \mathbf{\xi }$ and the tree structure. Constraints (2c) and (2d) are flow conservation constraints. Lastly, constraint (2e) blocks any flow to the sink unless the node is an assignment node whose assigned label is correct.
+
+§ 2.3 THE UNCERTAINTY SET
+
+We consider uncertainty sets defined as follows. Let ${\gamma }_{f}^{i} \in \mathbb{R}$ be the cost of perturbing ${x}_{f}^{i}$ by one. Thus, ${\gamma }_{f}^{i}\left| {\xi }_{f}^{i}\right|$ is the total cost of perturbing ${x}_{f}^{i}$ to ${x}_{f}^{i} + {\xi }_{f}^{i}$ . Letting $\epsilon$ be the total allowable budget of uncertainty across data samples, we define the following uncertainty set:
+
+$$
+\Xi \mathrel{\text{ := }} \left\{ {\xi \in {\mathbb{Z}}^{\left| \mathcal{I}\right| \times \left| \mathcal{F}\right| } : \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}{\gamma }_{f}^{i}\left| {\xi }_{f}^{i}\right| \leq \epsilon }\right\} . \tag{3}
+$$
+
+As we will show later, a tailored solution method for Formulation (1) can be devised when uncertainty is defined by set (3), and there exists a connection between (3) and hypothesis testing.
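Membership in the uncertainty set (3) is easy to check directly; a small sketch under illustrative costs:

```python
# Check xi in Xi for the cost-and-budget uncertainty set (3): the total
# weighted perturbation cost must not exceed the budget epsilon.
def in_uncertainty_set(xi, gamma, epsilon):
    """xi, gamma: dicts mapping (sample i, feature f) to an integer perturbation
    and a per-unit perturbation cost, respectively."""
    total_cost = sum(gamma[key] * abs(value) for key, value in xi.items())
    return total_cost <= epsilon

# Illustrative costs: perturbing feature 1 of sample 0 is twice as costly.
gamma = {(0, 0): 1.0, (0, 1): 2.0, (1, 0): 1.0, (1, 1): 1.0}
inside = in_uncertainty_set({(0, 0): 1, (0, 1): -1}, gamma, epsilon=3.0)  # cost 3
outside = in_uncertainty_set({(0, 1): 2}, gamma, epsilon=3.0)             # cost 4
```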
+
+§ 3 SOLUTION METHOD
+
+We now present a method of solving problem (1) through a reformulation that can leverage existing, off-the-shelf mixed-integer linear programming solvers.
+
+§ 3.1 REFORMULATING THE TWO-STAGE PROBLEM
+
+We solve our two-stage optimization problem by first taking the dual of the inner maximization problem. Recall that the inner maximization problem is a maximum flow problem; therefore, its dual yields an inner minimum cut problem (Vazirani 2001). Strong duality holds, so taking the dual of the inner problem of (1) yields a reformulation with the same optimal objective value, and thus the same optimal tree structure variables $\mathbf{b}$ and $\mathbf{w}$ for both problems.
+
+Let ${q}_{n,m}^{i}$ be the binary dual variable that equals 1 if and only if in the induced graph for data sample $i$ after perturbation, the edge that connects nodes $n \in \mathcal{N} \cup \{ s\}$ and $m \in \mathcal{N} \cup \mathcal{L} \cup \{ t\}$ is in the minimum cut. We write $\mathbf{q}$ as the vector concatenation of the rows of the $\left| \mathcal{I}\right| \times \left( {{2}^{d + 2} - 2}\right)$ matrix of ${q}_{n,m}^{i}$ values, with rows corresponding to data sample $i$ and columns representing edge $\left( n,m\right)$ . We also define ${p}_{n}^{i}$ to be a binary variable that equals 1 if and only if in the induced graph of data sample $i \in \mathcal{I}$ , the node $n \in \mathcal{N} \cup \mathcal{L} \cup \{ s\}$ is in the source set. Letting $\mathcal{Q}$ be the set of all possible values of $\mathbf{q}$ , we then have
+
+$$
+\mathcal{Q} \mathrel{\text{ := }} \{ \mathbf{q} \in \{ 0,1{\} }^{\left| \mathcal{I}\right| \times \left( {{2}^{d + 2} - 2}\right) } :
+$$
+
+$$
+\exists {p}_{n}^{i} \in \{ 0,1\} \;\forall i \in \mathcal{I},n \in \mathcal{N} \cup \mathcal{L} \cup \{ s\} , \tag{4a}
+$$
+
+$$
+{q}_{n,l\left( n\right) }^{i} - {p}_{n}^{i} + {p}_{l\left( n\right) }^{i} \geq 0\;\forall i \in \mathcal{I},n \in \mathcal{N}, \tag{4b}
+$$
+
+$$
+{q}_{n,r\left( n\right) }^{i} - {p}_{n}^{i} + {p}_{r\left( n\right) }^{i} \geq 0\;\forall i \in \mathcal{I},n \in \mathcal{N}, \tag{4c}
+$$
+
+$$
+{q}_{s,1}^{i} + {p}_{1}^{i} \geq 1\;\forall i \in \mathcal{I}, \tag{4d}
+$$
+
+$$
+- {p}_{n}^{i} + {q}_{n,t}^{i} \geq 0\;\forall i \in \mathcal{I},n \in \mathcal{N} \cup \mathcal{L}, \tag{4e}
+$$
+
+}.
+
+Constraint (4b) ensures that if node $n \in \mathcal{N}$ is in the source set, then either its left child is also in the source set or the edge between $n$ and its left child is in the cut. Constraint (4c) is analogous to (4b), but with the right child of node $n \in \mathcal{N}$ . Constraint (4d) states that either the root node is in the source set or the edge from the source to the root node is in the cut. Lastly, constraint (4e) ensures that for any $n \in \mathcal{N} \cup \mathcal{L}$ , if $n$ is in the source set, then the edge from $n$ to the sink must be in the cut.
+
+Then, taking the dual of the inner maximization problem in (1) gives the following single-stage formulation:
+
+$$
+\mathop{\max }\limits_{{\mathbf{b},\mathbf{w}}}\mathop{\min }\limits_{{\mathbf{q} \in \mathcal{Q},\mathbf{\xi } \in \Xi }}\mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \leq \theta } }}{q}_{n,l\left( n\right) }^{i}{b}_{nf\theta }
+$$
+
+$$
++ \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \geq \theta + 1} }}{q}_{n,r\left( n\right) }^{i}{b}_{nf\theta }
+$$
+
+$$
++ \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N} \cup \mathcal{L}}}{q}_{n,t}^{i}{w}_{n,{y}^{i}} + \mathop{\sum }\limits_{{i \in \mathcal{I}}}{q}_{s,1}^{i} \tag{5a}
+$$
+
+$$
+\text{ s.t. }\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{{\theta \in \Theta \left( f\right) }}{b}_{nf\theta } + \mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{N} \tag{5b}
+$$
+
+$$
+\mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{L} \tag{5c}
+$$
+
+$$
+{b}_{nf\theta } \in \{ 0,1\} \;\forall n \in \mathcal{N},f \in \mathcal{F},\theta \in \Theta \left( f\right) \tag{5d}
+$$
+
+$$
+{w}_{nk} \in \{ 0,1\} \;\forall n \in \mathcal{N} \cup \mathcal{L},k \in \mathcal{K} \tag{5e}
+$$
+
+where $\mathcal{Q}$ is defined by (4) and constraints (5b) and (5c) are the same as constraints (1b) and (1c) respectively. As mentioned before, strong duality holds between Formulations (1) and (5) since strong duality holds between the maximum flow and minimum cut problems.
+
+§ 3.2 SOLVING THE SINGLE-STAGE REFORMULATION
+
+We can obtain a mixed-integer linear program equivalent to (5) via a hypograph reformulation. However, a hypograph reformulation would introduce an extremely large number of constraints. A common approach to solving such a reformulation is a tailored Benders decomposition algorithm, which we describe here.
+
+The master problem decides the tree structure given its current constraints. We thus have the following initial master problem:
+
+$$
+\mathop{\max }\limits_{{\mathbf{b},\mathbf{w},\mathbf{t}}}\mathop{\sum }\limits_{{i \in \mathcal{I}}}{t}_{i} \tag{6a}
+$$
+
+$$
+\text{ s.t. }\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{{\theta \in \Theta \left( f\right) }}{b}_{nf\theta } + \mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{N} \tag{6b}
+$$
+
+$$
+\mathop{\sum }\limits_{{k \in \mathcal{K}}}{w}_{nk} = 1\;\forall n \in \mathcal{L} \tag{6c}
+$$
+
+$$
+{t}_{i} \leq 1\;\forall i \in \mathcal{I} \tag{6d}
+$$
+
+$$
+{b}_{nf\theta } \in \{ 0,1\} \;\forall n \in \mathcal{N},f \in \mathcal{F},\theta \in \Theta \left( f\right) \tag{6e}
+$$
+
+$$
+{w}_{nk} \in \{ 0,1\} \;\forall n \in \mathcal{N} \cup \mathcal{L},k \in \mathcal{K} \tag{6f}
+$$
+
+where ${t}_{i}$ comes from the hypograph of the inner sum of the objective function (5a) for a particular $i \in \mathcal{I}$ . We add constraint (6d) to ensure that the initial problem is bounded.
+
+The goal of the subproblem is, given values of $\mathbf{b}$ , $\mathbf{w}$ , and $\mathbf{t}$ that describe a specific tree structure, to find a perturbation in $\Xi$ that reduces the number of correctly classified samples the most. After deciding the perturbation and finding the minimum cut of the induced graph for each data sample, we add a constraint of the form
+
+$$
+\mathop{\sum }\limits_{{i \in \mathcal{I}}}{t}_{i} \leq \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \leq \theta } }}{q}_{n,l\left( n\right) }^{i}{b}_{nf\theta }
+$$
+
+$$
++ \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{\substack{{\theta \in \Theta \left( f\right) : } \\ {{x}_{f}^{i} + {\xi }_{f}^{i} \geq \theta + 1} }}{q}_{n,r\left( n\right) }^{i}{b}_{nf\theta } \tag{7}
+$$
+
+$$
++ \mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{n \in \mathcal{N} \cup \mathcal{L}}}{q}_{n,t}^{i}{w}_{n,{y}^{i}} + \mathop{\sum }\limits_{{i \in \mathcal{I}}}{q}_{s,1}^{i}
+$$
+
+where we substitute the variables $\mathbf{\xi }$ and $\mathbf{q}$ with the perturbation and minimum cuts.
+
+### 3.3 The Subproblem
+
+We will now describe the procedure for the subproblem to find the perturbation and minimum cuts that will yield a violated constraint of the form (7) if a violated constraint exists.
+
+For each data sample that is correctly classified by the tree given by the master problem (6), we first find the lowest-cost perturbation ${\xi }^{i}$ for the single sample that would cause it to be misclassified. To do this, we set up a shortest path problem. The weighted graph of the shortest path problem is created from the flow-based tree returned by the master problem, and is constructed with the following procedure:
+
+1. The edge from $s$ to 1 (the root of the decision tree) has path cost 0.
+
+2. For each $n \in \mathcal{N} \cup \mathcal{L}$ , if there exists a $k \in \mathcal{K}$ such that ${w}_{nk} = 1$ , and $k \neq {y}^{i}$ , then we have a 0 path cost from $n$ to $t$ . All other edges coming into $t$ have infinite path cost.
+
+3. For each $n \in \mathcal{N}$ , if there exists an $f \in \mathcal{F}$ such that ${b}_{nf} = 1$ , then...
+
+(a) if ${x}_{f}^{i} = 0$ , add an edge from $n$ to $l\left( n\right)$ with 0 weight, and add an edge from $n$ to $r\left( n\right)$ with ${\gamma }_{f}$ weight.
+
+(b) if ${x}_{f}^{i} = 1$ , add an edge from $n$ to $r\left( n\right)$ with 0 weight, and add an edge from $n$ to $l\left( n\right)$ with ${\gamma }_{f}$ weight.
+
+By finding the shortest path from $s$ to $t$ in the weighted graph derived from data sample $i$ , we find the path with the smallest total perturbation cost that would misclassify the point $i$ . That is, the shortest path tells us which perturbation ${\mathbf{\xi }}^{i}$ would misclassify ${\mathbf{x}}^{i}$ at the smallest cost.
+
+Once we have found, for every sample, the lowest-cost perturbation that would misclassify it, we choose the largest subset of these training samples whose total perturbation cost is within the allowed budget of uncertainty. Through this procedure, we find the value of $\xi$ that misclassifies the greatest number of points given the current tree.
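+As an illustration, the two steps above (the cheapest misclassifying path per sample, then the largest subset affordable under the budget) can be sketched as follows; the graph encoding and function names are our assumptions for illustration, not the paper's implementation:

```python
import heapq

def shortest_path_cost(graph, s, t):
    """Dijkstra on a weighted digraph {node: [(succ, weight), ...]}.
    Returns the cost of the cheapest s-t path, i.e. the lowest-cost
    perturbation that misclassifies the sample."""
    dist = {s: 0.0}
    pq = [(0.0, s)]
    while pq:
        d, n = heapq.heappop(pq)
        if n == t:
            return d
        if d > dist.get(n, float("inf")):
            continue  # stale queue entry
        for succ, w in graph.get(n, []):
            nd = d + w
            if nd < dist.get(succ, float("inf")):
                dist[succ] = nd
                heapq.heappush(pq, (nd, succ))
    return float("inf")

def choose_attacked_samples(costs, budget):
    """Pick the largest set of samples whose total misclassification
    cost fits in the budget; taking them cheapest-first is optimal."""
    chosen, total = [], 0.0
    for i, c in sorted(enumerate(costs), key=lambda p: p[1]):
        if total + c <= budget:
            chosen.append(i)
            total += c
    return chosen
```

+For a stump whose only split costs ${\gamma }_{f} = 0.7$ to flip, the cheapest misclassifying path from $s$ to $t$ costs 0.7; with per-sample costs 0.7, 0.2, 0.5 and budget 0.8, samples 1 and 2 are attacked.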
+
+Note that the right-hand side of constraint (7) counts the number of correctly classified points for a given $\mathbf{b},\mathbf{w},\mathbf{\xi }$ , and $\mathbf{q}$ . Therefore, for the dataset $\mathbf{x} + \mathbf{\xi }$ , if the number of correctly classified points is less than the optimal value of $\mathop{\sum }\limits_{{i \in I}}{t}_{i}$ from the master problem, then we know that there exists a constraint of the form (7) that is violated. Otherwise, there are no violated constraints of the form (7), which indicates the optimality of the current solution.
+
+In the case of finding a violated constraint, we now obtain the values of $\mathbf{q}$ , the variables associated with the minimum cut problem. To do this, for each sample $i \in \mathcal{I}$ we need to find the set of edges in a minimum cut given the path of the data point ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ , where the value of the cut is 1 if ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ is correctly classified and 0 otherwise. The simplest way to construct this minimum cut is, for each $i \in \mathcal{I}$ , to follow the path of ${\mathbf{x}}^{i} + {\mathbf{\xi }}^{i}$ . At each node visited along this path, we add to the minimum cut any outgoing edges that are not traversed, and at the assignment node we also add the edge going from the assignment node to the sink $t$ . By applying this procedure to all training samples, we obtain the value of $\mathbf{q}$ describing all minimum cuts.
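+A minimal sketch of this path-following cut construction, assuming the tree is given as a child-adjacency dictionary (the names `children` and `min_cut_edges` are ours, not the paper's):

```python
def min_cut_edges(children, path, sink="t"):
    """Collect the cut edges for one sample: walk its root-to-leaf path,
    cut every outgoing edge the path does not take, and at the final
    (assignment) node also cut the edge to the sink."""
    cut = []
    for node, taken in zip(path, path[1:]):
        cut.extend((node, child) for child in children.get(node, []) if child != taken)
    cut.append((path[-1], sink))  # assignment node to sink t
    return cut
```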
+
+By finding the value of $\xi$ and an associated $\mathbf{q}$ , we can use these values in (7) to obtain the most restrictive violated constraint for a given tree, which we add back to the master problem. We summarize our approach in Algorithm 1.
+
+## 4 Statistical Connections
+
+Here, we explore how the uncertainty set described in (3) connects to hypothesis testing. Let ${q}_{f}^{\zeta } \in (0,1\rbrack$ be the probability that the realization of the data at feature $f$ as decided by our uncertainty set perturbs the nominal data at feature $f$ by $\zeta \in \mathbb{Z}$ . We will impose the assumption that the perturbations follow a geometric distribution. More specifically,
+
+$$
+{q}_{f}^{\zeta } = {\left( {0.5}\right) }^{\mathbb{I}\left\lbrack {\zeta \neq 0}\right\rbrack }{q}_{f}{\left( 1 - {q}_{f}\right) }^{\left| \zeta \right| } \tag{8}
+$$
+
+for some ${q}_{f} \in (0,1\rbrack$ (where the ${\left( {0.5}\right) }^{\mathbb{I}\left\lbrack {\zeta \neq 0}\right\rbrack }$ multiplier imposes a symmetry between positive and negative values of the perturbation).
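+For concreteness, the probability in (8) can be computed as below; a quick check confirms that it sums to one over all integer perturbations (the function name is ours):

```python
def perturbation_prob(zeta, q_f):
    """q_f^zeta from Eq. (8): a geometric distribution on |zeta|,
    split evenly between positive and negative perturbations."""
    half = 1.0 if zeta == 0 else 0.5
    return half * q_f * (1.0 - q_f) ** abs(zeta)
```

+Summing over all integer $\zeta$ gives 1, so (8) is a proper distribution for any ${q}_{f} \in (0,1\rbrack$ .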
+
+We will set up a likelihood ratio test with threshold ${\lambda }^{\left| \mathcal{I}\right| }$ for $\lambda \in \left\lbrack {0,1}\right\rbrack$ , where the exponent $\left| \mathcal{I}\right|$ eases comparison across data sets with different numbers of training samples. Our null hypothesis is that a given perturbation of our data comes from the distribution of perturbations given by the chosen ${q}_{f}^{\zeta } \in (0,1\rbrack$ . We then set up a likelihood ratio test where we fail to reject the null hypothesis if
+
+$$
+\frac{\mathop{\prod }\limits_{{i \in \mathcal{I}}}\mathop{\prod }\limits_{{f \in \mathcal{F}}}\mathop{\prod }\limits_{{\zeta = - \infty }}^{\infty }{\left( {q}_{f}^{\zeta }\right) }^{\mathbb{I}\left\lbrack {{\xi }_{f}^{i} = \zeta }\right\rbrack }}{\mathop{\prod }\limits_{{i \in \mathcal{I}}}\left\lbrack {\mathop{\prod }\limits_{{f \in \mathcal{F}}}{q}_{f}^{0}}\right\rbrack } \geq {\lambda }^{\left| \mathcal{I}\right| }, \tag{9}
+$$
+
+Algorithm 1: Solution Method to formulation (5)
+
+Input: training set indexed by $\mathcal{I}$ with features $\mathcal{F}$ and labels $\mathcal{K}$ , range of test thresholds $\Theta \left( f\right)$ , tree depth $d$ , uncertainty set parameters $\gamma$ and $\epsilon$
+
+Output: The optimal robust tree represented by ${\mathcal{T}}^{ * } = \left( {{\mathbf{b}}^{ * },{\mathbf{w}}^{ * }}\right)$
+
+ while no tree $\mathcal{T}$ returned do
+
+  Solve the master problem (6) with any added constraints, obtaining tree $\mathcal{T} = \left( {{\mathbf{b}}^{ * },{\mathbf{w}}^{ * }}\right)$ and ${\mathbf{t}}^{ * }$
+
+  Find the lowest-cost $\mathbf{\xi }$ that causes the greatest number of samples to be misclassified in $\mathcal{T}$ , obtaining ${\mathbf{\xi }}^{ * }$
+
+  if $\mathop{\sum }\limits_{{i \in \mathcal{I}}}{t}_{i} \leq$ the number of correctly classified samples of $\mathbf{x} + {\mathbf{\xi }}^{ * }$ given $\mathcal{T}$ then
+
+   return $\mathcal{T}$
+
+  else
+
+   Find ${\mathbf{q}}^{ * }$ by finding a minimum cut for each $i \in \mathcal{I}$ based on $\mathbf{x} + {\mathbf{\xi }}^{ * }$ and $\mathcal{T}$
+
+   Use the values of ${\mathbf{\xi }}^{ * }$ and ${\mathbf{q}}^{ * }$ to create a constraint of the form (7) and add it to the master problem
+
+  end if
+
+ end while
+
+where the numerator of the left-hand side is the maximum likelihood of a given perturbation $\xi$ , and the denominator is the likelihood under the null hypothesis. Using the assumption that ${q}_{f}^{\zeta }$ follows (8), we can reduce the hypothesis test in (9) to
+
+$$
+\mathop{\sum }\limits_{{i \in \mathcal{I}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\left| {\xi }_{f}^{i}\right| \log \left( \frac{1}{1 - {q}_{f}}\right) \leq - \left| \mathcal{I}\right| \log \lambda . \tag{10}
+$$
+
+We say that if a particular $\xi$ lies within the region where we fail to reject the null hypothesis, then it is part of our perturbation set. That is, using the notation from the perturbation set defined in (3), letting ${\gamma }_{f}^{i} = \log \left( \frac{1}{1 - {q}_{f}}\right)$ and $\epsilon = - \left| \mathcal{I}\right| \log \lambda$ yields an uncertainty set with a direct relationship to the probabilities of certainty for each feature.
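+This mapping from the probabilities ${q}_{f}$ and threshold $\lambda$ to the uncertainty-set parameters can be sketched as follows, assuming ${q}_{f} < 1$ so the logarithm stays finite (the function name is ours):

```python
import math

def uncertainty_params(q, lam, n_samples):
    """Map feature certainty probabilities q_f in (0, 1) and a threshold
    lambda in (0, 1] to the uncertainty-set parameters of (3) via (10):
    gamma_f = log(1 / (1 - q_f)) and epsilon = -|I| * log(lambda)."""
    gammas = [math.log(1.0 / (1.0 - qf)) for qf in q]
    epsilon = -n_samples * math.log(lam)
    return gammas, epsilon
```

+With $\lambda = 1$ the budget $\epsilon$ collapses to 0 (the non-robust tree), and more certain features receive larger ${\gamma }_{f}$ , i.e. they are more expensive to perturb.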
+
+## 5 Experiments
+
+We evaluate our approach on 12 datasets from the UCI Machine Learning Repository (Dua and Graff 2017). For each data set, we construct a robust classification tree from our method using a synthetic uncertainty set, where for different problem instances we choose different levels of uncertainty in the features and budgets of uncertainty. We utilize the hypothesis testing framework described by (10), where we define ${q}_{f}$ by sampling the probability of certainty from a normal distribution with a particular mean we set and a standard deviation of 0.2. The means of this normal distribution were 0.6, 0.7, 0.8, and 0.9. We also chose different values of our budget by setting $\lambda$ to be 0.5, 0.75, 0.85, 0.9, 0.95, 0.97, and 0.99. For every data set and uncertainty set, we tested with tree depths of 2, 3, 4, and 5.
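+A sketch of this uncertainty-set construction; clipping the draws into $(0,1)$ is our assumption, since the text does not state how out-of-range samples are handled:

```python
import random

def sample_certainty(n_features, mean, sd=0.2, seed=0):
    """Draw one certainty probability q_f per feature from N(mean, sd^2),
    clipped into (0, 1) so that Eqs. (8) and (10) stay well defined.
    The clipping bounds are an assumption, not from the paper."""
    rng = random.Random(seed)
    return [min(0.99, max(0.01, rng.gauss(mean, sd))) for _ in range(n_features)]
```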
+
+
+Figure 2: This graph shows the number of instances solved across times and optimality gaps when the time limit of 7200 seconds is reached for several values of $\lambda$ . The case of $\lambda =$ 1.0 is the regularized tree with an empty uncertainty set.
+
+For each instance, we randomly split the data set into 80% training data and 20% testing data. We then ran our algorithm to obtain a robust classification tree with a time limit of 7200 seconds. For comparison, we used our model to create a non-robust tree as well by setting the budget of uncertainty to 0 (i.e. $\lambda = 1$ ), and tuned a regularization parameter for the non-robust tree. The regularization term penalized the objective for every branching node, yielding the following objective in the master problem of our algorithm:
+
+$$
+\mathop{\max }\limits_{{\mathbf{b},\mathbf{w},\mathbf{t}}}\left( {1 - R}\right) \mathop{\sum }\limits_{{i \in \mathcal{I}}}{t}_{i} - R\mathop{\sum }\limits_{{n \in \mathcal{N}}}\mathop{\sum }\limits_{{f \in \mathcal{F}}}\mathop{\sum }\limits_{{\theta \in \Theta \left( f\right) }}{b}_{nf\theta }
+$$
+
+where $R \in \left\lbrack {0,1}\right\rbrack$ is the tuned regularization parameter. Note that we do not add a regularization parameter to our robust model, as robust optimization has an equivalence with regularization and so adding a regularization term is redundant (Bertsimas and Copenhaver 2018). We summarize the computation time across all instances in Figure 2. As we expected, the larger the uncertainty set, the longer it takes for the formulation to solve to optimality.
+
+To test our model's robustness against distribution shifts, we perturbed the test data in 5000 different ways, and for each perturbation we found the test accuracy of our robust tree. We first perturbed the data based on the expected distribution of perturbations. That is, for the collection of ${q}_{f}$ values for every $f \in \mathcal{F}$ used to construct an uncertainty set based on (10), we perturb the data according to the distribution described in (8).
+
+To measure the robustness of our model against unexpected perturbations of the data, we also repeat the same process for values of ${q}_{f}$ different from what we gave our model. First, we shifted each ${q}_{f}$ value down by 0.2, then perturbed our test data in 5000 different ways based on these new values of ${q}_{f}$ . We did the same with ${q}_{f}$ shifted down by 0.1 and up by 0.1. In a similar fashion, we also uniformly sampled a new ${q}_{f}$ value for each feature in a neighborhood of radius 0.05 around the original expected ${q}_{f}$ value, and perturbed the test data in 5000 different ways with the new ${q}_{f}$ values. We repeated this procedure for neighborhood radii of 0.1, 0.15, and 0.2.
+
+
+Figure 3: These boxplots show the distribution across problem instances of the gain in worst-case accuracy from using a robust tree versus a non-robust, regularized tree across different values of $\lambda$ . We also show the distribution of the gain in worst-case accuracy in the case where perturbations of our data are not as we expect.
+
+For each set of perturbations of the test data, we measure the worst-case accuracy as the lowest accuracy over all perturbations made for a single set of ${q}_{f}$ values, and the average accuracy by averaging the accuracy over all perturbations for a single set of ${q}_{f}$ values. We compile the gain in worst-case and average-case performance from using our robust tree versus a regularized, non-robust tree for every problem instance and perturbation of our data, giving us distributions of worst-case and average-case gains in performance that are summarized in Figures 3 and 4, respectively.
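+The worst-case and average-case metrics can be computed as below, assuming each perturbation yields one perturbed copy of the test set (names are ours):

```python
def accuracy(predict, X, y):
    """Fraction of samples whose prediction matches the label."""
    return sum(predict(x) == yi for x, yi in zip(X, y)) / len(y)

def worst_and_average(predict, perturbed_sets, y):
    """Worst-case and average accuracy over a collection of perturbed
    copies of the test set, one copy per sampled perturbation."""
    accs = [accuracy(predict, Xp, y) for Xp in perturbed_sets]
    return min(accs), sum(accs) / len(accs)
```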
+
+From the figures, we see that our robust tree model in general has both higher worst-case and higher average-case accuracy than a non-robust model when there are distribution shifts in the data. We also see that there is a range of values of $\lambda$ that performs better than others (namely around 0.85). This shows that if the budget of uncertainty is too small, we do not allow enough room to hedge against distribution shifts in our uncertainty set; but if the budget of uncertainty is too large, we become over-conservative and perform poorly for any perturbation of our test data. We also see that there is little difference between the gains in accuracy in instances where the perturbation of our data is as we expected versus when it is not. This indicates that even if we misspecify our model, we still obtain a classification tree robust to any kind of distribution shift within a reasonable range of the expected distribution shift. Overall, an important factor in determining the performance of our model is the budget of uncertainty, which can be easily tuned to create an effective robust tree.
+
+
+Figure 4: These boxplots show the distribution across problem instances of the gain in average test accuracy from using a robust tree versus a non-robust, regularized tree across different values of $\lambda$ . We also show this gain in average accuracy in the case where perturbations of our data are not what we expect.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HvRAM-dpmEv/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HvRAM-dpmEv/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..04de71999c2c03c21e5c6837bb3197bc070c1b0d
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HvRAM-dpmEv/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,603 @@
+# BiGrad: Differentiating through Bilevel Optimization Programming
+
+## Abstract
+
+Integrating mathematical programming, and in particular Bilevel Optimization Programming, within deep learning architectures has vast applications in various domains, from machine learning to engineering. Bilevel programming is able to capture complex interactions when two actors have conflicting objectives. Previous approaches only consider single-level programming. In this paper, we thus propose Differentiating through Bilevel Optimization Programming (BiGrad) as an approach for end-to-end learning of models that use Bilevel Programming as a layer. BiGrad has wide applicability and can be used in modern machine learning frameworks. We focus on two classes of Bilevel Programming: continuous and combinatorial optimization problems. The framework extends existing approaches for single-level optimization programming. We describe a class of gradient estimators for the combinatorial case which reduces the requirements in terms of computational complexity; for the continuous-variable case, the gradient computation takes advantage of the push-back approach (i.e. vector-Jacobian product) for an efficient implementation. Experiments suggest that the proposed approach successfully extends existing single-level approaches to Bilevel Programming.
+
+## 1 Introduction
+
+Neural networks provide unprecedented improvements in perception tasks; however, they struggle to learn basic logic operations (Garcez et al. 2015) or relationships. When modelling complex systems, for example decision systems, it is beneficial not only to integrate optimization components into a larger differentiable system, but also to use general-purpose solvers (e.g. for Integer Linear Programming or Nonlinear Programming (Bertsekas 1997; Boyd and Vandenberghe 2004)) and problem-specific implementations to discover the governing discrete or continuous relationships. Recent approaches thus propose differentiable layers that incorporate either quadratic (Amos and Kolter 2017), convex (Agrawal et al. 2019a), cone (Agrawal et al. 2019b), equilibrium (Bai, Kolter, and Koltun 2019), SAT (Wang et al. 2019) or combinatorial (Pogančić et al. 2019; Mandi and Guns 2020; Berthet et al. 2020) programs. Using optimization programs as layers of differentiable systems requires computing the gradients through these layers, which is either specific to the optimization problem or zero almost everywhere when dealing with discrete variables. Proposed gradient estimates either relax the combinatorial problem (Mandi and Guns 2020), perturb the input variables (Berthet et al. 2020; Domke 2010), or linearly approximate the loss function (Pogančić et al. 2019).
+
+These approaches, though, do not allow one to directly express models with conflicting objectives, for example in structural learning (Elsken, Metzen, and Hutter 2019) or adversarial systems (Goodfellow et al. 2014). We thus consider the use of bilevel optimization programming as a layer. A Bilevel Optimization Program (Kleinert et al. 2021; Colson, Marcotte, and Savard 2007; Dempe 2018; Stackelberg et al. 1952), also known as a generalization of Stackelberg Games, is the extension of a single-level optimization program, where the solution of one optimization problem (i.e. the outer problem) depends on the solution of another optimization problem (i.e. the inner problem). This class of problems can model interactions between two actors ${}^{1}$ , where the action of the first depends on the knowledge of the counter-action of the second. Bilevel Programming finds application in various domains, such as electricity networks, economics, environmental policy, chemical plants, defence, and planning (Dempe 2018; Sinha, Malo, and Deb 2017). In general, Bilevel programs are NP-hard (Sinha, Malo, and Deb 2017); they require specialized solvers, and it is not clear how to extend previous approaches, since the standard chain rule is not directly applicable.
+
+By modelling the bilevel optimization problem as an implicit layer (Bai, Kolter, and Koltun 2019), we consider the more general case where 1) the solution of the bilevel problem is computed separately by a bilevel solver, thus leveraging powerful solvers developed over several decades (Kleinert et al. 2021); and 2) the computation of the gradient is more efficient, since we do not have to propagate gradients through the solver. We thus propose Differentiating through Bilevel Optimization Programming (BiGrad):
+
+- BiGrad comprises a forward pass, where existing solvers can be used, and a backward pass, where BiGrad estimates gradients for both continuous and combinatorial problems based on sensitivity analysis;
+
+- we show how the proposed gradient estimators relate to their single-level analogues, and that the proposed approach is beneficial in both the continuous and discrete cases.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+${}^{1}$ In the following section we provide concrete examples of applications.
+
+---
+
+
+
+Figure 1: The forward and backward passes of a Bilevel Programming layer: the larger system has input $d$ and output $u = {h}_{\psi } \circ H \circ {h}_{\theta }\left( d\right)$ ; the bilevel layer has input $z$ and outputs $x, y$ , which are solutions of a Bilevel optimization problem represented by the implicit function $H\left( {x, y, z}\right) = 0$ .
+
+## Examples of Bilevel Optimization Problems
+
+Physical System with control sub-system example Bilevel Programming can model the interaction of a dynamical system (x) and its control sub-system (y), as for example an industrial plant or a physical process. The control sub-system changes based on the state of the underlying dynamical system, which itself solves a physics-constrained optimization problem (Raissi, Perdikaris, and Karniadakis 2019; de Avila Belbute-Peres et al. 2018).
+
+Interdiction problem example Two-actor discrete interdiction problems (Fischetti et al. 2019), where one actor (x) tries to interdict the actions of another actor (y) under budget constraints, arise in various areas, from marketing, protecting critical infrastructure, and preventing drug smuggling to hindering nuclear weapon proliferation.
+
+Min-max problem example Min-max problems are used to model robust optimization problems (Ben-Tal, El Ghaoui, and Nemirovski 2009), where a second variable represents the environment and is constrained to an uncertainty set that captures the unknown variability of the environment.
+
+Adversarial attack in Machine Learning Bilevel programming is used to represent the interaction between a machine learning model (y) and a potential attacker (x) (Goldblum, Fowl, and Goldstein 2019), and is used to increase resilience to intentional or unintended adversarial attacks.
+
+## 2 Differentiable Bilevel Optimization Layer
+
+We model the Bilevel Optimization Program as an Implicit Layer (Bai, Kolter, and Koltun 2019), i.e. as the solution of an implicit equation $H\left( {x, y, z}\right) = 0$ , in order to derive the gradient using the implicit function theorem, where $z$ is given and represents the parameters of our system that we want to estimate, and $x, y$ are output variables (Fig.1). We also assume we have access ${}^{2}$ to a solver $\left( {x, y}\right) = {\operatorname{Solve}}_{H}\left( z\right)$ . The Bilevel Optimization Program is then used as a layer of a differentiable system, whose input is $d$ and whose output is given by $u = {h}_{\psi } \circ {\operatorname{Solve}}_{H} \circ {h}_{\theta }\left( d\right) = {h}_{\psi ,\theta }\left( d\right)$ , where $\circ$ is the function composition operator. We want to learn the parameters $\psi ,\theta$ of the function ${h}_{\psi ,\theta }\left( d\right)$ that minimize the loss function $L\left( {{h}_{\psi ,\theta }\left( d\right) , u}\right)$ , using the training data ${D}^{\mathrm{{tr}}} = \left\{ {\left( d, u\right) }_{i = 1}^{{N}^{\mathrm{{tr}}}}\right\}$ . In order to perform end-to-end training, we need to back-propagate the gradient through the Bilevel Optimization Program Layer, which cannot be accomplished using the chain rule alone.
+
+### 2.1 Continuous Bilevel Programming
+
+We now present the definition of the continuous Bilevel Optimization problem, which comprises two non-linear functions $f, g$ , as
+
+$$
+\mathop{\min }\limits_{{x \in X}}f\left( {x, y, z}\right) \;y \in \arg \mathop{\min }\limits_{{y \in Y}}g\left( {x, y, z}\right) \tag{1}
+$$
+
+where the left problem is called the outer optimization problem and solves for the variable $x \in X$ , with $X = {\mathbb{R}}^{n}$ . The right problem is called the inner optimization problem and solves for the variable $y \in Y$ , with $Y = {\mathbb{R}}^{m}$ . The variable $z \in {\mathbb{R}}^{p}$ is the input variable and is a parameter for the bilevel problem. Min-max is a special case of the Bilevel optimization problem, $\mathop{\min }\limits_{{y \in Y}}\mathop{\max }\limits_{{x \in X}}g\left( {x, y, z}\right)$ , where the outer and inner objectives are equal and opposite in sign.
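+A toy instance of Eq. 1 may clarify the structure. Here the inner problem admits the closed-form solution ${y}^{*}\left( x\right) = {xz}$ , and the reduced outer objective is minimized by plain gradient descent; the functions $f, g$ below are our illustrative choices, not from the paper:

```python
def solve_inner(x, z):
    """Inner problem min_y (y - x*z)^2 has the closed form y*(x) = x*z."""
    return x * z

def solve_bilevel(z, lr=0.1, steps=2000):
    """Toy instance of the bilevel problem (1) with
    f(x, y, z) = (x - 1)^2 + y^2 and g(x, y, z) = (y - x*z)^2.
    Substituting the inner solution gives the reduced outer objective
    (x - 1)^2 + (x*z)^2, minimized here by gradient descent."""
    x = 0.0
    for _ in range(steps):
        y = solve_inner(x, z)
        grad = 2.0 * (x - 1.0) + 2.0 * y * z  # d/dx of the reduced objective
        x -= lr * grad
    return x, solve_inner(x, z)
```

+For $z = 1$ the reduced objective is ${\left( x - 1\right) }^{2} + {x}^{2}$ , so the bilevel solution is ${x}^{*} = {y}^{*} = 1/2$ .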
+
+### 2.2 Combinatorial Bilevel Programming
+
+When the variables are discrete, we restrict the objective functions to be multi-linear (Greub 1967). Various important combinatorial problems are linear in the discrete variables (e.g. VRP, TSP, SAT ${}^{3}$ ); one example form is the following
+
+$$
+\mathop{\min }\limits_{{x \in X}}\langle z, x{\rangle }_{A} + \langle y, x{\rangle }_{B},\;y \in \arg \mathop{\min }\limits_{{y \in Y}}\langle w, y{\rangle }_{C} + \langle x, y{\rangle }_{D} \tag{2}
+$$
+
+The variables $x, y$ have domains $X$ and $Y$ , where $X, Y$ are convex polytopes constructed as the convex hulls of sets of distinct points $\mathcal{X} \subset {\mathbb{R}}^{n},\mathcal{Y} \subset {\mathbb{R}}^{m}$ . The outer and inner problems are Integer Linear Programs (ILPs). The multi-linear operator is represented by the inner product $\langle x, y{\rangle }_{A} = {x}^{T}{Ay}$ . We only consider the case where we have separate parameters for the outer and inner problems, $z \in {\mathbb{R}}^{p}$ and $w \in {\mathbb{R}}^{q}$ .
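+For tiny instances, Eq. 2 can be solved by enumerating the candidate points of $\mathcal{X},\mathcal{Y}$ directly, which also makes the bilinear structure explicit. This brute-force sketch is only for intuition; realistic instances require the specialized bilevel solvers discussed earlier:

```python
import numpy as np

def solve_bilevel_ilp(Xv, Yv, A, B, C, D, z, w):
    """Brute-force solver for the bilinear bilevel program (2), where
    <u, v>_M = u^T M v and the rows of Xv, Yv are the candidate points
    of X and Y.  Only meant for toy instances."""
    def inner_best(x):
        # Inner problem: argmin_y <w, y>_C + <x, y>_D over the rows of Yv.
        vals = [w @ C @ y + x @ D @ y for y in Yv]
        return Yv[int(np.argmin(vals))]

    best = None
    for x in Xv:
        y = inner_best(x)
        outer = z @ A @ x + y @ B @ x  # <z, x>_A + <y, x>_B
        if best is None or outer < best[0]:
            best = (outer, x, y)
    return best[1], best[2]
```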
+
+## 3 BiGrad: Gradient estimation
+
+Even though the discrete and continuous variable cases share a similar structure, the approach to evaluating the gradients differs. We can identify the following common basic steps (Alg.1):
+
+1. In the forward pass, solve the combinatorial or continuous Bilevel Optimisation problem as defined in Eq.1 (or Eq.2) using an existing solver;
+
+2. During the backward pass, compute the gradient ${\mathrm{d}}_{z}L$ (and ${\mathrm{d}}_{w}L$ ) using the suggested gradients (Sec.3.1 and Sec.3.2) starting from the gradients on the output variables ${\nabla }_{x}L$ and ${\nabla }_{y}L$ .
+
+---
+
+${}^{2}$ Finding the solution of the bi-level problem is not in the scope of this work.
+
+${}^{3}$ Vehicle Routing Problem, Travelling Salesman Problem, Boolean satisfiability problem.
+
+---
+
+Algorithm 1: BiGrad Layer: Bilevel Optimization Programming Layer using BiGrad
+
+---
+
+1. Input: Training sample $\left( {\widetilde{d},\widetilde{u}}\right)$
+
+2. Forward Pass:
+
+(a) Compute $\left( {x, y}\right) \in \{ x, y : H\left( {x, y, z}\right) = 0\}$ using the Bilevel Solver: $\left( {x, y}\right) \in {\operatorname{Solve}}_{H}\left( z\right)$
+
+(b) Compute the loss function $L\left( {{h}_{\psi } \circ H \circ {h}_{\theta }\left( \widetilde{d}\right) ,\widetilde{u}}\right)$
+
+(c) Save $\left( {x, y, z}\right)$ for the backward pass
+
+3. Backward Pass:
+
+(a) Update the parameters of the downstream layers $\psi$ using back-propagation
+
+(b) For the continuous variable case, compute the gradient based on Theorem 2 around the current solution $\left( {x, y, z}\right)$ , without solving the Bilevel Problem
+
+(c) For the discrete variable case, use the gradient estimates of Theorem 3 or Section 3.2 (e.g. Eq. 11 or Eq. 12) by solving, when needed, the two separate problems
+
+(d) Back-propagate the estimated gradient to the downstream parameters $\theta$
+
+---
+
+### 3.1 Continuous Optimization
+
+To evaluate the gradient of the variables $z$ with respect to the loss function $L$ , we need to propagate the gradients of the two output variables $x, y$ through the two optimization problems. We can use the implicit function theorem to locally approximate the function $z \rightarrow \left( {x, y}\right)$ . We thus have the following main results ${}^{4}$ .
+
+Theorem 1. Consider the bilevel problem of Eq.1; we can build the following set of equations that represents the equivalent problem around a given solution ${x}^{ * },{y}^{ * },{z}^{ * }$ :
+
+$$
+F\left( {x, y, z}\right) = 0\;G\left( {x, y, z}\right) = 0 \tag{3}
+$$
+
+where
+
+$$
+F\left( {x, y, z}\right) = {\nabla }_{x}f - {\nabla }_{y}f{\left( {\nabla }_{y}G\right) }^{-1}{\nabla }_{x}G,\;G\left( {x, y, z}\right) = {\nabla }_{y}g \tag{4}
+$$
+
+where we used the shorthand notation $f = f\left( {x, y, z}\right) , g = g\left( {x, y, z}\right) , F = F\left( {x, y, z}\right) , G = G\left( {x, y, z}\right)$ .
+
+Theorem 2. Consider the problem defined in Eq.1, then the total gradient of the parameter $z$ w.r.t. the loss function $L\left( {x, y, z}\right)$ is computed from the partial gradients ${\nabla }_{x}L,{\nabla }_{y}L,{\nabla }_{z}L$ as
+
+$$
+{\mathrm{d}}_{z}L = {\nabla }_{z}L - \begin{bmatrix} {\nabla }_{x}L & {\nabla }_{y}L \end{bmatrix}{\begin{bmatrix} {\nabla }_{x}F & {\nabla }_{y}F \\ {\nabla }_{x}G & {\nabla }_{y}G \end{bmatrix}}^{-1}\begin{bmatrix} {\nabla }_{z}F \\ {\nabla }_{z}G \end{bmatrix} \tag{5}
+$$
+
+The implicit layer is thus defined by the two conditions $F\left( {x, y, z}\right) = 0$ and $G\left( {x, y, z}\right) = 0$ . We notice that Eq. 5 can be solved without explicitly computing the Jacobian matrices and inverting the system: adopting the vector-Jacobian product approach, we can proceed from left to right to evaluate ${\mathrm{d}}_{z}L$ . In the following sections we describe how affine equality constraints and nonlinear inequality constraints can be used when modelling $f, g$ . We also notice that the solution of Eq. 5 does not require solving the original problem, but only applying matrix-vector products, i.e. linear algebra, and evaluating gradients that can be computed using automatic differentiation.
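+A sketch of this left-to-right evaluation of Eq. 5: rather than inverting the block Jacobian, we solve a single transposed (adjoint) linear system. The block shapes and names below are illustrative assumptions:

```python
import numpy as np

def bilevel_grad_z(gx, gy, gz, Fx, Fy, Gx, Gy, Fz, Gz):
    """Evaluate Eq. 5 left to right: with v = [grad_x L, grad_y L],
    solve the adjoint system J^T u = v for the block Jacobian
    J = [[Fx, Fy], [Gx, Gy]] instead of inverting J, then return
    d_z L = grad_z L - u^T [Fz; Gz]."""
    J = np.block([[Fx, Fy], [Gx, Gy]])
    Jz = np.vstack([Fz, Gz])
    u = np.linalg.solve(J.T, np.concatenate([gx, gy]))
    return gz - Jz.T @ u
```

+Solving one linear system instead of forming the inverse is the standard adjoint trick; it keeps the cost at a single factorization regardless of the dimension of $z$ .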
+
+Linear Equality constraints To extend the model of Eq. 1 to include linear equality constraints of the form ${Ax} = b$ and ${By} = c$ on the outer and inner problem variables, we use the following change of variables
+
+$$
+x \rightarrow {x}_{0} + {A}^{ \bot }x, y \rightarrow {y}_{0} + {B}^{ \bot }y \tag{6}
+$$
+
+where ${A}^{ \bot },{B}^{ \bot }$ are bases of the null spaces of $A$ and $B$ , i.e. $A{A}^{ \bot } = 0, B{B}^{ \bot } = 0$ , and ${x}_{0},{y}_{0}$ are particular solutions of the equations, i.e. $A{x}_{0} = b, B{y}_{0} = c$ .
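+In practice, ${A}^{ \bot }$ and ${x}_{0}$ can be obtained numerically, for example from the SVD and the pseudo-inverse; this is one possible realization, not prescribed by the paper:

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Columns span the null space of A (so A @ N = 0), playing the
    role of A-perp in the change of variables of Eq. 6."""
    _, s, vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return vt[rank:].T  # right singular vectors with zero singular value

def particular_solution(A, b):
    """One solution x0 of A @ x0 = b, via the pseudo-inverse."""
    return np.linalg.pinv(A) @ b
```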
+
+Non-linear Inequality constraints Similarly, to extend the model of Eq.1 when we have non-linear inequality constraints, we use the barrier method approach (Boyd and Vandenberghe 2004), where a logarithmic penalty discourages the variable from violating the constraints. Specifically, let us consider the case where ${f}_{i},{g}_{i}$ are inequality constraint functions, i.e. ${f}_{i} < 0,{g}_{i} < 0$ , for the outer and inner problems. We then define the new functions
+
+$$
+f \rightarrow {tf} - \mathop{\sum }\limits_{{i = 1}}^{{k}_{x}}\ln \left( {-{f}_{i}}\right) , g \rightarrow {tg} - \mathop{\sum }\limits_{{i = 1}}^{{k}_{y}}\ln \left( {-{g}_{i}}\right) . \tag{7}
+$$
+
+where $t$ is a variable parameter, which depends on the violation of the constraints. The closer the solution is to violating the constraints, the larger the value of $t$ is.
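+The reformulation in Eq. 7 can be sketched as a wrapper returning $t f\left( v\right) - \mathop{\sum }\limits_{i}\ln \left( {-{f}_{i}\left( v\right) }\right)$ on the strictly feasible region; returning $+\infty$ outside it is a common convention we adopt here, not something the paper specifies:

```python
import math

def barrier_objective(f, constraints, t):
    """Wrap an objective as in Eq. 7: v -> t*f(v) - sum_i log(-f_i(v)),
    finite only on the strictly feasible region f_i(v) < 0."""
    def wrapped(v):
        val = t * f(v)
        for fi in constraints:
            c = fi(v)
            if c >= 0:
                return float("inf")  # infeasible point
            val -= math.log(-c)
        return val
    return wrapped
```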
+
+Bilevel Cone programming We show here how Theorem 2 can be applied to bilevel cone programming, extending single-level cone programming results (Agrawal et al. 2019b), where we can use efficient solvers for cone programs to compute a solution of the bilevel problem (Ouattara and Aswani 2018)
+
+$$
+\mathop{\min }\limits_{x}{c}^{T}x + {\left( Cy\right) }^{T}x
+$$
+
+$$
+\text{s.t.}{Ax} + z + R\left( y\right) \left( {x - r}\right) = b, s \in \mathcal{K} \tag{8a}
+$$
+
+$$
+y \in \arg \mathop{\min }\limits_{y}{d}^{T}y + {\left( Dx\right) }^{T}y
+$$
+
+$$
+\text{s.t.}{By} + u + P\left( x\right) \left( {y - p}\right) = f, u \in \mathcal{K} \tag{8b}
+$$
+
+In this bilevel cone programming formulation, the inner and outer problems are both cone programs, where $R\left( y\right) , P\left( x\right)$ represent linear transformations, $C, r, D, p$ are new parameters of the problem, and $\mathcal{K}$ is the conic domain of the variables. Under the hypothesis that a local minimum of Eq. 8 exists, we can use an interior point method to find such a point. To compute the bilevel gradient, we then use the residual maps (Busseti, Moursi, and Boyd 2019) of the outer and inner problems. Indeed, we can then apply Theorem 2, where $F = {N}_{1}\left( {x, Q, y}\right)$ and $G = {N}_{2}\left( {y, Q, x}\right)$ are the normalized residual maps defined in (Busseti, Moursi, and Boyd 2019; Agrawal et al. 2019a) of the outer and inner problems.
+
+---
+
+${}^{4}$ Proofs are in the Supplementary Material
+
+---
+
+### 3.2 Combinatorial Optimization
+
+When we consider discrete variables, the gradient is zero almost everywhere. We thus need to resort to gradient estimation. For the bilevel problem with discrete variables of Eq. 2, when the solution of the bilevel problem exists and is given (Kleinert et al. 2021), Thm. 3 gives the gradients of the loss function with respect to the input parameters.
+
+Theorem 3. Given the problem of Eq. 2, the total variation of a cost function $L\left( {x, y, z, w}\right)$ with respect to the input parameters has the following form:
+
+$$
+{\mathrm{d}}_{z}L = {\nabla }_{z}L + \left\lbrack {{\nabla }_{x}L + {\nabla }_{y}L{\nabla }_{x}y}\right\rbrack {\nabla }_{z}x \tag{9a}
+$$
+
+$$
+{\mathrm{d}}_{w}L = {\nabla }_{w}L + \left\lbrack {{\nabla }_{x}L{\nabla }_{y}x + {\nabla }_{y}L}\right\rbrack {\nabla }_{w}y \tag{9b}
+$$
+
+The ${\nabla }_{x}y,{\nabla }_{y}x$ terms capture the interaction between the outer and inner problems. We could estimate the gradients in Thm. 3 using the perturbation approach suggested in (Berthet et al. 2020), which estimates the gradient as the expected value of the gradient of the problem after perturbing the input variable; but, similar to REINFORCE (Williams 1992), this introduces large variance. While it is possible to reduce the variance in some cases (Grathwohl et al. 2017) with the use of additional trainable functions, we consider alternative approaches, as described in the following.
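The perturbation idea can be illustrated with a minimal sketch, assuming a toy one-hot argmax solver and an illustrative sample count; this is a sketch of the smoothing mechanism in (Berthet et al. 2020), not the paper's implementation:

```python
import numpy as np

# A minimal sketch of the perturbation approach: the piecewise-constant map
# z -> x(z) is smoothed by averaging solver outputs under Gaussian
# perturbations of z. Solver, sizes and sample count are illustrative.

rng = np.random.default_rng(0)

def solver(z):
    # x(z) = argmax over one-hot vectors of <x, z>
    x = np.zeros_like(z)
    x[np.argmax(z)] = 1.0
    return x

def smoothed_solution(z, sigma=0.5, n_samples=2000):
    # Monte-Carlo estimate of E[x(z + sigma * eps)], eps ~ N(0, I); this
    # expectation varies smoothly in z, unlike the hard solver output.
    out = np.zeros_like(z)
    for _ in range(n_samples):
        out += solver(z + sigma * rng.standard_normal(z.shape))
    return out / n_samples

z = np.array([1.0, 1.1, -0.5])
print(solver(z), smoothed_solution(z))
```

The hard solver commits to a single entry, while the smoothed solution spreads mass over the two nearly-tied entries; the Monte-Carlo averaging is also where the large variance mentioned above comes from.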
+
+Differentiation of blackbox combinatorial solvers (Pogančić et al. 2019) proposes a way to propagate the gradient through a single-level combinatorial solver, where ${\nabla }_{z}L \approx \frac{1}{\tau }\left\lbrack {x\left( {z + \tau {\nabla }_{x}L}\right) - x\left( z\right) }\right\rbrack$ when $x\left( z\right) = \arg \mathop{\max }\limits_{{x \in X}}\langle x, z\rangle$. We thus propose to compute the variation on the input variables from the two separate problems of the bilevel problem:
+
+$$
+{\nabla }_{z}L \approx 1/\tau \left\lbrack {x\left( {z + {\tau A}{\nabla }_{x}L, y}\right) - x\left( {z, y}\right) }\right\rbrack \tag{10a}
+$$
+
+$$
+{\nabla }_{w}L \approx 1/\tau \left\lbrack {y\left( {w + {\tau C}{\nabla }_{y}L, x}\right) - y\left( {w, x}\right) }\right\rbrack \tag{10b}
+$$
+
+or alternatively, if we have only access to the Bilevel solver and not to the separate ILP solvers, we can express
+
+$$
+{\nabla }_{z, w}L \approx 1/\tau \left\lbrack {s\left( {v + {\tau E}{\nabla }_{x, y}L}\right) - s\left( v\right) }\right\rbrack \tag{11}
+$$
+
+where $x\left( {z, y}\right)$ and $y\left( {w, x}\right)$ represent the solutions of the two problems separately, $s\left( v\right) : \left( {z, w}\right) \rightarrow \left( {x, y}\right)$ is the complete solution to the bilevel problem, $\tau \rightarrow 0$ is a hyper-parameter and $E = \left\lbrack \begin{matrix} A & 0 \\ 0 & C \end{matrix}\right\rbrack$. This form is more convenient than Eq. 9, since it does not require computing the cross terms, thus ignoring the interaction of the two levels.
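A single-level toy version of this estimator can be sketched as follows, assuming $A = I$ and the argmin convention of (Pogančić et al. 2019) (the sign of $A$ can absorb the argmax/argmin difference); the top-1 solver and step sizes are illustrative:

```python
import numpy as np

# A minimal sketch of the blackbox-solver gradient estimate with A = I:
# grad_z L is approximated with one extra solver call at the perturbed
# input z + tau * grad_x L. Toy cost-minimizing top-1 solver.

def solver(z):
    # x(z) = argmin over one-hot vectors of <x, z>
    x = np.zeros_like(z)
    x[np.argmin(z)] = 1.0
    return x

def blackbox_grad(z, grad_x_L, tau=0.5):
    return (solver(z + tau * grad_x_L) - solver(z)) / tau

z = np.array([0.0, 0.1, 2.0])
x_star = np.array([0.0, 1.0, 0.0])          # supervised target solution
for _ in range(5):
    grad_x_L = solver(z) - x_star           # gradient of 0.5 * ||x - x*||^2
    z = z - 0.1 * blackbox_grad(z, grad_x_L)
print(solver(z))  # -> [0. 1. 0.], the solver now returns the target
```

Note the cost of the extra forward solve in the backward pass, which is the computational overhead this estimator pays compared to the straight-through variant below it in the text.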
+
+Straight-Through gradient In estimating the input variables $z, w$ of our model, we may not be interested in the interaction between the two variables $x, y$. Let us consider, for example, the squared ${\ell }_{2}$ loss function defined over the output variables
+
+$$
+{L}^{2}\left( {x, y}\right) = {L}^{2}\left( x\right) + {L}^{2}\left( y\right)
+$$
+
+where ${L}^{2}\left( x\right) = \frac{1}{2}{\begin{Vmatrix}x - {x}^{ * }\end{Vmatrix}}_{2}^{2}$ and ${x}^{ * }$ is the true value. The loss is non-zero only when the two vectors disagree; with integer variables it counts the squared difference or, in the case of binary variables, it counts the number of differences. If we compute ${\nabla }_{x}{L}^{2}\left( x\right) = \left( {x - {x}^{ * }}\right)$ in the binary case, we have that ${\nabla }_{{x}_{i}}{L}^{2}\left( x\right) = +1$ if ${x}_{i}^{ * } = 0 \land {x}_{i} = 1$, ${\nabla }_{{x}_{i}}{L}^{2}\left( x\right) = -1$ if ${x}_{i}^{ * } = 1 \land {x}_{i} = 0$, and 0 otherwise. This information can be directly used to update the ${z}_{i}$ variable in the linear term $\langle z, x\rangle$; we can thus estimate the gradients of the input variables as ${\nabla }_{{z}_{i}}{L}^{2} = - \lambda {\nabla }_{{x}_{i}}{L}^{2}$ and ${\nabla }_{{w}_{i}}{L}^{2} = - \lambda {\nabla }_{{y}_{i}}{L}^{2}$, with some weight $\lambda > 0$. The intuition is that the weight ${z}_{i}$ associated with the variable ${x}_{i}$ is increased when the value of the variable ${x}_{i}$ reduces. In the general multilinear case we have additional multiplicative terms. Following this intuition (see Sec. A.3), we thus use as an estimate of the gradient of the variables
+
+$$
+{\nabla }_{z}L = - A{\nabla }_{x}L\;{\nabla }_{w}L = - C{\nabla }_{y}L \tag{12}
+$$
+
+This is equivalent to Eq. 2 with ${\nabla }_{z}x = {\nabla }_{w}y = - I$ and ${\nabla }_{y}x = 0$, thus ${\nabla }_{x}y = 0$. This update is also equivalent to Eq. 10, without the additional solution computation. The advantage of this form is that it does not require solving for an additional solution in the backward pass. For the single-level problem, the gradient has the same form as the Straight-Through gradient proposed by (Bengio, Léonard, and Courville 2013), with surrogate gradient ${\nabla }_{z}x = - I$.
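A single-level sketch of this update, assuming a cost-minimizing top-$B$ selection so that raising $z_i$ makes $x_i$ less attractive (matching the intuition above); the solver, sizes and step size are illustrative:

```python
import numpy as np

# A minimal sketch of the straight-through update with A = I: the forward
# pass runs the discrete solver, the backward pass uses the surrogate
# grad_z L = -grad_x L, with no extra solver call.

def bottom_b(z, B=2):
    # x(z) = argmin over binary x with sum(x) = B of <x, z>
    x = np.zeros_like(z)
    x[np.argsort(z)[:B]] = 1.0
    return x

x_star = np.array([1.0, 1.0, 0.0, 0.0])     # target support
z = np.array([0.5, 0.4, 0.0, 0.1])
for _ in range(10):
    grad_x = bottom_b(z) - x_star           # gradient of 0.5 * ||x - x*||^2
    z = z - 0.1 * (-grad_x)                 # straight-through: grad_z L = -grad_x L
print(bottom_b(z))  # -> [1. 1. 0. 0.]
```

Wrongly active coordinates get their cost $z_i$ increased and wrongly inactive ones get it decreased, so after a few updates the solver recovers the target support without any additional solve in the backward pass.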
+
+## 4 Related Work
+
+Bilevel Programming in machine learning Various papers model machine learning problems as bilevel problems, for example in Hyper-parameter Optimization (MacKay et al. 2019; Franceschi et al. 2018), Meta-Feature Learning (Li and Malik 2016), Meta-Initialization Learning (Rajeswaran et al. 2019), Neural Architecture Search (Liu, Simonyan, and Yang 2018), Adversarial Learning (Li et al. 2019), Deep Reinforcement Learning (Vahdat et al. 2020) and Multi-Task Learning (Alesiani et al. 2020). In these works the main focus is to compute the solution of the bilevel optimization problem. In (MacKay et al. 2019; Lorraine and Duvenaud 2018), the best response function is modeled as a neural network and the solution is found using iterative minimization, without attempting to estimate the complete gradient. Many bilevel approaches rely on the implicit function to compute the hyper-gradient (Sec. 3.5 of (Colson, Marcotte, and Savard 2007)), but do not use the bilevel problem as a layer.
+
+Quadratic, Cone and Convex single-level Programming Various works have addressed the problem of differentiating through quadratic, convex or cone programming (Amos 2019; Amos and Kolter 2017; Agrawal et al. 2019b, a). In these approaches the optimization layer is modeled as an implicit layer and, for the cone/convex case, the normalized residual map is used to propagate the gradients. Contrary to our approach, these works only address single-level problems and do not consider combinatorial optimization.
+
+Implicit layer Networks While classical deep neural networks perform a single pass through the network at inference time, a new class of systems performs inference by solving an optimization problem. Examples of this are the Deep Equilibrium Network (DEQ) (Bai, Kolter, and Koltun 2019) and Neural ODE (NODE) (Chen et al. 2018). Similar to our approach, the gradient is computed based on sensitivity analysis of the current solution. These methods only consider continuous optimization.
+
+
+
+Figure 2: (a) Visualization of the Optimal Control Learning network, where a disturbance ${\epsilon }_{t}$ is injected based on the control signal ${u}_{t}$ . (b) Comparison of the training performance for $N = 2$ , $T = {20}$ and epochs $= {10}$ of the BiGrad and the Adversarial version of the OptNet (Amos and Kolter 2017).
+
+Combinatorial optimization Various papers estimate gradients of single-level combinatorial problems using relaxation. For example, (Wilder, Dilkina, and Tambe 2019; Elmachtoub and Grigas 2017; Ferber et al. 2020; Mandi and Guns 2020) use ${\ell }_{1},{\ell }_{2}$ or log-barrier terms to relax the Integer Linear Programming (ILP) problem. Once relaxed, the problem is solved using standard methods for continuous variable optimization. An alternative approach is suggested in other papers. For example, in (Pogančić et al. 2019) the loss function is approximated with a linear function, which leads to an estimate of the gradient of the input variable similar to the implicit differentiation by perturbation form (Domke 2010). (Berthet et al. 2020) is another approach that also uses perturbation and a change of variables to estimate the gradient in an ILP problem. SatNet (Wang et al. 2019) solves MAXSAT problems by solving a continuous semidefinite program (SDP) relaxation of the original problem. These works only consider single-level problems.
+
+Discrete latent variables Discrete random variables provide an effective way to model multi-modal distributions over discrete values, which can be used in various machine learning problems, e.g. in language models (Yang et al. 2017) or for conditional computation (Bengio, Léonard, and Courville 2013). Gradients of discrete distributions are not mathematically defined; thus, in order to use gradient-based methods, gradient estimators have been proposed. A class of methods is based on the Gumbel-Softmax estimator (Jang, Gu, and Poole 2016; Maddison, Mnih, and Teh 2016; Paulus, Maddison, and Krause 2021).
+
+## 5 Experiments
+
+We evaluate BiGrad on continuous and combinatorial problems to show that it improves over single-level approaches. In the first experiment we compare the use of BiGrad versus the use of the implicit layer proposed in (Amos and Kolter 2017) for the design of Optimal Control with adversarial noise. In the second part, after experimenting with adversarial attacks, we explore the performance of BiGrad on two combinatorial problems with interdiction, where we adapted the experimental setup proposed in (Pogančić et al. 2019). In these latter experiments, we compare the formulation of Eq. 11 (denoted BiGrad(BB)) and the formulation of Eq. 12 (denoted BiGrad(PT)). In addition, we compare with the single-level BB-1 from (Pogančić et al. 2019) and the single-level straight-through gradient estimator (Bengio, Léonard, and Courville 2013; Paulus, Maddison, and Krause 2021) with surrogate gradient ${\nabla }_{z}x = - I$ (PT-1). We also compare against supervised learning (SL), which ignores the underlying structure of the problem and directly predicts the solution of the bilevel problem.
+
+Table 1: Optimal Control Average Cost. The bilevel approach improves (lower cost) over the two-step approach, because it is able to better capture the interaction between noise and control dynamics.
+
+| | LQR | OptNet | Bilevel |
+| --- | --- | --- | --- |
+| Adversarial (10 steps) | 2.736 | 0.2722 | 0.2379 |
+| Adversarial (30 steps) | - | 0.2511 | 0.2181 |
+
+### 5.1 Optimal Control with adversarial disturbance
+
+We consider the design of a robust stochastic control for a Dynamical System (Agrawal et al. 2019b). The problem is to find a feedback function $u = \phi \left( x\right)$ that minimizes
+
+$$
+\mathop{\min }\limits_{\phi }\mathbb{E}\frac{1}{T}\mathop{\sum }\limits_{{t = 0}}^{T}{\begin{Vmatrix}{x}_{t}\end{Vmatrix}}^{2} + {\begin{Vmatrix}\phi \left( {x}_{t}\right) \end{Vmatrix}}^{2} \tag{13a}
+$$
+
+$$
+\text{s.t.}{x}_{t + 1} = A{x}_{t} + {B\phi }\left( {x}_{t}\right) + {w}_{t},\forall t \tag{13b}
+$$
+
+where ${x}_{t} \in {\mathbb{R}}^{n}$ is the state of the system, ${w}_{t}$ is an i.i.d. random disturbance and ${x}_{0}$ is the given initial state. To solve this problem we use Approximate Dynamic Programming (ADP) (Wang and Boyd 2010), which solves a proxy quadratic problem
+
+$$
+\mathop{\min }\limits_{{u}_{t}}{u}_{t}^{T}P{u}_{t} + {x}_{t}^{T}Q{u}_{t} + {q}^{T}{u}_{t}\;\text{ s.t. }{\begin{Vmatrix}{u}_{t}\end{Vmatrix}}_{2} \leq 1 \tag{14}
+$$
+
+We can use the optimization layer as shown in Fig. 2(a) and update the problem variables (e.g. $P, Q, q$) using gradient descent. We use the linear quadratic regulator (LQR) solution as the initial solution (Kalman 1964). The optimization module is replicated for each time step $t$, similarly to a Recurrent Neural Network (RNN).
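The per-step replication can be sketched as a rollout, assuming for simplicity $P = p I$ so that the ball-constrained step of Eq. 14 has a closed form (minimize the quadratic, then project onto the unit ball; the projection is exact only because $P$ is a multiple of the identity); the system matrices and sizes are illustrative, not the paper's setup:

```python
import numpy as np

# A minimal sketch of the ADP rollout with the optimization module
# replicated per time step, RNN-style. With P = p*I the constrained step
# is: unconstrained minimizer, then projection onto the unit ball.

rng = np.random.default_rng(0)
n, T, p = 2, 20, 1.0
A = np.array([[1.0, 0.1],
              [0.0, 1.0]])
Bm = np.eye(n)                 # input matrix of the dynamics (illustrative)
Qm = np.eye(n)                 # learnable cross term x^T Q u
q = np.zeros(n)                # learnable linear term

def control_step(x):
    u = -(Qm.T @ x + q) / (2.0 * p)         # minimizer of u^T P u + x^T Q u + q^T u
    norm = np.linalg.norm(u)
    return u / norm if norm > 1.0 else u    # projection onto ||u||_2 <= 1

x = np.array([2.0, -1.0])
cost = 0.0
for t in range(T):                          # one optimization module per step
    u = control_step(x)
    cost += (x @ x + u @ u) / T
    x = A @ x + Bm @ u + 0.05 * rng.standard_normal(n)
print(cost)
```

In the full method the parameters $(P, Q, q)$ of each replicated module are updated by backpropagating through this rollout, exactly as one would train an RNN.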
+
+We can build a resilient version of the controller under the hypothesis that an adversary is able to inject noise of limited energy, but arbitrarily dependent on the control $u$, by solving the following bilevel optimization problem
+
+$$
+\mathop{\max }\limits_{\epsilon }Q\left( {{u}_{t},{x}_{t} + \epsilon }\right) \;\text{ s.t. }\;\parallel \epsilon \parallel \leq \sigma \tag{15a}
+$$
+
+$$
+{u}_{t}\left( \epsilon \right) = \arg \mathop{\min }\limits_{{u}_{t}}Q\left( {{u}_{t},{x}_{t}}\right) \;\text{ s.t. }{\begin{Vmatrix}{u}_{t}\end{Vmatrix}}_{2} \leq 1 \tag{15b}
+$$
+
+where $Q\left( {u, x}\right) = {u}^{T}{Pu} + {x}_{t}^{T}{Qu} + {q}^{T}u$ and we want to learn the parameters $z = \left( {P, Q, q}\right)$, with $y = {u}_{t}, x = \epsilon$ in Eq. 1.
+
+| gradient type | train (12x12 maps) | validation (12x12 maps) | train (18x18 maps) | validation (18x18 maps) | train (24x24 maps) | validation (24x24 maps) |
+| --- | --- | --- | --- | --- | --- | --- |
+| BiGrad(BB) | ${95.8} \pm {0.2}$ | ${94.5} \pm {0.2}$ | $\mathbf{{97.1}} \pm {0.0}$ | $\mathbf{{96.4}} \pm {0.2}$ | ${98.0} \pm {0.0}$ | $\mathbf{{97.8}} \pm {0.0}$ |
+| BiGrad(PT) | ${91.7} \pm {0.1}$ | ${91.6} \pm {0.1}$ | ${94.3} \pm {0.0}$ | ${94.2} \pm {0.1}$ | ${95.7} \pm {0.0}$ | ${95.6} \pm {0.1}$ |
+| BB-1 | ${95.9} \pm {0.2}$ | ${91.7} \pm {0.1}$ | ${96.7} \pm {0.2}$ | ${94.5} \pm {0.1}$ | ${97.1} \pm {0.1}$ | ${96.3} \pm {0.2}$ |
+| PT-1 | ${88.3} \pm {0.2}$ | ${87.5} \pm {0.2}$ | ${90.9} \pm {0.4}$ | ${90.6} \pm {0.5}$ | ${92.8} \pm {0.1}$ | ${92.8} \pm {0.2}$ |
+| SL | ${100.0} \pm {0.0}$ | ${26.2} \pm {2.4}$ | $\mathbf{{99.9}} \pm {0.1}$ | ${20.2} \pm {0.5}$ | $\mathbf{{99.1}} \pm {0.2}$ | ${14.0} \pm {1.0}$ |
+
+Table 2: Performance on the Dynamic Programming Problem with Interdiction. SL uses ResNet18.
+
+| ${L}_{\infty } \leq \alpha$ | DCNN | Bi-DCNN | CNN | CNN* |
+| --- | --- | --- | --- | --- |
+| 0 | ${62.9} \pm {0.3}$ | ${64.0} \pm {0.4}$ | ${63.4} \pm {0.7}$ | ${63.6} \pm {0.5}$ |
+| 5 | ${42.6} \pm {1.0}$ | ${44.5} \pm {0.2}$ | ${43.8} \pm {1.2}$ | ${44.3} \pm {1.0}$ |
+| 10 | ${23.5} \pm {1.5}$ | $\mathbf{{25.3}} \pm {0.8}$ | ${24.3} \pm {1.0}$ | ${24.2} \pm {1.0}$ |
+| 15 | ${14.4} \pm {1.4}$ | $\mathbf{{15.6}} \pm {0.7}$ | ${14.6} \pm {0.7}$ | ${14.3} \pm {0.4}$ |
+| 20 | ${9.1} \pm {1.2}$ | $\mathbf{{10.0}} \pm {0.6}$ | ${9.2} \pm {0.4}$ | ${8.9} \pm {0.2}$ |
+| 25 | ${6.1} \pm {1.0}$ | ${6.8} \pm {0.5}$ | ${6.0} \pm {0.2}$ | ${5.9} \pm {0.2}$ |
+| 30 | ${3.9} \pm {0.7}$ | ${4.4} \pm {0.5}$ | ${3.9} \pm {0.2}$ | ${3.9} \pm {0.1}$ |
+
+Table 3: Performance on the adversarial attack with discrete features, with $Q = {10}$. DCNN is the single-level discrete CNN, Bi-DCNN is the bilevel discrete CNN, CNN is the vanilla CNN, and CNN* is the CNN where we add the bilevel discrete layer after vanilla training.
+
+We evaluate the performance to verify the viability of the proposed approach and compare with LQR and OptNet (Amos and Kolter 2017), where the outer problem is substituted with a best response function that computes the adversarial noise based on the computed output; in this case the adversarial noise is a scaled version of ${Qu}$ in Eq. 14. Tab. 1 and Fig. 2(b) present the performance using BiGrad, LQR and the adversarial version of OptNet. BiGrad improves over two-step OptNet (Tab. 1), because it is able to better model the interaction between noise and control dynamics.
+
+### 5.2 Robust ML with discrete latent variables
+
+Machine learning models are heavily affected by the injection of intentional noise (Madry et al. 2017; Goodfellow, Shlens, and Szegedy 2014). Adversarial attacks typically require access to the machine learning model, so that the attack model can be used during training to include its effect. Instead of training an end-to-end system as in (Goldblum, Fowl, and Goldstein 2019), where the attacker is aware of the model, we consider the case where the attacker can inject noise at the feature level, as opposed to the input level (as in (Goldblum, Fowl, and Goldstein 2019)); this allows us to model the interaction as a bilevel problem. Thus, to demonstrate the use of a bilevel layer, we design a system that is composed of a feature extraction layer, followed by a discretization layer that operates on the space of $\{ 0,1{\} }^{m}$, where $m$ is the hidden feature size, followed by a classification layer. The network used in the experiments is composed of two convolutional layers with max-pooling and two linear layers, all with ReLU activation functions, while the classification head is a linear layer. We consider a more limited attacker that is not aware of the loss function of the model and does not have access to the full model, but only to the input of the discrete layer, and is able to switch $Q$ discrete variables. The interaction of the discrete layer with the attacker is described by the following bilevel problem:
+
+$$
+\mathop{\min }\limits_{{x \in Q}}\mathop{\max }\limits_{{y \in B}}\langle z + x, y\rangle . \tag{16}
+$$
+
+where $Q$ represents the set of all possible attacks, $B$ the budget of the discretization layer and $y$ is the output of the layer. For the simulation, we compute the solution by sorting the features by value and considering only the first $B$ values, while the attacker obscures (i.e. sets to zero) the first $Q$ positions. The output $y$ thus has ones on the $Q$ to $B$ non-zero positions, and zeros elsewhere. We train three models on the CIFAR-10 dataset for 50 epochs. For comparison we consider: 1) the vanilla CNN network (i.e. without the discrete features); 2) the network with the single-level problem (i.e. without the attacker); and 3) the network with the bilevel problem (i.e. the min-max discretization problem defined in Eq. 16). We then test the networks against adversarial attacks using the PGD attack (Madry et al. 2017), similar to (Goldblum, Fowl, and Goldstein 2019). Similar results apply for the FGSM (Fast Gradient Sign Method) attack (Goodfellow, Shlens, and Szegedy 2014). We also tested the network trained as a vanilla network, where we added the min-max layer after training. From the results (Tab. 3), we notice: 1) the min-max network shows improved resilience to adversarial attacks with respect to the vanilla network, but also with respect to the max (single-level) network; 2) the min-max layer applied to the vanilla trained network is beneficial against adversarial attacks; 3) the min-max network does not significantly change performance in the presence of adversarial attacks at the discrete layer (i.e. between $\mathrm{Q} = 0$ and $\mathrm{Q} = {10}$). This example shows how bilevel layers can be successfully integrated into machine learning systems as differentiable layers.
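The simulated forward pass described above can be sketched directly, under the assumption that features are positive so the attacked layer keeps the features at ranks $Q{+}1$ to $Q{+}B$ of the original ordering; feature values and budgets are illustrative:

```python
import numpy as np

# A minimal sketch of the simulated min-max discretization: the attacker
# zeroes the Q strongest features, then the layer keeps the top-B of the
# surviving features. Values and budgets are illustrative.

def attacked_discretize(z, Q=2, B=3):
    order = np.argsort(z)[::-1]               # feature indices, largest first
    z_att = z.copy()
    z_att[order[:Q]] = 0.0                    # attacker obscures the Q strongest features
    y = np.zeros_like(z)
    y[np.argsort(z_att)[::-1][:B]] = 1.0      # layer keeps the B strongest survivors
    return y

z = np.array([0.9, 0.1, 0.7, 0.5, 0.3, 0.8])
print(attacked_discretize(z))  # -> [0. 0. 1. 1. 1. 0.]
```

During training, gradients flow through this layer using the straight-through or blackbox estimators of Sec. 3.2, while the forward pass stays exactly this discrete min-max computation.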
+
+### 5.3 Dynamic Programming: Shortest path with Interdiction
+
+We consider the problem of Shortest Path with Interdiction, where the set of possible valid paths (see Fig. 3(a)) is $Y$ and the set of all possible interdictions is $X$. The mathematical problem can be written as
+
+$$
+\mathop{\min }\limits_{{y \in Y}}\mathop{\max }\limits_{{x \in X}}\langle z + x \odot w, y\rangle \tag{17}
+$$
+
+where $\odot$ is the element-wise product. This problem is multilinear in the discrete variables $x, y, z$. The $z, w$ variables are outputs of a neural network whose inputs are the Warcraft II tile images. The aim is to train the parameters of the weight
+
+| gradient type | train (k=8) | validation (k=8) | train (k=10) | validation (k=10) | train (k=12) | validation (k=12) |
+| --- | --- | --- | --- | --- | --- | --- |
+| BiGrad(BB) | ${89.2} \pm {0.1}$ | ${89.4} \pm {0.2}$ | ${91.9} \pm {0.1}$ | $\mathbf{{92.0}} \pm {0.1}$ | ${93.5} \pm {0.1}$ | ${93.5} \pm {0.2}$ |
+| BiGrad(PT) | ${89.3} \pm {0.0}$ | $\mathbf{{89.4}} \pm {0.1}$ | ${92.0} \pm {0.0}$ | ${91.9} \pm {0.1}$ | $\mathbf{{93.7}} \pm {0.1}$ | $\mathbf{{93.7}} \pm {0.1}$ |
+| BB-1 | ${84.0} \pm {0.4}$ | ${83.9} \pm {0.4}$ | ${87.4} \pm {0.3}$ | ${87.5} \pm {0.4}$ | ${89.3} \pm {0.1}$ | ${89.3} \pm {0.1}$ |
+| PT-1 | ${84.1} \pm {0.4}$ | ${84.1} \pm {0.3}$ | ${87.3} \pm {0.3}$ | ${87.0} \pm {0.3}$ | ${89.3} \pm {0.0}$ | ${89.5} \pm {0.2}$ |
+| SL | ${94.2} \pm {5.0}$ | ${10.7} \pm {3.9}$ | ${92.7} \pm {5.4}$ | ${9.4} \pm {0.4}$ | ${91.4} \pm {2.3}$ | ${9.3} \pm {1.2}$ |
+
+Table 4: Performance in terms of accuracy on the TSP use case with interdiction. SL has higher accuracy during training, but fails at test time.
+
+
+
+Figure 3: (a) Example Shortest Path in the Warcraft II tile set of (Guyomarch 2017). (b) Example Shortest Path without (left) and with interdiction (middle). Even a small interdiction (right) has a large effect on the output.
+
+network, such that we can solve the shortest path problem based only on the input image. For the experiments, we followed and adapted the scenario of (Pogančić et al. 2019) and used the Warcraft II tile maps of (Guyomarch 2017). We implemented the interdiction game using a two-stage min-max-min algorithm (Kämmerling and Kurtz 2020). In Fig. 3(b) it is possible to see the effect of interdiction on the final solution. Tab. 2 shows the performance of the proposed approaches, where we allow for $B = 3$ interdictions and use tile sizes of ${12} \times {12},{18} \times {18},{24} \times {24}$. The loss function is the Hamming and ${\ell }_{1}$ loss evaluated on both the shortest path $y$ and the intervention $x$. The gradient estimated using Eq. 11 (BB) provides more accurate results, at double the computational cost of PT. The single-level BB-1 approach outperforms PT and shares a similar computational complexity, while the single-level PT-1 is inferior to PT. As expected, SL outperforms the other methods during training, but completely fails during validation. BiGrad improves over single-level approaches, because it includes the interaction of the two problems.
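On a tiny instance the interdiction game itself can be sketched by brute force, assuming the interdictor commits first and raises the cost of $B$ cells by $w$, after which the (down/right) shortest path is recomputed; the grid, costs and budget are illustrative, and the paper instead uses the two-stage min-max-min algorithm of (Kämmerling and Kurtz 2020):

```python
import itertools
import numpy as np

# A minimal brute-force sketch of shortest path with interdiction on a toy
# grid: the interdictor adds w to B cells, anticipating the recomputed
# down/right shortest path. Instance and budget are illustrative.

def shortest_path_cost(costs):
    # dynamic program over down/right moves, top-left to bottom-right
    n, m = costs.shape
    dp = np.full((n, m), np.inf)
    dp[0, 0] = costs[0, 0]
    for i in range(n):
        for j in range(m):
            if i > 0:
                dp[i, j] = min(dp[i, j], dp[i - 1, j] + costs[i, j])
            if j > 0:
                dp[i, j] = min(dp[i, j], dp[i, j - 1] + costs[i, j])
    return dp[-1, -1]

def interdicted_cost(z, w, B=1):
    # worst case over all interdiction sets of size B
    cells = list(itertools.product(range(z.shape[0]), range(z.shape[1])))
    worst = -np.inf
    for subset in itertools.combinations(cells, B):
        costs = z.copy()
        for (i, j) in subset:
            costs[i, j] += w[i, j]
        worst = max(worst, shortest_path_cost(costs))
    return worst

z = np.array([[1.0, 1.0, 5.0],
              [5.0, 1.0, 1.0],
              [5.0, 5.0, 1.0]])
w = 10.0 * np.ones((3, 3))
print(shortest_path_cost(z), interdicted_cost(z, w, B=1))  # -> 5.0 15.0
```

Even a single interdicted cell can raise the optimal cost substantially, which matches the qualitative effect shown in Fig. 3(b); the learning task is then to recover $z, w$ from images such that this game reproduces the observed interdicted paths.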
+
+### 5.4 Combinatorial Optimization: Travel Salesman Problem (TSP) with Interdiction
+
+The Travel Salesman Problem (TSP) with interdiction consists of finding the shortest route $y \in Y$ that touches all cities, where some connections $x \in X$ can be removed. The mathematical
+
+
+
+Figure 4: Example of TSP with 8 cities and the comparison of a TSP tour without (a) or with (b) a single interdiction. Even a single interdiction has a large effect on the final tour.
+
+problem to solve is given by
+
+$$
+\mathop{\min }\limits_{{y \in Y}}\mathop{\max }\limits_{{x \in X}}\langle z + x \odot w, y\rangle \tag{18}
+$$
+
+where $z, w$ are the cost matrices for the salesman and the interceptor. Similar to the dynamic programming experiment, we implemented the interdiction game using a two-stage min-max-min algorithm (Kämmerling and Kurtz 2020). Fig. 4 shows the effect of a single interdiction. The aim is to learn the weight matrices, trained with interdicted solutions on a subset of the cities. Tab. 4 reports the performance in terms of accuracy on both the shortest tour and the intervention. We use the Hamming and ${\ell }_{1}$ loss functions. We only allow for $B = 1$ intervention, but consider $k = 8, 10$ and 12 cities from a total of 100 cities. Single- and two-level approaches perform similarly in training and validation. Since the number of interdictions is limited to one, the performance of the single-level approach is not catastrophic, while the supervised learning approach completely fails on the validation set. BiGrad thus improves over single-level and SL approaches. Since BiGrad(PT) has performance similar to BiGrad(BB), PT is preferable in this scenario, since it requires fewer computational resources.
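A toy version of this game can also be sketched by brute force, assuming the interceptor commits first and models edge removal as a heavy penalty, after which the salesman takes the best remaining tour; the instance size and penalty value are illustrative, and the paper instead uses the two-stage algorithm of (Kämmerling and Kurtz 2020):

```python
import itertools
import numpy as np

# A minimal brute-force sketch of TSP with a single interdiction: the
# interceptor penalizes one edge (modeling removal), anticipating the
# salesman's best remaining tour. Instance and penalty are illustrative.

def tour_cost(D, perm):
    legs = zip(perm, perm[1:] + perm[:1])   # close the tour back to the start
    return sum(D[i, j] for i, j in legs)

def best_tour_cost(D):
    n = D.shape[0]
    return min(tour_cost(D, (0,) + p) for p in itertools.permutations(range(1, n)))

def interdicted_tour_cost(D, penalty=100.0):
    n = D.shape[0]
    worst = -np.inf
    for i in range(n):
        for j in range(i + 1, n):
            Dp = D.copy()
            Dp[i, j] = Dp[j, i] = D[i, j] + penalty   # interdict edge (i, j)
            worst = max(worst, best_tour_cost(Dp))
    return worst

rng = np.random.default_rng(0)
pts = rng.random((5, 2))                              # 5 random cities in the unit square
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
print(best_tour_cost(D), interdicted_tour_cost(D))
```

With five cities the salesman can always route around any single interdicted edge, so the worst-case cost stays bounded; this mirrors the observation above that a single interdiction is not catastrophic for the single-level approach.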
+
+## 6 Conclusions
+
+BiGrad generalizes existing single-level gradient estimation approaches and is able to incorporate Bilevel Programming as a learnable layer in modern machine learning frameworks, which makes it possible to model conflicting objectives as in adversarial attacks. The proposed novel gradient estimators are efficient and the framework is widely applicable to both continuous and discrete problems. BiGrad adds a marginal or comparable cost with respect to the complexity of computing the solution of the Bilevel Programming problem. We show how BiGrad is able to learn complex logic when the cost functions are multi-linear.
+
+## References
+
+Agrawal, A.; Amos, B.; Barratt, S.; Boyd, S.; Diamond, S.; and Kolter, Z. 2019a. Differentiable convex optimization layers. arXiv preprint arXiv:1910.12430.
+
+Agrawal, A.; Barratt, S.; Boyd, S.; Busseti, E.; and Moursi, W. M. 2019b. Differentiating through a cone program. arXiv preprint arXiv:1904.09043.
+
+Alesiani, F.; Yu, S.; Shaker, A.; and Yin, W. 2020. Towards Interpretable Multi-Task Learning Using Bilevel Programming. arXiv preprint arXiv:2009.05483.
+
+Amos, B. 2019. Differentiable optimization-based modeling for machine learning. Ph.D. thesis, Carnegie Mellon University.
+
+Amos, B.; and Kolter, J. Z. 2017. Optnet: Differentiable optimization as a layer in neural networks. In International Conference on Machine Learning, 136-145. PMLR.
+
+Bai, S.; Kolter, J. Z.; and Koltun, V. 2019. Deep equilibrium models. arXiv preprint arXiv:1909.01377.
+
+Baydin, A. G.; Pearlmutter, B. A.; Radul, A. A.; and Siskind, J. M. 2018. Automatic differentiation in machine learning: a survey. Journal of machine learning research, 18.
+
+Ben-Tal, A.; El Ghaoui, L.; and Nemirovski, A. 2009. Robust optimization. Princeton university press.
+
+Bengio, Y.; Léonard, N.; and Courville, A. 2013. Estimating or propagating gradients through stochastic neurons for conditional computation. arXiv preprint arXiv:1308.3432.
+
+Berthet, Q.; Blondel, M.; Teboul, O.; Cuturi, M.; Vert, J.-P.; and Bach, F. 2020. Learning with differentiable perturbed optimizers. arXiv preprint arXiv:2002.08676.
+
+Bertsekas, D. P. 1997. Nonlinear programming. Journal of the Operational Research Society, 48(3): 334-334.
+
+Boyd, S.; and Vandenberghe, L. 2004. Convex optimization. Cambridge university press.
+
+Busseti, E.; Moursi, W. M.; and Boyd, S. 2019. Solution refinement at regular points of conic problems. Computational Optimization and Applications, 74(3): 627-643.
+
+Chen, R. T.; Rubanova, Y.; Bettencourt, J.; and Duvenaud, D. 2018. Neural ordinary differential equations. In Anderson, D., ed., Neural Information Processing Systems, 22-30. American Institute of Physics.
+
+Colson, B.; Marcotte, P.; and Savard, G. 2007. An overview of bilevel optimization. Annals of operations research, 153(1): 235-256.
+
+de Avila Belbute-Peres, F.; Smith, K.; Allen, K.; Tenenbaum, J.; and Kolter, J. Z. 2018. End-to-end differentiable physics for learning and control. Advances in neural information processing systems, 31: 7178-7189.
+
+Dempe, S. 2018. Bilevel optimization: theory, algorithms and applications. TU Bergakademie Freiberg, Fakultät für Mathematik und Informatik.
+
+Domke, J. 2010. Implicit differentiation by perturbation. Advances in Neural Information Processing Systems, 23: 523- 531.
+
+Elliott, C. 2018. The simple essence of automatic differentiation. Proceedings of the ACM on Programming Languages, 2(ICFP): 1-29.
+
+Elmachtoub, A. N.; and Grigas, P. 2017. Smart" predict, then optimize". arXiv preprint arXiv:1710.08005.
+
+Elsken, T.; Metzen, J. H.; and Hutter, F. 2019. Neural architecture search: A survey. The Journal of Machine Learning Research, 20(1): 1997-2017.
+
+Ferber, A.; Wilder, B.; Dilkina, B.; and Tambe, M. 2020. Mipaal: Mixed integer program as a layer. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 34, 1504-1511.
+
+Fischetti, M.; Ljubić, I.; Monaci, M.; and Sinnl, M. 2019. Interdiction games and monotonicity, with application to knapsack problems. INFORMS Journal on Computing, 31(2): 390-410.
+
+Franceschi, L.; Frasconi, P.; Salzo, S.; Grazzi, R.; and Pontil, M. 2018. Bilevel programming for hyperparameter optimization and meta-learning. In International Conference on Machine Learning, 1568-1577. PMLR.
+
+Garcez, A.; Besold, T. R.; Raedt, L.; Földiak, P.; Hitzler, P.; Icard, T.; Kühnberger, K.-U.; Lamb, L. C.; Miikkulainen, R.; and Silver, D. L. 2015. Neural-symbolic learning and reasoning: contributions and challenges.
+
+Goldblum, M.; Fowl, L.; and Goldstein, T. 2019. Adversarially robust few-shot learning: A meta-learning approach. arXiv preprint arXiv:1910.00982.
+
+Goodfellow, I.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative adversarial nets. Advances in neural information processing systems, 27.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
+
+Grathwohl, W.; Choi, D.; Wu, Y.; Roeder, G.; and Duvenaud, D. 2017. Backpropagation through the void: Optimizing control variates for black-box gradient estimation. arXiv preprint arXiv:1711.00123.
+
+Greub, W. 1967. Multilinear Algebra. Springer Verlag.
+
+Guyomarch, J. 2017. Warcraft ii open-source map editor. URL http://github.com/war2/war2edit.
+
+Jang, E.; Gu, S.; and Poole, B. 2016. Categorical reparameterization with gumbel-softmax. arXiv preprint arXiv:1611.01144.
+
+Kalman, R. E. 1964. When is a linear control system optimal?
+
+Kämmerling, N.; and Kurtz, J. 2020. Oracle-based algorithms for binary two-stage robust optimization. Computational Optimization and Applications, 77(2): 539-569.
+
+Kleinert, T.; Labbé, M.; Ljubić, I.; and Schmidt, M. 2021. A Survey on Mixed-Integer Programming Techniques in Bilevel Optimization.
+
+Li, K.; and Malik, J. 2016. Learning to optimize. arXiv preprint arXiv:1606.01885.
+
+Li, Y.; Song, L.; Wu, X.; He, R.; and Tan, T. 2019. Learning a bi-level adversarial network with global and local perception for makeup-invariant face verification. Pattern Recognition, 90: 99-108.
+
+Liu, H.; Simonyan, K.; and Yang, Y. 2018. Darts: Differentiable architecture search. arXiv preprint arXiv:1806.09055.
+
+Lorraine, J.; and Duvenaud, D. 2018. Stochastic hyperparameter optimization through hypernetworks. arXiv preprint arXiv:1802.09419.
+
+MacKay, M.; Vicol, P.; Lorraine, J.; Duvenaud, D.; and Grosse, R. 2019. Self-tuning networks: Bilevel optimization of hyperparameters using structured best-response functions. arXiv preprint arXiv:1903.03088.
+
+Maddison, C. J.; Mnih, A.; and Teh, Y. W. 2016. The concrete distribution: A continuous relaxation of discrete random variables. arXiv preprint arXiv:1611.00712.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
+
+Mandi, J.; and Guns, T. 2020. Interior Point Solving for LP-based prediction+ optimisation. arXiv preprint arXiv:2010.13943.
+
+Ouattara, A.; and Aswani, A. 2018. Duality approach to bilevel programs with a convex lower level. In 2018 Annual American Control Conference (ACC), 1388-1395. IEEE.
+
+Paulus, M. B.; Maddison, C. J.; and Krause, A. 2021. Rao-Blackwellizing the Straight-Through Gumbel-Softmax Gradient Estimator.
+
+Pogančić, M. V.; Paulus, A.; Musil, V.; Martius, G.; and Rolinek, M. 2019. Differentiation of blackbox combinatorial solvers. In International Conference on Learning Representations.
+
+Raissi, M.; Perdikaris, P.; and Karniadakis, G. E. 2019. Physics-informed neural networks: A deep learning framework for solving forward and inverse problems involving nonlinear partial differential equations. Journal of Computational Physics, 378: 686-707.
+
+Rajeswaran, A.; Finn, C.; Kakade, S.; and Levine, S. 2019. Meta-learning with implicit gradients. arXiv preprint arXiv:1909.04630.
+
+Saad, Y.; and Schultz, M. H. 1986. GMRES: A generalized minimal residual algorithm for solving nonsymmetric linear systems. SIAM Journal on scientific and statistical computing, 7(3): 856-869.
+
+Sinha, A.; Malo, P.; and Deb, K. 2017. A review on bilevel optimization: from classical to evolutionary approaches and applications. IEEE Transactions on Evolutionary Computation, 22(2): 276-295.
+
+Stackelberg, H. v.; et al. 1952. Theory of the market economy.
+
+Vahdat, A.; Mallya, A.; Liu, M.-Y.; and Kautz, J. 2020. Unas: Differentiable architecture search meets reinforcement learning. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 11266-11275.
+
+Wang, P.-W.; Donti, P.; Wilder, B.; and Kolter, Z. 2019. Sat-net: Bridging deep learning and logical reasoning using a differentiable satisfiability solver. In International Conference on Machine Learning, 6545-6554. PMLR.
+
+Wang, Y.; and Boyd, S. 2010. Fast evaluation of quadratic control-Lyapunov policy. IEEE Transactions on Control Systems Technology, 19(4): 939-946.
+
+Wilder, B.; Dilkina, B.; and Tambe, M. 2019. Melding the data-decisions pipeline: Decision-focused learning for combinatorial optimization. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 33, 1658-1665.
+
+Williams, R. J. 1992. Simple statistical gradient-following algorithms for connectionist reinforcement learning. Machine learning, 8(3-4): 229-256.
+
+Yang, Z.; Hu, Z.; Salakhutdinov, R.; and Berg-Kirkpatrick, T. 2017. Improved variational autoencoders for text modeling using dilated convolutions. In International conference on machine learning, 3881-3890. PMLR.
+
## A Supplementary Material: BiGrad: Differentiating through Bilevel Optimization Programming
+
+### A.1 Relationship with other related work
+
Predict then optimize Predict-then-Optimize (two-stage) approaches (Elmachtoub and Grigas 2017; Ferber et al. 2020), as well as the linear programming and submodular maximization methods of (Wilder, Dilkina, and Tambe 2019), solve optimization problems in which the cost variable or the minimization function is directly observable. In contrast, our approach only has access to a loss function on the output of the bilevel problem, which allows it to be used as a layer.
+
+### A.2 Proofs
+
+Proof of Linear Equality constraints. Here we show that
+
+$$
+x\left( u\right) = {x}_{0} + {A}^{ \bot }u \tag{19}
+$$
+
includes all solutions of ${Ax} = b$ . First, $A{A}^{ \bot } = 0$ and $A{x}_{0} = b$ by definition. This implies that ${Ax}\left( u\right) = A\left( {{x}_{0} + {A}^{ \bot }u}\right) = A{x}_{0} = b$ , so ${Ax}\left( u\right) = b$ for all $u$ . Conversely, for any solution ${x}^{\prime }$ , the difference ${x}^{\prime } - {x}_{0}$ belongs to the null space of $A$ , since $A\left( {{x}^{\prime } - {x}_{0}}\right) = A{x}^{\prime } - A{x}_{0} = b - b = 0$ . The null space of $A$ has dimension $n - \rho \left( A\right)$ . If $\rho \left( A\right) = n$ , where $A \in {\mathbb{R}}^{m \times n}, m \geq n$ , then there is only one solution $x = {x}_{0} = {A}^{ \dagger }b$ , with ${A}^{ \dagger }$ the pseudo-inverse of $A$ . If $\rho \left( A\right) < n$ , then ${A}^{ \bot }$ , with $\rho \left( {A}^{ \bot }\right) = n - \rho \left( A\right)$ , is a basis of the null space of $A$ , so every vector satisfying ${Ax}\left( u\right) = b$ is reached. The same argument applies to $y\left( v\right) = {y}_{0} + {B}^{ \bot }v$ and ${By}\left( v\right) = c$ .
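As a concrete numerical check (the matrix, right-hand side, and the SVD-based construction of $A^{\bot}$ are our own illustrative choices, not from the paper), the parameterization $x(u) = x_0 + A^{\bot}u$ can be sketched in a few lines:

```python
import numpy as np

# Sketch: parameterize all solutions of Ax = b as x(u) = x0 + A_perp @ u,
# where x0 = pinv(A) @ b is one particular solution and the columns of
# A_perp span the null space of A.
def null_space_parameterization(A, b):
    x0 = np.linalg.pinv(A) @ b          # one particular solution
    # SVD: the rows of Vt beyond rank(A) span the null space of A
    _, s, Vt = np.linalg.svd(A)
    rank = int(np.sum(s > 1e-10))
    A_perp = Vt[rank:].T                # shape (n, n - rank)
    return x0, A_perp

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([1.0, 2.0])
x0, A_perp = null_space_parameterization(A, b)

# Any u gives a feasible x(u): A @ x(u) = b, since A @ A_perp = 0.
u = np.array([0.7])
x_u = x0 + A_perp @ u
```

Here $\rho(A) = 2$ and $n = 3$, so the null space is one-dimensional and $A^{\bot}$ has a single column.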
+
Proof of Theorem 1. The second equation is derived by imposing the optimality condition on the inner problem. Since there are no inequality or equality constraints, the optimal solution must make the gradient w.r.t. $y$ vanish, thus $G = {\nabla }_{y}g = 0$ . The first equation expresses the optimality of the $x$ variable w.r.t. the total derivative, or hyper-gradient: $0 = {\mathrm{d}}_{x}f = {\nabla }_{x}f + {\nabla }_{y}f{\nabla }_{x}y$ . To compute the variation of $y$ , i.e. ${\nabla }_{x}y$ , we apply the implicit function theorem to the inner problem, i.e. ${\nabla }_{x}G + {\nabla }_{y}G{\nabla }_{x}y = 0$ , obtaining ${\nabla }_{x}y = - {\nabla }_{y}^{-1}G{\nabla }_{x}G$ .
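The implicit-function step ${\nabla }_{x}y = -{\nabla }_{y}^{-1}G{\nabla }_{x}G$ can be checked on a toy quadratic inner problem; the function $g$ below is our own example, not from the paper:

```python
import numpy as np

# Toy inner problem g(x, y) = 0.5*a*y**2 - x*y, so G = dg/dy = a*y - x.
# The minimizer is y*(x) = x/a, hence dy/dx = 1/a, which must match
# the implicit-function formula -(dG/dy)^{-1} * (dG/dx) = -(1/a)*(-1).
a = 3.0
x = 2.0

y_star = x / a                     # argmin_y g(x, y)
dG_dy = a                          # second derivative of g in y
dG_dx = -1.0
dy_dx_implicit = -dG_dx / dG_dy    # implicit function theorem

# finite-difference check of dy/dx through the inner argmin
eps = 1e-6
dy_dx_fd = ((x + eps) / a - (x - eps) / a) / (2 * eps)
```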
+
Proof of Theorem 2. In order to prove the theorem, we use the Discrete Adjoint Method (DAM). Let us consider a cost function or functional $L\left( {x, y, z}\right)$ evaluated at the output of our system. Our system is defined by the two equations $F = 0, G = 0$ from Theorem 1. Let us first consider the total variations $\mathrm{d}L,\mathrm{\;d}F = 0,\mathrm{\;d}G = 0$ , where the last two conditions hold by definition of the bilevel problem. Expanding the total variations, we obtain
+
+$$
+\mathrm{d}L = {\nabla }_{x}L\mathrm{\;d}x + {\nabla }_{y}L\mathrm{\;d}y + {\nabla }_{z}L\mathrm{\;d}z
+$$
+
+$$
+\mathrm{d}F = {\nabla }_{x}F\mathrm{\;d}x + {\nabla }_{y}F\mathrm{\;d}y + {\nabla }_{z}F\mathrm{\;d}z
+$$
+
+$$
+\mathrm{d}G = {\nabla }_{x}G\mathrm{\;d}x + {\nabla }_{y}G\mathrm{\;d}y + {\nabla }_{z}G\mathrm{\;d}z
+$$
+
We now consider

$$
\mathrm{d}L + \mathrm{d}F\lambda + \mathrm{d}G\gamma = \left\lbrack {{\nabla }_{x}L + {\nabla }_{x}{F\lambda } + {\nabla }_{x}{G\gamma }}\right\rbrack \mathrm{d}x + \left\lbrack {{\nabla }_{y}L + {\nabla }_{y}{F\lambda } + {\nabla }_{y}{G\gamma }}\right\rbrack \mathrm{d}y + \left\lbrack {{\nabla }_{z}L + {\nabla }_{z}{F\lambda } + {\nabla }_{z}{G\gamma }}\right\rbrack \mathrm{d}z.
$$

We require the first two bracketed terms to be zero, which determines the two free variables $\lambda ,\gamma$ :
+
+$$
+{\nabla }_{x}L + {\nabla }_{x}{F\lambda } + {\nabla }_{x}{G\gamma } = 0 \tag{20}
+$$
+
+$$
+{\nabla }_{y}L + {\nabla }_{y}{F\lambda } + {\nabla }_{y}{G\gamma } = 0 \tag{21}
+$$
+
+or in matrix form
+
+$$
\left| \begin{array}{ll} {\nabla }_{x}F & {\nabla }_{x}G \\ {\nabla }_{y}F & {\nabla }_{y}G \end{array}\right| \left| \begin{array}{l} \lambda \\ \gamma \end{array}\right| = - \left| \begin{array}{l} {\nabla }_{x}L \\ {\nabla }_{y}L \end{array}\right|
+$$
+
We can now compute ${\mathrm{d}}_{z}L = {\nabla }_{z}L + {\nabla }_{z}{F\lambda } + {\nabla }_{z}{G\gamma }$ with $\lambda ,\gamma$ from the previous equation.
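The adjoint computation can be sketched numerically; all partial derivatives below are made-up scalars chosen only to exercise Eqs. 20-21 and the final assembly of ${\mathrm{d}}_{z}L$:

```python
import numpy as np

# Made-up scalar partials of F, G, and L (illustration only).
dF = {'x': 2.0, 'y': 0.0, 'z': 1.0}
dG = {'x': 1.0, 'y': 3.0, 'z': 2.0}
dL = {'x': 1.0, 'y': 1.0, 'z': 0.0}

# Eqs. 20-21 in matrix form:
# [dF/dx dG/dx; dF/dy dG/dy] [lam; gam] = -[dL/dx; dL/dy]
M = np.array([[dF['x'], dG['x']],
              [dF['y'], dG['y']]])
rhs = -np.array([dL['x'], dL['y']])
lam, gam = np.linalg.solve(M, rhs)

# Total derivative: d_z L = dL/dz + dF/dz * lam + dG/dz * gam
dzL = dL['z'] + dF['z'] * lam + dG['z'] * gam
```

With these values the system gives $\lambda = \gamma = -1/3$ and ${\mathrm{d}}_{z}L = -1$.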
+
+
+
+Figure 5: Discrete Bilevel Variables: Dependence diagram
+
Proof of Theorem 3. The partial derivatives are obtained from the perturbed discrete minimization problems defined by Eqs. 24. We first notice that ${\nabla }_{x}\mathop{\min }\limits_{{y \in Y}}\langle x, y\rangle = \arg \mathop{\min }\limits_{{y \in Y}}\langle x, y\rangle$ . This follows from $\mathop{\min }\limits_{{y \in Y}}\langle x, y\rangle = \left\langle {x,{y}^{ * }}\right\rangle$ , where ${y}^{ * } = \arg \mathop{\min }\limits_{{y \in Y}}\langle x, y\rangle$ , by taking the gradient w.r.t. the continuous variable $x$ ; Eqs. 23 are the expected values of the perturbed minimization problems. Thus, if we compute the gradient of the perturbed minimizer, we obtain the optimal solution, properly scaled by the inner-product matrix. For example ${\nabla }_{x}{\widetilde{\Phi }}_{\eta } = A{x}^{ * }\left( {z, y}\right)$ , with $A$ the inner-product matrix. To compute the variation w.r.t. the two parameter variables, we have $\mathrm{d}L = {\nabla }_{x}L\mathrm{\;d}x + {\nabla }_{y}L\mathrm{\;d}y + {\nabla }_{z}L\mathrm{\;d}z + {\nabla }_{w}L\mathrm{\;d}w$ and $\mathrm{d}w/\mathrm{d}z = 0,\mathrm{\;d}z/\mathrm{d}w = 0$ from the dependence diagram of Fig. 5.
+
+### A.3 Gradient Estimation based on perturbation
+
+We can use the gradient estimator using the perturbation approach proposed in (Berthet et al. 2020). We thus have
+
+$$
+{\nabla }_{z}x\left( {z, y}\right) = {A}^{-1}{\nabla }_{{z}^{2}}^{2}{\widetilde{\Phi }}_{\eta }\left( {z, y}\right) {|}_{\eta \rightarrow 0} \tag{22a}
+$$
+
+$$
+{\left. {\nabla }_{w}y\left( w, z\right) = {C}^{-1}{\nabla }_{{w}^{2}}^{2}{\widetilde{\Psi }}_{\eta }\left( w, z\right) \right| }_{\eta \rightarrow 0} \tag{22b}
+$$
+
+$$
+{\nabla }_{x}y\left( {x, w}\right) = {\left. {D}^{-1}{\nabla }_{{x}^{2}}^{2}{\widetilde{\Theta }}_{\eta }\left( x, w\right) \right| }_{\eta \rightarrow 0} \tag{22c}
+$$
+
+$$
+{\nabla }_{y}x\left( {z, y}\right) = {\left. {B}^{-1}{\nabla }_{{y}^{2}}^{2}{\widetilde{W}}_{\eta }\left( z, y\right) \right| }_{\eta \rightarrow 0} \tag{22d}
+$$
+
+$$
+{\nabla }_{z}y = {\nabla }_{x}y{\nabla }_{z}x \tag{22e}
+$$
+
+and
+
+$$
+{\widetilde{\Phi }}_{\eta }\left( {z, y}\right) = {\mathbb{E}}_{u \sim U}\Phi \left( {z + {\eta u}, y}\right) \tag{23a}
+$$
+
+$$
+{\widetilde{\Psi }}_{\eta }\left( {w, x}\right) = {\mathbb{E}}_{u \sim U}\Psi \left( {w + {\eta u}, x}\right) \tag{23b}
+$$
+
+$$
+{\widetilde{\Theta }}_{\eta }\left( {x, w}\right) = {\mathbb{E}}_{u \sim U}\Psi \left( {w, x + {\eta u}}\right) \tag{23c}
+$$
+
+$$
+{\widetilde{W}}_{\eta }\left( {y, z}\right) = {\mathbb{E}}_{u \sim U}\Phi \left( {z, y + {\eta u}}\right) \tag{23d}
+$$
+
while
+
+$$
+\Phi \left( {z, y}\right) = \mathop{\min }\limits_{{x \in X}}\langle z, x{\rangle }_{A} + \langle y, x{\rangle }_{B} \tag{24a}
+$$
+
+$$
+\Psi \left( {w, x}\right) = \mathop{\min }\limits_{{y \in Y}}\langle w, y{\rangle }_{C} + \langle x, y{\rangle }_{D} \tag{24b}
+$$
+
which are valid under the conditions of (Berthet et al. 2020), where $\eta$ is a hyper-parameter.
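A minimal Monte-Carlo sketch of such a perturbed-minimizer gradient (the polytope, a set of one-hot basis vectors, and all constants are our own illustrative choices): for $X = \{e_1,\dots,e_n\}$ we have $\min_{x \in X}\langle z, x\rangle = \min_i z_i$, and the gradient of the perturbed objective is the expected one-hot indicator of the perturbed argmin.

```python
import numpy as np

# Estimate E_u[one_hot(argmin_i (z + eta*u)_i)] by sampling Gaussian
# perturbations u, following the style of (Berthet et al. 2020).
rng = np.random.default_rng(0)
z = np.array([0.3, 0.1, 0.5])
eta, n_samples = 0.1, 20000

counts = np.zeros_like(z)
for _ in range(n_samples):
    u = rng.standard_normal(z.shape)
    counts[np.argmin(z + eta * u)] += 1
grad_estimate = counts / n_samples   # a point in the probability simplex
```

The estimate concentrates on the unperturbed minimizer (index 1 here) as $\eta \to 0$; larger $\eta$ smooths it toward the uniform distribution.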
+
+### A.4 Alternative derivation
+
Let us consider the problem $\mathop{\min }\limits_{{x \in K}}\langle z, x{\rangle }_{A}$ and define ${\Omega }_{x}$ , a penalty term that ensures $x \in K$ . We can define the generalized Lagrangian $\mathbb{L}\left( {z, x,\Omega }\right) = \langle z, x{\rangle }_{A} + {\Omega }_{x}$ . Examples are ${\Omega }_{x} = {\lambda }^{T}\left| {x - K\left( x\right) }\right|$ or ${\Omega }_{x} = - \ln \left| {x - K\left( x\right) }\right|$ , where $K\left( x\right)$ is the projection onto $K$ . To solve the Lagrangian, we solve the unconstrained problem $\mathop{\min }\limits_{x}\mathop{\max }\limits_{{\Omega }_{x}}\mathbb{L}\left( {z, x,{\Omega }_{x}}\right)$ . At the optimal point ${\nabla }_{x}\mathbb{L} = 0$ . Let us define $F = {\nabla }_{x}\mathbb{L} = {A}^{T}z + {\Omega }_{x}^{\prime }$ ; then ${\nabla }_{x}F = {\Omega }_{x}^{\prime \prime }$ and ${\nabla }_{z}F = {A}^{T}$ . Given $F\left( {x, z}\right) = 0$ and a cost function $L\left( {x, z}\right)$ , we can compute ${\mathrm{d}}_{z}L = {\nabla }_{z}L - {\nabla }_{x}L{\nabla }_{x}^{-1}F{\nabla }_{z}F$ . Since $F\left( {x, z,{\Omega }_{x}}\right) = 0$ , we can apply the previous result: ${\mathrm{d}}_{z}L = {\nabla }_{z}L - {\nabla }_{x}L{\Omega }_{x}^{\prime \prime - 1}{A}^{T}$ . If we assume ${\Omega }_{x}^{\prime \prime } = I$ and ${\nabla }_{z}L = 0$ , then ${\mathrm{d}}_{z}L = - {\nabla }_{x}L{A}^{T}.$
+
+### A.5 Memory Efficiency
+
For continuous optimization programming, by separating the computation of the solution from the computation of the gradient around the current solution, we 1) compute the gradient more efficiently: in particular, we compute second-order gradients taking advantage of the vector-Jacobian product (push-back operator) formulation, without explicitly inverting and thus materializing the Jacobian or Hessian matrices; and 2) can use more advanced, non-differentiable solution techniques for the bilevel optimization problem that would be difficult to integrate using automatically differentiable operations. Using VJPs, we reduce memory use from $O\left( {n}^{2}\right)$ to $O\left( n\right)$ . Indeed, using an iterative solver, such as the generalized minimal residual method (GMRES) (Saad and Schultz 1986), we only need to evaluate the gradients of Eq. 5, without inverting or materializing the large matrix, computing only matrix-vector products. Similarly, we use the Conjugate Gradient (CG) method to compute Eq. 4, which requires only evaluating the gradient at the current solution, neither inverting nor materializing the Jacobian matrix. A differentiable implementation of a bilevel solver would instead have a memory complexity of $O\left( {Tn}\right)$ , where $T$ is the number of iterations of the bilevel algorithm.
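The memory argument can be illustrated with an iterative solve that touches the operator only through matrix-vector products; the tridiagonal operator below is a stand-in of our own choosing for the Jacobian blocks of Eq. 5, whose matvec would in practice be supplied by a VJP:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

# Solve J @ lam = v with GMRES using only a matvec closure, never forming
# or inverting the n-by-n matrix: O(n) memory instead of O(n^2).
n = 200

def matvec(x):
    # tridiagonal stencil [-1, 3, -1] applied without materializing J
    y = 3.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

J = LinearOperator((n, n), matvec=matvec)
v = np.ones(n)
lam, info = gmres(J, v)   # info == 0 on convergence
```

The same pattern applies to CG when the operator is symmetric positive definite, as this stand-in happens to be.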
+
+### A.6 Experimental Setup and Computational Resources
+
For Optimal Control with adversarial disturbance we follow a setup similar to (Agrawal et al. 2019a), adding the adversarial noise as described in the experiments. For Combinatorial Optimization, we follow the setup of (Pogančić et al. 2019). The dataset is generated by solving the bilevel problem on the same data as (Pogančić et al. 2019). For Section 5.3, we use the Warcraft terrain tiles and generate optimal bilevel solutions with the correct parameters $(z, w)$ , where $z$ is the terrain transit cost and $w$ is the interdiction cost, held constant at 1 in our experiment. $X$ is the set of all feasible interdictions; in our experiment we allow a maximum number of interdictions $B$ . For Section 5.4, on the other hand, $z$ represents the true distances among cities and $w$ a matrix of interdiction costs, both unknown to the model. $X$ is the set of all possible interdictions. In these experiments, we solved the bilevel problem using the min-max-min algorithm (Kämmerling and Kurtz 2020). For the Adversarial Attack, we used two convolutional layers with max-pooling and ReLU activations, followed by the discrete layer of size $m = {2024}, B = {100}, Q = 0,{10}$ . A final linear classification layer is used to classify CIFAR10. We average over 3 runs of 50 epochs, with learning rate ${lr} = 3\mathrm{e}{-4}$ and the Adam optimizer. Experiments were conducted on a standard server with 8 CPUs, 64 GB of RAM, and a GeForce RTX 2080 GPU with 6 GB of memory.
+
+### A.7 Jacobian-Vector and Vector-Jacobian Products
+
The Jacobian-Vector Product (JVP) is the operation that computes the directional derivative ${J}_{f}\left( x\right) u$ , with direction $u \in {\mathbb{R}}^{m}$ , of the multi-dimensional operator $f : {\mathbb{R}}^{m} \rightarrow {\mathbb{R}}^{n}$ , with respect to $x \in {\mathbb{R}}^{m}$ , where ${J}_{f}\left( x\right)$ is the Jacobian of $f$ evaluated at $x$ . On the other hand, the Vector-Jacobian Product (VJP) operation, with direction $v \in {\mathbb{R}}^{n}$ , computes the adjoint directional derivative ${v}^{T}{J}_{f}\left( x\right)$ . JVPs and VJPs are the essential ingredients of automatic differentiation (Elliott 2018; Baydin et al. 2018).
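A minimal numerical illustration of both products (the function $f$ is our own toy example; an autodiff framework would supply these without an explicit Jacobian):

```python
import numpy as np

# For f(x) = (x0*x1, x0+x1), the Jacobian is J = [[x1, x0], [1, 1]];
# a JVP is J @ u (input-space direction), a VJP is v @ J (output-space).
def f(x):
    return np.array([x[0] * x[1], x[0] + x[1]])

def jacobian(x):
    return np.array([[x[1], x[0]],
                     [1.0,  1.0]])

x = np.array([2.0, 3.0])
u = np.array([1.0, 0.0])      # direction for the JVP
v = np.array([0.0, 1.0])      # direction for the VJP

jvp = jacobian(x) @ u         # directional derivative J_f(x) u
vjp = v @ jacobian(x)         # adjoint directional derivative v^T J_f(x)

# finite-difference check of the JVP
eps = 1e-6
jvp_fd = (f(x + eps * u) - f(x - eps * u)) / (2 * eps)
```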
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HvRAM-dpmEv/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HvRAM-dpmEv/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..cbbb359476108b2f3ccafe047344f730c591bf3d
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/HvRAM-dpmEv/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,417 @@
+§ BIGRAD: DIFFERENTIATING THROUGH BILEVEL OPTIMIZATION PROGRAMMING
+
+§ ABSTRACT
+
Integrating mathematical programming, and in particular Bilevel Optimization Programming, within deep learning architectures has vast applications in various domains, from machine learning to engineering. Bilevel programming is able to capture complex interactions when two actors have conflicting objectives. Previous approaches only consider single-level programming. In this paper, we therefore propose Differentiating through Bilevel Optimization Programming (BiGrad), an approach for end-to-end learning of models that use Bilevel Programming as a layer. BiGrad has wide applicability and can be used in modern machine learning frameworks. We focus on two classes of Bilevel Programming: continuous and combinatorial optimization problems. The framework extends existing approaches for single-level optimization programming. We describe a class of gradient estimators for the combinatorial case which reduces the requirements in terms of computational complexity; for the continuous case, the gradient computation takes advantage of the push-back approach (i.e. the vector-Jacobian product) for an efficient implementation. Experiments suggest that the proposed approach successfully extends existing single-level approaches to Bilevel Programming.
+
+§ 1 INTRODUCTION
+
Neural networks provide unprecedented improvements in perception tasks; however, they struggle to learn basic logic operations (Garcez et al. 2015) or relationships. When modelling complex systems, for example decision systems, it is beneficial not only to integrate optimization components into a larger differentiable system, but also to use general-purpose solvers (e.g. for Integer Linear Programming or Nonlinear Programming (Bertsekas 1997; Boyd and Vandenberghe 2004)) and problem-specific implementations to discover the governing discrete or continuous relationships. Recent approaches thus propose differentiable layers that incorporate either quadratic (Amos and Kolter 2017), convex (Agrawal et al. 2019a), cone (Agrawal et al. 2019b), equilibrium (Bai, Kolter, and Koltun 2019), SAT (Wang et al. 2019) or combinatorial (Pogančić et al. 2019; Mandi and Guns 2020; Berthet et al. 2020) programs. Using optimization programs as layers of a differentiable system requires computing the gradients through these layers, which is either specific to the optimization problem or zero almost everywhere when dealing with discrete variables. Proposed gradient estimates either relax the combinatorial problem (Mandi and Guns 2020), perturb the input variables (Berthet et al. 2020; Domke 2010) or linearly approximate the loss function (Pogančić et al. 2019).
+
These approaches, though, do not allow one to directly express models with conflicting objectives, for example in structural learning (Elsken, Metzen, and Hutter 2019) or adversarial systems (Goodfellow et al. 2014). We thus consider the use of bilevel optimization programming as a layer. A Bilevel Optimization Program (Kleinert et al. 2021; Colson, Marcotte, and Savard 2007; Dempe 2018; Stackelberg et al. 1952), also known as a generalization of Stackelberg Games, is the extension of a single-level optimization program where the solution of one optimization problem (the outer problem) depends on the solution of another optimization problem (the inner problem). This class of problems can model interactions between two actors ${}^{1}$ , where the action of the first depends on knowledge of the counter-action of the second. Bilevel Programming finds application in various domains, such as electricity networks, economics, environmental policy, chemical plants, defence and planning (Dempe 2018; Sinha, Malo, and Deb 2017). In general, Bilevel programs are NP-hard (Sinha, Malo, and Deb 2017); they require specialized solvers, and it is not clear how to extend previous approaches, since the standard chain rule is not directly applicable.
+
By modelling the bilevel optimization problem as an implicit layer (Bai, Kolter, and Koltun 2019), we consider the more general case where 1) the solution of the bilevel problem is computed separately by a bilevel solver, thus leveraging powerful solvers developed over several decades (Kleinert et al. 2021); and 2) the computation of the gradient is more efficient, since we do not have to propagate the gradient through the solver. We thus propose Differentiating through Bilevel Optimization Programming (BiGrad):
+
 * BiGrad comprises a forward pass, where existing solvers can be used, and a backward pass, where BiGrad estimates gradients for both continuous and combinatorial problems based on sensitivity analysis;

 * we show how the proposed gradient estimators relate to their single-level analogues and that the proposed approach is beneficial in both the continuous and discrete cases.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
${}^{1}$ In the following section we provide concrete examples of applications.
+
+
+Figure 1: The Forward and backward passes of a Bilevel Programming layer: the larger system has input $d$ and output $u = {h}_{\psi } \circ H \circ {h}_{\theta }\left( d\right)$ ; the bilevel layer has input $z$ and output $x,y$ , which are solutions of a Bilevel optimization problem represented by the implicit function $H\left( {x,y,z}\right) = 0$ .
+
+§ EXAMPLES OF BILEVEL OPTIMIZATION PROBLEMS
+
Physical System with control sub-system example Bilevel Programming can model the interaction of a dynamical system ($x$) and its control sub-system ($y$), as for example an industrial plant or a physical process. The control sub-system changes based on the state of the underlying dynamical system, which itself solves a physics-constrained optimization problem (Raissi, Perdikaris, and Karniadakis 2019; de Avila Belbute-Peres et al. 2018).
+
Interdiction problem example Two-actor discrete interdiction problems (Fischetti et al. 2019), where one actor ($x$) tries to interdict the actions of another actor ($y$) under budget constraints, arise in various areas, from marketing, protecting critical infrastructure and preventing drug smuggling to hindering nuclear weapon proliferation.
+
+Min-max problem example Min-max problems are used to model robust optimization problems (Ben-Tal, El Ghaoui, and Nemirovski 2009), where a second variable represents the environment and is constrained to an uncertain set that captures the unknown variability of the environment.
+
Adversarial attack in Machine Learning A bilevel program represents the interaction between a machine learning model ($y$) and a potential attacker ($x$) (Goldblum, Fowl, and Goldstein 2019) and is used to increase resilience to intentional or unintended adversarial attacks.
+
+§ 2 DIFFERENTIABLE BILEVEL OPTIMIZATION LAYER
+
We model the Bilevel Optimization Program as an Implicit Layer (Bai, Kolter, and Koltun 2019), i.e. as the solution of an implicit equation $H\left( {x,y,z}\right) = 0$ , in order to derive the gradient using the implicit function theorem, where $z$ is given and represents the parameters of our system that we want to estimate, and $x,y$ are output variables (Fig. 1). We also assume we have access ${}^{2}$ to a solver $\left( {x,y}\right) = {\operatorname{Solve}}_{H}\left( z\right)$ . The Bilevel Optimization Program is then used as a layer of a differentiable system, whose input is $d$ and whose output is given by $u = {h}_{\psi } \circ {\operatorname{Solve}}_{H} \circ {h}_{\theta }\left( d\right) = {h}_{\psi ,\theta }\left( d\right)$ , where $\circ$ is the function composition operator. We want to learn the parameters $\psi ,\theta$ of the function ${h}_{\psi ,\theta }\left( d\right)$ that minimize the loss function $L\left( {{h}_{\psi ,\theta }\left( d\right) ,u}\right)$ , using the training data ${D}^{\mathrm{{tr}}} = \left\{ {\left( d,u\right) }_{i = 1}^{{N}^{\mathrm{{tr}}}}\right\}$ . In order to perform end-to-end training, we need to back-propagate the gradient through the Bilevel Optimization Program layer, which cannot be accomplished using the chain rule alone.
+
+§ 2.1 CONTINUOUS BILEVEL PROGRAMMING
+
We now present the definition of the continuous Bilevel Optimization problem, which comprises two non-linear functions $f,g$ , as
+
+$$
+\mathop{\min }\limits_{{x \in X}}f\left( {x,y,z}\right) \;y \in \arg \mathop{\min }\limits_{{y \in Y}}g\left( {x,y,z}\right) \tag{1}
+$$
+
where the left problem is called the outer optimization problem and solves for the variable $x \in X$ , with $X = {\mathbb{R}}^{n}$ . The right problem is called the inner optimization problem and solves for the variable $y \in Y$ , with $Y = {\mathbb{R}}^{m}$ . The variable $z \in {\mathbb{R}}^{p}$ is the input variable and is a parameter of the bilevel problem. Min-max is a special case of the Bilevel optimization problem, $\mathop{\min }\limits_{{y \in Y}}\mathop{\max }\limits_{{x \in X}}g\left( {x,y,z}\right)$ , where the minimization functions are equal and opposite in sign.
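A tiny worked instance of Eq. 1 (our own choice of $f, g$, with the inner problem solvable in closed form) illustrates how the outer variable is updated with the hypergradient:

```python
# Outer: f(x, y) = (x - 1)**2 + y**2,  inner: g(x, y) = (y - x)**2.
# The inner argmin is y(x) = x, so the hypergradient is
#   d_x f = 2*(x - 1) + 2*y * dy/dx = 2*(x - 1) + 2*x,
# which vanishes at x* = y* = 0.5.
x = 0.0
for _ in range(200):                       # plain hypergradient descent
    y = x                                  # solve the inner problem exactly
    dydx = 1.0                             # sensitivity of the inner solution
    hypergrad = 2 * (x - 1) + 2 * y * dydx
    x -= 0.1 * hypergrad
y = x                                      # inner solution at the converged x
```

Note that dropping the sensitivity term `2 * y * dydx` (i.e. ignoring the inner problem's reaction) would converge to $x = 1$ instead, which is not a solution of the bilevel problem.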
+
+§ 2.2 COMBINATORIAL BILEVEL PROGRAMMING
+
When the variables are discrete, we restrict the objective functions to be multi-linear (Greub 1967). Various important combinatorial problems are linear in the discrete variables (e.g. VRP, TSP, SAT ${}^{3}$ ); one example form is the following
+
+$$
+\mathop{\min }\limits_{{x \in X}}\langle z,x{\rangle }_{A} + \langle y,x{\rangle }_{B},y \in \arg \mathop{\min }\limits_{{y \in Y}}\langle w,y{\rangle }_{C} + \langle x,y{\rangle }_{D}
+$$
+
+(2)
+
The variables $x,y$ have domains $x \in X,y \in Y$ , where $X,Y$ are convex polytopes constructed as the convex hulls of sets of distinct points $\mathcal{X} \subset {\mathbb{R}}^{n},\mathcal{Y} \subset {\mathbb{R}}^{m}$ . The outer and inner problems are Integer Linear Programs (ILPs). The multi-linear operator is represented by the inner product $\langle x,y{\rangle }_{A} = {x}^{T}{Ay}$ . We consider only the case where the outer and inner problems have separate parameters, $z \in {\mathbb{R}}^{p}$ and $w \in {\mathbb{R}}^{q}$ .
+
+§ 3 BIGRAD: GRADIENT ESTIMATION
+
Although the discrete and continuous variable cases share a similar structure, the approach differs when evaluating the gradients. We can identify the following common basic steps (Alg. 1):
+
1. In the forward pass, solve the combinatorial or continuous Bilevel Optimization problem as defined in Eq. 1 (or Eq. 2) using an existing solver;

2. During the backward pass, compute the gradients ${\mathrm{d}}_{z}L$ (and ${\mathrm{d}}_{w}L$ ) using the suggested estimators (Sec. 3.1 and Sec. 3.2), starting from the gradients on the output variables ${\nabla }_{x}L$ and ${\nabla }_{y}L$ .
+
${}^{2}$ Finding the solution of the bilevel problem is outside the scope of this work.
+
+${}^{3}$ Vehicle Routing Problem, Boolean satisfiability problem.
+
Algorithm 1: BiGrad Layer: Bilevel Optimization Programming Layer using BiGrad

1. Input: Training sample $\left( {\widetilde{d},\widetilde{u}}\right)$

2. Forward Pass:

(a) Compute $\left( {x,y}\right) \in \{ x,y : H\left( {x,y,z}\right) = 0\}$ using the Bilevel Solver: $\left( {x,y}\right) \in {\operatorname{Solve}}_{H}\left( z\right)$

(b) Compute the loss function $L\left( {{h}_{\psi } \circ H \circ {h}_{\theta }\left( \widetilde{d}\right) ,\widetilde{u}}\right)$

(c) Save $(x, y, z)$ for the backward pass

3. Backward Pass:

(a) Update the parameters of the downstream layers $\psi$ using back-propagation

(b) For the continuous variable case, compute the gradient based on Theorem 2 around the current solution $(x, y, z)$, without solving the Bilevel Problem

(c) For the discrete variable case, use the gradient estimates of Theorem 3 or Section 3.2 (e.g. Eq. 11 or Eq. 12), solving, when needed, the two separate problems

(d) Back-propagate the estimated gradient to the downstream parameters $\theta$
+
+§ 3.1 CONTINUOUS OPTIMIZATION
+
To evaluate the gradient of the variable $z$ with respect to the loss function $L$ , we need to propagate the gradients of the two output variables $x,y$ through the two optimization problems. We can use the implicit function theorem to approximate the function $z \rightarrow \left( {x,y}\right)$ locally. We thus have the following main results ${}^{4}$ .
+
+Theorem 1. Consider the bilevel problem of Eq.1, we can build the following set of equations that represent the equivalent problem around a given solution ${x}^{ * },{y}^{ * },{z}^{ * }$ :
+
+$$
+F\left( {x,y,z}\right) = 0\;G\left( {x,y,z}\right) = 0 \tag{3}
+$$
+
+where
+
+$$
F\left( {x,y,z}\right) = {\nabla }_{x}f - {\nabla }_{y}f{\nabla }_{y}^{-1}G{\nabla }_{x}G,\;G\left( {x,y,z}\right) = {\nabla }_{y}g \tag{4}
+$$
+
+where we used the short notation $f = f\left( {x,y,z}\right) ,g =$ $g\left( {x,y,z}\right) ,F = F\left( {x,y,z}\right) ,G = G\left( {x,y,z}\right)$
+
+Theorem 2. Consider the problem defined in Eq.1, then the total gradient of the parameter $z$ w.r.t. the loss function $L\left( {x,y,z}\right)$ is computed from the partial gradients ${\nabla }_{x}L,{\nabla }_{y}L,{\nabla }_{z}L$ as
+
+$$
+{\mathrm{d}}_{z}L = {\nabla }_{z}L - \left| {{\nabla }_{x}L\;{\nabla }_{y}L}\right| {\left| \begin{array}{ll} {\nabla }_{x}F & {\nabla }_{y}F \\ {\nabla }_{x}G & {\nabla }_{y}G \end{array}\right| }^{-1}\left| \begin{array}{l} {\nabla }_{z}F \\ {\nabla }_{z}G \end{array}\right|
+$$
+
+(5)
+
The implicit layer is thus defined by the two conditions $F\left( {x,y,z}\right) = 0$ and $G\left( {x,y,z}\right) = 0$ . We notice that Eq. 5 can be solved without explicitly computing the Jacobian matrices and inverting the system: adopting the Vector-Jacobian product approach, we can proceed from left to right to evaluate ${\mathrm{d}}_{z}L$ . In the following sections we describe how affine equality constraints and nonlinear inequality constraints can be used when modelling $f,g$ . We also notice that evaluating Eq. 5 does not require solving the original problem, but only applying matrix-vector products, i.e. linear algebra, and evaluating gradients, which can be computed using automatic differentiation.
+
+Linear Equality constraints To extend the model of Eq. 1 to include linear equality constraints of the form ${Ax} = b$ and ${By} = c$ on the outer and inner problem variables, we use the following change of variables
+
+$$
+x \rightarrow {x}_{0} + {A}^{ \bot }x,y \rightarrow {y}_{0} + {B}^{ \bot }y \tag{6}
+$$
+
+where ${A}^{ \bot },{B}^{ \bot }$ are the orthogonal space of $A$ and $B$ , i.e. $A{A}^{ \bot } = 0,B{B}^{ \bot } = 0$ , and ${x}_{0},{y}_{0}$ are one solution of the equations, i.e. $A{x}_{0} = b,B{y}_{0} = c$ .
+
Non-linear Inequality constraints Similarly, to extend the model of Eq. 1 when we have non-linear inequality constraints, we use the barrier method approach (Boyd and Vandenberghe 2004), where violating the constraints is penalized with a logarithmic function. Specifically, let us consider the case where ${f}_{i},{g}_{i}$ are inequality constraint functions, i.e. ${f}_{i} < 0,{g}_{i} < 0$ , for the outer and inner problems. We then define the new functions
+
+$$
+f \rightarrow {tf} - \mathop{\sum }\limits_{{i = 1}}^{{k}_{x}}\ln \left( {-{f}_{i}}\right) ,g \rightarrow {tg} - \mathop{\sum }\limits_{{i = 1}}^{{k}_{y}}\ln \left( {-{g}_{i}}\right) . \tag{7}
+$$
+
where $t$ is a variable parameter, which depends on the violation of the constraints: the closer the solution is to violating the constraints, the larger the value of $t$ .
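A one-dimensional sketch of the barrier reformulation (the toy problem and its closed-form minimizer are our own illustration): minimizing $f(x) = x$ subject to $1 - x < 0$ becomes minimizing $tx - \ln(x - 1)$, whose minimizer $x(t) = 1 + 1/t$ approaches the constrained optimum $x^* = 1$ as $t$ grows.

```python
# Gradient of the barrier objective t*x - ln(x - 1) for the toy problem
# min x subject to 1 - x < 0.
def barrier_grad(x, t):
    return t - 1.0 / (x - 1.0)

# Closed-form minimizers x(t) = 1 + 1/t for increasing t.
minimizers = {t: 1.0 + 1.0 / t for t in (1.0, 10.0, 100.0)}
```

Each `x(t)` zeroes the barrier gradient exactly, and the sequence monotonically approaches the constraint boundary from the feasible side.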
+
Bilevel Cone programming We show here how Theorem 2 can be applied to bilevel cone programming, extending single-level cone programming results (Agrawal et al. 2019b); we can use efficient solvers for cone programs to compute a solution of the bilevel problem (Ouattara and Aswani 2018)
+
+$$
+\mathop{\min }\limits_{x}{c}^{T}x + {\left( Cy\right) }^{T}x
+$$
+
+$$
+\text{ s.t. }{Ax} + z + R\left( y\right) \left( {x - r}\right) = b,s \in \mathcal{K} \tag{8a}
+$$
+
+$$
+y \in \arg \mathop{\min }\limits_{y}{d}^{T}y + {\left( Dx\right) }^{T}y
+$$
+
+$$
+\text{ s.t. }{By} + u + P\left( x\right) \left( {y - p}\right) = f,u \in \mathcal{K} \tag{8b}
+$$
+
In this bilevel cone program, the inner and outer problems are both cone programs, where $R\left( y\right) ,P\left( x\right)$ represent linear transformations, $C,r,D,p$ are new parameters of the problem, and $\mathcal{K}$ is the conic domain of the variables. Under the hypothesis that a local minimum of Eq. 8 exists, we can use an interior point method to find such a point. To compute the bilevel gradient, we then use the residual maps (Busseti, Moursi, and Boyd 2019) of the outer and inner problems. Indeed, we can apply Theorem 2, where $F = {N}_{1}\left( {x,Q,y}\right)$ and $G = {N}_{2}\left( {y,Q,x}\right)$ are the normalized residual maps of the outer and inner problems defined in (Busseti, Moursi, and Boyd 2019; Agrawal et al. 2019a).
+
+${}^{4}$ Proofs are in the Supplementary Material
+
+§ 3.2 COMBINATORIAL OPTIMIZATION
+
When we consider discrete variables, the gradient is zero almost everywhere. We thus need to resort to gradient estimates. For the bilevel problem with discrete variables of Eq. 2, when a solution of the bilevel problem exists and is given (Kleinert et al. 2021), Thm. 3 gives the gradients of the loss function with respect to the input parameters.
+
+Theorem 3. Given the problem of Eq. 2, the total derivative of a cost function $L\left( {x,y,z,w}\right)$ with respect to the input parameters has the following form:
+
+$$
+{\mathrm{d}}_{z}L = {\nabla }_{z}L + \left\lbrack {{\nabla }_{x}L + {\nabla }_{y}L{\nabla }_{x}y}\right\rbrack {\nabla }_{z}x \tag{9a}
+$$
+
+$$
+{\mathrm{d}}_{w}L = {\nabla }_{w}L + \left\lbrack {{\nabla }_{x}L{\nabla }_{y}x + {\nabla }_{y}L}\right\rbrack {\nabla }_{w}y \tag{9b}
+$$
+
+The ${\nabla }_{x}y,{\nabla }_{y}x$ terms capture the interaction between the outer and inner problems. We could estimate the gradients in Thm. 3 using the perturbation approach suggested in (Berthet et al. 2020), which estimates the gradient as the expected value of the gradient of the problem after perturbing the input variable; but, similarly to REINFORCE (Williams 1992), this introduces large variance. While it is possible to reduce the variance in some cases (Grathwohl et al. 2017) with the use of additional trainable functions, we consider the alternative approaches described in the following.
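For concreteness, Eq. 9a can be assembled directly from its component gradients and Jacobians. A minimal numpy sketch, where every numerical value is an illustrative placeholder (in practice $\nabla_z x$ and $\nabla_x y$ come from sensitivity analysis of the two levels):

```python
import numpy as np

# Eq. 9a: d_z L = grad_z L + [grad_x L + grad_y L . (dy/dx)] . (dx/dz)
# Placeholder values with dim(x) = dim(z) = 2 and dim(y) = 1.
grad_z_L = np.array([0.1, 0.0])   # direct dependence of L on z
grad_x_L = np.array([1.0, 2.0])
grad_y_L = np.array([0.5])
J_y_x = np.array([[1.0, 0.0]])    # \nabla_x y, shape (dim y, dim x)
J_x_z = np.eye(2)                 # \nabla_z x, shape (dim x, dim z)

d_z_L = grad_z_L + (grad_x_L + grad_y_L @ J_y_x) @ J_x_z
```

Eq. 9b is assembled symmetrically, swapping the roles of $(x, z)$ and $(y, w)$.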
+
+Differentiation of blackbox combinatorial solvers (Pogančić et al. 2019) propose a way to propagate the gradient through a single level combinatorial solver, where ${\nabla }_{z}L \approx \frac{1}{\tau }\left\lbrack {x\left( {z + \tau {\nabla }_{x}L}\right) - x\left( z\right) }\right\rbrack$ when $x\left( z\right) = \arg \mathop{\max }\limits_{{x \in X}}\langle x,z\rangle$ . We thus propose to compute the variation on the input variables from the two separate problems of the Bilevel Problem:
+
+$$
+{\nabla }_{z}L \approx 1/\tau \left\lbrack {x\left( {z + {\tau A}{\nabla }_{x}L,y}\right) - x\left( {z,y}\right) }\right\rbrack \tag{10a}
+$$
+
+$$
+{\nabla }_{w}L \approx 1/\tau \left\lbrack {y\left( {w + {\tau C}{\nabla }_{y}L,x}\right) - y\left( {w,x}\right) }\right\rbrack \tag{10b}
+$$
+
+or alternatively, if we have only access to the Bilevel solver and not to the separate ILP solvers, we can express
+
+$$
+{\nabla }_{z,w}L \approx 1/\tau \left\lbrack {s\left( {v + {\tau E}{\nabla }_{x,y}L}\right) - s\left( v\right) }\right\rbrack \tag{11}
+$$
+
+where $x\left( {z,y}\right)$ and $y\left( {w,x}\right)$ represent the solutions of the two problems separately, $s\left( v\right) = \left( {z,w}\right) \rightarrow \left( {x,y}\right)$ is the complete solution of the bilevel problem, $\tau \rightarrow 0$ is a hyper-parameter and $E = \left\lbrack \begin{matrix} A & 0 \\ 0 & C \end{matrix}\right\rbrack$ . This form is more convenient than Eq. 9, since it does not require computing the cross terms, thus ignoring the interaction of the two levels.
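A minimal sketch of the finite-difference estimator of Eq. 10, assuming $A = C = I$ and using tiny hand-coded candidate sets as a stand-in for real ILP solvers (the sets, costs and loss gradients below are all illustrative):

```python
import numpy as np

def solver(z, w):
    """Toy stand-in for the two separate combinatorial solvers:
    x maximizes <x, z> and y maximizes <y, w> over small candidate
    sets of binary vectors."""
    X = np.array([[1.0, 0.0], [0.0, 1.0]])   # feasible x candidates
    Y = np.array([[1.0, 0.0], [0.0, 1.0]])   # feasible y candidates
    x = X[np.argmax(X @ z)]
    y = Y[np.argmax(Y @ w)]
    return x, y

def bigrad_bb(z, w, grad_x, grad_y, tau=0.5):
    """Eq. 10 with A = C = I: re-solve at perturbed inputs and take
    the finite difference of the solutions."""
    x0, y0 = solver(z, w)
    x1, y1 = solver(z + tau * grad_x, w + tau * grad_y)
    return (x1 - x0) / tau, (y1 - y0) / tau

z = np.array([1.0, 0.2])
w = np.array([0.1, 0.9])
# A loss gradient that prefers flipping the choice of x only:
dz, dw = bigrad_bb(z, w, grad_x=np.array([-2.0, 2.0]),
                   grad_y=np.array([0.0, 0.0]))
```

The single-solver form of Eq. 11 works the same way, with one call to the bilevel solver $s(v)$ instead of two separate solver calls.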
+
+Straight-Through gradient In estimating the input variables $z,w$ of our model, we may not be interested in the interaction between the two variables $x,y$ . Let us consider, for example, the squared ${\ell }_{2}$ loss function defined over the output variables
+
+$$
+{L}^{2}\left( {x,y}\right) = {L}^{2}\left( x\right) + {L}^{2}\left( y\right)
+$$
+
+where ${L}^{2}\left( x\right) = \frac{1}{2}{\begin{Vmatrix}x - {x}^{ * }\end{Vmatrix}}_{2}^{2}$ and ${x}^{ * }$ is the true value. The loss is nonzero only when the two vectors disagree; with integer variables it counts the squared difference or, in the case of binary variables, the number of differences. If we compute ${\nabla }_{x}{L}^{2}\left( x\right) = \left( {x - {x}^{ * }}\right)$ in the binary case, we have that ${\nabla }_{{x}_{i}}{L}^{2}\left( x\right) = + 1$ if ${x}_{i}^{ * } = 0 \land {x}_{i} = 1$ , ${\nabla }_{{x}_{i}}{L}^{2}\left( x\right) = - 1$ if ${x}_{i}^{ * } = 1 \land {x}_{i} = 0$ , and 0 otherwise. This information can be directly used to update the ${z}_{i}$ variable in the linear term $\langle z,x\rangle$ ; thus we can estimate the gradients of the input variables as ${\nabla }_{{z}_{i}}{L}^{2} = - \lambda {\nabla }_{{x}_{i}}{L}^{2}$ and ${\nabla }_{{w}_{i}}{L}^{2} = - \lambda {\nabla }_{{y}_{i}}{L}^{2}$ , with some weight $\lambda > 0$ . The intuition is that the weight ${z}_{i}$ associated with the variable ${x}_{i}$ is increased when the value of the variable ${x}_{i}$ decreases. In the general multilinear case we have additional multiplicative terms. Following this intuition (see Sec. A.3), we thus use as an estimate of the gradient of the variables
+
+$$
+{\nabla }_{z}L = - A{\nabla }_{x}L\;{\nabla }_{w}L = - C{\nabla }_{y}L \tag{12}
+$$
+
+This is equivalent to Eq. 2 with ${\nabla }_{z}x = {\nabla }_{w}y = - I$ and ${\nabla }_{y}x = 0$ , thus ${\nabla }_{x}y = 0$ . This update is also equivalent to Eq. 10, without the additional solution computation. The advantage of this form is that it does not require solving for an additional solution in the backward pass. For the single-level problem, the gradient has the same form as the Straight-Through gradient proposed by (Bengio, Léonard, and Courville 2013), with surrogate gradient ${\nabla }_{z}x = - I$ .
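A sketch of the straight-through estimate of Eq. 12, paired with the binary squared-loss example from the text ($A$ and $C$ default to the identity; the vectors are illustrative):

```python
import numpy as np

def st_bilevel_grad(grad_x, grad_y, A=None, C=None):
    """Eq. 12: straight-through gradient estimates for the bilevel
    input variables z and w (A = C = I when omitted)."""
    A = np.eye(len(grad_x)) if A is None else A
    C = np.eye(len(grad_y)) if C is None else C
    return -A @ grad_x, -C @ grad_y

# Binary example from the text: the gradient of the squared l2 loss is
# +1 where x_i = 1 but x*_i = 0, -1 where x_i = 0 but x*_i = 1, else 0.
x = np.array([1.0, 0.0, 1.0])
x_star = np.array([0.0, 1.0, 1.0])
grad_x = x - x_star                   # [1, -1, 0]
dz, dw = st_bilevel_grad(grad_x, np.zeros(2))
# dz = [-1, 1, 0]: each z_i moves opposite to the loss gradient on x_i.
```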
+
+§ 4 RELATED WORK
+
+Bilevel Programming in machine learning Various papers model machine learning problems as bilevel problems, for example in Hyper-parameter Optimization (MacKay et al. 2019; Franceschi et al. 2018), Meta-Feature Learning (Li and Malik 2016), Meta-Initialization Learning (Rajeswaran et al. 2019), Neural Architecture Search (Liu, Simonyan, and Yang 2018), Adversarial Learning (Li et al. 2019), Deep Reinforcement Learning (Vahdat et al. 2020) and Multi-Task Learning (Alesiani et al. 2020). In these works the main focus is to compute the solution of the bilevel optimization problem. In (MacKay et al. 2019; Lorraine and Duvenaud 2018), the best-response function is modeled as a neural network and the solution is found using iterative minimization, without attempting to estimate the complete gradient. Many bilevel approaches rely on the implicit function theorem to compute the hyper-gradient (Sec. 3.5 of (Colson, Marcotte, and Savard 2007)), but do not use the bilevel problem as a layer.
+
+Quadratic, Cone and Convex single-level Programming Various works have addressed the problem of differentiating through quadratic, convex or cone programming (Amos 2019; Amos and Kolter 2017; Agrawal et al. 2019b, a). In these approaches the optimization layer is modelled as an implicit layer and, for the cone/convex case, the normalized residual map is used to propagate the gradients. Contrary to our approach, these works only address single-level problems and do not consider combinatorial optimization.
+
+Implicit layer Networks While classical deep neural networks perform a single pass through the network at inference time, a new class of systems performs inference by solving an optimization problem. Examples of this are Deep Equilibrium Networks (DEQ) (Bai, Kolter, and Koltun 2019) and Neural ODEs (NODE) (Chen et al. 2018). Similarly to our approach, the gradient is computed based on sensitivity analysis of the current solution. These methods only consider continuous optimization.
+
+
+Figure 2: (a) Visualization of the Optimal Control Learning network, where a disturbance ${\epsilon }_{t}$ is injected based on the control signal ${u}_{t}$ . (b) Comparison of the training performance for $N = 2$ , $T = {20}$ and epochs $= {10}$ of the BiGrad and the Adversarial version of the OptNet (Amos and Kolter 2017).
+
+Combinatorial optimization Various papers estimate gradients of single-level combinatorial problems using relaxation. (Wilder, Dilkina, and Tambe 2019; Elmachtoub and Grigas 2017; Ferber et al. 2020; Mandi and Guns 2020), for example, use ${\ell }_{1},{\ell }_{2}$ or log-barrier terms to relax the Integer Linear Programming (ILP) problem. Once relaxed, the problem is solved using standard methods for continuous optimization. An alternative approach is suggested in other papers. For example, in (Pogančić et al. 2019) the loss function is approximated with a linear function, which leads to an estimate of the gradient of the input variable similar to the implicit differentiation by perturbation form (Domke 2010). (Berthet et al. 2020) also uses perturbation, together with a change of variables, to estimate the gradient in an ILP problem. SatNet (Wang et al. 2019) solves MAXSAT problems by solving a continuous semidefinite programming (SDP) relaxation of the original problem. These works only consider single-level problems.
+
+Discrete latent variables Discrete random variables provide an effective way to model multi-modal distributions over discrete values, which can be used in various machine learning problems, e.g. in language models (Yang et al. 2017) or for conditional computation (Bengio, Léonard, and Courville 2013). Gradients of discrete distributions are not mathematically defined; thus, in order to use gradient-based methods, gradient estimators have been proposed. A class of methods is based on the Gumbel-Softmax estimator (Jang, Gu, and Poole 2016; Maddison, Mnih, and Teh 2016; Paulus, Maddison, and Krause 2021).
+
+§ 5 EXPERIMENTS
+
+We evaluate BiGrad on continuous and combinatorial problems to show that it improves over single-level approaches. In the first experiment we compare the use of BiGrad versus the use of the implicit layer proposed in (Amos and Kolter 2017) for the design of optimal control with adversarial noise. In the second part, after experimenting with adversarial attacks, we explore the performance of BiGrad on two combinatorial problems with interdiction, where we adapted the experimental setup proposed in (Pogančić et al. 2019). In these latter experiments, we compare the formulation in Eq. 11 (denoted BiGrad(BB)) and the formulation of Eq. 12 (denoted BiGrad(PT)). In addition we compare with the single-level BB-1 from (Pogančić et al. 2019) and the single-level straight-through (Bengio, Léonard, and Courville 2013; Paulus, Maddison, and Krause 2021) gradient estimation with surrogate gradient ${\nabla }_{z}x = - I$ (PT-1). We also compare against supervised learning (SL), which ignores the underlying structure of the problem and directly predicts the solution of the bilevel problem.
+
+Table 1: Optimal Control Average Cost; the bilevel approach improves (lower cost) over the two-step approach, because it is able to better capture the interaction between noise and control dynamics.
+
+| | LQR | OptNet | Bilevel |
+| --- | --- | --- | --- |
+| Adversarial (10 steps) | 2.736 | 0.2722 | 0.2379 |
+| Adversarial (30 steps) | - | 0.2511 | 0.2181 |
+
+§ 5.1 OPTIMAL CONTROL WITH ADVERSARIAL DISTURBANCE
+
+We consider the design of a robust stochastic control for a Dynamical System (Agrawal et al. 2019b). The problem is to find a feedback function $u = \phi \left( x\right)$ that minimizes
+
+$$
+\mathop{\min }\limits_{\phi }\mathbb{E}\frac{1}{T}\mathop{\sum }\limits_{{t = 0}}^{T}{\begin{Vmatrix}{x}_{t}\end{Vmatrix}}^{2} + {\begin{Vmatrix}\phi \left( {x}_{t}\right) \end{Vmatrix}}^{2} \tag{13a}
+$$
+
+$$
+\text{ s.t. }{x}_{t + 1} = A{x}_{t} + {B\phi }\left( {x}_{t}\right) + {w}_{t},\forall t \tag{13b}
+$$
+
+where ${x}_{t} \in {\mathbb{R}}^{n}$ is the state of the system, ${w}_{t}$ is an i.i.d. random disturbance and ${x}_{0}$ is the given initial state. To solve this problem we use Approximate Dynamic Programming (ADP) (Wang and Boyd 2010), which solves a proxy quadratic problem
+
+$$
+\mathop{\min }\limits_{{u}_{t}}{u}_{t}^{T}P{u}_{t} + {x}_{t}^{T}Q{u}_{t} + {q}^{T}{u}_{t}\;\text{ s.t. }{\begin{Vmatrix}{u}_{t}\end{Vmatrix}}_{2} \leq 1 \tag{14}
+$$
+
+We can use the optimization layer as shown in Fig. 2(a) and update the problem variables (e.g. $P,Q,q$ ) using gradient descent. We use the linear quadratic regulator (LQR) solution as the initial solution (Kalman 1964). The optimization module is replicated for each time step $t$ , similarly to a recurrent neural network (RNN).
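The per-step problem of Eq. 14 can be approximated with projected gradient descent onto the unit ball. A minimal numpy sketch, as a stand-in for the exact QP layer and with illustrative $P, Q, q$:

```python
import numpy as np

def adp_control(P, Q, q, x, steps=200, lr=0.05):
    """Projected gradient descent on the proxy quadratic of Eq. 14:
    min_u u^T P u + x^T Q u + q^T u  s.t.  ||u||_2 <= 1."""
    u = np.zeros_like(q)
    for _ in range(steps):
        grad = (P + P.T) @ u + Q.T @ x + q
        u = u - lr * grad
        n = np.linalg.norm(u)
        if n > 1.0:               # project back onto the unit l2 ball
            u = u / n
    return u

P = np.eye(2)
Q = np.zeros((2, 2))
q = np.array([4.0, 0.0])
u = adp_control(P, Q, q, x=np.zeros(2))
# The unconstrained minimizer is -q/2 = (-2, 0); the norm constraint
# projects it onto the ball boundary, giving u close to (-1, 0).
```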
+
+We can build a resilient version of the controller, under the hypothesis that an adversary is able to inject noise of limited energy but arbitrarily dependent on the control $u$ , by solving the following bilevel optimization problem
+
+$$
+\mathop{\max }\limits_{\epsilon }Q\left( {{u}_{t},{x}_{t} + \epsilon }\right) \;\text{ s.t. }\;\parallel \epsilon \parallel \leq \sigma \tag{15a}
+$$
+
+$$
+{u}_{t}\left( \epsilon \right) = \arg \mathop{\min }\limits_{{u}_{t}}Q\left( {{u}_{t},{x}_{t}}\right) \;\text{ s.t. }{\begin{Vmatrix}{u}_{t}\end{Vmatrix}}_{2} \leq 1 \tag{15b}
+$$
+
+where $Q\left( {u,x}\right) = {u}^{T}{Pu} + {x}^{T}{Qu} + {q}^{T}u$ and we want to learn the parameters $z = \left( {P,Q,q}\right)$ , with $y = {u}_{t}$ and $x = \epsilon$ in the notation of Eq. 1.
+
+| gradient type | train (12×12 maps) | validation (12×12 maps) | train (18×18 maps) | validation (18×18 maps) | train (24×24 maps) | validation (24×24 maps) |
+| --- | --- | --- | --- | --- | --- | --- |
+| BiGrad(BB) | ${95.8} \pm {0.2}$ | ${94.5} \pm {0.2}$ | $\mathbf{97.1} \pm {0.0}$ | $\mathbf{96.4} \pm {0.2}$ | ${98.0} \pm {0.0}$ | $\mathbf{97.8} \pm {0.0}$ |
+| BiGrad(PT) | ${91.7} \pm {0.1}$ | ${91.6} \pm {0.1}$ | ${94.3} \pm {0.0}$ | ${94.2} \pm {0.1}$ | ${95.7} \pm {0.0}$ | ${95.6} \pm {0.1}$ |
+| BB-1 | ${95.9} \pm {0.2}$ | ${91.7} \pm {0.1}$ | ${96.7} \pm {0.2}$ | ${94.5} \pm {0.1}$ | ${97.1} \pm {0.1}$ | ${96.3} \pm {0.2}$ |
+| PT-1 | ${88.3} \pm {0.2}$ | ${87.5} \pm {0.2}$ | ${90.9} \pm {0.4}$ | ${90.6} \pm {0.5}$ | ${92.8} \pm {0.1}$ | ${92.8} \pm {0.2}$ |
+| SL | ${100.0} \pm {0.0}$ | ${26.2} \pm {2.4}$ | $\mathbf{99.9} \pm {0.1}$ | ${20.2} \pm {0.5}$ | $\mathbf{99.1} \pm {0.2}$ | ${14.0} \pm {1.0}$ |
+
+Table 2: Performance on the Dynamic Programming Problem with Interdiction. SL uses ResNet18.
+
+| ${L}_{\infty } \leq \alpha$ | DCNN | Bi-DCNN | CNN | CNN* |
+| --- | --- | --- | --- | --- |
+| 0 | ${62.9} \pm {0.3}$ | ${64.0} \pm {0.4}$ | ${63.4} \pm {0.7}$ | ${63.6} \pm {0.5}$ |
+| 5 | ${42.6} \pm {1.0}$ | ${44.5} \pm {0.2}$ | ${43.8} \pm {1.2}$ | ${44.3} \pm {1.0}$ |
+| 10 | ${23.5} \pm {1.5}$ | $\mathbf{25.3} \pm {0.8}$ | ${24.3} \pm {1.0}$ | ${24.2} \pm {1.0}$ |
+| 15 | ${14.4} \pm {1.4}$ | $\mathbf{15.6} \pm {0.7}$ | ${14.6} \pm {0.7}$ | ${14.3} \pm {0.4}$ |
+| 20 | ${9.1} \pm {1.2}$ | $\mathbf{10.0} \pm {0.6}$ | ${9.2} \pm {0.4}$ | ${8.9} \pm {0.2}$ |
+| 25 | ${6.1} \pm {1.0}$ | ${6.8} \pm {0.5}$ | ${6.0} \pm {0.2}$ | ${5.9} \pm {0.2}$ |
+| 30 | ${3.9} \pm {0.7}$ | ${4.4} \pm {0.5}$ | ${3.9} \pm {0.2}$ | ${3.9} \pm {0.1}$ |
+
+Table 3: Performance on the adversarial attack with discrete features, with $Q = {10}$ . DCNN is the single level discrete CNN, Bi-DCNN is the bilevel discrete CNN, CNN is the vanilla CNN, while CNN* is the CNN where we add the bilevel discrete layer after vanilla training.
+
+We evaluate the performance to verify the viability of the proposed approach and compare with LQR and OptNet (Amos and Kolter 2017), where the outer problem is substituted with a best-response function that computes the adversarial noise based on the computed output; in this case the adversarial noise is a scaled version of ${Qu}$ of Eq. 14. Tab. 1 and Fig. 2(b) present the performance using BiGrad, LQR and the adversarial version of OptNet. BiGrad improves over two-step OptNet (Tab. 1), because it is able to better model the interaction between noise and control dynamics.
+
+§ 5.2 ROBUST ML WITH DISCRETE LATENT VARIABLES
+
+Machine learning models are heavily affected by the injection of intentional noise (Madry et al. 2017; Goodfellow, Shlens, and Szegedy 2014). Adversarial attacks typically require access to the machine learning model, so that the attack model can be used during training to include its effect. Instead of training an end-to-end system as in (Goldblum, Fowl, and Goldstein 2019), where the attacker is aware of the model, we consider the case where the attacker can inject noise at the feature level, as opposed to the input level (as in (Goldblum, Fowl, and Goldstein 2019)); this allows us to model the interaction as a bilevel problem. Thus, to demonstrate the use of a bilevel layer, we design a system composed of a feature extraction layer, followed by a discretization layer that operates on the space $\{ 0,1{\} }^{m}$ , where $m$ is the hidden feature size, followed by a classification layer. The network used in the experiments is composed of two convolutional layers with max-pooling and two linear layers, all with ReLU activation functions, while the classifier is a linear layer. We consider a more limited attacker that is not aware of the loss function of the model and does not have access to the full model, but only to the input of the discrete layer, and is able to switch $Q$ discrete variables. The interaction of the discrete layer with the attacker is described by the following bilevel problem:
+
+$$
+\mathop{\min }\limits_{{x \in Q}}\mathop{\max }\limits_{{y \in B}}\langle z + x,y\rangle . \tag{16}
+$$
+
+where $Q$ represents the set of all possible attacks, $B$ the budget of the discretization layer and $y$ the output of the layer. For the simulation, we compute the solution by sorting the features by value and considering only the first $B$ values, while the attacker obscures (i.e. sets to zero) the first $Q$ positions. The output $y$ thus has ones on the $Q$ -to- $B$ non-zero positions, and zeros elsewhere. We train three models on the CIFAR-10 dataset for 50 epochs. For comparison we consider: 1) the vanilla CNN network (i.e. without the discrete features); 2) the network with the single-level problem (i.e. without the attacker); and 3) the network with the bilevel problem (i.e. the min-max discretization problem defined in Eq. 16). We then subject the networks to adversarial attack using the PGD attack (Madry et al. 2017), similarly to (Goldblum, Fowl, and Goldstein 2019). Similar results apply for the FGSM (Fast Gradient Sign Method) attack (Goodfellow, Shlens, and Szegedy 2014). We also tested the network trained as a vanilla network, where we added the min-max layer after training. From the results (Tab. 3), we notice: 1) the min-max network shows improved resilience to adversarial attack with respect to the vanilla network, but also with respect to the max (single-level) network; 2) the min-max layer applied to the vanilla trained network is beneficial against adversarial attack; 3) the min-max network does not significantly change performance in the presence of adversarial attack at the discrete layer (i.e. between $Q = 0$ and $Q = {10}$ ). This example shows how bilevel layers can be successfully integrated into machine learning systems as differentiable layers.
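The simulated forward pass described above can be sketched in a few lines (a stand-in for the trained network; the feature vector is illustrative):

```python
import numpy as np

def minmax_discretize(z, B, Q):
    """Min-max discretization of Eq. 16, as simulated in the text:
    the attacker zeroes the Q largest features, then the layer keeps
    the next B largest, emitting a binary mask y."""
    order = np.argsort(-z)       # feature indices, largest value first
    y = np.zeros_like(z)
    y[order[Q:Q + B]] = 1.0      # skip the Q attacked positions
    return y

z = np.array([0.9, 0.1, 0.7, 0.5, 0.3])
y = minmax_discretize(z, B=2, Q=1)
# The top feature (index 0) is obscured; the next two largest
# (indices 2 and 3) survive: y = [0, 0, 1, 1, 0].
```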
+
+§ 5.3 DYNAMIC PROGRAMMING: SHORTEST PATH WITH INTERDICTION
+
+We consider the problem of Shortest Path with Interdiction, where the set of possible valid paths (see Fig. 3(a)) is $Y$ and the set of all possible interdictions is $X$ . The mathematical problem can be written as
+
+$$
+\mathop{\min }\limits_{{y \in Y}}\mathop{\max }\limits_{{x \in X}}\langle z + x \odot w,y\rangle \tag{17}
+$$
+
+where $\odot$ is the element-wise product. This problem is multilinear in the discrete variables $x,y,z$ . The $z,w$ variables are outputs of a neural network whose inputs are the Warcraft II tile images. The aim is to train the parameters of the weight
+
+| gradient type | k | train | validation | k | train | validation | k | train | validation |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| BiGrad(BB) | 8 | ${89.2} \pm {0.1}$ | ${89.4} \pm {0.2}$ | 10 | ${91.9} \pm {0.1}$ | $\mathbf{92.0} \pm {0.1}$ | 12 | ${93.5} \pm {0.1}$ | ${93.5} \pm {0.2}$ |
+| BiGrad(PT) | 8 | ${89.3} \pm {0.0}$ | $\mathbf{89.4} \pm {0.1}$ | 10 | ${92.0} \pm {0.0}$ | ${91.9} \pm {0.1}$ | 12 | $\mathbf{93.7} \pm {0.1}$ | $\mathbf{93.7} \pm {0.1}$ |
+| BB-1 | 8 | ${84.0} \pm {0.4}$ | ${83.9} \pm {0.4}$ | 10 | ${87.4} \pm {0.3}$ | ${87.5} \pm {0.4}$ | 12 | ${89.3} \pm {0.1}$ | ${89.3} \pm {0.1}$ |
+| PT-1 | 8 | ${84.1} \pm {0.4}$ | ${84.1} \pm {0.3}$ | 10 | ${87.3} \pm {0.3}$ | ${87.0} \pm {0.3}$ | 12 | ${89.3} \pm {0.0}$ | ${89.5} \pm {0.2}$ |
+| SL | 8 | ${94.2} \pm {5.0}$ | ${10.7} \pm {3.9}$ | 10 | ${92.7} \pm {5.4}$ | ${9.4} \pm {0.4}$ | 12 | ${91.4} \pm {2.3}$ | ${9.3} \pm {1.2}$ |
+
+Table 4: Performance in terms of accuracy on the TSP use case with interdiction. SL has higher accuracy during training, but fails at test time.
+
+
+Figure 3: (a) Example Shortest Path in the Warcraft II tile set of (Guyomarch 2017). (b) Example Shortest Path without (left) and with interdiction (middle). Even a small interdiction (right) has a large effect on the output.
+
+network, such that we can solve the shortest path problem based only on the input image. For the experiments, we followed and adapted the scenario of (Pogančić et al. 2019) and used the Warcraft II tile maps of (Guyomarch 2017). We implemented the interdiction game using a two-stage min-max-min algorithm (Kämmerling and Kurtz 2020). In Fig. 3(b) it is possible to see the effect of interdiction on the final solution. Tab. 2 shows the performance of the proposed approaches, where we allow for $B = 3$ interdictions and use tile sizes of ${12} \times {12},{18} \times {18},{24} \times {24}$ . The loss function is the Hamming and ${\ell }_{1}$ loss evaluated on both the shortest path $y$ and the intervention $x$ . The gradient estimated using Eq. 11 (BB) provides more accurate results, at double the computation cost of PT. The single-level BB-1 approach outperforms PT and shares a similar computational complexity, while the single-level PT-1 is inferior to PT. As expected, SL outperforms the other methods during training, but completely fails during validation. BiGrad improves over single-level approaches, because it includes the interaction of the two problems.
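For intuition, the min-max of Eq. 17 can be solved by brute force on tiny candidate sets. A sketch with two hypothetical paths and single-edge interdictions (a stand-in for the two-stage min-max-min algorithm; all costs are illustrative):

```python
import numpy as np

def interdiction_game(z, w, Y, X):
    """Brute-force min-max of Eq. 17: the interceptor picks x to
    maximize the path cost <z + x * w, y>; the traveller picks the
    path y with the smallest worst-case cost."""
    best_y, best_val = None, float("inf")
    for y in Y:
        worst = max(float(np.dot(z + x * w, y)) for x in X)
        if worst < best_val:
            best_y, best_val = y, worst
    return best_y, best_val

z = np.array([1.0, 1.0, 3.0, 3.0])     # base edge costs
w = np.array([10.0, 10.0, 1.0, 1.0])   # interdiction penalties
Y = [np.array([1.0, 1.0, 0.0, 0.0]),   # path over edges 0, 1
     np.array([0.0, 0.0, 1.0, 1.0])]   # path over edges 2, 3
X = [np.eye(4)[i] for i in range(4)]   # interdict exactly one edge
y_best, val = interdiction_game(z, w, Y, X)
# The cheap path (cost 2) is fragile (worst case 12); the robust
# path costs 6 but at most 7 under interdiction.
```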
+
+§ 5.4 COMBINATORIAL OPTIMIZATION: TRAVELLING SALESMAN PROBLEM (TSP) WITH INTERDICTION
+
+The Travelling Salesman Problem (TSP) with interdiction consists of finding the shortest route $y \in Y$ that visits all cities, where some connections $x \in X$ can be removed. The mathematical
+
+
+Figure 4: Example of TSP with 8 cities and the comparison of a TSP tour without (a) or with (b) a single interdiction. Even a single interdiction has a large effect on the final tour.
+
+problem to solve is given by
+
+$$
+\mathop{\min }\limits_{{y \in Y}}\mathop{\max }\limits_{{x \in X}}\langle z + x \odot w,y\rangle \tag{18}
+$$
+
+where $z,w$ are the cost matrices for the salesman and the interceptor. Similarly to the dynamic programming experiment, we implemented the interdiction game using a two-stage min-max-min algorithm (Kämmerling and Kurtz 2020). Fig. 4 shows the effect of a single interdiction. The aim is to learn the weight matrices, trained with interdicted solutions on a subset of the cities. Tab. 4 reports the performance in terms of accuracy on both the shortest tour and the intervention. We use the Hamming and ${\ell }_{1}$ loss functions. We only allow for $B = 1$ intervention, but consider $k = 8,{10}$ and 12 cities from a total of 100 cities. Single- and two-level approaches perform similarly in training and validation. Since the number of interdictions is limited to one, the performance of the single-level approach is not catastrophic, while the supervised learning approach completely fails on the validation set. BiGrad thus improves over single-level and SL approaches. Since BiGrad(PT) has performance similar to BiGrad(BB), PT is preferable in this scenario, since it requires fewer computational resources.
+
+§ 6 CONCLUSIONS
+
+BiGrad generalizes existing single-level gradient estimation approaches and is able to incorporate Bilevel Programming as a learnable layer in modern machine learning frameworks, which allows modeling conflicting objectives, as in adversarial attacks. The proposed novel gradient estimators are efficient and the framework is widely applicable to both continuous and discrete problems. BiGrad has a marginal or similar cost with respect to the complexity of computing the solution of the Bilevel Programming problem itself. We show how BiGrad is able to learn complex logic when the cost functions are multilinear.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/LGlhzn1ZJl/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/LGlhzn1ZJl/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..5028baf6a92d65b933996c5932e406dcc7fdef8b
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/LGlhzn1ZJl/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,303 @@
+# Measuring the Contribution of Multiple Model Representations in Detecting Adversarial Instances
+
+Anonymous Author(s)
+
+Affiliation
+
+Address
+
+email
+
+## Abstract
+
+Deep learning models have been used for a wide variety of tasks. They are prevalent in computer vision, natural language processing, speech recognition, and other areas. While these models have worked well under many scenarios, it has been shown that they are vulnerable to adversarial attacks. This has led to a proliferation of research into ways that such attacks could be identified and/or defended against. Our goal is to explore the contribution that can be attributed to using multiple underlying models for the purpose of adversarial instance detection. Our paper describes two approaches that incorporate representations from multiple models for detecting adversarial examples. We devise controlled experiments for measuring the detection impact of incrementally utilizing additional models. For many of the scenarios we consider, the results show that performance increases with the number of underlying models used for extracting representations.
+
+Code is available at https://anonymized/for/submission.
+
+## 1 Introduction
+
+Research on neural networks has progressed for many decades, from early work modeling neural activity (McCulloch and Pitts 1943) to the more recent rise of deep learning (Bengio, Lecun, and Hinton 2021). Notable applications include image classification (Krizhevsky, Sutskever, and Hinton 2012), image generation (Goodfellow et al. 2014), image translation (Isola et al. 2017), and many others (Dargan et al. 2020). Along with the demonstrated success, it has also been shown that carefully crafted adversarial instances, which appear as normal images to humans, can be used to deceive deep learning models (Szegedy et al. 2014), resulting in incorrect output. The discovery of adversarial instances has led to a broad range of related research including 1) the development of new attacks, 2) the characterization of attack properties, and 3) defense techniques. Akhtar and Mian present a comprehensive survey on the threat of adversarial attacks to deep learning systems used for computer vision.
+
+Two general approaches, discussed further in Section 6, that have been proposed for defending against adversarial attacks include 1) the usage of model ensembling and 2) the incorporation of hidden layer representations as discriminative features for identifying perturbed data. Building on these ideas, we explore the performance implications that can be attributed to using representations from multiple models for the purpose of adversarial instance detection.
+
+Our Contribution In Section 3 we present two approaches that use neural network representations as features for an adversarial detector. For each technique we devise a treatment and control variant in order to measure the impact of using multiple networks for extracting representations. Our controlled experiments in Section 4 measure the effect of using multiple models. For many of the scenarios we consider, detection performance increased as a function of the underlying model count.
+
+## 2 Preliminaries
+
+Our research incorporates $l$ -layer feedforward neural networks, functions $h : \mathcal{X} \rightarrow \mathcal{Y}$ that map input $x \in \mathcal{X}$ to output $\widehat{y} \in \mathcal{Y}$ through linear preactivation functions ${f}_{i}$ and nonlinear activation functions ${\phi }_{i}$ .
+
+$$
+\widehat{y} = h\left( x\right) = {\phi }_{l} \circ {f}_{l} \circ {\phi }_{l - 1} \circ {f}_{l - 1} \circ \ldots \circ {\phi }_{1} \circ {f}_{1}\left( x\right)
+$$
+
+The models we consider are classifiers, where the outputs are discrete labels. For input $x$ and its true class label $y$ , let $J\left( {x, y}\right)$ denote the corresponding loss of a trained neural network. Our notation omits the dependence on model parameters $\theta$ , for convenience.
+
+### 2.1 Adversarial Attacks
+
+Consider input $x$ that is correctly classified by neural network $h$ . For an untargeted adversarial attack, the adversary tries to devise a small additive perturbation ${\Delta x}$ such that the adversarial input ${x}^{\text{adv }} = x + {\Delta x}$ changes the classifier’s output (i.e., $h\left( x\right) \neq h\left( {x}^{\text{adv }}\right)$ ). For a targeted attack, a desired value for $h\left( {x}^{\text{adv }}\right)$ is an added objective. In both cases, the ${L}_{p}$ norm of ${\Delta x}$ is typically constrained to be less than some threshold $\epsilon$ . Different threat models (white-box, grey-box, and black-box) correspond to varying levels of knowledge that the adversary has about the model being used, its parameters, and its possible defense.
+
+The adversary's objective can be expressed as an optimization problem. For example, the following constrained maximization of the loss function is one way of formulating how an adversary could generate an untargeted adversarial input ${x}^{adv}$ .
+
+$$
+{\Delta x} = \mathop{\operatorname{argmax}}\limits_{\delta }J\left( {x + \delta , y}\right)
+$$
+
+$$
+\text{subject to}\parallel \delta {\parallel }_{p} \leq \epsilon
+$$
+
+$$
+x + \delta \in \mathcal{X}
+$$
+
+There are various ways to generate attacks. Under many formulations it is challenging to devise an exact computation of ${\Delta x}$ that optimizes the objective function, so an approximation is often employed.
+
+Fast Gradient Sign Method (FGSM) (Goodfellow, Shlens, and Szegedy 2015) generates an adversarial perturbation ${\Delta x} = \epsilon \cdot \operatorname{sign}\left( {{\nabla }_{x}J\left( {x, y}\right) }\right)$ , which follows the approximate direction of the loss function gradient. The sign of the gradient has an ${L}_{\infty }$ norm of 1, which is scaled by $\epsilon$ .
+
+Basic Iterative Method (BIM) (Kurakin, Goodfellow, and Bengio 2017) iteratively applies FGSM, whereby ${x}_{t}^{adv} = {x}_{t - 1}^{adv} + \alpha \cdot \operatorname{sign}\left( {{\nabla }_{x}J\left( {{x}_{t - 1}^{adv}, y}\right) }\right)$ for each step, starting with ${x}_{0}^{adv} = x$ . The ${L}_{\infty }$ norm is bounded by $\alpha$ on each iteration and by $t \cdot \alpha$ after $t$ iterations. ${x}_{t}^{adv}$ can be clipped after each iteration in a way that constrains the final ${x}^{\text{adv }}$ to an $\epsilon$ -ball of $x$ .
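The two attacks above can be sketched in a few lines of numpy, using a toy linear loss with a constant gradient as a stand-in for autodiff through a real classifier (the gradient vector and budgets are illustrative):

```python
import numpy as np

def fgsm(x, grad_fn, eps):
    """FGSM: a single signed-gradient step of size eps."""
    return x + eps * np.sign(grad_fn(x))

def bim(x, grad_fn, eps, alpha, steps):
    """BIM: iterated FGSM with per-step size alpha, clipping after
    each iteration so the result stays in an eps-ball of x."""
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        x_adv = np.clip(x_adv, x - eps, x + eps)
    return x_adv

# Toy loss J(x) = <g, x>, so grad_x J = g everywhere.
g = np.array([1.0, -3.0])
grad_fn = lambda x: g
x = np.zeros(2)
x_adv = bim(x, grad_fn, eps=0.1, alpha=0.04, steps=5)
# Each coordinate moves by alpha * sign(g_i) per step, then is
# clipped to the [-0.1, 0.1] ball around the original input.
```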
+
+Carlini & Wagner (CW) (Carlini and Wagner 2017) generates an adversarial perturbation via gradient descent, solving ${\Delta x} = {\operatorname{argmin}}_{\delta }\left( {\parallel \delta {\parallel }_{p} + c \cdot f\left( {x + \delta }\right) }\right)$ subject to a box constraint on $x + \delta$ . Here $f$ is a function for which $f\left( {x + \delta }\right) \leq 0$ if and only if the target classifier is successfully attacked. Experimentation yielded the most effective $f$ , for targeted attacks, of those considered. $c$ is a positive constant that can be found with binary search, a strategy that worked well empirically. Clipping or a change of variables can be used to accommodate the box constraint.
+
+### 2.2 Ensembling
+
+Our research draws inspiration from ensembling, the combination of multiple models to improve performance relative to the component models themselves. There are various ways of combining models. An approach that is widely used in deep learning averages the outputs of an assortment of neural networks, each with the same architecture but trained from a different set of randomly initialized weights.
+
+## 3 Method
+
+To detect adversarial instances, we use hidden layer representations, extracted from representation models, as inputs to adversarial detection models. For our experiments in Section 4, the representation models are convolutional neural networks that are independently trained for the same classification task, initialized with different weights. Representations are extracted from the penultimate layers of the trained networks. The method we describe in this section is more general, as various approaches could be used for preparing representation models. For example, each representation model could be an independently trained autoencoder, as opposed to a classifier, with representations for each model extracted from arbitrary hidden layers. Additionally, it is not necessary that each of the models used for extracting representations has the same architecture.
+
+We devise two broad techniques, model-wise and unit-wise, for extracting representations and detecting adversarial instances. Each technique has two formulations: a treatment that incorporates multiple representation models and a control that uses a single representation model. For each technique, the functional form of the detection step is the same across treatment and control. This serves our objective of measuring the contribution of incrementally incorporating multiple representation models, as the control makes it possible to check whether gains are coming from some aspect other than the incorporation of multiple representation models.
+
+The illustrations in this section are best viewed in color.
+
+### 3.1 Model-Wise Detection
+
+With $N$ representation models, model-wise detection uses a set of representations from each underlying model as separate input to $N$ corresponding detection models that each output an adversarial score. These scores, which we interpret as estimated probabilities, are then averaged to give an ensemble adversarial probability estimate. A baseline, which holds fixed the number of detectors, uses a single representation model as a repeated input to multiple detection models. The steps of both approaches are outlined below.
+
+## Model-Wise Treatment
+
+Step 1 Extract representations for input $x$ from $N$ representation models.
+
+
+
+Step 2 Pass the Step 1 representations through $N$ corresponding detection models that each output adversarial probability (denoted ${P}_{i}$ for model $i$ ).
+
+
+
+Step 3 Calculate adversarial probability $P$ as the average of Step 2 adversarial probabilities.
+
+$$
+P = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{P}_{i}
+$$
+
+## Model-Wise Control
+
+Step 1 Extract representations for input $x$ from a single representation model.
+
+
+
+Step 2 Pass the Step 1 representations through $N$ detection models that each output adversarial probability (denoted ${P}_{i}$ for model $i$ ).
+
+
+
+Step 3 Calculate adversarial probability $P$ as the average of Step 2 adversarial probabilities.
+
+$$
+P = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{P}_{i}
+$$
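Both model-wise variants share this final averaging step. A minimal sketch, with toy stand-ins for the representation models and detectors (the helper names are ours, not the paper's):

```python
# Sketch of the model-wise treatment: N representation models each
# feed a paired detector, and the detectors' adversarial
# probabilities are averaged into the ensemble estimate P.

def model_wise_probability(rep_models, detectors, x):
    """P = (1/N) * sum_i detector_i(rep_model_i(x))."""
    assert len(rep_models) == len(detectors)
    probs = [det(rep(x)) for rep, det in zip(rep_models, detectors)]
    return sum(probs) / len(probs)

# Toy representation models (scalar "representations") and detectors
# (scalar scores squashed into [0, 1]).
rep_models = [lambda x: x * 2, lambda x: x + 1]
detectors = [lambda r: min(1.0, r / 10), lambda r: min(1.0, r / 20)]
```

The control variant uses the same function with every entry of `rep_models` set to one shared model.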
+
+### 3.2 Unit-Wise Detection
+
+With $N$ representation models, unit-wise detection incorporates a single representation unit from each underlying model to form an $N$-dimensional array of features as input to a single detection model. A baseline, which holds fixed the number of features for the detector, uses a set of units from a single representation model to form an input array for a detection model. The steps of both approaches are outlined below.
+
+## Unit-Wise Treatment
+
+Step 1 Extract a single representation unit for input $x$ from each of the $N$ representation models. There is no requirement on which unit is selected, nor on whether the selected units correspond across models.
+
+
+
+Step 2 Pass the $N$ -dimensional array of Step 1 representations through an adversarial detection model that outputs adversarial probability $P$ .
+
+
+
+## Unit-Wise Control
+
+Step 1 Extract $N$ units from the representations for input $x$ from a single representation model. In the illustration that follows, the count of extracted representation units, $N$ , matches the total number of units available. It's also possible for $N$ to be smaller than the quantity available.
+
+
+
+Step 2 Pass Step 1 representations through an adversarial detection model that outputs adversarial probability $P$ .
+
+
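The inputs to the two unit-wise variants can be sketched as follows; the helper names are hypothetical and the representation values are toys, not the paper's code:

```python
# Sketch of the unit-wise feature assembly. The treatment takes one
# unit from each of N representation vectors; the control takes N
# units from a single representation vector.

def unit_wise_treatment_features(representations, unit_indices):
    """One selected unit per representation model forms an N-dim input."""
    return [rep[i] for rep, i in zip(representations, unit_indices)]

def unit_wise_control_features(representation, unit_indices):
    """N selected units from one representation form an N-dim input."""
    return [representation[i] for i in unit_indices]

# Three toy models, each with a 2-unit representation.
reps = [[0.1, 0.9], [0.3, 0.7], [0.5, 0.5]]
```

Either feature array is then passed to a single detection model that outputs the adversarial probability $P$.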
+
+### 3.3 Measuring the Contribution from Multiple Models
+
+We are interested in measuring the contribution of multiple models for detecting adversarial instances. For both the model-wise and unit-wise detection techniques, the contribution of multiple models can be evaluated by inspecting the change in treatment performance when incrementing the number of representation models, $N$ . The changes should be considered relative to the control performance, to check whether any differences are coming from some aspect other than the incorporation of multiple representation models.
+
+## 4 Experiments
+
+### 4.1 Experimental Settings
+
+We conducted experiments using the CIFAR-10 dataset (Krizhevsky 2009), which is comprised of 60,000 ${32} \times {32}$ RGB images across 10 classes. The dataset, as received, was already split into 50,000 training images and 10,000 test images. We trained one neural network classifier that served as the target for generating adversarial attacks. We trained 1,024 additional neural network classifiers to be used as representation models, with representations extracted from the 512-dimensional penultimate layer of each network. A different randomization seed was used for initializing the weights of each of the 1,025 networks. Each network had the same 18-layer, 11,173,962-parameter ResNet-inspired architecture, with filter counts and depth matching the kuangliu ResNet-18 architecture. ${}^{1}$ Pixel values of input images were scaled by $1/{255}$ to lie between 0 and 1. The networks were trained for 100 epochs using an Adam optimizer (Kingma and Ba 2014), with random horizontal flipping and random crop sampling on images padded with 4 pixels per edge. The model for attack generation had ${91.95}\%$ accuracy on the test dataset. The average test accuracy across the 1,024 additional networks was ${92.22}\%$ with a sample standard deviation of 0.34%.
+
+Adversarial Attacks Untargeted adversarial perturbations were generated for the 9,195 images that were originally correctly classified by the attacked model. Attacks were conducted with FGSM, BIM, and CW, all using the cleverhans library (Papernot et al. 2018). After each attack, we clipped the perturbed images between 0 and 1 and quantized the pixel intensities to 256 discrete values. This way the perturbed instances could be represented in 24-bit RGB space.
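The clipping and quantization step can be sketched as follows; this is a pure-Python stand-in for the array operation actually applied to images, not the paper's code.

```python
# Sketch: clip perturbed pixel values to [0, 1], then quantize to
# 256 discrete levels so each image is representable in 24-bit RGB.

def clip_and_quantize(pixels):
    """Clip each value to [0, 1] and snap it to the nearest k/255."""
    out = []
    for p in pixels:
        p = min(1.0, max(0.0, p))       # keep the pixel in range
        out.append(round(p * 255) / 255)  # snap to one of 256 levels
    return out
```

Quantizing after the attack ensures the evaluated adversarial instances are images that could actually be stored and transmitted, rather than unconstrained real-valued tensors.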
+
+For FGSM, we set $\epsilon = 3/{255}$ for a maximum perturbation of 3 intensity values (out of 255) for each pixel on the unnormalized data. Accuracy of the attacked model on the 9,195 perturbed images was 21.13% (i.e., an attack success rate of 78.87%). Average accuracy on the 1,024 representation models was ${61.69}\%$ (i.e., an attack transfer success rate of ${38.31}\%$ ) with a sample standard deviation of 1.31%.
+
+For BIM, we used 10 iterations with $\alpha = 1/{255}$ and maximum perturbation magnitude clipped to $\epsilon = 3/{255}$ . This results in a maximum perturbation of 1 unnormalized intensity value per pixel on each step, with maximum perturbation after all steps clipped to 3 . Accuracy after attack was ${0.61}\%$ for the attacked model. Average accuracy on the 1,024 representation models was ${41.09}\%$ with sample standard deviation of 2.64%.
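A per-pixel sketch of BIM under these settings, with the gradient supplied by a caller-provided function; the toy gradient is an assumption, and this is not the cleverhans implementation.

```python
# Sketch of BIM (iterative FGSM) on a single pixel: step by
# alpha * sign(grad) each iteration, keep the total perturbation
# within the eps-ball, and keep the pixel in [0, 1].

def bim_attack(x, grad_fn, alpha=1/255, eps=3/255, steps=10):
    x_adv = x
    for _ in range(steps):
        g = grad_fn(x_adv)
        step = alpha if g > 0 else (-alpha if g < 0 else 0.0)
        x_adv = x_adv + step
        # Project back into the eps-ball around the original input.
        x_adv = min(x + eps, max(x - eps, x_adv))
        # Keep the pixel in the valid [0, 1] range.
        x_adv = min(1.0, max(0.0, x_adv))
    return x_adv

# Toy gradient that always pushes the pixel upward.
grad_up = lambda v: 1.0
```

With 10 steps of size 1/255 and the ball clipped at 3/255, the perturbation saturates at the eps bound after three steps, matching the settings described above.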
+
+---
+
+${}^{1}$ This differs from the ResNet-20 architecture used for CIFAR- 10 in the original ResNet paper (He et al. 2016).
+
+---
+
+
+
+Figure 1: Example CIFAR-10 images after adversarial perturbation. The original image, in the leftmost column, is followed by three columns corresponding to FGSM, BIM, and CW attacks, respectively. Images were chosen randomly from the set of test images that were correctly classified without perturbation, i.e., the population of images for which attacks were generated.
+
+For CW, we used an ${L}_{2}$ norm distance metric along with mostly default parameters: a learning rate of 0.005, 5 binary search steps, and 1,000 maximum iterations. We raised the confidence parameter ${}^{2}$ to 100 from its default of 0, which increases attack transferability. This makes our experiments more closely align with black-box and grey-box attack scenarios, where transferability would be an objective of an adversary. Accuracy after attack was ${0.07}\%$ for the attacked model. Average accuracy on the 1,024 representation models was ${5.86}\%$ with a sample standard deviation of 1.72%.
+
+Figure 1 shows examples of images that were perturbed for our experiments. These were chosen randomly from the 9,195 correctly classified test images-the population of images for which attacks were generated.
+
+Adversarial Detectors We use the 512-dimensional representation vectors extracted from the 1,024 representation models as inputs to model-wise and unit-wise adversarial detectors, in both treatment and control configurations, as described in Section 3. All detection models are binary classification neural networks that have a 100-dimensional hidden layer with a rectified linear unit activation function. We did not tune hyperparameters, instead using the defaults as specified by the library we employed, scikit-learn (Pedregosa et al. 2011). Model-wise detectors differed in their randomly initialized weights.
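A detector of the kind described above might be instantiated roughly as follows. This is a sketch, not the paper's code: the tiny synthetic dataset and seeds are assumptions for illustration, and `max_iter` is raised only to keep the toy fit quiet.

```python
# Sketch: a binary detector as a scikit-learn MLP with one
# 100-unit ReLU hidden layer, otherwise default settings.
from sklearn.neural_network import MLPClassifier
import numpy as np

rng = np.random.default_rng(0)
# Stand-in "representations": 40 clean (label 0) and 40 adversarial
# (label 1) 8-dimensional vectors drawn from shifted Gaussians.
X = np.vstack([rng.normal(0.0, 1.0, (40, 8)),
               rng.normal(1.0, 1.0, (40, 8))])
y = np.array([0] * 40 + [1] * 40)

detector = MLPClassifier(hidden_layer_sizes=(100,), activation="relu",
                         random_state=0, max_iter=300)
detector.fit(X, y)
probs = detector.predict_proba(X)  # column 1: adversarial probability
```

In the paper's setting the inputs would instead be 512-dimensional penultimate-layer representations.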
+
+To evaluate the contribution of multiple models, we run experiments that vary 1) the number of detection models used for model-wise detection, and 2) the number of units used for unit-wise detection. For the treatment experiments, the number of underlying representation models matches 1) the number of detection models for model-wise detection and 2) the number of units for unit-wise detection. For the control experiments, there is a single underlying representation model.
+
+The number of units for the unit-wise control models was limited to 512, based on the dimensionality of the penultimate layer representations. The number of units for the unit-wise treatment was extended beyond this since its limit is based on the number of representation models, for which we had more than 512. One way to incorporate more units into the unit-wise control experiments would be to draw units from other network layers, but we have not explored that for this paper.
+
+We are interested in the generalization capabilities of detectors trained with data from a specific attack. While the training datasets we constructed were each limited to a single attack algorithm, we separately tested each model using data attacked with each of the three algorithms-FGSM, BIM, and CW.
+
+For training and evaluating each detection model, the dataset consisted of 1) the 9,195 images that were originally correctly classified by the attacked model, and 2) the 9,195 corresponding perturbed variants. Models were trained with ${90}\%$ of the data and tested on the remaining ${10}\%$ . Each original image and its paired adversarial counterpart were grouped, i.e., they were never separated such that one would be used for training and the other for testing.
+
+We retained all 9,195 perturbed images and handled them identically (i.e., they were given the same class) for training and evaluation, including the instances that did not successfully deceive the attacked model. For BIM and CW, the consequence of this approach is presumably minor, since there were few unsuccessful attacks. For FGSM, which had a lower attack success rate, further work would be needed to 1) study the implications and/or 2) implement an alternative approach.
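The grouped splitting rule can be sketched as follows (hypothetical helper, not the paper's code): each clean/adversarial pair stays intact on one side of the 90/10 split.

```python
# Sketch of a paired train/test split that never separates an
# original image from its adversarial counterpart.
import random

def paired_split(pairs, test_fraction=0.1, seed=0):
    """Split (original, adversarial) pairs, keeping each pair together."""
    indices = list(range(len(pairs)))
    random.Random(seed).shuffle(indices)
    n_test = int(len(pairs) * test_fraction)
    test_idx = set(indices[:n_test])
    train, test = [], []
    for i, pair in enumerate(pairs):
        (test if i in test_idx else train).append(pair)
    return train, test

# Toy paired dataset of 100 clean/adversarial name pairs.
pairs = [(f"clean_{i}", f"adv_{i}") for i in range(100)]
train, test = paired_split(pairs)
```

Splitting at the pair level prevents leakage, since an adversarial image is a near copy of its clean counterpart.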
+
+We conducted 100 trials for each combination of settings. For each trial, random sampling was used for 1) splitting data into training and test groups, 2) choosing representation models, and 3) choosing which representation units to use for the unit-wise experiments.
+
+---
+
+${}^{2}$ Our description of CW in Section 2 does not discuss the $\kappa$ confidence parameter. See the CW paper (Carlini and Wagner 2017) for details.
+
+---
+
+
+
+Figure 2: Average model-wise adversarial input detection accuracies, where each point is calculated across 100 trials. The sample standard deviations were added to and subtracted from each sample mean to generate the shaded regions. The figure subplots each correspond to a specific attack used for the training data (indicated by the leftmost labels) and a specific attack used for the test data (indicated by the header labels). The endpoint values underlying the figure are provided in the appendix.
+
+### 4.2 Hardware and Software
+
+The experiments were conducted on a desktop computer running Ubuntu 21.04 with Python 3.9. The hardware includes an AMD Ryzen 9 3950X CPU, 64GB of memory, and an NVIDIA TITAN RTX GPU with 24GB of memory. The GPU was used for training the CIFAR-10 classifiers and generating adversarial attacks.
+
+The code for the experiments is available at https://anonymized/for/submission.
+
+### 4.3 Results
+
+Model-Wise Figure 2 shows average model-wise adversarial input detection accuracies, calculated from 100 trials, plotted against the number of detection models. The subplots represent different combinations of training data attacks and test data attacks. The endpoint values underlying the figure are provided in the appendix.
+
+Unit-Wise Figure 3 shows average unit-wise adversarial input detection accuracies, calculated from 100 trials, plotted against the number of units. The subplots represent different combinations of training data attacks and test data attacks. The endpoint values underlying the figure are provided in the appendix.
+
+## 5 Discussion
+
+Although subtle, for most scenarios the model-wise control experiments show an upward trend in accuracy as a function of the number of detection models. This is presumably an ensembling effect where there are benefits from combining multiple detection models even when they're each trained on the same features. The model-wise treatment experiments tend to outpace the corresponding controls, highlighting the benefit realized when the ensemble utilizes representations from distinct models.
+
+The increasing accuracy for the unit-wise control experiments, as a function of the number of units, is more discernible than for the corresponding model-wise control experiments (the latter being a function of the number of models). The unit-wise gains come from having more units, and thus more information, as discriminative features for detecting adversarial instances. In most scenarios the treatment experiments, which draw units from distinct representation models, have higher performance than the corresponding controls. An apparent additional benefit is being able to incorporate more units when drawing from multiple models, without being limited by the quantity of eligible units in a single model. However, drawing units from multiple models also comes at a practical cost, as it requires more computation relative to drawing from a single model.
+
+
+
+Figure 3: Average unit-wise adversarial input detection accuracies, where each point is calculated across 100 trials. The sample standard deviations were added to and subtracted from each sample mean to generate the shaded regions. The figure subplots each correspond to a specific attack used for the training data (indicated by the leftmost labels) and a specific attack used for the test data (indicated by the header labels). The endpoint values underlying the figure are provided in the appendix.
+
+As expected, detectors trained with data from a specific attack perform best when tested with data from the same attack. Interestingly, detectors trained with BIM attack data appear to generalize better relative to detectors trained with FGSM or CW attack data. This may be related to the hyper-parameters we used for each of the attacks, as opposed to being something representative of BIM more generally.
+
+## 6 Related Work
+
+We are aware of two general research areas that are related to what we've explored in this paper. The approaches include 1) the incorporation of ensembling for adversarial defense, and 2) the usage of hidden layer representations for detecting adversarial instances.
+
+### 6.1 Ensembling-Based Adversarial Defense
+
+Combining machine learning models is the hallmark of ensembling. For our work, we trained detection models that process representations extracted from multiple independently trained models. For model-wise detection, we averaged detection outputs across multiple models. Existing research has explored ensembling techniques in the context of defending against adversarial attacks (Liu et al. 2019). Bagnall, Bunescu, and Stewart (2017) train an ensemble, used both for the original classification task and for adversarial detection, such that the underlying models agree on clean samples and disagree on perturbed examples. The adaptive diversity promoting regularizer (Pang et al. 2019) was developed to increase model diversity, and decrease attack transferability, among the members of an ensemble. Abbasi et al. (2020) devise a way to train ensemble specialists and merge their predictions to mitigate the risk of adversarial examples.
+
+### 6.2 Attack Detection from Representations
+
+For our research we've extracted representations from independently trained classifiers to be used as features for adversarial example detectors. Hidden layer representations have been utilized in various other work on adversarial instance detection. Neural network invariant checking (Ma et al. 2019) detects adversarial samples based on whether internal activations conflict with invariants learned from non-adversarial data. Wójcik et al. (2020) use hidden layer activations to train autoencoders whose own hidden layer activations, along with reconstruction error, are used as features for attack detection. Li and Li (2017) develop a cascade classifier that incrementally incorporates statistics calculated on convolutional layer activations. At each stage, the instance is either classified as non-adversarial or passed along to the next stage of the cascade, which integrates features computed from an additional convolutional layer. In addition to the methods summarized above, detection techniques have also been developed that 1) model the relative positional dynamics of representations passing through a neural network (Carrara et al. 2019), 2) use hidden layer activations as features for a $k$-nearest neighbor classifier (Carrara et al. 2017), and 3) process the hidden layer units that were determined to be relevant for the classes of interest (Granda, Tuytelaars, and Oramas 2020).
+
+Table 1: Average model-wise adversarial input detection accuracies plus/minus sample standard deviations, calculated across 100 trials for each datum. These are a subset of the values used to generate Figure 2.
+
+| Train Attack | Number of Detection Models | FGSM (Control) | FGSM (Treatment) | BIM (Control) | BIM (Treatment) | CW (Control) | CW (Treatment) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| FGSM | 1 | $0.819 \pm 0.014$ | $0.820 \pm 0.014$ | $0.736 \pm 0.014$ | $0.735 \pm 0.014$ | $0.638 \pm 0.019$ | $0.637 \pm 0.020$ |
+| FGSM | 10 | $0.836 \pm 0.013$ | $0.892 \pm 0.006$ | $0.747 \pm 0.012$ | $0.799 \pm 0.009$ | $0.643 \pm 0.017$ | $0.661 \pm 0.013$ |
+| BIM | 1 | $0.765 \pm 0.017$ | $0.766 \pm 0.015$ | $0.788 \pm 0.013$ | $0.788 \pm 0.012$ | $0.767 \pm 0.014$ | $0.770 \pm 0.014$ |
+| BIM | 10 | $0.783 \pm 0.015$ | $0.839 \pm 0.009$ | $0.805 \pm 0.012$ | $0.864 \pm 0.008$ | $0.785 \pm 0.012$ | $0.840 \pm 0.010$ |
+| CW | 1 | $0.597 \pm 0.017$ | $0.600 \pm 0.017$ | $0.690 \pm 0.015$ | $0.691 \pm 0.016$ | $0.870 \pm 0.009$ | $0.870 \pm 0.010$ |
+| CW | 10 | $0.602 \pm 0.018$ | $0.601 \pm 0.011$ | $0.699 \pm 0.014$ | $0.727 \pm 0.010$ | $0.883 \pm 0.009$ | $0.937 \pm 0.005$ |
+
+Table 2: Average unit-wise adversarial input detection accuracies plus/minus sample standard deviations, calculated across 100 trials for each datum. These are a subset of values used to generate Figure 3.
+
+| Train Attack | Number of Units | FGSM (Control) | FGSM (Treatment) | BIM (Control) | BIM (Treatment) | CW (Control) | CW (Treatment) |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| FGSM | 8 | $0.671 \pm 0.014$ | $0.671 \pm 0.013$ | $0.646 \pm 0.012$ | $0.648 \pm 0.014$ | $0.556 \pm 0.024$ | $0.550 \pm 0.026$ |
+| FGSM | 512 | $0.820 \pm 0.016$ | $0.868 \pm 0.008$ | $0.739 \pm 0.013$ | $0.771 \pm 0.011$ | $0.639 \pm 0.019$ | $0.626 \pm 0.016$ |
+| FGSM | 1,024 | - | $0.890 \pm 0.008$ | - | $0.778 \pm 0.014$ | - | $0.629 \pm 0.016$ |
+| BIM | 8 | $0.654 \pm 0.013$ | $0.657 \pm 0.014$ | $0.662 \pm 0.012$ | $0.667 \pm 0.013$ | $0.600 \pm 0.019$ | $0.596 \pm 0.020$ |
+| BIM | 512 | $0.766 \pm 0.017$ | $0.815 \pm 0.010$ | $0.787 \pm 0.014$ | $0.837 \pm 0.009$ | $0.768 \pm 0.013$ | $0.809 \pm 0.009$ |
+| BIM | 1,024 | - | $0.838 \pm 0.010$ | - | $0.857 \pm 0.010$ | - | $0.838 \pm 0.011$ |
+| CW | 8 | $0.553 \pm 0.024$ | $0.550 \pm 0.026$ | $0.596 \pm 0.018$ | $0.592 \pm 0.019$ | $0.679 \pm 0.015$ | $0.678 \pm 0.017$ |
+| CW | 512 | $0.599 \pm 0.016$ | $0.588 \pm 0.012$ | $0.690 \pm 0.015$ | $0.689 \pm 0.013$ | $0.870 \pm 0.011$ | $0.922 \pm 0.007$ |
+| CW | 1,024 | - | $0.588 \pm 0.014$ | - | $0.694 \pm 0.016$ | - | $0.941 \pm 0.006$ |
+
+## 7 Conclusion and Future Work
+
+We presented two approaches for adversarial instance detection-model-wise and unit-wise-that incorporate the representations from multiple models. Using those two approaches, we devised controlled experiments comprised of treatments and controls, for measuring the contribution of multiple model representations in detecting adversarial instances. For many of the scenarios we considered, experiments showed that detection performance increased with the number of underlying models used for extracting representations.
+
+The research leaves open various avenues for future work.
+
+- For our experiments, we trained 1,024 neural network representation models, whose diversity arises from using a different randomization seed for each. Other methods for imposing diversity might affect the performance of the detectors that depend on those models.
+
+- It would be interesting to explore how existing adversarial defenses fare when extended to use multiple underlying models.
+
+- Although we evaluated detectors across different attack algorithms, we always used data from a single attack for the purpose of training. Future research could investigate the effect of training with data from multiple attacks and/or varying hyperparameter settings for a specific attack.
+
+- Our focus was on measuring the incremental gains of detecting attacks when incorporating multiple representation models. Further work could perform a thorough defense evaluation under more challenging threat models.
+
+## Appendix
+
+The endpoint values underlying Figure 2 are included in Table 1. The endpoint values underlying Figure 3 are included in Table 2.
+
+## References
+
+Abbasi, M.; Rajabi, A.; Gagné, C.; and Bobba, R. B. 2020. Toward Adversarial Robustness by Diversity in an Ensemble of Specialized Deep Neural Networks. In Goutte, C.; and Zhu, X., eds., Advances in Artificial Intelligence, Lecture Notes in Computer Science, 1-14. Cham: Springer International Publishing. ISBN 978-3-030-47358-7.
+
+Akhtar, N.; and Mian, A. 2018. Threat of Adversarial Attacks on Deep Learning in Computer Vision: A Survey. IEEE Access, 6: 14410-14430. Conference Name: IEEE Access.
+
+Bagnall, A.; Bunescu, R.; and Stewart, G. 2017. Training Ensembles to Detect Adversarial Examples. arXiv:1712.04006 [cs].
+
+Bengio, Y.; Lecun, Y.; and Hinton, G. 2021. Deep learning for AI. Communications of the ACM, 64(7): 58-65.
+
+Carlini, N.; and Wagner, D. 2017. Towards Evaluating the Robustness of Neural Networks. arXiv:1608.04644 [cs].
+
+Carrara, F.; Becarelli, R.; Caldelli, R.; Falchi, F.; and Amato, G. 2019. Adversarial Examples Detection in Features Distance Spaces. In Leal-Taixé, L.; and Roth, S., eds., Computer Vision - ECCV 2018 Workshops, Lecture Notes in Computer Science, 313-327. Cham: Springer International Publishing. ISBN 978-3-030-11012-3.
+
+Carrara, F.; Falchi, F.; Caldelli, R.; Amato, G.; Fumarola, R.; and Becarelli, R. 2017. Detecting adversarial example attacks to deep neural networks. In Proceedings of the 15th International Workshop on Content-Based Multimedia Indexing, CBMI '17, 1-7. New York, NY, USA: Association for Computing Machinery. ISBN 978-1-4503-5333-5.
+
+Dargan, S.; Kumar, M.; Ayyagari, M. R.; and Kumar, G. 2020. A Survey of Deep Learning and Its Applications: A New Paradigm to Machine Learning. Archives of Computational Methods in Engineering, 27(4): 1071-1092.
+
+Goodfellow, I.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations.
+
+Goodfellow, I. J.; Pouget-Abadie, J.; Mirza, M.; Xu, B.; Warde-Farley, D.; Ozair, S.; Courville, A.; and Bengio, Y. 2014. Generative Adversarial Nets. In Ghahramani, Z.; Welling, M.; Cortes, C.; Lawrence, N. D.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 27, 2672-2680. Curran Associates, Inc.
+
+Granda, R.; Tuytelaars, T.; and Oramas, J. 2020. Can the state of relevant neurons in a deep neural networks serve as indicators for detecting adversarial attacks? arXiv:2010.15974 [cs].
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770-778.
+
+Isola, P.; Zhu, J.-Y.; Zhou, T.; and Efros, A. A. 2017. Image-To-Image Translation With Conditional Adversarial Networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR).
+
+Kingma, D.; and Ba, J. 2014. Adam: A Method for Stochastic Optimization. arXiv:1412.6980 [cs].
+
+Krizhevsky, A. 2009. Learning Multiple Layers of Features from Tiny Images. Technical report.
+
+Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Pereira, F.; Burges, C. J. C.; Bottou, L.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems 25, 1097-1105. Curran Associates, Inc.
+
+kuangliu. 2017. kuangliu/pytorch-cifar. https://github.com/kuangliu/pytorch-cifar.
+
+Kurakin, A.; Goodfellow, I.; and Bengio, S. 2017. Adversarial examples in the physical world. arXiv:1607.02533 [cs, stat].
+
+Li, X.; and Li, F. 2017. Adversarial Examples Detection in Deep Networks with Convolutional Filter Statistics. In 2017 IEEE International Conference on Computer Vision (ICCV), 5775-5783. ISSN: 2380-7504.
+
+Liu, L.; Wei, W.; Chow, K.-H.; Loper, M.; Gursoy, E.; Truex, S.; and Wu, Y. 2019. Deep Neural Network Ensembles Against Deception: Ensemble Diversity, Accuracy and Robustness. In 2019 IEEE 16th International Conference on Mobile Ad Hoc and Sensor Systems (MASS), 274-282. ISSN: 2155-6814.
+
+Ma, S.; Liu, Y.; Tao, G.; Lee, W.-C.; and Zhang, X. 2019. NIC: Detecting Adversarial Samples with Neural Network Invariant Checking. In Proceedings 2019 Network and Distributed System Security Symposium. San Diego, CA: Internet Society. ISBN 978-1-891562-55-6.
+
+McCulloch, W. S.; and Pitts, W. 1943. A logical calculus of the ideas immanent in nervous activity. The bulletin of mathematical biophysics, 5(4): 115-133.
+
+Pang, T.; Xu, K.; Du, C.; Chen, N.; and Zhu, J. 2019. Improving Adversarial Robustness via Promoting Ensemble Diversity. In Proceedings of the 36th International Conference on Machine Learning, 4970-4979. PMLR. ISSN: 2640-3498.
+
+Papernot, N.; Faghri, F.; Carlini, N.; Goodfellow, I.; Feinman, R.; Kurakin, A.; Xie, C.; Sharma, Y.; Brown, T.; Roy, A.; Matyasko, A.; Behzadan, V.; Hambardzumyan, K.; Zhang, Z.; Juang, Y.-L.; Li, Z.; Sheatsley, R.; Garg, A.; Uesato, J.; Gierke, W.; Dong, Y.; Berthelot, D.; Hendricks, P.; Rauber, J.; and Long, R. 2018. Technical Report on the CleverHans v2.1.0 Adversarial Examples Library. arXiv preprint arXiv:1610.00768.
+
+Pedregosa, F.; Varoquaux, G.; Gramfort, A.; Michel, V.; Thirion, B.; Grisel, O.; Blondel, M.; Prettenhofer, P.; Weiss, R.; Dubourg, V.; Vanderplas, J.; Passos, A.; Cournapeau, D.; Brucher, M.; Perrot, M.; and Duchesnay, E. 2011. Scikit-learn: Machine Learning in Python. Journal of Machine Learning Research, 12: 2825-2830.
+
+Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations.
+
+Wójcik, B.; Morawiecki, P.; Smieja, M.; Krzyżek, T.; Spurek, P.; and Tabor, J. 2020. Adversarial Examples Detection and Analysis with Layer-wise Autoencoders. arXiv:2006.10013 [cs, stat].
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/LGlhzn1ZJl/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/LGlhzn1ZJl/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..cca82a1779fa3ba8deee778d584510369f708368
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/LGlhzn1ZJl/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,301 @@
+§ MEASURING THE CONTRIBUTION OF MULTIPLE MODEL REPRESENTATIONS IN DETECTING ADVERSARIAL INSTANCES
+
+Anonymous Author(s)
+
+Affiliation
+
+Address
+
+email
+
+§ ABSTRACT
+
+Deep learning models have been used for a wide variety of tasks. They are prevalent in computer vision, natural language processing, speech recognition, and other areas. While these models have worked well under many scenarios, it has been shown that they are vulnerable to adversarial attacks. This has led to a proliferation of research into ways that such attacks could be identified and/or defended against. Our goal is to explore the contribution that can be attributed to using multiple underlying models for the purpose of adversarial instance detection. Our paper describes two approaches that incorporate representations from multiple models for detecting adversarial examples. We devise controlled experiments for measuring the detection impact of incrementally utilizing additional models. For many of the scenarios we consider, the results show that performance increases with the number of underlying models used for extracting representations.
+
+Code is available at https://anonymized/for/submission.
+
+§ 1 INTRODUCTION
+
+Research on neural networks has progressed for many decades, from early work modeling neural activity (McCulloch and Pitts 1943) to the more recent rise of deep learning (Bengio, Lecun, and Hinton 2021). Notable applications include image classification (Krizhevsky, Sutskever, and Hinton 2012), image generation (Goodfellow et al. 2014), image translation (Isola et al. 2017), and many others (Dargan et al. 2020). Alongside these demonstrated successes, it has also been shown that carefully crafted adversarial instances, which appear as normal images to humans, can be used to deceive deep learning models (Szegedy et al. 2014), resulting in incorrect output. The discovery of adversarial instances has led to a broad range of related research, including 1) the development of new attacks, 2) the characterization of attack properties, and 3) defense techniques. Akhtar and Mian (2018) present a comprehensive survey on the threat of adversarial attacks to deep learning systems used for computer vision.
+
+Two general approaches that have been proposed for defending against adversarial attacks, both discussed further in Section 6, are 1) the usage of model ensembling and 2) the incorporation of hidden layer representations as discriminative features for identifying perturbed data. Building on these ideas, we explore the performance implications that can be attributed to using representations from multiple models for the purpose of adversarial instance detection.
+
+Our Contribution In Section 3 we present two approaches that use neural network representations as features for an adversarial detector. For each technique we devise a treatment and control variant in order to measure the impact of using multiple networks for extracting representations. Our controlled experiments in Section 4 measure the effect of using multiple models. For many of the scenarios we consider, detection performance increased as a function of the underlying model count.
+
+§ 2 PRELIMINARIES
+
+Our research incorporates $l$ -layer feedforward neural networks, functions $h : \mathcal{X} \rightarrow \mathcal{Y}$ that map input $x \in \mathcal{X}$ to output $\widehat{y} \in \mathcal{Y}$ through linear preactivation functions ${f}_{i}$ and nonlinear activation functions ${\phi }_{i}$ .
+
+$$
+\widehat{y} = h\left( x\right) = {\phi }_{l} \circ {f}_{l} \circ {\phi }_{l - 1} \circ {f}_{l - 1} \circ \ldots \circ {\phi }_{1} \circ {f}_{1}\left( x\right)
+$$
+
+The models we consider are classifiers, where the outputs are discrete labels. For input $x$ and its true class label $y$ , let $J\left( {x,y}\right)$ denote the corresponding loss of a trained neural network. Our notation omits the dependence on model parameters $\theta$ , for convenience.
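As a minimal illustration of the layered composition above (a sketch with hypothetical fixed weights for $l = 2$, using ReLU as ${\phi }_{1}$ and softmax as ${\phi }_{2}$; not any model from our experiments):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(5, 4)), np.zeros(5)  # f_1(z) = W1 z + b1
W2, b2 = rng.normal(size=(3, 5)), np.zeros(3)  # f_2(z) = W2 z + b2

def h(x):
    # y_hat = phi_2(f_2(phi_1(f_1(x))))
    return softmax(W2 @ relu(W1 @ x + b1) + b2)

y_hat = h(rng.normal(size=4))   # probability vector over 3 classes
label = int(np.argmax(y_hat))   # discrete class label
```

For classification, the discrete label is obtained by taking the argmax of the final softmax output.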
+
+### 2.1 Adversarial Attacks
+
+Consider input $x$ that is correctly classified by neural network $h$ . For an untargeted adversarial attack, the adversary tries to devise a small additive perturbation ${\Delta x}$ such that adversarial input ${x}^{\text{ adv }} = x + {\Delta x}$ changes the classifier’s output (i.e., $h\left( x\right) \neq h\left( {x}^{\text{ adv }}\right)$ ). For a targeted attack, a desired value for $h\left( {x}^{\text{ adv }}\right)$ is an added objective. In both cases, the ${L}_{p}$ norm of ${\Delta x}$ is typically constrained to be less than some threshold $\epsilon$ . Different threat models (white-box, grey-box, and black-box) correspond to varying levels of knowledge that the adversary has about the model being used, its parameters, and its possible defenses.
+
+The adversary's objective can be expressed as an optimization problem. For example, the following constrained maximization of the loss function is one way of formulating how an adversary could generate an untargeted adversarial input ${x}^{adv}$ .
+
+$$
+{\Delta x} = \mathop{\operatorname{argmax}}\limits_{\delta }J\left( {x + \delta ,y}\right)
+$$
+
+$$
+\text{ subject to }\parallel \delta {\parallel }_{p} \leq \epsilon
+$$
+
+$$
+x + \delta \in \mathcal{X}
+$$
+
+There are various ways to generate attacks. Under many formulations it is challenging to devise an exact computation of ${\Delta x}$ that optimizes the objective function, so an approximation is often employed.
+
+Fast Gradient Sign Method (FGSM) (Goodfellow, Shlens, and Szegedy 2015) generates an adversarial perturbation ${\Delta x} = \epsilon \cdot \operatorname{sign}\left( {{\nabla }_{x}J\left( {x,y}\right) }\right)$ , a single step in the approximate direction of the loss function gradient. The sign function bounds its output to an ${L}_{\infty }$ norm of 1, which is then scaled by $\epsilon$ .
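FGSM can be sketched in a few lines. Below, a toy logistic-regression loss (a hypothetical stand-in for a trained network, chosen so the input-gradient has a closed form) plays the role of $J$:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy differentiable "model": logistic regression with fixed weights,
# standing in for a trained network so that grad_x J is available in
# closed form.
w = np.array([0.7, -1.2, 0.4])
b = 0.1

def loss(x, y):
    # Binary cross-entropy J(x, y) for label y in {0, 1}.
    p = sigmoid(w @ x + b)
    return -(y * np.log(p) + (1 - y) * np.log(1 - p))

def grad_x_loss(x, y):
    # dJ/dx = (p - y) * w for this model.
    return (sigmoid(w @ x + b) - y) * w

def fgsm(x, y, eps):
    # Single step of size eps along the sign of the loss gradient.
    return x + eps * np.sign(grad_x_loss(x, y))

x = np.array([0.2, 0.5, -0.3])
x_adv = fgsm(x, y=1, eps=0.03)  # each component moves by at most eps
```

By construction the perturbation has ${L}_{\infty }$ norm at most $\epsilon$, and the loss at the perturbed point is larger than at the original input for this toy model.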
+
+Basic Iterative Method (BIM) (Kurakin, Goodfellow, and Bengio 2017) iteratively applies FGSM, whereby ${x}_{t}^{adv} = {x}_{t - 1}^{adv} + \alpha \cdot \operatorname{sign}\left( {{\nabla }_{x}J\left( {{x}_{t - 1}^{adv},y}\right) }\right)$ for each step, starting with ${x}_{0}^{adv} = x$ . The ${L}_{\infty }$ norm is bounded by $\alpha$ on each iteration and by $t \cdot \alpha$ after $t$ iterations. ${x}_{t}^{adv}$ can be clipped after each iteration in a way that constrains the final ${x}^{\text{ adv }}$ to an $\epsilon$ -ball of $x$ .
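The iterate-then-clip loop can be sketched as follows (a minimal version with a hypothetical constant-gradient toy model standing in for ${\nabla }_{x}J$; the step size, bound, and iteration count match the values used later in Section 4.1):

```python
import numpy as np

# Hypothetical stand-in for the loss gradient of a trained model.
w = np.array([0.8, -0.5, 0.3])
def toy_grad(x, y):
    return (1 - 2 * y) * w  # direction that increases the toy loss

def bim(x, y, grad_fn, alpha, eps, steps):
    # Iterative FGSM: step by alpha each iteration, then clip back into
    # the eps-ball around x and into the valid input range [0, 1].
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv, y))
        x_adv = np.clip(x_adv, x - eps, x + eps)  # eps-ball constraint
        x_adv = np.clip(x_adv, 0.0, 1.0)          # valid pixel range
    return x_adv

x = np.array([0.2, 0.5, 0.7])
x_adv = bim(x, y=0, grad_fn=toy_grad, alpha=1/255, eps=3/255, steps=10)
```

The per-iteration clip is what keeps the final ${x}^{\text{ adv }}$ inside the $\epsilon$ -ball of $x$ even when the accumulated steps would otherwise exceed it.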
+
+Carlini & Wagner (CW) (Carlini and Wagner 2017) generates an adversarial perturbation via gradient descent to solve ${\Delta x} = {\operatorname{argmin}}_{\delta }\left( {\parallel \delta {\parallel }_{p} + c \cdot f\left( {x + \delta }\right) }\right)$ subject to a box constraint on $x + \delta$ . Here $f$ is a function for which $f\left( {x + \delta }\right) \leq 0$ if and only if the target classifier is successfully attacked. Of the candidate functions considered, experimentation identified the most effective $f$ for targeted attacks. $c$ is a positive constant that can be found with binary search, a strategy that worked well empirically. Clipping or a change of variables can be used to accommodate the box constraint.
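One margin-style choice of $f$ for targeted attacks from the CW paper, incorporating the confidence parameter $\kappa$ discussed in the footnote to Section 4, can be sketched on raw logits as:

```python
import numpy as np

def cw_f(logits, target, kappa=0.0):
    # f(x') = max( max_{i != t} Z(x')_i - Z(x')_t , -kappa )
    # The value drops to -kappa (i.e., f + kappa <= 0) once the target
    # class leads every other class by at least kappa.
    z = np.asarray(logits, dtype=float)
    other = np.max(np.delete(z, target))
    return max(other - z[target], -kappa)

# Target class 2 trails the top class by 1.0, so f is positive.
unsuccessful = cw_f([3.0, 1.0, 2.0], target=2)
# Target class 2 leads by 1.5 >= kappa = 1.0, so f saturates at -kappa.
successful = cw_f([0.5, 1.0, 2.5], target=2, kappa=1.0)
```

Minimizing this $f$ therefore pushes the target logit above all others, with larger $\kappa$ demanding a larger margin (which is what increases transferability).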
+
+### 2.2 Ensembling
+
+Our research draws inspiration from ensembling, the combination of multiple models to improve performance relative to the component models themselves. There are various ways of combining models. An approach that is widely used in deep learning averages the outputs from an assortment of neural networks, each having the same architecture but trained from a different set of randomly initialized weights.
+
+## 3 Method
+
+To detect adversarial instances, we use hidden layer representations (from representation models) as inputs to adversarial detection models. For our experiments in Section 4, the representation models are convolutional neural networks that are independently trained for the same classification task, initialized with different weights. Representations are extracted from the penultimate layers of the trained networks. The method we describe in this section is more general, as various approaches could be used for preparing representation models. For example, each representation model could be an independently trained autoencoder, as opposed to a classifier, with representations for each model extracted from arbitrary hidden layers. Additionally, it is not necessary that each of the models used for extracting representations has the same architecture.
+
+We devise two broad techniques, model-wise and unit-wise, for extracting representations and detecting adversarial instances. These approaches each have two formulations: a treatment that incorporates multiple representation models and a control that uses a single representation model. For each technique, the functional form of the detection step is the same across treatment and control. This serves our objective of measuring the contribution of incrementally incorporating multiple representation models, as the control makes it possible to check whether gains are coming from some aspect other than the incorporation of multiple representation models.
+
+The illustrations in this section are best viewed in color.
+
+### 3.1 Model-Wise Detection
+
+With $N$ representation models, model-wise detection uses a set of representations from each underlying model as separate input to $N$ corresponding detection models that each output an adversarial score. These scores, which we interpret as estimated probabilities, are then averaged to give an ensemble adversarial probability estimate. A baseline, holding fixed the number of detectors, uses a single representation model as a repeated input to multiple detection models. The steps of both approaches are outlined below.
+
+#### Model-Wise Treatment
+
+**Step 1** Extract representations for input $x$ from $N$ representation models.
+
+
+**Step 2** Pass the Step 1 representations through $N$ corresponding detection models that each output an adversarial probability (denoted ${P}_{i}$ for model $i$ ).
+
+
+**Step 3** Calculate adversarial probability $P$ as the average of the Step 2 adversarial probabilities.
+
+$$
+P = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{P}_{i}
+$$
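The three steps above can be sketched end-to-end. The "representation models" and "detectors" below are fixed random maps, hypothetical stand-ins for trained networks, used only to show the data flow:

```python
import numpy as np

rng = np.random.default_rng(0)
N, D, R = 4, 8, 16  # number of models, input dim, representation dim

# Stand-in representation models (Step 1) and paired detectors (Step 2).
rep_weights = [rng.normal(size=(R, D)) for _ in range(N)]
det_weights = [rng.normal(size=R) for _ in range(N)]

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model_wise_treatment(x):
    reps = [np.tanh(W @ x) for W in rep_weights]                 # Step 1
    probs = [sigmoid(v @ r) for v, r in zip(det_weights, reps)]  # Step 2
    return float(np.mean(probs))                                 # Step 3

P = model_wise_treatment(rng.normal(size=D))  # ensemble probability
```

The control variant differs only in Step 1, where a single representation model's output is fed to all $N$ detectors.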
+
+#### Model-Wise Control
+
+**Step 1** Extract representations for input $x$ from a single representation model.
+
+
+**Step 2** Pass the Step 1 representations through $N$ detection models that each output an adversarial probability (denoted ${P}_{i}$ for model $i$ ).
+
+
+**Step 3** Calculate adversarial probability $P$ as the average of the Step 2 adversarial probabilities.
+
+$$
+P = \frac{1}{N}\mathop{\sum }\limits_{{i = 1}}^{N}{P}_{i}
+$$
+
+### 3.2 Unit-Wise Detection
+
+With $N$ representation models, unit-wise detection incorporates a single representation unit from each underlying model to form an $N$ -dimensional array of features as input to a single detection model. A baseline, holding fixed the number of features for the detector, uses a set of units from a single representation model to form an input array for a detection model. The steps of both approaches are outlined below.
+
+#### Unit-Wise Treatment
+
+**Step 1** Extract a single representation unit for input $x$ from each of $N$ representation models. There is no requirement on which unit is selected, nor any need for correspondence between the units selected from different models.
+
+
+**Step 2** Pass the $N$ -dimensional array of Step 1 representations through an adversarial detection model that outputs adversarial probability $P$ .
+
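The unit-wise treatment can be sketched with stand-in data (hypothetical random representations and detector weights, matching the paper's 512-unit representation width):

```python
import numpy as np

rng = np.random.default_rng(1)
N, R = 6, 512                                  # models, representation width
reps = [rng.normal(size=R) for _ in range(N)]  # Step 1: one vector per model

# Arbitrary unit choice per model; no correspondence is required.
unit_idx = rng.integers(0, R, size=N)
features = np.array([reps[i][unit_idx[i]] for i in range(N)])  # shape (N,)

# Step 2: a single (stand-in) detector scores the N-dimensional array.
det_w = rng.normal(size=N)
P = 1.0 / (1.0 + np.exp(-float(det_w @ features)))
```

The control variant instead draws all $N$ feature units from one model's representation vector.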
+
+#### Unit-Wise Control
+
+**Step 1** Extract $N$ units from the representations for input $x$ from a single representation model. In the illustration that follows, the count of extracted representation units, $N$ , matches the total number of units available. It is also possible for $N$ to be smaller than the quantity available.
+
+
+**Step 2** Pass the Step 1 representations through an adversarial detection model that outputs adversarial probability $P$ .
+
+
+### 3.3 Measuring the Contribution from Multiple Models
+
+We are interested in measuring the contribution of multiple models for detecting adversarial instances. For both the model-wise and unit-wise detection techniques, the contribution of multiple models can be evaluated by inspecting the change in treatment performance when incrementing the number of representation models, $N$ . The changes should be considered relative to the control performance, to check whether any differences are coming from some aspect other than the incorporation of multiple representation models.
+
+## 4 Experiments
+
+### 4.1 Experimental Settings
+
+We conducted experiments using the CIFAR-10 dataset (Krizhevsky 2009), which is comprised of 60,000 ${32} \times {32}$ RGB images across 10 classes. The dataset, as received, was already split into 50,000 training images and 10,000 test images. We trained one neural network classifier that served as the target for generating adversarial attacks. We trained 1,024 additional neural network classifiers to be used as representation models, with representations extracted from the 512-dimensional penultimate layer of each network. A different randomization seed was used for initializing the weights of each of the 1,025 networks. Each network had the same 18-layer, 11,173,962-parameter, ResNet-inspired architecture, with filter counts and depth matching the kuangliu ResNet-18 architecture. ${}^{1}$ Pixel values of input images were scaled by $1/{255}$ to be between 0 and 1. The networks were trained for 100 epochs using an Adam optimizer (Kingma and Ba 2014), with random horizontal flipping and random crop sampling on images padded with 4 pixels per edge. The model for attack generation had ${91.95}\%$ accuracy on the test dataset. The average test accuracy across the 1,024 additional networks was ${92.22}\%$ with a sample standard deviation of 0.34%.
+
+**Adversarial Attacks** Untargeted adversarial perturbations were generated for the 9,195 test images that were originally correctly classified by the attacked model. Attacks were conducted with FGSM, BIM, and CW, all using the cleverhans library (Papernot et al. 2018). After each attack, we clipped the perturbed images between 0 and 1 and quantized the pixel intensities to 256 discrete values. In this way, the perturbed instances could be represented in 24-bit RGB space.
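The clip-and-quantize post-processing can be sketched as follows (a minimal numpy version; the sample values are illustrative only):

```python
import numpy as np

def to_24bit_rgb(x):
    # Clip perturbed pixels back into [0, 1], then quantize to 256
    # discrete levels so the result is exactly representable with
    # 8 bits per channel (24-bit RGB).
    x = np.clip(x, 0.0, 1.0)
    return np.round(x * 255.0) / 255.0

x_adv = np.array([-0.01, 0.5003, 1.02])  # out-of-range, off-grid values
q = to_24bit_rgb(x_adv)
```

Quantization matters for evaluation: without it, a detector could exploit perturbations that no real saved image could actually carry.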
+
+For FGSM, we set $\epsilon = 3/{255}$ for a maximum perturbation of 3 intensity values (out of 255) for each pixel on the unnormalized data. Model accuracy on the attacked model, for the 9,195 perturbed images, was 21.13% (i.e., an attack success rate of 78.87%). Average accuracy on the 1,024 representation models was ${61.69}\%$ (i.e., an attack transfer success rate of ${38.31}\%$ ) with a sample standard deviation of 1.31%.
+
+For BIM, we used 10 iterations with $\alpha = 1/{255}$ and maximum perturbation magnitude clipped to $\epsilon = 3/{255}$ . This results in a maximum perturbation of 1 unnormalized intensity value per pixel on each step, with the maximum perturbation after all steps clipped to 3. Accuracy after attack was ${0.61}\%$ for the attacked model. Average accuracy on the 1,024 representation models was ${41.09}\%$ with a sample standard deviation of 2.64%.
+
+${}^{1}$ This differs from the ResNet-20 architecture used for CIFAR-10 in the original ResNet paper (He et al. 2016).
+
+
+Figure 1: Example CIFAR-10 images after adversarial perturbation. The original image, in the leftmost column, is followed by three columns corresponding to FGSM, BIM, and CW attacks, respectively. Images were chosen randomly from the set of test images that were correctly classified without perturbation, the population of images for which attacks were generated.
+
+For CW, we used an ${L}_{2}$ norm distance metric along with mostly default parameters: a learning rate of 0.005, 5 binary search steps, and 1,000 maximum iterations. We raised the confidence parameter ${}^{2}$ to 100 from its default of 0, which increases attack transferability. This makes our experiments more closely align with black-box and grey-box attack scenarios, where transferability would be an objective of an adversary. Accuracy after attack was ${0.07}\%$ for the attacked model. Average accuracy on the 1,024 representation models was ${5.86}\%$ with a sample standard deviation of 1.72%.
+
+Figure 1 shows examples of images that were perturbed for our experiments. These were chosen randomly from the 9,195 correctly classified test images, the population of images for which attacks were generated.
+
+**Adversarial Detectors** We use the 512-dimensional representation vectors extracted from the 1,024 representation models as inputs to model-wise and unit-wise adversarial detectors, in both treatment and control configurations, as described in Section 3. All detection models are binary classification neural networks that have a 100-dimensional hidden layer with a rectified linear unit activation function. We did not tune hyperparameters, instead using the defaults as specified by the library we employed, scikit-learn (Pedregosa et al. 2011). Model-wise detectors differed in their randomly initialized weights.
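A single detection model of this form can be sketched with scikit-learn, whose `MLPClassifier` defaults already give a 100-unit ReLU hidden layer. The 512-dimensional "representations" below are synthetic stand-ins, not features from our trained networks:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in features: "adversarial" representations are drawn
# from a shifted distribution so the toy problem is learnable.
rng = np.random.default_rng(0)
X_clean = rng.normal(0.0, 1.0, size=(64, 512))
X_adv = rng.normal(0.5, 1.0, size=(64, 512))
X = np.vstack([X_clean, X_adv])
y = np.array([0] * 64 + [1] * 64)  # 1 = adversarial

detector = MLPClassifier(hidden_layer_sizes=(100,), activation="relu",
                         random_state=0, max_iter=50)
detector.fit(X, y)
scores = detector.predict_proba(X)[:, 1]  # adversarial probability P_i
```

In model-wise detection, the `scores` from $N$ such detectors (one per representation model) are averaged into the ensemble probability $P$.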
+
+To evaluate the contribution of multiple models, we run experiments that vary 1) the number of detection models used for model-wise detection, and 2) the number of units used for unit-wise detection. For the treatment experiments, the number of underlying representation models matches 1) the number of detection models for model-wise detection and 2) the number of units for unit-wise detection. For the control experiments, there is a single underlying representation model.
+
+The number of units for the unit-wise control models was limited to 512, based on the dimensionality of the penultimate layer representations. The number of units for the unit-wise treatment was extended beyond this since its limit is based on the number of representation models, for which we had more than 512. One way to incorporate more units into the unit-wise control experiments would be to draw units from other network layers, but we have not explored that for this paper.
+
+We are interested in the generalization capabilities of detectors trained with data from a specific attack. While the training datasets we constructed were each limited to a single attack algorithm, we separately tested each model using data attacked with each of the three algorithms: FGSM, BIM, and CW.
+
+For training and evaluating each detection model, the dataset consisted of 1) the 9,195 images that were originally correctly classified by the attacked model, and 2) the 9,195 corresponding perturbed variants. Models were trained with ${90}\%$ of the data and tested on the remaining ${10}\%$ . Each original image and its paired adversarial counterpart were grouped, i.e., they were never separated such that one would be used for training and the other for testing.
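The grouped split can be sketched as follows (a hypothetical helper, splitting at the level of clean/adversarial pairs rather than individual images):

```python
import numpy as np

def paired_split(n_pairs, test_frac=0.10, seed=0):
    # Shuffle PAIR indices, so an original image and its perturbed
    # counterpart always land on the same side of the split.
    rng = np.random.default_rng(seed)
    order = rng.permutation(n_pairs)
    n_test = int(round(test_frac * n_pairs))
    return order[n_test:], order[:n_test]  # train pair ids, test pair ids

train_ids, test_ids = paired_split(9195)
```

Splitting by pair prevents leakage: a detector must not see the clean version of a test-set adversarial image during training.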
+
+We retained all 9,195 perturbed images and handled them the same (i.e., they were given the same class label) for training and evaluation, including the instances that did not successfully deceive the attacked model. For BIM and CW, the consequence of this approach is presumably minor, since there were few unsuccessful attacks. For FGSM, which had a lower attack success rate, further work would be needed to 1) study the implications and/or 2) implement an alternative approach.
+
+We conducted 100 trials for each combination of settings. For each trial, random sampling was used for 1) splitting data into training and test groups, 2) choosing representation models, and 3) choosing which representation units to use for the unit-wise experiments.
+
+${}^{2}$ Our description of CW in Section 2 does not discuss the $\kappa$ confidence parameter. See the CW paper (Carlini and Wagner 2017) for details.
+
+
+Figure 2: Average model-wise adversarial input detection accuracies, where each point is calculated across 100 trials. The sample standard deviations were added to and subtracted from each sample mean to generate the shaded regions. The figure subplots each correspond to a specific attack used for the training data, as indicated by the leftmost labels, and a specific attack used for the test data, as indicated by the header labels. The endpoint values underlying the figure are provided in the appendix.
+
+### 4.2 Hardware and Software
+
+The experiments were conducted on a desktop computer running Ubuntu 21.04 with Python 3.9. The hardware includes an AMD Ryzen 9 3950X CPU, 64GB of memory, and an NVIDIA TITAN RTX GPU with 24GB of memory. The GPU was used for training the CIFAR-10 classifiers and generating adversarial attacks.
+
+The code for the experiments is available at https://anonymized/for/submission.
+
+### 4.3 Results
+
+**Model-Wise** Figure 2 shows average model-wise adversarial input detection accuracies, calculated from 100 trials, plotted against the number of detection models. The subplots represent different combinations of training data attacks and test data attacks. The endpoint values underlying the figure are provided in the appendix.
+
+**Unit-Wise** Figure 3 shows average unit-wise adversarial input detection accuracies, calculated from 100 trials, plotted against the number of units. The subplots represent different combinations of training data attacks and test data attacks. The endpoint values underlying the figure are provided in the appendix.
+
+## 5 Discussion
+
+Although subtle, for most scenarios the model-wise control experiments show an upward trend in accuracy as a function of the number of detection models. This is presumably an ensembling effect, where there are benefits from combining multiple detection models even when they are each trained on the same features. The model-wise treatment experiments tend to outpace the corresponding controls, highlighting the benefit realized when the ensemble utilizes representations from distinct models.
+
+The increasing accuracy for the unit-wise control experiments, as a function of the number of units, is more discernible than for the corresponding model-wise control experiments (the latter being a function of the number of models). The unit-wise gains come from having more units, and thus more information, as discriminative features for detecting adversarial instances. In most scenarios the treatment experiments, which draw units from distinct representation models, have higher performance than the corresponding controls. An apparent additional benefit is being able to incorporate more units when drawing from multiple models, without being limited by the quantity of eligible units in a single model. However, drawing units from multiple models also comes at a practical cost, as it requires more computation relative to drawing from a single model.
+
+
+Figure 3: Average unit-wise adversarial input detection accuracies, where each point is calculated across 100 trials. The sample standard deviations were added to and subtracted from each sample mean to generate the shaded regions. The figure subplots each correspond to a specific attack used for the training data, as indicated by the leftmost labels, and a specific attack used for the test data, as indicated by the header labels. The endpoint values underlying the figure are provided in the appendix.
+
+As expected, detectors trained with data from a specific attack perform best when tested with data from the same attack. Interestingly, detectors trained with BIM attack data appear to generalize better than detectors trained with FGSM or CW attack data. This may be related to the hyperparameters we used for each of the attacks, as opposed to being something representative of BIM more generally.
+
+## 6 Related Work
+
+We are aware of two general research areas that are related to what we have explored in this paper. These include 1) the incorporation of ensembling for adversarial defense, and 2) the usage of hidden layer representations for detecting adversarial instances.
+
+### 6.1 Ensembling-Based Adversarial Defense
+
+Combining machine learning models is the hallmark of ensembling. For our work, we trained detection models that process representations extracted from multiple independently trained models. For model-wise detection, we averaged detection outputs across multiple models. Existing research has explored ensembling techniques in the context of defending against adversarial attacks (Liu et al. 2019). Bagnall, Bunescu, and Stewart train an ensemble, used both for the original classification task and for adversarial detection, such that the underlying models agree on clean samples and disagree on perturbed examples. The adaptive diversity promoting regularizer (Pang et al. 2019) was developed to increase model diversity, and decrease attack transferability, among the members of an ensemble. Abbasi et al. devise a way to train ensemble specialists and merge their predictions to mitigate the risk of adversarial examples.
+
+### 6.2 Attack Detection from Representations
+
+For our research we have extracted representations from independently trained classifiers to be used as features for adversarial example detectors. Hidden layer representations have been utilized in various other works on adversarial instance detection. Neural network invariant checking (Ma et al. 2019) detects adversarial samples based on whether internal activations conflict with invariants learned from non-adversarial data. Wójcik et al. use hidden layer activations to train autoencoders whose own hidden layer activations, along with reconstruction error, are used as features for attack detection. Li and Li develop a cascade classifier that incrementally incorporates statistics calculated on convolutional layer activations. At each stage, the instance is either classified as non-adversarial or passed along to the next stage of the cascade, which integrates features computed from an additional convolutional layer. In addition to the methods summarized above, detection techniques have also been developed that 1) model the relative-positioned dynamics of representations passing through a neural network (Carrara et al. 2019), 2) use hidden layer activations as features for a $k$ -nearest neighbor classifier (Carrara et al. 2017), and 3) process the hidden layer units that were determined to be relevant for the classes of interest (Granda, Tuytelaars, and Oramas 2020).
+
+Table 1: Average model-wise adversarial input detection accuracies plus/minus sample standard deviations, calculated across 100 trials for each datum. These are a subset of the values used to generate Figure 2. Column headers give the test attack.
+
+| Train Attack | Number of Detection Models | FGSM (Control) | FGSM (Treatment) | BIM (Control) | BIM (Treatment) | CW (Control) | CW (Treatment) |
+|---|---|---|---|---|---|---|---|
+| FGSM | 1 | $0.819 \pm 0.014$ | $0.820 \pm 0.014$ | $0.736 \pm 0.014$ | $0.735 \pm 0.014$ | $0.638 \pm 0.019$ | $0.637 \pm 0.020$ |
+| FGSM | 10 | $0.836 \pm 0.013$ | $0.892 \pm 0.006$ | $0.747 \pm 0.012$ | $0.799 \pm 0.009$ | $0.643 \pm 0.017$ | $0.661 \pm 0.013$ |
+| BIM | 1 | $0.765 \pm 0.017$ | $0.766 \pm 0.015$ | $0.788 \pm 0.013$ | $0.788 \pm 0.012$ | $0.767 \pm 0.014$ | $0.770 \pm 0.014$ |
+| BIM | 10 | $0.783 \pm 0.015$ | $0.839 \pm 0.009$ | $0.805 \pm 0.012$ | $0.864 \pm 0.008$ | $0.785 \pm 0.012$ | $0.840 \pm 0.010$ |
+| CW | 1 | $0.597 \pm 0.017$ | $0.600 \pm 0.017$ | $0.690 \pm 0.015$ | $0.691 \pm 0.016$ | $0.870 \pm 0.009$ | $0.870 \pm 0.010$ |
+| CW | 10 | $0.602 \pm 0.018$ | $0.601 \pm 0.011$ | $0.699 \pm 0.014$ | $0.727 \pm 0.010$ | $0.883 \pm 0.009$ | $0.937 \pm 0.005$ |
+
+Table 2: Average unit-wise adversarial input detection accuracies plus/minus sample standard deviations, calculated across 100 trials for each datum. These are a subset of the values used to generate Figure 3. Column headers give the test attack.
+
+| Train Attack | Number of Units | FGSM (Control) | FGSM (Treatment) | BIM (Control) | BIM (Treatment) | CW (Control) | CW (Treatment) |
+|---|---|---|---|---|---|---|---|
+| FGSM | 8 | $0.671 \pm 0.014$ | $0.671 \pm 0.013$ | $0.646 \pm 0.012$ | $0.648 \pm 0.014$ | $0.556 \pm 0.024$ | $0.550 \pm 0.026$ |
+| FGSM | 512 | $0.820 \pm 0.016$ | $0.868 \pm 0.008$ | $0.739 \pm 0.013$ | $0.771 \pm 0.011$ | $0.639 \pm 0.019$ | $0.626 \pm 0.016$ |
+| FGSM | 1,024 | - | $0.890 \pm 0.008$ | - | $0.778 \pm 0.014$ | - | $0.629 \pm 0.016$ |
+| BIM | 8 | $0.654 \pm 0.013$ | $0.657 \pm 0.014$ | $0.662 \pm 0.012$ | $0.667 \pm 0.013$ | $0.600 \pm 0.019$ | $0.596 \pm 0.020$ |
+| BIM | 512 | $0.766 \pm 0.017$ | $0.815 \pm 0.010$ | $0.787 \pm 0.014$ | $0.837 \pm 0.009$ | $0.768 \pm 0.013$ | $0.809 \pm 0.009$ |
+| BIM | 1,024 | - | $0.838 \pm 0.010$ | - | $0.857 \pm 0.010$ | - | $0.838 \pm 0.011$ |
+| CW | 8 | $0.553 \pm 0.024$ | $0.550 \pm 0.026$ | $0.596 \pm 0.018$ | $0.592 \pm 0.019$ | $0.679 \pm 0.015$ | $0.678 \pm 0.017$ |
+| CW | 512 | $0.599 \pm 0.016$ | $0.588 \pm 0.012$ | $0.690 \pm 0.015$ | $0.689 \pm 0.013$ | $0.870 \pm 0.011$ | $0.922 \pm 0.007$ |
+| CW | 1,024 | - | $0.588 \pm 0.014$ | - | $0.694 \pm 0.016$ | - | $0.941 \pm 0.006$ |
+
+## 7 Conclusion and Future Work
+
+We presented two approaches for adversarial instance detection, model-wise and unit-wise, that incorporate representations from multiple models. Using those two approaches, we devised controlled experiments comprising treatments and controls for measuring the contribution of multiple model representations in detecting adversarial instances. For many of the scenarios we considered, experiments showed that detection performance increased with the number of underlying models used for extracting representations.
+
+The research leaves open various avenues for future work.
+
+ * For our experiments, we trained 1,024 neural network representation models, whose diversity arises from using a different randomization seed for each. Perhaps other methods for imposing diversity would impact the performance of the detectors that depend on those models.
+
+ * It would be interesting to explore how existing adversarial defenses fare when extended to use multiple underlying models.
+
+ * Although we evaluated detectors across different attack algorithms, we always used data from a single attack for the purpose of training. Future research could investigate the effect of training with data from multiple attacks and/or varying hyperparameter settings for a specific attack.
+
+ * Our focus was on measuring the incremental gains of detecting attacks when incorporating multiple representation models. Further work could perform a thorough defense evaluation under more challenging threat models.
+
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UHBsuFPrJ11/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UHBsuFPrJ11/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..31787a5ae5a6caebcfb6ce4054953fb905d2398e
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UHBsuFPrJ11/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,335 @@
+# Estimating the Robustness of Classification Models by the Structure of the Learned Feature-Space
+
+## Abstract
+
+Over the last decade, the development of deep image classification networks has mostly been driven by the search for the best performance in terms of classification accuracy on standardized benchmarks like ImageNet. More recently, this focus has been expanded by the notion of model robustness, i.e. the generalization abilities of models towards previously unseen changes in the data distribution. While new benchmarks, like ImageNet-C, have been introduced to measure robustness properties, we argue that fixed test sets can only capture a small portion of possible data variations and are thus limited and prone to generating new overfitted solutions. To overcome these drawbacks, we suggest estimating the robustness of a model directly from the structure of its learned feature space. We introduce robustness indicators which are obtained via unsupervised clustering of latent representations from a trained classifier and show very high correlations to the model performance on corrupted test data.
+
+## 1 Introduction
+
+Deep learning approaches have shown rapid progress on computer vision tasks. Much work has been dedicated to training ever deeper models with improved validation and test accuracies and efficient training schemes (Zoph et al. 2018; Howard et al. 2017; Liu et al. 2018; Hu, Shen, and Sun 2018). Recently, this progress has been accompanied by discussions on the robustness of the resulting models (Djolonga et al. 2020). Specifically, the focus shifted towards the following two questions: 1. How can we train models that are robust with respect to specific kinds of perturbations? 2. How can we assess the robustness of a given model? These two questions represent fundamentally different perspectives on the same problem. While the first question assumes that the expected set of perturbations is known during model training, the second question rather aims at estimating a model's behavior in unforeseen cases and predicting its robustness without explicitly testing on specific kinds of corrupted data.
+
+In this paper, we address the second research question. We argue that the clustering performance in a model's latent space can be an indicator for the model's robustness. For this purpose, we introduce cluster purity as a robustness measure in order to predict the behavior of models against data corruption and adversarial attacks. Specifically, we evaluate various classification models (Krizhevsky, Sutskever, and Hinton 2012; Zoph et al. 2018; Huang et al. 2017; He et al. 2016; Szegedy et al. 2017; Zhang et al. 2017; Ioffe and Szegedy 2015; Touvron et al. 2020) on the ImageNet-C (Hendrycks and Dietterich 2019) dataset of corrupted ImageNet images, where we measure the robustness of a model as the ratio between the accuracy on corrupted data and clean data. The key result of this paper is illustrated in Figure 1: it shows that model robustness is strongly correlated with the relative clustering performance on the models' latent spaces, i.e. the ratio between the cluster purity and the classification accuracy, both evaluated on clean data. The clusterability of a model's feature space can therefore be considered an easily accessible indicator of model robustness.
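Cluster purity assigns each cluster its majority class and measures the fraction of samples matching their cluster's majority label. It can be computed as follows (a minimal numpy sketch, not the authors' code):

```python
import numpy as np

def cluster_purity(cluster_ids, labels):
    # Assign each cluster its majority class label, then measure the
    # fraction of samples covered by those majority assignments.
    cluster_ids = np.asarray(cluster_ids)
    labels = np.asarray(labels)
    correct = 0
    for c in np.unique(cluster_ids):
        members = labels[cluster_ids == c]
        correct += np.bincount(members).max()
    return correct / len(labels)

# Two clusters: the first is pure, the second is two-thirds class 1.
purity = cluster_purity([0, 0, 0, 1, 1, 1], [0, 0, 0, 1, 1, 0])
```

The robustness indicator is then the ratio of this purity, computed on clusters of latent representations, to the classifier's clean accuracy.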
+
+
+
+Figure 1: Predicting the robustness of models using our proposed cluster purity indicator $\left( {p}_{\text{purity }}\right)$ : The correlation between ${p}_{\text{purity }}$ of models trained on the original ImageNet and the measured test accuracy on ImageNet-C is ${R}^{2} = {0.87}$ .
+
+In summary, our work contributes the following:
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+- We study the feature spaces of several ImageNet pre-trained models including the state-of-the-art CNN models (Zoph et al. 2018; Huang et al. 2017; He et al. 2016; Szegedy et al. 2017; Zhang et al. 2017) and the recently proposed transformer models (Touvron et al. 2020) and evaluate their model robustness on the ImageNet-C dataset and against adversarial attacks.
+
+- We show that intra- and inter-class distances extracted from classification models are not suitable as a direct indicator for a model's robustness.
+
+- We provide a study of two clustering methods, $K$-means and the Minimum Cost Multicut Problem (MP), and analyze the correlation between classification accuracy, robustness and clusterability.
+
+- We show that the relative clustering accuracy, i.e. the ratio between clustering and classification performance, is a strong indicator for the robustness of the classification model under ImageNet-C corruptions.
+
+This paper is structured as follows: We first review the related work on image classification, model robustness and deep clustering approaches in Section 2, then we propose the methodology for the feature space analysis in Section 3. Our experiments and results are discussed in Section 4.
+
+## 2 Related Work
+
+Image Classification. Convolutional neural networks (CNNs) have shown great success in computer vision. From the classification of handwritten characters (LeCun et al. 1998) to images (Krizhevsky, Hinton et al. 2009), CNN-based methods consistently achieve state-of-the-art results on various benchmarks. With the introduction of ImageNet (Russakovsky et al. 2015), a dataset with higher-resolution images and one thousand diverse classes became available to benchmark the classification accuracy of ever better performing networks (Krizhevsky, Sutskever, and Hinton 2012; Zoph et al. 2018; Huang et al. 2017; He et al. 2016; Szegedy et al. 2017; Zhang et al. 2017), ranging from small and compact networks (Howard et al. 2017) to large models (Simonyan and Zisserman 2014) with over 100 million parameters.
+
+Transformers. Recently, transformer network architectures, which were originally introduced in the area of natural language processing (Vaswani et al. 2017), have been successfully applied to the image classification task (Chen et al. 2020; Dosovitskiy et al. 2020). The performance of transformer networks is competitive despite having no convolutional layers. However, transformer models require long training times and large amounts of data (Dosovitskiy et al. 2020) in order to generalize well. A more efficient approach for training has been proposed in (Touvron et al. 2020), which is based on a teacher-student strategy (distillation). Similarly, (Caron et al. 2021) uses the same strategy on self-supervised tasks.
+
+Model Robustness. Convolutional neural networks are susceptible to distribution shifts (Quiñonero-Candela et al. 2009) between train and test data (Ovadia et al. 2019; Geirhos et al. 2018; Hendrycks and Dietterich 2019; Saikia, Schmid, and Brox 2021). This concerns both visible input domain shifts, for example corrupted, noisy or blurred data, as well as imperceptible changes in the input induced by adversarial attacks (Moosavi-Dezfooli, Fawzi, and Frossard 2016; Goodfellow, Shlens, and Szegedy 2014; Kurakin, Goodfellow, and Bengio 2016). The latter explicitly maximize the error rate of classification models (Szegedy et al. 2013; Biggio and Roli 2018) and thereby reveal model weaknesses. Many methods have been proposed to improve adversarial robustness through specific training procedures, e.g. (Moosavi-Dezfooli, Fawzi, and Frossard 2016; Jakubovitz and Giryes 2018). In contrast, input distribution shifts induced by various kinds of noise, as modeled in the ImageNet-C (Hendrycks and Dietterich 2019) dataset, mimic the conditions a model faces in unconstrained environments, for example under diverse weather conditions. This aspect is crucial in scenarios like autonomous driving, where we want to ensure robust behaviour, for example, under strong rain. We therefore focus on the latter aspect and investigate the behaviour of various pre-trained models under ImageNet-C corruptions, but also evaluate the proposed robustness measure on adversarial perturbations (Moosavi-Dezfooli, Fawzi, and Frossard 2016; Jakubovitz and Giryes 2018). While (Jiang et al. 2018) propose a trust score instead of the model's confidence score to judge the reliability of the results, (Buzhinsky, Nerinovsky, and Tripakis 2021) introduce a natural way of measuring adversarial robustness, called latent space performance metrics. In contrast, (Giraudon et al. 2021) measure robustness using a mean radius approach.
+
+Clustering. Clustering approaches, deep clustering approaches in particular, have been shown to benefit from well-structured feature spaces. Such approaches therefore aim at optimizing the latent representations, for example using variational autoencoders or Gaussian mixture model or $K$-means priors (Prasad, Das, and Bhowmick 2020; Xie, Girshick, and Farhadi 2016; Ghasedi Dizaji et al. 2017; Ghasedi et al. 2019; Caron et al. 2018). (Caron et al. 2018) iteratively groups points using $K$-means during the latent space optimization. Conversely, we investigate the feature space actually learned for the image classification task, using clusterability as a measure of its robustness. Therefore, we apply clustering approaches to pre-trained feature spaces. Further, while the above-mentioned methods rely on a $K$-means-like clustering, i.e. the data is clustered into a given number of clusters, we also evaluate clusters from a similarity-driven clustering approach, the Minimum Cost Multicut Problem (Bansal, Blum, and Chawla 2004).
+
+The Multicut Problem, a.k.a. Correlation Clustering, groups similar data points together via pairwise terms: data points (e.g. images) are represented as nodes in a graph. The real-valued weight of an edge between two nodes measures their similarity. Clusters are obtained by cutting edges so as to decompose the graph while minimizing the cut cost. This problem is known to be NP-hard (Demaine et al. 2006). In practice, heuristic solvers often perform reasonably well (Kernighan and Lin 1970; Beier et al. 2014). Correlation Clustering has various applications in computer vision, such as motion tracking and segmentation (Keuper et al. 2018; Wolf et al. 2020), image clustering (Ho et al. 2020a) or multiple object tracking (Tang et al. 2017; Ho et al. 2020a).
+
+
+
+Figure 2: The robustness of a model is measured by its relative classification performance, i.e. the ratio between its accuracy on clean and on corrupted data (red arrow). The latent space features (blue) of various classification models are sampled using ImageNet images. The feature representations are then clustered with the $K$-means and Multicut clustering approaches. The correlation is visualized in Figure 1.
+
+## 3 Feature Space Analysis
+
+Our aim is to establish indicators for a model's robustness from the structure of its induced latent space. Therefore, we first extract latent space samples, i.e. feature representations of input test images. The latent space structure is subsequently analyzed using two different clustering approaches. $K$-means clusters data based on distances to a fixed number of cluster means and can therefore be interpreted as a proxy for how well the latent space distribution can be represented by a univariate Gaussian mixture model. The Minimum Cost Multicut Problem formulation clusters data points based on their pairwise distances and therefore imposes fewer constraints on the data manifold to be clustered. Figure 2 gives an overview of the methodology. First, we briefly recap classification models as feature extractors in Section 3.1. $K$-means and the Minimum Cost Multicut Problem for the image clustering task are explained in Section 3.2. In Section 3.3, we review evaluation metrics for measuring the clustering performance, and in Section 3.4, we present our proposed metrics for robustness estimation.
+
+### 3.1 Extracting Features from Classification Models
+
+Classification models with multiple classes are often trained with softmax cross-entropy, and it has been shown that features learned from vanilla softmax cross-entropy achieve a high transfer accuracy (Kornblith et al. 2020). To obtain the learned features of an image, the last layer of the trained model (the classifier) is removed, as is often done in transfer learning (Sharif Razavian et al. 2014; Shin et al. 2016) or clustering tasks (Xie, Girshick, and Farhadi 2016). The model encodes an image $x_i$ with a function $f_{\theta}(\cdot)$ with pre-trained parameters $\theta$. Table 1 shows the different classification models with their respective feature dimensions as well as their number of parameters and their top-1 classification accuracy in %. We investigate models which vary significantly in their architectures, including CNNs and transformer models, in their number of parameters, ranging from 5.3M to 138M, and in their test accuracy, ranging from 56.4% to 81.2% top-1. We use features extracted from the full ImageNet test set as latent space samples for our analysis, as shown in Figure 2.
+
+Table 1: Classification models: all models are trained and evaluated on the ImageNet (Russakovsky et al. 2015) dataset, sorted by performance. We report the Top1 classification accuracy in $\%$ . The first ten models are based on convolutional layers while the last two are transformer networks.
+
+| MODEL | FEATURES | Param | TOP1 % |
| --- | --- | --- | --- |
| ALEXNET | 4096 | 61.1M | 56.4 |
| VGG11 | 4096 | 132.9M | 69.0 |
| VGG16 | 4096 | 138.4M | 71.6 |
| BNINCEPTION | 1024 | 11.3M | 73.5 |
| NASNETAMOBILE | 1056 | 5.3M | 74.1 |
| DENSENET121 | 1024 | 7.9M | 74.6 |
| RESNET50 | 2048 | 25.6M | 76.0 |
| RESNET101 | 2048 | 44.5M | 77.4 |
| INCRESNV2 | 1536 | 55.8M | 80.2 |
| POLYNET | 2048 | 95.3M | 81.0 |
| DEIT-TINY | 192 | 5.9M | 74.5 |
| DEIT-SMALL | 384 | 22.4M | 81.2 |
+
+### 3.2 Latent Space Clustering
+
+K-means. K-means is a simple and effective method to cluster $N$ data points into $K$ clusters $S_k, k = 1,\ldots,K$. As $K$ is set a priori, this method produces exactly the defined number of clusters by minimizing the intra-cluster distance:
+
+$$
+\mathop{\sum }\limits_{{k = 1}}^{K}\mathop{\sum }\limits_{{{x}_{i} \in {S}_{k}}}{\begin{Vmatrix}f\left( {x}_{i}\right) - {\mu }_{k}\end{Vmatrix}}^{2} \tag{1}
+$$
+
+where the centroid ${\mu }_{k}$ is computed as the mean of features $\frac{1}{\left| {S}_{k}\right| }\mathop{\sum }\limits_{{{x}_{i} \in {S}_{k}}}f\left( {x}_{i}\right)$ in cluster $k$ .
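As a minimal sketch of this step (assuming scikit-learn is available; random Gaussian blobs stand in for the encoder features $f(x_i)$ used in the paper), the objective of equation (1) corresponds directly to the `inertia_` attribute of scikit-learn's `KMeans`:

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Stand-in for encoder features f(x_i): three well-separated Gaussian blobs.
features = np.vstack([rng.normal(loc=m, scale=0.5, size=(100, 32))
                      for m in (0.0, 3.0, 6.0)])

# K is fixed a priori; fitting minimizes the intra-cluster distance of eq. (1).
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)

# kmeans.inertia_ equals the objective of eq. (1): the sum of squared
# distances of every feature to its cluster centroid mu_k.
manual = sum(np.sum((features[kmeans.labels_ == k]
                     - kmeans.cluster_centers_[k]) ** 2)
             for k in range(3))
assert np.isclose(manual, kmeans.inertia_, rtol=1e-4)
print(f"K-means objective (eq. 1): {kmeans.inertia_:.1f}")
```

In the paper's setting, `features` would instead be the extracted latent representations of the ImageNet test images, with $K$ set to the number of classes.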
+
+Multicut Clustering. The Minimum Cost Multicut Problem is a graph-based clustering approach. Consider an undirected graph $G = (V, E)$, with the nodes $v \in V$ being the images $x_i$ of the dataset $X$ with $|V| = N$ samples; a complete graph with $N$ nodes has in total $|E| = \frac{N(N-1)}{2}$ edges. A real-valued cost $w : E \rightarrow \mathbb{R}$ is assigned to every edge $e \in E$. The decision whether an edge is joined or cut is made based on the edge label $y : E \rightarrow \{0,1\}$; the decision boundary can be derived from training parameters of the model (Ho et al. 2020b), learned directly from the dataset (Ho et al. 2020a; Tang et al. 2017), or simply estimated empirically (via parameter search). The inference of such edge labels is defined as follows:
+
+$$
+\mathop{\min }\limits_{{y \in \{ 0,1{\} }^{E}}}\mathop{\sum }\limits_{{e \in E}}{w}_{e}{y}_{e} \tag{2}
+$$
+
+$$
+\text{s.t.}\;\forall C \in \operatorname{cycles}\left( G\right) \;\forall e \in C : {y}_{e} \leq \mathop{\sum }\limits_{{{e}^{\prime } \in C\smallsetminus \{ e\} }}{y}_{{e}^{\prime }} \tag{3}
+$$
+
+
+
+Figure 3: Evaluation metrics for 4 clusters over 3 unique classes. Cluster Accuracy: the best match for the class dark circle is cluster 3, since it contains the most items of that class. Cluster 4 is counted as a false positive. The purity score, on the other hand, does not penalize cluster 4. Thus, the purity score is higher than the cluster accuracy (80% vs. 73%).
+
+Here, edges with negative costs $w_e$ have a high probability of being cut. Equation (3) enforces that, for each cycle in $G$, an edge may only be cut if at least one other edge of the cycle is cut as well; (Chopra and Rao 1993) showed that it is sufficient to enforce this constraint on all chordless cycles. Practically, the edge costs are computed from pairwise distances in the feature space. The distance $d_{i,j}$ between two features $f(x_i)$ and $f(x_j)$, where $x_i$ and $x_j$ are two distinct images from the test dataset, is calculated from the pre-trained model or encoder $f$ as
+
+$$
+{d}_{i, j} = {\begin{Vmatrix}f\left( {x}_{i}\right) - f\left( {x}_{j}\right) \end{Vmatrix}}^{2}. \tag{4}
+$$
+
+A logistic regression model estimates, from the distance $d_{i,j}$, the probability of the edge between $f(x_i)$ and $f(x_j)$ being cut. This cut probability is then converted into real-valued edge costs $w$ using the logit function $\operatorname{logit}(p) = \log \frac{p}{1 - p}$, such that similar features are connected by an edge with positive, i.e. attractive, weight and dissimilar features are connected by edges with negative, i.e. repulsive, weight. The decision boundary (i.e. the threshold on $d$ that indicates when to cut or to join) is estimated empirically.
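The distance-to-cost conversion can be sketched as follows. The fixed `threshold` and `scale` parameters are hypothetical stand-ins for the empirically estimated decision boundary and the fitted logistic-regression calibration described above:

```python
import numpy as np

def edge_costs(features, threshold=1.0, scale=1.0):
    """Pairwise squared distances (eq. 4) -> cut probabilities -> logit costs.

    `threshold` and `scale` are assumed calibration parameters standing in
    for the logistic-regression model / empirical parameter search.
    """
    costs = {}
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            d = np.sum((features[i] - features[j]) ** 2)            # eq. (4)
            p_cut = 1.0 / (1.0 + np.exp(-scale * (d - threshold)))  # cut probability
            p_cut = np.clip(p_cut, 1e-6, 1.0 - 1e-6)
            # Negated logit of the cut probability: similar pairs (low p_cut)
            # get a positive (attractive) weight, dissimilar pairs a negative
            # (repulsive) weight, as required by the multicut objective.
            costs[(i, j)] = -np.log(p_cut / (1.0 - p_cut))
    return costs

feats = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0]])
w = edge_costs(feats)
print(w[(0, 1)] > 0, w[(0, 2)] < 0)  # attractive vs. repulsive edge
```

These costs would then be handed to a heuristic multicut solver; the sketch only covers the graph construction.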
+
+### 3.3 Cluster Quality Measures
+
+We use two popular external evaluation metrics (i.e. label information is used) to measure the clustering performance: Cluster Accuracy (ACC) and the Purity score. The former is calculated using the Hungarian algorithm (Kuhn 2005), which finds the best match between the predicted and the true labels. The purity score assigns the data in a cluster to the class with the most frequent label (Jain, Grover, and LIET 2017). Formally, given a set of $K$ clusters $S_k$ and a set of classes $L$ with a total number of $N$ data samples, the purity is computed as follows:
+
+$$
+\frac{1}{N}\mathop{\sum }\limits_{{k \in K}}\mathop{\max }\limits_{{\ell \in L}}\left| {{S}_{k} \cap \ell }\right| \tag{5}
+$$
+
+The advantage of this metric is two-fold: on the one hand, it is suitable if the dataset is balanced, and on the other hand, the purity score does not penalize having a large number of clusters. Figure 3 depicts an example of both metrics.
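Both metrics can be sketched in a few lines of NumPy/SciPy (assuming SciPy is available). The toy labels below mimic the 4-clusters/3-classes situation of Figure 3, not its exact numbers:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def purity(pred, true):
    """Eq. (5): each cluster is credited with its most frequent true label."""
    return sum(np.bincount(true[pred == k]).max()
               for k in np.unique(pred)) / len(true)

def cluster_accuracy(pred, true):
    """Best one-to-one cluster/class matching via the Hungarian algorithm."""
    k = max(pred.max(), true.max()) + 1
    counts = np.zeros((k, k), dtype=int)
    for p, t in zip(pred, true):
        counts[p, t] += 1
    rows, cols = linear_sum_assignment(-counts)  # maximize matched samples
    return counts[rows, cols].sum() / len(true)

# 4 clusters over 3 classes: the extra cluster lowers ACC but not purity.
pred = np.repeat([0, 1, 2, 3], [4, 3, 5, 4])
true = np.array([0, 0, 0, 1, 1, 1, 1, 2, 2, 2, 2, 0, 2, 2, 2, 1])
print(f"ACC = {cluster_accuracy(pred, true):.3f}, "
      f"purity = {purity(pred, true):.3f}")
```

As in Figure 3, the purity (13/16) exceeds the cluster accuracy (10/16) because the surplus fourth cluster is not matched to any class by the Hungarian assignment but still votes for its majority label under purity.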
+
+### 3.4 Performance Measure
+
+Next, we derive a measure based on the latent space clustering performance that allows us to draw conclusions about a model's robustness without evaluating the model on corrupted data. We measure a model's robustness as its relative classification accuracy, i.e. the ratio between its classification accuracy on corrupted data and on clean data:
+
+$$
+\text{Robustness} = \frac{\mathrm{ACC}_{s,c}^{*}}{\mathrm{Model}_{\mathrm{ACC}}} \tag{6}
+$$
+
+Parameters $c$ and $s$ denote the corruption type and the severity level (or intensity), respectively, for non-adversarial corruptions such as those in ImageNet-C. The value aggregated over all severity levels $s \in \widetilde{S}$ and all corruption types $c \in \mathrm{CORR}$ is calculated as follows:
+
+$$
+{\mathrm{{ACC}}}_{\text{all }}^{ * } = \frac{1}{\left| \mathrm{{CORR}}\right| }\mathop{\sum }\limits_{{c \in \mathrm{{CORR}}}}\frac{1}{\left| \widetilde{\mathrm{S}}\right| }\mathop{\sum }\limits_{{s = 1}}^{\left| \widetilde{\mathrm{S}}\right| }{\mathrm{{ACC}}}_{s, c}^{ * } \tag{7}
+$$
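Equations (6) and (7) amount to only a few lines of code; the per-corruption accuracy numbers below are invented for illustration, not taken from Table 2:

```python
import numpy as np

def robustness(clean_acc, corrupted_acc):
    """Eq. (7), then eq. (6): average the corrupted accuracies ACC*_{s,c}
    over all severities and corruption types, then divide by the clean
    accuracy Model_ACC."""
    acc_all = np.mean([np.mean(severities)                 # eq. (7)
                       for severities in corrupted_acc.values()])
    return acc_all / clean_acc                             # eq. (6)

# Hypothetical per-corruption accuracies for severities s = 1..5:
corrupted = {"brightness": [60.0, 55.0, 50.0, 42.0, 30.0],
             "fog":        [55.0, 48.0, 40.0, 33.0, 25.0]}
print(f"robustness = {robustness(76.0, corrupted):.3f}")
```

In the paper's evaluation, `corrupted_acc` would hold all 19 ImageNet-C corruption types with their 5 severity levels each.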
+
+According to equation 6, perfectly robust models have a robustness of 1; smaller values indicate lower robustness. Based on the above considerations on model robustness and clustering performance, we propose the relative clustering performance as an indicator for model robustness and show empirically that there is a strong correlation between the two. The relative clustering performance, i.e. the ratio between the clustering performance and the classification accuracy $\mathrm{Model}_{\mathrm{ACC}}$, is defined as follows:
+
+$$
+p = \frac{\text{ clustering performance }}{{\text{ Model }}_{\mathrm{{ACC}}}} \tag{8}
+$$
+
+Here, we consider the clustering accuracy $\mathrm{C}_{\mathrm{ACC}}$ and the purity score $\mathrm{C}_{\text{purity}}$ as performance measures for our experiments, i.e.
+
+$$
+{p}_{\mathrm{{ACC}}} = \frac{{\mathrm{C}}_{\mathrm{{ACC}}}}{{\mathrm{{Model}}}_{\mathrm{{ACC}}}}\text{ and }{p}_{\text{purity }} = \frac{{\mathrm{C}}_{\text{purity }}}{{\mathrm{{Model}}}_{\mathrm{{ACC}}}}
+$$
+
+respectively.
+
+Correlation Metrics. The degree of correlation is computed with the coefficient of determination $R^2$ and the Kendall rank correlation coefficient $\tau$, respectively, where a value of 1.0 indicates perfect correlation and 0 indicates no correlation at all. An example for $R^2$ is illustrated in Figure 1 and for $\tau$ in Figure 7.
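For instance, $\tau$ can be computed with SciPy (assumed available); the two rankings below are hypothetical, with two neighbouring pairs swapped:

```python
import numpy as np
from scipy.stats import kendalltau

# Hypothetical robustness ranks of 12 models vs. the ranks predicted by a
# latent-space indicator; two adjacent pairs are swapped in the prediction.
actual    = np.arange(12, 0, -1)  # 12, 11, ..., 1
predicted = np.array([12, 11, 10, 8, 9, 7, 5, 6, 4, 3, 2, 1])

tau, _ = kendalltau(actual, predicted)
print(f"tau = {tau:.3f}")  # 2 discordant out of 66 pairs -> (64 - 2) / 66
```

With no ties, this is exactly the fraction of concordant minus discordant pairs used for the rank comparison in Table 5.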
+
+
+
+Figure 4: ImageNet-C dataset: the first row shows the original image and the corruption brightness at different severity levels. Second row: examples of other corruption types at severity level 5.
+
+Baseline Indicator: Class Overlap $\Delta$ . Our hypothesis is that an initial well-separated feature space of a classification model provides a good estimate regarding the model robustness. A simple method to determine such a separation would be to observe the intra- and inter-class distances between data samples in the feature space. If an overlap between classes exists, they are not well separated, which may indicate weak models. We define this setting as a baseline in order to show that latent space clustering provides significantly more information.
+
+To investigate this, we define the overlap $\Delta$ between the intra- and inter-class distances as follows:
+
+$$
+\Delta = \left( {{\mu }_{\text{intra }} + {\sigma }_{\text{intra }}}\right) - \left( {{\mu }_{\text{inter }} + {\sigma }_{\text{inter }}}\right) \tag{9}
+$$
+
+$\mu$ and $\sigma$ represent the means and standard deviations of the intra- and inter-class distances, respectively.
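A sketch of this baseline (assuming SciPy is available; synthetic two-class features stand in for the model embeddings):

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def class_overlap(features, labels):
    """Eq. (9): Delta = (mu_intra + sigma_intra) - (mu_inter + sigma_inter)."""
    dist = squareform(pdist(features))
    iu = np.triu_indices(len(labels), k=1)            # every pair once
    same = (labels[:, None] == labels[None, :])[iu]
    intra, inter = dist[iu][same], dist[iu][~same]
    return (intra.mean() + intra.std()) - (inter.mean() + inter.std())

rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0.0, 0.3, (50, 8)),     # class 0
                   rng.normal(4.0, 0.3, (50, 8))])    # class 1, far away
labels = np.repeat([0, 1], 50)
# Well-separated classes give a strongly negative overlap Delta.
print(f"Delta = {class_overlap(feats, labels):.2f}")
```

A low (strongly negative) $\Delta$ thus corresponds to a well-separated feature space; Section 4.3 tests whether this value alone predicts robustness.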
+
+## 4 Experiments
+
+This section is structured as follows: we first explain the setup of our experiments in Section 4.1. We then present the clustering results in Section 4.2, where we analyse the clustering accuracy and purity of the two considered clustering approaches on the feature spaces of the different models. Section 4.3 shows that the intra- and inter-class distances cannot directly be used as robustness indicators. In Section 4.4, we consider the relationship between the model's classification robustness under corruptions and the relative clustering performance for the considered clustering methods and metrics. We show that both the clustering accuracy and the cluster purity, computed on the feature spaces of clean data, yield indicators for a model's expected robustness under corruptions. Thereby, the purity score is more stable than the clustering accuracy, and the information provided by $K$-means clustering and Multicuts complement one another. In Section 4.5, we evaluate the proposed robustness indicator in the context of adversarial attacks.
+
+### 4.1 Setup
+
+Our experiments are based on the ImageNet (Russakovsky et al. 2015) dataset. All models were pre-trained on the original training dataset. We evaluate 10 CNN-based models and 2 transformer architectures, deit-$t$ and deit-$s$ (t stands for tiny and s for small). An overview is provided in Table 1. We evaluate the robustness of the considered models against corruptions using the ImageNet-C (Hendrycks and Dietterich 2019) dataset and report the model accuracy for the classification and clustering tasks. Figure 4 illustrates examples of the considered image corruptions: the first row shows the different severity levels $s = 1,\ldots,5$ of the corruption brightness, with 1 being the weakest and 5 the strongest corruption. The second row shows other kinds of image perturbations $c$ at severity level 5, such as fog, frost, Gaussian blur, jpeg_compression or pixelate. Each corruption $c$ has 5 severity levels $s = 1,\ldots,5$. All models are trained on the clean dataset and the numbers are evaluated on the full test dataset, as done in (Hendrycks and Dietterich 2019).
+
+### 4.2 Classification vs. Clustering
+
+Table 2 summarizes the evaluation in three categories: classification, $K$-means and Multicuts. There are in total $\left|\mathrm{CORR}\right| = 19$ corruption types on ImageNet-C, each with $\left|\widetilde{S}\right| = 5$ severity levels. For the classification task, the numbers are reported as top-1 accuracy for all five levels of corruption (denoted as $1-5$). For $K$-means and Multicuts, we report the clustering metrics presented in Section 3.3.
+
+The transformer deit-$s$ shows the highest top-1 accuracy on the classification task, both on clean and on corrupted data, for all severity levels. Inceptionresnetv2 and polynet perform only slightly worse on clean data but are more strongly affected by the ImageNet-C data corruptions than deit-s. Alexnet shows the worst performance across all corruption levels. Although resnet50 outperforms bninception, nasnetamobile and densenet121, it is less robust against corruption. This is also illustrated in Figure 6 (right).
+
+Considering the clustering accuracy and purity, $K$-means and the Multicut behave significantly differently from one another. $K$-means clustering achieves about 70% accuracy for the models with the highest clean classification accuracy. Yet, its accuracy is much better on the deit-$t$ latent space than, for example, on the densenet121-induced latent space, although the clean classification accuracy of both networks is comparable. Overall, $K$-means clustering works surprisingly well on the transformer models. The Multicut clustering shows the highest clustering accuracy on the inceptionresnetv2 model. The cluster purity is comparably high for the best transformer model deit-s. Note that our goal is to derive from the clustering performance an indicator for model robustness, i.e. we expect clustering to be less accurate when models are less robust to noise.
+
+### 4.3 Baseline Indicators: Intra- and Inter-Class Distances
+
+Table 3 shows the correlation (as $R^2$) and the ranking correlation $\tau$ between the class overlap baseline indicator $\Delta$, which we detailed in Section 3.4, and the model robustness, grouped by severity level. We use equation 6 to calculate the robustness for severity level $s$ over all corruptions and compare it with $\Delta$. The last column shows the correlation over all corruption levels. All 12 models are considered. The rank correlation $\tau$ is calculated by comparing the model's robustness rank with the ranking induced by the overlap $\Delta$. Initially well-separated feature spaces (thus a low $\Delta$) should correlate highly with model robustness. Despite its simplicity, the metric $\Delta$ correlates poorly, with a highest score of $R^2 = 0.29$ and $\tau = 0.52$. This observation rejects the simple hypothesis about the overlap of intra- and inter-class distances and suggests that $\Delta$ is not sufficiently informative as an indicator for model robustness.
+
+Table 2: Evaluation of robustness on the classification and clustering tasks with the ImageNet-C dataset, evaluated on corruption severity levels 1 to 5. Column CLEAN reports the classification performance of the models on the clean dataset. Columns 1-5 show the top-1 classification accuracy averaged over all 19 corruptions at the respective severity level, and column $\mathrm{ACC}_{\text{all}}^{*}$ shows the mean over all corruptions and all 5 severity levels. For $K$-means and Multicuts, ACC and Purity are the clustering performance on clean test data. The Multicut numbers are evaluated on a subset. The best score in each column is marked in bold.
+
+| MODEL | CLEAN | 1 | 2 | 3 | 4 | 5 | $\mathrm{ACC}_{\text{all}}^{*}$ | $K$-MEANS ACC | $K$-MEANS PURITY | $K$-MEANS $\mathrm{ACC}_{\text{all}}^{*}$ | MULTICUTS ACC | MULTICUTS PURITY | MULTICUTS $\mathrm{ACC}_{\text{all}}^{*}$ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| ALEXNET | 56.4 | 35.9 | 25.4 | 18.9 | 12.7 | 8.0 | 20.2 | 14.6 | 18.4 | 8.0 | 8.0 | 28.1 | 2.6 |
| VGG11 | 69.0 | 47.3 | 35.3 | 25.7 | 16.7 | 10.1 | 27.0 | 28.0 | 32.8 | 12.4 | 15.8 | 27.2 | 2.5 |
| VGG16 | 71.6 | 50.9 | 38.6 | 28.5 | 18.7 | 11.4 | 29.6 | 32.3 | 37.5 | 14.4 | 19.3 | 27.6 | 2.8 |
| BNINCEPTION | 73.5 | 59.4 | 48.4 | 38.8 | 27.2 | 17.7 | 38.3 | 40.5 | 44.0 | 18.0 | 11.5 | 46.6 | 7.9 |
| NASNETAMOBILE | 74.1 | 60.7 | 51.3 | 43.7 | 33.6 | 22.5 | 42.4 | 41.0 | 45.3 | 23.6 | 41.9 | 70.2 | 19.3 |
| DENSENET121 | 74.6 | 60.2 | 50.9 | 42.2 | 31.4 | 21.0 | 41.1 | 48.9 | 52.1 | 23.4 | 16.8 | 80.5 | 8.3 |
| RESNET50 | 76.0 | 60.1 | 49.8 | 40.1 | 28.9 | 18.8 | 39.6 | 55.8 | 58.6 | 24.9 | 29.3 | 64.3 | 11.1 |
| RESNET101 | 77.4 | 63.6 | 54.4 | 45.5 | 34.0 | 22.9 | 44.1 | 59.1 | 61.9 | 29.3 | 28.6 | 53.7 | 16.6 |
| INCEPTIONRESNETV2 | 80.2 | 68.8 | 60.8 | 53.5 | 43.5 | 31.6 | 51.7 | **70.0** | **71.2** | 37.4 | **71.3** | 81.3 | **39.8** |
| POLYNET | 81.0 | 68.0 | 58.9 | 49.9 | 38.2 | 26.3 | 48.3 | 67.8 | 69.7 | 34.8 | 54.4 | 76.6 | 24.1 |
| DEIT-T | 74.5 | 63.3 | 55.9 | 48.7 | 38.9 | 28.1 | 47.0 | 57.4 | 60.0 | 31.7 | 33.0 | **91.9** | 19.5 |
| DEIT-S | **81.2** | **72.1** | **66.1** | **60.2** | **51.3** | **39.7** | **57.9** | 68.8 | 70.8 | **43.4** | 49.4 | 81.1 | 29.6 |
+
+Table 3: Baseline indicators for model robustness: The table shows the correlation between overlap $\Delta$ and model robustness for different corruption severity levels. Second row shows the rank correlation $\tau$ between the actual model robustness rank and the predicted rank using $\Delta$ .
+
+| METRIC | 1 | 2 | 3 | 4 | 5 | TOTAL |
| --- | --- | --- | --- | --- | --- | --- |
| $R^2$ | 0.27 | 0.29 | 0.27 | 0.25 | 0.26 | 0.27 |
| $\tau$ | 0.48 | 0.52 | 0.52 | 0.52 | 0.52 | 0.48 |
+
+### 4.4 Robustness Indicators: Clustering Measures
+
+In the following we evaluate our proposed clustering-driven robustness indicator. Specifically, we investigate the effect of different clustering measures on the correlation coefficient $R^2$. Table 4 gives an overview of the strength of correlation for different severity levels and clustering metrics on $K$-means and Multicuts. Column $\Delta$ shows the correlation with robustness using the overlap of intra- and inter-class distances as previously discussed. The columns ${ACC}$ and $P$ show the correlation between the model robustness and the clustering accuracy and purity, respectively. The last column shows the combination of both clustering methods in one metric. $K$-means and Multicuts reach $R^2 = 0.83$ and $R^2 = 0.55$ for the clustering accuracy on all corruption levels. On the purity score, both methods show a slightly higher correlation of $R^2 = 0.83$ and $R^2 = 0.71$, respectively, for the aggregate over all corruptions (last row of Table 4). This indicates that the latent space clusterability of clean test images is a valid indicator for model robustness under corruptions. Moreover, the two clustering methods are complementary when their purity scores are combined as
+
+$$
+p_{\text{purity}}^{k\text{-means}\cdot\text{multicuts}} = \frac{\mathrm{C}_{\text{purity}}^{k\text{-means}} \cdot \mathrm{C}_{\text{purity}}^{\text{multicut}}}{\mathrm{Model}_{\mathrm{ACC}}}. \tag{10}
+$$
+
+This measure shows the highest correlation with the model robustness, with $R^2 = 0.87$ (see Figure 1 for the full correlation plot). Additionally, the combination of the purity scores of both methods also yields more consistent results across different severity levels.
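A sketch of the combined indicator of equation (10) and the correlation fit behind the reported $R^2$ values (the indicator/robustness value pairs are invented for illustration, not taken from the tables):

```python
import numpy as np

def combined_purity_indicator(purity_km, purity_mc, model_acc):
    """Eq. (10): product of K-means and Multicut purities over clean accuracy."""
    return purity_km * purity_mc / model_acc

def r_squared(x, y):
    """Coefficient of determination of the least-squares line through (x, y)."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    return 1.0 - residuals.var() / y.var()

# Invented indicator values vs. measured robustness for six models:
p   = np.array([0.09, 0.15, 0.22, 0.28, 0.34, 0.41])
rob = np.array([0.36, 0.40, 0.49, 0.55, 0.61, 0.68])
print(f"R^2 = {r_squared(p, rob):.2f}")
```

In the paper's pipeline, `p` would be the combined purity indicator computed on clean data and `rob` the measured relative accuracy on ImageNet-C.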
+
+Model Ranking. Next, we evaluate whether our proposed robustness indicator is able to retrieve the correct ranking in terms of model robustness for our set of classification models. The rank correlation is measured as the Kendall rank coefficient $\tau$. Table 5 shows the results for different setups. Here, $K$-means shows a more consistent and better correlation, with a highest rank correlation of $\tau = 0.82$ on ${ACC}$ and Purity. Again, all clustering metrics outperform the $\Delta$ baseline. Figure 6 illustrates an example of the change of rank between the predicted (left) and actual (right) model robustness. The prediction is done using $p_{\mathrm{ACC}}^{K\text{-means}}$, which has a rank correlation of $\tau = 0.79$. Our proposed measure is able to rank different models according to their robustness. The three worst-performing models (alexnet, vgg11 and vgg16) are correctly retrieved. The largest ranking gap of 3 positions is observed for nasnetamobile and resnet50. In this particular example, the value for alexnet is calculated as $\frac{14.6}{56.4} \cdot 100 = 25.9$ for the predicted and $\frac{20.2}{56.4} \cdot 100 = 35.8$ for the actual value.
+
+
+
+Figure 5: Visualization of the feature spaces of alexnet and deit-s using UMAP. The colors correspond to the class labels; only 10 classes were selected at random. The first column shows the initial, clean latent space of the classification model. Each subsequent column depicts the corresponding severity level of the corruption brightness. While alexnet collapses as the severity increases, the most robust model, deit-s, preserves the clusters very well even after significant corruptions; thus the proposed clusterability of the latent space provides a good indicator of model robustness.
+
+Table 4: Correlation for different metrics and severity levels: the reported numbers are the coefficient of determination $R^2$ for different clustering metrics. Column $\Delta$ is the overlap (from Table 3). Columns ${ACC}$ and Purity (denoted as $P$.) are used to compute the correlation coefficient $R^2$. The last columns combine both clustering methods, i.e. the last Purity column corresponds to equation 10. The highest score is marked in bold.
+
+| SEVERITY | $\Delta$ | $K$-MEANS ACC | $K$-MEANS P. | MULTICUTS ACC | MULTICUTS P. | COMBINED ACC | COMBINED P. |
| --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 0.27 | 0.85 | 0.85 | 0.48 | 0.67 | 0.54 | 0.82 |
| 2 | 0.29 | 0.87 | 0.87 | 0.51 | 0.70 | 0.58 | 0.86 |
| 3 | 0.27 | 0.84 | 0.83 | 0.55 | 0.73 | 0.61 | 0.87 |
| 4 | 0.25 | 0.79 | 0.79 | 0.58 | 0.72 | 0.64 | 0.87 |
| 5 | 0.26 | 0.75 | 0.84 | 0.57 | 0.68 | 0.64 | 0.84 |
| ALL | 0.27 | 0.83 | 0.83 | 0.55 | 0.71 | 0.62 | 0.87 |
+
+Latent Space Visualization. UMAP (McInnes, Healy, and Melville 2018), a scalable dimensionality reduction method similar to the popular technique t-SNE (Van der Maaten and Hinton 2008), has been applied to the features of 10 randomly selected classes of the ImageNet dataset for visualization. Figure 5 shows an example for the corruption brightness on 2 different models: the first column shows the features without any corruption (clean). As the severity level increases, a collapse is observed, for instance on alexnet: well-separated clusters (i.e. different colors) are pulled into one direction of the latent space. The model with the highest robustness, i.e. deit-s, preserves the clusters well, which explains its high relative clustering performance. This supports our assumption of a correlation between the clusterability and the robustness of classification models, as evaluated on the ImageNet-C dataset.
+
+| $100 \times p$ | Predicted Model | Rank | Rank | Actual Model | $100 \times$ Robustness |
| --- | --- | --- | --- | --- | --- |
| 25.9 | AlexNet | 12 | 12 | AlexNet | 35.8 |
| 40.6 | Vgg11 | 11 | 11 | Vgg11 | 39.1 |
| 45.1 | Vgg16 | 10 | 10 | Vgg16 | 41.3 |
| 55.1 | BNInception | 9 | 9 | Resnet50 | 52.1 |
| 55.3 | NasnetAmobile | 8 | 8 | BNInception | 52.1 |
| 65.5 | Densenet121 | 7 | 7 | Densenet121 | 55.1 |
| 73.4 | Resnet50 | 6 | 6 | Resnet101 | 57.0 |
| 76.4 | Resnet101 | 5 | 5 | NasnetAmobile | 57.2 |
| 77.0 | Deit-t | 4 | 4 | PolyNet | 59.6 |
| 83.7 | PolyNet | 3 | 3 | Deit-t | 63.1 |
| 84.7 | Deit-s | 2 | 2 | InceptionResnetV2 | 64.5 |
| 87.3 | InceptionResnetV2 | 1 | 1 | Deit-s | 71.3 |
+
+Figure 6: Change in robustness ranking based on predicted (left) vs. actual (right) model robustness on ImageNet-C using the clustering metric $p_{\mathrm{ACC}}^{K\text{-means}}$ over all corruptions ($\tau = 0.79$). The top shows the least robust model (Rank = 12) and the bottom the most robust model (Rank = 1). The highest score is marked in bold.
+
+### 4.5 Adversarial Robustness
+
+So far, we have shown that our proposed approach can effectively indicate the robustness of classification models towards visible image corruptions and shifts in the data distribution provided by the ImageNet-C benchmark. Here, we extend this evaluation to intentional, non-visible corruptions induced by adversarial attacks. Using the proposed combined clustering metric $p_{\text{purity}}^{K\text{-means+Multicuts}}$ as an estimator, we evaluate all 12 models on the ImageNet test dataset under three adversarial attacks: DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016), FGSM (Goodfellow, Shlens, and Szegedy 2014) and PGM (Kurakin, Goodfellow, and Bengio 2016) with different perturbation sizes epsilon. Figure 7 shows the results of all three attacks across the 12 models: the left plot (a) shows the coefficient of determination $R^2$, the right plot (b) the classification accuracy. Epsilon (x-axis) is the perturbation size of the attack. For small epsilon, we expect low correlations since the model accuracy should hardly be affected. As epsilon increases, some models are more robust than others, i.e. they better preserve their classification accuracy. In this range, we see a relatively strong correlation between the proposed indicator and the relative robust accuracy, albeit weaker than the correlation with robustness to corruptions, with $R^2 = 0.66$, $R^2 = 0.44$ and $R^2 = 0.44$ for DeepFool, FGSM and PGM, respectively. When epsilon becomes too large, the correlation weakens again.
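The reported $R^2$ values can in principle be reproduced as the coefficient of determination of a simple linear fit, which for a single predictor equals the squared Pearson correlation. A minimal sketch with made-up numbers (not the paper's measurements):

```python
import numpy as np

def coefficient_of_determination(indicator, robust_accuracy):
    """R^2 of a simple linear fit of robust accuracy onto the clustering
    indicator; with one predictor this equals the squared Pearson correlation."""
    x = np.asarray(indicator, dtype=float)
    y = np.asarray(robust_accuracy, dtype=float)
    r = np.corrcoef(x, y)[0, 1]
    return r ** 2

# Hypothetical per-model values (NOT the paper's numbers): clustering
# indicator vs. relative robust accuracy under one attack strength.
indicator = [0.26, 0.41, 0.45, 0.55, 0.66, 0.77, 0.85]
robust_acc = [0.36, 0.39, 0.41, 0.52, 0.55, 0.63, 0.71]
r2 = coefficient_of_determination(indicator, robust_acc)
assert 0.0 <= r2 <= 1.0
```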
+
+
+
+Figure 7: Correlation of our proposed clustering metric under different adversarial attacks with different strengths (epsilon). Left (a): the coefficient of determination $R^2$. Right (b): the clustering accuracy. The higher the attack strength, the weaker the performance.
+
+Table 5: Rank correlation for different metrics and severity levels: the reported numbers are the rank correlation coefficients $\tau$ for the different clustering metrics. Column $\Delta$ is the overlap (from Table 3). The columns $ACC$ and Purity (denoted as $P.$) indicate which clustering score is used to compute the rank correlation coefficient $\tau$. The last column pair combines both clustering methods, i.e. its Purity column corresponds to Equation 10. The highest score is marked in bold.
+
+| SEVERITY | $\Delta$ | K-MEANS ACC | K-MEANS P. | MULTICUTS ACC | MULTICUTS P. | COMBINED ACC | COMBINED P. |
|---|---|---|---|---|---|---|---|
| 1 | 0.48 | 0.79 | 0.79 | 0.61 | 0.52 | 0.73 | 0.73 |
| 2 | 0.52 | **0.82** | **0.82** | 0.64 | 0.55 | 0.76 | 0.76 |
| 3 | 0.52 | **0.82** | **0.82** | 0.70 | 0.61 | **0.82** | 0.76 |
| 4 | 0.52 | **0.82** | **0.82** | 0.70 | 0.61 | **0.82** | 0.76 |
| 5 | 0.52 | **0.82** | **0.82** | 0.70 | 0.61 | **0.82** | 0.76 |
| ALL | 0.48 | 0.79 | 0.79 | 0.67 | 0.58 | 0.79 | 0.73 |
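The rank correlation $\tau$ reported above is Kendall's tau between the predicted and the actual robustness ranking. A minimal sketch with hypothetical rankings (not the paper's data):

```python
from scipy.stats import kendalltau

# Hypothetical robustness rankings for 5 models (NOT the paper's data):
# rank 1 = most robust. tau compares predicted vs. actual orderings.
predicted_rank = [5, 4, 3, 2, 1]
actual_rank    = [5, 4, 2, 3, 1]  # two neighbouring models swapped

tau, p_value = kendalltau(predicted_rank, actual_rank)
print(round(tau, 2))  # → 0.8 (one discordant pair out of ten)
```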
+
+## 5 Conclusion
+
+In this work, we presented a study of the feature spaces of several models pre-trained on ImageNet, including state-of-the-art CNN models and the recently proposed transformer models; we evaluated their robustness on the ImageNet-C dataset and extended this evaluation to adversarial robustness. We propose a novel way to estimate the robustness behavior of trained models by analyzing the learned feature-space structure. Specifically, we presented a comprehensive study of two clustering methods, $K$-means and the Minimum Cost Multicut Problem, on ImageNet, analyzing classification accuracy, clusterability and robustness. We show that the relative clustering performance gives a strong indication of a model's robustness. Both considered clustering methods show complementary behaviour in our analysis: the coefficient of determination is ${R}^{2} = {0.87}$ when combining the purity scores of both methods. Our experiments also show that this indicator is lower, albeit still significant, for adversarial robustness (${R}^{2} = {0.66}$ and ${R}^{2} = {0.44}$). Additionally, our proposed method is able to estimate the robustness ranking of models ($\tau = {0.79}$) on ImageNet-C. This novel method is simple yet effective and allows the estimation of the robustness of any given classification model without explicitly testing on specific corrupted test data. To the best of our knowledge, we are the first to propose such a technique for estimating model robustness.
+
+## References
+
+Bansal, N.; Blum, A.; and Chawla, S. 2004. Correlation Clustering. Machine Learning, 56(1-3): 89-113.
+
+Beier, T.; Kroeger, T.; Kappes, J. H.; Kothe, U.; and Hamprecht, F. A. 2014. Cut, glue & cut: A fast, approximate solver for multicut partitioning. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 73-80.
+
+Biggio, B.; and Roli, F. 2018. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84: 317-331.
+
+Buzhinsky, I.; Nerinovsky, A.; and Tripakis, S. 2021. Metrics and methods for robustness evaluation of neural networks with generative models. Machine Learning, 1-36.
+
+Caron, M.; Bojanowski, P.; Joulin, A.; and Douze, M. 2018. Deep clustering for unsupervised learning of visual features. In Proceedings of the European Conference on Computer Vision (ECCV), 132-149.
+
+Caron, M.; Touvron, H.; Misra, I.; Jégou, H.; Mairal, J.; Bojanowski, P.; and Joulin, A. 2021. Emerging properties in self-supervised vision transformers. arXiv preprint arXiv:2104.14294.
+
+Chen, M.; Radford, A.; Child, R.; Wu, J.; Jun, H.; Luan, D.; and Sutskever, I. 2020. Generative pretraining from pixels. In International Conference on Machine Learning, 1691-1703. PMLR.
+
+Chopra, S.; and Rao, M. 1993. The partition problem. Mathematical Programming, 59(1-3): 87-115.
+
+Demaine, E. D.; Emanuel, D.; Fiat, A.; and Immorlica, N. 2006. Correlation clustering in general weighted graphs. Theoretical Computer Science, 361(2-3): 172-187.
+
+Djolonga, J.; Yung, J.; Tschannen, M.; Romijnders, R.; Beyer, L.; Kolesnikov, A.; Puigcerver, J.; Minderer, M.; D'Amour, A.; Moldovan, D.; et al. 2020. On robustness and transferability of convolutional neural networks. arXiv preprint arXiv:2007.08558.
+
+Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; et al. 2020. An image is worth 16x16 words: Transformers for image recognition at scale. arXiv preprint arXiv:2010.11929.
+
+Geirhos, R.; Temme, C. R. M.; Rauber, J.; Schütt, H. H.; Bethge, M.; and Wichmann, F. A. 2018. Generalisation in humans and deep neural networks. arXiv preprint arXiv:1808.08750.
+
+Ghasedi, K.; Wang, X.; Deng, C.; and Huang, H. 2019. Balanced self-paced learning for generative adversarial clustering network. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 4391-4400.
+
+Ghasedi Dizaji, K.; Herandi, A.; Deng, C.; Cai, W.; and Huang, H. 2017. Deep clustering via joint convolutional autoencoder embedding and relative entropy minimization. In Proceedings of the IEEE international conference on computer vision, 5736-5745.
+
+Giraudon, T.; Gripon, V.; Löwe, M.; and Vermet, F. 2021. Towards an intrinsic definition of robustness for a classifier. In ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4015-4019. IEEE.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770-778.
+
+Hendrycks, D.; and Dietterich, T. 2019. Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261.
+
+Ho, K.; Kardoost, A.; Pfreundt, F.-J.; Keuper, J.; and Keuper, M. 2020a. A Two-Stage Minimum Cost Multicut Approach to Self-Supervised Multiple Person Tracking. In Proceedings of the Asian Conference on Computer Vision.
+
+Ho, K.; Keuper, J.; Pfreundt, F.-J.; and Keuper, M. 2020b. Learning Embeddings for Image Clustering: An Empirical Study of Triplet Loss Approaches. arXiv preprint arXiv:2007.03123.
+
+Howard, A. G.; Zhu, M.; Chen, B.; Kalenichenko, D.; Wang, W.; Weyand, T.; Andreetto, M.; and Adam, H. 2017. Mobilenets: Efficient convolutional neural networks for mobile vision applications. arXiv preprint arXiv:1704.04861.
+
+Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 7132-7141.
+
+Huang, G.; Liu, Z.; Van Der Maaten, L.; and Weinberger, K. Q. 2017. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4700-4708.
+
+Ioffe, S.; and Szegedy, C. 2015. Batch normalization: Accelerating deep network training by reducing internal covariate shift. In International conference on machine learning, 448-456. PMLR.
+
+Jain, H.; Grover, R.; and LIET, A. 2017. Clustering Analysis with Purity Calculation of Text and SQL Data using K-means Clustering Algorithm. IJAPRR, 4(44557): 47-58.
+
+Jakubovitz, D.; and Giryes, R. 2018. Improving dnn robustness to adversarial attacks using jacobian regularization. In Proceedings of the European Conference on Computer Vision (ECCV), 514-529.
+
+Jiang, H.; Kim, B.; Guan, M. Y.; and Gupta, M. 2018. To trust or not to trust a classifier. arXiv preprint arXiv:1805.11783.
+
+Kernighan, B. W.; and Lin, S. 1970. An efficient heuristic procedure for partitioning graphs. The Bell system technical journal, 49(2): 291-307.
+
+Keuper, M.; Tang, S.; Andres, B.; Brox, T.; and Schiele, B. 2018. Motion segmentation & multiple object tracking by correlation co-clustering. IEEE transactions on pattern analysis and machine intelligence, 42(1): 140-153.
+
+Kornblith, S.; Lee, H.; Chen, T.; and Norouzi, M. 2020. What's in a Loss Function for Image Classification? arXiv preprint arXiv:2010.16402.
+
+Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
+
+Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. Imagenet classification with deep convolutional neural networks. Advances in neural information processing systems, 25: 1097-1105.
+
+Kuhn, H. W. 2005. The Hungarian method for the assignment problem. Naval Research Logistics (NRL), 52(1): 7-21.
+
+Kurakin, A.; Goodfellow, I.; and Bengio, S. 2016. Adversarial machine learning at scale. arXiv preprint arXiv:1611.01236.
+
+LeCun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-based learning applied to document recognition. Proceedings of the IEEE, 86(11): 2278-2324.
+
+Liu, C.; Zoph, B.; Neumann, M.; Shlens, J.; Hua, W.; Li, L.-J.; Fei-Fei, L.; Yuille, A.; Huang, J.; and Murphy, K. 2018. Progressive neural architecture search. In Proceedings of the European conference on computer vision (ECCV), 19-34.
+
+McInnes, L.; Healy, J.; and Melville, J. 2018. Umap: Uniform manifold approximation and projection for dimension reduction. arXiv preprint arXiv:1802.03426.
+
+Moosavi-Dezfooli, S.-M.; Fawzi, A.; and Frossard, P. 2016. DeepFool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 2574-2582.
+
+Ovadia, Y.; Fertig, E.; Ren, J.; Nado, Z.; Sculley, D.; Nowozin, S.; Dillon, J. V.; Lakshminarayanan, B.; and Snoek, J. 2019. Can you trust your model's uncertainty? Evaluating predictive uncertainty under dataset shift. arXiv preprint arXiv:1906.02530.
+
+Prasad, V.; Das, D.; and Bhowmick, B. 2020. Variational clustering: Leveraging variational autoencoders for image clustering. In 2020 International Joint Conference on Neural Networks (IJCNN), 1-10. IEEE.
+
+Quiñonero-Candela, J.; Sugiyama, M.; Lawrence, N. D.; and Schwaighofer, A. 2009. Dataset shift in machine learning. Mit Press.
+
+Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M.; et al. 2015. Imagenet large scale visual recognition challenge. International journal of computer vision, 115(3): 211-252.
+
+Saikia, T.; Schmid, C.; and Brox, T. 2021. Improving robustness against common corruptions with frequency biased models. arXiv preprint arXiv:2103.16241.
+
+Sharif Razavian, A.; Azizpour, H.; Sullivan, J.; and Carlsson, S. 2014. CNN features off-the-shelf: an astounding baseline for recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition workshops, 806-813.
+
+Shin, H.-C.; Roth, H. R.; Gao, M.; Lu, L.; Xu, Z.; Nogues, I.; Yao, J.; Mollura, D.; and Summers, R. M. 2016. Deep convolutional neural networks for computer-aided detection: CNN architectures, dataset characteristics and transfer learning. IEEE transactions on medical imaging, 35(5): 1285-1298.
+
+Simonyan, K.; and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
+
+Szegedy, C.; Ioffe, S.; Vanhoucke, V.; and Alemi, A. 2017. Inception-v4, inception-resnet and the impact of residual connections on learning. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 31.
+
+Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
+
+Tang, S.; Andriluka, M.; Andres, B.; and Schiele, B. 2017. Multiple people tracking by lifted multicut and person re-identification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 3539-3548.
+
+Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; and Jégou, H. 2020. Training data-efficient image transformers & distillation through attention. arXiv preprint arXiv:2012.12877.
+
+Van der Maaten, L.; and Hinton, G. 2008. Visualizing data using t-SNE. Journal of machine learning research, 9(11).
+
+Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is all you need. arXiv preprint arXiv:1706.03762.
+
+Wolf, S.; Bailoni, A.; Pape, C.; Rahaman, N.; Kreshuk, A.; Köthe, U.; and Hamprecht, F. A. 2020. The Mutex Watershed and its Objective: Efficient, Parameter-Free Graph Partitioning. IEEE transactions on pattern analysis and machine intelligence.
+
+Xie, J.; Girshick, R.; and Farhadi, A. 2016. Unsupervised deep embedding for clustering analysis. In International conference on machine learning, 478-487. PMLR.
+
+Zhang, X.; Li, Z.; Change Loy, C.; and Lin, D. 2017. Polynet: A pursuit of structural diversity in very deep networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 718-726.
+
+Zoph, B.; Vasudevan, V.; Shlens, J.; and Le, Q. V. 2018. Learning transferable architectures for scalable image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 8697-8710.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UHBsuFPrJ11/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UHBsuFPrJ11/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2c962b25dc56a75199dadc8fd8ddcb8b746dfe1a
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UHBsuFPrJ11/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,404 @@
+§ ESTIMATING THE ROBUSTNESS OF CLASSIFICATION MODELS BY THE STRUCTURE OF THE LEARNED FEATURE-SPACE
+
+§ ABSTRACT
+
+Over the last decade, the development of deep image classification networks has mostly been driven by the search for the best performance in terms of classification accuracy on standardized benchmarks like ImageNet. More recently, this focus has been expanded by the notion of model robustness, i.e. the generalization abilities of models towards previously unseen changes in the data distribution. While new benchmarks, like ImageNet-C, have been introduced to measure robustness properties, we argue that fixed test sets can only capture a small portion of possible data variations and are thus limited and prone to generating new overfitted solutions. To overcome these drawbacks, we suggest estimating the robustness of a model directly from the structure of its learned feature space. We introduce robustness indicators which are obtained via unsupervised clustering of latent representations from a trained classifier and show very high correlations with the model performance on corrupted test data.
+
+§ 1 INTRODUCTION
+
+Deep learning approaches have shown rapid progress on computer vision tasks. Much work has been dedicated to training ever deeper models with improved validation and test accuracies and efficient training schemes (Zoph et al. 2018; Howard et al. 2017; Liu et al. 2018; Hu, Shen, and Sun 2018). Recently, this progress has been accompanied by discussions on the robustness of the resulting models (Djolonga et al. 2020). Specifically, the focus shifted towards the following two questions: 1. How can we train models that are robust with respect to specific kinds of perturbations? 2. How can we assess the robustness of a given model? These two questions represent fundamentally different perspectives on the same problem. While the first question assumes that the expected set of perturbations is known during model training, the second question rather aims at estimating a model's behavior in unforeseen cases and predicting its robustness without explicitly testing on specific kinds of corrupted data.
+
+In this paper, we address the second research question. We argue that the clustering performance in a model's latent space can be an indicator for a model's robustness. For this purpose, we introduce cluster purity as a robustness measure in order to predict the behavior of models against data corruption and adversarial attacks. Specifically, we evaluate various classification models (Krizhevsky, Sutskever, and Hinton 2012; Zoph et al. 2018; Huang et al. 2017; He et al. 2016; Szegedy et al. 2017; Zhang et al. 2017; Ioffe and Szegedy 2015; Touvron et al. 2020) on the ImageNet-C (Hendrycks and Dietterich 2019) dataset of corrupted ImageNet images, where we measure the robustness of a model as the ratio between the accuracy on corrupted data and clean data. The key result of this paper is illustrated in Figure 1: it shows that the model robustness is strongly correlated to the relative clustering performance on the models' latent spaces, i.e. the ratio between the cluster purity and the classification accuracy, both evaluated on clean data. The clusterability of a model's feature space can therefore be considered as an easily accessible indicator for model robustness.
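The two ratios described above can be written down directly; a minimal sketch (the function names and numbers are illustrative, not from the paper):

```python
def relative_robustness(acc_corrupted: float, acc_clean: float) -> float:
    """Model robustness as the ratio of corrupted to clean accuracy."""
    return acc_corrupted / acc_clean

def purity_indicator(cluster_purity: float, acc_clean: float) -> float:
    """Relative clustering performance: ratio of cluster purity to
    classification accuracy, both measured on clean data only."""
    return cluster_purity / acc_clean

# Hypothetical model (NOT values from the paper): note that the indicator
# is computed without ever touching corrupted test data.
print(relative_robustness(acc_corrupted=0.40, acc_clean=0.76))  # → ~0.526
print(purity_indicator(cluster_purity=0.55, acc_clean=0.76))
```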
+
+
+Figure 1: Predicting the robustness of models using our proposed cluster purity indicator ($p_{\text{purity}}$): The correlation between $p_{\text{purity}}$ of models trained on the original ImageNet and the measured test accuracy on ImageNet-C is $R^2 = 0.87$.
+
+In summary, our work contributes the following:
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+ * We study the feature spaces of several ImageNet pre-trained models including the state-of-the-art CNN models (Zoph et al. 2018; Huang et al. 2017; He et al. 2016; Szegedy et al. 2017; Zhang et al. 2017) and the recently proposed transformer models (Touvron et al. 2020) and evaluate their model robustness on the ImageNet-C dataset and against adversarial attacks.
+
+ * We show that intra- and inter-class distances extracted from classification models are not suitable as a direct indicator for a model's robustness.
+
+ * We provide a study of two clustering methods, $K$ -means and the Minimum Cost Multicut Problem (MP) and analyze the correlation between classification accuracy, robustness and clusterability.
+
+ * We show that the relative clustering accuracy, i.e. the ratio between classification and clustering performance, is a strong indicator for the robustness of the classification model under ImageNet-C corruptions.
+
+This paper is structured as follows: We first review the related work on image classification, model robustness and deep clustering approaches in Section 2, then we propose the methodology for the feature space analysis in Section 3. Our experiments and results are discussed in Section 4.
+
+§ 2 RELATED WORK
+
+Image Classification. Convolutional neural networks (CNNs) have shown great success in computer vision. In particular, from the classification of handwritten characters (LeCun et al. 1998) to images (Krizhevsky, Hinton et al. 2009), CNN-based methods consistently achieve state-of-the-art results in various benchmarks. With the introduction of ImageNet (Russakovsky et al. 2015), a dataset with higher-resolution images and one thousand diverse classes became available to benchmark the classification accuracy of ever better performing networks (Krizhevsky, Sutskever, and Hinton 2012; Zoph et al. 2018; Huang et al. 2017; He et al. 2016; Szegedy et al. 2017; Zhang et al. 2017), ranging from small and compact networks (Howard et al. 2017) to large models (Simonyan and Zisserman 2014) with over 100 million parameters.
+
+Transformers. Recently, transformer network architectures, which were originally introduced in the area of natural language processing (Vaswani et al. 2017), have been successfully applied to the image classification task (Chen et al. 2020; Dosovitskiy et al. 2020). The performance of transformer networks is competitive despite having no convolutional layers. However, transformer models require long training times and large amounts of data (Dosovitskiy et al. 2020) in order to generalize well. A more efficient approach for training has been proposed in (Touvron et al. 2020), which is based on a teacher-student strategy (distillation). Similarly, (Caron et al. 2021) uses the same strategy on self-supervised tasks.
+
+Model Robustness. Convolutional neural networks are susceptible to distribution shifts (Quiñonero-Candela et al. 2009) between train and test data (Ovadia et al. 2019; Geirhos et al. 2018; Hendrycks and Dietterich 2019; Saikia, Schmid, and Brox 2021). This concerns both visible input domain shifts, for example on corrupted, noisy or blurred data, as well as imperceptible changes in the input induced by adversarial attacks (Moosavi-Dezfooli, Fawzi, and Frossard 2016; Goodfellow, Shlens, and Szegedy 2014; Kurakin, Goodfellow, and Bengio 2016). These explicitly maximize the error rate of classification models (Szegedy et al. 2013; Biggio and Roli 2018) and thereby reveal model weaknesses. Many methods have been proposed to improve adversarial robustness through specific training procedures, e.g. (Moosavi-Dezfooli, Fawzi, and Frossard 2016; Jakubovitz and Giryes 2018). In contrast, input distribution shifts induced by various kinds of noise, as modeled in the ImageNet-C (Hendrycks and Dietterich 2019) dataset, mimic the robustness of a model in unconstrained environments, for example under diverse weather conditions. This aspect is crucial in scenarios like autonomous driving, where we want to ensure robust behaviour for example under strong rain. Therefore, we focus on the latter aspect and investigate the behaviour of various pre-trained models under ImageNet-C corruptions, but also evaluate the proposed robustness measure on adversarial perturbations (Moosavi-Dezfooli, Fawzi, and Frossard 2016; Jakubovitz and Giryes 2018). While (Jiang et al. 2018) propose a trust score instead of the model's confidence score to judge the reliability of the results, (Buzhinsky, Nerinovsky, and Tripakis 2021) introduce a natural way of measuring adversarial robustness, called latent space performance metrics. In contrast, (Giraudon et al. 2021) measure robustness using a mean radius approach.
+
+Clustering. Clustering approaches, deep clustering approaches in particular, have been shown to benefit from well-structured feature spaces. Such approaches therefore aim at optimizing the latent representations, for example using variational autoencoders or Gaussian mixture model or $K$-means priors (Prasad, Das, and Bhowmick 2020; Xie, Girshick, and Farhadi 2016; Ghasedi Dizaji et al. 2017; Ghasedi et al. 2019; Caron et al. 2018). (Caron et al. 2018) iteratively group points using $K$-means during the latent space optimization. Conversely, we investigate the actual feature space learned from image classification tasks, using clusterability as a measure of its robustness. Therefore, we apply clustering approaches to pre-trained feature spaces. Further, while the above mentioned methods rely on a $K$-means-like clustering, i.e. data is clustered into a given number of clusters, we also evaluate clusters from a similarity-driven clustering approach, the Minimum Cost Multicut Problem (Bansal, Blum, and Chawla 2004).
+
+The Multicut Problem, a.k.a. Correlation Clustering, groups similar data points together via pairwise terms: data points (e.g. images) are represented as nodes in a graph. The real-valued weight of an edge between two nodes measures their similarity. Clusters are obtained by cutting edges in order to decompose the graph while minimizing the cut cost. This problem is known to be NP-hard (Demaine et al. 2006). In practice, heuristic solvers often perform reasonably well (Kernighan and Lin 1970; Beier et al. 2014). Correlation Clustering has various applications in computer vision, such as motion tracking and segmentation (Keuper et al. 2018; Wolf et al.
+
+
+Figure 2: The robustness of a model is measured by its relative classification performance, i.e. the ratio between its accuracy on clean and on corrupted data (red arrow). The latent space features (blue) of various classification models are sampled using ImageNet images. The feature representations are then clustered with the $K$-means and Multicut clustering approaches. The correlation is visualized in Figure 1.
+
+2020), image clustering (Ho et al. 2020a) or multiple object tracking (Tang et al. 2017; Ho et al. 2020a).
+
+§ 3 FEATURE SPACE ANALYSIS
+
+Our aim is to establish indicators for a model's robustness from the structure of its induced latent space. Therefore, we first extract latent space samples, i.e. feature representations of input test images. The latent space structure is subsequently analyzed using two different clustering approaches. $K$-means clusters data based on distances to a fixed number of cluster means and can therefore be interpreted as a proxy for how well the latent space distribution can be represented by a univariate Gaussian mixture model. The Minimum Cost Multicut Problem formulation clusters data points based on their pairwise distances and therefore imposes fewer constraints on the data manifold to be clustered. Figure 2 gives an overview of the methodology. First, we briefly recap classification models as feature extractors in Section 3.1. K-means and the Minimum Cost Multicut Problem on the image clustering task are explained in Section 3.2. In Section 3.3, we review evaluation metrics for measuring the clustering performance, and in Section 3.4, we present our proposed metrics for robustness estimation.
+
+§ 3.1 EXTRACTING FEATURES FROM CLASSIFICATION MODELS
+
+Classification models with multiple classes are often trained with softmax cross-entropy, and it has been shown that features learned from vanilla softmax cross-entropy achieve a high transfer accuracy (Kornblith et al. 2020). In order to obtain the learned features from images, the last layer of the trained model (the classifier) is removed, as is often done in transfer learning (Sharif Razavian et al. 2014; Shin et al. 2016) or clustering tasks (Xie, Girshick, and Farhadi 2016). The model encodes an image ${x}_{i}$ with a function ${f}_{\theta }(\cdot)$ with pre-trained parameters $\theta$. Table 1 shows the different classification models with their respective feature dimensions as well as the number of parameters and their top-1 classification accuracy in $\%$. We investigate models which vary significantly in their architectures, including CNNs and transformer models, their number of parameters, ranging from ${3.5}\mathrm{M}$ to ${138}\mathrm{M}$, as well as their test accuracy, ranging from 56.4% to 81.2% top-1 scores. We use features extracted from the full ImageNet test set as latent space samples for our analysis, as shown in Figure 2.
+
+Table 1: Classification models: all models are trained and evaluated on the ImageNet (Russakovsky et al. 2015) dataset, sorted by performance. We report the Top1 classification accuracy in $\%$ . The first ten models are based on convolutional layers while the last two are transformer networks.
+
+| MODEL | FEATURES | PARAM | TOP1 % |
|---|---|---|---|
| ALEXNET | 4096 | 61.1M | 56.4 |
| VGG11 | 4096 | 132.9M | 69.0 |
| VGG16 | 4096 | 138.4M | 71.6 |
| BNINCEPTION | 1024 | 11.3M | 73.5 |
| NASNETAMOBILE | 1056 | 5.3M | 74.1 |
| DENSENET121 | 1024 | 7.9M | 74.6 |
| RESNET50 | 2048 | 25.6M | 76.0 |
| RESNET101 | 2048 | 44.5M | 77.4 |
| INCRESNV2 | 1536 | 55.8M | 80.2 |
| POLYNET | 2048 | 95.3M | 81.0 |
| DEIT-TINY | 192 | 5.9M | 74.5 |
| DEIT-SMALL | 384 | 22.4M | 81.2 |
+
+§ 3.2 LATENT SPACE CLUSTERING
+
+K-means. K-means is a simple and effective method to cluster $N$ data points into $K$ clusters ${S}_{k},k = 1,\ldots ,K$. As $K$ is set a priori, this method produces exactly the defined number of clusters by minimizing the intra-cluster distance:
+
+$$
+\mathop{\sum }\limits_{{k = 1}}^{K}\mathop{\sum }\limits_{{{x}_{i} \in {S}_{k}}}{\begin{Vmatrix}f\left( {x}_{i}\right) - {\mu }_{k}\end{Vmatrix}}^{2} \tag{1}
+$$
+
+where the centroid ${\mu }_{k}$ is computed as the mean of features $\frac{1}{\left| {S}_{k}\right| }\mathop{\sum }\limits_{{{x}_{i} \in {S}_{k}}}f\left( {x}_{i}\right)$ in cluster $k$ .
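A minimal sketch of this clustering step with scikit-learn on toy latent features (the data and dimensions are illustrative); `KMeans.inertia_` is exactly the objective of Equation (1):

```python
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# Toy latent features: 3 well-separated classes, 16-D (a stand-in for the
# real extracted features; the data and dimensions are illustrative).
features = np.vstack([rng.normal(loc=c * 8.0, scale=0.5, size=(30, 16))
                      for c in range(3)])

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(features)
# km.inertia_ is the objective of Equation (1): the sum of squared
# distances of each feature vector to its assigned cluster centroid.
manual = sum(np.sum((features[km.labels_ == k] - km.cluster_centers_[k]) ** 2)
             for k in range(3))
assert np.isclose(km.inertia_, manual)
```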
+
+Multicut Clustering. The Minimum Cost Multicut Problem is a graph-based clustering approach. Consider an undirected graph $G = \left( {V,E}\right)$, where the nodes $v \in V$ are the images ${x}_{i}$ of the dataset $X$ with $\left| V\right| = N$ samples; a complete graph with $N$ nodes has in total $\left| E\right| = \frac{N\left( {N - 1}\right) }{2}$ edges. A real-valued cost $w : E \rightarrow \mathbb{R}$ is assigned to every edge $e \in E$. The decision whether an edge is joined or cut is encoded by the edge label $y : E \rightarrow \{ 0,1\}$; the decision boundary can be derived from training parameters of the model (Ho et al. 2020b), directly learned from the dataset (Ho et al. 2020a; Tang et al. 2017) or simply estimated empirically (via parameter search). The inference of such edge labels is defined as follows:
+
+$$
+\mathop{\min }\limits_{{y \in \{ 0,1{\} }^{E}}}\mathop{\sum }\limits_{{e \in E}}{w}_{e}{y}_{e} \tag{2}
+$$
+
+$$
+\text{ s.t. }\;\forall C \in \operatorname{cycles}\left( G\right) \;\forall e \in C : {y}_{e} \leq \mathop{\sum }\limits_{{{e}^{\prime } \in C\smallsetminus \{ e\} }}{y}_{{e}^{\prime }} \tag{3}
+$$
+
+
+Figure 3: Evaluation metrics for 4 clusters with 3 unique classes. Cluster Accuracy: The best match for the class "dark circle" is cluster 3, since it contains the most items of this class. Cluster 4 is counted as a false positive. The purity score, on the other hand, does not penalize cluster 4. Thus, the purity score is higher than the cluster accuracy (80% vs. 73%).
+
+Here, edges with negative costs ${w}_{e}$ have a high probability to be cut. Equation (3) enforces that, for each cycle in $G$, an edge may only be cut if at least one other edge in the cycle is cut as well; (Chopra and Rao 1993) showed that it is sufficient to enforce this on all chordless cycles. In practice, the edge costs are computed from pairwise distances in the feature space. The distance ${d}_{i,j}$ between two features $f\left( {x}_{i}\right)$ and $f\left( {x}_{j}\right)$, computed with the pre-trained model or encoder $f$, where ${x}_{i}$ and ${x}_{j}$ are two distinct images from the test dataset, is given as
+
+$$
+{d}_{i,j} = {\begin{Vmatrix}f\left( {x}_{i}\right) - f\left( {x}_{j}\right) \end{Vmatrix}}^{2}. \tag{4}
+$$
+
+A logistic regression model estimates the probability ${p}_{i,j}$ that the edge between $f\left( {x}_{i}\right)$ and $f\left( {x}_{j}\right)$ is cut. This cut probability is then converted into real-valued edge costs $w$ using the logit function $\operatorname{logit}\left( p\right) = \log \frac{p}{1 - p}$, such that similar features are connected by an edge with positive, i.e. attractive, weight and dissimilar features are connected by edges with negative, i.e. repulsive, weight. The decision boundary (i.e. the threshold on $d$ indicating when to cut or to join) is estimated empirically.
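The conversion from pairwise distances to signed edge costs can be sketched as below. The logistic model `p_cut = sigmoid(a * d - b)` stands in for the fitted logistic regression of the paper; the parameters `a` and `b` are illustrative placeholders, not the values used in the experiments.

```python
import math

def edge_costs(features, a=1.0, b=5.0):
    """Convert pairwise squared distances into multicut edge costs.

    p_cut = sigmoid(a * d - b) is a stand-in for the fitted logistic
    regression. Costs follow the sign convention of the paper:
    positive cost = attractive (join), negative cost = repulsive (cut),
    i.e. w = logit(1 - p_cut) = -logit(p_cut).
    """
    costs = {}
    n = len(features)
    for i in range(n):
        for j in range(i + 1, n):
            d = sum((u - v) ** 2 for u, v in zip(features[i], features[j]))
            p_cut = 1.0 / (1.0 + math.exp(-(a * d - b)))
            p_cut = min(max(p_cut, 1e-12), 1 - 1e-12)  # clamp for the logit
            costs[(i, j)] = math.log((1 - p_cut) / p_cut)
    return costs
```

For nearby features the cut probability is small, so the edge cost is positive (attractive); for distant features it is negative (repulsive), matching the sign convention described above.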
+
+§ 3.3 CLUSTER QUALITY MEASURES
+
+We use two popular external evaluation metrics (i.e. label information is used) to measure the clustering performance: Cluster Accuracy (ACC) and Purity Score. The former metric is calculated based on the Hungarian algorithm (Kuhn 2005), where the best match between the predicted and the true labels is found. The purity score assigns the data in a cluster to the class with the most frequent label (Jain, Grover, and LIET 2017). Formally, given a set of $K$ clusters ${S}_{k}$ and a set of classes $L$ with a total number of $N$ data samples, the purity is computed as follows:
+
+$$
+\frac{1}{N}\mathop{\sum }\limits_{{k \in K}}\mathop{\max }\limits_{{\ell \in L}}\left| {{S}_{k} \cap \ell }\right| \tag{5}
+$$
+
+The advantages of this metric are two-fold: it is suitable if the dataset is balanced, and it does not penalize having a large number of clusters. Figure 3 depicts an example of both metrics.
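Equation (5) can be sketched directly: each cluster contributes the count of its most frequent true label, and the sum is normalized by the number of samples. This is a minimal illustration, not the evaluation code of the paper.

```python
from collections import Counter

def purity(cluster_labels, class_labels):
    """Purity (Eq. 5): each cluster votes for its most frequent true class;
    the matched counts are summed and divided by the number of samples."""
    clusters = {}
    for c, y in zip(cluster_labels, class_labels):
        clusters.setdefault(c, []).append(y)
    n = len(class_labels)
    return sum(Counter(members).most_common(1)[0][1]
               for members in clusters.values()) / n

# Two clusters over five samples: cluster 0 holds {a, a, b}, cluster 1 holds
# {b, b}, so purity = (2 + 2) / 5 = 0.8.
score = purity([0, 0, 0, 1, 1], ['a', 'a', 'b', 'b', 'b'])
```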
+
+§ 3.4 PERFORMANCE MEASURE
+
+Next, we derive a measure based on the latent space clustering performance that allows us to draw conclusions about a model's robustness without evaluating the model on corrupted data. We measure a model's robustness as its relative classification accuracy, i.e. the ratio between its classification accuracy on corrupted data and on clean data:
+
+$$
+\text{ Robustness } = \frac{{\operatorname{Model}}_{{\mathrm{{ACC}}}_{s,c}^{ * }}}{{\operatorname{Model}}_{\mathrm{{ACC}}}} \tag{6}
+$$
+
+Parameters $c$ and $s$ denote the corruption type and severity level (or intensity), respectively, for non-adversarial corruptions such as those in ImageNet-C. The value aggregated over all severity levels $s \in \widetilde{S}$ and all corruption types $c \in {CORR}$ is calculated as follows:
+
+$$
+{\mathrm{{ACC}}}_{\text{ all }}^{ * } = \frac{1}{\left| \mathrm{{CORR}}\right| }\mathop{\sum }\limits_{{c \in \mathrm{{CORR}}}}\frac{1}{\left| \widetilde{\mathrm{S}}\right| }\mathop{\sum }\limits_{{s = 1}}^{\left| \widetilde{\mathrm{S}}\right| }{\mathrm{{ACC}}}_{s,c}^{ * } \tag{7}
+$$
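The robustness ratio of Equation 6 and the aggregation of Equation 7 can be sketched as follows. The dictionary layout (corruption type mapping to a list of per-severity accuracies) is an assumption for illustration.

```python
def robustness(acc_corrupted, acc_clean):
    """Relative robustness (Eq. 6): 1.0 means accuracy is fully preserved
    under corruption; smaller values mean lower robustness."""
    return acc_corrupted / acc_clean

def acc_all(acc_by_corruption):
    """Aggregated ACC*_all (Eq. 7): average each corruption type over its
    severity levels, then average over all corruption types."""
    return sum(sum(levels) / len(levels)
               for levels in acc_by_corruption.values()) / len(acc_by_corruption)

# alexnet from Table 2: ACC*_all = 20.2 on corrupted data, 56.4 on clean
# data, giving a robustness of about 0.358 (cf. Figure 6, actual column).
r_alexnet = robustness(20.2, 56.4)
```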
+
+According to Equation 6, perfectly robust models have a robustness of 1; smaller values indicate lower robustness. Based on the above considerations on model robustness and clustering performance, we propose to consider the relative clustering performance as an indicator of model robustness and show empirically that there exists a strong correlation between the two. The relative clustering performance, i.e. the ratio between clustering performance and classification accuracy ${\mathrm{{Model}}}_{\mathrm{{ACC}}}$, is defined as follows:
+
+$$
+p = \frac{\text{ clustering performance }}{{\text{ Model }}_{\mathrm{{ACC}}}} \tag{8}
+$$
+
+Here, we consider the clustering accuracy ${\mathrm{C}}_{\mathrm{{ACC}}}$ and the purity score ${\mathrm{C}}_{\text{ purity }}$ as performance measures for our experiments, i.e.
+
+$$
+{p}_{\mathrm{{ACC}}} = \frac{{\mathrm{C}}_{\mathrm{{ACC}}}}{{\mathrm{{Model}}}_{\mathrm{{ACC}}}}\text{ and }{p}_{\text{ purity }} = \frac{{\mathrm{C}}_{\text{ purity }}}{{\mathrm{{Model}}}_{\mathrm{{ACC}}}}
+$$
+
+respectively.
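Equation 8 is a plain ratio; the numbers below reproduce the alexnet example from Table 2 (K-means ACC 14.6, clean accuracy 56.4), which matches the predicted value shown later in Figure 6.

```python
def relative_clustering_performance(cluster_score, model_acc):
    """Relative clustering performance (Eq. 8): clustering performance
    divided by the model's clean classification accuracy."""
    return cluster_score / model_acc

# alexnet from Table 2: p_ACC = 14.6 / 56.4, roughly 0.259 (25.9 in Figure 6).
p_acc_alexnet = relative_clustering_performance(14.6, 56.4)
```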
+
+Correlation Metrics. The degree of correlation is computed based on the coefficient of determination ${R}^{2}$ and the Kendall rank correlation coefficient $\tau$, where a value of 1.0 indicates perfect correlation and 0 indicates no correlation at all. An example for ${R}^{2}$ is illustrated in Figure 1 and for $\tau$ in Figure 7.
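For a simple linear fit, ${R}^{2}$ equals the squared Pearson correlation, which keeps the computation self-contained; a sketch under that assumption:

```python
def r_squared(x, y):
    """Coefficient of determination for a simple linear fit, computed as
    the squared Pearson correlation between x and y."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy * sxy / (sxx * syy)
```

Perfectly linearly related data yields 1.0, and data whose covariance with `x` vanishes yields 0, matching the interpretation given above.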
+
+
+Figure 4: ImageNet-C dataset: the first row shows the original image and the corruption brightness for different severity levels. Second row: examples of other corruption types at severity level 5.
+
+Baseline Indicator: Class Overlap $\Delta$ . Our hypothesis is that an initial well-separated feature space of a classification model provides a good estimate regarding the model robustness. A simple method to determine such a separation would be to observe the intra- and inter-class distances between data samples in the feature space. If an overlap between classes exists, they are not well separated, which may indicate weak models. We define this setting as a baseline in order to show that latent space clustering provides significantly more information.
+
+To investigate this, we define the overlap $\Delta$ between the intra- and inter-class distances as follows:
+
+$$
+\Delta = \left( {{\mu }_{\text{ intra }} + {\sigma }_{\text{ intra }}}\right) - \left( {{\mu }_{\text{ inter }} + {\sigma }_{\text{ inter }}}\right) \tag{9}
+$$
+
+$\mu$ and $\sigma$ represent the mean and standard deviation of the intra- and inter-class distances, respectively.
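The overlap $\Delta$ of Equation 9 can be sketched as follows; Euclidean distances and the population standard deviation are assumptions for illustration.

```python
from itertools import combinations
from statistics import mean, pstdev

def class_overlap(features, labels):
    """Overlap Delta (Eq. 9): (mean + std) of intra-class distances minus
    (mean + std) of inter-class distances. Well-separated classes give a
    strongly negative value."""
    intra, inter = [], []
    for (f1, y1), (f2, y2) in combinations(zip(features, labels), 2):
        d = sum((a - b) ** 2 for a, b in zip(f1, f2)) ** 0.5
        (intra if y1 == y2 else inter).append(d)
    return (mean(intra) + pstdev(intra)) - (mean(inter) + pstdev(inter))
```

Two tight, distant clusters produce a clearly negative $\Delta$, while overlapping classes push it toward zero or above.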
+
+§ 4 EXPERIMENTS
+
+This section is structured as follows: we first explain the setup of our experiments in Section 4.1. In Section 4.2, we present the clustering results, analysing the clustering accuracy and purity of the two considered clustering approaches on the feature spaces of the different models. Section 4.3 shows that the intra- and inter-class distances cannot directly be used as robustness indicators. In Section 4.4, we consider the relationship between a model's classification robustness under corruptions and the relative clustering performance of the considered clustering methods and metrics. We show that both clustering accuracy and cluster purity, computed on the feature spaces of clean data, allow us to derive indicators for a model's expected robustness under corruptions. Here, the purity score is more stable than the clustering accuracy, and the information provided by $k$-means clustering and multicuts complements one another. In Section 4.5, we evaluate the proposed robustness indicator in the context of adversarial attacks.
+
+§ 4.1 SETUP
+
+Our experiments are based on the ImageNet (Russakovsky et al. 2015) dataset. All models were pre-trained on the original training dataset. We evaluate 10 CNN-based models and 2 transformer architectures, deit-$t$ and deit-$s$ (t stands for tiny and s for small). An overview is provided in Table 1. We evaluate the robustness of the considered models against corruptions using the ImageNet-C (Hendrycks and Dietterich 2019) dataset and report the model accuracy for classification and clustering tasks. Figure 4 illustrates examples of the considered image corruptions: the first row shows the different severity levels $s = 1,\ldots ,5$ of the corruption brightness, with 1 being the weakest and 5 the strongest corruption. The second row shows other kinds of image perturbations $c$ at severity level 5, such as fog, frost, Gaussian blur, jpeg_compression, or pixelate. Each corruption $c$ has 5 severity levels $s = 1,\ldots ,5$. All models are trained on the clean dataset and all numbers are evaluated on the full test dataset, as done in (Hendrycks and Dietterich 2019).
+
+§ 4.2 CLASSIFICATION VS. CLUSTERING
+
+Table 2 summarizes the evaluation in three categories: classification, $K$-means, and multicuts. There are in total $\left| \mathrm{{CORR}}\right| = {19}$ corruption types on ImageNet-C, each with $\left| \widetilde{S}\right| = 5$ severity levels. For the classification task, the numbers are reported as top-1 accuracy for all five levels of corruption (denoted as $1 - 5$). For $K$-means and multicuts, we report the clustering metrics presented in Section 3.3.
+
+The transformer deit-$s$ shows the highest top-1 accuracy on the classification task, both on clean and on corrupted data, for all severity levels. Inceptionresnetv2 and polynet perform only slightly worse on clean data but are more strongly affected by the ImageNet-C corruptions than deit-s. Alexnet shows the worst performance across all corruption levels. Although resnet50 outperforms bninception, nasnetamobile, and densenet121, it is less robust against corruption. This is also illustrated in Figure 6 (right).
+
+Considering the clustering accuracy and purity, $K$-means and the multicut behave significantly differently from one another. $K$-means clustering achieves about ${70}\%$ accuracy for the models with the highest clean classification accuracy. Yet, its accuracy is much better on the deit-$t$ latent space than, for example, on the densenet121-induced latent space, although the clean classification accuracy of both networks is comparable. Overall, $K$-means clustering works surprisingly well on the transformer models. The multicut clustering shows the highest clustering accuracy on the inceptionresnetv2 model. The cluster purity is comparably high for the best transformer model, deit-s. Note that our goal is to derive from the clustering performance an indicator for model robustness, i.e. we expect clustering to be less accurate when models are less robust to noise.
+
+§ 4.3 BASELINE INDICATORS: INTRA- AND INTER CLASS-DISTANCES
+
+Table 3 shows the correlation (as ${R}^{2}$) and the rank correlation $\tau$ between the class overlap baseline indicator $\Delta$, detailed in Section 3.4, and the model robustness, grouped by severity level. We use Equation 6 to calculate the robustness for severity level $s$ over all corruptions and compare it with $\Delta$. The last column shows the correlation over all corruption levels. All 12 models are considered. The rank correlation $\tau$ is calculated by comparing the models' robustness ranking with the ranking induced by the overlap $\Delta$. An initially well-separated feature space (thus a low $\Delta$) should correlate highly with its model's robustness. Despite its simplicity, the metric $\Delta$ correlates poorly, with highest scores of ${R}^{2} = {0.29}$ and $\tau = {0.52}$. This observation rejects the simple hypothesis about the overlap of intra- and inter-class distances and suggests that $\Delta$ is not sufficiently informative as an indicator for model robustness.
+
+Table 2: Evaluation of robustness on classification and clustering tasks with the ImageNet-C dataset, evaluated on corruption severity levels 1 to 5. Column Clean reports the classification performance of the models on the clean dataset. Columns 1-5 show the top-1 classification accuracy under the different severity levels averaged over all 19 corruptions, and column ${\mathrm{{ACC}}}_{\text{ all }}^{ * }$ shows the mean over all corruptions on all 5 severity levels. For $K$-means, ACC and Purity are the clustering performance on clean test data. The multicut numbers are evaluated on a subset. The best score in each column is marked in bold.
+
+| Model | Clean | 1 | 2 | 3 | 4 | 5 | ${\mathrm{ACC}}_{\text{all}}^{*}$ | K-means ACC | K-means Purity | K-means ${\mathrm{ACC}}_{\text{all}}^{*}$ | Multicut ACC | Multicut Purity | Multicut ${\mathrm{ACC}}_{\text{all}}^{*}$ |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| alexnet | 56.4 | 35.9 | 25.4 | 18.9 | 12.7 | 8.0 | 20.2 | 14.6 | 18.4 | 8.0 | 8.0 | 28.1 | 2.6 |
+| vgg11 | 69.0 | 47.3 | 35.3 | 25.7 | 16.7 | 10.1 | 27.0 | 28.0 | 32.8 | 12.4 | 15.8 | 27.2 | 2.5 |
+| vgg16 | 71.6 | 50.9 | 38.6 | 28.5 | 18.7 | 11.4 | 29.6 | 32.3 | 37.5 | 14.4 | 19.3 | 27.6 | 2.8 |
+| bninception | 73.5 | 59.4 | 48.4 | 38.8 | 27.2 | 17.7 | 38.3 | 40.5 | 44.0 | 18.0 | 11.5 | 46.6 | 7.9 |
+| nasnetamobile | 74.1 | 60.7 | 51.3 | 43.7 | 33.6 | 22.5 | 42.4 | 41.0 | 45.3 | 23.6 | 41.9 | 70.2 | 19.3 |
+| densenet121 | 74.6 | 60.2 | 50.9 | 42.2 | 31.4 | 21.0 | 41.1 | 48.9 | 52.1 | 23.4 | 16.8 | 80.5 | 8.3 |
+| resnet50 | 76.0 | 60.1 | 49.8 | 40.1 | 28.9 | 18.8 | 39.6 | 55.8 | 58.6 | 24.9 | 29.3 | 64.3 | 11.1 |
+| resnet101 | 77.4 | 63.6 | 54.4 | 45.5 | 34.0 | 22.9 | 44.1 | 59.1 | 61.9 | 29.3 | 28.6 | 53.7 | 16.6 |
+| inceptionresnetv2 | 80.2 | 68.8 | 60.8 | 53.5 | 43.5 | 31.6 | 51.7 | **70.0** | **71.2** | 37.4 | **71.3** | 81.3 | **39.8** |
+| polynet | 81.0 | 68.0 | 58.9 | 49.9 | 38.2 | 26.3 | 48.3 | 67.8 | 69.7 | 34.8 | 54.4 | 76.6 | 24.1 |
+| deit-t | 74.5 | 63.3 | 55.9 | 48.7 | 38.9 | 28.1 | 47.0 | 57.4 | 60.0 | 31.7 | 33.0 | **91.9** | 19.5 |
+| deit-s | **81.2** | **72.1** | **66.1** | **60.2** | **51.3** | **39.7** | **57.9** | 68.8 | 70.8 | **43.4** | 49.4 | 81.1 | 29.6 |
+
+Table 3: Baseline indicators for model robustness: the table shows the correlation between the overlap $\Delta$ and the model robustness for different corruption severity levels. The second row shows the rank correlation $\tau$ between the actual model robustness ranking and the ranking predicted using $\Delta$.
+
+| Metric | Severity 1 | 2 | 3 | 4 | 5 | Total |
+|---|---|---|---|---|---|---|
+| ${R}^{2}$ | 0.27 | 0.29 | 0.27 | 0.25 | 0.26 | 0.27 |
+| $\tau$ | 0.48 | 0.52 | 0.52 | 0.52 | 0.52 | 0.48 |
+
+§ 4.4 ROBUSTNESS INDICATORS: CLUSTERING MEASURES
+
+In the following, we evaluate our proposed clustering-driven robustness indicator. Specifically, we investigate the effect of different clustering measures on the correlation coefficient ${R}^{2}$. Table 4 gives an overview of the strength of correlation for different severity levels and clustering metrics on $K$-means and multicuts. Column $\Delta$ shows the correlation with robustness using the overlap of intra- and inter-class distances, as previously discussed. Furthermore, columns ${ACC}$ and $P.$ show the correlation between the model robustness and the clustering accuracy and purity, respectively. The last column pair combines both clustering methods in one metric. $K$-means and multicuts reach ${R}^{2} = {0.83}$ and ${R}^{2} = {0.55}$, respectively, for clustering accuracy on all corruption levels. On the purity score, both methods show a slightly higher correlation of ${R}^{2} = {0.83}$ and ${R}^{2} = {0.71}$, respectively, aggregated over all corruptions (last row of Table 4). This indicates that the latent space clusterability of clean test images is a valid indicator for model robustness under corruptions. However, we show that both clustering methods are complementary when combining their purity scores with
+
+$$
+{p}_{\text{purity}}^{k\text{-means}\cdot\text{multicut}} = \frac{{\mathrm{C}}_{\text{purity}}^{k\text{-means}} \cdot {\mathrm{C}}_{\text{purity}}^{\text{multicut}}}{{\operatorname{Model}}_{\mathrm{ACC}}}. \tag{10}
+$$
+
+This measure shows the highest correlation with the model robustness with ${R}^{2} = {0.87}$ (see fig. 1 for the full correlation plot). Additionally, the combination of purity scores of both methods also yields more consistent results across different severity levels.
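Equation 10 is a direct product-and-normalize; the example below plugs in the deit-s values from Table 2 (K-means purity 70.8, multicut purity 81.1, clean accuracy 81.2) purely for illustration.

```python
def combined_purity_indicator(purity_kmeans, purity_multicut, model_acc):
    """Combined indicator (Eq. 10): product of the K-means and multicut
    purity scores, normalized by the clean classification accuracy."""
    return purity_kmeans * purity_multicut / model_acc

# deit-s from Table 2: 70.8 * 81.1 / 81.2, roughly 70.7.
indicator_deit_s = combined_purity_indicator(70.8, 81.1, 81.2)
```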
+
+Model Ranking. Next, we evaluate whether our proposed robustness indicator is able to retrieve the correct ranking of our set of classification models in terms of robustness. The rank correlation is measured as the Kendall rank coefficient $\tau$. Table 5 shows the results for different setups. Here, $K$-means shows a more consistent and better correlation, with a highest rank correlation of $\tau = {0.82}$ on both ${ACC}$ and Purity. Again, all clustering metrics outperform the $\Delta$ baseline. Figure 6 illustrates an example of the change of rank between the predicted (left) and actual (right) model robustness. The prediction uses ${p}_{\mathrm{ACC}}^{K\text{-means}}$, which has a rank correlation of $\tau = {0.79}$. Our proposed measure is able to rank different models according to their robustness. The three worst-performing models (alexnet, vgg11, and vgg16) are correctly retrieved. The largest ranking gap of 3 positions is observed for nasnetamobile and resnet50. In this particular example, the values for alexnet are calculated as $\frac{14.6}{56.4} \cdot {100} = {25.9}$ and $\frac{20.2}{56.4} \cdot {100} = {35.8}$ for the predicted and actual robustness, respectively.
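The Kendall rank coefficient used for the ranking comparison can be sketched in its tie-free form (tau-a): the normalized difference between concordant and discordant pairs.

```python
from itertools import combinations

def kendall_tau(a, b):
    """Kendall rank correlation (tau-a, assuming no ties):
    (concordant pairs - discordant pairs) / total pairs."""
    pairs = list(combinations(range(len(a)), 2))
    s = 0
    for i, j in pairs:
        prod = (a[i] - a[j]) * (b[i] - b[j])
        s += (prod > 0) - (prod < 0)
    return s / len(pairs)
```

Identical rankings give 1.0, fully reversed rankings give -1.0, and a single swapped pair among three items gives 1/3.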
+
+
+Figure 5: Visualization of the feature space of alexnet and deit-s using UMAP. The colors correspond to the class labels, where only 10 classes were selected at random. The first column shows the initial, clean latent space of the classification model. Each subsequent column depicts the corresponding severity level of the corruption brightness. While alexnet collapses as the severity increases, the most robust model, deit-s, preserves the clusters very well even after significant corruption; thus the proposed clusterability of the latent space provides a good indicator of model robustness.
+
+Table 4: Correlation for different metrics and severity levels: the reported numbers are the coefficient of determination $\left( {R}^{2}\right)$ for different clustering metrics. Column $\Delta$ is the overlap (from Table 3). Columns ${ACC}$ and Purity (denoted as $P.$) are used to compute the correlation coefficient ${R}^{2}$. The last column pair combines both clustering methods, i.e. the last Purity column corresponds to Equation 10. The highest score is marked in bold.
+
+| Severity | $\Delta$ | K-means ACC | K-means P. | Multicut ACC | Multicut P. | Combined ACC | Combined P. |
+|---|---|---|---|---|---|---|---|
+| 1 | 0.27 | 0.85 | 0.85 | 0.48 | 0.67 | 0.54 | 0.82 |
+| 2 | 0.29 | **0.87** | **0.87** | 0.51 | 0.70 | 0.58 | 0.86 |
+| 3 | 0.27 | 0.84 | 0.83 | 0.55 | 0.73 | 0.61 | **0.87** |
+| 4 | 0.25 | 0.79 | 0.79 | 0.58 | 0.72 | 0.64 | **0.87** |
+| 5 | 0.26 | 0.75 | 0.84 | 0.57 | 0.68 | 0.64 | 0.84 |
+| All | 0.27 | 0.83 | 0.83 | 0.55 | 0.71 | 0.62 | **0.87** |
+
+Latent Space Visualization. UMAP (McInnes, Healy, and Melville 2018), a scalable dimensionality reduction method similar to the popular technique t-SNE (Van der Maaten and Hinton 2008), is applied for visualization to the features of 10 randomly selected classes of the ImageNet dataset. Figure 5 shows an example for the corruption brightness on 2 different models: the first column shows features without any corruption (clean). As the severity level increases, a collapse is observed, for instance, on alexnet: well-separated clusters (i.e. different colors) are pulled together in the latent space. The model with the highest robustness, deit-s, preserves the clusters well, which explains its high relative clustering performance. This supports our assumption of a correlation between the clusterability and the robustness of classification models, evaluated on the ImageNet-C dataset.
+
+| 100 × Robustness (predicted) | Model | Rank | Rank | Model | 100 × Robustness (actual) |
+|---|---|---|---|---|---|
+| 25.9 | AlexNet | 12 | 12 | AlexNet | 35.8 |
+| 40.6 | Vgg11 | 11 | 11 | Vgg11 | 39.1 |
+| 45.1 | Vgg16 | 10 | 10 | Vgg16 | 41.3 |
+| 55.1 | BNInception | 9 | 9 | Resnet50 | 52.1 |
+| 55.3 | NasnetAmobile | 8 | 8 | BNInception | 52.1 |
+| 65.5 | Densenet121 | 7 | 7 | Densenet121 | 55.1 |
+| 73.4 | Resnet50 | 6 | 6 | Resnet101 | 57.0 |
+| 76.4 | Resnet101 | 5 | 5 | NasnetAmobile | 57.2 |
+| 77.0 | Deit-t | 4 | 4 | PolyNet | 59.6 |
+| 83.7 | PolyNet | 3 | 3 | Deit-t | 63.1 |
+| 84.7 | Deit-s | 2 | 2 | InceptionResnetV2 | 64.5 |
+| 87.3 | InceptionResnetV2 | 1 | 1 | Deit-s | 71.3 |
+
+Figure 6: Change in robustness ranking based on predicted (left) vs. actual (right) model robustness on ImageNet-C, using the clustering metric ${p}_{\mathrm{ACC}}^{K\text{-means}}$ over all corruptions ($\tau = {0.79}$). The top row is the least robust model $\left( {\operatorname{Rank} = {12}}\right)$, while the bottom row shows the most robust model $\left( {\operatorname{Rank} = 1}\right)$. The highest score is marked in bold.
+
+§ 4.5 ADVERSARIAL ROBUSTNESS
+
+So far, we have shown that our proposed approach can effectively indicate the robustness of classification models towards visible image corruptions and shifts in the data distribution provided by the ImageNet-C benchmark. Here, we extend this evaluation to intentional, non-visible corruptions induced by adversarial attacks. Using the proposed clustering metric ${p}_{\text{purity}}^{k\text{-means}\cdot\text{multicut}}$ as an estimator, we evaluate all 12 models on the ImageNet test dataset under three adversarial attacks: DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016), FGSM (Goodfellow, Shlens, and Szegedy 2014), and PGM (Kurakin, Goodfellow, and Bengio 2016), with different perturbation sizes epsilon. Figure 7 shows the results of all three attacks across all 12 models: the left plot (a) shows the coefficient of determination ${R}^{2}$, while the right plot (b) shows the classification accuracy. Epsilon (x-axis) is the perturbation size of the attack. For small epsilon, we expect lower correlations, since the model accuracy should hardly be affected. As epsilon increases, some models are more robust than others, i.e. they better preserve their classification accuracy. In this range, we see a relatively strong correlation between the proposed indicator and the relative robust accuracy, albeit weaker than the correlation with robustness to corruptions, with ${R}^{2} = {0.66}$, ${R}^{2} = {0.44}$, and ${R}^{2} = {0.44}$ for DeepFool, FGSM, and PGM, respectively. When epsilon becomes too large, the correlation becomes weaker.
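The single-step FGSM perturbation can be illustrated on a toy binary logistic model, where the input gradient of the cross-entropy loss has a closed form; the weights, input, and epsilon below are invented for the example and are not from the evaluated attacks.

```python
import math

def fgsm(x, y, w, b, epsilon):
    """One-step FGSM on a binary logistic model p = sigmoid(w.x + b):
    move x by epsilon in the sign direction of the input gradient of the
    cross-entropy loss, the locally loss-maximizing perturbation."""
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = 1.0 / (1.0 + math.exp(-z))
    grad = [(p - y) * wi for wi in w]  # d(loss)/dx_i for cross-entropy
    return [xi + epsilon * ((gi > 0) - (gi < 0)) for xi, gi in zip(x, grad)]

# A correctly classified point (w.x = 2 > 0, label 1) is pushed toward the
# decision boundary: the margin shrinks from 2.0 to 0.5.
w, b = [2.0, -1.0], 0.0
x, y = [1.0, 0.0], 1.0
x_adv = fgsm(x, y, w, b, epsilon=0.5)
```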
+
+
+Figure 7: Correlation of our proposed clustering metric on different adversarial attacks with different strengths (Epsilon). Left (a): line represents the coefficient of determination ${R}^{2}$ . Right (b) shows the clustering accuracy. The higher the strength, the weaker the performance.
+
+Table 5: Rank correlation for different metrics and severity levels: the reported numbers are the Kendall rank correlation coefficient $\left( \tau \right)$ for different clustering metrics. Column $\Delta$ is the overlap (from Table 3). Columns ${ACC}$ and Purity (denoted as $P.$) are used to compute the rank correlation coefficient $\tau$. The last column pair combines both clustering methods, i.e. the last Purity column corresponds to Equation 10. The highest score is marked in bold.
+
+| Severity | $\Delta$ | K-means ACC | K-means P. | Multicut ACC | Multicut P. | Combined ACC | Combined P. |
+|---|---|---|---|---|---|---|---|
+| 1 | 0.48 | 0.79 | 0.79 | 0.61 | 0.52 | 0.73 | 0.73 |
+| 2 | 0.52 | **0.82** | **0.82** | 0.64 | 0.55 | 0.76 | 0.76 |
+| 3 | 0.52 | **0.82** | **0.82** | 0.70 | 0.61 | **0.82** | 0.76 |
+| 4 | 0.52 | **0.82** | **0.82** | 0.70 | 0.61 | **0.82** | 0.76 |
+| 5 | 0.52 | **0.82** | **0.82** | 0.70 | 0.61 | **0.82** | 0.76 |
+| All | 0.48 | 0.79 | 0.79 | 0.67 | 0.58 | 0.79 | 0.73 |
+
+§ 5 CONCLUSION
+
+In this work, we presented a study of the feature spaces of several models pre-trained on ImageNet, including state-of-the-art CNN models and the recently proposed transformer models, evaluated their robustness on the ImageNet-C dataset, and extended our evaluation to adversarial robustness. We propose a novel way to estimate the robustness behavior of trained models by analyzing the learned feature-space structure. Specifically, we presented a comprehensive study of two clustering methods, $K$-means and the Minimum Cost Multicut Problem, on ImageNet, analyzing classification accuracy, clusterability, and robustness. We show that the relative clustering performance gives a strong indication of a model's robustness. Both considered clustering methods show complementary behaviour in our analysis: the coefficient of determination is ${R}^{2} = {0.87}$ when combining the purity scores of both methods. Our experiments also show that this indicator is lower, albeit still significant, for adversarial robustness $\left( {{R}^{2} = {0.66}}\right.$ and ${R}^{2} = {0.44}$). Additionally, our proposed method is able to estimate the robustness ranking of models $\left( {\tau = {0.79}}\right)$ on ImageNet-C. This novel method is simple yet effective and allows the estimation of the robustness of any given classification model without explicitly testing on any specific test data. To the best of our knowledge, we are the first to propose such a technique for estimating model robustness.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UiF3RTES7pU/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UiF3RTES7pU/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..45403a37bca6e966f71ef6c7d9684ce9c4949db0
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UiF3RTES7pU/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,440 @@
+# Revisiting Adversarial Robustness of Classifiers With a Reject Option
+
+## Abstract
+
+Adversarial training of deep neural networks (DNNs) is an important defense mechanism that allows a DNN to be robust to input perturbations that can otherwise result in prediction errors. Recently, there is growing interest in learning a classifier with a reject (abstain) option that can be more robust to adversarial perturbations by choosing not to return a prediction on inputs where the classifier may be incorrect. A challenge in the robust learning of a classifier with a reject option is that existing works do not have a mechanism to ensure that (very) small perturbations of the input are not rejected, when they can in fact be accepted and correctly classified. We first propose a novel metric - robust error with rejection - that extends the standard definition of robust error to include the rejection of small perturbations. The proposed metric has natural connections to the standard robust error (without rejection), as well as the robust error with rejection proposed in a recent work. Motivated by this metric, we propose novel loss functions and a robust training method - stratified adversarial training with rejection (SATR) - for a classifier with reject option, where the goal is to accept and correctly classify small input perturbations, while allowing the rejection of larger input perturbations that cannot be correctly classified. Experiments on well-known image classification DNNs using strong adaptive attack methods validate that SATR can significantly improve the robustness of a classifier with rejection compared to standard adversarial training (with confidence-based rejection) as well as a recently-proposed baseline.
+
+## Introduction
+
+Training robust classifiers in the presence of adversarial inputs is an important problem from the standpoint of designing secure and reliable machine learning systems (Biggio and Roli 2018). Adversarial training (AT) and its variations are the most effective methods for learning robust DNN classifiers (Madry et al. 2018; Zhang et al. 2019). However, adversarial training may still not be very effective against adaptive adversarial attacks, or even standard attacks with configurations not observed during training (Athalye, Carlini, and Wagner 2018; Tramèr et al. 2020). Given this limitation, it is important to design classifiers that learn when to reject or abstain from predicting on hard-to-classify inputs. This can be especially crucial when it comes to real-world, safety-critical systems such as self-driving cars, where abstaining from prediction is often a much safer alternative to making an incorrect decision.
+
+We focus on the problem of learning a robust classifier with a reject option in the presence of adversarial inputs. The related problem of learning a (non-robust) classifier with a reject option has been studied extensively in the literature (Tax and Duin 2008; Guan et al. 2018; Cortes, DeSalvo, and Mohri 2016; Geifman and El-Yaniv 2019; Charoenphakdee et al. 2021). Recently, a number of works have also addressed the problem of adversarial robustness for a classifier equipped with a reject option (Laidlaw and Feizi 2019; Stutz, Hein, and Schiele 2020; Sheikholeslami, Lotfi, and Kolter 2021; Pang et al. 2021b; Tramèr 2021; Kato, Cui, and Fukuhara 2020). These approaches extend the standard definition of adversarial robustness (robust error) to the setting where the classifier can also reject inputs. In this setting, rejection of a perturbed input is considered to be a valid decision that does not count towards the robust error. However, rejection of a clean input still counts towards the robust error (Tramèr 2021).
+
+A key limitation of this view of the robust error (with rejection) is that it treats the rejection of very small perturbations and of large perturbations of an input equally. However, many practical applications (e.g., object detection) may require that small perturbations of an input be handled accurately by the classifier without resorting to rejection. In other words, there could be a higher cost for rejecting small input perturbations when in fact the classifier can accept and classify them correctly. Existing methods for training a robust classifier with rejection, such as confidence-calibrated adversarial training (CCAT) (Stutz, Hein, and Schiele 2020), achieve high robustness by simply rejecting a large fraction of the perturbed inputs (since rejecting perturbed inputs does not contribute to the robust error, no matter the perturbation size). As we validate experimentally, CCAT often has a high rejection rate on even small perturbations of clean inputs, which may not be acceptable in many practical applications.
+
+Motivated by these limitations in existing works, we revisit the problem of adversarial robustness of a classifier with reject option, and make the following contributions:
+
+- We propose a novel metric - robust error with rejection - that can provide a fine-grained evaluation of the robustness of a classifier with reject option across a range of perturbation sizes.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+- We provide a theoretical analysis of this problem, which motivates the need for learning a robust classifier with rejection that can accept and correctly classify small input perturbations.
+
+- We propose novel loss functions and a robust training method SATR for jointly learning a classifier-detector system (i.e., a classifier with rejection) that are designed to achieve the goal of accepting and correctly classifying small input perturbations, while also selectively rejecting larger input perturbations.
+
+## Related Work
+
+Adversarial robustness of deep learning models has received significant attention in recent years. Many defenses have been proposed and most of them have been broken by strong adaptive attacks (Athalye, Carlini, and Wagner 2018; Tramèr et al. 2020). The most effective approach for improving adversarial robustness is adversarial training (Madry et al. 2018; Zhang et al. 2019). However, adversarial training still cannot achieve very good robustness on complex datasets, and often there is a large generalization gap in the robustness (Tsipras et al. 2019; Stutz, Hein, and Schiele 2019). For example, on CIFAR-10, current state-of-the-art adversarial training has only about ${50}\%$ robustness under the strongest adaptive attacks.
+
+One approach to break this robustness bottleneck is to allow rejection of adversarial examples instead of trying to correctly classify all of them. Recently, there has been a great interest in exploring adversarial training of a classifier with a reject option (Laidlaw and Feizi 2019; Stutz, Hein, and Schiele 2020; Sheikholeslami, Lotfi, and Kolter 2021; Pang et al. 2021b; Tramèr 2021). Stutz, Hein, and Schiele proposed to adversarially train confidence-calibrated models so that they can generalize to unseen adversarial attacks. Sheikholeslami, Lotfi, and Kolter modified existing certified defense mechanisms to allow the classifier to either robustly classify or detect adversarial attacks, and showed that it can lead to better certified robustness, especially for large perturbation sizes. Laidlaw and Feizi proposed a method called Combined Abstention Robustness Learning (CARL) for jointly learning a classifier and the region of the input space on which it should abstain, and showed that training with CARL can result in a more accurate and robust classifier.
+
+## Problem Setup
+
+Let $\mathcal{X} \subseteq {\mathbb{R}}^{d}$ denote the space of inputs $\mathbf{x}$ and $\overline{\mathcal{Y}} \mathrel{\text{:=}}$ $\{ 1,\cdots , k\}$ denote the space of outputs $y$ . Let $\mathcal{Y} \mathrel{\text{:=}} \overline{\mathcal{Y}} \cup \{ \bot \}$ be the extended output space where $\bot$ denotes the abstain or rejection option. Let ${\Delta }_{k}$ denote the set of output probabilities over $\overline{\mathcal{Y}}$ (i.e., the simplex in $k$ -dimensions). Let $d\left( {\mathbf{x},{\mathbf{x}}^{\prime }}\right)$ be a norm-induced distance on $\mathcal{X}$ (e.g., the ${\ell }_{p}$ -distance for some $p > 1$ ), and let $\mathcal{N}\left( {\mathbf{x}, r}\right) \mathrel{\text{:=}} \left\{ {{\mathbf{x}}^{\prime } \in \mathcal{X} : d\left( {{\mathbf{x}}^{\prime },\mathbf{x}}\right) \leq r}\right\}$ denote the neighborhood of $\mathbf{x}$ with distance $r$ . Let $\land$ and $\vee$ denote the boolean AND and OR operations respectively. Let $\mathbf{1}\{ c\}$ define the binary indicator function which takes value 1(0) when the condition $c$ is true (false). We denote vectors and matrices using boldface symbols.
+
+Given samples from a distribution $\mathcal{D}$ over $\mathcal{X} \times \overline{\mathcal{Y}}$ , our goal is to learn a classifier with rejection option, $f : \mathcal{X} \rightarrow \mathcal{Y}$ , that can correctly classify adversarial examples with small perturbations, and can either correctly classify or reject those with large perturbations. The standard robust error at adversarial budget $\epsilon > 0$ is defined as:
+
+$$
+{R}_{\epsilon }\left( f\right) \mathrel{\text{:=}} \underset{\left( {\mathbf{x}, y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \neq y}\right\} }\right\rbrack ,
+$$
+
which does not allow rejection. A few recent works (e.g., Tramèr 2021) have proposed a robust error with rejection at adversarial budget $\epsilon > 0$ as
+
+$$
+{R}_{\epsilon }^{\text{rej }}\left( f\right) \mathrel{\text{:=}} \underset{\left( {\mathbf{x}, y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\mathbf{1}\{ f\left( \mathbf{x}\right) \neq y\} }\right.
+$$
+
+$$
+\left. {\vee \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\{ f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} \} }\right\rbrack \text{,}
+$$
+
+which allows the rejection of small input perturbations without incurring an error.
+
Neither of these metrics for robust error is well-suited to our needs. We therefore propose a new metric for evaluating a robust classifier with a reject option: the robust error with rejection at budgets ${\epsilon }_{0} \in \left\lbrack {0,\epsilon }\right\rbrack$ and $\epsilon \geq 0$:
+
+$$
+{R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( f\right) \mathrel{\text{:=}} \underset{\left( {\mathbf{x}, y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \neq y}\right\} }\right.
+$$
+
+$$
+\left. {\vee \mathop{\max }\limits_{{{\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\{ f\left( {\mathbf{x}}^{\prime \prime }\right) \notin \{ y, \bot \} \} }\right\rbrack \text{.} \tag{1}
+$$
+
+The motivation for this metric is as follows. For small perturbations of a clean input within a neighborhood of radius ${\epsilon }_{0}$ , both an incorrect prediction and rejection are considered to be an error. For larger perturbations outside the ${\epsilon }_{0}$ -neighborhood, rejection is not considered to be an error, i.e., the classifier can either predict the correct class or reject larger perturbations.
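As a minimal illustration, the per-sample 0-1 quantity inside the expectation of Eq. (1) can be sketched as follows. The `REJECT` sentinel and the lists of candidate worst-case predictions are assumptions of this sketch; in practice the inner maxima are approximated by adversarial attacks:

```python
REJECT = -1  # sentinel encoding the reject option ⊥ (an assumption of this sketch)

def robust_error_with_rejection(preds_small, preds_large, y):
    """Per-sample 0-1 robust error with rejection at budgets (eps0, eps), Eq. (1).

    preds_small: predictions f(x') over candidate perturbations in the eps0-ball
    preds_large: predictions f(x'') over candidate perturbations in the eps-ball
    y: true label
    """
    # inside the eps0-ball, both a wrong class and a rejection count as an error
    err_small = any(p != y for p in preds_small)
    # inside the eps-ball, only an accepted wrong class counts as an error
    err_large = any(p not in (y, REJECT) for p in preds_large)
    return int(err_small or err_large)
```

With ${\epsilon }_{0} = 0$, the first list contains only the prediction on the clean input, recovering the special cases discussed next.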
+
+Proposition 1. The robust error with rejection can be equivalently defined as
+
+$$
+{R}_{{\epsilon }_{0},\epsilon }^{rej}\left( f\right) \mathrel{\text{:=}} \underset{\left( {\mathbf{x}, y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) = \bot }\right\} }\right.
+$$
+
+$$
+\left. {\vee \mathop{\max }\limits_{{{\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\{ f\left( {\mathbf{x}}^{\prime \prime }\right) \notin \{ y, \bot \} \} }\right\rbrack \text{.} \tag{2}
+$$
+
Proof. We first note that
+
+$$
+\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \neq y}\right\} = \mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) = \bot }\right\} \vee \mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\} .
+$$
+
+The maximum over the ${\epsilon }_{0}$ -neighborhood can be expressed as
+
+$$
+\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \neq y}\right\} = \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) = \bot }\right\}
+$$
+
+$$
+\vee \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\} .
+$$
+
+Finally, the second term in the RHS of the above expression can be combined with the second term inside the expectation of Eq. (1), i.e.,
+
+$\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\} \vee \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\}$
+
+$= \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\} ,$
+
+which shows the equivalence of (1) and (2).
+
Our new metric also has natural connections with existing metrics in the literature. When ${\epsilon }_{0} = \epsilon$, our metric ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( f\right)$ reduces to the standard robust error ${R}_{\epsilon }\left( f\right)$ (without rejection) at budget $\epsilon$ (Carlini and Wagner 2017). When ${\epsilon }_{0} = 0$, our metric reduces to the robust error with rejection at budget $\epsilon$, ${R}_{\epsilon }^{\text{rej }}\left( f\right)$, proposed, e.g., in (Tramèr 2021). For this special case, rejection is considered to be an error only for clean inputs (i.e., with no perturbation).
+
+## Theoretical Analysis
+
+Our goal is to correctly classify small perturbations of the input and allow rejection of large perturbations when the classifier is not confident. Two fundamental questions arise:
+
1. Why not allow rejection of both small and large perturbations? This is done in most existing studies on robust classification with rejection. On the other hand, many practical applications need to handle small perturbations, and rejecting them can have severe costs. The question is whether it is possible to correctly classify small perturbations without hurting the robustness, i.e., whether it is possible to achieve a small ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}$ .
+
+2. Why not try to correctly classify both small and large perturbations? This is done in traditional adversarial robustness, typically by adversarial training. The question is essentially about the benefit of allowing rejection.
+
+To answer these questions, we will show that under mild conditions, there exists a classifier $f$ with rejection with small ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( f\right)$ . So it is possible to correctly classify small perturbations without rejecting them, answering the first question. Moreover, under the same conditions, all classifiers $g$ without rejection must have at least as large errors: ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( g\right) = {R}_{\epsilon }\left( g\right) \geq {R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( f\right)$ . In fact, the error of $g$ may be much larger than that of $f$ . This shows the benefit of allowing rejection, answering the second question.
+
+Theorem 1. Consider binary classification. Let $g\left( \mathbf{x}\right)$ be any decision boundary (i.e., any classifier without a rejection option). For any $0 \leq {\epsilon }_{0} \leq \epsilon$ , there exists a classifier $f$ with a rejection option such that
+
+$$
+{R}_{{\epsilon }_{0},\epsilon }^{rej}\left( f\right) \leq {R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right) . \tag{3}
+$$
+
+Moreover, the bound is tight: there exist simple data distributions and $g$ such that any $f$ must have ${R}_{{\epsilon }_{0},\epsilon }^{rej}\left( f\right) \geq$ ${R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right)$ .
+
+The proof for Theorem 1 can be found in the Appendix.
+
+Intuitively, the theorem states that if the data allows a small robust error at adversarial budget $\left( {{\epsilon }_{0} + \epsilon }\right) /2$ , then there exists a classifier with small robust error with rejection at budget $\left( {{\epsilon }_{0},\epsilon }\right)$ . For example, if the two classes can be separated with margin $\left( {{\epsilon }_{0} + \epsilon }\right) /2$ , then there exists $f$ with 0 robust error with rejection, even considering perturbation as large as $\epsilon$ which can be significantly larger than $\left( {{\epsilon }_{0} + \epsilon }\right) /2$ . So under mild conditions, it is possible to classify correctly small perturbations while rejecting large perturbations, answering our first question.
+
+On the other hand, under the same conditions, if we do not allow rejection and consider any classifier $g$ without rejection, then robust error of $g$ at the same adversarial budget is ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( g\right) = {R}_{\epsilon }\left( g\right) \geq {R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right) \geq {R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( f\right)$ . In fact, there can be a big gap between ${R}_{\epsilon }\left( g\right)$ and ${R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right)$ , e.g., when a large fraction of inputs have distances in $\left( {\left( {{\epsilon }_{0} + \epsilon }\right) /2,\epsilon }\right)$ to the decision boundary of $g$ . In this case, allowing rejection can bring significant benefit, answering our second question.
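The geometry behind Theorem 1 can be made concrete with a hypothetical one-dimensional example (the specific numbers below are illustrative, not from the paper): when the two classes are separated by more than $\left( {{\epsilon }_{0} + \epsilon }\right) /2$, a classifier that abstains on a narrow band around the decision boundary attains zero robust error with rejection, while the same data forces errors on any classifier without rejection at budget $\epsilon$.

```python
# 1-D toy illustration of Theorem 1 (a hypothetical distribution, not from the
# paper): class +1 lives at x >= 0.25, class -1 at x <= -0.25, so the margin
# exceeds (eps0 + eps)/2 = 0.2 and g(x) = sign(x) is robust at budget 0.2.
eps0, eps = 0.1, 0.3

def g(x):
    """Classifier without a rejection option."""
    return 1 if x >= 0 else -1

def f(x, band=0.1):
    """Classifier with rejection: abstain on the low-margin band |x| < band."""
    return 0 if abs(x) < band else g(x)  # 0 encodes the reject option

# Worst-case clean input x = 0.25 of class +1:
# - every x' with |x' - x| <= eps0 satisfies x' >= 0.15, so f accepts and is correct;
# - every x'' with |x'' - x| <= eps satisfies x'' >= -0.05, where f either rejects
#   (|x''| < 0.1) or predicts +1 -- never an accepted mistake. In contrast,
#   g(-0.05) = -1, so g alone cannot be robust at budget eps on this data.
```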
+
## Proposed Method
+
+Consider a classifier without rejection $g\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) , g : \mathcal{X} \mapsto \overline{\mathcal{Y}}$ realized by a DNN with parameters ${\mathbf{\theta }}_{c}$ . The output of the DNN is the predicted probability of each class $\mathbf{h}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) =$ $\left\lbrack {{h}_{1}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) ,\cdots ,{h}_{k}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) }\right\rbrack \in {\Delta }_{k}$ . We define the logits or the vector of un-normalized predictions as $\widetilde{\mathbf{h}}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) =$ $\left\lbrack {{\widetilde{h}}_{1}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) ,\cdots ,{\widetilde{h}}_{k}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) }\right\rbrack \in {\mathbb{R}}^{k}$ . The output of the DNN is obtained by applying the softmax function to the logits. The class corresponding to the maximum predicted probability is returned by $g$ , i.e., $g\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) \mathrel{\text{:=}} {\operatorname{argmax}}_{y \in \overline{\mathcal{Y}}}{h}_{y}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right)$ . The corresponding maximum probability is referred to as the prediction confidence ${h}_{\max }\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) \mathrel{\text{:=}} \mathop{\max }\limits_{{y \in \overline{\mathcal{Y}}}}{h}_{y}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right)$ . The prediction confidence has been used in prior works for determining when the classifier should abstain from prediction (Wu et al. 2018; Stutz, Hein, and Schiele 2020). In this work, we focus on the robust training of a classifier with a confidence-based reject option. Unlike many prior works, the confidence is not simply used at test time for rejection, but is included in our robust training objective.
+
+
+
+Figure 1: Overview of the proposed classifier with rejection.
+
+We define a general classifier with a confidence-based reject option $f : \mathcal{X} \mapsto \mathcal{Y}$ as follows
+
+$$
+f\left( {\mathbf{x};\mathbf{\theta }}\right) \mathrel{\text{:=}} \left\{ \begin{array}{ll} g\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) & \text{ if }{h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) \leq \eta , \\ \bot & \text{ otherwise }, \end{array}\right. \tag{4}
+$$
+
+where ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) \in \left\lbrack {0,1}\right\rbrack$ is the predicted probability of rejection and $\eta \in \left\lbrack {0,1}\right\rbrack$ is a suitably-chosen threshold. We can view ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right)$ as a detector that either accepts or rejects an input based on the classifier's prediction, as shown in Fig.1. The detector is defined as a general parametric function of the classifier’s un-normalized prediction ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) \mathrel{\text{:=}}$ $u\left( {\widetilde{\mathbf{h}}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) ;{\mathbf{\theta }}_{d}}\right) , u : {\mathbb{R}}^{k} \mapsto \left\lbrack {0,1}\right\rbrack$ , with detector-specific parameters ${\mathbf{\theta }}_{d}{}^{1}$ . Here, we denote the combined parameter vector of the classifier and detector by ${\mathbf{\theta }}^{T} \mathrel{\text{:=}} \left\lbrack \begin{array}{ll} {\mathbf{\theta }}_{c}^{T} & {\mathbf{\theta }}_{d}^{T} \end{array}\right\rbrack$ .
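The accept/reject rule in Eq. (4) is straightforward to express in code. The sketch below assumes the detector output `h_reject` $= {h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right)$ has already been computed from the logits, and the default threshold `eta` is a placeholder rather than the validation-selected value used in our experiments:

```python
import numpy as np

def predict_with_rejection(logits, h_reject, eta=0.5):
    """Eq. (4): predict argmax_y h_y(x) if h_⊥(x) <= eta, otherwise reject.

    logits: un-normalized predictions h~(x; theta_c) in R^k
    h_reject: detector output h_⊥(x; theta) in [0, 1]
    eta: rejection threshold (placeholder default)
    """
    if h_reject > eta:
        return None  # None encodes the reject option ⊥
    probs = np.exp(logits - logits.max())
    probs = probs / probs.sum()  # softmax over the k classes
    return int(np.argmax(probs))
```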
+
+Probability Model. The class-posterior probability model of the classifier with reject option $f$ is defined as follows:
+
+$$
+{P}_{\mathbf{\theta }}\left( {y \mid \mathbf{x}}\right) = \left( {1 - {h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) }\right) {h}_{y}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) \mathbf{1}\{ y \neq \bot \}
+$$
+
+$$
++ {h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) \mathbf{1}\{ y = \bot \} \text{.} \tag{5}
+$$
+
An input $\mathbf{x}$ is accepted with probability $1 - {h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right)$ and predicted into one of the classes $y \in \overline{\mathcal{Y}}$ with probability ${h}_{y}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right)$; otherwise, $\mathbf{x}$ is rejected with probability ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right)$ and the class $\bot$ is returned with probability 1.
+
+## Loss Functions
+
Consider the robust error with rejection defined in Eq. (1). We would like to design smooth surrogate loss functions to replace the $0-1$ error functions in order to minimize the robust error with rejection.
+
+
+
+Figure 2: Nested perturbation balls (relative to the ${\ell }_{2}$ -norm) around a clean input $\mathbf{x}$ ; used to formalize our robust classification with rejection setting.
+
Accept & Classify Correctly. First, consider the $0-1$ error corresponding to small perturbations in the ${\epsilon }_{0}$-neighborhood, $\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \neq y}\right\}$. We would like the corresponding surrogate loss to take a small value when the predicted probability for the true class $y$ is high and the predicted probability of rejection is low. The predicted probability of $f$ can be viewed as a $(k+1)$-dimensional probability vector over the $k$ classes and the reject class: $\left\lbrack {\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{1}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) ,\cdots ,\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{k}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) ,{h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right\rbrack$. Note that the final term corresponds to the probability of rejection, and the $k+1$ probabilities sum to 1. For an input $\left( {{\mathbf{x}}^{\prime }, y}\right)$ to be accepted and predicted into class $y$, the target $(k+1)$-dimensional one-hot probability vector has a 1 corresponding to class $y$ and zeros elsewhere. We propose to use the cross-entropy loss between this target one-hot probability vector and the predicted probability of $f$, given by
+
+$$
+{\ell }_{\mathrm{{CE}}}\left( {{\mathbf{x}}^{\prime }, y;\mathbf{\theta }}\right) = - \log \left\lbrack {\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{y}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) }\right\rbrack . \tag{6}
+$$
+
+We observe that the above loss function approaches 0 when the probability of rejection is close to 0 and the predicted probability of class $y$ is close to 1 ; the loss function takes a large value in all other cases. We also apply this cross-entropy loss for clean inputs since we expect the classifier to accept and correctly classify them.
+
Accept & Classify Correctly or Reject. Consider the $0-1$ error corresponding to perturbations in the $\epsilon$-neighborhood, $\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\}$. We would like the corresponding surrogate loss to take a small value when the predicted probability for the true class is high, or when the probability of rejection is high. To motivate the cross-entropy loss for this case, consider $k$ meta-classes defined as follows: $\{ 1,\cdots , y \vee \bot , y + 1,\cdots , k\}$, i.e., the reject option is merged only with the true class $y$. The predicted probability of $f$ over these meta-classes is given by $\left\lbrack {\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{1}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) ,\cdots ,\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{y}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) + {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) ,\cdots ,\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{k}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) }\right\rbrack$${}^{2}$. For an input $\left( {{\mathbf{x}}^{\prime }, y}\right)$ to be either rejected, or accepted and predicted into class $y$, the target $k$-dimensional one-hot probability vector has a 1 corresponding to the meta-class $y \vee \bot$ and zeros elsewhere. We propose to use the cross-entropy loss between this target one-hot probability vector and the predicted probability of $f$, given by
+
+$$
+{\ell }_{\mathrm{{CE}}}^{\mathrm{{rej}}}\left( {{\mathbf{x}}^{\prime }, y;\mathbf{\theta }}\right) = - \log \left\lbrack {\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{y}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) }\right.
+$$
+
+$$
+\left. {+{h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right\rbrack \text{.} \tag{7}
+$$
+
This loss function approaches 0 when either i) the probability of rejection is close to 1, or ii) the probability of rejection is close to 0 and the predicted probability of class $y$ is close to 1. Note that both loss functions have range $\lbrack 0,\infty )$.
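Both surrogate losses are simple to compute from the class probabilities and the rejection probability. A minimal sketch, where the small additive constant is our own numerical-stability assumption and is not part of Eqs. (6)-(7):

```python
import numpy as np

STAB = 1e-12  # numerical-stability constant (an assumption of this sketch)

def ce_loss(h_probs, h_reject, y):
    """Eq. (6): -log[(1 - h_⊥(x')) h_y(x')] -- small iff the input is
    accepted AND classified into the true class with high probability."""
    return float(-np.log((1.0 - h_reject) * h_probs[y] + STAB))

def ce_rej_loss(h_probs, h_reject, y):
    """Eq. (7): -log[(1 - h_⊥(x')) h_y(x') + h_⊥(x')] -- small iff the input
    is rejected OR accepted and classified into the true class."""
    return float(-np.log((1.0 - h_reject) * h_probs[y] + h_reject + STAB))
```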
+
+## Robust Training Objective
+
Given clean labeled samples $(\mathbf{x}, y)$ from a data distribution $\mathcal{D}$, a perturbation budget for robust classification $\epsilon > 0$, and a smaller perturbation budget ${\epsilon }_{0} \in \left\lbrack {0,\epsilon }\right\rbrack$, we propose the following training objective for learning a robust classifier with a reject option:
+
+$$
+\mathcal{L}\left( \mathbf{\theta }\right) = \underset{\left( {\mathbf{x}, y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {{\ell }_{\mathrm{{CE}}}\left( {\mathbf{x}, y;\mathbf{\theta }}\right) + \beta \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}{\ell }_{\mathrm{{CE}}}\left( {{\mathbf{x}}^{\prime }, y;\mathbf{\theta }}\right) }\right.
+$$
+
+$$
+\left. {+\gamma \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}{\ell }_{\mathrm{{CE}}}^{\mathrm{{rej}}}\left( {{\mathbf{x}}^{\prime }, y;\mathbf{\theta }}\right) }\right\rbrack \text{.} \tag{8}
+$$
+
+The first term corresponds to the standard cross-entropy loss on clean inputs from the data distribution. The second term corresponds to the robust loss for small perturbations in the ${\epsilon }_{0}$ -neighborhood that we would like the classifier to accept and correctly classify. The third term corresponds to the robust loss for large perturbations in the $\epsilon$ -neighborhood that we would like the classifier to either reject or accept and correctly classify. The classifier parameters ${\mathbf{\theta }}_{c}$ and the detector parameters ${\mathbf{\theta }}_{d}$ are jointly learned by minimizing $\mathcal{L}\left( \mathbf{\theta }\right)$ . The hyper-parameters $\beta \geq 0$ and $\gamma \geq 0$ control the trade-off between the natural error and robust error terms of the classifier. We use standard PGD attack (Madry et al. 2018) to solve the inner maximization in our training objective.
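Objective (8) composes these losses per sample. A structural sketch, where `ce` and `ce_rej` stand for the surrogate losses of Eqs. (6) and (7) and `pgd_max` is an assumed callback for the inner PGD maximization (its step sizes and iteration counts follow the training details, not shown here):

```python
def robust_objective(x, y, ce, ce_rej, pgd_max, eps0, eps, beta=1.0, gamma=1.0):
    """Per-sample value of the training objective, Eq. (8).

    ce, ce_rej: surrogate losses of Eqs. (6) and (7), as functions (x, y) -> loss
    pgd_max: inner-maximization oracle pgd_max(loss_fn, x, radius) returning the
             approximate worst-case loss within the radius-ball (PGD in practice)
    """
    clean_term = ce(x, y)                                     # clean cross-entropy
    small_term = pgd_max(lambda xp: ce(xp, y), x, eps0)       # accept & classify
    large_term = pgd_max(lambda xpp: ce_rej(xpp, y), x, eps)  # classify-or-reject
    return clean_term + beta * small_term + gamma * large_term
```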
+
Comments. If we choose the detector to always accept inputs, i.e., ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) = 0,\forall \mathbf{x}$, and fix ${\epsilon }_{0} = \epsilon ,\beta = 1$, $\gamma = 0$, then the training objective specializes to standard adversarial training. The proposed training objective (8) differs from adversarial training by allowing large perturbations of an input to be rejected when the classifier is likely to predict them incorrectly. As we show experimentally, the proposed method of robust training with rejection typically retains higher robustness under unseen adversarial attacks with a larger perturbation budget $\epsilon$ than that used in training, whereas the robustness of standard adversarial training drops significantly on those attacks.
+
+---
+
${}^{1}$ We discuss specific choices for the function $u$ in the sequel.

${}^{2}$ Note that this is a valid probability distribution over the meta-classes that sums to 1.
+
+---
+
+## Choice of the Detector
+
+Recall that we defined the detector as a general parametric function of the classifier's un-normalized prediction, that outputs the probability of rejection ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) =$ $u\left( {\widetilde{\mathbf{h}}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) ;{\mathbf{\theta }}_{d}}\right)$ . We explored a few approaches for defining $u\left( {\cdot ;{\mathbf{\theta }}_{d}}\right)$ based on smooth approximations of the prediction confidence $\mathop{\max }\limits_{{y \in \overline{\mathcal{Y}}}}{h}_{y}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right)$ . For instance, we used a temperature-scaled log-sum-exponential approximation to the max function, followed by an affine transformation and the Sigmoid function (in order to get a probabilistic output). We also explored a multilayer fully-connected neural network to model the detector, which takes the prediction logits as its input and predicts the probability of rejection. We found the neural network-based model of the detector to have consistently better performance compared to the simple confidence-based approaches. Therefore, we adopt this model of the detector in our experiments.
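A minimal version of such a detector network is sketched below; the layer sizes default to the MNIST configuration used in our experiments, while the random initialization scale is an arbitrary assumption of this sketch:

```python
import numpy as np

def make_detector(k, width=256, depth=3, seed=0):
    """Random parameters theta_d for a fully-connected detector on R^k logits."""
    rng = np.random.default_rng(seed)
    dims = [k] + [width] * (depth - 1) + [1]
    return [(rng.normal(scale=0.1, size=(m, n)), np.zeros(n))
            for m, n in zip(dims[:-1], dims[1:])]

def detector_prob(params, logits):
    """h_⊥(x; theta) = u(h~(x; theta_c); theta_d): ReLU MLP with sigmoid output."""
    h = logits
    for i, (W, b) in enumerate(params):
        h = h @ W + b
        if i < len(params) - 1:
            h = np.maximum(h, 0.0)            # ReLU hidden activations
    return float(1.0 / (1.0 + np.exp(-h[0])))  # sigmoid -> probability in [0, 1]
```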
+
+## Design of Adaptive Attacks
+
We design strong adaptive attacks to evaluate the robustness with rejection of our method. To compute the robustness with rejection at budgets ${\epsilon }_{0}$ and $\epsilon$, we need to generate two adversarial examples, ${\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right)$ and ${\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right)$, for each clean input $(\mathbf{x}, y)$. We generate the adversarial example ${\mathbf{x}}^{\prime }$ within the ${\epsilon }_{0}$-ball $\mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right)$ using the following objective:
+
+$$
+{\mathbf{x}}^{\prime } = \mathop{\operatorname{argmax}}\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }} - \log \left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) .
+$$
+
The goal of the adversary is to make the detector reject the adversarial input by pushing the probability of rejection close to 1${}^{3}$. We generate the adversarial example ${\mathbf{x}}^{\prime \prime }$ within the larger $\epsilon$-ball $\mathcal{N}\left( {\mathbf{x},\epsilon }\right)$ via the following objective:
+
+$$
+{\mathbf{x}}^{\prime \prime } = \mathop{\operatorname{argmax}}\limits_{{{\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}{\ell }_{\mathrm{{CE}}}^{\mathrm{{rej}}}\left( {{\mathbf{x}}^{\prime \prime }, y;\mathbf{\theta }}\right) .
+$$
+
+By solving this objective, the adversary attempts to push both the probability of rejection ${h}_{ \bot }\left( {{\mathbf{x}}^{\prime \prime };\mathbf{\theta }}\right)$ and the predicted probability of the true class ${h}_{y}\left( {{\mathbf{x}}^{\prime \prime };{\mathbf{\theta }}_{c}}\right)$ close to 0 . Thus, the goal of the adversary is to make the classifier-detector accept and incorrectly classify the adversarial input.
+
+We use the Projected Gradient Descent (PGD) method with Backtracking proposed by (Stutz, Hein, and Schiele 2020) to solve the attack objectives. The hyperparameters for PGD with backtracking are specified in the experiment section. Adaptive attacks for evaluating the baseline methods are discussed in the Appendix.
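Both attack objectives share the familiar sign-gradient-and-project PGD structure. A generic ${\ell }_{\infty }$ sketch, omitting the backtracking refinement of Stutz, Hein, and Schiele (2020); `loss_grad` is an assumed callback supplying the gradient of the chosen attack objective:

```python
import numpy as np

def pgd_attack(x, loss_grad, radius, step, n_steps, rng=None):
    """Generic l_inf PGD ascent (a sketch; backtracking bookkeeping omitted).

    loss_grad: assumed callback returning the gradient of the attack objective
               at the current adversarial point
    """
    rng = rng or np.random.default_rng(0)
    x_adv = x + rng.uniform(-radius, radius, size=x.shape)  # random start
    for _ in range(n_steps):
        x_adv = x_adv + step * np.sign(loss_grad(x_adv))    # signed ascent step
        x_adv = np.clip(x_adv, x - radius, x + radius)      # project onto the ball
        x_adv = np.clip(x_adv, 0.0, 1.0)                    # keep valid pixel range
    return x_adv
```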
+
+## Experiments
+
+In this section, we perform experiments to evaluate the proposed method (SATR) and compare it to the baseline methods. Our main findings are summarized as follows:
+
1) SATR achieves higher robustness with rejection compared to adversarial training (with and without confidence-based rejection) and CCAT (Stutz, Hein, and Schiele 2020).

2) On small perturbations, SATR has a much lower rejection rate than CCAT, which often rejects a large fraction of the perturbed inputs.

3) SATR outperforms both CCAT and adversarial training under unseen attacks with larger perturbation budgets than those used in training.
+
+We next provide details on the experimental setup, datasets and DNN architectures, baseline methods, and the performance metric.
+
+## Setup
+
+We describe the important experimental settings in this section, and provide additional details about our method and the baselines in the appendix.
+
+Datasets. We perform experiments on the MNIST (LeCun 1998) and CIFAR-10 (Krizhevsky, Hinton et al. 2009) image datasets. MNIST contains 50,000 training images and 10,000 test images from 10 classes corresponding to handwritten digits. CIFAR-10 contains 50,000 training images and 10,000 test images from 10 classes corresponding to object categories. Following the setup in (Stutz, Hein, and Schiele 2020), we compute the accuracy of the models on the first 9,000 images of the test set and compute the robustness of the models on the first 1,000 images of the test set. We use the last 1,000 images of the test set as a validation dataset for selecting the rejection threshold of the methods.
+
Baseline Methods. We compare the performance of SATR with the following three baselines: (1) AT: standard adversarial training without rejection (i.e., accepting every input) (Madry et al. 2018); (2) AT + Rejection: adversarial training with rejection based on the prediction confidence; (3) CCAT: confidence-calibrated adversarial training (Stutz, Hein, and Schiele 2020).
+
DNN Architectures. On MNIST, we use the LeNet architecture (LeCun et al. 1989) for the classifier ${\mathbf{\theta }}_{c}$, and a three-layer fully-connected neural network of width 256 with ReLU activations for the detector ${\mathbf{\theta }}_{d}$. On CIFAR-10, we use the ResNet-20 architecture (He et al. 2016) for the classifier ${\mathbf{\theta }}_{c}$, and a seven-layer fully-connected neural network of width 1024 with ReLU activations and a batch normalization layer for the detector ${\mathbf{\theta }}_{d}$.
+
Training Details. On both MNIST and CIFAR-10, we train the model for 100 epochs with a batch size of 128. We use standard stochastic gradient descent (SGD) starting with a learning rate of 0.1; the learning rate is multiplied by 0.95 after each epoch. We use a momentum of 0.9 and no weight decay for SGD. We split each training batch into two equal-sized sub-batches, using the first sub-batch for the first two loss terms in our training objective (8) and the second sub-batch for the third loss term. We set the hyper-parameters $\beta = 1$ and $\gamma = 1$ in our training objective without tuning. On MNIST, we train the model from scratch, while on CIFAR-10 we use an adversarially-trained model to initialize the classifier parameters ${\mathbf{\theta }}_{c}$. On CIFAR-10, we also augment the training images using random crops and random horizontal flips. We use the standard PGD attack (Madry et al. 2018) to generate adversarial training examples: on MNIST, with a step size of 0.01, 40 steps, and a random start; on CIFAR-10, with a step size of $2/255$, 10 steps, and a random start. In the training objective, by default, we set $\epsilon = 0.3$ and ${\epsilon }_{0} = 0.1$ for MNIST, and $\epsilon = 8/255$ and ${\epsilon }_{0} = 2/255$ for CIFAR-10.
+
+---
+
+${}^{3}$ We appeal to the definition of robust error with rejection in Eq. (2), where rejecting a perturbed input in the ${\epsilon }_{0}$ -neighborhood is considered an error.
+
+---
+
Performance Metric. We use the robustness with rejection at budgets ${\epsilon }_{0}$ and $\epsilon$, defined as $1 - {R}_{{\epsilon }_{0},\epsilon }^{\mathrm{{rej}}}\left( f\right)$, as the evaluation metric. For a fixed $\epsilon$, we vary ${\epsilon }_{0}$ from 0 to $\epsilon$ over a given number of values. Note that the ${\epsilon }_{0}$ in this performance metric is different from the fixed ${\epsilon }_{0}$ used in the training objective of the proposed method. For convenience, we define the factor $\alpha \mathrel{\text{:=}} {\epsilon }_{0}/\epsilon \in \left\lbrack {0,1}\right\rbrack$, and calculate the robustness with rejection metric for each $\alpha$ in the set $\{ 0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 1.0\}$; each $\alpha$ value corresponds to ${\epsilon }_{0} = \alpha\epsilon$. We plot a robustness curve for each method, with $\alpha$ on the x-axis and the corresponding robustness with rejection on the y-axis; larger robustness values correspond to better performance. Referring to Fig. 3, we note that at the right end of this curve (${\epsilon }_{0} = \epsilon$), the robustness $1 - {R}_{\epsilon ,\epsilon }^{\text{rej }}\left( f\right)$ corresponds to the standard definition of adversarial robustness without rejection (Madry et al. 2018). At the left end (${\epsilon }_{0} = 0$), the robustness $1 - {R}_{0,\epsilon }^{\text{rej }}\left( f\right)$ corresponds to the robustness with rejection defined by (Tramèr 2021). In practice, we are mainly interested in the robustness for small values of $\alpha$, where the radius of perturbations to be accepted is small.
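Sweeping $\alpha$ to produce the robustness curve is mechanical. A sketch, where `robust_error_fn` is an assumed callback that estimates ${R}_{{\epsilon }_{0},\epsilon }^{\mathrm{{rej}}}$ empirically (e.g., via the adaptive attacks described earlier):

```python
def robustness_curve(robust_error_fn, eps,
                     alphas=(0, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5, 1.0)):
    """Robustness with rejection 1 - R^rej_{eps0, eps} at eps0 = alpha * eps.

    robust_error_fn: callback (eps0, eps) -> empirical robust error with rejection
    Returns a list of (alpha, robustness) pairs for plotting.
    """
    return [(a, 1.0 - robust_error_fn(a * eps, eps)) for a in alphas]
```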
+
Evaluation. We use the same approach to set the rejection threshold for all the methods. Specifically, on MNIST, we set the threshold such that only $1\%$ of clean validation inputs are rejected; on CIFAR-10, we set the threshold such that only $5\%$ of clean validation inputs are rejected. We consider ${\ell }_{\infty }$-norm bounded attacks and generate adversarial examples to compute the robustness with rejection metric via the PGD attack with backtracking (Stutz, Hein, and Schiele 2020). For generating adversarial examples ${\mathbf{x}}^{\prime }$ within the ${\epsilon }_{0}$-ball $\mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right)$, we use a base learning rate of 0.05, a momentum factor of 0.9, a learning rate factor of 1.25, 200 iterations, and 10 random restarts. For generating adversarial examples ${\mathbf{x}}^{\prime \prime }$ within the larger $\epsilon$-ball $\mathcal{N}\left( {\mathbf{x},\epsilon }\right)$, we use a base learning rate of 0.001, a momentum factor of 0.9, a learning rate factor of 1.1, 1000 iterations, and 10 random restarts.
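Threshold selection from clean validation inputs amounts to taking a quantile of the detector outputs. A sketch (the quantile interpolation rule is an implementation choice of this sketch):

```python
import numpy as np

def select_threshold(clean_reject_probs, target_reject_rate):
    """Choose eta so that roughly target_reject_rate of clean validation inputs
    have h_⊥(x) > eta (1% on MNIST and 5% on CIFAR-10 in our setup)."""
    return float(np.quantile(clean_reject_probs, 1.0 - target_reject_rate))
```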
+
+## Results
+
+We discuss the performance of the proposed method and the baselines on the CIFAR-10 and MNIST datasets.
+
+Evaluation under seen attacks. Figure 3 compares the robustness with rejection of the methods as a function of $\alpha$ for the scenario where the adaptive attacks used for evaluation use the same $\epsilon$ budget that was used for training the methods. For the proposed SATR, the ${\epsilon }_{0}$ value used for training is indicated (with the corresponding $\alpha$ value) using the vertical dashed line. We observe that CCAT has comparable robustness to AT only for $\alpha = 0$, but its robustness quickly drops for larger $\alpha$. This suggests that CCAT rejects a large fraction of small input perturbations based on its confidence thresholding method. AT with confidence-based rejection has higher robustness compared to standard AT on both datasets, which suggests that including even a simple rejection mechanism can help improve the robustness. On CIFAR-10, the proposed SATR has significantly higher robustness with rejection for small to moderate $\alpha$, and its robustness drops only for large $\alpha$ values (which are not likely to be of practical interest). On MNIST, the robustness of SATR is slightly better than or comparable to AT+Rejection for small to moderate $\alpha$. This suggests that SATR is successful at accepting and correctly classifying a majority of adversarial attacks of small size.
+
+Evaluation under unseen attacks. Figure 4 compares the robustness with rejection of the methods as a function of $\alpha$ for the unseen-adaptive-attack scenario, wherein a larger $\epsilon$ (compared to training) is used for evaluation. AT, both with and without rejection, performs poorly in this setting, suggesting that it does not generalize well to unseen (stronger) attacks. CCAT has relatively high robustness for $\alpha = 0$ ; however, its robustness sharply drops for larger $\alpha$ values. The significantly higher robustness of SATR for a range of small to moderate $\alpha$ values suggests that the proposed training method learns to reject larger input perturbations, even if the attack is unseen.
+
+Ablation study. We performed an ablation experiment to study the effect of the hyper-parameter ${\epsilon }_{0}$ used by SATR during training. The result of this experiment is shown in Figure 5 for a few different ${\epsilon }_{0}$ values. Clearly, the choice ${\epsilon }_{0} = 0$ leads to poor robustness with rejection, suggesting that a small non-zero value of ${\epsilon }_{0}$ is required for training to ensure that SATR does not reject too many small adversarial perturbations. We also observe that a larger ${\epsilon }_{0}$ during training typically leads to a higher robustness for large $\alpha$ values. However, this may come at the expense of lower robustness for small $\alpha$ , as observed on CIFAR-10 for ${\epsilon }_{0} = 4/{255}$ .
+
+## Conclusion
+
+We explored the problem of learning an adversarially-robust classifier with a reject option. We conducted a careful theoretical analysis of the problem and motivated the need for not rejecting small perturbations of the input. We proposed a novel metric for evaluating the robustness of a classifier with a reject option that subsumes prior definitions of robustness and provides a more fine-grained analysis of the radius (size) of perturbations rejected by a given method. We proposed a novel training objective for learning a robust classifier with rejection that encourages small input perturbations to be accepted and classified correctly, while allowing larger input perturbations to be rejected when the classifier's prediction may be incorrect. Experimental evaluations using strong adaptive attacks demonstrate a significant improvement in the adversarial robustness with rejection of the proposed method, including the setting where unseen attacks with a larger $\epsilon$ budget are encountered during evaluation.
+
+
+
+Figure 3: Results on MNIST and CIFAR-10 datasets under seen adaptive attacks. On MNIST we set the perturbation budget $\epsilon = {0.3}$ , while on CIFAR-10 we set the perturbation budget $\epsilon = 8/{255}$ . The vertical dashed line corresponds to the $\alpha$ used for training SATR.
+
+
+
+Figure 4: Results on MNIST and CIFAR-10 datasets under unseen adaptive attacks. On MNIST, we set the perturbation budget $\epsilon = {0.4}$ , while on CIFAR-10 we set the perturbation budget $\epsilon = {10}/{255}$ . The vertical dashed line corresponds to the $\alpha$ used for training SATR.
+
+## References
+
+Athalye, A.; Carlini, N.; and Wagner, D. A. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. In Proceedings of the 35th International Conference on Machine Learning (ICML), volume 80 of Proceedings of Machine Learning Research, 274-283. PMLR.
+
+Biggio, B.; and Roli, F. 2018. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84: 317-331.
+
+Carlini, N.; and Wagner, D. A. 2017. Towards Evaluating the Robustness of Neural Networks. In 2017 IEEE Symposium on Security and Privacy, SP, 39-57. IEEE Computer Society.
+
+Charoenphakdee, N.; Cui, Z.; Zhang, Y.; and Sugiyama, M. 2021. Classification with Rejection Based on Cost-sensitive Classification. In Proceedings of the 38th International Conference on Machine Learning (ICML), volume 139 of Proceedings of Machine Learning Research, 1507-1517. PMLR.
+
+Cortes, C.; DeSalvo, G.; and Mohri, M. 2016. Learning with Rejection. In Algorithmic Learning Theory - 27th International Conference, ALT, volume 9925 of Lecture Notes in Computer Science, 67-82.
+
+Geifman, Y.; and El-Yaniv, R. 2019. SelectiveNet: A Deep Neural Network with an Integrated Reject Option. In Proceedings of the 36th International Conference on Machine Learning (ICML), volume 97 of Proceedings of Machine Learning Research, 2151-2159. PMLR.
+
+Guan, H.; Zhang, Y.; Cheng, H.; and Tang, X. 2018. Abstaining Classification When Error Costs are Unequal and Unknown. CoRR, abs/1806.03445.
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, 770-778. IEEE Computer Society.
+
+Kato, M.; Cui, Z.; and Fukuhara, Y. 2020. ATRO: Adversarial Training with a Rejection Option. CoRR, abs/2010.12905.
+
+
+
+Figure 5: Ablation study on MNIST and CIFAR-10 datasets under seen adaptive attacks. On MNIST we set the perturbation budget $\epsilon = {0.3}$ , while on CIFAR-10 we set the perturbation budget $\epsilon = 8/{255}$ .
+
+Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
+
+Laidlaw, C.; and Feizi, S. 2019. Playing it Safe: Adversarial Robustness with an Abstain Option. CoRR, abs/1911.11253.
+
+LeCun, Y. 1998. The MNIST database of handwritten digits. http://yann.lecun.com/exdb/mnist/.
+
+LeCun, Y.; Boser, B. E.; Denker, J. S.; Henderson, D.; Howard, R. E.; Hubbard, W. E.; and Jackel, L. D. 1989. Handwritten Digit Recognition with a Back-Propagation Network. In Touretzky, D. S., ed., Advances in Neural Information Processing Systems 2, [NIPS Conference, Denver, Colorado, USA, November 27-30, 1989], 396-404. Morgan Kaufmann.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In 6th International Conference on Learning Representations, Conference Track Proceedings. OpenReview.net.
+
+Pang, T.; Yang, X.; Dong, Y.; Su, H.; and Zhu, J. 2021a. Bag of Tricks for Adversarial Training. In 9th International Conference on Learning Representations (ICLR). OpenReview.net.
+
+Pang, T.; Zhang, H.; He, D.; Dong, Y.; Su, H.; Chen, W.; Zhu, J.; and Liu, T. 2021b. Adversarial Training with Rectified Rejection. CoRR, abs/2105.14785.
+
+Sheikholeslami, F.; Lotfi, A.; and Kolter, J. Z. 2021. Provably robust classification of adversarial examples with detection. In 9th International Conference on Learning Representations (ICLR). OpenReview.net.
+
+Stutz, D.; Hein, M.; and Schiele, B. 2019. Disentangling Adversarial Robustness and Generalization. In IEEE Conference on Computer Vision and Pattern Recognition CVPR, 6976-6987. Computer Vision Foundation / IEEE.
+
+Stutz, D.; Hein, M.; and Schiele, B. 2020. Confidence-Calibrated Adversarial Training: Generalizing to Unseen Attacks. In Proceedings of the 37th International Conference on Machine Learning (ICML), volume 119 of Proceedings of Machine Learning Research, 9155-9166. PMLR.
+
+Tax, D. M. J.; and Duin, R. P. W. 2008. Growing a multi-class classifier with a reject option. Pattern Recognition Letters, 29(10): 1565-1570.
+
+Tramèr, F. 2021. Detecting Adversarial Examples Is (Nearly) As Hard As Classifying Them. CoRR, abs/2107.11630.
+
+Tramèr, F.; Carlini, N.; Brendel, W.; and Madry, A. 2020. On Adaptive Attacks to Adversarial Example Defenses. In Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems.
+
+Tsipras, D.; Santurkar, S.; Engstrom, L.; Turner, A.; and Madry, A. 2019. Robustness May Be at Odds with Accuracy. In 7th International Conference on Learning Representations (ICLR). OpenReview.net.
+
+Wu, X.; Jang, U.; Chen, J.; Chen, L.; and Jha, S. 2018. Reinforcing Adversarial Robustness using Model Confidence Induced by Adversarial Training. In Proceedings of the 35th International Conference on Machine Learning (ICML), volume 80 of Proceedings of Machine Learning Research, 5330-5338. PMLR.
+
+Zhang, H.; Yu, Y.; Jiao, J.; Xing, E. P.; Ghaoui, L. E.; and Jordan, M. I. 2019. Theoretically Principled Trade-off between Robustness and Accuracy. In Proceedings of the 36th International Conference on Machine Learning (ICML), volume 97 of Proceedings of Machine Learning Research, 7472-7482. PMLR.
+
+## Appendix: Proof of Theorem 1
+
+Theorem 2 (Restatement of Theorem 1). Consider binary classification. Let $g\left( \mathbf{x}\right)$ be any decision boundary (i.e., any classifier without a rejection option). For any $0 \leq {\epsilon }_{0} \leq \epsilon$ , there exists a classifier $f$ with a rejection option such that
+
+$$
+{R}_{{\epsilon }_{0},\epsilon }^{rej}\left( f\right) \leq {R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right) . \tag{9}
+$$
+
+Moreover, the bound is tight: there exist simple data distributions and $g$ such that any $f$ must have ${R}_{{\epsilon }_{0},\epsilon }^{rej}\left( f\right) \geq$ ${R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right)$ .
+
+Proof. For any $r > 0$ , let $\mathcal{N}\left( {g, r}\right)$ denote the region within distance $r$ to the decision boundary of $g$ :
+
+$$
+\mathcal{N}\left( {g, r}\right) \mathrel{\text{:=}} \left\{ {\mathbf{x} \in \mathcal{X} : \exists {\mathbf{x}}^{\prime }, d\left( {{\mathbf{x}}^{\prime },\mathbf{x}}\right) \leq r\text{ and }g\left( {\mathbf{x}}^{\prime }\right) \neq g\left( \mathbf{x}\right) }\right\} .
+$$
+
+Consider a parameter $\delta \in \left\lbrack {0,\epsilon }\right\rbrack$ and construct a classifier ${f}_{\delta }$ with rejection as follows:
+
+$$
+{f}_{\delta }\left( \mathbf{x}\right) \mathrel{\text{:=}} \left\{ \begin{array}{ll} \bot & \text{ if }\mathbf{x} \in \mathcal{N}\left( {g,\delta }\right) , \\ g\left( \mathbf{x}\right) & \text{ otherwise. } \end{array}\right. \tag{10}
+$$
+
+We will show that any sample $\left( {\mathbf{x}, y}\right)$ contributing error to ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( {f}_{\delta }\right)$ must also contribute error to ${R}_{{\epsilon }^{\prime }}\left( g\right)$, where ${\epsilon }^{\prime } = \max \left\{ {{\epsilon }_{0} + \delta ,\epsilon - \delta }\right\}$. This will prove that ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( {f}_{\delta }\right) \leq {R}_{{\epsilon }^{\prime }}\left( g\right)$, which specializes to ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( {f}_{\delta }\right) \leq {R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right)$ for the choice $\delta = \left( {\epsilon - {\epsilon }_{0}}\right) /2$. Consider the following two cases:
+
+- Consider the first type of error in ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( {f}_{\delta }\right)$ : $\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\lbrack {{f}_{\delta }\left( {\mathbf{x}}^{\prime }\right) \neq y}\right\rbrack = 1$ . This implies that there exists ${\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right)$ such that ${f}_{\delta }\left( {\mathbf{x}}^{\prime }\right) \neq y$ . So there are two subcases to consider:
+
+(1) ${\mathbf{x}}^{\prime } \in \mathcal{N}\left( {g,\delta }\right)$ : in this case $\mathbf{x} \in \mathcal{N}\left( {g,\delta + {\epsilon }_{0}}\right)$ .
+
+(2) $g\left( {\mathbf{x}}^{\prime }\right) \neq y$ : in this case either $g\left( \mathbf{x}\right) \neq y$ , or $g\left( \mathbf{x}\right) =$ $y \neq g\left( {\mathbf{x}}^{\prime }\right)$ and thus $\mathbf{x} \in \mathcal{N}\left( {g,{\epsilon }_{0}}\right)$ .
+
+In summary, either $g\left( \mathbf{x}\right) \neq y$ or $\mathbf{x} \in \mathcal{N}\left( {g,{\epsilon }_{0} + \delta }\right)$ .
+
+- Next consider the second type of error in ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( {f}_{\delta }\right)$ : $\mathop{\max }\limits_{{{\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\left\lbrack {{f}_{\delta }\left( {\mathbf{x}}^{\prime \prime }\right) \notin \{ y, \bot \} }\right\rbrack = 1$ . This means there exists an ${\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right)$ such that ${f}_{\delta }\left( {\mathbf{x}}^{\prime \prime }\right) \notin \{ y, \bot \}$ , i.e., ${\mathbf{x}}^{\prime \prime } \notin \mathcal{N}\left( {g,\delta }\right)$ and $g\left( {\mathbf{x}}^{\prime \prime }\right) \neq y$ . This implies that all $\mathbf{z} \in \mathcal{N}\left( {{\mathbf{x}}^{\prime \prime },\delta }\right)$ should have $g\left( \mathbf{z}\right) = g\left( {\mathbf{x}}^{\prime \prime }\right) \neq y$ . In particular, there exists $\mathbf{z} \in \mathcal{N}\left( {{\mathbf{x}}^{\prime \prime },\delta }\right)$ with $d\left( {\mathbf{z},\mathbf{x}}\right) \leq \epsilon - \delta$ and $g\left( \mathbf{z}\right) \neq y$ . It can be verified that $\mathbf{z} = \frac{\delta }{\epsilon }\mathbf{x} + \frac{\epsilon - \delta }{\epsilon }{\mathbf{x}}^{\prime \prime }$ , which is a point on the line joining $\mathbf{x}$ and ${\mathbf{x}}^{\prime \prime }$ , satisfies the above condition. In summary, either $g\left( \mathbf{x}\right) \neq y$ , or $g\left( \mathbf{x}\right) = y \neq g\left( \mathbf{z}\right)$ and thus $\mathbf{x} \in \mathcal{N}\left( {g,\epsilon - \delta }\right)$ .
+
+Overall, a sample $\left( {\mathbf{x}, y}\right)$ contributing error to ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( {f}_{\delta }\right)$ must satisfy either $g\left( \mathbf{x}\right) \neq y$ or $\mathbf{x} \in \mathcal{N}\left( {g,{\epsilon }^{\prime }}\right)$, where ${\epsilon }^{\prime } = \max \left\{ {{\epsilon }_{0} + \delta ,\epsilon - \delta }\right\}$. Clearly, such a sample also contributes an error to ${R}_{{\epsilon }^{\prime }}\left( g\right)$. Therefore, we have
+
+$$
+{R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( {f}_{\delta }\right) \leq {R}_{{\epsilon }^{\prime }}\left( g\right) , \tag{11}
+$$
+
+which leads to the desired bound when $\delta = \left( {\epsilon - {\epsilon }_{0}}\right) /2$ and ${\epsilon }^{\prime } = \left( {\epsilon + {\epsilon }_{0}}\right) /2$ .
+
+To show that the bound is tight, consider the following data distribution. Let $\mathbf{x} \in \mathbb{R}$ and $y \in \{ - 1, + 1\}$, let $0 < {\epsilon }_{0} < \epsilon$, and let $\alpha \in \left( {0,1/2}\right)$ be some constant. The sample $\left( {\mathbf{x}, y}\right)$ is $\left( {-{4\epsilon }, - 1}\right)$ with probability $\left( {1 - \alpha }\right) /2$, $\left( {-{\epsilon }_{0}/4, - 1}\right)$ with probability $\alpha /2$, $\left( {{\epsilon }_{0}/4, + 1}\right)$ with probability $\alpha /2$, and $\left( {{4\epsilon }, + 1}\right)$ with probability $\left( {1 - \alpha }\right) /2$. Let $g\left( \mathbf{x}\right) \mathrel{\text{:=}} \operatorname{sign}\left( {\mathbf{x} + \epsilon }\right)$. It is clear that ${R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right) = \alpha /2$. It is also clear that any $f$ must have ${R}_{{\epsilon }_{0},\epsilon }^{\text{rej }}\left( f\right) \geq \alpha /2$ since the points $\mathbf{x} = - {\epsilon }_{0}/4$ and $\mathbf{x} = {\epsilon }_{0}/4$ have distance only ${\epsilon }_{0}/2$ but have different labels.
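The tightness construction can be checked numerically. The sketch below instantiates the distribution with illustrative values $\epsilon = 0.2$, ${\epsilon }_{0} = 0.1$, $\alpha = 0.4$ (these numbers are our choices, not from the paper) and verifies ${R}_{({\epsilon }_{0}+\epsilon)/2}(g) = \alpha/2$ on a fine grid:

```python
import numpy as np

# Illustrative parameter values (our choices, not from the paper).
eps, eps0, alpha = 0.2, 0.1, 0.4
# The four support points of the distribution: (x, y, probability).
points = [(-4 * eps, -1, (1 - alpha) / 2), (-eps0 / 4, -1, alpha / 2),
          (eps0 / 4, +1, alpha / 2), (4 * eps, +1, (1 - alpha) / 2)]
g = lambda x: 1 if x + eps >= 0 else -1   # g(x) = sign(x + eps)

r = (eps0 + eps) / 2                      # radius (eps0 + eps) / 2
robust_err = 0.0
for x, y, p in points:
    # (x, y) errs if some x' with |x' - x| <= r is misclassified by g.
    if any(g(x + d) != y for d in np.linspace(-r, r, 2001)):
        robust_err += p
# Only x = -eps0/4 errs (g already misclassifies it at d = 0), so
# robust_err equals alpha / 2.
```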
+
+## Experimental Details
+
+## General Setup
+
+Software and Hardware. We run all experiments with PyTorch and NVIDIA GeForce RTX 2080Ti GPUs.
+
+Number of Evaluation Runs. We run all experiments once with fixed random seeds.
+
+Dataset. MNIST (LeCun 1998) is a large dataset of handwritten digits. Each digit has 5,500 training images and 1,000 test images. Each image is ${28} \times {28}$ grayscale. CIFAR-10 (Krizhevsky, Hinton et al. 2009) is a dataset of ${32} \times {32}$ color images with ten classes, each consisting of 5,000 training images and 1,000 test images. The classes correspond to dogs, frogs, ships, trucks, etc. We normalize the range of pixel values to $\left\lbrack {0,1}\right\rbrack$.
+
+Multiple restarts of PGD Attacks. We use PGD attacks with multiple restarts for evaluating the robustness. Following (Stutz, Hein, and Schiele 2020), we initialize the perturbation $\delta$ uniformly over directions and norm:
+
+$$
+\delta = {u\epsilon }\frac{{\delta }^{\prime }}{{\begin{Vmatrix}{\delta }^{\prime }\end{Vmatrix}}_{\infty }},{\delta }^{\prime } \sim \mathcal{N}\left( {0, I}\right) , u \sim U\left( {0,1}\right) \tag{12}
+$$
+
+where ${\delta }^{\prime }$ is sampled from a standard Gaussian and $u \in \left\lbrack {0,1}\right\rbrack$ from a uniform distribution. We also consider zero initialization, i.e., $\delta = 0$ . For zero initialization, we use 1 restart and for random initialization, we use multiple restarts. We take the perturbation corresponding to the best objective value obtained throughout the optimization.
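The initialization in Eq. (12) can be sketched as follows (the function name and the fixed seed are illustrative): draw a standard-Gaussian direction ${\delta }^{\prime }$, rescale it onto the surface of the ${\ell }_{\infty }$ $\epsilon$-ball, then shrink it by a uniform factor $u$.

```python
import numpy as np

def random_linf_init(shape, eps, seed=0):
    """Sample the initial perturbation of Eq. (12): a standard-Gaussian
    direction delta', rescaled to the l-inf eps-sphere and shrunk by
    a uniform factor u in [0, 1]."""
    rng = np.random.default_rng(seed)
    delta_prime = rng.standard_normal(shape)
    u = rng.uniform(0.0, 1.0)
    return u * eps * delta_prime / np.max(np.abs(delta_prime))

# The resulting perturbation always lies inside the eps-ball.
delta = random_linf_init((28, 28), eps=0.3)
```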
+
+## Baselines
+
+We consider three baselines: (1) AT: adversarial training without rejection (i.e., accepting every input); (2) AT+Rejection: adversarial training with confidence-based rejection; (3) CCAT: confidence-calibrated adversarial training. We give their training details below.
+
+AT and AT+Rejection. We consider the standard adversarial training proposed in (Madry et al. 2018). On MNIST, we use the LeNet network architecture (LeCun et al. 1989) and train the network for 100 epochs with a batch size of 128. We use standard stochastic gradient descent (SGD) starting with a learning rate of 0.1; the learning rate is multiplied by 0.95 after each epoch. We use a momentum of 0.9 and do not use weight decay for SGD. We use the PGD attack to generate adversarial training examples with $\epsilon = {0.3}$, a step size of 0.01, 40 steps, and a random start. On CIFAR-10, we use the ResNet-20 network architecture (He et al. 2016) and train the network following the suggestions in (Pang et al. 2021a). Specifically, we train the network for 110 epochs with a batch size of 128 using stochastic gradient descent (SGD) with Nesterov momentum and a learning rate schedule. We set the momentum to 0.9 and use ${\ell }_{2}$ weight decay with a coefficient of $5 \times {10}^{-4}$. The initial learning rate is 0.1, and it is decayed by a factor of 0.1 at epochs 100 and 105, respectively. We augment the training images using random crop and random horizontal flip. We use the PGD attack to generate adversarial training examples with $\epsilon = 8/{255}$, a step size of $2/{255}$, 10 steps, and a random start. On both MNIST and CIFAR-10, we train on ${50}\%$ clean and ${50}\%$ adversarial examples per batch.
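The two learning-rate schedules described above can be sketched as follows (function names are illustrative, and "decreases by 0.1" is read as multiplication by 0.1, the usual convention for this schedule):

```python
def mnist_lr(epoch, base=0.1):
    """MNIST schedule: the learning rate is multiplied by 0.95
    after every completed epoch."""
    return base * (0.95 ** epoch)

def cifar_lr(epoch, base=0.1):
    """CIFAR-10 schedule (110 epochs total): multiply the learning
    rate by 0.1 at epoch 100 and again at epoch 105."""
    if epoch >= 105:
        return base * 0.01
    if epoch >= 100:
        return base * 0.1
    return base
```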
+
+CCAT. We follow the original training settings for CCAT in (Stutz, Hein, and Schiele 2020) and train models on MNIST and CIFAR-10 using standard stochastic gradient descent (SGD). On MNIST, we use the LeNet network architecture (LeCun et al. 1989) and train the network for 100 epochs with a batch size of 100 and a learning rate of 0.1. On CIFAR-10, we use the ResNet-20 network architecture (He et al. 2016) and train the network for 200 epochs with a batch size of 100 and a learning rate of 0.075. We augment the training images using random crop and random horizontal flip on CIFAR-10. On both MNIST and CIFAR-10, we use a learning rate schedule in which the learning rate is multiplied by 0.95 after each epoch. We use a momentum of 0.9 and do not use weight decay for SGD. We use the PGD attack with backtracking to generate adversarial training examples: we use a learning rate of 0.005, a momentum of 0.9, a learning rate factor of 1.5, 40 steps, and a random start. We randomly switch between the random initialization and zero initialization. We train on ${50}\%$ clean and ${50}\%$ adversarial examples per batch.
+
+## Adaptive Attacks for Confidence-based Detectors
+
+We design adaptive attacks to evaluate the robustness with rejection of classifiers that use confidence (the maximum softmax score) to reject adversarial inputs (e.g., AT+Rejection and CCAT). To compute robustness with rejection at budgets ${\epsilon }_{0}$ and $\epsilon$, we need to generate two adversarial examples ${\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right)$ and ${\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right)$ for each clean input $\left( {\mathbf{x}, y}\right)$. We generate the adversarial example ${\mathbf{x}}^{\prime }$ within the ${\epsilon }_{0}$ -ball $\mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right)$ using the following objective:
+
+$$
+{\mathbf{x}}^{\prime } = \mathop{\operatorname{argmax}}\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }} - \mathop{\sum }\limits_{{j = 1}}^{k}{h}_{j}{\left( {\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}\right) }^{2}.
+$$
+
+The goal of the adversary is to make the detector reject the adversarial input by pushing the softmax output of the network to uniform.
+
+We generate the adversarial example ${\mathbf{x}}^{\prime \prime }$ within the larger $\epsilon$ -ball $\mathcal{N}\left( {\mathbf{x},\epsilon }\right)$ via the following objective:
+
+$$
+{\mathbf{x}}^{\prime \prime } = \mathop{\operatorname{argmax}}\limits_{{{\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathop{\max }\limits_{{j \neq y}}{h}_{j}\left( {{\mathbf{x}}^{\prime \prime };{\mathbf{\theta }}_{c}}\right) .
+$$
+
+By solving this objective, the adversary attempts to find misclassified adversarial examples with high confidence. Thus, the goal of the adversary is to make the classifier-detector accept and incorrectly classify the adversarial input.
+
+We use the Projected Gradient Descent (PGD) method with backtracking proposed by Stutz, Hein, and Schiele (2020) to solve the attack objectives. The hyperparameters for PGD with backtracking are specified in the experiment section.
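The two attack objectives can be sketched directly on softmax outputs (a minimal illustration; `probs` stands in for the softmax vector $h(\cdot ;{\mathbf{\theta }}_{c})$, and the function names are our own):

```python
import numpy as np

def reject_objective(probs):
    """Objective for x' in the eps0-ball: maximize -sum_j h_j(x')^2,
    which pushes the softmax output toward the uniform distribution
    so that a confidence-based detector rejects the input."""
    return -np.sum(np.asarray(probs) ** 2)

def high_confidence_objective(probs, y):
    """Objective for x'' in the eps-ball: maximize the largest softmax
    score among the classes other than the true label y, seeking a
    confidently misclassified (hence accepted) example."""
    return np.max(np.delete(np.asarray(probs), y))

uniform = np.full(10, 0.1)   # least confident output
onehot = np.eye(10)[3]       # most confident output (class 3)
```

The rejection objective is largest for the uniform output (about -0.1, versus -1 for a one-hot output), matching the stated goal of the adversary.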
+
+## Additional Results
+
+Evaluation on clean test inputs. We evaluate our method and the baselines on clean test inputs. The results in Table 1 show that SATR has performance on clean test inputs comparable to that of the baselines.
+
+| Dataset | Method | Acc. with Rej. | Rej. Rate |
+| --- | --- | --- | --- |
+| MNIST | AT | 99.08 | 0.00 |
+| MNIST | AT+Rejection | 99.68 | 1.71 |
+| MNIST | CCAT | 99.88 | 1.51 |
+| MNIST | SATR (Ours) | 99.76 | 1.61 |
+| CIFAR-10 | AT | 88.07 | 0.00 |
+| CIFAR-10 | AT+Rejection | 90.73 | 5.42 |
+| CIFAR-10 | CCAT | 91.78 | 5.80 |
+| CIFAR-10 | SATR (Ours) | 91.51 | 4.86 |
+
+Table 1: Evaluation on clean test inputs. The accuracy with rejection is defined as the accuracy on the accepted test inputs and the rejection rate is defined as the fraction of test inputs that are rejected. All values are percentages.
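The two quantities reported in Table 1 can be computed as follows (a minimal sketch with toy inputs; the function name and the toy data are illustrative):

```python
import numpy as np

def clean_metrics(preds, labels, rejected):
    """Accuracy with rejection (accuracy on accepted inputs) and
    rejection rate (fraction of inputs rejected), as in Table 1."""
    preds, labels = np.asarray(preds), np.asarray(labels)
    rejected = np.asarray(rejected, dtype=bool)
    accepted = ~rejected
    acc = np.mean(preds[accepted] == labels[accepted])
    return acc, np.mean(rejected)

# Toy example: four inputs, one rejected, the accepted ones correct.
acc, rej_rate = clean_metrics([1, 2, 0, 4], [1, 2, 3, 4],
                              [False, False, True, False])
```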
+
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UiF3RTES7pU/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UiF3RTES7pU/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2af51cd8fb6f827677617edae1996dafca8db573
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/UiF3RTES7pU/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,273 @@
+§ REVISITING ADVERSARIAL ROBUSTNESS OF CLASSIFIERS WITH A REJECT OPTION
+
+§ ABSTRACT
+
+Adversarial training of deep neural networks (DNNs) is an important defense mechanism that allows a DNN to be robust to input perturbations that can otherwise result in prediction errors. Recently, there has been growing interest in learning a classifier with a reject (abstain) option that can be more robust to adversarial perturbations by choosing not to return a prediction on inputs where the classifier may be incorrect. A challenge in robustly learning a classifier with a reject option is that existing works do not have a mechanism to ensure that (very) small perturbations of the input are not rejected when they can in fact be accepted and correctly classified. We first propose a novel metric - robust error with rejection - that extends the standard definition of robust error to include the rejection of small perturbations. The proposed metric has natural connections to the standard robust error (without rejection), as well as the robust error with rejection proposed in a recent work. Motivated by this metric, we propose novel loss functions and a robust training method - stratified adversarial training with rejection (SATR) - for a classifier with a reject option, where the goal is to accept and correctly classify small input perturbations, while allowing the rejection of larger input perturbations that cannot be correctly classified. Experiments on well-known image classification DNNs using strong adaptive attack methods validate that SATR can significantly improve the robustness of a classifier with rejection compared to standard adversarial training (with confidence-based rejection) as well as a recently-proposed baseline.
+
+§ INTRODUCTION
+
+Training robust classifiers in the presence of adversarial inputs is an important problem from the standpoint of designing secure and reliable machine learning systems (Biggio and Roli 2018). Adversarial training (AT) and its variations are the most effective methods for learning robust DNN classifiers (Madry et al. 2018; Zhang et al. 2019). However, adversarial training may still not be very effective against adaptive adversarial attacks, or even standard attacks with configurations not observed during training (Athalye, Carlini, and Wagner 2018; Tramèr et al. 2020). Given this limitation, it is important to design classifiers that learn when to reject or abstain from predicting on hard-to-classify inputs. This can be especially crucial when it comes to real-world, safety-critical systems such as self-driving cars, where abstaining from prediction is often a much safer alternative to making an incorrect decision.
+
+We focus on the problem of learning a robust classifier with a reject option in the presence of adversarial inputs. The related problem of learning a (non-robust) classifier with a reject option has been studied extensively in the literature (Tax and Duin 2008; Guan et al. 2018; Cortes, DeSalvo, and Mohri 2016; Geifman and El-Yaniv 2019; Charoenphakdee et al. 2021). Recently, a number of works have also addressed the problem of adversarial robustness for a classifier equipped with a reject option (Laidlaw and Feizi 2019; Stutz, Hein, and Schiele 2020; Sheikholeslami, Lotfi, and Kolter 2021; Pang et al. 2021b; Tramèr 2021; Kato, Cui, and Fukuhara 2020). These approaches extend the standard definition of adversarial robustness (robust error) to the setting where the classifier can also reject inputs. In this setting, rejection of a perturbed input is considered to be a valid decision that does not count towards the robust error. However, rejection of a clean input still counts towards the robust error (Tramèr 2021).
+
+A key limitation of this view of the robust error (with rejection) is that it treats the rejection of very small input perturbations the same as the rejection of large ones. However, many practical applications (e.g., object detection) may require that small perturbations of an input be handled accurately by the classifier without resorting to rejection. In other words, there could be a higher cost for rejecting small input perturbations when in fact the classifier can accept and classify them correctly. Existing methods for training a robust classifier with rejection, such as confidence-calibrated adversarial training (CCAT) (Stutz, Hein, and Schiele 2020), achieve high robustness by simply rejecting a large fraction of the perturbed inputs (since rejecting perturbed inputs does not contribute to the robust error, no matter the perturbation size). As we validate experimentally, CCAT often has a high rejection rate on even small perturbations of clean inputs, which may not be acceptable in many practical applications.
+
+Motivated by these limitations in existing works, we revisit the problem of adversarial robustness of a classifier with reject option, and make the following contributions:
+
+ * We propose a novel metric - robust error with rejection - that can provide a fine-grained evaluation of the robustness of a classifier with reject option across a range of perturbation sizes.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+ * We provide a theoretical analysis of this problem, which motivates the need for learning a robust classifier with rejection that can accept and correctly classify small input perturbations.
+
+ * We propose novel loss functions and a robust training method SATR for jointly learning a classifier-detector system (i.e., a classifier with rejection) that are designed to achieve the goal of accepting and correctly classifying small input perturbations, while also selectively rejecting larger input perturbations.
+
+§ RELATED WORK
+
+Adversarial robustness of deep learning models has received significant attention in recent years. Many defenses have been proposed and most of them have been broken by strong adaptive attacks (Athalye, Carlini, and Wagner 2018; Tramèr et al. 2020). The most effective approach for improving adversarial robustness is adversarial training (Madry et al. 2018; Zhang et al. 2019). However, adversarial training still cannot achieve very good robustness on complex datasets, and often there is a large generalization gap in the robustness (Tsipras et al. 2019; Stutz, Hein, and Schiele 2019). For example, on CIFAR-10, current state-of-the-art adversarial training has only about ${50}\%$ robustness under the strongest adaptive attacks.
+
+One approach to break this robustness bottleneck is to allow rejection of adversarial examples instead of trying to correctly classify all of them. Recently, there has been great interest in exploring adversarial training of a classifier with a reject option (Laidlaw and Feizi 2019; Stutz, Hein, and Schiele 2020; Sheikholeslami, Lotfi, and Kolter 2021; Pang et al. 2021b; Tramèr 2021). Stutz, Hein, and Schiele proposed to adversarially train confidence-calibrated models so that they can generalize to unseen adversarial attacks. Sheikholeslami, Lotfi, and Kolter modified existing certified defense mechanisms to allow the classifier to either robustly classify or detect adversarial attacks, and showed that this can lead to better certified robustness, especially for large perturbation sizes. Laidlaw and Feizi proposed a method called Combined Abstention Robustness Learning (CARL) for jointly learning a classifier and the region of the input space on which it should abstain, and showed that training with CARL can result in a more accurate and robust classifier.
+
+§ PROBLEM SETUP
+
Let $\mathcal{X} \subseteq {\mathbb{R}}^{d}$ denote the space of inputs $\mathbf{x}$ and $\overline{\mathcal{Y}} \mathrel{\text{ := }} \{ 1,\cdots ,k\}$ denote the space of outputs $y$. Let $\mathcal{Y} \mathrel{\text{ := }} \overline{\mathcal{Y}} \cup \{ \bot \}$ be the extended output space, where $\bot$ denotes the abstain or rejection option. Let ${\Delta }_{k}$ denote the set of output probabilities over $\overline{\mathcal{Y}}$ (i.e., the simplex in $k$ dimensions). Let $d\left( {\mathbf{x},{\mathbf{x}}^{\prime }}\right)$ be a norm-induced distance on $\mathcal{X}$ (e.g., the ${\ell }_{p}$-distance for some $p \geq 1$), and let $\mathcal{N}\left( {\mathbf{x},r}\right) \mathrel{\text{ := }} \left\{ {{\mathbf{x}}^{\prime } \in \mathcal{X} : d\left( {{\mathbf{x}}^{\prime },\mathbf{x}}\right) \leq r}\right\}$ denote the neighborhood of $\mathbf{x}$ of radius $r$. Let $\land$ and $\vee$ denote the boolean AND and OR operations, respectively. Let $\mathbf{1}\{ c\}$ denote the binary indicator function, which takes the value 1 (0) when the condition $c$ is true (false). We denote vectors and matrices using boldface symbols.
+
+Given samples from a distribution $\mathcal{D}$ over $\mathcal{X} \times \overline{\mathcal{Y}}$ , our goal is to learn a classifier with rejection option, $f : \mathcal{X} \rightarrow \mathcal{Y}$ , that can correctly classify adversarial examples with small perturbations, and can either correctly classify or reject those with large perturbations. The standard robust error at adversarial budget $\epsilon > 0$ is defined as:
+
+$$
+{R}_{\epsilon }\left( f\right) \mathrel{\text{ := }} \underset{\left( {\mathbf{x},y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \neq y}\right\} }\right\rbrack ,
+$$
+
+which does not allow rejection. A few recent works (e.g. (Tramèr 2021)) have proposed a robust error with rejection at adversarial budget $\epsilon > 0$ as
+
+$$
+{R}_{\epsilon }^{\text{ rej }}\left( f\right) \mathrel{\text{ := }} \underset{\left( {\mathbf{x},y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\mathbf{1}\{ f\left( \mathbf{x}\right) \neq y\} }\right.
+$$
+
+$$
+\left. {\vee \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\{ f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} \} }\right\rbrack \text{ , }
+$$
+
+which allows the rejection of small input perturbations without incurring an error.
+
Neither of these metrics for robust error is well-suited to our needs. We therefore propose a new metric for evaluating a robust classifier with a reject option: the robust error with rejection at budgets ${\epsilon }_{0} \in \left\lbrack {0,\epsilon }\right\rbrack$ and $\epsilon \geq 0$,
+
+$$
+{R}_{{\epsilon }_{0},\epsilon }^{\text{ rej }}\left( f\right) \mathrel{\text{ := }} \underset{\left( {\mathbf{x},y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \neq y}\right\} }\right.
+$$
+
+$$
+\left. {\vee \mathop{\max }\limits_{{{\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\{ f\left( {\mathbf{x}}^{\prime \prime }\right) \notin \{ y, \bot \} \} }\right\rbrack \text{ . } \tag{1}
+$$
+
+The motivation for this metric is as follows. For small perturbations of a clean input within a neighborhood of radius ${\epsilon }_{0}$ , both an incorrect prediction and rejection are considered to be an error. For larger perturbations outside the ${\epsilon }_{0}$ -neighborhood, rejection is not considered to be an error, i.e., the classifier can either predict the correct class or reject larger perturbations.
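As a concrete illustration, the per-sample error inside the expectation of Eq. (1) can be brute-forced on a one-dimensional toy classifier with a rejection band. The classifier `f`, the grid search, and all numbers below are illustrative stand-ins, not part of the proposed method:

```python
import numpy as np

def robust_err_rej(f, x, y, eps0, eps, grid=201):
    """Brute-force the per-sample 0-1 error of Eq. (1) for a scalar input.

    f maps a scalar to a class label or the string 'reject'.
    Inside the eps0-ball, a wrong prediction OR a rejection is an error;
    inside the eps-ball, only a wrong *accepted* prediction is an error.
    """
    small = np.linspace(x - eps0, x + eps0, grid)
    large = np.linspace(x - eps, x + eps, grid)
    err_small = any(f(xp) != y for xp in small)              # rejection counts as error
    err_large = any(f(xp) not in (y, 'reject') for xp in large)
    return int(err_small or err_large)

# Toy classifier: boundary at 0 with a rejection band of half-width 0.5.
def f(x):
    if abs(x) <= 0.5:
        return 'reject'
    return 1 if x > 0 else 0

# Point at x = 2 (class 1): perturbations within eps0 = 1.0 stay accepted and
# correct, while larger ones (eps = 2.0) may be rejected but never misclassified.
print(robust_err_rej(f, 2.0, 1, eps0=1.0, eps=2.0))  # 0: no error
print(robust_err_rej(f, 2.0, 1, eps0=1.6, eps=2.0))  # 1: the eps0-ball hits the band
```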
+
+Proposition 1. The robust error with rejection can be equivalently defined as
+
+$$
+{R}_{{\epsilon }_{0},\epsilon }^{rej}\left( f\right) \mathrel{\text{ := }} \underset{\left( {\mathbf{x},y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) = \bot }\right\} }\right.
+$$
+
+$$
+\left. {\vee \mathop{\max }\limits_{{{\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\{ f\left( {\mathbf{x}}^{\prime \prime }\right) \notin \{ y, \bot \} \} }\right\rbrack \text{ . } \tag{2}
+$$
+
Proof. We first note that
+
+$$
+\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \neq y}\right\} = \mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) = \bot }\right\} \vee \mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\} .
+$$
+
+The maximum over the ${\epsilon }_{0}$ -neighborhood can be expressed as
+
+$$
+\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \neq y}\right\} = \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) = \bot }\right\}
+$$
+
+$$
+\vee \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\} .
+$$
+
+Finally, the second term in the RHS of the above expression can be combined with the second term inside the expectation of Eq. (1), i.e.,
+
+$\mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\} \vee \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\}$
+
+$= \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\} ,$
+
+which shows the equivalence of (1) and (2).
+
Our new metric also has natural connections with existing metrics in the literature. When ${\epsilon }_{0} = \epsilon$, our metric ${R}_{{\epsilon }_{0},\epsilon }^{\text{ rej }}\left( f\right)$ reduces to the standard robust error ${R}_{\epsilon }\left( f\right)$ (without rejection) at budget $\epsilon$ (Carlini and Wagner 2017). When ${\epsilon }_{0} = 0$, our metric reduces to the robust error with rejection at budget $\epsilon$, ${R}_{\epsilon }^{\text{ rej }}\left( f\right)$, proposed, e.g., in (Tramèr 2021). For this special case, rejection is considered to be an error only for clean inputs (i.e., no perturbation).
+
+§ THEORETICAL ANALYSIS
+
+Our goal is to correctly classify small perturbations of the input and allow rejection of large perturbations when the classifier is not confident. Two fundamental questions arise:
+
1. Why not allow rejection of both small and large perturbations? This is done in most existing studies on robust classification with rejection. However, many practical applications need to handle small perturbations, and rejecting them can have severe costs. The question is whether it is possible to correctly classify small perturbations without hurting the robustness, i.e., whether it is possible to achieve a small ${R}_{{\epsilon }_{0},\epsilon }^{\text{ rej }}$ .
+
+2. Why not try to correctly classify both small and large perturbations? This is done in traditional adversarial robustness, typically by adversarial training. The question is essentially about the benefit of allowing rejection.
+
+To answer these questions, we will show that under mild conditions, there exists a classifier $f$ with rejection with small ${R}_{{\epsilon }_{0},\epsilon }^{\text{ rej }}\left( f\right)$ . So it is possible to correctly classify small perturbations without rejecting them, answering the first question. Moreover, under the same conditions, all classifiers $g$ without rejection must have at least as large errors: ${R}_{{\epsilon }_{0},\epsilon }^{\text{ rej }}\left( g\right) = {R}_{\epsilon }\left( g\right) \geq {R}_{{\epsilon }_{0},\epsilon }^{\text{ rej }}\left( f\right)$ . In fact, the error of $g$ may be much larger than that of $f$ . This shows the benefit of allowing rejection, answering the second question.
+
+Theorem 1. Consider binary classification. Let $g\left( \mathbf{x}\right)$ be any decision boundary (i.e., any classifier without a rejection option). For any $0 \leq {\epsilon }_{0} \leq \epsilon$ , there exists a classifier $f$ with a rejection option such that
+
+$$
+{R}_{{\epsilon }_{0},\epsilon }^{rej}\left( f\right) \leq {R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right) . \tag{3}
+$$
+
+Moreover, the bound is tight: there exist simple data distributions and $g$ such that any $f$ must have ${R}_{{\epsilon }_{0},\epsilon }^{rej}\left( f\right) \geq$ ${R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right)$ .
+
+The proof for Theorem 1 can be found in the Appendix.
+
Intuitively, the theorem states that if the data allows a small robust error at adversarial budget $\left( {{\epsilon }_{0} + \epsilon }\right) /2$, then there exists a classifier with small robust error with rejection at budgets $\left( {{\epsilon }_{0},\epsilon }\right)$. For example, if the two classes can be separated with margin $\left( {{\epsilon }_{0} + \epsilon }\right) /2$, then there exists $f$ with 0 robust error with rejection, even considering perturbations as large as $\epsilon$, which can be significantly larger than $\left( {{\epsilon }_{0} + \epsilon }\right) /2$. So under mild conditions, it is possible to correctly classify small perturbations while rejecting large perturbations, answering our first question.
+
On the other hand, under the same conditions, if we do not allow rejection and consider any classifier $g$ without rejection, then the robust error of $g$ at the same adversarial budget satisfies ${R}_{{\epsilon }_{0},\epsilon }^{\text{ rej }}\left( g\right) = {R}_{\epsilon }\left( g\right) \geq {R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right) \geq {R}_{{\epsilon }_{0},\epsilon }^{\text{ rej }}\left( f\right)$ . In fact, there can be a big gap between ${R}_{\epsilon }\left( g\right)$ and ${R}_{\left( {{\epsilon }_{0} + \epsilon }\right) /2}\left( g\right)$, e.g., when a large fraction of inputs have distances in $\left( {\left( {{\epsilon }_{0} + \epsilon }\right) /2,\epsilon }\right)$ to the decision boundary of $g$ . In this case, allowing rejection can bring significant benefit, answering our second question.
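A toy calculation illustrates both the theorem and this gap. The one-dimensional distribution, the boundary classifier `g`, and the rejection band below are all hypothetical:

```python
import numpy as np

# 1-D toy: point masses at +/-1.2 with eps0 = 0.5, eps = 1.5, so that
# (eps0 + eps)/2 = 1.0 < 1.2 and g below has zero robust error at budget 1.0.
eps0, eps = 0.5, 1.5
data = [(1.2, 1), (-1.2, 0)]

def g(x):  # classifier without rejection: decision boundary at 0
    return 1 if x >= 0 else 0

def f(x):  # same boundary plus a rejection band |x| <= 0.5
    return 'reject' if abs(x) <= 0.5 else g(x)

def worst_case_err(clf, x, y, r, allow_reject, grid=301):
    """1 if some perturbation in the radius-r ball is misclassified
    (rejection also counts as an error unless allow_reject is True)."""
    ok = {y, 'reject'} if allow_reject else {y}
    return int(any(clf(xp) not in ok for xp in np.linspace(x - r, x + r, grid)))

# Standard robust error of g at budget eps: every point can be flipped.
err_g = np.mean([worst_case_err(g, x, y, eps, False) for x, y in data])
# Robust error with rejection of f at budgets (eps0, eps), per Eq. (1).
err_f = np.mean([max(worst_case_err(f, x, y, eps0, False),
                     worst_case_err(f, x, y, eps, True)) for x, y in data])
print(err_g, err_f)  # 1.0 0.0
```

Here $g$ without rejection has robust error 1.0 at budget $\epsilon$, while the same boundary with a rejection band achieves robust error with rejection 0.0, matching the discussion above.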
+
§ PROPOSED METHOD
+
+Consider a classifier without rejection $g\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) ,g : \mathcal{X} \mapsto \overline{\mathcal{Y}}$ realized by a DNN with parameters ${\mathbf{\theta }}_{c}$ . The output of the DNN is the predicted probability of each class $\mathbf{h}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) =$ $\left\lbrack {{h}_{1}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) ,\cdots ,{h}_{k}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) }\right\rbrack \in {\Delta }_{k}$ . We define the logits or the vector of un-normalized predictions as $\widetilde{\mathbf{h}}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) =$ $\left\lbrack {{\widetilde{h}}_{1}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) ,\cdots ,{\widetilde{h}}_{k}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) }\right\rbrack \in {\mathbb{R}}^{k}$ . The output of the DNN is obtained by applying the softmax function to the logits. The class corresponding to the maximum predicted probability is returned by $g$ , i.e., $g\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) \mathrel{\text{ := }} {\operatorname{argmax}}_{y \in \overline{\mathcal{Y}}}{h}_{y}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right)$ . The corresponding maximum probability is referred to as the prediction confidence ${h}_{\max }\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) \mathrel{\text{ := }} \mathop{\max }\limits_{{y \in \overline{\mathcal{Y}}}}{h}_{y}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right)$ . The prediction confidence has been used in prior works for determining when the classifier should abstain from prediction (Wu et al. 2018; Stutz, Hein, and Schiele 2020). In this work, we focus on the robust training of a classifier with a confidence-based reject option. Unlike many prior works, the confidence is not simply used at test time for rejection, but is included in our robust training objective.
+
+
+Figure 1: Overview of the proposed classifier with rejection.
+
+We define a general classifier with a confidence-based reject option $f : \mathcal{X} \mapsto \mathcal{Y}$ as follows
+
+$$
+f\left( {\mathbf{x};\mathbf{\theta }}\right) \mathrel{\text{ := }} \left\{ \begin{array}{ll} g\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) & \text{ if }{h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) \leq \eta , \\ \bot & \text{ otherwise }, \end{array}\right. \tag{4}
+$$
+
where ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) \in \left\lbrack {0,1}\right\rbrack$ is the predicted probability of rejection and $\eta \in \left\lbrack {0,1}\right\rbrack$ is a suitably-chosen threshold. We can view ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right)$ as a detector that either accepts or rejects an input based on the classifier's prediction, as shown in Fig. 1. The detector is defined as a general parametric function of the classifier's un-normalized prediction ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) \mathrel{\text{ := }} u\left( {\widetilde{\mathbf{h}}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) ;{\mathbf{\theta }}_{d}}\right) ,u : {\mathbb{R}}^{k} \mapsto \left\lbrack {0,1}\right\rbrack$, with detector-specific parameters ${\mathbf{\theta }}_{d}$.${}^{1}$ Here, we denote the combined parameter vector of the classifier and detector by ${\mathbf{\theta }}^{T} \mathrel{\text{ := }} \left\lbrack \begin{array}{ll} {\mathbf{\theta }}_{c}^{T} & {\mathbf{\theta }}_{d}^{T} \end{array}\right\rbrack$.
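As a minimal sketch of Eq. (4), the following shows the accept/reject wrapper with a hypothetical affine-plus-sigmoid detector standing in for $u$ (the paper models $u$ with a small fully-connected network; the weights and threshold here are toy values):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# Hypothetical detector u: an affine score of the logits squashed to [0, 1].
def h_reject(logits, theta_d):
    w, b = theta_d
    return sigmoid(w @ logits + b)

def f(logits, theta_d, eta=0.5):
    """Eq. (4): return the argmax class if h_reject <= eta, else abstain."""
    if h_reject(logits, theta_d) <= eta:
        return int(np.argmax(softmax(logits)))
    return 'reject'

theta_d = (np.array([-1.0, -1.0, -1.0]), 2.0)  # toy weights: small logits -> reject
print(f(np.array([4.0, 0.0, 0.0]), theta_d))   # confident logits -> class 0
print(f(np.array([0.1, 0.0, 0.0]), theta_d))   # near-uniform logits -> 'reject'
```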
+
+Probability Model. The class-posterior probability model of the classifier with reject option $f$ is defined as follows:
+
+$$
+{P}_{\mathbf{\theta }}\left( {y \mid \mathbf{x}}\right) = \left( {1 - {h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) }\right) {h}_{y}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) \mathbf{1}\{ y \neq \bot \}
+$$
+
+$$
++ {h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) \mathbf{1}\{ y = \bot \} \text{ . } \tag{5}
+$$
+
An input $\mathbf{x}$ is accepted with probability $1 - {h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right)$ and predicted into one of the classes $y \in \overline{\mathcal{Y}}$ with probability ${h}_{y}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right)$; otherwise, $\mathbf{x}$ is rejected with probability ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right)$ and the class $\bot$ is returned with probability 1.
+
+§ LOSS FUNCTIONS
+
Consider the robust error with rejection defined in Eq. (1). We would like to design smooth surrogate loss functions to replace the $0 - 1$ error functions in order to minimize the robust error with rejection.
+
+
+Figure 2: Nested perturbation balls (relative to the ${\ell }_{2}$ -norm) around a clean input $\mathbf{x}$ ; used to formalize our robust classification with rejection setting.
+
Accept & Classify Correctly. First, consider the $0 - 1$ error corresponding to small perturbations in the ${\epsilon }_{0}$-neighborhood, $\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \neq y}\right\}$. We would like the corresponding surrogate loss to take a small value when the predicted probability for the true class $y$ is high and the predicted probability of rejection is low. The predicted probability of $f$ can be viewed as a $\left( {k + 1}\right)$-dimensional probability vector over the $k$ classes and the reject class: $\left\lbrack {\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{1}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) ,\cdots ,\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{k}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) ,{h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right\rbrack$. Note that the final term corresponds to the probability of rejection, and the $k + 1$ probabilities sum to 1. For an input $\left( {{\mathbf{x}}^{\prime },y}\right)$ to be accepted and predicted into class $y$, the target $\left( {k + 1}\right)$-dimensional one-hot probability vector has a 1 corresponding to class $y$ and zeros elsewhere. We propose to use the cross-entropy loss between this target one-hot probability vector and the predicted probability of $f$, given by
+
+$$
+{\ell }_{\mathrm{{CE}}}\left( {{\mathbf{x}}^{\prime },y;\mathbf{\theta }}\right) = - \log \left\lbrack {\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{y}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) }\right\rbrack . \tag{6}
+$$
+
+We observe that the above loss function approaches 0 when the probability of rejection is close to 0 and the predicted probability of class $y$ is close to 1 ; the loss function takes a large value in all other cases. We also apply this cross-entropy loss for clean inputs since we expect the classifier to accept and correctly classify them.
+
Accept & Classify Correctly or Reject. Consider the $0 - 1$ error corresponding to perturbations in the $\epsilon$-neighborhood, $\mathbf{1}\left\{ {f\left( {\mathbf{x}}^{\prime }\right) \notin \{ y, \bot \} }\right\}$. We would like the corresponding surrogate loss to take a small value when the predicted probability for the true class is high, or when the probability of rejection is high. To motivate the cross-entropy loss for this case, consider $k$ meta-classes defined as follows: $\{ 1,\cdots ,y \vee \bot ,y + 1,\cdots ,k\}$, i.e., the reject option is merged only with the true class $y$. The predicted probability of $f$ over these meta-classes is given by $\left\lbrack {\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{1}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) ,\cdots ,\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{y}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) + {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) ,\cdots ,\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{k}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) }\right\rbrack$.${}^{2}$ For an input $\left( {{\mathbf{x}}^{\prime },y}\right)$ to be either rejected, or accepted and predicted into class $y$, the target $k$-dimensional one-hot probability vector has a 1 corresponding to the meta-class $y \vee \bot$, and zeros elsewhere. We propose to use the cross-entropy loss between this target one-hot probability vector and the predicted probability of $f$, given by
+
+$$
+{\ell }_{\mathrm{{CE}}}^{\mathrm{{rej}}}\left( {{\mathbf{x}}^{\prime },y;\mathbf{\theta }}\right) = - \log \left\lbrack {\left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) {h}_{y}\left( {{\mathbf{x}}^{\prime };{\mathbf{\theta }}_{c}}\right) }\right.
+$$
+
+$$
+\left. {+{h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right\rbrack \text{ . } \tag{7}
+$$
+
This loss function approaches 0 when i) the probability of rejection is close to 1, or ii) the probability of rejection is close to 0 and the predicted probability of class $y$ is close to 1. Note that both loss functions have the range $\lbrack 0,\infty )$.
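The two surrogate losses and their limiting behavior can be checked numerically; the probability values below are illustrative:

```python
import numpy as np

def ell_ce(h_rej, h_y):
    """Eq. (6): small only when rejection prob. ~ 0 AND true-class prob. ~ 1."""
    return -np.log((1.0 - h_rej) * h_y)

def ell_ce_rej(h_rej, h_y):
    """Eq. (7): small when rejection prob. ~ 1, or rejection ~ 0 and h_y ~ 1."""
    return -np.log((1.0 - h_rej) * h_y + h_rej)

# Limiting behavior described in the text:
print(round(ell_ce(0.01, 0.99), 3))      # ~0: accepted and correctly classified
print(round(ell_ce(0.99, 0.99), 3))      # large: Eq. (6) penalizes rejection
print(round(ell_ce_rej(0.99, 0.01), 3))  # ~0: Eq. (7) does not penalize rejection
```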
+
+§ ROBUST TRAINING OBJECTIVE
+
Given clean labeled samples $\left( {\mathbf{x},y}\right)$ from a data distribution $\mathcal{D}$, a perturbation budget for robust classification $\epsilon > 0$, and a smaller perturbation budget ${\epsilon }_{0} \in \left\lbrack {0,\epsilon }\right\rbrack$, we propose the following training objective for learning a robust classifier with a reject option:
+
+$$
+\mathcal{L}\left( \mathbf{\theta }\right) = \underset{\left( {\mathbf{x},y}\right) \sim \mathcal{D}}{\mathbb{E}}\left\lbrack {{\ell }_{\mathrm{{CE}}}\left( {\mathbf{x},y;\mathbf{\theta }}\right) + \beta \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }}{\ell }_{\mathrm{{CE}}}\left( {{\mathbf{x}}^{\prime },y;\mathbf{\theta }}\right) }\right.
+$$
+
+$$
+\left. {+\gamma \mathop{\max }\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}{\ell }_{\mathrm{{CE}}}^{\mathrm{{rej}}}\left( {{\mathbf{x}}^{\prime },y;\mathbf{\theta }}\right) }\right\rbrack \text{ . } \tag{8}
+$$
+
The first term corresponds to the standard cross-entropy loss on clean inputs from the data distribution. The second term corresponds to the robust loss for small perturbations in the ${\epsilon }_{0}$-neighborhood that we would like the classifier to accept and correctly classify. The third term corresponds to the robust loss for large perturbations in the $\epsilon$-neighborhood that we would like the classifier to either reject, or accept and correctly classify. The classifier parameters ${\mathbf{\theta }}_{c}$ and the detector parameters ${\mathbf{\theta }}_{d}$ are jointly learned by minimizing $\mathcal{L}\left( \mathbf{\theta }\right)$. The hyper-parameters $\beta \geq 0$ and $\gamma \geq 0$ control the trade-off between the natural error and robust error terms of the classifier. We use the standard PGD attack (Madry et al. 2018) to solve the inner maximization in our training objective.
+
+Comments. Suppose we choose the detector to always accept inputs, i.e., ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) = 0,\forall \mathbf{x}$ , and fix ${\epsilon }_{0} = \epsilon ,\beta = 1$ , $\gamma = 0$ , then the training objective specializes to standard adversarial training. The proposed training objective (8) differs from adversarial training by allowing large perturbations of an input to be rejected, when the classifier is likely to predict them incorrectly. As we show experimentally, the proposed method of robust training with rejection typically has higher robustness on unseen adversarial attacks that have a larger perturbation budget $\epsilon$ than that used in training, whereas the robustness of standard adversarial training drops significantly on those unseen adversarial attacks.
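A minimal numerical sketch of objective (8), with the inner maximization replaced by random search in the $\ell_\infty$ ball and a toy one-dimensional model standing in for the classifier-detector (the paper solves the inner maximization with the PGD attack; all models and numbers here are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def inner_max(loss, x, y, radius, trials=200):
    # Stand-in for the inner maximization of objective (8): random search in
    # the l_inf ball. The paper uses the PGD attack here.
    cands = np.concatenate([np.atleast_1d(x),
                            rng.uniform(x - radius, x + radius, trials)])
    return max(loss(xp, y) for xp in cands)

def satr_objective(loss_ce, loss_ce_rej, batch, eps0, eps, beta=1.0, gamma=1.0):
    """Objective (8): clean CE + beta * worst-case CE over the eps0-ball
    + gamma * worst-case CE-with-rejection over the eps-ball."""
    total = 0.0
    for x, y in batch:
        total += (loss_ce(x, y)
                  + beta * inner_max(loss_ce, x, y, eps0)
                  + gamma * inner_max(loss_ce_rej, x, y, eps))
    return total / len(batch)

# Toy 1-D model: P(y=1|x) = sigmoid(3x); rejection probability fixed at 0.1.
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
def _h_y(x, y): return sigmoid(3 * x) if y == 1 else 1 - sigmoid(3 * x)
def loss_ce(x, y): return -np.log(0.9 * _h_y(x, y))
def loss_ce_rej(x, y): return -np.log(0.9 * _h_y(x, y) + 0.1)

val = satr_objective(loss_ce, loss_ce_rej, [(1.0, 1), (-1.0, 0)],
                     eps0=0.1, eps=0.3)
```

Note how the third term uses the $\epsilon$-ball but the more forgiving loss of Eq. (7), so large perturbations that are rejected contribute almost nothing to the objective.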
+
${}^{1}$ We discuss specific choices for the function $u$ in the sequel.

${}^{2}$ Note that this is a valid probability distribution over the meta-classes that sums to 1.
+
+§ CHOICE OF THE DETECTOR
+
+Recall that we defined the detector as a general parametric function of the classifier's un-normalized prediction, that outputs the probability of rejection ${h}_{ \bot }\left( {\mathbf{x};\mathbf{\theta }}\right) =$ $u\left( {\widetilde{\mathbf{h}}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right) ;{\mathbf{\theta }}_{d}}\right)$ . We explored a few approaches for defining $u\left( {\cdot ;{\mathbf{\theta }}_{d}}\right)$ based on smooth approximations of the prediction confidence $\mathop{\max }\limits_{{y \in \overline{\mathcal{Y}}}}{h}_{y}\left( {\mathbf{x};{\mathbf{\theta }}_{c}}\right)$ . For instance, we used a temperature-scaled log-sum-exponential approximation to the max function, followed by an affine transformation and the Sigmoid function (in order to get a probabilistic output). We also explored a multilayer fully-connected neural network to model the detector, which takes the prediction logits as its input and predicts the probability of rejection. We found the neural network-based model of the detector to have consistently better performance compared to the simple confidence-based approaches. Therefore, we adopt this model of the detector in our experiments.
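A sketch of the first (confidence-based) choice of $u$ described above, with illustrative constants for the temperature and the affine map (not tuned values from the paper):

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def lse_confidence_detector(logits, T=0.1, a=-5.0, b=4.0):
    """Temperature-scaled log-sum-exp surrogate for max_y h_y(x), followed by
    an affine map and a sigmoid to produce a rejection probability in [0, 1]."""
    p = softmax(logits)
    smooth_max = T * np.log(np.sum(np.exp(p / T)))      # soft upper bound on max(p)
    return 1.0 / (1.0 + np.exp(-(a * smooth_max + b)))  # high confidence -> low h_reject

confident = lse_confidence_detector(np.array([8.0, 0.0, 0.0]))
uncertain = lse_confidence_detector(np.array([0.1, 0.0, 0.0]))
print(confident < uncertain)  # True: low-confidence inputs get a higher rejection prob.
```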
+
+§ DESIGN OF ADAPTIVE ATTACKS
+
We design strong adaptive attacks to evaluate the robustness with rejection of our method. To compute robustness with rejection at budgets ${\epsilon }_{0}$ and $\epsilon$, we need to generate two adversarial examples ${\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right)$ and ${\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right)$ for each clean input $\left( {\mathbf{x},y}\right)$. We generate the adversarial example ${\mathbf{x}}^{\prime }$ within the ${\epsilon }_{0}$-ball $\mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right)$ using the following objective:
+
+$$
+{\mathbf{x}}^{\prime } = \mathop{\operatorname{argmax}}\limits_{{{\mathbf{x}}^{\prime } \in \mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right) }} - \log \left( {1 - {h}_{ \bot }\left( {{\mathbf{x}}^{\prime };\mathbf{\theta }}\right) }\right) .
+$$
+
The goal of the adversary is to make the detector reject the adversarial input by pushing the probability of rejection close to 1.${}^{3}$ We generate the adversarial example ${\mathbf{x}}^{\prime \prime }$ within the larger $\epsilon$-ball $\mathcal{N}\left( {\mathbf{x},\epsilon }\right)$ via the following objective:
+
+$$
+{\mathbf{x}}^{\prime \prime } = \mathop{\operatorname{argmax}}\limits_{{{\mathbf{x}}^{\prime \prime } \in \mathcal{N}\left( {\mathbf{x},\epsilon }\right) }}{\ell }_{\mathrm{{CE}}}^{\mathrm{{rej}}}\left( {{\mathbf{x}}^{\prime \prime },y;\mathbf{\theta }}\right) .
+$$
+
+By solving this objective, the adversary attempts to push both the probability of rejection ${h}_{ \bot }\left( {{\mathbf{x}}^{\prime \prime };\mathbf{\theta }}\right)$ and the predicted probability of the true class ${h}_{y}\left( {{\mathbf{x}}^{\prime \prime };{\mathbf{\theta }}_{c}}\right)$ close to 0 . Thus, the goal of the adversary is to make the classifier-detector accept and incorrectly classify the adversarial input.
+
+We use the Projected Gradient Descent (PGD) method with Backtracking proposed by (Stutz, Hein, and Schiele 2020) to solve the attack objectives. The hyperparameters for PGD with backtracking are specified in the experiment section. Adaptive attacks for evaluating the baseline methods are discussed in the Appendix.
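The two attack objectives can be sketched as follows, using plain $\ell_\infty$ PGD with a finite-difference gradient and a hypothetical smooth classifier-detector pair (the paper uses PGD with backtracking on analytic gradients; `h_rej`, `h_y`, and all constants below are illustrative):

```python
import numpy as np

def pgd_linf(loss, x0, radius, steps=60, lr=0.05):
    """Maximize loss over the l_inf ball of the given radius around x0
    via sign-gradient ascent with a finite-difference gradient."""
    x, h = x0.copy(), 1e-4
    for _ in range(steps):
        g = np.array([(loss(x + h * e) - loss(x - h * e)) / (2 * h)
                      for e in np.eye(x.size)])
        x = np.clip(x + lr * np.sign(g), x0 - radius, x0 + radius)
    return x

# Hypothetical smooth classifier-detector on a 2-D input: the rejection
# probability depends on x[0], the true-class probability on x[1].
sigmoid = lambda t: 1.0 / (1.0 + np.exp(-t))
h_rej = lambda x: sigmoid(-3.0 * x[0])
h_y = lambda x: sigmoid(4.0 * x[1])

# eps0-ball objective: drive the rejection probability toward 1.
force_reject = lambda x: -np.log(1.0 - h_rej(x))
# eps-ball objective (Eq. (7) as an attack loss): drive both h_rej and h_y to 0.
force_error = lambda x: -np.log((1.0 - h_rej(x)) * h_y(x) + h_rej(x))

x0 = np.array([1.0, 1.0])
x_small = pgd_linf(force_reject, x0, radius=0.8)  # rejection probability rises
x_large = pgd_linf(force_error, x0, radius=0.8)   # accepted yet misclassified
```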
+
+§ EXPERIMENTS
+
+In this section, we perform experiments to evaluate the proposed method (SATR) and compare it to the baseline methods. Our main findings are summarized as follows:
+
1) SATR achieves higher robustness with rejection compared to adversarial training (with and without confidence-based rejection) and CCAT (Stutz, Hein, and Schiele 2020).

2) On small perturbations, SATR has a much lower rejection rate than CCAT, which often rejects a large fraction of the perturbed inputs.

3) Our method outperforms both CCAT and adversarial training under unseen attacks.
+
+We next provide details on the experimental setup, datasets and DNN architectures, baseline methods, and the performance metric.
+
+§ SETUP
+
+We describe the important experimental settings in this section, and provide additional details about our method and the baselines in the appendix.
+
+Datasets. We perform experiments on the MNIST (LeCun 1998) and CIFAR-10 (Krizhevsky, Hinton et al. 2009) image datasets. MNIST contains 50,000 training images and 10,000 test images from 10 classes corresponding to handwritten digits. CIFAR-10 contains 50,000 training images and 10,000 test images from 10 classes corresponding to object categories. Following the setup in (Stutz, Hein, and Schiele 2020), we compute the accuracy of the models on the first 9,000 images of the test set and compute the robustness of the models on the first 1,000 images of the test set. We use the last 1,000 images of the test set as a validation dataset for selecting the rejection threshold of the methods.
+
Baseline Methods. We compare the performance of SATR with the following three baselines: (1) AT: standard adversarial training without rejection (i.e., accepting every input) (Madry et al. 2018); (2) AT + Rejection: adversarial training with rejection based on the prediction confidence; (3) CCAT: confidence-calibrated adversarial training (Stutz, Hein, and Schiele 2020).
+
DNN Architectures. On MNIST, we use the LeNet architecture (LeCun et al. 1989) for the classifier ${\mathbf{\theta }}_{c}$, and a three-layer fully-connected neural network of width 256 with ReLU activations for the detector ${\mathbf{\theta }}_{d}$. On CIFAR-10, we use the ResNet-20 architecture (He et al. 2016) for the classifier ${\mathbf{\theta }}_{c}$, and a seven-layer fully-connected neural network of width 1024 with ReLU activations and a batch normalization layer for the detector ${\mathbf{\theta }}_{d}$.
+
Training Details. On both MNIST and CIFAR-10, we train the model for 100 epochs with a batch size of 128. We use standard stochastic gradient descent (SGD) starting with a learning rate of 0.1. The learning rate is multiplied by 0.95 after each epoch. We use a momentum of 0.9 and do not use weight decay for SGD. We split each training batch into two sub-batches of equal size and use the first sub-batch for the first two loss terms in our training objective (objective (8)) and the second sub-batch for the third loss term. We set the hyper-parameters $\beta = 1$ and $\gamma = 1$ in our training objective without tuning. On MNIST, we train the model from scratch, while on CIFAR-10 we use an adversarially-trained model to initialize the classifier parameters ${\mathbf{\theta }}_{c}$. On CIFAR-10, we also augment the training images using random crops and random horizontal flips. We use the standard PGD attack (Madry et al. 2018) to generate adversarial training examples. On MNIST, we use the PGD attack with a step size of 0.01, 40 steps, and a random start. On CIFAR-10, we use the PGD attack with a step size of $2/{255}$, 10 steps, and a random start. In the training objective, by default, we set $\epsilon = {0.3}$ and ${\epsilon }_{0} = {0.1}$ for MNIST, and $\epsilon = 8/{255}$ and ${\epsilon }_{0} = 2/{255}$ for CIFAR-10.
+
+${}^{3}$ We appeal to the definition of robust error with rejection in Eq. (2), where rejecting a perturbed input in the ${\epsilon }_{0}$ -neighborhood is considered an error.
+
Performance Metric. We use the robustness with rejection at budgets ${\epsilon }_{0}$ and $\epsilon$, defined as $1 - {R}_{{\epsilon }_{0},\epsilon }^{\mathrm{{rej}}}\left( f\right)$, as the evaluation metric. For a fixed $\epsilon$, we vary ${\epsilon }_{0}$ from 0 to $\epsilon$ over a given number of values. Note that the ${\epsilon }_{0}$ in this performance metric is different from the fixed ${\epsilon }_{0}$ that is used in the training objective of the proposed method. For convenience, we define the factor $\alpha \mathrel{\text{ := }} {\epsilon }_{0}/\epsilon \in \left\lbrack {0,1}\right\rbrack$, and calculate the robustness with rejection metric for each of the $\alpha$ values from the set $\{ 0,{0.05},{0.1},{0.2},{0.3},{0.4},{0.5},{1.0}\}$. Note that each $\alpha$ value corresponds to an ${\epsilon }_0$ value equal to ${\alpha \epsilon }$. We plot a robustness curve for each method, where the $\alpha$ value is plotted on the x-axis and the corresponding robustness with rejection metric is plotted on the y-axis. A larger value of robustness corresponds to better performance. Referring to Fig. 3, we note that at the right end of this curve $\left( {{\epsilon }_{0} = \epsilon }\right)$, the robustness $1 - {R}_{\epsilon ,\epsilon }^{\text{ rej }}\left( f\right)$ corresponds to the standard definition of adversarial robustness without rejection (Madry et al. 2018). At the left end of this curve $\left( {{\epsilon }_{0} = 0}\right)$, the robustness $1 - {R}_{0,\epsilon }^{\text{ rej }}\left( f\right)$ corresponds to the robustness with rejection defined by (Tramèr 2021). In practice, we are mainly interested in the robustness for small values of $\alpha$, where the radii of the perturbations that must be accepted are small.
+
+Evaluation. We use the same approach to set the rejection threshold for all the methods. Specifically, on MNIST, we set the threshold such that only 1% of clean validation inputs are rejected. On CIFAR-10, we set the threshold such that only 5% of clean validation inputs are rejected. We consider ${\ell }_{\infty }$-norm bounded attacks and generate adversarial examples to compute the robustness with rejection metric via the PGD attack with backtracking (Stutz, Hein, and Schiele 2020). We use a base learning rate of 0.05, a momentum factor of 0.9, a learning rate factor of 1.25, 200 iterations, and 10 random restarts for generating adversarial examples ${\mathbf{x}}^{\prime }$ within the ${\epsilon }_{0}$-ball $\mathcal{N}\left( {\mathbf{x},{\epsilon }_{0}}\right)$. For generating adversarial examples ${\mathbf{x}}^{\prime \prime }$ within the larger $\epsilon$-ball $\mathcal{N}\left( {\mathbf{x},\epsilon }\right)$, we use a base learning rate of 0.001, a momentum factor of 0.9, a learning rate factor of 1.1, 1000 iterations, and 10 random restarts.
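
The threshold selection above can be sketched as follows. This is a minimal sketch (function name ours), assuming an input is rejected when its confidence-style score falls below the threshold:

```python
# Sketch: choose the rejection threshold so that a target fraction of clean
# validation inputs (1% on MNIST, 5% on CIFAR-10) falls below it.
def rejection_threshold(clean_scores, reject_frac):
    """Threshold t such that roughly reject_frac of clean scores are < t."""
    scores = sorted(clean_scores)
    k = int(reject_frac * len(scores))
    return scores[k]
```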
+
+## Results
+
+We discuss the performance of the proposed method and the baselines on the CIFAR-10 and MNIST datasets.
+
+Evaluation under seen attacks. Figure 3 compares the robustness with rejection of the methods as a function of $\alpha$ for the scenario where the adaptive attacks used for evaluation use the same $\epsilon$ budget that was used for training the methods. For the proposed SATR, the ${\epsilon }_{0}$ value used for training is indicated (with the corresponding $\alpha$ value) using the vertical dashed line. We observe that CCAT has comparable robustness to AT only for $\alpha = 0$, but its robustness quickly drops for larger $\alpha$. This suggests that CCAT rejects a large fraction of small input perturbations based on its confidence thresholding method. AT with confidence-based rejection has higher robustness compared to standard AT on both datasets, which suggests that including even a simple rejection mechanism can help improve the robustness. On CIFAR-10, the proposed SATR has significantly higher robustness with rejection for small to moderate $\alpha$, and its robustness drops only for large $\alpha$ values (which are not likely to be of practical interest). On MNIST, the robustness of SATR is slightly better than or comparable to AT + Rejection for small to moderate $\alpha$. This suggests that SATR is successful at accepting and correctly classifying a majority of adversarial attacks of small size.
+
+Evaluation under unseen attacks. Figure 4 compares the robustness with rejection of the methods as a function of $\alpha$ for the unseen-adaptive-attack scenario, wherein a larger $\epsilon$ (compared to training) is used for evaluation. AT, both with and without rejection, performs poorly in this setting, suggesting that it does not generalize well to unseen (stronger) attacks. CCAT has relatively high robustness for $\alpha = 0$ ; however, its robustness sharply drops for larger $\alpha$ values. The significantly higher robustness of SATR for a range of small to moderate $\alpha$ values suggests that the proposed training method learns to reject larger input perturbations, even if the attack is unseen.
+
+Ablation study. We performed an ablation experiment to study the effect of the hyper-parameter ${\epsilon }_{0}$ used by SATR during training. The result of this experiment is shown in Figure 5 for a few different ${\epsilon }_{0}$ values. Clearly, the choice ${\epsilon }_{0} = 0$ leads to poor robustness with rejection, suggesting that a small non-zero value of ${\epsilon }_{0}$ is required for training to ensure that SATR does not reject too many small adversarial perturbations. We also observe that a larger ${\epsilon }_{0}$ during training typically leads to a higher robustness for large $\alpha$ values. However, this may come at the expense of lower robustness for small $\alpha$ , as observed on CIFAR-10 for ${\epsilon }_{0} = 4/{255}$ .
+
+## Conclusion
+
+We explored the problem of learning an adversarially-robust classifier with a reject option. We conducted a careful theoretical analysis of the problem and motivated the need for not rejecting small perturbations of the input. We proposed a novel metric for evaluating the robustness of a classifier with a reject option that subsumes prior definitions of robustness, and provides a more fine-grained analysis of the radius (size) of perturbations rejected by a given method. We proposed a novel training objective for learning a robust classifier with rejection that encourages small input perturbations to be accepted and classified correctly, while allowing larger input perturbations to be rejected when the classifier's prediction may be incorrect. Experimental evaluations using strong adaptive attacks demonstrate significant improvement in the adversarial robustness with rejection of the proposed method, including the setting where unseen attacks with a larger $\epsilon$ budget are present during evaluation.
+
+
+Figure 3: Results on MNIST and CIFAR-10 datasets under seen adaptive attacks. On MNIST we set the perturbation budget $\epsilon = {0.3}$ , while on CIFAR-10 we set the perturbation budget $\epsilon = 8/{255}$ . The vertical dashed line corresponds to the $\alpha$ used for training SATR.
+
+
+Figure 4: Results on MNIST and CIFAR-10 datasets under unseen adaptive attacks. On MNIST, we set the perturbation budget $\epsilon = {0.4}$ , while on CIFAR-10 we set the perturbation budget $\epsilon = {10}/{255}$ . The vertical dashed line corresponds to the $\alpha$ used for training SATR.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/WMIoz7O_DPz/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/WMIoz7O_DPz/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..8f7a37279c819961706857f81231488af03f86b4
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/WMIoz7O_DPz/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,443 @@
+# Robust Out-of-distribution Detection for Neural Networks
+
+## Abstract
+
+Detecting out-of-distribution (OOD) inputs is critical for safely deploying deep learning models in the real world. Existing approaches for detecting OOD examples work well when evaluated on benign in-distribution and OOD samples. However, in this paper, we show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs with minimal adversarial perturbations that do not change their semantics. Formally, we extensively study the problem of Robust Out-of-Distribution Detection on common OOD detection approaches, and show that state-of-the-art OOD detectors can be easily fooled by adding small perturbations to the in-distribution and OOD inputs. To counteract these threats, we propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples. Our method can be flexibly combined with existing methods and renders them robust. On common benchmark datasets, we show that ALOE substantially improves the robustness of state-of-the-art OOD detection, with 58.4% AUROC improvement on CIFAR-10 and 46.59% improvement on CIFAR-100.
+
+## Introduction
+
+Out-of-distribution (OOD) detection has become an indispensable part of building reliable open-world machine learning models (Bendale and Boult 2015). An OOD detector is used to determine whether an input is from the training data distribution (in-distribution examples), or from a different distribution (OOD examples). Previous OOD detection methods are usually evaluated on benign in-distribution and OOD inputs (Hsu et al. 2020; Huang and Li 2021; Lee et al. 2018; Liang, Li, and Srikant 2017; Liu et al. 2020). Recently, some works have shown the existence of adversarial OOD examples, which are generated by slightly perturbing the clean OOD inputs to make the OOD detectors fail to detect them as OOD examples, and have proposed some robust OOD detection methods to address the issue of adversarial OOD examples (Sehwag et al. 2019; Hein, Andriushchenko, and Bitterwolf 2019; Meinke and Hein 2019; Bitterwolf, Meinke, and Hein 2020; Chen et al. 2021).
+
+In this paper, we also consider the problem of robust OOD detection. Different from previous works, we not only consider adversarial OOD examples, but also consider adversarial in-distribution examples, which are generated by slightly perturbing the clean in-distribution inputs and cause the OOD detectors to falsely reject them. We argue that both adversarial in-distribution examples and adversarial OOD examples can cause severe consequences if the OOD detectors fail to detect them, as illustrated in Figure 1.
+
+Formally, we study the problem of robust out-of-distribution detection and reveal the lack of robustness of common OOD detection methods. We show that existing OOD detection algorithms can be easily attacked to produce mistaken OOD predictions under small adversarial perturbations (Papernot et al. 2016; Goodfellow, Shlens, and Szegedy 2014; Biggio et al. 2013; Szegedy et al. 2013). Specifically, we construct adversarial in-distribution examples by adding small perturbations to the in-distribution inputs such that the OOD detectors will falsely reject them; whereas adversarial OOD examples are generated by adding small perturbations to the OOD inputs such that the OOD detectors will fail to reject them. Different from the common notion, the adversarial examples in our work are meant to fool the OOD detectors $G\left( x\right)$, rather than the original image classification model $f\left( x\right)$. It is also worth noting that the perturbation is sufficiently small so that the visual semantics as well as the true distributional membership remain the same. Yet worryingly, state-of-the-art OOD detectors can fail to distinguish between adversarial in-distribution examples and adversarial OOD examples. Although some works try to make OOD detection robust to adversarial OOD examples, scant attention has been paid to making the OOD detectors robust against both adversarial in-distribution examples and adversarial OOD examples. To the best of our knowledge, we are the first to consider the issue of adversarial in-distribution examples.
+
+To address the challenge, we propose an effective method, ALOE, that improves the robust OOD detection performance. Specifically, we perform robust training by exposing the model to two types of perturbed adversarial examples. For in-distribution training data, we create a perturbed example by searching its $\epsilon$-ball for the perturbation that maximizes the negative log likelihood. In addition, we also utilize an auxiliary unlabeled dataset as in (Hendrycks, Mazeika, and Dietterich 2018), and create a corresponding perturbed outlier example by searching its $\epsilon$-ball for the perturbation that maximizes the KL-divergence between the model output and a uniform distribution. The overall training objective of ALOE can be viewed as an adversarial min-max game. We show that on several benchmark datasets, ALOE can improve the robust OOD detection performance by up to 58.4% compared to the previous state-of-the-art method. Our approach can be complemented by techniques such as ODIN (Liang, Li, and Srikant 2017) to further boost the performance.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+
+
+Figure 1: When deploying the OOD detector $G\left( x\right)$ in the real world, there can be two types of attacks: an outlier attack and an inlier attack on $G\left( x\right)$. To perform an outlier attack, we add a small perturbation to an OOD input (e.g. mailbox) which causes the OOD detector to misclassify it as an in-distribution example. The downstream classifier $f\left( x\right)$ will then classify this example into one of the known classes (e.g. stop sign), and trigger a wrong action. To perform an inlier attack, we add a small perturbation to an in-distribution sample (e.g. stop sign) which causes the OOD detector to misclassify it as an out-of-distribution example and reject it without taking the correct action (e.g. stop sign). Solid lines indicate the actual computation flow.
+
+Our main contributions are as follows:
+
+- We extensively examine the robust OOD detection problem on common OOD detection approaches. We show that state-of-the-art OOD detectors can fail to distinguish between in-distribution examples and OOD examples under small adversarial perturbations;
+
+- We propose an effective algorithm, ALOE, that substantially improves the robustness of OOD detectors;
+
+- We empirically analyze why common adversarial examples targeting the classifier with small perturbations should be regarded as in-distribution rather than OOD.
+
+- We will release our code base that integrates the most common OOD detection baselines, and our robust OOD detection methods. We hope this can ensure reproducibility of all methods, and make it easy for the community to conduct future research on this topic.
+
+## Related Work
+
+OOD Detection. Hendrycks and Gimpel introduced a baseline for OOD detection using the maximum softmax probability from a pre-trained network. Subsequent works improve OOD detection by using deep ensembles (Lakshminarayanan, Pritzel, and Blundell 2017), the calibrated softmax score (Liang, Li, and Srikant 2017), the Mahalanobis distance-based confidence score (Lee et al. 2018), and the energy score (Liu et al. 2020). Some methods also modify the neural networks by re-training or fine-tuning on auxiliary anomalous data that are either realistic (Hendrycks, Mazeika, and Dietterich 2018; Mohseni et al. 2020) or artificially generated by GANs (Lee et al. 2017). Many other works (Subramanya, Srinivas, and Babu 2017; Malinin and Gales 2018; Bevandić et al. 2018) also regularize the model to have lower confidence on anomalous examples. Recent works have also studied the computational efficiency aspect of OOD detection (Lin, Roy, and Li 2021) and large-scale OOD detection on ImageNet (Huang and Li 2021).
+
+Robustness of OOD detection. Worst-case aspects of OOD detection have previously been studied in (Sehwag et al. 2019; Hein, Andriushchenko, and Bitterwolf 2019; Meinke and Hein 2019; Bitterwolf, Meinke, and Hein 2020; Chen et al. 2021). However, these papers are primarily concerned with adversarial OOD examples. We are the first to present a unified framework to study both adversarial in-distribution examples and adversarial OOD examples.
+
+Adversarial Robustness. A well-known phenomenon of adversarial examples (Biggio et al. 2013; Goodfellow, Shlens, and Szegedy 2014; Papernot et al. 2016; Szegedy et al. 2013) has received great attention in recent years. Many defense methods have been proposed to address this problem. One of the most effective methods is adversarial training (Madry et al. 2017) which uses robust optimization techniques to render deep learning models resistant to adversarial attacks. In this paper, we show that the OOD detectors built from deep models are also very brittle under small perturbations, and propose a method to mitigate this issue using techniques from robust optimization.
+
+## Traditional OOD Detection
+
+Traditional OOD detection can be formulated as a canonical binary classification problem. Suppose we have an in-distribution ${P}_{\mathbf{X}}$ defined on an input space $\mathcal{X} \subset {\mathbb{R}}^{n}$ . An OOD classifier $G : \mathcal{X} \mapsto \{ 0,1\}$ is built to distinguish whether an input $x$ is from ${P}_{\mathbf{X}}$ (give it label 1) or not (give it label 0).
+
+In testing, the detector $G$ is evaluated on inputs drawn from a mixture distribution ${\mathcal{M}}_{\mathbf{X} \times Z}$ defined on $\mathcal{X} \times \{ 0,1\}$, where the conditional probability distributions ${\mathcal{M}}_{\mathbf{X} \mid Z = 1} = {P}_{\mathbf{X}}$ and ${\mathcal{M}}_{\mathbf{X} \mid Z = 0} = {Q}_{\mathbf{X}}$. We assume that $Z$ is drawn uniformly from $\{ 0,1\}$. ${Q}_{\mathbf{X}}$ is also a distribution defined on $\mathcal{X}$, which we refer to as the out-distribution. Following previous work (Bendale and Boult 2016; Sehwag et al. 2019), we assume that ${P}_{\mathbf{X}}$ and ${Q}_{\mathbf{X}}$ are sufficiently different and ${Q}_{\mathbf{X}}$ has a label set that is disjoint from that of ${P}_{\mathbf{X}}$. We denote by ${\mathcal{D}}_{\text{in }}^{\text{test }}$ an in-distribution test set drawn from ${P}_{\mathbf{X}}$, and ${\mathcal{D}}_{\text{out }}^{\text{test }}$ an out-of-distribution test set drawn from ${Q}_{\mathbf{X}}$. The detection error of $G\left( x\right)$ evaluated under in-distribution ${P}_{\mathbf{X}}$ and out-distribution ${Q}_{\mathbf{X}}$ is defined by
+
+$$
+L\left( {{P}_{\mathbf{X}},{Q}_{\mathbf{X}};G}\right) = \frac{1}{2}\left( {{\mathbb{E}}_{x \sim {P}_{\mathbf{X}}}\mathbb{I}\left\lbrack {G\left( x\right) = 0}\right\rbrack + {\mathbb{E}}_{x \sim {Q}_{\mathbf{X}}}\mathbb{I}\left\lbrack {G\left( x\right) = 1}\right\rbrack }\right) \tag{1}
+$$
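
An empirical estimate of this detection error on finite test sets can be sketched as follows (function name ours):

```python
# Sketch of an empirical estimate of Eq. (1): average the rate at which G
# rejects in-distribution samples (G(x) = 0) and the rate at which it
# accepts out-of-distribution samples (G(x) = 1).
def detection_error(G, in_samples, out_samples):
    miss = sum(G(x) == 0 for x in in_samples) / len(in_samples)
    false_accept = sum(G(x) == 1 for x in out_samples) / len(out_samples)
    return 0.5 * (miss + false_accept)
```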
+
+## Robust Out-of-Distribution Detection
+
+Traditional OOD detection methods are shown to work well when evaluated on natural in-distribution and OOD samples. However, in this section, we show that existing OOD detectors are extremely brittle and can fail when we add minimal semantic-preserving perturbations to the inputs. We start by formally describing the problem of robust out-of-distribution detection.
+
+Problem Statement. We define $\Omega \left( x\right)$ to be a set of semantic-preserving perturbations on an input $x$. For $\delta \in \Omega \left( x\right)$, $x + \delta$ has the same semantic label as $x$. This also means that $x$ and $x + \delta$ have the same distributional membership (i.e. $x$ and $x + \delta$ both belong to the in-distribution ${P}_{\mathbf{X}}$, or the out-distribution ${Q}_{\mathbf{X}}$).
+
+A robust OOD classifier $G : \mathcal{X} \mapsto \{ 0,1\}$ is built to distinguish whether a perturbed input $x + \delta$ is from ${P}_{\mathbf{X}}$ or not. In testing, the detector $G$ is evaluated on perturbed inputs drawn from a mixture distribution ${\mathcal{M}}_{\mathbf{X} \times Z}$ defined on $\mathcal{X} \times \{ 0,1\}$ , where the conditional probability distributions ${\mathcal{M}}_{\mathbf{X} \mid Z = 1} = {P}_{\mathbf{X}}$ and ${\mathcal{M}}_{\mathbf{X} \mid Z = 0} = {Q}_{\mathbf{X}}$ . We assume that $Z$ is drawn uniformly from $\{ 0,1\}$ . The detection error of $G$ evaluated under in-distribution ${P}_{\mathbf{X}}$ and out-distribution ${Q}_{\mathbf{X}}$ is now defined by
+
+$$
+L\left( {{P}_{\mathbf{X}},{Q}_{\mathbf{X}};G,\Omega }\right) = \frac{1}{2}\left( {{\mathbb{E}}_{x \sim {P}_{\mathbf{X}}}\mathop{\max }\limits_{{\delta \in \Omega \left( x\right) }}\mathbb{I}\left\lbrack {G\left( {x + \delta }\right) = 0}\right\rbrack + {\mathbb{E}}_{x \sim {Q}_{\mathbf{X}}}\mathop{\max }\limits_{{\delta \in \Omega \left( x\right) }}\mathbb{I}\left\lbrack {G\left( {x + \delta }\right) = 1}\right\rbrack }\right) \tag{2}
+$$
+
+In practice, it can be intractable to directly minimize $L\left( {{P}_{\mathbf{X}},{Q}_{\mathbf{X}};G,\Omega }\right)$ due to lack of prior knowledge on ${Q}_{\mathbf{X}}$ . In some cases we assume having access to auxiliary data sampled from a distribution ${U}_{\mathbf{X}}$ which is different from both ${P}_{\mathbf{X}}$ and ${Q}_{\mathbf{X}}$ .
+
+Adversarial Attacks on OOD Detection. In the appendix, we describe a few common OOD detection methods such as MSP (Hendrycks and Gimpel 2016), ODIN (Liang, Li, and Srikant 2017) and Mahalanobis (Lee et al. 2018). We then propose adversarial attack algorithms that can show the vulnerability of these OOD detection approaches. Computing the exact value of detection error defined in equation (2) requires enumerating all possible perturbations. This can be practically intractable given the large space of $\Omega \left( x\right) \subset {\mathbb{R}}^{n}$ . To this end, we propose adversarial attack algorithms that can find the perturbations in $\Omega \left( x\right)$ to compute a lower bound.
+
+Specifically, we consider image data and small ${L}_{\infty }$ norm-bounded perturbations on $x$ since it is commonly used in adversarial machine learning research (Madry et al. 2017; Athalye, Carlini, and Wagner 2018). For data point $x \in {\mathbb{R}}^{n}$ , a set of adversarial perturbations is defined as
+
+$$
+B\left( {x,\epsilon }\right) = \left\{ {\delta \in {\mathbb{R}}^{n} \mid \parallel \delta {\parallel }_{\infty } \leq \epsilon \land x + \delta \text{ is valid }}\right\} , \tag{3}
+$$
+
+where $\epsilon$ is the size of the perturbation, also called the adversarial budget. $x + \delta$ is considered valid if the values of $x + \delta$ are in the image pixel value range.
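
The feasible set in Eq. (3) can be enforced with a simple projection. This is a sketch (name ours), assuming pixel values normalized to $[0, 1]$ as stated in the experimental setup:

```python
import numpy as np

# Sketch: project a candidate perturbation onto B(x, eps) by clipping it to
# the L_infinity ball of radius eps and keeping x + delta a valid image.
def project(x, delta, eps):
    delta = np.clip(delta, -eps, eps)          # enforce the norm constraint
    return np.clip(x + delta, 0.0, 1.0) - x    # enforce the pixel value range
```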
+
+For the OOD detection methods based on the softmax confidence score (e.g. MSP, ODIN and OE (Hendrycks, Mazeika, and Dietterich 2018)), we describe the attack mechanism in Algorithm 1. Specifically, we construct adversarial test examples by adding small perturbations in $B\left( {x,\epsilon }\right)$ so as to change the prediction confidence in the reverse direction. To generate adversarial in-distribution examples, the model is induced to output a probability distribution that is close to uniform; whereas adversarial OOD examples are constructed to induce the model to produce a high confidence score. We note that the adversarial examples here are constructed to fool the OOD detectors $G\left( x\right)$, rather than the image classification model $f\left( x\right)$.
+
+For the OOD detection methods using the Mahalanobis distance based confidence score, we propose an attack algorithm detailed in Algorithm 2. Specifically, we construct adversarial test examples by adding small perturbations in $B\left( {x,\epsilon }\right)$ to make the logistic regression detector predict incorrectly. Note that in our attack algorithm, we do not perform input preprocessing to compute the Mahalanobis distance based confidence score.
+
+Our attack algorithms assume having access to the model parameters; thus they are white-box attacks. We find that using our attack algorithms, even with very minimal attack strength ($\epsilon = 1/255$ and $m = 10$), classic OOD detection methods (e.g. MSP, ODIN, Mahalanobis, OE, and OE+ODIN) can fail miserably. For example, the false positive rate of the OE method can increase by 95.52% under such an attack when evaluated with CIFAR-10 as the in-distribution dataset.
+
+Algorithm 1: Adversarial attack on OOD detectors based on softmax confidence score.
+
+---
+
+input $x, F,\epsilon , m,\xi$
+
+output $\delta$
+
+ $\delta \leftarrow$ randomly choose a vector from $B\left( {x,\epsilon }\right)$
+
+ for $t = 1,2,\cdots , m$ do
+
+ ${x}^{\prime } \leftarrow x + \delta$
+
+ if $x$ is in-distribution then
+
+ $\ell \left( {x}^{\prime }\right) \leftarrow {L}_{\mathrm{{CE}}}\left( {F\left( {x}^{\prime }\right) ,{\mathcal{U}}_{K}}\right)$
+
+ else
+
+ $\ell \left( {x}^{\prime }\right) \leftarrow - \mathop{\sum }\limits_{{i = 1}}^{K}{F}_{i}\left( {x}^{\prime }\right) \log {F}_{i}\left( {x}^{\prime }\right)$
+
+ end if
+
+ ${\delta }^{\prime } \leftarrow \delta - \xi \cdot \operatorname{sign}\left( {{\nabla }_{x}\ell \left( {x}^{\prime }\right) }\right)$
+
+ $\delta \leftarrow \mathop{\prod }\limits_{{B\left( {x,\epsilon }\right) }}{\delta }^{\prime }\; \vartriangleright$ projecting ${\delta }^{\prime }$ to $B\left( {x,\epsilon }\right)$
+
+ end for
+
+---
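
A minimal sketch of the loop in Algorithm 1 (not the authors' code): `grad_fn` stands in for ${\nabla }_{x}\ell \left( {x}^{\prime }\right)$, the input gradient of the branch-dependent loss, which would come from autodiff in practice.

```python
import numpy as np

# Sketch of Algorithm 1. grad_fn(x_adv) is assumed to return the gradient of
# the attack loss l(x') w.r.t. the input: cross-entropy to the uniform
# distribution for an in-distribution x, negative entropy for an OOD x.
def softmax_score_attack(x, grad_fn, eps, m, xi, rng=None):
    if rng is None:
        rng = np.random.default_rng(0)
    delta = rng.uniform(-eps, eps, size=x.shape)          # random start in B(x, eps)
    for _ in range(m):
        delta = delta - xi * np.sign(grad_fn(x + delta))  # descend the loss
        delta = np.clip(delta, -eps, eps)                 # project onto the L_inf ball
        delta = np.clip(x + delta, 0.0, 1.0) - x          # keep x + delta a valid image
    return delta
```

With a toy loss whose gradient points away from the target, the loop saturates the budget, mirroring how the sign update pushes each pixel to the edge of the $\epsilon$-ball.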
+
+Algorithm 2: Adversarial attack on OOD detector using Mahalanobis distance based confidence score.
+
+---
+
+input $x,{M}_{\ell }\left( \cdot \right) ,\left\{ {\alpha }_{\ell }\right\} , b,\epsilon , m,\xi$
+
+output $\delta$
+
+ $\delta \leftarrow$ randomly choose a vector from $B\left( {x,\epsilon }\right)$
+
+ for $t = 1,2,\cdots , m$ do
+
+ ${x}^{\prime } \leftarrow x + \delta$
+
+ $p\left( {x}^{\prime }\right) \leftarrow \frac{1}{1 + {e}^{-\left( {\mathop{\sum }\limits_{\ell }{\alpha }_{\ell }{M}_{\ell }\left( {x}^{\prime }\right) + b}\right) }}$
+
+ if $x$ is in-distribution then
+
+ $\ell \left( {x}^{\prime }\right) \leftarrow - \log p\left( {x}^{\prime }\right)$
+
+ else
+
+ $\ell \left( {x}^{\prime }\right) \leftarrow - \log \left( {1 - p\left( {x}^{\prime }\right) }\right)$
+
+ end if
+
+ ${\delta }^{\prime } \leftarrow \delta + \xi \cdot \operatorname{sign}\left( {{\nabla }_{x}\ell \left( {x}^{\prime }\right) }\right)$
+
+ $\delta \leftarrow \mathop{\prod }\limits_{{B\left( {x,\epsilon }\right) }}{\delta }^{\prime }\; \vartriangleright$ projecting ${\delta }^{\prime }$ to $B\left( {x,\epsilon }\right)$
+
+ end for
+
+---
+
+## ALOE: Adversarial Learning with Inlier and Outlier Exposure
+
+In this section, we introduce a novel method called Adversarial Learning with Inlier and Outlier Exposure (ALOE) to improve the robustness of the OOD detector $G\left( \cdot \right)$ built on top of the neural network $f\left( \cdot \right)$ against input perturbations.
+
+Training Objective. We train our model ALOE against two types of perturbed examples. For in-distribution inputs $x \sim {P}_{\mathbf{X}}$, ALOE creates an adversarial inlier within the $\epsilon$-ball that maximizes the negative log likelihood. Training with perturbed examples from the in-distribution helps calibrate the error on inliers and makes the model more invariant to additive noise. In addition, our method leverages an auxiliary unlabeled dataset ${\mathcal{D}}_{\text{out }}^{\mathrm{{OE}}}$ drawn from ${U}_{\mathbf{X}}$ as used in (Hendrycks, Mazeika, and Dietterich 2018), but with a different objective. While OE directly uses the original images $x \in {\mathcal{D}}_{\text{out }}^{\mathrm{{OE}}}$ as outliers, ALOE creates adversarial outliers by searching within the $\epsilon$-ball for the perturbation that maximizes the KL-divergence between the model output and a uniform distribution. The overall training objective of ${F}_{\text{ALOE }}$ can be formulated as a min-max game given by
+
+$$
+\mathop{\operatorname{minimize}}\limits_{\theta }{\mathbb{E}}_{\left( {x, y}\right) \sim {\mathcal{D}}_{\text{in }}^{\text{train }}}\mathop{\max }\limits_{{\delta \in B\left( {x,\epsilon }\right) }}\left\lbrack {-\log {F}_{\theta }{\left( x + \delta \right) }_{y}}\right\rbrack
+$$
+
+$$
++ \lambda \cdot {\mathbb{E}}_{x \sim {\mathcal{D}}_{\text{out }}^{\mathrm{{OE}}}}\mathop{\max }\limits_{{\delta \in B\left( {x,\epsilon }\right) }}\left\lbrack {{L}_{\mathrm{{CE}}}\left( {{F}_{\theta }\left( {x + \delta }\right) ,{\mathcal{U}}_{K}}\right) }\right\rbrack \tag{4}
+$$
+
+where ${F}_{\theta }\left( x\right)$ is the softmax output of the neural network.
+
+To solve the inner max of these objectives, we use the Projected Gradient Descent (PGD) method (Madry et al. 2017), which is the standard method for large-scale constrained optimization. The hyper-parameters of PGD used in the training will be provided in the experiments.
+
+Once the model ${F}_{\text{ALOE }}$ is trained, it can be used for downstream OOD detection by combining with approaches such as MSP and ODIN. The corresponding detectors can be constructed as ${G}_{\mathrm{{MSP}}}\left( {x;\gamma ,{F}_{\mathrm{{ALOE}}}}\right)$ , and ${G}_{\mathrm{{ODIN}}}\left( {x;T,\eta ,\gamma ,{F}_{\mathrm{{ALOE}}}}\right)$ , respectively.
+
+Possible Variants. We also derive two other variants of robust training objective for OOD detection. The first one performs adversarial training only on the inliers. We denote this method as ADV, which is equivalent to the objective used in (Madry et al. 2017). The training objective for ADV is:
+
+$\mathop{\operatorname{minimize}}\limits_{\theta }\;{\mathbb{E}}_{\left( {x, y}\right) \sim {\mathcal{D}}_{\text{in }}^{\text{train }}}\mathop{\max }\limits_{{\delta \in B\left( {x,\epsilon }\right) }}\left\lbrack {-\log {F}_{\theta }{\left( x + \delta \right) }_{y}}\right\rbrack$
+
+Alternatively, we also considered performing adversarial training on inlier examples while simultaneously performing outlier exposure as in (Hendrycks, Mazeika, and Dietterich 2018). We refer to this variant as AOE (adversarial learning with outlier exposure). The training objective for AOE is:
+
+$$
+\mathop{\operatorname{minimize}}\limits_{\theta }\;{\mathbb{E}}_{\left( {x, y}\right) \sim {\mathcal{D}}_{\text{in }}^{\text{train }}}\mathop{\max }\limits_{{\delta \in B\left( {x,\epsilon }\right) }}\left\lbrack {-\log {F}_{\theta }{\left( x + \delta \right) }_{y}}\right\rbrack
+$$
+
+$$
++ \lambda \cdot {\mathbb{E}}_{x \sim {\mathcal{D}}_{\text{out }}^{\text{OE }}}\left\lbrack {{L}_{\mathrm{{CE}}}\left( {{F}_{\theta }\left( x\right) ,{\mathcal{U}}_{K}}\right) }\right\rbrack
+$$
+
+We provide ablation studies comparing these variants with ALOE in the next section.
+
+## Experiments
+
+In this section we perform extensive experiments to evaluate previous OOD detection methods and our ALOE method under adversarial attacks on in-distribution and OOD inputs. Our main findings are summarized as follows:
+
+(1) Classic OOD detection methods such as ODIN, Mahalanobis, and OE fail drastically under our adversarial attacks even with a very small perturbation budget.
+
+(2) Our method ALOE can significantly improve the performance of OOD detection under our adversarial attacks compared to the classic OOD detection methods. We also observe that its variants ADV and AOE perform worse than ALOE on this task. If we combine ALOE with other OOD detection approaches such as ODIN, we can further improve its performance. Moreover, ALOE improves model robustness while maintaining almost the same classification accuracy on clean test inputs (the results are in the appendix).
+
+(3) Common adversarial examples targeting the image classifier $f\left( x\right)$ with small perturbations should be regarded as in-distribution rather than OOD.
+
+Next we provide more details.
+
+## Setup
+
+In-distribution Datasets. We use the GTSRB (Stallkamp et al. 2012), CIFAR-10 and CIFAR-100 datasets (Krizhevsky, Hinton et al. 2009) as in-distribution datasets. The pixel values of all the images are normalized to be in the range $\left\lbrack {0,1}\right\rbrack$.
+
+Out-of-distribution Datasets. For auxiliary outlier dataset, we use 80 Million Tiny Images (Torralba, Fergus, and Freeman 2008), which is a large-scale, diverse dataset scraped from the web. We follow the same deduplication procedure as in (Hendrycks, Mazeika, and Dietterich 2018) and remove all examples in this dataset that appear in CIFAR-10 and CIFAR-100 to ensure that ${\mathcal{D}}_{\text{out }}^{\mathrm{{OE}}}$ and ${\mathcal{D}}_{\text{out }}^{\text{test }}$ are disjoint. For OOD test dataset, we follow the settings in (Liang, Li, and Srikant 2017; Hendrycks, Mazeika, and Dietterich 2018). For CIFAR-10 and CIFAR-100, we use six different natural image datasets: SVHN, Textures, Places365, LSUN (crop), LSUN (resize), and iSUN. For GTSRB, we use the following six datasets that are sufficiently different from it: CIFAR-10, Textures, Places365, LSUN (crop), LSUN (resize), and iSUN. Again, the pixel values of all the images are normalized to be in the range $\left\lbrack {0,1}\right\rbrack$ . The details of these datasets can be found in the appendix.
+
+Architectures and Training Configurations. We use the state-of-the-art neural network architecture DenseNet (Huang et al. 2017). We follow the same setup as in (Huang et al. 2017), with depth $L = 100$, growth rate $k = 12$ (Dense-BC) and a dropout rate of 0. All neural networks are trained with stochastic gradient descent with Nesterov momentum (Duchi, Hazan, and Singer 2011; Kingma and Ba 2014). Specifically, we train Dense-BC with momentum 0.9 and ${\ell }_{2}$ weight decay with a coefficient of ${10}^{-4}$. For GTSRB, we train for 10 epochs; for CIFAR-10 and CIFAR-100, we train for 100 epochs. For the in-distribution dataset, we use a batch size of 64; for outlier exposure with ${\mathcal{D}}_{\text{out }}^{\mathrm{{OE}}}$, we use a batch size of 128. The initial learning rate of 0.1 decays following a cosine learning rate schedule (Loshchilov and Hutter 2016).
+
+Hyperparameters. For ODIN (Liang, Li, and Srikant 2017), we choose the temperature scaling parameter $T$ and perturbation magnitude $\eta$ by validating on random noise data, which does not depend on prior knowledge of the out-of-distribution datasets used in testing. In all of our experiments, we set $T = {1000}$. We set $\eta = {0.0004}$ for GTSRB, $\eta = {0.0014}$ for CIFAR-10, and $\eta = {0.0028}$ for CIFAR-100. For Mahalanobis (Lee et al. 2018), we randomly select 1,000 examples from ${\mathcal{D}}_{\text{in }}^{\text{train }}$ and 1,000 examples from ${\mathcal{D}}_{\text{out }}^{\mathrm{{OE}}}$ to train the logistic regression model and tune $\eta$, where $\eta$ is chosen from 21 evenly spaced numbers from 0 to 0.004, and the optimal parameters are chosen to minimize the FPR at 95% TPR. For the OE, AOE, and ALOE methods, we fix the regularization parameter $\lambda$ to 0.5. In the PGD attack that solves the inner maximization of ADV, AOE, and ALOE, we use a step size of 1/255, $\lfloor {255\epsilon } + 1\rfloor$ steps, and a random start. For our attack algorithm, we set $\xi = 1/{255}$ and $m = {10}$ in our experiments. The adversarial budget $\epsilon$ is set to $1/{255}$ by default; however, we perform ablation studies varying this value (see the results in the appendix).
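The $\ell_\infty$ PGD attack with random start mentioned above can be sketched generically as follows. This is a hedged illustration against an arbitrary differentiable score function, not the paper's exact attack implementation; `score_fn` and the default budgets are illustrative.

```python
import torch

def pgd_attack(score_fn, x, epsilon=1/255, step_size=1/255, num_steps=10,
               maximize=True):
    """l_inf PGD with random start: perturb x within an epsilon-ball to push
    the detector's score up (for OOD inputs) or down (for in-distribution
    inputs). `score_fn` maps an image batch to per-example scores."""
    delta = torch.empty_like(x).uniform_(-epsilon, epsilon)  # random start
    for _ in range(num_steps):
        delta.requires_grad_(True)
        score = score_fn((x + delta).clamp(0, 1)).sum()
        grad, = torch.autograd.grad(score, delta)
        with torch.no_grad():
            sign = 1.0 if maximize else -1.0
            # Signed gradient step, projected back onto the epsilon-ball.
            delta = (delta + sign * step_size * grad.sign()).clamp(-epsilon,
                                                                   epsilon)
    return (x + delta).clamp(0, 1).detach()
```

The final clamp keeps the adversarial images valid in $[0, 1]$ while the projection keeps the perturbation within the budget $\epsilon$.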
+
+
+
+Figure 2: Confidence score distributions produced by different methods. For illustration purposes, we use CIFAR-10 as in-distribution and SVHN as out-of-distribution. (a) and (b) compare the score distributions for Outlier Exposure (Hendrycks, Mazeika, and Dietterich 2018), evaluated on clean images and PGD-attacked images, respectively. Under our attack, the distributions shift in the opposite directions, which causes the method to fail. Our method ALOE mitigates this distribution shift, as shown in (c). When combined with ODIN (Liang, Li, and Srikant 2017), the score distributions become even more separable between in- and out-of-distribution data, as shown in (d).
+
+More experiment settings can be found in the appendix.
+
+## Evaluation Metrics
+
+We report main results using three metrics described below.
+
+FPR at 95% TPR. This metric calculates the false positive rate (FPR) on out-of-distribution examples when the true positive rate (TPR) is ${95}\%$ .
+
+Detection Error. This metric corresponds to the minimum mis-detection probability over all possible thresholds $\gamma$ , which is $\mathop{\min }\limits_{\gamma }L\left( {{P}_{X},{Q}_{X};G\left( {x;\gamma }\right) }\right)$ .
+
+AUROC. The Area Under the Receiver Operating Characteristic curve is a threshold-independent metric (Davis and Goadrich 2006). It can be interpreted as the probability that a positive example is assigned a higher detection score than a negative example (Fawcett 2006). A perfect detector corresponds to an AUROC of ${100}\%$.
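As a rough sketch (assuming higher scores indicate in-distribution, and equal weight on the two error types), the three metrics can be computed from detector scores as follows; the exact thresholding conventions may differ slightly from the evaluation code.

```python
import numpy as np

def fpr_at_tpr(in_scores, out_scores, tpr=0.95):
    """FPR on OOD examples at the threshold where `tpr` of in-distribution
    examples are correctly kept as in-distribution."""
    thresh = np.quantile(in_scores, 1 - tpr)      # keeps tpr of inliers above
    return float(np.mean(out_scores > thresh))    # OOD wrongly kept

def detection_error(in_scores, out_scores):
    """Minimum mis-detection probability over all thresholds, assuming an
    equal prior on in- and out-of-distribution examples."""
    thresholds = np.unique(np.concatenate([in_scores, out_scores]))
    errs = [0.5 * np.mean(in_scores <= t) + 0.5 * np.mean(out_scores > t)
            for t in thresholds]
    return float(min(errs))

def auroc(in_scores, out_scores):
    """Probability that a random in-distribution example scores above a
    random OOD example (ties counted as one half)."""
    diff = in_scores[:, None] - out_scores[None, :]
    return float(np.mean((diff > 0) + 0.5 * (diff == 0)))
```

A perfectly separated pair of score sets yields FPR 0, detection error 0, and AUROC 1.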
+
+## Results
+
+All the values reported in this section are averaged over six OOD test datasets.
+
+| ${\mathcal{D}}_{\text{in }}^{\text{test }}$ | Method | FPR (95% TPR) ↓ | Detection Error ↓ | AUROC ↑ | FPR (95% TPR) ↓ | Detection Error ↓ | AUROC ↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| | | without attack | | | with attack ($\epsilon = 1/{255}$, $m = {10}$) | | |
+| GTSRB | MSP (Hendrycks and Gimpel 2016) | 1.13 | 2.42 | 98.45 | 97.59 | 26.02 | 73.27 |
+| | ODIN (Liang, Li, and Srikant 2017) | 1.42 | 2.10 | 98.81 | 75.94 | 24.87 | 75.41 |
+| | Mahalanobis (Lee et al. 2018) | 1.31 | 2.87 | 98.29 | 100.00 | 29.80 | 70.45 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 0.02 | 0.34 | 99.92 | 25.85 | 5.90 | 96.09 |
+| | OE+ODIN | 0.02 | 0.36 | 99.92 | 14.14 | 5.59 | 97.18 |
+| | ADV (Madry et al. 2017) | 1.45 | 2.88 | 98.66 | 17.96 | 6.95 | 94.83 |
+| | AOE | 0.00 | 0.62 | 99.86 | 1.49 | 2.55 | 98.35 |
+| | ALOE (ours) | 0.00 | 0.44 | 99.76 | 0.66 | 1.80 | 98.95 |
+| | ALOE+ODIN (ours) | 0.01 | 0.45 | 99.76 | 0.69 | 1.80 | 98.98 |
+| CIFAR-10 | MSP (Hendrycks and Gimpel 2016) | 51.67 | 14.06 | 91.61 | 99.98 | 50.00 | 10.34 |
+| | ODIN (Liang, Li, and Srikant 2017) | 25.76 | 11.51 | 93.92 | 93.45 | 46.73 | 28.45 |
+| | Mahalanobis (Lee et al. 2018) | 31.01 | 15.72 | 88.53 | 89.75 | 44.30 | 32.54 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 4.47 | 4.50 | 98.54 | 99.99 | 50.00 | 25.13 |
+| | OE+ODIN | 4.17 | 4.31 | 98.55 | 99.02 | 47.84 | 34.29 |
+| | ADV (Madry et al. 2017) | 66.99 | 19.22 | 87.23 | 98.44 | 31.72 | 66.73 |
+| | AOE | 10.46 | 6.58 | 97.76 | 88.91 | 26.02 | 78.39 |
+| | ALOE (ours) | 5.47 | 5.13 | 98.34 | 53.99 | 14.19 | 91.26 |
+| | ALOE+ODIN (ours) | 4.48 | 4.66 | 98.55 | 41.59 | 12.73 | 92.69 |
+| CIFAR-100 | MSP (Hendrycks and Gimpel 2016) | 81.72 | 33.46 | 71.89 | 100.00 | 50.00 | 2.39 |
+| | ODIN (Liang, Li, and Srikant 2017) | 58.84 | 22.94 | 83.63 | 98.87 | 49.87 | 21.02 |
+| | Mahalanobis (Lee et al. 2018) | 53.75 | 27.63 | 70.85 | 95.79 | 47.53 | 17.92 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 56.49 | 19.38 | 87.73 | 100.00 | 50.00 | 2.94 |
+| | OE+ODIN | 47.59 | 17.39 | 90.14 | 99.49 | 50.00 | 20.02 |
+| | ADV (Madry et al. 2017) | 85.47 | 33.17 | 71.77 | 99.64 | 44.86 | 41.34 |
+| | AOE | 60.00 | 23.03 | 84.57 | 95.79 | 43.07 | 53.80 |
+| | ALOE (ours) | 61.99 | 23.56 | 83.72 | 92.01 | 40.09 | 61.20 |
+| | ALOE+ODIN (ours) | 58.48 | 21.38 | 85.75 | 88.50 | 36.20 | 66.61 |
+
+Table 1: Distinguishing in- and out-of-distribution test set data for image classification. We contrast performance on clean images (without attack) and PGD attacked images. $\uparrow$ indicates larger value is better, and $\downarrow$ indicates lower value is better. All values are percentages and are averaged over six OOD test datasets.
+
+| ${\mathcal{D}}_{\text{in }}^{\text{test }}$ | Method | 1-FPR (95% TPR) |
+| --- | --- | --- |
+| CIFAR-10 | MSP (Hendrycks and Gimpel 2016) | 10.75 |
+| | ODIN (Liang, Li, and Srikant 2017) | 4.02 |
+| | Mahalanobis (Lee et al. 2018) | 7.13 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 12.22 |
+| | OE+ODIN | 12.95 |
+| | ADV (Madry et al. 2017) | 7.69 |
+| | AOE | 11.18 |
+| | ALOE (ours) | 8.85 |
+| | ALOE+ODIN (ours) | 8.71 |
+| CIFAR-100 | MSP (Hendrycks and Gimpel 2016) | 0.06 |
+| | ODIN (Liang, Li, and Srikant 2017) | 0.74 |
+| | Mahalanobis (Lee et al. 2018) | 4.29 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 4.36 |
+| | OE+ODIN | 5.21 |
+| | ADV (Madry et al. 2017) | 3.14 |
+| | AOE | 8.08 |
+| | ALOE (ours) | 7.32 |
+| | ALOE+ODIN (ours) | 7.06 |
+
+Table 2: Distinguishing adversarial examples generated by the PGD attack on the image classifier $f\left( x\right)$. 1-FPR indicates the rate of misclassifying adversarial examples as out-of-distribution examples. For the PGD attack, we choose $\epsilon = 1/{255}$ and 10 attack steps. All values are percentages.
+
+Classic OOD detection methods fail under our attack. As shown in Table 1, although classic OOD detection methods (e.g., MSP, ODIN, Mahalanobis, OE, and OE+ODIN) perform quite well at detecting natural OOD samples, their performance drops substantially under the attack, even with a minimal attack budget of $\epsilon = 1/{255}$ and $m = {10}$. For the best-performing OOD detection method (i.e., OE+ODIN), the FPR at 95% TPR increases drastically from 4.17% (without attack) to 99.02% (with attack) when evaluated on the CIFAR-10 dataset.
+
+ALOE improves robust OOD detection performance. As shown in Table 1, our method ALOE significantly improves OOD detection performance under adversarial attack. For example, ALOE substantially improves the AUROC from 34.29% (state-of-the-art: OE+ODIN) to 92.69% when evaluated on the CIFAR-10 dataset. The performance can be further improved by combining ALOE with ODIN. We observe that this trend holds consistently with GTSRB and CIFAR-100 as the in-distribution training data. We also find that adversarial training alone (ADV) or combined with outlier exposure (AOE) yields slightly less competitive results.
+
+To better understand our method, we analyze the distribution of confidence scores produced by the OOD detectors on SVHN (out-of-distribution) and CIFAR-10 (in-distribution). As shown in Figure 2, OE distinguishes in-distribution and out-of-distribution samples quite well, since the confidence scores are well separated. However, under our attack, the confidence scores of in-distribution samples move toward 0 and the scores of out-of-distribution samples move toward 1.0, which renders the detector unable to distinguish in- and out-of-distribution samples. Using our method, the confidence scores (under attack) become separable and shift in the right direction. If we further combine ALOE with ODIN, the scores produced by the detector are separated even more.
+
+Evaluating on common adversarial examples targeting the classifier $f\left( x\right)$. Our work is primarily concerned with adversarial examples targeting OOD detectors $G\left( x\right)$. This is very different from the common notion of adversarial examples, which are constructed to fool the image classifier $f\left( x\right)$. Under our robust definition of OOD detection, adversarial examples constructed from in-distribution data with small perturbations to fool the image classifier $f\left( x\right)$ should be regarded as in-distribution. To validate this point, we generate PGD-attacked images w.r.t. the original classification models $f\left( x\right)$ trained on CIFAR-10 and CIFAR-100, respectively, using a small perturbation budget of $1/{255}$. We measure the performance of the OOD detectors $G\left( x\right)$ by reporting 1-FPR (at 95% TPR), which indicates the rate of misclassifying adversarial examples as out-of-distribution examples. As shown in Table 2, this metric is generally low for both classic and robust OOD detection methods, which suggests that common adversarial examples with small perturbations are closer to in-distribution than to OOD.
+
+## Conclusion
+
+In this paper, we study the problem of robust out-of-distribution detection and propose adversarial attack algorithms that reveal the lack of robustness of a wide range of OOD detection methods. We show that state-of-the-art OOD detection methods can fail catastrophically under both adversarial in-distribution and out-of-distribution attacks. To counteract these threats, we propose a new method called ALOE, which substantially improves the robustness of state-of-the-art OOD detection. We empirically analyze our method under different parameter settings and optimization objectives, and provide theoretical insights behind our approach. Future work involves exploring alternative semantic-preserving perturbations beyond adversarial attacks.
+
+## References
+
+Athalye, A.; Carlini, N.; and Wagner, D. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420.
+
+Bendale, A.; and Boult, T. E. 2015. Towards Open World Recognition. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2015, Boston, MA, USA, June 7-12, 2015, 1893-1902. IEEE Computer Society.
+
+Bendale, A.; and Boult, T. E. 2016. Towards Open Set Deep Networks. In 2016 IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2016, Las Vegas, NV, USA, June 27-30, 2016, 1563-1572. IEEE Computer Society.
+
+Bevandić, P.; Krešo, I.; Oršić, M.; and Šegvić, S. 2018. Discriminative out-of-distribution detection for semantic segmentation. arXiv preprint arXiv:1808.07703.
+
+Biggio, B.; Corona, I.; Maiorca, D.; Nelson, B.; Šrndić, N.; Laskov, P.; Giacinto, G.; and Roli, F. 2013. Evasion attacks against machine learning at test time. In Joint European conference on machine learning and knowledge discovery in databases, 387-402. Springer.
+
+Bitterwolf, J.; Meinke, A.; and Hein, M. 2020. Certifiably Adversarially Robust Detection of Out-of-Distribution Data. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
+
+Chen, J.; Li, Y.; Wu, X.; Liang, Y.; and Jha, S. 2021. ATOM: Robustifying Out-of-Distribution Detection Using Outlier Mining. In Oliver, N.; Pérez-Cruz, F.; Kramer, S.; Read, J.; and Lozano, J. A., eds., Machine Learning and Knowledge Discovery in Databases. Research Track - European Conference, ECML PKDD 2021, Bilbao, Spain, September 13-17, 2021, Proceedings, Part III, volume 12977 of Lecture Notes in Computer Science, 430-445. Springer.
+
+Cimpoi, M.; Maji, S.; Kokkinos, I.; Mohamed, S.; and Vedaldi, A. 2014. Describing Textures in the Wild. In Proceedings of the IEEE Conf. on Computer Vision and Pattern Recognition (CVPR).
+
+Davis, J.; and Goadrich, M. 2006. The relationship between Precision-Recall and ROC curves. In Proceedings of the 23rd international conference on Machine learning, 233-240.
+
+Duchi, J.; Hazan, E.; and Singer, Y. 2011. Adaptive subgradient methods for online learning and stochastic optimization. Journal of machine learning research, 12(Jul): 2121-2159.
+
+Fawcett, T. 2006. An introduction to ROC analysis. Pattern recognition letters, 27(8): 861-874.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
+
+Guo, C.; Pleiss, G.; Sun, Y.; and Weinberger, K. Q. 2017. On calibration of modern neural networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70, 1321-1330. JMLR.org.
+
+Hein, M.; Andriushchenko, M.; and Bitterwolf, J. 2019. Why ReLU networks yield high-confidence predictions far away from the training data and how to mitigate the problem. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 41-50.
+
+Hendrycks, D.; and Gimpel, K. 2016. A baseline for detecting misclassified and out-of-distribution examples in neural networks. arXiv preprint arXiv:1610.02136.
+
+Hendrycks, D.; Mazeika, M.; and Dietterich, T. G. 2018. Deep anomaly detection with outlier exposure. arXiv preprint arXiv:1812.04606.
+
+Hsu, Y.; Shen, Y.; Jin, H.; and Kira, Z. 2020. Generalized ODIN: Detecting Out-of-Distribution Image Without Learning From Out-of-Distribution Data. In 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition, CVPR 2020, Seattle, WA, USA, June 13-19, 2020, 10948-10957. Computer Vision Foundation / IEEE.
+
+Huang, G.; Liu, Z.; Van Der Maaten, L.; and Weinberger, K. Q. 2017. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition, 4700-4708.
+
+Huang, R.; and Li, Y. 2021. MOS: Towards Scaling Out-of-Distribution Detection for Large Semantic Space. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, 8710-8719. Computer Vision Foundation / IEEE.
+
+Kingma, D. P.; and Ba, J. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
+
+Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
+
+Lakshminarayanan, B.; Pritzel, A.; and Blundell, C. 2017. Simple and scalable predictive uncertainty estimation using deep ensembles. In Advances in neural information processing systems, 6402-6413.
+
+Lee, K.; Lee, H.; Lee, K.; and Shin, J. 2017. Training confidence-calibrated classifiers for detecting out-of-distribution samples. arXiv preprint arXiv:1711.09325.
+
+Lee, K.; Lee, K.; Lee, H.; and Shin, J. 2018. A simple unified framework for detecting out-of-distribution samples and adversarial attacks. In Advances in Neural Information Processing Systems, 7167-7177.
+
+Liang, S.; Li, Y.; and Srikant, R. 2017. Enhancing the reliability of out-of-distribution image detection in neural networks. arXiv preprint arXiv:1706.02690.
+
+Lin, Z.; Roy, S. D.; and Li, Y. 2021. MOOD: Multi-Level Out-of-Distribution Detection. In IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2021, virtual, June 19-25, 2021, 15313-15323. Computer Vision Foundation / IEEE.
+
+Liu, W.; Wang, X.; Owens, J. D.; and Li, Y. 2020. Energy-based Out-of-distribution Detection. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M.; and Lin, H., eds., Advances in Neural Information Processing Systems 33: Annual Conference on Neural Information Processing Systems 2020, NeurIPS 2020, December 6-12, 2020, virtual.
+
+Loshchilov, I.; and Hutter, F. 2016. SGDR: Stochastic gradient descent with warm restarts. arXiv preprint arXiv:1608.03983.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
+
+Malinin, A.; and Gales, M. 2018. Predictive uncertainty estimation via prior networks. In Advances in Neural Information Processing Systems, 7047-7058.
+
+Meinke, A.; and Hein, M. 2019. Towards neural networks that provably know when they don't know. arXiv preprint arXiv:1909.12180.
+
+Mohseni, S.; Pitale, M.; Yadawa, J.; and Wang, Z. 2020. Self-Supervised Learning for Generalizable Out-of-Distribution Detection.
+
+Netzer, Y.; Wang, T.; Coates, A.; Bissacco, A.; Wu, B.; and Ng, A. Y. 2011. Reading digits in natural images with unsupervised feature learning.
+
+Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z. B.; and Swami, A. 2016. The limitations of deep learning in adversarial settings. In 2016 IEEE European symposium on security and privacy (EuroS&P), 372-387. IEEE.
+
+Sehwag, V.; Bhagoji, A. N.; Song, L.; Sitawarin, C.; Cullina, D.; Chiang, M.; and Mittal, P. 2019. Analyzing the robustness of open-world machine learning. In Proceedings of the 12th ACM Workshop on Artificial Intelligence and Security, 105-116.
+
+Stallkamp, J.; Schlipsing, M.; Salmen, J.; and Igel, C. 2012. Man vs. computer: Benchmarking machine learning algorithms for traffic sign recognition. Neural networks, 32: 323-332.
+
+Subramanya, A.; Srinivas, S.; and Babu, R. V. 2017. Confidence estimation in deep neural networks via density modelling. arXiv preprint arXiv:1707.07013.
+
+Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
+
+Torralba, A.; Fergus, R.; and Freeman, W. T. 2008. 80 million tiny images: A large data set for nonparametric object and scene recognition. IEEE transactions on pattern analysis and machine intelligence, 30(11): 1958-1970.
+
+Xu, P.; Ehinger, K. A.; Zhang, Y.; Finkelstein, A.; Kulkarni, S. R.; and Xiao, J. 2015. Turkergaze: Crowdsourcing saliency with webcam based eye tracking. arXiv preprint arXiv:1504.06755.
+
+Yu, F.; Seff, A.; Zhang, Y.; Song, S.; Funkhouser, T.; and Xiao, J. 2015. LSUN: Construction of a large-scale image dataset using deep learning with humans in the loop. arXiv preprint arXiv:1506.03365.
+
+Zhou, B.; Lapedriza, A.; Khosla, A.; Oliva, A.; and Torralba, A. 2017. Places: A 10 million image database for scene recognition. IEEE transactions on pattern analysis and machine intelligence, 40(6): 1452-1464.
+
+## Appendix
+
+## Existing Approaches
+
+Recently, several approaches have been proposed to detect OOD examples based on different notions of confidence scores from a neural network $f\left( \cdot \right)$, which is trained on a dataset ${\mathcal{D}}_{\text{in }}^{\text{train }}$ drawn from a data distribution ${P}_{\mathbf{X}, Y}$ defined on $\mathcal{X} \times \mathcal{Y}$ with $\mathcal{Y} = \{ 1,2,\cdots , K\}$. Note that ${P}_{\mathbf{X}}$ is the marginal distribution of ${P}_{\mathbf{X}, Y}$. We describe a few common methods below.
+
+Maximum Softmax Probability (MSP). The maximum softmax probability is a common baseline for OOD detection (Hendrycks and Gimpel 2016). Given an input image $x$ and a pre-trained neural network $f\left( \cdot \right)$, the softmax output of the classifier is computed by ${F}_{i}\left( x\right) = \frac{{e}^{{f}_{i}\left( x\right) }}{\mathop{\sum }\limits_{{j = 1}}^{K}{e}^{{f}_{j}\left( x\right) }}$ .
+
+A threshold-based detector $G\left( x\right)$ relies on the confidence score $S\left( {x;f}\right) = \mathop{\max }\limits_{i}{F}_{i}\left( x\right)$ to make prediction as follows
+
+$$
+{G}_{\mathrm{{MSP}}}\left( {x;\gamma , f}\right) = \left\{ \begin{array}{ll} 0 & \text{ if }S\left( {x;f}\right) \leq \gamma \\ 1 & \text{ if }S\left( {x;f}\right) > \gamma \end{array}\right. \tag{5}
+$$
+
+where $\gamma$ is the confidence threshold.
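A minimal NumPy sketch of the MSP score and the detector of Eq. (5), assuming raw classifier logits as input; function names are ours, not from the paper.

```python
import numpy as np

def msp_score(logits):
    """Maximum softmax probability S(x; f) computed from classifier logits."""
    z = logits - logits.max(axis=-1, keepdims=True)  # numerical stability
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    return probs.max(axis=-1)

def msp_detector(logits, gamma):
    """G_MSP of Eq. (5): 1 = in-distribution, 0 = out-of-distribution."""
    return (msp_score(logits) > gamma).astype(int)
```

A confidently classified input (one dominant logit) is kept as in-distribution, while a near-uniform softmax falls below the threshold and is flagged as OOD.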
+
+ODIN. The original softmax confidence scores used in (Hendrycks and Gimpel 2016) can be over-confident. ODIN (Liang, Li, and Srikant 2017) improves the MSP baseline by using a calibrated confidence score instead (Guo et al. 2017). Specifically, the calibrated confidence score is computed by $S\left( {x;T, f}\right) =$ $\mathop{\max }\limits_{i}\frac{{e}^{{f}_{i}\left( x\right) /T}}{\mathop{\sum }\limits_{{j = 1}}^{K}{e}^{{f}_{j}\left( x\right) /T}}$ , where $T \in {\mathbb{R}}^{ + }$ is a temperature scaling parameter. In addition, ODIN applies a small noise perturbation to the inputs
+
+$$
+\widetilde{x} = x - \eta \cdot \operatorname{sign}\left( {-{\nabla }_{x}\log S\left( {x;T, f}\right) }\right) , \tag{6}
+$$
+
+where the parameter $\eta$ is the perturbation magnitude.
+
+By combining the two components together, ODIN detector ${G}_{\text{ODIN }}$ is given by
+
+$$
+{G}_{\mathrm{{ODIN}}}\left( {x;T,\eta ,\gamma , f}\right) = \left\{ \begin{array}{ll} 0 & \text{ if }S\left( {\widetilde{x};T, f}\right) \leq \gamma \\ 1 & \text{ if }S\left( {\widetilde{x};T, f}\right) > \gamma \end{array}\right. \tag{7}
+$$
+
+In real applications, it may be difficult to know in advance which out-of-distribution samples one will encounter. The hyperparameters $T$ and $\eta$ can instead be tuned on random noise data, such as samples from a Gaussian or uniform distribution, without requiring prior knowledge of the OOD dataset.
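A hedged PyTorch sketch of the ODIN score, combining temperature scaling with the input perturbation of Eq. (6); `model` is any classifier returning logits, and the defaults mirror the CIFAR-10 settings reported above.

```python
import torch
import torch.nn.functional as F

def odin_score(model, x, T=1000.0, eta=0.0014):
    """Calibrated confidence S(x~; T, f): temperature-scaled max softmax
    evaluated on the perturbed input x~ of Eq. (6)."""
    x = x.clone().requires_grad_(True)
    s = F.softmax(model(x) / T, dim=-1).max(dim=-1).values
    grad, = torch.autograd.grad(torch.log(s).sum(), x)
    # Eq. (6): x~ = x - eta * sign(-grad_x log S(x; T, f))
    x_tilde = x - eta * (-grad).sign()
    with torch.no_grad():
        return F.softmax(model(x_tilde) / T, dim=-1).max(dim=-1).values
```

The perturbation nudges each input in the direction that increases its calibrated confidence, which tends to help in-distribution inputs more than OOD ones, widening the score gap.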
+
+Mahalanobis. Lee et al. (2018) model the features of the training data as a class-conditional Gaussian distribution, whose parameters are the empirical class means and empirical covariance of the training samples. Specifically, for a given sample $x$, the confidence score from the $\ell$ -th feature layer is defined using the Mahalanobis distance with respect to the closest class-conditional distribution:
+
+$$
+{M}_{\ell }\left( x\right) = \mathop{\max }\limits_{c} - {\left( {f}_{\ell }\left( x\right) - {\widehat{\mu }}_{\ell , c}\right) }^{T}{\widehat{\Sigma }}_{\ell }^{-1}\left( {{f}_{\ell }\left( x\right) - {\widehat{\mu }}_{\ell , c}}\right) , \tag{8}
+$$
+
+where ${f}_{\ell }\left( x\right)$ denotes the $\ell$ -th layer hidden features of the DNN, and ${\widehat{\mu }}_{\ell , c}$ and ${\widehat{\Sigma }}_{\ell }$ are the empirical class means and covariance computed from the training data, respectively.
+
+In addition, they use two techniques: (1) input pre-processing and (2) feature ensembling. Specifically, for each test sample $x$, they first calculate the pre-processed sample ${\widetilde{x}}_{\ell }$ by adding small perturbations as in (Liang, Li, and Srikant 2017): ${\widetilde{x}}_{\ell } = x + \eta \cdot \operatorname{sign}\left( {{\nabla }_{x}{M}_{\ell }\left( x\right) }\right)$ , where $\eta$ is the noise magnitude, which can be tuned on validation data.
+
+The confidence scores from all layers are integrated through a weighted average $\mathop{\sum }\limits_{\ell }{\alpha }_{\ell }{M}_{\ell }\left( {\widetilde{x}}_{\ell }\right)$ . The weight ${\alpha }_{\ell }$ of each layer is learned by a logistic regression model that predicts 1 for in-distribution and 0 for OOD examples. The overall Mahalanobis-distance-based confidence score is
+
+$$
+M\left( x\right) = \frac{1}{1 + {e}^{-\left( {\mathop{\sum }\limits_{\ell }{\alpha }_{\ell }{M}_{\ell }\left( {\widetilde{x}}_{\ell }\right) + b}\right) }}, \tag{9}
+$$
+
+where $b$ is the bias of the logistic regression model. Putting it all together, the final Mahalanobis detector ${G}_{\text{Mahalanobis }}$ is given by
+
+$$
+{G}_{\text{Mahalanobis }}\left( {x;\eta ,\gamma ,\left\{ {\alpha }_{\ell }\right\} , b, f}\right) = \left\{ \begin{array}{ll} 0 & \text{ if }M\left( x\right) \leq \gamma \\ 1 & \text{ if }M\left( x\right) > \gamma \end{array}\right. \tag{10}
+$$
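A simplified single-layer NumPy sketch of the confidence score in Eq. (8), omitting the input pre-processing and the learned layer weights of Eqs. (9)-(10); `feats`, `class_means`, and `cov` stand for the empirical quantities described above.

```python
import numpy as np

def mahalanobis_confidence(feats, class_means, cov):
    """Per-layer confidence M_l(x) of Eq. (8): the negative Mahalanobis
    distance to the closest class-conditional Gaussian.
    feats: (batch, dim) features; class_means: list of (dim,) means;
    cov: (dim, dim) shared empirical covariance."""
    prec = np.linalg.inv(cov)  # shared precision matrix
    scores = []
    for mu in class_means:
        d = feats - mu
        # -(f(x) - mu)^T Sigma^{-1} (f(x) - mu), batched via einsum
        scores.append(-np.einsum('bi,ij,bj->b', d, prec, d))
    return np.max(np.stack(scores, axis=0), axis=0)  # max over classes c
```

In the full method, this quantity is computed at several layers on the pre-processed inputs and combined by the logistic regression of Eq. (9).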
+
+## Experimental Details
+
+## Setup
+
+Software and Hardware. We run all experiments with PyTorch and NVIDIA GeForce RTX 2080Ti GPUs.
+
+Number of Evaluation Runs. We run all experiments once with fixed random seeds.
+
+In-distribution Datasets. We provide the details of the in-distribution datasets below:
+
+1. CIFAR-10 and CIFAR-100. The CIFAR-10 and CIFAR-100 datasets (Krizhevsky, Hinton et al. 2009) have 10 and 100 classes, respectively. Both datasets consist of 50,000 training images and 10,000 test images.
+
+2. GTSRB. The German Traffic Sign Recognition Benchmark (GTSRB) (Stallkamp et al. 2012) is a dataset of color images depicting 43 different traffic signs. The images do not have fixed dimensions and have rich backgrounds and varying light conditions, as would be expected of photographs of traffic signs. There are 34,799 training images, 4,410 validation images, and 12,630 test images. We resize each image to ${32} \times {32}$. The dataset has a large imbalance in the number of samples across classes. We use data augmentation to enlarge the training data and balance the number of samples in each class: we construct a class-preserving augmentation pipeline consisting of rotation, translation, and projection transforms and apply it to the training set until each class contains 10,000 training examples. This augmented dataset, containing 430,000 samples in total, is used as ${\mathcal{D}}_{\text{in }}^{\text{train }}$. We randomly select 10,000 images from the original test images as ${\mathcal{D}}_{\text{in }}^{\text{test }}$.
+
+OOD Test Datasets. We provide the details of the OOD test datasets below:
+
+1. SVHN. The SVHN dataset (Netzer et al. 2011) contains ${32} \times {32}$ color images of house numbers. There are ten classes comprising the digits 0-9. The original test set has 26,032 images. We randomly select 1,000 images per class from the test set to form a new test dataset of 10,000 images for our evaluation.
+
+2. Textures. The Describable Textures Dataset (DTD) (Cimpoi et al. 2014) contains textural images in the wild. We include the entire collection of 5,640 images in DTD and downsample each image to size ${32} \times {32}$.
+
+3. Places365. The Places365 dataset (Zhou et al. 2017) contains large-scale photographs of scenes with 365 scene categories. There are 900 images per category in the test set. We randomly sample 10,000 images from the test set for evaluation and downsample each image to size ${32} \times {32}$ .
+
+4. LSUN (crop) and LSUN (resize). The Large-scale Scene UNderstanding dataset (LSUN) has a testing set of 10,000 images of 10 different scenes (Yu et al. 2015). We construct two datasets, LSUN-C and LSUN-R, by randomly cropping image patches of size ${32} \times {32}$ and downsampling each image to size ${32} \times {32}$ , respectively.
+
+5. iSUN. The iSUN dataset (Xu et al. 2015) consists of a subset of SUN images. We include the entire collection of 8,925 images in iSUN and downsample each image to size ${32} \times {32}$.
+
+6. CIFAR-10. We use the 10,000 test images of CIFAR-10 as OOD test set for GTSRB.
+
+| ${\mathcal{D}}_{\text{in }}^{\text{test }}$ | Method | FPR (95% TPR) ↓ | Detection Error ↓ | AUROC ↑ | FPR (95% TPR) ↓ | Detection Error ↓ | AUROC ↑ | FPR (95% TPR) ↓ | Detection Error ↓ | AUROC ↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| | | with attack ($\epsilon = 2/{255}$, $m = {10}$) | | | with attack ($\epsilon = 3/{255}$, $m = {10}$) | | | with attack ($\epsilon = 4/{255}$, $m = {10}$) | | |
+| GTSRB | MSP (Hendrycks and Gimpel 2016) | 99.88 | 50.00 | 26.11 | 99.99 | 50.00 | 6.79 | 99.99 | 50.00 | 6.39 |
+| | ODIN (Liang, Li, and Srikant 2017) | 99.23 | 49.97 | 27.38 | 99.83 | 50.00 | 6.94 | 99.84 | 50.00 | 6.52 |
+| | Mahalanobis (Lee et al. 2018) | 100.00 | 49.97 | 26.37 | 100.00 | 50.00 | 8.27 | 100.00 | 50.00 | 7.82 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 96.79 | 16.09 | 83.06 | 99.91 | 25.36 | 68.62 | 99.97 | 26.37 | 66.91 |
+| | OE+ODIN | 89.88 | 15.78 | 84.56 | 99.25 | 24.70 | 69.71 | 99.45 | 25.67 | 68.02 |
+| | ADV (Madry et al. 2017) | 92.17 | 11.51 | 89.92 | 99.65 | 18.59 | 80.85 | 99.49 | 18.68 | 81.17 |
+| | AOE | 7.94 | 5.36 | 94.82 | 16.16 | 10.38 | 88.72 | 38.05 | 17.95 | 83.84 |
+| | ALOE (ours) | 4.03 | 4.19 | 95.90 | 10.82 | 7.64 | 91.21 | 16.10 | 10.10 | 89.52 |
+| | ALOE+ODIN (ours) | 3.95 | 4.15 | 95.72 | 9.56 | 6.91 | 91.08 | 13.85 | 9.22 | 89.44 |
+| CIFAR-10 | MSP (Hendrycks and Gimpel 2016) | 100.00 | 50.00 | 1.16 | 100.00 | 50.00 | 0.13 | 100.00 | 50.00 | 0.12 |
+| | ODIN (Liang, Li, and Srikant 2017) | 99.73 | 49.99 | 5.67 | 99.98 | 50.00 | 1.14 | 99.99 | 50.00 | 1.06 |
+| | Mahalanobis (Lee et al. 2018) | 100.00 | 50.00 | 5.90 | 100.00 | 50.00 | 1.27 | 100.00 | 50.00 | 1.05 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 100.00 | 50.00 | 5.99 | 100.00 | 50.00 | 1.52 | 100.00 | 50.00 | 1.48 |
+| | OE+ODIN | 100.00 | 50.00 | 8.89 | 100.00 | 50.00 | 2.76 | 100.00 | 50.00 | 2.69 |
+| | ADV (Madry et al. 2017) | 99.94 | 36.57 | 56.01 | 99.89 | 39.64 | 49.88 | 99.96 | 40.57 | 48.02 |
+| | AOE | 91.79 | 35.08 | 66.92 | 99.96 | 39.53 | 54.43 | 98.40 | 37.37 | 59.16 |
+| | ALOE (ours) | 75.90 | 23.36 | 83.26 | 83.14 | 31.54 | 73.46 | 82.53 | 29.92 | 75.52 |
+| | ALOE+ODIN (ours) | 68.80 | 20.31 | 85.92 | 79.19 | 28.04 | 77.88 | 78.46 | 27.55 | 78.83 |
+
+Table 3: Distinguishing in- and out-of-distribution test set data for image classification. $\uparrow$ indicates larger value is better, and $\downarrow$ indicates lower value is better. All values are percentages. The in-distribution datasets are GTSRB and CIFAR-10. All reported values are averaged over six OOD test datasets.
+
+| ${\mathcal{D}}_{\text{in }}^{\text{test }}$ | Method | Classification Accuracy | Robustness w.r.t. image classifier |
+| --- | --- | --- | --- |
+| GTSRB | Original | 99.33% | 88.47% |
+| | OE | 99.38% | 83.99% |
+| | ADV | 99.23% | 97.13% |
+| | AOE | 98.82% | 94.14% |
+| | ALOE | 98.91% | 94.58% |
+| CIFAR-10 | Original | 94.08% | 25.38% |
+| | OE | 94.59% | 28.94% |
+| | ADV | 92.97% | 84.81% |
+| | AOE | 93.35% | 78.60% |
+| | ALOE | 93.89% | 84.02% |
+| CIFAR-100 | Original | 75.26% | 7.29% |
+| | OE | 74.45% | 7.84% |
+| | ADV | 70.58% | 54.58% |
+| | AOE | 72.56% | 52.96% |
+| | ALOE | 71.62% | 55.97% |
+
+Table 4: The image classification accuracy and robustness of different models on the original tasks (GTSRB, CIFAR-10, and CIFAR-100). Robustness measures the accuracy under a PGD attack w.r.t. the original classification model.
+
+## Additional Results
+
+Effect of adversarial budget $\epsilon$. We further perform an ablation study on the adversarial budget $\epsilon$ and analyze how it affects performance. On GTSRB and CIFAR-10, we compare performance for $\epsilon = 1/{255}, 2/{255}, 3/{255}, 4/{255}$. The results are reported in Table 3. We observe that as $\epsilon$ increases, the performance of classic OOD detection methods (e.g., MSP, ODIN, Mahalanobis, OE, OE+ODIN) drops significantly under our attack: the FPR at ${95}\%$ TPR reaches almost 100% for all of those methods. We also observe that our methods ALOE (and ALOE+ODIN) consistently improve the results under our attack compared to those classic methods.
+
+Classification performance of the image classifier $f\left( x\right)$. In addition to OOD detection, we also verify the accuracy and robustness on the original classification task. The results are presented in Table 4. Robustness measures the accuracy under a PGD attack w.r.t. the original classification model, with an adversarial budget $\epsilon$ of $1/{255}$ and 10 attack steps. Original refers to the vanilla model trained with the standard cross-entropy loss on the dataset. On both GTSRB and CIFAR-10, ALOE improves model robustness while maintaining almost the same classification accuracy on clean inputs. On CIFAR-100, ALOE improves robustness from 7.29% to 55.97%, albeit with a slight drop (3.64%) in classification accuracy. Overall, our method achieves a good trade-off between clean accuracy and robustness to adversarial perturbations.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/WMIoz7O_DPz/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/WMIoz7O_DPz/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..1af5f72c48db73abe4a716f7fc6335fc1a6e8697
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/WMIoz7O_DPz/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,389 @@
+§ ROBUST OUT-OF-DISTRIBUTION DETECTION FOR NEURAL NETWORKS
+
+§ ABSTRACT
+
+Detecting out-of-distribution (OOD) inputs is critical for safely deploying deep learning models in the real world. Existing approaches for detecting OOD examples work well when evaluated on benign in-distribution and OOD samples. However, in this paper, we show that existing detection mechanisms can be extremely brittle when evaluated on in-distribution and OOD inputs with minimal adversarial perturbations that do not change their semantics. Formally, we extensively study the problem of robust out-of-distribution detection on common OOD detection approaches, and show that state-of-the-art OOD detectors can be easily fooled by adding small perturbations to in-distribution and OOD inputs. To counteract these threats, we propose an effective algorithm called ALOE, which performs robust training by exposing the model to both adversarially crafted inlier and outlier examples. Our method can be flexibly combined with existing methods and render them robust. On common benchmark datasets, we show that ALOE substantially improves the robustness of state-of-the-art OOD detection, with a 58.4% AUROC improvement on CIFAR-10 and a 46.59% improvement on CIFAR-100.
+
+§ INTRODUCTION
+
+Out-of-distribution (OOD) detection has become an indispensable part of building reliable open-world machine learning models (Bendale and Boult 2015). An OOD detector is used to determine whether an input is from the training data distribution (in-distribution examples), or from a different distribution (OOD examples). Previous OOD detection methods are usually evaluated on benign in-distribution and OOD inputs (Hsu et al. 2020; Huang and Li 2021; Lee et al. 2018; Liang, Li, and Srikant 2017; Liu et al. 2020). Recently, some works have shown the existence of adversarial OOD examples, which are generated by slightly perturbing the clean OOD inputs to make the OOD detectors fail to detect them as OOD examples, and have proposed some robust OOD detection methods to address the issue of adversarial OOD examples (Sehwag et al. 2019; Hein, Andriushchenko, and Bitterwolf 2019; Meinke and Hein 2019; Bitterwolf, Meinke, and Hein 2020; Chen et al. 2021).
+
+In this paper, we also consider the problem of robust OOD detection. Different from previous works, we not only consider adversarial OOD examples, but also consider adversarial in-distribution examples, which are generated by slightly perturbing the clean in-distribution inputs and cause the OOD detectors to falsely reject them. We argue that both adversarial in-distribution examples and adversarial OOD examples can cause severe consequences if the OOD detectors fail to detect them, as illustrated in Figure 1.
+
+Formally, we study the problem of robust out-of-distribution detection and reveal the lack of robustness of common OOD detection methods. We show that existing OOD detection algorithms can be easily attacked to produce mistaken OOD predictions under small adversarial perturbations (Papernot et al. 2016; Goodfellow, Shlens, and Szegedy 2014; Biggio et al. 2013; Szegedy et al. 2013). Specifically, we construct adversarial in-distribution examples by adding small perturbations to in-distribution inputs such that the OOD detectors falsely reject them, whereas adversarial OOD examples are generated by adding small perturbations to OOD inputs such that the OOD detectors fail to reject them. Different from the common notion, the adversarial examples in our work are meant to fool the OOD detector $G\left( x\right)$ , rather than the original image classification model $f\left( x\right)$ . It is also worth noting that the perturbation is sufficiently small that the visual semantics as well as the true distributional membership remain the same. Yet worryingly, state-of-the-art OOD detectors can fail to distinguish between adversarial in-distribution examples and adversarial OOD examples. Although some works try to make OOD detection robust to adversarial OOD examples, scant attention has been paid to making OOD detectors robust against both adversarial in-distribution examples and adversarial OOD examples. To the best of our knowledge, we are the first to consider the issue of adversarial in-distribution examples.
+
+To address this challenge, we propose an effective method, ALOE, that improves robust OOD detection performance. Specifically, we perform robust training by exposing the model to two types of perturbed adversarial examples. For in-distribution training data, we create a perturbed example by searching in its $\epsilon$ -ball for the point that maximizes the negative log likelihood. In addition, we also utilize an auxiliary unlabeled dataset as in (Hendrycks, Mazeika, and Dietterich 2018), and create a corresponding perturbed outlier example by searching in its $\epsilon$ -ball for the point that maximizes the KL-divergence between the model output and a uniform distribution. The overall training objective of ALOE can be viewed as an adversarial min-max game. We show that on several benchmark datasets, ALOE can improve robust OOD detection performance by up to 58.4% compared to the previous state-of-the-art method. Our approach can be complemented by techniques such as ODIN (Liang, Li, and Srikant 2017) to further boost performance.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+
+Figure 1: When deploying an OOD detector $G\left( x\right)$ in the real world, there can be two types of attacks on $G\left( x\right)$ : an outlier attack and an inlier attack. To perform an outlier attack, we add a small perturbation to an OOD input (e.g. mailbox) that causes the OOD detector to misclassify it as an in-distribution example. The downstream classifier $f\left( x\right)$ then classifies this example into one of the known classes (e.g. stop sign) and triggers a wrong action. To perform an inlier attack, we add a small perturbation to an in-distribution sample (e.g. stop sign) that causes the OOD detector to misclassify it as an out-of-distribution example and reject it without taking the correct action (e.g. stopping). Solid lines indicate the actual computation flow.
+
+Our main contributions are as follows:
+
+ * We extensively examine the robust OOD detection problem on common OOD detection approaches. We show that state-of-the-art OOD detectors can fail to distinguish between in-distribution examples and OOD examples under small adversarial perturbations;
+
+ * We propose an effective algorithm, ALOE, that substantially improves the robustness of OOD detectors;
+
 * We empirically analyze why common adversarial examples targeting the classifier with small perturbations should be regarded as in-distribution rather than OOD;
+
+ * We will release our code base that integrates the most common OOD detection baselines, and our robust OOD detection methods. We hope this can ensure reproducibility of all methods, and make it easy for the community to conduct future research on this topic.
+
+§ RELATED WORK
+
+OOD Detection. Hendrycks and Gimpel introduced a baseline for OOD detection using the maximum softmax probability from a pre-trained network. Subsequent works improve OOD detection by using deep ensembles (Lakshminarayanan, Pritzel, and Blundell 2017), the calibrated softmax score (Liang, Li, and Srikant 2017), the Mahalanobis distance-based confidence score (Lee et al. 2018), and the energy score (Liu et al. 2020). Some methods also modify the neural networks by re-training or fine-tuning on auxiliary anomalous data that are either realistic (Hendrycks, Mazeika, and Dietterich 2018; Mohseni et al. 2020) or artificially generated by GANs (Lee et al. 2017). Many other works (Subramanya, Srinivas, and Babu 2017; Malinin and Gales 2018; Bevandić et al. 2018) also regularize the model to have lower confidence on anomalous examples. Recent works have also studied the computational efficiency aspect of OOD detection (Lin, Roy, and Li 2021) and large-scale OOD detection on ImageNet (Huang and Li 2021).
+
+Robustness of OOD detection. Worst-case aspects of OOD detection have previously been studied in (Sehwag et al. 2019; Hein, Andriushchenko, and Bitterwolf 2019; Meinke and Hein 2019; Bitterwolf, Meinke, and Hein 2020; Chen et al. 2021). However, these papers are primarily concerned with adversarial OOD examples. We are the first to present a unified framework to study both adversarial in-distribution examples and adversarial OOD examples.
+
+Adversarial Robustness. A well-known phenomenon of adversarial examples (Biggio et al. 2013; Goodfellow, Shlens, and Szegedy 2014; Papernot et al. 2016; Szegedy et al. 2013) has received great attention in recent years. Many defense methods have been proposed to address this problem. One of the most effective methods is adversarial training (Madry et al. 2017) which uses robust optimization techniques to render deep learning models resistant to adversarial attacks. In this paper, we show that the OOD detectors built from deep models are also very brittle under small perturbations, and propose a method to mitigate this issue using techniques from robust optimization.
+
+§ TRADITIONAL OOD DETECTION
+
+Traditional OOD detection can be formulated as a canonical binary classification problem. Suppose we have an in-distribution ${P}_{\mathbf{X}}$ defined on an input space $\mathcal{X} \subset {\mathbb{R}}^{n}$ . An OOD classifier $G : \mathcal{X} \mapsto \{ 0,1\}$ is built to distinguish whether an input $x$ is from ${P}_{\mathbf{X}}$ (give it label 1) or not (give it label 0).
+
+In testing, the detector $G$ is evaluated on inputs drawn from a mixture distribution ${\mathcal{M}}_{\mathbf{X} \times Z}$ defined on $\mathcal{X} \times \{ 0,1\}$ , where the conditional probability distributions are ${\mathcal{M}}_{\mathbf{X} \mid Z = 1} = {P}_{\mathbf{X}}$ and ${\mathcal{M}}_{\mathbf{X} \mid Z = 0} = {Q}_{\mathbf{X}}$ . We assume that $Z$ is drawn uniformly from $\{ 0,1\}$ . ${Q}_{\mathbf{X}}$ is also a distribution defined on $\mathcal{X}$ , which we refer to as the out-distribution. Following previous work (Bendale and Boult 2016; Sehwag et al. 2019), we assume that ${P}_{\mathbf{X}}$ and ${Q}_{\mathbf{X}}$ are sufficiently different and that ${Q}_{\mathbf{X}}$ has a label set disjoint from that of ${P}_{\mathbf{X}}$ . We denote by ${\mathcal{D}}_{\text{ in }}^{\text{ test }}$ an in-distribution test set drawn from ${P}_{\mathbf{X}}$ , and by ${\mathcal{D}}_{\text{ out }}^{\text{ test }}$ an out-of-distribution test set drawn from ${Q}_{\mathbf{X}}$ . The detection error of $G\left( x\right)$ evaluated under in-distribution ${P}_{\mathbf{X}}$ and out-distribution ${Q}_{\mathbf{X}}$ is defined by
+
+$$
+L\left( {{P}_{\mathbf{X}},{Q}_{\mathbf{X}};G}\right) = \frac{1}{2}\left( {{\mathbb{E}}_{x \sim {P}_{\mathbf{X}}}\mathbb{I}\left\lbrack {G\left( x\right) = 0}\right\rbrack + {\mathbb{E}}_{x \sim {Q}_{\mathbf{X}}}\mathbb{I}\left\lbrack {G\left( x\right) = 1}\right\rbrack }\right) \tag{1}
+$$
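As a concrete illustration, the detection error in Eq. (1) can be estimated empirically for a threshold detector $G(x;\gamma) = \mathbb{I}[s(x) \geq \gamma]$ built on a scalar confidence score $s$. The scores below are hypothetical values chosen for illustration, not taken from the paper; a minimal Python sketch:

```python
# Empirical estimate of the detection error in Eq. (1) for a threshold
# detector G(x; gamma) = 1 iff the confidence score s(x) >= gamma.

def detection_error(in_scores, out_scores, gamma):
    """Average of the miss rate on in-distribution inputs (G(x) = 0)
    and the false-accept rate on out-distribution inputs (G(x) = 1)."""
    miss_in = sum(s < gamma for s in in_scores) / len(in_scores)
    false_accept = sum(s >= gamma for s in out_scores) / len(out_scores)
    return 0.5 * (miss_in + false_accept)

# Hypothetical confidence scores for illustration only.
in_scores = [0.9, 0.8, 0.95, 0.7]
out_scores = [0.1, 0.3, 0.6, 0.2]

err = detection_error(in_scores, out_scores, gamma=0.65)  # 0.0: perfect separation
```

With a well-chosen threshold the error is zero here; raising the threshold past some inlier scores trades misses for false accepts.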
+
+§ ROBUST OUT-OF-DISTRIBUTION DETECTION
+
+Traditional OOD detection methods are shown to work well when evaluated on natural in-distribution and OOD samples. However, in this section, we show that existing OOD detectors are extremely brittle and can fail when we add minimal semantic-preserving perturbations to the inputs. We start by formally describing the problem of robust out-of-distribution detection.
+
+Problem Statement. We define $\Omega \left( x\right)$ to be a set of semantic-preserving perturbations on an input $x$ . For $\delta \in$ $\Omega \left( x\right) ,x + \delta$ has the same semantic label as $x$ . This also means that $x$ and $x + \delta$ have the same distributional membership (i.e. $x$ and $x + \delta$ both belong to in-distribution ${P}_{\mathbf{X}}$ , or out-distribution ${Q}_{\mathbf{X}}$ ).
+
+A robust OOD classifier $G : \mathcal{X} \mapsto \{ 0,1\}$ is built to distinguish whether a perturbed input $x + \delta$ is from ${P}_{\mathbf{X}}$ or not. In testing, the detector $G$ is evaluated on perturbed inputs drawn from a mixture distribution ${\mathcal{M}}_{\mathbf{X} \times Z}$ defined on $\mathcal{X} \times \{ 0,1\}$ , where the conditional probability distributions ${\mathcal{M}}_{\mathbf{X} \mid Z = 1} = {P}_{\mathbf{X}}$ and ${\mathcal{M}}_{\mathbf{X} \mid Z = 0} = {Q}_{\mathbf{X}}$ . We assume that $Z$ is drawn uniformly from $\{ 0,1\}$ . The detection error of $G$ evaluated under in-distribution ${P}_{\mathbf{X}}$ and out-distribution ${Q}_{\mathbf{X}}$ is now defined by
+
+$$
+L\left( {{P}_{\mathbf{X}},{Q}_{\mathbf{X}};G,\Omega }\right) = \frac{1}{2}\left( {{\mathbb{E}}_{x \sim {P}_{\mathbf{X}}}\mathop{\max }\limits_{{\delta \in \Omega \left( x\right) }}\mathbb{I}\left\lbrack {G\left( {x + \delta }\right) = 0}\right\rbrack + {\mathbb{E}}_{x \sim {Q}_{\mathbf{X}}}\mathop{\max }\limits_{{\delta \in \Omega \left( x\right) }}\mathbb{I}\left\lbrack {G\left( {x + \delta }\right) = 1}\right\rbrack }\right) \tag{2}
+$$
+
+In practice, it can be intractable to directly minimize $L\left( {{P}_{\mathbf{X}},{Q}_{\mathbf{X}};G,\Omega }\right)$ due to the lack of prior knowledge on ${Q}_{\mathbf{X}}$ . In some cases, we assume access to auxiliary data sampled from a distribution ${U}_{\mathbf{X}}$ that is different from both ${P}_{\mathbf{X}}$ and ${Q}_{\mathbf{X}}$ .
+
+Adversarial Attacks on OOD Detection. In the appendix, we describe a few common OOD detection methods such as MSP (Hendrycks and Gimpel 2016), ODIN (Liang, Li, and Srikant 2017) and Mahalanobis (Lee et al. 2018). We then propose adversarial attack algorithms that can show the vulnerability of these OOD detection approaches. Computing the exact value of detection error defined in equation (2) requires enumerating all possible perturbations. This can be practically intractable given the large space of $\Omega \left( x\right) \subset {\mathbb{R}}^{n}$ . To this end, we propose adversarial attack algorithms that can find the perturbations in $\Omega \left( x\right)$ to compute a lower bound.
+
+Specifically, we consider image data and small ${L}_{\infty }$ norm-bounded perturbations on $x$ since it is commonly used in adversarial machine learning research (Madry et al. 2017; Athalye, Carlini, and Wagner 2018). For data point $x \in {\mathbb{R}}^{n}$ , a set of adversarial perturbations is defined as
+
+$$
+B\left( {x,\epsilon }\right) = \left\{ {\delta \in {\mathbb{R}}^{n} \mid \parallel \delta {\parallel }_{\infty } \leq \epsilon \land x + \delta \text{ is valid }}\right\} , \tag{3}
+$$
+
+where $\epsilon$ is the size of the perturbation, also called the adversarial budget. $x + \delta$ is considered valid if the values of $x + \delta$ are in the image pixel value range.
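For $L_{\infty}$ perturbations with pixel range $[0,1]$, the projection onto $B\left( {x,\epsilon }\right)$ used by the attack algorithms reduces to two element-wise clamps. A minimal pure-Python sketch, treating images as flat lists and taking "valid" to mean pixel values in $[0,1]$ (an assumption consistent with the dataset normalization described later):

```python
def project(delta, x, eps):
    """Project delta onto B(x, eps): ||delta||_inf <= eps and x + delta valid (in [0, 1]^n)."""
    out = []
    for xi, di in zip(x, delta):
        di = max(-eps, min(eps, di))       # clamp into the L-inf ball
        di = max(-xi, min(1.0 - xi, di))   # keep each pixel of x + delta in [0, 1]
        out.append(di)
    return out

x = [0.0, 0.5, 1.0]          # toy "image" with three pixels
delta = [0.1, -0.2, 0.05]    # candidate perturbation before projection
proj = project(delta, x, eps=0.03)
```

Note that the second clamp can shrink a coordinate below the budget (the last pixel here ends at 0, since any positive perturbation would leave the valid range).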
+
+For the OOD detection methods based on the softmax confidence score (e.g. MSP, ODIN and OE (Hendrycks, Mazeika, and Dietterich 2018)), we describe the attack mechanism in Algorithm 1. Specifically, we construct adversarial test examples by adding small perturbations in $B\left( {x,\epsilon }\right)$ so as to change the prediction confidence in the reverse direction. To generate adversarial in-distribution examples, the model is induced to output a probability distribution that is close to uniform; whereas adversarial OOD examples are constructed to induce the model to produce a high confidence score. We note that the adversarial examples here are constructed to fool the OOD detector $G\left( x\right)$ , rather than the image classification model $f\left( x\right)$ .
+
+For the OOD detection methods using the Mahalanobis distance based confidence score, we propose an attack algorithm detailed in Algorithm 2. Specifically, we construct adversarial test examples by adding small perturbations in $B\left( {x,\epsilon }\right)$ to make the logistic regression detector predict wrongly. Note that in our attack algorithm, we do not perform input preprocessing when computing the Mahalanobis distance based confidence score.
+
+Our attack algorithms assume access to the model parameters, and are thus white-box attacks. We find that using our attack algorithms, even with very minimal attack strength $\left( {\epsilon = 1/{255}\text{ and }m = {10}}\right)$ , classic OOD detection methods (e.g. MSP, ODIN, Mahalanobis, OE, and OE+ODIN) fail miserably. For example, the false positive rate of the OE method can increase by ${95.52}\%$ under such an attack when evaluated with CIFAR-10 as the in-distribution dataset.
+
+Algorithm 1: Adversarial attack on OOD detectors based on softmax confidence score.
+
+input $x,F,\epsilon ,m,\xi$
+
+output $\delta$
+
+ $\delta \leftarrow$ randomly choose a vector from $B\left( {x,\epsilon }\right)$
+
+ for $t = 1,2,\cdots ,m$ do
+
+ ${x}^{\prime } \leftarrow x + \delta$
+
+ if $x$ is in-distribution then
+
+ $\ell \left( {x}^{\prime }\right) \leftarrow {L}_{\mathrm{{CE}}}\left( {F\left( {x}^{\prime }\right) ,{\mathcal{U}}_{K}}\right)$
+
+ else
+
+ $\ell \left( {x}^{\prime }\right) \leftarrow - \mathop{\sum }\limits_{{i = 1}}^{K}{F}_{i}\left( {x}^{\prime }\right) \log {F}_{i}\left( {x}^{\prime }\right)$
+
+ end if
+
+ ${\delta }^{\prime } \leftarrow \delta - \xi \cdot \operatorname{sign}\left( {{\nabla }_{x}\ell \left( {x}^{\prime }\right) }\right)$
+
+ $\delta \leftarrow \mathop{\prod }\limits_{{B\left( {x,\epsilon }\right) }}{\delta }^{\prime }\; \vartriangleright$ projecting ${\delta }^{\prime }$ to $B\left( {x,\epsilon }\right)$
+
+ end for
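A runnable sketch of Algorithm 1 on a toy two-class linear-softmax model. The toy model, the numerical sign-gradient (in place of backpropagation), and the deterministic zero start for $\delta$ are simplifying assumptions for illustration; they are not part of the paper's setup:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def F(x, W):
    """Toy K-class model: softmax over linear logits W x."""
    return softmax([sum(wi * xi for wi, xi in zip(row, x)) for row in W])

def ce_to_uniform(p):
    """L_CE(p, U_K): cross entropy between p and the uniform distribution."""
    return -sum(math.log(pi) for pi in p) / len(p)

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p)

def attack(x, W, eps, m, xi, in_distribution, h=1e-5):
    """Algorithm 1: descend the attack loss with signed gradient steps, projecting onto the eps-ball."""
    # In-distribution: push the output toward uniform (low confidence, so the detector rejects).
    # Out-of-distribution: minimize entropy (high confidence, so the detector accepts).
    loss = (lambda z: ce_to_uniform(F(z, W))) if in_distribution else (lambda z: entropy(F(z, W)))
    delta = [0.0] * len(x)  # deterministic start instead of a random draw, for reproducibility
    for _ in range(m):
        xp = [a + d for a, d in zip(x, delta)]
        for i in range(len(x)):
            hi, lo = xp[:], xp[:]
            hi[i] += h
            lo[i] -= h
            g = (loss(hi) - loss(lo)) / (2 * h)  # numerical gradient coordinate
            step = xi * (1.0 if g > 0 else -1.0 if g < 0 else 0.0)
            delta[i] = max(-eps, min(eps, delta[i] - step))  # descend, then project
    return delta

W = [[2.0, -1.0], [-1.0, 2.0]]   # hypothetical weights of a confident 2-class model
x = [1.0, 0.0]                   # an "in-distribution" input
delta = attack(x, W, eps=0.5, m=10, xi=0.05, in_distribution=True)
```

After the attack, the perturbed input produces a near-uniform softmax output, so a confidence-thresholding detector would falsely reject it.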
+
+Algorithm 2: Adversarial attack on OOD detector using Mahalanobis distance based confidence score.
+
+input $x,{M}_{\ell }\left( \cdot \right) ,\left\{ {\alpha }_{\ell }\right\} ,b,\epsilon ,m,\xi$
+
+output $\delta$
+
+ $\delta \leftarrow$ randomly choose a vector from $B\left( {x,\epsilon }\right)$
+
+ for $t = 1,2,\cdots ,m$ do
+
+ ${x}^{\prime } \leftarrow x + \delta$
+
+ $p\left( {x}^{\prime }\right) \leftarrow \frac{1}{1 + {e}^{-\left( {\mathop{\sum }\limits_{\ell }{\alpha }_{\ell }{M}_{\ell }\left( {x}^{\prime }\right) + b}\right) }}$
+
+ if $x$ is in-distribution then
+
+ $\ell \left( {x}^{\prime }\right) \leftarrow - \log p\left( {x}^{\prime }\right)$
+
+ else
+
+ $\ell \left( {x}^{\prime }\right) \leftarrow - \log \left( {1 - p\left( {x}^{\prime }\right) }\right)$
+
+ end if
+
+ ${\delta }^{\prime } \leftarrow \delta + \xi \cdot \operatorname{sign}\left( {{\nabla }_{x}\ell \left( {x}^{\prime }\right) }\right)$
+
+ $\delta \leftarrow \mathop{\prod }\limits_{{B\left( {x,\epsilon }\right) }}{\delta }^{\prime }\; \vartriangleright$ projecting ${\delta }^{\prime }$ to $B\left( {x,\epsilon }\right)$
+
+ end for
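The confidence score attacked in Algorithm 2 is a logistic regression over per-layer Mahalanobis scores $M_{\ell}(x)$. A minimal sketch of the score computation; the layer scores, weights $\alpha_{\ell}$ and bias $b$ below are illustrative placeholders, not fitted values:

```python
import math

def mahalanobis_confidence(layer_scores, alphas, b):
    """p(x) = sigmoid(sum_l alpha_l * M_l(x) + b), the detector score in Algorithm 2."""
    z = sum(a * m for a, m in zip(alphas, layer_scores)) + b
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical per-layer Mahalanobis confidence scores and fitted parameters.
scores = [-1.2, -0.4, -0.8]
alphas = [0.5, 1.0, 0.7]
p = mahalanobis_confidence(scores, alphas, b=1.5)
```

Algorithm 2 ascends $-\log p(x')$ for inliers (driving $p$ toward 0) and $-\log(1 - p(x'))$ for outliers (driving $p$ toward 1).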
+
+§ ALOE: ADVERSARIAL LEARNING WITH INLIER AND OUTLIER EXPOSURE
+
+In this section, we introduce a novel method called Adversarial Learning with inlier and Outlier Exposure (ALOE) to improve the robustness of the OOD detector $G\left( \cdot \right)$ built on top of the neural network $f\left( \cdot \right)$ against input perturbations.
+
+Training Objective. We train our model ALOE against two types of perturbed examples. For in-distribution inputs $x \in {P}_{\mathbf{X}}$ , ALOE creates an adversarial inlier within the $\epsilon$ -ball that maximizes the negative log likelihood. Training with perturbed examples from the in-distribution helps calibrate the error on inliers, and makes the model more invariant to additive noise. In addition, our method leverages an auxiliary unlabeled dataset ${\mathcal{D}}_{\text{ out }}^{\mathrm{{OE}}}$ drawn from ${U}_{\mathbf{X}}$ as used in (Hendrycks, Mazeika, and Dietterich 2018), but with a different objective. While OE directly uses the original images $x \in {\mathcal{D}}_{\text{ out }}^{\mathrm{{OE}}}$ as outliers, ALOE creates adversarial outliers by searching within the $\epsilon$ -ball for the point that maximizes the KL-divergence between the model output and a uniform distribution. The overall training objective of ${F}_{\text{ ALOE }}$ can be formulated as a min-max game given by
+
+$$
+\mathop{\operatorname{minimize}}\limits_{\theta }\;{\mathbb{E}}_{\left( {x,y}\right) \sim {\mathcal{D}}_{\text{ in }}^{\text{ train }}}\mathop{\max }\limits_{{\delta \in B\left( {x,\epsilon }\right) }}\left\lbrack {-\log {F}_{\theta }{\left( x + \delta \right) }_{y}}\right\rbrack + \lambda \cdot {\mathbb{E}}_{x \sim {\mathcal{D}}_{\text{ out }}^{\mathrm{{OE}}}}\mathop{\max }\limits_{{\delta \in B\left( {x,\epsilon }\right) }}\left\lbrack {{L}_{\mathrm{{CE}}}\left( {{F}_{\theta }\left( {x + \delta }\right) ,{\mathcal{U}}_{K}}\right) }\right\rbrack \tag{4}
+$$
+
+where ${F}_{\theta }\left( x\right)$ is the softmax output of the neural network.
+
+To solve the inner max of these objectives, we use the Projected Gradient Descent (PGD) method (Madry et al. 2017), which is the standard method for large-scale constrained optimization. The hyper-parameters of PGD used in the training will be provided in the experiments.
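The min-max structure of Eq. (4) can be sketched end-to-end on a toy linear-softmax model: the inner maximization (approximated here by signed numerical-gradient ascent from $\delta = 0$, a simplification of the PGD procedure) can only raise each loss term relative to its clean value, and the outer minimization would then take a gradient step on $\theta$. The model and data below are assumptions for illustration:

```python
import math

def softmax(z):
    m = max(z)
    e = [math.exp(v - m) for v in z]
    s = sum(e)
    return [v / s for v in e]

def F(x, W):
    """Toy K-class model: softmax over linear logits W x."""
    return softmax([sum(wi * xi for wi, xi in zip(row, x)) for row in W])

def inner_max(loss, x, eps, steps, step, h=1e-5):
    """Approximate max over ||delta||_inf <= eps of loss(x + delta) by signed-gradient ascent."""
    delta = [0.0] * len(x)
    for _ in range(steps):
        xp = [a + d for a, d in zip(x, delta)]
        for i in range(len(x)):
            hi, lo = xp[:], xp[:]
            hi[i] += h
            lo[i] -= h
            g = (loss(hi) - loss(lo)) / (2 * h)
            move = step * (1.0 if g > 0 else -1.0 if g < 0 else 0.0)
            delta[i] = max(-eps, min(eps, delta[i] + move))  # ascend, then project
    return [a + d for a, d in zip(x, delta)]

def aloe_loss(W, x_in, y, x_oe, lam, eps=0.1, steps=5, step=0.02):
    """Eq. (4): adversarial NLL on an inlier plus lam * adversarial CE-to-uniform on an outlier."""
    K = len(W)
    nll = lambda z: -math.log(F(z, W)[y])
    ce_u = lambda z: -sum(math.log(p) for p in F(z, W)) / K
    return (nll(inner_max(nll, x_in, eps, steps, step))
            + lam * ce_u(inner_max(ce_u, x_oe, eps, steps, step)))

W = [[1.5, -0.5], [-0.5, 1.5]]   # hypothetical 2-class model parameters theta
x_in, y = [1.0, 0.0], 0          # labeled in-distribution sample
x_oe = [0.2, -0.1]               # auxiliary outlier sample
adv = aloe_loss(W, x_in, y, x_oe, lam=0.5)
```

Because the inner search starts at $\delta = 0$, the adversarial objective is never below the clean objective, which is exactly what makes the outer minimization a worst-case training signal.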
+
+Once the model ${F}_{\text{ ALOE }}$ is trained, it can be used for downstream OOD detection by combining with approaches such as MSP and ODIN. The corresponding detectors can be constructed as ${G}_{\mathrm{{MSP}}}\left( {x;\gamma ,{F}_{\mathrm{{ALOE}}}}\right)$ , and ${G}_{\mathrm{{ODIN}}}\left( {x;T,\eta ,\gamma ,{F}_{\mathrm{{ALOE}}}}\right)$ , respectively.
+
+Possible Variants. We also derive two other variants of robust training objective for OOD detection. The first one performs adversarial training only on the inliers. We denote this method as ADV, which is equivalent to the objective used in (Madry et al. 2017). The training objective for ADV is:
+
+$\mathop{\operatorname{minimize}}\limits_{\theta }\;{\mathbb{E}}_{\left( {x,y}\right) \sim {\mathcal{D}}_{\text{ in }}^{\text{ train }}}\mathop{\max }\limits_{{\delta \in B\left( {x,\epsilon }\right) }}\left\lbrack {-\log {F}_{\theta }{\left( x + \delta \right) }_{y}}\right\rbrack$
+
+Alternatively, we also considered performing adversarial training on inlier examples while simultaneously performing outlier exposure as in (Hendrycks, Mazeika, and Dietterich 2018). We refer to this variant as AOE (adversarial learning with outlier exposure). The training objective for AOE is:
+
+$$
+\mathop{\operatorname{minimize}}\limits_{\theta }\;{\mathbb{E}}_{\left( {x,y}\right) \sim {\mathcal{D}}_{\text{ in }}^{\text{ train }}}\mathop{\max }\limits_{{\delta \in B\left( {x,\epsilon }\right) }}\left\lbrack {-\log {F}_{\theta }{\left( x + \delta \right) }_{y}}\right\rbrack + \lambda \cdot {\mathbb{E}}_{x \sim {\mathcal{D}}_{\text{ out }}^{\text{ OE }}}\left\lbrack {{L}_{\mathrm{{CE}}}\left( {{F}_{\theta }\left( x\right) ,{\mathcal{U}}_{K}}\right) }\right\rbrack
+$$
+
+We provide ablation studies comparing these variants with ALOE in the next section.
+
+§ EXPERIMENTS
+
+In this section we perform extensive experiments to evaluate previous OOD detection methods and our ALOE method under adversarial attacks on in-distribution and OOD inputs. Our main findings are summarized as follows:
+
+(1) Classic OOD detection methods such as ODIN, Mahalanobis, and OE fail drastically under our adversarial attacks even with a very small perturbation budget.
+
+(2) Our method ALOE significantly improves OOD detection performance under our adversarial attacks compared to classic OOD detection methods, while its variants ADV and AOE perform worse than ALOE on this task. Combining ALOE with other OOD detection approaches such as ODIN further improves performance. Moreover, ALOE improves model robustness while maintaining almost the same classification accuracy on clean test inputs (results in the appendix).
+
+(3) Common adversarial examples targeting the image classifier $f\left( x\right)$ with small perturbations should be regarded as in-distribution rather than OOD.
+
+Next we provide more details.
+
+§ SETUP
+
+In-distribution Datasets. We use the GTSRB (Stallkamp et al. 2012), CIFAR-10 and CIFAR-100 datasets (Krizhevsky, Hinton et al. 2009) as in-distribution datasets. The pixel values of all images are normalized to be in the range $\left\lbrack {0,1}\right\rbrack$ .
+
+Out-of-distribution Datasets. For the auxiliary outlier dataset, we use 80 Million Tiny Images (Torralba, Fergus, and Freeman 2008), which is a large-scale, diverse dataset scraped from the web. We follow the same deduplication procedure as in (Hendrycks, Mazeika, and Dietterich 2018) and remove all examples in this dataset that appear in CIFAR-10 and CIFAR-100 to ensure that ${\mathcal{D}}_{\text{ out }}^{\mathrm{{OE}}}$ and ${\mathcal{D}}_{\text{ out }}^{\text{ test }}$ are disjoint. For the OOD test datasets, we follow the settings in (Liang, Li, and Srikant 2017; Hendrycks, Mazeika, and Dietterich 2018). For CIFAR-10 and CIFAR-100, we use six different natural image datasets: SVHN, Textures, Places365, LSUN (crop), LSUN (resize), and iSUN. For GTSRB, we use the following six datasets that are sufficiently different from it: CIFAR-10, Textures, Places365, LSUN (crop), LSUN (resize), and iSUN. Again, the pixel values of all images are normalized to be in the range $\left\lbrack {0,1}\right\rbrack$ . The details of these datasets can be found in the appendix.
+
+Architectures and Training Configurations. We use the state-of-the-art neural network architecture DenseNet (Huang et al. 2017). We follow the same setup as in (Huang et al. 2017), with depth $L = {100}$ , growth rate $k = {12}$ (Dense-BC) and dropout rate 0. All neural networks are trained with stochastic gradient descent with Nesterov momentum (Duchi, Hazan, and Singer 2011; Kingma and Ba 2014). Specifically, we train Dense-BC with momentum 0.9 and ${\ell }_{2}$ weight decay with a coefficient of ${10}^{-4}$ . For GTSRB, we train for 10 epochs; for CIFAR-10 and CIFAR-100, we train for 100 epochs. For the in-distribution dataset, we use batch size 64; for outlier exposure with ${\mathcal{D}}_{\text{ out }}^{\mathrm{{OE}}}$ , we use batch size 128. The initial learning rate of 0.1 decays following a cosine learning rate schedule (Loshchilov and Hutter 2016).
+
+Hyperparameters. For ODIN (Liang, Li, and Srikant 2017), we choose the temperature scaling parameter $T$ and perturbation magnitude $\eta$ by validating on random noise data, which does not depend on prior knowledge of the out-of-distribution test datasets. In all of our experiments, we set $T = {1000}$ . We set $\eta = {0.0004}$ for GTSRB, $\eta = {0.0014}$ for CIFAR-10, and $\eta = {0.0028}$ for CIFAR-100. For Mahalanobis (Lee et al. 2018), we randomly select 1,000 examples from ${\mathcal{D}}_{\text{ in }}^{\text{ train }}$ and 1,000 examples from ${\mathcal{D}}_{\text{ out }}^{\mathrm{{OE}}}$ to train the logistic regression model and tune $\eta$ , where $\eta$ is chosen from 21 evenly spaced numbers from 0 to 0.004, and the optimal parameters are chosen to minimize the FPR at 95% TPR. For the OE, AOE and ALOE methods, we fix the regularization parameter $\lambda$ to 0.5. In the PGD that solves the inner max of ADV, AOE and ALOE, we use step size 1/255, $\lfloor {255\epsilon } + 1\rfloor$ steps, and a random start. For our attack algorithm, we set $\xi = 1/{255}$ and $m = {10}$ in our experiments. The adversarial budget $\epsilon$ is set to $1/{255}$ by default; we also perform ablation studies varying this value (see the results in the appendix).
+
+
+Figure 2: Confidence score distributions produced by different methods. For illustration purposes, we use CIFAR-10 as in-distribution and SVHN as out-of-distribution. (a) and (b) compare the score distributions for Outlier Exposure (Hendrycks, Mazeika, and Dietterich 2018), evaluated on clean images and PGD attacked images, respectively. The score distributions shift in the opposite direction under our attack, which causes the method to fail. Our method ALOE mitigates this distribution shift, as shown in (c). When combined with ODIN (Liang, Li, and Srikant 2017), the score distributions become further separable between in- and out-distributions, as shown in (d).
+
+More experiment settings can be found in the appendix.
+
+§ EVALUATION METRICS
+
+We report main results using three metrics described below.
+
+FPR at 95% TPR. This metric calculates the false positive rate (FPR) on out-of-distribution examples when the true positive rate (TPR) is ${95}\%$ .
+
+Detection Error. This metric corresponds to the minimum mis-detection probability over all possible thresholds $\gamma$ , which is $\mathop{\min }\limits_{\gamma }L\left( {{P}_{X},{Q}_{X};G\left( {x;\gamma }\right) }\right)$ .
+
+AUROC. The Area Under the Receiver Operating Characteristic curve is a threshold-independent metric (Davis and Goadrich 2006). It can be interpreted as the probability that a positive example is assigned a higher detection score than a negative example (Fawcett 2006). A perfect detector corresponds to an AUROC score of ${100}\%$ .
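The three metrics can be computed directly from raw detection scores. A minimal pure-Python sketch (in-distribution is the positive class; the candidate-threshold scan for the detection error only checks observed score values, which suffices because the error is piecewise constant between them; the scores are illustrative):

```python
import math

def fpr_at_tpr(in_scores, out_scores, tpr=0.95):
    """FPR on OOD scores at the loosest threshold keeping TPR on in-distribution >= tpr."""
    ranked = sorted(in_scores, reverse=True)
    thresh = ranked[math.ceil(tpr * len(ranked)) - 1]
    return sum(s >= thresh for s in out_scores) / len(out_scores)

def min_detection_error(in_scores, out_scores):
    """min over gamma of 0.5 * (miss rate on inliers + false-accept rate on outliers)."""
    best = 0.5  # trivial detector that accepts (or rejects) everything
    for gamma in set(in_scores + out_scores):
        miss = sum(s < gamma for s in in_scores) / len(in_scores)
        false_accept = sum(s >= gamma for s in out_scores) / len(out_scores)
        best = min(best, 0.5 * (miss + false_accept))
    return best

def auroc(in_scores, out_scores):
    """Probability a random inlier outscores a random outlier; ties count half."""
    wins = sum((i > o) + 0.5 * (i == o) for i in in_scores for o in out_scores)
    return wins / (len(in_scores) * len(out_scores))

# Hypothetical detector scores for illustration only.
in_scores = [0.9, 0.85, 0.8, 0.7]
out_scores = [0.6, 0.3, 0.2, 0.1]
```

With perfectly separated scores, AUROC is 100%, and both FPR at 95% TPR and the detection error are 0.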
+
+§ RESULTS
+
+All the values reported in this section are averaged over six OOD test datasets.
+
+Columns 3–5 report performance without attack; columns 6–8 report performance with attack ($\epsilon = 1/{255}$, $m = {10}$).
+
+| ${\mathcal{D}}_{\text{ in }}^{\text{ test }}$ | Method | FPR (95% TPR) ↓ | Detection Error ↓ | AUROC ↑ | FPR (95% TPR) ↓ | Detection Error ↓ | AUROC ↑ |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| GTSRB | MSP (Hendrycks and Gimpel 2016) | 1.13 | 2.42 | 98.45 | 97.59 | 26.02 | 73.27 |
+| | ODIN (Liang, Li, and Srikant 2017) | 1.42 | 2.10 | 98.81 | 75.94 | 24.87 | 75.41 |
+| | Mahalanobis (Lee et al. 2018) | 1.31 | 2.87 | 98.29 | 100.00 | 29.80 | 70.45 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 0.02 | 0.34 | 99.92 | 25.85 | 5.90 | 96.09 |
+| | OE+ODIN | 0.02 | 0.36 | 99.92 | 14.14 | 5.59 | 97.18 |
+| | ADV (Madry et al. 2017) | 1.45 | 2.88 | 98.66 | 17.96 | 6.95 | 94.83 |
+| | AOE | 0.00 | 0.62 | 99.86 | 1.49 | 2.55 | 98.35 |
+| | ALOE (ours) | 0.00 | 0.44 | 99.76 | 0.66 | 1.80 | 98.95 |
+| | ALOE+ODIN (ours) | 0.01 | 0.45 | 99.76 | 0.69 | 1.80 | 98.98 |
+| CIFAR-10 | MSP (Hendrycks and Gimpel 2016) | 51.67 | 14.06 | 91.61 | 99.98 | 50.00 | 10.34 |
+| | ODIN (Liang, Li, and Srikant 2017) | 25.76 | 11.51 | 93.92 | 93.45 | 46.73 | 28.45 |
+| | Mahalanobis (Lee et al. 2018) | 31.01 | 15.72 | 88.53 | 89.75 | 44.30 | 32.54 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 4.47 | 4.50 | 98.54 | 99.99 | 50.00 | 25.13 |
+| | OE+ODIN | 4.17 | 4.31 | 98.55 | 99.02 | 47.84 | 34.29 |
+| | ADV (Madry et al. 2017) | 66.99 | 19.22 | 87.23 | 98.44 | 31.72 | 66.73 |
+| | AOE | 10.46 | 6.58 | 97.76 | 88.91 | 26.02 | 78.39 |
+| | ALOE (ours) | 5.47 | 5.13 | 98.34 | 53.99 | 14.19 | 91.26 |
+| | ALOE+ODIN (ours) | 4.48 | 4.66 | 98.55 | 41.59 | 12.73 | 92.69 |
+| CIFAR-100 | MSP (Hendrycks and Gimpel 2016) | 81.72 | 33.46 | 71.89 | 100.00 | 50.00 | 2.39 |
+| | ODIN (Liang, Li, and Srikant 2017) | 58.84 | 22.94 | 83.63 | 98.87 | 49.87 | 21.02 |
+| | Mahalanobis (Lee et al. 2018) | 53.75 | 27.63 | 70.85 | 95.79 | 47.53 | 17.92 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 56.49 | 19.38 | 87.73 | 100.00 | 50.00 | 2.94 |
+| | OE+ODIN | 47.59 | 17.39 | 90.14 | 99.49 | 50.00 | 20.02 |
+| | ADV (Madry et al. 2017) | 85.47 | 33.17 | 71.77 | 99.64 | 44.86 | 41.34 |
+| | AOE | 60.00 | 23.03 | 84.57 | 95.79 | 43.07 | 53.80 |
+| | ALOE (ours) | 61.99 | 23.56 | 83.72 | 92.01 | 40.09 | 61.20 |
+| | ALOE+ODIN (ours) | 58.48 | 21.38 | 85.75 | 88.50 | 36.20 | 66.61 |
+
+Table 1: Distinguishing in- and out-of-distribution test set data for image classification. We contrast performance on clean images (without attack) and PGD attacked images. $\uparrow$ indicates larger value is better, and $\downarrow$ indicates lower value is better. All values are percentages and are averaged over six OOD test datasets.
+
+| $\mathcal{D}_{\text{in}}^{\text{test}}$ | Method | 1-FPR (95% TPR) |
+| --- | --- | --- |
+| CIFAR-10 | MSP (Hendrycks and Gimpel 2016) | 10.75 |
+| | ODIN (Liang, Li, and Srikant 2017) | 4.02 |
+| | Mahalanobis (Lee et al. 2018) | 7.13 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 12.22 |
+| | OE+ODIN | 12.95 |
+| | ADV (Madry et al. 2017) | 7.69 |
+| | AOE | 11.18 |
+| | ALOE (ours) | 8.85 |
+| | ALOE+ODIN (ours) | 8.71 |
+| CIFAR-100 | MSP (Hendrycks and Gimpel 2016) | 0.06 |
+| | ODIN (Liang, Li, and Srikant 2017) | 0.74 |
+| | Mahalanobis (Lee et al. 2018) | 4.29 |
+| | OE (Hendrycks, Mazeika, and Dietterich 2018) | 4.36 |
+| | OE+ODIN | 5.21 |
+| | ADV (Madry et al. 2017) | 3.14 |
+| | AOE | 8.08 |
+| | ALOE (ours) | 7.32 |
+| | ALOE+ODIN (ours) | 7.06 |
+
+Table 2: Distinguishing adversarial examples generated by the PGD attack on the image classifier $f\left( x\right)$. 1-FPR indicates the rate at which adversarial examples are misclassified as out-of-distribution examples. For the PGD attack, we choose $\epsilon = 1/{255}$ and use 10 attack steps. All values are percentages.
+
+Classic OOD detection methods fail under our attack. As shown in Table 1, although classic OOD detection methods (e.g., MSP, ODIN, Mahalanobis, OE and OE+ODIN) perform quite well at detecting natural OOD samples, their performance drops substantially under the attack, even with a minimal attack budget ($\epsilon = 1/{255}$ and $m = {10}$). For the best-performing method (OE+ODIN), the FPR at 95% TPR increases drastically from 4.17% (without attack) to 99.02% (with attack) on the CIFAR-10 dataset.
+
+ALOE improves robust OOD detection performance. As shown in Table 1, our method ALOE significantly improves OOD detection performance under adversarial attack. For example, ALOE can substantially improve the AUROC from 34.29% (state of the art: OE+ODIN) to 92.69% on the CIFAR-10 dataset. The performance improves further when ALOE is combined with ODIN. We observe that this trend holds consistently when GTSRB and CIFAR-100 serve as in-distribution training data. We also find that adversarial training (ADV) and its combination with outlier exposure (AOE) yield slightly less competitive results.
+
+To better understand our method, we analyze the distribution of confidence scores produced by the OOD detectors on SVHN (out-of-distribution) and CIFAR-10 (in-distribution). As shown in Figure 2, OE distinguishes in-distribution and out-of-distribution samples quite well, since the confidence scores are well separated. However, under our attack, the confidence scores of in-distribution samples move toward 0 and the scores of out-of-distribution samples move toward 1.0, which leaves the detector unable to distinguish in- and out-of-distribution samples. With our method, the confidence scores (under attack) become separable again and shift in the right direction. Combining ALOE with ODIN separates the scores produced by the detector even further.
+
+Evaluating on common adversarial examples targeting the classifier $f\left( x\right)$. Our work is primarily concerned with adversarial examples targeting OOD detectors $G\left( x\right)$. This is very different from the common notion of adversarial examples, which are constructed to fool the image classifier $f\left( x\right)$. Under our robust definition of OOD detection, adversarial examples constructed from in-distribution data with small perturbations to fool the image classifier $f\left( x\right)$ should be regarded as in-distribution. To validate this point, we generate PGD-attacked images w.r.t. the original classification models $f\left( x\right)$ trained on CIFAR-10 and CIFAR-100, respectively, using a small perturbation budget of $1/{255}$. We measure the performance of the OOD detectors $G\left( x\right)$ by reporting 1-FPR (at 95% TPR), which indicates the rate at which adversarial examples are misclassified as out-of-distribution. As shown in Table 2, this metric is generally low for both classic and robust OOD detection methods, which suggests that common adversarial examples with small perturbations are closer to in-distribution than to OOD.
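
For reference, the $\ell_\infty$ PGD procedure used above (sign-gradient steps projected back into an $\epsilon$-ball) can be sketched against a toy model with an analytic gradient. The logistic classifier, step-size heuristic, and $[0,1]$ pixel range below are illustrative assumptions, not the paper's actual setup:

```python
import numpy as np

def pgd_linf(x, y, w, b, eps=1/255, steps=10):
    """Untargeted l-inf PGD against a logistic model p(y=1|x) = sigmoid(w.x + b).

    The gradient of the cross-entropy loss w.r.t. the input is (p - y) * w,
    so each step ascends along its sign, then projects back into the eps-ball
    and the valid pixel range [0, 1].
    """
    alpha = 2.5 * eps / steps          # common step-size heuristic (assumption)
    x_adv = x.copy()
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(x_adv @ w + b)))
        grad = (p - y) * w             # analytic d(loss)/dx
        x_adv = x_adv + alpha * np.sign(grad)
        x_adv = np.clip(x_adv, x - eps, x + eps)   # l-inf projection
        x_adv = np.clip(x_adv, 0.0, 1.0)           # keep a valid pixel range
    return x_adv
```

With $\epsilon = 1/255$ the resulting perturbation is tiny, yet it monotonically lowers the model's confidence in the true label, which is exactly the regime evaluated in Table 2.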
+
+## Conclusion
+
+In this paper, we study the problem of robust out-of-distribution detection and propose adversarial attack algorithms that reveal the lack of robustness of a wide range of OOD detection methods. We show that state-of-the-art OOD detection methods can fail catastrophically under both adversarial in-distribution and out-of-distribution attacks. To counteract these threats, we propose a new method called ALOE, which substantially improves the robustness of state-of-the-art OOD detection. We empirically analyze our method under different parameter settings and optimization objectives, and provide theoretical insights behind our approach. Future work involves exploring alternative semantic-preserving perturbations beyond adversarial attacks.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Y3fjmc2vkKA/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Y3fjmc2vkKA/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..36a62fa5479a28db7f80844de860e4fd4a08070f
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Y3fjmc2vkKA/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,167 @@
+# Patch Vestiges in the Adversarial Examples Against Vision Transformer Can Be Leveraged for Adversarial Detection
+
+blind review
+
+blind review
+
+## Abstract
+
+Vision Transformer (ViT), a Transformer-based architecture that divides images into patches, can match or surpass convolution-based networks in multiple Computer Vision tasks. However, ViT is also vulnerable to adversarial examples (AEs), so the attack and defense of ViT has become a rewarding research topic. Recent studies have found that the AEs against ViT seem to have grid-like textures that coincide with the patches. We confirm that this impression is correct. In this paper, we show that these grid-like textures are vestiges left by the patch division of ViT, and we name them Patch Vestiges. We propose statistics that quantitatively measure the size of Patch Vestiges in images and AEs. We also build a linear-regression classifier that uses the proposed statistics to detect the AEs against ViT in practice. The experiments show that the performance of this simple classifier can even match some recent adversarial detection methods, suggesting that Patch Vestiges are worth considering as a critical factor when attacking ViT or detecting the AEs against it.
+
+Transformer (Vaswani et al. 2017) is based almost entirely on self-attention mechanisms and fully connected layers. It creatively subverts the architecture of RNNs and achieves state-of-the-art performance on almost all Natural Language Processing tasks. It is natural to hope that Transformer can be applied to the field of Computer Vision. However, Transformer requires a sequential input, whose shape is quite different from that of an image. Vision Transformer (ViT) (Dosovitskiy et al. 2020) overcomes this difficulty by dividing an image into small patches and linking them into a sequence. With the help of Transformer, ViT achieves excellent performance in many Computer Vision tasks.
+
+Although ViT is effective, it shares a similar weakness with CNNs when facing adversarial examples. Adversarial Examples (AEs) (Szegedy et al. 2013) are images with artificial perturbations that are small enough to fool the human eye yet can make deep neural networks output wrong results. Some preliminary studies (Bhojanapalli et al. 2021; Shao et al. 2021; Mahmood, Mahmood, and van Dijk 2021) show that ViT is vulnerable to all common AEs, and even weaker than CNNs under some attacks. The good news is that AEs against CNNs are difficult to transfer directly to ViT (Shao et al. 2021; Naseer et al. 2021; Aldahdooh, Hamidouche, and Déforges 2021). It is therefore meaningful to study the unique characteristics of the AEs against ViT.
+
+To the human eye, the magnified adversarial perturbations of the AEs against ViT seem to have grid-like textures and exhibit some periodicity and repetition (Bhojanapalli et al. 2021), as shown in Figure 1. This was the initial inspiration for this paper. A very intuitive conjecture is that the AEs against ViT may also be divided into patches. In this paper, we confirm this conjecture and introduce the concept of Patch Vestiges, defined as the abnormalities of the AEs against ViT that are caused by the patch division.
+
+We also find a method to measure Patch Vestiges quantitatively. We propose Leaps to measure the step changes between two adjacent pixels that lie in different patches; we assume these step changes are the key points of Patch Vestiges. Additionally, we propose the statistics PV, IPC and NCC based on Leaps and build a binary linear-regression classifier on top of them. The experiments show that our approximations of Leaps are effective and that, with the proposed statistics PV, IPC and NCC, the linear-regression classifier can detect the AEs against ViT effectively.
+
+We sum up the key contributions of this paper as follows:
+
+- We substantiate the human intuition that the patches used in Vision Transformer leave vestiges in the adversarial examples.
+
+- We introduce the concept of Patch Vestiges and find a quantitative measurement for them.
+
+- We show that Patch Vestiges can be a critical weakness of the adversarial examples against ViT.
+
+## Related Work
+
+Vision Transformers Vision Transformer (ViT) (Dosovitskiy et al. 2020) is the first model to successfully leverage Transformer (Vaswani et al. 2017) in Computer Vision tasks by dividing images into patches. DeiT (Touvron et al. 2021) uses a similar model structure but adds a new distillation token. T2T-ViT (Yuan et al. 2021) recursively integrates adjacent tokens to better extract low-level image features. Swin Transformer (Liu et al. 2021) shows the superiority of Transformer and defeats CNN-based models in many tasks by bringing in the shifted-window scheme.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+
+
+Figure 1: A clean image with its AEs and the corresponding adversarial perturbations. The image "plane" is chosen from the ILSVRC2012 (ImageNet) dataset (Russakovsky et al. 2015). ResNet and ViT both give the correct classification when the input is clean. The AEs generated by PGD with ${\ell }_{\infty } = 8$ make ResNet and ViT both output the wrong category "dog". The adversarial perturbations shown are magnified from their real values to make them visible.
+
+Adversarial Detection Bayesian uncertainty (BU) and kernel density (KD) were previously proposed to detect out-of-manifold points (Feinman et al. 2017). RCE (Pang et al. 2018) uses a new Reverse Cross-Entropy loss based on KD to better separate clean images from AEs. LID (Ma et al. 2018) detects AEs by local sparseness. A Mahalanobis-distance-based score was proposed afterwards (Lee et al. 2018). Under the assumption that AEs lie outside the manifold of natural scenes, natural scene statistics (NSS) are used in the detector of Kherchouche et al. (2020). More recently, LiBRe (Deng et al. 2021) leverages Bayesian neural networks with refined training procedures for adversarial detection.
+
+## Methodology
+
+Despite the recent excellent improvements of ViT, we focus on the vanilla ViT model (Dosovitskiy et al. 2020) because its fixed patch division keeps the analysis stable. The vanilla ViT divides an image into $n \times n$ patches in a grid shape, creating several horizontal and vertical dividing lines. Intuitively, the adversarial perturbations of adjacent pixels astride the dividing lines should show step changes. We measure these step changes by Leaps. To calculate Leaps, we approximately assume that the pixel values of the clean image and the adversarial perturbations inside the patches vary mildly, and that only the adversarial perturbations across the patches change abruptly, as shown in Figure 2(a).
+
+
+
+Figure 2: (a) The illustrative diagrams of Leaps. (b) The example positions of the proposed statistics PV, IPC and NCC.
+
+We calculate Leaps as follows. We denote the change of pixel values between adjacent pixels $i$ and $j$ as $G\left( {i, j}\right)$. For both clean images and inside-patch adversarial perturbations, a central $G$ should be approximately equal to the average of its two neighboring $G$s; for adversarial perturbations astride the dividing lines, this equality does not hold. Thus we define
+
+$$
+\operatorname{Leap}\left( {i, \vdash }\right) = \left| {G\left( {i, i \oplus 1}\right) - \frac{G\left( {i, i \ominus 1}\right) + G\left( {i \oplus 1, i \oplus 2}\right)}{2}}\right| , \tag{1}
+$$
+
+where $\vdash$ denotes the direction, either horizontal or vertical, and $i \oplus n$, $i \ominus n$ mean moving $n$ pixels forward or backward along the direction $\vdash$. $\operatorname{Leap}\left( {i, \vdash }\right)$ measures the non-smoothness of the local changes $G$ around pixel $i$. The higher $\operatorname{Leap}\left( {i, \vdash }\right)$ is when $i$ and $i \oplus 1$ straddle a dividing line, the more likely the given image is an AE against ViT.
+
+Based on Leap, we propose PV, IPC and NCC, standing for Patch Vestiges, Inside-Patch Contrast and Natural Change Contrast respectively. PV consists of Leaps astride the dividing lines, IPC consists of Leaps fully inside the patches, and NCC consists of all the adjacent changes (see Figure 2(b)). More precise definitions are:
+
+$$
+{PV}\left( X\right) = {\operatorname{Ave}}_{ \vdash \in \{ -, \mid \} , i \in {PB}\left( {X, \vdash }\right) }\left( {\operatorname{Leap}\left( {i, \vdash }\right) }\right) ,
+$$
+
+$$
+{IPC}\left( X\right) = {\operatorname{Ave}}_{ \vdash \in \{ -, \mid \} , i \in {PI}\left( X\right) }\left( {\operatorname{Leap}\left( {i, \vdash }\right) }\right) , \tag{2}
+$$
+
+$$
+{NCC}\left( X\right) = {\operatorname{Ave}}_{ \vdash \in \{ - , \mid \} , i \in X}\left( \left| {G\left( {i, i \oplus 1}\right) }\right| \right) ,
+$$
+
+where $\operatorname{Ave}$ denotes the average, ${PB}\left( {X, \vdash }\right)$ is the set of pixels $i$ such that $i$ and $i \oplus 1$ straddle a dividing line, and ${PI}\left( X\right)$ is the set of pixels $i$ such that $i \ominus 1, i, i \oplus 1, i \oplus 2$ all lie in the same patch. Under this definition, PV will be much higher than IPC only for images with strong Patch Vestiges. NCC measures the natural pixel fluctuations of clean images and serves as a baseline for PV and IPC.
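
A minimal numpy sketch of these statistics for a single-channel image may make the index bookkeeping of Eq. (1) and Eq. (2) concrete. This is our own illustration, not the authors' code; the function name and patch-size handling are assumptions:

```python
import numpy as np

def leap_stats(img, patch=4):
    """Compute PV, IPC and NCC for a 2-D grayscale image.

    PV averages Leap over pixel pairs straddling patch-dividing lines,
    IPC over Leaps computed fully inside a patch, and NCC over all
    adjacent absolute changes |G(i, i+1)|; both directions are pooled.
    """
    pv, ipc, ncc = [], [], []
    for x in (img, img.T):                  # horizontal, then vertical direction
        g = np.diff(x, axis=1)              # g[:, i] = G(i, i+1) along the rows
        ncc.append(np.abs(g).ravel())
        # Leap(i) = |G(i, i+1) - (G(i-1, i) + G(i+1, i+2)) / 2|; column k of
        # `leap` corresponds to pixel index i = k + 1.
        leap = np.abs(g[:, 1:-1] - 0.5 * (g[:, :-2] + g[:, 2:]))
        idx = np.arange(1, x.shape[1] - 2)
        on_boundary = (idx % patch) == (patch - 1)          # i, i+1 straddle a line
        inside = ((idx - 1) // patch) == ((idx + 2) // patch)  # i-1..i+2 in one patch
        pv.append(leap[:, on_boundary].ravel())
        ipc.append(leap[:, inside].ravel())
    return (np.mean(np.concatenate(pv)),
            np.mean(np.concatenate(ipc)),
            np.mean(np.concatenate(ncc)))
```

On an image that is piecewise constant over $4 \times 4$ patches, all inside-patch Leaps vanish while boundary Leaps equal the step height, so PV dominates IPC exactly as the definition intends.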
+
+We also leverage linear regression and build a simple binary classifier $y = {a}_{1}{PV} + {a}_{2}{IPC} + {a}_{3}{NCC} + {a}_{4}$. If PV differs strongly from IPC in the AEs against ViT, this binary classifier will have enough capacity to distinguish those AEs from clean images. And since PV, IPC and NCC are simple statistics, if even this linear classifier works well, DNNs should be all the more capable of digging out the artifacts in Patch Vestiges.
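
Such a classifier amounts to ordinary least squares on three scalar features with a 0.5 decision threshold. A sketch under that assumption follows; the synthetic feature values in the usage note are invented for illustration and are not the paper's data:

```python
import numpy as np

def fit_linear_detector(feats, labels):
    """Fit y = a1*PV + a2*IPC + a3*NCC + a4 by least squares.

    feats: (n, 3) array of [PV, IPC, NCC] per image; labels: 0 (clean), 1 (AE).
    Returns the weight vector [a1, a2, a3, a4].
    """
    design = np.hstack([feats, np.ones((len(feats), 1))])  # append bias column
    coef, *_ = np.linalg.lstsq(design, labels, rcond=None)
    return coef

def predict(coef, feats):
    """Classify by thresholding the regression output at 0.5."""
    design = np.hstack([feats, np.ones((len(feats), 1))])
    return (design @ coef >= 0.5).astype(int)
```

For example, if AEs carry a PV roughly two units above IPC while clean images have PV close to IPC, fitting on such features separates the two classes almost perfectly at the 0.5 threshold.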
+
+
+
+Figure 3: The distributions of the statistics PV, IPC and NCC over images and AEs on the CIFAR-10 training set. The AEs are generated by the PGD ${\ell }_{\infty } = 8$ attack; the victim models are ResNet and ViT respectively. The solid lines show the frequencies of the statistics accumulated over 100 bins, and the dashed lines mark the averages of the corresponding statistics.
+
+## Experimental Setups
+
+Datasets We use the CIFAR-10 (Krizhevsky 2009) dataset for our experiments. The CIFAR-10 dataset has 50,000 training images, 10,000 test images and 10 categories. The size of each image is $3 \times {32} \times {32}$ .
+
+Attacks We use the white-box adversarial attack methods FGSM (Goodfellow, Shlens, and Szegedy 2015), BIM (Kurakin, Goodfellow, and Bengio 2017a), PGD (Kurakin, Goodfellow, and Bengio 2017b) and DeepFool (DF) (Moosavi-Dezfooli, Fawzi, and Frossard 2016). We restrict all the AEs to ${\ell }_{\infty } = 8$. Notice that DF originally generates AEs with ${\ell }_{\infty } \leq 8$; we rescale the perturbations and use DF* to denote this modification. We run all PGD attacks for 20 iterations. We use the AEs of PGD to train the linear-regression classifier and directly test it on the AEs of all the attack methods.
+
+Victim Models The major victim model is the vanilla Vision Transformer (ViT) (Dosovitskiy et al. 2020). We also use ResNet (He et al. 2016) as a contrast model. The ViT used in the experiments has a $4 \times 4$ patch size, 6 layers and 16 heads for Multi-Head Attention. The ResNet model in the experiments has 56 layers.
+
+Compared Methods We use LID (Ma et al. 2018) and NSS (Kherchouche et al. 2020) for comparison. Notice that the settings of these methods are not strictly in accord with ours; for example, our classifier only requires the input image, the ViT logits and the patch size. The comparison is therefore more of a reference.
+
+Environments We build our project on the open-source toolbox ARES (Dong et al. 2020) and refer to the code of TRADES (Zhang et al. 2019). We run the experiments on a GeForce RTX 2080 Ti.
+
+## Results
+
+We first compare the distributions and averages of PV, IPC and NCC over the clean images, the AEs generated by the PGD attack against ResNet, and those against ViT. All the images and AEs come from the training set of CIFAR-10, and the results are shown in Figure 3. We observe that for the clean images and the AEs against ResNet, the distributions of PV and IPC are similar, but PV in the AEs against ViT is much larger than IPC. Using a t-test, we confirm that PV is significantly larger than IPC and NCC (p < 5e-4) in Figure 3(c). We can also observe in Figure 3(c) that there are large regions where the distribution of PV does not overlap with IPC and NCC. These results show that the assumptions and approximations behind Leaps are effective, and illustrate that Patch Vestiges are a unique and significant characteristic of the AEs against ViT.
+
+| Model | KR | DR (FGSM) | DR (BIM) | DR (PGD) | DR (DF*) |
+| --- | --- | --- | --- | --- | --- |
+| LID | 96.17 | 57.36 | 85.17 | 94.51 | 78.64 |
+| NSS | 93.29 | 87.10 | 66.26 | 62.09 | 61.55 |
+| Ours | 86.90 | 94.60 | 87.74 | 88.11 | 74.62 |
+
+Table 1: The KR (%) on clean images and the DR (%) on the different AEs for the compared methods. DF* is the DF modification introduced in the Experimental Setups section. All the AEs have perturbations with ${\ell }_{\infty } = 8$.
+
+We also train a linear classifier using the PGD attack and compare the keeping rates (KR, the ratio of clean images classified correctly) and the detection rates (DR, the ratio of AEs classified correctly) under different attacks. Table 1 shows the results. We observe that the simple linear-regression classifier, although not state-of-the-art, is comparable with mature adversarial detection methods. This again suggests that Patch Vestiges are significant. The results also show that Patch Vestiges are intrinsic attributes of the AEs against ViT and transfer easily from one attack method to another.
+
+## Conclusion
+
+In this paper, we confirm the human intuition that the patch division of ViT leaves clear vestiges in the adversarial examples. We introduce the concept of Patch Vestiges to measure the extent to which the patch division leaves its trace. We also quantitatively show that Patch Vestiges can be leveraged to detect whether an image is an adversarial example against ViT or a clean one.
+
+Beyond its practical significance, our work may also prompt further thinking. Is a more complicated structure more vulnerable or safer? In many areas, the answer is "vulnerable". But in this paper, the artifacts of ViT on the contrary improve its robustness. Perhaps the question cannot be answered so easily in the area of adversarial examples.
+
+## References
+
+Aldahdooh, A.; Hamidouche, W.; and Déforges, O. 2021. Reveal of Vision Transformers Robustness against Adversarial Attacks. arXiv:2106.03734.
+
+Bhojanapalli, S.; Chakrabarti, A.; Glasner, D.; Li, D.; Unterthiner, T.; and Veit, A. 2021. Understanding Robustness of Transformers for Image Classification. arXiv:2103.14586.
+
+Deng, Z.; Yang, X.; Xu, S.; Su, H.; and Zhu, J. 2021. LiBRe: A Practical Bayesian Approach to Adversarial Detection. In Proceedings of the 2021 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 972-982.
+
+Dong, Y.; Fu, Q.-A.; Yang, X.; Pang, T.; Su, H.; Xiao, Z.; and Zhu, J. 2020. Benchmarking Adversarial Robustness on Image Classification. In Proceedings of the 2020 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
+
+Dosovitskiy, A.; Beyer, L.; Kolesnikov, A.; Weissenborn, D.; Zhai, X.; Unterthiner, T.; Dehghani, M.; Minderer, M.; Heigold, G.; Gelly, S.; Uszkoreit, J.; and Houlsby, N. 2020. An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale. arXiv:2010.11929.
+
+Feinman, R.; Curtin, R. R.; Shintre, S.; and Gardner, A. B. 2017. Detecting Adversarial Samples from Artifacts. arXiv:1703.00410.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. In Proceedings of the International Conference on Learning Representations (ICLR).
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770-778.
+
+Kherchouche, A.; Fezza, S. A.; Hamidouche, W.; and Déforges, O. 2020. Detection of Adversarial Examples in Deep Neural Networks with Natural Scene Statistics. In Proceedings of the 2020 International Joint Conference on Neural Networks (IJCNN), 1-7.
+
+Krizhevsky, A. 2009. Learning Multiple Layers of Features from Tiny Images.
+
+Kurakin, A.; Goodfellow, I. J.; and Bengio, S. 2017a. Adversarial Examples in the Physical World. In Workshop of the International Conference on Learning Representations (ICLR).
+
+Kurakin, A.; Goodfellow, I. J.; and Bengio, S. 2017b. Adversarial Machine Learning at Scale. In Proceedings of the International Conference on Learning Representations (ICLR).
+
+Lee, K.; Lee, K.; Lee, H.; and Shin, J. 2018. A Simple Unified Framework for Detecting Out-of-Distribution Samples and Adversarial Attacks. In Advances in Neural Information Processing Systems (NeurIPS).
+
+Liu, Z.; Lin, Y.; Cao, Y.; Hu, H.; Wei, Y.; Zhang, Z.; Lin, S. C.-F.; and Guo, B. 2021. Swin Transformer: Hierarchical Vision Transformer using Shifted Windows. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
+
+Ma, X.; Li, B.; Wang, Y.; Erfani, S. M.; Wijewickrema, S. N. R.; Houle, M. E.; Schoenebeck, G. R.; Song, D. X.; and Bailey, J. 2018. Characterizing Adversarial Subspaces Using Local Intrinsic Dimensionality. In Proceedings of the International Conference on Learning Representations (ICLR).
+
+Mahmood, K.; Mahmood, R.; and van Dijk, M. 2021. On the Robustness of Vision Transformers to Adversarial Examples. arXiv:2104.02610.
+
+Moosavi-Dezfooli, S.-M.; Fawzi, A.; and Frossard, P. 2016. DeepFool: A Simple and Accurate Method to Fool Deep Neural Networks. In Proceedings of the 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2574-2582.
+
+Naseer, M.; Ranasinghe, K.; Khan, S. H.; Khan, F. S.; and Porikli, F. M. 2021. On Improving Adversarial Transferability of Vision Transformers. arXiv:2106.04169.
+
+Pang, T.; Du, C.; Dong, Y.; and Zhu, J. 2018. Towards Robust Detection of Adversarial Examples. In Advances in Neural Information Processing Systems (NeurIPS).
+
+Russakovsky, O.; Deng, J.; Su, H.; Krause, J.; Satheesh, S.; Ma, S.; Huang, Z.; Karpathy, A.; Khosla, A.; Bernstein, M. S.; Berg, A. C.; and Fei-Fei, L. 2015. ImageNet Large Scale Visual Recognition Challenge. International Journal of Computer Vision, 115: 211-252.
+
+Shao, R.; Shi, Z.; Yi, J.; Chen, P.-Y.; and Hsieh, C.-J. 2021. On the Adversarial Robustness of Visual Transformers. arXiv:2103.15670.
+
+Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I. J.; and Fergus, R. 2013. Intriguing Properties of Neural Networks. Computer Science.
+
+Touvron, H.; Cord, M.; Douze, M.; Massa, F.; Sablayrolles, A.; and Jégou, H. 2021. Training Data-Efficient Image Transformers & Distillation through Attention. In ICML.
+
+Vaswani, A.; Shazeer, N. M.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, L.; and Polosukhin, I. 2017. Attention is All you Need. arXiv:1706.03762.
+
+Yuan, L.; Chen, Y.; Wang, T.; Yu, W.; Shi, Y.; Jiang, Z.-H.; Tay, F. E. H.; Feng, J.; and Yan, S. 2021. Tokens-to-Token ViT: Training Vision Transformers From Scratch on ImageNet. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV), 558-567.
+
+Zhang, H. R.; Yu, Y.; Jiao, J.; Xing, E. P.; Ghaoui, L. E.; and Jordan, M. I. 2019. Theoretically Principled Trade-off between Robustness and Accuracy. In Proceedings of the International Conference on Machine Learning (ICML).
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Y3fjmc2vkKA/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Y3fjmc2vkKA/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c624f7d461f856d49997065c5ff3bf10e15a8c6b
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Y3fjmc2vkKA/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,123 @@
+§ PATCH VESTIGES IN THE ADVERSARIAL EXAMPLES AGAINST VISION TRANSFORMER CAN BE LEVERAGED FOR ADVERSARIAL DETECTION
+
+blind review
+
+blind review
+
+§ ABSTRACT
+
+Vision Transformer (ViT), a Transformer-based architecture that divides images into patches, can match or surpass convolution-based networks in multiple Computer Vision tasks. However, ViT is also vulnerable to adversarial examples (AEs), so the attack and defense of ViT has become a rewarding research topic. Recent studies have found that the AEs against ViT seem to have grid-like textures that coincide with the patches. We confirm that this impression is correct. In this paper, we show that these grid-like textures are vestiges left by the patch division of ViT, and we name them Patch Vestiges. We propose statistics that quantitatively measure the size of Patch Vestiges in images and AEs. We also build a linear-regression classifier that uses the proposed statistics to detect the AEs against ViT in practice. The experiments show that the performance of this simple classifier can even match some recent adversarial detection methods, suggesting that Patch Vestiges are worth considering as a critical factor when attacking ViT or detecting the AEs against it.
+
+Transformer (Vaswani et al. 2017) is based almost entirely on self-attention mechanisms and fully connected layers. It creatively subverts the architecture of RNNs and achieves state-of-the-art performance on almost all Natural Language Processing tasks. It is natural to hope that Transformer can be applied to the field of Computer Vision. However, Transformer requires a sequential input, whose shape is quite different from that of an image. Vision Transformer (ViT) (Dosovitskiy et al. 2020) overcomes this difficulty by dividing an image into small patches and linking them into a sequence. With the help of Transformer, ViT achieves excellent performance in many Computer Vision tasks.
+
+Although ViT is effective, it shares a similar weakness with CNNs when facing adversarial examples. Adversarial Examples (AEs) (Szegedy et al. 2013) are images with artificial perturbations that are small enough to fool the human eye yet can make deep neural networks output wrong results. Some preliminary studies (Bhojanapalli et al. 2021; Shao et al. 2021; Mahmood, Mahmood, and van Dijk 2021) show that ViT is vulnerable to all common AEs, and even weaker than CNNs under some attacks. The good news is that AEs against CNNs are difficult to transfer directly to ViT (Shao et al. 2021; Naseer et al. 2021; Aldahdooh, Hamidouche, and Déforges 2021). It is therefore meaningful to study the unique characteristics of the AEs against ViT.
+
+To the human eye, the magnified adversarial perturbations of the AEs against ViT seem to have grid-like textures and exhibit some periodicity and repetition (Bhojanapalli et al. 2021), as shown in Figure 1. This was the initial inspiration for this paper. A very intuitive conjecture is that the AEs against ViT may also be divided into patches. In this paper, we confirm this conjecture and introduce the concept of Patch Vestiges, defined as the abnormalities of the AEs against ViT that are caused by the patch division.
+
+We also find a method to measure Patch Vestiges quantitatively. We propose Leaps to measure the step changes between two adjacent pixels that lie in different patches; we assume these step changes are the key points of Patch Vestiges. Additionally, we propose the statistics PV, IPC and NCC based on Leaps and build a binary linear-regression classifier on top of them. The experiments show that our approximations of Leaps are effective and that, with the proposed statistics PV, IPC and NCC, the linear-regression classifier can detect the AEs against ViT effectively.
+
+We sum up the key contributions of this paper as follows:
+
+ * We substantiate the human intuition that the patches used in Vision Transformer leave vestiges in the adversarial examples.
+
+ * We introduce the concept of Patch Vestiges and find a quantitative measurement for them.
+
+ * We show that Patch Vestiges can be a critical weakness of the adversarial examples against ViT.
+
+§ RELATED WORK
+
+Vision Transformers Vision Transformer (ViT) (Dosovitskiy et al. 2020) is the first model to successfully leverage Transformer (Vaswani et al. 2017) in Computer Vision tasks by dividing images into patches. DeiT (Touvron et al. 2021) uses a similar model structure but adds a new distillation token. T2T-ViT (Yuan et al. 2021) recursively integrates adjacent tokens to better extract low-level image features. Swin Transformer (Liu et al. 2021) shows the superiority of Transformer and defeats CNN-based models in many tasks by bringing in the shifted-window scheme.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+
+Figure 1: A clean image with its AEs and the corresponding adversarial perturbations. The image "plane" is chosen from the ILSVRC2012 (ImageNet) dataset (Russakovsky et al. 2015). ResNet and ViT both give the correct classification when the input is clean. The AEs generated by PGD with ${\ell }_{\infty } = 8$ make ResNet and ViT both output the wrong category "dog". The adversarial perturbations shown are magnified from their real values to make them visible.
+
+Adversarial Detection Bayesian uncertainty (BU) and kernel density (KD) were previously proposed to detect out-of-manifold points (Feinman et al. 2017). RCE (Pang et al. 2018) uses a new Reverse Cross-Entropy loss based on KD to better separate clean images from AEs. LID (Ma et al. 2018) detects AEs by local sparseness. A Mahalanobis-distance-based score was proposed afterwards (Lee et al. 2018). Under the assumption that AEs lie outside the manifold of natural scenes, natural scene statistics (NSS) are used in the detector of Kherchouche et al. (2020). More recently, LiBRe (Deng et al. 2021) leverages Bayesian neural networks with refined training procedures for adversarial detection.
+
+§ METHODOLOGY
+
+Despite the recent excellent improvements of ViT, we focus on the vanilla ViT model (Dosovitskiy et al. 2020) because its fixed patch division keeps the analysis stable. The vanilla ViT divides an image into $n \times n$ patches in a grid shape, creating several horizontal and vertical dividing lines. Intuitively, the adversarial perturbations of adjacent pixels astride the dividing lines should show step changes. We measure these step changes by Leaps. To calculate Leaps, we approximately assume that the pixel values of the clean image and the adversarial perturbations inside the patches vary mildly, and that only the adversarial perturbations across the patches change abruptly, as shown in Figure 2(a).
+
+
+Figure 2: (a) The illustrative diagrams of Leaps. (b) The example positions of the proposed statistics PV, IPC and NCC.
+
+We calculate Leaps as follows. We denote the change of pixel values between adjacent pixels $i$ and $j$ as $G\left( {i,j}\right)$ . For both clean images and inside-patch adversarial perturbations, a central $G$ should approximately equal the average of its two neighboring $G$ s. But for adversarial perturbations astride the dividing lines, this equality does not hold. Thus we define
+
+$$
+\operatorname{Leap}\left( {i, \vdash }\right) = \left| {G\left( {i,i \oplus 1}\right) - \frac{G\left( {i,i \ominus 1}\right) + G\left( {i \oplus 1,i \oplus 2}\right) }{2}}\right| , \tag{1}
+$$
+
+where $\vdash$ denotes the direction, either horizontal or vertical, and $i \oplus n,i \ominus n$ mean moving $n$ pixels forward or backward along the direction $\vdash$ . $\operatorname{Leap}\left( {i, \vdash }\right)$ measures the non-smoothness of the local changes $G$ around pixel $i$ . If $\operatorname{Leap}\left( {i, \vdash }\right)$ is high and $i$ and $i \oplus 1$ stride over a dividing line, there is a higher possibility that the given image is an AE against ViT.
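As a one-dimensional sketch of Eq. (1), the Leap at a pixel combines three adjacent pixel-value changes along one direction; the helper names below are ours, not from the paper:

```python
def G(row, i):
    # pixel-value change between adjacent pixels i and i+1 along one direction
    return float(row[i + 1]) - float(row[i])

def leap(row, i):
    # Eq. (1): |G(i, i+1) - (G(i-1, i) + G(i+1, i+2)) / 2|
    return abs(G(row, i) - (G(row, i - 1) + G(row, i + 1)) / 2.0)

# a row that is smooth inside two patches but jumps at their boundary (i = 3):
row = [0, 1, 2, 3, 10, 11, 12, 13]
boundary_leap = leap(row, 3)   # large at the dividing line
interior_leap = leap(row, 1)   # zero inside a patch
```

On this toy row the Leap is 6 at the patch boundary and 0 in the smooth interior, matching the intuition that step changes concentrate on dividing lines.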
+
+Based on Leap, we propose PV, IPC and NCC, standing for Patch Vestiges, Inside-Patch Contrast and Natural Change Contrast respectively. PV consists of Leaps astride the dividing lines, IPC consists of Leaps fully inside the patches, and NCC consists of all the adjacent changes (see Figure 2(b)). More precise definitions are:
+
+$$
+PV\left( X\right) = {\operatorname{Ave}}_{ \vdash \in \{ -, \mid \} ,i \in {PB}\left( {X, \vdash }\right) }\left( {\operatorname{Leap}\left( {i, \vdash }\right) }\right) ,
+$$
+
+$$
+{IPC}\left( X\right) = {\operatorname{Ave}}_{ \vdash \in \{ -, \mid \} ,i \in {PI}\left( X\right) }\left( {\operatorname{Leap}\left( {i, \vdash }\right) }\right) , \tag{2}
+$$
+
+$$
+{NCC}\left( X\right) = {\operatorname{Ave}}_{ \vdash \in \{ - , \mid \} ,i \in X}\left( \left| {G\left( {i,i \oplus 1}\right) }\right| \right) ,
+$$
+
+where $\operatorname{Ave}$ denotes the average, ${PB}\left( {X, \vdash }\right)$ is the set of pixels $i$ such that $i$ and $i \oplus 1$ stride over a dividing line, and ${PI}\left( X\right)$ is the set of pixels $i$ such that $i,i \oplus 1,i \oplus 2,i \ominus 1$ lie in the same patch. Under this definition, PV will be much higher than IPC only for images with strong Patch Vestiges. NCC measures the natural pixel fluctuations of clean images and serves as a baseline for PV and IPC.
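The statistics of Eq. (2) can be sketched for a single-channel image with grid patches of size `p`; the boundary/interior index tests below are our reading of $PB$ and $PI$, and the helper names are ours:

```python
import numpy as np

def leap_map(img, axis):
    # Leap(i) at every valid pixel i along `axis` (Eq. (1)), vectorized
    g = np.diff(img.astype(float), axis=axis)            # G(i, i+1)
    n = g.shape[axis]
    center = np.take(g, range(1, n - 1), axis=axis)
    left   = np.take(g, range(0, n - 2), axis=axis)
    right  = np.take(g, range(2, n), axis=axis)
    return np.abs(center - (left + right) / 2.0)         # index k -> pixel i = k + 1

def pv_ipc_ncc(img, p):
    pv, ipc, ncc = [], [], []
    for axis in (0, 1):                                  # vertical and horizontal
        ncc.append(np.abs(np.diff(img.astype(float), axis=axis)).mean())
        lm = leap_map(img, axis)
        for k in range(lm.shape[axis]):
            i = k + 1
            vals = np.take(lm, k, axis=axis).ravel()
            if (i + 1) % p == 0:                         # i, i+1 stride a dividing line: PB
                pv.extend(vals)
            elif 1 <= i % p <= p - 3:                    # i-1 .. i+2 inside one patch: PI
                ipc.extend(vals)
    return np.mean(pv), np.mean(ipc), np.mean(ncc)
```

On an image that is constant inside each 4×4 patch but jumps across patch borders, PV is large while IPC is exactly zero, which is the separation the paper exploits.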
+
+We also leverage linear regression and build a simple binary classifier $y = {a}_{1}{PV} + {a}_{2}{IPC} + {a}_{3}{NCC} + {a}_{4}$ . If PV differs strongly from IPC in the AEs against ViT, this binary classifier will have high capacity to distinguish those AEs from clean images. And since PV, IPC and NCC are all simple statistics, if the simple linear classifier works well, DNNs should be even more capable of digging out the artifacts in Patch Vestiges.
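Fitting such a linear classifier by ordinary least squares can be sketched as follows; the feature values here are synthetic stand-ins (PV ≈ IPC for clean images, PV ≫ IPC for AEs), not measurements from the paper:

```python
import numpy as np

# Synthetic (PV, IPC, NCC) features: rows are images; labels y are 1 for AEs, 0 for clean.
rng = np.random.default_rng(0)
clean = np.column_stack([rng.normal(2.0, 0.3, 200),   # PV close to IPC for clean images
                         rng.normal(2.0, 0.3, 200),
                         rng.normal(2.5, 0.3, 200)])
aes   = np.column_stack([rng.normal(6.0, 0.5, 200),   # PV much larger than IPC for AEs vs. ViT
                         rng.normal(2.0, 0.3, 200),
                         rng.normal(2.5, 0.3, 200)])
X = np.vstack([clean, aes])
y = np.concatenate([np.zeros(200), np.ones(200)])

# y = a1*PV + a2*IPC + a3*NCC + a4, fit by ordinary least squares
A = np.column_stack([X, np.ones(len(X))])
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
pred = (A @ coef > 0.5).astype(int)
accuracy = (pred == y).mean()
```

With a clear PV/IPC gap, even this four-parameter model separates the two classes almost perfectly, illustrating why the statistics alone already carry a strong signal.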
+
+
+Figure 3: The distributions of the statistics PV, IPC and NCC of the images or AEs on the CIFAR-10 training set. The AEs are generated by the PGD ${\ell }_{\infty } = 8$ attack. The victim models are ResNet and ViT respectively. The solid lines represent the frequencies of the statistics binned into 100 groups. The dashed lines are the averages of the corresponding statistics.
+
+§ EXPERIMENTAL SETUPS
+
+Datasets We use the CIFAR-10 (Krizhevsky 2009) dataset for our experiments. The CIFAR-10 dataset has 50,000 training images, 10,000 test images and 10 categories. The size of each image is $3 \times {32} \times {32}$ .
+
+Attacks We use the white-box adversarial attack methods FGSM (Goodfellow, Shlens, and Szegedy 2015), BIM (Kurakin, Goodfellow, and Bengio 2017a), PGD (Kurakin, Goodfellow, and Bengio 2017b) and DeepFool (DF) (Moosavi-Dezfooli, Fawzi, and Frossard 2016). We restrict all the AEs to ${\ell }_{\infty } = 8$ . Notice that DF originally generates AEs with ${\ell }_{\infty } \leq 8$ ; we rescale the perturbations and use DF* to denote this modification. We run all PGD attacks for 20 iterations. We use the AEs of PGD to train the linear-regression classifier and test it directly on the AEs of all the attack methods.
+
+Victim Models The major victim model is the vanilla Vision Transformer (ViT) (Dosovitskiy et al. 2020). We also use ResNet (He et al. 2016) as a contrast model. The ViT used in the experiments has a $4 \times 4$ patch size, 6 layers and 16 heads for Multi-Head Attention. The ResNet model in the experiments has 56 layers.
+
+Compared Methods We use LID (Ma et al. 2018) and NSS (Kherchouche et al. 2020) for comparison. Notice that the settings of these methods are not strictly in accord with ours. For example, our classifier only requires the input image, the ViT logits and the patch size. The comparison is therefore more of a reference.
+
+Environments We build our project on the open-source toolbox ARES (Dong et al. 2020) and refer to the code of TRADES (Zhang et al. 2019). We run the experiments on a GeForce RTX 2080 Ti.
+
+§ RESULTS
+
+We first compare the distributions and the averages of PV, IPC and NCC over the clean images, the AEs of the PGD attack against ResNet, and those against ViT. All the images and AEs are from the training set of CIFAR-10. The results are shown in Figure 3. We observe that for the clean images and the AEs against ResNet, the distributions of PV and IPC are similar. But PV for the AEs against ViT is much larger than IPC. Using a t-test, we confirm that PV is significantly larger than IPC and NCC (p < 5e-4) in Figure 3(c). We can also observe in Figure 3(c) that there are large areas where the distribution of PV does not overlap with those of IPC and NCC. The results show that the assumptions and approximations behind Leaps are valid, and illustrate that Patch Vestiges are a unique and significant characteristic of the AEs against ViT.
+
+| Model | KR | DR: FGSM | DR: BIM | DR: PGD | DR: DF* |
+| --- | --- | --- | --- | --- | --- |
+| LID | 96.17 | 57.36 | 85.17 | 94.51 | 78.64 |
+| NSS | 93.29 | 87.10 | 66.26 | 62.09 | 61.55 |
+| Ours | 86.90 | 94.60 | 87.74 | 88.11 | 74.62 |
+
+Table 1: The KR (%) on clean images and the DR (%) on the different AEs for the compared models. DF* is the DF modification introduced in the experimental setup section. All the AEs have perturbations with ${\ell }_{\infty } = 8$ .
+
+We also train a linear classifier using the PGD attack and compare the keeping rates (KR, the ratio of clean images classified correctly) and the detection rates (DR, the ratio of AEs classified correctly) under different attacks. Table 1 shows the results. We observe that the simple linear-regression classifier, although not state-of-the-art, is comparable with the mature adversarial detection methods. This again suggests that Patch Vestiges are significant. The results also show that Patch Vestiges are intrinsic attributes of the AEs against ViT and transfer easily from one attack method to another.
+
+§ CONCLUSION
+
+In this paper, we confirm the human intuition that ViT's division into patches leaves large vestiges in the adversarial examples. We introduce the concept of Patch Vestiges to measure to what extent the patch division leaves its trace. We also quantitatively show that Patch Vestiges can be leveraged to detect whether an image is an adversarial example against ViT or a clean one.
+
+Besides the practical significance, our work can also prompt further thinking. Is a more complicated structure more vulnerable or safer? In many areas, the answer is "vulnerable". But in this paper, the artifacts of ViT on the contrary improve its robustness. Perhaps the question cannot be easily answered in the AE area.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Z8lffFu2rTT/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Z8lffFu2rTT/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..33392a0e8e1246497caa790b4b10542dffc18b00
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Z8lffFu2rTT/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,207 @@
+# The Diversity Metrics of Sub-models based on SVD of Jacobians for Ensembles Adversarial Robustness
+
+## Abstract
+
+Transferability of adversarial samples across different CNN models is not only one of the metrics for evaluating the performance of adversarial examples, but also an important research direction in the defense against adversarial examples. Diversified models prevent black-box attacks that rely on a specific substitute model. Meanwhile, recent research has revealed that adversarial transferability across sub-models can abstractly express the diversity requirement on sub-models for ensemble robustness. Because earlier studies gave no mathematical description of this diversity, differences in model architecture or model output were employed as an empirical standard in the assessment, with the model loss as the optimization aim. This paper proposes corresponding assessment criteria and provides a more accurate mathematical explanation of the transferability of adversarial samples between models based on the singular value decomposition (SVD) of data-dependent Jacobians. A new constraint norm based on these criteria is proposed for model training to isolate adversarial transferability without any prior knowledge of adversarial samples. Under the novel condition of high-dimensional inputs in the training process, extracting model attributes from a dimensionality-reduced Jacobian makes the evaluation metric and the training norm more effective. Experiments show that the proposed metric is highly correlated with the actual robustness to transferability between sub-models, and that models trained with this constraint norm improve the adversarial robustness of the ensemble.
+
+## Introduction
+
+In the research on adversarial examples, transferable adversarial examples have become an important research direction because of their more flexible and extensive application scenarios in practice (Akhtar and Mian 2018). As a way to improve robustness, ensembling has become an important research direction for defending against adversarial samples at this stage. Essentially, the robustness of an ensemble model is due to its well-calibrated uncertainty estimation for adversarial samples that lie outside the training data distribution (Lakshminarayanan, Pritzel, and Blundell 2016). Combined with related test results, (Kuncheva and Whitaker 2003) proposed the concept of diversity of sub-models under ensemble conditions and experimentally demonstrated that the robustness of an ensemble is correlated with the diversity of its sub-models.
+
+Ensembles are widely used on both the attack and defense sides in related competitions, and the description of the diversity metric is summarized as the diversity of model structures (Kurakin et al. 2018). More studies have proved that models trained on the same dataset without additional constraints are more inclined to extract the same non-robust features (Ilyas et al. 2019; Li et al. 2015), making such an empirical defense method not always effective in practice. More research hopes to further define the diversity between models through an abstract characterisation, so as to obtain sub-models based on a diversity constraint and improve the robustness of the ensemble (Bagnall, Bunescu, and Stewart 2017; Pang et al. 2019; Kariyappa and Qureshi 2019; Yang et al. 2020). The common problem of these methods is that the definition of diversity is only based on abstract concepts without a mathematical description, so its evaluation is restricted to the perspective of the optimization loss.
+
+Based on the conclusion that transferability is correlated with the diversity of sub-models (Yang et al. 2020), this paper proposes a metric for accurately evaluating model diversity based on the SVD of the Jacobian matrix. Through the singular values and vectors of this mathematical metric, the above abstract expression is further explained theoretically. Geometrically, Figure 1 demonstrates the difference between the evaluation method proposed in this paper and methods based on abstract characterisation, using level sets of the optimization problem's gradient, and gives a more accurate theoretical definition of transferability. Further, a regularization constraint based on the proposed diversity metric is used in the model training process to generate diversified sub-models, thereby improving the robustness of the ensemble. In summary, the main contributions of this article are as follows:
+
+- This paper proposes a quantitative metric for adversarial transferability based on the SVD of the Jacobian matrix.
+
+- The mathematical characterisation of transferability based on optimization theory further helps us understand model attributes in the black-box setting.
+
+- This paper further uses the proposed diversity metric as a regularization term in network training so as to improve the ensemble robustness.
+
+---
+
+Copyright (c) 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+Figure 1: Illustration of different transferability metrics based on level sets of the optimization problem. (a) The upper bound defined in GAL; (b) a transferable perturbation that cannot be accurately captured by GAL; (c) DVERGE maximizes the distance between the optimal perturbations to limit transferability; (d) a transferable perturbation that cannot be accurately captured by DVERGE; (e) the definition of transferability in this paper, based on the singular values and the Wasserstein distance between singular vectors.
+
+## Related work
+
+Abstract description and hypothesis of sub-model diversity Subsequent studies proceed from different assumptions and put forward different evaluation metrics for the diversity of sub-models under ensemble robustness. Based on differences in the non-maximal logit outputs of the models, ADP (Bagnall, Bunescu, and Stewart 2017; Pang et al. 2019) evaluates this diversity between sub-models. Based on the overlap of adversarial subspaces (Tramèr et al. 2017), GAL (Kariyappa and Qureshi 2019) evaluates this diversity through differences in gradient direction. Based on non-robust features, DVERGE (Yang et al. 2020) further evaluates this diversity through the transferability of distilled non-robust features. Different from the above assumptions, this paper starts from the perspective of the transferability of adversarial samples, building on the assumption of (Yang et al. 2020), and gives a mathematical expression through optimization theory.
+
+Theoretical analysis of model attributes based on the Jacobian matrix More theoretical analysis of the attributes of the model is essential to explaining black-box behavior. The Frobenius norm of the Jacobian matrix was first used in regularization training for model robustness (Hoffman, Roberts, and Yaida 2019; Jakubovitz and Giryes 2018; Novak et al. 2018). When adversarial samples were initially discovered, the spectral norm of a layer's weights was considered a metric for evaluating sensitivity (Szegedy et al. 2013). The global spectral norm of the model's Jacobian matrix was further used to constrain the robustness of the model (Sokolić et al. 2017; Farnia, Zhang, and Tse 2018). (Khrulkov and Oseledets 2018; Roth, Kilcher, and Hofmann 2019) essentially reveal that the iterative generation process of adversarial samples mathematically approximates the SVD of the Jacobian matrix through the power method (Boyd 1974). Through further mathematical analysis, the Frobenius norm of the Jacobian matrix was connected with the transferability of Universal Adversarial Perturbations (UAP) (Co, Rego, and Lupu 2021).
+
+This paper expands the theoretical analysis based on the Jacobian matrix and evaluates the transferability between models accurately through SVD. The Jacobian matrix is decomposed after dimensionality reduction, and the degree of alignment between the singular vectors is precisely defined by the Wasserstein distance.
+
+## Method
+
+Define $f\left( x\right)$ as the logit output of a convolutional neural network $f$ on image $x$ , while ${J}_{f}\left( x\right) = {\left. \frac{\partial {f}_{i}}{\partial x}\right| }_{x}$ is the Jacobian matrix at image $x$ . When the perturbation $\delta$ is small enough and higher-order terms are ignored, the output variation of the model, measured by the ${L}_{q}$ norm, can be linearly represented by the first-order Taylor expansion through the Jacobian matrix ${J}_{f}\left( x\right)$ :
+
+$$
+\parallel f\left( {x + \delta }\right) - f\left( x\right) \parallel \approx {\begin{Vmatrix}{J}_{f}\left( x\right) \delta \end{Vmatrix}}_{q} \leq {\begin{Vmatrix}{J}_{f}\left( x\right) \end{Vmatrix}}_{F}\parallel \delta \parallel \tag{1}
+$$
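The approximation and bound in Eq. (1) can be checked numerically on a toy differentiable map (our stand-in for the network's logit function, not the paper's model):

```python
import numpy as np

def f(x):
    # toy differentiable "logit" map standing in for a network
    return np.array([x[0] ** 2 + x[1], 3.0 * x[0] * x[1]])

def jacobian(x):
    # analytic Jacobian of f at x
    return np.array([[2.0 * x[0], 1.0],
                     [3.0 * x[1], 3.0 * x[0]]])

x = np.array([1.0, 2.0])
delta = 1e-4 * np.array([0.6, -0.8])          # small perturbation
J = jacobian(x)

lhs = np.linalg.norm(f(x + delta) - f(x))     # actual output change
mid = np.linalg.norm(J @ delta)               # first-order estimate ||J delta||
ub  = np.linalg.norm(J, 'fro') * np.linalg.norm(delta)  # Frobenius-norm bound
```

For a perturbation of size 1e-4, the first-order estimate matches the actual output change to within the quadratic remainder, and both stay below the Frobenius bound.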
+
+From the perspective of optimization theory, the goal of adversarial sample optimization is to maximize ${\begin{Vmatrix}{J}_{f}\left( x\right) \delta \end{Vmatrix}}_{q}$ . When $q = 2$ the goal of adversarial sample optimization can be simplified to a constrained optimization problem of quadratic functions:
+
+$$
+\text{maximize} \quad {\delta }^{T}{Q\delta } \tag{2}
+$$
+
+$$
+\text{subject to} \quad {\delta }^{T}{P\delta } = K
+$$
+
+where $Q = {J}^{T}J$ . Because of the homogeneity of the norm, $K$ is set to 1 to solve this constrained optimization problem. Through the Lagrange function $l\left( {\delta ,\lambda }\right) = {\delta }^{T}{Q\delta } + \lambda \left( {1 - {\delta }^{T}{P\delta }}\right)$ of the constrained optimization problem, the Lagrange condition can be obtained as:
+
+$$
+{P}^{-1}{Q\delta } = {\lambda \delta } \tag{3}
+$$
+
+Therefore, the eigenvectors of ${P}^{-1}Q$ are the candidate optimal $\delta$ corresponding to solutions of equation (3). When the perturbation constraint is also under the ${L}_{2}$ norm, $P$ is the identity matrix, and the maximum eigenvalue of $Q$ attains the maximum of equation (2). It can be seen that the singular vectors of the Jacobian matrix $J$ essentially define the possible local optimal solutions for $\delta$ , and the maximum singular value defines the maximum output variation of the model under the ${L}_{2}$ norm. Without this meaning of singular values, the upper bound of transferability was defined through the inequality in (1) (Kariyappa and Qureshi 2019). But as shown in Figure 1(a) and (b), when singular vectors are not fully aligned and singular values are not constant up to a fixed scalar, this metric cannot define the transferability accurately. Through optimization theory, the meanings of eigenvalues and eigenvectors can be combined to further analyze transferability.
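The claim that the top right singular vector of $J$ is the optimal unit-norm $\delta$ under the ${L}_{2}$ constraint can be sanity-checked with a random stand-in Jacobian:

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.normal(size=(10, 16))            # stand-in Jacobian (logits x pixels)

# SVD: the top right singular vector is the optimal unit-norm perturbation
U, s, Vt = np.linalg.svd(J, full_matrices=False)
delta_opt = Vt[0]
gain_opt = np.linalg.norm(J @ delta_opt)  # equals the largest singular value

# no random unit perturbation should produce a larger output variation
random_gains = []
for _ in range(100):
    d = rng.normal(size=16)
    d /= np.linalg.norm(d)
    random_gains.append(np.linalg.norm(J @ d))
```

The gain of the SVD direction equals $\sigma_{\max}(J)$, and every random unit direction falls below it, matching the Lagrange-condition argument above.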
+
+In order to evaluate the transferability of adversarial samples more accurately, this paper characterizes transferability through the distance between singular vectors. How to choose a reasonable distance function is an essential issue in our method. (Gulrajani et al. 2017) showed that constraining the variation between the logit outputs of different images is essentially a constraint on the Wasserstein distance of the images. Transferred to the scenario of adversarial discrimination, the diversity metric in DVERGE can be expressed as a GAN-style discriminator distinguishing the adversarial samples, so the diversity constraint achieved by DVERGE increases the distance between the optimal perturbations. As shown in Figure 1(d), consider the extreme case where the singular values of a target Jacobian matrix do not differ much and one target singular vector has a small Wasserstein distance to the source optimal perturbation: strong transferability can still be achieved under this constraint. The more accurate assessment of transferability defined in this paper is characterized as follows: given the singular vector $\left( \mathrm{s\_vec}\right)$ corresponding to the largest singular value of the source Jacobian matrix $\left( \max \left( \mathrm{s\_val}_{J_s}\right) \right)$ , the singular value $\left( \mathrm{s\_val}\right)$ of the target Jacobian singular vector that minimizes the Wasserstein distance $\left( \mathrm{mindis\_s\_val}_{J_s \rightarrow J_t}\right)$ reveals the approximate output variation. Letting $d$ be the Wasserstein distance, equation (4) expresses this metric as:
+
+$$
+\frac{\mathrm{mindis\_s\_val}_{J_s \rightarrow J_t}}{\max \left( \mathrm{s\_val}_{J_s}\right) \times \min \left( d\left( \operatorname{argmax}_{J_s}\left( \mathrm{s\_val}\right) ,\mathrm{s\_vec}_{J_t}\right) \right) } \tag{4}
+$$
+
+Algorithm 1: Ensemble network optimization based on transferability metric
+
+---
+
+Input: batch images $X$ , $N$ sub-models
+
+Parameter: parameters $\omega$ of the sub-models
+
+Output: models for ensemble
+
+ initialization or pretrained-model reload.
+
+ for i = 1..N do
+
+  randomly initialize sub-model ${f}_{i}$
+
+ end for
+
+ for epoch = 1..M do
+
+  for i = 1..N do
+
+   ens_out $+ = \operatorname{softmax}\left( {\operatorname{model}}_{i}\left( X\right) \right)$
+
+   for j = 1..N, $j \neq i$ do
+
+    trans_metrics $_{i}$ += trans_metric $_{i,j}$ $\vartriangleleft$ eq. (4)
+
+   end for
+
+   ens_trans = mean $_{N}$ (trans_metrics $_{i}$ )
+
+   ens_loss = BCE(mean $_{N}$ (ens_out), $Y$ _onehot)
+
+   ${g}_{\omega } = {\nabla }_{\omega }\left( \text{ens\_loss} + \text{ens\_trans}\right)$
+
+   $\omega = \omega + \alpha \cdot \operatorname{RMSProp}\left( \omega ,{g}_{\omega }\right)$ $\vartriangleleft$ gradient update
+
+  end for
+
+ end for
+
+ return diverse sub-models
+
+---
+
+$$
+\text{s.t.}\quad \mathrm{mindis\_s\_val}_{J_s \rightarrow J_t} = \mathop{\operatorname{argmin}}\limits_{\mathrm{s\_vec}_{J_t}} d\left( \operatorname{argmax}_{J_s}\left( \mathrm{s\_val}\right) ,\mathrm{s\_vec}_{J_t}\right)
+$$
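A sketch of the metric in Eq. (4). The paper does not pin down how the Wasserstein distance is applied to singular vectors; here we take the exact 1-D Wasserstein-1 distance between the vectors' entries treated as equal-weight empirical samples, which is one possible reading, and all function names are ours:

```python
import numpy as np

def w1(u, v):
    # exact 1-D Wasserstein-1 distance between equal-size empirical samples:
    # mean absolute difference of the sorted values
    return np.abs(np.sort(u) - np.sort(v)).mean()

def transfer_metric(J_s, J_t):
    # Eq. (4): take the source direction (top right singular vector of J_s),
    # find the target singular vector closest to it in Wasserstein distance,
    # and relate the matched target singular value to max(s_val_{J_s}).
    _, s_val_s, Vt_s = np.linalg.svd(J_s, full_matrices=False)
    _, s_val_t, Vt_t = np.linalg.svd(J_t, full_matrices=False)
    src = Vt_s[0]                                  # argmax_{J_s}(s_val) direction
    dists = np.array([w1(src, v) for v in Vt_t])
    k = int(np.argmin(dists))                      # closest target singular vector
    return s_val_t[k] / (s_val_s.max() * dists[k])

J_s = np.random.default_rng(0).normal(size=(5, 8))  # source Jacobian (stand-in)
J_t = np.random.default_rng(1).normal(size=(5, 8))  # target Jacobian (stand-in)
metric = transfer_metric(J_s, J_t)
```

A larger value indicates a target singular direction that is both close to the source's optimal perturbation and carries a large singular value, i.e. higher expected transferability.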
+
+Drawing on the idea of PCA's dimensionality reduction, we assume the batch-size and image-channel dimensions of the gradient are redundant, and reduce the dimensionality of the Jacobian matrix through HOSVD (Kolda and Bader 2009; Chen and Saad 2009). This paper follows the overall parameter optimization of the ensemble in the training process. Algorithm 1 shows the overall optimization algorithm.
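The reduction step can be sketched minimally by collapsing the channel mode of a per-image Jacobian tensor before the SVD, a cheap surrogate for truncating that HOSVD mode under the redundancy assumption; the shapes and the mean-collapse choice are illustrative, not the paper's exact procedure:

```python
import numpy as np

# Hypothetical Jacobian tensor for one image: (logits, channels, height, width)
rng = np.random.default_rng(4)
jac = rng.normal(size=(10, 3, 32, 32))     # 10 logits, 3x32x32 input

jac_reduced = jac.mean(axis=1)             # collapse the "redundant" channel mode
J = jac_reduced.reshape(10, -1)            # (logits, H*W) matrix for the SVD
U, s, Vt = np.linalg.svd(J, full_matrices=False)
```

Working on the 10×1024 reduced matrix instead of the full tensor keeps the singular values and vectors used by the metric and the training norm cheap to compute per batch.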
+
+## Experiment and results
+
+## Experiment of different evaluation metrics
+
+The experiment relates the evaluation metric described by abstract concepts to the evaluation metric proposed in this paper and verifies our effectiveness. DVERGE (Yang et al. 2020) characterizes the degree of output variation of distillation adversarial examples between different sub-models as equation (5):
+
+$$
+\frac{1}{2}{E}_{\left( {x, y}\right) \left( {{x}_{s},{y}_{s}}\right) }\left\lbrack {{l}_{{f}_{i}}\left( {{x}_{{f}_{l}^{j}}^{\prime }\left( {x,{x}_{s}, y}\right) }\right) + {l}_{{f}_{j}}\left( {{x}_{{f}_{l}^{i}}^{\prime }\left( {x,{x}_{s}, y}\right) }\right) }\right\rbrack \tag{5}
+$$
+
+Based on the diversity evaluation metric of equation (5), the experimental results in Table 1 show the diversity evaluation results of the different methods. The results show the distillation adversarial loss between sub-models based on equation (5). The brackets after each method give the transferability evaluation metric based on equation (4). The feature distillation of adversarial examples follows the method of (Ilyas et al. 2019). The perturbation strength is set to the standard ${0.03}\left( { \approx 8/{255}}\right)$ and the iteration step is 50.
+
+| **Ours** (4.917) | **DVERGE** (19.757) | **Baseline** (157) |
+| --- | --- | --- |
+| **ADP** (69.873) | **GAL** (31) | **Advt** (48.565) |
+
+Table 1: The diversity evaluation results of the different methods. Each block of the table holds the $3 \times 3$ matrix of pairwise distillation adversarial losses (equation (5)) between the three sub-models; the value in brackets after each method gives the transferability metric based on equation (4).
+
+Comparing the results of the different methods on the different evaluation metrics, the results obtained with equation (4) proposed in this paper are consistent with the results of equation (5) in the evaluation of model diversity. This demonstrates that the metric of equation (4) is coherent with equation (5) in evaluating the output variation caused by perturbations. The evaluation metric of equation (4) is based on attributes of the model itself and does not depend on any prior information about adversarial samples; this is the essential difference between our method and DVERGE. Also based on attribute extraction from the Jacobian matrix, the GAL method (Kariyappa and Qureshi 2019) optimizes the upper-bound constraint defined by equation (1), which also improves the diversity of the network. However, compared with the metric proposed in this paper, its poorer results fully demonstrate that our method is a more effective characterization of the model's output variation under transfer attack.
+
+
+
+Figure 2: Robustness results with different perturbation strengths: (a) white-box attack; (b) black-box attack. Each line shows a different method for diversifying the sub-models. All ensembles are built from three sub-models.
+
+Experiment of ensemble robustness The experiment in this section evaluates the robustness of ensemble models trained with the different methods. To evaluate robustness more comprehensively, experiments with different adversarial perturbation strengths are set up. Figure 2 shows the corresponding results under white-box and black-box attacks. The white-box attack uses the PGD algorithm (Madry et al. 2017), which currently has the best attack performance. The black-box attack mainly relies on transfer attacks from a substitute model, consistent with the setting of DVERGE. Based on the baseline models, three types of adversarial examples, (1) PGD, (2) M-DI2-FGSM (Xie et al. 2019) and (3) SGM (Wu et al. 2020), are generated, and the final accuracy is computed comprehensively over the different types of adversarial samples.
+
+Under the white-box attack, the method in this paper achieves the best robust performance without any prior knowledge of adversarial samples. Compared with the optimal result of DVERGE, because equation (4) only constrains the range of output variation, the final recognition accuracy is not fully characterized, so the optimal result is not achieved for recognition robustness; this is a direction for further improvement. Under the black-box attack, our method achieves the best defense performance under high perturbation, while the robustness under low perturbation is not optimal. Comparing the accuracy on each type of adversarial sample, the adversarial samples based on the CW loss have relatively good attack performance. This also shows that the perturbation constraint characterized by the output variation in this paper is still sensitive to the change of the loss function in practice. A theoretical characterization based on the loss function is an important point for further improving robustness.
+
+## Conclusion
+
+In this paper, the transferability of adversarial samples between sub-models is taken as the starting point for the study of ensemble robustness. Through optimization-theoretic analysis under Lagrange conditions, the SVD of the Jacobian matrix characterizes the model's optimal perturbation and output variation. Based on this theory, level sets of the optimization further demonstrate mathematically the shortcomings of previous abstract characterizations of transferability. This paper therefore redefines the transferability metric between models: given the singular vector corresponding to the largest singular value of the source Jacobian matrix, the singular value corresponding to the target Jacobian singular vector that minimizes the Wasserstein distance reveals the approximate output variation. By performing SVD on the dimensionality-reduced Jacobian matrix and using this metric as a regularization term in network training, the resulting sub-models greatly reduce the degree of output variation. Without relying on any prior information about adversarial samples, experiments show that the method, used as a form of model attribute extraction, finally improves the robustness of the ensemble. A theoretical characterization of the loss function and classification performance, instead of the output variation, will be an important direction for further improving classification robustness.
+
+References
+
+Akhtar, N.; and Mian, A. 2018. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6: 14410-14430.
+
+Bagnall, A.; Bunescu, R.; and Stewart, G. 2017. Training ensembles to detect adversarial examples. arXiv preprint arXiv:1712.04006.
+
+Boyd, D. W. 1974. The power method for lp norms. Linear Algebra and its Applications, 9: 95-101.
+
+Chen, J.; and Saad, Y. 2009. On the tensor SVD and the optimal low rank orthogonal approximation of tensors. SIAM Journal on Matrix Analysis and Applications, 30(4): 1709-1734.
+
+Co, K. T.; Rego, D. M.; and Lupu, E. C. 2021. Jacobian Regularization for Mitigating Universal Adversarial Perturbations. arXiv preprint arXiv:2104.10459.
+
+Farnia, F.; Zhang, J. M.; and Tse, D. 2018. Generalizable adversarial training via spectral normalization. arXiv preprint arXiv:1811.07457.
+
+Gulrajani, I.; Ahmed, F.; Arjovsky, M.; Dumoulin, V.; and Courville, A. 2017. Improved training of wasserstein gans. arXiv preprint arXiv:1704.00028.
+
+Hoffman, J.; Roberts, D. A.; and Yaida, S. 2019. Robust learning with jacobian regularization. arXiv preprint arXiv:1908.02729.
+
+Ilyas, A.; Santurkar, S.; Tsipras, D.; Engstrom, L.; Tran, B.; and Madry, A. 2019. Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175.
+
+Jakubovitz, D.; and Giryes, R. 2018. Improving dnn robustness to adversarial attacks using jacobian regularization. In Proceedings of the European Conference on Computer Vision (ECCV), 514-529.
+
+Kariyappa, S.; and Qureshi, M. K. 2019. Improving adversarial robustness of ensembles with diversity training. arXiv preprint arXiv:1901.09981.
+
+Khrulkov, V.; and Oseledets, I. 2018. Art of singular vectors and universal adversarial perturbations. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 8562-8570.
+
+Kolda, T. G.; and Bader, B. W. 2009. Tensor decompositions and applications. SIAM review, 51(3): 455-500.
+
+Kuncheva, L. I.; and Whitaker, C. J. 2003. Measures of diversity in classifier ensembles and their relationship with the ensemble accuracy. Machine learning, 51(2): 181-207.
+
+Kurakin, A.; Goodfellow, I.; Bengio, S.; Dong, Y.; Liao, F.; Liang, M.; Pang, T.; Zhu, J.; Hu, X.; Xie, C.; et al. 2018. Adversarial attacks and defences competition. In The NIPS'17 Competition: Building Intelligent Systems, 195- 231. Springer.
+
+Lakshminarayanan, B.; Pritzel, A.; and Blundell, C. 2016. Simple and scalable predictive uncertainty estimation using deep ensembles. arXiv preprint arXiv:1612.01474.
+
+Li, Y.; Yosinski, J.; Clune, J.; Lipson, H.; Hopcroft, J. E.; et al. 2015. Convergent learning: Do different neural networks learn the same representations? In FE@NIPS, 196-212.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
+
+Novak, R.; Bahri, Y.; Abolafia, D. A.; Pennington, J.; and Sohl-Dickstein, J. 2018. Sensitivity and generalization in neural networks: an empirical study. arXiv preprint arXiv:1802.08760.
+
+Pang, T.; Xu, K.; Du, C.; Chen, N.; and Zhu, J. 2019. Improving adversarial robustness via promoting ensemble diversity. In International Conference on Machine Learning, 4970-4979. PMLR.
+
+Roth, K.; Kilcher, Y.; and Hofmann, T. 2019. Adversarial training is a form of data-dependent operator norm regularization. arXiv preprint arXiv:1906.01527.
+
+Sokolić, J.; Giryes, R.; Sapiro, G.; and Rodrigues, M. R. 2017. Robust large margin deep neural networks. IEEE Transactions on Signal Processing, 65(16): 4265-4280.
+
+Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
+
+Tramèr, F.; Papernot, N.; Goodfellow, I.; Boneh, D.; and McDaniel, P. 2017. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453.
+
+Wu, D.; Wang, Y.; Xia, S.-T.; Bailey, J.; and Ma, X. 2020. Skip connections matter: On the transferability of adversarial examples generated with resnets. arXiv preprint arXiv:2002.05990.
+
+Xie, C.; Zhang, Z.; Zhou, Y.; Bai, S.; Wang, J.; Ren, Z.; and Yuille, A. L. 2019. Improving transferability of adversarial examples with input diversity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2730-2739.
+
+Yang, H.; Zhang, J.; Dong, H.; Inkawhich, N.; Gardner, A.; Touchet, A.; Wilkes, W.; Berry, H.; and Li, H. 2020. DVERGE: diversifying vulnerabilities for enhanced robust generation of ensembles. arXiv preprint arXiv:2009.14720.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Z8lffFu2rTT/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Z8lffFu2rTT/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..658e9244f9c35a6a05b4da3a1a7e46a39faf5dc6
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/Z8lffFu2rTT/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,168 @@
+§ THE DIVERSITY METRICS OF SUB-MODELS BASED ON SVD OF JACOBIANS FOR ENSEMBLE ADVERSARIAL ROBUSTNESS
+
+§ ABSTRACT
+
+The transferability of adversarial samples across different CNN models is not only one of the metrics for evaluating the performance of adversarial examples, but also an important research direction in defending against them. Diversified models prevent black-box attacks that rely on a specific substitute model. Meanwhile, recent research has revealed that adversarial transferability across sub-models can abstractly express the diversity required of sub-models for ensemble robustness. Because earlier studies gave no mathematical description of this diversity, differences in model architecture or model output were employed as empirical standards in the assessment, with the model loss as the optimization target. This paper proposes corresponding assessment criteria and provides a more precise mathematical explanation of the transferability of adversarial samples between models, based on the singular value decomposition (SVD) of data-dependent Jacobians. A new constraint norm based on these criteria is introduced into model training to suppress adversarial transferability without any prior knowledge of adversarial samples. For the high-dimensional inputs encountered during training, extracting model attributes from a dimensionality-reduced Jacobian makes both the evaluation metric and the training norm more effective. Experiments show that the proposed metric is highly correlated with the actual transferability between sub-models, and that models trained with this constraint norm improve the adversarial robustness of the ensemble.
+
+§ INTRODUCTION
+
+In the study of adversarial examples, transferable adversarial examples have become an important research direction because of their flexible and broad application scenarios in practice (Akhtar and Mian 2018). As a way to improve robustness, ensembling has become an important approach to defending against adversarial samples. Essentially, the robustness of an ensemble model stems from well-calibrated uncertainty estimates for adversarial samples that lie outside the training data distribution (Lakshminarayanan, Pritzel, and Blundell 2016). Related results, combined with the study of (Kuncheva and Whitaker 2003), introduced the concept of diversity among sub-models in an ensemble, and experimentally demonstrated that the robustness of the ensemble is correlated with the diversity of its sub-models.
+
+Ensembles are widely used on both the attack and defense sides of related competitions, where the diversity metric is summarized as diversity of model structure (Kurakin et al. 2018). Further studies have shown that models trained on the same data set without additional constraints tend to extract the same non-robust features (Ilyas et al. 2019; Li et al. 2015), making such an empirical defense not always effective in practice. Other work aims to define the diversity between models through an abstract characterization, so as to obtain sub-models under a diversity constraint and improve the robustness of the ensemble (Bagnall, Bunescu, and Stewart 2017; Pang et al. 2019; Kariyappa and Qureshi 2019; Yang et al. 2020). The common problem of these methods is that diversity is defined only through abstract concepts without a mathematical description, so its evaluation is restricted to the perspective of the optimization loss.
+
+Based on the established correlation between transferability and the diversity of sub-models (Yang et al. 2020), this paper proposes a metric for accurately evaluating model diversity based on the SVD of the Jacobian matrix. Through the singular values and singular vectors underlying this metric, the abstract expressions above are further explained theoretically. Geometrically, Figure 1 illustrates the difference between the evaluation method proposed in this paper and methods based on abstract characterization, using level sets of the optimization problem's gradient, and gives a more precise theoretical definition of transferability. Further, a regularization term built from the proposed diversity metric is used during model training to generate diversified sub-models, thereby improving the robustness of the ensemble. In summary, the main contributions of this paper are as follows:
+
+ * This paper proposes a quantitative metric for adversarial transferability based on the SVD of the Jacobian matrix.
+
+ * The mathematical characterization of transferability based on optimization theory further helps us understand the attributes of black-box models.
+
+ * This paper further uses the diversity metric as a regularization term in network training so as to improve the robustness of the ensemble.
+
+Copyright (c) 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+Figure 1: Illustration of the different transferability metrics based on level sets of the optimization problem. (a) The upper bound defined in GAL; (b) transferable perturbation conditions that GAL cannot accurately capture; (c) DVERGE maximizes the distance between optimal perturbations to limit transferability; (d) transferable perturbation conditions that DVERGE cannot accurately capture; (e) the definition of transferability in this paper, based on singular values and the Wasserstein distance between singular vectors.
+
+§ RELATED WORK
+
+Abstract description and hypotheses of sub-model diversity Subsequent studies proceed from different assumptions and put forward different evaluation metrics for the diversity of sub-models under ensemble robustness. Based on differences in the non-maximal logit outputs of a model, ADP (Bagnall, Bunescu, and Stewart 2017; Pang et al. 2019) evaluates this diversity between sub-models. Based on the overlap of adversarial subspaces (Tramèr et al. 2017), GAL (Kariyappa and Qureshi 2019) evaluates it through differences in gradient direction. Based on non-robust features, DVERGE (Yang et al. 2020) evaluates it through the transferability of distilled non-robust features. Different from the assumptions above, this paper starts from the transferability of adversarial samples, building on the assumption of (Yang et al. 2020), and gives a mathematical expression through optimization theory.
+
+Theoretical analysis of model attributes based on the Jacobian matrix A more theoretical analysis of model attributes is essential to explain black-box behavior. The Frobenius norm of the Jacobian matrix was first used in regularization training for model robustness (Hoffman, Roberts, and Yaida 2019; Jakubovitz and Giryes 2018; Novak et al. 2018). When adversarial samples were first discovered, the spectral norm of a particular layer's weights was taken as a metric of sensitivity (Szegedy et al. 2013). The global spectral norm of the model's Jacobian matrix was further used to constrain model robustness (Sokolić et al. 2017; Farnia, Zhang, and Tse 2018). (Khrulkov and Oseledets 2018; Roth, Kilcher, and Hofmann 2019) reveal that the iterative generation of adversarial samples mathematically approximates the SVD of the Jacobian matrix through the power method (Boyd 1974). Through further mathematical analysis, the Frobenius norm of the Jacobian matrix has been connected with the transferability of Universal Adversarial Perturbations (UAPs) (Co, Rego, and Lupu 2021).
+
+This paper extends the Jacobian-based theoretical analysis and evaluates the transferability between models precisely through SVD. The Jacobian matrix is decomposed after dimensionality reduction, and the degree of alignment between singular vectors is precisely defined by the Wasserstein distance.
+
+§ METHOD
+
+Define $f\left( x\right)$ as the logit output of a convolutional neural network $f$ on an image $x$ , and let ${J}_{f}\left( x\right) = {\left. \frac{\partial f}{\partial x}\right| }_{x}$ be the Jacobian matrix at $x$ . When the perturbation $\delta$ is small enough that higher-order terms can be ignored, the variation of the model output, measured in the ${L}_{q}$ norm, can be linearly represented through the first-order Taylor expansion with the Jacobian matrix ${J}_{f}\left( x\right)$ :
+
+$$
+\parallel f\left( {x + \delta }\right) - f\left( x\right) \parallel \approx {\begin{Vmatrix}{J}_{f}\left( x\right) \delta \end{Vmatrix}}_{q} \leq {\begin{Vmatrix}{J}_{f}\left( x\right) \end{Vmatrix}}_{F}\parallel \delta \parallel \tag{1}
+$$
+
+From the perspective of optimization theory, the goal of adversarial sample optimization is to maximize ${\begin{Vmatrix}{J}_{f}\left( x\right) \delta \end{Vmatrix}}_{q}$ . When $q = 2$ , this goal can be simplified to a constrained optimization problem over a quadratic form:
+
+$$
+\text{ maximize }{\delta }^{T}{Q\delta } \tag{2}
+$$
+
+$$
+\text{ subject to }{\delta }^{T}{P\delta } = K
+$$
+
+where $Q = {J}^{T}J$ . Because of the homogeneity of the norm, $K$ is set to 1 when solving this constrained optimization problem. Through the Lagrangian $l\left( {x,\lambda }\right) = {x}^{T}{Qx} + \lambda \left( {1 - {x}^{T}{Px}}\right)$ of the constrained problem, the Lagrange condition is obtained as:
+
+$$
+{P}^{-1}{Q\delta } = {\lambda \delta } \tag{3}
+$$
+
+Therefore, the eigenvectors of ${P}^{-1}Q$ are the optimal $\delta$ corresponding to solutions of equation (3). When the perturbation constraint is also under the ${L}_{2}$ norm, $P$ is the identity matrix, and the maximum eigenvalue of $Q$ gives the maximum of the objective in equation (2). It follows that the singular vectors of the Jacobian matrix $J$ define the possible local optima of $\delta$ , while the maximum singular value defines the maximum output variation of the model under the ${L}_{2}$ norm. Without this interpretation of the singular values, the upper bound on transferability was previously defined through the inequality in (1) (Kariyappa and Qureshi 2019). But as shown in Figure 1(a) and (b), when singular vectors are not fully aligned and singular values do not agree up to a fixed scalar, this metric cannot define transferability accurately. Through optimization theory, the meanings of the eigenvalues and eigenvectors can be combined to analyze transferability further.
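The optimality statement above can be checked numerically: under a unit $L_2$ budget, the top right singular vector of the Jacobian attains the maximal output variation, equal to the largest singular value. A minimal NumPy sketch, with a random matrix standing in for $J_f(x)$:

```python
import numpy as np

# Linearization (Eq. 1): output variation ~ J @ delta. Under the L2 budget
# ||delta|| = 1, the maximizer of ||J delta|| is the top right singular
# vector of J, and the attained variation equals the largest singular value.
rng = np.random.default_rng(0)
J = rng.normal(size=(10, 64))              # stand-in Jacobian: 10 logits x 64 inputs

U, s, Vt = np.linalg.svd(J, full_matrices=False)
delta_opt = Vt[0]                          # optimal unit-norm perturbation direction
variation = np.linalg.norm(J @ delta_opt)  # equals s[0], the max singular value

# any other unit-norm perturbation produces a smaller (or equal) variation
delta_rand = rng.normal(size=64)
delta_rand /= np.linalg.norm(delta_rand)
assert np.isclose(variation, s[0])
assert np.linalg.norm(J @ delta_rand) <= variation + 1e-9
```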
+
+To evaluate the transferability of adversarial samples more accurately, this paper characterizes transferability through the distance between singular vectors. Choosing a reasonable distance function is an essential issue for our method. (Gulrajani et al. 2017) showed that constraining the variation between the logit outputs of different images is essentially a constraint on the Wasserstein distance between the images. Transferred to the adversarial setting, the diversity metric in DVERGE can be expressed as a GAN-style discriminator distinguishing adversarial samples, so the diversity constraint achieved by DVERGE increases the distance between optimal perturbations. As shown in Figure 1(d), in the extreme case where the singular values of the target Jacobian matrix differ little and one target singular vector has a small Wasserstein distance to the source's optimal perturbation, strong transferability can still be achieved under this constraint. The more accurate assessment of transferability defined in this paper is: given the singular vector ($s\_vec$) corresponding to the largest singular value of the source Jacobian matrix ($\max(s\_val_{J_s})$), the singular value ($s\_val$) of the target Jacobian singular vector that minimizes the Wasserstein distance ($mindis\_s\_val_{J_s \rightarrow J_t}$) reveals the approximate output variation. Letting $d$ be the Wasserstein distance, equation (4) expresses this metric as:
+
+$$
+\frac{mindis\_s\_val_{J_s \rightarrow J_t}}{\max \left( s\_val_{J_s}\right) \times \min \left( d\left( \operatorname{argmax}_{J_s}\left( s\_val\right) ,\; s\_vec_{J_t}\right) \right) } \tag{4}
+$$
+
+Algorithm 1: Ensemble network optimization based on the transferability metric
+
+Input: batch images $X$, $N$ sub-models
+
+Parameter: parameters $\omega$ of the sub-models
+
+Output: models for the ensemble
+
+ Initialization or pretrained-model reload.
+
+ for i = 1..N do
+
+   Randomly initialize sub-model ${f}_{i}$
+
+ end for
+
+ for epoch = 1..M do
+
+   for i = 1..N do
+
+     ens_out $+ = \operatorname{softmax}\left( {{\operatorname{model}}_{i}\left( X\right) }\right)$
+
+     for j = 1..N, $j \neq i$ do
+
+       ${\text{trans\_metrics}}_{i} + = {\text{trans\_metric}}_{i,j} \vartriangleleft$ eq. (4)
+
+     end for
+
+     ens_trans $= {\text{mean}}_{N}\left( {\text{trans\_metrics}}_{i}\right)$
+
+     ens_loss $= {BCE}\left( {\text{mean}}_{N}\left( \text{ens\_out}\right) ,\; Y\text{\_onehot}\right)$
+
+     ${g}_{\omega } = {\nabla }_{\omega }\left( \text{ens\_loss} + \text{ens\_trans}\right)$
+
+     $\omega = \omega + \alpha \cdot \operatorname{RMSProp}\left( {\omega ,{g}_{\omega }}\right) \vartriangleleft$ gradient regularization step
+
+   end for
+
+ end for
+
+ return diversified sub-models
+
+s.t. $mindis\_s\_val_{J_s \rightarrow J_t} = \mathop{\operatorname{argmin}}\limits_{s\_vec_{J_t}} d\left( \operatorname{argmax}_{J_s}\left( s\_val\right) ,\; s\_vec_{J_t}\right)$
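A sketch of how the score in equation (4) could be computed for a pair of Jacobians. This assumes plain 2-D Jacobians (the paper first reduces higher-order Jacobians via HOSVD) and uses the angle between singular vectors as a simple stand-in for the Wasserstein distance $d(\cdot,\cdot)$; the function name `transfer_metric` is illustrative:

```python
import numpy as np

def transfer_metric(J_s, J_t, eps=1e-8):
    """Sketch of the Eq. (4) transferability score between two Jacobians.

    Assumption: the angle between unit singular vectors stands in for the
    Wasserstein distance d(., .) used in the paper.
    """
    _, s_val_s, V_s = np.linalg.svd(J_s, full_matrices=False)
    _, s_val_t, V_t = np.linalg.svd(J_t, full_matrices=False)
    v_src = V_s[0]                      # source direction of maximal output variation
    # distance from the source optimum to each target singular vector
    dists = np.arccos(np.clip(np.abs(V_t @ v_src), 0.0, 1.0))
    k = int(np.argmin(dists))           # best-aligned target singular vector
    # target variation reachable near the source optimum, normalized by the
    # source's maximal variation and the alignment distance
    return s_val_t[k] / (s_val_s[0] * (dists[k] + eps))

rng = np.random.default_rng(0)
J_a, J_b = rng.normal(size=(2, 10, 64))
# identical Jacobians are perfectly aligned, so the score is far larger
assert transfer_metric(J_a, J_a) > transfer_metric(J_a, J_b)
```

A large score means the target model's output varies strongly along a direction close to the source model's optimal perturbation, i.e. high expected transferability.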
+
+Drawing on the idea of PCA's dimensionality reduction, we assume the batch-size and image-channel dimensions of the gradient are redundant, and reduce the dimensionality of the Jacobian matrix through HOSVD decomposition (Kolda and Bader 2009; Chen and Saad 2009). This paper follows the overall parameter optimization of the ensemble during training. Algorithm 1 shows the overall optimization algorithm.
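A hedged sketch of the HOSVD-style reduction this paragraph assumes: the Jacobian tensor is unfolded along the batch and channel modes and truncated there. The shapes and the rank-1 truncation are illustrative, not necessarily the paper's exact configuration:

```python
import numpy as np

def reduce_jacobian(J, keep=1):
    """Truncate the batch (mode 0) and channel (mode 1) multilinear ranks
    of a Jacobian tensor, treating those modes as redundant (HOSVD-style)."""
    for mode in (0, 1):
        # unfold the tensor along `mode` into a matrix
        unfolding = np.moveaxis(J, mode, 0).reshape(J.shape[mode], -1)
        U, _, _ = np.linalg.svd(unfolding, full_matrices=False)
        core = U[:, :keep].T @ unfolding          # project onto the top factors
        new_shape = (keep,) + tuple(int(n) for n in np.delete(J.shape, mode))
        J = np.moveaxis(core.reshape(new_shape), 0, mode)
    return J

# stand-in Jacobian tensor: (batch, channel, pixels, logits)
J = np.random.default_rng(1).normal(size=(8, 3, 16, 10))
assert reduce_jacobian(J).shape == (1, 1, 16, 10)
```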
+
+§ EXPERIMENTS AND RESULTS
+
+§ EXPERIMENTS WITH DIFFERENT EVALUATION METRICS
+
+This experiment relates the evaluation metric described by abstract concepts to the metric proposed in this paper, discussing their correlation and verifying our effectiveness. DVERGE (Yang et al. 2020) characterizes the degree of output variation of distilled adversarial examples between different sub-models as equation (5):
+
+$$
+\frac{1}{2}{E}_{\left( {x,y}\right) \left( {{x}_{s},{y}_{s}}\right) }\left\lbrack {{l}_{{f}_{i}}\left( {{x}_{{f}_{l}^{j}}^{\prime }\left( {x,{x}_{s},y}\right) }\right) + {l}_{{f}_{j}}\left( {{x}_{{f}_{l}^{i}}^{\prime }\left( {x,{x}_{s},y}\right) }\right) }\right\rbrack \tag{5}
+$$
+
+Based on the diversity evaluation metric of equation (5), Table 1 shows the diversity evaluation results of the different methods. Each cell shows the distillation adversarial loss between sub-models based on equation (5), and the brackets after each method give the transferability metric based on equation (4). The feature distillation of adversarial examples follows the method of (Ilyas et al. 2019). The perturbation strength is set to the standard ${0.03}\left( { \approx 8/{255}}\right)$ with 50 iteration steps.
+
+(Table 1 body not recoverable from the extraction; only the per-method equation-(4) transferability metrics survive: Ours (4.917), DVERGE (19.757), Baseline (157), ADP (69.873), GAL (31), Advt (48.565). The table body lists the pairwise equation-(5) distillation losses among each method's three sub-models.)
+
+Table 1: The diversity evaluation results of the different methods. The brackets after each method give the metric based on equation (4).
+
+Comparing the results of the different methods on the two evaluation metrics, the results based on equation (4) proposed in this paper are consistent with those of equation (5) in evaluating model diversity. This demonstrates that the metric of equation (4) is coherent with equation (5) in evaluating the output variation caused by perturbations. The metric of equation (4) is based on attributes of the model itself and does not depend on any prior information derived from adversarial samples; this is the essential difference between our method and DVERGE. Also based on attribute extraction from the Jacobian matrix, the GAL method (Kariyappa and Qureshi 2019) optimizes the upper-bound constraint defined by equation (1), which also improves the diversity of the network. However, its poorer results compared with the metric proposed here demonstrate that our method is a more effective characterization of the model's output variation under transfer attack.
+
+
+Figure 2: Robustness results under different perturbation strengths: (a) white-box attack; (b) black-box attack. Each line shows a different method for diversifying sub-models. All ensembles use three sub-models.
+
+Experiment on ensemble robustness The experiment in this section evaluates the robustness of ensemble models obtained with the different methods. To evaluate robustness more comprehensively, experiments with different adversarial perturbation strengths are set up. Figure 2 shows the corresponding results under white-box and black-box attacks. The white-box attack uses the PGD algorithm (Madry et al. 2017), which currently has the best attack performance. The black-box attack relies on transfer attacks from a substitute model, consistent with the setting of DVERGE. Based on the baseline models, three types of adversarial examples, (1) PGD, (2) M-DI2-FGSM (Xie et al. 2019), and (3) SGM (Wu et al. 2020), are generated, and the final accuracy is computed over all types of adversarial samples.
+
+Under the white-box attack, the method in this paper achieves the best robust performance without any adversarial-sample prior. Compared with the optimal result of DVERGE, because equation (4) only constrains the range of output variation, the final recognition accuracy is not fully characterized, so the optimal result is not achieved in recognition robustness; this is a direction for further improvement. Under the black-box attack, our method achieves the best defense performance at high perturbation strength, while robustness at low perturbation strength is not optimal. Comparing accuracy across the types of adversarial samples, those based on the CW loss have relatively strong attack performance. This shows that the perturbation constraint characterized by output variation remains sensitive to the choice of loss function in practice. A theoretical characterization based on the loss function is an important avenue for further improving robustness.
+
+§ CONCLUSION
+
+In this paper, the transferability of adversarial samples between sub-models is taken as the starting point for studying ensemble robustness. Through optimization analysis under Lagrange conditions, the SVD of the Jacobian matrix characterizes the model's optimal perturbations and output variation. Based on this theory, the level sets of the optimization problem mathematically demonstrate the shortcomings of previous abstract characterizations of transferability. This paper therefore redefines the transferability metric between models: given the singular vector corresponding to the largest singular value of the source Jacobian matrix, the singular value corresponding to the target Jacobian singular vector that minimizes the Wasserstein distance reveals the approximate output variation. By performing SVD on the dimensionality-reduced Jacobian matrix and using this metric as a regularization term in network training, the resulting sub-models greatly reduce the degree of output variation. Without relying on any prior information about adversarial samples, experiments show that the method, used as a form of model attribute extraction, improves the robustness of the ensemble. A theoretical characterization of the loss function and classification performance, rather than the output variation, will be an important direction for further improving robustness.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/gP4WxGjNd3k/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/gP4WxGjNd3k/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..b8b5239425afc05b15d6d1eaf4f875e626847400
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/gP4WxGjNd3k/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,259 @@
+## Meta Adversarial Perturbations
+
+Anonymous
+
+## Abstract
+
+A plethora of attack methods have been proposed to generate adversarial examples, among which iterative methods have been demonstrated to find strong attacks. However, computing an adversarial perturbation for a new data point requires solving a time-consuming optimization problem from scratch, and generating a stronger attack normally requires updating a data point with more iterations. In this paper, we show the existence of a meta adversarial perturbation (MAP), a better initialization that causes natural images to be misclassified with high probability after only a one-step gradient ascent update, and propose an algorithm for computing such perturbations. We conduct extensive experiments, and the empirical results demonstrate that state-of-the-art deep neural networks are vulnerable to meta perturbations. We further show that these perturbations are not only image-agnostic but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
+
+## 1 Introduction
+
+Deep neural networks (DNNs) have achieved remarkable performance in many applications, including computer vision, natural language processing, speech, and robotics. However, DNNs are vulnerable to adversarial examples (Szegedy et al. 2013; Goodfellow, Shlens, and Szegedy 2014), i.e., examples that are intentionally designed to be misclassified by the models but are nearly imperceptible to human eyes. In recent years, many methods have been proposed to craft such malicious examples (Szegedy et al. 2013; Goodfellow, Shlens, and Szegedy 2014; Moosavi-Dezfooli, Fawzi, and Frossard 2016; Kurakin et al. 2016; Madry et al. 2017; Carlini and Wagner 2017; Chen et al. 2018), among which iterative methods, such as PGD (Madry et al. 2017), BIM (Kurakin et al. 2016), and MIM (Dong et al. 2018), have been demonstrated to be effective at crafting adversarial attacks with a high success rate. Nevertheless, crafting a stronger attack with iterative methods usually requires updating a data point through more gradient ascent steps. This time-consuming process gives rise to a question: is it possible to find a single perturbation that can serve as a good meta initialization, such that after a few updates it becomes an effective attack for different data points?
+
+Inspired by the philosophy of meta-learning (Schmidhuber 1987; Bengio, Bengio, and Cloutier 1990; Andrychowicz et al. 2016; Li and Malik 2016; Finn, Abbeel, and Levine 2017), we show the existence of a quasi-imperceptible meta adversarial perturbation (MAP) that leads natural images to be misclassified with high probability after only a one-step gradient ascent update. In meta-learning, the goal of the trained model is to quickly adapt to a new task with a small amount of data. Analogously, the goal of the meta perturbation is to rapidly adapt to a new data point within a few iterations. The key idea underlying our method is to train an initial perturbation such that it has maximal performance on new data after being updated through one or a few gradient steps. We then propose a simple algorithm for seeking such perturbations, which is plug-and-play and compatible with any gradient-based iterative adversarial attack method. By adding a meta perturbation at initialization, we can craft a more effective adversarial example without multi-step updates.
+
+We summarize our main contributions as follows:
+
+- We show the existence of image-agnostic learnable meta adversarial perturbations for efficient robustness evaluation of state-of-the-art deep neural networks.
+
+- We propose an algorithm (MAP) to find meta perturbations, such that a small number of gradient ascent updates will suffice to be a strong attack on a new data point.
+
+- We show that our meta perturbations have remarkable generalizability, as a perturbation computed from a small number of training data is able to adapt and fool the unseen data with high probability.
+
+- We demonstrate that meta perturbations are not only image-agnostic, but also model-agnostic. Such perturbations generalize well across a wide range of deep neural networks.
+
+## 2 Related Works
+
+There is a large body of work on adversarial attacks. Please refer to (Chakraborty et al. 2018; Akhtar and Mian 2018; Biggio and Roli 2018) for comprehensive surveys. Here, we discuss the works most closely related to ours.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+### 2.1 Data-dependent Adversarial Perturbations
+
+Despite the impressive performance of deep neural networks in many domains, these classifiers are vulnerable to adversarial perturbations (Szegedy et al. 2013; Goodfellow, Shlens, and Szegedy 2014). Generating an adversarial example requires solving an optimization problem (Moosavi-Dezfooli, Fawzi, and Frossard 2016; Carlini and Wagner 2017) or taking multiple steps of gradient ascent (Goodfellow, Shlens, and Szegedy 2014; Kurakin et al. 2016; Madry et al. 2017; Chen et al. 2018) for each data point independently, among which the iterative methods have been shown to craft attacks with a high success rate. Given a data point $x$ , a corresponding label $y$ , and a classifier $f$ parametrized by $\theta$ , let $L$ denote the loss function for the classification task, usually the cross-entropy loss. FGSM (Goodfellow, Shlens, and Szegedy 2014) utilizes gradient information to compute, in one step, the adversarial perturbation that maximizes the loss:
+
+$$
+{x}^{\prime } = x + \epsilon \operatorname{sign}\left( {{\nabla }_{x}L\left( {{f}_{\theta }, x, y}\right) }\right) , \tag{1}
+$$
+
+where ${x}^{\prime }$ is the adversarial example and $\epsilon$ is the maximum allowable perturbation measured by ${l}_{\infty }$ distance. This simple one-step method is extended by several follow-up works (Kurakin et al. 2016; Madry et al. 2017; Dong et al. 2018; Xie et al. 2019), which propose iterative methods to improve the success rate of the adversarial attack. More specifically, those methods generate adversarial examples through multistep updates, which can be described as:
+
+$$
+{x}^{t + 1} = {\Pi }_{\epsilon }\left( {{x}^{t} + \gamma \operatorname{sign}\left( {{\nabla }_{x}L\left( {{f}_{\theta },{x}^{t}, y}\right) }\right) }\right) , \tag{2}
+$$
+
+where ${\Pi }_{\epsilon }$ projects the updated perturbation back onto the feasible set whenever it exceeds the maximum allowable amount $\epsilon$ . Here ${x}^{0} = x$ and $\gamma = \epsilon /T$ , where $T$ is the number of iterations. To generate a malicious example that has a high probability of being misclassified by the model, the perturbed sample needs to be updated with more iterations. Since the computational time grows linearly with the number of iterations, crafting a strong attack takes more time.
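The one-step update of equation (1) and the iterative update of equation (2) can be sketched in a few lines of NumPy; the toy linear loss below is an illustrative stand-in for a real classifier's gradient:

```python
import numpy as np

def pgd_attack(x, grad_fn, eps=0.3, steps=10):
    """Iterative sign-gradient ascent with projection onto the eps-ball (Eq. 2);
    `grad_fn` stands in for the loss gradient of a real classifier.
    With steps=1 this reduces to the one-step FGSM update of Eq. (1)."""
    gamma = eps / steps                             # step size gamma = eps / T
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = x_adv + gamma * np.sign(grad_fn(x_adv))
        x_adv = x + np.clip(x_adv - x, -eps, eps)   # projection Pi_eps
    return x_adv

# toy loss L(x) = w . x, whose gradient is the constant vector w
w = np.array([1.0, -2.0, 0.5])
x_adv = pgd_attack(np.zeros(3), lambda z: w)
# each coordinate ends on the boundary of the eps-ball, in the sign direction
assert np.allclose(x_adv, 0.3 * np.sign(w))
```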
+
+### 2.2 Universal Adversarial Perturbations
+
+Instead of solving a data-dependent optimization problem to craft adversarial examples, (Moosavi-Dezfooli et al. 2017) shows the existence of a universal adversarial perturbation (UAP). Such a perturbation is image-agnostic and quasi-imperceptible, as a single perturbation can fool the classifier $f$ on most data points sampled from the data distribution $\mu$ . That is, they seek a perturbation $v$ such that
+
+$$
+f\left( {x + v}\right) \neq f\left( x\right) \text{for "most"}x \sim \mu \text{.} \tag{3}
+$$
+
+In other words, perturbing a new data point merely requires adding the precomputed UAP to it, without solving a data-dependent optimization problem or computing gradients from scratch. However, the effectiveness of a UAP depends heavily on the amount of data used to compute it, and a large amount of data is required to achieve a high fooling ratio. In addition, although the UAP demonstrates a certain degree of transferability, its fooling ratios on different networks, which are normally lower than ${50}\%$ , may not be high enough for an attacker. This problem is particularly pronounced when the architecture of the target model differs substantially from that of the surrogate model (Moosavi-Dezfooli et al. 2017).
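The UAP construction can be sketched for a linear binary classifier, where the minimal boundary-crossing step has a closed form (a DeepFool-style step). The classifier, data, and budget below are illustrative assumptions, not the paper's setup:

```python
def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

def classify(w, x):
    """Toy linear binary classifier: class 1 iff w.x > 0."""
    return 1 if dot(w, x) > 0 else 0

def min_flip_step(w, z, overshoot=0.02):
    # Minimal l2 step pushing z across the hyperplane w.z = 0
    # (closed form of a DeepFool-style perturbation for a linear model).
    scale = -(1 + overshoot) * dot(w, z) / dot(w, w)
    return [scale * wi for wi in w]

def universal_perturbation(w, data, eps, passes=5):
    # Eq. (3): accumulate per-point minimal perturbations into a single v,
    # projecting v onto the l_inf ball of radius eps after every update.
    v = [0.0] * len(data[0])
    for _ in range(passes):
        for x in data:
            z = [xi + vi for xi, vi in zip(x, v)]
            if classify(w, z) == classify(w, x):      # x + v not yet fooled
                step = min_flip_step(w, z)
                v = [vi + si for vi, si in zip(v, step)]
                v = [min(max(vi, -eps), eps) for vi in v]
    return v
```

With a generous budget the single perturbation fools every point of this toy set; the paper's point is that achieving a high fooling ratio at a small, quasi-imperceptible $\epsilon$ requires far more data.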
+
+Algorithm 1: Meta Adversarial Perturbation (MAP)
+
+---
+
+Input: $\mathbb{D},\alpha ,\beta ,{f}_{\theta }, L,{\Pi }_{\epsilon }$
+
+Output: Meta adversarial perturbations $v$
+
+Randomly initialize $v$
+
+while not done do
+
+  for minibatch $\mathbb{B} = \left\{ {{x}^{\left( i\right) },{y}^{\left( i\right) }}\right\} \sim \mathbb{D}$ do
+
+    Evaluate ${\nabla }_{v}L\left( {{f}_{\theta },\mathbb{B} + v}\right)$ using minibatch $\mathbb{B}$ with perturbation $v$
+
+    Compute the adapted perturbation with gradient ascent: ${v}^{\prime } = v + \alpha {\nabla }_{v}L\left( {{f}_{\theta },\mathbb{B} + v}\right)$
+
+    Sample a batch of data ${\mathbb{B}}^{\prime }$ from $\mathbb{D}$
+
+    Evaluate ${\nabla }_{v}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + {v}^{\prime }}\right)$ using minibatch ${\mathbb{B}}^{\prime }$ with adapted perturbation ${v}^{\prime }$
+
+    Update $v \leftarrow v + \beta {\nabla }_{v}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + {v}^{\prime }}\right)$
+
+    Project $v \leftarrow {\Pi }_{\epsilon }\left( v\right)$
+
+  end
+
+end
+
+return $v$
+
+---
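As an illustrative, self-contained sketch of Algorithm 1 (not the authors' implementation), the loop below trains a MAP against a toy logistic-regression classifier. The outer update evaluates the gradient at the adapted perturbation, i.e. a first-order treatment of the meta-gradient; all names and values are our assumptions:

```python
import math
import random

def predict(w, x):
    """Probability of class 1 under a toy logistic-regression classifier."""
    z = sum(wi * xi for wi, xi in zip(w, x))
    return 1.0 / (1.0 + math.exp(-z))

def grad_v(w, batch, v):
    """Gradient of the summed cross-entropy loss w.r.t. v, evaluated at x + v."""
    g = [0.0] * len(v)
    for x, y in batch:
        p = predict(w, [xi + vi for xi, vi in zip(x, v)])
        for i, wi in enumerate(w):
            g[i] += (p - y) * wi
    return g

def map_attack(w, data, eps, alpha, beta, epochs=20, bs=2, seed=0):
    rng = random.Random(seed)
    v = [rng.uniform(-eps, eps) for _ in range(len(data[0][0]))]
    for _ in range(epochs):
        # Inner step: adapt v on a minibatch by gradient ascent.
        batch = rng.sample(data, bs)
        g = grad_v(w, batch, v)
        v_ad = [vi + alpha * gi for vi, gi in zip(v, g)]
        # Outer step: update v on a fresh batch, using the adapted v'
        # (first-order approximation of the meta-gradient).
        batch2 = rng.sample(data, bs)
        g2 = grad_v(w, batch2, v_ad)
        v = [vi + beta * gi for vi, gi in zip(v, g2)]
        v = [min(max(vi, -eps), eps) for vi in v]   # project onto l_inf ball
    return v
```

On deep networks one would replace `grad_v` with backpropagation through the model and keep the projection onto the $\epsilon$-ball; the structure of the two nested updates is unchanged.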
+
+Although some works (Yang et al. 2021; Yuan et al. 2021) may seem similar to our method, our goal is completely different. (Yuan et al. 2021) proposes a meta-learning-like architecture to improve the cross-model transferability of adversarial examples, while (Yang et al. 2021) devises an approach that learns an optimizer, parameterized by a recurrent neural network, to generate adversarial attacks. Both works are distinct from the meta adversarial perturbations considered in this paper, as we seek a single perturbation that can efficiently adapt to a new data point and fool the classifier with high probability.
+
+## 3 Meta Adversarial Perturbations
+
+In this section we formalize the notion of meta adversarial perturbations (MAPs) and propose an algorithm for computing such perturbations. Our goal is to train a perturbation that becomes a more effective attack on new data points within one- or few-step updates. How can we find a perturbation that achieves such fast adaptation? Inspired by model-agnostic meta-learning (MAML) (Finn, Abbeel, and Levine 2017), we formulate this problem analogously. Since the perturbation will be updated on new data using a gradient-based iterative method, we aim to learn the perturbation in such a way that this iterative method can rapidly adapt it to new data within one or a few iterations.
+
+Formally, we consider a meta adversarial perturbation $v$ , which is randomly initialized, and a trained model $f$ parameterized by $\theta$ . $L$ denotes the cross-entropy loss and $\mathbb{D}$ denotes the dataset used for generating a MAP. When adapting to a batch of data points $\mathbb{B} = \left\{ {{x}^{\left( i\right) },{y}^{\left( i\right) }}\right\} \sim \mathbb{D}$ , the perturbation $v$ becomes ${v}^{\prime }$ . Our method seeks a single meta perturbation $v$ such that, after adapting to new data points within a few iterations, it fools the model on almost all data points with high probability. That is, we look for a perturbation $v$ such that
+
+$$
+f\left( {x + {v}^{\prime }}\right) \neq f\left( x\right) \text{for "most"}x \sim \mu \text{.} \tag{4}
+$$
+
+| Attack \\ Model | Set | VGG11 | VGG19 | ResNet18 | ResNet50 | DenseNet121 | SENet | MobileNetV2 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Clean | $\mathbb{D}$ | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% |
+| Clean | $\mathbb{T}$ | 92.6% | 93.7% | 95.3% | 95.4% | 95.4% | 95.8% | 94.1% |
+| FGSM | $\mathbb{D}$ | 28.0% | 53.0% | 47.0% | 29.0% | 41.0% | 40.0% | 30.0% |
+| FGSM | $\mathbb{T}$ | 29.3% | 49.4% | 41.4% | 35.7% | 35.5% | 38.2% | 32.8% |
+| UAP | $\mathbb{D}$ | 99.0% | 98.0% | 58.0% | 32.0% | 33.0% | 42.0% | 42.0% |
+| UAP | $\mathbb{T}$ | 88.9% | 83.3% | 45.8% | 33.5% | 25.5% | 32.5% | 45.8% |
+| MAP | $\mathbb{D}$ | 22.0% | 31.0% | 21.0% | 14.0% | 12.0% | 18.0% | 13.0% |
+| MAP | $\mathbb{T}$ | 22.0% | 36.1% | 20.3% | 17.4% | 20.8% | 17.6% | 16.3% |
+
+Table 1: The accuracy against different attacks on the set $\mathbb{D}$ , and the test set $\mathbb{T}$ (lower means better attacks).
+
+We call such a perturbation *meta* since it can quickly adapt to new data points sampled from the data distribution $\mu$ and cause those points to be misclassified by the model with high probability. Notice that a MAP is image-agnostic, as a single perturbation can adapt to all new data.
+
+In our method, we use one- or multi-step gradient ascent to compute the updated perturbation ${v}^{\prime }$ on new data points. For instance, a one-step gradient ascent update of the perturbation is:
+
+$$
+{v}^{\prime } = v + \alpha {\nabla }_{v}L\left( {{f}_{\theta },\mathbb{B} + v}\right) , \tag{5}
+$$
+
+where the step size $\alpha$ is a hyperparameter, playing the role of $\gamma$ in Eq. (2). For notational simplicity, we consider a one-step update in the rest of this section, but extending our method to multi-step updates is straightforward.
+
+The meta perturbation is updated by maximizing the loss with respect to $v$ evaluated on a batch of new data points ${\mathbb{B}}^{\prime }$ with the addition of the updated perturbation ${v}^{\prime }$ . More precisely, the meta-objective can be described as:
+
+$$
+\mathop{\max }\limits_{v}\mathop{\sum }\limits_{{\mathbb{B} \sim \mathbb{D}}}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + {v}^{\prime }}\right) = \mathop{\max }\limits_{v}\mathop{\sum }\limits_{{\mathbb{B} \sim \mathbb{D}}}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + \left( {v + \alpha {\nabla }_{v}L\left( {{f}_{\theta },\mathbb{B} + v}\right) }\right) }\right) . \tag{6}
+$$
+
+Note that the meta-optimization is performed over the perturbation $v$ , whereas the objective is computed using the adapted perturbation ${v}^{\prime }$ . In effect, our proposed method aims to optimize the meta adversarial perturbation such that after one or a small number of gradient ascent updates on new data points, it will produce maximally effective adversarial perturbations, i.e. attacks with a high success rate.
+
+We use stochastic gradient ascent to optimize the meta-objective:
+
+$$
+v \leftarrow v + \beta {\nabla }_{v}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + {v}^{\prime }}\right) , \tag{7}
+$$
+
+where $\beta$ is the meta step size. Algorithm 1 outlines the key steps of MAP. In the projection step, MAP projects the updated perturbation back onto the feasible set whenever it exceeds the maximum allowable amount indicated by $\epsilon$ . A smaller $\epsilon$ makes an attack less visible to humans.
+
+The meta-gradient update involves a gradient through a gradient: it requires computing Hessian-vector products via an additional backward pass through $v$ . Since backpropagating through many inner gradient steps is computation- and memory-intensive, a plethora of works (Li et al. 2017; Nichol, Achiam, and Schulman 2018; Zhou, Wu, and Li 2018; Behl, Baydin, and Torr 2019; Raghu et al. 2019; Rajeswaran et al. 2019; Zintgraf et al. 2019) have tried to mitigate this cost since MAML (Finn, Abbeel, and Levine 2017) was proposed. We believe the computational efficiency of MAP can benefit from those advanced methods.
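To make the cost explicit, differentiating the meta-objective through the inner update of Eq. (5) by the chain rule gives (our expansion, following the standard MAML derivation):

$$
{\nabla }_{v}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + {v}^{\prime }}\right) = \left( {I + \alpha {\nabla }_{v}^{2}L\left( {{f}_{\theta },\mathbb{B} + v}\right) }\right) {\nabla }_{{v}^{\prime }}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + {v}^{\prime }}\right) ,
$$

where the Hessian term ${\nabla }_{v}^{2}L$ is exactly what first-order approximations such as (Nichol, Achiam, and Schulman 2018) drop by treating ${v}^{\prime }$ as independent of $v$ .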
+
+## 4 Experiments
+
+We conduct experiments to evaluate the performance of MAP using the following default settings.
+
+We assess MAP on the CIFAR-10 (Krizhevsky, Hinton et al. 2009) test set $\mathbb{T}$ , which contains 10,000 images. We follow the experimental protocol of (Moosavi-Dezfooli et al. 2017), where the set $\mathbb{D}$ used to compute the perturbation contains 100 images from the training set, i.e. on average 10 images per class. The maximum allowable perturbation $\epsilon$ is set to $8/{255}$ measured by ${l}_{\infty }$ distance. When computing a MAP, we use one gradient update for Eq. (5) with a fixed step size $\alpha = \epsilon = 8/{255}$ , and use the fast gradient sign method (FGSM) in Eq. (1) as the optimizer. We use seven trained models to measure the effectiveness of MAP: VGG11, VGG19 (Simonyan and Zisserman 2014), ResNet18, ResNet50 (He et al. 2016), DenseNet121 (Huang et al. 2017), SENet (Hu, Shen, and Sun 2018), and MobileNetV2 (Sandler et al. 2018). We consider FGSM (Goodfellow, Shlens, and Szegedy 2014) and the universal adversarial perturbation (UAP) (Moosavi-Dezfooli et al. 2017) as our baselines, implemented with the same hyperparameters where applicable.
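For reference, the evaluation protocol above can be collected into a single configuration sketch. The keys and structure below are our own summary, not the authors' code:

```python
# Hedged summary of the experimental setup described above.
MAP_EVAL_CONFIG = {
    "dataset": "CIFAR-10",
    "test_set_size": 10_000,          # |T|
    "map_set_size": 100,              # |D|: ~10 images per class
    "epsilon": 8 / 255,               # l_inf perturbation budget
    "alpha": 8 / 255,                 # step size of the one-step adaptation
    "inner_optimizer": "FGSM",        # Eq. (1) used as the adaptation step
    "inner_steps": 1,
    "models": ["VGG11", "VGG19", "ResNet18", "ResNet50",
               "DenseNet121", "SENet", "MobileNetV2"],
    "baselines": ["FGSM", "UAP"],
}
```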
+
+### 4.1 Non-targeted Attacks
+
+First, we evaluate the performance of different attacks on various models. For FGSM and MAP, we compute the data-dependent perturbation for each image using one-step gradient ascent (see Eq. (1)) to create non-targeted attacks. For the UAP, we follow the original setting of (Moosavi-Dezfooli et al. 2017) and add the UAP to the test set $\mathbb{T}$ without any adaptation.
+
+The results are shown in Table 1. Each result is reported on the set $\mathbb{D}$ , which is used to compute the MAP and UAP, as well as on the test set $\mathbb{T}$ ; the test set is not used when computing either perturbation. As the table shows, MAP significantly outperforms the baselines: for all networks, it achieves roughly a ${10} - {20}\%$ improvement. These results are somewhat surprising, as they show that merely using a MAP as the initial perturbation for generating adversarial examples allows a one-step attack to reduce robustness far more than the naive FGSM. Moreover, such a perturbation is image-agnostic, i.e. a single MAP works well on all test data. We notice that for some models, the UAP performs poorly when only 100 data points are used to generate the perturbation. This is consistent with the earlier finding that the UAP requires a large amount of data to achieve a high fooling ratio (Moosavi-Dezfooli et al. 2017).
+
+| Source \\ Target | VGG11 | VGG19 | ResNet18 | ResNet50 | DenseNet121 | SENet | MobileNetV2 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| VGG11 | $\mathbf{{22.0}\% }$ | 37.2% | 24.9% | 19.6% | 24.2% | 20.5% | 20.2% |
+| VGG19 | 22.9% | 36.1% | 24.5% | 18.3% | 22.0% | 19.2% | 18.3% |
+| ResNet18 | 22.7% | 33.6% | $\mathbf{{20.3}\% }$ | 17.1% | 21.6% | 18.3% | 17.8% |
+| ResNet50 | 23.6% | 35.6% | 23.0% | 17.4% | 20.8% | 19.3% | 18.1% |
+| DenseNet121 | 23.1% | 32.7% | 21.3% | 16.1% | 20.8% | 18.1% | 16.9% |
+| SENet | 22.5% | 34.9% | 23.7% | 17.5% | 20.8% | 17.6% | 17.5% |
+| MobileNetV2 | 23.7% | 35.3% | 22.2% | 16.7% | 20.7% | 18.0% | 16.3% |
+| FGSM | 29.3% | 49.4% | 41.4% | 35.7% | 35.5% | 38.2% | 32.8% |
+
+Table 2: Transferability of the meta adversarial perturbations across different networks (with one-step update on the target model). The percentage indicates the accuracy on the test set $\mathbb{T}$ . The row headers indicate the architectures where the meta perturbations are generated (source), and the column headers represent the models where the accuracies are reported (target). The bottom row shows the accuracies of FGSM on the target models without using meta perturbation at initialization.
+
+### 4.2 Transferability in Meta Perturbations
+
+We take a step further and investigate the transferability of MAPs, i.e. whether meta perturbations computed on one architecture remain effective on another. Table 2 shows a matrix summarizing the transferability of MAP across seven models. For each architecture, we compute a meta perturbation and report the accuracy on all other architectures, with a one-step update on the target model. The bottom row shows the accuracies without using a MAP at initialization. As shown in Table 2, MAPs generalize very well across models. For instance, the meta perturbation generated on DenseNet121 achieves performance comparable to perturbations computed specifically for the other models. In practice, when crafting an adversarial example for some other neural network, using the meta perturbation computed on DenseNet121 at initialization leads to a stronger attack than the from-scratch method. The meta perturbations are therefore not only image-agnostic but also model-agnostic, generalizing to a wide range of deep neural networks.
+
+### 4.3 Ablation Study
+
+While the above meta perturbations are computed on a set $\mathbb{D}$ containing 100 images from the training set, we now examine the influence of the size $\left| \mathbb{D}\right|$ on the effectiveness of MAP, using ResNet18 to compute the perturbation. The results, shown in Fig. 1, indicate that a larger $\mathbb{D}$ leads to better performance. Surprisingly, even a meta perturbation computed from only 10 images still causes robustness to drop by around 15% compared with the naive FGSM. This verifies that meta perturbations have a remarkable generalization ability over unseen data points and can be computed from a very small set of training data.
+
+
+
+Figure 1: Accuracy on the test set $\mathbb{T}$ versus the number of images in $\mathbb{D}$ for learning MAP.
+
+## 5 Conclusion and Future Work
+
+In this work, we show the existence and realization of a meta adversarial perturbation (MAP), an initial perturbation that can be added to the data for generating more effective adversarial attacks through a one-step gradient ascent. We then propose an algorithm to find such perturbations and conduct extensive experiments to demonstrate their superior performance. For future work, we plan to extend this idea to time-efficient adversarial training (Shafahi et al. 2019; Wong, Rice, and Kolter 2019; Zhang et al. 2019; Zheng et al. 2020). Also, evaluating our attack on robust pre-trained models or different data modalities is another research direction.
+
+References
+
+Akhtar, N.; and Mian, A. 2018. Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access.
+
+Andrychowicz, M.; Denil, M.; Gomez, S.; Hoffman, M. W.; Pfau, D.; Schaul, T.; Shillingford, B.; and De Freitas, N. 2016. Learning to learn by gradient descent by gradient descent. In Advances in neural information processing systems (NeurIPS).
+
+Behl, H. S.; Baydin, A. G.; and Torr, P. H. 2019. Alpha MAML: Adaptive model-agnostic meta-learning. arXiv preprint arXiv:1905.07435.
+
+Bengio, Y.; Bengio, S.; and Cloutier, J. 1990. Learning a synaptic learning rule. Citeseer.
+
+Biggio, B.; and Roli, F. 2018. Wild patterns: Ten years after the rise of adversarial machine learning. Pattern Recognition, 84: 317-331.
+
+Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp). IEEE.
+
+Chakraborty, A.; Alam, M.; Dey, V.; Chattopadhyay, A.; and Mukhopadhyay, D. 2018. Adversarial attacks and defences: A survey. arXiv preprint arXiv:1810.00069.
+
+Chen, P.-Y.; Sharma, Y.; Zhang, H.; Yi, J.; and Hsieh, C.-J. 2018. EAD: elastic-net attacks to deep neural networks via adversarial examples. In Proceedings of the AAAI Conference on Artificial Intelligence, 10-17.
+
+Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting adversarial attacks with momentum. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).
+
+Finn, C.; Abbeel, P.; and Levine, S. 2017. Model-agnostic meta-learning for fast adaptation of deep networks. In International Conference on Machine Learning (ICML). PMLR.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).
+
+Hu, J.; Shen, L.; and Sun, G. 2018. Squeeze-and-excitation networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).
+
+Huang, G.; Liu, Z.; Van Der Maaten, L.; and Weinberger, K. Q. 2017. Densely connected convolutional networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).
+
+Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
+
+Kurakin, A.; Goodfellow, I.; Bengio, S.; et al. 2016. Adversarial examples in the physical world.
+
+Li, K.; and Malik, J. 2016. Learning to optimize. arXiv preprint arXiv:1606.01885.
+
+Li, Z.; Zhou, F.; Chen, F.; and Li, H. 2017. Meta-sgd: Learning to learn quickly for few-shot learning. arXiv preprint arXiv:1707.09835.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
+
+Moosavi-Dezfooli, S.-M.; Fawzi, A.; Fawzi, O.; and Frossard, P. 2017. Universal adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).
+
+Moosavi-Dezfooli, S.-M.; Fawzi, A.; and Frossard, P. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).
+
+Nichol, A.; Achiam, J.; and Schulman, J. 2018. On first-order meta-learning algorithms. arXiv preprint arXiv:1803.02999.
+
+Raghu, A.; Raghu, M.; Bengio, S.; and Vinyals, O. 2019. Rapid learning or feature reuse? Towards understanding the effectiveness of MAML. arXiv preprint arXiv:1909.09157.
+
+Rajeswaran, A.; Finn, C.; Kakade, S. M.; and Levine, S. 2019. Meta-Learning with Implicit Gradients. Advances in Neural Information Processing Systems (NeurIPS).
+
+Sandler, M.; Howard, A.; Zhu, M.; Zhmoginov, A.; and Chen, L.-C. 2018. Mobilenetv2: Inverted residuals and linear bottlenecks. In Proceedings of the IEEE conference on computer vision and pattern recognition (CVPR).
+
+Schmidhuber, J. 1987. Evolutionary principles in self-referential learning, or on learning how to learn: the meta-meta-... hook. Ph.D. thesis, Technische Universität München.
+
+Shafahi, A.; Najibi, M.; Ghiasi, M. A.; Xu, Z.; Dickerson, J.; Studer, C.; Davis, L. S.; Taylor, G.; and Goldstein, T. 2019. Adversarial training for free! Advances in Neural Information Processing Systems (NeurIPS).
+
+Simonyan, K.; and Zisserman, A. 2014. Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
+
+Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2013. Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
+
+Wong, E.; Rice, L.; and Kolter, J. Z. 2019. Fast is better than free: Revisiting adversarial training. In International Conference on Learning Representations (ICLR).
+
+Xie, C.; Zhang, Z.; Zhou, Y.; Bai, S.; Wang, J.; Ren, Z.; and Yuille, A. L. 2019. Improving transferability of adversarial examples with input diversity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
+
+Yang, X.; Dong, Y.; Xiang, W.; Pang, T.; Su, H.; and Zhu, J. 2021. Model-Agnostic Meta-Attack: Towards Reliable Evaluation of Adversarial Robustness. arXiv preprint arXiv:2110.08256.
+
+Yuan, Z.; Zhang, J.; Jia, Y.; Tan, C.; Xue, T.; and Shan, S. 2021. Meta gradient adversarial attack. In Proceedings of the IEEE/CVF International Conference on Computer Vision (ICCV).
+
+Zhang, D.; Zhang, T.; Lu, Y.; Zhu, Z.; and Dong, B. 2019. You Only Propagate Once: Accelerating Adversarial Training via Maximal Principle. Advances in Neural Information Processing Systems (NeurIPS).
+
+Zheng, H.; Zhang, Z.; Gu, J.; Lee, H.; and Prakash, A. 2020. Efficient adversarial training with transferable adversarial examples. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
+
+Zhou, F.; Wu, B.; and Li, Z. 2018. Deep meta-learning: Learning to learn in the concept space. arXiv preprint arXiv:1802.03596.
+
+Zintgraf, L.; Shiarli, K.; Kurin, V.; Hofmann, K.; and Whiteson, S. 2019. Fast context adaptation via meta-learning. In International Conference on Machine Learning (ICML). PMLR.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/gP4WxGjNd3k/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/gP4WxGjNd3k/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2fb6d930583c0f00696efdb7cc80b3327feea47f
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/gP4WxGjNd3k/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,229 @@
+§ META ADVERSARIAL PERTURBATIONS
+
+Anonymous
+
+§ ABSTRACT
+
+A plethora of attack methods have been proposed to generate adversarial examples, among which iterative methods have been demonstrated to find strong attacks. However, computing an adversarial perturbation for a new data point requires solving a time-consuming optimization problem from scratch, and generating a stronger attack normally requires updating a data point with more iterations. In this paper, we show the existence of a meta adversarial perturbation (MAP), a better initialization that causes natural images to be misclassified with high probability after only a one-step gradient ascent update, and we propose an algorithm for computing such perturbations. We conduct extensive experiments, and the empirical results demonstrate that state-of-the-art deep neural networks are vulnerable to meta perturbations. We further show that these perturbations are not only image-agnostic but also model-agnostic, as a single perturbation generalizes well across unseen data points and different neural network architectures.
+
+§ 1 INTRODUCTION
+
+Deep neural networks (DNNs) have achieved remarkable performance in many applications, including computer vision, natural language processing, speech, and robotics. However, DNNs are vulnerable to adversarial examples (Szegedy et al. 2013; Goodfellow, Shlens, and Szegedy 2014), i.e. examples intentionally designed to be misclassified by the models while remaining nearly imperceptible to human eyes. In recent years, many methods have been proposed to craft such malicious examples (Szegedy et al. 2013; Goodfellow, Shlens, and Szegedy 2014; Moosavi-Dezfooli, Fawzi, and Frossard 2016; Kurakin et al. 2016; Madry et al. 2017; Carlini and Wagner 2017; Chen et al. 2018), among which iterative methods such as PGD (Madry et al. 2017), BIM (Kurakin et al. 2016), and MIM (Dong et al. 2018) have proven effective at crafting adversarial attacks with a high success rate. Nevertheless, crafting a stronger attack with iterative methods usually requires updating a data point through more gradient ascent steps. This time-consuming process gives rise to a question: is it possible to find a single perturbation that can serve as a good meta initialization, such that after a few updates it becomes an effective attack for different data points?
+
+Inspired by the philosophy of meta-learning (Schmidhuber 1987; Bengio, Bengio, and Cloutier 1990; Andrychowicz et al. 2016; Li and Malik 2016; Finn, Abbeel, and Levine 2017), we show the existence of a quasi-imperceptible meta adversarial perturbation (MAP) that causes natural images to be misclassified with high probability after only a one-step gradient ascent update. In meta-learning, the goal of the trained model is to quickly adapt to a new task with a small amount of data. By analogy, the goal of the meta perturbation is to rapidly adapt to a new data point within a few iterations. The key idea underlying our method is to train an initial perturbation such that it has maximal performance on new data after being updated through one or a few gradient steps. We then propose a simple algorithm for seeking such perturbations, which is plug-and-play and compatible with any gradient-based iterative adversarial attack method. By adding a meta perturbation at initialization, we can craft a more effective adversarial example without multi-step updates.
+
+We summarize our main contributions as follows:
+
+ * We show the existence of image-agnostic learnable meta adversarial perturbations for efficient robustness evaluation of state-of-the-art deep neural networks.
+
+ * We propose an algorithm (MAP) to find meta perturbations, such that a small number of gradient ascent updates will suffice to be a strong attack on a new data point.
+
+ * We show that our meta perturbations have remarkable generalizability, as a perturbation computed from a small number of training data is able to adapt and fool the unseen data with high probability.
+
+ * We demonstrate that meta perturbations are not only image-agnostic, but also model-agnostic. Such perturbations generalize well across a wide range of deep neural networks.
+
+§ 2 RELATED WORKS
+
+There is a large body of works on adversarial attacks. Please refer to (Chakraborty et al. 2018; Akhtar and Mian 2018; Biggio and Roli 2018) for comprehensive surveys. Here, we discuss the works most closely related to ours.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+§ 2.1 DATA-DEPENDENT ADVERSARIAL PERTURBATIONS
+
+Despite the impressive performance of deep neural networks on many domains, these classifiers are shown to be vulnerable to adversarial perturbations (Szegedy et al. 2013; Goodfellow, Shlens, and Szegedy 2014). Generating an adversarial example requires solving an optimization problem (Moosavi-Dezfooli, Fawzi, and Frossard 2016; Carlini and Wagner 2017) or performing multiple steps of gradient ascent (Goodfellow, Shlens, and Szegedy 2014; Kurakin et al. 2016; Madry et al. 2017; Chen et al. 2018) for each data point independently; among these, the iterative methods have been shown to craft attacks with a high success rate. Consider a data point $x$ , a corresponding label $y$ , and a classifier $f$ parametrized by $\theta$ . Let $L$ denote the loss function for the classification task, which is usually the cross-entropy loss. FGSM (Goodfellow, Shlens, and Szegedy 2014) uses gradient information to compute, in one step, the adversarial perturbation that maximizes the loss:
+
+$$
+{x}^{\prime } = x + \epsilon \operatorname{sign}\left( {{\nabla }_{x}L\left( {{f}_{\theta },x,y}\right) }\right) , \tag{1}
+$$
+
+where ${x}^{\prime }$ is the adversarial example and $\epsilon$ is the maximum allowable perturbation measured by ${l}_{\infty }$ distance. This simple one-step method is extended by several follow-up works (Kurakin et al. 2016; Madry et al. 2017; Dong et al. 2018; Xie et al. 2019), which propose iterative methods to improve the success rate of the adversarial attack. More specifically, those methods generate adversarial examples through multistep updates, which can be described as:
+
+$$
+{x}^{t + 1} = {\Pi }_{\epsilon }\left( {{x}^{t} + \gamma \operatorname{sign}\left( {{\nabla }_{x}L\left( {{f}_{\theta },{x}^{t},y}\right) }\right) }\right) , \tag{2}
+$$
+
+where ${\Pi }_{\epsilon }$ projects the updated perturbations back onto the feasible set whenever they exceed the maximum allowable amount indicated by $\epsilon$ . Here ${x}^{0} = x$ and $\gamma = \epsilon /T$ , where $T$ is the number of iterations. To generate a malicious example that is misclassified by the model with high probability, the perturbation needs to be updated over more iterations. Since computation time grows linearly with the number of iterations, crafting a strong attack takes correspondingly longer.
+
+§ 2.2 UNIVERSAL ADVERSARIAL PERTURBATIONS
+
+Instead of solving a data-dependent optimization problem to craft adversarial examples, (Moosavi-Dezfooli et al. 2017) shows the existence of a universal adversarial perturbation (UAP). Such a perturbation is image-agnostic and quasi-imperceptible, as a single perturbation can fool the classifier $f$ on most data points sampled from the data distribution $\mu$ . That is, they seek a perturbation $v$ such that
+
+$$
+f\left( {x + v}\right) \neq f\left( x\right) \text{ for "most" }x \sim \mu \text{ . } \tag{3}
+$$
+
+In other words, perturbing a new data point merely requires adding the precomputed UAP to it, without solving a data-dependent optimization problem or computing gradients from scratch. However, the effectiveness of a UAP depends heavily on the amount of data used to compute it, and a large amount of data is required to achieve a high fooling ratio. In addition, although the UAP demonstrates a certain degree of transferability, its fooling ratios on different networks, which are normally lower than ${50}\%$ , may not be high enough for an attacker. This problem is particularly pronounced when the architecture of the target model differs substantially from that of the surrogate model (Moosavi-Dezfooli et al. 2017).
+
+Algorithm 1: Meta Adversarial Perturbation (MAP)
+
+Input: $\mathbb{D},\alpha ,\beta ,{f}_{\theta },L,{\Pi }_{\epsilon }$
+
+Output: Meta adversarial perturbations $v$
+
+Randomly initialize $v$
+
+while not done do
+
+  for minibatch $\mathbb{B} = \left\{ {{x}^{\left( i\right) },{y}^{\left( i\right) }}\right\} \sim \mathbb{D}$ do
+
+    Evaluate ${\nabla }_{v}L\left( {{f}_{\theta },\mathbb{B} + v}\right)$ using minibatch $\mathbb{B}$ with perturbation $v$
+
+    Compute the adapted perturbation with gradient ascent: ${v}^{\prime } = v + \alpha {\nabla }_{v}L\left( {{f}_{\theta },\mathbb{B} + v}\right)$
+
+    Sample a batch of data ${\mathbb{B}}^{\prime }$ from $\mathbb{D}$
+
+    Evaluate ${\nabla }_{v}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + {v}^{\prime }}\right)$ using minibatch ${\mathbb{B}}^{\prime }$ with adapted perturbation ${v}^{\prime }$
+
+    Update $v \leftarrow v + \beta {\nabla }_{v}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + {v}^{\prime }}\right)$
+
+    Project $v \leftarrow {\Pi }_{\epsilon }\left( v\right)$
+
+  end
+
+end
+
+return $v$
+
+Although some works (Yang et al. 2021; Yuan et al. 2021) may seem similar to our method, our goal is completely different. (Yuan et al. 2021) proposes a meta-learning-like architecture to improve the cross-model transferability of adversarial examples, while (Yang et al. 2021) devises an approach that learns an optimizer, parameterized by a recurrent neural network, to generate adversarial attacks. Both works are distinct from the meta adversarial perturbations considered in this paper, as we seek a single perturbation that can efficiently adapt to a new data point and fool the classifier with high probability.
+
+§ 3 META ADVERSARIAL PERTURBATIONS
+
+In this section we formalize the notion of meta adversarial perturbations (MAPs) and propose an algorithm for computing such perturbations. Our goal is to train a perturbation that becomes a more effective attack on new data points within one- or few-step updates. How can we find a perturbation that achieves such fast adaptation? Inspired by model-agnostic meta-learning (MAML) (Finn, Abbeel, and Levine 2017), we formulate this problem analogously. Since the perturbation will be updated on new data using a gradient-based iterative method, we aim to learn the perturbation in such a way that this iterative method can rapidly adapt it to new data within one or a few iterations.
+
+Formally, we consider a meta adversarial perturbation $v$ , which is randomly initialized, and a trained model $f$ parameterized by $\theta$ . $L$ denotes a cross-entropy loss and $\mathbb{D}$ denotes the dataset used for generating a MAP. When adapting to a batch of data points $\mathbb{B} = \left\{ {{x}^{\left( i\right) },{y}^{\left( i\right) }}\right\} \sim \mathbb{D}$ , the perturbation $v$ becomes ${v}^{\prime }$ . Our method seeks a single meta perturbation $v$ such that, after adapting to new data points within a few iterations, it fools the model on almost all data points with high probability. That is, we look for a perturbation $v$ such that
+
+$$
+f\left( {x + {v}^{\prime }}\right) \neq f\left( x\right) \text{ for "most" }x \sim \mu \text{ . } \tag{4}
+$$
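Criterion (4) can be checked empirically by comparing predictions on clean and perturbed inputs. A minimal sketch, assuming a `predict` function that returns class labels for a batch (the toy model below is illustrative, not from the paper):

```python
import numpy as np

def fooling_rate(predict, X, V):
    """Fraction of samples with f(x + v') != f(x), as in Eq. (4)."""
    return float(np.mean(predict(X + V) != predict(X)))

# Toy "model": classify a vector by the sign of its sum.
predict = lambda X: (X.sum(axis=1) > 0).astype(int)
X = np.array([[1.0, 0.5], [-2.0, 0.3]])
V = np.array([[-3.0, 0.0], [0.0, 0.0]])   # adapted perturbations; only the first flips a label
print(fooling_rate(predict, X, V))        # -> 0.5
```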
+
+| Attack | Set | VGG11 | VGG19 | ResNet18 | ResNet50 | DenseNet121 | SENet | MobileNetV2 |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Clean | $\mathbb{D}$ | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% | 100.0% |
+| Clean | $\mathbb{T}$ | 92.6% | 93.7% | 95.3% | 95.4% | 95.4% | 95.8% | 94.1% |
+| FGSM | $\mathbb{D}$ | 28.0% | 53.0% | 47.0% | 29.0% | 41.0% | 40.0% | 30.0% |
+| FGSM | $\mathbb{T}$ | 29.3% | 49.4% | 41.4% | 35.7% | 35.5% | 38.2% | 32.8% |
+| UAP | $\mathbb{D}$ | 99.0% | 98.0% | 58.0% | 32.0% | 33.0% | 42.0% | 42.0% |
+| UAP | $\mathbb{T}$ | 88.9% | 83.3% | 45.8% | 33.5% | 25.5% | 32.5% | 45.8% |
+| MAP | $\mathbb{D}$ | 22.0% | 31.0% | 21.0% | 14.0% | 12.0% | 18.0% | 13.0% |
+| MAP | $\mathbb{T}$ | 22.0% | 36.1% | 20.3% | 17.4% | 20.8% | 17.6% | 16.3% |
+
+Table 1: The accuracy against different attacks on the set $\mathbb{D}$ , and the test set $\mathbb{T}$ (lower means better attacks).
+
+We call such a perturbation *meta*, since it can quickly adapt to new data points sampled from the data distribution $\mu$ and cause them to be misclassified by the model with high probability. Note that a MAP is image-agnostic: a single perturbation can adapt to all new data.
+
+In our method, we use one- or multi-step gradient ascent to compute the updated perturbation ${v}^{\prime }$ on new data points. For instance, using one-step gradient ascent to update the perturbation is as follows:
+
+$$
+{v}^{\prime } = v + \alpha {\nabla }_{v}L\left( {{f}_{\theta },\mathbb{B} + v}\right) , \tag{5}
+$$
+
+where the step size $\alpha$ is a hyperparameter, which can be seen as $\gamma$ in Eq. (2). For simplicity of notation, we will consider a one-step update for the rest of this section, but it is straightforward to extend our method to multi-step updates.
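For a model whose input gradient is available in closed form, the adaptation step of Eq. (5) is a single gradient-ascent update of the shared perturbation. A sketch on a toy linear softmax classifier (pure NumPy stands in for autodiff; all names and the model are illustrative, not the paper's setup):

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def mean_ce(W, X, y):
    """Mean cross-entropy of the linear model f(x) = softmax(W x)."""
    P = softmax(X @ W.T)
    return float(-np.mean(np.log(P[np.arange(len(y)), y])))

def input_grad_ce(W, X, y):
    """Closed-form gradient of the mean cross-entropy w.r.t. a
    perturbation shared by the whole batch: mean_i W^T (p_i - onehot(y_i))."""
    P = softmax(X @ W.T)
    P[np.arange(len(y)), y] -= 1.0
    return (P @ W).mean(axis=0)

def adapt(v, W, X, y, alpha):
    """Eq. (5): v' = v + alpha * grad_v L(f_theta, B + v)."""
    return v + alpha * input_grad_ce(W, X + v, y)

rng = np.random.default_rng(0)
W = rng.normal(size=(3, 5))               # 3 classes, 5 input dims
X, y = rng.normal(size=(8, 5)), rng.integers(0, 3, size=8)
v_prime = adapt(np.zeros(5), W, X, y, alpha=8 / 255)
print(mean_ce(W, X + v_prime, y) > mean_ce(W, X, y))   # True: the loss increased
```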
+
+The meta perturbation is updated by maximizing the loss with respect to $v$ evaluated on a batch of new data points ${\mathbb{B}}^{\prime }$ with the addition of the updated perturbation ${v}^{\prime }$ . More precisely, the meta-objective can be described as:
+
+$$
+\mathop{\max }\limits_{v}\mathop{\sum }\limits_{{\mathbb{B} \sim \mathbb{D}}}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + {v}^{\prime }}\right)  = \mathop{\max }\limits_{v}\mathop{\sum }\limits_{{\mathbb{B} \sim \mathbb{D}}}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + \left( {v + \alpha {\nabla }_{v}L\left( {{f}_{\theta },\mathbb{B} + v}\right) }\right) }\right) . \tag{6}
+$$
+
+Note that the meta-optimization is performed over the perturbation $v$ , whereas the objective is computed using the adapted perturbation ${v}^{\prime }$ . In effect, our proposed method aims to optimize the meta adversarial perturbation such that after one or a small number of gradient ascent updates on new data points, it will produce maximally effective adversarial perturbations, i.e. attacks with a high success rate.
+
+We use stochastic gradient ascent to optimize the meta-objective:
+
+$$
+v \leftarrow v + \beta {\nabla }_{v}L\left( {{f}_{\theta },{\mathbb{B}}^{\prime } + {v}^{\prime }}\right) , \tag{7}
+$$
+
+where $\beta$ is the meta step size. Algorithm 1 outlines the key steps of MAP. At line 9, MAP projects the updated perturbations onto the feasible set if they exceed the maximum allowable amount indicated by $\epsilon$ . A smaller $\epsilon$ makes an attack less visible to humans.
+
+The meta-gradient update involves a gradient through a gradient, which requires computing Hessian-vector products with an additional backward pass through $v$ . Since back-propagating through many inner gradient steps is computation- and memory-intensive, a plethora of works (Li et al. 2017; Nichol, Achiam, and Schulman 2018; Zhou, Wu, and Li 2018; Behl, Baydin, and Torr 2019; Raghu et al. 2019; Rajeswaran et al. 2019; Zintgraf et al. 2019) have tried to solve this problem since MAML (Finn, Abbeel, and Levine 2017) was proposed. We believe that the computational efficiency of MAP can benefit from those advanced methods.
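Dropping the second-order term (a first-order approximation, in the spirit of several of the works cited above) yields a simple sketch of the whole procedure; the toy linear model and all names are illustrative, not the paper's implementation:

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def input_grad_ce(W, X, y):
    """Gradient of mean cross-entropy w.r.t. a shared input perturbation."""
    P = softmax(X @ W.T)
    P[np.arange(len(y)), y] -= 1.0
    return (P @ W).mean(axis=0)

def train_map(W, X, y, eps, alpha, beta, epochs=5, batch=8, seed=0):
    """First-order sketch: inner adaptation (Eq. 5), meta update (Eq. 7)
    evaluated at the adapted perturbation, then projection onto the
    l_inf ball of radius eps (Algorithm 1, line 9)."""
    rng = np.random.default_rng(seed)
    v = np.zeros(X.shape[1])
    for _ in range(epochs):
        idx = rng.permutation(len(X))
        for s in range(0, len(X) - batch, batch):
            B, B2 = idx[s:s + batch], idx[s + batch:s + 2 * batch]
            v_prime = v + alpha * input_grad_ce(W, X[B] + v, y[B])    # Eq. (5)
            v = v + beta * input_grad_ce(W, X[B2] + v_prime, y[B2])   # Eq. (7)
            v = np.clip(v, -eps, eps)                                 # Pi_eps
    return v

rng = np.random.default_rng(1)
W = rng.normal(size=(3, 5))
X, y = rng.normal(size=(64, 5)), rng.integers(0, 3, size=64)
v = train_map(W, X, y, eps=0.5, alpha=0.1, beta=0.1)
print(float(np.abs(v).max()) <= 0.5)      # True: v stays inside the eps-ball
```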
+
+§ 4 EXPERIMENTS
+
+We conduct experiments to evaluate the performance of MAP using the following default settings.
+
+We assess the MAP on the CIFAR-10 (Krizhevsky, Hinton et al. 2009) test set $\mathbb{T}$ , which contains 10,000 images. We follow the experimental protocol proposed by (Moosavi-Dezfooli et al. 2017), where a set $\mathbb{D}$ used to compute the perturbation contains 100 images from the training set, i.e. on average 10 images per class. The maximum allowable perturbation $\epsilon$ is set to $8/{255}$ measured by ${l}_{\infty }$ distance. When computing a MAP, we use one gradient update for Eq. (5) with a fixed step size $\alpha = \epsilon = 8/{255}$ , and use the fast gradient sign method (FGSM) in Eq. (1) as the optimizer. We use seven trained models to measure the effectiveness of MAP, including VGG11, VGG19 (Simonyan and Zisserman 2014), ResNet18, ResNet50 (He et al. 2016), DenseNet121 (Huang et al. 2017), SENet (Hu, Shen, and Sun 2018), and MobileNetV2 (Sandler et al. 2018). We consider FGSM (Goodfellow, Shlens, and Szegedy 2014) and universal adversarial perturbation (UAP) (Moosavi-Dezfooli et al. 2017) as our baselines. We implement baselines using the same hyperparameters when they are applicable.
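The signed one-step update used as the optimizer above reduces to the following rule. A hedged sketch (the gradient here is a placeholder array, not a real model gradient):

```python
import numpy as np

def fgsm_step(v, grad, alpha, eps):
    """One signed gradient-ascent step followed by projection onto the
    l_inf ball of radius eps; alpha = eps = 8/255 in the setup above."""
    return np.clip(v + alpha * np.sign(grad), -eps, eps)

eps = 8 / 255
grad = np.array([0.3, -0.1, 0.0, 2.0])    # placeholder loss gradient
v = fgsm_step(np.zeros(4), grad, alpha=eps, eps=eps)
print(np.allclose(v, [eps, -eps, 0.0, eps]))   # True
```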
+
+§ 4.1 NON-TARGETED ATTACKS
+
+First, we evaluate the performance of different attacks on various models. For FGSM and MAP, we compute a data-dependent perturbation for each image using one-step gradient ascent (see Eq. (1)) to create non-targeted attacks. For UAP, we follow the original setting of (Moosavi-Dezfooli et al. 2017) and add the UAP to the test set $\mathbb{T}$ without any adaptation.
+
+The results are shown in Table 1. Each result is reported on the set $\mathbb{D}$ , which is used to compute the MAP and UAP, as well as on the test set $\mathbb{T}$ . Note that the test set is not used in the computation of either perturbation. As we can see, MAP significantly outperforms the baselines: for all networks, it achieves roughly a ${10} - {20}\%$ improvement. These results are somewhat surprising, as they show that merely using a MAP as the initial perturbation for generating adversarial examples allows the one-step attack to reduce robustness far below that of the naive FGSM. Moreover, such a perturbation is image-agnostic, i.e. a single MAP works well on all test data. We notice that for some models, UAP performs poorly when only 100 images are used to generate the perturbation. This is consistent with the earlier finding that UAP requires a large amount of data to achieve a high fooling ratio (Moosavi-Dezfooli et al. 2017).
+
+| Source \ Target | VGG11 | VGG19 | ResNet18 | ResNet50 | DenseNet121 | SENet | MobileNetV2 |
+| --- | --- | --- | --- | --- | --- | --- | --- |
+| VGG11 | **22.0%** | 37.2% | 24.9% | 19.6% | 24.2% | 20.5% | 20.2% |
+| VGG19 | 22.9% | 36.1% | 24.5% | 18.3% | 22.0% | 19.2% | 18.3% |
+| ResNet18 | 22.7% | 33.6% | **20.3%** | 17.1% | 21.6% | 18.3% | 17.8% |
+| ResNet50 | 23.6% | 35.6% | 23.0% | 17.4% | 20.8% | 19.3% | 18.1% |
+| DenseNet121 | 23.1% | 32.7% | 21.3% | 16.1% | 20.8% | 18.1% | 16.9% |
+| SENet | 22.5% | 34.9% | 23.7% | 17.5% | 20.8% | 17.6% | 17.5% |
+| MobileNetV2 | 23.7% | 35.3% | 22.2% | 16.7% | 20.7% | 18.0% | 16.3% |
+| FGSM | 29.3% | 49.4% | 41.4% | 35.7% | 35.5% | 38.2% | 32.8% |
+
+Table 2: Transferability of the meta adversarial perturbations across different networks (with one-step update on the target model). The percentage indicates the accuracy on the test set $\mathbb{T}$ . The row headers indicate the architectures where the meta perturbations are generated (source), and the column headers represent the models where the accuracies are reported (target). The bottom row shows the accuracies of FGSM on the target models without using meta perturbation at initialization.
+
+§ 4.2 TRANSFERABILITY IN META PERTURBATIONS
+
+We take a step further and investigate the transferability of MAP, i.e. whether meta perturbations computed on one architecture are also effective for another. Table 2 shows a matrix summarizing the transferability of MAP across seven models. For each architecture, we compute a meta perturbation and report the accuracy on all other architectures, with a one-step update on the target model. The bottom row shows the accuracies without using MAP at initialization. As shown in Table 2, MAP generalizes very well across models. For instance, the meta perturbation generated on DenseNet121 achieves performance comparable to perturbations computed specifically for the other models. In practice, when crafting an adversarial example for some other neural network, using the meta perturbation computed on DenseNet121 at initialization leads to a stronger attack than starting from scratch. The results show that meta perturbations are not only image-agnostic but also model-agnostic; such perturbations generalize to a wide range of deep neural networks.
+
+§ 4.3 ABLATION STUDY
+
+While the above meta perturbations are computed on a set $\mathbb{D}$ containing 100 images from the training set, we now examine the influence of the size $\left| \mathbb{D}\right|$ on the effectiveness of MAP. Here we use ResNet18 to compute the MAP. The results, shown in Fig. 1, indicate that a larger $\mathbb{D}$ leads to better performance. Surprisingly, even when only 10 images are used to compute the meta perturbation, it still causes robustness to drop by around 15% compared with the naive FGSM. This verifies that meta perturbations have a remarkable generalization ability over unseen data points and can be computed from a very small set of training data.
+
+
+Figure 1: Accuracy on the test set $\mathbb{T}$ versus the number of images in $\mathbb{D}$ for learning MAP.
+
+§ 5 CONCLUSION AND FUTURE WORK
+
+In this work, we show the existence and realization of a meta adversarial perturbation (MAP), an initial perturbation that can be added to the data for generating more effective adversarial attacks through a one-step gradient ascent. We then propose an algorithm to find such perturbations and conduct extensive experiments to demonstrate their superior performance. For future work, we plan to extend this idea to time-efficient adversarial training (Shafahi et al. 2019; Wong, Rice, and Kolter 2019; Zhang et al. 2019; Zheng et al. 2020). Also, evaluating our attack on robust pre-trained models or different data modalities is another research direction.
+# Training Universal Adversarial Perturbations with Alternating Loss Functions
+
+## Abstract
+
+Despite being very successful, deep learning models have been shown to be vulnerable to crafted perturbations. Furthermore, it was shown that the prediction of a network over any image can be changed by learning a single universal adversarial perturbation (UAP). In this work, we propose three different ways of training UAPs that attain a predefined fooling rate while simultaneously optimizing the ${L}_{2}$ or ${L}_{\infty }$ norm. To stabilize around a predefined fooling rate, we integrate an alternating loss function scheme that switches the current loss function based on a given condition. In particular, the loss functions we propose are: Batch Alternating Loss, Epoch-Batch Alternating Loss and Progressive Alternating Loss. In addition, we empirically observed that UAPs learned by minimization attacks contain strong image-like features around the edges; hence we propose integrating a circular masking operation into training to further suppress visible perturbations. The proposed ${L}_{2}$ Progressive Alternating Loss method outperforms the popular attacks by providing a higher fooling rate at equal ${L}_{2}$ norms. Furthermore, Filtered Progressive Alternating Loss can reduce the ${L}_{2}$ norm by a further ${33.3}\%$ at the same fooling rate. When optimized with respect to ${L}_{\infty }$ , Progressive Alternating Loss stabilizes at the desired fooling rate of ${95}\%$ with only 1 percentage point of deviation, despite the ${L}_{\infty }$ norm being particularly sensitive to small updates.
+
+## Introduction
+
+Deep learning models have been adopted as standard methods in many visual tasks due to their success. On the other hand, deep neural networks have also been shown to be vulnerable to purposefully generated data samples called adversarial examples. The most popular way of generating adversarial examples is to apply an adversarial attack to a benign sample and obtain a perturbation that leads to misclassification when added to that sample. With this method, generating a whole dataset of adversarial examples involves putting each image through the same algorithm to calculate an image-dependent perturbation, which results in a significant time overhead. Recently, it has been shown that a single perturbation can be used to make any sample an adversarial example; such perturbations are called universal adversarial perturbations (UAPs).
+
+Table 1: Overview of the proposed UAP training methods
+
+| Attack | Abbreviation | Loss Alteration Condition |
+| --- | --- | --- |
+| Batch Alternating Loss | B-AL | Fooling rate of each batch |
+| Epoch-Batch Alternating Loss | EB-AL | Fooling rate of each batch, if the previous epoch reached the fooling rate |
+| Progressive Alternating Loss | P-AL | Fooling rate up to the point of processing the current batch |
+| Filtered Progressive Alternating Loss | FP-AL | Same as P-AL, but after each batch the filter in Equation 3 is applied to the UAP |
+
+These types of perturbations have distinct properties compared to image dependent adversarial perturbations, such as having image-like features by themselves (Zhang et al. 2020), whereas traditional perturbations are perceived as noise by humans.
+
+## Related Work
+
+Adversarial perturbations are traditionally generated for a single, specific sample. The fast gradient sign method (FGSM) (Goodfellow, Shlens, and Szegedy 2014) is an adversarial attack that can be used with the ${L}_{1},{L}_{2}$ and ${L}_{\infty }$ norms and, despite its simplicity, is still widely used. The basic iterative method (Kurakin, Goodfellow, and Bengio 2017) and projected gradient descent (Madry et al. 2018), as opposed to FGSM, iteratively optimize the perturbation with fixed-size steps. Different from these ${L}_{p}$ -bounded attacks, there are also minimization attacks. DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016) geometrically shifts the benign image towards the closest decision boundary to force misclassification. The Carlini&Wagner attack (Carlini and Wagner 2017) formulates a constrained optimization problem to generate the smallest successful adversarial perturbation. Perceptual Color distance Alternating Loss (Zhao, Liu, and Larson 2020) is a modified version of Carlini&Wagner that decouples the norm and adversarial optimization using an alternating loss method, which is also adopted in our proposed algorithms.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+
+
+Figure 1: Sample UAPs trained with ${L}_{2}$ P-AL (left) and Filtered P-AL (right). The prediction for benign image is the correct class, band aid, with 98.70% confidence. The adversarial examples yield peacock predictions with 99.90% and 99.97% confidence respectively. The predictions are from ResNet50.
+
+Universal adversarial perturbations were formally introduced in (Moosavi-Dezfooli et al. 2017), which applies the DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016) algorithm to each sample iteratively, updates the overall universal perturbation, and projects it onto an ${L}_{p}$ ball. Generative models have also been trained to obtain UAPs. Network for adversary generation (NAG) (Mopuri et al. 2018) is a generative adversarial network framework that trains a generator, using a frozen target classification network, to produce a UAP from an input noise vector. Fast Feature Fool (Mopuri, Garg, and Babu 2017), on the other hand, is a data-free algorithm that trains a UAP to maximize the activation values of convolutional layers. It generally performs worse than data-dependent attacks, but it demonstrates that UAPs can be generated using only the properties of the target convolutional network. Feature-UAP (Zhang et al. 2020) is an ${L}_{p}$ -constrained attack that trains a UAP with mini-batch training to achieve state-of-the-art fooling rates, and its authors provide a detailed comparison between image-dependent and universal attacks. High-Pass-UAP (Zhang et al. 2021) is a similar algorithm that also trains UAPs using mini-batches but additionally applies a Fourier-domain high-pass filter to the current UAP, after revealing that UAPs tend to perform better when they contain more high-frequency features while remaining imperceptible to the human eye. In the same work, the Universal Secret Adversarial Perturbation (Zhang et al. 2021) was introduced, where a UAP not only fools models but also carries extractable information. Training UAPs to make a network perceive a predefined class as another target class was introduced as 'Double Targeted UAPs' (Benz et al. 2020).
+
+In this paper, we propose three alternative approaches that use alternating losses for training UAPs: Batch Alternating Loss (B-AL), Epoch-Batch Alternating Loss (EB-AL) and Progressive Alternating Loss (P-AL). All universal attacks in the literature are norm-bounded; the norm handling is thus not a stochastic optimization but a projection, and our method differs from these works in this regard. In addition, we propose integrating a filtering operation into training to further reduce the perturbations at the same fooling levels.
+
+Algorithm 1: Batch Alternating Loss Training (B-AL)
+
+---
+
+Input: Dataset $\mu$ , target class $t$ , fooling rate $\delta$ , epoch $k$ , model $f$ , norm $p$
+
+Variables: Counter $i$ , fooling rate ${fr}$ , prediction ${out}$ , adversarial loss function ${adv}$ , loss $L$
+
+Output: Universal adversarial perturbation $v$
+
+ $v \leftarrow 0$
+
+ $i \leftarrow 0$
+
+ while $i < k$ do
+
+  for $x \sim \mu$ do
+
+   ${out} \leftarrow f\left( {x + v}\right)$
+
+   ${fr} \leftarrow 1 - \left( \#\text{ of correct predictions / batch size}\right)$
+
+   if ${fr} < \delta$ then
+
+    $L \leftarrow {adv}\left( {{out}, t}\right)$
+
+   else
+
+    $L \leftarrow \parallel v{\parallel }_{p}$
+
+   end if
+
+   backpropagate $L$
+
+   update $v$
+
+  end for
+
+  $i \leftarrow i + 1$
+
+ end while
+
+ return $v$
+
+---
+
+## Methodology
+
+The universal adversarial attack problem can be formally defined as in Equation 1, where $v$ is the UAP, $x$ is a benign image sampled from a dataset $\mu$ , $f$ is the target model, $\delta$ is the minimum fooling rate, and $\epsilon$ is the maximum ${L}_{p}$ norm of $v$ .
+
+$$
+{P}_{x \sim \mu }\left( {f\left( {x + v}\right) \neq f\left( x\right) }\right) \geq \delta \;\text{ s.t. }\;\parallel v{\parallel }_{p} \leq \epsilon \tag{1}
+$$
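Both sides of the constraint in Equation 1 are directly measurable. A small sketch with a toy classifier (all names illustrative):

```python
import numpy as np

def satisfies_eq1(predict, X, v, delta, eps, p):
    """Check Eq. 1: fooling rate at least delta and ||v||_p at most eps."""
    fr = float(np.mean(predict(X + v) != predict(X)))
    norm = float(np.abs(v).max()) if p == np.inf else float(np.linalg.norm(v.ravel(), ord=p))
    return fr >= delta and norm <= eps

predict = lambda X: (X.sum(axis=1) > 0).astype(int)
X = np.array([[0.2, 0.1], [0.3, 0.2], [-1.0, -1.0]])
v = np.array([-0.5, -0.5])                # fools the first two samples
print(satisfies_eq1(predict, X, v, delta=0.5, eps=1.0, p=np.inf))   # True
```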
+
+The norm bounded attack concept is widely used in adversarial machine learning, however it is also possible to formulate it as a minimization problem as in Equation 2, by slight modifications over Equation 1.
+
+$$
+\mathop{\min }\limits_{v}\parallel v{\parallel }_{p}\;\text{ s.t. }\;{P}_{x \sim \mu }\left( {f\left( {x + v}\right) = t}\right) \approx \delta \tag{2}
+$$
+
+
+
+Figure 2: UAP calculated for Peacock target class applied over a hamster image (left column) and the UAP images (right column), corresponding to B-AL, EB-AL, P-AL, Filtered P-AL, from top to bottom.
+
+The variable $t$ is the target class in this equation. The problem now becomes finding the smallest $\parallel v{\parallel }_{p}$ that attains the desired fooling rate. It can also be turned into a min-max problem by setting $\delta$ to 1.
+
+In this work, we propose a solution to this problem by introducing three UAP training methods (shown in Table 1), leading to different attacks that take advantage of the alternating loss strategy. The alternating loss scheme switches between two loss functions depending on the current state of the training; this strategy is used in image-dependent adversarial attacks (optimize the norm of the perturbation if the current image is adversarial; otherwise, optimize the adversarial loss), but it is not directly applicable to the UAP domain. The first proposed method is Batch Alternating Loss (B-AL), which aims to reach the desired fooling rate by achieving the same fooling rate for each batch. The second, Epoch-Batch Alternating Loss (EB-AL), also takes into account the fooling rate achieved over the epoch, alongside each individual batch. The final method, Progressive Alternating Loss (P-AL), uses the fooling rate achieved up to the current batch to alter the loss function. We also empirically find that stronger features are generated around the edges, along with smaller artifacts in the middle; therefore, we propose applying filtering during training to alleviate these artifacts. The proposed filtering scheme can be integrated into any minimization-based UAP training.
+
+The alternating loss scheme requires a decoupled decision mechanism that sets the current loss function either to an adversarial loss, which changes the prediction of the network, or to the norm of the UAP (either ${L}_{2}$ or ${L}_{\infty }$ ). In image-dependent attacks, the loss function can be selected at every iteration based on the current state of the perturbation: if the current perturbation already makes the image adversarial, minimize the norm of the perturbation; otherwise, optimize the adversarial loss (Zhao, Liu, and Larson 2020). However, training a UAP for several iterations on a single image while changing the loss function at each iteration would be incompatible with mini-batch training. Instead, we train the UAP on batches and change the current loss function based on the fooling rate achieved on each batch. Note that the main parameter in this optimization is the desired fooling rate over the whole training dataset.
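The batch-level decision described above is a two-way switch. A minimal sketch of the selection rule (the returned loss names are placeholders for the cross-entropy and norm objectives):

```python
def select_loss(correct, batch_size, delta):
    """If the fooling rate on the current batch is below delta, keep the
    adversarial loss; otherwise switch to minimizing the UAP norm."""
    fr = 1.0 - correct / batch_size       # fooling rate of this batch
    return "adv" if fr < delta else "norm"

# delta = 0.95: 10/64 still correctly classified -> fr ~ 0.84 -> keep attacking
print(select_loss(10, 64, 0.95))          # adv
print(select_loss(2, 64, 0.95))           # norm (fr ~ 0.97 reaches the target)
```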
+
+## Batch Alternating Loss (B-AL)
+
+Algorithm 1 shows the pseudo-code of B-AL. In this approach, the loss function is switched according to the fooling-rate performance of the current state of the UAP on the current batch: if the UAP achieves the desired fooling rate on the batch, the norm loss is selected; otherwise, the adversarial loss is selected, which is chosen to be the cross-entropy function. This training method can bring the fooling rate to around the desired level within several epochs. However, once some of the batches reach the desired fooling rate, the loss function lowers the adversarial energy to decrease the norm of the UAP, so the overall fooling rate stays below the target, since the majority of the images remain benign. To address this problem, the following two training methods are proposed.
+
+## Epoch-Batch Alternating Loss (EB-AL)
+
+Algorithm 2 shows the pseudo-code of EB-AL. This method aims to ensure that the UAP does not start diminishing its adversarial energy until the desired fooling rate is achieved. Before reaching the target, the loss function is strictly adversarial, regardless of the individual performance of each batch. At the end of each epoch, we check whether the target fooling rate was achieved; if it was, then in the next epoch the same loss alternation scheme used in B-AL training is applied. EB-AL almost always achieves the desired fooling rate, if it is achievable at all. When an epoch is completed with a successful fooling rate, many of the batches may yield fooling rates above the target. This causes extensive use of the norm loss function, which brings the overall fooling rate down, which in turn makes the loss function strictly adversarial in the next epoch. This phenomenon makes the fooling rate oscillate around the target, which may cause some imprecision in attaining the target fooling rate.
+
+---
+
+Algorithm 2: Epoch-Batch Alternating Loss Training (EB-AL)
+
+Input: Dataset $\mu$ , target class $t$ , fooling rate $\delta$ , epoch $k$ , model $f$ , norm $p$
+
+Variables: Counter $i$ , fooling rate ${fr}$ , prediction ${out}$ , adversarial loss function ${adv}$ , loss $L$ , optimization mode $m$ , number of correct predictions ${correct}$ , image number counter ${imcount}$ , fooling rate over the epoch ${epochfr}$
+
+Output: Universal adversarial perturbation $v$
+
+ $v \leftarrow 0$
+
+ $i \leftarrow 0$
+
+ $m \leftarrow$ 'epoch'
+
+ while $i < k$ do
+
+  ${correct} \leftarrow 0$
+
+  ${imcount} \leftarrow 0$
+
+  for $x \sim \mu$ do
+
+   ${out} \leftarrow f\left( {x + v}\right)$
+
+   ${correct} \leftarrow {correct} + \#$ of correct predictions
+
+   ${imcount} \leftarrow {imcount} +$ batch size
+
+   ${fr} \leftarrow 1 - \left( \#\text{ of correct predictions / batch size}\right)$
+
+   if $\left( m = \text{'epoch'}\right)$ or $\left( m = \text{'batch' and } {fr} < \delta \right)$ then
+
+    $L \leftarrow {adv}\left( {{out}, t}\right)$
+
+   else
+
+    $L \leftarrow \parallel v{\parallel }_{p}$
+
+   end if
+
+   backpropagate $L$
+
+   update $v$
+
+  end for
+
+  ${epochfr} \leftarrow 1 - {correct}/{imcount}$
+
+  if ${epochfr} < \delta$ then
+
+   $m \leftarrow$ 'epoch'
+
+  else
+
+   $m \leftarrow$ 'batch'
+
+  end if
+
+  $i \leftarrow i + 1$
+
+ end while
+
+ return $v$
+
+---
+
+## Progressive Alternating Loss (P-AL)
+
+Algorithm 3 shows the pseudo-code of P-AL. Because of the nature of the training procedure, it is not trivial to stabilize the fooling rate exactly at the target; however, it is possible to minimize the oscillation caused by the phenomenon explained in the previous section. In P-AL training, similar to EB-AL, the adversarial loss is maintained until the target fooling rate is achieved. After reaching the target, the loss function is altered based on the fooling rate achieved from the beginning of the epoch up to the currently optimized batch. In this way, it is possible to maintain the overall fooling rate while optimizing the norm when possible. Although this method cannot completely prevent the oscillation, it reduces it considerably.
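The progressive criterion only needs two running counters, mirroring the bookkeeping in the pseudo-code (a sketch; variable names follow the algorithm but are otherwise illustrative):

```python
class ProgressiveRate:
    """Fooling rate from the start of the epoch up to the current batch."""
    def __init__(self):
        self.correct = 0      # correctly classified (non-fooled) samples so far
        self.imcount = 0      # images processed so far this epoch

    def update(self, correct_in_batch, batch_size):
        self.correct += correct_in_batch
        self.imcount += batch_size
        return 1.0 - self.correct / self.imcount

tracker, delta = ProgressiveRate(), 0.95
fr1 = tracker.update(8, 64)               # batch 1: running fr = 1 - 8/64 = 0.875
fr2 = tracker.update(0, 64)               # batch 2: running fr = 1 - 8/128 = 0.9375
print(fr1, fr2, "adv" if fr2 < delta else "norm")   # 0.875 0.9375 adv
```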
+
+Algorithm 3: Filtered Progressive Alternating Loss Training (FP-AL)
+
+---
+
+Input: Dataset $\mu$ , target class $t$ , fooling rate $\delta$ , epoch $k$ , model $f$ , norm $p$ , mask radius $D$
+
+Variables: Counter $i$ , fooling rate ${fr}$ , prediction ${out}$ , adversarial loss function ${adv}$ , loss $L$ , number of correct predictions ${correct}$ , image number counter ${imcount}$ , circular filter ${filter}$
+
+Output: Universal adversarial perturbation $v$
+
+ $v \leftarrow 0$
+
+ $i \leftarrow 0$
+
+ ${filter} \leftarrow$ filter in Equation 3 with radius $D$
+
+ while $i < k$ do
+
+  ${correct} \leftarrow 0$
+
+  ${imcount} \leftarrow 0$
+
+  for $x \sim \mu$ do
+
+   ${out} \leftarrow f\left( {x + v}\right)$
+
+   ${correct} \leftarrow {correct} + \#$ of correct predictions
+
+   ${imcount} \leftarrow {imcount} +$ batch size
+
+   ${fr} \leftarrow 1 - {correct}/{imcount}$
+
+   if ${fr} < \delta$ then
+
+    $L \leftarrow {adv}\left( {{out}, t}\right)$
+
+   else
+
+    $L \leftarrow \parallel v{\parallel }_{p}$
+
+   end if
+
+   backpropagate $L$
+
+   update $v$
+
+   $v \leftarrow {filter}\left( v\right)$
+
+  end for
+
+  $i \leftarrow i + 1$
+
+ end while
+
+ return $v$
+
+---
+
+## Masked Training
+
+We empirically found that when the UAP is not normalized by a norm constraint, perturbations with high intensities tend to accumulate around the edges and corners. These perturbations also contain more image-like features, and thus push the prediction towards the target. Applying masks to smooth out perturbations has been investigated for image-dependent attacks (Aksoy and Temizel 2020) and is a simple yet efficient way to control the geometry of the perturbations; hence, we propose integrating a masking operation into the UAP training procedure. To reduce the perturbations around the center of the image (which are also mostly positioned on top of the target object), we apply a filter (Equation 3) to the UAP after each batch, where $(x, y)$ is the pixel position, $h$ and $w$ are the height and width of the UAP, respectively, and $D$ is the radius of the circle. Algorithm 3 shows FP-AL training, which is P-AL training with filtering. The visual effect of filtered training can be seen in Figure 1.
+
+$$
+f\left( {x, y}\right) = \left\{ \begin{array}{ll} 1, & \text{ if }\sqrt{{\left( \frac{w}{2} - x\right) }^{2} + {\left( \frac{h}{2} - y\right) }^{2}} \geq D \\ 0, & \text{ otherwise } \end{array}\right. \tag{3}
+$$
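Equation 3 defines a hard binary mask that zeroes the UAP inside a centered circle of radius $D$. A NumPy sketch of the filter and its application after each batch (channel dimensions are omitted for brevity):

```python
import numpy as np

def circular_mask(h, w, D):
    """Eq. 3: 1 at pixels whose distance from the image center is >= D,
    0 inside the circle (the region that usually covers the object)."""
    y, x = np.mgrid[0:h, 0:w]
    dist = np.sqrt((w / 2 - x) ** 2 + (h / 2 - y) ** 2)
    return (dist >= D).astype(np.float32)

# FP-AL applies v <- filter(v) after every batch.
mask = circular_mask(224, 224, D=112)
v = np.ones((224, 224), dtype=np.float32)
v = v * mask
print(mask[112, 112], mask[0, 0])         # 0.0 at the center, 1.0 in a corner
```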
+
+Table 2: ${L}_{2}$ attack results, provided in terms of ${L}_{2}$ and ${L}_{\infty }$ metrics and FR refers to the Fooling Rate. Note that UAP (Moosavi-Dezfooli et al. 2017) and F-UAP attacks (Zhang et al. 2020) are set to reach the same ${L}_{2}$ values as P-AL to allow comparison of FR at the same level of perturbation.
+
+| Method | DenseNet121 ${L}_{2}$ | ${L}_{\infty }$ | FR | ResNet50 ${L}_{2}$ | ${L}_{\infty }$ | FR | GoogleNet ${L}_{2}$ | ${L}_{\infty }$ | FR | VGG16 ${L}_{2}$ | ${L}_{\infty }$ | FR |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| B-AL | 9.14 | 0.45 | 0.93 | 9.08 | 0.52 | 0.93 | 9.56 | 0.45 | 0.91 | 7.10 | 0.47 | 0.95 |
+| EB-AL | 14.11 | 0.47 | 0.98 | 14.36 | 0.60 | 0.98 | 15.73 | 0.58 | 0.98 | 7.39 | 0.44 | 0.95 |
+| P-AL | 11.16 | 0.43 | 0.95 | 11.66 | 0.53 | 0.95 | 13.01 | 0.60 | 0.95 | 5.52 | 0.44 | 0.95 |
+| UAP | 11.16 | 0.29 | 0.33 | 11.66 | 0.21 | 0.34 | 13.01 | 0.27 | 0.43 | 5.52 | 0.15 | 0.30 |
+| F-UAP | 11.16 | 0.23 | 0.90 | 11.66 | 0.32 | 0.93 | 13.01 | 0.34 | 0.91 | 5.52 | 0.16 | 0.70 |
+
+Table 3: ${L}_{\infty }$ attack results, provided in terms of ${L}_{2}$ and ${L}_{\infty }$ metrics and FR refers to the Fooling Rate. Note that UAP (Moosavi-Dezfooli et al. 2017) and F-UAP attacks (Zhang et al. 2020) are set to reach the same ${L}_{\infty }$ values as P-AL to allow comparison of FR at the same level of perturbation.
+
+| Method | DenseNet121 ${L}_{\infty }$ | ${L}_{2}$ | FR | ResNet50 ${L}_{\infty }$ | ${L}_{2}$ | FR | GoogleNet ${L}_{\infty }$ | ${L}_{2}$ | FR | VGG16 ${L}_{\infty }$ | ${L}_{2}$ | FR |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| B-AL | 0.17 | 24.96 | 1.00 | 0.17 | 26.64 | 1.00 | 0.20 | 26.63 | 1.00 | 0.16 | 19.04 | 1.00 |
+| EB-AL | 0.16 | 24.42 | 1.00 | 0.18 | 29.39 | 1.00 | 0.22 | 33.55 | 1.00 | 0.17 | 25.70 | 1.00 |
+| P-AL | 0.11 | 16.76 | 0.96 | 0.13 | 16.61 | 0.96 | 0.16 | 20.60 | 0.96 | 0.11 | 12.45 | 0.95 |
+| UAP | 0.11 | 25.71 | 0.52 | 0.13 | 29.64 | 0.60 | 0.16 | 35.32 | 0.79 | 0.11 | 25.84 | 0.75 |
+| F-UAP | 0.11 | 25.55 | 0.99 | 0.13 | 26.74 | 0.99 | 0.16 | 31.00 | 0.99 | 0.11 | 24.57 | 0.99 |
+
+This method can be applied to any minimization-based UAP training scenario; for norm-constrained attacks, the perturbations do not always migrate towards the edges. We also empirically found that smooth circular filters, such as 2D Gaussian or Butterworth (Butterworth et al. 1930) filters, tend to limit the adversarial capacity of the UAPs by slightly smoothing the features around the edges.
+
+## Experimental Design
+
+We trained the UAPs on a sampled ImageNet dataset of 10,000 images in total, formed by taking 10 images from each class. We compared our attack with two other attacks that can be trained with small dataset sizes: vanilla UAP (Moosavi-Dezfooli et al. 2017) and Feature-UAP (Zhang et al. 2020). We chose peacock as the target class for our attacks and for Feature-UAP (vanilla UAP is strictly an untargeted attack, as it is based on DeepFool). Similar to our attack, vanilla UAP allows specifying a target fooling rate over the training set; to allow comparisons on the same ground, we set this parameter to the same value, specifically a target fooling rate of 95%. However, as both of these attacks are norm-constrained, a direct comparison is not possible; we therefore first trained UAPs with our attacks, then set the constraints, i.e., the epsilons, to match the obtained ${L}_{2}$ or ${L}_{\infty }$ values, depending on which norm was selected for optimization. To measure attack performance, we used the standard ImageNet validation set of 50,000 images. For fast convergence, Adam was selected as the optimizer, and the UAPs were trained for 20 epochs. For the experiments where filtering is applied, a radius of 112 is used, as the dimensions of the input images are ${224} \times {224}$.
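The epsilon-matching step described above, constraining UAP and F-UAP to the norms obtained by our attacks, amounts to projecting a perturbation onto a fixed-norm ball. A minimal NumPy sketch under that reading (the helper name `match_norm` is ours, not from the paper):

```python
import numpy as np

def match_norm(uap, eps, norm):
    """Constrain a perturbation to a given norm budget.

    For L2 the perturbation is rescaled to have exactly norm eps;
    for L-inf it is clipped elementwise, mirroring how norm-bounded
    attacks project onto an eps-ball.
    """
    if norm == "l2":
        return uap * (eps / np.linalg.norm(uap))
    elif norm == "linf":
        return np.clip(uap, -eps, eps)
    raise ValueError(f"unknown norm: {norm}")
```

In the experiments, `eps` would be set to the ${L}_{2}$ or ${L}_{\infty }$ value reached by P-AL, and the baseline UAPs trained under that constraint.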
+
+## Results
+
+Two different experiments have been conducted by applying the attacks with the ${L}_{2}$ and ${L}_{\infty }$ norms, which are both supported by all attack types in question. The results are then compared with regard to both ${L}_{2}$ and ${L}_{\infty }$ values.
+
+## ${L}_{2}$ attacks
+
+Table 2 shows the ${L}_{2}$ attack results for B-AL, EB-AL, P-AL, vanilla UAP and Feature-UAP (F-UAP). For the attacks constrained to a 95% fooling rate, an attack is regarded as successful if it is above and close to the target. Despite achieving the smallest ${L}_{2}$ values compared to the other base models, B-AL cannot attain the target FR for DenseNet121 (Huang et al. 2018), ResNet50 (He et al. 2015) and GoogleNet (Szegedy et al. 2014). It only reaches the target for VGG16 (Simonyan and Zisserman 2015) with batch normalization, but in that case its ${L}_{2}$ results are comparatively higher. On the other hand, EB-AL exceeds the target FR by 3 percentage points (except for VGG16, where it achieves the target FR), which renders the attack sub-optimal; furthermore, its ${L}_{2}$ values are consistently higher than both B-AL's and P-AL's. P-AL, which was introduced to address the inefficiencies of B-AL and EB-AL, consistently achieves the desired fooling rate while also having the smallest ${L}_{2}$ values. Overall, P-AL stays inside the desired range; furthermore, merely by integrating a filter during training (Table 4), FP-AL achieves even lower perturbation levels, both in terms of ${L}_{2}$ and of distance from the desired fooling rate. This algorithm consistently achieves a ${95}\%$ fooling rate while yielding the smallest successful ${L}_{2}$ distance among all of the given attacks.
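Throughout the tables, FR follows the definition used in our training algorithms: one minus the fraction of adversarial samples that are still classified correctly. A minimal sketch of that evaluation (function name ours):

```python
def fooling_rate(true_labels, adv_preds):
    """Fooling rate as used in the alternating-loss algorithms:
    1 - (# of correct predictions / # of images)."""
    assert len(true_labels) == len(adv_preds)
    correct = sum(t == p for t, p in zip(true_labels, adv_preds))
    return 1.0 - correct / len(true_labels)
```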
+
+Table 4: Comparison between the results of P-AL and FP-AL, in both ${L}_{2}$ and ${L}_{\infty }$ . FR signifies the fooling rate over the whole dataset.
+
+| Method | DenseNet121 ${L}_{2}$ | ${L}_{\infty }$ | FR | ResNet50 ${L}_{2}$ | ${L}_{\infty }$ | FR | GoogleNet ${L}_{2}$ | ${L}_{\infty }$ | FR | VGG16 ${L}_{2}$ | ${L}_{\infty }$ | FR |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| ${L}_{2}$ P-AL | 11.16 | 0.43 | 0.95 | 11.66 | 0.53 | 0.95 | 13.01 | 0.60 | 0.95 | 5.52 | 0.44 | 0.95 |
| ${L}_{2}$ FP-AL | 8.38 | 0.41 | 0.95 | 10.75 | 0.60 | 0.95 | 10.62 | 0.52 | 0.95 | 8.93 | 0.43 | 0.95 |
| ${L}_{\infty }$ P-AL | 16.76 | 0.11 | 0.96 | 16.61 | 0.13 | 0.96 | 20.60 | 0.16 | 0.96 | 12.45 | 0.11 | 0.95 |
| ${L}_{\infty }$ FP-AL | 15.57 | 0.15 | 0.97 | 16.42 | 0.15 | 0.96 | 18.80 | 0.18 | 0.97 | 13.28 | 0.13 | 0.96 |
+
+It should be noted that both UAP and F-UAP are mainly meant to be run under ${L}_{\infty }$ constraints; however, the algorithms are also suitable for ${L}_{2}$ normalization during training. As mentioned earlier, both are norm-constrained attacks, as opposed to our minimization attacks, which makes them difficult to compare directly. However, when the ${L}_{2}$ constraints are equalized at the level of our attacks, we see that both UAP and F-UAP fall below the desired fooling rate; nonetheless, both reach significantly smaller ${L}_{\infty }$ values than our attacks.
+
+## ${L}_{\infty }$ attacks
+
+Table 3 shows the results of the ${L}_{\infty }$ attacks. It should be noted that while our attacks are mainly designed to minimize the ${L}_{2}$ norms of the UAPs, they can minimize ${L}_{\infty }$ as well. This time, B-AL and EB-AL overshoot the desired fooling rate, which is not optimal under our constraints; besides, their ${L}_{\infty }$ values are comparatively higher. P-AL achieves both better ${L}_{2}$ and ${L}_{\infty }$ values while staying closer to the desired fooling rate. According to the results in Table 4, FP-AL slightly increases the ${L}_{\infty }$ values while also drifting further from the target, in exchange for an overall decrease in the ${L}_{2}$ norm.
+
+As UAP and F-UAP are mainly ${L}_{\infty }$-bounded attacks, it is fair to expect better results from them here. UAP shows much better results than its ${L}_{2}$-normalized counterpart, yet still cannot reach the target fooling rate against any of the networks. F-UAP achieves a ${99}\%$ fooling rate for each network type, albeit yielding comparatively higher ${L}_{2}$ values.
+
+## Discussion
+
+Altering the loss function based on the progressive fooling rate gives the best results in both ${L}_{2}$ and ${L}_{\infty }$ attacks. Although our other attacks take a similar approach, each batch affects the optimization too strongly, so the alternating loss scheme causes unstable behaviour and makes it harder to converge to the desired fooling rate. Another drawback of these attacks is that the batch size plays a crucial role in how the optimization proceeds. For instance, final UAPs trained with batch sizes of 32 and 128 may differ severely in performance, since increasing the sample size gives a better estimate of the performance over the whole dataset. P-AL is independent of the batch size and is more stable around the desired fooling rate.
+
+We should also point out that P-AL addresses the problem of obtaining a fooling rate around the target level, not exceeding it to obtain an even better fooling rate. In a setting where the goal is to maximize the fooling rate as much as possible, our attacks are likely to be less optimal. For instance, if we set the target fooling rate to ${100}\%$, the UAP will be trained only with the adversarial loss function, therefore never minimizing the ${L}_{p}$ norm. To maximize the fooling rate, EB-AL may be the better choice, since it will first bring the fooling rate to ${100}\%$, if possible, and then try to minimize the norm. Tables 2 and 3 show that higher fooling rates can be achieved by EB-AL, although the ${L}_{p}$ norms are slightly higher. On that note, using the ${L}_{\infty }$ norm for optimization can be another way of maximizing the fooling rate, along with B-AL and EB-AL. Using ${L}_{\infty }$ usually makes the optimization converge much faster at high fooling rate targets: since the ${L}_{\infty }$ norm is on a [0, 1] scale, the adversarial loss function (cross-entropy) takes much higher values and thus updates the perturbation more drastically. In those cases, while the UAP is optimized in a strongly adversarial manner, a small amount of norm optimization is also performed, which can yield a UAP with a very high fooling rate.
+
+## Perturbation Features
+
+Figure 2 shows sample UAPs trained with different methods. The UAPs in the first 3 rows (obtained without a filter) exhibit perturbations with visible image features accumulated around the edges. However, this phenomenon is not caused by the alternating loss scheme; rather, it is a consequence of performing targeted universal attacks that minimize the target's standard loss. The gradients on the perturbation are high over the edges after only a few iterations, which causes the image features to concentrate in these regions. Figure 3 shows scatter plots of the mean gradient value obtained when a UAP is applied over the whole dataset, versus the distance of each pixel from the image center. The scatter plots are generated from snapshots of a UAP during training; from left to right and top to bottom, the UAP is taken from iterations 1, 30, 150 and 300. The gradients flow to the pixels that are far from the center of the images. We speculate that, since the main objects of the images are usually located around the center, the magnitudes of gradients with respect to a loss function whose objective class differs from the original class become relatively higher where features from the original object are absent, hence at the edges and corners. On the other hand, small feature-rich perturbations are also generated around the center of the image, such as the green dots visible in Figure 2. It is known that universal perturbations exploit image features that outweigh the original image features, which is why the target class can often be inferred by looking at the universal perturbation alone; yet the small accumulations around the center defy these assumptions. For that reason, our filtered training scheme not only keeps the center of attention clean of perturbations, but also quantitatively yields better ${L}_{p}$ norms and stable fooling rates.
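The bookkeeping behind Figure 3 pairs each pixel's mean gradient magnitude with that pixel's distance from the image center. A minimal NumPy sketch under the assumption of a single-channel H x W gradient map (function name ours):

```python
import numpy as np

def gradient_distance_scatter(mean_grad):
    """Return (distance, magnitude) pairs for every pixel of a
    mean-gradient map, as plotted in the Figure 3 scatter plots."""
    h, w = mean_grad.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Euclidean distance of each pixel from the image center.
    dist = np.sqrt((ys - h / 2) ** 2 + (xs - w / 2) ** 2)
    return dist.ravel(), np.abs(mean_grad).ravel()
```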
+
+
+
+Figure 3: Vertical axes show the mean gradient value; horizontal axes show the distance between the pixel containing the corresponding mean gradient and the center of the image. The scatter plots are extracted from UAP states after iterations 1, 30, 150 and 300.
+
+## Conclusion
+
+In this work, we propose and evaluate alternative approaches for training a UAP that can achieve target fooling rates over a dataset while being a minimization optimization, rather than being ${L}_{p}$-bounded. To that end, we integrated 'alternating loss', an image-dependent attack strategy, into the universal adversarial domain. As this strategy could not be integrated directly into a training procedure, we devised three different approaches for its utilization. B-AL training alters the loss function based solely on the currently processed batch. EB-AL training also takes into account the performance of the UAP over the whole dataset before altering the loss function. Finally, P-AL training considers the fooling rate up to the point where a batch is processed. Using P-AL, we achieved remarkable ${L}_{2}$ distances while maintaining the desired fooling rates. On top of P-AL, we also applied circular filtering to push the small perturbations that appear in the center of the UAP towards the edges. In this way, we obtain perceptually better UAPs, with fewer perturbations in the center of the image, while achieving even smaller ${L}_{2}$ distances. This work could be further improved by regularizing the altered loss functions to achieve better ${L}_{p}$ norms. Also, investigating the mitigation of image features in the UAPs could help in understanding not only the existence of these perturbations, but also the behaviour of deep neural networks.
+
+## References
+
+Aksoy, B.; and Temizel, A. 2020. Attack Type Agnostic Perceptual Enhancement of Adversarial Images. International Workshop on Adversarial Machine Learning and Security (AMLAS), IEEE World Congress on Computational Intelligence (IEEE WCCI), 19 July 2020.
+
+Benz, P.; Zhang, C.; Imtiaz, T.; and Kweon, I. S. 2020. Double targeted universal adversarial perturbations. In Proceedings of the Asian Conference on Computer Vision.
+
+Butterworth, S.; et al. 1930. On the theory of filter amplifiers. Wireless Engineer, 7(6): 536-541.
+
+Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy, 39-57.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2015. Deep Residual Learning for Image Recognition. arXiv:1512.03385.
+
+Huang, G.; Liu, Z.; van der Maaten, L.; and Weinberger, K. Q. 2018. Densely Connected Convolutional Networks. arXiv:1608.06993.
+
+Kurakin, A.; Goodfellow, I.; and Bengio, S. 2017. Adversarial examples in the physical world. arXiv:1607.02533.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In International Conference on Learning Representations (ICLR).
+
+Moosavi-Dezfooli, S.-M.; Fawzi, A.; Fawzi, O.; and Frossard, P. 2017. Universal adversarial perturbations. In Proceedings of the IEEE conference on computer vision and pattern recognition, 1765-1773.
+
+Moosavi-Dezfooli, S.-M.; Fawzi, A.; and Frossard, P. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2574-2582.
+
+Mopuri, K. R.; Garg, U.; and Babu, R. V. 2017. Fast feature fool: A data independent approach to universal adversarial perturbations. arXiv preprint arXiv:1707.05572.
+
+Mopuri, K. R.; Ojha, U.; Garg, U.; and Babu, R. V. 2018. NAG: Network for adversary generation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 742-751.
+
+Simonyan, K.; and Zisserman, A. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. arXiv:1409.1556.
+
+Szegedy, C.; Liu, W.; Jia, Y.; Sermanet, P.; Reed, S.; Anguelov, D.; Erhan, D.; Vanhoucke, V.; and Rabinovich, A. 2014. Going Deeper with Convolutions. arXiv:1409.4842.
+
+Zhang, C.; Benz, P.; Imtiaz, T.; and Kweon, I. S. 2020. Understanding adversarial examples from the mutual influence of images and perturbations. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 14521-14530.
+
+Zhang, C.; Benz, P.; Karjauv, A.; and Kweon, I. S. 2021. Universal adversarial perturbations through the lens of deep steganography: Towards a fourier perspective. arXiv preprint arXiv:2102.06479.
+
+Zhao, Z.; Liu, Z.; and Larson, M. 2020. Towards large yet imperceptible adversarial image perturbations with perceptual color distance. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 1039-1048.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/gVe36H8OrHW/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/gVe36H8OrHW/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..3592152e226410170fc5f644361d227cdcf08028
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/gVe36H8OrHW/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,388 @@
+§ TRAINING UNIVERSAL ADVERSARIAL PERTURBATIONS WITH ALTERNATING LOSS FUNCTIONS
+
+§ ABSTRACT
+
+Despite being very successful, deep learning models have been shown to be vulnerable to crafted perturbations. Furthermore, it was shown that the prediction of a network over any image can be changed by learning a single universal adversarial perturbation (UAP). In this work, we propose 3 different ways of training UAPs that can attain a predefined fooling rate while simultaneously optimizing their ${L}_{2}$ or ${L}_{\infty }$ norms. To stabilize around a predefined fooling rate, we integrate an alternating loss function scheme that changes the current loss function based on a given condition. In particular, the loss functions we propose are: Batch Alternating Loss, Epoch-Batch Alternating Loss and Progressive Alternating Loss. In addition, we empirically observed that UAPs learned by minimization attacks contain strong image-like features around the edges; hence we propose integrating a circular masking operation into the training to further alleviate visible perturbations. The proposed ${L}_{2}$ Progressive Alternating Loss method outperforms the popular attacks by providing a higher fooling rate at equal ${L}_{2}$ norms. Furthermore, Filtered Progressive Alternating Loss can further reduce the ${L}_{2}$ norm by ${33.3}\%$ at the same fooling rate. When optimized with respect to ${L}_{\infty }$, Progressive Alternating Loss manages to stabilize at the desired fooling rate of ${95}\%$ with only 1 percentage point of deviation, despite the ${L}_{\infty }$ norm being particularly sensitive to small updates.
+
+§ INTRODUCTION
+
+Deep learning models have been adopted as standard methods in many visual tasks due to their success. On the other hand, deep neural networks have also been shown to be vulnerable to purposefully generated data samples called adversarial examples. The most popular way of generating adversarial examples is to apply an adversarial attack to a benign sample and obtain a particular perturbation that leads to misclassification when added to this benign sample. With this method, generating a whole dataset of adversarial examples involves putting each image through the same algorithm to calculate an image-dependent perturbation, which results in a significant time overhead. Recently, it has been shown that a single perturbation can be used to make any sample an adversarial example; such perturbations are called universal adversarial perturbations (UAPs).
+
+Table 1: Overview of the proposed UAP training methods
+
+| Attack | Abbreviation | Loss Alteration Condition |
+|---|---|---|
+| Batch Alternating Loss | B-AL | Fooling rate of each batch |
+| Epoch-Batch Alternating Loss | EB-AL | Fooling rate of each batch, if the previous epoch reached the fooling rate |
+| Progressive Alternating Loss | P-AL | Fooling rate up to the point of processing the current batch |
+| Filtered Progressive Alternating Loss | FP-AL | Same as P-AL, but after each batch the filter in Equation 3 is applied on the UAP |
+
+These types of perturbations have distinct properties compared to image-dependent adversarial perturbations, such as containing image-like features by themselves (Zhang et al. 2020), whereas traditional perturbations are perceived as noise by humans.
+
+§ RELATED WORK
+
+Adversarial perturbations are traditionally generated specifically for a single sample. The fast gradient sign method (FGSM) (Goodfellow, Shlens, and Szegedy 2014) is an adversarial attack that can be used with the ${L}_{1}$, ${L}_{2}$ and ${L}_{\infty }$ norms and, despite its simplicity, is still widely used. The basic iterative method (Kurakin, Goodfellow, and Bengio 2017) and projected gradient descent (Madry et al. 2018), as opposed to FGSM, iteratively optimize the perturbation with fixed-size steps. Different from these ${L}_{p}$-bounded attacks, there are also minimization attacks. DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016) aims to geometrically shift the benign image to the closest decision boundary to force misclassification. The Carlini & Wagner attack (Carlini and Wagner 2017) reformulates a constrained optimization problem to generate the smallest successful adversarial perturbation. Perceptual Color distance Alternating Loss (Zhao, Liu, and Larson 2020) is a modified version of Carlini & Wagner that decouples the norm and adversarial optimization using the alternating loss method, which is also adopted in our proposed algorithms.
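For reference, the core FGSM update is a single signed gradient step. A toy NumPy sketch (the gradient is assumed to be supplied by the attacked model; this is an illustration of the update rule, not the paper's code):

```python
import numpy as np

def fgsm_step(x, grad, eps):
    """One FGSM step (Goodfellow et al. 2014): move every input
    dimension by eps in the sign direction of the loss gradient."""
    return x + eps * np.sign(grad)
```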
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+
+Figure 1: Sample UAPs trained with ${L}_{2}$ P-AL (left) and Filtered P-AL (right). The prediction for the benign image is the correct class, band aid, with 98.70% confidence. The adversarial examples yield peacock predictions with 99.90% and 99.97% confidence, respectively. The predictions are from ResNet50.
+
+Universal adversarial perturbations were formally introduced in (Moosavi-Dezfooli et al. 2017), which applies the DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016) algorithm to each sample iteratively, updates the overall universal perturbation and projects it onto an ${L}_{p}$ ball. Generative models have also been trained to obtain UAPs. The Network for Adversary Generation (NAG) (Mopuri et al. 2018) is a generative adversarial network framework that trains a generator, using a frozen target classification network, to generate a UAP from an input noise vector. On the other hand, Fast Feature Fool (Mopuri, Garg, and Babu 2017) is a data-free algorithm that trains a UAP to maximize the activation values of convolutional layers. This algorithm generally performs worse than data-dependent attacks, but it is good evidence that UAPs can be generated using only the properties of the target convolutional network. Feature-UAP (Zhang et al. 2020) is an ${L}_{p}$-constrained attack that trains a UAP using mini-batch training to achieve state-of-the-art fooling rates, and its authors provide a detailed comparison between image-dependent and universal attacks. High-Pass-UAP (Zhang et al. 2021) is a similar algorithm that also trains UAPs using mini-batches, but additionally applies a Fourier-domain high-pass filter to the current UAP, after revealing that UAPs tend to perform better when they contain more high-frequency features while remaining imperceptible to the human eye. In the same work, the Universal Secret Adversarial Perturbation (Zhang et al. 2021) was introduced, where a UAP not only fools models but also carries extractable information. Training UAPs to make a network perceive a predefined class as another target class was introduced as 'Double Targeted UAPs' (Benz et al. 2020).
+
+In this paper, we propose 3 alternative approaches that use alternating loss for training UAPs: Batch Alternating Loss (B-AL), Epoch-Batch Alternating Loss (EB-AL) and Progressive Alternating Loss (P-AL). All the universal attacks in the literature are norm-bounded; thus their norm optimization is not a stochastic operation but a projection. Our method differs from these works in this regard. In addition, we propose integrating a filtering step into the training to further reduce the perturbations at the same fooling levels.
+
+Algorithm 1: Batch Alternating Loss Training (B-AL)
+
+Input: Dataset $\mu$, target class $t$, fooling rate $\delta$, epoch count $k$, model $f$, norm $p$
+Variables: counter $i$, fooling rate ${fr}$, prediction ${out}$, adversarial loss function ${adv}$, loss $L$
+Output: Universal adversarial perturbation $v$
+
+    $v \leftarrow 0$
+    $i \leftarrow 0$
+    while $i < k$ do
+        for $x \sim \mu$ do
+            ${out} \leftarrow f\left( {x + v}\right)$
+            ${fr} \leftarrow 1 -$ (# of correct predictions / batch size)
+            if ${fr} < \delta$ then
+                $L \leftarrow {adv}\left( {out}, t\right)$
+            else
+                $L \leftarrow \parallel v{\parallel }_{p}$
+            end if
+            backpropagate $L$
+            update $v$
+        end for
+        $i \leftarrow i + 1$
+    end while
+    return $v$
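Algorithm 1 can be sketched framework-agnostically, with the model-specific pieces passed in as callables. All names and the toy abstraction below are ours; a real implementation would compute the fooling rate and gradient steps with a deep learning framework:

```python
def train_b_al(batches, delta, epochs, fooling_rate_fn, adv_step, norm_step, v0):
    """Batch Alternating Loss (B-AL) training loop, sketched abstractly.

    fooling_rate_fn(v, batch) -> fooling rate of the current UAP on the batch;
    adv_step(v, batch)        -> UAP after one step on the adversarial loss;
    norm_step(v)              -> UAP after one step on ||v||_p.
    """
    v = v0
    for _ in range(epochs):
        for batch in batches:
            # Alternate: push adversarially while below target, else shrink norm.
            if fooling_rate_fn(v, batch) < delta:
                v = adv_step(v, batch)
            else:
                v = norm_step(v)
    return v
```

The same skeleton covers EB-AL and P-AL by changing which fooling rate (batch, epoch or progressive) drives the alternation.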
+
+§ METHODOLOGY
+
+The universal adversarial attack problem can be formally defined as in Equation 1, where $v$ is the UAP, $x$ is a benign image sampled from a dataset $\mu$, $f$ is the target model, $\delta$ is the minimum fooling rate and $\epsilon$ is the maximum ${L}_{p}$ norm of $v$.
+
+$$
+{P}_{x \sim \mu }\left( {f\left( {x + v}\right) \neq f\left( x\right) }\right) \geq \delta \;\text{ s.t. }\;\parallel v{\parallel }_{p} \leq \epsilon \tag{1}
+$$
+
+The norm-bounded attack concept is widely used in adversarial machine learning; however, it is also possible to formulate the problem as a minimization, as in Equation 2, through slight modifications of Equation 1.
+
+$$
+\mathop{\min }\limits_{v}\parallel v{\parallel }_{p}\;\text{ s.t. }\;{P}_{x \sim \mu }\left( {f\left( {x + v}\right) = t}\right) \approx \delta \tag{2}
+$$
+
+
+Figure 2: UAPs calculated for the peacock target class applied over a hamster image (left column) and the UAP images themselves (right column), corresponding to B-AL, EB-AL, P-AL and Filtered P-AL, from top to bottom.
+
+The variable $t$ is the target class in this equation. Now, the problem becomes finding the smallest $\parallel v{\parallel }_{p}$ that attains the desired fooling rate. This problem can also be turned into a min-max problem by setting $\delta$ to 1.
+
+In this work, we propose a solution to this problem by introducing 3 UAP training methods (shown in Table 1), leading to different attacks that take advantage of the alternating loss strategy. The alternating loss scheme switches between 2 loss functions depending on the current state of the training; this strategy is used in image-dependent adversarial attacks (optimize the norm of the perturbation if the current image is adversarial; if not, optimize the adversarial loss), but it is not directly applicable to the UAP domain. The first proposed method is Batch Alternating Loss (B-AL), which aims to reach the desired fooling rate by achieving the same fooling rate on each batch. The second is Epoch-Batch Alternating Loss (EB-AL), which also takes into account the fooling rate achieved over the epoch, alongside each individual batch. The final training method is Progressive Alternating Loss (P-AL), which uses the fooling rate achieved up to the current batch to alter the loss function. We also empirically find that stronger features are generated around the edges, along with smaller artifacts in the middle; we therefore propose applying filtering during training to alleviate these artifacts. The proposed filtering scheme can be integrated into any minimization-based UAP training.
+
+The alternating loss scheme requires a decoupled decision mechanism that sets the current loss function to either the adversarial loss, which changes the prediction of the network, or the norm of the UAP, which is either ${L}_{2}$ or ${L}_{\infty }$. In image-dependent attacks, the loss function can be selected iteratively based on the current state of the perturbation: when the current perturbation succeeds in making the image an adversarial example, minimize the norm of the perturbation; otherwise, optimize the adversarial loss to obtain an adversarial example (Zhao, Liu, and Larson 2020). However, training a UAP for several iterations on a single image while changing the loss function at each iteration would be incompatible with mini-batch training. We can instead use batches to train the UAP, while changing the current loss function based on the fooling rate achieved on each batch. Note that the main parameter in this optimization is the desired fooling rate over the whole training dataset.
+
+§ BATCH ALTERNATING LOSS (B-AL)
+
+Algorithm 1 shows the pseudo-code of B-AL. In this approach, the loss function is switched according to the fooling rate achieved by the current state of the UAP on the current batch: if the UAP achieves the desired fooling rate on the current batch, the norm loss is selected; otherwise, the adversarial loss, chosen to be the cross-entropy function, is selected. This training method can bring the fooling rate near the desired level within several epochs. However, when some of the batches yield the desired fooling rate, the loss function lowers the adversarial energy to decrease the norm of the UAP; the overall fooling rate thus stays below the target, since the majority of the images remain benign. To address this problem, the following two training methods are proposed.
+
+§ EPOCH-BATCH ALTERNATING LOSS (EB-AL)
+
+Algorithm 2 shows the pseudo-code of EB-AL. This method aims to ensure that the UAP does not start diminishing its adversarial energy until the desired fooling rate is achieved. Before reaching the target, the loss function is strictly adversarial, regardless of the individual performance of each batch. At the end of each epoch, we check whether the target fooling rate was achieved; if it was, then in the next epoch the same loss alteration scheme presented in B-AL training is applied. EB-AL almost always achieves the desired fooling rate, if it is achievable at all. When an epoch completes with a successful fooling rate, many of the batches may yield fooling rates above the target. This causes extensive use of the norm loss function, which brings the overall fooling rate down, which in turn makes the loss function strictly adversarial in the next epoch. This phenomenon makes the fooling rate oscillate around the target, which may cause some imprecision in attaining the target fooling rate.
+
+Algorithm 2: Epoch-Batch Alternating Loss Training (EB-AL)
+
+Input: Dataset $\mu$, target class $t$, fooling rate $\delta$, epoch count $k$, model $f$, norm $p$
+Variables: counter $i$, fooling rate ${fr}$, prediction ${out}$, adversarial loss function ${adv}$, loss $L$, optimization mode $m$, number of correct predictions ${correct}$, image counter ${imcount}$, fooling rate over the epoch ${epochfr}$
+Output: Universal adversarial perturbation $v$
+
+    $v \leftarrow 0$
+    $i \leftarrow 0$
+    $m \leftarrow$ 'epoch'
+    while $i < k$ do
+        ${correct} \leftarrow 0$
+        ${imcount} \leftarrow 0$
+        for $x \sim \mu$ do
+            ${out} \leftarrow f\left( {x + v}\right)$
+            ${correct} \leftarrow {correct} +$ # of correct predictions
+            ${imcount} \leftarrow {imcount} +$ batch size
+            ${fr} \leftarrow 1 -$ (# of correct predictions / batch size)
+            if $m =$ 'epoch' or ($m =$ 'batch' and ${fr} < \delta$) then
+                $L \leftarrow {adv}\left( {out}, t\right)$
+            else
+                $L \leftarrow \parallel v{\parallel }_{p}$
+            end if
+            backpropagate $L$
+            update $v$
+        end for
+        ${epochfr} \leftarrow 1 - {correct}/{imcount}$
+        if ${epochfr} < \delta$ then
+            $m \leftarrow$ 'epoch'
+        else
+            $m \leftarrow$ 'batch'
+        end if
+        $i \leftarrow i + 1$
+    end while
+    return $v$
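The end-of-epoch mode switch of EB-AL reduces to a single rule; a minimal sketch following Algorithm 2 (function and mode names ours):

```python
def update_mode_eb_al(epoch_fooling_rate, delta):
    """EB-AL end-of-epoch rule: stay purely adversarial ('epoch' mode)
    until the whole epoch reaches the target fooling rate, then allow
    per-batch loss alternation ('batch' mode)."""
    return "batch" if epoch_fooling_rate >= delta else "epoch"
```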
+
+§ PROGRESSIVE ALTERNATING LOSS (P-AL)
+
+Algorithm 3 shows the pseudo-code of P-AL (with the filtering step of FP-AL). Because of the nature of the training procedure, it is not trivial to completely stabilize the fooling rate at the target; however, it is possible to minimize the oscillation caused by the phenomenon explained in the previous section. In P-AL training, similar to EB-AL, the adversarial loss is maintained until the target fooling rate is achieved. After reaching the target, the loss function is altered based on the fooling rate achieved from the beginning of the epoch up to the currently optimized batch. In this way, it is possible to maintain the overall fooling rate while optimizing the norm when possible. Although this method cannot completely prevent the oscillation, it reduces it to a certain degree.
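The quantity driving P-AL's loss alternation is a running fooling rate accumulated from the start of the epoch up to the current batch; a minimal sketch of that tally (class name ours):

```python
class ProgressiveFoolingRate:
    """Running fooling rate over the epoch so far, as used by P-AL
    to decide between the adversarial loss and the norm loss."""

    def __init__(self):
        self.correct = 0  # correctly classified adversarial samples so far
        self.seen = 0     # total samples processed so far this epoch

    def update(self, n_correct, batch_size):
        """Fold in one batch and return the progressive fooling rate."""
        self.correct += n_correct
        self.seen += batch_size
        return 1.0 - self.correct / self.seen
```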
+
+Algorithm 3: Filtered Progressive Alternating Loss Training (FP-AL)
+
+Input: Dataset $\mu$ , target class $t$ , fooling rate $\delta$ , epoch $k$ ,
+
+model $f$ , norm $p$ , mask radius $D$
+
+Variables: Counter $i$ , fooling rate ${fr}$ , prediction out,
+
+adversarial loss function ${adv}$ , loss $L$ , number of correct
+
+predictions correct, image number counter imcount,
+
+circlar filter filter
+
+Output: Universal adversarial perturbation v
+
+ $v \leftarrow 0$
+
+ $i \leftarrow 0$
+
+ filter $\leftarrow$ filter in Equation 3 with $D$
+
+ while $i < k$ do
+
+ correct $\leftarrow 0$
+
+ imcount $\leftarrow 0$
+
+ for $x \sim \mu$ do
+
+ out $\leftarrow f\left( {x + v}\right)$
+
+ correct $\leftarrow$ correct + # of correct predictions
+
+ imcount $\leftarrow$ imcount + batch size
+
+ ${fr} \leftarrow$ length of correct $/$ imcount
+
+ if ${fr} < \delta$ then
+
+ $L \leftarrow {adv}\left( {{out},t}\right)$
+
+ else
+
+ $L \leftarrow \parallel v{\parallel }_{p}$
+
+ end if
+
+ backpropagate $L$
+
+ update $v$
+
+ $v \leftarrow$ filter(v)
+
+ $i \leftarrow i + 1$
+
+ end for
+
+ end while
+
+ return $v$
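The per-batch loss selection used in P-AL (and carried into the FP-AL loop above) can be sketched as a small helper. This is our own illustrative sketch following the bookkeeping in Algorithm 3, not the authors' code: given the counts accumulated from the start of the epoch, it returns which loss to backpropagate for the current batch.

```python
def progressive_loss_choice(hits_so_far: int, seen_so_far: int, target_fr: float) -> str:
    """Pick the loss for the current batch, P-AL style.

    hits_so_far: predictions counted towards the fooling rate from the
    start of the epoch up to (and including) the current batch.
    seen_so_far: total number of images processed so far in the epoch.
    """
    fr = hits_so_far / seen_so_far
    # Below the target: keep pushing the adversarial (cross-entropy) loss;
    # at or above it: switch to minimizing the L_p norm of the perturbation.
    return "adversarial" if fr < target_fr else "norm"
```

For example, with a 95% target, 40 hits out of the first 100 images keeps the adversarial loss active, while 96 hits out of 100 switches the objective to norm minimization.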
+
+§ MASKED TRAINING
+
+We empirically found that when the UAP is not normalized by a norm constraint, perturbations with high intensities tend to accumulate around the edges and corners. These perturbations also contain more image-like features, thus they influence the prediction towards the target. Applying masks to smooth out perturbations was investigated for image-dependent attacks (Aksoy and Temizel 2020), and it is a simple yet efficient way to control the geometry of the perturbations; hence we propose integrating a masking operation into the UAP training procedure. To reduce the perturbations around the center of the image (which mostly lie on top of the target object), we apply a filter (Equation 3) to the UAP after each batch, where $(x, y)$ is the pixel position, $h$ and $w$ are the height and width of the UAP respectively, and $D$ is the radius of the circle. Algorithm 3 shows FP-AL training, which is P-AL training with filtering. The visual effect of filtered training can be seen in Figure 1.
+
+$$
+f\left( x\right) = \left\{ \begin{array}{ll} 1, & \text{ if }\sqrt{{\left( \frac{w}{2} - x\right) }^{2} + {\left( \frac{h}{2} - y\right) }^{2}} \geq D \\ 0, & \text{ otherwise } \end{array}\right. \tag{3}
+$$
+
+Table 2: ${L}_{2}$ attack results, provided in terms of ${L}_{2}$ and ${L}_{\infty }$ metrics and FR refers to the Fooling Rate. Note that UAP (Moosavi-Dezfooli et al. 2017) and F-UAP attacks (Zhang et al. 2020) are set to reach the same ${L}_{2}$ values as P-AL to allow comparison of FR at the same level of perturbation.
+
+| Method | DenseNet121 $L_2$ | DenseNet121 $L_\infty$ | DenseNet121 FR | ResNet50 $L_2$ | ResNet50 $L_\infty$ | ResNet50 FR | GoogleNet $L_2$ | GoogleNet $L_\infty$ | GoogleNet FR | VGG16 $L_2$ | VGG16 $L_\infty$ | VGG16 FR |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| B-AL | 9.14 | 0.45 | 0.93 | 9.08 | 0.52 | 0.93 | 9.56 | 0.45 | 0.91 | 7.10 | 0.47 | 0.95 |
+| EB-AL | 14.11 | 0.47 | 0.98 | 14.36 | 0.60 | 0.98 | 15.73 | 0.58 | 0.98 | 7.39 | 0.44 | 0.95 |
+| P-AL | 11.16 | 0.43 | 0.95 | 11.66 | 0.53 | 0.95 | 13.01 | 0.60 | 0.95 | 5.52 | 0.44 | 0.95 |
+| UAP | 11.16 | 0.29 | 0.33 | 11.66 | 0.21 | 0.34 | 13.01 | 0.27 | 0.43 | 5.52 | 0.15 | 0.30 |
+| F-UAP | 11.16 | 0.23 | 0.90 | 11.66 | 0.32 | 0.93 | 13.01 | 0.34 | 0.91 | 5.52 | 0.16 | 0.70 |
+
+Table 3: ${L}_{\infty }$ attack results, provided in terms of ${L}_{2}$ and ${L}_{\infty }$ metrics and FR refers to the Fooling Rate. Note that UAP (Moosavi-Dezfooli et al. 2017) and F-UAP attacks (Zhang et al. 2020) are set to reach the same ${L}_{\infty }$ values as P-AL to allow comparison of FR at the same level of perturbation.
+
+| Method | DenseNet121 $L_\infty$ | DenseNet121 $L_2$ | DenseNet121 FR | ResNet50 $L_\infty$ | ResNet50 $L_2$ | ResNet50 FR | GoogleNet $L_\infty$ | GoogleNet $L_2$ | GoogleNet FR | VGG16 $L_\infty$ | VGG16 $L_2$ | VGG16 FR |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| B-AL | 0.17 | 24.96 | 1.00 | 0.17 | 26.64 | 1.00 | 0.20 | 26.63 | 1.00 | 0.16 | 19.04 | 1.00 |
+| EB-AL | 0.16 | 24.42 | 1.00 | 0.18 | 29.39 | 1.00 | 0.22 | 33.55 | 1.00 | 0.17 | 25.70 | 1.00 |
+| P-AL | 0.11 | 16.76 | 0.96 | 0.13 | 16.61 | 0.96 | 0.16 | 20.60 | 0.96 | 0.11 | 12.45 | 0.95 |
+| UAP | 0.11 | 25.71 | 0.52 | 0.13 | 29.64 | 0.60 | 0.16 | 35.32 | 0.79 | 0.11 | 25.84 | 0.75 |
+| F-UAP | 0.11 | 25.55 | 0.99 | 0.13 | 26.74 | 0.99 | 0.16 | 31.00 | 0.99 | 0.11 | 24.57 | 0.99 |
+
+This method can be applied to any minimization-based UAP training scenario; for norm-constrained attacks, the perturbations do not always migrate towards the edges. We also empirically found that smooth circular filters, such as 2D Gaussian or Butterworth (Butterworth et al. 1930) filters, tend to limit the adversarial capacity of the UAPs by slightly smoothing the features around the edges.
+
+§ EXPERIMENTAL DESIGN
+
+We trained the UAPs using a sampled ImageNet dataset containing a total of 10000 images, formed by taking 10 images from each class. We compared our attack with two other attacks that can be trained with small dataset sizes: vanilla UAP (Moosavi-Dezfooli et al. 2017) and Feature-UAP (Zhang et al. 2020). We chose the target class as peacock for our attacks and Feature-UAP (vanilla UAP is strictly an untargeted attack, as it is based on DeepFool). Similar to our attack, vanilla UAP allows specification of a target fooling rate over the training set; to allow comparisons on the same ground, we set this parameter to the same value for all attacks, specifically a target fooling rate of 95%. However, as both of these attacks are norm-constrained, a direct comparison is not immediately possible; therefore, we first trained UAPs with our attacks, then set the constraints, i.e. the epsilons, to match our obtained ${L}_{2}$ or ${L}_{\infty }$ values, depending on which norm was selected to be optimized. To measure the performance of the attacks, we used the standard ImageNet validation set of 50000 images. For fast convergence, Adam was selected as the optimizer, and the UAPs were trained for 20 epochs. For the experiments where filtering is applied, a radius of 112 is used, as the dimensions of the input images are ${224} \times {224}$.
+
+§ RESULTS
+
+Two different experiments have been conducted by applying the attacks with ${L}_{2}$ and ${L}_{\infty }$ norms, which are both supported by all attack types in question. The results are then compared with regard to both ${L}_{2}$ and ${L}_{\infty }$ values.
+
+§ ${L}_{2}$ ATTACKS
+
+Table 2 shows the ${L}_{2}$ attack results for B-AL, EB-AL, P-AL, vanilla UAP and Feature-UAP (F-UAP). For attacks constrained to a 95% fooling rate, the attack is regarded as successful if it is above and close to the target. Despite achieving the smallest ${L}_{2}$ value compared to the other base models, B-AL cannot attain the target FR for DenseNet121 (Huang et al. 2018), ResNet50 (He et al. 2015) and GoogleNet (Szegedy et al. 2014). It only reaches the target for VGG16 (Simonyan and Zisserman 2015) with batch normalization, but in that case its ${L}_{2}$ results are comparatively higher. On the other hand, EB-AL exceeds the target FR by 3 percentage points (except for VGG16, where it achieves the target FR), which renders the attack sub-optimal; furthermore, its ${L}_{2}$ values are consistently higher than both B-AL and P-AL. P-AL, which was introduced to address the inefficiencies of B-AL and EB-AL, consistently achieves the desired fooling rate while also having the smallest ${L}_{2}$ values. Overall, P-AL stays inside the desired range; furthermore, by only integrating a filter during training (Table 4), FP-AL achieves even lower perturbation levels, both in terms of ${L}_{2}$ and distance from the desired fooling rate. This algorithm consistently achieves a ${95}\%$ fooling rate while yielding the smallest successful ${L}_{2}$ distance among the given attacks.
+
+Table 4: Comparison between the results of P-AL and FP-AL, in both ${L}_{2}$ and ${L}_{\infty }$ . FR signifies the fooling rate over the whole dataset.
+
+| Method | DenseNet121 $L_2$ | DenseNet121 $L_\infty$ | DenseNet121 FR | ResNet50 $L_2$ | ResNet50 $L_\infty$ | ResNet50 FR | GoogleNet $L_2$ | GoogleNet $L_\infty$ | GoogleNet FR | VGG16 $L_2$ | VGG16 $L_\infty$ | VGG16 FR |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| ${L}_{2}$ P-AL | 11.16 | 0.43 | 0.95 | 11.66 | 0.53 | 0.95 | 13.01 | 0.60 | 0.95 | 5.52 | 0.44 | 0.95 |
+| ${L}_{2}$ FP-AL | 8.38 | 0.41 | 0.95 | 10.75 | 0.60 | 0.95 | 10.62 | 0.52 | 0.95 | 8.93 | 0.43 | 0.95 |
+| ${L}_{\infty}$ P-AL | 16.76 | 0.11 | 0.96 | 16.61 | 0.13 | 0.96 | 20.60 | 0.16 | 0.96 | 12.45 | 0.11 | 0.95 |
+| ${L}_{\infty}$ FP-AL | 15.57 | 0.15 | 0.97 | 16.42 | 0.15 | 0.96 | 18.80 | 0.18 | 0.97 | 13.28 | 0.13 | 0.96 |
+
+It should be noted that both UAP and F-UAP are mainly meant to be run under ${L}_{\infty }$ constraints; however, the algorithms are suitable for ${L}_{2}$ normalization during training. As mentioned earlier, both of these attacks are norm-constrained, as opposed to our minimization attacks, which makes them difficult to compare. However, when the ${L}_{2}$ constraints are equalized at the level of our attacks, we see that both UAP and F-UAP fall below the desired fooling rate; nonetheless, both reach significantly smaller ${L}_{\infty }$ values compared to our attacks.
+
+§ ${L}_{\infty }$ ATTACKS
+
+Table 3 shows the results of the ${L}_{\infty }$ attacks. It should be noted that while our attacks are mainly designed to minimize the ${L}_{2}$ norms of the UAPs, they can minimize ${L}_{\infty }$ as well. This time, B-AL and EB-AL overshoot the desired fooling rate, which is not optimal under our constraints; besides, their ${L}_{\infty }$ values are comparatively higher. P-AL achieves both better ${L}_{2}$ and ${L}_{\infty }$ values while staying closer to the desired fooling rate. According to the results in Table 4, FP-AL slightly increases the ${L}_{\infty }$ values while also moving further from the target, in exchange for an overall decrease in the ${L}_{2}$ norm.
+
+As UAP and F-UAP are mainly ${L}_{\infty }$-bounded attacks, it is fair to expect better results from them. UAP shows much better results compared to the ${L}_{2}$-normalized attack, but still cannot reach the target fooling rate for any of the networks. F-UAP achieves a ${99}\%$ fooling rate for each network type, albeit yielding comparatively higher ${L}_{2}$ values.
+
+§ DISCUSSION
+
+Altering the loss function based on the progressive fooling rate gives the best results in both ${L}_{2}$ and ${L}_{\infty }$ attacks. Although our other attacks take a similar approach, because each batch affects the optimization too strongly, the alternating loss scheme causes unstable behaviour and thus makes it harder to converge to the desired fooling rate. Another possible drawback of these attacks is that the batch size plays a crucial role in how the optimization proceeds. For instance, UAPs trained with batch sizes of 32 and 128 may show a severe performance difference, since increasing the sample size gives a better estimate of the performance over the whole dataset. P-AL is independent of the batch size and is more stable around the desired fooling rate.
+
+We should also point out that P-AL addresses the problem of obtaining a fooling rate around the target level, not exceeding it to obtain an even better fooling rate. In a case where the goal is to maximize the fooling rate as much as possible, our attacks are likely to be less optimal. For instance, if we set the target fooling rate to ${100}\%$ , the UAP will be trained only with the adversarial loss function, therefore never minimizing the ${L}_{p}$ norm. To maximize the fooling rate, EB-AL may be the better choice, since it will first bring the fooling rate to ${100}\%$ , if possible, and then try to minimize the norm. Tables 2 and 3 show that higher fooling rates can be achieved by EB-AL, although the ${L}_{p}$ norms are slightly higher. On that note, using the ${L}_{\infty }$ norm in the optimization can be another way of maximizing the fooling rate, along with B-AL and EB-AL. Using ${L}_{\infty }$ usually makes the optimization converge much faster at high fooling rate targets: since the ${L}_{\infty }$ norm is on a $[0, 1]$ scale, the adversarial loss function (cross-entropy) takes much higher values and thus updates the perturbation values more drastically. In those cases, while the UAP is optimized in a strongly adversarial manner, a small amount of norm optimization is also done, which can yield a UAP with a very high fooling rate.
+
+§ PERTURBATION FEATURES
+
+Figure 2 shows sample UAPs trained with different methods. The UAPs in the first three rows (obtained without a filter) exhibit perturbations with visible image features accumulated around the edges. However, this phenomenon is not caused by the alternating loss scheme; rather, it is a consequence of performing targeted universal attacks that minimize the target's standard loss. The gradients on the perturbation are high over the edges after only a few iterations, which causes the image features to concentrate in these regions. Figure 3 shows scatter plots of the mean gradient values when a UAP is applied to the whole dataset, versus the distance of each pixel from the center. The scatter plots are generated using the state of a UAP during training; from left to right, top to bottom, the UAP is taken from iterations 1, 30, 150 and 300 of the training. The gradients flow to the pixels that are far from the center of the images. We speculate that, since the main objects of the images are usually located around the center, the magnitudes of the gradients with respect to a loss function whose objective class differs from the original class become relatively higher where features from the original object are absent, hence the edges and corners. On the other hand, it is also possible to see small feature-rich perturbations generated around the center of the image, such as the green dots visible in Figure 2. It is known that universal perturbations take advantage of image features that outweigh the original image features, which is why it can be possible to identify the target class by only looking at the universal perturbations; yet the small accumulations around the center defy these assumptions. For that reason, our filtered training scheme not only keeps the center of attention clean of perturbations, but also quantitatively yields better ${L}_{p}$ norms and stable fooling rates.
+
+
+Figure 3: Vertical axes show the mean gradient value; horizontal axes show the distance between the pixel containing the corresponding mean gradient and the center of the image. The scatter plots are extracted from UAP states after iterations 1, 30, 150 and 300.
+
+§ CONCLUSION
+
+In this work, we propose and evaluate alternative approaches for training a UAP that can achieve target fooling rates over a dataset while being a minimization optimization rather than ${L}_{p}$-bounded. For that, we integrated 'alternating loss', an image-dependent attack strategy, into the universal adversarial domain. As it was not directly possible to integrate this strategy into a training procedure, we devised three different approaches for its utilization. B-AL training altered the loss function based solely on the currently processed batch. EB-AL training also took into account the performance of the UAP over the whole dataset before altering the loss function. Finally, P-AL training took into account the fooling rate up to the point where a batch is processed. Using P-AL, we achieved remarkable ${L}_{2}$ distances while maintaining the desired fooling rates. On top of P-AL, we also applied circular filtering to push the small perturbations that appear in the center of the UAP towards the edges. In this way, we obtain perceptually better UAPs, having fewer perturbations in the center of the image, while achieving even smaller ${L}_{2}$ distances. This work can further be improved by regularizing the altered loss functions to achieve better ${L}_{p}$ norms. Also, investigating the mitigation of the image features on the UAPs can be helpful for understanding not only the existence of these perturbations, but also the behaviour of deep neural networks.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/n3PMOhS42s6/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/n3PMOhS42s6/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..3eb799f83f7a46b0bdad78dbed832bc65d74a890
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/n3PMOhS42s6/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,209 @@
+# An Adversarial Benchmark for Fake News Detection Models
+
+Anonymous Author(s)
+
+${}^{1}$ Affiliation
+
+Address
+
+Email
+
+## Abstract
+
+With the proliferation of online misinformation, fake news detection has gained importance in the artificial intelligence community. Recent work has achieved promising results on benchmark datasets, with the performance often attributed to deep learning models' ability to understand text and learn facts from data. However, literature studying the limitations of this claim remains scarce. In this paper, we create adversarial benchmarks based on the LIAR and Fake-News datasets that target three aspects of "understanding", and show that a basic BERT and a FakeBERT model are vulnerable to two of these attacks. Ultimately, this strengthens the need for such models to be used in conjunction with other fact-checking methods.
+
+## Introduction
+
+As online media plays an increasingly impactful role in modern social and political movements, the ability to detect and halt the flow of misinformation has become the subject of substantial research in the artificial intelligence community. An important component of this research is the task of fake news detection: a natural language classification task in which a model must determine whether a news article is intentionally deceptive (Rubin, Chen, and Conroy 2015). Unfortunately, fake news detection is as challenging as it is important. In order to successfully distinguish fake news articles from genuine ones, a model must not only be proficient in natural language understanding, but also be able to incorporate world knowledge into its computation, including knowledge of current events.
+
+The inherent difficulty of this task, as well as the social and political incentives that encourage development of methods for evading content filters, raises questions surrounding the robustness of fake news detectors against adversarially written articles. To that end, a number of studies, such as Zhou et al. (2019), Ali et al. (2021), and Koenders et al. (2021), have subjected fake news detectors to a battery of attacks. All three of these studies have been able to produce cleverly written fake news articles that evade detection.
+
+This paper proposes an adversarial benchmark for fake news detection that is designed to target three aspects of a model's "understanding": whether it has the ability to employ semantic composition, whether it incorporates world knowledge of political parties, and whether adverb intensity is employed as a signal of fake news. Our benchmark is based on the premise that an ideal fake news detector should base its classification on the semantic content of its input and its relation to real-world facts, and not on superficial features of the text. This means that models that are vulnerable to our attacks are likely to be overly reliant on heuristics relating to word choice while failing to extract substantive assertions made by the articles they are tested on.
+
+To test our benchmark, we fine-tune BERT classifiers (Devlin et al. 2019) on the LIAR dataset (Wang 2017) and the Kaggle Fake-News dataset (UTK Machine Learning Club 2017) and subject them to our three adversarial attacks. Since BERT is pre-trained on a large corpus of books (Zhu et al. 2015) and Wikipedia articles, it is possible that a BERT-based fake news detector might contain world knowledge that could be leveraged for fake news detection. For the most part, this is not borne out by our results: we find that our models are vulnerable to two of our three attacks, suggesting that they lack the ability both to extract the content of an article and to compare this content to the knowledge provided by the pre-training corpus.
+
+## Related Work
+
+A number of authors have employed neural text models for fake news classification. These include deep diffusion networks (Zhang, Dong, and Yu 2020), recurrent and convolutional networks (Ruchansky, Seo, and Liu 2017; Yang et al. 2018; Nasir, Khan, and Varlamis 2021), and BERT-based models (Ding, Hu, and Chang 2020; Kaliyar, Goswami, and Narang 2021). Common benchmarks for fake news detection are the LIAR dataset (Wang 2017) and the Kaggle Fake-News dataset (UTK Machine Learning Club 2017). Ding, Hu, and Chang's (2020) BERT-based model achieved state of the art results on the LIAR dataset, while Kaliyar, Goswami, and Narang's (2021) FakeBERT architecture achieved state of the art results on the Kaggle Fake-News dataset.
+
+On adversarial attacks for fake news detection, previous literature has shown that fake news detection models can be fooled by carefully tweaked input. Ali et al. (2021) and Koenders et al. (2021) applied a series of text-based adversarial attacks including TextBugger (Li et al. 2019), TextFooler (Jin et al. 2020), DeepWordBug (Gao et al. 2018) and Pruthi (Pruthi, Dhingra, and Lipton 2019). These are generic attacks for natural language models consisting of textual noise such as typos, character swaps, and synonym substitution. In addition to these standard attacks, Zhou et al. (2021) proposed three novel challenges for fake news detectors: (1) modifying details of a sentence involving time, location, etc., (2) swapping the subject and object of a sentence, and (3) adding causal relationships between events in a sentence or removing some of its parts.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+The attacks we mention above mainly simulate noise that might appear in online text. In contrast, the attacks we propose are specifically tailored to the problem of fake news detection, particularly in the context of politics. Our attacks are not designed to simulate naturally occurring noise, but rather to test whether deep-learning models understand text, learn real-world facts, and employ inferential reasoning.
+
+## Adversarial Attacks
+
+We choose three attacks that would test a model's understanding of text and real-world facts. Our goal is to see whether the models tweak their outputs accordingly when the truthfulness of an input has been changed, or keep them unchanged otherwise.
+
+For each adversarial attack, we input the original and modified statements into the model. Then, we compute (1) the percentage of instances where the predicted label was different for the original and modified statement (%LabelFlip), and (2) the average change in output probability that the statement is fake $\left( {\Delta }_{\text{Prob }}\right)$ , where a positive change means the attack increases the probability that the statement is fake.
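Concretely, the two metrics can be computed from paired model outputs as follows. This is a hypothetical sketch; the function and variable names are ours, and a 0.5 decision threshold on P(fake) is assumed:

```python
def attack_metrics(orig_fake_probs, modified_fake_probs, threshold=0.5):
    """Compute %LabelFlip and the mean change in P(fake).

    orig_fake_probs / modified_fake_probs: model-assigned probabilities
    that each statement is fake, before and after the attack.
    """
    n = len(orig_fake_probs)
    # A flip occurs when the thresholded label differs between the two runs.
    flips = sum((o >= threshold) != (m >= threshold)
                for o, m in zip(orig_fake_probs, modified_fake_probs))
    label_flip_pct = 100.0 * flips / n
    # Positive delta means the attack pushed predictions towards "fake".
    delta_prob = sum(m - o for o, m in zip(orig_fake_probs, modified_fake_probs)) / n
    return label_flip_pct, delta_prob
```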
+
+## Negating Sentences
+
+In the first attack, we negate the sentences of each input text using a script due to Bajena (2017). The script heuristically attempts to identify sentences with a third-person singular subject, and changes linking verbs such as is, was, or should into is not, was not, and should not, and vice versa. While the script is not guaranteed to negate a sentence completely, we assume that it tweaks the semantics of the dataset enough to justify a conspicuous effect on the classification probabilities. We assume that an ideal fake news detector would assign opposite labels to a text and its negation.
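The flavor of such a heuristic can be illustrated with a toy sketch of our own (not Bajena's script): toggle the first linking or modal verb between its plain and negated form.

```python
import re

# Illustrative subset of the verbs targeted by the heuristic.
AUXILIARIES = ("is", "was", "should")

def toggle_negation(sentence: str) -> str:
    """Negate a positive sentence, or un-negate an already negated one."""
    # If the sentence is already negated, strip the first "not".
    for aux in AUXILIARIES:
        if f"{aux} not" in sentence:
            return sentence.replace(f"{aux} not", aux, 1)
    # Otherwise insert "not" after the first auxiliary found.
    return re.sub(r"\b(is|was|should)\b", r"\1 not", sentence, count=1)
```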
+
+## Reversing Political Party Affiliations
+
+In the second attack, we attempt to reverse the political party affiliations of named individuals appearing in the text. We identify names of American politicians in the text along with their party affiliations, and filter statements to those containing names from the Republican or Democratic Party. Then, we manually filter the remaining statements to only include factually true statements where replacing the original name with a random one would make the sentence untrue. In each of these texts, we replace names of Democrats with a randomly selected Republican, and vice versa.
+
+The statements in the adversarial dataset consist of quotes, facts, or events associated with particular individuals. We therefore expect that name replacement should cause the model to classify a modified statement as factually false.
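The replacement step can be sketched as below. This is a hypothetical helper: the politician lists are passed in rather than hard-coded, and the filtering of candidate statements described above was done manually in our pipeline.

```python
import random

def reverse_affiliation(text, name, party, republicans, democrats, seed=0):
    """Swap a named politician for a random member of the opposite party.

    party: "R" or "D", the affiliation of `name` in the original statement.
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    opposite_pool = democrats if party == "R" else republicans
    return text.replace(name, rng.choice(opposite_pool))
```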
+
+## Reducing Intensity of Statements
+
+In the third attack, we remove adverbs that increase sentences' intensity (e.g. absolutely, completely). We hypothesize that fake news is sometimes characterized by "clickbait" titles with highly charged words (Alonso et al. 2021).
+
+Removing polarizing words does not change the meaning of a sentence, thus the label should not change. For this attack, we input false statements into the model, and expect that the model should still classify them as false.
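A minimal version of this transformation is sketched below; the intensifier list is illustrative, extending the two examples given above.

```python
# Illustrative set of high-intensity adverbs to strip.
INTENSIFIERS = {"absolutely", "completely", "totally", "utterly"}

def reduce_intensity(sentence: str) -> str:
    """Drop high-intensity adverbs; the truth value of the claim is unchanged."""
    kept = [w for w in sentence.split()
            if w.lower().strip(".,!?") not in INTENSIFIERS]
    return " ".join(kept)
```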
+
+## Experimental Setup
+
+We test our benchmark on three fine-tuned ${\mathrm{{BERT}}}_{\text{BASE }}$ classifiers: two trained on the LIAR dataset and one trained on the Kaggle Fake-News dataset. For each benchmark, we apply our three transformations to the detector's test set, present the resulting texts to the appropriate models, and report the two metrics from the previous section, %LabelFlip and ${\Delta }_{\text{Prob }}$.${}^{1}$
+
+## Models
+
+Below we describe our three models.
+
+LIAR Models LIAR (Wang 2017) is a six-class dataset that classifies statements made by politicians as True, Mostly True, Half True, Barely True, False, and Pants on Fire. We train two models on this dataset, which differ in the number of possible output labels the model can predict. First, to verify that our BERT model achieves a level of performance comparable with the results reported by Ding, Hu, and Chang (2020) for LIAR, we train a six-class BERT classifier on the original version of the dataset. Next, in order to facilitate compatibility with the adversarial attacks, we train a two-class model that collapses the True, Mostly True, and Half True labels into a single True class and the Barely True, False, and Pants on Fire labels into a single False class.
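The label collapsing used for the two-class model is a simple many-to-one mapping over LIAR's six labels, which can be written directly as:

```python
# Collapse LIAR's six truthfulness labels into a binary scheme.
SIX_TO_TWO = {
    "True": "True", "Mostly True": "True", "Half True": "True",
    "Barely True": "False", "False": "False", "Pants on Fire": "False",
}

def collapse_label(liar_label: str) -> str:
    """Map a six-class LIAR label to the two-class scheme."""
    return SIX_TO_TWO[liar_label]
```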
+
+Kaggle Fake-News Model The Kaggle Fake-News dataset (UTK Machine Learning Club 2017) is a two-class dataset consisting of headlines and text from news articles published during the 2016 United States presidential election. Our third model is a two-class classifier fine-tuned on this dataset. Since the officially published version of the dataset only contains gold-standard labels for the training data, we use ${70}\%$ of the training set for training and the remaining ${30}\%$ for testing.
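A 70/30 split of this kind can be reproduced deterministically along these lines (our own sketch; the seed choice is arbitrary):

```python
import random

def train_test_split(examples, train_frac=0.7, seed=0):
    """Shuffle indices once with a fixed seed, then cut at the requested fraction."""
    indices = list(range(len(examples)))
    random.Random(seed).shuffle(indices)
    cut = int(train_frac * len(examples))
    train = [examples[i] for i in indices[:cut]]
    test = [examples[i] for i in indices[cut:]]
    return train, test
```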
+
+## Feature Saliency Analysis
+
+In addition to reporting ${\% }_{\text{LabelFlip }}$ and ${\Delta }_{\text{Prob }}$ , we compute saliency maps for our Kaggle Fake-News model using the Gradient $\times$ Input method (G $\times$ I, Shrikumar, Greenside, and Kundaje 2017; Shrikumar et al. 2017) to measure how individual words impact the models’ classifications. $\mathrm{G} \times \mathrm{I}$ is a local explanation method that quantifies how much each input contributes to the output logits. In $\mathrm{G} \times \mathrm{I}$ , the contribution of a feature is measured by the value of its corresponding term in a linear approximation of the target output unit.
+
+---
+
+${}^{1}$ The code for our experiments is available at the following anonymized repository: https://anonymous.4open.science/r/fake-news-explainability-F77F.
+
+---
+
+| Dataset | SOTA | Our Model |
+| --- | --- | --- |
+| LIAR 2 Classes | - | 57.5 |
+| LIAR 6 Classes | 27.3 | 29.4 |
+| Kaggle Fake-News | 98.9 | 98.8 |
+
+Table 1: Test set accuracy attained by our models, compared with previously reported state-of-the-art results.
+
+| Dataset | %LabelFlip | ${\Delta }_{\text{Prob }}$ |
+| --- | --- | --- |
+| LIAR 2 Classes | 15.5 | 0.021 |
+| Kaggle Fake-News | 0.3 | -0.0001 |
+
+Table 2: Impact of the negation attack on our models.
+
+We obtain token-level saliency scores by adding together the saliency scores assigned to the embedding dimensions for each token.
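At the token level, this reduces to a dot product between each token's embedding and the gradient of the target logit with respect to that embedding. A framework-free sketch (in practice the gradients come from autograd):

```python
def gxi_token_scores(token_embeddings, token_gradients):
    """Gradient x Input saliency: sum the per-dimension products for each token."""
    return [sum(e * g for e, g in zip(emb, grad))
            for emb, grad in zip(token_embeddings, token_gradients)]
```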
+
+## Results
+
+Before discussing our results, we validate the quality of our models by comparing their performance with the current state of the art. These results are shown in Table 1. The six-class version of our LIAR model slightly outperforms the BERT-Based Mental Model of Ding, Hu, and Chang (2020), while our Kaggle Fake-News model achieves a comparable level of performance to Kaliyar, Goswami, and Narang's (2021) FakeBERT model. ${}^{2}$
+
+## Negation Attack
+
+Table 2 shows the impact of the sentence negation adversarial attack on the outputs of our two-class models. The LIAR model proves to be much more vulnerable to this attack than the Kaggle Fake-News model, though the vast majority of predictions were unchanged for both models. We observe in particular that negation causes only a small increase in the probability scores assigned to the False class, despite the fact that the negation script targets the main auxiliary verb of the sentence, which typically completely reverses the meaning of a sentence.
+
+## Party Reversal Attack
+
+Table 3 shows the impact of the name replacement attack on the models. Again, we find that the LIAR model is more susceptible to this attack than the Kaggle Fake-News model. Although most labels are still unchanged, we find that this attack has a greater impact on our models than the negation attack. It is therefore likely that our models are more sensitive to lexical relationships between specific words appearing in a statement than to the syntactic relationships that govern negation.
+
+| Dataset | %LabelFlip | ${\Delta }_{\text{Prob }}$ |
+| --- | --- | --- |
+| LIAR 2 Classes | 20.0 | 0.052 |
+| Kaggle Fake-News | 4.0 | 0.014 |
+
+Table 3: Impact of the political party reversal attack on our models.
+
+| Dataset | %LabelFlip | ${\Delta }_{\text{Prob }}$ |
+| --- | --- | --- |
+| LIAR 2 Classes | 0.0 | 0.027 |
+| Kaggle Fake-News | 0.9 | -0.008 |
+
+Table 4: Impact of the adverb intensity attack on our models.
+
+## Adverb Intensity Attack
+
+Table 4 shows the impact of the intensity-reduction attack on the models. As shown, this attack has almost no effect on the models' output. Since the expected behavior is for the output predictions to remain unchanged, our models can be deemed to be robust to this attack. This result suggests that adverb intensity is not a significant heuristic for fake news classification.
+
+## Saliency Analysis
+
+We use $\mathrm{G} \times \mathrm{I}$ heatmaps to identify keywords that may serve as signals for one class over the other. Due to its superior performance, we apply the saliency analysis to our Kaggle Fake-News model.
+
+Figure 1 shows that frequency affects the degree to which a word may be associated with true or false statements. Here, we find that words which appear in fewer documents are assigned more extreme saliency scores. Among the top 30 words with the most extreme $\mathrm{G} \times \mathrm{I}$ scores are names that appear once or twice in the dataset, such as Sanford, Jody, Marco, and Gore. In contrast, frequently-occurring names such as Trump, Hillary, and Obama have average $\mathrm{G} \times \mathrm{I}$ scores close to zero.
+
+Figure 2 visualizes the impact of high-intensity adverbs on our model. Observe that the adverbs totally and completely have small $\mathrm{G} \times \mathrm{I}$ scores in comparison to other words in the sentence. This reflects the resilience of our model against the adverb intensity attack.
+
+## Conclusion
+
+In this study, we have created an adversarial benchmark for fake news detection that is designed to test models' ability to reason about real-world facts. We find that our BERT-based models are vulnerable to negation and party reversal attacks, whereas they are robust to the adverb intensity attack. For all three attacks, our model did not change its prediction in the vast majority of cases. It may be the case that the models are simply unresponsive to the perturbations we performed on the inputs.
+
+Deep learning has demonstrated an impressive level of competence in learning dependencies and relationships in natural language tasks. However, our findings suggest that current techniques are still not sufficient for tasks like fake news detection that require sophisticated forms of reasoning. As the state of the art in fake news detection continues to advance, our benchmark will serve as a valuable metric for the reasoning capabilities of future models.
+
+---
+
+${}^{2}$ It is worth noting that Kaliyar, Goswami, and Narang (2021) did not perform a train-test split on the officially published training data for Kaggle Fake-News, but instead used the entire training set for both training and evaluation. Thus, the SOTA result in Table 1 is not directly comparable with our result, since the former may be inflated due to overfitting.
+
+---
+
+
+
+Figure 1: On average, words that appear more frequently in the datasets are assigned saliency scores closer to 0.
+
+
+
+Figure 2: High-intensity adverbs have relatively small contributions to the output logits.
+
+These findings strengthen the need for fake news classification models to be used in conjunction with other fact-checking methods. Other work has made strides in this area by exploring features like comments on an article (Shu et al. 2019) or article interaction metrics (likes, shares, retweets) that may signify an article is being maliciously spread (Prakash and Tucker 2021; Tschiatschek et al. 2018), or the possibility of incorporating crowdsourced knowledge or human fact checkers into the process altogether (Demartini, Mizzaro, and Spina 2020; Pennycook and Rand 2019).
+
+We also observed that the model trained on LIAR was more sensitive (i.e. more labels were flipped) than the model trained on the Fake-News dataset. Upon reading the data, we observed that statements in LIAR were generally less polar and more focused on facts, whereas the Fake-News dataset appeared to be a mixed bag of headlines with more polarizing words. This suggests that data quality greatly impacts models' ability to learn facts and understand text.
+
+Limitations of this work are that (1) the models were trained on only two datasets, and the results may not generalize to statements unrelated to general US politics, (2) computational limitations allowed us to explore only shallow neural network architectures, and (3) the adversarial attacks we tried were relatively simple, and a real human may be able to negate or change the intensity of a sentence in more complex ways. Future work could employ more datasets as training corpora, explore deeper model architectures, and use more complex adversarial attacks for a more robust evaluation of these fake news models.
+
+## References
+
+Ali, H.; Khan, M. S.; Alghadhban, A.; Alazmi, M.; Alzamil, A.; Al-Utaibi, K.; and Qadir, J. 2021. All Your Fake Detector Are Belong to Us: Evaluating Adversarial Robustness of Fake-News Detectors Under Black-Box Settings. IEEE Access, 9: 81678-81692.
+
+Alonso, M. A.; Vilares, D.; Gómez-Rodríguez, C.; and Vilares, J. 2021. Sentiment Analysis for Fake News Detection. Electronics, 10(11).
+
+Bajena, J. 2017. SublimeNegateSentence. https://github.com/Bajena/SublimeNegateSentence. Accessed: 2021-08-01.
+
+Demartini, G.; Mizzaro, S.; and Spina, D. 2020. Human-in-the-Loop Artificial Intelligence for Fighting Online Misinformation: Challenges and Opportunities. Bulletin of the Technical Committee on Data Engineering, 43(3): 65-74.
+
+Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. In Proceedings of the 2019 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, volume 1 (Long and Short Papers), 4171-4186. Minneapolis, MN, USA: Association for Computational Linguistics.
+
+Ding, J.; Hu, Y.; and Chang, H. 2020. BERT-Based Mental Model, a Better Fake News Detector. In Proceedings of the 2020 6th International Conference on Computing and Artificial Intelligence, ICCAI '20, 396-400. New York, NY, USA: Association for Computing Machinery. ISBN 978-1-4503-7708-9.
+
+Gao, J.; Lanchantin, J.; Soffa, M. L.; and Qi, Y. 2018. Black-Box Generation of Adversarial Text Sequences to Evade Deep Learning Classifiers. In 2018 IEEE Security and Privacy Workshops (SPW), 50-56. San Francisco, CA, USA: IEEE.
+
+Jin, D.; Jin, Z.; Zhou, J. T.; and Szolovits, P. 2020. Is BERT Really Robust? A Strong Baseline for Natural Language Attack on Text Classification and Entailment. Proceedings of the AAAI Conference on Artificial Intelligence, 34(05): 8018-8025.
+
+Kaliyar, R. K.; Goswami, A.; and Narang, P. 2021. Fake-BERT: Fake News Detection in Social Media with a BERT-Based Deep Learning Approach. Multimedia Tools and Applications, 80(8): 11765-11788.
+
+Koenders, C.; Filla, J.; Schneider, N.; and Woloszyn, V. 2021. How Vulnerable Are Automatic Fake News Detection Methods to Adversarial Attacks? Computing Research Repository, arXiv:2107.07970 [cs].
+
+Li, J.; Ji, S.; Du, T.; Li, B.; and Wang, T. 2019. TextBugger: Generating Adversarial Text Against Real-World Applications. In NDSS Symposium 2019. San Diego, CA, USA.
+
+Nasir, J. A.; Khan, O. S.; and Varlamis, I. 2021. Fake News Detection: A Hybrid CNN-RNN Based Deep Learning Approach. International Journal of Information Management Data Insights, 1(1).
+
+Pennycook, G.; and Rand, D. G. 2019. Fighting Misinformation on Social Media Using Crowdsourced Judgments of News Source Quality. Proceedings of the National Academy of Sciences, 116(7): 2521-2526.
+
+Prakash, S. K. A.; and Tucker, C. 2021. Classification of Unlabeled Online Media. Scientific Reports, 11(1): 6908.
+
+Pruthi, D.; Dhingra, B.; and Lipton, Z. C. 2019. Combating Adversarial Misspellings with Robust Word Recognition. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics, 5582-5591. Florence, Italy: Association for Computational Linguistics.
+
+Rubin, V. L.; Chen, Y.; and Conroy, N. K. 2015. Deception Detection for News: Three Types of Fakes. Proceedings of the Association for Information Science and Technology, 52(1): 1-4.
+
+Ruchansky, N.; Seo, S.; and Liu, Y. 2017. CSI: A Hybrid Deep Model for Fake News Detection. In Proceedings of the 2017 ACM on Conference on Information and Knowledge Management, CIKM '17, 797-806. New York, NY, USA: Association for Computing Machinery. ISBN 978-1-4503-4918-5.
+
+Shrikumar, A.; Greenside, P.; and Kundaje, A. 2017. Learning Important Features Through Propagating Activation Differences. In Proceedings of the 34th International Conference on Machine Learning, volume 70 of Proceedings of Machine Learning Research, 3145-3153. Sydney, Australia: PMLR.
+
+Shrikumar, A.; Greenside, P.; Shcherbina, A.; and Kundaje, A. 2017. Not Just a Black Box: Learning Important Features Through Propagating Activation Differences. Computing Research Repository, arXiv:1605.01713 [cs].
+
+Shu, K.; Cui, L.; Wang, S.; Lee, D.; and Liu, H. 2019. dEFEND: Explainable Fake News Detection. In Proceedings of the 25th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '19, 395-405. New York, NY, USA: Association for Computing Machinery. ISBN 978-1-4503-6201-6.
+
+Tschiatschek, S.; Singla, A.; Gomez Rodriguez, M.; Merchant, A.; and Krause, A. 2018. Fake News Detection in Social Networks via Crowd Signals. In Companion Proceedings of the The Web Conference 2018, 517-524. Geneva, Switzerland: International World Wide Web Conferences Steering Committee. ISBN 978-1-4503-5640-4.
+
+UTK Machine Learning Club. 2017. Fake News Dataset. https://www.kaggle.com/c/fake-news/overview. Accessed: 2021-08-01.
+
+Wang, W. Y. 2017. "Liar, Liar Pants on Fire": A New Benchmark Dataset for Fake News Detection. Computing Research Repository, arXiv:1705.00648 [cs].
+
+Yang, Y.; Zheng, L.; Zhang, J.; Cui, Q.; Li, Z.; and Yu, P. S. 2018. TI-CNN: Convolutional Neural Networks for Fake News Detection. Computing Research Repository, arXiv:1806.00749 [cs].
+
+Zhang, J.; Dong, B.; and Yu, P. S. 2020. FakeDetector: Effective Fake News Detection with Deep Diffusive Neural Network. In 2020 IEEE 36th International Conference on Data Engineering (ICDE), 1826-1829. Online: IEEE.
+
+Zhou, Z.; Guan, H.; Bhat, M.; and Hsu, J. 2021. Fake News Detection via NLP Is Vulnerable to Adversarial Attacks. In Proceedings of the 11th International Conference on Agents and Artificial Intelligence, volume 2: ICAART, 794-800. Prague, Czech Republic. ISBN 978-989-758-350-6.
+
+Zhou, Z.; Guan, H.; Bhat, M. M.; and Hsu, J. 2019. Fake News Detection via NLP Is Vulnerable to Adversarial Attacks. Computing Research Repository, arXiv:1901.09657 [cs]: 794-800.
+
+Zhu, Y.; Kiros, R.; Zemel, R.; Salakhutdinov, R.; Urtasun, R.; Torralba, A.; and Fidler, S. 2015. Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books. In 2015 IEEE International Conference on Computer Vision (ICCV), 19-27. Santiago, Chile: IEEE.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/n3PMOhS42s6/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/n3PMOhS42s6/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..82f7fdb6bd3b5b8217b7e5e3202f8a3ae8b3f88e
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/n3PMOhS42s6/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,182 @@
+§ AN ADVERSARIAL BENCHMARK FOR FAKE NEWS DETECTION MODELS
+
+Anonymous Author(s)
+
+${}^{1}$ Affiliation
+
+Address
+
+Email
+
+§ ABSTRACT
+
+With the proliferation of online misinformation, fake news detection has gained importance in the artificial intelligence community. Recent work achieved promising results on benchmark datasets, with this performance often attributed to deep learning models' ability to understand text and learn facts from data. However, literature that studies the limitations of this claim remains scarce. In this paper, we create adversarial benchmarks based on the LIAR and Fake-News datasets that target three aspects of "understanding", and show that a basic BERT model and FakeBERT are vulnerable to two of the attacks. Ultimately, this strengthens the need for such models to be used in conjunction with other fact-checking methods.
+
+§ INTRODUCTION
+
+As online media plays an increasingly impactful role in modern social and political movements, the ability to detect and halt the flow of misinformation has become the subject of substantial research in the artificial intelligence community. An important component of this research is the task of fake news detection: a natural language classification task in which a model must determine whether a news article is intentionally deceptive (Rubin, Chen, and Conroy 2015). Unfortunately, fake news detection is as challenging as it is important. In order to successfully distinguish fake news articles from genuine ones, a model must not only be proficient in natural language understanding, but also be able to incorporate world knowledge into its computation, including knowledge of current events.
+
+The inherent difficulty of this task, as well as the social and political incentives that encourage development of methods for evading content filters, raises questions surrounding the robustness of fake news detectors against adversarially written articles. To that end, a number of studies, such as Zhou et al. (2019), Ali et al. (2021), and Koenders et al. (2021), have subjected fake news detectors to a battery of attacks. All three of these studies have been able to produce cleverly written fake news articles that evade detection.
+
+This paper proposes an adversarial benchmark for fake news detection that is designed to target three aspects of a model's "understanding": whether it has the ability to employ semantic composition, whether it incorporates world knowledge of political parties, and whether adverb intensity is employed as a signal of fake news. Our benchmark is based on the premise that an ideal fake news detector should base its classification on the semantic content of its input and its relation to real-world facts, and not on superficial features of the text. This means that models that are vulnerable to our attacks are likely to be overly reliant on heuristics relating to word choice while failing to extract substantive assertions made by the articles they are tested on.
+
+To test our benchmark, we fine-tune BERT classifiers (Devlin et al. 2019) on the LIAR dataset (Wang 2017) and the Kaggle Fake-News dataset (UTK Machine Learning Club 2017) and subject them to our three adversarial attacks. Since BERT is pre-trained on a large corpus of books (Zhu et al. 2015) and Wikipedia articles, it is possible that a BERT-based fake news detector might contain world knowledge that could be leveraged for fake news detection. For the most part, this is not borne out by our results: we find that our models are vulnerable to two of our three attacks, suggesting that they lack the ability both to extract the content of an article and to compare this content to the knowledge provided by the pre-training corpus.
+
+§ RELATED WORK
+
+A number of authors have employed neural text models for fake news classification. These include deep diffusion networks (Zhang, Dong, and Yu 2020), recurrent and convolutional networks (Ruchansky, Seo, and Liu 2017; Yang et al. 2018; Nasir, Khan, and Varlamis 2021), and BERT-based models (Ding, Hu, and Chang 2020; Kaliyar, Goswami, and Narang 2021). Common benchmarks for fake news detection are the LIAR dataset (Wang 2017) and the Kaggle Fake-News dataset (UTK Machine Learning Club 2017). Ding, Hu, and Chang's (2020) BERT-based model achieved state-of-the-art results on the LIAR dataset, while Kaliyar, Goswami, and Narang's (2021) FakeBERT architecture achieved state-of-the-art results on the Kaggle Fake-News dataset.
+
+On adversarial attacks for fake news detection, previous literature has shown that fake news detection models can be fooled by carefully tweaked input. Ali et al. (2021) and Koenders et al. (2021) applied a series of text-based adversarial attacks, including TextBugger (Li et al. 2019), TextFooler (Jin et al. 2020), DeepWordBug (Gao et al. 2018), and the misspelling attack of Pruthi, Dhingra, and Lipton (2019). These are generic attacks for natural language models consisting of textual noise such as typos, character swaps, and synonym substitution. In addition to these standard attacks, Zhou et al. (2021) proposed three novel challenges for fake news detectors: (1) modifying details of a sentence involving time, location, etc., (2) swapping the subject and object of a sentence, and (3) adding causal relationships between events in a sentence or removing some of its parts.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+The attacks we mention above mainly simulate noise that might appear in online text. In contrast, the attacks we propose are specifically tailored to the problem of fake news detection, particularly in the context of politics. Our attacks are not designed to simulate naturally occurring noise, but rather to test whether deep-learning models understand text, learn real-world facts, and employ inferential reasoning.
+
+§ ADVERSARIAL ATTACKS
+
+We choose three attacks that would test a model's understanding of text and real-world facts. Our goal is to see whether the models tweak their outputs accordingly when the truthfulness of an input has been changed, or keep them unchanged otherwise.
+
+For each adversarial attack, we input the original and modified statements into the model. Then, we compute (1) the percentage of instances where the predicted label was different for the original and modified statement (%LabelFlip), and (2) the average change in the output probability that the statement is fake ($\Delta_{\text{Prob}}$), where a positive change means the attack increases the probability that the statement is fake.
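The two metrics above can be sketched in a few lines of Python; the function names and the example probabilities are illustrative, not the authors' code.

```python
# Sketch of the two attack metrics: %LabelFlip and Delta_Prob.
# Each list holds the model's P(fake) for the same statements,
# before and after an adversarial perturbation.

def label_flip_rate(orig_probs, adv_probs, threshold=0.5):
    """Percentage of instances whose predicted label changed."""
    flips = sum((p >= threshold) != (q >= threshold)
                for p, q in zip(orig_probs, adv_probs))
    return 100.0 * flips / len(orig_probs)

def delta_prob(orig_probs, adv_probs):
    """Average change in P(fake); positive means the attack pushes
    the model toward the Fake label."""
    return sum(q - p for p, q in zip(orig_probs, adv_probs)) / len(orig_probs)

orig = [0.2, 0.8, 0.6, 0.4]   # P(fake) on original statements
adv = [0.7, 0.9, 0.4, 0.45]   # P(fake) after an attack
print(label_flip_rate(orig, adv))  # 50.0 (two of four labels flipped)
print(delta_prob(orig, adv))
```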
+
+§ NEGATING SENTENCES
+
+In the first attack, we negate the sentences of each input text using a script due to Bajena (2017). The script heuristically attempts to identify sentences with a third-person singular subject, and changes linking verbs such as is, was, or should into is not, was not, and should not, and vice versa. While the script is not guaranteed to negate a sentence completely, we assume that it tweaks the semantics of the dataset enough to justify a conspicuous effect on the classification probabilities. We assume that an ideal fake news detector would assign opposite labels to a text and its negation.
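A toy re-implementation of this heuristic is sketched below; the actual attack uses Bajena's (2017) script, and this simplified stand-in only toggles a handful of linking verbs.

```python
import re

# Heuristic sentence negation: toggle a few linking/auxiliary verbs.
NEGATIONS = {
    "is": "is not", "was": "was not", "should": "should not",
    "is not": "is", "was not": "was", "should not": "should",
}

def negate(sentence: str) -> str:
    # Check longer phrases first so "is not" maps back to "is".
    for verb in sorted(NEGATIONS, key=len, reverse=True):
        pattern = r"\b" + verb + r"\b"
        if re.search(pattern, sentence):
            return re.sub(pattern, NEGATIONS[verb], sentence, count=1)
    return sentence  # no recognized verb: leave the sentence unchanged

print(negate("The senator is a lawyer."))  # The senator is not a lawyer.
print(negate("The bill was not passed."))  # The bill was passed.
```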
+
+§ REVERSING POLITICAL PARTY AFFILIATIONS
+
+In the second attack, we attempt to reverse the political party affiliations of named individuals appearing in the text. We identify names of American politicians in the text along with their party affiliations, and filter statements to those containing names from the Republican or Democratic Party. Then, we manually filter the remaining statements to only include factually true statements where replacing the original name with a random one would make the sentence untrue. In each of these texts, we replace names of Democrats with a randomly selected Republican, and vice versa.
+
+The statements in the adversarial dataset consist of quotes, facts, or events associated with particular individuals. We therefore expect that name replacement should cause the model to classify a modified statement as factually false.
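The replacement step can be sketched as follows; the politician lists are illustrative assumptions, and the manual filtering described above is omitted.

```python
import random

# Swap party affiliations: known Democrats become a random Republican,
# and vice versa.
DEMOCRATS = ["Barack Obama", "Hillary Clinton", "Joe Biden"]
REPUBLICANS = ["Donald Trump", "Marco Rubio", "Ted Cruz"]

def reverse_party(statement: str, rng: random.Random) -> str:
    # Decide all replacements against the *original* text first, so a
    # freshly inserted name is not swapped back by the second pass.
    replacements = {}
    for name in DEMOCRATS:
        if name in statement:
            replacements[name] = rng.choice(REPUBLICANS)
    for name in REPUBLICANS:
        if name in statement:
            replacements[name] = rng.choice(DEMOCRATS)
    for old, new in replacements.items():
        statement = statement.replace(old, new)
    return statement

print(reverse_party("Barack Obama signed the bill.", random.Random(0)))
```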
+
+§ REDUCING INTENSITY OF STATEMENTS
+
+In the third attack, we remove adverbs that increase sentences' intensity (e.g. absolutely, completely). We hypothesize that fake news is sometimes characterized by "clickbait" titles with highly charged words (Alonso et al. 2021).
+
+Removing polarizing words does not change the meaning of a sentence, thus the label should not change. For this attack, we input false statements into the model, and expect that the model should still classify them as false.
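A minimal sketch of the intensity-reduction attack, assuming an illustrative list of intensifying adverbs:

```python
# Remove high-intensity adverbs; the statement's truth value (and thus
# its gold label) should be unchanged.
INTENSIFIERS = {"absolutely", "completely", "totally", "utterly"}

def reduce_intensity(sentence: str) -> str:
    kept = [tok for tok in sentence.split()
            if tok.lower().strip(".,!?") not in INTENSIFIERS]
    return " ".join(kept)

print(reduce_intensity("The claim is completely false."))  # The claim is false.
```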
+
+§ EXPERIMENTAL SETUP
+
+We test our benchmark on three fine-tuned $\mathrm{BERT}_{\mathrm{BASE}}$ classifiers: two trained on the LIAR dataset and one trained on the Kaggle Fake-News dataset. For each benchmark, we apply our three transformations to the detector's test set, present the resulting texts to the appropriate models, and report the two metrics from the previous section, %LabelFlip and $\Delta_{\text{Prob}}$.${}^{1}$
+
+§ MODELS
+
+Below we describe our three models.
+
+LIAR Models LIAR (Wang 2017) is a six-class dataset that classifies statements made by politicians as True, Mostly True, Half True, Barely True, False, and Pants on Fire. We train two models on this dataset, which differ in the number of possible output labels the model can predict. First, to verify that our BERT model achieves a level of performance comparable with the results reported by Ding, Hu, and Chang (2020) for LIAR, we train a six-class BERT classifier on the original version of the dataset. Next, in order to facilitate compatibility with the adversarial attacks, we train a two-class model that collapses the True, Mostly True, and Half True labels into a single True class and the Barely True, False, and Pants on Fire labels into a single False class.
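The label-collapsing step amounts to a simple mapping; the raw label strings below are assumed to match the released LIAR files.

```python
# Collapse LIAR's six classes into a binary True/False scheme.
LABEL_MAP = {
    "true": "true", "mostly-true": "true", "half-true": "true",
    "barely-true": "false", "false": "false", "pants-fire": "false",
}

def collapse(label: str) -> str:
    return LABEL_MAP[label]

print(collapse("half-true"))   # true
print(collapse("pants-fire"))  # false
```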
+
+Kaggle Fake-News Model The Kaggle Fake-News dataset (UTK Machine Learning Club 2017) is a two-class dataset consisting of headlines and text from news articles published during the 2016 United States presidential election. Our third model is a two-class classifier fine-tuned on this dataset. Since the officially published version of the dataset only contains gold-standard labels for the training data, we use 70% of the training set for training and the remaining 30% for testing.
+
+§ FEATURE SALIENCY ANALYSIS
+
+In addition to reporting %LabelFlip and $\Delta_{\text{Prob}}$, we compute saliency maps for our Kaggle Fake-News model using the Gradient $\times$ Input method ($\mathrm{G} \times \mathrm{I}$; Shrikumar, Greenside, and Kundaje 2017; Shrikumar et al. 2017) to measure how individual words impact the model's classifications. $\mathrm{G} \times \mathrm{I}$ is a local explanation method that quantifies how much each input contributes to the output logits. In $\mathrm{G} \times \mathrm{I}$, the contribution of a feature is measured by the value of its corresponding term in a linear approximation of the target output unit.
+
+${}^{1}$ The code for our experiments is available at the following anonymized repository: https://anonymous.4open.science/r/fake-news-explainability-F77F.
+
+Dataset            SOTA   Our Model
+LIAR 2 Classes     -      57.5
+LIAR 6 Classes     27.3   29.4
+Kaggle Fake-News   98.9   98.8
+
+Table 1: Test set accuracy attained by our models, compared with previously reported state-of-the-art results.
+
+Dataset            %LabelFlip   $\Delta_{\text{Prob}}$
+LIAR 2 Classes     15.5         0.021
+Kaggle Fake-News   0.3          -0.0001
+
+Table 2: Impact of the negation attack on our models.
+
+We obtain token-level saliency scores by adding together the saliency scores assigned to the embedding dimensions for each token.
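As a sketch of how $\mathrm{G} \times \mathrm{I}$ yields token-level scores, consider a toy linear "logit" over token embeddings, for which the gradient is known in closed form (a real BERT model would require autograd); all numbers here are synthetic.

```python
import numpy as np

# Gradient x Input on a toy linear model: logit = sum_i w_i . e_i.
# The gradient of the logit w.r.t. embedding e_i is w_i, so the G x I
# saliency of token i is w_i * e_i, summed over embedding dimensions.
rng = np.random.default_rng(0)
n_tokens, dim = 4, 8
emb = rng.normal(size=(n_tokens, dim))    # one embedding row per token
w = rng.normal(size=(n_tokens, dim))      # per-position weights of the logit

grad = w                                   # exact gradient for a linear model
token_saliency = (grad * emb).sum(axis=1)  # sum G x I over embedding dims

# For a linear model the token scores exactly decompose the logit.
logit = float((w * emb).sum())
assert abs(token_saliency.sum() - logit) < 1e-9
print(token_saliency)
```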
+
+§ RESULTS
+
+Before discussing our results, we validate the quality of our models by comparing their performance with the current state of the art. These results are shown in Table 1. The six-class version of our LIAR model slightly outperforms the BERT-Based Mental Model of Ding, Hu, and Chang (2020), while our Kaggle Fake-News model achieves a comparable level of performance to Kaliyar, Goswami, and Narang's (2021) FakeBERT model. ${}^{2}$
+
+§ NEGATION ATTACK
+
+Table 2 shows the impact of the sentence negation adversarial attack on the outputs of our two-class models. The LIAR model proves to be much more vulnerable to this attack than the Kaggle Fake-News model, though the vast majority of predictions were unchanged for both models. We observe in particular that negation causes only a small increase in the probability scores assigned to the False class, despite the fact that the negation script targets the main auxiliary verb of the sentence, which typically has the effect of completely reversing the meaning of a sentence.
+
+§ PARTY REVERSAL ATTACK
+
+Table 3 shows the impact of the name replacement attack on the models. Again, we find that the LIAR model is more susceptible to this attack than the Kaggle Fake-News model. Although most labels are still unchanged, we find that this attack has a greater impact on our models than the negation attack. It is therefore likely that our models are more sensitive to lexical relationships between specific words appearing in a statement than to the syntactic relationships that govern negation.
+
+Dataset            %LabelFlip   $\Delta_{\text{Prob}}$
+LIAR 2 Classes     20.0         0.052
+Kaggle Fake-News   4.0          0.014
+
+Table 3: Impact of the political party reversal attack on our models.
+
+Dataset            %LabelFlip   $\Delta_{\text{Prob}}$
+LIAR 2 Classes     0.0          0.027
+Kaggle Fake-News   0.9          -0.008
+
+Table 4: Impact of the adverb intensity attack on our models.
+
+§ ADVERB INTENSITY ATTACK
+
+Table 4 shows the impact of the intensity-reduction attack on the models. As shown, this attack has almost no effect on the models' output. Since the expected behavior is for the output predictions to remain unchanged, our models can be deemed to be robust to this attack. This result suggests that adverb intensity is not a significant heuristic for fake news classification.
+
+§ SALIENCY ANALYSIS
+
+We use $\mathrm{G} \times \mathrm{I}$ heatmaps to identify keywords that may serve as signals for one class over the other. Due to its superior performance, we apply the saliency analysis to our Kaggle Fake-News model.
+
+Figure 1 shows that frequency affects the degree to which a word may be associated with true or false statements. Here, we find that words which appear in fewer documents are assigned more extreme saliency scores. Among the top 30 words with the most extreme $\mathrm{G} \times \mathrm{I}$ scores are names that appear once or twice in the dataset, such as Sanford, Jody, Marco, and Gore. In contrast, frequently-occurring names such as Trump, Hillary, and Obama have average $\mathrm{G} \times \mathrm{I}$ scores close to zero.
+
+Figure 2 visualizes the impact of high-intensity adverbs on our model. Observe that the adverbs totally and completely have small $\mathrm{G} \times \mathrm{I}$ scores in comparison to other words in the sentence. This reflects the resilience of our model against the adverb intensity attack.
+
+§ CONCLUSION
+
+In this study, we have created an adversarial benchmark for fake news detection that is designed to test models' ability to reason about real-world facts. We find that our BERT-based models are vulnerable to negation and party reversal attacks, whereas they are robust to the adverb intensity attack. For all three attacks, our model did not change its prediction in the vast majority of cases. It may be the case that the models are simply unresponsive to the perturbations we performed on the inputs.
+
+Deep learning has demonstrated an impressive level of competence in learning dependencies and relationships in natural language tasks. However, our findings suggest that current techniques are still not sufficient for tasks like fake news detection that require sophisticated forms of reasoning. As the state of the art in fake news detection continues to advance, our benchmark will serve as a valuable metric for the reasoning capabilities of future models.
+
+${}^{2}$ It is worth noting that Kaliyar, Goswami, and Narang (2021) did not perform a train-test split on the officially published training data for Kaggle Fake-News, but instead used the entire training set for both training and evaluation. Thus, the SOTA result in Table 1 is not directly comparable with our result, since the former may be inflated due to overfitting.
+
+
+Figure 1: On average, words that appear more frequently in the datasets are assigned saliency scores closer to 0.
+
+
+Figure 2: High-intensity adverbs have relatively small contributions to the output logits.
+
+These findings strengthen the need for fake news classification models to be used in conjunction with other fact-checking methods. Other work has made strides in this area by exploring features like comments on an article (Shu et al. 2019) or article interaction metrics (likes, shares, retweets) that may signify an article is being maliciously spread (Prakash and Tucker 2021; Tschiatschek et al. 2018), or the possibility of incorporating crowdsourced knowledge or human fact checkers into the process altogether (Demartini, Mizzaro, and Spina 2020; Pennycook and Rand 2019).
+
+We also observed that the model trained on LIAR was more sensitive (i.e. more labels were flipped) than the model trained on the Fake-News dataset. Upon reading the data, we observed that statements in LIAR were generally less polar and more focused on facts, whereas the Fake-News dataset appeared to be a mixed bag of headlines with more polarizing words. This suggests that data quality greatly impacts models' ability to learn facts and understand text.
+
+Limitations of this work are that (1) the models were trained on only two datasets, and the results may not generalize to statements unrelated to general US politics, (2) computational limitations allowed us to explore only shallow neural network architectures, and (3) the adversarial attacks we tried were relatively simple, and a real human may be able to negate or change the intensity of a sentence in more complex ways. Future work could employ more datasets as training corpora, explore deeper model architectures, and use more complex adversarial attacks for a more robust evaluation of these fake news models.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/o_O7TOBC7jl/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/o_O7TOBC7jl/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..5294d80532564669defe5cebc1fb75a9dcb0983e
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/o_O7TOBC7jl/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,171 @@
+# Mitigation of Adversarial Policy Imitation via Constrained Randomization of Policy (CRoP)
+
+## Abstract
+
+Deep Reinforcement Learning (DRL) policies are vulnerable to unauthorized replication attacks, where an adversary exploits imitation learning to reproduce target policies from observed behavior. In this paper, we propose Constrained Randomization of Policy (CRoP) as a mitigation technique against such attacks. CRoP induces the execution of sub-optimal actions at random under performance loss constraints. We present a parametric analysis of CRoP, address the optimality of CRoP, and establish theoretical bounds on the adversarial budget and the expectation of loss. Furthermore, we report the experimental evaluation of CRoP in Atari environments under adversarial imitation, which demonstrates the efficacy and feasibility of our proposed method against policy replication attacks.
+
+## Introduction
+
+Deep Reinforcement Learning (DRL) is a learning framework for sequential decision-making leveraging neural networks for generalization and function approximation. With the growing interest in DRL and its integration in commercial and critical systems, the security of such algorithms has become of paramount importance (Behzadan and Munir 2018).
+
+In tandem with DRL, similar advancements have been made in Imitation Learning (IL) techniques that utilize expert demonstrations to learn and replicate the expert's behavior in sequential decision making tasks. Deep Q-Learning from Demonstration (DQfD) (Hester et al. 2017) is an IL variant that has enabled DRL agents to converge more quickly to an optimal policy. However, recent work by Behzadan and Hsu (2019a) and Chen et al. (2020) demonstrates that IL can also be exploited by adversaries to replicate other agents' policies from passive observation of their behavior. This gives rise to risks concerning intellectual property and adversarial information gain for more effective active attacks. The current state of the art in countering such attacks includes watermarking (Behzadan and Hsu 2019b; Chen et al. 2021), which enables the post-attack identification of replicated policies.
+
+In this paper, we propose an active mitigation technique against policy imitation attacks, named Constrained Randomization of Policy (CRoP). The proposed technique is based on intermittent randomization of a trained policy, constrained by a threshold on the maximum acceptable loss in the expected return. The goal is to increase the adversary's imitation training cost, measured as the minimum number of training iterations and observed demonstrations required for training a replica that matches the target policy's performance.
+
+The main contributions of this paper are: (1) we propose and formulate CRoP as a mitigation technique against adversarial policy imitation; (2) we present a formal analysis of the bounds on the expected loss of optimality under CRoP; (3) we formally establish bounds on the adversary's imitation cost induced by CRoP; and (4) we report the results of an empirical evaluation of adversarial imitation via DQfD against CRoP agents in classical DRL benchmarks, and demonstrate the efficacy and feasibility of CRoP in those settings.
+
+The remainder of this paper is organized as follows: we first introduce Constrained Randomization of Policy (CRoP), analyze the optimality of a CRoP policy relative to an optimal policy, describe CRoP's impact on minimizing divergence objectives, and derive the minimal adversarial budget induced by CRoP along with an analysis of the expected loss. The following section reports the experimental evaluation of CRoP in three Atari benchmark environments, along with measurements of the training and test-time performance of DQfD-based adversarial imitation learning agents targeting CRoP-enabled policies. We conclude the paper with a summary of findings and remarks on future directions of research.
+
+## Constrained Randomization of Policy
+
+In the remainder of this paper, we assume the target policy aims to solve a Markov Decision Process (MDP) denoted by the tuple $\langle S, A, R, T, \gamma \rangle$, where $S$ is a finite state space, $A$ is a finite action space, $T$ defines the environment's transition probabilities, $\gamma \in [0, 1)$ is a discount factor, and $R : S \times A \rightarrow [0, 1]$ is a reward function. The solution to this MDP is a policy $\pi : S \rightarrow A$ that maps states to actions. An agent implementing a policy $\pi$ can measure the value of a state as $V\left( s\right) = \mathop{\max }\limits_{a}\left( {{r}_{s, a} + {\gamma V}\left( {s}^{\prime }\right) }\right)$, where ${s}^{\prime }$ is the next state. Similarly, the value of a state-action pair is given by $Q\left( {s, a}\right) = {r}_{s, a} + \gamma \mathop{\max }\limits_{{a}^{\prime }}Q\left( {{s}^{\prime },{a}^{\prime }}\right)$, where ${s}^{\prime }$ is the next state and ${a}^{\prime }$ is the next action.
+
+Constrained Randomization of Policy (CRoP) is a strategy of diverting actions away from an optimal policy under a constraint on the performance deviation from optimal. Let $\widehat{a} \in \widehat{A}$, where the candidate actions $\widehat{a}$ satisfy $0 < Q\left( {s,\pi \left( s\right) }\right) - Q\left( {s,\widehat{a}}\right) < \rho$. In other words, $\widehat{A}$ is the space of all candidate actions for $s \in S$, excluding the optimal action $\pi \left( s\right)$. We define CRoP as the function below:
+
+$$
+f\left( s\right) = \left\{ \begin{array}{ll} \pi \left( s\right) & \Pr \left( \delta \right) \text{, or if } \nexists\, \widehat{a} \in \widehat{A} \\ \widehat{a} \sim U\left( \widehat{A}\right) & \Pr \left( {1 - \delta }\right) \end{array}\right. \tag{1}
+$$
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+Where $U\left( \widehat{A}\right)$ is the uniform distribution over $\widehat{A}$. This threshold $\rho$ is defined over a difference of Q-values. We have three variations of $\rho$ for CRoP: the Q-value difference (Q-diff) as described in Equation 1, and two measures inspired by the advantage function: the advantage-inspired difference (A-diff) and the positive advantage-inspired difference (${\mathrm{A}}^{ + }$-diff). A-diff CRoP is thus defined as:
+
+$$
+\widetilde{A}\left( {{s}_{t},{a}_{t}}\right) = Q\left( {{s}_{t},{a}_{t}}\right) - V\left( {s}_{t - 1}\right) > - \rho \tag{2}
+$$
+
+${\mathrm{A}}^{ + }$-diff’s $\rho$ adds the condition $\widetilde{A}\left( {{s}_{t},{a}_{t}}\right) \geq 0$. A-diff and ${\mathrm{A}}^{ + }$-diff’s $\rho$ are interpreted as a 1-step hindsight estimate, which is relevant to the trajectory taken rather than only a pure future estimate as with Q-diff, e.g., "played badly, now play safe" vs. "plan to feint ahead". However, the selection of $\rho$ should account for estimation error due to either finite training or function approximation. One can look to the analysis of learning complexity as a method of finding error bounds from which to derive a safety margin for $\rho$. We choose these three threshold variations because their performance varies across environments, implying that the ability to successfully deviate from the optimal policy is conditioned on the environment dynamics as well as the defender's tolerance for loss, which may not be captured by a single threshold such as Q-diff's. We cannot use the traditional advantage, since $V\left( s\right) = \mathop{\max }\limits_{a}Q\left( {s, a}\right)$ implies its implementation would have an impact similar to Q-diff's with regard to the threshold. Additionally, it is important to recognize that CRoP is similar to an $\epsilon$-greedy policy; however, the difference lies in the constraint on expected loss, which $\epsilon$-greedy does not guarantee.
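
The Q-diff variant of Equation 1 can be sketched as follows. This is a minimal illustration: the Q-values, $\rho$, and $\delta$ below are placeholders, and the optimal action falls back whenever no candidate action exists within the threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

def crop_action(q_values, rho, delta, rng=rng):
    """CRoP action selection (Q-diff variant, sketch of Equation 1).

    q_values: 1-D array of Q(s, a) for the current state.
    rho:      maximum tolerated Q-value gap for a candidate action.
    delta:    probability of playing the optimal action pi(s).
    """
    a_star = int(np.argmax(q_values))
    q_star = q_values[a_star]
    # Candidate set A_hat: sub-optimal actions within rho of the optimum.
    candidates = [a for a, q in enumerate(q_values)
                  if 0 < q_star - q < rho]
    # Play pi(s) with probability delta, or when no candidate exists.
    if not candidates or rng.random() < delta:
        return a_star
    return int(rng.choice(candidates))  # uniform over A_hat
```

For example, with `q_values = [1.0, 0.95, 0.2]` and `rho = 0.1`, only the second action (gap 0.05) is a candidate; with `rho = 0.01` the candidate set is empty and the optimal action is always played.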
+
+
+
+Figure 1: Visualization that ${\pi }^{\prime }$ is an $\left( {\epsilon + {\epsilon }^{\prime }}\right)$ -optimal to ${Q}^{ * }/{V}^{ * }$
+
+By definition, a policy $\pi$ is $\epsilon$-optimal if there exists a non-negative constant $\epsilon$ such that ${v}^{\pi }\left( x\right) \geq V\left( x\right) - \epsilon$ for all initial states $x$ in $S$. Some definitions add that this holds at probability $P\left( {1 - \delta }\right)$, a convention we abide by. In other words, $\epsilon$-optimal policies are within an $\epsilon$ neighborhood of ${V}^{ * }$, specifically ${V}^{ * } - {V}^{\pi } < \epsilon$ for all $a \in A$ and $s \in S$ at probability $\left( {1 - \delta }\right)$. As illustrated in Figure 1, ${\pi }^{ * }$ is the optimal and greedy policy extracted from ${V}^{ * }$, $\pi$ is the policy extracted from ${V}^{\pi }$, and ${\pi }^{\prime }$ is the policy extracted from ${V}^{{\pi }^{\prime }}$; we see that ${\pi }^{\prime }$ may be expressed as $\left( {\epsilon + {\epsilon }^{\prime }}\right)$-optimal to ${Q}^{ * }/{V}^{ * }$ when evaluated for the initial states under the assumption that a greedy policy is followed thereafter. Since we do not assume $\pi$ to be an optimal policy, it is possible for ${\pi }^{\prime }$ to be more optimal than $\pi$. However, it is noteworthy that an evaluation of optimality based on the difference to the value function does not imply that extracted policies with small error to ${V}^{ * }$ resemble the optimal policy when assessed on behavioral differences. Theorem 1 establishes that the CRoP policy $f$ is at worst $\left( {\epsilon + {\epsilon }^{\prime }}\right)$-optimal to ${Q}^{ * }$ at probability $\left( {1 - \delta }\right)$ as an evaluation of the initial states, assuming a greedy policy is followed thereafter. However, an evaluation that commits to following CRoP thereafter (or any evaluation of a trajectory under CRoP) will have compounding sub-optimality. Therefore, for any fixed-length horizon $T$, we know that ${\pi }^{\prime }$ will be $\left( {T \times \epsilon + {\epsilon }^{\prime }}\right)$-optimal to ${V}^{ * }$. This is useful for finite horizons but can be problematic with infinite horizons. However, since we allow the defender to modify $\rho$, one can simply cause the deviating behavior to cease and thereby bound the compounding error.
+
+Theorem 1 Given ${Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) < {\epsilon }^{\prime }$ at probability $\left( {1 - \delta }\right)$ and $\left| {{Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) }\right| \leq \epsilon$ for all $s \in S$ and $a \in A$, then ${Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) \leq \epsilon + {\epsilon }^{\prime }$ at probability $\left( {1 - \delta }\right)$; that is, ${\pi }^{\prime }$ is $\left( {\epsilon + {\epsilon }^{\prime }}\right)$-optimal to ${Q}^{ * }/{V}^{ * }$ at probability $\left( {1 - \delta }\right)$.
+
+Proof. Given ${Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) < {\epsilon }^{\prime }$ at probability $\left( {1 - \delta }\right)$ and $\left| {{Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) }\right| \leq \epsilon$ for all $s \in S$ and $a \in A$ , then ${Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) \leq \epsilon + {\epsilon }^{\prime }$ at probability $\left( {1 - \delta }\right)$ .
+
+Let ${Q}_{\text{diff}} = {Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{f}\left( {{s}_{t},{a}_{t}}\right) + \left| {{Q}^{f}\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) }\right|$. Given that $Q\left( {s, a}\right) \in \left( {0,\frac{1}{1 - \gamma }}\right)$, at probability $\left( {1 - \delta }\right)$:
+
+$$
+{Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) \leq {Q}_{diff} \leq \epsilon + {\epsilon }^{\prime } \tag{3}
+$$
+
+IL has two common approaches: Behavioral Cloning (BC), which is supervised learning, and inverse RL, which finds a reward function to match the demonstrations. Work by (Ke et al. 2020) shows that BC minimizes the KL divergence, Generative Adversarial Imitation Learning (GAIL) (Ho and Ermon 2016) minimizes the Jensen-Shannon divergence, and DAgger (Ross, Gordon, and Bagnell 2011) minimizes total variation. For BC, CRoP affects the maximum likelihood in a manner similar to data poisoning attacks such as label flipping (Xiao, Xiao, and Eckert 2012) or class imbalance. With regard to GAIL, the discriminator from a GAN prioritizes expert experiences, so unless it is modified to decay when out-performed, an additional penalty is given to the training policy. Furthermore, when CRoP lowers the action distribution for ${a}^{ * }$ according to the $\delta$ probability and increases the distribution for candidate actions, it results in a smaller maximal difference for DAgger.
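
The effect on BC's KL objective can be illustrated numerically: CRoP replaces the target's deterministic action choice with a $\delta$-mixture, so even a perfect behavioral clone of the observed demonstrations remains a bounded KL distance from the true optimal policy. The distribution and $\delta$ below are hypothetical illustrations, not the paper's measurements:

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """KL divergence between two discrete distributions, with smoothing."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    return float(np.sum(p * np.log(p / q)))

delta = 0.7
n_candidates = 2
# Optimal policy: deterministic on action 0 (3 actions total).
pi_star = np.array([1.0, 0.0, 0.0])
# CRoP-induced observed distribution: delta on a*, and (1 - delta)
# spread uniformly over the candidate actions.
pi_crop = np.array([delta] + [(1 - delta) / n_candidates] * n_candidates)

# A perfect clone of the demonstrations matches pi_crop, so its
# divergence from the true optimal policy is bounded away from zero.
print(round(kl(pi_star, pi_crop), 4))  # 0.3567, i.e. -ln(delta)
```

In this sketch the divergence floor is exactly $-\ln \delta$, so smaller $\delta$ (more randomization) pushes any clone of the observed behavior further from the optimal policy.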
+
+## Budget Analysis for Perfect Information Adversary
+
+We measure the adversary's budget in the quantity of samples or trajectories that it can acquire through a passive attack. Nair and Doshi-Velez (Nair and Doshi-Velez 2020) derive upper and lower bounds on the sample complexity of direct policy learning and model-based imitation learning in relaxed problem spaces. This follows the research on RL sample efficiency and Offline RL (Levine et al. 2020). However, in this work we divert from a direct treatment of sample efficiency to consider information optimality from observed target demonstrations without environment interaction. Consider the set $\mathcal{T}$ of trajectories ${\tau }_{i}$, each composed of a $T$-length chain of $(s, a)$-pairs. Assume each $(s, a)$-pair has two possible outcomes: optimal at $P\left( \delta \right)$ or sub-optimal at $P\left( {1 - \delta }\right)$. Assuming pair and trajectory uniqueness, $\mathcal{T}$ would contain ${2}^{T}$ trajectories, where $T$ is the length of the horizon. To obtain the optimal target $\pi$, we would require all trajectories except the event of a completely sub-optimal trajectory, which occurs with probability ${\left( 1 - \delta \right) }^{T}$. Let an adversary pull from $\mathcal{T}$. Group the desired ${2}^{T} - 1$ trajectories in set $\alpha$ and the worst-event trajectory in set $\beta$. As the adversary samples from $\mathcal{T}$, if they obtain an unseen desired trajectory $\tau$, it is from $\alpha$ and is moved to their adversarial set $\widehat{\mathcal{T}}$. $\tau$ is then replaced in $\mathcal{T}$ but is no longer unseen, so if it is encountered again, it is treated as if from $\beta$. Let ${\tau }_{w}$ be the worst-case trajectory and $\widehat{m}$ be the total number of pulls from $\mathcal{T}$, summed over the sequential acquisition of each unseen trajectory. It follows that:
+
+$$
+\mathbb{E}\left\lbrack \widehat{m}\right\rbrack = \mathop{\sum }\limits_{{n = 1}}^{{{2}^{T} - 1}}\mathbb{E}\left\lbrack {m}_{n}\right\rbrack = \mathop{\sum }\limits_{1}^{{{2}^{T} - 1}}1/\left( {1 - P\left( {\tau }_{w}\right) + \mathop{\sum }\limits_{{{\tau }_{i} \in \widehat{\mathcal{T}}}} - P\left( {\tau }_{i}\right) }\right) \tag{4}
+$$
+
+Intuitively, the denominator is the probability of pulling an unseen trajectory given the trajectories already in $\widehat{\mathcal{T}}$ and the known probability of each ${\tau }_{i} \in \widehat{\mathcal{T}}$. This gives an expectation of how expensive it is to obtain informative trajectories from $\pi$. However, an adversary typically has a fixed budget, and therefore we would want to know what to expect given their budget $\mathbb{B}$; here we calculate for a budget measured in optimal state-action pairs. To calculate the expected number of optimal state-action pairs, we find a $t < T$ such that:
+
+$$
+\mathbb{B} \approx \mathop{\sum }\limits_{{i = 1}}^{t}\mathbb{E}\left\lbrack {m}_{i}\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{t}\frac{1}{\delta } \tag{5}
+$$
+
+given that the adversary can reset to the previous state and resample until an optimal state-action pair is obtained. This gives an expectation for the adversary to obtain $t$ optimal state-action pairs with budget $\mathbb{B}$. This can be extended to the expected number of trajectories by approximating $\mathbb{B}$ as in Equation 5, where we find a $t < T$, but using Equation 4.
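
The geometric-resampling expectation behind Equation 5 can be checked with a short simulation: the number of observed pairs needed for one optimal pair has mean $1/\delta$, so collecting $t$ optimal pairs costs about $t/\delta$ on average. This is a sketch under the independence assumption above; the $\delta$ and $t$ values are illustrative:

```python
import random

random.seed(0)

def budget_for_t_pairs(t, delta, trials=20000):
    """Average budget (observed pairs) to collect t optimal (s, a)-pairs,
    resetting and resampling after each sub-optimal draw (Equation 5)."""
    total = 0
    for _ in range(trials):
        for _ in range(t):       # t optimal pairs are needed
            while True:          # geometric resampling, mean 1/delta
                total += 1
                if random.random() < delta:
                    break
    return total / trials

print(budget_for_t_pairs(t=4, delta=0.25))  # close to t / delta = 16
```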
+
+We can consider re-visitation in expectation. Let $k = \mathbb{E}\left\lbrack n\right\rbrack$, where $n$ is the number of state-action pairs without re-visitation in a trajectory of maximum length $T$. Consider using $k$ as the new horizon, rounding $k$ up to the nearest integer. We would expect the number of trajectories needed to obtain $\pi$ to decrease because of the shorter horizon. Using the Markov property for the trajectory probability, and Markov's inequality for a non-negative, bounded random variable $\widehat{X}$, where $N$ is the number of optimal pairs in a trajectory, for any $t > 0$:
+
+$$
+P\left( {\tau }_{i}\right) = {\delta }^{N}{\left( 1 - \delta \right) }^{k - N}\;\;P\left( {\widehat{X} \geq t}\right) \leq \mathbb{E}\left\lbrack \widehat{X}\right\rbrack /t
+$$
+
+As before, let $\mathcal{T}$ be the set of all trajectories ${\tau }_{i}$ with maximum length $T$, let $\widehat{\mathcal{T}}$ be randomly sampled from $\mathcal{T}$, and let $\widehat{\tau }$ be the fragmented trajectory of all unique $\left( {{s}_{i},{a}_{i}}\right) \in \tau$. Assume for the instance below that $\left| \cdot \right|$ refers to cardinality and $k$ still refers to $\mathbb{E}\left\lbrack n\right\rbrack$; then the Markov inequality and reverse Markov inequality give, for $0 < t < k$ with $T$ as the maximum trajectory length:
+
+$$
+P\left( {\left| {\widehat{\tau }}_{i}\right| < t}\right) \geq 1 - k/t\;P\left( {\left| {\widehat{\tau }}_{i}\right| \leq t}\right) \leq \left( {T - k}\right) /\left( {T - t}\right) \tag{6}
+$$
+
+For interpretation, we can say we have an expectation on the number of trajectories $\mathbb{E}\left\lbrack \widehat{m}\right\rbrack$ holding with probability between $\left( {1 - k/t}\right)$ and $\left( {T - k}\right) /\left( {T - t}\right)$ for a fixed $t$ with $0 < t < k$, which is a weak bound given the lack of information on variance.
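
As a sanity check, both inequalities in Equation 6 can be verified against a simulated distribution of unique-pair counts. The values below are purely illustrative and do not model any particular environment; both bounds hold, though loosely, echoing the remark above that the bound is weak:

```python
import random

random.seed(1)

T, trials = 20, 50000
# Simulate |tau_hat|: number of unique pairs per trajectory, bounded by T.
samples = [min(T, random.randint(5, 19)) for _ in range(trials)]
k = sum(samples) / trials  # empirical E[n]

t = 8  # any fixed t with 0 < t < k

frac_lt = sum(s < t for s in samples) / trials
frac_le = sum(s <= t for s in samples) / trials

# Markov:         P(|tau_hat| < t)  >= 1 - k/t
# reverse Markov: P(|tau_hat| <= t) <= (T - k)/(T - t)
assert frac_lt >= 1 - k / t
assert frac_le <= (T - k) / (T - t)
```

The reverse Markov inequality is Markov's inequality applied to the bounded complement $T - |\widehat{\tau}_i|$, which is why the upper bound depends on the maximum trajectory length $T$.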
+
+## Policy Evaluation and Expectation of Loss
+
+We see that the Q-value under $f$ will be either equal to or less than the Q-value under the target policy $\pi$, which dictates the selected action ${a}^{\prime }$. Furthermore, the expected return ${G}_{t}^{f}$ for the stochastic policy $f$ with uniform sampling from $\widehat{A}$ is expressed as the following:
+
+$$
+{G}_{t}^{f} = \delta \mathop{\sum }\limits_{{t = 0}}^{N}{\gamma }^{t}\left\lbrack {r}_{{s}_{t},{a}_{t}^{ * }}\right\rbrack + \frac{1 - \delta }{\left| \widehat{A}\right| }\mathop{\sum }\limits_{{t = 0}}^{N}{\gamma }^{t}\left\lbrack {\mathop{\sum }\limits_{{\widehat{a}}_{t}}{r}_{{s}_{t},{\widehat{a}}_{t}}}\right\rbrack \tag{7}
+$$
+
+With Equation 7, ${G}_{t}^{f}$ is the weighted sum of an optimal expected return at probability $\delta$ and the expected return across all rewards given by candidate actions at probability $\left( {1 - \delta }\right)$ . Given ${G}_{t}^{ * }$ and ${G}_{t}^{f}$ , the difference between the expected return in $Q$ -value form is exactly:
+
+$$
+{G}_{t}^{ * } - {G}_{t}^{f} = \left( {1 - \delta }\right) \left\lbrack {{Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) - \mathbb{E}\left\lbrack {{Q}^{f}\left( {{s}_{t},{\widehat{a}}_{t}}\right) }\right\rbrack }\right\rbrack \tag{8}
+$$
+
+Since ${Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) - \mathbb{E}\left\lbrack {{Q}^{f}\left( {{s}_{t},{\widehat{a}}_{t}}\right) }\right\rbrack < \rho$, the expected loss ${G}_{t}^{ * } - {G}_{t}^{f} \leq \left( {1 - \delta }\right) \rho \leq \rho$. This expectation of loss is calculated from the current state's forward estimation of future reward. We see there exists an upper bound, call it $\mathbb{E}\left\lbrack L\right\rbrack$:
+
+$$
+\mathop{\sum }\limits_{{t = 0}}^{N}\left| {{Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) - \mathbb{E}\left\lbrack {{Q}^{f}\left( {{s}_{t},{\widehat{a}}_{t}}\right) }\right\rbrack }\right| \leq N \times \left( {1 - \delta }\right) \rho \leq N \times \rho = \mathbb{E}\left\lbrack L\right\rbrack \tag{9}
+$$
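
The per-step identity of Equation 8 and the accumulated bound of Equation 9 can be checked numerically for Q-values generated to satisfy the $\rho$ constraint (the $\delta$, $\rho$, and $N$ values below are illustrative only):

```python
import numpy as np

rng = np.random.default_rng(2)
delta, rho, N = 0.6, 0.3, 50

q_pi = rng.uniform(1.0, 2.0, size=N)  # Q^pi(s_t, a_t) per step
# E[Q^f] over candidate actions: within rho below Q^pi by construction.
q_f_mean = q_pi - rng.uniform(0.0, rho, size=N)

# Equation 8: per-step gap between optimal and CRoP expected return.
per_step_gap = (1 - delta) * (q_pi - q_f_mean)

# Equation 9: the accumulated gap is bounded by N*(1 - delta)*rho <= N*rho.
total_gap = per_step_gap.sum()
assert total_gap <= N * (1 - delta) * rho <= N * rho
```

The bound is linear in both the horizon $N$ and the threshold $\rho$, which is why shrinking $\rho$ (or raising $\delta$) lets the defender cap the accumulated expected loss.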
+
+## Experimental Evaluation
+
+We investigate DQfD as our adversarial IL method and evaluate test-time and training-time performance across three benchmark environments: Breakout, CartPole, and Space Invaders. We train DQfD agents under default parameters (supplied in the supplements) with CRoP-induced demonstrations, a control DQfD agent (our baseline IL comparison), and a default double DQN (DDQN) agent, which provided the expert demonstrations and also serves as a baseline performance comparison for the IL agents. The results of a parameter search on trained DDQN policies from RL Baselines Zoo (Raffin 2018) are in Figure 3. The test evaluations of the target policy under selected thresholds of CRoP follow in Figure 2, showing that the selected $\rho$ and $\delta$ values are reflected in the evaluation of average reward performance. As expected, higher $\delta$ allows for higher values of $\rho$. The trade-off between $\delta$ and $\rho$ is similar to an allowance of high or low variance in Q-value. One can liken the defender’s selection of $\delta$ and $\rho$ to risk-averse, risk-neutral, and risk-seeking behaviors determined by the defender. The results, illustrated in Figure 4, demonstrate that the performance of imitated policies generally remains below that of their control/baseline DQfD agents over the earlier spans of training episodes. We compare against the baseline DQfD because it demonstrates the performance of a DQfD agent when no mitigation is applied against adversarial eavesdropping on the state-actions performed by the target agent. CRoP may induce variance similar to optimistic initialization, for example, the work of (Kamiura and Sano 2017) and (Szita and Lörincz 2009). However, we argue that by adding deviating behavior to the target policy, which we assume to be optimal (unless this assumption is false), the target policy generally withholds the maximal information gain an adversary can observe, thus increasing their adversarial budget. Figure 5 depicts the comparison of test-time performance among agents trained with various values of $\delta$ and $\rho$. We emphasize that the constraints in CRoP bound expected loss, not true performance loss. Table 1 lists the test-time evaluation timestep counts and the counts of timesteps with successful action diversion, showing that it is possible to deviate more often in one environment than in another due to the nature of the environment. For example, we saw that, visually, CartPole could deviate early; however, it had to make up for this by acting optimally to avoid bad states from which it could not transition out. This further supports the need for several threshold variations, because many of the environments exhibited different behaviors when faced with risk.
+
+
+
+Figure 2: Test-time evaluation of target agent under various CRoP thresholds across 10 episodes
+
+## Conclusion
+
+This study investigated the threat emanating from passive policy replication attacks through adversarial usage of Imitation Learning. We proposed Constrained Randomization of Policy (CRoP), a deviation from the optimal policy under a threshold constraint, as a mitigation technique against such attacks. We performed a parameter search and empirically evaluated the target policy under CRoP in comparison to the target policy without protection. We analyzed CRoP's performance with regard to $\epsilon$-optimality, its estimated impact on adversarial cost, and the expectation of loss. Furthermore, we empirically evaluated CRoP across three benchmark environments and verified its efficacy and efficiency against DQfD-based policy replication attacks, demonstrating that it is possible for the target policy to accomplish its task while deviating in a bounded manner so as to increase the adversarial cost of successful policy replication.
+
+
+
+Figure 3: Parameter search performance over 5000 timesteps
+
+**Q-value difference $\rho$**
+
+| env | $\delta$ | $\rho$ | succ. | $\delta \times \mathrm{T}$ | T |
+| --- | --- | --- | --- | --- | --- |
+| Breakout-v4 | 0.0 | 0.1 | 7812 | 8450 | 8450 |
+| Breakout-v4 | 0.5 | 0.02 | 12056 | 25761 | 51686 |
+| Cartpole-v0 | 0.7 | 0.01 | 1345 | 1979 | 2000 |
+| Cartpole-v0 | 0.7 | 0.01 | 1345 | 1979 | 2000 |
+| SpaceInvaders-v4 | 0.0 | 0.1 | 18963 | 18968 | 26038 |
+| SpaceInvaders-v4 | 0.6 | 0.02 | 10281 | 10358 | 26038 |
+
+**Positive advantage-inspired $\rho$**
+
+| env | $\delta$ | succ. | $\delta \times \mathrm{T}$ | T |
+| --- | --- | --- | --- | --- |
+| Breakout-v4 | 0.0 | 9857 | 15412 | 14512 |
+| Breakout-v4 | 0.4 | 12402 | 33658 | 56336 |
+| Cartpole-v0 | 0.0 | 505 | 2000 | 2000 |
+| Cartpole-v0 | 0.1 | 430 | 1746 | 1938 |
+| SpaceInvaders-v4 | 0.0 | 10111 | 21190 | 21190 |
+
+**Advantage-inspired $\rho$**
+
+| env | $\delta$ | $\rho$ | succ. | $\delta \times \mathrm{T}$ | T |
+| --- | --- | --- | --- | --- | --- |
+| Breakout-v4 | 0.0 | 0.1 | 3238 | 3464 | 3464 |
+| Breakout-v4 | 0.0 | 0.1 | 3238 | 3464 | 3464 |
+| Cartpole-v0 | 0.0 | 0.02 | 279 | 2000 | 2000 |
+| Cartpole-v0 | 0.0 | 0.1 | 946 | 2000 | 2000 |
+| SpaceInvaders-v4 | 0.0 | 0.1 | 21706 | 21706 | 21706 |
+| SpaceInvaders-v4 | 0.7 | 0.15 | 7117 | 7117 | 23730 |
+
+Table 1: Test-time evaluation timestep count over 10 episodes
+
+
+
+Figure 4: Imitating DQfD agents training on CRoP-induced demonstrations
+
+
+
+Figure 5: Test-time evaluation of replicated policies and the target DDQN agent across 10 episodes
+
+## References
+
+Behzadan, V.; and Hsu, W. 2019a. Adversarial Exploitation of Policy Imitation. arXiv:1906.01121.
+
+Behzadan, V.; and Hsu, W. 2019b. Sequential triggers for watermarking of deep reinforcement learning policies. arXiv preprint arXiv:1906.01126.
+
+Behzadan, V.; and Munir, A. 2018. The faults in our pi stars: Security issues and open challenges in deep reinforcement learning. arXiv preprint arXiv:1810.10369.
+
+Chen, K.; Guo, S.; Zhang, T.; Li, S.; and Liu, Y. 2021. Temporal Watermarks for Deep Reinforcement Learning Models. In Proceedings of the 20th International Conference on Autonomous Agents and MultiAgent Systems, 314-322.
+
+Chen, K.; Guo, S.; Zhang, T.; Xie, X.; and Liu, Y. 2020. Stealing Deep Reinforcement Learning Models for Fun and Profit. arXiv:2006.05032.
+
+Hester, T.; Vecerik, M.; Pietquin, O.; Lanctot, M.; Schaul, T.; Piot, B.; Horgan, D.; Quan, J.; Sendonaris, A.; Dulac-Arnold, G.; Osband, I.; Agapiou, J.; Leibo, J. Z.; and Gruslys, A. 2017. Deep Q-learning from Demonstrations. arXiv:1704.03732.
+
+Ho, J.; and Ermon, S. 2016. Generative Adversarial Imitation Learning. arXiv:1606.03476.
+
+Kamiura, M.; and Sano, K. 2017. Optimism in the Face of Uncertainty Supported by a Statistically-Designed Multi-Armed Bandit Algorithm. Biosystems, 160.
+
+Ke, L.; Choudhury, S.; Barnes, M.; Sun, W.; Lee, G.; and Srinivasa, S. 2020. Imitation Learning as $f$ -Divergence Minimization. arXiv:1905.12888.
+
+Levine, S.; Kumar, A.; Tucker, G.; and Fu, J. 2020. Offline Reinforcement Learning: Tutorial, Review, and Perspectives on Open Problems. arXiv:2005.01643.
+
+Nair, Y.; and Doshi-Velez, F. 2020. PAC Bounds for Imitation and Model-based Batch Learning of Contextual Markov Decision Processes. arXiv:2006.06352.
+
+Raffin, A. 2018. RL Baselines Zoo. https://github.com/araffin/rl-baselines-zoo.
+
+Ross, S.; Gordon, G. J.; and Bagnell, J. A. 2011. A Reduction of Imitation Learning and Structured Prediction to No-Regret Online Learning. arXiv:1011.0686.
+
+Szita, I.; and Lörincz, A. 2009. Optimistic initialization and greediness lead to polynomial time learning in factored MDPs. In Proceedings of the 26th International Conference On Machine Learning, ICML 2009, volume 382, 126.
+
+Xiao, H.; Xiao, H.; and Eckert, C. 2012. Adversarial Label Flips Attack on Support Vector Machines. In ECAI.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/o_O7TOBC7jl/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/o_O7TOBC7jl/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9680dd116b0f79bdee6be1b999df46d17a65e8cd
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/o_O7TOBC7jl/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,184 @@
+§ MITIGATION OF ADVERSARIAL POLICY IMITATION VIA CONSTRAINED RANDOMIZATION OF POLICY (CROP)
+
+§ ABSTRACT
+
+Deep Reinforcement Learning (DRL) policies are vulnerable to unauthorized replication attacks, where an adversary exploits imitation learning to reproduce target policies from observed behavior. In this paper, we propose Constrained Randomization of Policy (CRoP) as a mitigation technique against such attacks. CRoP induces the execution of sub-optimal actions at random under performance loss constraints. We present a parametric analysis of CRoP, address the optimality of CRoP, and establish theoretical bounds on the adversarial budget and the expectation of loss. Furthermore, we report the experimental evaluation of CRoP in benchmark environments under adversarial imitation, which demonstrates the efficacy and feasibility of our proposed method against policy replication attacks.
+
+§ INTRODUCTION
+
+Deep Reinforcement Learning (DRL) is a learning framework for sequential decision-making leveraging neural networks for generalization and function approximation. With the growing interest in DRL and its integration in commercial and critical systems, the security of such algorithms has become of paramount importance (Behzadan and Munir 2018).
+
+In tandem with DRL, similar advancements have been made in Imitation Learning (IL) techniques that utilize expert demonstrations to learn and replicate the expert's behavior in sequential decision making tasks. Deep Q-Learning from Demonstration (DQfD) (Hester et al. 2017) is an IL variant that has enabled DRL agents to converge quicker to an optimal policy. However, recent work in (Behzadan and Hsu 2019a) and (Chen et al. 2020) demonstrates that IL can also be exploited by adversaries to replicate other agents' policies from passive observation of their behavior. This gives rise to risks concerning intellectual property and adversarial information gain for more effective active attacks. The current state of the art in countering such attacks includes watermarking (Behzadan and Hsu 2019b)(Chen et al. 2021), which enables the post-attack identification of replicated policies.
+
+In this paper, we propose an active mitigation technique against policy imitation attacks, named Constrained Randomization of Policy (CRoP). The proposed technique is based on intermittent randomization of a trained policy, constrained on a threshold for maximum amount of acceptable loss in the expected return. The goal is to increase the adversary's imitation training cost, measured as the minimum number of training iterations and observed demonstrations required for training a replica that matches the target policy's performance.
+
+The main contributions of this paper are: (1) We propose and formulate CRoP as a mitigation technique against adversarial policy imitation, (2) We present a formal analysis of the bounds on expected loss of optimality under CRoP, (3) We formally establish bounds on the adversary's imitation cost induced by CRoP, and (4) We report the results of an empirical evaluation of adversarial imitation via DQfD against CRoP agents in classical DRL benchmarks, and demonstrate the efficacy and feasibility of CRoP in those settings.
+
+The remainder of this paper is organized as follows: we first introduce Constrained Randomization of Policy (CRoP), analyze the optimality of a CRoP policy in relation to an optimal policy, describe CRoP's impact on divergence-minimization objectives, and present the minimal adversarial budget induced by CRoP together with an analysis of the expected loss. The following section reports the experimental evaluation of CRoP in three benchmark environments, along with measurements of the training and test-time performance of DQfD-based adversarial imitation learning agents targeting CRoP-enabled policies. We conclude the paper with a summary of findings and remarks on future directions of research.
+
+§ CONSTRAINED RANDOMIZATION OF POLICY
+
+In the remainder of this paper, we assume the target policy aims to solve a Markov Decision Process (MDP) denoted by the tuple $\langle S, A, R, T, \gamma \rangle$, where $S$ is a finite state space, $A$ is a finite action space, $T$ defines the environment's transition probabilities, $\gamma \in \lbrack 0,1)$ is a discount factor, and $R : S \times A \rightarrow \left\lbrack {0,1}\right\rbrack$ is a reward function. The solution to this MDP is a policy $\pi : S \rightarrow A$ that maps states to actions. An agent implementing a policy $\pi$ can measure the value of a state as $V\left( s\right) = \mathop{\max }\limits_{a}\left( {{r}_{s,a} + {\gamma V}\left( {s}^{\prime }\right) }\right)$, where ${s}^{\prime }$ is the next state. Similarly, the value of a state-action pair is given by $Q\left( {s,a}\right) = {r}_{s,a} + \gamma \mathop{\max }\limits_{{a}^{\prime }} Q\left( {{s}^{\prime },{a}^{\prime }}\right)$, where ${s}^{\prime }$ is the next state and ${a}^{\prime }$ is the next action.
+
+Constrained Randomization of Policy (CRoP) is a strategy of diverting actions away from an optimal policy under a constraint on the performance deviation from optimal. Let $\widehat{a} \in \widehat{A}$, where the candidate actions $\widehat{a}$ satisfy $0 < Q\left( {s,\pi \left( s\right) }\right) - Q\left( {s,\widehat{a}}\right) < \rho$. In other words, $\widehat{A}$ is the space of all candidate actions for $s \in S$, excluding the optimal action $\pi \left( s\right)$. We define CRoP as the function below:
+
+$$
+f\left( s\right) = \left\{ \begin{array}{ll} \pi \left( s\right) & \Pr \left( \delta \right) \text{, or if } \nexists\, \widehat{a} \in \widehat{A} \\ \widehat{a} \sim U\left( \widehat{A}\right) & \Pr \left( {1 - \delta }\right) \end{array}\right. \tag{1}
+$$
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+Where $U\left( \widehat{A}\right)$ is the uniform distribution over $\widehat{A}$. This threshold $\rho$ is defined over a difference of Q-values. We have three variations of $\rho$ for CRoP: the Q-value difference (Q-diff) as described in Equation 1, and two measures inspired by the advantage function: the advantage-inspired difference (A-diff) and the positive advantage-inspired difference (${\mathrm{A}}^{ + }$-diff). A-diff CRoP is thus defined as:
+
+$$
+\widetilde{A}\left( {{s}_{t},{a}_{t}}\right) = Q\left( {{s}_{t},{a}_{t}}\right) - V\left( {s}_{t - 1}\right) > - \rho \tag{2}
+$$
+
+${\mathrm{A}}^{ + }$-diff’s $\rho$ adds the condition $\widetilde{A}\left( {{s}_{t},{a}_{t}}\right) \geq 0$. A-diff and ${\mathrm{A}}^{ + }$-diff’s $\rho$ are interpreted as a 1-step hindsight estimate, which is relevant to the trajectory taken rather than only a pure future estimate as with Q-diff, e.g., "played badly, now play safe" vs. "plan to feint ahead". However, the selection of $\rho$ should account for estimation error due to either finite training or function approximation. One can look to the analysis of learning complexity as a method of finding error bounds from which to derive a safety margin for $\rho$. We choose these three threshold variations because their performance varies across environments, implying that the ability to successfully deviate from the optimal policy is conditioned on the environment dynamics as well as the defender's tolerance for loss, which may not be captured by a single threshold such as Q-diff's. We cannot use the traditional advantage, since $V\left( s\right) = \mathop{\max }\limits_{a}Q\left( {s,a}\right)$ implies its implementation would have an impact similar to Q-diff's with regard to the threshold. Additionally, it is important to recognize that CRoP is similar to an $\epsilon$-greedy policy; however, the difference lies in the constraint on expected loss, which $\epsilon$-greedy does not guarantee.
+
+
+Figure 1: Visualization that ${\pi }^{\prime }$ is an $\left( {\epsilon + {\epsilon }^{\prime }}\right)$ -optimal to ${Q}^{ * }/{V}^{ * }$
+
+By definition, a policy $\pi$ is $\epsilon$-optimal if there exists a non-negative constant $\epsilon$ such that ${v}^{\pi }\left( x\right) \geq V\left( x\right) - \epsilon$ for all initial states $x$ in $S$. Some definitions add that this holds at probability $P\left( {1 - \delta }\right)$, a convention we abide by. In other words, $\epsilon$-optimal policies are within an $\epsilon$ neighborhood of ${V}^{ * }$, specifically ${V}^{ * } - {V}^{\pi } < \epsilon$ for all $a \in A$ and $s \in S$ at probability $\left( {1 - \delta }\right)$. As illustrated in Figure 1, ${\pi }^{ * }$ is the optimal and greedy policy extracted from ${V}^{ * }$, $\pi$ is the policy extracted from ${V}^{\pi }$, and ${\pi }^{\prime }$ is the policy extracted from ${V}^{{\pi }^{\prime }}$; we see that ${\pi }^{\prime }$ may be expressed as $\left( {\epsilon + {\epsilon }^{\prime }}\right)$-optimal to ${Q}^{ * }/{V}^{ * }$ when evaluated for the initial states under the assumption that a greedy policy is followed thereafter. Since we do not assume $\pi$ to be an optimal policy, it is possible for ${\pi }^{\prime }$ to be more optimal than $\pi$. However, it is noteworthy that an evaluation of optimality based on the difference to the value function does not imply that extracted policies with small error to ${V}^{ * }$ resemble the optimal policy when assessed on behavioral differences. Theorem 1 establishes that the CRoP policy $f$ is at worst $\left( {\epsilon + {\epsilon }^{\prime }}\right)$-optimal to ${Q}^{ * }$ at probability $\left( {1 - \delta }\right)$ as an evaluation of the initial states, assuming a greedy policy is followed thereafter. However, an evaluation that commits to following CRoP thereafter (or any evaluation of a trajectory under CRoP) will have compounding sub-optimality. Therefore, for any fixed-length horizon $T$, we know that ${\pi }^{\prime }$ will be $\left( {T \times \epsilon + {\epsilon }^{\prime }}\right)$-optimal to ${V}^{ * }$. This is useful for finite horizons but can be problematic with infinite horizons. However, since we allow the defender to modify $\rho$, one can simply cause the deviating behavior to cease and thereby bound the compounding error.
+
+Theorem 1 Given ${Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) < {\epsilon }^{\prime }$ with probability $\left( {1 - \delta }\right)$ and $\left| {{Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) }\right| \leq \epsilon$ for all $s \in S$ and $a \in A$ , then ${Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) \leq \epsilon + {\epsilon }^{\prime }$ with probability $\left( {1 - \delta }\right)$ . That is, ${\pi }^{\prime }$ is $\left( {\epsilon + {\epsilon }^{\prime }}\right)$ -optimal with respect to ${Q}^{ * }/{V}^{ * }$ with probability $\left( {1 - \delta }\right)$ .
+
+Proof. Given ${Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) < {\epsilon }^{\prime }$ at probability $\left( {1 - \delta }\right)$ and $\left| {{Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) }\right| \leq \epsilon$ for all $s \in S$ and $a \in A$ , then ${Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) \leq \epsilon + {\epsilon }^{\prime }$ at probability $\left( {1 - \delta }\right)$ .
+
+Let ${Q}_{\text{diff}} = {Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{f}\left( {{s}_{t},{a}_{t}}\right) + \left| {{Q}^{f}\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) }\right|$ . Given that $Q\left( {s,a}\right) \in \left( {0,\frac{1}{1 - \gamma }}\right)$ , with probability $\left( {1 - \delta }\right)$ :
+
+$$
+{Q}^{ * }\left( {{s}_{t},{a}_{t}}\right) - {Q}^{{\pi }^{\prime }}\left( {{s}_{t},{a}_{t}}\right) \leq {Q}_{diff} \leq \epsilon + {\epsilon }^{\prime } \tag{3}
+$$
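The bound in Equation 3 is simply a triangle inequality on Q-value gaps. A minimal numeric sanity check (the Q-values below are synthetic, purely for illustration) can confirm it never fails:

```python
import random

def composed_gap(q_star, q_pi_prime):
    # the quantity bounded by Eq. 3: Q*(s,a) - Q^{pi'}(s,a)
    return q_star - q_pi_prime

random.seed(0)
eps, eps_prime = 0.1, 0.05
for _ in range(10000):
    q_star = random.uniform(0.0, 10.0)
    q_pi = q_star - random.uniform(0.0, eps_prime)   # Q* - Q^pi < eps'
    q_pi_prime = q_pi + random.uniform(-eps, eps)    # |Q^pi - Q^pi'| <= eps
    # Eq. 3: Q* - Q^pi' <= eps + eps'
    assert composed_gap(q_star, q_pi_prime) <= eps + eps_prime + 1e-12
```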
+
+IL has two common approaches: Behavioral Cloning (BC), a supervised learning approach, and inverse RL, which finds a reward function that matches the demonstrations. Work by (Ke et al. 2020) shows that BC minimizes the KL divergence, Generative Adversarial Imitation Learning (GAIL) (Ho and Ermon 2016) minimizes the Jensen-Shannon divergence, and DAgger (Ross, Gordon, and Bagnell 2011) minimizes total variation. For BC, CRoP affects the maximum likelihood in a manner similar to data poisoning attacks such as label flipping (Xiao, Xiao, and Eckert 2012) or class imbalance. In regard to GAIL, the discriminator from a GAN prioritizes expert experiences, so unless it is modified to decay when out-performed, an additional penalty is given to the training policy. Furthermore, when CRoP lowers the action distribution for ${a}^{ * }$ according to the $\delta$ probability and increases the distribution for candidate actions, it results in a smaller maximal difference for DAgger.
+
+§ BUDGET ANALYSIS FOR PERFECT INFORMATION ADVERSARY
+
+We measure the adversary's budget in the number of samples or trajectories that it can acquire through a passive attack. Nair and Doshi-Velez (Nair and Doshi-Velez 2020) derive upper and lower bounds on the sample complexity of direct policy learning and model-based imitation learning in relaxed problem spaces. This follows research on RL sample efficiency and offline RL (Levine et al. 2020). In this work, however, we depart from a direct treatment of sample efficiency to consider information optimality from observed target demonstrations without environment interaction. Consider the set $\mathcal{T}$ of trajectories ${\tau }_{i}$ , each composed of a $T$ -length chain of (s, a)-pairs. Assume each (s, a)-pair has two possible outcomes: optimal with probability $\delta$ or sub-optimal with probability $\left( {1 - \delta }\right)$ . Assuming pair and trajectory uniqueness, $\mathcal{T}$ contains ${2}^{T}$ trajectories, where $T$ is the length of the horizon. To obtain the optimal target $\pi$ , we would require all trajectories except the completely sub-optimal trajectory, which occurs with probability ${\left( 1 - \delta \right) }^{T}$ . Let the adversary pull from $\mathcal{T}$ . Group the desired ${2}^{T} - 1$ trajectories in set $\alpha$ and the worst-case trajectory in set $\beta$ . As the adversary samples from $\mathcal{T}$ , if they obtain an unseen desired trajectory $\tau$ , it is from $\alpha$ and is moved to their adversarial set $\widehat{\mathcal{T}}$ ; $\tau$ is then replaced in $\mathcal{T}$ but is no longer unseen, so if encountered again it is counted as coming from $\beta$ . Let ${\tau }_{w}$ be the worst-case trajectory and $\widehat{m}$ be the sum of the expected number of pulls from $\mathcal{T}$ needed for each successive new trajectory. It follows that:
+
+$$
+\mathbb{E}\left\lbrack \widehat{m}\right\rbrack = \mathop{\sum }\limits_{{n = 1}}^{{{2}^{T} - 1}}\mathbb{E}\left\lbrack {m}_{n}\right\rbrack = \mathop{\sum }\limits_{1}^{{{2}^{T} - 1}}1/\left( {1 - P\left( {\tau }_{w}\right) + \mathop{\sum }\limits_{{{\tau }_{i} \in \widehat{\mathcal{T}}}} - P\left( {\tau }_{i}\right) }\right) \tag{4}
+$$
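Under a simplifying assumption that each of the ${2}^{T}$ trajectories is drawn uniformly (not required by Equation 4, but convenient for illustration), the sum reduces to a coupon-collector expectation, which can be sketched as:

```python
def expected_pulls(T):
    """Expected number of samples needed to collect all 2**T - 1 desired
    trajectories, assuming (for illustration only) uniform trajectory
    probabilities. After j desired trajectories are already in the adversarial
    set, P(new desired trajectory) = (2**T - 1 - j) / 2**T, which plays the
    role of the denominator in Eq. 4."""
    n = 2 ** T
    return sum(n / (n - 1 - j) for j in range(n - 1))

# e.g. T = 2: 4/3 + 4/2 + 4/1 = 22/3 expected pulls
```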
+
+Intuitively, the denominator is the probability of pulling an unseen trajectory given the trajectories already in $\widehat{\mathcal{T}}$ and the known probability of each ${\tau }_{i} \in \widehat{\mathcal{T}}$ . This gives an expectation of how expensive it is to obtain informative trajectories from $\pi$ . Typically, however, an adversary has a fixed budget, and we therefore want to know what to expect given their budget $\mathbb{B}$ ; here we calculate for a budget measured in optimal state-action pairs. To calculate the expected number of optimal state-action pairs, we find a $t < T$ such that:
+
+$$
+\mathbb{B} \approx \mathop{\sum }\limits_{{i = 1}}^{t}\mathbb{E}\left\lbrack {m}_{i}\right\rbrack = \mathop{\sum }\limits_{{i = 1}}^{t}\frac{1}{\delta } \tag{5}
+$$
+
+This holds assuming the adversary can reset to the previous state and resample until an optimal state-action pair is obtained. It gives the expectation that the adversary obtains $t$ optimal state-action pairs with budget $\mathbb{B}$ . This can be extended to an expected number of trajectories by approximating $\mathbb{B}$ with Equation 4, similarly to Equation 5, where we find a $t < T$ .
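Since the sum in Equation 5 is simply $t/\delta$, the expected number of optimal state-action pairs for a given budget follows immediately. A one-line helper (our illustrative naming, not from the paper) makes this concrete:

```python
import math

def expected_optimal_pairs(budget, delta):
    """From Eq. 5: B ~ sum_{i=1}^{t} 1/delta = t/delta, so t ~ floor(B * delta).
    `budget` is the adversary's sampling budget in state-action pairs and
    `delta` the probability of observing an optimal pair at each step."""
    return math.floor(budget * delta)
```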
+
+We can also account for re-visitation in expectation. Let $k = \mathbb{E}\left\lbrack n\right\rbrack$ , where $n$ is the number of state-action pairs without re-visitation in a trajectory of maximum length $T$ . Consider using $k$ , rounded up to the nearest integer, as the new horizon. We would then expect the number of trajectories needed to obtain $\pi$ to decrease because of the shorter horizon. By the Markov property, for a non-negative, bounded random variable $\widehat{X}$ over $N$ iterations and any $t > 0$ :
+
+$$
+P\left( {\tau }_{i}\right) = {\delta }^{N}{\left( 1 - \delta \right) }^{k - N},\;P\left( {\widehat{X} \geq t}\right) \leq \mathbb{E}\left\lbrack \widehat{X}\right\rbrack /t
+$$
+
+As before, let $\mathcal{T}$ be the set of all trajectories ${\tau }_{i}$ with maximum length $T$ , let $\widehat{\mathcal{T}}$ be randomly sampled from $\mathcal{T}$ , and let $\widehat{\tau }$ be the fragmented trajectory of all unique $\left( {{s}_{i},{a}_{i}}\right) \in \tau$ . Assume in the following that $\left| \circ \right|$ denotes cardinality and $k$ still denotes $\mathbb{E}\left\lbrack n\right\rbrack$ . Then the Markov and reverse Markov inequalities give, for $0 < t < k$ with $T$ as the maximum trajectory length:
+
+$$
+P\left( {\left| {\widehat{\tau }}_{i}\right| < t}\right) \geq 1 - k/t\;P\left( {\left| {\widehat{\tau }}_{i}\right| \leq t}\right) \leq \left( {T - k}\right) /\left( {T - t}\right) \tag{6}
+$$
+
+To interpret this, we have an expectation on the number of trajectories $\mathbb{E}\left\lbrack \widehat{m}\right\rbrack$ with probability between $\left( {1 - k/t}\right)$ and $\left( {T - k}\right) /\left( {T - t}\right)$ for a fixed $t$ with $0 < t < k$ ; this is a weak bound given the lack of information on the variance.
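Both inequalities in Equation 6 hold for any non-negative random variable bounded by $T$. A toy population of fragment lengths (synthetic numbers, not the paper's data) illustrates that the bounds are satisfied:

```python
def markov_bounds_hold(lengths, t, T):
    """Check Eq. 6 on a finite population of fragment lengths:
    P(|tau| < t) >= 1 - k/t   (Markov inequality, rearranged)
    P(|tau| <= t) <= (T - k)/(T - t)   (reverse Markov inequality)
    where k is the mean length and every length is bounded by T."""
    k = sum(lengths) / len(lengths)
    p_lt = sum(l < t for l in lengths) / len(lengths)
    p_le = sum(l <= t for l in lengths) / len(lengths)
    return p_lt >= 1 - k / t - 1e-12 and p_le <= (T - k) / (T - t) + 1e-12

# synthetic lengths bounded by T = 10, with t = 4 satisfying 0 < t < k = 5
```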
+
+§ POLICY EVALUATION AND EXPECTATION OF LOSS
+
+We see that the Q-value under $f$ will be either equal to or less than the Q-value under the target policy $\pi$ , which dictates the selected ${a}^{\prime }$ . Furthermore, the expected return ${G}_{t}^{f}$ for the stochastic policy $f$ with uniform sampling from $\widehat{A}$ is expressed as follows:
+
+$$
+{G}_{t}^{f} = \delta \mathop{\sum }\limits_{{t = 0,1,2\ldots }}^{N}{\gamma }^{t}\left\lbrack {r}_{{s}_{t},{a}_{t}^{ * }}\right\rbrack + \frac{1 - \delta }{\left| \widehat{A}\right| }\mathop{\sum }\limits_{{t = 0,1,2\ldots }}^{N}{\gamma }^{t}\left\lbrack {\mathop{\sum }\limits_{{\widehat{a}}_{t}}{r}_{{s}_{t},{\widehat{a}}_{t}}}\right\rbrack \tag{7}
+$$
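Equation 7 can be computed directly once the per-step rewards are known. A minimal sketch with hypothetical reward sequences (the inputs below are illustrative, not from the experiments):

```python
def crop_expected_return(delta, gamma, opt_rewards, cand_rewards):
    """Eq. 7: G^f = delta * sum_t gamma^t * r(s_t, a*_t)
              + (1 - delta)/|A_hat| * sum_t gamma^t * sum_{a_hat} r(s_t, a_hat).
    opt_rewards[t] is r(s_t, a*_t); cand_rewards[t] lists the rewards of the
    candidate actions in A_hat at step t."""
    g_opt = sum(gamma ** t * r for t, r in enumerate(opt_rewards))
    g_cand = sum(gamma ** t * sum(rs) for t, rs in enumerate(cand_rewards))
    return delta * g_opt + (1 - delta) / len(cand_rewards[0]) * g_cand
```

With `delta = 1` the return reduces to the purely optimal discounted sum, matching the weighted-sum reading of Equation 7 given in the text.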
+
+With Equation 7, ${G}_{t}^{f}$ is the weighted sum of the optimal expected return, at probability $\delta$ , and the expected return across the rewards of the candidate actions, at probability $\left( {1 - \delta }\right)$ . Given ${G}_{t}^{ * }$ and ${G}_{t}^{f}$ , the difference between the expected returns in $Q$ -value form is exactly:
+
+$$
+{G}_{t}^{ * } - {G}_{t}^{f} = \left( {1 - \delta }\right) \left\lbrack {{Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) - \mathbb{E}\left\lbrack {{Q}^{f}\left( {{s}_{t},{\widehat{a}}_{t}}\right) }\right\rbrack }\right\rbrack \tag{8}
+$$
+
+Since ${Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) - \mathbb{E}\left\lbrack {{Q}^{f}\left( {{s}_{t},{\widehat{a}}_{t}}\right) }\right\rbrack < \rho$ , the expected loss satisfies ${G}_{t}^{ * } - {G}_{t}^{f} \leq \left( {1 - \delta }\right) \rho \leq \rho$ . This expected loss is calculated from the current state's forward estimate of future reward. Summed over a horizon, there exists an upper bound, call it $\mathbb{E}\left\lbrack L\right\rbrack$ :
+
+$$
+\mathop{\sum }\limits_{{t = 0}}^{N}\left| {{Q}^{\pi }\left( {{s}_{t},{a}_{t}}\right) - \mathbb{E}\left\lbrack {{Q}^{f}\left( {{s}_{t},{\widehat{a}}_{t}}\right) }\right\rbrack }\right| \leq N \times \left( {1 - \delta }\right) \rho \leq N \times \rho = \mathbb{E}\left\lbrack L\right\rbrack \tag{9}
+$$
+
+§ EXPERIMENTAL EVALUATION
+
+We investigate DQfD as our adversarial IL method and evaluate test-time and training-time performance across three environments: Breakout, Cartpole, and Space Invaders. We train DQfD agents under default parameters (supplied in the supplement) with CRoP-induced demonstrations, a control DQfD agent (our baseline IL comparison), and a default double DQN (DDQN) agent, which provides the expert demonstrations and serves as a baseline performance comparison for the IL agents. The results of a parameter search on trained DDQN policies from Stable-Baselines Zoo (Raffin 2018) are shown in Figure 3. The test evaluations of the target policy under selected CRoP thresholds follow in Figure 2, showing that the selected $\rho$ and $\delta$ values are reflected in the average reward performance. As expected, higher $\delta$ allows for higher values of $\rho$ . The trade-off between $\delta$ and $\rho$ is similar to allowing high or low variance in the Q-value; the defender's selection of $\delta$ and $\rho$ can be likened to risk-averse, risk-neutral, and risk-seeking behavior. The results, illustrated in Figure 4, demonstrate that the performance of imitated policies generally remains below that of their control/baseline DQfD agents over the earlier spans of training episodes. We compare against the baseline DQfD because it demonstrates the performance of a DQfD agent when no mitigation is applied against adversarial eavesdropping on the state-actions performed by the target agent. CRoP may induce variance similar to optimistic initialization, for example, the work of (Kamiura and Sano 2017) and (Szita and Lörincz 2009). However, we argue that by adding deviating behavior to the target policy, which we assume to be optimal, the target policy withholds the maximal information gain an adversary can observe (unless this assumption is false), thus increasing the adversarial budget. Figure 5 depicts the comparison of test-time performance among agents trained with various values of $\delta$ and $\rho$ . We emphasize that the constraints in CRoP bound expected loss, not true performance loss. Table 1 reports test-time evaluation timestep counts and the number of timesteps with successful action diversion, showing that it is possible to deviate more often in one environment than another due to the nature of the environment. For example, we observed visually that Cartpole could deviate early but then had to compensate by acting optimally to avoid bad states from which it could not transition out. This further supports the need for several threshold variations, since the environments exhibited different behaviors when faced with risk.
+
+
+Figure 2: Test-time evaluation of target agent under various CRoP thresholds across 10 episodes
+
+§ CONCLUSION
+
+This study investigated the threat posed by passive policy replication attacks through the adversarial use of imitation learning. We proposed Constrained Randomization of Policy (CRoP), a deviation from the optimal policy under a threshold constraint, as a mitigation technique against such attacks. We performed a parameter search and empirically evaluated the target policy under CRoP against the target policy without protection. We analyzed its performance with regard to $\epsilon$ -optimality, the estimated impact on adversarial cost, and the expectation of loss. Furthermore, we empirically evaluated CRoP across three game benchmarks and verified its efficacy and efficiency against DQfD-based policy replication attacks, demonstrating that the target policy can accomplish its task while deviating in a bounded manner to increase the adversarial cost of successful policy replication.
+
+
+Figure 3: Parameter search performance at 5000 timesteps
+
+Q-value difference $\rho$ (left columns) and positive advantage-inspired $\rho$ (right columns):
+
+| env | $\delta$ | $\rho$ | succ. | $\delta \times \mathrm{T}$ | T | $\delta$ | succ. | $\delta \times \mathrm{T}$ | T |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| Breakout-v4 | 0.0 | 0.1 | 7812 | 8450 | 8450 | 0.0 | 9857 | 15412 | 14512 |
+| Breakout-v4 | 0.5 | 0.02 | 12056 | 25761 | 51686 | 0.4 | 12402 | 33658 | 56336 |
+| Cartpole-v0 | 0.7 | 0.01 | 1345 | 1979 | 2000 | 0.0 | 505 | 2000 | 2000 |
+| Cartpole-v0 | 0.7 | 0.01 | 1345 | 1979 | 2000 | 0.1 | 430 | 1746 | 1938 |
+| SpaceInvaders-v4 | 0.0 | 0.1 | 18963 | 18968 | 26038 | 0.0 | 10111 | 21190 | 21190 |
+| SpaceInvaders-v4 | 0.6 | 0.02 | 10281 | 10358 | 26038 | X | X | X | X |
+
+Advantage-inspired $\rho$ :
+
+| env | $\delta$ | $\rho$ | succ. | $\delta \times \mathrm{T}$ | T |
+| --- | --- | --- | --- | --- | --- |
+| Breakout-v4 | 0.0 | 0.1 | 3238 | 3464 | 3464 |
+| Breakout-v4 | 0.0 | 0.1 | 3238 | 3464 | 3464 |
+| Cartpole-v0 | 0.0 | 0.02 | 279 | 2000 | 2000 |
+| Cartpole-v0 | 0.0 | 0.1 | 946 | 2000 | 2000 |
+| SpaceInvaders-v4 | 0.0 | 0.1 | 21706 | 21706 | 21706 |
+| SpaceInvaders-v4 | 0.7 | 0.15 | 7117 | 7117 | 23730 |
+
+Table 1: Test-time evaluation timestep count over 10 episodes
+
+
+Figure 4: Imitating DQfD agents training on CRoP-induced demonstrations
+
+
+Figure 5: Test-time evaluation of replicated policies and the target DDQN agent across 10 episodes
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/og7CXiEXqpZ/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/og7CXiEXqpZ/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..d10ad18f13357e40535117c2ad5a650d4aa56cc3
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/og7CXiEXqpZ/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,289 @@
+# Ensemble-in-One: Learning Ensemble within Random Gated Networks for Enhanced Adversarial Robustness
+
+Anonymous authors
+
+## Abstract
+
+Adversarial attacks have threatened modern deep learning systems by crafting adversarial examples with small perturbations to fool convolutional neural networks (CNNs). Ensemble training methods are promising for facilitating better adversarial robustness by diversifying the vulnerabilities among the sub-models while maintaining accuracy comparable to standard training. Previous work also demonstrates that enlarging the ensemble can improve robustness. However, existing ensemble methods scale poorly, owing to the rapid complexity increase when more sub-models are included in the ensemble. Moreover, it is usually infeasible to train or deploy an ensemble with many sub-models, owing to tight hardware resource budgets and latency requirements. In this work, we propose Ensemble-in-One (EIO), a simple but effective method to enlarge the ensemble within a random gated network (RGN). EIO augments the original model by replacing the parameterized layers with multi-path random gated blocks (RGBs) to construct an RGN. By diversifying the vulnerabilities of the numerous paths through the super-net, it provides high scalability, because the number of paths within an RGN increases exponentially with network depth. Our experiments demonstrate that EIO consistently outperforms previous ensemble training methods with even less computational overhead, simultaneously achieving better accuracy-robustness trade-offs than adversarial training.
+
+## Introduction
+
+With convolutional neural networks (CNNs) becoming ubiquitous, the security and robustness of neural networks are attracting increasing interest. Recent studies find that CNN models are inherently vulnerable to adversarial attacks (Goodfellow, Shlens, and Szegedy 2014), which craft imperceptible perturbations on images, referred to as adversarial examples, to mislead neural network models. Even without access to the target model, an adversary can still generate adversarial examples from surrogate models to attack the target model by exploiting the adversarial transferability between them.
+
+Such vulnerability of CNN models has spurred extensive research on improving robustness against adversarial attacks. One stream of approaches targets learning robust features for an individual model (Madry et al. 2017; Brendel et al. 2020). Informally, robust features are defined as features that are less sensitive to adversarial perturbations added to the inputs. A representative approach, referred to as adversarial training (Madry et al. 2017), generates adversarial examples online, on which the model minimizes the training loss. As a result, adversarial training encourages the model to learn features that are less sensitive to adversarial perturbations, thereby alleviating the model's vulnerability. However, such adversarial training methods often sacrifice clean accuracy for enhanced robustness (Zhang et al. 2019), since they exclude the non-robust features and become less discriminative for examples with high similarity in the feature space.
+
+Besides empowering improved robustness for an individual model, another stream of research focuses on forming strong ensembles to improve robustness (Yang et al. 2020; Bagnall, Bunescu, and Stewart 2017; Pang et al. 2019; Kariyappa and Qureshi 2019). Generally speaking, an ensemble is constructed by aggregating multiple sub-models. Intuitively, an ensemble promises better robustness than an individual model because a successful attack needs to mislead the majority of the sub-models rather than just one. While the robustness of an ensemble relies heavily on the diversity of the sub-models, a recent study finds that CNN models trained independently on the same dataset have highly overlapping adversarial subspaces (Tramèr et al. 2017). Therefore, many studies propose ensemble training methods to diversify the sub-models. For example, DVERGE (Yang et al. 2020) proposes to distill the non-robust features corresponding to each sub-model's vulnerability, then isolates the sub-models' vulnerabilities by mutual learning, thereby impeding the adversarial transferability among them.
+
+Another learned insight is that ensembles composed of more sub-models tend to capture greater robustness improvements. Table 1 shows the robustness trend of ensembles trained with various ensemble training methods: robustness improvement can be obtained by including more sub-models within the ensemble. This drives us to explore whether the trend continues as the ensemble keeps growing. However, existing ensemble construction methods scale poorly because of their rapidly increasing overhead; in particular, with mutual learning, which trains the sub-models in a round-robin manner, the complexity rises as $O\left( {n}^{2}\right)$ .
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+| #sub-model | Baseline | ADP | GAL | DVERGE |
+| --- | --- | --- | --- | --- |
+| 3 | 0.0%/1.5% | 0.0%/9.6% | 39.7%/11.4% | 53.2%/40.0% |
+| 5 | 0.0%/2.1% | 0.0%/11.8% | 32.4%/31.7% | 57.2%/48.9% |
+| 8 | 0.0%/3.2% | 0.0%/12.0% | 22.4%/37.0% | 63.6%/57.9% |
+
+Table 1: Adversarial accuracy of the ensembles trained by different methods, with 3, 5, and 8 sub-models respectively (Yang et al. 2020). The numbers before and after the slash are the black-box adversarial accuracy under perturbation strength 0.03 (around 8/255) and the white-box adversarial accuracy under perturbation strength 0.01.
+
+We propose Ensemble-in-One, a novel approach that improves the scalability of ensemble training and introduces a randomness mechanism for enhanced generalization, simultaneously obtaining better robustness and higher efficiency. For a dedicated CNN model, we construct a Random Gated Network (RGN) by substituting each parameterized layer with a Random Gated Block (RGB) on top of the neural architecture. In this way, the network can instantiate numerous sub-models by controlling the gates in each block. Ensemble-in-One substantially reduces the complexity of scaling up the ensemble. In summary, the contributions of this work are as follows:
+
+- Ensemble-in-One is a simple but effective method that learns adversarially robust ensembles within one over-parameterized random gated network. EIO enables us to employ ensemble learning techniques to learn more robust individual models with minimal computational overhead and no extra inference overhead.
+
+- Extensive experiments demonstrate the effectiveness of EIO. It consistently outperforms previous ensemble training methods with even less computational overhead. Moreover, EIO also achieves better accuracy-robustness trade-offs than adversarial training.
+
+## Related Work
+
+## Adversarial attacks and countermeasures.
+
+The inherent vulnerability of CNN models poses challenges to the security of deep learning systems. An adversary can apply an additive perturbation to an original input to generate an adversarial example that induces a wrong prediction from a CNN model (Goodfellow, Shlens, and Szegedy 2014). Denoting an input as $x$ , the goal of an adversarial attack is to find a perturbation $\delta$ such that ${x}_{adv} = x + \delta$ misleads the model, where $\delta$ satisfies the intensity constraint $\parallel \delta {\parallel }_{p} \leq \epsilon$ . Formally, the adversarial attack aims at maximizing the loss $\mathcal{L}$ for the model with parameters $\theta$ on the input-label pair (x, y), i.e. $\delta = {\operatorname{argmax}}_{\delta }{\mathcal{L}}_{\theta }\left( {x + \delta , y}\right)$ , under the constraint that the ${\ell }_{p}$ norm of the perturbation does not exceed the bound $\epsilon$ . Usually, the ${\ell }_{\infty }$ norm (Goodfellow, Shlens, and Szegedy 2014; Madry et al. 2017) of the perturbations is used to measure an attack's effectiveness or a model's robustness. An attack that requires a smaller perturbation to successfully deceive the model is regarded as stronger. Correspondingly, a defense that forces attacks to enlarge the perturbation intensity is regarded as more robust.
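As a concrete instance of this formulation, the one-step FGSM attack discussed below takes $\delta = \epsilon \cdot \operatorname{sign}\left( {\nabla }_{x}\mathcal{L}\right)$, which saturates the $\ell_{\infty}$ constraint in every coordinate. A minimal sketch over a flat gradient vector (toy values, no real model involved):

```python
def fgsm_perturbation(grad, eps):
    """FGSM sketch: delta = eps * sign(grad) maximizes the linearized loss
    under ||delta||_inf <= eps. `grad` holds the gradient of the loss w.r.t.
    each input component, here as a plain list for illustration."""
    sign = lambda g: (g > 0) - (g < 0)
    return [eps * sign(g) for g in grad]
```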
+
+Various adversarial attack methods have been investigated to strengthen attack effectiveness. The fast gradient sign method (FGSM) (Goodfellow, Shlens, and Szegedy 2014) exploits the gradient of the loss to generate adversarial examples in a single step. As improvements, many studies show the attack can be strengthened through multi-step projected gradient descent (PGD) (Madry et al. 2017), random-starting strategies, and momentum mechanisms (Dong et al. 2017). SGM (Wu et al. 2020) further finds that weighting the gradients that pass through skip connections can make attacks more effective. Other prevalent attack approaches include C&W losses (Carlini and Wagner 2017b), M-DI ${}^{2}$ -FGSM (Xie et al. 2019), etc. These attacks provide strong and effective ways to generate adversarial examples, posing a huge threat to real-world deep learning systems.
+
+To improve the robustness of CNN systems, there are also extensive countermeasures for adversarial attacks. One active research direction targets improving the robustness of individual models. Adversarial training (Madry et al. 2017) optimizes the model on the adversarial examples generated in every step of the training stage. Therefore, the optimized model will tend to drop non-robust features to converge better on the adversarial data. However, adversarial training encourages the model to fit the adversarial examples, thereby reducing the generalization on the clean data and causing significant degradation of the clean accuracy.
+
+## Test-time randomness for adversarial defense
+
+Besides the aforementioned training techniques, there exist studies that introduce test-time randomness to improve robustness. Feinman et al. (2017) utilize the uncertainty measure of dropout networks to detect adversarial examples. Dhillon et al. (2018) and Xie et al. (2017) incorporate layer-wise weighted dropout and random input transformations at test time to improve robustness. Test-time randomness is found to be effective in increasing the required distortion on the model, since it makes generating white-box adversarial examples almost as difficult as generating transferable black-box ones (Carlini and Wagner 2017a). Nevertheless, test-time randomness increases the inference cost and can be circumvented to some extent with the expectation-over-transformation technique (Athalye, Carlini, and Wagner 2018).
+
+## Ensemble training for adversarial defense.
+
+Besides improving the robustness of individual models, another recent research direction investigates the robustness of model ensembles, in which multiple sub-models work together. The basic idea is that multiple sub-models can provide diverse decisions: ensemble methods combine multiple weak models to jointly make decisions, thereby forming a stronger whole. However, it has been demonstrated that independently training multiple models tends to capture similar features, which does not provide diversity among them (Kariyappa and Qureshi 2019).
+
+
+
+Figure 1: Normal ensemble training of multiple sub-models (Left) and the proposed ensemble-in-one training within a random gated network (Right). By selecting the paths along augmented layers, the ensemble-in-one network can instantiate ${n}^{L}$ sub-models, where $n$ represents the augmentation factor of the multi-gated block for each augmented layer and $L$ represents the number of augmented layers in the network.
+
+Therefore, several studies propose ensemble training methods to fully diversify the sub-models and improve ensemble robustness. For example, Pang et al. treat the distribution of output predictions as a diversity measurement and propose an adaptive diversity promoting (ADP) regularizer (Pang et al. 2019) to diversify the non-max predictions of the sub-models. Kariyappa and Qureshi regard the gradients w.r.t. the inputs as a discriminator between models, and propose a gradient alignment loss (GAL) (Kariyappa and Qureshi 2019) that takes the cosine similarity of the gradients as a criterion for training the sub-models. The very recent work DVERGE (Yang et al. 2020) argues that the similar non-robust features captured by the sub-models cause high adversarial transferability among them. The authors therefore exploit non-robust feature distillation and adopt mutual learning to diversify and isolate the vulnerabilities among the sub-models, so that the within-ensemble transferability is highly impeded. However, as mentioned before, such ensemble methods are overwhelmed by the fast-increasing overhead when scaling up the ensemble. For example, DVERGE takes 11 hours to train an ensemble with three sub-models but approximately 50 hours when the sub-model count increases to eight. A more efficient ensemble construction method is therefore highly desirable to tackle the scaling problem.
+
+## Ensemble-in-One
+
+## Basic Motivation
+
+The conventional way to construct ensembles is to simply aggregate multiple sub-models by averaging their predictions, which is inefficient and hard to scale up. An intuitive way to enhance the scalability of ensemble construction is to introduce an ensemble for each layer in the network. As shown in Fig. 1, we can build a dynamic network by augmenting each parameterized layer with an $n$ -path gated block. Then, by selecting the paths along the augmented layers, the dynamic network can ideally instantiate ${n}^{L}$ varied sub-models. Taking ResNet-20 as an example, by replacing each convolution layer (ignoring the skip-connection branch) with a two-path gated module, the overall path count approaches ${2}^{19} = {524288}$ . Such augmentation approximates training a very large ensemble of sub-models. Then, through vulnerability-diversifying mutual learning, each path tends to capture better robustness. Following this idea, we propose Ensemble-in-One to further improve the robustness of both individual models and ensembles.
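The path count claimed above follows directly from the construction: with $n$ gated copies per augmented layer and $L$ such layers, independent gate choices yield $n^L$ distinct paths. A one-line check of the ResNet-20 example:

```python
def rgn_path_count(n, num_layers):
    """Number of distinct paths in an RGN where each of `num_layers`
    augmented layers is replaced by an n-path random gated block."""
    return n ** num_layers

# ResNet-20 with two-path blocks over 19 convolution layers: 2**19 paths
```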
+
+## Construction of the Random Gated Network
+
+Denote a candidate neural network as $\mathcal{N}\left( {{o}_{1},{o}_{2},\ldots ,{o}_{m}}\right)$ , where ${o}_{i}$ represents an operator in the network. To transform the original network into a random gated network (RGN), we first extract the neural architecture to obtain the connection topology and layer types. On top of that, we replace each parameterized layer (mainly convolutional layers, optionally followed by a batch normalization layer) with a random gated block (RGB). As shown in Fig. 2, each RGB simply repeats the original layer $n$ times and leverages binary gates with uniform probabilities to control the opening or muting of the corresponding sub-layers. These repeated sub-layers have different weight parameters. We denote the RGN as $\mathcal{N}\left( {{d}_{1},{d}_{2},\ldots ,{d}_{m}}\right)$ , where ${d}_{i} = \left( {{o}_{i1},\ldots ,{o}_{in}}\right)$ . Let ${g}_{i}$ be the gate information in the ${i}_{\text{th }}$ RGB; then a specific path derived from the RGN can be expressed as $\mathcal{P} = \left( {{g}_{1} \cdot {d}_{1},{g}_{2} \cdot {d}_{2},\ldots ,{g}_{m} \cdot {d}_{m}}\right)$ .
+
+For each RGB, when performing the computation, only one of the $n$ gates is open at a time, and the others are temporarily muted. Thereby, only one path of activations is kept in memory during training, which reduces the memory occupation of training an RGN to the level of training an individual model. Moreover, to ensure that all paths are sampled and trained equally, each gate in an RGB is chosen with identical probability, i.e. $1/n$ if each RGB consists of $n$ sub-operators. Therefore, the binary gate function can be expressed as:
+
+$$
+{g}_{i} = \left\{ \begin{matrix} \left\lbrack {1,0,\ldots ,0}\right\rbrack & \text{ with probability }1/n, \\ \left\lbrack {0,1,\ldots ,0}\right\rbrack & \text{ with probability }1/n, \\ \ldots & \\ \left\lbrack {0,0,\ldots ,1}\right\rbrack & \text{ with probability }1/n. \end{matrix}\right. \tag{1}
+$$
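
A minimal sketch of the uniform one-hot gate in Eq. 1 (plain Python; the function name is ours):

```python
import random

def sample_gate(n: int, rng: random.Random) -> list:
    """Sample a binary gate for an n-path RGB: exactly one position is 1,
    each position chosen with identical probability 1/n (Eq. 1)."""
    gate = [0] * n
    gate[rng.randrange(n)] = 1
    return gate

rng = random.Random(0)
gate = sample_gate(4, rng)  # a one-hot list such as [0, 0, 1, 0]
```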
+
+An RGN is analogous to the super network in parameter-sharing neural architecture search, and the forward process of an RGN is similar to evaluating a sub-architecture (Pham et al. 2018; Cai, Zhu, and Han 2018). Compared to conventional ensemble training methods, our method scales up the ensemble far more easily: it only incurs $n \times$ memory occupation for weight storage, while keeping the same memory requirement for activations as an individual model.
+
+
+
+Figure 2: The construction of random gated network based on random gated blocks. The forward propagation will select one path to allow the input pass. Correspondingly, the gradients will also propagate backward along the same path.
+
+## Learning Ensemble in One
+
+The goal of learning Ensemble-in-One is to encourage vulnerability diversity across all the paths within the RGN by letting them mutually learn from each other. Let ${\mathcal{P}}_{i}$ and ${\mathcal{P}}_{j}$ be two different paths, where two paths are defined as different when at least one of their gates differs. To diversify the vulnerabilities, we first need to distill the non-robust features of the paths so that the optimization process can isolate them. We adopt the same non-robust feature distillation strategy as previous work (Ilyas et al. 2019; Yang et al. 2020). Considering two randomly sampled, independent input-label pairs $\left( {{x}_{t},{y}_{t}}\right)$ and $\left( {{x}_{s},{y}_{s}}\right)$ from the training dataset, the feature of ${x}_{t}$ distilled onto ${x}_{s}$ by the $l$ -th layer of path ${\mathcal{P}}_{i}$ is obtained by:
+
+$$
+{x}_{{\mathcal{P}}_{i}^{l}}^{\prime }\left( {{x}_{t},{x}_{s}}\right) = {\operatorname{argmin}}_{z}{\begin{Vmatrix}{f}_{{\mathcal{P}}_{i}}^{l}\left( z\right) - {f}_{{\mathcal{P}}_{i}}^{l}\left( {x}_{t}\right) \end{Vmatrix}}^{2}, \tag{2}
+$$
+
+subject to ${\begin{Vmatrix}z - {x}_{s}\end{Vmatrix}}_{\infty } \leq {\epsilon }_{d}$ . This feature distillation constructs a sample ${x}_{{\mathcal{P}}_{i}^{l}}^{\prime }$ by adding perturbations to ${x}_{s}$ so that the response of ${x}_{{\mathcal{P}}_{i}^{l}}^{\prime }$ in the $l$ -th layer of ${\mathcal{P}}_{i}$ is similar to that of ${x}_{t}$ , even though the two inputs ${x}_{t}$ and ${x}_{s}$ are completely different and independent. This exposes the vulnerability of path ${\mathcal{P}}_{i}$ in classifying ${x}_{s}$ . Therefore, another path ${\mathcal{P}}_{j}$ can learn to correctly classify the distilled data and thus circumvent this vulnerability. The optimization objective for path ${\mathcal{P}}_{j}$ is to minimize:
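
To make Eq. 2 concrete, here is a PGD-style sketch on a toy *linear* feature map standing in for the layer-$l$ activations (the analytic gradient avoids any autodiff framework; all names and the toy setup are our illustrative assumptions, not the paper's code):

```python
import numpy as np

def distill_feature(W, x_t, x_s, eps_d=0.07, steps=10):
    """Approximate Eq. 2 for the toy feature map f(z) = W @ z:
    minimize ||W z - W x_t||^2 s.t. ||z - x_s||_inf <= eps_d,
    via signed-gradient descent with projection onto the inf-ball."""
    z = x_s.copy()
    step = eps_d / steps
    for _ in range(steps):
        grad = 2.0 * W.T @ (W @ z - W @ x_t)        # analytic gradient
        z = z - step * np.sign(grad)                # signed descent step
        z = x_s + np.clip(z - x_s, -eps_d, eps_d)   # project into the ball
    return z

rng = np.random.default_rng(0)
W = rng.normal(size=(8, 16))
x_t, x_s = rng.normal(size=16), rng.normal(size=16)
x_prime = distill_feature(W, x_t, x_s)  # feature-space close to x_t, input-space close to x_s
```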
+
+$$
+{\mathbb{E}}_{\left( {{x}_{t},{y}_{t}}\right) ,\left( {{x}_{s},{y}_{s}}\right) , l}{\mathcal{L}}_{{f}_{{\mathcal{P}}_{j}}}\left( {{x}_{{\mathcal{P}}_{i}^{l}}^{\prime }\left( {{x}_{t},{x}_{s}}\right) ,{y}_{s}}\right) . \tag{3}
+$$
+
+As it is desired that each path can learn from the vulnerabilities of all the other paths, the objective of training the ensemble-in-one RGN is to minimize:
+
+$$
+\mathop{\sum }\limits_{{\forall {\mathcal{P}}_{j} \in \mathcal{N}}}{\mathbb{E}}_{\left( {{x}_{t},{y}_{t}}\right) ,\left( {{x}_{s},{y}_{s}}\right) , l}\mathop{\sum }\limits_{{\forall {\mathcal{P}}_{i} \in \mathcal{N}, i \neq j}}{\mathcal{L}}_{{f}_{{\mathcal{P}}_{j}}}\left( {{x}_{{\mathcal{P}}_{i}^{l}}^{\prime }\left( {{x}_{t},{x}_{s}}\right) ,{y}_{s}}\right) , \tag{4}
+$$
+
+where $\mathcal{N}$ is the set of all paths in the RGN. Since it is infeasible to involve all paths in a single training iteration, we randomly sample a certain number of paths by stochastically setting the binary gates according to Eq. 1. We denote the number of paths sampled in each iteration as $p$ . The selected paths temporarily form a sub-ensemble of the RGN, referred to as $\mathcal{S}$ . The paths in $\mathcal{S}$ keep changing throughout the training process, such that all paths have equal opportunities to be trained.
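
The path-sampling step can be sketched as follows (a simplification: we enforce distinctness over all gates rather than only the first $l$ as the algorithm requires; the names are ours):

```python
import random

def sample_distinct_paths(p: int, n_blocks: int, n_paths: int, rng: random.Random):
    """Sample p pairwise-distinct paths; each path is a tuple holding the
    opened-gate index of every RGB."""
    paths = set()
    while len(paths) < p:
        paths.add(tuple(rng.randrange(n_paths) for _ in range(n_blocks)))
    return list(paths)

rng = random.Random(0)
S = sample_distinct_paths(3, 19, 2, rng)  # the temporary sub-ensemble S
```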
+
+Algorithm 1: Training process for learning Ensemble-in-One
+
+Require: Path samples per iteration $p$
+
+Require: Random gated network $\mathcal{N}$ with $L$ parameterized layers
+
+Require: Pre-training epochs ${E}_{w}$ , training epochs $E$ , and data batches ${B}_{d}$
+
+Require: Optimization loss $\mathcal{L}$ , learning rate ${lr}$
+
+Ensure: Trained Ensemble-in-One model
+
+ # pre-training of $\mathcal{N}$
+
+ for $e = 1,2,\ldots ,{E}_{w}$ do
+
+ for $b = 1,2,\ldots ,{B}_{d}$ do
+
+ Randomly sample path ${\mathcal{P}}_{i}$ from $\mathcal{N}$
+
+ Train ${\mathcal{P}}_{i}$ on the batched data
+
+ end for
+
+ end for
+
+ # learning vulnerability diversity for $\mathcal{N}$
+
+ for $e = 1,2,\ldots , E$ do
+
+ for $b = 1,2,\ldots ,{B}_{d}$ do
+
+ Randomly sample $l \in \left\lbrack {1, L}\right\rbrack$
+
+ # randomly sample $p$ distinct paths
+
+ $\mathcal{S} = \left\lbrack {{\mathcal{P}}_{1},{\mathcal{P}}_{2},\ldots ,{\mathcal{P}}_{p}}\right\rbrack$ , s.t. $\forall i, j,\exists k \in \left\lbrack {1, l}\right\rbrack$ , s.t. ${\mathcal{P}}_{i}\left\lbrack k\right\rbrack \neq {\mathcal{P}}_{j}\left\lbrack k\right\rbrack$
+
+ Get data $\left( {{X}_{t},{Y}_{t}}\right) ,\left( {{X}_{s},{Y}_{s}}\right) \leftarrow D$
+
+ # get distilled data
+
+ for $i = 1,2,\ldots , p$ do
+
+ ${X}_{i}^{\prime } = {x}_{{\mathcal{P}}_{i}^{l}}^{\prime }\left( {{X}_{t},{X}_{s}}\right)$
+
+ end for
+
+ ${\nabla }_{\mathcal{N}} \leftarrow 0$
+
+ for $i = 1,2,\ldots , p$ do
+
+ ${\nabla }_{{\mathcal{P}}_{i}} = \nabla \left( {\mathop{\sum }\limits_{{j \neq i}}{\mathcal{L}}_{{f}_{{\mathcal{P}}_{i}}}\left( {{X}_{j}^{\prime },{Y}_{s}}\right) }\right)$
+
+ ${\nabla }_{\mathcal{N}} = {\nabla }_{\mathcal{N}} + {\nabla }_{{\mathcal{P}}_{i}}$
+
+ end for
+
+ $\mathcal{N} = \mathcal{N} - {lr} * {\nabla }_{\mathcal{N}}$
+
+ end for
+
+ end for
+
+The training process of the RGN is summarized by the pseudo-code in Algorithm 1. Before starting vulnerability-diversification training, we pre-train the RGN with standard training settings to give it basic capabilities; the process is simple: a random path is sampled in each iteration and trained on clean data. Then, for each data batch, vulnerability diversification contains three basic steps. First, randomly sample $p$ paths to be involved in the iteration. Note that the sampled paths must be distinct: if the distilling layer is set to $l$ , for any ${\mathcal{P}}_{i},{\mathcal{P}}_{j}$ in $\mathcal{S}$ there must be at least one differing gate among the first $l$ gates, i.e. $\exists k \in \left\lbrack {1, l}\right\rbrack$ s.t. ${\mathcal{P}}_{i}\left\lbrack k\right\rbrack \neq {\mathcal{P}}_{j}\left\lbrack k\right\rbrack$ . Second, distill the vulnerable features of the sampled paths according to Eq. 2; the distillation process is the same as in DVERGE, applying a PGD scheme to approximate the optimal perturbations. Third, mutually train each path with the distilled data from the other paths in a round-robin manner. Because the paths unavoidably share a proportion of weights owing to the weight-sharing mechanism of the super-net, the weights are not updated until the gradients from all sampled paths have been accumulated.
+
+
+
+Figure 3: Investigation on the hyper-parameters involved in the Ensemble-in-One construction and training. All these experiments are implemented on ResNet-20 over CIFAR-10 dataset. Left: The black-box adversarial accuracy under different sample count $p$ per iteration; Middle: The black-box adversarial accuracy under different distillation perturbation ${\epsilon }_{d}$ ; and Right: the adversarial accuracy under different augmentation factor $n$ .
+
+## Model Derivation and Deployment
+
+Once the training of the RGN is finished, we can derive and deploy the model in two ways. One way is to deploy the entire RGN; at inference time, the gates throughout the network are randomly selected to process each input. The advantage is that the computation is randomized, which may be beneficial for robustness under white-box attacks, because the transferability among different paths was impeded during diversity training. The disadvantage is that the accuracy is unstable owing to the dynamic choice of inference path, with fluctuations of 1-2 percentage points.
+
+Another way is to derive individual models from the RGN. By sampling a random path and eliminating the redundant modules, an individual model can be rolled out. We can also sample multiple paths and derive multiple models to combine into an ensemble. Deploying models in this way ensures stable predictions, as the randomness is eliminated. In addition, the derived models can be slightly fine-tuned with a small learning rate for a few epochs to compensate for under-convergence, since the RGN training process cannot fully train all paths: the probability of each specific path being sampled is relatively low. In our implementation, we adopt the latter method and derive an individual model for deployment.
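
Deriving an individual model amounts to keeping one sub-layer per block along the sampled path. A toy sketch under an assumed weight layout (the layout and names are our illustrative assumptions, not the paper's code):

```python
def derive_individual_model(rgn_weights, path):
    """Keep only the selected sub-layer of each random gated block.
    Assumed layout: rgn_weights[i] is the list of n candidate weight
    tensors of block i; path[i] is the opened-gate index of block i."""
    return [block[g] for block, g in zip(rgn_weights, path)]

# Toy example: 3 blocks, 2 candidate sub-layers each (weights shown as strings).
rgn_weights = [["w0a", "w0b"], ["w1a", "w1b"], ["w2a", "w2b"]]
model = derive_individual_model(rgn_weights, [0, 1, 1])  # → ["w0a", "w1b", "w2b"]
```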
+
+## Experimental Results
+
+## Experiment Settings
+
+Benchmark. The experiments are conducted on the ResNet-20 (He et al. 2016) and VGG-16 networks with the CIFAR dataset (Krizhevsky, Hinton et al. 2009). Specifically, we construct the ResNet-20- and VGG-16-based RGNs by substituting each convolution layer with an $n$ -path RGB ( $n = 2$ by default). Overall, there are 19 RGBs for ResNet-20 (covering the 19 convolution layers in the straight-through branch) and 14 RGBs for VGG-16 (covering its 14 convolution layers). To evaluate the effectiveness of our method, we compare Ensemble-in-One with multiple counterparts, including the Baseline, which trains the models in a standard way, and three previous ensemble training methods: ${ADP}$ (Pang et al. 2019), ${GAL}$ (Kariyappa and Qureshi 2019), and DVERGE (Yang et al. 2020). We also add the adversarial training (AdvT) method to the comparison.
+
+Training Details. The ensemble models of Baseline, ADP, GAL, and DVERGE are trained with the implementation publicly released by (Yang et al. 2020) ${}^{1}$ . We train the Ensemble-in-One networks for 200 epochs using SGD with momentum 0.9 and weight decay 0.0001. The initial learning rate is 0.1, decayed by 10× at the 100-th and 150-th epochs. When deriving individual models, we fine-tune them for 0-20 epochs using SGD with learning rate 0.001, momentum 0.9, and weight decay 0.0001; the fine-tuning is optional, and its epoch count can be adjusted for a dedicated model. By default, for RGN training we sample 3 paths per iteration to construct the temporary sub-ensemble. The augmentation factor $n$ of each RGB is set to 2, and the PGD-based perturbation strength ${\epsilon }_{d}$ for feature distillation is set to 0.07 with 10 iterative steps and a step size of ${\epsilon }_{d}/{10}$ .
+
+---
+
+${}^{1}$ https://github.com/zjysteven/DVERGE
+
+---
+
+
+
+Figure 4: Contrasting the robustness of Ensemble-in-One with previous ensemble training methods. Left: adversarial accuracy under black-box transfer attack; and right: adversarial accuracy under white-box attack. The number after the slash stands for the number of sub-models within the ensemble. The evaluations include ResNet-20 and VGG-16 over the CIFAR-10 dataset. The distillation perturbation strength of VGG-16-based EIO is set as ${\epsilon }_{d} = {0.03}$ .
+
+Attack Models. We categorize adversarial attacks into black-box transfer attacks and white-box attacks. A white-box attack assumes the adversary has full knowledge of the model parameters and architecture, while a black-box attack assumes the adversary cannot access the target model and can only generate adversarial examples from surrogate models to launch the attacks. For a fair comparison, we adopt exactly the same attack methodologies and surrogate models as DVERGE. For black-box transfer attacks, the attack methods include: (1) PGD with momentum and three random starts (Madry et al. 2017); (2) M-DI ${}^{2}$ -FGSM (Xie et al. 2019); and (3) SGM (Wu et al. 2020). The attacks use varying perturbation strengths, with 100 iterative steps and a step size of $\epsilon /5$ . Besides the cross-entropy loss, we also apply the C&W loss in the attacks. Therefore, there are 3 (surrogate models) $\times 5$ (attack configurations: PGD with three random starts, M-DI ${}^{2}$ -FGSM, and SGM) $\times 2$ (losses) $= {30}$ adversarial versions per sample. For white-box attacks, we apply 50-step PGD with a step size of $\epsilon /5$ and five random starts. Both black-box and white-box adversarial accuracy are reported in an all-or-nothing fashion: a sample is judged correctly classified only if all of its 30 (black-box) or 5 (white-box) adversarial versions are correctly classified by the model. By default, we randomly sample 1000 instances from the test dataset for evaluation. We believe these attacks are powerful enough to assess the robustness of the various models.
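
The all-or-nothing accounting described above can be sketched as follows (names are ours):

```python
import numpy as np

def all_or_nothing_accuracy(correct):
    """`correct[i, j]` is True iff the j-th adversarial version of sample i
    is classified correctly; a sample counts only if ALL its versions are
    correct."""
    return np.all(correct, axis=1).mean()

# 4 samples x 3 adversarial versions (a toy stand-in for the 30 black-box versions).
correct = np.array([[1, 1, 1],
                    [1, 0, 1],
                    [1, 1, 1],
                    [0, 0, 0]], dtype=bool)
print(all_or_nothing_accuracy(correct))  # → 0.5
```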
+
+## Robustness Evaluation
+
+Hyper-parameter Exploration. Recall that three important hyper-parameters are involved in the training procedure: the count of sampled paths $p$ participating in each training iteration, the strength of the feature distillation perturbation ${\epsilon }_{d}$ in Eq. 2, and the augmentation factor $n$ for constructing the RGN, i.e. how many times an operator is repeated to build an RGB. We conduct experiments to investigate the impact of these hyper-parameters on clean accuracy and adversarial robustness.
+
+Fig. 3 (Left) shows the black-box adversarial accuracy under different sampled path counts $p$ per training iteration. As observed, when more paths are sampled, the robustness of the derived individual model improves. The underlying reason is that more paths participating in each iteration allows more paths to be mutually trained, so each path is expected to learn from more diverse vulnerabilities. However, the clean accuracy drops as the path sample count increases, because a single operator has to adapt to diverse paths simultaneously. Moreover, the training time also increases, as the training complexity scales as $\mathcal{O}\left( {p}^{2}\right)$ .
+
+Fig. 3 (Middle) shows the black-box adversarial accuracy under different feature distillation strengths ${\epsilon }_{d}$ . We reach similar conclusions as DVERGE. A larger ${\epsilon }_{d}$ pushes the distilled data ${x}_{{\mathcal{P}}_{i}^{l}}^{\prime }\left( {{x}_{t},{x}_{s}}\right)$ to share a more similar internal representation with ${x}_{t}$ . Since the objective is to reduce the loss of ${\mathcal{P}}_{j}$ on classifying ${x}_{{\mathcal{P}}_{i}^{l}}^{\prime }$ , the larger resulting loss boosts the effectiveness of learning the vulnerability, thereby achieving better robustness. However, the clean accuracy drops as ${\epsilon }_{d}$ increases, and there exists a switching point beyond which further increasing ${\epsilon }_{d}$ yields no additional robustness improvement. The experimental results suggest ${\epsilon }_{d} = {0.07}$ to achieve high robustness and clean accuracy simultaneously.
+
+
+
+Figure 5: Contrasting the robustness of Ensemble-in-One and AdvT with different adversarial perturbation settings. The experiments are implemented on ResNet-20 over CIFAR-10. The "ft-epoch" means the fine-tuning epoch of the derived model. When aligning the clean accuracy, EIO achieves better robustness than AdvT.
+
+Fig. 3 (Right) compares the adversarial accuracy when applying different augmentation factors $n$ for constructing the RGN. Observe that increasing $n$ brings no benefit to either clean accuracy or adversarial accuracy. It stands to reason that augmenting $2 \times$ operators for each RGB already provides sufficient candidate paths. Moreover, increasing $n$ leads to more severe under-convergence during training, because each path has a lower probability of being sampled. Therefore, we suggest an augmentation factor of $n = 2$ for each convolution layer.
+
+Comparison with Ensemble Methods. Fig. 4 shows the overall adversarial accuracy of the models trained by different methods over a wide range of attack perturbation strengths. ResNet-20 and VGG-16 are selected as the base networks to construct the ensembles and the EIO super-networks. The results show that, with our Ensemble-in-One method, an individual model derived from the RGN significantly outperforms the heavy ensembles trained by previous ensemble training methods, achieving higher adversarial accuracy under both black-box and white-box attacks with comparable clean accuracy. These results demonstrate that we successfully train an ensemble within one RGN and improve the robustness of an individual model beyond that of the ensembles, so that the deployment overhead can be substantially reduced.
+
+Comparison with Adversarial Training. AdvT has been demonstrated as a promising approach to enhancing robustness. Prior work attributes the enhancement to the exclusion of non-robust features during AdvT. However, these non-robust features might be useful for classification accuracy, resulting in a trade-off between clean accuracy and robustness. One can adjust the perturbation strength in AdvT to acquire different combinations of clean accuracy and adversarial robustness, as shown in Fig. 5. EIO significantly outperforms AdvT when their clean accuracies are aligned (AdvT w/ $\epsilon = {0.005}$ ), which suggests that EIO learns more useful, robust features while excluding more useless, non-robust features than AdvT.
+
+## Discussion & Future Work
+
+Several points are worth further exploration, which we leave to future work. First, the current implementation of augmenting the RGN is simple: it repeats the convolution layers multiple times. Nevertheless, as observed in Fig. 3 (Right), enlarging the augmentation factor brings no benefit to robustness. Hence, there might be better ways of constructing RGNs that compose stronger randomized networks, e.g. removing some unnecessary RGBs or augmenting with diverse operators instead of simple repetition. Second, although black-box attacks are more prevalent in the real world, defending against white-box attacks is still in demand, as recent research warns of the high risks of exposing private models to the adversary (Hua, Zhang, and Suh 2018; Hu et al. 2020). Randomized multi-path networks can provide promising solutions for alleviating white-box threats: if the adversarial transferability among different paths is impeded, an adversarial example generated from one path will be ineffective against another path, making white-box attacks as difficult as black-box transfer attacks. We believe exploring defensive methods based on randomized multi-path networks is a valuable direction.
+
+## Conclusions
+
+In this work, we propose Ensemble-in-One, a novel approach that constructs a random gated network (RGN) and learns adversarially robust ensembles within it. The method is inherently scalable, as it can ideally instantiate numerous sub-models by sampling different paths within the RGN. By diversifying the vulnerabilities of different paths, Ensemble-in-One efficiently obtains models with higher robustness while reducing the overhead of model training and deployment. The individual model derived from the RGN shows much better robustness than previous ensemble training methods and achieves better trade-offs than adversarial training.
+
+## References
+
+Athalye, A.; Carlini, N.; and Wagner, D. 2018. Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. In International Conference on Machine Learning, 274-283. PMLR.
+
+Bagnall, A.; Bunescu, R.; and Stewart, G. 2017. Training ensembles to detect adversarial examples. arXiv preprint arXiv:1712.04006.
+
+Brendel, W.; Rauber, J.; Kurakin, A.; Papernot, N.; Veliqi, B.; Mohanty, S. P.; Laurent, F.; Salathé, M.; Bethge, M.; Yu, Y.; et al. 2020. Adversarial vision challenge. In The NeurIPS'18 Competition, 129-153. Springer.
+
+Cai, H.; Zhu, L.; and Han, S. 2018. Proxylessnas: Direct neural architecture search on target task and hardware. arXiv preprint arXiv:1812.00332.
+
+Carlini, N.; and Wagner, D. 2017a. Adversarial examples are not easily detected: Bypassing ten detection methods. In Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security, 3-14.
+
+Carlini, N.; and Wagner, D. 2017b. Towards evaluating the robustness of neural networks. In 2017 ieee symposium on security and privacy (sp), 39-57. IEEE.
+
+Dhillon, G. S.; Azizzadenesheli, K.; Lipton, Z. C.; Bernstein, J.; Kossaifi, J.; Khanna, A.; and Anandkumar, A. 2018. Stochastic activation pruning for robust adversarial defense. arXiv preprint arXiv:1803.01442.
+
+Dong, Y.; Liao, F.; Pang, T.; Hu, X.; and Zhu, J. 2017. Discovering adversarial examples with momentum. arXiv preprint arXiv:1710.06081.
+
+Feinman, R.; Curtin, R. R.; Shintre, S.; and Gardner, A. B. 2017. Detecting adversarial samples from artifacts. arXiv preprint arXiv:1703.00410.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2014. Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In Proceedings of the IEEE conference on computer vision and pattern recognition, 770-778.
+
+Hu, X.; Liang, L.; Li, S.; Deng, L.; Zuo, P.; Ji, Y.; Xie, X.; Ding, Y.; Liu, C.; Sherwood, T.; et al. 2020. Deepsniffer: A dnn model extraction framework based on learning architectural hints. In Proceedings of the Twenty-Fifth International Conference on Architectural Support for Programming Languages and Operating Systems, 385-399.
+
+Hua, W.; Zhang, Z.; and Suh, G. E. 2018. Reverse engineering convolutional neural networks through side-channel information leaks. In 2018 55th ACM/ESDA/IEEE Design Automation Conference (DAC), 1-6. IEEE.
+
+Ilyas, A.; Santurkar, S.; Tsipras, D.; Engstrom, L.; Tran, B.; and Madry, A. 2019. Adversarial examples are not bugs, they are features. arXiv preprint arXiv:1905.02175.
+
+Kariyappa, S.; and Qureshi, M. K. 2019. Improving adversarial robustness of ensembles with diversity training. arXiv preprint arXiv:1901.09981.
+
+Krizhevsky, A.; Hinton, G.; et al. 2009. Learning multiple layers of features from tiny images.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2017. Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
+
+Pang, T.; Xu, K.; Du, C.; Chen, N.; and Zhu, J. 2019. Improving adversarial robustness via promoting ensemble diversity. In International Conference on Machine Learning, 4970-4979. PMLR.
+
+Pham, H.; Guan, M.; Zoph, B.; Le, Q.; and Dean, J. 2018. Efficient neural architecture search via parameters sharing. In International Conference on Machine Learning, 4095- 4104. PMLR.
+
+Tramèr, F.; Papernot, N.; Goodfellow, I.; Boneh, D.; and McDaniel, P. 2017. The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453.
+
+Wu, D.; Wang, Y.; Xia, S.-T.; Bailey, J.; and Ma, X. 2020. Skip connections matter: On the transferability of adversarial examples generated with resnets. arXiv preprint arXiv:2002.05990.
+
+Xie, C.; Wang, J.; Zhang, Z.; Ren, Z.; and Yuille, A. 2017. Mitigating adversarial effects through randomization. arXiv preprint arXiv:1711.01991.
+
+Xie, C.; Zhang, Z.; Zhou, Y.; Bai, S.; Wang, J.; Ren, Z.; and Yuille, A. L. 2019. Improving transferability of adversarial examples with input diversity. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 2730-2739.
+
+Yang, H.; Zhang, J.; Dong, H.; Inkawhich, N.; Gardner, A.; Touchet, A.; Wilkes, W.; Berry, H.; and Li, H. 2020. DVERGE: diversifying vulnerabilities for enhanced robust generation of ensembles. arXiv preprint arXiv:2009.14720.
+
+Zhang, H.; Yu, Y.; Jiao, J.; Xing, E.; El Ghaoui, L.; and Jordan, M. 2019. Theoretically principled trade-off between robustness and accuracy. In International Conference on Machine Learning, 7472-7482. PMLR.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/og7CXiEXqpZ/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/og7CXiEXqpZ/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..b918377a3e150993c8b421f847e460d57645171b
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/og7CXiEXqpZ/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,236 @@
+§ ENSEMBLE-IN-ONE: LEARNING ENSEMBLE WITHIN RANDOM GATED NETWORKS FOR ENHANCED ADVERSARIAL ROBUSTNESS
+
+Anonymous authors
+
+§ ABSTRACT
+
+Adversarial attacks threaten modern deep learning systems by crafting adversarial examples with small perturbations to fool convolutional neural networks (CNNs). Ensemble training methods promise better adversarial robustness by diversifying the vulnerabilities among the sub-models, while maintaining accuracy comparable to standard training. Previous practice also demonstrates that enlarging the ensemble can improve robustness. However, existing ensemble methods scale poorly, owing to the rapid complexity increase when including more sub-models in the ensemble. Moreover, it is usually infeasible to train or deploy an ensemble with many sub-models under tight hardware resource budgets and latency requirements. In this work, we propose Ensemble-in-One (EIO), a simple but effective method to enlarge the ensemble within a random gated network (RGN). EIO augments the original model by replacing the parameterized layers with multi-path random gated blocks (RGBs) to construct an RGN. By diversifying the vulnerability of the numerous paths through the super-net, it provides high scalability, because the number of paths within an RGN increases exponentially with the network depth. Our experiments demonstrate that EIO consistently outperforms previous ensemble training methods with even less computational overhead, while achieving better accuracy-robustness trade-offs than adversarial training.
+
+§ INTRODUCTION
+
+With convolutional neural networks (CNNs) becoming ubiquitous, the security and robustness of neural networks are attracting increasing interest. Recent studies find that CNN models are inherently vulnerable to adversarial attacks (Goodfellow, Shlens, and Szegedy 2014), which craft imperceptible perturbations on images, referred to as adversarial examples, to mislead neural network models. Even without accessing the target model, an adversary can still generate adversarial examples from surrogate models and attack the target model by exploiting the adversarial transferability between them.
+
+Such vulnerability of CNN models has spurred extensive research on improving robustness against adversarial attacks. One stream of approaches targets learning robust features for an individual model (Madry et al. 2017; Brendel et al. 2020). Informally, robust features are defined as features that are less sensitive to adversarial perturbations added to the inputs. A representative approach, referred to as adversarial training (Madry et al. 2017), generates adversarial examples online and minimizes the training loss on them. As a result, adversarial training encourages the model to learn features that are less sensitive to adversarial perturbations, thereby alleviating the model's vulnerability. However, such adversarial training methods often sacrifice clean accuracy for enhanced robustness (Zhang et al. 2019), since excluding the non-robust features makes the model less able to distinguish examples with high similarity in the feature space.
+
+Besides improving the robustness of an individual model, another stream of research focuses on forming strong ensembles (Yang et al. 2020; Bagnall, Bunescu, and Stewart 2017; Pang et al. 2019; Kariyappa and Qureshi 2019). Generally speaking, an ensemble is constructed by aggregating multiple sub-models. Intuitively, an ensemble promises better robustness than an individual model because a successful attack needs to mislead the majority of the sub-models rather than just one. While the robustness of an ensemble relies heavily on the diversity of the sub-models, a recent study finds that CNN models trained independently on the same dataset have highly overlapping adversarial subspaces (Tramèr et al. 2017). Therefore, many studies propose ensemble training methods to diversify the sub-models. For example, DVERGE (Yang et al. 2020) distills non-robust features corresponding to each sub-model's vulnerability, then isolates the vulnerabilities of the sub-models via mutual learning, thereby impeding the adversarial transferability among them.
+
+Another insight is that ensembles composed of more sub-models tend to capture greater robustness improvements. Table 1 shows the robustness trend of ensembles trained with various ensemble training methods: robustness improves when more sub-models are included in the ensemble. This drives us to explore whether the trend continues as the ensemble keeps growing. However, existing ensemble construction methods scale poorly because of the rapidly increasing overhead; in particular, with mutual learning, which trains the sub-models in a round-robin manner, the complexity rises as $O\left( {n}^{2}\right)$ .
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+| #sub-models | Baseline | ADP | GAL | DVERGE |
+| --- | --- | --- | --- | --- |
+| 3 | 0.0% / 1.5% | 0.0% / 9.6% | 39.7% / 11.4% | 53.2% / 40.0% |
+| 5 | 0.0% / 2.1% | 0.0% / 11.8% | 32.4% / 31.7% | 57.2% / 48.9% |
+| 8 | 0.0% / 3.2% | 0.0% / 12.0% | 22.4% / 37.0% | 63.6% / 57.9% |
+
+Table 1: Adversarial accuracy of the ensembles trained by different methods, with 3, 5, and 8 sub-models respectively (Yang et al. 2020). The numbers before and after the slash denote black-box adversarial accuracy under perturbation strength 0.03 (around 8/255) and white-box adversarial accuracy under perturbation strength 0.01.
+
+We propose Ensemble-in-One, a novel approach that improves the scalability of ensemble training and introduces a randomness mechanism for enhanced generalization, simultaneously obtaining better robustness and higher efficiency. For a dedicated CNN model, we construct a Random Gated Network (RGN) by substituting each parameterized layer with a Random Gated Block (RGB) on top of the neural architecture. Through this, the network can instantiate numerous sub-models by controlling the gates in each block. Ensemble-in-One substantially reduces the complexity of scaling up the ensemble. In summary, the contributions of this work are as follows:
+
+ * Ensemble-in-One (EIO) is a simple but effective method that learns adversarially robust ensembles within one over-parameterized random gated network. EIO enables us to employ ensemble learning techniques to learn more robust individual models with minimal computational overhead and no extra inference overhead.
+
+ * Extensive experiments demonstrate the effectiveness of EIO. It consistently outperforms previous ensemble training methods with even less computational overhead. Moreover, EIO also achieves better accuracy-robustness trade-offs than the adversarial training method.
+
+§ RELATED WORK
+
+§ ADVERSARIAL ATTACKS AND COUNTERMEASURES.
+
+The inherent vulnerability of CNN models poses challenges to the security of deep learning systems. An adversary can apply an additive perturbation to an original input to generate an adversarial example that induces wrong predictions in CNN models (Goodfellow, Shlens, and Szegedy 2014). Denoting an input as $x$ , the goal of adversarial attacks is to find a perturbation $\delta$ such that ${x}_{adv} = x + \delta$ misleads the model, where $\delta$ satisfies the intensity constraint $\parallel \delta {\parallel }_{p} \leq \epsilon$ . Formally, the adversarial attack aims at maximizing the loss $\mathcal{L}$ of the model with parameters $\theta$ on the input-label pair $\left( {x,y}\right)$ , i.e. $\delta = {\operatorname{argmax}}_{\delta }{\mathcal{L}}_{\theta }\left( {x + \delta ,y}\right)$ , under the constraint that the ${\ell }_{p}$ norm of the perturbation does not exceed the bound $\epsilon$ . Usually, the ${\ell }_{\infty }$ norm (Goodfellow, Shlens, and Szegedy 2014; Madry et al. 2017) of the perturbations is used to measure an attack's effectiveness or a model's robustness. An attack that requires a smaller perturbation to successfully deceive the model is regarded as stronger. Correspondingly, a defense that forces attacks to enlarge the perturbation intensity is regarded as more robust.
+
+Various adversarial attack methods have been investigated to strengthen attack effectiveness. The fast gradient sign method (FGSM) (Goodfellow, Shlens, and Szegedy 2014) exploits the gradient of the loss w.r.t. the input to generate adversarial examples. As improvements, many studies further show that the attack can be strengthened through multi-step projected gradient descent (PGD) (Madry et al. 2017), a random-starting strategy, and a momentum mechanism (Dong et al. 2017). SGM (Wu et al. 2020) further finds that adding weight to the gradients going through the skip connections can make the attacks more effective. Other prevalent attack approaches include the C&W loss (Carlini and Wagner 2017b), M-DI ${}^{2}$ -FGSM (Xie et al. 2019), etc. These attacks provide strong and effective ways to generate adversarial examples, posing a huge threat to real-world deep learning systems.
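To make the multi-step attack concrete, here is a minimal, hypothetical NumPy sketch of an $\ell_\infty$-bounded PGD attack on a toy logistic-regression classifier. The analytic gradient stands in for backpropagation; none of the names or values below come from the paper.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, steps=10):
    """Multi-step PGD under an l_inf bound: ascend the gradient of the
    logistic loss of a toy linear classifier sign(w.x + b), taking signed
    steps and projecting back into the eps-ball around the clean input x."""
    step = eps / 5                                # step-size convention used by the attacks above
    x_adv = x.copy()
    for _ in range(steps):
        margin = y * (w @ x_adv + b)              # loss L = log(1 + exp(-margin))
        grad = -y * w / (1.0 + np.exp(margin))    # analytic dL/dx_adv
        x_adv = x_adv + step * np.sign(grad)      # l_inf ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

w, b = np.array([1.0, -2.0]), 0.0
x, y = np.array([0.5, -0.2]), 1                   # clean score w.x + b = 0.9 > 0
x_adv = pgd_attack(x, y, w, b)
print(w @ x_adv + b < w @ x + b)                  # True: the margin shrinks
```

The projection step is what distinguishes PGD from repeated FGSM: each iterate is clipped back into the allowed perturbation ball, so the final example always satisfies the intensity constraint.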
+
+To improve the robustness of CNN systems, there are also extensive countermeasures against adversarial attacks. One active research direction targets improving the robustness of individual models. Adversarial training (Madry et al. 2017) optimizes the model on the adversarial examples generated at every step of the training stage. Therefore, the optimized model tends to drop non-robust features in order to converge better on the adversarial data. However, adversarial training encourages the model to fit the adversarial examples, thereby reducing generalization on clean data and causing significant degradation of the clean accuracy.
+
+§ TEST-TIME RANDOMNESS FOR ADVERSARIAL DEFENSE
+
+Besides the aforementioned training techniques, some studies introduce test-time randomness to improve robustness. Feinman et al. (2017) utilize the uncertainty measure in dropout networks to detect adversarial examples. Dhillon et al. (2018) and Xie et al. (2017) incorporate layer-wise weighted dropout and random input transformations during test time to improve robustness. Test-time randomness is found to be effective in increasing the required distortion, since it makes generating white-box adversarial examples almost as difficult as generating transferable black-box ones (Carlini and Wagner 2017a). Nevertheless, test-time randomness increases the inference cost and can be circumvented to some extent with the expectation-over-transformation technique (Athalye, Carlini, and Wagner 2018).
+
+§ ENSEMBLE TRAINING FOR ADVERSARIAL DEFENSE.
+
+Besides improving the robustness of individual models, another recent research direction investigates the robustness of model ensembles in which multiple sub-models work together. The basic idea is that multiple sub-models can provide diverse decisions: ensemble methods combine multiple weak models to jointly make decisions, thereby forming a stronger whole. However, it has been demonstrated that independently trained models tend to capture similar features, which provides little diversity among them (Kariyappa and Qureshi 2019).
+
+
+Figure 1: Normal ensemble training of multiple sub-models (Left) and the proposed ensemble-in-one training within a random gated network (Right). By selecting the paths along augmented layers, the ensemble-in-one network can instantiate ${n}^{L}$ sub-models, where $n$ represents the augmentation factor of the multi-gated block for each augmented layer and $L$ represents the number of augmented layers in the network.
+
+Therefore, several studies propose ensemble training methods to fully diversify the sub-models and improve ensemble robustness. For example, Pang et al. treat the distribution of output predictions as a diversity measurement and propose an adaptive diversity promoting (ADP) regularizer (Pang et al. 2019) to diversify the non-max predictions of sub-models. Kariyappa et al. regard the gradients w.r.t. the inputs as a discriminator of different models, and propose a gradient alignment loss (GAL) (Kariyappa and Qureshi 2019) which takes the cosine similarity of the gradients as a criterion to train the sub-models. The very recent work DVERGE (Yang et al. 2020) claims that the similar non-robust features captured by the sub-models cause high adversarial transferability among them. Therefore, the authors exploit non-robust feature distillation and adopt mutual learning to diversify and isolate the vulnerabilities among the sub-models, such that the within-ensemble transferability is highly impeded. However, as mentioned before, such ensemble methods are overwhelmed by the fast-increasing overhead when scaling up the ensemble. For example, DVERGE takes 11 hours to train an ensemble with three sub-models but needs approximately 50 hours when the sub-model count increases to eight. Therefore, a more efficient ensemble construction method is needed to tackle the scaling problem.
+
+§ ENSEMBLE-IN-ONE
+
+§ BASIC MOTIVATION
+
+The conventional way to construct ensembles is to simply aggregate multiple sub-models by averaging their predictions, which is inefficient and hard to scale up. An intuitive way to enhance the scalability of ensemble construction is to introduce an ensemble for each layer in the network. As shown in Fig.1, we can build a dynamic network by augmenting each parameterized layer with an $n$ -path gated block. Then, by selecting the paths along the augmented layers, the dynamic network can ideally instantiate ${n}^{L}$ varied sub-models. Taking ResNet-20 as an example, by replacing each convolution layer (ignoring the skip connection branch) with a two-path gated module, the overall path count approaches ${2}^{19} = {524288}$ . This augmentation approximates training a very large ensemble of sub-models. Then, through vulnerability-diversification mutual learning, each path tends to capture better robustness. Following this idea, we propose Ensemble-in-One to further improve the robustness of both individual models and ensembles.
+
+§ CONSTRUCTION OF THE RANDOM GATED NETWORK
+
+Denote a candidate neural network as $\mathcal{N}\left( {{o}_{1},{o}_{2},\ldots ,{o}_{m}}\right)$ , where ${o}_{i}$ represents an operator in the network. To transform the original network into a random gated network (RGN), we first extract the neural architecture to obtain the connection topology and layer types. On top of that, we replace each parameterized layer (mainly convolutional layers, optionally followed by a batch normalization layer) with a random gated block (RGB). As shown in Fig. 2, each RGB simply repeats the original layer $n$ times and leverages binary gates with uniform probabilities to control whether each corresponding sub-layer is open or muted. These repeated sub-layers have distinct weight parameters. We denote the RGN as $\mathcal{N}\left( {{d}_{1},{d}_{2},\ldots ,{d}_{m}}\right)$ , where ${d}_{i} = \left( {{o}_{i1},\ldots ,{o}_{in}}\right)$ . Let ${g}_{i}$ be the gate information in the ${i}_{\text{ th }}$ RGB; then a specific path derived from the RGN can be expressed as $\mathcal{P} = \left( {{g}_{1} \cdot {d}_{1},{g}_{2} \cdot {d}_{2},\ldots ,{g}_{m} \cdot {d}_{m}}\right)$ .
+
+For each RGB, when performing the computation, only one of the $n$ gates is opened at a time, and the others are temporarily muted. Thereby, only one path of activations is kept in memory during training, which reduces the memory occupation of training an RGN to the same level as training an individual model. Moreover, to ensure that all paths can be equally sampled and trained, each gate in an RGB is chosen with identical probability, i.e. $1/n$ if each RGB consists of $n$ sub-operators. Therefore, the binary gate function can be expressed as:
+
+$$
+{g}_{i} = \left\{ \begin{matrix} \left\lbrack {1,0,\ldots ,0}\right\rbrack & \text{ with probability }1/n, \\ \left\lbrack {0,1,\ldots ,0}\right\rbrack & \text{ with probability }1/n, \\ \ldots & \\ \left\lbrack {0,0,\ldots ,1}\right\rbrack & \text{ with probability }1/n. \end{matrix}\right. \tag{1}
+$$
+
+An RGN is analogous to the super network in parameter-sharing neural architecture search, and the forward process of an RGN is similar to evaluating a sub-architecture (Pham et al. 2018; Cai, Zhu, and Han 2018). Compared to conventional ensemble training methods, our method scales up the ensemble more easily. It only incurs $n \times$ memory occupation for weight storage, while still keeping the same memory requirement for activations as an individual model.
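The gating mechanism can be illustrated with a small, self-contained NumPy sketch. This is an illustrative toy in which dense layers stand in for convolutions; it is not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class RandomGatedBlock:
    """Holds n weight copies of one layer; each forward pass opens exactly
    one gate, chosen uniformly (probability 1/n, as in Eq. 1), and runs only
    that sub-layer, so a single path of activations lives in memory."""
    def __init__(self, in_dim, out_dim, n=2):
        self.weights = [rng.standard_normal((out_dim, in_dim)) for _ in range(n)]
        self.n = n

    def forward(self, x, gate=None):
        if gate is None:
            gate = int(rng.integers(self.n))  # sample the one-hot gate uniformly
        return self.weights[gate] @ x, gate   # only the selected sub-layer runs

# Stacking L such blocks yields an RGN that can instantiate n**L paths.
blocks = [RandomGatedBlock(4, 4, n=2) for _ in range(3)]
x, path = rng.standard_normal(4), []
for blk in blocks:
    x, g = blk.forward(x)
    path.append(g)
print(len(path), 2 ** len(path))  # 3 gates chosen; 8 possible paths
```

Fixing the gate vector `path` reproduces one deterministic sub-model, which is exactly how a path $\mathcal{P} = (g_1 \cdot d_1, \ldots, g_m \cdot d_m)$ is selected.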
+
+
+Figure 2: The construction of a random gated network based on random gated blocks. The forward propagation selects one path to let the input pass. Correspondingly, the gradients also propagate backward along the same path.
+
+§ LEARNING ENSEMBLE IN ONE
+
+The goal of learning ensemble-in-one is to encourage vulnerability diversity across all the paths within the RGN by having them mutually learn from each other. Let ${\mathcal{P}}_{i}$ and ${\mathcal{P}}_{j}$ be two different paths, where we define two paths as different when at least one of their gates differs. To diversify the vulnerabilities, we first need to distill the non-robust features of the paths so that the optimization process can isolate them. We adopt the same non-robust feature distillation strategy as previous work (Ilyas et al. 2019; Yang et al. 2020). Consider two randomly-sampled independent input-label pairs $\left( {{x}_{t},{y}_{t}}\right)$ and $\left( {{x}_{s},{y}_{s}}\right)$ from the training dataset; the feature of ${x}_{t}$ distilled with respect to ${x}_{s}$ by the ${l}_{\text{ th }}$ layer of path ${\mathcal{P}}_{i}$ is obtained by:
+
+$$
+{x}_{{\mathcal{P}}_{i}^{l}}^{\prime }\left( {{x}_{t},{x}_{s}}\right) = {\operatorname{argmin}}_{z}{\begin{Vmatrix}{f}_{{\mathcal{P}}_{i}}^{l}\left( z\right) - {f}_{{\mathcal{P}}_{i}}^{l}\left( {x}_{t}\right) \end{Vmatrix}}^{2}, \tag{2}
+$$
+
+subject to ${\begin{Vmatrix}z - {x}_{s}\end{Vmatrix}}_{\infty } \leq {\epsilon }_{d}$ . Such feature distillation aims to construct a sample ${x}_{{\mathcal{P}}_{i}^{l}}^{\prime }$ by adding perturbations to ${x}_{s}$ so that the response of ${x}_{{\mathcal{P}}_{i}^{l}}^{\prime }$ in the ${l}_{\text{ th }}$ layer of ${\mathcal{P}}_{i}$ is similar to that of ${x}_{t}$ , while the two inputs ${x}_{t}$ and ${x}_{s}$ are completely different and independent. This exposes the vulnerability of path ${\mathcal{P}}_{i}$ in classifying ${x}_{s}$ . Therefore, another path ${\mathcal{P}}_{j}$ can learn on the distilled data to classify them correctly and thus circumvent the vulnerability. The optimization objective for path ${\mathcal{P}}_{j}$ is to minimize:
+
+$$
+{\mathbb{E}}_{\left( {{x}_{t},{y}_{t}}\right) ,\left( {{x}_{s},{y}_{s}}\right) ,l}{\mathcal{L}}_{{f}_{{\mathcal{P}}_{j}}}\left( {{x}_{{\mathcal{P}}_{i}^{l}}^{\prime }\left( {{x}_{t},{x}_{s}}\right) ,{y}_{s}}\right) . \tag{3}
+$$
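As a concrete, hypothetical illustration, the PGD-style approximation of Eq. 2 can be sketched in NumPy with a linear feature map $f(z) = Wz$ standing in for a real sub-network layer, using the analytic gradient of the squared feature distance. All names and values here are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)

def distill_feature(W, x_t, x_s, eps_d=0.07, steps=10):
    """PGD approximation of Eq. 2 for a toy linear feature map f(z) = W z:
    find z inside the l_inf ball of radius eps_d around x_s whose features
    match those of the unrelated input x_t."""
    step = eps_d / steps
    z = x_s.copy()
    for _ in range(steps):
        grad = 2 * W.T @ (W @ z - W @ x_t)        # gradient of ||f(z) - f(x_t)||^2
        z = z - step * np.sign(grad)              # signed descent step
        z = np.clip(z, x_s - eps_d, x_s + eps_d)  # stay visually close to x_s
    return z

W = rng.standard_normal((8, 16))
x_t, x_s = rng.standard_normal(16), rng.standard_normal(16)
z = distill_feature(W, x_t, x_s)
before = np.linalg.norm(W @ x_s - W @ x_t)
after = np.linalg.norm(W @ z - W @ x_t)
print(after < before)  # True: features of z moved toward those of x_t
```

The resulting `z` looks like $x_s$ (small $\ell_\infty$ distance) but responds like $x_t$ in feature space, which is exactly the mismatch that exposes a path's non-robust features.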
+
+As it is desired that each path can learn from the vulnerabilities of all the other paths, the objective of training the ensemble-in-one RGN is to minimize:
+
+$$
+\mathop{\sum }\limits_{{\forall {\mathcal{P}}_{j} \in \mathcal{N}}}{\mathbb{E}}_{\left( {{x}_{t},{y}_{t}}\right) ,\left( {{x}_{s},{y}_{s}}\right) ,l}\mathop{\sum }\limits_{{\forall {\mathcal{P}}_{i} \in \mathcal{N},i \neq j}}{\mathcal{L}}_{{f}_{{\mathcal{P}}_{j}}}\left( {{x}_{{\mathcal{P}}_{i}^{l}}^{\prime }\left( {{x}_{t},{x}_{s}}\right) ,{y}_{s}}\right) , \tag{4}
+$$
+
+where $\mathcal{N}$ is the set of all paths in the RGN. Since it is obviously infeasible to involve all paths in one training iteration, we randomly sample a certain number of paths by stochastically setting the binary gates according to Eq. 1. We denote the number of paths sampled in each iteration as $p$ . The selected paths temporarily form a subset of the RGN, referred to as $\mathcal{S}$ . The paths in $\mathcal{S}$ keep changing throughout the whole training process, such that all paths have equal opportunities to be trained.
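This sampling step can be sketched as follows, with gate vectors represented as tuples (the names are illustrative, not from the paper's code):

```python
import random

random.seed(0)

def sample_distinct_paths(p, n, L):
    """Draw p gate vectors from an RGN with L gated blocks of n gates each,
    rejecting duplicates so that every pair of sampled paths differs in at
    least one gate."""
    paths = set()
    while len(paths) < p:
        paths.add(tuple(random.randrange(n) for _ in range(L)))
    return sorted(paths)

S = sample_distinct_paths(p=3, n=2, L=19)  # ResNet-20-style RGN: 2**19 paths
print(len(S), len(set(S)))  # 3 3 -> the temporary sub-ensemble S has 3 distinct paths
```

Because duplicates are rejected, the temporary subset $\mathcal{S}$ behaves like a small ensemble of $p$ different sub-models drawn from the ${n}^{L}$ available paths.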
+
+Algorithm 1: Training process for learning Ensemble-in-One
+
+Require: Path samples per iteration $p$
+
+Require: Random Gated Network $\mathcal{N}$ with $L$ parameterized layers
+
+Require: Pre-training epoch ${E}_{w}$ , training epoch $E$ , and data batches ${B}_{d}$
+
+Require: Optimization loss $\mathcal{L}$ , learning rate ${lr}$
+
+Ensure: Trained Ensemble-in-One model
+
+ #pre-training of $\mathcal{N}$
+
+ for $\mathrm{e} = 1,2,\ldots ,{E}_{w}$ do
+
+ for $\mathrm{b} = 1,2,\ldots ,{B}_{d}$ do
+
+ Random Sample Path ${\mathcal{P}}_{i}$ from $\mathcal{N}$
+
+ Train ${\mathcal{P}}_{i}$ in batched data
+
+ end for
+
+ end for
+
+ #learning vulnerability diversity for $\mathcal{N}$
+
+ for $\mathrm{e} = 1,2,\ldots ,E$ do
+
+ for $\mathrm{b} = 1,2,\ldots ,{B}_{d}$ do
+
+ Random sample $l \in \left\lbrack {1,L}\right\rbrack$
+
+ #randomly sample $p$ paths
+
+ $\mathcal{S} = \left\lbrack {{\mathcal{P}}_{1},{\mathcal{P}}_{2},\ldots ,{\mathcal{P}}_{p}}\right\rbrack$ , s.t. $\forall i \neq j,\exists k \in \left\lbrack {1,l}\right\rbrack$ s.t. ${\mathcal{P}}_{i}\left\lbrack k\right\rbrack \neq {\mathcal{P}}_{j}\left\lbrack k\right\rbrack$
+
+ Get data $\left( {{X}_{t},{Y}_{t}}\right) ,\left( {{X}_{s},{Y}_{s}}\right) \leftarrow D$
+
+ #Get distilled data
+
+ for $\mathrm{i} = 1,2,\ldots ,p$ do
+
+ ${X}_{i}^{\prime } = {x}_{{\mathcal{P}}_{i}^{l}}^{\prime }\left( {{X}_{t},{X}_{s}}\right)$
+
+ end for
+
+ ${\nabla }_{\mathcal{N}} \leftarrow 0$
+
+ for $\mathrm{i} = 1,2,\ldots ,p$ do
+
+ ${\nabla }_{{\mathcal{P}}_{i}} = \nabla \left( {\mathop{\sum }\limits_{{j \neq i}}{\mathcal{L}}_{{f}_{{\mathcal{P}}_{i}}}\left( {{X}_{j}^{\prime },{Y}_{s}}\right) }\right)$
+
+ ${\nabla }_{\mathcal{N}} = {\nabla }_{\mathcal{N}} + {\nabla }_{{\mathcal{P}}_{i}}$
+
+ end for
+
+ $\mathcal{N} = \mathcal{N} - {lr} * {\nabla }_{\mathcal{N}}$
+
+ end for
+
+ end for
+
+
+The training process of the RGN is summarized by the pseudo-code in Algorithm 1. Before starting vulnerability-diversification training, we pre-train the RGN with standard training settings to give it basic capabilities. This process is simple: in each iteration, a random path is sampled and trained on clean data. Then, for each data batch, vulnerability diversification proceeds in three basic steps. First, randomly sample $p$ paths to be involved in the iteration. Note that the sampled paths should be varied, i.e. if the distilling layer is set to $l$ , for any ${\mathcal{P}}_{i},{\mathcal{P}}_{j}$ in $\mathcal{S}$ , there must be at least one different gate among the top $l$ gates, i.e. $\exists k \in \left\lbrack {1,l}\right\rbrack$ , s.t. ${\mathcal{P}}_{i}\left\lbrack k\right\rbrack \neq {\mathcal{P}}_{j}\left\lbrack k\right\rbrack$ . Second, distill the vulnerable features of the sampled paths according to Eq. 2. The distillation process is the same as proposed in DVERGE, applying a PGD scheme to approximate the optimal perturbations. Third, mutually train each path with the distilled data from the other paths in a round-robin manner. Because the paths unavoidably share a proportion of weights owing to the weight-sharing mechanism of the super-net, the gradients of the weights are not applied until all sampled paths have been included.
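The deferred update in the third step can be sketched as a toy NumPy snippet, where a single shared weight vector stands in for the super-net parameters (an illustrative assumption, not the actual training code):

```python
import numpy as np

def mutual_update(shared_w, per_path_grads, lr=0.1):
    """Because sampled paths share weights in the super-net, gradients from
    all p paths are accumulated first and the shared parameters are updated
    only once, after every sampled path has contributed its gradient."""
    total = np.zeros_like(shared_w)
    for g in per_path_grads:
        total += g
    return shared_w - lr * total

w = np.ones(4)
grads = [np.full(4, 0.5), np.full(4, -0.5), np.full(4, 1.0)]  # p = 3 sampled paths
w_new = mutual_update(w, grads)
print(w_new)  # [0.9 0.9 0.9 0.9]
```

Updating only after accumulation avoids the inconsistency of one path's update invalidating the gradients already computed for another path that shares the same weights.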
+
+
+Figure 3: Investigation on the hyper-parameters involved in the Ensemble-in-One construction and training. All these experiments are implemented on ResNet-20 over CIFAR-10 dataset. Left: The black-box adversarial accuracy under different sample count $p$ per iteration; Middle: The black-box adversarial accuracy under different distillation perturbation ${\epsilon }_{d}$ ; and Right: the adversarial accuracy under different augmentation factor $n$ .
+
+§ MODEL DERIVATION AND DEPLOYMENT
+
+Once the training of the RGN is finished, we can derive and deploy the model in two ways. One way is to deploy the entire RGN; then, at inference time, the gates throughout the network are randomly selected to process each input. The advantage is that the computation is randomized, which may be beneficial for robustness under white-box attacks, because the transferability among different paths was impeded during diversity training. However, the disadvantage is that the accuracy is unstable owing to the dynamic choice of inference path, with fluctuations reaching 1-2 percentage points.
+
+Another way is to derive individual models from the RGN. By sampling a random path and eliminating the other redundant modules, an individual model can be rolled out. We can also sample multiple paths and derive multiple models to combine into an ensemble. Deploying models in this way ensures stable predictions because the randomness is eliminated. In addition, the derived models can be slightly fine-tuned with a small learning rate for a few epochs to compensate for under-convergence, since the probability of each specific path being sampled is relatively low and the RGN training process thus cannot fully train every path. In our implementation, we use the latter method to derive an individual model for deployment.
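Deriving a path amounts to fixing one gate per block and discarding the unused copies; a schematic sketch with placeholder weight names (hypothetical, not the paper's code):

```python
def derive_individual(rgn_blocks, gates):
    """Roll out a single path: keep only the sub-layer each gate selects,
    dropping the redundant copies, to obtain a plain deterministic model."""
    return [copies[g] for copies, g in zip(rgn_blocks, gates)]

# 3 RGBs with n = 2 weight copies each, named for readability
rgn = [[f"w{l}_{i}" for i in range(2)] for l in range(3)]
model = derive_individual(rgn, gates=[0, 1, 0])
print(model)  # ['w0_0', 'w1_1', 'w2_0']
```

The derived model has exactly the size and inference cost of the original (non-gated) network, which is why deployment overhead does not grow with the ensemble.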
+
+§ EXPERIMENTAL RESULTS
+
+§ EXPERIMENT SETTINGS
+
+Benchmark. The experiments are conducted on the ResNet-20 (He et al. 2016) and VGG-16 networks with the CIFAR dataset (Krizhevsky, Hinton et al. 2009). Specifically, we construct the ResNet-20 and VGG-16 based RGNs by substituting each convolution layer with an $n$ -path RGB (by default $n = 2$ ). Overall, there are 19 RGBs for ResNet-20 (covering the 19 convolution layers in the straight-through branch) and 14 RGBs for VGG-16 (covering its 14 convolution layers). To evaluate the effectiveness of our method, we compare Ensemble-in-One with multiple counterparts, including the Baseline which trains the models in a standard way and three previous ensemble training methods: ${ADP}$ (Pang et al. 2019), ${GAL}$ (Kariyappa and Qureshi 2019), and DVERGE (Yang et al. 2020). Meanwhile, we also add the adversarial training (AdvT) method to the comparison.
+
+Training Details. The trained ensemble models of Baseline, ADP, GAL, and DVERGE follow the implementation publicly released in (Yang et al. 2020) ${}^{1}$ . We train the Ensemble-in-One networks for 200 epochs using SGD with momentum 0.9 and weight decay 0.0001. The initial learning rate is 0.1, decayed by 10x at the 100-th and 150-th epochs respectively. When deriving the individual models, we fine-tune the derived models for 0-20 epochs using SGD with learning rate 0.001, momentum 0.9, and weight decay 0.0001. Note that the fine-tuning process is optional, and the number of epochs can be adjusted for a dedicated model. By default, for RGN training, we sample 3 paths to construct a temporary sub-ensemble per iteration. The augmentation factor $n$ for each RGB is set to 2, and the PGD-based perturbation strength ${\epsilon }_{d}$ for feature distillation is set to 0.07 with 10 iterative steps and a step size of ${\epsilon }_{d}/{10}$ .
+
+${}^{1}$ https://github.com/zjysteven/DVERGE
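The step schedule described in the training details can be written as a small helper (a sketch of the stated settings, not the authors' code):

```python
def lr_at_epoch(epoch, base_lr=0.1):
    """Initial learning rate 0.1, decayed by 10x at epochs 100 and 150."""
    if epoch < 100:
        return base_lr
    if epoch < 150:
        return base_lr / 10
    return base_lr / 100

# Learning rate at the boundaries of the 200-epoch run
schedule = [lr_at_epoch(e) for e in (0, 99, 100, 149, 150, 199)]
print(schedule)
```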
+
+
+Figure 4: Contrasting the robustness of Ensemble-in-One with previous ensemble training methods. Left: adversarial accuracy under black-box transfer attack; and right: adversarial accuracy under white-box attack. The number after the slash stands for the number of sub-models within the ensemble. The evaluations include ResNet-20 and VGG-16 over the CIFAR-10 dataset. The distillation perturbation strength of VGG-16-based EIO is set as ${\epsilon }_{d} = {0.03}$ .
+
+Attack Models. We categorize the adversarial attacks into black-box transfer attacks and white-box attacks. The white-box attack assumes the adversary has full knowledge of the model parameters and architecture, while the black-box attack assumes the adversary cannot access the target model and can only generate adversarial examples from surrogate models. For a fair comparison, we adopt exactly the same attack methodologies and surrogate models as DVERGE to evaluate robustness. For black-box transfer attacks, the involved attack methods include: (1) PGD with momentum and three random starts (Madry et al. 2017); (2) M-DI ${}^{2}$ -FGSM (Xie et al. 2019); and (3) SGM (Wu et al. 2020). The attacks use different perturbation strengths, and the iterative steps are set to 100 with a step size of $\epsilon /5$ . Besides the cross-entropy loss, we also apply the C&W loss in combination with the attacks. Therefore, there are 3 (surrogate models) $\times 5$ (attack methods: PGD with three random starts, M-DI ${}^{2}$ -FGSM, and SGM) $\times 2$ (losses) $= {30}$ adversarial versions. For white-box attacks, we apply 50-step PGD with a step size of $\epsilon /5$ and five random starts. Both the black-box and white-box adversarial accuracies are reported in an all-or-nothing fashion: a sample is judged to be correctly classified only if all of its 30 (black-box) or 5 (white-box) adversarial versions are correctly classified by the model. By default, we randomly sample 1000 instances from the test dataset for evaluation. We believe these attacks are powerful enough to identify the robustness of the various models.
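The all-or-nothing metric reduces to a simple aggregation over per-version correctness flags; a NumPy sketch with made-up data:

```python
import numpy as np

def robust_accuracy(correct):
    """All-or-nothing robust accuracy: a sample counts as correct only if
    ALL of its adversarial versions (30 black-box or 5 white-box) are
    correctly classified. `correct` is (num_samples, num_versions) booleans."""
    return correct.all(axis=1).mean()

# 4 samples x 5 adversarial versions each (illustrative values)
correct = np.array([
    [1, 1, 1, 1, 1],   # survives every version -> counted
    [1, 1, 1, 1, 0],   # one version fools the model -> not counted
    [1, 1, 1, 1, 1],
    [0, 0, 1, 0, 1],
], dtype=bool)
print(robust_accuracy(correct))  # 0.5
```

This convention makes the reported numbers conservative: a model gets credit for a sample only when no variant of the attack succeeds on it.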
+
+§ ROBUSTNESS EVALUATION
+
+Hyper-parameter Exploration. Recall that three important hyper-parameters are involved in the training procedure: the count of sampled paths $p$ participating in each training iteration, the strength of the feature distillation perturbation ${\epsilon }_{d}$ as illustrated in Eq. 2, and the augmentation factor $n$ for constructing the RGN, i.e. how many times an operator is repeated to build an RGB. We conduct experiments to investigate the impact of these hyper-parameters on clean accuracy and adversarial robustness.
+
+Fig. 3 (Left) shows the curves of black-box adversarial accuracy under different sampled path counts $p$ per training iteration. As observed, when the number of sampled paths increases, the robustness of the derived individual model also improves. The underlying reason is that more paths participating in each iteration allow more paths to be mutually trained, so each path is expected to learn from more diverse vulnerabilities. However, the clean accuracy drops as the path sample count increases, because a single operator has to adapt to diverse paths simultaneously. Moreover, the training time also increases, as the training complexity is $\mathcal{O}\left( {p}^{2}\right)$ .
+
+Fig. 3 (Middle) shows the curves of black-box adversarial accuracy under different feature distillation strengths ${\epsilon }_{d}$ . We reach similar conclusions as presented in DVERGE. A larger ${\epsilon }_{d}$ pushes the distilled data ${x}_{{\mathcal{P}}_{i}^{l}}^{\prime }\left( {{x}_{t},{x}_{s}}\right)$ to share a more similar internal representation with ${x}_{t}$ . Since the objective is to reduce the loss of ${\mathcal{P}}_{j}$ in classifying ${x}_{{\mathcal{P}}_{i}^{l}}^{\prime }$ , the larger loss boosts the effectiveness of learning the vulnerability, thereby achieving better robustness. However, we also find that the clean accuracy drops as ${\epsilon }_{d}$ increases, and there exists a turning point beyond which further increasing ${\epsilon }_{d}$ yields no additional robustness. The experimental results suggest ${\epsilon }_{d} = {0.07}$ to achieve higher robustness and clean accuracy simultaneously.
+
+
+Figure 5: Contrasting the robustness of Ensemble-in-One and AdvT with different adversarial perturbation settings. The experiments are implemented on ResNet-20 over CIFAR-10. The "ft-epoch" means the fine-tuning epoch of the derived model. When aligning the clean accuracy, EIO achieves better robustness than AdvT.
+
+Fig. 3 (Right) compares the adversarial accuracy when applying different augmentation factors $n$ for constructing the RGN. Observe that increasing the factor $n$ brings no benefit to either the clean accuracy or the adversarial accuracy. It stands to reason that augmenting $2 \times$ operators for each RGB already provides sufficient candidate paths. Moreover, increasing $n$ leads to more severe under-convergence during training because each path has a lower probability of being sampled. Therefore, we suggest setting the augmentation factor to $n = 2$ for each convolution layer.
+
+Comparison with Ensemble Methods. Fig. 4 shows the overall adversarial accuracy of the models trained by different methods over a wide range of attack perturbation strengths. ResNet-20 and VGG-16 are selected as the basic networks to construct the ensembles and the EIO super-networks. The results show that, through our Ensemble-in-One method, an individual model derived from the RGN can significantly outperform the heavy ensembles trained by previous ensemble training methods, with higher adversarial accuracy under both black-box and white-box attacks and comparable clean accuracy. These results demonstrate that we successfully train an ensemble within one RGN and improve the robustness of an individual model beyond that of the ensembles, so that the deployment overhead can be substantially reduced.
+
+Comparison with Adversarial Training. AdvT has been demonstrated as a promising approach to enhancing robustness. Prior work attributes the enhancement to the exclusion of non-robust features during AdvT. However, these non-robust features might be useful for classification accuracy, resulting in a trade-off between clean accuracy and robustness. One can adjust the perturbation strength in AdvT to acquire different combinations of clean accuracy and adversarial robustness, as shown in Fig. 5. EIO significantly outperforms AdvT when their clean accuracies are aligned (AdvT w/ $\epsilon = {0.005}$ ), which suggests that EIO learns more useful, robust features while excluding more useless, non-robust features than AdvT.
+
+§ DISCUSSION & FUTURE WORK
+
+Several points are worthy of further exploration, which we leave to future work. First, the current implementation of augmenting the RGN is simple: it repeats the convolution layers multiple times. Nevertheless, as observed in Fig. 3 (Right), enlarging the augmentation factor brings no benefit to robustness. Hence, there might be better ways of constructing RGNs that compose stronger randomized networks, e.g. removing some unnecessary RGBs or augmenting with diverse operators instead of simple repetition. Second, although black-box attacks are more prevalent in the real world, defending against white-box attacks is still in demand because recent research warns of the high risks of exposing private models to the adversary (Hua, Zhang, and Suh 2018; Hu et al. 2020). Randomized multi-path networks can provide promising solutions to alleviating white-box threats. If the adversarial transferability among different paths can be impeded, an adversarial example generated from one path will be ineffective for another path, making white-box attacks as difficult as black-box transfer attacks. We believe it is a valuable direction to explore defense methods based on randomized multi-path networks.
+
+§ CONCLUSIONS
+
+In this work, we propose Ensemble-in-One, a novel approach that constructs a random gated network (RGN) and learns adversarially robust ensembles within it. The method is inherently scalable: it can ideally instantiate numerous sub-models by sampling different paths within the RGN. By diversifying the vulnerabilities of different paths, the Ensemble-in-One method efficiently obtains models with higher robustness while reducing the overhead of model training and deployment. The individual model derived from the RGN shows much better robustness than previous ensemble training methods and achieves better trade-offs than adversarial training.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/p4SrFydwO5/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/p4SrFydwO5/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..0e08a1db9924b35abf38542d49c499dbf20a9c23
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/p4SrFydwO5/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,763 @@
+# Demystifying the Adversarial Robustness of Random Transformation Defenses
+
+Anonymized Authors
+
+${}^{1}$ Anonymized Institution
+
+## Abstract
+
+Current machine learning models suffer from evasion attacks (i.e., adversarial examples), raising concerns in many security-sensitive settings such as autonomous vehicles. While many countermeasures have shown promising results, only a few withstand rigorous evaluation from more recent attacks. Recently, the use of random transformations (RT) has shown impressive results, particularly BaRT (Raff et al. 2019) on ImageNet. However, this type of defense has not been rigorously evaluated, and its robustness properties are poorly understood. These models are also stochastic in nature, making evaluation more challenging and rendering many proposed attacks on deterministic models inapplicable. In this paper, we attempt to construct the strongest possible RT defense through the informed selection of transformations and the use of Bayesian optimization to tune their parameters. Furthermore, we attempt to identify the strongest possible attack to evaluate our RT defense. Our new attack vastly outperforms the naive attack, reducing the accuracy by ${83}\%$ , while the baseline EoT attack can only achieve a ${19}\%$ reduction, a ${4.3} \times$ improvement. This indicates that the RT defense on the Imagenette dataset (a ten-class subset of ImageNet) is not robust against adversarial examples. Extending the study further, we use our new attack to adversarially train the RT defense (called AdvRT), with the intuition that a stronger attack used during adversarial training will lead to more robust models. However, the attack is still not sufficiently strong, and thus the AdvRT model is no more robust than its RT counterpart. The outcomes are slightly different for the CIFAR-10 dataset, where both RT and AdvRT models show some level of robustness, but they are still outperformed by robust deterministic models. In the process of formulating our defense and attack, we perform several ablation studies and uncover insights that we hope will broadly benefit scientific communities that study stochastic neural networks and their robustness properties.
+
+## 1 Introduction
+
+Today, deep neural networks are widely deployed in safety-critical settings such as autonomous driving and cybersecurity. Despite their effectiveness at solving a wide range of challenging problems, they are known to have a major vulnerability. Tiny crafted perturbations added to inputs (so-called adversarial examples) can arbitrarily manipulate the outputs of these large models, posing a threat to the safety and privacy of the millions of people who rely on existing ML systems. The importance of this problem has drawn substantial attention, and yet we have not devised a concrete countermeasure as a research community.
+
+Adversarial training (Madry et al. 2018) has been the foremost approach for defending against adversarial examples. While adversarial training provides increased robustness, it results in a loss of accuracy on benign inputs. Recently, a promising line of defenses against adversarial examples has emerged. These defenses randomize either the model parameters or the inputs themselves (Lecuyer et al. 2019; He, Rakin, and Fan 2019; Raff et al. 2019; Liu et al. 2019; Xie et al. 2018; Zhang and Liang 2019; Bender et al. 2020; Liu et al. 2018; Cohen, Rosenfeld, and Kolter 2019; Dhillon et al. 2018; Guo et al. 2018). Introducing randomness into the model can be thought of as a form of smoothing that removes sinuous portions of the decision boundary where adversarial examples frequently lie (He, Li, and Song 2018). Among these randomization approaches, Raff et al. (2019) propose Barrage of Random Transforms (BaRT), a new defense which applies a large set of random image transformations to classifier inputs. They report a ${24} \times$ increase in robust accuracy over previously proposed defenses.
+
+Despite these promising results, researchers still lack a clear understanding of how to properly evaluate random defenses. This is concerning, as a defense can falsely appear more robust than it actually is when evaluated using suboptimal attacks (Athalye, Carlini, and Wagner 2018; Tramer et al. 2020). Therefore, in this work, we improve existing attacks on randomized defenses and use them to rigorously evaluate BaRT and, more generally, random transformation (RT) defenses. We find that sub-optimal attacks have led to an overly optimistic view of these RT defenses. Notably, we show that even our best RT defense is much less secure than previously thought: we formulate a new attack that reduces its adversarial accuracy from the ${70}\%$ found by the baseline attack to only $6\%$ on Imagenette.
+
+We also take the investigation further and combine the RT defense with adversarial training. Nevertheless, this turns out to be ineffective, as the attack is not sufficiently strong and only generates weak adversarial examples for the model to train with. The outcomes appear more promising for CIFAR-10, but they still lag behind deterministic defenses such as Madry et al. (2018) and Zhang et al. (2019). We believe that stronger and more efficient attacks on RT-based models will be necessary not only for accurate evaluation of the stochastic defenses but also for improving the effectiveness of adversarial training for such models.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+To summarize, we make the following contributions:
+
+- We show that non-differentiable transforms impede optimization during an attack and that even an adaptive technique for circumventing non-differentiability (i.e., BPDA (Athalye, Carlini, and Wagner 2018)) is not sufficiently effective. This reveals that existing RT defenses are likely non-robust.
+
+- To this end, we suggest that an RT defense should only use differentiable transformations for reliable evaluations and compatibility with adversarial training.
+
+- We propose a new state-of-the-art attack for RT defense that improves over EoT (Athalye et al. 2018) in terms of both the loss function and the optimizer. We explain the success of our attack through the variance of the gradients.
+
+- We improve the RT scheme by using Bayesian optimization for hyperparameter tuning and by combining it with adversarial training that uses our new attack method instead of the baseline EoT.
+
+## 2 Background and Related Works
+
+### 2.1 Adversarial Examples
+
+Adversarial examples are carefully perturbed inputs designed to fool a machine learning model (Szegedy et al. 2014; Biggio et al. 2013; Goodfellow, Shlens, and Szegedy 2015). An adversarial perturbation $\delta$ is typically constrained to be within some ${\ell }_{p}$ -norm ball with a radius of $\epsilon$ . The ${\ell }_{p}$ -norm ball is a proxy for the "imperceptibility" of $\delta$ and can be thought of as the adversary’s budget. In this work, we primarily use $p = \infty$ and only consider an adaptive white-box adversary. Finding the worst-case perturbation ${\delta }^{ * }$ requires solving the following optimization problem:
+
+$$
+{x}_{\text{adv }} = x + {\delta }^{ * } = x + \underset{\delta : \parallel \delta {\parallel }_{p} \leq \epsilon }{\arg \max }L\left( {x + \delta , y}\right) \tag{1}
+$$
+
+where $L : {\mathbb{R}}^{d} \times {\mathbb{R}}^{C} \rightarrow \mathbb{R}$ is the loss function of the target model which, in our case, is a classifier that makes predictions among $C$ classes. Projected gradient descent (PGD) is often used to solve the optimization problem in Eqn. 1.
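For concreteness, Eqn. 1 is typically solved with PGD: repeatedly ascend along the signed gradient of the loss, then project back onto the $\ell_\infty$-ball of radius $\epsilon$. Below is a minimal numpy sketch, not the paper's implementation; `loss_grad` is a hypothetical stand-in for the input gradient an autograd framework would provide.

```python
import numpy as np

def pgd_linf(x, y, loss_grad, eps, steps=40, step_size=None):
    """Solve the l_inf-constrained maximization of Eqn. 1 with PGD.

    loss_grad(x_adv, y) returns the gradient of the target model's loss
    with respect to the input (a stand-in for autograd here).
    """
    if step_size is None:
        step_size = 2.5 * eps / steps
    delta = np.random.uniform(-eps, eps, size=x.shape)  # random start
    for _ in range(steps):
        g = loss_grad(x + delta, y)
        delta = delta + step_size * np.sign(g)  # signed-gradient ascent step
        delta = np.clip(delta, -eps, eps)       # project back onto the ball
    return x + delta

# Toy example: the "loss" increases with x, so PGD pushes the input up
# until it saturates the budget eps.
x = np.array([0.2])
grad = lambda z, y: np.ones_like(z)  # hypothetical constant input gradient
x_adv = pgd_linf(x, y=0, loss_grad=grad, eps=0.1, steps=10)
```

With a constant positive gradient, the perturbation saturates at $+\epsilon$ regardless of the random start, so `x_adv` ends at exactly `0.3`.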
+
+### 2.2 Randomization Defenses
+
+A number of recent papers have proposed defenses against adversarial examples which utilize inference-time randomization. One common approach is to sample weights of the network from some probability distribution (Liu et al. 2018; He, Rakin, and Fan 2019; Liu et al. 2019; Bender et al. 2020). In this paper, we instead focus on defenses that apply random transforms to the input (Raff et al. 2019; Xie et al. 2018; Zhang and Liang 2019; Cohen, Rosenfeld, and Kolter 2019). Many of these defenses claim to achieve state-of-the-art robustness. Unlike prior evaluations, we test these defenses using a wide range of white-box attacks as well as a novel stronger attack. A key issue when evaluating these schemes is that PGD attacks require gradients through the entire model pipeline, but many defenses use non-differentiable transforms. As we show later, this can cause evaluation results to be misleading.
+
+Figure 1: An illustration of a random transformation (RT) defense against adversarial examples. Transformations of different types and parameters are sampled and applied sequentially to multiple copies of the input. All of the transformed inputs are then passed to a single neural network, and the outputs are combined to make the final prediction.
+
+Different works have tried applying different random transformations to their inputs. Xie et al. (2018) randomly resize and pad images. While this defense ranked second in the NeurIPS 2017 adversarial robustness competition, its security evaluation did not consider adaptive attacks in which the adversary has full knowledge of the transformations.
+
+Zhang et al. (Zhang and Liang 2019) add Gaussian noise to the input and then quantize it. They report that this defense outperforms all of the NeurIPS 2017 submissions. For their attack, Zhang et al. approximate the gradient of the transform, which could lead to a sub-optimal attack. In this paper, we use the exact gradients for all transformations when available.
+
+More recently, Raff et al. (Raff et al. 2019) claim to achieve a state-of-the-art robust accuracy ${24} \times$ better than adversarial training using a random transformation defense known as Barrage of Random Transforms (BaRT). BaRT involves randomly sampling a large set of image transformations and applying them to the input in a random order. Because many transformations are non-differentiable, Raff et al. evaluate their scheme using an attack that approximates the gradients of the transforms. In Section 4, we show that this approximation is ineffective, giving an overly optimistic impression of BaRT's robustness, and we re-evaluate BaRT using a stronger attack which utilizes exact transform gradients.
+
+## 3 Random Transformation Defense
+
+Here, we introduce notation and the design of our RT defense, formalizing the BaRT defense.
+
+### 3.1 Decision Rules
+
+RT repeatedly applies a randomly chosen transform to the input, uses a neural network to make a prediction, and then averages the softmax prediction scores:
+
+$$
+g\left( x\right) \mathrel{\text{:=}} {\mathbb{E}}_{\theta \sim p\left( \theta \right) }\left\lbrack {\sigma \left( {f\left( {t\left( {x;\theta }\right) }\right) }\right) }\right\rbrack \tag{2}
+$$
+
+where $\sigma \left( \cdot \right)$ is the softmax function, $f : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{C}$ a neural network ( $C$ is the number of classes), and the transformation $t\left( {\cdot ;\theta }\right) : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ is parameterized by a random variable $\theta$ drawn from some distribution $p\left( \theta \right)$ .
+
+In practice, we approximate the expectation in Eqn. 2 with $n$ Monte Carlo samples per input $x$:
+
+$$
+g\left( x\right) \approx {g}_{n}\left( x\right) \mathrel{\text{:=}} \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\sigma \left( {f\left( {t\left( {x;{\theta }_{i}}\right) }\right) }\right) \tag{3}
+$$
+
+We then define the final prediction as the class with the largest average softmax probability: $\widehat{y}\left( x\right) = \arg \mathop{\max }\limits_{{c \in \left\lbrack C\right\rbrack }}{\left\lbrack {g}_{n}\left( x\right) \right\rbrack }_{c}$ . Note that this decision rule differs from most previous works, which use a majority vote on hard labels, i.e., ${\widehat{y}}_{\text{maj}}\left( x\right) = \arg \mathop{\max }\limits_{{c \in \left\lbrack C\right\rbrack }}\mathop{\sum }\limits_{{i = 1}}^{n}\mathbb{1}\left\{ {c = \arg \mathop{\max }\limits_{{j \in \left\lbrack C\right\rbrack }}{f}_{j}\left( {t\left( {x;{\theta }_{i}}\right) }\right) }\right\}$ (Raff et al. 2019; Cohen, Rosenfeld, and Kolter 2019). We later show in Appendix D.1 that our rule is empirically superior to the majority vote. By the Law of Large Numbers, as $n$ increases, the approximation in Eqn. 3 converges to the expectation in Eqn. 2. Fig. 1 illustrates the structure and the components of the RT architecture.
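The averaged-softmax rule of Eqn. 3 can be sketched as follows. The linear "network" `f` and the additive-noise transformation `t` are toy stand-ins for illustration only, not the transformations or architecture used in the paper.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def rt_predict(x, f, sample_transform, n=10, rng=None):
    """Monte Carlo estimate g_n of Eqn. 3: average the softmax outputs of
    n randomly transformed copies of x, then take the argmax."""
    rng = np.random.default_rng(rng)
    probs = np.mean(
        [softmax(f(sample_transform(x, rng))) for _ in range(n)], axis=0)
    return probs.argmax(), probs

# Hypothetical stand-ins: a linear map over C=3 classes as the "network",
# and additive Gaussian noise as the random transformation t(x; theta).
W = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]])
f = lambda z: W @ z
t = lambda z, rng: z + rng.normal(0.0, 0.1, size=z.shape)

label, probs = rt_predict(np.array([2.0, 0.5]), f, t, n=100, rng=0)
```

Since the clean logits are roughly `[2.0, 0.5, 1.25]` and the noise is small, the averaged prediction is class 0.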
+
+### 3.2 Parameterization of Transformations
+
+Here, $t\left( {\cdot ;\theta }\right)$ represents a composition of $S$ different image transformations where $\theta = \left\{ {{\theta }^{\left( 1\right) },\ldots ,{\theta }^{\left( S\right) }}\right\}$ and ${\theta }^{\left( s\right) }$ denotes the parameters for the $s$ -th transformation, i.e.,
+
+$$
+t\left( {x;\theta }\right) = {t}_{{\theta }^{\left( S\right) }} \circ {t}_{{\theta }^{\left( S - 1\right) }} \circ \cdots \circ {t}_{{\theta }^{\left( 1\right) }}\left( x\right) \tag{4}
+$$
+
+Each ${\theta }^{\left( s\right) }$ is a random variable comprised of three components, i.e., ${\theta }^{\left( s\right) } = \left\{ {{\tau }^{\left( s\right) },{\beta }^{\left( s\right) },{\alpha }^{\left( s\right) }}\right\}$ , which dictate the properties of a transformation:
+
+1. Type $\tau$ of transformation to apply (e.g., rotation, JPEG compression), which is uniformly drawn, without replacement, from a pool of $K$ transformation types: $\tau \sim$ $\operatorname{Cat}\left( {K,1/K}\right)$ .
+
+2. A boolean $\beta$ indicating whether the transformation will be applied. This is a Bernoulli random variable with probability ${p}_{\beta }$ : $\beta \sim \operatorname{Bern}\left( {p}_{\beta }\right)$ .
+
+3. Strength of the transformation (e.g., rotation angle, JPEG quality) denoted by $\alpha$ , sampled from a predefined distribution (either uniform or normal): $\alpha \sim p\left( a\right)$ .
+
+Specifically, for each of the $n$ transformed samples, we sample a permutation of size $S$ out of $K$ transformation types in total, i.e. $\left\{ {{\tau }^{\left( 1\right) },\ldots ,{\tau }^{\left( S\right) }}\right\} \in \operatorname{Perm}\left( {K, S}\right)$ . Then the boolean and the strength of the $s$ -th transform are sampled: ${\beta }^{\left( s\right) } \sim \operatorname{Bern}\left( {p}_{{\tau }^{\left( s\right) }}\right)$ and ${\alpha }^{\left( s\right) } \sim p\left( {a}_{{\tau }^{\left( s\right) }}\right)$ . We abbreviate this sampling process as $\theta \sim p\left( \theta \right)$ which is repeated for every transformed sample (out of $n$ ) for a single input.
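The sampling process $\theta \sim p(\theta)$ described above can be sketched as follows; the pool size, per-type apply-probabilities, and strength ranges here are illustrative placeholders, not the tuned values from the paper.

```python
import numpy as np

def sample_theta(K, S, p, strength_ranges, rng):
    """One draw of theta = {(tau, beta, alpha)} for s = 1..S as in Sec. 3.2:
    S distinct transform types (a size-S permutation of the K-type pool),
    a per-type Bernoulli on/off flag, and a strength drawn uniformly from
    each selected type's range."""
    taus = rng.permutation(K)[:S]                  # types, without replacement
    betas = rng.random(S) < np.asarray(p)[taus]    # apply transform s or not
    lo, hi = np.asarray(strength_ranges, dtype=float).T
    alphas = rng.uniform(lo[taus], hi[taus])       # per-type uniform strengths
    return list(zip(taus, betas, alphas))

# Hypothetical pool of K=5 transform types; S=3 of them are composed for
# each of the n transformed copies of an input.
rng = np.random.default_rng(0)
theta = sample_theta(K=5, S=3, p=[0.9, 0.5, 0.7, 0.3, 0.8],
                     strength_ranges=[(0.0, 1.0)] * 5, rng=rng)
```

One such draw is made independently for each of the $n$ transformed samples of a single input.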
+
+Assuming that the $K$ transformation types are fixed, an RT defense introduces, at most, ${2K}$ hyperparameters, $\left\{ {{p}_{1},\ldots ,{p}_{K}}\right\}$ and $\left\{ {{a}_{1},\ldots ,{a}_{K}}\right\}$ , that can be tuned. It is also possible to tune by selecting ${K}^{\prime }$ out of $K$ transformation types, but this is combinatorially large in $K$ . In Appendix C, we show a heuristic for "pruning" the transformation types through tuning $p$ and $a$ (e.g., setting $p = 0$ is equivalent to removing that transformation type).
+
+### 3.3 Choices of Transformations
+
+In this work, we use a pool of $K = {33}$ different image transformations, including 19 differentiable and 2 non-differentiable transforms taken from the 30 BaRT transforms (Raff et al. 2019) (counting each type of noise injection as its own transform). We replace non-differentiable transformations with smooth differentiable alternatives (Shin and Song 2017). The transformations fall into seven groups: noise injection (7), blur filtering (4), color-space alteration (8), edge detection (2), lossy compression (3), geometric transformation (5), and stylization (4). All transforms are described in Appendix A.1.
+
+## 4 Evaluating Raff et al. (2019)'s BaRT
+
+Backward-pass differentiable approximation (BPDA) was proposed as a heuristic for approximating gradients of non-differentiable components in many defenses so that gradient-based attacks remain applicable (Athalye, Carlini, and Wagner 2018). It works by first approximating the function with a neural network and then backpropagating through this network instead of the non-differentiable function. The evaluation of BaRT in Raff et al. (2019) relies on BPDA, as some transformations are innately non-differentiable or have zero gradients almost everywhere (e.g., JPEG compression, precision reduction, etc.). To approximate a transformation, we train a model ${\widetilde{t}}_{\phi }$ that minimizes the Euclidean distance between the transformed image and the model output:
+
+$$
+\mathop{\min }\limits_{\phi }\mathop{\sum }\limits_{{i = 1}}^{N}\underset{\theta \sim p\left( \theta \right) }{\mathbb{E}}{\begin{Vmatrix}{\widetilde{t}}_{\phi }\left( {x}_{i};\theta \right) - t\left( {x}_{i};\theta \right) \end{Vmatrix}}_{2} \tag{5}
+$$
+
+We evaluate the BPDA approximation below in a series of experiments that compare the effectiveness of the BPDA attack to an attack that uses exact gradients.
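To make the role of the backward approximation concrete, the sketch below uses the simplest variant, identity-BPDA (corresponding to the "Identity" column of Table 1), on a precision-reduction transform whose exact gradient is zero almost everywhere. The full BPDA of Eqn. 5 would instead backpropagate through the trained surrogate $\widetilde{t}_\phi$; the toy loss here is a hypothetical stand-in.

```python
import numpy as np

def quantize(x, levels=8):
    """Non-differentiable transform (precision reduction): its exact
    gradient is zero almost everywhere, which stalls PGD."""
    return np.round(x * (levels - 1)) / (levels - 1)

def bpda_identity_grad(loss_grad_wrt_transformed, x):
    """Identity-BPDA: run the true transform on the forward pass, but
    backpropagate as if t(x) = x, i.e. dL/dx ~= dL/dz evaluated at z = t(x)."""
    return loss_grad_wrt_transformed(quantize(x))

# Toy loss L(z) = sum(z), so dL/dz = 1 everywhere. The exact gradient
# through `quantize` would be 0; the BPDA surrogate gradient is not.
g = bpda_identity_grad(lambda z: np.ones_like(z), np.array([0.31, 0.62]))
```

This is why the "Identity" and "BPDA" attacks in Table 1 can make progress where exact differentiation through the transform cannot.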
+
+### 4.1 Experiment Setup
+
+Our experiments use two datasets: CIFAR-10 and Imagenette (Howard 2021), a ten-class subset of ImageNet. While CIFAR-10 is the most common benchmark in the adversarial robustness domain, some image transformations work poorly on low-resolution images. We choose Imagenette because BaRT was created on ImageNet, but we do not have the resources to conduct a thorough investigation on top of adversarial training on ImageNet. Additionally, the large and realistic images from Imagenette more closely resemble real-world usage. All Imagenette models are pre-trained on ImageNet to speed up training and boost performance. Since RT models are stochastic, we report their average accuracy together with the ${95}\%$ confidence interval from 10 independent runs. Throughout this work, we consider a perturbation size $\epsilon$ of 16/255 for Imagenette and 8/255 for CIFAR-10. Appendix A.2 has more details on the experiments (network architecture, hyperparameters, etc.).
+
+### 4.2 BPDA Attack is Not Sufficiently Strong
+
+We re-implemented and trained a BaRT model on these datasets and then evaluated the effectiveness of BPDA attacks against this model. ${}^{1}$ First, we evaluate the full BaRT model in Table 1, comparing an attack that uses a BPDA approximation (as in Raff et al. (2019)) vs. an attack that uses the exact gradient for differentiable transforms and
+
+---
+
+${}^{1}$ The authors have been very helpful with the implementation details but cannot make the official code or model weights public.
+
+---
+
+| Transforms used | Clean accuracy | Adv. acc. (Exact) | Adv. acc. (BPDA) | Adv. acc. (Identity) | Adv. acc. (Combo) |
+| --- | --- | --- | --- | --- | --- |
+| BaRT (full) | ${88.10} \pm {0.16}$ | n/a | ${52.32} \pm {0.22}$ | ${36.49} \pm {0.25}$ | ${25.24} \pm {0.16}$ |
+| BaRT (only differentiable) | ${87.43} \pm {0.28}$ | ${26.06} \pm {0.21}$ | ${65.28} \pm {0.25}$ | ${41.25} \pm {0.26}$ | n/a |
+
+Table 1: Comparison of attacks with different gradient approximations. "Exact" directly uses the exact gradient. "BPDA" uses the BPDA gradient for most transforms and the identity for a few. "Identity" backpropagates as an identity function, and "Combo" uses the exact gradient for differentiable transforms and the BPDA gradient otherwise. Full BaRT uses a nearly complete set of BaRT transforms $\left( {K = {26}}\right)$ , and "BaRT (only differentiable)" uses only differentiable transforms $\left( {K = {21}}\right)$ . We use a PGD attack with EoT and CE loss ($\epsilon = {16}/{255}$, 40 steps).
+
+(a) Original (b) Exact crop (c) BPDA crop
+
+Figure 2: Comparison of crop transform output and output of BPDA network trained to approximate crop transform.
+
+BPDA for non-differentiable transforms, denoted "BPDA" and "Combo", respectively. Empirically, we observe that attacks using BPDA are far weaker than the equivalent attacks using exact gradients. Similarly, on a variant BaRT model that uses only the subset of differentiable transforms, the BPDA attack is weaker than an attack that uses the exact gradient for all transforms. BPDA is surprisingly weaker than even a naive attack which approximates all transform gradients with the identity. There are a few possible explanations for the inability of BPDA to approximate transformation gradients well:
+
+1. As Fig. 2 illustrates, BPDA struggles to approximate some transforms accurately. This might be partly because the architecture Raff et al. (2019) used (and we use) to approximate each transform has limited functional expressivity: it consists of five convolutional layers with $5 \times 5$ kernel and one with $3 \times 3$ kernel (all strides are 1), so a single output pixel can only depend on the input pixels fewer than 11 spaces away in any direction $\left( {5 \cdot \left\lfloor \frac{5}{2}\right\rfloor + 1 \cdot \left\lfloor \frac{3}{2}\right\rfloor = {11}}\right)$ . Considering the inputs for Imagenette are of size ${224} \times {224}$ , some transforms like "crop" which require moving pixels much longer distances are impossible to approximate with such an architecture.
+
+2. The BPDA network training process for solving Eqn. 5 may only find a sub-optimal solution, yielding a poor approximation of the true transformation.
+
+3. During the attack, the trained BPDA networks are given partially transformed images, yet the BPDA networks are only trained with untransformed inputs.
+
+4. Since we are backpropagating through several transforms, one poor transform gradient approximation could ruin the overall gradient approximation.
+
+Appendix A.3 has more details on these experiments. These results show that BaRT's evaluation using BPDA was overly optimistic, and BaRT is not as robust as previously thought.
+
+Since BPDA is unreliable for approximating gradients of non-differentiable image transformations, we recommend that subsequent RT-based defenses use only differentiable transformations. For the rest of this paper, we only study the robustness of RT defenses with differentiable transforms to isolate them from an orthogonal line of research on non-differentiable defenses (e.g., with approximate gradients or zeroth-order attacks). Additionally, differentiable models can boost their robustness further when combined with adversarial training. We explore this direction in Section 7. Even without non-differentiable transforms, we still lack a reliable evaluation method for stochastic defenses apart from EoT. In the next section, we show that applying an EoT attack to an RT defense results in a critically sub-optimal evaluation. After that, we propose a stronger attack.
+
+## 5 Hyperparameter Tuning on RT Defenses
+
+Before investigating attacks, we want to ensure that we evaluate the most robust RT defense possible. We found that BaRT is not robust, but this could be due to the chosen transformations and their hyperparameters, for which Raff et al. (2019) provide no justification. Finding the most robust RT defense is, however, challenging because it involves numerous hyperparameters, including the $K$ transformation types, the number of transformations to apply ($S$), and their parameters ($a$ and $p$). A typical grid search is intractable since we have 33 transformations, and trying to optimize the parameters directly with the reparameterization trick does not work because most transforms are not differentiable w.r.t. their parameters.
+
+We systematically address this problem by using Bayesian optimization (BO) (Snoek, Larochelle, and Adams 2012), a well-known black-box optimization technique for hyperparameter search, to fine-tune $a$ and $p$ . In short, BO optimizes an objective function that takes in the hyperparameters ( $a$ and $p$ in our case) as inputs and outputs adversarial accuracy. This process, which is equivalent to one iteration of BO, is computationally expensive as it involves training a neural network as a backbone for an RT defense and evaluating it with our new attack. Consequently, we have to scale down the problem by shortening the training, using fewer training/testing data samples, and evaluating with fewer attack steps. Essentially, we trade off precision of the search for efficiency. Because BO does not natively support categorical or integral variables, we experiment with different choices for $K$ and $S$ without the use of BO. The full details of this procedure are presented in Appendix C.
+
+| Datasets | Attacks | Adv. Accuracy |
+| --- | --- | --- |
+| Imagenette | Baseline | ${70.79} \pm {0.53}$ |
+| Imagenette | AutoAttack | ${85.46} \pm {0.43}$ |
+| Imagenette | Our attack | ${6.34} \pm {0.35}$ |
+| CIFAR-10 | Baseline | ${33.83} \pm {0.44}$ |
+| CIFAR-10 | AutoAttack | ${61.13} \pm {0.85}$ |
+| CIFAR-10 | Our attack | $\mathbf{{29.91} \pm {0.35}}$ |
+
+Table 2: Comparison between the baseline EoT attack (Athalye et al. 2018), AutoAttack (Croce and Hein 2020), and our attack on the RT defense whose transformation parameters have been fine-tuned by Bayesian optimization to maximize robustness. For AutoAttack, we use its standard version combined with EoT. For Imagenette, we use $\epsilon = {16}/{255}$ ; for CIFAR-10, $\epsilon = 8/{255}$ .
+
+Algorithm 1: Our best attack on RT defenses
+
+---
+
+Input: Set of $K$ transformations and the distributions of their parameters $p\left( \theta \right)$ , neural network $f$ , perturbation size $\epsilon$ , max. PGD steps $T$ , step sizes ${\left\{ {\gamma }_{t}\right\} }_{t = 1}^{T}$ , and AggMo's damping constants ${\left\{ {\mu }_{b}\right\} }_{b = 1}^{B}$ .
+
+Output: Adversarial example ${x}_{\text{adv}}$
+
+Data: Test input $x$ and its ground-truth label $y$
+
+// Initialize x_adv and velocities
+
+${x}_{\text{adv}} \leftarrow x + u,\; u \sim \mathcal{U}\left\lbrack {-\epsilon ,\epsilon }\right\rbrack ,\quad {\left\{ {v}_{b}\right\} }_{b = 1}^{B} \leftarrow \mathbf{0}$
+
+for $t \leftarrow 1$ to $T$ do
+
+  ${\left\{ {\theta }_{i}\right\} }_{i = 1}^{n} \sim p\left( \theta \right)$
+
+  // Compute a gradient estimate with the linear loss on logits (Section 6.2) and with SGM (Section 6.3)
+
+  ${G}_{n} \leftarrow \nabla {\mathcal{L}}_{\text{Linear}}\left( {\frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}f\left( {t\left( {{x}_{\mathrm{{adv}}};{\theta }_{i}}\right) }\right) , y}\right)$
+
+  ${\widehat{G}}_{n} \leftarrow \operatorname{sign}\left( {G}_{n}\right)$ // Use signed gradients
+
+  // Update velocities and x_adv with AggMo (Section 6.4)
+
+  for $b \leftarrow 1$ to $B$ do
+
+    ${v}_{b} \leftarrow {\mu }_{b} \cdot {v}_{b} + {\widehat{G}}_{n}$
+
+  ${x}_{\text{adv}} \leftarrow {x}_{\text{adv}} + \frac{{\gamma }_{t}}{B}\mathop{\sum }\limits_{{b = 1}}^{B}{v}_{b}$
+
+return ${x}_{\text{adv}}$
+
+---
+
+## 6 State-of-the-Art Attack on RT Defenses
+
+We propose a new attack on differentiable RT defenses that leverages insights from previous literature on transfer attacks as well as recent stochastic optimization algorithms. Our attack is immensely successful and shows that even the fine-tuned RT defense from Section 5 shows almost no adversarial robustness (Table 2). We summarize our attack in Algorithm 1 before describing the setup and investigating the three main design choices that make this attack successful and outperform the baseline from Athalye et al. (2018) by a large margin.
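Algorithm 1 can be sketched in a few lines of numpy. Here `grad_linear_loss` is a hypothetical stand-in for the Monte Carlo gradient estimate $G_n$ (linear loss on averaged logits, with SGM), and the final projection step, standard in PGD, keeps the perturbation inside the $\ell_\infty$-ball.

```python
import numpy as np

def rt_attack(x, y, grad_linear_loss, eps, steps, step_size,
              mus=(0.0, 0.9, 0.99)):
    """Sketch of Algorithm 1: PGD on an RT defense with signed gradients of
    the linear loss and the AggMo update (B velocity buffers with damping
    constants mu_b, averaged to form the step). `grad_linear_loss(x_adv)`
    stands in for the gradient estimate over n sampled transformations;
    y is unused by this toy stand-in."""
    rng = np.random.default_rng(0)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)  # random init in the ball
    v = [np.zeros_like(x) for _ in mus]
    for _ in range(steps):
        g_hat = np.sign(grad_linear_loss(x_adv))      # signed gradient estimate
        for b, mu in enumerate(mus):
            v[b] = mu * v[b] + g_hat                  # per-buffer velocity
        x_adv = x_adv + step_size * np.mean(v, axis=0)
        x_adv = x + np.clip(x_adv - x, -eps, eps)     # project onto l_inf ball
    return x_adv

# Toy run: a constant positive gradient estimate drives the perturbation
# to saturate the budget at +eps in every dimension.
x = np.zeros(4)
x_adv = rt_attack(x, y=0, grad_linear_loss=lambda z: np.ones_like(z),
                  eps=8 / 255, steps=20, step_size=2 / 255)
```

The aggregated momentum averages buffers with different damping constants, which smooths the signed updates without the staleness of a single large momentum term.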
+
+### 6.1 Setup: Stochastic Gradient Method
+
+First, we describe the setup and explain intuitions around variance of the gradient estimates. Finding adversarial examples on RT defenses can be formulated as the following stochastic optimization problem:
+
+$$
+\mathop{\max }\limits_{{\delta : \parallel \delta {\parallel }_{\infty } \leq \epsilon }}H\left( \delta \right) \mathrel{\text{:=}} \mathop{\max }\limits_{{\delta : \parallel \delta {\parallel }_{\infty } \leq \epsilon }}{\mathbb{E}}_{\theta }\left\lbrack {h\left( {\delta ;\theta }\right) }\right\rbrack \tag{6}
+$$
+
+$$
+\mathrel{\text{:=}} \mathop{\max }\limits_{{\delta : \parallel \delta {\parallel }_{\infty } \leq \epsilon }}{\mathbb{E}}_{\theta }\left\lbrack {\mathcal{L}\left( {f\left( {t\left( {x + \delta ;\theta }\right) }\right) , y}\right) }\right\rbrack \tag{7}
+$$
+
+for some objective function $\mathcal{L}$ . Note that we drop the dependence on $(x, y)$ to declutter the notation. Since it is not possible to evaluate the expectation or its gradients exactly, the gradients are estimated by sampling ${\left\{ {\theta }_{i}\right\} }_{i = 1}^{n}$ , similarly to how we obtain a prediction ${g}_{n}$ . Suppose that $H$ is smooth and convex, and the variance of the gradient estimates is bounded by ${\sigma }^{2}$ , i.e.,
+
+$$
+\underset{\theta \sim p\left( \theta \right) }{\mathbb{E}}\left\lbrack {\begin{Vmatrix}\nabla h\left( \delta ;\theta \right) - \nabla H\left( \delta \right) \end{Vmatrix}}^{2}\right\rbrack \leq {\sigma }^{2}, \tag{8}
+$$
+
+the error of SGD after $T$ iterations is $\mathcal{O}\left( {1/T + \sigma /\sqrt{T}}\right)$ for an appropriate step size (Ghadimi and Lan 2013). This result suggests that a small $\sigma$ , i.e., a low-variance gradient estimate, speeds up convergence, which is highly desirable for attackers and defenders alike. Specifically, it leads to more efficient and more accurate evaluation as well as a stronger attack to use during adversarial training, which, in turn, could yield a better defense (we explore this in Section 7).
+
+As a result, our analyses of the attack are largely based on the variance and two other measures of spread of the gradients. Specifically, we measure (1) the dimension-averaged variance in Eqn. 8, as well as (2) the cosine similarity and (3) the percentage of matching signs between the mean gradient and each gradient sample. Since all three metrics appear to be highly correlated in theory and in practice, we only report the variance in the main paper. For the other metrics and their mathematical definitions, please see Appendix B.3.
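The three spread measures can be sketched as follows, computed on synthetic gradient samples; the shapes and noise scales are illustrative, not those of the actual attack.

```python
import numpy as np

def gradient_spread(grads):
    """Spread measures of Section 6.1 over a batch of gradient samples
    (rows): dimension-averaged variance around the mean gradient, mean
    cosine similarity to it, and mean fraction of matching signs."""
    g_bar = grads.mean(axis=0)
    var = np.mean((grads - g_bar) ** 2)
    cos = np.mean([g @ g_bar / (np.linalg.norm(g) * np.linalg.norm(g_bar))
                   for g in grads])
    sign_match = np.mean(np.sign(grads) == np.sign(g_bar))
    return var, cos, sign_match

# Two illustrative batches around the same "true" gradient: low-noise
# samples agree with their mean far more than high-noise ones.
rng = np.random.default_rng(0)
base = rng.normal(size=16)
low = base + 0.1 * rng.normal(size=(32, 16))
high = base + 3.0 * rng.normal(size=(32, 16))
v_lo, c_lo, s_lo = gradient_spread(low)
v_hi, c_hi, s_hi = gradient_spread(high)
```

As expected, the three metrics move together: lower variance comes with higher cosine similarity and a higher sign-match rate.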
+
+EoT Baseline. We compare our attack to the baseline which is exactly taken from Athalye et al. (2018). This attack takes on the same form as Eqn. 7 and its gradients are averaged over $n$ gradient samples:
+
+$$
+{H}_{n}^{\mathrm{{EoT}}}\left( \delta \right) \mathrel{\text{:=}} \frac{1}{n}\mathop{\sum }\limits_{{j = 1}}^{n}\mathcal{L}\left( {f\left( {t\left( {x + \delta ;{\theta }_{j}}\right) }\right) , y}\right) \tag{9}
+$$
+
+It is important to note that this approximation does not exactly match the decision rule of RT defenses, since in Eqn. 2 the expectation is taken over the model's outputs before the loss is applied, not over the loss itself. While the gradient estimates from Eqn. 9 are unbiased, they may have high variance, as each gradient sample is equivalent to computing the loss on ${g}_{n}$ with $n = 1$ . In the next section, we compare other options for objective functions and decision rules and show that there are better alternatives to the original EoT.
+
+Signed gradients. All of the attacks used in this study, including ours and the baseline, use signs of gradients instead of the gradients themselves. This is a common practice for gradient-based ${\ell }_{\infty }$ -attacks, and we have also empirically confirmed that it leads to much stronger attacks. This is also the reason we measure sign matching as a measure of spread of the gradient estimates. Beyond fitting the ${\ell }_{\infty }$ -constraint, using signed gradients as well as signed momentum is beneficial, as it has been shown to reduce variance in neural network training and to achieve even faster convergence than standard SGD in certain cases (Bernstein et al. 2018).
+
+Figure 3: Comparison of PGD attack effectiveness with (a) different loss functions and decision rules, and (b) different attack variants with improved transferability. The error bars are too small to see with the markers, so we report the numerical results in Table 4. "Baseline" refers to EoT with CE loss in Eqn. 9.
+
+### 6.2 Adversarial Objectives and Decision Rules
+
+Here, we propose new decision rules and loss functions for the attacks as alternatives to EoT. Note that this need not be the same as the rule used for making prediction in Eqn. 2. First, we introduce softmax and logits rules:
+
+$$
+{H}^{\text{softmax }}\left( \delta \right) \mathrel{\text{:=}} \mathcal{L}\left( {\underset{\theta \sim p\left( \theta \right) }{\mathbb{E}}\left\lbrack {\sigma \left( {f\left( {t\left( {x + \delta ;\theta }\right) }\right) }\right) }\right\rbrack , y}\right) \tag{10}
+$$
+
+$$
+{H}^{\text{logits }}\left( \delta \right) \mathrel{\text{:=}} \mathcal{L}\left( {\underset{\theta \sim p\left( \theta \right) }{\mathbb{E}}\left\lbrack {f\left( {t\left( {x + \delta ;\theta }\right) }\right) }\right\rbrack , y}\right) \tag{11}
+$$
+
+${H}^{\text{softmax }}$ , the loss of the expected softmax probability, uses the same rule as the decision rule of RT defenses (Eqn. 2). It was also used by Salman et al. (2019), where $\mathcal{L}$ is the cross-entropy loss. ${H}^{\text{logits }}$ , the loss of the expected logits, is similar to ${H}^{\text{softmax }}$ but omits the softmax function to avoid potential vanishing gradients from softmax.
+
+In addition to the rules, we experiment with two choices of $\mathcal{L}$ commonly used for generating adversarial examples: cross-entropy loss (CE) and linear loss (Linear). The linear loss is defined as the difference between the largest logit of
+
+Figure 4: Comparison of the dimension-normalized variance of the gradient estimates across (blue) different loss functions and decision rules and (yellow) transferability-improving attacks. Stronger attacks are highly correlated with lower variance of their gradient estimates (e.g., Lin+SGM). Note that Lin+MB, or Momentum Boosting, is not shown here because it does not modify the gradients.
+
+the wrong class and logit of the correct class:
+
+$$
+{\mathcal{L}}_{\text{Linear }}\left( {x, y}\right) \mathrel{\text{:=}} \mathop{\max }\limits_{{j \neq y}}{F}_{j} - {F}_{y} \tag{12}
+$$
+
+$$
+\text{where}F = \underset{\theta \sim p\left( \theta \right) }{\mathbb{E}}\left\lbrack {f\left( {t\left( {x;\theta }\right) }\right) }\right\rbrack \tag{13}
+$$
+
+The advantage of the linear loss is that its gradient estimates are unbiased, as with EoT: by linearity, the expectation can be moved outside of $\mathcal{L}$ . This is not the case for the CE loss.
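As a concrete illustration, the linear loss on the mean logits (Eqns. 11-13) amounts to only a few lines. The NumPy sketch below uses our own function name and an (n, C) layout for the sampled logits; it is not code from our released implementation.

```python
import numpy as np

def linear_loss_on_mean_logits(logit_samples, y):
    """Linear loss on the mean logits (Eqns. 11-13).

    logit_samples: (n, C) array of logits f(t(x; theta_j)) for n
    sampled transformations of one input x.
    y: index of the correct class.

    Because the loss is linear in F, the gradient of a one-sample
    estimate is an unbiased estimate of the true gradient.
    """
    F = logit_samples.mean(axis=0)   # F = E_theta[f(t(x; theta))], Eqn. 13
    wrong = np.delete(F, y).max()    # max_{j != y} F_j
    return wrong - F[y]              # Eqn. 12
```

Maximizing this quantity over the perturbation drives the largest wrong-class logit above the correct-class logit.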
+
+Attack evaluation and comparison. We evaluate the attacks by their effectiveness in reducing the adversarial accuracy (lower means a stronger attack) on the RT defense obtained from Section 5. In our setting, the adversarial examples are generated once and then used to compute the accuracy 10 times, each with a different random seed on the RT defense. We report the average accuracy over these 10 runs together with the ${95}\%$ -confidence interval. Alternatively, one can imagine a threat model that counts a sample as incorrect if it is misclassified at least once over a certain number of trials. This is interesting and perhaps more realistic in some settings, but the optimal attack would differ substantially from EoT since the expectation matters far less. This, however, is outside the scope of our work.
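The aggregation step of this protocol can be sketched as follows; the normal-approximation confidence interval (half-width $1.96 \cdot \text{stderr}$) is our assumption, as the text does not specify how the interval is computed.

```python
import math

def mean_with_ci95(accuracies):
    """Average accuracy over repeated randomized evaluations with a
    normal-approximation 95% confidence half-width (an assumed
    convention, not taken from the paper)."""
    n = len(accuracies)
    mean = sum(accuracies) / n
    var = sum((a - mean) ** 2 for a in accuracies) / (n - 1)  # sample variance
    half_width = 1.96 * math.sqrt(var / n)                    # 1.96 * stderr
    return mean, half_width
```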
+
+In Fig. 3a, we compare the effectiveness of four attacks, each using a different pair of losses and decision rules with varying numbers of PGD steps and samples $n$ . The widely used EoT method performs the worst of the four. CE loss on the mean softmax probability performs better than EoT, confirming the observation made by Salman et al. (Salman et al. 2019). The linear loss and CE loss on average logits are better still and are consistently the strongest attacks across all hyperparameters. For the rest of this paper, we adopt the linear loss on mean logits as the main objective function.
+
+Connection to variance. As we predicted in Section 6.1, a stronger attack directly corresponds to lower variance. This hypothesis is confirmed by Fig. 4. For instance, the EoT baseline has the highest variance as well as the worst performance according to Fig. 5. On the other hand, the linear loss (Lin) has the lowest variance among the three loss functions (blue) and hence, it performs the best. The other three points in orange will be covered in the next section.
+
+### 6.3 Ensemble and Transfer Attacks
+
+RT can be regarded as an ensemble with each member sharing the same neural network parameters but applying different sets of transformations to the input (i.e., different $\theta$ ’s from random sampling). Consequently, we may view a white-box attack on RT defenses as a "partial" black-box attack on an ensemble of (infinitely) many models where the adversary wishes to "transfer" adversarial examples generated on some subset of the members to another unseen subset.
+
+Given this interpretation, we apply four techniques designed to enhance the transferability of adversarial examples to improve the attack success rate on RT defense. The techniques include momentum boosting (MB) (Dong et al. 2018), modifying backward passes by ignoring non-linear activation (LinBP) (Guo, Li, and Chen 2020) or by emphasizing the gradient through skip connections of ResNets more than through the residual block (SGM) (Wu et al. 2020), and simply using a targeted attack with the linear loss function (TG) (Zhao, Liu, and Larson 2021). In Fig. 3b, we compare these techniques combined with the best performing loss and decision rule from Section 6.2 (i.e., the linear loss on logits). Only SGM improves the attack success rate at all settings while the rest result in weaker attacks than the one without any of the techniques (denoted by "Linear (logits)" in Fig. 3a).
+
+SGM essentially normalizes the gradients and scales those from the residual blocks by a constant less than 1 (we use 0.5) to reduce their influence and prioritize the gradients from the skip connections. Wu et al. (2020) explain that SGM leads to better transferability because gradients through skip connections preserve "low-level information," which tends to transfer better. Intuitively, this agrees with our variance explanation: increased transferability implies a stronger agreement among gradient samples and hence less spread, i.e., lower variance.
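The scaling step of SGM can be sketched abstractly for one residual block $y = x + r(x)$. The helper below assumes access to the residual branch's vector-Jacobian product (in practice this is realized with backward hooks on the network), and it omits SGM's gradient normalization; the names are ours.

```python
import numpy as np

GAMMA = 0.5  # decay factor on residual-branch gradients (we use 0.5)

def sgm_backward(grad_out, residual_vjp):
    """SGM backward rule for a residual block y = x + r(x).

    The exact gradient w.r.t. x is grad_out + J_r^T grad_out; SGM
    down-weights the residual term by GAMMA so the skip connection
    dominates. residual_vjp(g) computes J_r^T g.
    """
    return grad_out + GAMMA * residual_vjp(grad_out)
```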
+
+### 6.4 Stochastic Optimization Algorithm
+
+While most attacks on deterministic models can use naive PGD to solve Eqn. 1 effectively, this is not the case for stochastic models like the RT defense. Here, the adversary only has access to noisy estimates of the gradients, making it a strictly more difficult problem, and techniques used in the deterministic case may no longer apply.
+
+As mentioned in Section 6.1, high-variance gradient estimates undermine the convergence rate of SGD. Thus, the attack should benefit from optimization techniques aimed at reducing the variance or speeding up the convergence of SGD. We first experiment with common optimizers such as SGD and Adam (Kingma and Ba 2015) with different hyperparameters, e.g., momentum, Nesterov acceleration, and learning rate schedules, to find the best setting for the linear loss with SGM. Based on this experiment, we found that a momentum term with an appropriate damping constant plays an important role in the attack success rate. Momentum is also well-known to accelerate and stabilize training of neural networks (Sutskever et al. 2013). Fig. 10a reports adversarial accuracy at varying attack iterations and indicates that a higher momentum constant leads to faster convergence and a higher attack success rate. However, the results are highly sensitive to this momentum constant, which also varies from one setting to another (e.g., number or types of transformations, dataset, etc.).
+
+
+Figure 5: Comparison of the optimizers for attacking an RT defense with $\epsilon = {16}/{255}, n = {10}$ on Imagenette dataset. All but the baseline (CE loss with EoT) use the linear loss with SGM, and all but AggMo $\left( {B = 6}\right)$ use the default hyperparameters. AggMo with $B = 6$ outperforms the other algorithms in terms of both the convergence rate and the final adversarial accuracy obtained. This result is not very sensitive to $B$ as any sufficiently large value $\left( { \geq 4}\right)$ yields the same outcome.
+
+To mitigate this issue, we introduce another optimizer, AggMo, which is designed precisely to be less sensitive to the choice of damping coefficient by aggregating $B$ momentum terms with different constants instead of one (Lucas et al. 2019). After only a few tries, we found a wide range of values of $B$ where AggMo outperforms SGD with a fine-tuned momentum constant (see Fig. 10b). Fig. 5 compares the attacks using different choices of optimizers to the baseline EoT attack. Here, the baseline can only reduce the adversarial accuracy from 89% to ${70}\%$ while our best attack manages to reach $6\%$ , over a ${4.3} \times$ improvement. This demonstrates that the optimizer plays a crucial role in the success of the attack, and that the RT defense, even with carefully and systematically chosen transformation hyperparameters, is not robust against adversarial examples.
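One AggMo step can be sketched as follows. The exponential damping schedule $\beta_b = 1 - 0.1^b$ matches the default proposed by Lucas et al. (2019), while the function name and the in-place list of velocity buffers are our own conventions.

```python
import numpy as np

def aggmo_step(param, grad, velocities, lr=0.1):
    """One AggMo update aggregating B = len(velocities) momentum
    buffers with damping constants 0, 0.9, 0.99, ... (updated in
    place), which makes the step far less sensitive to any single
    momentum choice."""
    B = len(velocities)
    betas = [1.0 - 0.1 ** b for b in range(B)]   # 0, 0.9, 0.99, ...
    for b in range(B):
        velocities[b] = betas[b] * velocities[b] - grad
    return param + (lr / B) * sum(velocities)
```

For an attack, `param` would be the perturbation $\delta$ and `grad` a stochastic gradient of the (negated) attack objective.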
+
+Furthermore, we note that without our loss function, using AggMo alone, the accuracy only goes down to ${23}\%$ and at a much slower rate. Conversely, when the linear loss and SGM are used with plain SGD (no momentum), the accuracy drops to ${51}\%$ . This signifies that all three techniques we deploy play important roles in the attack's effectiveness.
+
+### 6.5 Comparison with AutoAttack
+
+AutoAttack (Croce and Hein 2020) was proposed as a standardized benchmark for evaluating deterministic defenses against adversarial examples. It uses an ensemble of four different attacks that cover one another's weaknesses, one of which does not use gradients. AutoAttack has proven to be one of the strongest attacks currently available and is capable of catching defenses with false robustness caused by gradient obfuscation (Athalye, Carlini, and Wagner 2018).
+
+| Defenses | Imagenette Clean Acc. | Imagenette Adv. Acc. | CIFAR-10 Clean Acc. | CIFAR-10 Adv. Acc. |
+| --- | --- | --- | --- | --- |
+| Normal model | 95.41 | 0.00 | 95.10 | 0.00 |
+| Madry et al. (2018) | 78.25 | 37.10 | 81.90 | 45.30 |
+| Zhang et al. (2019) | 87.43 | 33.19 | 81.26 | 46.89 |
+| RT defense | ${89.04} \pm {0.34}$ | ${6.34} \pm {0.35}$ | ${81.12} \pm {0.54}$ | ${29.91} \pm {0.35}$ |
+| AdvRT defense | ${88.83} \pm {0.26}$ | ${8.68} \pm {0.52}$ | ${80.69} \pm {0.66}$ | ${41.30} \pm {0.49}$ |
+
+Table 3: Comparison of the RT and AdvRT defenses to prior robust deterministic models and a normally trained model. Both the RT and the AdvRT models on Imagenette lack adversarial robustness. Conversely, the RT defense on CIFAR-10 does bring substantial robustness, and combining it with adversarial training boosts the adversarial accuracy further. Nonetheless, they still fall behind the previously proposed deterministic models, including Madry et al. (2018) and Zhang et al. (2019). The largest number in each column is in bold.
+
+While not particularly designed for stochastic models, AutoAttack can be used to evaluate them when combined with EoT. We report the accuracy on adversarial examples generated by AutoAttack with all default hyperparameters in the "standard" mode and 10-sample EoT in Table 2. AutoAttack performs worse than both the baseline EoT and our attack on Imagenette and CIFAR-10 by a large margin. One reason is that AutoAttack is optimized for efficiency, so each of its attacks is usually terminated once a misclassification occurs. This is appropriate for deterministic models, but for stochastic ones such as an RT defense, the adversary is better off finding adversarial examples that maximize the expected loss instead of ones that are misclassified just once.
+
+To take this property into account, we include the accuracy reported by AutoAttack when it treats a sample as incorrect if it is misclassified at least once throughout the entire process. For Imagenette, the accuracies after each of the four attacks (APGD-CE, APGD-T, FAB, and Square) is applied sequentially are 82.03, 78.81, 78.03, and 77.34, respectively. Note that this is a one-time evaluation, so there is no error bar here. Needless to say, the adversarial accuracy computed this way is strictly lower than the one we reported in Table 2 and violates our threat model. However, it is still higher than that of the baseline EoT and our attack, suggesting that AutoAttack is ineffective against randomized models like RT defenses. AutoAttack also comes with a "random" mode for randomized models, which only uses APGD-CE and APGD-DLR with 20-sample EoT. The adversarial accuracies obtained from this mode are 85.62 and 83.83, or ${88.62} \pm {0.46}$ for the single-pass evaluation as in Table 2. This random mode performs worse than the standard version.
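For clarity, the at-least-once criterion can be sketched as follows (an illustrative helper, not part of our evaluation code): a sample counts as correct only if every randomized pass classifies it correctly.

```python
def at_least_once_accuracy(per_pass_preds, labels):
    """Accuracy where a sample is incorrect if it is misclassified
    in at least one randomized evaluation pass.

    per_pass_preds: list of prediction lists, one per pass (T x N).
    """
    n = len(labels)
    always_correct = sum(
        all(preds[i] == labels[i] for preds in per_pass_preds)
        for i in range(n)
    )
    return always_correct / n
```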
+
+## 7 Combining with Adversarial Training
+
+To deepen our investigation, we explore the possibility of combining the RT defense with adversarial training. However, this is a challenging problem in its own right. For normal deterministic models, 10-step PGD is sufficient for reaching adversarial accuracy close to that of the best known attack or the optimal adversarial accuracy. This is not the case for RT defenses: even our new attack still requires more than one thousand iterations before the adversarial accuracy starts to plateau. Ultimately, the robustness of adversarially trained models largely depends on the strength of the attack used to generate the adversarial examples, and using a weak attack means the obtained model will not be robust. A similar phenomenon is observed by Tramèr et al. (2018) and Wong, Rice, and Kolter (2020), where an adversarially trained model overfits to the weak FGSM attack but is shown to be non-robust under accurate evaluation. To test this hypothesis, we adversarially train the RT defense from Section 5 using our new attack with 50 iterations (already $5 \times$ the common number of steps) and call this defense "AdvRT." The attack step size is adjusted accordingly to $\epsilon /8$ .
+
+In Table 3, we confirm that training AdvRT this way results in a model with virtually no robustness improvement over the normal RT on Imagenette. On the other hand, the AdvRT trained on CIFAR-10 proves more promising, even though it is still not as robust as deterministic models trained with adversarial training or TRADES (Zhang et al. 2019). Based on this result, we conclude that a stronger attack on RT defenses, one that converges within far fewer iterations, will be necessary to make adversarial training successful. In theory, it might be possible to achieve a robust RT model with a 1,000-step attack on Imagenette, but this is too computationally intensive for us to verify, and it will not scale to any realistic setting.
+
+## 8 Conclusion
+
+While recent papers report state-of-the-art robustness with RT defenses, our evaluations show that RT generally underperforms existing defenses like adversarial training when met with a stronger attack, even after fine-tuning the hyperparameters of the defense. Through our experiments, we found that non-differentiability and high-variance gradients can seriously inhibit adversarial optimization, so we recommend using only differentiable transformations along with their exact gradients in the evaluation of future RT defenses. In this setting, we propose a new state-of-the-art attack that improves significantly over the baseline (PGD with EoT) and show that RT defenses, as well as their adversarially trained counterparts, are not as robust to adversarial examples as previously believed.
+
+## A Experiment Details
+
+### A.1 Details on the Image Transformations
+
+The exact implementation of RT models and all the transformations will be released. Here, we provide some details on each of the transformation types and groups. Then, we describe how we approximate some non-differentiable functions with differentiable ones.
+
+## Noise injection
+
+- Erase: Set the pixels in a box with random size and location to zero.
+
+- Gaussian noise: Add Gaussian noise to each pixel.
+
+- Pepper: Zero out pixels with some probability.
+
+- Poisson noise: Add Poisson noise to each pixel.
+
+- Salt: Set pixels to one with some probability.
+
+- Speckle noise: Add speckle noise to each pixel.
+
+- Uniform noise: Add uniform noise to each pixel.
+
+## Blur filtering
+
+- Box blur: Blur with randomly sized mean filter.
+
+- Gaussian blur: Blur with randomly sized Gaussian filter with randomly chosen variance.
+
+- Median blur: Blur with randomly sized median filter.
+
+- Motion blur: Blur with kernel for random motion angle and direction.
+
+## Color-space alteration
+
+- HSV: Convert to HSV color-space, add uniform noise, then convert back.
+
+- LAB: Convert to LAB color-space, add uniform noise, then convert back.
+
+- Gray scale mix: Mix channels with random proportions.
+
+- Gray scale partial mix: Mix channels with random proportions, then mix gray image with each channel with random proportions.
+
+- Two channel gray scale mix: Mix two random channels with random proportions.
+
+- One channel partial gray: Mix two random channels with random proportions, then mix gray image with other channel.
+
+- XYZ: Convert to XYZ color-space, add uniform noise, then convert back.
+
+- YUV: Convert to YUV color-space, add uniform noise, then convert back.
+
+## Edge detection
+
+- Laplacian: Apply Laplacian filter.
+
+- Sobel: Apply the Sobel operator.
+
+## Lossy compression
+
+- JPEG compression: Compress image using JPEG to a random quality.
+
+- Color precision reduction: Reduce color precision to a random number of bins.
+
+- FFT perturbation: Perform FFT on image and remove each component with some probability.
+
+## Geometric transforms
+
+- Affine: Perform random affine transformation on image.
+
+- Crop: Crop image randomly and resize to original shape.
+
+- Horizontal flip: Flip image across the vertical.
+
+- Swirl: Swirl the pixels of an image with random radius and strength.
+
+- Vertical flip: Flip image across the horizontal.
+
+## Stylization
+
+- Color jitter: Randomly alter the brightness, contrast, and saturation.
+
+- Gamma: Randomly alter gamma.
+
+- Sharpen: Apply sharpness filter with random strength.
+
+- Solarize: Solarize the image.
+
+## Non-differentiable (for BPDA Tests Only)
+
+- Adaptive histogram: Equalize histogram in patches of random kernel size.
+
+- Chambolle denoise: Apply Chambolle's total variation denoising algorithm with random weight (can be implemented differentiably but was not due to time constraints).
+
+- Contrast stretching: Pick a random minimum and maximum pixel value to rescale intensities (can be implemented differentiably but was not due to time constraints).
+
+- Histogram: Equalize histogram using a random number of bins.
+
+## Unused transforms from BaRT
+
+- Seam carving: Algorithm used in Raff et al. (2019) has been patented and is no longer available for open-source use.
+
+- Wavelet denoising: The implementation in Raff et al. (2019) is incomplete.
+
+- Salt & pepper: We have already used salt and pepper noise separately.
+
+- Non-local means denoising: The implementation of NL means denoising in Raff et al. (2019) is too slow.
+
+### A.2 Experiment Details
+
+All of the experiments are evaluated on 1000 randomly chosen test samples. Since we choose the default $n$ to be 20 for inference and 10 for the attacks, the experiments are at least 10 times more expensive than usual, and we cannot afford enough computation to run a large number of experiments on the entire test set. The networks used in this paper are ResNet-34 (He et al. 2016a) for Imagenette and Pre-activation ResNet-20 (He et al. 2016b) for CIFAR-10. In all of the experiments, we use a learning rate of 0.05, a batch size of 128, and a weight decay of 0.0005. We use a cosine annealing schedule (Loshchilov and Hutter 2017) for the learning rate with a period of 10 epochs, which doubles after every period. All models are trained for 70 epochs, and we save the weights with the highest accuracy on the held-out validation data (which does not overlap with the training or test set). For adversarially trained RT defenses, the cosine annealing step is set to 10 and the training lasts for 70 epochs to reduce the computation. To help the training converge faster, we pre-train these RT models on clean data before turning on adversarial training, as suggested by Gupta, Dube, and Verma (2020).
+
+
+Figure 6: Fully-convolutional BPDA network from Raff et al. (2019). The network has six convolutional layers, all with a stride of 1. The first five layers have a kernel size of 5 and padding of 2, and the last layer has a kernel size of 3 and padding of 1. The input consists of more than 5 channels: 3 for the image RGB channels, 2 for CoordConv channels that include the coordinates of each pixel at that pixel's location, and the remaining channels hold the parameters of the transformation copied at each pixel location. The network contains a skip connection from the input to each layer except the final layer.
+
+### A.3 Details on BPDA Experiments
+
+We used the following setup for the differentiability related experiments conducted in Section 4.2:
+
+- Each accuracy is an average over 10 trials on the same set of 1000 Imagenette images.
+
+- The defense samples $S = {10}$ transforms from the full set of $K$ transforms.
+
+- The image classifier uses a ResNet-50 architecture like in Raff et al. (2019) trained on transformed images for 30 epochs.
+
+- The attack uses 40 PGD steps of size $4/255$ with $\epsilon = 16/255$ to minimize the EoT objective.
+
+The BPDA network architecture is the same used by Raff et al. (2019) and is outlined in Fig. 6. Here are more details on BPDA training:
+
+- All BPDA networks were trained using Adam with a learning rate of 0.01 for 10 epochs.
+
+- All networks achieve a per-pixel MSE below 0.01. The outputs of the BPDA networks are compared to the true transform outputs for several different transform types in Fig. 7.
+
+The specific set of transforms used in each defense are the following:
+
+- BaRT (all): adaptive histogram, histogram, bilateral blur, box blur, Gaussian blur, median blur, contrast stretching, FFT, gray scale mix, gray scale partial mix, two channel gray scale mix, one channel gray scale mix, HSV, LAB, XYZ, YUV, JPEG compression, Gaussian noise, Poisson noise, salt, pepper, color precision reduction, swirl, Chambolle denoising, crop.
+
+- BaRT (only differentiable): all of the BaRT all transforms excluding adaptive histogram, histogram, contrast stretching, and Chambolle denoising.
+
+## B Details of the Attacks
+
+### B.1 Differentiable Approximation
+
+Some of the transformations contain non-differentiable operations which can be easily approximated with differentiable functions. Specifically, we approximate the rounding function in JPEG compression and color precision reduction, and the modulo operator in all transformations that require conversion between RGB and HSV color-spaces (HSV alteration and color jitter). Note that we are not using the non-differentiable transform on the forward pass and a differentiable approximation on the backward pass (like in BPDA). Instead, we are using the differentiable version both when performing the forward pass and when computing the gradient.
+
+We take the approximation of the rounding function from Shin and Song (2017) shown in Eqn. 14.
+
+$$
+\lfloor x{\rceil }_{\text{approx }} = \lfloor x\rceil + {\left( x-\lfloor x\rceil \right) }^{3} \tag{14}
+$$
+
+For the modulo or the remainder function, we approximate it using the above differentiable rounding function as a basis.
+
+$$
+\operatorname{mod}\left( x\right) = \left\{ \begin{array}{ll} x - \lfloor x\rceil & \text{ if }x > \lfloor x\rceil \\ x - \lfloor x\rceil + 1 & \text{ otherwise } \end{array}\right. \tag{15}
+$$
+
+To obtain a differentiable approximation, we can replace the rounding operator with its smooth version in Eqn. 14. This function (approximately) returns decimal numbers or a fractional part of a given real number, and it can be scaled to approximate a modulo operator with any divisor.
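The two approximations above translate directly into code. The NumPy sketch below is ours, with illustrative names, and handles only a divisor of 1 (rescale the input and output for other divisors).

```python
import numpy as np

def round_approx(x):
    """Differentiable rounding (Eqn. 14): round(x) plus a cubic
    correction whose derivative is nonzero almost everywhere."""
    r = np.round(x)
    return r + (x - r) ** 3

def mod1_approx(x):
    """Differentiable approximation of x mod 1 (Eqn. 15), using the
    smooth rounding above in place of the exact one."""
    r = round_approx(x)
    return np.where(x > r, x - r, x - r + 1.0)
```

For example, `mod1_approx(1.7)` returns roughly 0.727 rather than the exact 0.7, with the gap shrinking as the input approaches an integer.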
+
+Note that these operators (rounding and modulo) are step functions and are differentiable almost everywhere, like ReLU. However, unlike ReLU's, their derivatives are zero everywhere they exist, so a first-order optimization algorithm would still fail on these functions.
+
+### B.2 Effect of the Permutation of the Transformations
+
+We mentioned in Section 3.2 that a permutation of the transforms ${\left\{ {\tau }^{\left( s\right) }\right\} }_{s = 1}^{S}$ is randomly sampled for each of the $n$ samples. However, we found that in practice, this leads to high-variance estimates of the gradients. On the other hand, fixing the permutation across the $n$ samples in each attack iteration (i.e., $\tau$ is fixed but not $\alpha$ or $\beta$ ) results in lower variance and hence a stronger attack, even though the gradient estimates are biased because $\tau$ is fixed. For instance, with a fixed permutation, the adversarial accuracy achieved by the EoT attack is 51.44, whereas the baseline EoT with a completely random permutation yields 70.79. The variance also reduces from 0.97 to 0.94.
+
+Additionally, the fixed permutation reduces the computation time as all transformations can be applied in batch. All of the attacks reported in this paper, apart from the baseline, use this fixed permutation.
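The sampling difference can be sketched as follows (illustrative helper name, operating on transform identifiers rather than actual transforms):

```python
import random

def sample_permutations(transforms, n, fix_permutation=True):
    """Return the transform ordering used for each of the n EoT
    samples in one attack step. Fixing a single random permutation
    tau across all n samples gives lower-variance (though biased)
    gradient estimates and lets the transforms be applied in batch.
    """
    if fix_permutation:
        tau = random.sample(transforms, len(transforms))
        return [list(tau) for _ in range(n)]
    # baseline: an independent permutation per sample
    return [random.sample(transforms, len(transforms)) for _ in range(n)]
```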
+
+### B.3 Variance of Gradients
+
+We have described how we compute the sample variance of the gradients in Section 6.1. Here, we provide detailed calculations of the other three metrics. First, the unbiased variance is computed as normal with an additional normalization by dimension.
+
+$$
+{\mu }_{n} \mathrel{\text{:=}} \frac{1}{n}\mathop{\sum }\limits_{{j = 1}}^{n}\nabla {\widehat{G}}_{1, j} \tag{16}
+$$
+
+$$
+{\sigma }_{n}^{2} \mathrel{\text{:=}} \frac{1}{d}\frac{1}{n - 1}\mathop{\sum }\limits_{{j = 1}}^{n}{\begin{Vmatrix}{\mu }_{n} - {\widehat{G}}_{1, j}\end{Vmatrix}}_{2}^{2} \tag{17}
+$$
+
+where ${\widehat{G}}_{1, j}$ denotes the signed gradients whose loss is estimated with one sample, as defined in Algorithm 1.
+
+(a) Original; (b) Adaptive histogram; (c) Box blur; (d) Poisson noise; (e) HSV color alteration; (f) FFT; (g) Crop
+
+Figure 7: Comparison of the true transformed outputs (top row) and outputs of respective BPDA networks (bottom row) for six different transformation types.
+
+| Attacks | Adv. acc. (50 steps, $n = 10$) | Adv. acc. (200 steps, $n = 10$) | Adv. acc. (800 steps, $n = 10$) | Adv. acc. ($n = 5$, 200 steps) | Adv. acc. ($n = 10$, 200 steps) | Adv. acc. ($n = 20$, 200 steps) |
+| --- | --- | --- | --- | --- | --- | --- |
+| Baseline | ${82.34} \pm {0.43}$ | ${73.36} \pm {0.37}$ | ${71.70} \pm {0.39}$ | ${74.81} \pm {0.47}$ | ${74.46} \pm {0.55}$ | ${76.06} \pm {0.29}$ |
+| CE (softmax) | ${82.37} \pm {0.39}$ | ${71.05} \pm {0.36}$ | ${65.06} \pm {0.39}$ | ${73.82} \pm {0.35}$ | ${70.71} \pm {0.53}$ | ${68.51} \pm {0.33}$ |
+| Linear (logits) | ${80.67} \pm {0.50}$ | ${66.11} \pm {0.58}$ | ${58.26} \pm {0.62}$ | ${70.67} \pm {0.41}$ | ${66.59} \pm {0.57}$ | ${62.48} \pm {0.41}$ |
+| Linear+MB | ${78.51} \pm {0.45}$ | ${72.66} \pm {0.50}$ | ${65.28} \pm {0.41}$ | ${72.47} \pm {0.39}$ | ${72.51} \pm {0.55}$ | ${71.06} \pm {0.32}$ |
+| Linear+LinBP | ${82.90} \pm {0.50}$ | ${70.57} \pm {0.32}$ | ${65.15} \pm {0.43}$ | ${75.24} \pm {0.35}$ | ${72.73} \pm {0.40}$ | ${70.02} \pm {0.31}$ |
+| Linear+SGM | ${80.10} \pm {0.43}$ | $\mathbf{{63.75}} \pm {0.21}$ | $\mathbf{{51.68}} \pm {0.35}$ | $\mathbf{{66.93}} \pm {0.43}$ | $\mathbf{{62.57}} \pm {0.31}$ | ${59.61} \pm {0.55}$ |
+| Linear+TG | ${80.78} \pm {0.56}$ | ${68.70} \pm {0.34}$ | $\mathbf{{59.69}} \pm {0.57}$ | ${71.72} \pm {0.41}$ | ${67.84} \pm {0.50}$ | ${65.63} \pm {0.50}$ |
+
+Table 4: Comparison of different attack techniques on our best RT model. Lower means stronger attack. This table only shows the numerical results plotted in Fig. 3.
+
+
+Figure 8: (a) Cosine similarity and (b) percentage of sign matches for three pairs of attack loss functions and decision rules: CE loss with EoT "Baseline", CE loss on mean softmax probability "CE (softmax)", and linear loss on logits "Lin (logits)".
+
+
+Figure 9: (a) Cosine similarity and (b) percentage of sign matches for the linear loss and its combinations with three transfer attack techniques: Linear Backward Pass "LinBP", Skip Gradient Method "SGM", and targeted "TG".
+
+The cosine similarity is computed between the mean gradient and all $n$ samples and then averaged.
+
+$$
+{\cos }_{n} \mathrel{\text{:=}} \frac{1}{n}\mathop{\sum }\limits_{{j = 1}}^{n}\frac{\left\langle {\widehat{G}}_{1, j},{\mu }_{n}\right\rangle }{{\begin{Vmatrix}{\widehat{G}}_{1, j}\end{Vmatrix}}_{2} \cdot {\begin{Vmatrix}{\mu }_{n}\end{Vmatrix}}_{2}} \tag{18}
+$$
+
+Lastly, the sign matching percentage is
+
+$$
+\operatorname{sign\_match}_{n} \mathrel{\text{:=}} \frac{1}{n}\mathop{\sum }\limits_{{j = 1}}^{n}\frac{1}{d}\mathop{\sum }\limits_{{i = 1}}^{d}\mathbb{1}\left\{ {{\left\lbrack {\widehat{G}}_{1, j}\right\rbrack }_{i} = {\left\lbrack {\mu }_{n}\right\rbrack }_{i}}\right\} \tag{19}
+$$
+
+Fig. 8 and Fig. 9 plot the cosine similarity and the sign matching for varying loss functions and varying transfer attacks, respectively. Similarly to Fig. 4, better attacks result in less spread of the gradient samples, which corresponds to higher cosine similarity and a higher sign matching percentage.
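The metrics in Eqns. 16-19 can be computed together as in the NumPy sketch below (our own helper); note that the sign-match indicator is implemented by comparing signs, since entries of the mean gradient are rarely exactly $\pm 1$.

```python
import numpy as np

def gradient_agreement_metrics(G):
    """Given n one-sample signed-gradient estimates G of shape
    (n, d), return the dimension-normalized variance (Eqns. 16-17),
    the mean cosine similarity to the sample mean (Eqn. 18), and
    the sign-match rate (Eqn. 19)."""
    n, d = G.shape
    mu = G.mean(axis=0)                                      # Eqn. 16
    var = ((G - mu) ** 2).sum() / (d * (n - 1))              # Eqn. 17
    cos = np.mean(G @ mu /
                  (np.linalg.norm(G, axis=1) * np.linalg.norm(mu)))  # Eqn. 18
    sign_match = np.mean(np.sign(G) == np.sign(mu))          # Eqn. 19
    return var, cos, sign_match
```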
+
+## C Details on Bayesian Optimization
+
+One major challenge in implementing an RT defense is selecting the defense hyperparameters, which include the $K$ transformation types, the number of transformations to apply ($S$), and their parameters ($a$ and $p$). To improve the robustness of the RT defense, we use Bayesian optimization (BO), a well-known black-box optimization technique, to fine-tune $a$ and $p$ (Snoek, Larochelle, and Adams 2012). In this case, BO models hyperparameter tuning as a Gaussian process where the objective function takes in $a$ and $p$ , trains a neural network as a backbone for an RT defense, and outputs the adversarial accuracy under some pre-defined ${\ell }_{\infty }$ -budget $\epsilon$ as the metric to optimize.
+
+Since BO quickly becomes ineffective as the dimension of the search space increases, we choose to tune either $a$ or $p$ , never both, for each of the $K$ transformation types. For transformations that have a tunable $a$ , we fix $p = 1$ (e.g., noise injection, affine transform). For transformations without an adjustable strength $a$ , we only tune $p$ (e.g., Laplacian filter, horizontal flip). Additionally, because BO does not natively support categorical or integral variables, we experiment with different choices of $K$ and $S$ without the use of BO. Our BO problem must therefore optimize over $K$ (up to 33) variables, far more than are typically present in model hyperparameter tuning with BO.
+
+
+Figure 10: Effectiveness of the optimizers, (a) SGD and (b) AggMo, with varying momentum parameters. Increasing $B$ for AggMo in this case monotonically reduces the final adversarial accuracy until $B = 4$ where it plateaus. This is more predictable and stable than increasing the momentum constant in SGD.
+
+Mathematically, the objective function $\psi$ is defined as
+
+$$
+\psi : {\left\lbrack 0,1\right\rbrack }^{K} \rightarrow {\mathcal{R}}_{\infty ,\epsilon } \in \left\lbrack {0,1}\right\rbrack \tag{20}
+$$
+
+where the input is $K$ real numbers between 0 and 1, and ${\mathcal{R}}_{\infty ,\epsilon }$ denotes the adversarial accuracy or the accuracy on ${x}_{\text{adv }}$ as defined in Eqn. 1. Since $\psi$ is very expensive to evaluate as it involves training and testing a large neural network, we employ the following strategies to reduce the computation: (1) only a subset of the training and validation set is used, (2) the network is trained for fewer epochs with a cosine annealing learning rate schedule to speed up convergence (Loshchilov and Hutter 2017), and (3) the attack used for computing ${\mathcal{R}}_{\infty ,\epsilon }$ is weaker but faster. Even with these speedups, one BO run still takes approximately two days to complete on two GPUs (Nvidia GeForce GTX 1080 Ti). We also experimented with other sophisticated hyperparameter-tuning algorithms based on Gaussian processes (Bergstra, Yamins, and Cox 2013; Kandasamy et al. 2020; Falkner, Klein, and Hutter 2018) but do not find them more effective. We summarize the main steps for tuning and training an RT defense in Algorithm 2.
+
+Algorithm 2: Tuning and training RT defense.
+
+---
+
+Input: set of transformation types, $n$, $p$, $\epsilon$
+
+Output: ${g}^{ * }\left( \cdot \right)$, $\mathcal{R}$, ${\mathcal{R}}_{p,\epsilon }$
+
+Data: training data $\left( {\mathbf{X}}^{\text{train}},{\mathbf{Y}}^{\text{train}}\right)$, test data $\left( {\mathbf{X}}^{\text{test}},{\mathbf{Y}}^{\text{test}}\right)$
+
+// Bayesian optimization (BO)
+
+1. Sub-sample $\left( {\mathbf{X}}^{\text{train}},{\mathbf{Y}}^{\text{train}}\right)$ and split it into BO's training data $\left( {\mathbf{X}}_{\mathrm{BO}}^{\text{train}},{\mathbf{Y}}_{\mathrm{BO}}^{\text{train}}\right)$ and validation data $\left( {\mathbf{X}}_{\mathrm{BO}}^{\text{val}},{\mathbf{Y}}_{\mathrm{BO}}^{\text{val}}\right)$.
+2. ${\mathcal{R}}_{p,\epsilon }^{ * } \leftarrow 0$ // best adversarial accuracy
+3. ${\left\{ \left( {p}_{i}^{ * },{\alpha }_{i}^{ * }\right) \right\} }_{i = 1}^{K} \leftarrow 0$ // best RT hyperparameters
+4. for step $\leftarrow 0$ to $MAX\_BO\_STEPS$ do // one BO trial per iteration
+5. BO specifies ${\left\{ \left( {p}_{i},{\alpha }_{i}\right) \right\} }_{i = 1}^{K}$ to evaluate.
+6. Train an RT model on $\left( {\mathbf{X}}_{\mathrm{BO}}^{\text{train}},{\mathbf{Y}}_{\mathrm{BO}}^{\text{train}}\right)$ with hyperparameters ${\left\{ \left( {p}_{i},{\alpha }_{i}\right) \right\} }_{i = 1}^{K}$ to obtain $g$.
+7. Test $g$ by computing ${\mathcal{R}}_{p,\epsilon }$ on $\left( {\mathbf{X}}_{\mathrm{BO}}^{\text{val}},{\mathbf{Y}}_{\mathrm{BO}}^{\text{val}}\right)$ using a weak but fast attack.
+8. if ${\mathcal{R}}_{p,\epsilon } > {\mathcal{R}}_{p,\epsilon }^{ * }$ then
+9. ${\mathcal{R}}_{p,\epsilon }^{ * } \leftarrow {\mathcal{R}}_{p,\epsilon }$
+10. ${\left\{ \left( {p}_{i}^{ * },{\alpha }_{i}^{ * }\right) \right\} }_{i = 1}^{K} \leftarrow {\left\{ \left( {p}_{i},{\alpha }_{i}\right) \right\} }_{i = 1}^{K}$
+11. else if no improvement for some steps then
+12. break
+
+// Full training of RT
+
+13. Train an RT model on $\left( {\mathbf{X}}^{\text{train}},{\mathbf{Y}}^{\text{train}}\right)$ with the best hyperparameters ${\left\{ \left( {p}_{i}^{ * },{\alpha }_{i}^{ * }\right) \right\} }_{i = 1}^{K}$ to obtain ${g}^{ * }$.
+14. Evaluate ${g}^{ * }$ by computing $\mathcal{R}$ and ${\mathcal{R}}_{p,\epsilon }$ on $\left( {\mathbf{X}}^{\text{test}},{\mathbf{Y}}^{\text{test}}\right)$ using a strong attack.
+
+---
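To make the control flow concrete, the loop of Algorithm 2 can be sketched in plain Python. This is an illustrative sketch only: the BO proposal step is replaced by uniform random search, and `train_rt_model` and `weak_attack_accuracy` are hypothetical stand-ins for the expensive training and attack-evaluation steps.

```python
import random

# Hypothetical stand-ins for the expensive steps of Algorithm 2.
def train_rt_model(hparams):
    # Stand-in: the "model" is just its hyperparameters.
    return hparams

def weak_attack_accuracy(model):
    # Stand-in objective: peaks when every (p_i, alpha_i) is near 0.5.
    return 1.0 - sum((p - 0.5) ** 2 + (a - 0.5) ** 2 for p, a in model) / len(model)

def tune_rt(K, max_bo_steps=160, patience=40, seed=0):
    """Random-search stand-in for the BO loop (lines 2-12 of Algorithm 2)."""
    rng = random.Random(seed)
    best_acc, best_hparams, since_improved = 0.0, None, 0
    for _ in range(max_bo_steps):
        # A real run would let BO *propose* {(p_i, alpha_i)}; we sample uniformly.
        hparams = [(rng.random(), rng.random()) for _ in range(K)]
        acc = weak_attack_accuracy(train_rt_model(hparams))
        if acc > best_acc:
            best_acc, best_hparams, since_improved = acc, hparams, 0
        else:
            since_improved += 1
            if since_improved >= patience:  # early stopping (lines 11-12)
                break
    # Lines 13-14: retrain on the full data with best_hparams and evaluate
    # with a strong attack (omitted in this sketch).
    return best_hparams, best_acc
```

In the paper, the proposal step is Bayesian optimization (Nogueira 2014) rather than random search, and the objective is a full train-and-attack cycle.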
+
+We use the Ray Tune library (Liaw et al. 2018) for RT's hyperparameter tuning in Python. The Bayesian optimization tool is implemented by Nogueira (2014), following analyses and instructions by Snoek, Larochelle, and Adams (2012) and Brochu, Cora, and de Freitas (2010). As mentioned in Section 5, we sub-sample the data to reduce computation for each BO trial. Specifically, we use ${20}\%$ and ${10}\%$ of the training samples for Imagenette and CIFAR-10, respectively (Algorithm 2, line 1), since Imagenette has a much smaller number of samples in total. The models are trained with the same transformations and hyperparameters used during inference; here, $n$ is set to 1 during training, just as in standard data augmentation. We use 200 samples to evaluate each BO run in line 7 of Algorithm 2, with an attack of only 100 steps and $n = {10}$ .
+
+One BO experiment executes two BO runs in parallel. The maximum number of BO runs is 160, but we terminate the experiment if no improvement has been made in the last 40 runs after a minimum of 80 runs have taken place. The runtime depends on $S$ and the transformation types used. In our typical case, when all 33 transformation types are used and $S = {14}$ , one BO run takes almost an hour on an Nvidia GeForce GTX 1080 Ti for Imagenette. One BO experiment then takes about two days to finish.
+
+
+Figure 11: Clean accuracy of our best RT model computed with three decision rules for obtaining the final prediction from the $n$ output samples. The rules are majority vote (red), average softmax probability (blue), and average logits (green). The shaded areas represent the ${95}\%$ confidence interval for each decision rule.
+
+In lines 13 and 14 of Algorithm 2, we now use the full training set and 1000 test samples as mentioned earlier. During the full training, $n$ is set to four, which increases the training time approximately fourfold. We find that using a larger $n$ benefits both the clean and the adversarial accuracy, but $n$ larger than four does not make any significant difference.
+
+### C.1 Details on the Final RT Model
+
+We run multiple BO experiments (Algorithm 2) on different subsets of transformation types to identify which transformations are most and least effective, in order to reduce $K$ as well as the number of hyperparameters our final run of BO has to tune. We then repeat Algorithm 2, initialized with the input-output pairs from the prior runs of BO, to obtain a new set of hyperparameters. Finally, we remove the transformations whose $p$ or $\alpha$ has been set to zero by the first run of BO, and we run BO once more with this filtered subset of transformations. At the end of this expensive procedure, we obtain the best and final RT model that we use in the experiments throughout this paper. For Imagenette, the final set of 18 transformation types used in this model is color jitter, erase, gamma, affine, horizontal flip, vertical flip, Laplacian filter, Sobel filter, Gaussian blur, median blur, motion blur, Poisson noise, FFT, JPEG compression, color precision reduction, salt noise, sharpen, and solarize. $S$ is set to 14.
+
+## D Additional Experiments on the RT Model
+
+### D.1 Decision Rules and Number of Samples
+
+Fig. 11 and Fig. 12 compare three decision rules that aggregate the $n$ outputs of the RT model to produce the final prediction $\widehat{y}\left( x\right)$ given an input $x$ . We choose the average softmax probability rule for all of our RT models because it provides a good trade-off between clean accuracy and robustness: majority vote has poor clean accuracy, and averaging logits has poor robustness.
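The three rules are simple to write down; the NumPy sketch below (function name and toy logits are ours, not from the paper) shows one way to implement them for per-sample logits of shape `(n, C)`.

```python
import numpy as np

def predict(logits, rule="avg_probs"):
    """Combine the n per-sample logits (shape (n, C)) into one class prediction.

    rule: "majority"   - majority vote over per-sample hard labels,
          "avg_probs"  - average softmax probabilities (the rule chosen here),
          "avg_logits" - average raw logits.
    """
    if rule == "majority":
        votes = np.argmax(logits, axis=1)
        return int(np.bincount(votes, minlength=logits.shape[1]).argmax())
    if rule == "avg_probs":
        probs = np.exp(logits - logits.max(axis=1, keepdims=True))
        probs /= probs.sum(axis=1, keepdims=True)
        return int(probs.mean(axis=0).argmax())
    if rule == "avg_logits":
        return int(logits.mean(axis=0).argmax())
    raise ValueError(rule)
```

The rules can genuinely disagree: for logits `[[5, 0], [-1, 0.5], [-1, 0.5]]`, majority vote and averaged softmax return class 1 while averaged logits returns class 0, because the single confident sample dominates the logit average.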
+
+
+Figure 12: Adversarial accuracy $\left( {\epsilon = {16}/{255}}\right)$ of our best RT model computed with three decision rules for obtaining the final prediction from the $n$ output samples. The rules are majority vote (red), average softmax probability (blue), and average logits (green). The shaded areas represent the 95% confidence interval for each decision rule.
+
+Table 5: RT's performance when only one of the transformation groups is applied. The attack is Linear+Adam+SGM with 200 steps and $n = {20}$ .
+
+| Used Transformations | Clean Acc. | Adv. Acc. |
| --- | --- | --- |
| Noise injection | ${80.93} \pm {0.44}$ | ${8.35} \pm {0.20}$ |
| Blur filter | ${97.32} \pm {0.20}$ | ${0.00} \pm {0.00}$ |
| Color space | ${94.40} \pm {0.53}$ | ${0.00} \pm {0.00}$ |
| Edge detection | ${97.64} \pm {0.09}$ | ${0.00} \pm {0.00}$ |
| Lossy compression | ${83.56} \pm {0.66}$ | ${3.56} \pm {0.26}$ |
| Geometric transforms | ${88.42} \pm {0.28}$ | ${0.83} \pm {0.21}$ |
| Stylization | ${98.31} \pm {0.09}$ | ${0.00} \pm {0.00}$ |
+
+### D.2 Importance of the Transformation Groups
+
+Choosing the best set of transformation types to use is a computationally expensive problem. There are many more transformations that can be applied outside of the 33 types we choose, and the number of possible combinations grows exponentially. BO gives us an approximate solution but is by no means perfect. Here, we take a step further to understand the importance of each transformation group. Table 5 gives an alternative way to gauge the contribution of each transformation group. According to this experiment, noise injection appears most robust followed by lossy compression and geometric transformations. However, this result is not very informative as most of the groups have zero adversarial accuracy, and the rest are likely to also reduce to zero given more attack steps. This result also surprisingly follows the commonly observed robustness-accuracy trade-off (Tsipras et al. 2019).
+
+
+Figure 13: Clean and adversarial accuracy of RT models obtained after running Algorithm 2 for different values of $S$ on CIFAR-10.
+
+### D.3 Number of Transformations
+
+We test the effect of the transform permutation size $S$ on the clean and the robust accuracy of RT models (Fig. 13). We run Bayesian optimization experiments for different values of $S$ using all 33 transformation types, and all of the models are trained using the same procedure. Fig. 13 shows that generally more transformations (larger $S$ ) increase robustness but lower accuracy on benign samples.
+
+## References
+
+Athalye, A.; Carlini, N.; and Wagner, D. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. In Dy, J.; and Krause, A., eds., Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, 274-283. Stockholmsmässan, Stockholm Sweden: PMLR.
+
+Athalye, A.; Engstrom, L.; Ilyas, A.; and Kwok, K. 2018. Synthesizing Robust Adversarial Examples. In Dy, J.; and Krause, A., eds., Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, 284-293. Stockholmsmässan, Stockholm Sweden: PMLR.
+
+Bender, C.; Li, Y.; Shi, Y.; Reiter, M. K.; and Oliva, J. 2020. Defense through Diverse Directions. In III, H. D.; and Singh, A., eds., Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, 756-766. PMLR.
+
+Bergstra, J.; Yamins, D.; and Cox, D. D. 2013. Making a Science of Model Search: Hyperparameter Optimization in Hundreds of Dimensions for Vision Architectures. In Proceedings of the 30th International Conference on International Conference on Machine Learning - Volume 28, ICML'13, I-115-I-123. Atlanta, GA, USA: JMLR.org.
+
+Bernstein, J.; Wang, Y.-X.; Azizzadenesheli, K.; and Anandkumar, A. 2018. signSGD: Compressed Optimisation for Non-Convex Problems. In Dy, J.; and Krause, A., eds., Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, 560-569. PMLR.
+
+Biggio, B.; Corona, I.; Maiorca, D.; Nelson, B.; Šrndić, N.; Laskov, P.; Giacinto, G.; and Roli, F. 2013. Evasion Attacks against Machine Learning at Test Time. In Blockeel, H.; Kersting, K.; Nijssen, S.; and Zelezný, F., eds., Machine Learning and Knowledge Discovery in Databases, 387-402. Berlin, Heidelberg: Springer Berlin Heidelberg. ISBN 978-3-642-40994-3.
+
+Brochu, E.; Cora, V. M.; and de Freitas, N. 2010. A Tutorial on Bayesian Optimization of Expensive Cost Functions, with Application to Active User Modeling and Hierarchical Reinforcement Learning. arXiv:1012.2599 [cs].
+
+Cohen, J.; Rosenfeld, E.; and Kolter, Z. 2019. Certified Adversarial Robustness via Randomized Smoothing. In Chaudhuri, K.; and Salakhutdinov, R., eds., Proceedings of the 36th International Conference on Machine Learning, volume 97 of Proceedings of Machine Learning Research, 1310-1320. PMLR.
+
+Croce, F.; and Hein, M. 2020. Reliable Evaluation of Adversarial Robustness with an Ensemble of Diverse Parameter-Free Attacks. In III, H. D.; and Singh, A., eds., Proceedings of the 37th International Conference on Machine Learning, volume 119 of Proceedings of Machine Learning Research, 2206-2216. PMLR.
+
+Dhillon, G. S.; Azizzadenesheli, K.; Bernstein, J. D.; Kossaifi, J.; Khanna, A.; Lipton, Z. C.; and Anandkumar, A. 2018. Stochastic Activation Pruning for Robust Adversarial Defense. In International Conference on Learning Representations.
+
+Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting Adversarial Attacks with Momentum. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR). IEEE.
+
+Falkner, S.; Klein, A.; and Hutter, F. 2018. BOHB: Robust and Efficient Hyperparameter Optimization at Scale. In Dy, J.; and Krause, A., eds., Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, 1437-1446. Stockholmsmässan, Stockholm Sweden: PMLR.
+
+Ghadimi, S.; and Lan, G. 2013. Stochastic First- and Zeroth-Order Methods for Nonconvex Stochastic Programming. SIAM Journal on Optimization, 23(4): 2341-2368.
+
+Goodfellow, I.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations.
+
+Guo, C.; Rana, M.; Cisse, M.; and van der Maaten, L. 2018. Countering Adversarial Images Using Input Transformations. In International Conference on Learning Representations.
+
+Guo, Y.; Li, Q.; and Chen, H. 2020. Backpropagating Linearly Improves Transferability of Adversarial Examples. In NeurIPS.
+
+Gupta, S.; Dube, P.; and Verma, A. 2020. Improving the Affordability of Robustness Training for DNNs. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016a. Deep Residual Learning for Image Recognition. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770- 778.
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016b. Identity Mappings in Deep Residual Networks. In European Conference on Computer Vision, 630-645. Springer.
+
+He, W.; Li, B.; and Song, D. 2018. Decision Boundary Analysis of Adversarial Examples. In International Conference on Learning Representations.
+
+He, Z.; Rakin, A. S.; and Fan, D. 2019. Parametric Noise Injection: Trainable Randomness to Improve Deep Neural Network Robustness against Adversarial Attack. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, 588-597.
+
+Howard, J. 2021. Fastai/Imagenette. fast.ai.
+
+Kandasamy, K.; Vysyaraju, K. R.; Neiswanger, W.; Paria, B.; Collins, C. R.; Schneider, J.; Poczos, B.; and Xing, E. P. 2020. Tuning Hyperparameters without Grad Students: Scalable and Robust Bayesian Optimisation with Dragonfly. Journal of Machine Learning Research, 21(81): 1-27.
+
+Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In Bengio, Y.; and LeCun, Y., eds., 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+
+Lecuyer, M.; Atlidakis, V.; Geambasu, R.; Hsu, D.; and Jana, S. 2019. Certified Robustness to Adversarial Examples with Differential Privacy. In 2019 IEEE Symposium on Security and Privacy (SP), 656-672.
+
+Liaw, R.; Liang, E.; Nishihara, R.; Moritz, P.; Gonzalez, J. E.; and Stoica, I. 2018. Tune: A Research Platform for Distributed Model Selection and Training. arXiv preprint arXiv:1807.05118.
+
+Liu, X.; Cheng, M.; Zhang, H.; and Hsieh, C.-J. 2018. Towards Robust Neural Networks via Random Self-Ensemble. In ECCV (7), 381-397.
+
+Liu, X.; Li, Y.; Wu, C.; and Hsieh, C.-J. 2019. Adv-BNN: Improved Adversarial Defense through Robust Bayesian Neural Network. In International Conference on Learning Representations.
+
+Loshchilov, I.; and Hutter, F. 2017. SGDR: Stochastic Gradient Descent with Warm Restarts. In International Conference on Learning Representations.
+
+Lucas, J.; Sun, S.; Zemel, R.; and Grosse, R. 2019. Aggregated Momentum: Stability through Passive Damping. In International Conference on Learning Representations.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In International Conference on Learning Representations.
+
+Nogueira, F. 2014. Bayesian Optimization: Open Source Constrained Global Optimization Tool for Python.
+
+Raff, E.; Sylvester, J.; Forsyth, S.; and McLean, M. 2019. Barrage of Random Transforms for Adversarially Robust Defense. In 2019 IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 6521-6530. Long Beach, CA, USA: IEEE. ISBN 978-1-72813-293-8.
+
+Salman, H.; Li, J.; Razenshteyn, I.; Zhang, P.; Zhang, H.; Bubeck, S.; and Yang, G. 2019. Provably Robust Deep Learning via Adversarially Trained Smoothed Classifiers. In Wallach, H.; Larochelle, H.; Beygelzimer, A.; d'Alché-Buc, F.; Fox, E.; and Garnett, R., eds., Advances in Neural Information Processing Systems, volume 32. Curran Associates, Inc.
+
+Shin, R.; and Song, D. 2017. JPEG-Resistant Adversarial Images. In Machine Learning and Computer Security Workshop (Co-Located with NeurIPS 2017). Long Beach, CA, USA.
+
+Snoek, J.; Larochelle, H.; and Adams, R. P. 2012. Practical Bayesian Optimization of Machine Learning Algorithms. In Pereira, F.; Burges, C. J. C.; Bottou, L.; and Weinberger, K. Q., eds., Advances in Neural Information Processing Systems, volume 25. Curran Associates, Inc.
+
+Sutskever, I.; Martens, J.; Dahl, G.; and Hinton, G. 2013. On the Importance of Initialization and Momentum in Deep Learning. In Dasgupta, S.; and McAllester, D., eds., Proceedings of the 30th International Conference on Machine Learning, volume 28 of Proceedings of Machine Learning Research, 1139-1147. Atlanta, Georgia, USA: PMLR.
+
+Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2014. Intriguing Properties of Neural Networks. In International Conference on Learning Representations.
+
+Tramer, F.; Carlini, N.; Brendel, W.; and Madry, A. 2020. On Adaptive Attacks to Adversarial Example Defenses. In Larochelle, H.; Ranzato, M.; Hadsell, R.; Balcan, M. F.; and Lin, H., eds., Advances in Neural Information Processing Systems, volume 33, 1633-1645. Curran Associates, Inc.
+
+Tramèr, F.; Kurakin, A.; Papernot, N.; Goodfellow, I.; Boneh, D.; and McDaniel, P. 2018. Ensemble Adversarial Training: Attacks and Defenses. In International Conference on Learning Representations.
+
+Tsipras, D.; Santurkar, S.; Engstrom, L.; Turner, A.; and Madry, A. 2019. Robustness May Be at Odds with Accuracy. In International Conference on Learning Representations.
+
+Wong, E.; Rice, L.; and Kolter, J. Z. 2020. Fast Is Better than Free: Revisiting Adversarial Training. In International Conference on Learning Representations.
+
+Wu, D.; Wang, Y.; Xia, S.-T.; Bailey, J.; and Ma, X. 2020. Skip Connections Matter: On the Transferability of Adversarial Examples Generated with ResNets. In International Conference on Learning Representations.
+
+Xie, C.; Wang, J.; Zhang, Z.; Ren, Z.; and Yuille, A. 2018. Mitigating Adversarial Effects through Randomization. In International Conference on Learning Representations.
+
+Zhang, H.; Yu, Y.; Jiao, J.; Xing, E. P.; Ghaoui, L. E.; and Jordan, M. I. 2019. Theoretically Principled Trade-off between Robustness and Accuracy. In International Conference on Machine Learning.
+
+Zhang, Y.; and Liang, P. 2019. Defending against White-box Adversarial Attacks via Randomized Discretization. In Chaudhuri, K.; and Sugiyama, M., eds., Proceedings of Machine Learning Research, volume 89 of Proceedings of Machine Learning Research, 684-693. PMLR.
+
+Zhao, Z.; Liu, Z.; and Larson, M. 2021. On Success and Simplicity: A Second Look at Transferable Targeted Attacks. arXiv:2012.11207 [cs].
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/p4SrFydwO5/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/p4SrFydwO5/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..69894d0c53fd1ace2c89f4d9fcebad8ec3f53b0e
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/p4SrFydwO5/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,753 @@
+§ DEMYSTIFYING THE ADVERSARIAL ROBUSTNESS OF RANDOM TRANSFORMATION DEFENSES
+
+Anonymized Authors
+
+${}^{1}$ Anonymized Institution
+
+§ ABSTRACT
+
+Current machine learning models suffer from evasion attacks (i.e., adversarial examples), raising concerns in many security-sensitive settings such as autonomous vehicles. While many countermeasures have shown promising results, only a few withstand rigorous evaluation from more recent attacks. Recently, the use of random transformations (RT) has shown impressive results, particularly BaRT (Raff et al. 2019) on ImageNet. However, this type of defense has not been rigorously evaluated, and its robustness properties are poorly understood. These models are also stochastic in nature, making evaluation more challenging and rendering many attacks proposed for deterministic models inapplicable. In this paper, we attempt to construct the strongest possible RT defense through the informed selection of transformations and the use of Bayesian optimization to tune their parameters. Furthermore, we attempt to identify the strongest possible attack with which to evaluate our RT defense. Our new attack vastly outperforms the naive attack, reducing the accuracy by ${83}\%$ , while the baseline EoT attack can only achieve a ${19}\%$ reduction, a ${4.3} \times$ improvement. This indicates that the RT defense on the Imagenette dataset (a ten-class subset of ImageNet) is not robust against adversarial examples. Extending the study further, we use our new attack to adversarially train the RT defense (called AdvRT), with the intuition that a stronger attack used during adversarial training will lead to more robust models. However, the attack is still not sufficiently strong, and thus, the AdvRT model is no more robust than its RT counterpart. The outcomes are slightly different for the CIFAR-10 dataset, where both RT and AdvRT models show some level of robustness, but they are still outperformed by robust deterministic models.
In the process of formulating our defense and attack, we perform several ablation studies and uncover insights that we hope will broadly benefit scientific communities that study stochastic neural networks and their robustness properties.
+
+§ 1 INTRODUCTION
+
+Today, deep neural networks are widely deployed in safety-critical settings such as autonomous driving and cybersecurity. Despite their effectiveness at solving a wide range of challenging problems, they are known to have a major vulnerability: tiny crafted perturbations added to inputs (so-called adversarial examples) can arbitrarily manipulate the outputs of these large models, posing a threat to the safety and privacy of the millions of people who rely on existing ML systems. The importance of this problem has drawn substantial attention, and yet we have not devised a concrete countermeasure as a research community.
+
+Adversarial training (Madry et al. 2018) has been the foremost approach for defending against adversarial examples. While adversarial training provides increased robustness, it results in a loss of accuracy on benign inputs. Recently, a promising line of defenses against adversarial examples has emerged. These defenses randomize either the model parameters or the inputs themselves (Lecuyer et al. 2019; He, Rakin, and Fan 2019; Raff et al. 2019; Liu et al. 2019; Xie et al. 2018; Zhang and Liang 2019; Bender et al. 2020; Liu et al. 2018; Cohen, Rosenfeld, and Kolter 2019; Dhillon et al. 2018; Guo et al. 2018). Introducing randomness into the model can be thought of as a form of smoothing that removes sinuous portions of the decision boundary where adversarial examples frequently lie (He, Li, and Song 2018). Among these randomization approaches, Raff et al. (2019) propose Barrage of Random Transforms (BaRT), a new defense which applies a large set of random image transformations to classifier inputs. They report a ${24} \times$ increase in robust accuracy over previously proposed defenses.
+
+Despite these promising results, researchers still lack a clear understanding of how to properly evaluate random defenses. This is concerning, as a defense can falsely appear more robust than it actually is when evaluated using suboptimal attacks (Athalye, Carlini, and Wagner 2018; Tramer et al. 2020). Therefore, in this work, we improve existing attacks on randomized defenses and use them to rigorously evaluate BaRT and, more generally, random transformation (RT) defenses. We find that sub-optimal attacks have led to an overly optimistic view of these RT defenses. Notably, we formulate a new attack showing that even our best RT defense is much less secure than previously thought, reducing its adversarial accuracy on Imagenette from the 70% found by the baseline attack to only $6\%$ .
+
+We also take the investigation further and combine the RT defense with adversarial training. Nevertheless, this turns out to be ineffective, as the attack is not sufficiently strong and only generates weak adversarial examples for the model to train with. The outcomes appear more promising for CIFAR-10, but the defense still lags behind deterministic defenses such as Madry et al. (2018) and Zhang et al. (2019). We believe that stronger and more efficient attacks on RT-based models will be necessary not only for accurate evaluation of stochastic defenses but also for improving the effectiveness of adversarial training for such models.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+To summarize, we make the following contributions:
+
+ * We show that non-differentiable transforms impede optimization during an attack and that even an adaptive technique for circumventing non-differentiability (i.e., BPDA (Athalye, Carlini, and Wagner 2018)) is not sufficiently effective. This reveals that existing RT defenses are likely non-robust.
+
+ * To this end, we suggest that an RT defense should only use differentiable transformations for reliable evaluations and compatibility with adversarial training.
+
+ * We propose a new state-of-the-art attack for RT defense that improves over EoT (Athalye et al. 2018) in terms of both the loss function and the optimizer. We explain the success of our attack through the variance of the gradients.
+
+ * We improve the RT scheme by using Bayesian optimization for hyperparameter tuning and by combining it with adversarial training that uses our new attack method instead of the baseline EoT.
+
+§ 2 BACKGROUND AND RELATED WORKS
+
+§ 2.1 ADVERSARIAL EXAMPLES
+
+Adversarial examples are carefully perturbed inputs designed to fool a machine learning model (Szegedy et al. 2014; Biggio et al. 2013; Goodfellow, Shlens, and Szegedy 2015). An adversarial perturbation $\delta$ is typically constrained to lie within some ${\ell }_{p}$ -norm ball with a radius of $\epsilon$ . The ${\ell }_{p}$ -norm ball is a proxy for the "imperceptibility" of $\delta$ and can be thought of as the adversary's budget. In this work, we primarily use $p = \infty$ and only consider an adaptive white-box adversary. Finding the worst-case perturbation ${\delta }^{ * }$ requires solving the following optimization problem:
+
+$$
+{x}_{\text{ adv }} = x + {\delta }^{ * } = x + \underset{\delta : \parallel \delta {\parallel }_{p} \leq \epsilon }{\arg \max }L\left( {x + \delta ,y}\right) \tag{1}
+$$
+
+where $L : {\mathbb{R}}^{d} \times {\mathbb{R}}^{C} \rightarrow \mathbb{R}$ is the loss function of the target model, which, in our case, is a classifier that makes predictions among $C$ classes. Projected gradient descent (PGD) is often used to solve the optimization problem in Eqn. 1.
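A generic PGD loop for the $\ell_\infty$ case of Eqn. 1 can be sketched as follows. This is a sketch, not the exact attack used later in the paper: `grad_fn` is a placeholder for backpropagation of the loss through the model, and the `[0, 1]` clipping assumes image-valued inputs.

```python
import numpy as np

def pgd_linf(x, y, grad_fn, eps, step_size, n_steps):
    """Projected gradient *ascent* on the loss within an l_inf ball of radius eps."""
    x_adv = x.copy()
    for _ in range(n_steps):
        g = grad_fn(x_adv, y)                     # gradient of L(x_adv, y) w.r.t. x_adv
        x_adv = x_adv + step_size * np.sign(g)    # signed ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project back onto the l_inf ball
        x_adv = np.clip(x_adv, 0.0, 1.0)          # keep pixels in the valid range
    return x_adv
```

For instance, with a toy linear loss $L(x, y) = -y\,\langle w, x\rangle$ the gradient with respect to $x$ is simply $-y\,w$, and the loop saturates at the corner of the $\epsilon$-ball that maximizes the loss.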
+
+§ 2.2 RANDOMIZATION DEFENSES
+
+A number of recent papers have proposed defenses against adversarial examples which utilize inference-time randomization. One common approach is to sample weights of the network from some probability distribution (Liu et al. 2018; He, Rakin, and Fan 2019; Liu et al. 2019; Bender et al. 2020). In this paper, we instead focus on defenses that apply random transforms to the input (Raff et al. 2019; Xie et al. 2018; Zhang and Liang 2019; Cohen, Rosenfeld, and Kolter 2019). Many of these defenses claim to achieve state-of-the-art robustness. Unlike prior evaluations, we test these defenses using a wide range of white-box attacks as well as a novel stronger attack. A key issue when evaluating these schemes is that PGD attacks require gradients through the entire model pipeline, but many defenses use non-differentiable transforms. As we show later, this can cause evaluation results to be misleading.
+
+ < g r a p h i c s >
+
+Figure 1: An illustration of a random transformation (RT) defense against adversarial examples. Transformations of different types and parameters are sampled and applied sequentially to multiple copies of the input. All of the transformed inputs are then passed to a single neural network, and the outputs are combined to make the final prediction.
+
+Different works have tried applying different random transformations to their inputs. Xie et al. randomly resize and pad images (Xie et al. 2018). While this defense ranked second in the NeurIPS 2017 adversarial robustness competition, their security evaluation did not consider adaptive attacks where the adversary has full knowledge of the transformations.
+
+Zhang et al. (Zhang and Liang 2019) add Gaussian noise to the input and then quantize it. They report that this defense outperforms all of the NeurIPS 2017 submissions. For their attack, Zhang et al. approximate the gradient of the transform, which could lead to a sub-optimal attack. In this paper, we use the exact gradients for all transformations whenever they are available.
+
+More recently, Raff et al. (Raff et al. 2019) claim to achieve a state-of-the-art robust accuracy ${24} \times$ better than adversarial training using a random transformation defense known as Barrage of Random Transforms (BaRT). BaRT involves randomly sampling a large set of image transformations and applying them to the input in a random order. Because many transformations are non-differentiable, BaRT evaluates the scheme using an attack that approximates the gradients of the transforms. In Section 4, we show that this approximation is ineffective, giving an overly optimistic impression of BaRT's robustness, and we re-evaluate BaRT using a stronger attack that utilizes exact transform gradients.
+
+§ 3 RANDOM TRANSFORMATION DEFENSE
+
+Here, we introduce notation and the design of our RT defense, formalizing the BaRT defense.
+
+§ 3.1 DECISION RULES
+
+RT repeatedly applies a randomly chosen transform to the input, uses a neural network to make a prediction, and then averages the softmax prediction scores:
+
+$$
+g\left( x\right) \mathrel{\text{ := }} {\mathbb{E}}_{\theta \sim p\left( \theta \right) }\left\lbrack {\sigma \left( {f\left( {t\left( {x;\theta }\right) }\right) }\right) }\right\rbrack \tag{2}
+$$
+
+where $\sigma \left( \cdot \right)$ is the softmax function, $f : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{C}$ a neural network ( $C$ is the number of classes), and the transformation $t\left( {\cdot ;\theta }\right) : {\mathbb{R}}^{d} \rightarrow {\mathbb{R}}^{d}$ is parameterized by a random variable $\theta$ drawn from some distribution $p\left( \theta \right)$ .
+
+In practice, we approximate the expectation in Eqn. 2 with $n$ Monte Carlo samples per one input $x$ :
+
+$$
+g\left( x\right) \approx {g}_{n}\left( x\right) \mathrel{\text{ := }} \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\sigma \left( {f\left( {t\left( {x;{\theta }_{i}}\right) }\right) }\right) \tag{3}
+$$
+
+We then define the final prediction as the class with the largest average softmax probability: $\widehat{y}\left( x\right) = \arg \mathop{\max }\limits_{{c \in \left\lbrack C\right\rbrack }}{\left\lbrack {g}_{n}\left( x\right) \right\rbrack }_{c}$ . Note that this decision rule differs from most previous works, which use a majority vote on hard labels, i.e., ${\widehat{y}}_{\text{maj}}\left( x\right) = \arg \mathop{\max }\limits_{{c \in \left\lbrack C\right\rbrack }}\mathop{\sum }\limits_{{i = 1}}^{n}\mathbb{1}\left\{ {c = \arg \mathop{\max }\limits_{{j \in \left\lbrack C\right\rbrack }}{f}_{j}\left( {t\left( {x;{\theta }_{i}}\right) }\right) }\right\}$ (Raff et al. 2019; Cohen, Rosenfeld, and Kolter 2019). We later show in Appendix D.1 that our rule is empirically superior to the majority vote. By the Law of Large Numbers, as $n$ increases, the approximation in Eqn. 3 converges to the expectation in Eqn. 2. Fig. 1 illustrates the structure and the components of the RT architecture.
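The Monte Carlo estimate in Eqn. 3 can be sketched as below, where `f` and `sample_transform` are placeholders (our own names) for the network's logits and for drawing one transformed copy $t(x;\theta_i)$:

```python
import numpy as np

def g_n(x, f, sample_transform, n, rng):
    """Eqn. 3: average the softmax outputs of f over n transformed copies of x."""
    probs = np.zeros_like(f(x))
    for _ in range(n):
        z = f(sample_transform(x, rng))  # logits on one randomly transformed copy
        e = np.exp(z - z.max())          # numerically stable softmax
        probs += e / e.sum()
    return probs / n                     # estimate of E_theta[softmax(f(t(x; theta)))]
```

The final prediction is then `g_n(...).argmax()`; as $n$ grows, the estimate converges to the expectation in Eqn. 2.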
+
+§ 3.2 PARAMETERIZATION OF TRANSFORMATIONS
+
+Here, $t\left( {\cdot ;\theta }\right)$ represents a composition of $S$ different image transformations where $\theta = \left\{ {{\theta }^{\left( 1\right) },\ldots ,{\theta }^{\left( S\right) }}\right\}$ and ${\theta }^{\left( s\right) }$ denotes the parameters for the $s$ -th transformation, i.e.,
+
+$$
+t\left( {x;\theta }\right) = {t}_{{\theta }^{\left( S\right) }} \circ {t}_{{\theta }^{\left( S - 1\right) }} \circ \cdots \circ {t}_{{\theta }^{\left( 1\right) }}\left( x\right) \tag{4}
+$$
+
+Each ${\theta }^{\left( s\right) }$ is a random variable comprised of three components, i.e., ${\theta }^{\left( s\right) } = \left\{ {{\tau }^{\left( s\right) },{\beta }^{\left( s\right) },{\alpha }^{\left( s\right) }}\right\}$ , which dictate the properties of a transformation:
+
+1. Type $\tau$ of transformation to apply (e.g., rotation, JPEG compression), which is uniformly drawn, without replacement, from a pool of $K$ transformation types: $\tau \sim$ $\operatorname{Cat}\left( {K,1/K}\right)$ .
+
2. A boolean $\beta$ indicating whether the transformation will be applied. This is a Bernoulli random variable with probability ${p}_{\beta }$: $\beta \sim \operatorname{Bern}\left( {p}_{\beta }\right)$.
+
+3. Strength of the transformation (e.g., rotation angle, JPEG quality) denoted by $\alpha$ , sampled from a predefined distribution (either uniform or normal): $\alpha \sim p\left( a\right)$ .
+
Specifically, for each of the $n$ transformed samples, we sample a permutation of size $S$ out of the $K$ transformation types in total, i.e., $\left\{ {{\tau }^{\left( 1\right) },\ldots ,{\tau }^{\left( S\right) }}\right\} \in \operatorname{Perm}\left( {K,S}\right)$. Then the boolean and the strength of the $s$-th transform are sampled: ${\beta }^{\left( s\right) } \sim \operatorname{Bern}\left( {p}_{{\tau }^{\left( s\right) }}\right)$ and ${\alpha }^{\left( s\right) } \sim p\left( {a}_{{\tau }^{\left( s\right) }}\right)$. We abbreviate this sampling process as $\theta \sim p\left( \theta \right)$; it is repeated for each of the $n$ transformed samples of a single input.
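The sampling process above can be sketched as follows; the transform pool, parameter samplers, and helper names are illustrative stand-ins for the paper's implementation:

```python
import random

def sample_theta(pool, S):
    """Draw one θ = {(τ, β, α)}: a permutation of S distinct transform types
    out of the K in `pool`, plus each type's apply-flag and strength.
    `pool` maps a type name to (p_apply, strength_sampler)."""
    types = random.sample(list(pool), S)       # τ's: S types, no replacement
    theta = []
    for tau in types:
        p_apply, strength = pool[tau]
        beta = random.random() < p_apply       # β ~ Bern(p_τ)
        alpha = strength()                     # α ~ p(a_τ)
        theta.append((tau, beta, alpha))
    return theta

def apply_transforms(x, theta, ops):
    """Compose the sampled transforms: t_{θ^(S)} ∘ ... ∘ t_{θ^(1)}(x)."""
    for tau, beta, alpha in theta:
        if beta:
            x = ops[tau](x, alpha)
    return x
```

Setting a type's apply-probability to 0 removes it from the pipeline, which is the pruning heuristic used later for tuning.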
+
Assuming that the $K$ transformation types are fixed, an RT defense introduces at most ${2K}$ tunable hyperparameters, $\left\{ {{p}_{1},\ldots ,{p}_{K}}\right\}$ and $\left\{ {{a}_{1},\ldots ,{a}_{K}}\right\}$. It is also possible to tune the defense by selecting ${K}^{\prime }$ out of $K$ transformation types, but this search space is combinatorially large in $K$. In Appendix C, we show a heuristic for "pruning" the transformation types through tuning $p$ and $a$ (e.g., setting $p = 0$ is equivalent to removing that transformation type).
+
+§ 3.3 CHOICES OF TRANSFORMATIONS
+
In this work, we use a pool of $K = 33$ different image transformations, including 19 differentiable and 2 non-differentiable transforms taken from the 30 BaRT transforms (Raff et al. 2019) (counting each type of noise injection as its own transform). We replace non-differentiable transformations with a smooth differentiable alternative (Shin and Song 2017). The transformations fall into seven groups: noise injection (7), blur filtering (4), color-space alteration (8), edge detection (2), lossy compression (3), geometric transformation (5), and stylization (4). All transforms are described in Appendix A.1.
+
+§ 4 EVALUATING RAFF ET AL. (2019)'S BART
+
Backward-pass differentiable approximation (BPDA) was proposed as a heuristic for approximating gradients of non-differentiable components in many defenses to make gradient-based attacks applicable (Athalye, Carlini, and Wagner 2018). It works by first approximating the function with a neural network and then backpropagating through this network instead of the non-differentiable function. The evaluation of BaRT in Raff et al. (2019) relied on BPDA, as some transformations are innately non-differentiable or have zero gradients almost everywhere (e.g., JPEG compression, precision reduction). To approximate a transformation, we train a model ${\widetilde{t}}_{\phi }$ that minimizes the Euclidean distance between the transformed image and the model output:
+
+$$
+\mathop{\min }\limits_{\phi }\mathop{\sum }\limits_{{i = 1}}^{N}\underset{\theta \sim p\left( \theta \right) }{\mathbb{E}}{\begin{Vmatrix}{\widetilde{t}}_{\phi }\left( {x}_{i};\theta \right) - t\left( {x}_{i};\theta \right) \end{Vmatrix}}_{2} \tag{5}
+$$
+
+We evaluate the BPDA approximation below in a series of experiments that compare the effectiveness of the BPDA attack to an attack that uses exact gradients.
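To make BPDA concrete, the sketch below shows its simplest variant on a scalar 8-bit quantizer: the forward pass applies the true non-differentiable transform, and the backward pass substitutes a surrogate gradient (here the identity; the paper's setting instead trains an approximation network $\widetilde{t}_\phi$ via Eqn. 5):

```python
def quantize(x, levels=256):
    """A non-differentiable transform (precision reduction): its true
    derivative is zero almost everywhere, so plain backprop gives no signal."""
    return round(x * (levels - 1)) / (levels - 1)

def bpda_ascent_step(x, loss_grad, step=0.01):
    """One gradient-ascent step on loss(quantize(x)) using BPDA:
    forward with the true transform, backward with an identity surrogate."""
    z = quantize(x)            # forward: true non-differentiable transform
    g = loss_grad(z) * 1.0     # backward: pretend d(quantize)/dx = 1
    return x + step * g
```

Whether such a surrogate is faithful enough is exactly what Section 4.2 tests: if the surrogate gradient points in a poor direction, the resulting attack underestimates the model's vulnerability.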
+
+§ 4.1 EXPERIMENT SETUP
+
Our experiments use two datasets: CIFAR-10 and Imagenette (Howard 2021), a ten-class subset of ImageNet. While CIFAR-10 is the most common benchmark in the adversarial robustness domain, some image transformations work poorly on low-resolution images. We choose Imagenette because BaRT was created on ImageNet, but we do not have the resources for a thorough investigation, on top of adversarial training, on full ImageNet. Additionally, the large and realistic images from Imagenette more closely resemble real-world usage. All Imagenette models are pre-trained on ImageNet to speed up training and boost performance. Since RT models are stochastic, we report their average accuracy together with the ${95}\%$ confidence interval from 10 independent runs. Throughout this work, we consider a perturbation size $\epsilon$ of 16/255 for Imagenette and 8/255 for CIFAR-10. Appendix A.2 has more details on the experiments (network architecture, hyperparameters, etc.).
+
+§ 4.2 BPDA ATTACK IS NOT SUFFICIENTLY STRONG
+
We re-implemented and trained a BaRT model on these datasets, and then evaluated the effectiveness of BPDA attacks against this model. ${}^{1}$ First, we evaluate the full BaRT model in Table 1, comparing an attack that uses a BPDA approximation (as in Raff et al. (2019)) vs. an attack that uses the exact gradient for differentiable transforms and
+
+${}^{1}$ The authors have been very helpful with the implementation details but cannot make the official code or model weights public.
+
| Transforms used | Clean accuracy | Adv. accuracy (Exact) | Adv. accuracy (BPDA) | Adv. accuracy (Identity) | Adv. accuracy (Combo) |
|---|---|---|---|---|---|
| BaRT (full) | 88.10 ± 0.16 | n/a | 52.32 ± 0.22 | 36.49 ± 0.25 | 25.24 ± 0.16 |
| BaRT (only differentiable) | 87.43 ± 0.28 | 26.06 ± 0.21 | 65.28 ± 0.25 | 41.25 ± 0.26 | n/a |
+
+Table 1: Comparison of attacks with different gradient approximations. "Exact" directly uses the exact gradient. "BPDA" uses the BPDA gradient for most transforms and the identity for a few. "Identity" backpropagates as an identity function, and "Combo" uses exact gradient for differentiable transforms and BPDA gradient otherwise. Full BaRT uses a nearly complete set of BaRT transforms $\left( {K = {26}}\right)$ , and "BaRT (only differentiable)" uses only differentiable transforms $\left( {K = {21}}\right)$ . We use PGD attack with EoT and CE loss $\left( {\epsilon = {16}/{255},{40}\text{ steps }}\right)$ .
+
+
+Figure 2: Comparison of crop transform output and output of BPDA network trained to approximate crop transform.
+
BPDA for non-differentiable transforms, denoted "BPDA" and "Combo", respectively. Empirically, we observe that attacks using BPDA are far weaker than the equivalent attacks using exact gradients. Similarly, on a variant BaRT model that uses only the subset of differentiable transforms, the BPDA attack is weaker than an attack that uses the exact gradient for all transforms. BPDA is surprisingly weaker than even a naive attack that approximates all transform gradients with the identity. There are a few possible explanations for BPDA's inability to approximate transformation gradients well:
+
1. As Fig. 2 illustrates, BPDA struggles to approximate some transforms accurately. This might be partly because the architecture Raff et al. (2019) used (and we use) to approximate each transform has limited functional expressivity: it consists of five convolutional layers with $5 \times 5$ kernels and one with a $3 \times 3$ kernel (all strides are 1), so a single output pixel can only depend on input pixels at most 11 positions away in any direction $\left( {5 \cdot \left\lfloor \frac{5}{2}\right\rfloor + 1 \cdot \left\lfloor \frac{3}{2}\right\rfloor = {11}}\right)$. Considering that the inputs for Imagenette are of size ${224} \times {224}$, some transforms like "crop", which require moving pixels much longer distances, are impossible to approximate with such an architecture.
+
+2. The BPDA network training process for solving Eqn. 5 may only find a sub-optimal solution, yielding a poor approximation of the true transformation.
+
+3. During the attack, the trained BPDA networks are given partially transformed images, yet the BPDA networks are only trained with untransformed inputs.
+
+4. Since we are backpropagating through several transforms, one poor transform gradient approximation could ruin the overall gradient approximation.
+
Appendix A.3 has more details on these experiments. These results show that BaRT's evaluation using BPDA was overly optimistic, and that BaRT is not as robust as previously thought.
+
Since BPDA is unreliable for approximating gradients of non-differentiable image transformations, we recommend that future RT-based defenses use only differentiable transformations. For the rest of this paper, we only study the robustness of RT defenses with differentiable transforms to isolate them from an orthogonal line of research on non-differentiable defenses (e.g., with approximate gradients or zeroth-order attacks). Additionally, differentiable models can boost their robustness further when combined with adversarial training; we explore this direction in Section 7. Even without non-differentiable transforms, we still lack reliable evaluation of stochastic defenses apart from EoT. In the next section, we show that applying an EoT attack to an RT defense results in a critically sub-optimal evaluation. After that, we propose a stronger attack.
+
+§ 5 HYPERPARAMETER TUNING ON RT DEFENSES
+
Before investigating attacks, we want to ensure we evaluate the most robust RT defense possible. We found that BaRT is not robust, but this could be due to the chosen transformations and their hyperparameters, for which no justification is provided. Finding the most robust RT defense is, however, challenging because it involves numerous hyperparameters, including the $K$ transformation types, the number of transformations to apply ($S$), and their parameters ($a$ and $p$). A typical grid search is intractable since we have 33 transformations, and optimizing the parameters directly with the reparameterization trick does not work since most transforms are not differentiable w.r.t. their parameters.
+
We systematically address this problem by using Bayesian optimization (BO) (Snoek, Larochelle, and Adams 2012), a well-known black-box optimization technique for hyperparameter search, to fine-tune $a$ and $p$. In short, BO optimizes an objective function that takes the hyperparameters ($a$ and $p$ in our case) as inputs and outputs adversarial accuracy. Each evaluation, equivalent to one iteration of BO, is computationally expensive as it involves training a neural network as the backbone of an RT defense and evaluating it with our new attack. Consequently, we have to scale down the problem by shortening training, using fewer training/testing samples, and evaluating with fewer attack steps; essentially, we trade off precision of the search for efficiency. Because BO does not natively support categorical or integral variables, we experiment with different choices of $K$ and $S$ without the use of BO. The full details of this procedure are presented in Appendix C.
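The tuning loop can be sketched as follows. For brevity, plain random search stands in for the BO acquisition step (a real run would use a BO library), and the bounds and objective below are illustrative:

```python
import random

def tune_rt_hyperparams(objective, bounds, iters=20):
    """Search over per-transform (p_k, a_k) pairs. `objective` is the
    expensive inner loop: train a scaled-down RT model with the candidate
    configuration, attack it, and return the adversarial accuracy to maximize."""
    best_cfg, best_acc = None, float("-inf")
    for _ in range(iters):
        cfg = {k: (random.uniform(*p_rng), random.uniform(*a_rng))
               for k, (p_rng, a_rng) in bounds.items()}
        acc = objective(cfg)          # train + evaluate one candidate
        if acc > best_acc:
            best_cfg, best_acc = cfg, acc
    return best_cfg, best_acc
```

A BO implementation would replace the uniform sampling with an acquisition function over a surrogate model of `objective`, but the outer structure is the same.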
+
| Dataset | Attack | Adv. accuracy |
|---|---|---|
| Imagenette | Baseline | 70.79 ± 0.53 |
| Imagenette | AutoAttack | 85.46 ± 0.43 |
| Imagenette | Our attack | 6.34 ± 0.35 |
| CIFAR-10 | Baseline | 33.83 ± 0.44 |
| CIFAR-10 | AutoAttack | 61.13 ± 0.85 |
| CIFAR-10 | Our attack | **29.91 ± 0.35** |
+
Table 2: Comparison between the baseline EoT attack (Athalye et al. 2018), AutoAttack (Croce and Hein 2020), and our attack on the RT defense whose transformation parameters have been fine-tuned by Bayesian optimization to maximize robustness. For AutoAttack, we use its standard version combined with EoT. For Imagenette, we use $\epsilon = 16/255$; for CIFAR-10, $\epsilon = 8/255$.
+
Algorithm 1: Our best attack on RT defenses

```
Input : Set of K transformations and distributions of their parameters p(θ),
        neural network f, perturbation size ε, max. PGD steps T,
        step sizes {γ_t}_{t=1}^T, and AggMo's damping constants {μ_b}_{b=1}^B
Output: Adversarial example x_adv
Data  : Test input x and its ground-truth label y

// Initialize x_adv and velocities
x_adv ← x + u,  u ~ U[-ε, ε];   v_b ← 0 for b = 1, ..., B
for t ← 1 to T do
    {θ_i}_{i=1}^n ~ p(θ)
    // Compute a gradient estimate with the linear loss on logits
    // (Section 6.2) and with SGM (Section 6.3)
    G_n ← ∇ L_Linear( (1/n) Σ_{i=1}^n f(t(x_adv; θ_i)), y )
    Ĝ_n ← sign(G_n)                       // use signed gradients
    // Update velocities and x_adv with AggMo (Section 6.4)
    for b ← 1 to B do
        v_b ← μ_b · v_b + Ĝ_n
    x_adv ← x_adv + (γ_t / B) Σ_{b=1}^B v_b
return x_adv
```
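Algorithm 1 can be sketched in Python as follows. The caller-supplied `grad_estimate` is assumed to return the gradient of the linear loss on the $n$-sample mean logits (with SGM applied inside the model's backward pass); the last line of the loop projects back onto the $\epsilon$-ball, as is standard in PGD:

```python
import random

def rt_attack(x, y, grad_estimate, eps, T, step_sizes, mus, n=10):
    """PGD with signed gradients and AggMo momentum (a sketch of Algorithm 1).
    `mus` holds AggMo's B damping constants; `step_sizes` has one entry per step."""
    d, B = len(x), len(mus)
    # Initialize x_adv uniformly in the eps-ball and zero the B velocity buffers
    x_adv = [xi + random.uniform(-eps, eps) for xi in x]
    v = [[0.0] * d for _ in range(B)]
    for t in range(T):
        g = grad_estimate(x_adv, y, n)                 # noisy loss gradient
        g_hat = [(gi > 0) - (gi < 0) for gi in g]      # use signed gradients
        for b in range(B):                             # AggMo velocity updates
            v[b] = [mus[b] * vb + gh for vb, gh in zip(v[b], g_hat)]
        x_adv = [xa + step_sizes[t] / B * sum(v[b][i] for b in range(B))
                 for i, xa in enumerate(x_adv)]
        # Project back onto the eps-ball around x (standard PGD projection)
        x_adv = [min(max(xa, xi - eps), xi + eps) for xa, xi in zip(x_adv, x)]
    return x_adv
```

Averaging $B$ velocities with different damping constants is what makes the attack far less sensitive to any single momentum choice.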
+
+§ 6 STATE-OF-THE-ART ATTACK ON RT DEFENSES
+
We propose a new attack on differentiable RT defenses that leverages insights from the literature on transfer attacks as well as recent stochastic optimization algorithms. Our attack is highly successful and demonstrates that even the fine-tuned RT defense from Section 5 provides almost no adversarial robustness (Table 2). We summarize our attack in Algorithm 1 before describing the setup and investigating the three main design choices that make it successful and let it outperform the baseline from Athalye et al. (2018) by a large margin.
+
+§ 6.1 SETUP: STOCHASTIC GRADIENT METHOD
+
First, we describe the setup and build intuition about the variance of the gradient estimates. Finding adversarial examples on RT defenses can be formulated as the following stochastic optimization problem:
+
+$$
+\mathop{\max }\limits_{{\delta : \parallel \delta {\parallel }_{\infty } \leq \epsilon }}H\left( \delta \right) \mathrel{\text{ := }} \mathop{\max }\limits_{{\delta : \parallel \delta {\parallel }_{\infty } \leq \epsilon }}{\mathbb{E}}_{\theta }\left\lbrack {h\left( {\delta ;\theta }\right) }\right\rbrack \tag{6}
+$$
+
+$$
+\mathrel{\text{ := }} \mathop{\max }\limits_{{\delta : \parallel \delta {\parallel }_{\infty } \leq \epsilon }}{\mathbb{E}}_{\theta }\left\lbrack {\mathcal{L}\left( {f\left( {t\left( {x + \delta ;\theta }\right) }\right) ,y}\right) }\right\rbrack \tag{7}
+$$
+
for some objective function $\mathcal{L}$. Note that we drop the dependence on $(x, y)$ to declutter the notation. Since it is not possible to evaluate the expectation or its gradients exactly, the gradients are estimated by sampling ${\left\{ {\theta }_{i}\right\} }_{i = 1}^{n}$, similarly to how we obtain a prediction ${g}_{n}$. Suppose that $H$ is smooth and convex, and that the variance of the gradient estimates is bounded by ${\sigma }^{2}$, i.e.,
+
+$$
+\underset{\theta \sim p\left( \theta \right) }{\mathbb{E}}\left\lbrack {\begin{Vmatrix}\nabla h\left( \delta ;\theta \right) - \nabla H\left( \delta \right) \end{Vmatrix}}^{2}\right\rbrack \leq {\sigma }^{2}, \tag{8}
+$$
+
the error of SGD after $T$ iterations is $\mathcal{O}\left( {1/T + \sigma /\sqrt{T}}\right)$ for an appropriate step size (Ghadimi and Lan 2013). This result suggests that a small $\sigma$, i.e., a low-variance gradient estimate, speeds up convergence, which is highly desirable for attackers and defenders alike. Specifically, it leads to more efficient and more accurate evaluation as well as a stronger attack to use during adversarial training, which, in turn, could yield a better defense (we explore this in Section 7).
+
As a result, our analyses of the attack are largely based on variance and two other measures of spread of the gradients. Specifically, we measure (1) the dimension-averaged variance in Eqn. 8, (2) the cosine similarity, and (3) the percentage of matching signs between the mean gradient and each gradient sample. Since all three metrics are highly correlated in theory and in practice, we only report the variance in the main paper. For the other metrics and their mathematical definitions, please see Appendix B.3.
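Two of these spread measures can be computed from a batch of gradient samples with a few lines of pure Python (a minimal sketch on plain lists; each sample plays the role of one $\nabla h(\delta;\theta_i)$):

```python
def grad_spread(samples):
    """Given gradient samples (each a list of d floats), return
    (1) the dimension-averaged variance around the mean gradient, and
    (2) the fraction of entries whose sign matches the mean gradient's."""
    n, d = len(samples), len(samples[0])
    mean = [sum(g[i] for g in samples) / n for i in range(d)]
    var = sum((g[i] - mean[i]) ** 2 for g in samples for i in range(d)) / (n * d)
    sign = lambda v: (v > 0) - (v < 0)
    match = sum(sign(g[i]) == sign(mean[i])
                for g in samples for i in range(d)) / (n * d)
    return var, match
```

Low variance and high sign agreement both indicate that individual gradient samples point in the direction of the mean gradient, which is what makes a stochastic attack converge quickly.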
+
EoT Baseline. We compare our attack to the baseline taken directly from Athalye et al. (2018). This attack takes the same form as Eqn. 7, and its gradients are averaged over $n$ gradient samples:
+
+$$
+{H}_{n}^{\mathrm{{EoT}}}\left( \delta \right) \mathrel{\text{ := }} \frac{1}{n}\mathop{\sum }\limits_{{j = 1}}^{n}\mathcal{L}\left( {f\left( {t\left( {x + \delta ;{\theta }_{j}}\right) }\right) ,y}\right) \tag{9}
+$$
+
It is important to note that this approximation does not exactly match the decision rule of RT defenses, as the expectation should sit inside the loss function but outside $f$ (see Eqn. 2). While the gradient estimates from Eqn. 9 are unbiased, they may have high variance, as each gradient sample is equivalent to computing the loss on ${g}_{n}$ with $n = 1$. In the next section, we compare other options for objective functions and decision rules and show that there are better alternatives to the original EoT.
+
Signed gradients. All of the attacks used in this study, including ours and the baseline, use the signs of gradients instead of the gradients themselves. This is common practice for gradient-based ${\ell }_{\infty }$-attacks, and we have empirically confirmed that it leads to much stronger attacks. This is also why we measure sign matching as a measure of spread of the gradient estimates. Beyond fitting the ${\ell }_{\infty }$-constraint, using signed gradients and signed momentum is also beneficial: it has been shown to reduce variance in neural network training and to achieve even faster convergence than plain SGD in certain cases (Bernstein et al. 2018).
+
+
+Figure 3: Comparison of PGD attack's effectiveness with (a) different loss functions and decision rules, and (b) different attack variants with improved transferability. The error bars are too small to see with the markers so we report the numerical results in Table 4. "Baseline" refers to EoT with CE loss in Eqn. 9.
+
+§ 6.2 ADVERSARIAL OBJECTIVES AND DECISION RULES
+
Here, we propose new decision rules and loss functions for the attacks as alternatives to EoT. Note that the rule used by the attack need not be the same as the rule used for making predictions in Eqn. 2. First, we introduce the softmax and logits rules:
+
+$$
+{H}^{\text{ softmax }}\left( \delta \right) \mathrel{\text{ := }} \mathcal{L}\left( {\underset{\theta \sim p\left( \theta \right) }{\mathbb{E}}\left\lbrack {\sigma \left( {f\left( {t\left( {x + \delta ;\theta }\right) }\right) }\right) }\right\rbrack ,y}\right) \tag{10}
+$$
+
+$$
+{H}^{\text{ logits }}\left( \delta \right) \mathrel{\text{ := }} \mathcal{L}\left( {\underset{\theta \sim p\left( \theta \right) }{\mathbb{E}}\left\lbrack {f\left( {t\left( {x + \delta ;\theta }\right) }\right) }\right\rbrack ,y}\right) \tag{11}
+$$
+
${H}^{\text{ softmax }}$, the loss of the expected softmax probability, uses the same rule as the decision rule of RT defenses (Eqn. 2). It was also used by Salman et al. (2019), with $\mathcal{L}$ being the cross-entropy loss. ${H}^{\text{ logits }}$, the loss of the expected logits, is similar to ${H}^{\text{ softmax }}$ but omits the softmax function to avoid its potential vanishing gradients.
+
+In addition to the rules, we experiment with two choices of $\mathcal{L}$ commonly used for generating adversarial examples: cross-entropy loss (CE) and linear loss (Linear). The linear loss is defined as the difference between the largest logit of
+
+
+Figure 4: Comparison of dimension-normalized variance of the gradient estimates across (blue) different loss functions and decision rules and (yellow) transferability-improving attacks. Strong attacks are highly correlated with low variance of their gradient estimates, i.e., Lin+SGM. Note that Lin+MB or Momentum Boosting is not shown here because it does not modify the gradients.
+
the wrong classes and the logit of the correct class:
+
+$$
+{\mathcal{L}}_{\text{ Linear }}\left( {x,y}\right) \mathrel{\text{ := }} \mathop{\max }\limits_{{j \neq y}}{F}_{j} - {F}_{y} \tag{12}
+$$
+
+$$
+\text{ where }F = \underset{\theta \sim p\left( \theta \right) }{\mathbb{E}}\left\lbrack {f\left( {t\left( {x;\theta }\right) }\right) }\right\rbrack \tag{13}
+$$
+
+The advantage of the linear loss is that its gradient estimates are unbiased, similarly to EoT, meaning that the expectation can be moved in front of $\mathcal{L}$ due to linearity. However, this is not the case for CE loss.
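Eqns. 12–13 translate directly into code (a minimal sketch; `logit_samples` holds $f(t(x;\theta_i))$ for the $n$ sampled transformations):

```python
def mean_logits(logit_samples):
    """F in Eqn. 13: logits averaged over the n sampled transformations."""
    n = len(logit_samples)
    return [sum(col) / n for col in zip(*logit_samples)]

def linear_loss(F, y):
    """Linear loss (Eqn. 12): largest wrong-class logit minus true-class logit.
    A positive value means the averaged prediction is already misclassified."""
    wrong = max(v for j, v in enumerate(F) if j != y)
    return wrong - F[y]
```

Because the loss is linear in $F$ (away from ties in the max), averaging logits before the loss gives the same gradient in expectation as averaging per-sample loss gradients, which is why its estimates stay unbiased.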
+
Attack evaluation and comparison. We evaluate the attacks by their effectiveness in reducing the adversarial accuracy (lower means a stronger attack) on the RT defense obtained from Section 5. In our setting, the adversarial examples are generated once and then used to compute the accuracy 10 times, each with a different random seed for the RT defense. We report the average accuracy over these 10 runs together with the ${95}\%$-confidence interval. Alternatively, one can imagine a threat model that counts a sample as incorrect if it is misclassified at least once over a certain number of trials. This is interesting and perhaps more realistic in some settings, but the optimal attack would be very different from EoT, as we would care much less about the expectation. This, however, is outside the scope of our work.
+
In Fig. 3a, we compare the effectiveness of four attacks, each using a different pair of losses and decision rules, with varying numbers of PGD steps and samples $n$. The widely used EoT method performs the worst of the four. CE loss on the mean softmax probability performs better than EoT, confirming the observation made by Salman et al. (2019). The linear loss and CE loss on average logits are better still and are consistently the strongest attacks across all hyperparameters. For the rest of this paper, we adopt the linear loss on mean logits as the main objective function.
+
Connection to variance. As we predicted in Section 6.1, a stronger attack corresponds directly to lower variance. This hypothesis is confirmed by Fig. 4. For instance, the EoT baseline has the highest variance as well as the worst performance according to Fig. 5. On the other hand, the linear loss (Lin) has the lowest variance among the three loss functions (blue) and hence performs the best. The other three points, in orange, are covered in the next section.
+
+§ 6.3 ENSEMBLE AND TRANSFER ATTACKS
+
RT can be regarded as an ensemble whose members share the same neural network parameters but apply different sets of transformations to the input (i.e., different $\theta$'s from random sampling). Consequently, we may view a white-box attack on RT defenses as a "partial" black-box attack on an ensemble of (infinitely) many models, where the adversary wishes to "transfer" adversarial examples generated on one subset of the members to another, unseen subset.
+
Given this interpretation, we apply four techniques designed to enhance the transferability of adversarial examples to improve the attack success rate on the RT defense. The techniques include momentum boosting (MB) (Dong et al. 2018), modifying backward passes by ignoring non-linear activations (LinBP) (Guo, Li, and Chen 2020) or by emphasizing the gradient through the skip connections of ResNets over that through the residual blocks (SGM) (Wu et al. 2020), and simply using a targeted attack with the linear loss function (TG) (Zhao, Liu, and Larson 2021). In Fig. 3b, we compare these techniques combined with the best-performing loss and decision rule from Section 6.2 (i.e., the linear loss on logits). Only SGM improves the attack success rate in all settings, while the rest result in weaker attacks than the one without any of the techniques (denoted by "Linear (logits)" in Fig. 3a).
+
SGM essentially normalizes the gradients and scales those from the residual blocks by a constant less than 1 (we use 0.5) to reduce their influence and prioritize the gradients from the skip connections. Wu et al. (2020) explain that SGM leads to better transferability because gradients through skip connections preserve "low-level information", which tends to transfer better. Intuitively, this agrees with our variance explanation, as increased transferability implies stronger agreement among gradient samples and hence less spread, i.e., lower variance.
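On a single residual unit $y = x + r(x)$, SGM reduces to down-weighting the residual branch's term in the backward pass (a toy one-dimensional sketch; `gamma` plays the role of the SGM scaling constant):

```python
def residual_backward(x, res_grad, gamma=0.5):
    """Backward pass through y = x + r(x). The exact gradient is 1 + r'(x);
    SGM scales the residual-branch term by gamma < 1 so the skip-connection
    component (the constant 1) dominates."""
    true_grad = 1.0 + res_grad(x)          # exact: skip + residual branch
    sgm_grad = 1.0 + gamma * res_grad(x)   # SGM: down-weighted residual branch
    return true_grad, sgm_grad
```

In a real ResNet this scaling is applied at every residual block during backpropagation, so the gradient reaching the input is dominated by the skip-connection path.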
+
+§ 6.4 STOCHASTIC OPTIMIZATION ALGORITHM
+
+While most attacks on deterministic models can use naive PGD to solve Eqn. 1 effectively, this is not the case for stochastic models like the RT defense. Here, the adversary only has access to noisy estimates of the gradients, making it a strictly more difficult problem, and techniques used in the deterministic case may no longer apply.
+
As mentioned in Section 6.1, high-variance gradient estimates undermine the convergence rate of SGD. Thus, the attack should benefit from optimization techniques aimed at reducing the variance or speeding up the convergence of SGD. We first experiment with common optimizers such as SGD and Adam (Kingma and Ba 2015) with different hyperparameters, e.g., momentum, Nesterov acceleration, and learning rate schedules, to find the best setting for the linear loss with SGM. From this experiment, we found that the momentum term with an appropriate damping constant plays an important role in the attack success rate. Momentum is also well known to accelerate and stabilize the training of neural networks (Sutskever et al. 2013). Fig. 10a reports adversarial accuracy at varying attack iterations and indicates that a higher momentum constant leads to faster convergence and a higher attack success rate. However, the results are highly sensitive to this momentum constant, which also varies from one setting to another (e.g., number or types of transformations, dataset, etc.).
+
+
+Figure 5: Comparison of the optimizers for attacking an RT defense with $\epsilon = {16}/{255},n = {10}$ on Imagenette dataset. All but the baseline (CE loss with EoT) use the linear loss with SGM, and all but AggMo $\left( {B = 6}\right)$ use the default hyperparameters. AggMo with $B = 6$ outperforms the other algorithms in terms of both the convergence rate and the final adversarial accuracy obtained. This result is not very sensitive to $B$ as any sufficiently large value $\left( { \geq 4}\right)$ yields the same outcome.
+
To mitigate this issue, we turn to another optimizer: AggMo is designed precisely to be less sensitive to the choice of damping coefficient by aggregating $B$ momentum terms with different constants instead of one (Lucas et al. 2019). After only a few tries, we found a wide range of values of $B$ for which AggMo outperforms SGD with a fine-tuned momentum constant (see Fig. 10b). Fig. 5 compares attacks using different optimizers to the baseline EoT attack. Here, the baseline can only reduce the adversarial accuracy from 89% to ${70}\%$, while our best attack reaches $6\%$, over a ${4.3} \times$ improvement. We conclude that the optimizer plays a crucial role in the success of the attack, and that the RT defense, even with carefully and systematically chosen transformation hyperparameters, is not robust against adversarial examples.
+
Furthermore, we note that with AggMo alone, without our loss function, the accuracy only drops to ${23}\%$, and at a much slower rate. Conversely, when the linear loss and SGM are used with SGD (no momentum), the accuracy drops only to ${51}\%$. This signifies that all three techniques we deploy play important roles in the attack's effectiveness.
+
+§ 6.5 COMPARISON WITH AUTOATTACK
+
AutoAttack (Croce and Hein 2020) was proposed as a standardized benchmark for evaluating deterministic defenses against adversarial examples. It uses an ensemble of four different attacks that cover one another's weaknesses, one of which does not use gradients. AutoAttack has proven to be one of the strongest attacks currently available and is capable of catching defenses whose apparent robustness is caused by gradient obfuscation (Athalye, Carlini, and Wagner 2018).
+
| Defense | Imagenette clean acc. | Imagenette adv. acc. | CIFAR-10 clean acc. | CIFAR-10 adv. acc. |
|---|---|---|---|---|
| Normal model | **95.41** | 0.00 | **95.10** | 0.00 |
| Madry et al. (2018) | 78.25 | **37.10** | 81.90 | 45.30 |
| Zhang et al. (2019) | 87.43 | 33.19 | 81.26 | **46.89** |
| RT defense | 89.04 ± 0.34 | 6.34 ± 0.35 | 81.12 ± 0.54 | 29.91 ± 0.35 |
| AdvRT defense | 88.83 ± 0.26 | 8.68 ± 0.52 | 80.69 ± 0.66 | 41.30 ± 0.49 |
+
+Table 3: Comparison of RT and AdvRT defenses to prior robust deterministic models and a normally trained model. Both the RT and the AdvRT models on Imagenette lack the adversarial robustness. Conversely, the RT defense on CIFAR-10 does bring substantial robustness, and combining it with adversarial training boosts the adversarial accuracy further. Nonetheless, they still fall behind the previously proposed deterministic models including Madry et al. (2018) and Zhang et al. (2019). The largest number in each column is in bold.
+
While not particularly designed for stochastic models, AutoAttack can be used to evaluate them when combined with EoT. We report the accuracy on adversarial examples generated by AutoAttack with all default hyperparameters in the "standard" mode and 10-sample EoT in Table 2. AutoAttack performs worse than both the baseline EoT and our attack on Imagenette and CIFAR-10 by a large margin. One reason is that AutoAttack is optimized for efficiency, so each of its attacks usually terminates once a misclassification occurs. This is appropriate for deterministic models, but for stochastic ones such as an RT defense, the adversary is better off finding adversarial examples that maximize the expected loss instead of ones that are misclassified once.
+
To take this property into account, we also include the accuracy reported by AutoAttack when it treats a sample as incorrect if it is misclassified at least once throughout the entire process. For Imagenette, the accuracies after each of the four attacks (APGD-CE, APGD-T, FAB, and Square) is applied sequentially are 82.03, 78.81, 78.03, and 77.34, respectively. Note that this is a one-time evaluation, so there is no error bar here. Needless to say, the adversarial accuracy computed this way is strictly lower than the one we reported in Table 2 and violates our threat model. Nevertheless, it is still higher than that of the baseline EoT and our attack, suggesting that AutoAttack is ineffective against randomized models like RT defenses. AutoAttack also comes with a "random" mode for randomized models, which only uses APGD-CE and APGD-DLR with 20-sample EoT. The adversarial accuracies obtained from this mode are 85.62 and 83.83, or ${88.62} \pm {0.46}$ for single-pass evaluation as in Table 2. This random mode performs worse than the standard version.
+
+§ 7 COMBINING WITH ADVERSARIAL TRAINING
+
+To deepen our investigation, we explore the possibility of combining the RT defense with adversarial training. However, this is a challenging problem on its own. For normal deterministic models, 10-step PGD is sufficient to reach an adversarial accuracy close to that of the best known attack, i.e., close to the optimal adversarial accuracy. This is not the case for RT defenses: even our new attack requires more than one thousand iterations before the adversarial accuracy starts to plateau. Ultimately, the robustness of adversarially trained models largely depends on the strength of the attack used to generate the adversarial examples, and using a weak attack means that the obtained model will not be robust. A similar phenomenon is observed by Tramèr et al. (2018) and Wong, Rice, and Kolter (2020), where an adversarially trained model overfits to the weak FGSM attack but is shown to be non-robust under accurate evaluation. To test this hypothesis, we adversarially train the RT defense from Section 5 using our new attack with 50 iterations (already $5 \times$ the common number of steps) and call this defense "AdvRT." The attack step size is also adjusted accordingly to $\epsilon /8$.
+
+In Table 3, we confirm that training AdvRT this way results in a model with virtually no robustness improvement over the normal RT on Imagenette. On the other hand, the AdvRT trained on CIFAR-10 proves more promising, even though it is still not as robust as deterministic models trained with adversarial training or TRADES (Zhang et al. 2019). Based on this result, we conclude that a stronger attack on RT defenses, one that converges within far fewer iterations, will be necessary to make adversarial training successful. In theory, it might be possible to achieve a robust RT model with a 1,000-step attack on Imagenette, but this is too computationally intensive for us to verify, and it would not scale to any realistic setting.
+
+§ 8 CONCLUSION
+
+While recent papers report state-of-the-art robustness with RT defenses, our evaluations show that RT generally underperforms existing defenses like adversarial training when met with a stronger attack, even after fine-tuning the hyperparameters of the defense. Through our experiments, we found that non-differentiability and high-variance gradients can seriously inhibit adversarial optimization, so we recommend using only differentiable transformations along with their exact gradients in the evaluation of future RT defenses. In this setting, we propose a new state-of-the-art attack that improves significantly over the baseline (PGD with EoT) and show that RT defenses, as well as their adversarially trained counterparts, are not as robust to adversarial examples as previously believed.
+
+§ A EXPERIMENT DETAILS
+
+§ A.1 DETAILS ON THE IMAGE TRANSFORMATIONS
+
+The exact implementation of RT models and all the transformations will be released. Here, we provide some details on each of the transformation types and groups. Then, we describe how we approximate some non-differentiable functions with differentiable ones.
+
+§ NOISE INJECTION
+
+ * Erase: Set the pixels in a box with random size and location to zero.
+
+ * Gaussian noise: Add Gaussian noise to each pixel.
+
+ * Pepper: Zero out pixels with some probability.
+
+ * Poisson noise: Add Poisson noise to each pixel.
+
+ * Salt: Set pixels to one with some probability.
+
+ * Speckle noise: Add speckle noise to each pixel.
+
+ * Uniform noise: Add uniform noise to each pixel.
+
+§ BLUR FILTERING
+
+ * Box blur: Blur with randomly sized mean filter.
+
+ * Gaussian blur: Blur with randomly sized Gaussian filter with randomly chosen variance.
+
+ * Median blur: Blur with randomly sized median filter.
+
+ * Motion blur: Blur with kernel for random motion angle and direction.
+
+§ COLOR-SPACE ALTERATION
+
+ * HSV: Convert to HSV color-space, add uniform noise, then convert back.
+
+ * LAB: Convert to LAB color-space, add uniform noise, then convert back.
+
+ * Gray scale mix: Mix channels with random proportions.
+
+ * Gray scale partial mix: Mix channels with random proportions, then mix gray image with each channel with random proportions.
+
+ * Two channel gray scale mix: Mix two random channels with random proportions.
+
+ * One channel partial gray: Mix two random channels with random proportions, then mix gray image with other channel.
+
+ * XYZ: Convert to XYZ color-space, add uniform noise, then convert back.
+
+ * YUV: Convert to YUV color-space, add uniform noise, then convert back.
+
+§ EDGE DETECTION
+
+ * Laplacian: Apply Laplacian filter.
+
+ * Sobel: Apply the Sobel operator.
+
+§ LOSSY COMPRESSION
+
+ * JPEG compression: Compress image using JPEG to a random quality.
+
+ * Color precision reduction: Reduce color precision to a random number of bins.
+
+ * FFT perturbation: Perform FFT on image and remove each component with some probability.
+
+§ GEOMETRIC TRANSFORMS
+
+ * Affine: Perform random affine transformation on image.
+
+ * Crop: Crop image randomly and resize to original shape.
+
+ * Horizontal flip: Flip image across the vertical.
+
+ * Swirl: Swirl the pixels of an image with random radius and strength.
+
+ * Vertical flip: Flip image across the horizontal.
+
+§ STYLIZATION
+
+ * Color jitter: Randomly alter the brightness, contrast, and saturation.
+
+ * Gamma: Randomly alter gamma.
+
+ * Sharpen: Apply sharpness filter with random strength.
+
+ * Solarize: Solarize the image.
+
+§ NON-DIFFERENTIABLE (FOR BPDA TESTS ONLY)
+
+ * Adaptive histogram: Equalize histogram in patches of random kernel size.
+
+ * Chambolle denoise: Apply Chambolle's total variation denoising algorithm with random weight (can be implemented differentiably but was not due to time constraints).
+
+ * Contrast stretching: Pick a random minimum and maximum pixel value to rescale intensities (can be implemented differentiably but was not due to time constraints).
+
+ * Histogram: Equalize histogram using a random number of bins.
+
+§ UNUSED TRANSFORMS FROM BART
+
+ * Seam carving: Algorithm used in Raff et al. (2019) has been patented and is no longer available for open-source use.
+
+ * Wavelet denoising: The implementation in Raff et al. (2019) is incomplete.
+
+ * Salt & pepper: We have already used salt and pepper noise separately.
+
+ * Non-local means denoising: The implementation of NL means denoising in Raff et al. (2019) is too slow.
+
+§ A.2 EXPERIMENT DETAILS
+
+All of the experiments are evaluated on 1000 randomly chosen test samples. Since we choose the default $n$ to be 20 for inference and 10 for the attacks, the experiments are at least 10 times more expensive than usual, and we cannot afford enough computation to run a large number of experiments on the entire test set. The networks used in this paper are ResNet-34 (He et al. 2016a) for Imagenette and Pre-activation ResNet-20 (He et al. 2016b) for CIFAR-10. In all of the experiments, we use a learning rate of 0.05, a batch size of 128, and a weight decay of 0.0005. We use a cosine annealing schedule (Loshchilov and Hutter 2017) for the learning rate with a period of 10 epochs, which doubles after every period. All models are trained for 70 epochs, and we save the weights with the highest accuracy on the held-out validation data (which does not overlap with the training or test set). For adversarially trained RT defenses, the cosine annealing step is set to 10 and the training lasts for 70 epochs to reduce the computation. To help the training converge faster, we pre-train these RT models on clean data before turning on adversarial training, as suggested by Gupta, Dube, and Verma (2020).
+
+
+Figure 6: Fully-convolutional BPDA network from Raff et al. (2019). The network has six convolutional layers, all with a stride of 1. The first five layers have a kernel size of 5 and padding of 2, and the last layer has a kernel size of 3 and padding of 1. The input consists of more than 5 channels: 3 are the image RGB channels, 2 are CoordConv channels that encode the coordinates of each pixel at that pixel's location, and the remaining channels are the parameters of the transformation copied at each pixel location. The network contains a skip connection from the input to each layer except the final layer.
+
+§ A.3 DETAILS ON BPDA EXPERIMENTS
+
+We used the following setup for the differentiability related experiments conducted in Section 4.2:
+
+ * Each accuracy is an average over 10 trials on the same set of 1000 Imagenette images.
+
+ * The defense samples $S = {10}$ transforms from the full set of $K$ transforms.
+
+ * The image classifier uses a ResNet-50 architecture like in Raff et al. (2019) trained on transformed images for 30 epochs.
+
+ * The attack uses 40 PGD steps of size $4/255$ with $\epsilon = 16/255$ to minimize the EoT objective.
+
+The BPDA network architecture is the same used by Raff et al. (2019) and is outlined in Fig. 6. Here are more details on BPDA training:
+
+ * All BPDA networks were trained using Adam with a learning rate of 0.01 for 10 epochs.
+
+ * All networks achieve a per-pixel MSE below 0.01. The outputs of the BPDA networks are compared to the true transform outputs for several different transform types in Fig. 7.
+
+The specific set of transforms used in each defense are the following:
+
+ * BaRT (all): adaptive histogram, histogram, bilateral blur, box blur, Gaussian blur, median blur, contrast stretching, FFT, gray scale mix, gray scale partial mix, two channel gray scale mix, one channel gray scale mix, HSV, LAB, XYZ, YUV, JPEG compression, Gaussian noise, Poisson noise, salt, pepper, color precision reduction, swirl, Chambolle denoising, crop.
+
+ * BaRT (only differentiable): all of the BaRT all transforms excluding adaptive histogram, histogram, contrast stretching, and Chambolle denoising.
+
+§ B DETAILS OF THE ATTACKS
+
+§ B.1 DIFFERENTIABLE APPROXIMATION
+
+Some of the transformations contain non-differentiable operations which can be easily approximated with differentiable functions. Specifically, we approximate the rounding function in JPEG compression and color precision reduction, and the modulo operator in all transformations that require conversion between RGB and HSV color-spaces (HSV alteration and color jitter). Note that we are not using the non-differentiable transform on the forward pass and a differentiable approximation on the backward pass (like in BPDA). Instead, we are using the differentiable version both when performing the forward pass and when computing the gradient.
+
+We take the approximation of the rounding function from Shin and Song (2017) shown in Eqn. 14.
+
+$$
+\lfloor x \rceil_{\text{approx}} = \lfloor x \rceil + {\left( x - \lfloor x \rceil \right)}^{3} \tag{14}
+$$
+
+For the modulo or the remainder function, we approximate it using the above differentiable rounding function as a basis.
+
+$$
+\operatorname{mod}\left( x\right) = \left\{ \begin{array}{ll} x - \lfloor x\rceil & \text{ if }x > \lfloor x\rceil \\ x - \lfloor x\rceil + 1 & \text{ otherwise } \end{array}\right. \tag{15}
+$$
+
+To obtain a differentiable approximation, we can replace the rounding operator with its smooth version in Eqn. 14. This function approximately returns the fractional part of a given real number, and it can be scaled to approximate a modulo operator with any divisor.
+
+Note that these operators are step functions and are differentiable almost everywhere, like ReLU. However, their derivatives are always zero (unlike ReLU), and so a first-order optimization algorithm would still fail on these functions.
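A minimal numerical sketch of the two approximations above (our illustration in NumPy, showing forward values only; in an autodiff framework the same expressions would also yield nonzero gradients, and keeping the exact rounding inside the branch condition is our reading of Eqn. 15):

```python
import numpy as np

def round_approx(x):
    # Differentiable rounding (Eqn. 14): exact round plus a cubic
    # correction whose derivative is nonzero almost everywhere.
    r = np.round(x)
    return r + (x - r) ** 3

def mod1_approx(x):
    # Differentiable fractional part (Eqn. 15), with the smooth rounding
    # substituted for the exact one inside each branch.
    r = np.round(x)
    ra = round_approx(x)
    return np.where(x > r, x - ra, x - ra + 1)
```

For example, `mod1_approx(1.3)` gives roughly 0.273, close to the exact fractional part 0.3, while remaining smooth between integers.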
+
+§ B.2 EFFECT OF THE PERMUTATION OF THE TRANSFORMATIONS
+
+We mentioned in Section 3.2 that a permutation of the transforms ${\left\{ {\tau }^{\left( s\right) }\right\} }_{s = 1}^{S}$ is randomly sampled for each of the $n$ samples. However, we found that in practice, this leads to high-variance estimates of the gradients. On the other hand, fixing the permutation across the $n$ samples in each attack iteration (i.e., $\tau$ is fixed but not $\alpha$ or $\beta$) results in lower variance and hence a stronger attack, even though the gradient estimates are biased because $\tau$ is fixed. For instance, with a fixed permutation, the adversarial accuracy achieved by the EoT attack is 51.44, whereas the baseline EoT with a completely random permutation achieves 70.79. The variance also reduces from 0.97 to 0.94.
+
+Additionally, the fixed permutation reduces the computation time as all transformations can be applied in batch. All of the attacks reported in this paper, apart from the baseline, use this fixed permutation.
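The two sampling schemes can be sketched as follows (an illustrative snippet, not our implementation; `transforms` is any list of transform identifiers):

```python
import random

def sample_pipelines(transforms, S, n, fixed_permutation=True):
    """Sample the transform order for each of the n EoT samples of one
    attack step. With a fixed permutation, all n samples share a single
    order (lower-variance but biased gradient estimates, and the
    transforms can be applied in batch); otherwise each sample draws
    its own order (unbiased but high-variance)."""
    if fixed_permutation:
        perm = random.sample(transforms, S)
        return [list(perm) for _ in range(n)]
    return [random.sample(transforms, S) for _ in range(n)]
```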
+
+§ B.3 VARIANCE OF GRADIENTS
+
+We have described how we compute the sample variance of the gradients in Section 6.1. Here, we provide detailed calculations of the other three metrics. First, the unbiased variance is computed as usual, with an additional normalization by the dimension $d$.
+
+$$
+{\mu }_{n} \mathrel{\text{ := }} \frac{1}{n}\mathop{\sum }\limits_{{j = 1}}^{n}\nabla {\widehat{G}}_{1,j} \tag{16}
+$$
+
+$$
+{\sigma }_{n}^{2} \mathrel{\text{ := }} \frac{1}{d}\frac{1}{n - 1}\mathop{\sum }\limits_{{j = 1}}^{n}{\begin{Vmatrix}{\mu }_{n} - {\widehat{G}}_{1,j}\end{Vmatrix}}_{2}^{2} \tag{17}
+$$
+
+where ${\widehat{G}}_{1}$ is the signed gradients where the loss is estimated with one sample as defined in Algorithm 1.
+
+
+Figure 7: Comparison of the true transformed outputs (top row) and outputs of respective BPDA networks (bottom row) for six different transformation types.
+
+Adversarial accuracy with varying attack steps at $n = 10$ (columns 2-4) and with varying $n$ at 200 attack steps (columns 5-7):
+
+| Attacks | 50 steps | 200 steps | 800 steps | $n = 5$ | $n = 10$ | $n = 20$ |
+| --- | --- | --- | --- | --- | --- | --- |
+| Baseline | $82.34 \pm 0.43$ | $73.36 \pm 0.37$ | $71.70 \pm 0.39$ | $74.81 \pm 0.47$ | $74.46 \pm 0.55$ | $76.06 \pm 0.29$ |
+| CE (softmax) | $82.37 \pm 0.39$ | $71.05 \pm 0.36$ | $65.06 \pm 0.39$ | $73.82 \pm 0.35$ | $70.71 \pm 0.53$ | $68.51 \pm 0.33$ |
+| Linear (logits) | $80.67 \pm 0.50$ | $66.11 \pm 0.58$ | $58.26 \pm 0.62$ | $70.67 \pm 0.41$ | $66.59 \pm 0.57$ | $62.48 \pm 0.41$ |
+| Linear+MB | $78.51 \pm 0.45$ | $72.66 \pm 0.50$ | $65.28 \pm 0.41$ | $72.47 \pm 0.39$ | $72.51 \pm 0.55$ | $71.06 \pm 0.32$ |
+| Linear+LinBP | $82.90 \pm 0.50$ | $70.57 \pm 0.32$ | $65.15 \pm 0.43$ | $75.24 \pm 0.35$ | $72.73 \pm 0.40$ | $70.02 \pm 0.31$ |
+| Linear+SGM | $80.10 \pm 0.43$ | $\mathbf{63.75} \pm 0.21$ | $\mathbf{51.68} \pm 0.35$ | $\mathbf{66.93} \pm 0.43$ | $\mathbf{62.57} \pm 0.31$ | $59.61 \pm 0.55$ |
+| Linear+TG | $80.78 \pm 0.56$ | $68.70 \pm 0.34$ | $\mathbf{59.69} \pm 0.57$ | $71.72 \pm 0.41$ | $67.84 \pm 0.50$ | $65.63 \pm 0.50$ |
+
+Table 4: Comparison of different attack techniques on our best RT model. Lower means stronger attack. This table only shows the numerical results plotted in Fig. 3.
+
+
+Figure 8: (a) Cosine similarity and (b) percentage of sign matches for three pairs of attack loss functions and decision rules: CE loss with EoT "Baseline", CE loss on mean softmax probability "CE (softmax)", and linear loss on logits "Lin (logits)".
+
+
+Figure 9: (a) Cosine similarity and (b) percentage of sign matches for the linear loss and its combinations with three transfer attack techniques: Linear Backward Pass "LinBP", Skip Gradient Method "SGM", and targeted "TG".
+
+The cosine similarity is computed between the mean gradient and all $n$ samples and then averaged.
+
+$$
+{\cos }_{n} \mathrel{\text{ := }} \frac{1}{n}\mathop{\sum }\limits_{{j = 1}}^{n}\frac{\left\langle {\widehat{G}}_{1,j},{\mu }_{n}\right\rangle }{{\begin{Vmatrix}{\widehat{G}}_{1,j}\end{Vmatrix}}_{2} \cdot {\begin{Vmatrix}{\mu }_{n}\end{Vmatrix}}_{2}} \tag{18}
+$$
+
+Lastly, the sign matching percentage is
+
+$$
+\operatorname{sign\_match}_{n} \mathrel{\text{ := }} \frac{1}{n}\mathop{\sum }\limits_{{j = 1}}^{n}\frac{1}{d}\mathop{\sum }\limits_{{i = 1}}^{d}\mathbb{1}\left\{ {{\left\lbrack {\widehat{G}}_{1,j}\right\rbrack }_{i} = {\left\lbrack {\mu }_{n}\right\rbrack }_{i}}\right\} \tag{19}
+$$
+
+Fig. 8 and Fig. 9 plot the cosine similarity and the sign matching for varying loss functions and varying transfer attacks, respectively. Similarly to Fig. 4, better attacks result in less spread of the gradient samples, which corresponds to a higher cosine similarity and sign matching percentage.
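For concreteness, the metrics of Eqns. 16 to 19 can be computed from an $(n, d)$ matrix of one-sample gradient estimates as follows (a sketch, assuming $\mu_n \neq 0$; for Eqn. 19 we compare signs, which is equivalent when the rows are sign vectors as in Algorithm 1):

```python
import numpy as np

def gradient_agreement_metrics(G):
    """G: (n, d) array whose rows are the n one-sample signed-gradient
    estimates. Returns the dimension-normalized unbiased variance, the
    mean cosine similarity to the mean gradient, and the sign-match rate."""
    n, d = G.shape
    mu = G.mean(axis=0)                                      # Eqn. 16
    var = ((G - mu) ** 2).sum() / (d * (n - 1))              # Eqn. 17
    cos = np.mean((G @ mu) / (np.linalg.norm(G, axis=1)
                              * np.linalg.norm(mu)))         # Eqn. 18
    sign_match = np.mean(np.sign(G) == np.sign(mu))          # Eqn. 19
    return var, cos, sign_match
```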
+
+§ C DETAILS ON BAYESIAN OPTIMIZATION
+
+One major challenge in implementing an RT defense is selecting the defense hyperparameters, which include the $K$ transformation types, the number of transformations to apply ($S$), and their parameters ($a$ and $p$). To improve the robustness of the RT defense, we use Bayesian optimization (BO), a well-known black-box optimization technique, to fine-tune $a$ and $p$ (Snoek, Larochelle, and Adams 2012). In this case, BO models the hyperparameter tuning as a Gaussian process where the objective function takes in $a$ and $p$, trains a neural network as a backbone for an RT defense, and outputs the adversarial accuracy under some pre-defined ${\ell }_{\infty }$ -budget $\epsilon$ as the metric used for optimization.
+
+Since BO quickly becomes ineffective as we increase the dimensions of the search space, we choose to tune either $a$ or $p$, never both, for each of the $K$ transformation types. For transformations that have a tunable $a$, we fix $p = 1$ (e.g., noise injection, affine transform). For the transformations without an adjustable strength $a$, we only tune $p$ (e.g., Laplacian filter, horizontal flip). Additionally, because BO does not natively support categorical or integral variables, we experiment with different choices for $K$ and $S$ without the use of BO. Therefore, our BO problem must optimize over $K$ (up to 33) variables, far more than are typically present when doing model hyperparameter tuning with BO.
+
+
+Figure 10: Effectiveness of the optimizers, (a) SGD and (b) AggMo, with varying momentum parameters. Increasing $B$ for AggMo in this case monotonically reduces the final adversarial accuracy until $B = 4$ where it plateaus. This is more predictable and stable than increasing the momentum constant in SGD.
+
+Mathematically, the objective function $\psi$ is defined as
+
+$$
+\psi : {\left\lbrack 0,1\right\rbrack }^{K} \rightarrow {\mathcal{R}}_{\infty ,\epsilon } \in \left\lbrack {0,1}\right\rbrack \tag{20}
+$$
+
+where the input is $K$ real numbers between 0 and 1, and ${\mathcal{R}}_{\infty ,\epsilon }$ denotes the adversarial accuracy or the accuracy on ${x}_{\text{ adv }}$ as defined in Eqn. 1. Since $\psi$ is very expensive to evaluate as it involves training and testing a large neural network, we employ the following strategies to reduce the computation: (1) only a subset of the training and validation set is used, (2) the network is trained for fewer epochs with a cosine annealing learning rate schedule to speed up convergence (Loshchilov and Hutter 2017), and (3) the attack used for computing ${\mathcal{R}}_{\infty ,\epsilon }$ is weaker but faster. Even with these speedups, one BO run still takes approximately two days to complete on two GPUs (Nvidia GeForce GTX 1080 Ti). We also experimented with other sophisticated hyperparameter-tuning algorithms based on Gaussian processes (Bergstra, Yamins, and Cox 2013; Kandasamy et al. 2020; Falkner, Klein, and Hutter 2018) but do not find them more effective. We summarize the main steps for tuning and training an RT defense in Algorithm 2.
+
+Algorithm 2: Tuning and training an RT defense.
+
+Input: Set of transformation types, $n, p, \epsilon$
+Output: ${g}^{*}(\cdot)$, $\mathcal{R}$, ${\mathcal{R}}_{p,\epsilon }$
+Data: Training data $({\mathbf{X}}^{\text{train}}, {\mathbf{Y}}^{\text{train}})$, test data $({\mathbf{X}}^{\text{test}}, {\mathbf{Y}}^{\text{test}})$
+
+// Starting Bayesian optimization (BO)
+1. Sub-sample $({\mathbf{X}}^{\text{train}}, {\mathbf{Y}}^{\text{train}})$ and split it into BO's training data $({\mathbf{X}}_{\mathrm{BO}}^{\text{train}}, {\mathbf{Y}}_{\mathrm{BO}}^{\text{train}})$ and validation data $({\mathbf{X}}_{\mathrm{BO}}^{\text{val}}, {\mathbf{Y}}_{\mathrm{BO}}^{\text{val}})$.
+2. ${\mathcal{R}}_{p,\epsilon }^{*} \leftarrow 0$ // best adversarial accuracy
+3. ${\left\{ \left( {p}_{i}^{*},{\alpha }_{i}^{*}\right) \right\} }_{i = 1}^{K} \leftarrow 0$ // best RT hyperparameters
+4. for step $\leftarrow 0$ to MAX_BO_STEPS do: // one BO trial per iteration
+5. &emsp;BO specifies ${\left\{ \left( {p}_{i},{\alpha }_{i}\right) \right\} }_{i = 1}^{K}$ to evaluate.
+6. &emsp;Train an RT model on $({\mathbf{X}}_{\mathrm{BO}}^{\text{train}}, {\mathbf{Y}}_{\mathrm{BO}}^{\text{train}})$ with hyperparameters ${\left\{ \left( {p}_{i},{\alpha }_{i}\right) \right\} }_{i = 1}^{K}$ to obtain $g$.
+7. &emsp;Test $g$ by computing ${\mathcal{R}}_{p,\epsilon }$ on $({\mathbf{X}}_{\mathrm{BO}}^{\text{val}}, {\mathbf{Y}}_{\mathrm{BO}}^{\text{val}})$ using a weak but fast attack.
+8. &emsp;if ${\mathcal{R}}_{p,\epsilon } > {\mathcal{R}}_{p,\epsilon }^{*}$ then:
+9. &emsp;&emsp;${\mathcal{R}}_{p,\epsilon }^{*} \leftarrow {\mathcal{R}}_{p,\epsilon }$
+10. &emsp;&emsp;${\left\{ \left( {p}_{i}^{*},{\alpha }_{i}^{*}\right) \right\} }_{i = 1}^{K} \leftarrow {\left\{ \left( {p}_{i},{\alpha }_{i}\right) \right\} }_{i = 1}^{K}$
+11. &emsp;else if no improvement for some steps then:
+12. &emsp;&emsp;break
+
+// Full training of RT
+13. Train an RT model on $({\mathbf{X}}^{\text{train}}, {\mathbf{Y}}^{\text{train}})$ with the best hyperparameters ${\left\{ \left( {p}_{i}^{*},{\alpha }_{i}^{*}\right) \right\} }_{i = 1}^{K}$ to obtain ${g}^{*}$.
+14. Evaluate ${g}^{*}$ by computing $\mathcal{R}$ and ${\mathcal{R}}_{p,\epsilon }$ on $({\mathbf{X}}^{\text{test}}, {\mathbf{Y}}^{\text{test}})$ using a strong attack.
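The control flow of Algorithm 2 can be sketched as follows. This is an illustration only: uniform random sampling stands in for the Gaussian-process proposal step of actual BO, and `train_and_eval` is a hypothetical callable wrapping lines 6 and 7 (train an RT model with hyperparameters `theta`, return its adversarial accuracy on the validation split):

```python
import random

def tune_rt_hyperparams(train_and_eval, K, max_steps=160,
                        min_steps=80, patience=40):
    """Outer loop of Algorithm 2 with random search in place of BO."""
    best_acc, best_theta, last_improve = 0.0, None, 0
    for step in range(max_steps):
        theta = [random.random() for _ in range(K)]  # BO would propose this
        acc = train_and_eval(theta)
        if acc > best_acc:
            best_acc, best_theta, last_improve = acc, theta, step
        elif step >= min_steps and step - last_improve >= patience:
            break  # stop after `patience` runs without improvement
    return best_theta, best_acc
```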
+
+We use the Ray Tune library (Liaw et al. 2018) for RT's hyperparameter tuning in Python. The Bayesian optimization tool is implemented by Nogueira (2014), following analyses and instructions by Snoek, Larochelle, and Adams (2012) and Brochu, Cora, and de Freitas (2010). As mentioned in Section 5, we sub-sample the data to reduce computation for each BO trial. Specifically, we use $20\%$ and $10\%$ of the training samples for Imagenette and CIFAR-10 respectively (Algorithm 2, line 1), as Imagenette has a much smaller total number of samples. The models are trained with the same transformations and hyperparameters used during inference, with $n$ set to 1 during training, just as in standard data augmentation. We use 200 samples to evaluate each BO run in line 7 of Algorithm 2, using an attack with only 100 steps and $n = 10$.
+
+One BO experiment executes two BO processes in parallel. The maximum number of BO runs is 160, but we terminate the experiment if no improvement has been made in the last 40 runs, after a minimum of 80 runs have taken place. The runtime depends on $S$ and the transformation types used. In our typical case, when all 33 transformation types are used and $S = 14$, one BO run takes almost an hour on an Nvidia GeForce GTX 1080 Ti for Imagenette. One BO experiment then takes about two days to finish.
+
+
+Figure 11: Clean accuracy of our best RT model computed with three decision rules for obtaining the final prediction from the $n$ output samples. The rules are majority vote (red), average softmax probability (blue), and average logits (green). The shaded areas represent the ${95}\%$ confidence interval for each decision rule.
+
+In lines 13 and 14 of Algorithm 2, we use the full training set and 1000 test samples as mentioned earlier. During the full training, $n$ is set to four, which increases the training time approximately fourfold. We find that using a larger $n$ is beneficial to both the clean and the adversarial accuracy, but an $n$ larger than four does not make any significant difference.
+
+§ C.1 DETAILS ON THE FINAL RT MODEL
+
+We run multiple BO experiments (Algorithm 2) on different subsets of transformation types to identify which transformations are most and least effective, in order to reduce $K$ as well as the number of hyperparameters our final run of BO has to tune. We then repeat Algorithm 2 initialized with the input-output pairs from the prior runs of BO to obtain a new set of hyperparameters. Finally, we remove the transformations whose $p$ or $a$ has been set to zero by the first run of BO, and we run BO once more with this filtered subset of transformations. At the end of this expensive procedure, we obtain the best and final RT model that we use in the experiments throughout this paper. For Imagenette, the final set of 18 transformation types used in this model is color jitter, erase, gamma, affine, horizontal flip, vertical flip, Laplacian filter, Sobel filter, Gaussian blur, median blur, motion blur, Poisson noise, FFT, JPEG compression, color precision reduction, salt noise, sharpen, and solarize. $S$ is set to 14.
+
+§ D ADDITIONAL EXPERIMENTS ON THE RT MODEL
+
+§ D.1 DECISION RULES AND NUMBER OF SAMPLES
+
+Fig. 11 and Fig. 12 compare three different decision rules that aggregate the $n$ outputs of the RT model to produce the final prediction $\widehat{y}\left( x\right)$ given an input $x$ . We choose the average softmax probability rule for all of our RT models because it provides a good trade-off between the clean accuracy and the robustness. Majority vote has poor clean accuracy, and the average logits have poor robustness.
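The three decision rules can be sketched as follows (a minimal NumPy illustration, assuming the $n$ forward passes produce a logits matrix of shape $(n, C)$):

```python
import numpy as np

def aggregate_predictions(logits):
    """Final class prediction under the three decision rules, given
    logits of shape (n, C), one row per randomized forward pass."""
    # Majority vote over the per-sample argmax predictions
    majority = np.bincount(logits.argmax(axis=1),
                           minlength=logits.shape[1]).argmax()
    # Average softmax probability (the rule used for our RT models)
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    avg_softmax = (e / e.sum(axis=1, keepdims=True)).mean(axis=0).argmax()
    # Average logits
    avg_logits = logits.mean(axis=0).argmax()
    return majority, avg_softmax, avg_logits
```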
+
+
+Figure 12: Adversarial accuracy $\left( {\epsilon = {16}/{255}}\right)$ of our best RT model computed with three decision rules for obtaining the final prediction from the $n$ output samples. The rules are majority vote (red), average softmax probability (blue), and average logits (green). The shaded areas represent the 95% confidence interval for each decision rule.
+
+Table 5: RT's performance when only one of the transformation groups is applied. The attack is Linear+Adam+SGM with 200 steps and $n = {20}$ .
+
+| Used Transformations | Clean Acc. | Adv. Acc. |
+| --- | --- | --- |
+| Noise injection | $80.93 \pm 0.44$ | $8.35 \pm 0.20$ |
+| Blur filter | $97.32 \pm 0.20$ | $0.00 \pm 0.00$ |
+| Color space | $94.40 \pm 0.53$ | $0.00 \pm 0.00$ |
+| Edge detection | $97.64 \pm 0.09$ | $0.00 \pm 0.00$ |
+| Lossy compression | $83.56 \pm 0.66$ | $3.56 \pm 0.26$ |
+| Geometric transforms | $88.42 \pm 0.28$ | $0.83 \pm 0.21$ |
+| Stylization | $98.31 \pm 0.09$ | $0.00 \pm 0.00$ |
+
+§ D.2 IMPORTANCE OF THE TRANSFORMATION GROUPS
+
+Choosing the best set of transformation types to use is a computationally expensive problem. There are many more transformations that can be applied beyond the 33 types we chose, and the number of possible combinations grows exponentially. BO gives us an approximate solution but is by no means perfect. Here, we take a step further to understand the importance of each transformation group. Table 5 gives an alternative way to gauge the contribution of each group. According to this experiment, noise injection appears the most robust, followed by lossy compression and geometric transformations. However, this result is not very informative as most of the groups have zero adversarial accuracy, and the rest are likely to also drop to zero given more attack steps. This result also, perhaps surprisingly, follows the commonly observed robustness-accuracy trade-off (Tsipras et al. 2019).
+
+
+Figure 13: Adversarial accuracy of RT models obtained after running Algorithm 2 for different values of $S$ on CIFAR-10.
+
+§ D.3 NUMBER OF TRANSFORMATIONS
+
+We test the effect of the transform permutation size $S$ on the clean and the robust accuracy of RT models (Fig. 13). We run Bayesian optimization experiments for different values of $S$ using all 33 transformation types, and all of the models are trained using the same procedure. Fig. 13 shows that generally more transformations (larger $S$ ) increase robustness but lower accuracy on benign samples.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/rq2hMS4OaUX/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/rq2hMS4OaUX/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..93b186b6b2c627017cfc42da0e0b858d55502ea6
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/rq2hMS4OaUX/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,261 @@
+# Improving Perceptual Quality of Adversarial Images Using Perceptual Distance Minimization and Normalized Variance Weighting
+
+## Abstract
+
+Neural networks are known to be vulnerable to adversarial examples, which are obtained by adding intentionally crafted perturbations to original images. However, these perturbations degrade the perceptual quality of the images and make their content more difficult for humans to perceive. In this paper, we propose two separate attack-agnostic methods to increase the perceptual quality while preserving the target fooling rate. The first method intensifies the perturbations in the high-variance areas of the images. It can be used in both white-box and black-box settings for any type of adversarial example, with only the computational cost of calculating the pixel-based image variance. The second method minimizes the perturbations of already generated adversarial examples, independent of the attack type. In this method, the distance between benign and adversarial examples is reduced until the adversarial examples reach the decision boundary of the true class. We show that these methods can also be used in conjunction to improve the perceptual quality of adversarial examples, and we demonstrate quantitative improvements on the CIFAR-10 and NIPS2017 Adversarial Learning Challenge datasets.
+
+## Introduction
+
+While Deep Neural Networks (DNNs) are being used in a variety of domains, there are several studies that show their vulnerabilities. An initial study, L-BFGS method (Szegedy et al. 2014), revealed that neural networks are not robust to adversarial attacks specifically produced to fool the networks. After the discovery of adversarial attacks, several different methods have been proposed such as Fast Gradient Sign Method (FGSM) (Goodfellow, Shlens, and Szegedy 2015), Projected Gradient Descent (PGD) (Madry et al. 2018), DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016), Jacobian Saliency Map Attack (JSMA) (Papernot et al. 2016), Spatially Transformed Adversarial Examples (stAdv) (Xiao et al. 2018) and Carlini&Wagner Attack (Carlini and Wagner 2017).
+
+As adversarial examples can fool the networks, they can be used for the purpose of distinguishing humans from algorithms. While humans could still perceive the content of these images, algorithms would be deceived by the adversarial input. For such a system to be effective, the perturbation added to the image should be reasonably small and, while still misleading the algorithm, should not distract human vision. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is one of the most common examples where human users are distinguished from computer algorithms (Aksoy and Temizel 2019). The main motivation of this study is to improve successful adversarial attacks while reducing the perturbations that are distracting to humans. We therefore propose two separate methods to improve the perceptual quality while keeping the attacks successful. The first method intensifies the perturbation in high-variance zones and suppresses it in low-variance zones using the variance map of the input image, for any type of attack. In effect, this disguises the adversarial noise in high-variance areas and limits the high-frequency noise added to low-variance areas, where it would be more distracting. The second method minimizes the perturbation until it reaches the decision boundary. While variance weighting is applied during the attack, the minimization method can be considered post-processing applied after acquiring the adversarial example with any type of adversarial attack. As seen in Figure 1, localized and minimized perturbations improve the perceptual quality while keeping the fooling rate stable.
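A minimal sketch of the variance-weighting idea (our illustration, not the paper's exact formulation): compute a normalized local-variance map of the image and use it to scale the perturbation, so noise concentrates where it is least distracting.

```python
import numpy as np

def variance_weight_map(img, k=3):
    """Per-pixel local variance of a grayscale image over a k x k
    neighborhood (edge padding), normalized to [0, 1]."""
    h, w = img.shape
    pad = k // 2
    p = np.pad(img.astype(float), pad, mode="edge")
    var = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            var[i, j] = p[i:i + k, j:j + k].var()
    rng = var.max() - var.min()
    return (var - var.min()) / rng if rng > 0 else np.zeros((h, w))

# Intensify an existing perturbation delta where local variance is high:
# x_adv = np.clip(x + variance_weight_map(x) * delta, 0.0, 1.0)
```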
+
+
+
+Figure 1: FGSM $\left( {\epsilon = 8/{255}}\right)$ results on NIPS2017 against ResNet50 with both proposed methods: variance weighting and minimization (shown separately for minimization with respect to ${L}_{2}$ and LPIPS) and combinations of them.
+
+---
+
+Copyright (C) 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+## Related Work
+
+## Adversarial Attacks
+
+L-BFGS is the initial method for generating adversarial examples, using a box-constrained optimization method (Szegedy et al. 2014). However, this method is computationally very costly. FGSM (Goodfellow, Shlens, and Szegedy 2015) is an efficient gradient-based attack algorithm, which computes the gradient only once and adds the perturbation in the gradient-ascent direction of the loss function. The Iterative Fast Gradient Sign Method (I-FGSM) (Kurakin, Goodfellow, and Bengio 2017) extends FGSM by attacking iteratively with a small step size and recalculating the gradient at each step. The C&W attack (Carlini and Wagner 2017) minimizes the ${L}_{2}$ norm with an improved optimization method. DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016) efficiently computes the smallest perturbation towards the closest decision boundary. The Jacobian-based Saliency Map Attack (JSMA) (Papernot et al. 2016) generates sparse perturbations by building a saliency map that ranks the contribution of each input variable to the adversarial objective; a perturbation is then selected from the saliency map at each iteration.
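As a concrete illustration of the single-step update behind FGSM, the sketch below applies the sign of a loss gradient to an image, scaled by a budget $\epsilon$. This is a minimal sketch, not the paper's code; the hand-set `grad` array stands in for the network's loss gradient with respect to the input.

```python
import numpy as np

def fgsm_step(x, grad, eps):
    """One FGSM update: move every pixel by eps in the sign of the
    loss gradient, then clip back to the valid image range [0, 1]."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy 2x2 "image"; the fixed gradient stands in for d(loss)/d(input).
x = np.full((2, 2), 0.5)
grad = np.array([[0.3, -0.7],
                 [0.0, 0.2]])
x_adv = fgsm_step(x, grad, eps=8 / 255)
```

I-FGSM simply repeats this step with a smaller step size, recomputing the gradient each time.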
+
+## Perceptual Metrics
+
+All adversarial attack methods essentially aim to fool the network while minimizing the dissimilarity between benign and adversarial examples (i.e., minimizing the added perturbation). While the similarity metrics vary according to attack type, the most widely used distance metrics are ${L}_{p}$ norms $\left( {p = 0,1,2,\infty }\right)$. In particular, FGSM is an ${L}_{\infty }$ , JSMA is an ${L}_{0}$ , and C&W is an ${L}_{2}$ norm based attack. Even though ${L}_{p}$ norms are very convenient and commonly used, several studies state that ${L}_{p}$ norms do not reflect human perception accurately (Sharif, Bauer, and Reiter 2018; Jordan et al. 2019). Besides, there are attack types (Jordan et al. 2019; Laidlaw and Feizi 2019; Laidlaw, Singla, and Feizi 2021; Aydin et al. 2021) for which ${L}_{p}$ norms are not suitable for evaluating attack success. Thus, these studies employ more recent perceptual metrics such as the Learned Perceptual Image Patch Similarity (LPIPS) metric (Zhang et al. 2018) or the Deep Image Structure and Texture Similarity (DISTS) index (Ding et al. 2020). Both of these metrics use an additional neural network to measure the distance. LPIPS is calibrated with human perception and measures the Euclidean distance of deep representations. Likewise, DISTS is optimized against human perception using a combination of deep image structure and texture similarity.
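For reference, the three ${L}_{p}$ norms mentioned above can be computed directly from the perturbation. The helper below is a sketch (the function name is ours, not from the paper):

```python
import numpy as np

def lp_distances(x, x_adv):
    """Perturbation size under the L_p norms used by common attacks."""
    d = (x_adv - x).ravel()
    return {
        "L0": int(np.count_nonzero(d)),   # changed pixels (JSMA)
        "L2": float(np.linalg.norm(d)),   # Euclidean norm (C&W)
        "Linf": float(np.abs(d).max()),   # largest per-pixel change (FGSM)
    }

x = np.zeros((2, 2))
x_adv = np.array([[0.1, 0.0],
                  [0.0, -0.2]])
dists = lp_distances(x, x_adv)
```

LPIPS and DISTS, by contrast, cannot be written in closed form; they require a forward pass through a pretrained network.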
+
+## Variance Map on Adversarial Attacks
+
+Human perception is affected more by perturbations in low-variance areas than in high-variance areas, and this property is exploited in various image processing applications (Legge and Foley 1980; Lin, Dong, and Xue 2005; Liu et al. 2010). Accordingly, variance maps have been used in previous studies to generate adversarial examples (Luo et al. 2018; Croce and Hein 2019). In (Croce and Hein 2019), the variance map is used to produce variance-based componentwise box constraints for generating sparse adversarial examples. In another study, the variance map is applied to select high-variance pixels (Luo et al. 2018). Using only ${L}_{p}$ norms for these variance-based sparse attacks does not accurately reflect the perceptual quality (Luo et al. 2018); thus, variance-based sparse attacks either use mean and median values of pixels or introduce a new distance metric better suited to the evaluation of their proposed attacks.
+
+## Methodology
+
+## Normalized Variance Weighting
+
+In our study, we use the variance map to intensify the perturbations in high-variance zones in an attack agnostic manner, instead of selecting high-variance pixels or variance boundaries. We adopt the variance map method of (Croce and Hein 2019) to produce the variance map of an input image. In this method, for each color channel, the standard deviation over the pixel and its 2 neighbour pixels is calculated along each axis ( ${\sigma }_{ij}^{\left( x\right) }$ and ${\sigma }_{ij}^{\left( y\right) }$ for the $x$ and $y$ axes respectively), and the square root of the minimum of the two standard deviations gives the variance map ${\sigma }_{ij}$ (Equation 1). The variance map is then normalized to obtain the normalized variance map ${V}_{i, j}$ (Equation 2).
+
+$$
+{\sigma }_{ij} = \sqrt{\min \left\{ {{\sigma }_{ij}^{\left( x\right) },{\sigma }_{ij}^{\left( y\right) }}\right\} } \tag{1}
+$$
+
+$$
+{V}_{i, j} = \frac{{\sigma }_{i, j}}{\sqrt{\mathop{\sum }\limits_{h}^{H}\mathop{\sum }\limits_{w}^{W}{\sigma }_{h, w}^{2}}} \tag{2}
+$$
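Equations 1 and 2 can be sketched in NumPy for a single-channel image. We assume a 3-pixel window (the pixel plus its two neighbours along each axis) and reflection padding at the borders, so the exact border handling may differ from (Croce and Hein 2019):

```python
import numpy as np

def variance_map(img):
    """sigma_ij of Equation 1 for a 2-D single-channel image:
    per-pixel standard deviation along x and y over the pixel and its
    two neighbours, then the square root of the axis-wise minimum."""
    p = np.pad(img, 1, mode="reflect")
    H, W = img.shape
    sx = np.empty((H, W))
    sy = np.empty((H, W))
    for i in range(H):
        for j in range(W):
            sx[i, j] = np.std(p[i + 1, j:j + 3])  # horizontal window
            sy[i, j] = np.std(p[i:i + 3, j + 1])  # vertical window
    return np.sqrt(np.minimum(sx, sy))

def l2_normalize(sigma):
    """V_ij of Equation 2: divide by the map's L2 norm (with a small
    epsilon to guard against an all-zero map on flat images)."""
    return sigma / (np.linalg.norm(sigma) + 1e-12)

v = l2_normalize(variance_map(np.random.rand(8, 8)))
```

A flat image yields a zero map, so the perturbation there would be suppressed everywhere; for natural images the normalized map has unit ${L}_{2}$ norm.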
+
+Since our method does not involve selecting pixels or generating variance box constraints, it does not require any additional threshold or coefficient variable; the normalizing and weighting procedures remove the need for one. As seen in Algorithm 1, the proposed method generates the variance map of the input image only once (Equation 1), normalizes it using the ${L}_{2}$ norm (Equation 2), and applies it by weighting the perturbation with the normalized variance map at each iteration (if the adversarial attack is iterative). The method can be adapted to both white-box and black-box settings and does not require any optimization or additional gradient-based steps. Therefore, it adds no computational cost beyond the one-time calculation of the variance map.
+
+
+
+Figure 2: Visual representation of integration of proposed methods.
+
+## Minimization Method
+
+The proposed minimization method is applied after generating the initial adversarial example and aims to reduce the distance between benign and adversarial examples using an optimizer. The optimizer minimizes the distance until the adversarial example reaches the decision boundary of the true class or the maximum iteration number is reached (Algorithm 2). We apply our minimization technique with respect to two different distance metrics: the ${L}_{2}$ norm and LPIPS (it has to be noted that some attacks are not suitable for the ${L}_{2}$ distance metric (Aydin et al. 2021)). As LPIPS measures the perceptual distance using an additional neural network (i.e., VGG16 (Simonyan and Zisserman 2015)), it has a higher processing time and a higher number of parameters compared to ${L}_{2}$ minimization. In (Aksoy and Temizel 2019), the attack strength is iteratively adjusted after the generation of the adversarial example to obtain the minimal perturbation in an attack agnostic manner; our proposed method improves on this by directly optimizing the minimization of the perturbation.
+
+## Normalized Variance Weighting + Minimization
+
+The normalized variance weighting method is applied during the adversarial attack, while the minimization method is applied after the generation of the adversarial example, so both methods can be integrated and used together to generate adversarial examples. The complete pipeline integrating both methods is illustrated in Figure 2: we first generate the adversarial example while applying the variance weighting method, and after obtaining the variance-weighted adversarial example, we apply the minimization method as post-processing to obtain the improved adversarial example.
+
+Algorithm 1: Normalized Variance Weighting
+
+```
+Input: x: original image; Adv: one iteration of the adversarial attack
+Parameter: i_max: maximum number of attack iterations
+Output: y: adversarial example
+
+i = 0
+v = VarianceMap(x)        (Equation 1)
+v = L2Normalize(v)        (Equation 2)
+y = x
+while i < i_max do
+    y = Adv(y)
+    p = (y - x) * v
+    y = x + p
+    i = i + 1
+end while
+return y
+```
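Algorithm 1 can be sketched as a wrapper around any iterative attack. Below, `adv_step` is a hypothetical callable standing in for one iteration of `Adv`, and `v` is the ${L}_{2}$-normalized variance map of `x`:

```python
import numpy as np

def variance_weighted_attack(x, adv_step, v, i_max):
    """Normalized variance weighting (Algorithm 1): after each attack
    iteration, rescale the accumulated perturbation pixel-wise by the
    L2-normalized variance map v before the next iteration."""
    y = x.copy()
    for _ in range(i_max):
        y = adv_step(y)
        p = (y - x) * v        # weight the perturbation by local variance
        y = x + p
    return y

# Toy demo: an "add noise everywhere" step, with a 0/1 variance map
# that keeps the perturbation only on the diagonal pixels.
x = np.zeros((2, 2))
v = np.array([[1.0, 0.0],
              [0.0, 1.0]])
y = variance_weighted_attack(x, lambda z: z + 0.1, v, i_max=3)
```

With a real normalized variance map, the weighting concentrates the perturbation in textured regions instead of zeroing it out, while leaving smooth regions nearly untouched.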
+
+## Experiments
+
+Datasets. We used the CIFAR-10 and NIPS2017 Adversarial Learning Challenge datasets in the experiments. The CIFAR-10 test set contains 10000 images with ${32} \times {32}$ resolution. We conducted our experiments on a subset of the CIFAR-10 test set with 1000 images (100 random images from each category). The NIPS2017 dataset is a subset of the ImageNet dataset and contains 1000 images (one image from each category) with ${299} \times {299}$ resolution.
+
+Attack Types. We tested the proposed methods using 3 different untargeted attack types: a single-step gradient-based attack (FGSM) (Goodfellow, Shlens, and Szegedy 2015), an iterative gradient-based attack (I-FGSM) (Kurakin, Goodfellow, and Bengio 2017) and an optimization-based attack (C&W) (Carlini and Wagner 2017) on the CIFAR10 and NIPS2017 datasets. We used ResNet50 (He et al. 2016) and Inception-V3 (Szegedy et al. 2016) for the NIPS2017 dataset, and only ResNet50 (He et al. 2016) for the CIFAR10 dataset. We used the CleverHans (Papernot et al. 2018) implementation for the default attacks and integrated the proposed methods into these attacks.
+
+Table 1: FGSM results on CIFAR10 dataset against ResNet50 with and without variance weighting (shown as Var.) and minimization method (shown as Minim.) using LPIPS and ${L}_{2}$. Results are reported in both LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | 30% LPIPS | 30% ${L}_{2}$ | 40% LPIPS | 40% ${L}_{2}$ | 50% LPIPS | 50% ${L}_{2}$ | 60% LPIPS | 60% ${L}_{2}$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| - | - | 0.19 | 0.13 | 0.93 | 0.28 | 3.19 | 0.58 | 7.42 | 1.14 |
+| - | LPIPS | 0.07 | 0.13 | 0.59 | 0.27 | 2.58 | 0.57 | 6.46 | 1.13 |
+| - | ${L}_{2}$ | 0.10 | 0.09 | 0.75 | 0.22 | 2.93 | 0.52 | 7.06 | 1.09 |
+| + | - | 0.16 | 0.14 | 0.54 | 0.26 | 2.36 | 0.54 | 6.85 | 1.18 |
+| + | LPIPS | 0.06 | 0.13 | 0.29 | 0.25 | 1.82 | 0.54 | 5.96 | 1.17 |
+| + | ${L}_{2}$ | 0.09 | 0.10 | 0.41 | 0.22 | 2.14 | 0.50 | 6.54 | 1.13 |
+
+Table 2: I-FGSM results on CIFAR10 dataset against ResNet50 with and without variance weighting (shown as Var.) and minimization method (shown as Minim.) using LPIPS and ${L}_{2}$. Results are reported in both LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | 30% LPIPS | 30% ${L}_{2}$ | 40% LPIPS | 40% ${L}_{2}$ | 50% LPIPS | 50% ${L}_{2}$ | 60% LPIPS | 60% ${L}_{2}$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| - | - | 0.62 | 0.26 | 0.97 | 0.33 | 1.52 | 0.41 | 2.71 | 0.57 |
+| - | LPIPS | 0.28 | 0.25 | 0.50 | 0.32 | 0.86 | 0.40 | 1.74 | 0.56 |
+| - | ${L}_{2}$ | 0.43 | 0.21 | 0.73 | 0.28 | 1.22 | 0.37 | 2.32 | 0.52 |
+| + | - | 0.57 | 0.27 | 0.87 | 0.34 | 1.37 | 0.43 | 2.51 | 0.60 |
+| + | LPIPS | 0.26 | 0.26 | 0.43 | 0.33 | 0.76 | 0.42 | 1.58 | 0.59 |
+| + | ${L}_{2}$ | 0.41 | 0.23 | 0.66 | 0.30 | 1.11 | 0.39 | 2.16 | 0.56 |
+
+Algorithm 2: Minimization Method
+
+```
+Input: x: original image; y: adversarial example
+Parameter: lr: learning rate; i_max: maximum iteration number
+Output: y_best: improved adversarial example
+
+i = 0
+y_best = y
+y_opt = y
+while i < i_max do
+    if class(y_opt) == class(x) then
+        return y_best            (y_opt is no longer adversarial)
+    else
+        y_best = y_opt
+    end if
+    y_opt = MinimizeDIST(y_opt, x, lr)
+    i = i + 1
+end while
+return y_best
+```
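A minimal sketch of Algorithm 2 in NumPy: plain gradient descent on the squared ${L}_{2}$ distance stands in for Adam, and `classify` is a hypothetical stand-in for the network's decision. The LPIPS variant would only change the distance being minimized:

```python
import numpy as np

def minimize_perturbation(x, y, classify, lr=0.05, i_max=100):
    """Shrink an adversarial example y towards the benign image x,
    stopping just before it crosses back over the decision boundary."""
    y_best = y.copy()
    y_opt = y.copy()
    true_class = classify(x)
    for _ in range(i_max):
        if classify(y_opt) == true_class:   # no longer adversarial: stop
            return y_best
        y_best = y_opt.copy()
        y_opt = y_opt - lr * (y_opt - x)    # gradient step on ||y - x||^2 / 2
    return y_best

# Toy 1-D "classifier": class 1 iff the mean intensity exceeds 0.6.
classify = lambda z: int(z.mean() > 0.6)
x = np.full(4, 0.5)   # benign example, class 0
y = np.full(4, 0.9)   # adversarial example, class 1
y_min = minimize_perturbation(x, y, classify)
```

The returned `y_min` still fools the toy classifier but lies closer to `x` than the original adversarial example.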
+
+## Experimental Settings for Normalized Variance Weighting
+
+For the variance map, we used the pixel and its 2 neighbour pixels for every color channel, similar to (Croce and Hein 2019). We observed that the variance weighting method considerably decreases the fooling rate when the attack strength is fixed. Thus, to compare on fair ground, we fixed the fooling rate and let $\epsilon$ (for the FGSM and I-FGSM attacks) or the initial cost (for the C&W attack) vary. This allowed reaching the target fooling rate within a $\pm {0.5}\%$ error tolerance. We targeted 4 different fooling rates for FGSM (${30}\% ,{40}\% ,{50}\% ,{60}\%$) and I-FGSM (${60}\% ,{70}\% ,{80}\% ,{90}\%$) on both datasets, and a single fooling rate for the C&W attack on each dataset: ${95}\%$ on CIFAR10 and 100% on NIPS2017 (for both ResNet50 and Inception-V3). Since there is ${L}_{2}$ normalization after producing the variance map, measuring ${L}_{p}$ norms would be misleading for the variance weighting method. Therefore, we mainly used the LPIPS perceptual distance metric, which is calibrated with human vision, for its evaluation.
+
+## Experimental Settings for Minimization Method
+
+For the proposed minimization method, we used Adam (Kingma and Ba 2015) as the optimizer and set the maximum iteration number to 10. We set the learning rate to 0.0001 on the CIFAR10 dataset for both minimization methods. On NIPS2017 (for both ResNet50 and Inception-V3), we set the learning rate to 0.0001 for ${L}_{2}$ minimization and 0.00001 for LPIPS minimization, since the two did not converge with the same learning rate.
+
+Table 3: FGSM results on NIPS2017 dataset against ResNet50 and Inception-V3 with and without variance weighting (shown as Var.) and minimization method (shown as Minim.) using LPIPS and ${L}_{2}$. Results are reported in both LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | 30% ResNet50 LPIPS | 30% ResNet50 ${L}_{2}$ | 30% Inc-V3 LPIPS | 30% Inc-V3 ${L}_{2}$ | 40% ResNet50 LPIPS | 40% ResNet50 ${L}_{2}$ | 40% Inc-V3 LPIPS | 40% Inc-V3 ${L}_{2}$ | 50% ResNet50 LPIPS | 50% ResNet50 ${L}_{2}$ | 50% Inc-V3 LPIPS | 50% Inc-V3 ${L}_{2}$ | 60% ResNet50 LPIPS | 60% ResNet50 ${L}_{2}$ | 60% Inc-V3 LPIPS | 60% Inc-V3 ${L}_{2}$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| - | - | 0.07 | 0.22 | 0.12 | 0.30 | 0.14 | 0.32 | 0.26 | 0.43 | 0.26 | 0.42 | 0.54 | 0.63 | 0.44 | 0.55 | 1.10 | 0.94 |
+| - | LPIPS | 0.02 | 0.20 | 0.02 | 0.27 | 0.03 | 0.28 | 0.04 | 0.39 | 0.05 | 0.38 | 0.10 | 0.57 | 0.08 | 0.50 | 0.26 | 0.87 |
+| - | ${L}_{2}$ | 0.05 | 0.16 | 0.07 | 0.20 | 0.10 | 0.21 | 0.11 | 0.24 | 0.13 | 0.25 | 0.22 | 0.37 | 0.23 | 0.35 | 0.30 | 0.39 |
+| + | - | 0.04 | 0.26 | 0.06 | 0.35 | 0.08 | 0.37 | 0.13 | 0.49 | 0.14 | 0.49 | 0.25 | 0.71 | 0.22 | 0.62 | 0.52 | 1.04 |
+| + | LPIPS | 0.01 | 0.23 | 0.01 | 0.32 | 0.02 | 0.34 | 0.02 | 0.45 | 0.03 | 0.46 | 0.05 | 0.66 | 0.05 | 0.58 | 0.13 | 0.99 |
+| + | ${L}_{2}$ | 0.02 | 0.15 | 0.03 | 0.17 | 0.04 | 0.20 | 0.05 | 0.26 | 0.08 | 0.29 | 0.09 | 0.38 | 0.12 | 0.39 | 0.15 | 0.56 |
+
+Table 4: I-FGSM results on NIPS2017 dataset against ResNet50 and Inception-V3 with and without variance weighting (shown as Var.) and minimization method (shown as Minim.) using LPIPS and ${L}_{2}$. Results are reported in both LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | 30% ResNet50 LPIPS | 30% ResNet50 ${L}_{2}$ | 30% Inc-V3 LPIPS | 30% Inc-V3 ${L}_{2}$ | 40% ResNet50 LPIPS | 40% ResNet50 ${L}_{2}$ | 40% Inc-V3 LPIPS | 40% Inc-V3 ${L}_{2}$ | 50% ResNet50 LPIPS | 50% ResNet50 ${L}_{2}$ | 50% Inc-V3 LPIPS | 50% Inc-V3 ${L}_{2}$ | 60% ResNet50 LPIPS | 60% ResNet50 ${L}_{2}$ | 60% Inc-V3 LPIPS | 60% Inc-V3 ${L}_{2}$ |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+| - | - | 0.12 | 0.32 | 0.23 | 0.43 | 0.17 | 0.40 | 0.37 | 0.56 | 0.27 | 0.51 | 0.63 | 0.75 | 0.46 | 0.68 | 1.19 | 1.08 |
+| - | LPIPS | 0.02 | 0.29 | 0.04 | 0.39 | 0.03 | 0.35 | 0.07 | 0.51 | 0.05 | 0.46 | 0.12 | 0.69 | 0.08 | 0.62 | 0.28 | 1.01 |
+| - | ${L}_{2}$ | 0.08 | 0.20 | 0.14 | 0.29 | 0.13 | 0.30 | 0.21 | 0.36 | 0.21 | 0.39 | 0.25 | 0.44 | 0.26 | 0.46 | 0.32 | 0.55 |
+| + | - | 0.09 | 0.42 | 0.13 | 0.52 | 0.20 | 0.50 | 0.21 | 0.67 | 0.13 | 0.64 | 0.37 | 0.90 | 0.35 | 0.87 | 0.71 | 1.29 |
+| + | LPIPS | 0.02 | 0.38 | 0.03 | 0.47 | 0.03 | 0.46 | 0.04 | 0.62 | 0.04 | 0.60 | 0.08 | 0.84 | 0.07 | 0.81 | 0.17 | 1.23 |
+| + | ${L}_{2}$ | 0.06 | 0.29 | 0.08 | 0.35 | 0.09 | 0.38 | 0.13 | 0.45 | 0.15 | 0.50 | 0.15 | 0.59 | 0.19 | 0.64 | 0.20 | 0.78 |
+
+Table 5: CW2 results on CIFAR10 dataset against ResNet50 with and without variance weighting (shown as Var.) and minimization method (shown as Minim.) using LPIPS and ${L}_{2}$. Results are reported in both LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | LPIPS | ${L}_{2}$ |
+| --- | --- | --- | --- |
+| - | - | 0.25 | 0.27 |
+| - | LPIPS | 0.15 | 0.27 |
+| - | ${L}_{2}$ | 0.25 | 0.28 |
+| + | - | 0.19 | 0.27 |
+| + | LPIPS | 0.12 | 0.26 |
+| + | ${L}_{2}$ | 0.19 | 0.28 |
+
+## Results
+
+Normalized Variance Weighting. The effect of the variance weighting method on the FGSM attack can be observed in Table 1 and Table 3 for the CIFAR10 (using ResNet50) and NIPS2017 (using ResNet50 and Inception-V3) datasets respectively. The method reduces the LPIPS distances considerably in all cases, i.e., without minimization and when used together with minimization with respect to both ${L}_{2}$ and LPIPS. The corresponding results in Table 2 and Table 4 for I-FGSM and in Table 5 and Table 6 for the C&W attack confirm that these findings hold for these attacks as well: variance weighting is effective in reducing the LPIPS distance for all attack types in question.
+
+Minimization Methods. The LPIPS-minimization method applied to the FGSM attack decreases the LPIPS distances considerably, both when used individually and when combined with variance weighting, for the CIFAR10 (using ResNet50) and NIPS2017 (using ResNet50 and Inception-V3) datasets (Table 1 and Table 3). The corresponding results in Table 2 and Table 4 for I-FGSM and in Table 5 and Table 6 for the C&W attack confirm that these findings hold for these attacks as well: LPIPS-minimization is effective in reducing the LPIPS distance for all attack types in question.
+
+Table 6: CW2 results on NIPS2017 dataset against ResNet50 and Inception-V3 with and without variance weighting (shown as Var.) and minimization method (shown as Minim.) using LPIPS and ${L}_{2}$. Results are reported in both LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | ResNet50 LPIPS | ResNet50 ${L}_{2}$ | Inc-V3 LPIPS | Inc-V3 ${L}_{2}$ |
+| --- | --- | --- | --- | --- | --- |
+| - | - | 0.25 | 0.27 | 0.33 | 0.38 |
+| - | LPIPS | 0.15 | 0.27 | 0.17 | 0.37 |
+| - | ${L}_{2}$ | 0.25 | 0.28 | 0.32 | 0.38 |
+| + | - | 0.19 | 0.27 | 0.33 | 0.45 |
+| + | LPIPS | 0.12 | 0.26 | 0.19 | 0.45 |
+| + | ${L}_{2}$ | 0.19 | 0.28 | 0.32 | 0.46 |
+
+In addition, the LPIPS-minimization method also improves the ${L}_{2}$ distance considerably, though, as expected, the ${L}_{2}$-minimization method yields the best ${L}_{2}$ distance improvements for FGSM and I-FGSM on both the CIFAR10 (using ResNet50) and NIPS2017 (using ResNet50 and Inception-V3) datasets. Since C&W already optimizes the ${L}_{2}$ distance, the improvement is relatively limited for the C&W attack on both datasets.
+
+
+
+Figure 3: Ineffective NIPS2017 samples against the normalized variance weighting method.
+
+Variance + Minimization Methods. The best results (i.e., lowest LPIPS distances) are obtained when we combine the variance weighting method with LPIPS-minimization, and the results show considerable improvement for the FGSM, I-FGSM and C&W attack types on both CIFAR10 (using ResNet50) and NIPS2017 (using both ResNet50 and Inception-V3).
+
+## Discussion
+
+The experiments show that both the variance weighting and minimization methods individually improve the perceptual quality, and that the best LPIPS results are obtained when they are integrated. However, the improvement is relatively limited for attacks that inherently produce adversarial examples with quantitatively lower perceptual distances, such as C&W. Results in Tables 1 to 6 show that the NIPS2017 results have smaller perceptual distances yet higher improvement percentages compared to the CIFAR10 results (e.g., the LPIPS distance is reduced by 10% and 25% for CIFAR10 (40% fooling rate) and NIPS2017 (70% fooling rate) respectively for I-FGSM against ResNet50).
+
+We have also investigated the variance-based box-constrained method (Croce and Hein 2019) as an alternative to variance weighting in our attack type agnostic white-box setting. While variance-based box-constrained adversarial examples can also improve the perceptual quality, the approach requires an additional coefficient parameter for each adversarial attack type, dataset and network. Even when the threshold-level parameters are optimized, in most instances we observed that variance weighted perturbations have better perceptual quality, which makes variance weighting the better choice.
+
+We have conducted our experiments based on the ${L}_{2}$ and LPIPS distance metrics. Since our normalized variance weighting method is not suited to evaluation with traditional ${L}_{p}$ norms, we consider the LPIPS perceptual distance the primary metric for its evaluation. With regard to the proposed minimization methods, LPIPS-minimization can be used in conjunction with any type of attack, while ${L}_{2}$-minimization is not suitable for all types of adversarial attacks, such as those based on shifting of pixels (e.g., (Aydin et al. 2021)). Nevertheless, we measured the distances of the ${L}_{2}$ and LPIPS minimization methods with both the ${L}_{2}$ and LPIPS metrics. Both distance metrics usually decrease with either minimization method, though, as expected, the measured metric benefits more when it matches the one used in minimization (e.g., LPIPS-minimization reduces the LPIPS distance proportionally more than the ${L}_{2}$ distance).
+
+Our empirical observations show that the variance weighting method significantly improves the perceptual quality of images that have low-variance backgrounds (e.g., sky, wall or sea), as seen in Figure 1. However, it is less effective for images with dominantly high-variance zones (e.g., the umbrella image in Figure 3) and for images with dominantly low-variance zones (e.g., the flag image in Figure 3). In the flag image, the variance of the background is very low and the high-variance region is very narrow, hence the variance weighting method cannot improve this image adequately.
+
+## Conclusion
+
+We have proposed two separate attack agnostic techniques to improve the perceptual quality of adversarial examples while preserving the fooling rate. We have shown that applying our variance weighting improves the perceptual quality of different types of adversarial attacks without any significant computational cost in the white-box setting. We have also shown that perturbations produced by different types of adversarial attacks can be minimized while preserving the fooling rate. Integrating variance weighting and minimization generates the adversarial examples with the best perceptual quality as measured by LPIPS. In the future, other attack agnostic improvements (e.g., generating adversarial attacks in the YUV color space (Aksoy and Temizel 2019)) could be combined with the two proposed methods to enhance the perceptual quality further.
+
+## References
+
+Aksoy, B.; and Temizel, A. 2019. Attack Type Agnostic Perceptual Enhancement of Adversarial Images. In International Workshop on Adversarial Machine Learning And Security (AMLAS), IEEE World Congress on Computational Intelligence (IEEE WCCI).
+
+Aydin, A.; Sen, D.; Karli, B. T.; Hanoglu, O.; and Temizel, A. 2021. Imperceptible Adversarial Examples by Spatial Chroma-Shift. In Proceedings of the 1st International Workshop on Adversarial Learning for Multimedia, ADVM '21, 8-14. New York, NY, USA: Association for Computing Machinery. ISBN 9781450386722.
+
+Carlini, N.; and Wagner, D. 2017. Towards evaluating the robustness of neural networks. In 2017 IEEE Symposium on Security and Privacy (SP), 39-57.
+
+Croce, F.; and Hein, M. 2019. Sparse and Imperceivable Adversarial Attacks. 2019 IEEE/CVF International Conference on Computer Vision (ICCV), 4723-4731.
+
+Ding, K.; Ma, K.; Wang, S.; and Simoncelli, E. 2020. Image Quality Assessment: Unifying Structure and Texture Similarity. IEEE Transactions on Pattern Analysis and Machine Intelligence, PP: 1-1.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. In Bengio, Y.; and LeCun, Y., eds., 3rd International Conference on Learning Representations, ICLR 2015, San Diego, CA, USA, May 7-9, 2015, Conference Track Proceedings.
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep residual learning for image recognition. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 770- 778.
+
+Jordan, M.; Manoj, N.; Goel, S.; and Dimakis, A. G. 2019. Quantifying Perceptual Distortion of Adversarial Examples. arXiv:1902.08265.
+
+Kingma, D. P.; and Ba, J. 2015. Adam: A method for stochastic optimization. International Conference on Learning Representations.
+
+Kurakin, A.; Goodfellow, I.; and Bengio, S. 2017. Adversarial examples in the physical world. arXiv:1607.02533.
+
+Laidlaw, C.; and Feizi, S. 2019. Functional Adversarial Attacks. Advances in Neural Information Processing Systems, 32: 10408-10418.
+
+Laidlaw, C.; Singla, S.; and Feizi, S. 2021. Perceptual Adversarial Robustness: Defense Against Unseen Threat Models. In 9th International Conference on Learning Representations, ICLR 2021, Virtual Event, Austria, May 3-7, 2021. OpenReview.net.
+
+Legge, G. E.; and Foley, J. M. 1980. Contrast masking in human vision. J. Opt. Soc. Am., 70(12): 1458-1471.
+
+Lin, W.; Dong, L.; and Xue, P. 2005. Visual distortion gauge based on discrimination of noticeable contrast changes. IEEE Transactions on Circuits and Systems for Video Technology, 15(7): 900-909.
+
+Liu, A.; Lin, W.; Paul, M.; Deng, C.; and Zhang, F. 2010. Just noticeable difference for images with decomposition model for separating edge and textured regions. IEEE Transactions on Circuits and Systems for Video Technology, 20(11): 1648-1652.
+
+Luo, B.; Liu, Y.; Wei, L.; and Xu, Q. 2018. Towards imperceptible and robust adversarial example attacks against neural networks. In Thirty-second aaai conference on artificial intelligence.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In International Conference on Learning Representations (ICLR).
+
+Moosavi-Dezfooli, S.-M.; Fawzi, A.; and Frossard, P. 2016. Deepfool: a simple and accurate method to fool deep neural networks. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2574-2582.
+
+Papernot, N.; Faghri, F.; Carlini, N.; Goodfellow, I.; Feinman, R.; Kurakin, A.; Xie, C.; Sharma, Y.; Brown, T.; Roy, A.; Matyasko, A.; Behzadan, V.; Hambardzumyan, K.; Zhang, Z.; Juang, Y.-L.; Li, Z.; Sheatsley, R.; Garg, A.; Uesato, J.; Gierke, W.; Dong, Y.; Berthelot, D.; Hendricks, P.; Rauber, J.; and Long, R. 2018. Technical Report on the CleverHans v2.1.0 Adversarial Examples Library. arXiv preprint arXiv:1610.00768.
+
+Papernot, N.; McDaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z. B.; and Swami, A. 2016. The Limitations of Deep Learning in Adversarial Settings. In 2016 IEEE European Symposium on Security and Privacy (EuroS&P), 372-387.
+
+Sharif, M.; Bauer, L.; and Reiter, M. 2018. On the Suitability of Lp-Norms for Creating and Preventing Adversarial Examples. In 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops (CVPRW), 1686-16868.
+
+Simonyan, K.; and Zisserman, A. 2015. Very Deep Convolutional Networks for Large-Scale Image Recognition. In International Conference on Learning Representations (ICLR).
+
+Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the Inception Architecture for Computer Vision. In 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2818-2826.
+
+Szegedy, C.; Zaremba, W.; Sutskever, I.; Bruna, J.; Erhan, D.; Goodfellow, I.; and Fergus, R. 2014. Intriguing properties of neural networks. In International Conference on Learning Representations.
+
+Xiao, C.; Zhu, J. Y.; Li, B.; He, W.; Liu, M.; and Song, D. 2018. Spatially transformed adversarial examples. In International Conference on Learning Representations (ICLR).
+
+Zhang, R.; Isola, P.; Efros, A.; Shechtman, E.; and Wang, O. 2018. The Unreasonable Effectiveness of Deep Features as a Perceptual Metric. In IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 586-595.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/rq2hMS4OaUX/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/rq2hMS4OaUX/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..8a90adb2b6214185b9de7a2e37465e40bae34aa6
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/rq2hMS4OaUX/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,346 @@
+§ IMPROVING PERCEPTUAL QUALITY OF ADVERSARIAL IMAGES USING PERCEPTUAL DISTANCE MINIMIZATION AND NORMALIZED VARIANCE WEIGHTING
+
+§ ABSTRACT
+
+Neural networks are known to be vulnerable to adversarial examples, which are obtained by adding intentionally crafted perturbations to original images. However, these perturbations degrade their perceptual quality and make them more difficult for humans to perceive. In this paper, we propose two separate attack agnostic methods to increase the perceptual quality while preserving the target fooling rate. The first method intensifies the perturbations in the high-variance areas of the images. This method can be used in both white-box and black-box settings for any type of adversarial example, with only the computational cost of calculating the pixel-based image variance. The second method aims to minimize the perturbations of already generated adversarial examples, independent of the attack type. In this method, the distance between benign and adversarial examples is reduced until the adversarial examples reach the decision boundaries of the true class. We show that these methods can also be used in conjunction to improve the perceptual quality of adversarial examples, and we demonstrate the quantitative improvements on the CIFAR-10 and NIPS2017 Adversarial Learning Challenge datasets.
+
+§ INTRODUCTION
+
+While Deep Neural Networks (DNNs) are being used in a variety of domains, several studies have shown their vulnerabilities. An initial study, the L-BFGS method (Szegedy et al. 2014), revealed that neural networks are not robust to adversarial attacks specifically produced to fool the networks. After the discovery of adversarial attacks, several different methods have been proposed, such as the Fast Gradient Sign Method (FGSM) (Goodfellow, Shlens, and Szegedy 2015), Projected Gradient Descent (PGD) (Madry et al. 2018), DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016), the Jacobian Saliency Map Attack (JSMA) (Papernot et al. 2016), Spatially Transformed Adversarial Examples (stAdv) (Xiao et al. 2018) and the Carlini & Wagner (C&W) attack (Carlini and Wagner 2017).
+
+As adversarial examples can fool the networks, they can be used to distinguish humans from algorithms: while humans can still perceive the content of these images, algorithms are deceived by the adversarial input. For such a system to be effective, the perturbation added to the image should be reasonably small and, while still misleading the algorithm, should not distract human vision. The Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is one of the most common examples where human users are distinguished from computer algorithms (Aksoy and Temizel 2019). The main motivation of this study is to keep adversarial attacks successful while reducing the perturbations that are distracting to humans. Therefore, we propose two separate methods to improve the perceptual quality while keeping the attacks successful. The first method intensifies the perturbation in high-variance zones and suppresses it in low-variance zones using the variance map of the input image, for any type of attack. In effect, it disguises the adversarial noise in high-variance areas and limits the high-frequency noise added to low-variance areas, where it would be more distracting. The second method minimizes the perturbation until the adversarial example reaches the decision boundary. While variance weighting is applied during the attack, the minimization method can be considered post-processing applied after acquiring the adversarial example with any type of adversarial attack. As seen in Figure 1, localized and minimized perturbations improve the perceptual quality while keeping the fooling rate stable.
+
+
+Figure 1: FGSM $\left( {\epsilon = 8/{255}}\right)$ results on NIPS2017 against ResNet50 with both proposed methods: variance weighting and minimization (shown separately for minimization with respect to ${L}_{2}$ and LPIPS) and combinations of them.
+
+Copyright (C) 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+## Related Work
+
+### Adversarial Attacks
+
+L-BFGS was the first method for generating adversarial examples, using a box-constrained optimization method (Szegedy et al. 2014); however, it is computationally very costly. FGSM (Goodfellow, Shlens, and Szegedy 2015) is an efficient gradient-based attack algorithm, which computes the gradient only once and adds perturbation in the gradient-ascent direction of the loss function. The Iterative Fast Gradient Sign Method (I-FGSM) (Kurakin, Goodfellow, and Bengio 2017) extends FGSM by attacking iteratively with a small step size and recalculating the gradient at each step. The C&W attack (Carlini and Wagner 2017) minimizes the ${L}_{2}$ norm with an improved optimization method. DeepFool (Moosavi-Dezfooli, Fawzi, and Frossard 2016) efficiently computes the smallest perturbation according to the closest decision boundary. The Jacobian-based Saliency Map Attack (JSMA) (Papernot et al. 2016) generates sparse perturbations by building a saliency map that ranks the contribution of each input variable to the adversarial objective; a perturbation is then selected from the saliency map at each iteration.
+
+### Perceptual Metrics
+
+All adversarial attack methods essentially aim to fool the network while minimizing the dissimilarity between the benign and adversarial examples (i.e., minimizing the added perturbation). While the similarity metrics vary according to the attack type, the most widely used distance metrics are the ${L}_{p}$ norms $\left( {p = 0,1,2,\infty }\right)$ . In particular, FGSM is an ${L}_{\infty }$ -, JSMA is an ${L}_{0}$ -, and C&W is an ${L}_{2}$ -norm based attack. Even though ${L}_{p}$ norms are convenient and commonly used, several studies state that they do not accurately reflect human perception (Sharif, Bauer, and Reiter 2018; Jordan et al. 2019). Besides, there are some attack types, such as (Jordan et al. 2019; Laidlaw and Feizi 2019; Laidlaw, Singla, and Feizi 2021; Aydin et al. 2021), for which ${L}_{p}$ norms are not suitable to evaluate the attack success. Thus, these studies employ more recent perceptual metrics such as the Learned Perceptual Image Patch Similarity (LPIPS) metric (Zhang et al. 2018) or the Deep Image Structure and Texture Similarity (DISTS) index (Ding et al. 2020). Both of these metrics use an additional neural network to measure the distance. LPIPS is calibrated with human perception and measures the Euclidean distance of deep representations. Likewise, DISTS optimizes for human perception using a combination of deep image structure and texture similarity.
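The ${L}_{p}$ distances above can be computed directly from the perturbation; a minimal NumPy sketch (our illustration, not code from the paper):

```python
import numpy as np

def lp_distances(benign, adv):
    """Compute the L0, L2 and L-infinity distances between a benign
    image and its adversarial counterpart (both NumPy arrays)."""
    delta = (adv - benign).ravel()
    return {
        "l0": int(np.count_nonzero(delta)),  # number of changed entries
        "l2": float(np.linalg.norm(delta)),  # Euclidean magnitude
        "linf": float(np.abs(delta).max()),  # largest single change
    }
```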
+
+### Variance Map on Adversarial Attacks
+
+Human perception is affected more by perturbations in low-variance areas than in high-variance areas, and this property is exploited in various image processing applications (Legge and Foley 1980; Lin, Dong, and Xue 2005; Liu et al. 2010). Accordingly, the variance map has been used in previous studies (Luo et al. 2018; Croce and Hein 2019) to generate adversarial examples. In (Croce and Hein 2019), the variance map is used to produce variance-based componentwise box constraints for generating sparse adversarial examples. In another study, the variance map is used to select high-variance pixels (Luo et al. 2018). Using only ${L}_{p}$ norms for these variance-based sparse attacks does not accurately reflect the perceptual quality (Luo et al. 2018); thus, variance-based sparse attacks either use mean and median values of pixels or introduce a new distance metric that is better suited for the evaluation of the proposed attacks.
+
+## Methodology
+
+### Normalized Variance Weighting
+
+In our study, we use the variance map to intensify the perturbations in high-variance zones in an attack-agnostic manner, instead of selecting high-variance pixels or variance boundaries. We adopt the method in (Croce and Hein 2019) to produce the variance map of input images. In this method, for each color channel, the standard deviation along each axis is calculated over the main pixel and its 2 neighbour pixels $\left( {\sigma }_{ij}^{\left( x\right) }\right.$ and ${\sigma }_{ij}^{\left( y\right) }$ for the $x$ and $y$ axes, respectively), and the square root of the minimum of the two standard deviations is taken to obtain the variance map ${\sigma }_{ij}$ (Equation 1). The variance map is then normalized to obtain the normalized variance map ${V}_{i,j}$ (Equation 2).
+
+$$
+{\sigma }_{ij} = \sqrt{\min \left\{ {{\sigma }_{ij}^{\left( x\right) },{\sigma }_{ij}^{\left( y\right) }}\right\} } \tag{1}
+$$
+
+$$
+{V}_{i,j} = \frac{{\sigma }_{i,j}}{\sqrt{\mathop{\sum }\limits_{h}^{H}\mathop{\sum }\limits_{w}^{W}{\sigma }_{h,w}^{2}}} \tag{2}
+$$
+
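Equations 1 and 2 can be sketched in NumPy as follows, assuming a three-pixel window per axis (the pixel and its two axis neighbours, our reading of "2 neighbour pixels and main pixel"); the exact window in (Croce and Hein 2019) may differ:

```python
import numpy as np

def variance_map(x):
    """Eq. 1 sketch: sigma_ij = sqrt(min(sigma_x, sigma_y)) per channel.

    x: image array of shape (H, W, C). Each axis std is taken over the
    pixel and its two neighbours along that axis (an assumed window).
    """
    xp = np.pad(x, ((1, 1), (1, 1), (0, 0)), mode="edge")
    # Stack the 3-pixel windows along a new axis, then std per position.
    win_x = np.stack([xp[1:-1, :-2], xp[1:-1, 1:-1], xp[1:-1, 2:]], axis=0)
    win_y = np.stack([xp[:-2, 1:-1], xp[1:-1, 1:-1], xp[2:, 1:-1]], axis=0)
    sigma = np.sqrt(np.minimum(win_x.std(axis=0), win_y.std(axis=0)))
    return sigma

def l2_normalize(sigma):
    # Eq. 2: divide by the L2 norm over all entries of the map.
    return sigma / np.linalg.norm(sigma)
```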
+Since our method does not involve selecting pixels or generating variance box constraints, it does not require any additional threshold or coefficient variable; the normalizing and weighting procedures remove the need for one. As seen in Algorithm 1, the proposed method generates the variance map of the input image only once (Equation 1), normalizes it using the ${L}_{2}$ norm (Equation 2), and then weights the perturbation with the normalized variance map at each iteration (if the adversarial attack is iterative). The method can be adapted to both white-box and black-box settings and does not require any optimization or additional gradient-based steps. Therefore, it adds no computational cost beyond the one-time calculation of the variance map.
+
+
+Figure 2: Visual representation of the integration of the proposed methods.
+
+### Minimization Method
+
+The proposed minimization method is applied after generating the initial adversarial example and aims to reduce the distance between the benign and adversarial examples using an optimizer. The optimizer minimizes the distance until the adversarial example reaches the decision boundary of the true class or the maximum iteration number is reached (Algorithm 2). We apply our minimization technique with respect to two different distance metrics: the ${L}_{2}$ norm and LPIPS (note that some attacks are not suitable for the ${L}_{2}$ distance metric (Aydin et al. 2021)). As LPIPS measures the perceptual distance using an additional neural network (i.e., VGG16 (Simonyan and Zisserman 2015)), it has a higher processing time and a higher number of parameters compared to ${L}_{2}$ -norm minimization. In (Aksoy and Temizel 2019), the attack strength is iteratively adjusted, in an attack-agnostic manner, to obtain the minimal perturbation needed after the generation of the adversarial example; our proposed method improves on this by directly optimizing the minimization of the perturbation.
+
+### Normalized Variance Weighting + Minimization
+
+The normalized variance weighting method is applied during the adversarial attack, while the minimization method is applied after the generation of the adversarial example, so the two methods can be integrated and used together. The complete pipeline is illustrated in Figure 2. We first generate the adversarial example with variance weighting applied, and then apply the minimization method as post-processing to obtain the improved adversarial example.
+
+Algorithm 1: Normalized Variance Weighting
+
+```
+Input: x (original image), Adv (one iteration of an adversarial attack)
+Parameter: i_max (maximum number of attack iterations)
+Output: y (adversarial example)
+
+i = 0
+v = VarianceMap(x)
+v = L2Normalize(v)
+y = x
+while i < i_max do
+    y = Adv(y)
+    p = (y - x) * v
+    y = x + p
+    i = i + 1
+end while
+return y
+```
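A runnable sketch of Algorithm 1, with an I-FGSM-style sign step standing in for the generic attack iteration `Adv` (the linear `grad_fn` and all parameter values are illustrative, not from the paper):

```python
import numpy as np

def variance_weighted_iterations(x, grad_fn, v, eps=0.1, i_max=5):
    """Algorithm 1 sketch: after each attack step, re-weight the total
    perturbation by the normalized variance map v, so high-variance
    positions keep more noise and low-variance positions are suppressed."""
    y = x.copy()
    for _ in range(i_max):
        y = np.clip(y + eps * np.sign(grad_fn(y)), 0.0, 1.0)  # Adv(y)
        y = x + (y - x) * v                                   # weighting
    return np.clip(y, 0.0, 1.0)
```

With a zero entry in `v`, the corresponding pixel is never perturbed; intermediate entries scale the perturbation down proportionally.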
+
+## Experiments
+
+Datasets. We used the CIFAR-10 and NIPS2017 Adversarial Learning Challenge datasets in the experiments. The CIFAR-10 test set contains 10000 images with ${32} \times {32}$ resolution. We conducted our experiments on a subset of the CIFAR-10 test set with 1000 images (100 random images from each category). The NIPS2017 dataset is a subset of the ImageNet dataset and contains 1000 images (one image from each category) with ${299} \times {299}$ resolution.
+
+Attack Types. We tested the proposed methods using 3 different untargeted attack types: a single-step gradient-based attack (FGSM) (Goodfellow, Shlens, and Szegedy 2015), an iterative gradient-based attack (I-FGSM) (Kurakin, Goodfellow, and Bengio 2017), and an optimization-based attack (C&W) (Carlini and Wagner 2017), on the CIFAR10 and NIPS2017 datasets. We used ResNet50 (He et al. 2016) and Inception-V3 (Szegedy et al. 2016) for the NIPS2017 dataset, and only ResNet50 for the CIFAR10 dataset. We used the CleverHans (Papernot et al. 2018) implementations of the default attacks and integrated the proposed methods into them.
+
+Table 1: FGSM results on CIFAR10 dataset against ResNet50 with and without variance weighting (shown as Var.) and minimization method (shown as Minim.) using LPIPS and ${L}_{2}$ . Results are reported in both LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | 30% LPIPS | 30% ${L}_{2}$ | 40% LPIPS | 40% ${L}_{2}$ | 50% LPIPS | 50% ${L}_{2}$ | 60% LPIPS | 60% ${L}_{2}$ |
+|---|---|---|---|---|---|---|---|---|---|
+| - | - | 0.19 | 0.13 | 0.93 | 0.28 | 3.19 | 0.58 | 7.42 | 1.14 |
+| - | LPIPS | 0.07 | 0.13 | 0.59 | 0.27 | 2.58 | 0.57 | 6.46 | 1.13 |
+| - | ${L}_{2}$ | 0.10 | 0.09 | 0.75 | 0.22 | 2.93 | 0.52 | 7.06 | 1.09 |
+| + | - | 0.16 | 0.14 | 0.54 | 0.26 | 2.36 | 0.54 | 6.85 | 1.18 |
+| + | LPIPS | 0.06 | 0.13 | 0.29 | 0.25 | 1.82 | 0.54 | 5.96 | 1.17 |
+| + | ${L}_{2}$ | 0.09 | 0.10 | 0.41 | 0.22 | 2.14 | 0.50 | 6.54 | 1.13 |
+
+Table 2: I-FGSM results on CIFAR10 dataset against ResNet50 with and without variance weighting (shown as Var.) and minimization method (shown as Minim.) using LPIPS and ${L}_{2}$ . Results are reported in both LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | 30% LPIPS | 30% ${L}_{2}$ | 40% LPIPS | 40% ${L}_{2}$ | 50% LPIPS | 50% ${L}_{2}$ | 60% LPIPS | 60% ${L}_{2}$ |
+|---|---|---|---|---|---|---|---|---|---|
+| - | - | 0.62 | 0.26 | 0.97 | 0.33 | 1.52 | 0.41 | 2.71 | 0.57 |
+| - | LPIPS | 0.28 | 0.25 | 0.50 | 0.32 | 0.86 | 0.40 | 1.74 | 0.56 |
+| - | ${L}_{2}$ | 0.43 | 0.21 | 0.73 | 0.28 | 1.22 | 0.37 | 2.32 | 0.52 |
+| + | - | 0.57 | 0.27 | 0.87 | 0.34 | 1.37 | 0.43 | 2.51 | 0.60 |
+| + | LPIPS | 0.26 | 0.26 | 0.43 | 0.33 | 0.76 | 0.42 | 1.58 | 0.59 |
+| + | ${L}_{2}$ | 0.41 | 0.23 | 0.66 | 0.30 | 1.11 | 0.39 | 2.16 | 0.56 |
+
+Algorithm 2: Minimization Method
+
+```
+Input: x (original image), y (adversarial example)
+Parameter: lr (learning rate), i_max (maximum iteration)
+Output: y_best (improved adversarial example)
+
+i = 0
+y_best = y
+y_opt = y
+while i < i_max do
+    if class(y_opt) == class(x) then    // y_opt is no longer adversarial
+        return y_best
+    else
+        y_best = y_opt
+    end if
+    y_opt = MinimizeDIST(y_opt, x, lr)
+    i = i + 1
+end while
+return y_best
+```
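A minimal executable sketch of this loop, where plain gradient descent on the squared ${L}_{2}$ distance stands in for the paper's Adam-based optimizer and `classify` is a hypothetical hard-label classifier:

```python
import numpy as np

def minimize_l2(x, y, classify, lr=0.1, i_max=10):
    """Shrink the perturbation of adversarial example y toward the
    original x while it stays adversarial; keep the last adversarial
    iterate and stop once the example crosses back into the true class."""
    true_label = classify(x)
    y_best = y.copy()
    y_opt = y.copy()
    for _ in range(i_max):
        if classify(y_opt) == true_label:       # crossed the boundary: stop
            return y_best
        y_best = y_opt.copy()                   # still adversarial: keep it
        y_opt = y_opt - lr * 2.0 * (y_opt - x)  # gradient step on ||y - x||^2
    return y_best
```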
+
+### Experimental Settings for Normalized Variance Weighting
+
+For the variance map, we used 2 neighbour pixels and the main pixel for every color channel, similar to (Croce and Hein 2019). We observed that the variance weighting method considerably decreases the fooling rate when the attack strength is fixed. Thus, to compare on fair ground, we fixed the fooling rate and let $\epsilon$ (for the FGSM and I-FGSM attacks) or the initial cost (for the C&W attack) vary, which allowed reaching the target fooling rate within a $\pm {0.5}\%$ error tolerance. We targeted 4 different fooling rates for FGSM (${30}\% ,{40}\% ,{50}\% ,{60}\%$) and I-FGSM (${60}\% ,{70}\% ,{80}\% ,{90}\%$) on both datasets, and a single fooling rate for the C&W attack on each dataset: ${95}\%$ on CIFAR10 and 100% on NIPS2017 (for both ResNet50 and Inception-V3). Since the variance map is ${L}_{2}$ -normalized after it is produced, measuring ${L}_{p}$ norms would be misleading for the variance weighting method; therefore, we mainly used the LPIPS perceptual distance metric, which is calibrated with human vision, for its evaluation.
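The paper does not specify how the attack strength is calibrated to hit the target fooling rate; one plausible realization, assuming the fooling rate is monotonically non-decreasing in the strength, is a simple bisection:

```python
def calibrate_strength(fooling_rate, target, tol=0.005, lo=0.0, hi=1.0):
    """Binary-search the attack strength (e.g., epsilon) so the measured
    fooling rate hits target within +/- tol. fooling_rate is a callable
    mapping strength -> rate; monotonicity in strength is assumed."""
    for _ in range(50):
        mid = (lo + hi) / 2.0
        rate = fooling_rate(mid)
        if abs(rate - target) <= tol:
            return mid
        if rate < target:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0
```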
+
+### Experimental Settings for Minimization Method
+
+For the proposed minimization method, we used Adam (Kingma and Ba 2015) as the optimizer and set the maximum iteration number to 10. We set the learning rate to 0.0001 for the CIFAR10 dataset for both minimization methods. On NIPS2017 (for both ResNet50 and Inception-V3), we set the learning rate to 0.0001 for ${L}_{2}$ -minimization and 0.00001 for LPIPS-minimization, since the two did not converge with the same learning rate.
+
+Table 3: FGSM results on NIPS2017 dataset against ResNet50 and Inception-V3 with and without variance weighting (shown as Var.) and minimization method (shown as Minim.) using LPIPS and ${L}_{2}$ . Results are reported in both LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | 30% ResNet50 LPIPS | 30% ResNet50 ${L}_{2}$ | 30% Inc-V3 LPIPS | 30% Inc-V3 ${L}_{2}$ | 40% ResNet50 LPIPS | 40% ResNet50 ${L}_{2}$ | 40% Inc-V3 LPIPS | 40% Inc-V3 ${L}_{2}$ | 50% ResNet50 LPIPS | 50% ResNet50 ${L}_{2}$ | 50% Inc-V3 LPIPS | 50% Inc-V3 ${L}_{2}$ | 60% ResNet50 LPIPS | 60% ResNet50 ${L}_{2}$ | 60% Inc-V3 LPIPS | 60% Inc-V3 ${L}_{2}$ |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| - | - | 0.07 | 0.22 | 0.12 | 0.30 | 0.14 | 0.32 | 0.26 | 0.43 | 0.26 | 0.42 | 0.54 | 0.63 | 0.44 | 0.55 | 1.10 | 0.94 |
+| - | LPIPS | 0.02 | 0.20 | 0.02 | 0.27 | 0.03 | 0.28 | 0.04 | 0.39 | 0.05 | 0.38 | 0.10 | 0.57 | 0.08 | 0.50 | 0.26 | 0.87 |
+| - | ${L}_{2}$ | 0.05 | 0.16 | 0.07 | 0.20 | 0.10 | 0.21 | 0.11 | 0.24 | 0.13 | 0.25 | 0.22 | 0.37 | 0.23 | 0.35 | 0.30 | 0.39 |
+| + | - | 0.04 | 0.26 | 0.06 | 0.35 | 0.08 | 0.37 | 0.13 | 0.49 | 0.14 | 0.49 | 0.25 | 0.71 | 0.22 | 0.62 | 0.52 | 1.04 |
+| + | LPIPS | 0.01 | 0.23 | 0.01 | 0.32 | 0.02 | 0.34 | 0.02 | 0.45 | 0.03 | 0.46 | 0.05 | 0.66 | 0.05 | 0.58 | 0.13 | 0.99 |
+| + | ${L}_{2}$ | 0.02 | 0.15 | 0.03 | 0.17 | 0.04 | 0.20 | 0.05 | 0.26 | 0.08 | 0.29 | 0.09 | 0.38 | 0.12 | 0.39 | 0.15 | 0.56 |
+
+Table 4: I-FGSM Results on NIPS2017 dataset against ResNet50 and Inception-V3 with and without variance weighting (shown as Var.) and minimization method (shown as Minim.) using LPIPS and ${L}_{2}$ . Results are reported in both LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | 30% ResNet50 LPIPS | 30% ResNet50 ${L}_{2}$ | 30% Inc-V3 LPIPS | 30% Inc-V3 ${L}_{2}$ | 40% ResNet50 LPIPS | 40% ResNet50 ${L}_{2}$ | 40% Inc-V3 LPIPS | 40% Inc-V3 ${L}_{2}$ | 50% ResNet50 LPIPS | 50% ResNet50 ${L}_{2}$ | 50% Inc-V3 LPIPS | 50% Inc-V3 ${L}_{2}$ | 60% ResNet50 LPIPS | 60% ResNet50 ${L}_{2}$ | 60% Inc-V3 LPIPS | 60% Inc-V3 ${L}_{2}$ |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| - | - | 0.12 | 0.32 | 0.23 | 0.43 | 0.17 | 0.40 | 0.37 | 0.56 | 0.27 | 0.51 | 0.63 | 0.75 | 0.46 | 0.68 | 1.19 | 1.08 |
+| - | LPIPS | 0.02 | 0.29 | 0.04 | 0.39 | 0.03 | 0.35 | 0.07 | 0.51 | 0.05 | 0.46 | 0.12 | 0.69 | 0.08 | 0.62 | 0.28 | 1.01 |
+| - | ${L}_{2}$ | 0.08 | 0.20 | 0.14 | 0.29 | 0.13 | 0.30 | 0.21 | 0.36 | 0.21 | 0.39 | 0.25 | 0.44 | 0.26 | 0.46 | 0.32 | 0.55 |
+| + | - | 0.09 | 0.42 | 0.13 | 0.52 | 0.50 | 0.20 | 0.21 | 0.67 | 0.13 | 0.64 | 0.37 | 0.90 | 0.35 | 0.87 | 0.71 | 1.29 |
+| + | LPIPS | 0.02 | 0.38 | 0.03 | 0.47 | 0.03 | 0.46 | 0.04 | 0.62 | 0.04 | 0.60 | 0.08 | 0.84 | 0.07 | 0.81 | 0.17 | 1.23 |
+| + | ${L}_{2}$ | 0.06 | 0.29 | 0.08 | 0.35 | 0.09 | 0.38 | 0.13 | 0.45 | 0.15 | 0.50 | 0.15 | 0.59 | 0.19 | 0.64 | 0.20 | 0.78 |
+
+Table 5: C&W results on the CIFAR10 dataset against ResNet50 with and without variance weighting (shown as Var.) and the minimization method (shown as Minim.) using LPIPS and ${L}_{2}$ . Results are reported in both the LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | LPIPS | ${L}_{2}$ |
+|---|---|---|---|
+| - | - | 0.25 | 0.27 |
+| - | LPIPS | 0.15 | 0.27 |
+| - | ${L}_{2}$ | 0.25 | 0.28 |
+| + | - | 0.19 | 0.27 |
+| + | LPIPS | 0.12 | 0.26 |
+| + | ${L}_{2}$ | 0.19 | 0.28 |
+
+## Results
+
+Normalized Variance Weighting. The effect of the variance weighting method on the FGSM attack can be observed in Table 1 and Table 3 for the CIFAR10 (using ResNet50) and NIPS2017 (using ResNet50 and Inception-V3) datasets, respectively. The method reduces the LPIPS distances considerably in all cases, i.e., without minimization and when used together with minimization with respect to both ${L}_{2}$ and LPIPS. The corresponding results in Table 2 and Table 4 for I-FGSM and in Table 5 and Table 6 for the C&W attack confirm that these findings hold for these attacks as well: variance weighting is effective in reducing the LPIPS distance for all attack types in question.
+
+Minimization Methods. The LPIPS-minimization method applied on the FGSM attack decreases the LPIPS distances considerably, both when used individually and when combined with variance weighting, for both the CIFAR10 (using ResNet50) and NIPS2017 (using ResNet50 and Inception-V3) datasets (Table 1 and Table 3). The corresponding results in Table 2 and Table 4 for I-FGSM and in Table 5 and Table 6 for the C&W attack confirm that these findings hold for these attacks as well: LPIPS-minimization is effective in reducing the LPIPS distance for all attack types in question.
+
+Table 6: C&W results on the NIPS2017 dataset against ResNet50 and Inception-V3 with and without variance weighting (shown as Var.) and the minimization method (shown as Minim.) using LPIPS and ${L}_{2}$ . Results are reported in both the LPIPS $\left( {\times {10}^{2}}\right)$ and ${L}_{2}$ metrics.
+
+| Var. | Minim. | ResNet50 LPIPS | ResNet50 ${L}_{2}$ | Inc-V3 LPIPS | Inc-V3 ${L}_{2}$ |
+|---|---|---|---|---|---|
+| - | - | 0.25 | 0.27 | 0.33 | 0.38 |
+| - | LPIPS | 0.15 | 0.27 | 0.17 | 0.37 |
+| - | ${L}_{2}$ | 0.25 | 0.28 | 0.32 | 0.38 |
+| + | - | 0.19 | 0.27 | 0.33 | 0.45 |
+| + | LPIPS | 0.12 | 0.26 | 0.19 | 0.45 |
+| + | ${L}_{2}$ | 0.19 | 0.28 | 0.32 | 0.46 |
+
+In addition to these results, the LPIPS-minimization method also improves the ${L}_{2}$ distance considerably. Though, as expected, the ${L}_{2}$ -minimization method yields the best ${L}_{2}$ distance improvements for FGSM and I-FGSM on both the CIFAR10 (using ResNet50) and NIPS2017 (using ResNet50 and Inception-V3) datasets. Since C&W already optimizes the ${L}_{2}$ distance, the improvement for the C&W attack is relatively limited on both datasets.
+
+
+Figure 3: Ineffective NIPS2017 samples against the normalized variance weighting method.
+
+Variance + Minimization Methods. The best results (i.e., the lowest LPIPS distances) are obtained when we combine the variance weighting method with LPIPS-minimization; the results show considerable improvement for the FGSM, I-FGSM, and C&W attack types on both CIFAR10 (using ResNet50) and NIPS2017 (using both ResNet50 and Inception-V3).
+
+## Discussion
+
+The experiments show that both the variance weighting and minimization methods individually improve the perceptual quality, and the best LPIPS results are obtained when they are integrated. However, the improvement is relatively limited for attacks that inherently produce adversarial examples with quantitatively lower perceptual distances, such as C&W. The results in Tables 1 to 6 show that the NIPS2017 results have smaller perceptual distances, yet the improvement percentages are higher than for the CIFAR10 dataset (e.g., for I-FGSM against ResNet50, the LPIPS distance is reduced by 10% for CIFAR10 at a 40% fooling rate and by 25% for NIPS2017 at a 70% fooling rate).
+
+We also investigated the variance-based box-constrained method (Croce and Hein 2019) as an alternative to variance weighting in our attack-agnostic white-box setting. While variance-based box-constrained adversarial examples can also improve the perceptual quality, the method requires an additional coefficient parameter for each adversarial attack type, dataset, and network. Even when the threshold-level parameters are optimized, in most instances we observed that variance-weighted perturbations have better perceptual quality, which makes variance weighting the better choice.
+
+We conducted our experiments based on the ${L}_{2}$ and LPIPS distance metrics. Since the normalized variance weighting method is not suited to evaluation with traditional ${L}_{p}$ norms, we consider the LPIPS perceptual distance the primary metric for its evaluation. With regard to the proposed minimization methods, LPIPS-minimization can be used in conjunction with any type of attack, while ${L}_{2}$ -minimization is not suitable for all types of adversarial attacks, such as those based on shifting pixels (e.g., (Aydin et al. 2021)). Nevertheless, we measured the results of the ${L}_{2}$ and LPIPS minimization methods with both the ${L}_{2}$ and LPIPS metrics. Both distance metrics usually decrease with either minimization method; though, as expected, a metric benefits more when it is the same one used in the minimization (e.g., LPIPS-minimization reduces the LPIPS distance proportionally more than the ${L}_{2}$ distance).
+
+Our empirical observations show that the variance weighting method significantly improves the perceptual quality for images with low-variance backgrounds (e.g., sky, wall, or sea), as can be seen in Figure 1. However, it is less effective for images with dominantly high-variance zones (e.g., the umbrella image in Figure 3) and for images with dominantly low-variance zones (e.g., the flag image in Figure 3). In the flag image, the variance of the background is very low and the high-variance region is very narrow; hence, the variance weighting method cannot improve the flag image adequately.
+
+## Conclusion
+
+We have proposed two separate attack-agnostic techniques to improve the perceptual quality of adversarial examples while preserving the fooling rate. We have shown that variance weighting improves the perceptual quality of different types of adversarial attacks without any significant computational cost in the white-box setting. We have also shown that perturbations produced by different types of adversarial attacks can be minimized while preserving the fooling rate. Integrating variance weighting and minimization generates adversarial examples with the best perceptual quality as measured by LPIPS. In the future, other attack-agnostic improvements (e.g., generating adversarial attacks in the YUV color space (Aksoy and Temizel 2019)) could be combined with the two proposed methods to enhance perceptual quality further.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/u_lOumlm7mu/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/u_lOumlm7mu/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..17d1622160579dc7ac20023246923ff11bae6585
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/u_lOumlm7mu/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,697 @@
+# Robust No-Regret Learning in Min-Max Stackelberg Games
+
+Anonymous Author(s)
+
+## Abstract
+
+The behavior of no-regret learning algorithms is well understood in two-player min-max (i.e., zero-sum) games. In this paper, we investigate the behavior of no-regret learning in min-max games with dependent strategy sets, where the strategy of the first player constrains the behavior of the second. Such games are best understood as sequential, i.e., min-max Stackelberg, games. We consider two settings, one in which only the first player chooses their actions using a no-regret algorithm while the second player best responds, and one in which both players use no-regret algorithms. For the former case, we show that no-regret dynamics converge to a Stackelberg equilibrium. For the latter case, we introduce a new type of regret, which we call Lagrangian regret, and show that if both players minimize their Lagrangian regrets, then play converges to a Stackelberg equilibrium. We then observe that online mirror descent (OMD) dynamics in these two settings correspond respectively to a known nested (i.e., sequential) gradient descent-ascent (GDA) algorithm and a new simultaneous GDA-like algorithm, thereby establishing convergence of these algorithms to Stackelberg equilibrium. Finally, we analyze the robustness of OMD dynamics to perturbations by investigating dynamic min-max Stackelberg games. We prove that OMD dynamics are robust for a large class of dynamic min-max games with independent strategy sets. In the dependent case, we demonstrate the robustness of OMD dynamics experimentally by simulating them in dynamic Fisher markets, a canonical example of a min-max Stackelberg game with dependent strategy sets.
+
+## 1 Introduction
+
+Min-max optimization problems (i.e., zero-sum games) have been attracting a great deal of attention recently because of their applicability to problems in fairness in machine learning (Dai et al. 2019; Edwards and Storkey 2016; Madras et al. 2018; Sattigeri et al. 2018), generative adversarial imitation learning (Cai et al. 2019; Hamedani et al. 2018), reinforcement learning (Dai et al. 2018), generative adversarial learning (Sanjabi et al. 2018a), adversarial learning (Sinha et al. 2020), and statistical learning, e.g., learning parameters of exponential families (Dai et al. 2019). These problems are often modelled as min-max games, i.e., constrained min-max optimization problems of the form: $\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}f\left( {\mathbf{x},\mathbf{y}}\right)$ , where $f$ : $X \times Y \rightarrow \mathbb{R}$ is continuous, and $X \subset {\mathbb{R}}^{n}$ and $Y \subset$ ${\mathbb{R}}^{m}$ are non-empty and compact. In convex-concave min-max games, where $f$ is convex in $\mathbf{x}$ and concave in $\mathbf{y}$ , von Neumann and Morgenstern's seminal minimax theorem holds (Neumann 1928): i.e., $\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}f\left( {\mathbf{x},\mathbf{y}}\right) =$ $\mathop{\max }\limits_{{\mathbf{y} \in Y}}\mathop{\min }\limits_{{\mathbf{x} \in X}}f\left( {\mathbf{x},\mathbf{y}}\right)$ , guaranteeing the existence of a saddle point, i.e., a point that is simultaneously a minimum of $f$ in the $\mathbf{x}$ -direction and a maximum of $f$ in the $y$ -direction. This theorem allows us to interpret the optimization problem as a simultaneous-move, zero-sum game, where ${\mathbf{y}}^{ * }$ (resp. ${\mathbf{x}}^{ * }$ ) is a best-response of the outer (resp. inner) player to the other’s action ${\mathbf{x}}^{ * }$ (resp. ${\mathbf{y}}^{ * }$ ), in which case a saddle point is also called a minimax point or a Nash equilibrium.
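As a small numerical illustration of the minimax theorem (the standard matching-pennies game, not an example from this paper), the min-max and max-min values coincide:

```python
import numpy as np

# Matching pennies: f(x, y) = x^T A y over mixed strategies on 2 actions.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
grid = np.linspace(0.0, 1.0, 101)  # grid over the outer mixing weight

def outer_value(p):
    """max over the inner player's pure responses, given row mix (p, 1-p)."""
    x = np.array([p, 1.0 - p])
    return (x @ A).max()

def inner_value(q):
    """min over the outer player's pure responses, given column mix (q, 1-q)."""
    y = np.array([q, 1.0 - q])
    return (A @ y).min()

min_max = min(outer_value(p) for p in grid)  # min_x max_y f
max_min = max(inner_value(q) for q in grid)  # max_y min_x f
# Both equal 0, attained by the uniform mixed strategy (0.5, 0.5).
```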
+
+In this paper, we study min-max Stackelberg games (Goktas and Greenwald 2021), i.e., constrained min-max optimization problems with dependent feasible sets of the form: $\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right)$ , where $f : X \times$ $Y \rightarrow \mathbb{R}$ is continuous, $X \subset {\mathbb{R}}^{n}$ and $Y \subset {\mathbb{R}}^{m}$ are non-empty and compact, and $\mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) = {\left( {g}_{1}\left( \mathbf{x},\mathbf{y}\right) ,\ldots ,{g}_{K}\left( \mathbf{x},\mathbf{y}\right) \right) }^{T}$ with ${g}_{k} : X \times Y \rightarrow \mathbb{R}$ . Goktas and Greenwald observe that the minimax theorem does not hold in these games (2021). As a result, such games are more appropriately viewed as sequential, i.e., Stackelberg, games for which the relevant solution concept is the Stackelberg equilibrium, ${}^{1}$ where the outer player chooses $\widehat{\mathbf{x}} \in X$ before the inner player responds with their choice of $\mathbf{y}\left( \widehat{\mathbf{x}}\right) \in Y$ s.t. $\mathbf{g}\left( {\widehat{\mathbf{x}},\mathbf{y}\left( \widehat{\mathbf{x}}\right) }\right) \geq \mathbf{0}$ . In these games, the outer player seeks to minimize their loss, assuming the inner player chooses a feasible best response: i.e., the outer player's objective, also known as their value function in the economics literature (Milgrom and Segal 2002), is defined as ${V}_{X}\left( \mathbf{x}\right) = \mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right)$ . The inner player's value function, ${V}_{Y} : Y \rightarrow \mathbb{R}$ , which they seek to maximize, is simply the objective function given the outer player's action $\widehat{\mathbf{x}}$ : i.e., ${V}_{Y}\left( {\mathbf{y};\widehat{\mathbf{x}}}\right) = f\left( {\widehat{\mathbf{x}},\mathbf{y}}\right)$ .
+
+Goktas and Greenwald (2021) proposed a polynomial-time first-order method by which to compute Stackelberg equilibria, which they called nested gradient descent ascent (GDA). This method can be understood as an algorithm a third party might run to find an equilibrium, or as a game dynamic that the players might employ if their long-run goal were to reach an equilibrium. Rather than assume that players are jointly working towards the goal of reaching an equilibrium, it is often more reasonable to assume that they play so as to not regret their decisions: i.e., that they employ a no-regret learning algorithm, which minimizes their loss in hindsight. It is well known that when both players in a min-max game are no-regret learners, the players' strategy profile over time converges to a Nash equilibrium in average iterates: i.e., empirical play converges to a Nash equilibrium (e.g., (Freund and Schapire 1996)).
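The average-iterate convergence of no-regret dynamics can be illustrated with projected simultaneous gradient descent-ascent on the toy bilinear game $f(x, y) = xy$ over $[-1, 1]^2$, whose unique Nash equilibrium is $(0, 0)$; this simulation is our illustration, not an algorithm from the paper:

```python
import numpy as np

def projected_gda(x0=0.5, y0=0.5, lr=0.05, T=2000):
    """Simultaneous projected GDA on f(x, y) = x * y with X = Y = [-1, 1].
    The last iterates cycle around the equilibrium, while the averaged
    (empirical) play approaches the Nash point (0, 0)."""
    x, y = x0, y0
    xs, ys = [], []
    for _ in range(T):
        x, y = (np.clip(x - lr * y, -1.0, 1.0),   # descent step in x
                np.clip(y + lr * x, -1.0, 1.0))   # ascent step (uses pre-update x)
        xs.append(x)
        ys.append(y)
    return np.array(xs), np.array(ys)
```

The averages shrink toward the equilibrium even though the individual iterates keep orbiting it, which is the empirical-play convergence referenced above.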
+
+---
+
+${}^{1}$ One could also view such games as pseudo-games (also known as abstract economies) (Arrow and Debreu 1954), in which players move simultaneously under the unreasonable assumption that the moves they make will satisfy the game's dependency constraints. Under this view, the relevant solution concept is generalized Nash equilibrium (Facchinei and Kanzow 2007, 2010).
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+In this paper, we investigate no-regret learning dynamics in min-max Stackelberg games. We consider both pessimistic and optimistic settings: in the pessimistic setting, the outer player is a no-regret learner while the inner player best responds; in the optimistic setting, both players are no-regret learners. In the pessimistic case, we show that if the outer player uses a no-regret algorithm that achieves $\varepsilon$ -pessimistic regret after $T$ iterations, then the outer player's empirical play converges to their $\varepsilon$ -Stackelberg equilibrium strategy. In the optimistic case, we introduce a new type of regret, which we call Lagrangian regret, and which assumes access to a solution oracle for the optimal KKT multipliers of the game's constraints. We then show that if both players use no-regret algorithms that achieve $\varepsilon$ -Lagrangian regret after $T$ iterations, the players' empirical play converges to an $\varepsilon$ -Stackelberg equilibrium.
+
+We then restrict our attention to online mirror descent (OMD) dynamics, which yield two known algorithms in the pessimistic setting, namely max-oracle gradient descent (Jin, Netrapalli, and Jordan 2020) and nested GDA (Goktas and Greenwald 2021), and a new simultaneous GDA-like algorithm (Nedic and Ozdaglar 2009) in the optimistic setting, which we call Lagrangian GDA (LGDA). Convergence of the former two algorithms in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations then follows from our previous theorems. This iteration complexity also suggests the superiority of LGDA over nested GDA when a Lagrangian solution oracle exists: nested GDA converges in $O\left( {1/{\varepsilon }^{3}}\right)$ iterations (Goktas and Greenwald 2021), while LGDA converges in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations, assuming only that the objective function is Lipschitz continuous.
+
+Finally, we analyze the robustness of OMD dynamics to perturbations by investigating dynamic min-max Stackelberg games. We prove that OMD dynamics are robust: even when the game changes with each iteration of the algorithm, OMD dynamics closely track the changing equilibria for a large class of dynamic min-max games with independent strategy sets. In the dependent-strategy-set case, we demonstrate the robustness of OMD dynamics experimentally by simulating them in dynamic Fisher markets, a canonical example of a min-max Stackelberg game (with dependent strategy sets). Even when the Fisher market changes with each iteration, our OMD dynamics track the changing equilibria closely. Our findings can be summarized as follows:
+
+- In min-max Stackelberg games, when the outer player is a no-regret learner and the inner-player best-responds, the average of the outer player's strategies converges to their Stackelberg equilibrium strategy.
+
+- We introduce a new type of regret we call Lagrangian regret and show that in min-max Stackelberg games, when both players minimize their Lagrangian regret, the average of the players' strategies converges to a Stackelberg equilibrium.
+
+- We provide novel convergence guarantees for two known algorithms, max-oracle gradient descent and nested gradient descent ascent, to an $\varepsilon$-Stackelberg equilibrium in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations in average iterates.
+
+- We introduce a new simultaneous GDA-like algorithm and prove that its average iterates converge to an $\varepsilon$-Stackelberg equilibrium in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations.
+
+- We prove that max-oracle gradient descent and simultaneous GDA are robust to perturbations in a large class of min-max games (with independent strategy sets).
+
+- We run experiments with Fisher markets which suggest that max-oracle gradient descent and simultaneous GDA are robust to perturbations in these min-max Stackelberg games (with dependent strategy sets).
+
+We provide a review of related work in Appendix B. This paper is organized as follows. In the next section, we present the requisite mathematical preliminaries. In Section 3, we present no-regret learning dynamics that converge in a large class of min-max Stackelberg games. In Section 4, we study the convergence and robustness properties of a particular no-regret learning algorithm, namely online mirror descent, in min-max Stackelberg games.
+
+## 2 Mathematical Preliminaries
+
+Our notational conventions can be found in Appendix A.
+
+Game Definitions A min-max Stackelberg game, (X, Y, f, g), is a two-player, zero-sum game, where one player, who we call the outer, or $\mathbf{x}$-, player (resp. the inner, or $\mathbf{y}$-, player), is trying to minimize their loss (resp. maximize their gain), defined by a continuous objective function $f : X \times Y \rightarrow \mathbb{R}$, by taking an action from their strategy set $X \subset {\mathbb{R}}^{n}$ (resp. $Y \subset {\mathbb{R}}^{m}$) s.t. $\mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}$, where $\mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) = {\left( {g}_{1}\left( \mathbf{x},\mathbf{y}\right) ,\ldots ,{g}_{K}\left( \mathbf{x},\mathbf{y}\right) \right) }^{T}$ with each ${g}_{k} : X \times Y \rightarrow \mathbb{R}$ continuous. A strategy profile $\left( {\mathbf{x},\mathbf{y}}\right) \in X \times Y$ is said to be feasible iff for all $k \in \left\lbrack K\right\rbrack$, ${g}_{k}\left( {\mathbf{x},\mathbf{y}}\right) \geq 0$. The function $f$ maps a pair of actions taken by the players $\left( {\mathbf{x},\mathbf{y}}\right) \in X \times Y$ to a real value (i.e., a payoff), which represents the loss (resp. the gain) of the $\mathbf{x}$-player (resp. the $\mathbf{y}$-player). A min-max game is said to be convex-concave if the objective function $f$ is convex-concave.
+
+One way to view this game is as a Stackelberg game, i.e., a sequential game with two players, where, WLOG, we assume that the minimizing player moves first and the maximizing player moves second. The relevant solution concept for Stackelberg games is the Stackelberg equilibrium: a strategy profile $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \in X \times Y$ s.t. $\mathbf{g}\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \geq \mathbf{0}$ is an $\left( {\varepsilon ,\delta }\right)$-Stackelberg equilibrium if $\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {{\mathbf{x}}^{ * },\mathbf{y}}\right) \geq \mathbf{0}}} f\left( {{\mathbf{x}}^{ * },\mathbf{y}}\right) - \delta \leq f\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \leq \mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right) + \varepsilon$. Intuitively, an $\left( {\varepsilon ,\delta }\right)$-Stackelberg equilibrium is a point at which the $\mathbf{x}$-player's (resp. $\mathbf{y}$-player's) payoff is no more than $\varepsilon$ (resp. $\delta$) away from its optimum. A (0,0)-Stackelberg equilibrium is guaranteed to exist in min-max Stackelberg games (Goktas and Greenwald 2021). Note that when $\mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}$ for all $\mathbf{x} \in X$ and $\mathbf{y} \in Y$, the game reduces to a min-max game (with independent strategy sets), for which, by the min-max theorem, a Nash equilibrium is guaranteed to exist (Neumann 1928).
+
+In a min-max Stackelberg game, the outer player's best-response set ${\mathrm{{BR}}}_{X} \subset X$, defined as ${\mathrm{{BR}}}_{X} = \arg \mathop{\min }\limits_{{\mathbf{x} \in X}}{V}_{X}\left( \mathbf{x}\right)$, is independent of the inner player's strategy, while the inner player's best-response correspondence ${\mathrm{{BR}}}_{Y} : X \rightrightarrows Y$, defined as ${\mathrm{{BR}}}_{Y}\left( \mathbf{x}\right) = \arg \mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}{V}_{Y}\left( {\mathbf{y};\mathbf{x}}\right)$, depends on the outer player's strategy. A (0,0)-Stackelberg equilibrium $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \in X \times Y$ is then a tuple of strategies such that $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \in {\mathrm{{BR}}}_{X} \times {\mathrm{{BR}}}_{Y}\left( {\mathbf{x}}^{ * }\right)$.
+
+A dynamic min-max Stackelberg game, ${\left\{ \left( X, Y,{f}^{\left( t\right) },{\mathbf{g}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$, is a sequence of min-max Stackelberg games played for $T$ time periods. We define the players' value functions at time $t$ in a dynamic min-max Stackelberg game in the obvious way. Note that when ${\mathbf{g}}^{\left( t\right) }\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}$ for all $\mathbf{x} \in X,\mathbf{y} \in Y$ and all time periods $t \in \left\lbrack T\right\rbrack$, the game reduces to a dynamic min-max game (with independent strategy sets). Moreover, if $\forall t,{t}^{\prime } \in \left\lbrack T\right\rbrack ,{f}^{\left( t\right) } = {f}^{\left( {t}^{\prime }\right) }$ and ${\mathbf{g}}^{\left( t\right) } = {\mathbf{g}}^{\left( {t}^{\prime }\right) }$, then the game reduces to a (static) min-max Stackelberg game, which we denote simply by (X, Y, f, g).
+
+Mathematical Preliminaries Given $A \subset {\mathbb{R}}^{n}$, a function $f : A \rightarrow \mathbb{R}$ is said to be ${\ell }_{f}$-Lipschitz-continuous iff $\forall {\mathbf{x}}_{1},{\mathbf{x}}_{2} \in A$, $\begin{Vmatrix}{f\left( {\mathbf{x}}_{1}\right) - f\left( {\mathbf{x}}_{2}\right) }\end{Vmatrix} \leq {\ell }_{f}\begin{Vmatrix}{{\mathbf{x}}_{1} - {\mathbf{x}}_{2}}\end{Vmatrix}$. If the gradient of $f$, $\nabla f$, is ${\ell }_{\nabla f}$-Lipschitz-continuous, we refer to $f$ as ${\ell }_{\nabla f}$-Lipschitz-smooth. We provide a review of online convex optimization in Appendix A.
+
+## 3 No-Regret Learning Dynamics
+
+In this section we explore no-regret learning dynamics in min-max Stackelberg games, and prove the convergence of no-regret learning dynamics in two settings: a pessimistic setting in which the outer player is a no-regret learner while the inner player best-responds, and an optimistic setting in which both players are no-regret learners. All the results in this paper rely on the following assumptions:
+
+Assumption 1. 1. (Slater's condition (Slater 1959, 2014)) $\forall \mathbf{x} \in X,\exists \widehat{\mathbf{y}} \in Y$ s.t. ${g}_{k}\left( {\mathbf{x},\widehat{\mathbf{y}}}\right) > 0$ for all $k = 1,\ldots ,K$; 2. $f,{g}_{1},\ldots ,{g}_{K}$ are continuous and convex-concave; and 3. ${\nabla }_{\mathbf{x}}f,{\nabla }_{\mathbf{x}}{g}_{1},\ldots ,{\nabla }_{\mathbf{x}}{g}_{K}$ are well-defined for all $\left( {\mathbf{x},\mathbf{y}}\right) \in X \times Y$ and continuous in $\left( {\mathbf{x},\mathbf{y}}\right)$.
+
+We note that these assumptions are in line with previous work geared towards solving min-max Stackelberg games (Goktas and Greenwald 2021). Part 1 of Assumption 1, Slater's condition, is a standard constraint qualification condition (Boyd, Boyd, and Vandenberghe 2004), which is needed to derive the optimality conditions for the inner player's maximization problem; without it, the problem becomes analytically intractable. Part 2 of Assumption 1 is required for the value function of the outer player to be continuous and convex ((Goktas and Greenwald 2021), Proposition A1), so that the problem can be solved efficiently. Finally, we note that Part 3 of Assumption 1 can be replaced by a subgradient boundedness assumption; however, for simplicity, we assume this stronger condition.
+
+## Pessimistic Learning Setting
+
+In Stackelberg games, the leader decides their strategy assuming that the follower will best respond, which leads us to first consider a repeated game setting in which the inner player always best responds to the strategy picked by the outer player. Such a setting also makes sense because, in zero-sum Stackelberg games, the outer and inner players are adversaries, and in most applications of interest we are concerned with optimal strategies for the outer player; hence, assuming a strong adversary that always best responds allows us to derive more robust strategies for the outer player.
+
+For any $\mathbf{x} \in X$, denote ${\mathbf{y}}^{ * }\left( \mathbf{x}\right) \in {\mathrm{{BR}}}_{Y}\left( \mathbf{x}\right)$. In such a setting, intuitively, the regret should be equal to the difference between the cumulative loss of the outer player w.r.t. their sequence of actions, to which the inner player best responds, and the smallest cumulative loss that the outer player could have achieved by picking a fixed strategy to which the inner player best responds, i.e., $\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{f}^{\left( t\right) }\left( {{\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{ * }\left( {\mathbf{x}}^{\left( t\right) }\right) }\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{f}^{\left( t\right) }\left( {\mathbf{x},{\mathbf{y}}^{ * }\left( \mathbf{x}\right) }\right)$. We call this regret the pessimistic regret, which can be more conveniently defined as the regret incurred by the outer player's sequence of actions ${\left\{ {\mathbf{x}}^{\left( t\right) }\right\} }_{t = 1}^{T}$ relative to a fixed action $\mathbf{x} \in X$ in a dynamic min-max Stackelberg game ${\left\{ \left( X, Y,{f}^{\left( t\right) },{\mathbf{g}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$, w.r.t. the loss given by their value functions ${\left\{ {V}_{X}^{\left( t\right) }\right\} }_{t = 1}^{T}$, i.e.:
+
+$$
+{\operatorname{PesRegret}}_{X}^{\left( T\right) }\left( \mathbf{x}\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{V}_{X}^{\left( t\right) }\left( {\mathbf{x}}^{\left( t\right) }\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{V}_{X}^{\left( t\right) }\left( \mathbf{x}\right) \tag{1}
+$$
+
+That is, the pessimistic regret of the outer player compares the outer player's play history to the smallest cumulative loss the outer player could achieve by picking a fixed strategy assuming that the inner player best-responds. It is pessimistic in the sense that the outer player assumes the worst possible outcome for themself.
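+As a concrete illustration, the pessimistic regret of Equation 1 can be evaluated directly from the per-round value functions ${V}_{X}^{\left( t\right) }$. The following sketch is our own illustrative code (the helper and variable names are not from the paper), applied to a static game with $f\left( {x, y}\right) = {x}^{2} + y + 1$ and constraint $x + y \leq 1$:
+
```python
# Computing the pessimistic regret of Eq. (1) directly from per-round value
# functions V_X^(t). This is our own illustrative helper; the names are not
# from the paper.
def pessimistic_regret(xs, x_fixed, value_fns):
    """(1/T) sum_t V^(t)(x^(t))  -  (1/T) sum_t V^(t)(x_fixed)."""
    T = len(xs)
    play_loss = sum(V(x) for V, x in zip(value_fns, xs)) / T
    comparator_loss = sum(V(x_fixed) for V in value_fns) / T
    return play_loss - comparator_loss

# Static example: f(x, y) = x^2 + y + 1 with constraint x + y <= 1; for
# x in [0, 1] the inner best response is y*(x) = 1 - x, so the outer player's
# value function is V(x) = x^2 + (1 - x) + 1 = x^2 - x + 2.
V = lambda x: x**2 - x + 2
history = [1.0, 0.0, 0.5, 0.5]
print(pessimistic_regret(history, 0.5, [V] * len(history)))  # 0.125
```
+
+Here the fixed comparator $x = 1/2$ minimizes $V$, so the computed regret is the average excess loss of the play history over equilibrium play.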
+
+The main theorem in this section states the following: assuming the inner player best responds to the actions of the outer player, if the outer player employs a no-regret algorithm, then the outer player's average strategy converges to a Stackelberg equilibrium. Before presenting this theorem,${}^{2}$ we recall the following property of the outer player's value function.
+
+Proposition 2 ((Goktas and Greenwald 2021), Proposition A.1). In a min-max Stackelberg game(X, Y, f, g), the outer player’s value function, $V\left( \mathbf{x}\right) = \mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right)$ , is continuous and convex.
+
+---
+
+${}^{2}$ The proofs of all mathematical claims in this section can be found in Appendix C.
+
+---
+
+Theorem 3. Consider a min-max Stackelberg game (X, Y, f, g), and suppose the outer player plays a sequence of actions ${\left\{ {\mathbf{x}}^{\left( t\right) }\right\} }_{t = 1}^{T} \subset X$ . If, after $T$ iterations, the outer player’s pessimistic regret is bounded by $\varepsilon$ for all $\mathbf{x} \in X$ , then $\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) }\right)$ is a $\left( {\varepsilon ,0}\right)$ -Stackelberg equilibrium, where ${\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) \in {\mathrm{{BR}}}_{Y}\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right)$ .
+
+We remark that even though the definition of pessimistic regret looks similar to the standard definition of regret, its structure is very different. In particular, without Proposition 2, it is not clear that the value $\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{f}^{\left( t\right) }\left( {\mathbf{x},{\mathbf{y}}^{ * }\left( \mathbf{x}\right) }\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{V}_{X}^{\left( t\right) }\left( \mathbf{x}\right)$ is convex in $\mathbf{x}$.
+
+## Optimistic Learning Setting
+
+We now turn our attention to a learning setting in which both players are no-regret learners. The most straightforward way to define regret is by considering the outer and inner players' "vanilla" regrets, respectively: ${\operatorname{Regret}}_{X}^{\left( T\right) }\left( \mathbf{x}\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}f\left( {{\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }}\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}f\left( {\mathbf{x},{\mathbf{y}}^{\left( t\right) }}\right)$ and ${\operatorname{Regret}}_{Y}^{\left( T\right) }\left( \mathbf{y}\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}f\left( {{\mathbf{x}}^{\left( t\right) },\mathbf{y}}\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}f\left( {{\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }}\right)$. In convex-concave min-max games (with independent strategy sets), when both players minimize their vanilla regret, the players' average strategies converge to a Nash equilibrium. In min-max Stackelberg games (with dependent strategy sets), however, convergence to a Stackelberg equilibrium is not guaranteed.
+
+Example 4. Consider the min-max Stackelberg game $\mathop{\min }\limits_{{x \in \left\lbrack {-1,1}\right\rbrack }}\mathop{\max }\limits_{{y \in \left\lbrack {-1,1}\right\rbrack : 0 \leq 1 - \left( {x + y}\right) }}{x}^{2} + y + 1$. The Stackelberg equilibrium of this game is given by ${x}^{ * } = 1/2,{y}^{ * } = 1/2$. Suppose both players employ no-regret algorithms that generate strategies ${\left\{ {x}^{\left( t\right) },{y}^{\left( t\right) }\right\} }_{t \in {\mathbb{N}}_{ + }}$. Then at time $T \in {\mathbb{N}}_{ + }$, there exists $\varepsilon > 0$ s.t.
+
+$$
+\left\{ \begin{array}{l} \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}\left\lbrack {{{x}^{\left( t\right) }}^{2} + {y}^{\left( t\right) } + 1}\right\rbrack - \frac{1}{T}\mathop{\min }\limits_{{x \in \left\lbrack {-1,1}\right\rbrack }}\mathop{\sum }\limits_{{t = 1}}^{T}\left\lbrack {{x}^{2} + {y}^{\left( t\right) } + 1}\right\rbrack \leq \varepsilon \\ \frac{1}{T}\mathop{\max }\limits_{{y \in \left\lbrack {-1,1}\right\rbrack }}\mathop{\sum }\limits_{{t = 1}}^{T}\left\lbrack {{{x}^{\left( t\right) }}^{2} + y + 1}\right\rbrack - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}\left\lbrack {{{x}^{\left( t\right) }}^{2} + {y}^{\left( t\right) } + 1}\right\rbrack \leq \varepsilon \end{array}\right. \tag{2}
+$$
+
+Simplifying yields:
+
+$$
+\left\{ \begin{matrix} \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{x}^{{\left( t\right) }^{2}} - \mathop{\min }\limits_{{x \in \left\lbrack {-1,1}\right\rbrack }}{x}^{2} \leq \varepsilon \\ \mathop{\max }\limits_{{y \in \left\lbrack {-1,1}\right\rbrack }}y - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{y}^{\left( t\right) } \leq \varepsilon \end{matrix}\right. \tag{3}
+$$
+
+Since both players are no-regret learners, there exists $T \in {\mathbb{N}}_{ + }$ large enough s.t.
+
+$$
+\left\{ \begin{matrix} \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{x}^{{\left( t\right) }^{2}} \leq \mathop{\min }\limits_{{x \in \left\lbrack {-1,1}\right\rbrack }}{x}^{2} \\ \mathop{\max }\limits_{{y \in \left\lbrack {-1,1}\right\rbrack }}y \leq \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{y}^{\left( t\right) } \end{matrix}\right. = \left\{ \begin{matrix} \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{x}^{{\left( t\right) }^{2}} \leq 0 \\ 1 \leq \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{y}^{\left( t\right) } \end{matrix}\right. \tag{4}
+$$
+
+In other words, the average iterates converge to $x = 0, y = 1$, which is not the Stackelberg equilibrium of this game.
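+The failure in Example 4 can also be reproduced numerically. The sketch below is our own illustrative code (step sizes are our choice, not from the paper): it runs projected gradient descent/ascent on $f\left( {x, y}\right) = {x}^{2} + y + 1$ over ${\left\lbrack -1,1\right\rbrack }^{2}$, ignoring the joint constraint, and the average iterates drift to $\left( {0,1}\right)$ rather than the Stackelberg equilibrium $\left( {1/2,1/2}\right)$:
+
```python
# Reproducing Example 4 numerically: both players run projected gradient
# descent/ascent on f(x, y) = x^2 + y + 1 over [-1, 1]^2, ignoring the joint
# constraint x + y <= 1. Step sizes are our own illustrative choice.
def clip(v):
    return max(-1.0, min(1.0, v))

x, y = 1.0, -1.0
avg_x = avg_y = 0.0
T = 10_000
for t in range(1, T + 1):
    eta = 1.0 / t**0.5
    # grad_x f = 2x (descent step for x), grad_y f = 1 (ascent step for y)
    x, y = clip(x - eta * 2.0 * x), clip(y + eta * 1.0)
    avg_x += (x - avg_x) / t            # running averages of the iterates
    avg_y += (y - avg_y) / t

print(avg_x, avg_y)  # average iterates approach (0, 1), not (1/2, 1/2)
```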
+
+If the inner player minimizes their vanilla regret without regard to the game's constraints, then their actions are not guaranteed to be feasible, and thus cannot converge to a Stackelberg equilibrium. To remedy this infeasibility, we introduce a new type of regret we call Lagrangian regret, and show that assuming access to a solution oracle for the optimal KKT multipliers of the game's constraints, if both players minimize their Lagrangian regret, then no-regret learning dynamics converge to a Stackelberg equilibrium.
+
+Define ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right) = f\left( {\mathbf{x},\mathbf{y}}\right) + \mathop{\sum }\limits_{{k = 1}}^{K}{\lambda }_{k}{g}_{k}\left( {\mathbf{x},\mathbf{y}}\right)$ to be the Lagrangian associated with the outer player's value function, or equivalently, the inner player's maximization problem given the outer player's strategy $\mathbf{x} \in X$. If the optimal KKT multipliers ${\mathbf{\lambda }}^{ * } \in {\mathbb{R}}^{K}$, which are guaranteed to exist by Slater's condition (Slater 1959), were known for the problem $\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right) = \mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}\mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right)$, then one could plug them back into the Lagrangian to obtain a convex-concave saddle-point problem given by $\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$. Note that a saddle point of this problem is guaranteed to exist by the minimax theorem (Neumann 1928), since ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$ is convex in $\mathbf{x}$ and concave in $\mathbf{y}$. The next lemma states that the Stackelberg equilibria of a min-max Stackelberg game correspond to the saddle points of ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$.
+
+Lemma 5. Any Stackelberg equilibrium $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \in X \times Y$ of a min-max Stackelberg game (X, Y, f, g) corresponds to a saddle point of ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$, where ${\mathbf{\lambda }}^{ * } \in \arg \mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right)$.
+
+This lemma tells us that the function ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$ represents a new loss function that enforces the game's constraints. Based on this observation, we assume access to a Lagrangian solution oracle that provides us with ${\mathbf{\lambda }}^{ * } \in \arg \mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right)$.
+
+Further, we define a new type of regret, which we call Lagrangian regret. Given a sequence of actions ${\left\{ \left( {\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$ taken by the outer and inner players in a dynamic min-max Stackelberg game ${\left\{ \left( X, Y,{f}^{\left( t\right) },{\mathbf{g}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$, we define their Lagrangian regrets, respectively, as ${\operatorname{LagrRegret}}_{X}^{\left( T\right) }\left( \mathbf{x}\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}^{\left( t\right) }\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{\mathbf{x}}^{\left( t\right) }\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right)$ and ${\operatorname{LagrRegret}}_{Y}^{\left( T\right) }\left( \mathbf{y}\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}^{\left( t\right) }\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}^{\left( t\right) }\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right)$.
+
+The saddle-point residual of a point $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \in X \times Y$ with respect to a convex-concave function $f : X \times Y \rightarrow \mathbb{R}$ is given by $\mathop{\max }\limits_{{\mathbf{y} \in Y}}f\left( {{\mathbf{x}}^{ * },\mathbf{y}}\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}f\left( {\mathbf{x},{\mathbf{y}}^{ * }}\right)$. When the saddle-point residual is 0, the point is a saddle point, and thus corresponds to a (0,0)-Stackelberg equilibrium.
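+The saddle-point residual can be estimated numerically by maximizing over $\mathbf{y}$ and minimizing over $\mathbf{x}$ on a grid. The sketch below is our own illustrative code, applied to a convex-concave function of our own choosing, $f\left( {x, y}\right) = {x}^{2} - {y}^{2}$ on ${\left\lbrack -1,1\right\rbrack }^{2}$, whose unique saddle point is $\left( {0,0}\right)$:
+
```python
# Estimating the saddle-point residual max_y f(x*, y) - min_x f(x, y*) by
# brute-force grid search over each coordinate, for the convex-concave
# function f(x, y) = x^2 - y^2 on [-1, 1]^2 (illustrative choice of ours).
import numpy as np

def residual(f, x_star, y_star, lo=-1.0, hi=1.0, n=2001):
    grid = np.linspace(lo, hi, n)
    # both the inner max and the inner min are approximated on the grid
    return f(x_star, grid).max() - f(grid, y_star).min()

f = lambda x, y: x**2 - y**2
print(residual(f, 0.0, 0.0))   # ~0 at the saddle point
print(residual(f, 0.5, 0.5))   # positive away from it
```
+
+Grid search scales poorly in dimension, but it suffices to sanity-check candidate equilibria in low-dimensional examples.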
+
+The main theorem of this section now follows: if both players play so as to minimize their Lagrangian regret, then their average strategies converge to a Stackelberg equilibrium. The bound is given in terms of the saddle point residual of ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$ .
+
+Theorem 6. Consider a min-max Stackelberg game (X, Y, f, g), and suppose the outer and inner players generate sequences of actions ${\left\{ \left( {\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T} \subset X \times Y$ using a no-Lagrangian-regret algorithm. If, after $T$ iterations, the Lagrangian regret of both players is bounded by $\varepsilon$ for all $\mathbf{x} \in X$ and $\mathbf{y} \in Y$, then the following convergence bound holds on the saddle-point residual of $\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\overline{\mathbf{y}}}^{\left( T\right) }}\right)$ w.r.t. the Lagrangian: $0 \leq \mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{{\overline{\mathbf{x}}}^{\left( T\right) }}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}{\mathcal{L}}_{\mathbf{x}}\left( {{\overline{\mathbf{y}}}^{\left( T\right) },{\mathbf{\lambda }}^{ * }}\right) \leq {2\varepsilon }$.
+
+Having established convergence to Stackelberg equilibria of general no-regret learning dynamics in min-max Stackelberg games, we now proceed to investigate the convergence and robustness properties of a specific example of a no-regret learning dynamic, namely online mirror descent (OMD) dynamics.
+
+## 4 Online Mirror Descent
+
+In this section, we apply the results derived above for no-regret learning dynamics to Online Mirror Descent (OMD) (Zinkevich 2003; Shalev-Shwartz et al. 2011), and then study the robustness properties of OMD in min-max Stackelberg games.
+
+## Convergence Analysis
+
+When the outer player is an OMD learner minimizing its pessimistic regret and the inner player best responds, we obtain the max-oracle gradient descent algorithm (Algorithm 1 - Appendix D) first proposed by Jin, Netrapalli, and Jordan (2020) for min-max games.
+
+Following Jin, Netrapalli, and Jordan (2020), Goktas and Greenwald extend the max-oracle gradient descent algorithm to min-max Stackelberg games and prove its convergence in best iterates. The following corollary of Theorem 3, which concerns convergence of this algorithm in average iterates, complements their result: the max-oracle gradient descent algorithm is guaranteed to converge to an $\left( {\varepsilon ,0}\right)$-Stackelberg equilibrium strategy of the outer player in average iterates after $O\left( {1/{\varepsilon }^{2}}\right)$ iterations, assuming the inner player best responds.
+
+We note that since ${V}_{X}$ is convex, by Proposition 2, ${V}_{X}$ is subdifferentiable. Moreover, for all $\widehat{\mathbf{x}} \in X$ and $\widehat{\mathbf{y}} \in {\mathrm{{BR}}}_{Y}\left( \widehat{\mathbf{x}}\right)$, ${\nabla }_{\mathbf{x}}f\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right) + \mathop{\sum }\limits_{{k = 1}}^{K}{\lambda }_{k}^{ * }{\nabla }_{\mathbf{x}}{g}_{k}\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right)$ is a subgradient of the value function at $\widehat{\mathbf{x}}$, by Goktas and Greenwald's subdifferential envelope theorem (2021). We add that, like Goktas and Greenwald, we assume that the optimal KKT multipliers ${\mathbf{\lambda }}^{ * }\left( {{\mathbf{x}}^{\left( t\right) },\widehat{\mathbf{y}}\left( {\mathbf{x}}^{\left( t\right) }\right) }\right)$ associated with a solution $\widehat{\mathbf{y}}\left( {\mathbf{x}}^{\left( t\right) }\right)$ can be computed in constant time.
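+To make the envelope-theorem subgradient concrete, the following sketch runs max-oracle projected gradient descent on the game of Example 4, $\mathop{\min }\limits_{{x \in \left\lbrack {-1,1}\right\rbrack }}\mathop{\max }\limits_{{y \in \left\lbrack {-1,1}\right\rbrack : x + y \leq 1}}{x}^{2} + y + 1$. This toy code is our own (step sizes and iteration count are our choices), not Algorithm 1 of Appendix D verbatim. For $x \in \left\lbrack {0,1}\right\rbrack$, the inner best response is ${y}^{ * }\left( x\right) = 1 - x$ with optimal KKT multiplier ${\lambda }^{ * } = 1$, so the subgradient of the value function is ${\nabla }_{x}f + {\lambda }^{ * }{\nabla }_{x}g = {2x} - 1$:
+
```python
# A minimal sketch of max-oracle projected gradient descent on the game of
# Example 4 (our own illustrative code, not Algorithm 1 of Appendix D
# verbatim). For x in [0, 1] the inner best response is y*(x) = 1 - x with
# optimal KKT multiplier lambda* = 1, so the envelope-theorem subgradient of
# the value function V(x) = x^2 - x + 2 is 2x - 1.
def best_response(x):
    return 1.0 - x                      # max-oracle call, valid for x in [0, 1]

x, avg_x = 0.9, 0.0
T = 5_000
for t in range(1, T + 1):
    y = best_response(x)                # inner player best responds
    subgrad = 2.0 * x + 1.0 * (-1.0)    # grad_x f(x, y) + lambda* * grad_x g(x, y)
    x = x - subgrad / (4.0 * t**0.5)    # gradient step (iterates stay in [0, 1])
    avg_x += (x - avg_x) / t            # running average of the outer iterates

print(avg_x, best_response(avg_x))      # approaches the equilibrium (1/2, 1/2)
```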
+
+Corollary 7. Let $c = \mathop{\max }\limits_{{\mathbf{x} \in X}}\parallel \mathbf{x}\parallel$ and ${\ell }_{f} = \mathop{\max }\limits_{{\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right) \in X \times Y}}\begin{Vmatrix}{{\nabla }_{\mathbf{x}}f\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right) }\end{Vmatrix}$. If Algorithm 1 (Appendix D) is run on a min-max Stackelberg game (X, Y, f, g) with ${\eta }_{t} = \frac{c}{{\ell }_{f}\sqrt{2T}}$ for all iterations $t \in \left\lbrack T\right\rbrack$ and any ${\mathbf{x}}^{\left( 0\right) } \in X$, then $\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) }\right)$ is a $\left( {c{\ell }_{f}\sqrt{2}/\sqrt{T},0}\right)$-Stackelberg equilibrium. Furthermore, for $\varepsilon \in \left( {0,1}\right)$, if we choose $T \geq {N}_{T}\left( \varepsilon \right) \in O\left( {1/{\varepsilon }^{2}}\right)$, then $\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) }\right)$ is an $\left( {\varepsilon ,0}\right)$-Stackelberg equilibrium.
+
+Note that we can relax Theorem 3 to instead work with an approximate best response of the inner player, i.e., given the strategy of the outer player $\widehat{\mathbf{x}}$, instead of playing an exact best response, the inner player computes a $\widehat{\mathbf{y}}$ s.t. $f\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right) \geq \mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\widehat{\mathbf{x}},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\widehat{\mathbf{x}},\mathbf{y}}\right) - \varepsilon$. Combined with results on the convergence of gradient ascent on smooth functions, the average iterates computed by Goktas and Greenwald's nested GDA algorithm converge to an $\left( {\varepsilon ,\varepsilon }\right)$-Stackelberg equilibrium in $O\left( {1/{\varepsilon }^{3}}\right)$ iterations. If, additionally, $f$ is strongly concave in $\mathbf{y}$, then the iteration complexity can be reduced to $O\left( {1/{\varepsilon }^{2}}\log \left( {1/\varepsilon }\right) \right)$.
+
+Similarly, we can also consider the optimistic case, in which both the outer and inner players minimize their Lagrangian regrets as OMD learners with access to a Lagrangian solution oracle that returns ${\mathbf{\lambda }}^{ * } \in \arg \mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right)$. In this case, we obtain the Lagrangian GDA (LGDA) algorithm (Algorithm 2 - Appendix D). The following corollary of Theorem 6 states that LGDA converges in average iterates to an approximate Stackelberg equilibrium in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations.
+
+Corollary 8. Let $b = \mathop{\max }\limits_{{\mathbf{x} \in X}}\parallel \mathbf{x}\parallel$, $c = \mathop{\max }\limits_{{\mathbf{y} \in Y}}\parallel \mathbf{y}\parallel$, and ${\ell }_{\mathcal{L}} = \mathop{\max }\limits_{{\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right) \in X \times Y}}\begin{Vmatrix}{{\nabla }_{\mathbf{x}}{\mathcal{L}}_{\widehat{\mathbf{x}}}\left( {\widehat{\mathbf{y}},{\mathbf{\lambda }}^{ * }}\right) }\end{Vmatrix}$. If Algorithm 2 (Appendix D) is run on a min-max Stackelberg game (X, Y, f, g) with ${\eta }_{t}^{\mathbf{x}} = \frac{b}{{\ell }_{\mathcal{L}}\sqrt{2T}}$ and ${\eta }_{t}^{\mathbf{y}} = \frac{c}{{\ell }_{\mathcal{L}}\sqrt{2T}}$ for all iterations $t \in \left\lbrack T\right\rbrack$ and any ${\mathbf{x}}^{\left( 0\right) } \in X$, then the following convergence bound holds on the saddle-point residual of $\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\overline{\mathbf{y}}}^{\left( T\right) }}\right)$ w.r.t. the Lagrangian:
+
+$$
+0 \leq \mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{{\overline{\mathbf{x}}}^{\left( T\right) }}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}{\mathcal{L}}_{\mathbf{x}}\left( {{\overline{\mathbf{y}}}^{\left( T\right) },{\mathbf{\lambda }}^{ * }}\right) \leq \frac{2\sqrt{2}{\ell }_{\mathcal{L}}}{\sqrt{T}}\max \{ b, c\} \tag{5}
+$$
+
+We remark that in certain rare cases the Lagrangian can become degenerate in $\mathbf{y}$, in that the $\mathbf{y}$ terms in the Lagrangian might cancel out when ${\mathbf{\lambda }}^{ * }$ is plugged back into the Lagrangian, leading LGDA to never update the $\mathbf{y}$ variables, as demonstrated by the following example:
+
+Example 9. Consider the min-max Stackelberg game $\mathop{\min }\limits_{{x \in \left\lbrack {-1,1}\right\rbrack }}\mathop{\max }\limits_{{y \in \left\lbrack {-1,1}\right\rbrack : 0 \leq 1 - \left( {x + y}\right) }}{x}^{2} + y + 1$. When we plug the optimal KKT multiplier ${\lambda }^{ * } = 1$ into the Lagrangian associated with the outer player's value function, we obtain ${\mathcal{L}}_{x}\left( {y,{\lambda }^{ * }}\right) = {x}^{2} + y + 1 + \left( {1 - \left( {x + y}\right) }\right) = {x}^{2} - x + 2$, with $\frac{\partial \mathcal{L}}{\partial x} = {2x} - 1$ and $\frac{\partial \mathcal{L}}{\partial y} = 0$. It follows that the $x$ iterate converges to $1/2$, but the $y$ iterate will never be updated; hence, unless $y$ is initialized to its Stackelberg equilibrium value, LGDA will not converge to a Stackelberg equilibrium.
+
In general, this degeneracy issue occurs when $\forall \mathbf{x} \in X,{\nabla }_{\mathbf{y}}f\left( {\mathbf{x},\mathbf{y}}\right) = - \mathop{\sum }\limits_{{k = 1}}^{K}{\lambda }_{k}^{ * }{\nabla }_{\mathbf{y}}{g}_{k}\left( {\mathbf{x},\mathbf{y}}\right)$. We can sidestep the issue by restricting our attention to min-max Stackelberg games with convex-strictly-concave objective functions, which is sufficient to ensure that the Lagrangian is not degenerate in $\mathbf{y}$ (Boyd, Boyd, and Vandenberghe 2004).
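The degeneracy in Example 9 is easy to verify numerically. Below is a minimal Python sketch (the step size and initialization are our own illustrative choices) that runs projected gradient descent-ascent on the Lagrangian after plugging in $\lambda^* = 1$: the $x$ iterate contracts to $1/2$, while the $y$ iterate never moves.

```python
# LGDA on Example 9 after plugging lambda* = 1 into the Lagrangian:
# the gradients are dL/dx = 2x - 1 and dL/dy = 0, so y is never updated.
# Step size and initialization below are illustrative choices.

def project(z, lo=-1.0, hi=1.0):
    """Euclidean projection onto the interval [lo, hi]."""
    return max(lo, min(hi, z))

x, y = -1.0, -1.0        # arbitrary initialization in [-1, 1]^2
eta = 0.1                # assumed constant step size
for _ in range(200):
    x = project(x - eta * (2 * x - 1))  # descent step on dL/dx
    y = project(y + eta * 0.0)          # ascent step on dL/dy = 0: y is stuck

print(round(x, 4), y)    # x contracts to 1/2; y keeps its initial value
```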
+
+## Robustness Analysis
+
Although the OMD dynamics we analyzed in the previous section are dynamic in nature, they assume that the game and its properties, i.e., the objective function and constraints, are static and thus do not change over time. In many real-world games, however, the game itself is subject to perturbations, i.e., dynamic changes: the agents' objectives and constraints might be perturbed by external influences. Analyzing and providing dynamics that are robust to ongoing changes in games is critical, since the real world is rarely static.
+
This makes the study of dynamic min-max Stackelberg games, and of the associated optimal dynamic strategies for both players, an important goal. Dynamic games bring with them a series of interesting issues; notably, even though the environment might change at each time period, the game still exhibits a Stackelberg equilibrium in every period. However, one cannot sensibly expect the players to play a Stackelberg equilibrium strategy at each time period, since even in the static setting, known game dynamics require multiple time steps for players to reach even an approximate Stackelberg equilibrium. When players cannot directly best respond or pick the optimal strategy for themselves, they essentially become boundedly rational agents: they can take a step towards their optimal strategy, but they cannot reach it in a single time step. Hence, in dynamic games, equilibria also become dynamic objects, which can never be reached unless the game stops changing significantly.
+
Corollaries 7 and 8 tell us that OMD dynamics are effective equilibrium-finding strategies in min-max Stackelberg games. However, they do not provide any intuition about the robustness of OMD dynamics to perturbations in the game. That is, we would like to know whether or not OMD dynamics are able to track the equilibrium even when the game changes slowly. Robustness is a desirable property for no-regret learning dynamics, as many real-world applications of games involve changing environments. In this section, we provide theoretical guarantees showing that even when the game changes at each iteration, OMD dynamics closely track the changing equilibria of the dynamic game. Unfortunately, our theoretical results only concern min-max games (with independent strategy sets). Nevertheless, we provide experimental evidence suggesting that the results we prove may also apply more broadly to min-max Stackelberg games (with dependent strategy sets).
+
We first consider the pessimistic setting, in which the outer player is a no-regret learner and the inner player best-responds. In this setting, we show that when the outer player follows online projected gradient descent dynamics in a dynamic min-max game, i.e., a min-max game in which the objective function constantly changes, the outer player's strategies closely track their Stackelberg equilibrium strategy. Intuitively, the following result implies that, irrespective of the outer player's initial strategy, online projected gradient descent dynamics track the outer player's Nash equilibrium strategy, in the sense that the strategy determined by the outer player always remains within a ${2d}/\delta$ radius of it.
+
+Theorem 10. Consider a dynamic min-max game ${\left\{ \left( X, Y,{f}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$ . Suppose that, for all $t \in \left\lbrack T\right\rbrack ,{f}^{\left( t\right) }$ is $\mu$ -strongly convex in $\mathbf{x}$ and strictly concave in $\mathbf{y}$ , and ${f}^{\left( t\right) }$ is ${\ell }_{\nabla f}$ -Lipschitz smooth. Suppose that the outer player generates a sequence of actions ${\left\{ {\mathbf{x}}^{\left( t\right) }\right\} }_{t = 1}^{T} \subset X$ by using an online projected gradient descent algorithm on the loss functions ${\left\{ {V}^{\left( t\right) }\right\} }_{t = 1}^{T}$ with learning rate $\eta \leq \frac{2}{\mu + {\ell }_{\nabla f}}$ and suppose that the inner player generates a sequence of best-responses to each iterate of the outer player ${\left\{ {\mathbf{y}}^{\left( t\right) }\right\} }_{t = 1}^{T} \subset Y$ . For all $t \in \left\lbrack T\right\rbrack$ , let ${\mathbf{x}}^{{\left( t\right) }^{ * }} \in \arg \mathop{\min }\limits_{{\mathbf{x} \in X}}{V}^{\left( t\right) }\left( \mathbf{x}\right)$ , ${\Delta }^{\left( t\right) } = \begin{Vmatrix}{{\mathbf{x}}^{{\left( t + 1\right) }^{ * }} - {\mathbf{x}}^{{\left( t\right) }^{ * }}}\end{Vmatrix}$ , and $\delta = \frac{{2\eta \mu }{\ell }_{\nabla f}}{{\ell }_{\nabla f} + \mu }$ , we then have:
+
+$$
\begin{Vmatrix}{{\mathbf{x}}^{{\left( T\right) }^{ * }} - {\mathbf{x}}^{\left( T\right) }}\end{Vmatrix} \leq {\left( 1 - \delta \right) }^{T/2}\begin{Vmatrix}{{\mathbf{x}}^{{\left( 0\right) }^{ * }} - {\mathbf{x}}^{\left( 0\right) }}\end{Vmatrix} + \mathop{\sum }\limits_{{t = 1}}^{T}{\left( 1 - \delta \right) }^{\frac{T - t}{2}}{\Delta }^{\left( t\right) } \tag{6}
$$
+
+If additionally, for all $t \in \left\lbrack T\right\rbrack ,{\Delta }^{\left( t\right) } \leq d$ , then:
+
+$$
+\begin{Vmatrix}{{\mathbf{x}}^{{\left( T\right) }^{ * }} - {\mathbf{x}}^{\left( T\right) }}\end{Vmatrix} \leq {\left( 1 - \delta \right) }^{T/2}\begin{Vmatrix}{{\mathbf{x}}^{{\left( 0\right) }^{ * }} - {\mathbf{x}}^{\left( 0\right) }}\end{Vmatrix} + \frac{2d}{\delta } \tag{7}
+$$
+
+We can extend a similar robustness result to the setting in which the outer and inner players are both OMD learners. The following theorem implies that irrespective of the initial strategies of the two players, online projected gradient descent dynamics follow the Nash equilibrium of the game, always staying within a ${4d}/\delta$ radius.
+
Theorem 11. Consider a dynamic min-max game ${\left\{ {G}_{t}\right\} }_{t = 1}^{T} = {\left\{ \left( X, Y,{f}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$. Suppose that, for all $t \in \left\lbrack T\right\rbrack$, ${f}^{\left( t\right) }$ is ${\mu }_{\mathbf{x}}$-strongly convex in $\mathbf{x}$ and ${\mu }_{\mathbf{y}}$-strongly concave in $\mathbf{y}$, and ${f}^{\left( t\right) }$ is ${\ell }_{\nabla f}$-Lipschitz smooth. Let ${\left\{ \left( {\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T} \subset X \times Y$ be the strategies generated by the outer and inner players, assuming that the outer player uses an online projected gradient descent algorithm on the losses ${\left\{ {f}^{\left( t\right) }\left( \cdot ,{\mathbf{y}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$ with ${\eta }_{\mathbf{x}} = \frac{2}{{\mu }_{\mathbf{x}} + {\ell }_{\nabla f}}$ and that the inner player uses an online projected gradient descent algorithm on the losses ${\left\{ -{f}^{\left( t\right) }\left( {\mathbf{x}}^{\left( t\right) }, \cdot \right) \right\} }_{t = 1}^{T}$ with ${\eta }_{\mathbf{y}} = \frac{2}{{\mu }_{\mathbf{y}} + {\ell }_{\nabla f}}$. For all $t \in \left\lbrack T\right\rbrack$, let ${\mathbf{x}}^{{\left( t\right) }^{ * }} \in \arg \mathop{\min }\limits_{{\mathbf{x} \in X}}{f}^{\left( t\right) }\left( {\mathbf{x},{\mathbf{y}}^{\left( t\right) }}\right)$ and ${\mathbf{y}}^{{\left( t\right) }^{ * }} \in \arg \mathop{\max }\limits_{{\mathbf{y} \in Y}}{f}^{\left( t\right) }\left( {{\mathbf{x}}^{\left( t\right) },\mathbf{y}}\right)$, and define ${\Delta }_{\mathbf{x}}^{\left( t\right) } = \begin{Vmatrix}{{\mathbf{x}}^{{\left( t + 1\right) }^{ * }} - {\mathbf{x}}^{{\left( t\right) }^{ * }}}\end{Vmatrix}$, ${\Delta }_{\mathbf{y}}^{\left( t\right) } = \begin{Vmatrix}{{\mathbf{y}}^{{\left( t + 1\right) }^{ * }} - {\mathbf{y}}^{{\left( t\right) }^{ * }}}\end{Vmatrix}$, ${\delta }_{\mathbf{x}} = \frac{2{\eta }_{\mathbf{x}}{\mu }_{\mathbf{x}}{\ell }_{\nabla f}}{{\ell }_{\nabla f} + {\mu }_{\mathbf{x}}}$, and ${\delta }_{\mathbf{y}} = \frac{2{\eta }_{\mathbf{y}}{\mu }_{\mathbf{y}}{\ell }_{\nabla f}}{{\ell }_{\nabla f} + {\mu }_{\mathbf{y}}}$. We then have:
+
+$$
+\begin{Vmatrix}{{\mathbf{x}}^{{\left( T\right) }^{ * }} - {\mathbf{x}}^{\left( T\right) }}\end{Vmatrix} + \begin{Vmatrix}{{\mathbf{y}}^{{\left( T\right) }^{ * }} - {\mathbf{y}}^{\left( T\right) }}\end{Vmatrix}
+$$
+
+$$
+\leq {\left( 1 - {\delta }_{\mathbf{x}}\right) }^{T/2}\begin{Vmatrix}{{\mathbf{x}}^{{\left( 0\right) }^{ * }} - {\mathbf{x}}^{\left( 0\right) }}\end{Vmatrix} + {\left( 1 - {\delta }_{\mathbf{y}}\right) }^{T/2}\begin{Vmatrix}{{\mathbf{y}}^{{\left( 0\right) }^{ * }} - {\mathbf{y}}^{\left( 0\right) }}\end{Vmatrix}
+$$
+
+$$
++ \mathop{\sum }\limits_{{t = 1}}^{T}{\left( 1 - {\delta }_{\mathbf{x}}\right) }^{\frac{T - t}{2}}{\Delta }_{\mathbf{x}}^{\left( t\right) } + \mathop{\sum }\limits_{{t = 1}}^{T}{\left( 1 - {\delta }_{\mathbf{y}}\right) }^{\frac{T - t}{2}}{\Delta }_{\mathbf{y}}^{\left( t\right) }. \tag{8}
+$$
+
+If additionally, ${\Delta }_{\mathbf{x}}^{\left( t\right) } \leq d$ and ${\Delta }_{\mathbf{y}}^{\left( t\right) } \leq d$ for all $t \in \left\lbrack T\right\rbrack$ , and $\delta = \min \left\{ {{\delta }_{\mathbf{y}},{\delta }_{\mathbf{x}}}\right\}$ , then:
+
+$$
+\begin{Vmatrix}{{\mathbf{x}}^{{\left( T\right) }^{ * }} - {\mathbf{x}}^{\left( T\right) }}\end{Vmatrix} + \begin{Vmatrix}{{\mathbf{y}}^{{\left( T\right) }^{ * }} - {\mathbf{y}}^{\left( T\right) }}\end{Vmatrix}
+$$
+
+$$
\leq 2{\left( 1 - \delta \right) }^{T/2}\left( {\begin{Vmatrix}{{\mathbf{x}}^{{\left( 0\right) }^{ * }} - {\mathbf{x}}^{\left( 0\right) }}\end{Vmatrix} + \begin{Vmatrix}{{\mathbf{y}}^{{\left( 0\right) }^{ * }} - {\mathbf{y}}^{\left( 0\right) }}\end{Vmatrix}}\right) + \frac{4d}{\delta }. \tag{9}
$$
+
The proofs of the above theorems are relegated to Appendix C. The theorems we have proven in this section establish the robustness of OMD dynamics for min-max games in both the pessimistic and optimistic settings by showing that the dynamics closely track the Stackelberg equilibrium in a large class of min-max games. As we are not able to extend these theoretical robustness guarantees to min-max Stackelberg games (with dependent strategy sets), we instead ran a series of experiments on (dynamic) Fisher markets, which are canonical examples of min-max Stackelberg games (Goktas and Greenwald 2021), to investigate the empirical robustness guarantees of OMD dynamics for this class of min-max Stackelberg games.
+
+## Dynamic Fisher Markets
+
+The Fisher market model, attributed to Irving Fisher (Brainard, Scarf et al. 2000), has received a great deal of attention in the literature, especially by computer scientists, as it has proven useful in the design of online marketplaces. We now study OMD dynamics in dynamic Fisher markets, which are instances of min-max Stackelberg games (Goktas and Greenwald 2021).
+
A Fisher market consists of $n$ buyers and $m$ divisible goods (Brainard, Scarf et al. 2000). Each buyer $i \in \left\lbrack n\right\rbrack$ has a budget ${b}_{i} \in {\mathbb{R}}_{ + }$ and a utility function ${u}_{i} : {\mathbb{R}}_{ + }^{m} \rightarrow \mathbb{R}$. Each good $j \in \left\lbrack m\right\rbrack$ has supply ${s}_{j} \in {\mathbb{R}}_{ + }$. A Fisher market is thus given by a tuple $(n, m, U, \mathbf{b}, \mathbf{s})$, where $U = \left\{ {{u}_{1},\ldots ,{u}_{n}}\right\}$ is a set of utility functions, one per buyer, $\mathbf{b} \in {\mathbb{R}}_{ + }^{n}$ is a vector of buyer budgets, and $\mathbf{s} \in {\mathbb{R}}_{ + }^{m}$ is a vector of good supplies. We abbreviate to $(U, \mathbf{b}, \mathbf{s})$ when $n$ and $m$ are clear from context. A dynamic Fisher market is a sequence of Fisher markets ${\left( {U}^{\left( t\right) },{\mathbf{b}}^{\left( t\right) },{\mathbf{s}}^{\left( t\right) }\right) }_{t = 1}^{T}$. An allocation $\mathbf{X} = {\left( {\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{n}\right) }^{T} \in {\mathbb{R}}_{ + }^{n \times m}$ is a map from goods to buyers, represented as a matrix s.t. ${x}_{ij} \geq 0$ denotes the amount of good $j \in \left\lbrack m\right\rbrack$ allocated to buyer $i \in \left\lbrack n\right\rbrack$. Goods are assigned prices $\mathbf{p} = {\left( {p}_{1},\ldots ,{p}_{m}\right) }^{T} \in {\mathbb{R}}_{ + }^{m}$. A tuple $\left( {{\mathbf{p}}^{ * },{\mathbf{X}}^{ * }}\right)$ is said to be a competitive (or Walrasian) equilibrium of the Fisher market $(U, \mathbf{b}, \mathbf{s})$ if 1. buyers are utility maximizing, constrained by their budget, i.e., $\forall i \in \left\lbrack n\right\rbrack ,{\mathbf{x}}_{i}^{ * } \in \arg \mathop{\max }\limits_{{\mathbf{x} : \mathbf{x} \cdot {\mathbf{p}}^{ * } \leq {b}_{i}}}{u}_{i}\left( \mathbf{x}\right)$; and 2. the market clears, i.e., $\forall j \in \left\lbrack m\right\rbrack ,{p}_{j}^{ * } > 0 \Rightarrow \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{x}_{ij}^{ * } = {s}_{j}$ and ${p}_{j}^{ * } = 0 \Rightarrow \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{x}_{ij}^{ * } \leq {s}_{j}$.
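To make the two equilibrium conditions concrete, here is a small Python check on a hypothetical $2 \times 2$ linear Fisher market whose competitive equilibrium is known in closed form. With linear utilities, budget-constrained utility maximization amounts to spending the entire budget on goods with maximal value-per-price ("bang per buck").

```python
# Checking the competitive equilibrium conditions on a hypothetical 2x2 linear
# Fisher market: v1 = (2, 1), v2 = (1, 2), b = (1, 1), unit supplies. The
# candidate equilibrium gives each buyer one unit of their preferred good at
# prices p* = (1, 1).
v = [[2.0, 1.0], [1.0, 2.0]]    # linear valuations, one row per buyer
b = [1.0, 1.0]                  # budgets
s = [1.0, 1.0]                  # supplies
p = [1.0, 1.0]                  # candidate equilibrium prices
X = [[1.0, 0.0], [0.0, 1.0]]    # candidate equilibrium allocation

# 1. Utility maximization: a buyer with linear utility spends their entire
#    budget, and only on goods with maximal bang per buck v_ij / p_j.
for i in range(2):
    best = max(v[i][j] / p[j] for j in range(2))
    assert abs(sum(X[i][j] * p[j] for j in range(2)) - b[i]) < 1e-9
    assert all(X[i][j] == 0 or v[i][j] / p[j] == best for j in range(2))

# 2. Market clearing: positively priced goods are fully allocated.
for j in range(2):
    demand = sum(X[i][j] for i in range(2))
    assert (p[j] > 0 and abs(demand - s[j]) < 1e-9) or (p[j] == 0 and demand <= s[j])

print("competitive equilibrium verified")
```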
+
Goktas and Greenwald (2021) observe that any competitive equilibrium $\left( {{\mathbf{p}}^{ * },{\mathbf{X}}^{ * }}\right)$ of a Fisher market $(U, \mathbf{b}, \mathbf{s})$ corresponds to a Stackelberg equilibrium of the following min-max Stackelberg game: ${}^{3}$
+
+$$
\mathop{\min }\limits_{{\mathbf{p} \in {\mathbb{R}}_{ + }^{m}}}\mathop{\max }\limits_{{\mathbf{X} \in {\mathbb{R}}_{ + }^{n \times m} : \mathbf{X}\mathbf{p} \leq \mathbf{b}}}\mathop{\sum }\limits_{{j \in \left\lbrack m\right\rbrack }}{s}_{j}{p}_{j} + \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{b}_{i}\log \left( {{u}_{i}\left( {\mathbf{x}}_{i}\right) }\right) . \tag{10}
$$
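A brute-force way to see that program (10) recovers equilibrium prices: for Cobb-Douglas utilities, the inner maximizer has the standard closed form $x_{ij}^* = b_i v_{ij} / p_j$ (with exponents summing to 1 per buyer), so the outer player's value function can be evaluated on a price grid. In the hypothetical market below (our own parameters), the grid minimizer lands on the closed-form equilibrium prices $p_j^* = \sum_i b_i v_{ij} / s_j$.

```python
# Grid search over the outer player's value function of program (10) for a
# hypothetical 2x2 Cobb-Douglas market; the minimizer should match the
# closed-form equilibrium prices p_j* = sum_i b_i v_ij / s_j = (1, 2).
import math
from itertools import product

v = [[0.5, 0.5], [0.25, 0.75]]   # Cobb-Douglas exponents, rows sum to 1
b = [1.0, 2.0]                   # budgets
s = [1.0, 1.0]                   # supplies

def V(p):
    """Outer value function: inner maximizer is the Cobb-Douglas demand."""
    X = [[b[i] * v[i][j] / p[j] for j in range(2)] for i in range(2)]
    u = [math.prod(X[i][j] ** v[i][j] for j in range(2)) for i in range(2)]
    return (sum(s[j] * p[j] for j in range(2))
            + sum(b[i] * math.log(u[i]) for i in range(2)))

grid = [0.05 * k for k in range(1, 81)]        # candidate prices in (0, 4]
best = min(product(grid, grid), key=V)         # grid minimizer of V
p_star = [sum(b[i] * v[i][j] for i in range(2)) / s[j] for j in range(2)]
print([round(x, 2) for x in best], [round(x, 2) for x in p_star])
```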
+
+Let $\mathcal{L} : {\mathbb{R}}_{ + }^{m} \times {\mathbb{R}}^{n \times m} \rightarrow {\mathbb{R}}_{ + }$ be the Lagrangian of the outer player's value function in Equation (10), i.e.,
+
$$
{\mathcal{L}}_{\mathbf{p}}\left( {\mathbf{X},\mathbf{\lambda }}\right) = \mathop{\sum }\limits_{{j \in \left\lbrack m\right\rbrack }}{s}_{j}{p}_{j} + \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{b}_{i}\log \left( {{u}_{i}\left( {\mathbf{x}}_{i}\right) }\right) + \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{\lambda }_{i}\left( {{b}_{i} - {\mathbf{x}}_{i} \cdot \mathbf{p}}\right) .
$$

One can show the existence of a Lagrangian solution oracle for the Lagrangian of Equation (10) such that ${\mathbf{\lambda }}^{ * } = {\mathbf{1}}_{n}$. We then have: 1. by Goktas and Greenwald's envelope theorem, the subdifferential of the outer player's value function is given by ${\nabla }_{\mathbf{p}}V\left( \mathbf{p}\right) = \mathbf{s} - \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{\mathbf{x}}_{i}^{ * }\left( \mathbf{p}\right)$, where ${\mathbf{x}}_{i}^{ * }\left( \mathbf{p}\right) \in \arg \mathop{\max }\limits_{{\mathbf{x} \in {\mathbb{R}}_{ + }^{m} : \mathbf{x} \cdot \mathbf{p} \leq {b}_{i}}}{u}_{i}\left( \mathbf{x}\right)$; and 2. given the Lagrangian solution oracle, the gradients of the Lagrangian are ${\nabla }_{\mathbf{p}}{\mathcal{L}}_{\mathbf{p}}\left( {\mathbf{X},{\mathbf{\lambda }}^{ * }}\right) = \mathbf{s} - \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{\mathbf{x}}_{i}$ and ${\nabla }_{{\mathbf{x}}_{i}}{\mathcal{L}}_{\mathbf{p}}\left( {\mathbf{X},{\mathbf{\lambda }}^{ * }}\right) = \frac{{b}_{i}}{{u}_{i}\left( {\mathbf{x}}_{i}\right) }{\nabla }_{{\mathbf{x}}_{i}}{u}_{i}\left( {\mathbf{x}}_{i}\right) - \mathbf{p}$.
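These gradient formulas can be checked numerically. The Python sketch below uses linear utilities $u_i(\mathbf{x}_i) = \mathbf{v}_i \cdot \mathbf{x}_i$ and arbitrary test points of our own choosing, and compares the closed-form gradients of $\mathcal{L}_{\mathbf{p}}$ at $\boldsymbol{\lambda}^* = \mathbf{1}$ against finite differences of the Lagrangian itself.

```python
# Finite-difference check of the Lagrangian gradients at lambda* = 1 for a
# hypothetical 2x2 market with linear utilities. Formulas being checked:
# grad_p L = s - sum_i x_i and grad_{x_i} L = (b_i / u_i(x_i)) * v_i - p.
import math

v = [[2.0, 1.0], [1.0, 3.0]]   # valuations (illustrative)
b = [1.0, 2.0]                 # budgets
s = [1.0, 1.0]                 # supplies

def L(p, X):
    """Lagrangian of the outer player's value function, with lambda* = 1."""
    def util(i):
        return sum(v[i][j] * X[i][j] for j in range(2))
    return (sum(s[j] * p[j] for j in range(2))
            + sum(b[i] * math.log(util(i)) for i in range(2))
            + sum(b[i] - sum(X[i][j] * p[j] for j in range(2)) for i in range(2)))

p = [1.5, 0.7]                          # arbitrary positive test point
X = [[0.4, 0.6], [0.5, 0.3]]
h = 1e-6                                # finite-difference step

for j in range(2):                      # check grad_p L = s_j - sum_i x_ij
    analytic = s[j] - sum(X[i][j] for i in range(2))
    p2 = list(p); p2[j] += h
    assert abs((L(p2, X) - L(p, X)) / h - analytic) < 1e-4

for i in range(2):                      # check grad_{x_ij} L = b_i v_ij / u_i - p_j
    ui = sum(v[i][k] * X[i][k] for k in range(2))
    for j in range(2):
        analytic = b[i] * v[i][j] / ui - p[j]
        X2 = [row[:] for row in X]; X2[i][j] += h
        assert abs((L(p, X2) - L(p, X)) / h - analytic) < 1e-4

print("gradients match")
```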
+
+We first consider OMD dynamics for Fisher markets in the pessimistic setting, in which the outer player determines their strategy via online projected gradient descent and the inner player best-responds. In this setting, we obtain a dynamic version of a natural price adjustment process known as tâtonnement (Walras 1969), which was first studied by Cheung, Hoefer, and Nakhe (2019) (Algorithm 3, Appendix D).
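For Cobb-Douglas buyers the best response has a closed form, so a tâtonnement process in the style of Algorithm 3 reduces to a few lines. The sketch below (illustrative parameters, not the paper's experimental setup) runs the price update $\mathbf{p} \leftarrow \max(\mathbf{p} - \eta(\mathbf{s} - \sum_i \mathbf{x}_i^*(\mathbf{p})), 0)$ on a static $2 \times 2$ market and recovers the known equilibrium prices.

```python
# Tâtonnement on a static 2x2 Cobb-Douglas Fisher market with illustrative
# parameters. With exponents summing to 1 per buyer, the best response is
# x_ij*(p) = b_i v_ij / p_j, and equilibrium prices are p_j* = sum_i b_i v_ij / s_j.
v = [[0.7, 0.3], [0.2, 0.8]]    # Cobb-Douglas exponents, rows sum to 1
b = [1.0, 1.0]                  # budgets
s = [1.0, 1.0]                  # supplies
eta = 0.2                       # assumed step size

p = [2.0, 0.5]                  # arbitrary initial prices
for _ in range(300):
    demand = [sum(b[i] * v[i][j] / p[j] for i in range(2)) for j in range(2)]
    # Gradient step on the value function: grad_p V = s - demand.
    p = [max(p[j] - eta * (s[j] - demand[j]), 1e-6) for j in range(2)]

p_star = [sum(b[i] * v[i][j] for i in range(2)) / s[j] for j in range(2)]
print([round(pj, 3) for pj in p], [round(pj, 3) for pj in p_star])
```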
+
+We then consider OMD dynamics in the optimistic setting, in which case both the outer and inner players employ online projected gradient descent, which yields myopic best-response dynamics (Monderer and Shapley 1996) (Algorithm 4, Appendix D). In words, at each time step, the (fictional Walrasian) auctioneer takes a gradient descent step to minimize its regret, and then all the buyers take a gradient ascent step to minimize their Lagrangian regret. These gradient descent-ascent dynamics can be seen as myopic best-response dynamics for sellers and buyers who are both boundedly rational (Camerer 1998).
+
**Experiments** In order to better understand the robustness properties of Algorithms 3 and 4 in a dynamic min-max Stackelberg game that is subject to perturbation across time, we ran a series of experiments with dynamic Fisher markets assuming three different classes of utility functions. ${}^{4}$ Each utility structure endows Equation (10) with different smoothness properties, which allows us to compare the efficiency of the algorithms under varying conditions. Let ${\mathbf{v}}_{i} \in {\mathbb{R}}^{m}$ be a vector of valuation parameters that describes the utility function of buyer $i \in \left\lbrack n\right\rbrack$. We consider the following utility function classes: 1. linear: ${u}_{i}\left( {\mathbf{x}}_{i}\right) = \mathop{\sum }\limits_{{j \in \left\lbrack m\right\rbrack }}{v}_{ij}{x}_{ij}$; 2. Cobb-Douglas: ${u}_{i}\left( {\mathbf{x}}_{i}\right) = \mathop{\prod }\limits_{{j \in \left\lbrack m\right\rbrack }}{x}_{ij}^{{v}_{ij}}$; and 3. Leontief: ${u}_{i}\left( {\mathbf{x}}_{i}\right) = \mathop{\min }\limits_{{j \in \left\lbrack m\right\rbrack }}\left\{ \frac{{x}_{ij}}{{v}_{ij}}\right\}$. To simulate the dynamic Fisher markets, we fix a range for every market parameter and draw from that range uniformly at random during each iteration. Our goal is to understand how closely OMD dynamics track the Stackelberg equilibria of the game as the latter vary with time. To do so, we compare the distance between the iterates $\left( {{\mathbf{p}}^{\left( t\right) },{\mathbf{X}}^{\left( t\right) }}\right)$ computed by the algorithms and the equilibrium of the game at each iteration $t$.
This distance is measured as ${\begin{Vmatrix}{\mathbf{p}}^{{\left( t\right) }^{ * }} - {\mathbf{p}}^{\left( t\right) }\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{X}}^{{\left( t\right) }^{ * }} - {\mathbf{X}}^{\left( t\right) }\end{Vmatrix}}_{2}$ , where $\left( {{\mathbf{p}}^{{\left( t\right) }^{ * }},{\mathbf{X}}^{{\left( t\right) }^{ * }}}\right)$ is the Stackelberg equilibrium of the Fisher market $\left( {{U}^{\left( t\right) },{\mathbf{b}}^{\left( t\right) },{\mathbf{s}}^{\left( t\right) }}\right)$ at time $t \in \left\lbrack T\right\rbrack$ .
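A miniature version of this experiment can be scripted directly: perturb the budgets of a Cobb-Douglas market each round, take one tâtonnement price step, and measure the distance to the closed-form equilibrium prices. All parameters below are our own illustrative choices; the point is only that, after a transient, the iterates remain in a small ball around the moving equilibrium, as in Figures 1 and 2.

```python
# A toy dynamic 2x2 Cobb-Douglas Fisher market: budgets are re-drawn each
# round, and one tâtonnement step is taken per round. For Cobb-Douglas
# utilities the equilibrium prices are p_j*(t) = sum_i b_i(t) v_ij / s_j, so
# the tracking distance ||p(t) - p*(t)|| can be measured directly.
import math
import random

random.seed(1)
v = [[0.6, 0.4], [0.3, 0.7]]    # Cobb-Douglas exponents, rows sum to 1
s = [1.0, 1.0]                  # supplies
eta = 0.3                       # assumed step size
p = [3.0, 0.2]                  # deliberately far from equilibrium

distances = []
for t in range(400):
    b = [1.0 + 0.05 * random.random() for _ in range(2)]   # perturbed budgets
    demand = [sum(b[i] * v[i][j] / p[j] for i in range(2)) for j in range(2)]
    p = [max(p[j] - eta * (s[j] - demand[j]), 1e-6) for j in range(2)]
    p_star = [sum(b[i] * v[i][j] for i in range(2)) / s[j] for j in range(2)]
    distances.append(math.dist(p, p_star))

# Early iterates are far from equilibrium; late iterates track it closely.
print(max(distances[200:]) < 0.5, max(distances[:5]) > 1.0)
```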
+
+In our experiments, we ran Algorithms 3 and 4 on 100 randomly initialized dynamic Fisher markets. We depict the distance to equilibrium at each iteration for a randomly chosen experiment in Figures 1 and 2. In these figures, we observe that our OMD dynamics are closely tracking the Stackelberg equilibria as they vary with each iteration. A more detailed description of our experimental setup can be found in Appendix E.
+
+---
+
${}^{3}$ The first term in this program is slightly different from the first term in the program presented by Goktas and Greenwald (2021), since supply is assumed to be 1 in their work.
+
${}^{4}$ Our code can be found at https://anonymous.4open.science/r/Dynamic-Minmax-Games-8153/.
+
+---
+
+
+
+Figure 1: In blue, we depict a trajectory of distances between computed allocation-price pairs and equilibrium allocation-price pairs, when Algorithm 3 is run on randomly initialized dynamic linear, Cobb-Douglas, and Leontief Fisher markets. In red, we plot an arbitrary $O\left( {1/\sqrt{T}}\right)$ function.
+
+Figure 2: In blue, we depict a trajectory of distances between computed allocation-price pairs and equilibrium allocation-price pairs, when Algorithm 4 is run on randomly initialized dynamic linear, Cobb-Douglas, and Leontief Fisher markets. In red, we plot an arbitrary $O\left( {1/\sqrt{T}}\right)$ function.
+
+We observe from Figures 1 and 2 that for both Algorithms 3 and 4, we obtain an empirical convergence rate relatively close to $O\left( {1/\sqrt{T}}\right)$ under Cobb-Douglas utilities, and a slightly slower empirical convergence rate under linear utilities. Recall that $O\left( {1/\sqrt{T}}\right)$ is the convergence rate guarantee we obtained for both algorithms, assuming a fixed learning rate in a static Fisher market (Corollaries 7 and 8).
+
Dynamic Fisher markets with Leontief utilities, in which the objective function is not differentiable, are the hardest markets of the three for our algorithms to solve. Still, we only see a slightly slower than $O\left( {1/\sqrt{T}}\right)$ empirical convergence rate for both Algorithms 3 and 4. In these experiments, the convergence curve generated by Algorithm 4 is less erratic than the one generated by Algorithm 3. Due to the non-differentiability of the objective function, the gradient ascent step in Algorithm 4 for buyers with Leontief utilities is very small, effectively dampening any potentially erratic changes in the iterates.
+
+Our experiments suggest that even when the game changes at each iteration, OMD dynamics (Algorithms 3 and 4 - Appendix D) are robust enough to closely track the changing Stackelberg equilibria of dynamic Fisher markets. We note that tâtonnement dynamics (Algorithm 3) seem to be more robust than myopic best response dynamics (Algorithm 4), i.e., the distance to equilibrium allocations is smaller at each iteration of tâtonnement. This result is not surprising, as tâtonnement computes a utility-maximizing allocation for the buyers at each time step. Even though Theorems 10 and 11 only provide theoretical guarantees on the robustness of OMD dynamics in dynamic min-max games (with independent strategy sets), it seems like similar theoretical robustness results may be attainable in dynamic min-max Stackelberg games (with dependent strategy sets).
+
+## 5 Conclusion
+
We began this paper by considering no-regret learning dynamics for min-max Stackelberg games in two settings: a pessimistic setting in which the outer player is a no-regret learner and the inner player best responds, and an optimistic setting in which both players are no-regret learners. For both of these settings, we proved that no-regret learning dynamics converge to a Stackelberg equilibrium of the game. We then specialized the no-regret algorithm employed by the players to online mirror descent (OMD), which yielded two known algorithms, namely max-oracle gradient descent (Jin, Netrapalli, and Jordan 2020) and nested GDA (Goktas and Greenwald 2021), in the pessimistic setting, and a new simultaneous GDA-like algorithm (Nedic and Ozdaglar 2009), which we call Lagrangian GDA, in the optimistic setting. As these algorithms are no-regret learning algorithms, our previous theorems imply convergence to Stackelberg equilibria in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations. Finally, we investigated the robustness of OMD dynamics to perturbations in the parameters of a min-max Stackelberg game. To do so, we analyzed how closely OMD dynamics track Stackelberg equilibria in dynamic min-max Stackelberg games. We proved that in min-max games (with independent strategy sets) OMD dynamics closely track the changing Stackelberg equilibria of a game. As we were not able to extend these theoretical robustness guarantees to min-max Stackelberg games (with dependent strategy sets), we instead ran a series of experiments on dynamic Fisher markets, which are canonical examples of min-max Stackelberg games. Our experiments suggest that OMD dynamics are robust for min-max Stackelberg games, so that perhaps the robustness guarantees we have provided for OMD dynamics in min-max games can be extended to min-max Stackelberg games. The theory developed in this paper opens the door to extending the myriad applications of Stackelberg games in AI to incorporate dependent strategy sets.
Such models promise to be more expressive, and as a result could provide decision makers with better solutions to problems in security, environmental protection, etc.
+
## References
+
+Alkousa, M.; Dvinskikh, D.; Stonyakin, F.; Gasnikov, A.; and Kovalev, D. 2020. Accelerated methods for composite non-bilinear saddle point problem. arXiv:1906.03620.
+
+Arrow, K.; and Debreu, G. 1954. Existence of an equilibrium for a competitive economy. Econometrica: Journal of the Econometric Society, 265-290.
+
+Boyd, S.; Boyd, S. P.; and Vandenberghe, L. 2004. Convex optimization. Cambridge university press.
+
+Brainard, W. C.; Scarf, H. E.; et al. 2000. How to compute equilibrium prices in 1891. Citeseer.
+
+Cai, Q.; Hong, M.; Chen, Y.; and Wang, Z. 2019. On the Global Convergence of Imitation Learning: A Case for Linear Quadratic Regulator. arXiv:1901.03674.
+
+Camerer, C. 1998. Bounded rationality in individual decision making. Experimental economics, 1(2): 163-183.
+
+Cheung, Y. K.; Hoefer, M.; and Nakhe, P. 2019. Tracing Equilibrium in Dynamic Markets via Distributed Adaptation. In Proceedings of the 18th International Conference on Autonomous Agents and MultiAgent Systems, AA-MAS '19, 1225-1233. Richland, SC: International Foundation for Autonomous Agents and Multiagent Systems. ISBN 9781450363099.
+
+Dai, B.; Dai, H.; Gretton, A.; Song, L.; Schuurmans, D.; and He, N. 2019. Kernel Exponential Family Estimation via Doubly Dual Embedding. In Chaudhuri, K.; and Sugiyama, M., eds., Proceedings of the Twenty-Second International Conference on Artificial Intelligence and Statistics, volume 89 of Proceedings of Machine Learning Research, 2321-2330. PMLR.
+
+Dai, B.; Shaw, A.; Li, L.; Xiao, L.; He, N.; Liu, Z.; Chen, J.; and Song, L. 2018. SBEED: Convergent Reinforcement Learning with Nonlinear Function Approximation. In Dy, J.; and Krause, A., eds., Proceedings of the 35th International Conference on Machine Learning, volume 80 of Proceedings of Machine Learning Research, 1125-1134. PMLR.
+
+Danskin, J. M. 1966. The Theory of Max-Min, with Applications. SIAM Journal on Applied Mathematics, 14(4): 641-664.
+
Devanur, N. R.; Papadimitriou, C. H.; Saberi, A.; and Vazirani, V. V. 2002. Market equilibrium via a primal-dual-type algorithm. In The 43rd Annual IEEE Symposium on Foundations of Computer Science, 2002. Proceedings., 389-395.
+
+Diamond, S.; and Boyd, S. 2016. CVXPY: A Python-embedded modeling language for convex optimization. Journal of Machine Learning Research, 17(83): 1-5.
+
+Edwards, H.; and Storkey, A. 2016. Censoring Representations with an Adversary. arXiv:1511.05897.
+
+Facchinei, F.; and Kanzow, C. 2007. Generalized Nash equilibrium problems. 4or, 5(3): 173-210.
+
+Facchinei, F.; and Kanzow, C. 2010. Generalized Nash equilibrium problems. Annals of Operations Research, 175(1): 177-211.
+
+Fang, F.; and Nguyen, T. H. 2016. Green Security Games: Apply Game Theory to Addressing Green Security Challenges. SIGecom Exch., 15(1): 78-83.
+
Freund, Y.; and Schapire, R. E. 1996. Game theory, on-line prediction and boosting. In Proceedings of the ninth annual conference on Computational learning theory, 325-332.
+
+Gidel, G.; Berard, H.; Vignoud, G.; Vincent, P.; and Lacoste-Julien, S. 2020. A Variational Inequality Perspective on Generative Adversarial Networks. arXiv:1802.10551.
+
+Goktas, D.; and Greenwald, A. 2021. Convex-Concave Min-Max Stackelberg Games. arXiv:3961081.
+
+Hamedani, E. Y.; and Aybat, N. S. 2018. A primal-dual algorithm for general convex-concave saddle point problems. arXiv preprint arXiv:1803.01401, 2.
+
+Hamedani, E. Y.; Jalilzadeh, A.; Aybat, N. S.; and Shanbhag, U. V. 2018. Iteration Complexity of Randomized Primal-Dual Methods for Convex-Concave Saddle Point Problems. arXiv:1806.04118.
+
Harris, C. R.; Millman, K. J.; van der Walt, S. J.; Gommers, R.; Virtanen, P.; Cournapeau, D.; Wieser, E.; Taylor, J.; Berg, S.; Smith, N. J.; Kern, R.; Picus, M.; Hoyer, S.; van Kerkwijk, M. H.; Brett, M.; Haldane, A.; Fernandez del Rio, J.; Wiebe, M.; Peterson, P.; Gérard-Marchant, P.; Sheppard, K.; Reddy, T.; Weckesser, W.; Abbasi, H.; Gohlke, C.; and Oliphant, T. E. 2020. Array programming with NumPy. Nature, 585: 357-362.
+
+Hunter, J. D. 2007. Matplotlib: A 2D graphics environment. Computing in Science & Engineering, 9(3): 90-95.
+
+Ibrahim, A.; Azizian, W.; Gidel, G.; and Mitliagkas, I. 2019. Lower bounds and conditioning of differentiable games. arXiv preprint arXiv:1906.07300.
+
+Jin, C.; Netrapalli, P.; and Jordan, M. I. 2020. What is Local Optimality in Nonconvex-Nonconcave Minimax Optimization? arXiv:1902.00618.
+
+Juditsky, A.; Nemirovski, A.; et al. 2011. First order methods for nonsmooth convex large-scale optimization, ii: utilizing problems structure. Optimization for Machine Learning, 30(9): 149-183.
+
+Kakade, S. M.; Shalev-Shwartz, S.; and Tewari, A. 2012. Regularization techniques for learning with matrices. The Journal of Machine Learning Research, 13(1): 1865-1890.
+
+Lin, T.; Jin, C.; and Jordan, M. 2020a. On gradient descent ascent for nonconvex-concave minimax problems. In International Conference on Machine Learning, 6083-6093. PMLR.
+
+Lin, T.; Jin, C.; and Jordan, M. I. 2020b. Near-optimal algorithms for minimax optimization. In Conference on Learning Theory, 2738-2779. PMLR.
+
+Lu, S.; Tsaknakis, I.; and Hong, M. 2019. Block alternating optimization for non-convex min-max problems: algorithms and applications in signal processing and communications. In ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 4754-4758. IEEE.
+
+Madras, D.; Creager, E.; Pitassi, T.; and Zemel, R. 2018. Learning Adversarially Fair and Transferable Representations. arXiv:1802.06309.
+
+Milgrom, P.; and Segal, I. 2002. Envelope theorems for arbitrary choice sets. Econometrica, 70(2): 583-601.
+
+Mokhtari, A.; Ozdaglar, A.; and Pattathil, S. 2020. Convergence Rate of $\mathcal{O}\left( {1/k}\right)$ for Optimistic Gradient and Extra-gradient Methods in Smooth Convex-Concave Saddle Point Problems. arXiv:1906.01115.
+
+Monderer, D.; and Shapley, L. S. 1996. Potential games. Games and economic behavior, 14(1): 124-143.
+
+Nedic, A.; and Ozdaglar, A. 2009. Subgradient methods for saddle-point problems. Journal of optimization theory and applications, 142(1): 205-228.
+
Nemirovski, A. 2004. Prox-method with rate of convergence $\mathrm{O}\left( {1/\mathrm{t}}\right)$ for variational inequalities with Lipschitz continuous monotone operators and smooth convex-concave saddle point problems. SIAM Journal on Optimization, 15(1): 229-251.
+
+Nesterov, Y. 2007. Dual extrapolation and its applications to solving variational inequalities and related problems. Mathematical Programming, 109(2): 319-344.
+
+Neumann, J. v. 1928. Zur theorie der gesellschaftsspiele. Mathematische annalen, 100(1): 295-320.
+
+Nguyen, T. H.; Kar, D.; Brown, M.; Sinha, A.; Jiang, A. X.; and Tambe, M. 2016. Towards a Science of Security Games. In Toni, B., ed., Mathematical Sciences with Multidisciplinary Applications, 347-381. Cham: Springer International Publishing. ISBN 978-3-319-31323-8.
+
Nouiehed, M.; Sanjabi, M.; Huang, T.; Lee, J. D.; and Razaviyayn, M. 2019. Solving a class of non-convex min-max games using iterative first order methods. arXiv preprint arXiv:1902.08297.
+
+Ostrovskii, D. M.; Lowy, A.; and Razaviyayn, M. 2020. Efficient search of first-order nash equilibria in nonconvex-concave smooth min-max problems. arXiv preprint arXiv:2002.07919.
+
+Ouyang, Y.; and Xu, Y. 2018. Lower complexity bounds of first-order methods for convex-concave bilinear saddle-point problems. arXiv:1808.02901.
+
+The pandas development team. 2020. pandas-dev/pandas: Pandas.
+
+Rafique, H.; Liu, M.; Lin, Q.; and Yang, T. 2019. Non-Convex Min-Max Optimization: Provable Algorithms and Applications in Machine Learning. arXiv:1810.02060.
+
+Sanjabi, M.; Ba, J.; Razaviyayn, M.; and Lee, J. D. 2018a. On the Convergence and Robustness of Training GANs with Regularized Optimal Transport. arXiv:1802.08249.
+
+Sanjabi, M.; Ba, J.; Razaviyayn, M.; and Lee, J. D. 2018b. On the Convergence and Robustness of Training GANs with Regularized Optimal Transport. In Proceedings of the 32nd International Conference on Neural Information Processing Systems, NIPS'18, 7091-7101. Red Hook, NY, USA: Curran Associates Inc.
+
+Sattigeri, P.; Hoffman, S. C.; Chenthamarakshan, V.; and Varshney, K. R. 2018. Fairness GAN. arXiv:1805.09910.
+
+Shalev-Shwartz, S.; et al. 2011. Online learning and online convex optimization. Foundations and trends in Machine Learning, 4(2): 107-194.
+
+Sinha, A.; Fang, F.; An, B.; Kiekintveld, C.; and Tambe, M. 2018. Stackelberg security games: Looking beyond a decade of success. IJCAI.
+
+Sinha, A.; Namkoong, H.; Volpi, R.; and Duchi, J. 2020. Certifying Some Distributional Robustness with Principled Adversarial Training. arXiv:1710.10571.
+
+Slater, M. 1959. Lagrange Multipliers Revisited. Cowles Foundation Discussion Papers 80, Cowles Foundation for Research in Economics, Yale University.
+
+Slater, M. 2014. Lagrange Multipliers Revisited, 293-306. Basel: Springer Basel. ISBN 978-3-0348-0439-4.
+
+Thekumparampil, K. K.; Jain, P.; Netrapalli, P.; and Oh, S. 2019. Efficient Algorithms for Smooth Minimax Optimization. arXiv:1907.01543.
+
+Tseng, P. 1995. On linear convergence of iterative methods for the variational inequality problem. Journal of Computational and Applied Mathematics, 60(1): 237-252. Proceedings of the International Meeting on Linear/Nonlinear Iterative Methods and Verification of Solution.
+
+Tseng, P. 2008. On accelerated proximal gradient methods for convex-concave optimization. submitted to SIAM Journal on Optimization, 1.
+
+Van Rossum, G.; and Drake Jr, F. L. 1995. Python tutorial. Centrum voor Wiskunde en Informatica Amsterdam, The Netherlands.
+
+Von Stackelberg, H. 1934. Marktform und gleichgewicht. J. springer.
+
+Walras, L. 1969. Elements of Pure Economics; or, The Theory of Social Wealth. Translated by William Jaffé., volume 2. Orion Editions.
+
+Yurii Nesterov, L. S. 2011. Solving strongly monotone variational and quasi-variational inequalities. Discrete & Continuous Dynamical Systems, 31(4): 1383-1396.
+
+Zhang, J.; Hong, M.; and Zhang, S. 2020. On Lower Iteration Complexity Bounds for the Saddle Point Problems. arXiv:1912.07481.
+
+Zhao, R. 2019. Optimal algorithms for stochastic three-composite convex-concave saddle point problems. arXiv preprint arXiv:1903.01687.
+
+Zhao, R. 2020. A Primal Dual Smoothing Framework for Max-Structured Nonconvex Optimization. arXiv:2003.04375.
+
+Zinkevich, M. 2003. Online convex programming and generalized infinitesimal gradient ascent. In Proceedings of the 20th International Conference on Machine Learning (ICML-03), 928-936.
+
+## A Background
+
+Notation We use Roman uppercase letters to denote sets (e.g., $X$ ), bold uppercase letters to denote matrices (e.g., $\mathbf{X}$ ), bold lowercase letters to denote vectors (e.g., $\mathbf{p}$ ), and Roman lowercase letters to denote scalars (e.g., $c$ ). We denote the $i$ th row vector of a matrix (e.g., $\mathbf{X}$ ) by the corresponding bold lowercase letter with subscript $i$ (e.g., ${\mathbf{x}}_{i}$ ). Similarly, we denote the $j$ th entry of a vector (e.g., $\mathbf{p}$ or ${\mathbf{x}}_{i}$ ) by the corresponding Roman lowercase letter with subscript $j$ (e.g., ${p}_{j}$ or ${x}_{ij}$ ). We denote the vector of ones of size $n$ by ${\mathbf{1}}_{n}$ . We denote the set of integers $\{ 1,\ldots , n\}$ by $\left\lbrack n\right\rbrack$ , the set of natural numbers by $\mathbb{N}$ , the set of positive natural numbers by ${\mathbb{N}}_{ + }$ , the set of real numbers by $\mathbb{R}$ , the set of non-negative real numbers by ${\mathbb{R}}_{ + }$ , and the set of strictly positive real numbers by ${\mathbb{R}}_{+ + }$ . We denote the orthogonal projection operator onto a convex set $C$ by ${\Pi }_{C}$ , i.e., ${\Pi }_{C}\left( \mathbf{x}\right) = \arg \mathop{\min }\limits_{{\mathbf{y} \in C}}\parallel \mathbf{x} - \mathbf{y}{\parallel }^{2}$ . Given a sequence of iterates ${\left\{ {\mathbf{z}}^{\left( t\right) }\right\} }_{t = 1}^{T} \subset Z$ , we denote the average iterate by ${\overline{\mathbf{z}}}^{\left( T\right) } = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathbf{z}}^{\left( t\right) }.$
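As a quick illustration of the projection operator ${\Pi }_{C}$ (ours, not from the paper), the following sketch computes Euclidean projections onto two common convex sets, a box and a ball, with NumPy:

```python
import numpy as np

def project_box(x, lo, hi):
    # Pi_C for the box C = [lo, hi]^n: the Euclidean projection
    # decomposes coordinate-wise into clipping.
    return np.clip(x, lo, hi)

def project_ball(x, radius=1.0):
    # Pi_C for the ball C = {y : ||y||_2 <= radius}: points outside
    # are rescaled onto the boundary; points inside are unchanged.
    norm = np.linalg.norm(x)
    return x if norm <= radius else (radius / norm) * x

x = np.array([2.0, -0.5])
print(project_box(x, 0.0, 1.0))  # clips each coordinate into [0, 1]
print(project_ball(x))           # rescales x onto the unit sphere
```

Both projections are exactly the minimizers of $\parallel \mathbf{x} - \mathbf{y}{\parallel }^{2}$ over their respective sets, which is what makes them usable inside the gradient methods below.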
+
+Online Convex Optimization An online convex optimization problem (OCP) is a decision problem in a dynamic environment that comprises a finite time horizon $T$ , a compact, convex feasible set $X$ , and a sequence of convex differentiable loss functions ${\left\{ {\ell }^{\left( t\right) }\right\} }_{t = 1}^{T}$ , where ${\ell }^{\left( t\right) } : X \rightarrow \mathbb{R}$ for all $t \in \left\lbrack T\right\rbrack$ . A solution to an OCP is a sequence ${\left\{ {\mathbf{x}}^{\left( t\right) }\right\} }_{t = 1}^{T} \subset X$ . A preferred solution is one that minimizes the average regret ${\operatorname{Regret}}^{\left( T\right) }\left( \mathbf{x}\right) = \mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{T}{\ell }^{\left( t\right) }\left( {\mathbf{x}}^{\left( t\right) }\right) - \mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{T}{\ell }^{\left( t\right) }\left( \mathbf{x}\right)$ over all comparators $\mathbf{x} \in X$ . An algorithm $\mathcal{A}$ that takes as input a sequence of loss functions and outputs decisions such that $\mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{T}{\ell }^{\left( t\right) }\left( {{\mathcal{A}}_{t}\left( {\{ {\ell }^{\left( t\right) }{\} }_{t = 1}^{T}}\right) }\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{T}{\ell }^{\left( t\right) }\left( \mathbf{x}\right) \rightarrow 0$ as $T \rightarrow \infty$ is called a no-regret algorithm.
+
+A first-order method for solving OCPs is Online Mirror Descent (OMD). Starting from initial iterates ${\mathbf{u}}^{\left( 0\right) } = \mathbf{0}$ and ${\mathbf{x}}^{\left( 0\right) } \in X$ , OMD performs the following update in the dual space ${X}^{ * }$ at each time step $t$ : ${\mathbf{u}}^{\left( t + 1\right) } = {\mathbf{u}}^{\left( t\right) } - \eta {\nabla }_{\mathbf{x}}{\ell }^{\left( t\right) }\left( {\mathbf{x}}^{\left( t\right) }\right)$ , and then projects the iterate computed in the dual space back to the primal space $X$ : ${\mathbf{x}}^{\left( t + 1\right) } = \arg \mathop{\min }\limits_{{\mathbf{x} \in X}}\left\{ {R\left( \mathbf{x}\right) - \left\langle {{\mathbf{u}}^{\left( t + 1\right) },\mathbf{x}}\right\rangle }\right\}$ , where $R : X \rightarrow \mathbb{R}$ is a strongly-convex differentiable regularizer. When $R\left( \mathbf{x}\right) = \frac{1}{2}\parallel \mathbf{x}{\parallel }_{2}^{2}$ , OMD reduces to projected online gradient descent, given by the update rule ${\mathbf{x}}^{\left( t + 1\right) } = {\Pi }_{X}\left( {{\mathbf{x}}^{\left( t\right) } - \eta {\nabla }_{\mathbf{x}}{\ell }^{\left( t\right) }\left( {\mathbf{x}}^{\left( t\right) }\right) }\right)$ . The following theorem bounds the average regret of OMD (Kakade, Shalev-Shwartz, and Tewari 2012):
+
+Theorem 12. Let $c = \mathop{\max }\limits_{{\mathbf{x} \in X}}\parallel \mathbf{x}\parallel$ , and let ${\left\{ {\ell }^{\left( t\right) }\right\} }_{t}$ be a sequence of loss functions ${\ell }^{\left( t\right) } : {\mathbb{R}}^{n} \rightarrow \mathbb{R}$ , each $\ell$ -Lipschitz with respect to the dual norm $\parallel \cdot {\parallel }_{ * }$ , for all $t \in {\mathbb{N}}_{ + }$ . Then, if $\eta = \frac{c}{\ell \sqrt{2T}}$ , projected online gradient descent achieves average regret bounded as follows: $\mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{T}{\ell }^{\left( t\right) }\left( {\mathbf{x}}^{\left( t\right) }\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{T}{\ell }^{\left( t\right) }\left( \mathbf{x}\right) \leq c\ell \sqrt{\frac{2}{T}}.$
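To make Theorem 12 concrete, the sketch below (an illustration of ours, with arbitrarily chosen dimensions and random linear losses) runs projected online gradient descent with the prescribed step size $\eta = c/(\ell\sqrt{2T})$ and checks the average-regret bound numerically. For linear losses $\ell^{(t)}(\mathbf{x}) = \langle \mathbf{g}_t, \mathbf{x} \rangle$ over a Euclidean ball of radius $c$, the best fixed comparator is available in closed form, which makes the regret directly computable:

```python
import numpy as np

rng = np.random.default_rng(0)
T, n, c = 2000, 5, 1.0
G = rng.normal(size=(T, n))             # g_t: gradients of the linear losses
lip = np.linalg.norm(G, axis=1).max()   # Lipschitz constant w.r.t. ||.||_2
eta = c / (lip * np.sqrt(2 * T))        # step size from Theorem 12

def proj(x):
    # Euclidean projection onto the ball {x : ||x||_2 <= c}.
    nrm = np.linalg.norm(x)
    return x if nrm <= c else (c / nrm) * x

x = np.zeros(n)
cum_loss = 0.0
for g in G:
    cum_loss += g @ x                   # suffer loss l_t(x_t) = <g_t, x_t>
    x = proj(x - eta * g)               # projected online gradient step

# For linear losses on the ball, min_x (1/T) sum_t <g_t, x> = -c * ||gbar||,
# attained at x = -c * gbar / ||gbar||.
gbar = G.mean(axis=0)
avg_regret = cum_loss / T + c * np.linalg.norm(gbar)
bound = c * lip * np.sqrt(2.0 / T)
print(avg_regret, "<=", bound)
```

The bound holds here because the initial iterate $\mathbf{x}^{(0)} = \mathbf{0}$ is within distance $c$ of every comparator, matching the assumption underlying the theorem.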
+
+## B Additional Related Work
+
+Related Work Stackelberg games (Von Stackelberg 1934) have found important applications in the domain of security (e.g., (Nguyen et al. 2016; Sinha et al. 2018)) and environmental protection (e.g., (Fang and Nguyen 2016)). These applications have thus far been modelled as Stackelberg games with independent strategy sets. Yet, the increased expressiveness of Stackelberg games with dependent strategy sets may make them a better model of the real world, as they provide the leader with more power to achieve a better outcome by constraining the follower's choices.
+
+The study of algorithms that compute competitive equilibria in Fisher markets was initiated by Devanur et al. (Devanur et al. 2002), who provided a polynomial-time method for solving these markets assuming linear utilities. More recently, Cheung, Hoefer, and Nakhe (Cheung, Hoefer, and Nakhe 2019) studied two price adjustment processes, tâtonnement and proportional response dynamics, in dynamic Fisher markets and showed that these price adjustment processes track the equilibrium of Fisher markets closely even when the market is subject to change.
+
+Additional Related Work Much progress has been made recently in solving min-max games with independent strategy sets, both in the convex-concave case and in non-convex-concave case. We provide a survey of the literature as presented by Goktas and Greenwald in what follows. For the former case, when $f$ is ${\mu }_{\mathbf{x}}$ -strongly-convex in $\mathbf{x}$ and ${\mu }_{\mathbf{y}}$ -strongly-concave in $\mathbf{y}$ , Tseng (Tseng 1995), Yurii Nesterov (Yurii Nesterov 2011), and Gidel et al. (Gidel et al. 2020) proposed variational inequality methods, and Mokhtari, Ozdaglar, and Pattathil (Mokhtari, Ozdaglar, and Pattathil 2020), gradient-descent-ascent (GDA)-based methods, all of which compute a solution in $\widetilde{O}\left( {{\mu }_{\mathbf{y}} + {\mu }_{\mathbf{x}}}\right)$ iterations. These upper bounds were recently complemented by the lower bound of $\widetilde{\Omega }\left( \sqrt{{\mu }_{\mathbf{y}}{\mu }_{\mathbf{x}}}\right)$ , shown by Ibrahim et al. (Ibrahim et al. 2019) and Zhang, Hong, and Zhang (Zhang, Hong, and Zhang 2020). Subsequently, Lin, Jin, and Jordan (Lin, Jin, and Jordan 2020b) and Alkousa et al. (Alkousa et al. 2020) analyzed algorithms that converge in $\widetilde{O}\left( \sqrt{{\mu }_{\mathbf{y}}{\mu }_{\mathbf{x}}}\right)$ and $\widetilde{O}\left( {\min \left\{ {{\mu }_{\mathbf{x}}\sqrt{{\mu }_{\mathbf{y}}},{\mu }_{\mathbf{y}}\sqrt{{\mu }_{\mathbf{x}}}}\right\} }\right)$ iterations, respectively.
+
+For the special case where $f$ is ${\mu }_{\mathbf{x}}$ -strongly convex in $\mathbf{x}$ and linear in $\mathbf{y}$ , Juditsky, Nemirovski et al. (Juditsky, Nemirovski et al. 2011), Hamedani and Aybat (Hamedani and Aybat 2018), and Zhao (Zhao 2019) all present methods that converge to an $\varepsilon$ -approximate solution in $O\left( \sqrt{{\mu }_{x}/\varepsilon }\right)$ iterations. When the strong concavity or linearity assumptions of $f$ on $\mathbf{y}$ are dropped, and $f$ is assumed to be ${\mu }_{\mathbf{x}}$ -strongly-convex in $\mathbf{x}$ but only concave in $\mathbf{y}$ , Thekumparampil et al. (Thekumparampil et al. 2019) provide an algorithm that converges to an $\varepsilon$ -approximate solution in $\widetilde{O}\left( {{\mu }_{x}/\varepsilon }\right)$ iterations, and Ouyang and Xu (Ouyang and Xu 2018) provide a lower bound of $\widetilde{\Omega }\left( \sqrt{{\mu }_{x}/\varepsilon }\right)$ iterations on this same computation. Lin, Jin, and Jordan (Lin, Jin, and Jordan 2020b) then went on to develop a faster algorithm, with iteration complexity of $\widetilde{O}\left( \sqrt{{\mu }_{x}/\varepsilon }\right)$ , under the same conditions.
+
+When $f$ is simply assumed to be convex-concave, Nemirovski (Nemirovski 2004), Nesterov (Nesterov 2007), and Tseng (Tseng 2008) describe algorithms that solve for an $\varepsilon$ -approximate solution with $\widetilde{O}\left( {\varepsilon }^{-1}\right)$ iteration complexity, and Ouyang and Xu (Ouyang and Xu 2018) prove a corresponding lower bound of $\Omega \left( {\varepsilon }^{-1}\right)$ .
+
+When $f$ is assumed to be non-convex- ${\mu }_{\mathbf{y}}$ -strongly-concave, and the goal is to compute a first-order Nash equilibrium, Sanjabi et al. (Sanjabi et al. 2018b) provide an algorithm that converges to an $\varepsilon$ -approximate solution in $O\left( {\varepsilon }^{-2}\right)$ iterations. Jin, Netrapalli, and Jordan (Jin, Netrapalli, and Jordan 2020), Rafique et al. (Rafique et al. 2019), Lin, Jin, and Jordan (Lin, Jin, and Jordan 2020a), and Lu, Tsaknakis, and Hong (Lu, Tsaknakis, and Hong 2019) provide algorithms that converge in $\widetilde{O}\left( {{\mu }_{y}^{2}{\varepsilon }^{-2}}\right)$ iterations, while Lin, Jin, and Jordan (Lin, Jin, and Jordan 2020b) provide an even faster algorithm, with an iteration complexity of $\widetilde{O}\left( {\sqrt{{\mu }_{\mathbf{y}}}{\varepsilon }^{-2}}\right)$ .
+
+When $f$ is non-convex-non-concave and the goal is to compute an approximate first-order Nash equilibrium, Lu, Tsaknakis, and Hong (Lu, Tsaknakis, and Hong 2019) provide an algorithm with iteration complexity $\widetilde{O}\left( {\varepsilon }^{-4}\right)$ , while Nouiehed et al. (Nouiehed et al. 2019) provide an algorithm with iteration complexity $\widetilde{O}\left( {\varepsilon }^{-{3.5}}\right)$ . More recently, Ostrovskii, Lowy, and Razaviyayn (Ostrovskii, Lowy, and Razaviyayn 2020) and Lin, Jin, and Jordan (Lin, Jin, and Jordan 2020b) proposed algorithms with iteration complexity $\widetilde{O}\left( {\varepsilon }^{-{2.5}}\right)$ .
+
+When $f$ is non-convex-non-concave and the desired solution concept is a "local" Stackelberg equilibrium, Jin, Netrapalli, and Jordan (Jin, Netrapalli, and Jordan 2020), Rafique et al. (Rafique et al. 2019), and Lin, Jin, and Jordan (Lin, Jin, and Jordan 2020a) provide algorithms with $\widetilde{O}\left( {\varepsilon }^{-6}\right)$ iteration complexity. More recently, Thekumparampil et al. (Thekumparampil et al. 2019), Zhao (Zhao 2020), and Lin, Jin, and Jordan (Lin, Jin, and Jordan 2020b) have proposed algorithms that converge to an $\varepsilon$ -approximate solution in $\widetilde{O}\left( {\varepsilon }^{-3}\right)$ iterations.
+
+We summarize the literature pertaining to the convex-concave and the non-convex-concave settings in Tables 1 and 2 respectively.
+
+Table 1: Iteration complexities for min-max games with independent strategy sets in convex-concave settings. Note that these results assume that the objective function is Lipschitz-smooth.
+
+| Setting | Reference | Iteration Complexity |
+| --- | --- | --- |
+| ${\mu }_{\mathbf{x}}$-Strongly-Convex- ${\mu }_{\mathbf{y}}$-Strongly-Concave | (Tseng 1995); (Yurii Nesterov 2011); (Gidel et al. 2020); (Mokhtari, Ozdaglar, and Pattathil 2020) | $\widetilde{O}\left( {{\mu }_{\mathbf{x}} + {\mu }_{\mathbf{y}}}\right)$ |
+| | (Alkousa et al. 2020) | $\widetilde{O}\left( {\min \left\{ {{\mu }_{\mathbf{x}}\sqrt{{\mu }_{\mathbf{y}}},{\mu }_{\mathbf{y}}\sqrt{{\mu }_{\mathbf{x}}}}\right\} }\right)$ |
+| | (Lin, Jin, and Jordan 2020b) | $\widetilde{O}\left( \sqrt{{\mu }_{\mathbf{x}}{\mu }_{\mathbf{y}}}\right)$ |
+| | (Ibrahim et al. 2019); (Zhang, Hong, and Zhang 2020) | $\widetilde{\Omega }\left( \sqrt{{\mu }_{\mathbf{x}}{\mu }_{\mathbf{y}}}\right)$ |
+| ${\mu }_{\mathbf{x}}$-Strongly-Convex-Linear | (Juditsky, Nemirovski et al. 2011); (Hamedani and Aybat 2018); (Zhao 2019) | $O\left( \sqrt{{\mu }_{x}/\varepsilon }\right)$ |
+| ${\mu }_{\mathbf{x}}$-Strongly-Convex-Concave | (Thekumparampil et al. 2019) | $\widetilde{O}\left( {{\mu }_{x}/\sqrt{\varepsilon }}\right)$ |
+| | (Lin, Jin, and Jordan 2020b) | $\widetilde{O}\left( \sqrt{{\mu }_{x}/\varepsilon }\right)$ |
+| | (Ouyang and Xu 2018) | $\widetilde{\Omega }\left( \sqrt{{\mu }_{x}/\varepsilon }\right)$ |
+| Convex-Concave | (Nemirovski 2004); (Nesterov 2007); (Tseng 2008) | $O\left( {\varepsilon }^{-1}\right)$ |
+| | (Lin, Jin, and Jordan 2020b) | $\widetilde{O}\left( {\varepsilon }^{-1}\right)$ |
+| | (Ouyang and Xu 2018) | $\Omega \left( {\varepsilon }^{-1}\right)$ |
+
+Table 2: Iteration complexities for min-max games with independent strategy sets in non-convex-concave settings. Note that although all these results assume that the objective function is Lipschitz-smooth, some authors make additional assumptions: e.g., (Nouiehed et al. 2019) obtain their result for objective functions that satisfy the Łojasiewicz condition.
+
+| Setting | Reference | Iteration Complexity |
+| --- | --- | --- |
+| Nonconvex- ${\mu }_{\mathbf{y}}$-Strongly-Concave, First-Order Nash or Local Stackelberg Equilibrium | (Jin, Netrapalli, and Jordan 2020); (Rafique et al. 2019); (Lin, Jin, and Jordan 2020a); (Lu, Tsaknakis, and Hong 2019) | $\widetilde{O}\left( {{\mu }_{\mathbf{y}}^{2}{\varepsilon }^{-2}}\right)$ |
+| | (Lin, Jin, and Jordan 2020b) | $\widetilde{O}\left( {\sqrt{{\mu }_{\mathbf{y}}}{\varepsilon }^{-2}}\right)$ |
+| Nonconvex-Concave, First-Order Nash Equilibrium | (Lu, Tsaknakis, and Hong 2019) | $\widetilde{O}\left( {\varepsilon }^{-4}\right)$ |
+| | (Nouiehed et al. 2019) | $\widetilde{O}\left( {\varepsilon }^{-{3.5}}\right)$ |
+| | (Ostrovskii, Lowy, and Razaviyayn 2020); (Lin, Jin, and Jordan 2020b) | $\widetilde{O}\left( {\varepsilon }^{-{2.5}}\right)$ |
+| Nonconvex-Concave, Local Stackelberg Equilibrium | (Jin, Netrapalli, and Jordan 2020); (Rafique et al. 2019); (Lin, Jin, and Jordan 2020a) | $\widetilde{O}\left( {\varepsilon }^{-6}\right)$ |
+| | (Thekumparampil et al. 2019); (Zhao 2020); (Lin, Jin, and Jordan 2020b) | $\widetilde{O}\left( {\varepsilon }^{-3}\right)$ |
+
+## C Omitted Proofs
+
+Proof of Theorem 3. Since pessimistic regret is bounded by $\varepsilon$ after $T$ iterations, it holds that:
+
+$$
+\mathop{\max }\limits_{{\mathbf{x} \in X}}{\operatorname{PesRegret}}_{X}^{\left( T\right) }\left( \mathbf{x}\right) \leq \varepsilon \tag{11}
+$$
+
+$$
+\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{V}_{X}^{\left( t\right) }\left( {\mathbf{x}}^{\left( t\right) }\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{T}{V}_{X}^{\left( t\right) }\left( \mathbf{x}\right) \leq \varepsilon \tag{12}
+$$
+
+Since the game is static, it further holds that:
+
+$$
+\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{V}_{X}\left( {\mathbf{x}}^{\left( t\right) }\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\sum }\limits_{{t = 1}}^{T}\frac{1}{T}{V}_{X}\left( \mathbf{x}\right) \leq \varepsilon \tag{13}
+$$
+
+$$
+\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{V}_{X}\left( {\mathbf{x}}^{\left( t\right) }\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}{V}_{X}\left( \mathbf{x}\right) \leq \varepsilon \tag{14}
+$$
+
+Thus, by the convexity of ${V}_{X}$ (see Proposition 2), ${V}_{X}\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}{V}_{X}\left( \mathbf{x}\right) \leq \varepsilon$ . Now, replacing ${V}_{X}$ by its definition and setting ${\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) \in {\mathrm{{BR}}}_{Y}\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right)$ , we obtain that $\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) }\right)$ is an $\left( {\varepsilon ,0}\right)$ -Stackelberg equilibrium:
+
+$$
+{V}_{X}\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) \leq f\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) }\right) \leq \mathop{\min }\limits_{{\mathbf{x} \in X}}{V}_{X}\left( \mathbf{x}\right) + \varepsilon \tag{15}
+$$
+
+$$
+\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {{\overline{\mathbf{x}}}^{\left( T\right) },\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {{\overline{\mathbf{x}}}^{\left( T\right) },\mathbf{y}}\right) \leq f\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) }\right) \leq \mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right) + \varepsilon \tag{16}
+$$
+
+Proof of Lemma 5. We can relax the inner player's payoff maximization problem via the problem's Lagrangian, and since, by Assumption 1, Slater's condition is satisfied, strong duality holds, giving us for all $\mathbf{x} \in X$ :
+
+$$
+\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right) = \mathop{\max }\limits_{{\mathbf{y} \in Y}}\mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right) = \mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right) .
+$$
+
+We can then re-express the min-max game as:
+
+$$
+\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right) = \mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right) .
+$$
+
+Letting ${\mathbf{\lambda }}^{ * } \in \arg \mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right)$ , we have $\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right) = \mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) .$ Note that ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$ is convex-concave in $\left( {\mathbf{x},\mathbf{y}}\right)$ . Hence, any Stackelberg equilibrium $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \in X \times Y$ of $\left( {X, Y, f,\mathbf{g}}\right)$ is a saddle point of ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$ , i.e., for all $\mathbf{x} \in X$ and $\mathbf{y} \in Y$ , ${\mathcal{L}}_{{\mathbf{x}}^{ * }}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) \leq {\mathcal{L}}_{{\mathbf{x}}^{ * }}\left( {{\mathbf{y}}^{ * },{\mathbf{\lambda }}^{ * }}\right) \leq {\mathcal{L}}_{\mathbf{x}}\left( {{\mathbf{y}}^{ * },{\mathbf{\lambda }}^{ * }}\right) .$
+
+Proof of Theorem 6. Since the Lagrangian regret is bounded for both players we have:
+
+$$
+\left\{ \begin{array}{l} \mathop{\max }\limits_{{\mathbf{x} \in X}}{\operatorname{LagrRegret}}_{X}^{\left( T\right) }\left( \mathbf{x}\right) \leq \varepsilon \\ \mathop{\max }\limits_{{\mathbf{y} \in Y}}{\operatorname{LagrRegret}}_{Y}^{\left( T\right) }\left( \mathbf{y}\right) \leq \varepsilon \end{array}\right. \tag{17}
+$$
+
+$$
+\left\{ \begin{array}{l} \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}^{\left( t\right) }\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{\mathbf{x}}^{\left( t\right) }\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) \leq \varepsilon \\ \mathop{\max }\limits_{{\mathbf{y} \in Y}}\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}^{\left( t\right) }\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}^{\left( t\right) }\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) \leq \varepsilon \end{array}\right. \tag{18}
+$$
+
+$$
+\left\{ \begin{array}{l} \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{\mathbf{x}}\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) \leq \varepsilon \\ \mathop{\max }\limits_{{\mathbf{y} \in Y}}\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) \leq \varepsilon \end{array}\right. \tag{19}
+$$
+
+The last line follows because the min-max Stackelberg game is static.
+
+Summing the final two inequalities yields:
+
+$$
+\mathop{\max }\limits_{{\mathbf{y} \in Y}}\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{\mathbf{x}}\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) \leq {2\varepsilon } \tag{20}
+$$
+
+$$
+\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}\mathop{\min }\limits_{{\mathbf{x} \in X}}{\mathcal{L}}_{\mathbf{x}}\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) \leq {2\varepsilon } \tag{21}
+$$
+
+where the second inequality was obtained by an application of Jensen's inequality on the first and second terms.
+
+Since $\mathcal{L}$ is convex in $\mathbf{x}$ and concave in $\mathbf{y}$ , we have that $\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$ is convex in $\mathbf{x}$ and $\mathop{\min }\limits_{{\mathbf{x} \in X}}{\mathcal{L}}_{\mathbf{x}}\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right)$ is concave in $\mathbf{y}$ , which implies that $\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{{\overline{\mathbf{x}}}^{\left( T\right) }}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}{\mathcal{L}}_{\mathbf{x}}\left( {{\overline{\mathbf{y}}}^{\left( T\right) },{\mathbf{\lambda }}^{ * }}\right) \leq {2\varepsilon }$ . By the max-min inequality ((Boyd, Boyd, and Vandenberghe 2004), Equation 5.46), it also holds that $\mathop{\min }\limits_{{\mathbf{x} \in X}}{\mathcal{L}}_{\mathbf{x}}\left( {{\overline{\mathbf{y}}}^{\left( T\right) },{\mathbf{\lambda }}^{ * }}\right) \leq \mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{{\overline{\mathbf{x}}}^{\left( T\right) }}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$ . Combining these two inequalities yields the desired result.
+
+Proof of Theorem 10. The value function of the outer player in the game ${\left\{ \left( X, Y,{f}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$ at iteration $t \in \left\lbrack T\right\rbrack$ is given by ${V}^{\left( t\right) }\left( \mathbf{x}\right) = \mathop{\max }\limits_{{\mathbf{y} \in Y}}{f}^{\left( t\right) }\left( {\mathbf{x},\mathbf{y}}\right)$ . Hence, for all $t \in \left\lbrack T\right\rbrack$ , as ${f}^{\left( t\right) }$ is $\mu$ -strongly-convex in $\mathbf{x}$ , ${V}^{\left( t\right) }$ is also strongly convex, since taking a pointwise maximum preserves strong convexity.
+
+Additionally, since for all $t \in \left\lbrack T\right\rbrack ,{f}^{\left( t\right) }$ is strictly concave in $\mathbf{y}$ , by Danskin’s theorem (Danskin 1966), for all $t \in \left\lbrack T\right\rbrack ,{V}^{\left( t\right) }$ is differentiable, with derivative ${\nabla }_{\mathbf{x}}{V}^{\left( t\right) }\left( \mathbf{x}\right) = {\nabla }_{\mathbf{x}}{f}^{\left( t\right) }\left( {\mathbf{x},{\mathbf{y}}^{ * }\left( \mathbf{x}\right) }\right)$ , where ${\mathbf{y}}^{ * }\left( \mathbf{x}\right) \in \arg \mathop{\max }\limits_{{\mathbf{y} \in Y}}{f}^{\left( t\right) }\left( {\mathbf{x},\mathbf{y}}\right)$ . Thus, as ${\nabla }_{\mathbf{x}}{f}^{\left( t\right) }\left( {\mathbf{x},{\mathbf{y}}^{ * }\left( \mathbf{x}\right) }\right)$ is ${\ell }_{\nabla f}$ -Lipschitz continuous, so is ${\nabla }_{\mathbf{x}}{V}^{\left( t\right) }\left( \mathbf{x}\right)$ . The result follows from Cheung, Hoefer, and Nakhe's bound for gradient descent on shifting strongly convex functions ((Cheung, Hoefer, and Nakhe 2019), Proposition 12).
+
+Proof of Theorem 11. By the assumptions of the theorem, the loss functions of the outer player ${\left\{ {f}^{\left( t\right) }\left( \cdot ,{\mathbf{y}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$ are ${\mu }_{\mathbf{x}}$ -strongly-convex and ${\ell }_{\nabla f}$ -Lipschitz continuous. Similarly, the loss functions of the inner player ${\left\{ -{f}^{\left( t\right) }\left( {\mathbf{x}}^{\left( t\right) }, \cdot \right) \right\} }_{t = 1}^{T}$ are ${\mu }_{\mathbf{y}}$ -strongly-convex and ${\ell }_{\nabla f}$ -Lipschitz continuous. Using Cheung, Hoefer, and Nakhe's Proposition 12 (Cheung, Hoefer, and Nakhe 2019), we then obtain the following bounds:
+
+$$
+\begin{Vmatrix}{{\mathbf{x}}^{{\left( T\right) }^{ * }} - {\mathbf{x}}^{\left( T\right) }}\end{Vmatrix} \leq {\left( 1 - {\delta }_{\mathbf{x}}\right) }^{T/2}\begin{Vmatrix}{{\mathbf{x}}^{{\left( 0\right) }^{ * }} - {\mathbf{x}}^{\left( 0\right) }}\end{Vmatrix}
+$$
+
+$$
++ \mathop{\sum }\limits_{{t = 1}}^{T}{\left( 1 - {\delta }_{\mathbf{x}}\right) }^{\frac{T - t}{2}}{\Delta }_{\mathbf{x}}^{\left( t\right) } \tag{22}
+$$
+
+$$
+\begin{Vmatrix}{{\mathbf{y}}^{{\left( T\right) }^{ * }} - {\mathbf{y}}^{\left( T\right) }}\end{Vmatrix} \leq {\left( 1 - {\delta }_{\mathbf{y}}\right) }^{T/2}\begin{Vmatrix}{{\mathbf{y}}^{{\left( 0\right) }^{ * }} - {\mathbf{y}}^{\left( 0\right) }}\end{Vmatrix}
+$$
+
+$$
++ \mathop{\sum }\limits_{{t = 1}}^{T}{\left( 1 - {\delta }_{\mathbf{y}}\right) }^{\frac{T - t}{2}}{\Delta }_{\mathbf{y}}^{\left( t\right) } \tag{23}
+$$
+
+Combining the two inequalities, we obtain:
+
+$$
+\begin{Vmatrix}{{\mathbf{x}}^{{\left( T\right) }^{ * }} - {\mathbf{x}}^{\left( T\right) }}\end{Vmatrix} + \begin{Vmatrix}{{\mathbf{y}}^{{\left( T\right) }^{ * }} - {\mathbf{y}}^{\left( T\right) }}\end{Vmatrix}
+$$
+
+$$
+\leq {\left( 1 - {\delta }_{\mathbf{x}}\right) }^{T/2}\begin{Vmatrix}{{\mathbf{x}}^{{\left( 0\right) }^{ * }} - {\mathbf{x}}^{\left( 0\right) }}\end{Vmatrix} + {\left( 1 - {\delta }_{\mathbf{y}}\right) }^{T/2}\begin{Vmatrix}{{\mathbf{y}}^{{\left( 0\right) }^{ * }} - {\mathbf{y}}^{\left( 0\right) }}\end{Vmatrix}
+$$
+
+$$
++ \mathop{\sum }\limits_{{t = 1}}^{T}{\left( 1 - {\delta }_{\mathbf{x}}\right) }^{\frac{T - t}{2}}{\Delta }_{\mathbf{x}}^{\left( t\right) } + \mathop{\sum }\limits_{{t = 1}}^{T}{\left( 1 - {\delta }_{\mathbf{y}}\right) }^{\frac{T - t}{2}}{\Delta }_{\mathbf{y}}^{\left( t\right) } \tag{24}
+$$
+
+The second part of the theorem follows by taking the sum of the geometric series.
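For completeness, the geometric-series step can be made explicit: if the per-period shifts are uniformly bounded, say ${\Delta }_{\mathbf{x}}^{\left( t\right) } \leq {\Delta }_{\mathbf{x}}$ for all $t \in \left\lbrack T\right\rbrack$ (an assumption for this sketch), then

$$
\mathop{\sum }\limits_{{t = 1}}^{T}{\left( 1 - {\delta }_{\mathbf{x}}\right) }^{\frac{T - t}{2}}{\Delta }_{\mathbf{x}}^{\left( t\right) } \leq {\Delta }_{\mathbf{x}}\mathop{\sum }\limits_{{k = 0}}^{{T - 1}}{\left( \sqrt{1 - {\delta }_{\mathbf{x}}}\right) }^{k} \leq \frac{{\Delta }_{\mathbf{x}}}{1 - \sqrt{1 - {\delta }_{\mathbf{x}}}},
$$

and symmetrically for the $\mathbf{y}$-terms, so the sums in (24) are bounded by constants independent of $T$.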
+
+## D Pseudo-Code for Algorithms
+
+Algorithm 1: Max-Oracle Gradient Descent
+
+---
+
+Inputs: $X, Y, f,\mathbf{g},\mathbf{\eta }, T,{\mathbf{x}}^{\left( 0\right) }$
+
+Output: $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right)$
+
+ for $t = 1,\ldots , T$ do
+
+ Find ${\mathbf{y}}^{ * }\left( {\mathbf{x}}^{\left( t - 1\right) }\right) \in {\mathrm{{BR}}}_{Y}\left( {\mathbf{x}}^{\left( t - 1\right) }\right)$
+
+ Set ${\mathbf{y}}^{\left( t - 1\right) } = {\mathbf{y}}^{ * }\left( {\mathbf{x}}^{\left( t - 1\right) }\right)$
+
+ Set ${\mathbf{\lambda }}^{\left( t - 1\right) } = {\mathbf{\lambda }}^{ * }\left( {{\mathbf{x}}^{\left( t - 1\right) },{\mathbf{y}}^{\left( t - 1\right) }}\right)$
+
+ Set ${\mathbf{x}}^{\left( t\right) } = {\Pi }_{X}\left\lbrack {{\mathbf{x}}^{\left( t - 1\right) } - {\eta }_{t}{\nabla }_{\mathbf{x}}{\mathcal{L}}_{{\mathbf{x}}^{\left( t - 1\right) }}\left( {{\mathbf{y}}^{\left( t - 1\right) },{\mathbf{\lambda }}^{\left( t - 1\right) }}\right) }\right\rbrack$
+
+ end for
+
+ Set ${\bar{\mathbf{x}}}^{\left( T\right) } = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathbf{x}}^{\left( t\right) }$
+
+ Set ${\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) \in {\mathrm{{BR}}}_{Y}\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right)$
+
+ return $\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) }\right)$
+
+---
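
Algorithm 1 can be sketched in a few lines of NumPy. All of the callables below (the Lagrangian gradient, the best-response max-oracle, the multiplier oracle, and the projection) are hypothetical interfaces of this sketch, not functions defined in the paper:

```python
import numpy as np

def max_oracle_gd(grad_L_x, best_response, lam_oracle, project_X, x0, etas, T):
    """A sketch of Algorithm 1 (Max-Oracle Gradient Descent).

    grad_L_x(x, y, lam) -> gradient of the Lagrangian in x
    best_response(x)    -> some y*(x) in BR_Y(x)  (the max-oracle)
    lam_oracle(x, y)    -> optimal KKT multipliers lambda*(x, y)
    project_X(x)        -> Euclidean projection onto X
    """
    x, iterates = np.asarray(x0, dtype=float), []
    for t in range(T):
        y = best_response(x)          # inner player best responds
        lam = lam_oracle(x, y)        # optimal multipliers at (x, y)
        x = project_X(x - etas[t] * grad_L_x(x, y, lam))
        iterates.append(x)
    x_bar = np.mean(iterates, axis=0)  # average (empirical) play
    return x_bar, best_response(x_bar)
```

For instance, on the toy game $\min_{x \in [-1,1]} \max_{y \in [-1,1]} x^2 + xy$ (no coupling constraints, so $\lambda^* = 0$), the best response is $y^*(x) = \operatorname{sign}(x)$, the value function $V(x) = x^2 + |x|$ is minimized at $x = 0$, and the average iterate approaches $0$.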
+
+Algorithm 2: Lagrangian Gradient Descent Ascent (LGDA)
+
+---
+
+Inputs: ${\mathbf{\lambda }}^{ * }, X, Y, f,\mathbf{g},{\mathbf{\eta }}^{\mathbf{x}},{\mathbf{\eta }}^{\mathbf{y}}, T,{\mathbf{x}}^{\left( 0\right) },{\mathbf{y}}^{\left( 0\right) }$
+
+Output: ${\left\{ \left( {\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$
+
+ for $t = 1,\ldots , T - 1$ do
+
+ Set ${\mathbf{x}}^{\left( t + 1\right) } = {\Pi }_{X}\left( {{\mathbf{x}}^{\left( t\right) } - {\eta }_{t}^{\mathbf{x}}{\nabla }_{\mathbf{x}}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) }\right)$
+
+ Set ${\mathbf{y}}^{\left( t + 1\right) } = {\Pi }_{Y}\left( {{\mathbf{y}}^{\left( t\right) } + {\eta }_{t}^{\mathbf{y}}{\nabla }_{\mathbf{y}}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) }\right)$
+
+ end for
+
+ return ${\left\{ \left( {\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$
+
+---
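
Given the multiplier oracle $\boldsymbol{\lambda}^*$, LGDA is just simultaneous projected descent/ascent on the Lagrangian. A minimal sketch (the callable interfaces are ours, not the paper's notation):

```python
import numpy as np

def lgda(grad_L_x, grad_L_y, lam_star, project_X, project_Y,
         x0, y0, etas_x, etas_y, T):
    """A sketch of Algorithm 2 (Lagrangian Gradient Descent Ascent).
    grad_L_x / grad_L_y are the Lagrangian's gradients in x and y,
    lam_star is a fixed optimal-multiplier vector from an oracle, and
    project_X / project_Y are Euclidean projections onto X and Y."""
    x, y = np.asarray(x0, dtype=float), np.asarray(y0, dtype=float)
    history = [(x, y)]
    for t in range(T - 1):
        # simultaneous projected descent (x) / ascent (y) steps
        x_new = project_X(x - etas_x[t] * grad_L_x(x, y, lam_star))
        y_new = project_Y(y + etas_y[t] * grad_L_y(x, y, lam_star))
        x, y = x_new, y_new
        history.append((x, y))
    return history
```

On the strongly-convex-strongly-concave toy objective $f(x, y) = x^2 - y^2 + xy$ over $[-1,1]^2$ (again with $\lambda^* = 0$), the iterates converge to the saddle point $(0, 0)$.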
+
+Algorithm 3: Dynamic tâtonnement
+
+---
+
+Inputs: $T,{\left\{ \left( {U}^{\left( t\right) },{\mathbf{b}}^{\left( t\right) },{\mathbf{s}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T},\mathbf{\eta },{\mathbf{p}}^{\left( 0\right) },\delta$
+
+Output: ${\left( {\mathbf{p}}^{\left( t\right) },{\mathbf{X}}^{\left( t\right) }\right) }_{t = 1}^{T}$
+
+ for $t = 1,\ldots , T - 1$ do
+
+ For all $i \in \left\lbrack n\right\rbrack$ , find ${\mathbf{x}}_{i}^{\left( t\right) } \in {\operatorname{argmax}}_{{\mathbf{x}}_{i} \in {\mathbb{R}}_{ + }^{m} : {\mathbf{x}}_{i} \cdot {\mathbf{p}}^{\left( t - 1\right) } \leq {b}_{i}^{\left( t\right) }}{u}_{i}^{\left( t\right) }\left( {\mathbf{x}}_{i}\right)$
+
+ Set ${\mathbf{p}}^{\left( t\right) } = {\Pi }_{{\mathbb{R}}_{ + }^{m}}\left( {{\mathbf{p}}^{\left( t - 1\right) } - {\eta }_{t}\left( {{\mathbf{s}}^{\left( t\right) } - \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{\mathbf{x}}_{i}^{\left( t\right) }}\right) }\right)$
+
+ end for
+
+ return ${\left( {\mathbf{p}}^{\left( t\right) },{\mathbf{X}}^{\left( t\right) }\right) }_{t = 1}^{T}$
+
+---
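
For intuition, the inner argmax has a closed form under Cobb-Douglas utilities $u_i(\mathbf{x}_i) = \prod_j x_{ij}^{a_{ij}}$ (with each row of exponents summing to one): buyer $i$ spends the budget fraction $a_{ij}$ on good $j$, i.e., $x_{ij} = a_{ij} b_i / p_j$. A minimal static-market sketch substituting this closed form for the inner argmax (function and variable names are ours):

```python
import numpy as np

def tatonnement_cobb_douglas(A, b, s, p0, T):
    """Sketch of Algorithm 3 for a *static* Cobb-Douglas Fisher market.
    A: (n, m) preference exponents with rows summing to 1,
    b: (n,) budgets, s: (m,) supplies, p0: (m,) initial prices."""
    p = np.asarray(p0, dtype=float).copy()
    for t in range(1, T + 1):
        # Cobb-Douglas demand solves the inner argmax in closed form:
        # buyer i spends the fraction a_ij of budget b_i on good j.
        X = (A * b[:, None]) / p[None, :]
        excess_supply = s - X.sum(axis=0)
        # price update with learning rate 1/sqrt(t), kept strictly positive
        p = np.maximum(p - excess_supply / np.sqrt(t), 1e-8)
    return p, X
```

With unit supplies, the Cobb-Douglas equilibrium prices are $p_j^* = \sum_i a_{ij} b_i$, which the price iterates approach.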
+
+Algorithm 4: Dynamic Myopic Best-Response Dynamics
+
+---
+
+Inputs: ${\left\{ \left( {U}^{\left( t\right) },{\mathbf{b}}^{\left( t\right) },{\mathbf{s}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T},{\mathbf{\eta }}^{\mathbf{p}},{\mathbf{\eta }}^{\mathbf{X}}, T,{\mathbf{X}}^{\left( 0\right) },{\mathbf{p}}^{\left( 0\right) }$
+
+Output: ${\left( {\mathbf{p}}^{\left( t\right) },{\mathbf{X}}^{\left( t\right) }\right) }_{t = 1}^{T}$
+
+ for $t = 1,\ldots , T - 1$ do
+
+ Set ${\mathbf{p}}^{\left( t + 1\right) } = {\Pi }_{{\mathbb{R}}_{ + }^{m}}\left( {{\mathbf{p}}^{\left( t\right) } - {\eta }_{t}^{\mathbf{p}}\left( {{\mathbf{s}}^{\left( t\right) } - \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{\mathbf{x}}_{i}^{\left( t\right) }}\right) }\right)$
+
+ For all $i \in \left\lbrack n\right\rbrack$ , set ${\mathbf{x}}_{i}^{\left( t + 1\right) } = {\Pi }_{{\mathbb{R}}_{ + }^{m}}\left( {{\mathbf{x}}_{i}^{\left( t\right) } + {\eta }_{t}^{\mathbf{X}}\left( {\frac{{b}_{i}^{\left( t\right) }}{{u}_{i}^{\left( t\right) }\left( {\mathbf{x}}_{i}^{\left( t\right) }\right) }{\nabla }_{{\mathbf{x}}_{i}}{u}_{i}^{\left( t\right) }\left( {\mathbf{x}}_{i}^{\left( t\right) }\right) - {\mathbf{p}}^{\left( t\right) }}\right) }\right)$
+
+ end for
+
+ return ${\left( {\mathbf{p}}^{\left( t\right) },{\mathbf{X}}^{\left( t\right) }\right) }_{t = 1}^{T}$
+
+---
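
Specializing the allocation update to Cobb-Douglas utilities makes the mirror term concrete: for $u_i(\mathbf{x}_i) = \prod_j x_{ij}^{a_{ij}}$, the term $\frac{b_i}{u_i(\mathbf{x}_i)} \nabla_{\mathbf{x}_i} u_i(\mathbf{x}_i)$ simplifies elementwise to $b_i a_{ij} / x_{ij}$. A static-market sketch using this identity (names are ours):

```python
import numpy as np

def myopic_best_response(A, b, s, X0, p0, eta_p, eta_X, T):
    """Sketch of Algorithm 4 for a *static* Cobb-Douglas Fisher market.
    For u_i(x_i) = prod_j x_ij^{a_ij}, the term (b_i / u_i) * grad u_i
    simplifies elementwise to b_i * a_ij / x_ij.
    A: (n, m) exponents, b: (n,) budgets, s: (m,) supplies."""
    X = np.asarray(X0, dtype=float).copy()
    p = np.asarray(p0, dtype=float).copy()
    for t in range(1, T):
        # both updates use the iterates from time t, as in Algorithm 4
        p_new = np.maximum(p - eta_p(t) * (s - X.sum(axis=0)), 1e-8)
        X = np.maximum(X + eta_X(t) * (b[:, None] * A / X - p[None, :]), 1e-8)
        p = p_new
    return p, X
```

One sanity check on the identity: at the closed-form Cobb-Douglas equilibrium $p_j^* = \sum_i a_{ij} b_i$ (unit supplies) with $x_{ij}^* = a_{ij} b_i / p_j^*$, both update directions vanish, so the equilibrium is a fixed point of the dynamics.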
+
+## E An Economic Application: Details
+
+Our experimental goal was to understand whether Algorithm 3 and Algorithm 4 converge in terms of distance to equilibrium and, if so, how the rate of convergence changes under different utility structures, i.e., different smoothness and convexity properties of the value functions.
+
+To answer these questions, we ran multiple experiments, each time recording the prices and allocations computed by Algorithm 3, in the pessimistic learning setting, and by Algorithm 4, in the optimistic learning setting, during each iteration $t$ of the loop. Moreover, at each iteration $t$ , we solved for the competitive equilibrium $\left( {{\mathbf{p}}^{{\left( t\right) }^{ \star }},{\mathbf{X}}^{{\left( t\right) }^{ \star }}}\right)$ of the Fisher market $\left( {{U}^{\left( t\right) },{\mathbf{b}}^{\left( t\right) },{\mathbf{s}}^{\left( t\right) }}\right)$ . Finally, for each run of the algorithm on each market, we computed the distance between the computed prices and allocations and the equilibrium prices and allocations, which we plot in Figure 1 and Figure 2.
+
+Hyperparameters We set up 100 different linear, Cobb-Douglas, and Leontief dynamic Fisher markets with randomly changing market parameters across time, each with 5 buyers and 8 goods, and we randomly picked one of these experiments to graph.
+
+In our execution of Algorithm 3, buyer $i$ ’s budget at iteration $t,{b}_{i}^{\left( t\right) }$ , was drawn randomly from a uniform distribution ranging from 10 to 20 (i.e., $U\left\lbrack {{10},{20}}\right\rbrack$ ), each buyer $i$ ’s valuation for good $j$ at iteration $t,{v}_{ij}^{\left( t\right) }$ , was drawn randomly from $U\left\lbrack {5,{15}}\right\rbrack$ , while each good $j$ ’s supply at iteration $t,{s}_{j}^{\left( t\right) }$ , was drawn randomly from $U\left\lbrack {{100},{110}}\right\rbrack$ . In our execution of Algorithm 4, buyer $i$ ’s budget at iteration $t,{b}_{i}^{\left( t\right) }$ , was drawn randomly from a uniform distribution ranging from 10 to 15 (i.e., $U\left\lbrack {{10},{15}}\right\rbrack$ ), each buyer $i$ ’s valuation for good $j$ at iteration $t,{v}_{ij}^{\left( t\right) }$ , was drawn randomly from $U\left\lbrack {{10},{20}}\right\rbrack$ , while each good $j$ ’s supply at iteration $t,{s}_{j}^{\left( t\right) }$ , was drawn randomly from $U\left\lbrack {{10},{15}}\right\rbrack$ .
+
+We ran both Algorithm 3 and Algorithm 4 for 1000 iterations on linear, Cobb-Douglas, and Leontief Fisher markets. We started each algorithm with initial prices drawn randomly from $U\left\lbrack {5,{55}}\right\rbrack$ . Our theoretical results assume fixed learning rates, but since those results apply to static games while our experiments concern dynamic Fisher markets, we selected variable learning rates. After manual hyperparameter tuning, for Algorithm 3 we chose a dynamic learning rate of ${\eta }_{t} = \frac{1}{\sqrt{t}}$ , while for Algorithm 4 we chose learning rates of ${\eta }_{t}^{\mathbf{x}} = \frac{5}{\sqrt{t}}$ and ${\eta }_{t}^{\mathbf{y}} = \frac{0.01}{\sqrt{t}}$ , for all $t \in \left\lbrack T\right\rbrack$ . For these choices of learning rates, we obtained empirical convergence rates close to what the theory predicts.
+
+Programming Languages, Packages, and Licensing We ran our experiments in Python 3.7 (Van Rossum and Drake Jr 1995), using NumPy (Harris et al. 2020), Pandas (pandas development team 2020), and CVXPY (Diamond and Boyd 2016). Figure 1 and Figure 2 were graphed using Matplotlib (Hunter 2007).
+
+Python software and documentation are licensed under the PSF License Agreement. NumPy is distributed under a liberal BSD license. Pandas is distributed under a new BSD license. Matplotlib only uses BSD-compatible code, and its license is based on the PSF license. CVXPY is licensed under an Apache license.
+
+Implementation Details In order to project each allocation computed onto the budget set of the consumers, i.e., $\left\{ {\mathbf{X} \in {\mathbb{R}}_{ + }^{n \times m} \mid \mathbf{X}\mathbf{p} \leq \mathbf{b}}\right\}$ , we used the alternating projection algorithm for convex sets, and alternately projected onto the sets ${\mathbb{R}}_{ + }^{n \times m}$ and $\left\{ {\mathbf{X} \in {\mathbb{R}}^{n \times m} \mid \mathbf{X}\mathbf{p} \leq \mathbf{b}}\right\}$ .
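
A minimal sketch of this projection step (function and variable names are ours): row $i$ of the constraint $\mathbf{X}\mathbf{p} \leq \mathbf{b}$ is the half-space $\mathbf{x}_i \cdot \mathbf{p} \leq b_i$, whose Euclidean projection has a closed form, so each alternating-projection pass is cheap. (Plain alternating projections return a point in the intersection; Dykstra's variant would return the exact Euclidean projection. The fixed number of sweeps below is our assumption; the paper does not state a stopping rule.)

```python
import numpy as np

def project_budget_sets(X, p, b, iters=100):
    """Alternating projections onto {X >= 0} and {X : X @ p <= b}.
    Row i of the second set is the half-space x_i . p <= b_i, whose
    Euclidean projection subtracts ((x_i.p - b_i) / ||p||^2) * p."""
    X = np.asarray(X, dtype=float).copy()
    for _ in range(iters):
        X = np.maximum(X, 0.0)                 # project onto R^{n x m}_+
        overspend = X @ p - b                  # (n,) budget violations
        mask = overspend > 0
        # project only the violating rows onto their half-spaces
        X[mask] -= np.outer(overspend[mask] / (p @ p), p)
    return X
```

For example, with $\mathbf{p} = (1, 2)$, $b_1 = 3$, and the infeasible row $(4, 2)$, the iteration lands on $(3, 0)$, which is nonnegative and exactly exhausts the budget.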
+
+To compute the best response of the inner player in Algorithm 3, we used the ECOS solver, one of CVXPY's convex-program solvers; whenever a runtime exception occurred, we fell back to the SCS solver.
+
+When computing the distance from the demands ${\mathbf{X}}^{\left( t\right) }$ computed by our algorithms to the equilibrium demands ${\mathbf{X}}^{{\left( t\right) }^{ \star }}$ , we normalize both demands so that $\forall j \in \left\lbrack m\right\rbrack ,\mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{x}_{ij} = 1$ , to reduce the noise caused by changing supplies.
+
+Computational Resources Our experiments were run on a macOS machine with 8GB RAM and an Apple M1 chip, and took about 2 hours to run. Only CPU resources were used.
+
+Code Repository The data our experiments generated, as well as the code used to produce our visualizations, can be found in our code repository (https://anonymous.4open.science/r/Dynamic-Minmax-Games-8153/).
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/u_lOumlm7mu/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/u_lOumlm7mu/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..a8aa965c2cd86186096fd76a2b4ae5e7a8b1dc85
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/u_lOumlm7mu/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,275 @@
+§ ROBUST NO-REGRET LEARNING IN MIN-MAX STACKELBERG GAMES
+
+Anonymous Author(s)
+
+§ ABSTRACT
+
+The behavior of no-regret learning algorithms is well understood in two-player min-max (i.e., zero-sum) games. In this paper, we investigate the behavior of no-regret learning in min-max games with dependent strategy sets, where the strategy of the first player constrains the behavior of the second. Such games are best understood as sequential, i.e., min-max Stackelberg, games. We consider two settings, one in which only the first player chooses their actions using a no-regret algorithm while the second player best responds, and one in which both players use no-regret algorithms. For the former case, we show that no-regret dynamics converge to a Stackelberg equilibrium. For the latter case, we introduce a new type of regret, which we call Lagrangian regret, and show that if both players minimize their Lagrangian regrets, then play converges to a Stackelberg equilibrium. We then observe that online mirror descent (OMD) dynamics in these two settings correspond respectively to a known nested (i.e., sequential) gradient descent-ascent (GDA) algorithm and a new simultaneous GDA-like algorithm, thereby establishing convergence of these algorithms to Stackelberg equilibrium. Finally, we analyze the robustness of OMD dynamics to perturbations by investigating dynamic min-max Stackelberg games. We prove that OMD dynamics are robust for a large class of dynamic min-max games with independent strategy sets. In the dependent case, we demonstrate the robustness of OMD dynamics experimentally by simulating them in dynamic Fisher markets, a canonical example of a min-max Stackelberg game with dependent strategy sets.
+
+§ 1 INTRODUCTION
+
+Min-max optimization problems (i.e., zero-sum games) have been attracting a great deal of attention recently because of their applicability to problems in fairness in machine learning (Dai et al. 2019; Edwards and Storkey 2016; Madras et al. 2018; Sattigeri et al. 2018), generative adversarial imitation learning (Cai et al. 2019; Hamedani et al. 2018), reinforcement learning (Dai et al. 2018), generative adversarial learning (Sanjabi et al. 2018a), adversarial learning (Sinha et al. 2020), and statistical learning, e.g., learning parameters of exponential families (Dai et al. 2019). These problems are often modelled as min-max games, i.e., constrained min-max optimization problems of the form: $\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}f\left( {\mathbf{x},\mathbf{y}}\right)$ , where $f$ : $X \times Y \rightarrow \mathbb{R}$ is continuous, and $X \subset {\mathbb{R}}^{n}$ and $Y \subset$ ${\mathbb{R}}^{m}$ are non-empty and compact. In convex-concave min-max games, where $f$ is convex in $\mathbf{x}$ and concave in $\mathbf{y}$ , von Neumann and Morgenstern's seminal minimax theorem holds (Neumann 1928): i.e., $\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}f\left( {\mathbf{x},\mathbf{y}}\right) =$ $\mathop{\max }\limits_{{\mathbf{y} \in Y}}\mathop{\min }\limits_{{\mathbf{x} \in X}}f\left( {\mathbf{x},\mathbf{y}}\right)$ , guaranteeing the existence of a saddle point, i.e., a point that is simultaneously a minimum of $f$ in the $\mathbf{x}$ -direction and a maximum of $f$ in the $y$ -direction. This theorem allows us to interpret the optimization problem as a simultaneous-move, zero-sum game, where ${\mathbf{y}}^{ * }$ (resp. ${\mathbf{x}}^{ * }$ ) is a best-response of the outer (resp. inner) player to the other’s action ${\mathbf{x}}^{ * }$ (resp. ${\mathbf{y}}^{ * }$ ), in which case a saddle point is also called a minimax point or a Nash equilibrium.
+
+In this paper, we study min-max Stackelberg games (Goktas and Greenwald 2021), i.e., constrained min-max optimization problems with dependent feasible sets of the form: $\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right)$ , where $f : X \times$ $Y \rightarrow \mathbb{R}$ is continuous, $X \subset {\mathbb{R}}^{n}$ and $Y \subset {\mathbb{R}}^{m}$ are non-empty and compact, and $\mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) = {\left( {g}_{1}\left( \mathbf{x},\mathbf{y}\right) ,\ldots ,{g}_{K}\left( \mathbf{x},\mathbf{y}\right) \right) }^{T}$ with ${g}_{k} : X \times Y \rightarrow \mathbb{R}$ . Goktas and Greenwald observe that the minimax theorem does not hold in these games (2021). As a result, such games are more appropriately viewed as sequential, i.e., Stackelberg, games for which the relevant solution concept is the Stackelberg equilibrium, ${}^{1}$ where the outer player chooses $\widehat{\mathbf{x}} \in X$ before the inner player responds with their choice of $\mathbf{y}\left( \widehat{\mathbf{x}}\right) \in Y$ s.t. $\mathbf{g}\left( {\widehat{\mathbf{x}},\mathbf{y}\left( \widehat{\mathbf{x}}\right) }\right) \geq \mathbf{0}$ . In these games, the outer player seeks to minimize their loss, assuming the inner player chooses a feasible best response: i.e., the outer player's objective, also known as their value function in the economics literature (Milgrom and Segal 2002), is defined as ${V}_{X}\left( \mathbf{x}\right) = \mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right)$ . 
+The inner player’s value function, ${V}_{Y} : X \rightarrow \mathbb{R}$ , which they seek to maximize, is simply the objective function given the outer player’s action $\widehat{\mathbf{x}}$ : i.e., ${V}_{Y}\left( {\mathbf{y};\widehat{\mathbf{x}}}\right) = f\left( {\widehat{\mathbf{x}},\mathbf{y}}\right)$ .
+
+Goktas and Greenwald (2021) proposed a polynomial-time first-order method by which to compute Stackelberg equilibria, which they called nested gradient descent ascent (GDA). This method can be understood as an algorithm a third party might run to find an equilibrium, or as a game dynamic that the players might employ if their long-run goal were to reach an equilibrium. Rather than assume that players are jointly working towards the goal of reaching an equilibrium, it is often more reasonable to assume that they play so as to not regret their decisions: i.e., that they employ a no-regret learning algorithm, which minimizes their loss in hindsight. It is well known that when both players in a min-max game are no-regret learners, the players' strategy profile over time converges to a Nash equilibrium in average iterates: i.e., empirical play converges to a Nash equilibrium (e.g., (Freund and Schapire 1996)).
+
+${}^{1}$ One could also view such games as pseudo-games (also known as abstract economies) (Arrow and Debreu 1954), in which players move simultaneously under the unreasonable assumption that the moves they make will satisfy the game's dependency constraints. Under this view, the relevant solution concept is generalized Nash equilibrium (Facchinei and Kanzow 2007, 2010).
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+In this paper, we investigate no-regret learning dynamics in min-max Stackelberg games. We consider both pessimistic and optimistic settings. In the pessimistic setting, the outer player is a no-regret learner while the inner player best responds; in the optimistic setting, both players are no-regret learners. In the pessimistic case, we show that if the outer player uses a no-regret algorithm that achieves $\varepsilon$ -pessimistic regret after $T$ iterations, then the outer player’s empirical play converges to their $\varepsilon$ -Stackelberg equilibrium strategy. In the optimistic case, we introduce a new type of regret, which we call Lagrangian regret, which assumes access to a solution oracle for the optimal KKT multipliers of the game's constraints. We then show that if both players use no-regret algorithms that achieve $\varepsilon$ -Lagrangian regret after $T$ iterations, the players’ empirical play converges to an $\varepsilon$ -Stackelberg equilibrium.
+
+We then restrict our attention to online mirror descent (OMD) dynamics, which yield two algorithms, namely max-oracle gradient descent (Jin, Netrapalli, and Jordan 2020) and nested GDA (Goktas and Greenwald 2021) in the pessimistic setting, and a new simultaneous GDA-like algorithm (Nedic and Ozdaglar 2009) in the optimistic setting, which we call Lagrangian GDA (LGDA). Convergence of the former two algorithms in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations then follows from our previous theorems. Additionally, these rates suggest the superiority of LGDA over nested GDA when a Lagrangian solution oracle exists: nested GDA converges in $O\left( {1/{\varepsilon }^{3}}\right)$ iterations (Goktas and Greenwald 2021), while LGDA converges in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations, assuming only that the objective function is Lipschitz continuous.
+
+Finally, we analyze the robustness of OMD dynamics to perturbations by investigating dynamic min-max Stackelberg games. We prove that OMD dynamics are robust, in that even when the game changes with each iteration of the algorithm, OMD dynamics track the changing equilibrium closely for a large class of dynamic min-max games with independent strategy sets. In the dependent strategy set case, we demonstrate the robustness of OMD dynamics experimentally by simulating them in dynamic Fisher markets, a canonical example of a min-max Stackelberg game (with dependent strategy sets). Even when the Fisher market changes with each iteration, our OMD dynamics are able to track the changing equilibria closely. Our findings can be summarized as follows:
+
+ * In min-max Stackelberg games, when the outer player is a no-regret learner and the inner player best-responds, the average of the outer player's strategies converges to their Stackelberg equilibrium strategy.
+
+ * We introduce a new type of regret we call Lagrangian regret and show that in min-max Stackelberg games, when both players minimize Lagrangian regret, the average of the players' strategies converges to a Stackelberg equilibrium.
+
+ * We provide novel convergence guarantees for two known algorithms, max-oracle gradient descent and nested gradient descent ascent, to an $\varepsilon$ -Stackelberg equilibrium in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations in average iterates.
+
+ * We introduce a new simultaneous GDA-like algorithm and prove that its average iterates converge to an $\varepsilon$ -Stackelberg equilibrium in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations.
+
+ * We prove that max-oracle gradient descent and simultaneous GDA are robust to perturbations in a large class of min-max games (with independent strategy sets).
+
+ * We run experiments with Fisher markets which suggest that max-oracle gradient descent and simultaneous GDA are robust to perturbations in these min-max Stackelberg games (with dependent strategy sets).
+
+We provide a review of related work in Appendix B. This paper is organized as follows. In the next section, we present the requisite mathematical preliminaries. In Section 3, we present no-regret learning dynamics that converge in a large class of min-max Stackelberg games. In Section 4, we study the convergence and robustness properties of a particular no-regret learning algorithm, namely online mirror descent, in min-max Stackelberg games.
+
+§ 2 MATHEMATICAL PRELIMINARIES
+
+Our notational conventions can be found in Appendix A.
+
+Game Definitions A min-max Stackelberg game, $(X, Y, f, \mathbf{g})$ , is a two-player, zero-sum game, where one player, who we call the outer, or $\mathbf{x}$ -, player (resp. the inner, or $\mathbf{y}$ -, player), is trying to minimize their loss (resp. maximize their gain), defined by a continuous objective function $f : X \times Y \rightarrow \mathbb{R}$ , by taking an action from their strategy set $X \subset {\mathbb{R}}^{n}$ (resp. $Y \subset {\mathbb{R}}^{m}$ ) s.t. $\mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq 0$ where $\mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) = {\left( {g}_{1}\left( \mathbf{x},\mathbf{y}\right) ,\ldots ,{g}_{K}\left( \mathbf{x},\mathbf{y}\right) \right) }^{T}$ with ${g}_{k} : X \times Y \rightarrow \mathbb{R}$ continuous. A strategy profile $\left( {\mathbf{x},\mathbf{y}}\right) \in X \times Y$ is said to be feasible iff for all $k \in \left\lbrack K\right\rbrack$ , ${g}_{k}\left( {\mathbf{x},\mathbf{y}}\right) \geq 0$ . The function $f$ maps a pair of actions taken by the players $\left( {\mathbf{x},\mathbf{y}}\right) \in X \times Y$ to a real value (i.e., a payoff), which represents the loss (resp. the gain) of the $\mathbf{x}$ -player (resp. the $\mathbf{y}$ -player). A min-max game is said to be convex-concave if the objective function $f$ is convex-concave.
+
+One way to see this game is as a Stackelberg game, i.e., a sequential game with two players, where WLOG, we assume that the minimizing player moves first and the maximizing player moves second. The relevant solution concept for Stackelberg games is the Stackelberg equilibrium: A strategy profile $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \in X \times Y$ s.t. $\mathbf{g}\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \geq \mathbf{0}$ is an $\left( {\varepsilon ,\delta }\right)$ -Stackelberg equilibrium if $\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {{\mathbf{x}}^{ * },\mathbf{y}}\right) \geq 0}} f\left( {{\mathbf{x}}^{ * },\mathbf{y}}\right) - \delta \; \leq \;f\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \; \leq \mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq 0}}f\left( {\mathbf{x},\mathbf{y}}\right) + \varepsilon$ . Intuitively, an $\left( {\varepsilon ,\delta }\right)$ -Stackelberg equilibrium is a point at which the $\mathbf{x}$ -player’s (resp. $\mathbf{y}$ -player’s) payoff is no more than $\varepsilon$ (resp. $\delta$ ) away from its optimum. A $\left( {0,0}\right)$ -Stackelberg equilibrium is guaranteed to exist in min-max Stackelberg games (Goktas and Greenwald 2021). Note that when $\mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}$ , for all $\mathbf{x} \in X$ and $\mathbf{y} \in Y$ , the game reduces to a min-max game (with independent strategy sets), for which, by the min-max theorem, a Nash equilibrium is guaranteed to exist (Neumann 1928).
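
To see why the minimax theorem can fail under a dependency constraint, consider the following illustrative example (ours, not from the source): let $X = Y = [0, 1]$, $f(x, y) = x + y$, and $g(x, y) = x - y$, so the inner player's feasible set is $\{y \in [0,1] : y \leq x\}$. Then

$$
\mathop{\min }\limits_{{x \in \left\lbrack 0,1\right\rbrack }}\mathop{\max }\limits_{{y \in \left\lbrack 0,1\right\rbrack : x - y \geq 0}}\left( x + y\right) = \mathop{\min }\limits_{{x \in \left\lbrack 0,1\right\rbrack }}{2x} = 0, \qquad \mathop{\max }\limits_{{y \in \left\lbrack 0,1\right\rbrack }}\mathop{\min }\limits_{{x \in \left\lbrack 0,1\right\rbrack : x - y \geq 0}}\left( x + y\right) = \mathop{\max }\limits_{{y \in \left\lbrack 0,1\right\rbrack }}{2y} = 2,
$$

so the order of play matters, and the Stackelberg equilibrium $(x^*, y^*) = (0, 0)$, with value $0$, is the appropriate solution concept when the $\mathbf{x}$-player moves first.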
+
+In a min-max Stackelberg game, the outer player’s best-response set ${\mathrm{{BR}}}_{X} \subset X$ , defined as ${\mathrm{{BR}}}_{X} = \arg \mathop{\min }\limits_{{\mathbf{x} \in X}}{V}_{X}\left( \mathbf{x}\right)$ , is independent of the inner player's strategy, while the inner player's best-response correspondence ${\mathrm{{BR}}}_{Y} : X \rightrightarrows Y$ , defined as ${\mathrm{{BR}}}_{Y}\left( \mathbf{x}\right) = \arg \mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq 0}}{V}_{Y}\left( {\mathbf{y};\mathbf{x}}\right)$ , depends on the outer player’s strategy. A $\left( {0,0}\right)$ -Stackelberg equilibrium $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \in X \times Y$ is then a tuple of strategies such that $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \in {\mathrm{{BR}}}_{X} \times {\mathrm{{BR}}}_{Y}\left( {\mathbf{x}}^{ * }\right) .$
+
+A dynamic min-max Stackelberg game, ${\left\{ \left( X,Y,{f}^{\left( t\right) },{\mathbf{g}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$ , is a sequence of min-max Stackelberg games played for $T$ time periods. We define the players’ value functions at time $t$ in a dynamic min-max Stackelberg game in the obvious way. Note that when ${\mathbf{g}}^{\left( t\right) }\left( {\mathbf{x},\mathbf{y}}\right) \geq 0$ for all $\mathbf{x} \in X,\mathbf{y} \in Y$ and all time periods $t \in \left\lbrack T\right\rbrack$ , the game reduces to a dynamic min-max game (with independent strategy sets). Moreover, if $\forall t,{t}^{\prime } \in \left\lbrack T\right\rbrack ,{f}^{\left( t\right) } = {f}^{\left( {t}^{\prime }\right) }$ , and ${\mathbf{g}}^{\left( t\right) } = {\mathbf{g}}^{\left( {t}^{\prime }\right) }$ , then the game reduces to a (static) min-max Stackelberg game, which we denote simply by $\left( {X, Y, f,\mathbf{g}}\right)$ .
+
+Mathematical Preliminaries Given $A \subset {\mathbb{R}}^{n}$ , the function $f : A \rightarrow \mathbb{R}$ is said to be ${\ell }_{f}$ -Lipschitz-continuous iff $\forall {\mathbf{x}}_{1},{\mathbf{x}}_{2} \in A,\begin{Vmatrix}{f\left( {\mathbf{x}}_{1}\right) - f\left( {\mathbf{x}}_{2}\right) }\end{Vmatrix} \leq {\ell }_{f}\begin{Vmatrix}{{\mathbf{x}}_{1} - {\mathbf{x}}_{2}}\end{Vmatrix}$ . If the gradient of $f$ , $\nabla f$ , is ${\ell }_{\nabla f}$ -Lipschitz-continuous, we refer to $f$ as ${\ell }_{\nabla f}$ -Lipschitz-smooth. We provide a review of online convex optimization in Appendix A.
+
+§ 3 NO-REGRET LEARNING DYNAMICS
+
+In this section we explore no-regret learning dynamics in min-max Stackelberg games, and prove the convergence of no-regret learning dynamics in two settings: a pessimistic setting in which the outer player is a no-regret learner while the inner player best-responds, and an optimistic setting in which both players are no-regret learners. All the results in this paper rely on the following assumptions:
+
+Assumption 1. 1. (Slater's condition (Slater 1959, 2014)) $\forall \mathbf{x} \in X,\exists \widehat{\mathbf{y}} \in Y$ s.t. ${g}_{k}\left( {\mathbf{x},\widehat{\mathbf{y}}}\right) > 0$ for all $k = 1,\ldots , K$ ; 2. $f,{g}_{1},\ldots ,{g}_{K}$ are continuous and convex-concave; and 3. ${\nabla }_{\mathbf{x}}f,{\nabla }_{\mathbf{x}}{g}_{1},\ldots ,{\nabla }_{\mathbf{x}}{g}_{K}$ are well-defined for all $\left( {\mathbf{x},\mathbf{y}}\right) \in X \times Y$ and continuous in $\left( {\mathbf{x},\mathbf{y}}\right)$ .
+
+We note that these assumptions are in line with previous work geared towards solving min-max Stackelberg games (Goktas and Greenwald 2021). Part 1 of Assumption 1, Slater's condition, is a standard constraint qualification condition (Boyd, Boyd, and Vandenberghe 2004), which is needed to derive the optimality conditions for the inner player's maximization problem; without it the problem becomes analytically intractable. Part 2 of Assumption 1 is required for the value function of the outer player to be continuous and convex ((Goktas and Greenwald 2021), Proposition A1) so that the problem is efficiently solvable. Finally, we note that Part 3 of Assumption 1 can be replaced by a subgradient boundedness assumption; however, for simplicity, we assume this stronger condition.
+
+§ PESSIMISTIC LEARNING SETTING
+
+In Stackelberg games, the leader decides their strategy assuming that the inner player will best respond, which leads us to first consider a repeated game setting in which the inner player always best responds to the strategy picked by the outer player. Such a setting also makes sense because, in zero-sum Stackelberg games, the outer and inner players are adversaries, and in most applications of interest we are concerned with optimal strategies for the outer player; hence, assuming a strong adversary that always best responds yields more robust strategies for the outer player.
+
+For any $\mathbf{x} \in X$ , denote ${\mathbf{y}}^{ * }\left( \mathbf{x}\right) \in {\mathrm{{BR}}}_{Y}\left( \mathbf{x}\right)$ . In such a setting, intuitively, the regret should equal the difference between the cumulative loss of the outer player w.r.t. their sequence of actions, to which the inner player best responds, and the smallest cumulative loss that the outer player could have achieved by picking a fixed strategy to which the inner player best responds, i.e., $\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{f}^{\left( t\right) }\left( {{\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{ * }\left( {\mathbf{x}}^{\left( t\right) }\right) }\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{f}^{\left( t\right) }\left( {\mathbf{x},{\mathbf{y}}^{ * }\left( \mathbf{x}\right) }\right)$ . We call this the pessimistic regret, which can be more conveniently defined as the regret incurred by an action $\mathbf{x} \in X$ of the outer player w.r.t. a sequence of actions ${\left\{ {\mathbf{x}}^{\left( t\right) }\right\} }_{t = 1}^{T}$ in a dynamic min-max Stackelberg game ${\left\{ \left( X,Y,{f}^{\left( t\right) },{\mathbf{g}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$ , where the loss is given by the value functions ${\left\{ {V}_{X}^{\left( t\right) }\right\} }_{t = 1}^{T}$ , i.e.:
+
+$$
+{\operatorname{PesRegret}}_{X}^{\left( T\right) }\left( \mathbf{x}\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{V}_{X}^{\left( t\right) }\left( {\mathbf{x}}^{\left( t\right) }\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{V}_{X}^{\left( t\right) }\left( \mathbf{x}\right) \tag{1}
+$$
+
+That is, the pessimistic regret of the outer player compares the outer player's play history to the smallest cumulative loss the outer player could achieve by picking a fixed strategy assuming that the inner player best-responds. It is pessimistic in the sense that the outer player assumes the worst possible outcome for themself.
+
+The main theorem in this section states the following: assuming the inner player best responds to the actions of the outer player, if the outer player employs a no-regret algorithm, then the outer player's average strategy converges to a Stackelberg equilibrium. Before presenting this theorem, ${}^{2}$ we recall the following property of the outer player's value function.
+
+Proposition 2 ((Goktas and Greenwald 2021), Proposition A.1). In a min-max Stackelberg game $(X, Y, f, \mathbf{g})$ , the outer player’s value function, $V\left( \mathbf{x}\right) = \mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right)$ , is continuous and convex.
+
+${}^{2}$ The proofs of all mathematical claims in this section can be found in Appendix C.
+
+Theorem 3. Consider a min-max Stackelberg game (X, Y, f, g), and suppose the outer player plays a sequence of actions ${\left\{ {\mathbf{x}}^{\left( t\right) }\right\} }_{t = 1}^{T} \subset X$ . If, after $T$ iterations, the outer player’s pessimistic regret is bounded by $\varepsilon$ for all $\mathbf{x} \in X$ , then $\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) }\right)$ is a $\left( {\varepsilon ,0}\right)$ -Stackelberg equilibrium, where ${\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) \in {\mathrm{{BR}}}_{Y}\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right)$ .
+
We remark that even though the definition of pessimistic regret looks similar to the standard definition of regret, its structure is very different. In particular, without Proposition 2, it is not clear that the value $\frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{f}^{\left( t\right) }\left( {\mathbf{x},{\mathbf{y}}^{ * }\left( \mathbf{x}\right) }\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{V}^{\left( t\right) }\left( \mathbf{x}\right)$ is convex in $\mathbf{x}$.
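To make the definition concrete, here is a minimal numerical sketch (our own toy illustration, not from the paper). In the game $\min_{x \in [-1,1]} \max_{y \in [-1,1]: x+y \leq 1} x^2 + y + 1$, the inner player's best response gives the closed-form value function $V(x) = x^2 + \min(1, 1-x) + 1$; the outer player runs projected gradient descent on $V$, and we evaluate its pessimistic regret against the comparator $x = 1/2$:

```python
# Pessimistic regret of a gradient-descent outer player in the toy game
#   min_{x in [-1,1]} max_{y in [-1,1]: x+y <= 1} x^2 + y + 1.
# With the inner player best responding (y = min(1, 1-x)), the outer player
# effectively runs online gradient descent on V(x) = x^2 + min(1, 1-x) + 1.

def V(x):
    return x**2 + min(1.0, 1.0 - x) + 1.0

def grad_V(x):
    # subgradient of V: d/dx [x^2 + (1 - x)] for x > 0, d/dx [x^2 + 1] for x <= 0
    return 2.0 * x - 1.0 if x > 0 else 2.0 * x

T, eta = 500, 0.1
x, history = 0.9, []
for _ in range(T):
    history.append(x)
    x = min(1.0, max(-1.0, x - eta * grad_V(x)))  # projected GD step

# PesRegret_X^(T)(x') = (1/T) sum_t V(x_t) - V(x'), with comparator x' = 1/2
pes_regret = sum(V(xt) for xt in history) / T - V(0.5)
print(pes_regret)  # nonnegative, and shrinking as T grows
```

The regret is nonnegative here because $x = 1/2$ minimizes every per-period value function, and it vanishes as $T$ grows.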
+
## Optimistic Learning Setting
+
We now turn our attention to a learning setting in which both players are no-regret learners. The most straightforward way to define regret is by considering the outer and inner players' "vanilla" regrets, respectively: ${\operatorname{Regret}}_{X}^{\left( T\right) }\left( \mathbf{x}\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}f\left( {{\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }}\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}f\left( {\mathbf{x},{\mathbf{y}}^{\left( t\right) }}\right)$ and ${\operatorname{Regret}}_{Y}^{\left( T\right) }\left( \mathbf{y}\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}f\left( {{\mathbf{x}}^{\left( t\right) },\mathbf{y}}\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}f\left( {{\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }}\right)$. In convex-concave min-max games (with independent strategy sets), when both players minimize their vanilla regret, the players' average strategies converge to Nash equilibrium. In min-max Stackelberg games (with dependent strategy sets), however, convergence to a Stackelberg equilibrium is not guaranteed.
+
Example 4. Consider the min-max Stackelberg game $\mathop{\min }\limits_{{x \in \left\lbrack {-1,1}\right\rbrack }}\mathop{\max }\limits_{{y \in \left\lbrack {-1,1}\right\rbrack : 0 \leq 1 - \left( {x + y}\right) }}{x}^{2} + y + 1$. The Stackelberg equilibrium of this game is given by ${x}^{ * } = 1/2, {y}^{ * } = 1/2$. Suppose both players employ no-regret algorithms that generate strategies ${\left\{ \left( {x}^{\left( t\right) },{y}^{\left( t\right) }\right) \right\} }_{t \in {\mathbb{N}}_{ + }}$. Then at time $T \in {\mathbb{N}}_{ + }$, there exists $\varepsilon > 0$ s.t.
+
$$
\left\{ \begin{array}{l} \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}\left\lbrack {{{x}^{\left( t\right) }}^{2} + {y}^{\left( t\right) } + 1}\right\rbrack - \frac{1}{T}\mathop{\min }\limits_{{x \in \left\lbrack {-1,1}\right\rbrack }}\mathop{\sum }\limits_{{t = 1}}^{T}\left\lbrack {{x}^{2} + {y}^{\left( t\right) } + 1}\right\rbrack \leq \varepsilon \\ \frac{1}{T}\mathop{\max }\limits_{{y \in \left\lbrack {-1,1}\right\rbrack }}\mathop{\sum }\limits_{{t = 1}}^{T}\left\lbrack {{{x}^{\left( t\right) }}^{2} + y + 1}\right\rbrack - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}\left\lbrack {{{x}^{\left( t\right) }}^{2} + {y}^{\left( t\right) } + 1}\right\rbrack \leq \varepsilon \end{array}\right. \tag{2}
$$
+
+Simplifying yields:
+
+$$
+\left\{ \begin{matrix} \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{x}^{{\left( t\right) }^{2}} - \mathop{\min }\limits_{{x \in \left\lbrack {-1,1}\right\rbrack }}{x}^{2} \leq \varepsilon \\ \mathop{\max }\limits_{{y \in \left\lbrack {-1,1}\right\rbrack }}y - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{y}^{\left( t\right) } \leq \varepsilon \end{matrix}\right. \tag{3}
+$$
+
Since both players are no-regret learners, for any $\varepsilon > 0$ there exists $T \in {\mathbb{N}}_{ + }$ large enough s.t.

$$
\left\{ \begin{matrix} \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{{x}^{\left( t\right) }}^{2} \leq \mathop{\min }\limits_{{x \in \left\lbrack {-1,1}\right\rbrack }}{x}^{2} + \varepsilon = \varepsilon \\ \mathop{\max }\limits_{{y \in \left\lbrack {-1,1}\right\rbrack }}y - \varepsilon = 1 - \varepsilon \leq \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{y}^{\left( t\right) } \end{matrix}\right. \tag{4}
$$

In other words, as $\varepsilon \rightarrow 0$, the average iterates converge to $x = 0, y = 1$, which is not the Stackelberg equilibrium of this game.
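Example 4's dynamics can be reproduced numerically; the following sketch (our own, not from the paper) runs projected gradient descent/ascent directly on the payoff, i.e., vanilla-regret-minimizing dynamics that ignore the joint constraint:

```python
# Vanilla-regret dynamics for Example 4: min_x max_y x^2 + y + 1 on [-1,1]^2,
# with the joint constraint x + y <= 1 ignored by both players.
def clip(z):
    return min(1.0, max(-1.0, z))

x, y, eta = 0.9, -0.9, 0.05
for _ in range(2000):
    x = clip(x - eta * 2.0 * x)   # gradient descent in x: df/dx = 2x
    y = clip(y + eta * 1.0)       # gradient ascent in y:  df/dy = 1

print(x, y)  # converges to x = 0, y = 1: not the Stackelberg equilibrium (1/2, 1/2)
```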
+
+If the inner player minimizes their vanilla regret without regard to the game's constraints, then their actions are not guaranteed to be feasible, and thus cannot converge to a Stackelberg equilibrium. To remedy this infeasibility, we introduce a new type of regret we call Lagrangian regret, and show that assuming access to a solution oracle for the optimal KKT multipliers of the game's constraints, if both players minimize their Lagrangian regret, then no-regret learning dynamics converge to a Stackelberg equilibrium.
+
Define ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right) = f\left( {\mathbf{x},\mathbf{y}}\right) + \mathop{\sum }\limits_{{k = 1}}^{K}{\lambda }_{k}{g}_{k}\left( {\mathbf{x},\mathbf{y}}\right)$ to be the Lagrangian associated with the outer player's value function, or equivalently, the inner player's maximization problem given the outer player's strategy $\mathbf{x} \in X$. If the optimal KKT multipliers ${\mathbf{\lambda }}^{ * } \in {\mathbb{R}}^{K}$, which are guaranteed to exist by Slater's condition (Slater 1959), were known for the problem $\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\mathbf{x},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\mathbf{x},\mathbf{y}}\right) = \mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}\mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right)$, then one could plug them back into the Lagrangian to obtain a convex-concave saddle point problem given by $\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$. Note that a saddle point of this problem is guaranteed to exist by the minimax theorem (Neumann 1928), since ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$ is convex in $\mathbf{x}$ and concave in $\mathbf{y}$. The next lemma states that the Stackelberg equilibria of a min-max Stackelberg game correspond to the saddle points of ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$.
+
Lemma 5. Any Stackelberg equilibrium $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \in X \times Y$ of any min-max Stackelberg game (X, Y, f, g) corresponds to a saddle point of ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$, where ${\mathbf{\lambda }}^{ * } \in \arg \mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right)$.
+
This lemma tells us that the function ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$ represents a new loss function that enforces the game's constraints. Based on this observation, we assume access to a Lagrangian solution oracle that provides us with ${\mathbf{\lambda }}^{ * } \in \arg \mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right)$.
+
Formally, given a sequence of actions ${\left\{ \left( {\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$ taken by the outer and inner players in a dynamic min-max Stackelberg game ${\left\{ \left( X,Y,{f}^{\left( t\right) },{\mathbf{g}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$, we define their Lagrangian regrets, respectively, as ${\operatorname{LagrRegret}}_{X}^{\left( T\right) }\left( \mathbf{x}\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}^{\left( t\right) }\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{\mathbf{x}}^{\left( t\right) }\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right)$ and ${\operatorname{LagrRegret}}_{Y}^{\left( T\right) }\left( \mathbf{y}\right) = \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}^{\left( t\right) }\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) - \frac{1}{T}\mathop{\sum }\limits_{{t = 1}}^{T}{\mathcal{L}}_{{\mathbf{x}}^{\left( t\right) }}^{\left( t\right) }\left( {{\mathbf{y}}^{\left( t\right) },{\mathbf{\lambda }}^{ * }}\right)$.
+
The saddle point residual of a point $\left( {{\mathbf{x}}^{ * },{\mathbf{y}}^{ * }}\right) \in X \times Y$ with respect to a convex-concave function $f : X \times Y \rightarrow \mathbb{R}$ is given by $\mathop{\max }\limits_{{\mathbf{y} \in Y}}f\left( {{\mathbf{x}}^{ * },\mathbf{y}}\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}f\left( {\mathbf{x},{\mathbf{y}}^{ * }}\right)$. When the saddle point residual is 0, the saddle point is a $(0,0)$-Stackelberg equilibrium.
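As a concrete numerical check (ours, not from the paper), take the game of Example 4, whose optimal KKT multiplier is $\lambda^* = 1$; the saddle point residual of its Lagrangian, evaluated over a grid, is zero at the Stackelberg equilibrium $(1/2, 1/2)$:

```python
# Saddle point residual of the Lagrangian L_x(y, λ*) for the game
#   min_{x in [-1,1]} max_{y in [-1,1]: 1-(x+y) >= 0} x^2 + y + 1, with λ* = 1.
f = lambda x, y: x**2 + y + 1.0
g = lambda x, y: 1.0 - (x + y)
L = lambda x, y: f(x, y) + 1.0 * g(x, y)      # λ* = 1 plugged in

grid = [i / 100.0 for i in range(-100, 101)]  # grid over [-1, 1]
x_star, y_star = 0.5, 0.5                     # Stackelberg equilibrium

residual = max(L(x_star, y) for y in grid) - min(L(x, y_star) for x in grid)
print(residual)  # ~0: (x*, y*) is a saddle point of L_x(y, λ*)
```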
+
+The main theorem of this section now follows: if both players play so as to minimize their Lagrangian regret, then their average strategies converge to a Stackelberg equilibrium. The bound is given in terms of the saddle point residual of ${\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right)$ .
+
Theorem 6. Consider a min-max Stackelberg game (X, Y, f, g), and suppose the outer and inner players generate sequences of actions ${\left\{ \left( {\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T} \subset X \times Y$ using a no-Lagrangian-regret algorithm. If, after $T$ iterations, the Lagrangian regret of both players is bounded by $\varepsilon$ for all $\mathbf{x} \in X$ and $\mathbf{y} \in Y$, the following convergence bound holds on the saddle point residual of $\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\overline{\mathbf{y}}}^{\left( T\right) }}\right)$ w.r.t. the Lagrangian: $0 \leq \mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{{\overline{\mathbf{x}}}^{\left( T\right) }}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}{\mathcal{L}}_{\mathbf{x}}\left( {{\overline{\mathbf{y}}}^{\left( T\right) },{\mathbf{\lambda }}^{ * }}\right) \leq {2\varepsilon }.$
+
Having established the convergence of general no-regret learning dynamics to Stackelberg equilibrium in min-max Stackelberg games, we now proceed to investigate the convergence and robustness properties of a specific no-regret learning dynamic, namely online mirror descent (OMD) dynamics.
+
## 4 Online Mirror Descent
+
In this section, we apply the results derived above for no-regret learning dynamics to Online Mirror Descent (OMD) (Zinkevich 2003; Shalev-Shwartz et al. 2011), and then study the robustness properties of OMD in min-max Stackelberg games.
+
### Convergence Analysis
+
+When the outer player is an OMD learner minimizing its pessimistic regret and the inner player best responds, we obtain the max-oracle gradient descent algorithm (Algorithm 1 - Appendix D) first proposed by Jin, Netrapalli, and Jordan (2020) for min-max games.
+
Following Jin, Netrapalli, and Jordan (2020), Goktas and Greenwald extend the max-oracle gradient descent algorithm to min-max Stackelberg games and prove its convergence in best iterates. The following corollary of Theorem 3, which concerns convergence of this algorithm in average iterates, complements their result: the max-oracle gradient descent algorithm is guaranteed to converge to an $\left( {\varepsilon ,0}\right)$-Stackelberg equilibrium strategy of the outer player in average iterates after $O\left( {1/{\varepsilon }^{2}}\right)$ iterations, assuming the inner player best responds.
+
We note that since ${V}_{X}$ is convex, by Proposition 2, ${V}_{X}$ is subdifferentiable. Moreover, for all $\widehat{\mathbf{x}} \in X, \widehat{\mathbf{y}} \in {\mathrm{{BR}}}_{Y}\left( \widehat{\mathbf{x}}\right)$, ${\nabla }_{\mathbf{x}}f\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right) + \mathop{\sum }\limits_{{k = 1}}^{K}{\lambda }_{k}^{ * }{\nabla }_{\mathbf{x}}{g}_{k}\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right)$ is a subgradient of the value function at $\widehat{\mathbf{x}}$ by Goktas and Greenwald's subdifferential envelope theorem (2021). We add that, like Goktas and Greenwald, we assume that the optimal KKT multipliers ${\mathbf{\lambda }}^{ * }\left( {{\mathbf{x}}^{\left( t\right) },\widehat{\mathbf{y}}\left( {\mathbf{x}}^{\left( t\right) }\right) }\right)$ associated with a solution $\widehat{\mathbf{y}}\left( {\mathbf{x}}^{\left( t\right) }\right)$ can be computed in constant time.
+
Corollary 7. Let $c = \mathop{\max }\limits_{{\mathbf{x} \in X}}\parallel \mathbf{x}\parallel$ and ${\ell }_{f} = \mathop{\max }\limits_{{\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right) \in X \times Y}}\begin{Vmatrix}{{\nabla }_{\mathbf{x}}f\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right) }\end{Vmatrix}$. If Algorithm 1 (Appendix D) is run on a min-max Stackelberg game (X, Y, f, g) with ${\eta }_{t} = \frac{c}{{\ell }_{f}\sqrt{2T}}$ for all iterations $t \in \left\lbrack T\right\rbrack$ and any ${\mathbf{x}}^{\left( 0\right) } \in X$, then $\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( T\right) }\right) }\right)$ is a $\left( {c{\ell }_{f}\sqrt{2}/\sqrt{T},0}\right)$-Stackelberg equilibrium. Furthermore, for $\varepsilon \in \left( {0,1}\right)$, if we choose $T \geq {N}_{T}\left( \varepsilon \right) \in O\left( {1/{\varepsilon }^{2}}\right)$, then there exists an iteration ${T}^{ * } \leq T$ s.t. $\left( {{\overline{\mathbf{x}}}^{\left( {T}^{ * }\right) },{\mathbf{y}}^{ * }\left( {\overline{\mathbf{x}}}^{\left( {T}^{ * }\right) }\right) }\right)$ is an $\left( {\varepsilon ,0}\right)$-Stackelberg equilibrium.
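Algorithm 1 itself is relegated to Appendix D; the following is a minimal sketch (our own illustration, not the algorithm verbatim) of max-oracle gradient descent on the game of Example 4, where the best response and the envelope-theorem subgradient are available in closed form:

```python
import math

# Max-oracle gradient descent on min_{x in [-1,1]} max_{y in [-1,1]: x+y<=1} x^2+y+1.
def best_response(x):
    # inner player's best response: push y as high as the constraint allows
    return min(1.0, 1.0 - x)

def subgrad_V(x, y):
    # envelope-theorem subgradient: grad_x f + λ* grad_x g, where λ* = 1 when
    # the constraint 1 - (x + y) >= 0 binds (df/dy = 1 > 0) and λ* = 0 otherwise
    lam = 1.0 if 1.0 - (x + y) <= 1e-9 else 0.0
    return 2.0 * x + lam * (-1.0)

T = 1000
c, ell_f = 1.0, 2.0                      # ||x|| <= 1 and max |df/dx| = 2 on X x Y
eta = c / (ell_f * math.sqrt(2 * T))     # step size from Corollary 7
x, avg = 0.9, 0.0
for _ in range(T):
    y = best_response(x)                 # max oracle
    avg += x / T                         # running average iterate
    x = min(1.0, max(-1.0, x - eta * subgrad_V(x, y)))

print(avg, best_response(avg))  # near the Stackelberg equilibrium (1/2, 1/2)
```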
+
Note that we can relax Theorem 3 to instead work with an approximate best response of the inner player: given the strategy of the outer player $\widehat{\mathbf{x}}$, instead of playing an exact best response, the inner player computes a $\widehat{\mathbf{y}}$ s.t. $f\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right) \geq \mathop{\max }\limits_{{\mathbf{y} \in Y : \mathbf{g}\left( {\widehat{\mathbf{x}},\mathbf{y}}\right) \geq \mathbf{0}}}f\left( {\widehat{\mathbf{x}},\mathbf{y}}\right) - \varepsilon$. Combined with results on the convergence of gradient ascent on smooth functions, the average iterates computed by Goktas and Greenwald's nested GDA algorithm converge to an $\left( {\varepsilon ,\varepsilon }\right)$-Stackelberg equilibrium in $O\left( {1/{\varepsilon }^{3}}\right)$ iterations. If, additionally, $f$ is strongly concave in $\mathbf{y}$, then the iteration complexity can be reduced to $O\left( {1/{\varepsilon }^{2}\log \left( {1/\varepsilon }\right) }\right)$.
+
Similarly, we can also consider the optimistic case, in which both the outer and inner players minimize their Lagrangian regrets, as OMD learners with access to a Lagrangian solution oracle that returns ${\mathbf{\lambda }}^{ * } \in \arg \mathop{\min }\limits_{{\mathbf{\lambda } \geq \mathbf{0}}}\mathop{\min }\limits_{{\mathbf{x} \in X}}\mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{\mathbf{x}}\left( {\mathbf{y},\mathbf{\lambda }}\right)$. In this case, we obtain the Lagrangian GDA (LGDA) algorithm (Algorithm 2 - Appendix D). The following corollary of Theorem 6 states that LGDA converges in average iterates to an approximate Stackelberg equilibrium in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations.
+
Corollary 8. Let $b = \mathop{\max }\limits_{{\mathbf{x} \in X}}\parallel \mathbf{x}\parallel, c = \mathop{\max }\limits_{{\mathbf{y} \in Y}}\parallel \mathbf{y}\parallel$, and ${\ell }_{\mathcal{L}} = \mathop{\max }\limits_{{\left( {\widehat{\mathbf{x}},\widehat{\mathbf{y}}}\right) \in X \times Y}}\begin{Vmatrix}{{\nabla }_{\mathbf{x}}{\mathcal{L}}_{\widehat{\mathbf{x}}}\left( {\widehat{\mathbf{y}},{\mathbf{\lambda }}^{ * }}\right) }\end{Vmatrix}$. If Algorithm 2 (Appendix D) is run on a min-max Stackelberg game (X, Y, f, g) with ${\eta }_{t}^{\mathbf{x}} = \frac{b}{{\ell }_{\mathcal{L}}\sqrt{2T}}$ and ${\eta }_{t}^{\mathbf{y}} = \frac{c}{{\ell }_{\mathcal{L}}\sqrt{2T}}$ for all iterations $t \in \left\lbrack T\right\rbrack$ and any ${\mathbf{x}}^{\left( 0\right) } \in X$, then the following convergence bound holds on the saddle point residual of $\left( {{\overline{\mathbf{x}}}^{\left( T\right) },{\overline{\mathbf{y}}}^{\left( T\right) }}\right)$ w.r.t. the Lagrangian:
+
$$
0 \leq \mathop{\max }\limits_{{\mathbf{y} \in Y}}{\mathcal{L}}_{{\overline{\mathbf{x}}}^{\left( T\right) }}\left( {\mathbf{y},{\mathbf{\lambda }}^{ * }}\right) - \mathop{\min }\limits_{{\mathbf{x} \in X}}{\mathcal{L}}_{\mathbf{x}}\left( {{\overline{\mathbf{y}}}^{\left( T\right) },{\mathbf{\lambda }}^{ * }}\right) \leq \frac{2\sqrt{2}{\ell }_{\mathcal{L}}}{\sqrt{T}}\max \{ b,c\} \tag{5}
$$
+
We remark that in certain rare cases the Lagrangian can become degenerate in $\mathbf{y}$, in that the $\mathbf{y}$ terms in the Lagrangian might cancel out when ${\mathbf{\lambda }}^{ * }$ is plugged back into the Lagrangian, leading LGDA not to update the $\mathbf{y}$ variables, as demonstrated by the following example:
+
Example 9. Consider the min-max Stackelberg game $\mathop{\min }\limits_{{x \in \left\lbrack {-1,1}\right\rbrack }}\mathop{\max }\limits_{{y \in \left\lbrack {-1,1}\right\rbrack : 0 \leq 1 - \left( {x + y}\right) }}{x}^{2} + y + 1$. When we plug the optimal KKT multiplier ${\lambda }^{ * } = 1$ into the Lagrangian associated with the outer player's value function, we obtain ${\mathcal{L}}_{x}\left( {y,{\lambda }^{ * }}\right) = {x}^{2} + y + 1 + \left( 1 - \left( {x + y}\right) \right) = {x}^{2} - x + 2$, with $\frac{\partial \mathcal{L}}{\partial x} = {2x} - 1$ and $\frac{\partial \mathcal{L}}{\partial y} = 0$. It follows that the $x$ iterate converges to $1/2$, but the $y$ iterate will never be updated; hence, unless $y$ is initialized to its Stackelberg equilibrium value, LGDA will not converge to a Stackelberg equilibrium.
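The degeneracy in Example 9 is easy to observe numerically; in this sketch (ours, not from the paper), the $y$-gradient of the Lagrangian is identically zero, so LGDA never moves $y$ off its initialization:

```python
# LGDA on Example 9 with λ* = 1: L_x(y, λ*) = x^2 + y + 1 + (1 - (x + y)).
def grad_x(x, y):
    return 2.0 * x - 1.0   # dL/dx = 2x - 1

def grad_y(x, y):
    return 1.0 - 1.0       # dL/dy = 1 - λ* = 0: the y terms cancel exactly

x, y, eta = -0.8, -0.3, 0.1
y0 = y
for _ in range(500):
    x = min(1.0, max(-1.0, x - eta * grad_x(x, y)))  # descent step in x
    y = min(1.0, max(-1.0, y + eta * grad_y(x, y)))  # ascent step in y (no-op)

print(x, y)  # x converges to 1/2; y is stuck at its initial value
```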
+
In general, this degeneracy issue occurs when $\forall \mathbf{x} \in X, {\nabla }_{\mathbf{y}}f\left( {\mathbf{x},\mathbf{y}}\right) = - \mathop{\sum }\limits_{{k = 1}}^{K}{\lambda }_{k}^{ * }{\nabla }_{\mathbf{y}}{g}_{k}\left( {\mathbf{x},\mathbf{y}}\right)$. We can sidestep the issue by restricting our attention to min-max Stackelberg games with convex-strictly-concave objective functions, which is sufficient to ensure that the Lagrangian is not degenerate in $\mathbf{y}$ (Boyd, Boyd, and Vandenberghe 2004).
+
### Robustness Analysis
+
Although the OMD dynamics analyzed in the previous section are dynamic in nature, they assume that the game and its properties, i.e., the objective function and constraints, are static and do not change over time. In many real-world games, however, the game itself is subject to perturbations, i.e., dynamic changes: the agents' objectives and constraints might be perturbed by external influences. Providing dynamics that are robust to ongoing changes in games is therefore critical, since the real world is rarely static.
+
This makes the study of dynamic min-max Stackelberg games, and the optimal dynamic strategies for both players in such games, an important goal. Dynamic games bring with them a series of interesting issues; notably, even though the environment might change at each time period, the game still exhibits a Stackelberg equilibrium in every period. However, one cannot sensibly expect the players to play a Stackelberg equilibrium strategy at each time period, since even in the static setting, known game dynamics require multiple time steps for players to reach even an approximate Stackelberg equilibrium. When players cannot directly best respond or pick the optimal strategy for themselves, they essentially become boundedly rational agents: they can take a step towards their optimal strategy but cannot reach it in a single time step. Hence, in dynamic games, equilibria also become dynamic objects, which can never be reached unless the game stops changing significantly.
+
Corollaries 7 and 8 tell us that OMD dynamics are effective equilibrium-finding strategies in min-max Stackelberg games. However, they do not provide any intuition about the robustness of OMD dynamics to perturbations in the game. That is, we would like to know whether or not OMD dynamics are able to track the equilibrium even when the game changes slowly. Robustness is a desirable property for no-regret learning dynamics, as many real-world applications of games involve changing environments. In this section, we provide theoretical guarantees showing that even when the game changes at each iteration, OMD dynamics closely track the changing equilibria of the dynamic game. Unfortunately, our theoretical results only concern min-max games (with independent strategy sets). Nevertheless, we provide experimental evidence suggesting that the results we prove may also apply more broadly to min-max Stackelberg games (with dependent strategy sets).
+
We first consider the pessimistic setting, in which the outer player is a no-regret learner and the inner player best responds. In this setting, we show that when the outer player follows online projected gradient descent dynamics in a dynamic min-max game, i.e., a min-max game in which the objective function constantly changes, the outer player's strategies closely track their Stackelberg equilibrium strategy. Intuitively, the following result implies that, irrespective of the initial strategy of the outer player, online projected gradient descent dynamics follow the Nash equilibrium strategy of the outer player, in the sense that the strategy chosen by the outer player always remains within a ${2d}/\delta$ radius of the outer player's Nash equilibrium strategy.
+
+Theorem 10. Consider a dynamic min-max game ${\left\{ \left( X,Y,{f}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$ . Suppose that, for all $t \in \left\lbrack T\right\rbrack ,{f}^{\left( t\right) }$ is $\mu$ -strongly convex in $\mathbf{x}$ and strictly concave in $\mathbf{y}$ , and ${f}^{\left( t\right) }$ is ${\ell }_{\nabla f}$ -Lipschitz smooth. Suppose that the outer player generates a sequence of actions ${\left\{ {\mathbf{x}}^{\left( t\right) }\right\} }_{t = 1}^{T} \subset X$ by using an online projected gradient descent algorithm on the loss functions ${\left\{ {V}^{\left( t\right) }\right\} }_{t = 1}^{T}$ with learning rate $\eta \leq \frac{2}{\mu + {\ell }_{\nabla f}}$ and suppose that the inner player generates a sequence of best-responses to each iterate of the outer player ${\left\{ {\mathbf{y}}^{\left( t\right) }\right\} }_{t = 1}^{T} \subset Y$ . For all $t \in \left\lbrack T\right\rbrack$ , let ${\mathbf{x}}^{{\left( t\right) }^{ * }} \in \arg \mathop{\min }\limits_{{\mathbf{x} \in X}}{V}^{\left( t\right) }\left( \mathbf{x}\right)$ , ${\Delta }^{\left( t\right) } = \begin{Vmatrix}{{\mathbf{x}}^{{\left( t + 1\right) }^{ * }} - {\mathbf{x}}^{{\left( t\right) }^{ * }}}\end{Vmatrix}$ , and $\delta = \frac{{2\eta \mu }{\ell }_{\nabla f}}{{\ell }_{\nabla f} + \mu }$ , we then have:
+
$$
\begin{Vmatrix}{{\mathbf{x}}^{{\left( T\right) }^{ * }} - {\mathbf{x}}^{\left( T\right) }}\end{Vmatrix} \leq {\left( 1 - \delta \right) }^{T/2}\begin{Vmatrix}{{\mathbf{x}}^{{\left( 0\right) }^{ * }} - {\mathbf{x}}^{\left( 0\right) }}\end{Vmatrix} + \mathop{\sum }\limits_{{t = 1}}^{T}{\left( 1 - \delta \right) }^{\frac{T - t}{2}}{\Delta }^{\left( t\right) } \tag{6}
$$
+
+If additionally, for all $t \in \left\lbrack T\right\rbrack ,{\Delta }^{\left( t\right) } \leq d$ , then:
+
+$$
+\begin{Vmatrix}{{\mathbf{x}}^{{\left( T\right) }^{ * }} - {\mathbf{x}}^{\left( T\right) }}\end{Vmatrix} \leq {\left( 1 - \delta \right) }^{T/2}\begin{Vmatrix}{{\mathbf{x}}^{{\left( 0\right) }^{ * }} - {\mathbf{x}}^{\left( 0\right) }}\end{Vmatrix} + \frac{2d}{\delta } \tag{7}
+$$
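A minimal simulation of this pessimistic tracking result (our own construction, not an experiment from the paper): take $f^{(t)}(x,y) = (x - a_t)^2 - (y - a_t)^2$ on $[-1,1]^2$, so the inner player's best response yields $V^{(t)}(x) = (x - a_t)^2$ with minimizer $x^{(t)^*} = a_t$, and let the outer player run online projected gradient descent on $V^{(t)}$:

```python
import math

# Dynamic min-max game f^(t)(x, y) = (x - a_t)^2 - (y - a_t)^2 on [-1,1]^2.
# The inner player best responds (y = a_t), so V^(t)(x) = (x - a_t)^2.
def a(t):
    return 0.5 * math.sin(0.02 * t)   # slowly drifting equilibrium strategy

eta = 0.25                            # satisfies eta <= 2 / (mu + ell) = 1/2
x, errors = 0.9, []
for t in range(1000):
    errors.append(abs(x - a(t)))      # distance to the current equilibrium
    x = min(1.0, max(-1.0, x - eta * 2.0 * (x - a(t))))  # projected GD on V^(t)

print(max(errors[100:]))  # after the transient, x stays within a small radius of a_t
```

With per-step drift $d \leq 0.01$ and contraction factor $1/2$, the steady-state tracking error is at most $2d = 0.02$, in line with the $2d/\delta$ radius in the theorem.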
+
+We can extend a similar robustness result to the setting in which the outer and inner players are both OMD learners. The following theorem implies that irrespective of the initial strategies of the two players, online projected gradient descent dynamics follow the Nash equilibrium of the game, always staying within a ${4d}/\delta$ radius.
+
Theorem 11. Consider a dynamic min-max game ${\left\{ {G}_{t}\right\} }_{t = 1}^{T} = {\left\{ \left( X,Y,{f}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$. Suppose that, for all $t \in \left\lbrack T\right\rbrack$, ${f}^{\left( t\right) }$ is ${\mu }_{\mathbf{x}}$-strongly convex in $\mathbf{x}$, ${\mu }_{\mathbf{y}}$-strongly concave in $\mathbf{y}$, and ${\ell }_{\nabla f}$-Lipschitz smooth. Let ${\left\{ \left( {\mathbf{x}}^{\left( t\right) },{\mathbf{y}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T} \subset X \times Y$ be the strategies generated by the outer and inner players, assuming that the outer player uses an online projected gradient descent algorithm on the losses ${\left\{ {f}^{\left( t\right) }\left( \cdot ,{\mathbf{y}}^{\left( t\right) }\right) \right\} }_{t = 1}^{T}$ with ${\eta }_{\mathbf{x}} = \frac{2}{{\mu }_{\mathbf{x}} + {\ell }_{\nabla f}}$ and that the inner player uses an online projected gradient descent algorithm on the losses ${\left\{ -{f}^{\left( t\right) }\left( {\mathbf{x}}^{\left( t\right) }, \cdot \right) \right\} }_{t = 1}^{T}$ with ${\eta }_{\mathbf{y}} = \frac{2}{{\mu }_{\mathbf{y}} + {\ell }_{\nabla f}}$. For all $t \in \left\lbrack T\right\rbrack$, let ${\mathbf{x}}^{{\left( t\right) }^{ * }} \in \arg \mathop{\min }\limits_{{\mathbf{x} \in X}}{f}^{\left( t\right) }\left( {\mathbf{x},{\mathbf{y}}^{\left( t\right) }}\right)$, ${\mathbf{y}}^{{\left( t\right) }^{ * }} \in \arg \mathop{\max }\limits_{{\mathbf{y} \in Y}}{f}^{\left( t\right) }\left( {{\mathbf{x}}^{\left( t\right) },\mathbf{y}}\right)$, ${\Delta }_{\mathbf{x}}^{\left( t\right) } = \begin{Vmatrix}{{\mathbf{x}}^{{\left( t + 1\right) }^{ * }} - {\mathbf{x}}^{{\left( t\right) }^{ * }}}\end{Vmatrix}$, ${\Delta }_{\mathbf{y}}^{\left( t\right) } = \begin{Vmatrix}{{\mathbf{y}}^{{\left( t + 1\right) }^{ * }} - {\mathbf{y}}^{{\left( t\right) }^{ * }}}\end{Vmatrix}$, ${\delta }_{\mathbf{x}} = \frac{2{\eta }_{\mathbf{x}}{\mu }_{\mathbf{x}}{\ell }_{\nabla f}}{{\ell }_{\nabla f} + {\mu }_{\mathbf{x}}}$, and ${\delta }_{\mathbf{y}} = \frac{2{\eta }_{\mathbf{y}}{\mu }_{\mathbf{y}}{\ell }_{\nabla f}}{{\ell }_{\nabla f} + {\mu }_{\mathbf{y}}}$; we then have:
+
$$
\begin{aligned}
\begin{Vmatrix}{{\mathbf{x}}^{{\left( T\right) }^{ * }} - {\mathbf{x}}^{\left( T\right) }}\end{Vmatrix} + \begin{Vmatrix}{{\mathbf{y}}^{{\left( T\right) }^{ * }} - {\mathbf{y}}^{\left( T\right) }}\end{Vmatrix} &\leq {\left( 1 - {\delta }_{\mathbf{x}}\right) }^{T/2}\begin{Vmatrix}{{\mathbf{x}}^{{\left( 0\right) }^{ * }} - {\mathbf{x}}^{\left( 0\right) }}\end{Vmatrix} + {\left( 1 - {\delta }_{\mathbf{y}}\right) }^{T/2}\begin{Vmatrix}{{\mathbf{y}}^{{\left( 0\right) }^{ * }} - {\mathbf{y}}^{\left( 0\right) }}\end{Vmatrix} \\
&\quad + \mathop{\sum }\limits_{{t = 1}}^{T}{\left( 1 - {\delta }_{\mathbf{x}}\right) }^{\frac{T - t}{2}}{\Delta }_{\mathbf{x}}^{\left( t\right) } + \mathop{\sum }\limits_{{t = 1}}^{T}{\left( 1 - {\delta }_{\mathbf{y}}\right) }^{\frac{T - t}{2}}{\Delta }_{\mathbf{y}}^{\left( t\right) }.
\end{aligned} \tag{8}
$$
+
+If additionally, ${\Delta }_{\mathbf{x}}^{\left( t\right) } \leq d$ and ${\Delta }_{\mathbf{y}}^{\left( t\right) } \leq d$ for all $t \in \left\lbrack T\right\rbrack$ , and $\delta = \min \left\{ {{\delta }_{\mathbf{y}},{\delta }_{\mathbf{x}}}\right\}$ , then:
+
$$
\begin{Vmatrix}{{\mathbf{x}}^{{\left( T\right) }^{ * }} - {\mathbf{x}}^{\left( T\right) }}\end{Vmatrix} + \begin{Vmatrix}{{\mathbf{y}}^{{\left( T\right) }^{ * }} - {\mathbf{y}}^{\left( T\right) }}\end{Vmatrix} \leq 2{\left( 1 - \delta \right) }^{T/2}\left( {\begin{Vmatrix}{{\mathbf{x}}^{{\left( 0\right) }^{ * }} - {\mathbf{x}}^{\left( 0\right) }}\end{Vmatrix} + \begin{Vmatrix}{{\mathbf{y}}^{{\left( 0\right) }^{ * }} - {\mathbf{y}}^{\left( 0\right) }}\end{Vmatrix}}\right) + \frac{4d}{\delta }. \tag{9}
$$
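The two-sided guarantee can likewise be simulated (again, our own construction rather than an experiment from the paper): both players run online projected gradient descent/ascent on $f^{(t)}(x,y) = (x - a_t)^2 - (y - b_t)^2$, whose per-period equilibrium $(a_t, b_t)$ drifts slowly:

```python
import math

# Dynamic min-max game f^(t)(x, y) = (x - a_t)^2 - (y - b_t)^2 on [-1,1]^2;
# its per-period equilibrium is (a_t, b_t).
def a(t): return 0.5 * math.sin(0.02 * t)
def b(t): return 0.5 * math.cos(0.02 * t)

def clip(z): return min(1.0, max(-1.0, z))

eta = 0.25                  # conservative step size (the theorem allows up to 1/2 here)
x, y, errors = 0.9, -0.9, []
for t in range(1000):
    errors.append(abs(x - a(t)) + abs(y - b(t)))
    x = clip(x - eta * 2.0 * (x - a(t)))      # projected descent on f^(t)(., y)
    y = clip(y + eta * (-2.0) * (y - b(t)))   # projected ascent on f^(t)(x, .)

print(max(errors[100:]))  # both iterates track the moving equilibrium
```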
+
The proofs of the above theorems are relegated to Appendix C. The theorems we have proven in this section establish the robustness of OMD dynamics for min-max games in both the pessimistic and optimistic settings by showing that the dynamics closely track the Stackelberg equilibrium in a large class of min-max games. As we are not able to extend these theoretical robustness guarantees to min-max Stackelberg games (with dependent strategy sets), we instead ran a series of experiments on (dynamic) Fisher markets, which are canonical examples of min-max Stackelberg games (Goktas and Greenwald 2021), to investigate the empirical robustness guarantees of OMD dynamics for this class of min-max Stackelberg games.
+
## Dynamic Fisher Markets
+
+The Fisher market model, attributed to Irving Fisher (Brainard, Scarf et al. 2000), has received a great deal of attention in the literature, especially by computer scientists, as it has proven useful in the design of online marketplaces. We now study OMD dynamics in dynamic Fisher markets, which are instances of min-max Stackelberg games (Goktas and Greenwald 2021).
+
+A Fisher market consists of $n$ buyers and $m$ divisible goods (Brainard, Scarf et al. 2000). Each buyer $i \in \left\lbrack n\right\rbrack$ has a budget ${b}_{i} \in {\mathbb{R}}_{ + }$ and a utility function ${u}_{i} : {\mathbb{R}}_{ + }^{m} \rightarrow \mathbb{R}$ . Each good $j \in \left\lbrack m\right\rbrack$ has supply ${s}_{j} \in {\mathbb{R}}_{ + }$ . A Fisher market is thus given by a tuple $\left( n, m, U, \mathbf{b}, \mathbf{s} \right)$ , where $U = \left\{ {{u}_{1},\ldots ,{u}_{n}}\right\}$ is a set of utility functions, one per buyer, $\mathbf{b} \in {\mathbb{R}}_{ + }^{n}$ is a vector of buyer budgets, and $\mathbf{s} \in {\mathbb{R}}_{ + }^{m}$ is a vector of good supplies. We abbreviate this as $\left( U, \mathbf{b}, \mathbf{s} \right)$ when $n$ and $m$ are clear from context. A dynamic Fisher market is a sequence of Fisher markets ${\left( {U}^{\left( t\right) },{\mathbf{b}}^{\left( t\right) },{\mathbf{s}}^{\left( t\right) }\right) }_{t = 1}^{T}$ . An allocation $\mathbf{X} = {\left( {\mathbf{x}}_{1},\ldots ,{\mathbf{x}}_{n}\right) }^{T} \in {\mathbb{R}}_{ + }^{n \times m}$ is a map from goods to buyers, represented as a matrix s.t. ${x}_{ij} \geq 0$ denotes the amount of good $j \in \left\lbrack m\right\rbrack$ allocated to buyer $i \in \left\lbrack n\right\rbrack$ . Goods are assigned prices $\mathbf{p} = {\left( {p}_{1},\ldots ,{p}_{m}\right) }^{T} \in {\mathbb{R}}_{ + }^{m}$ . A tuple $\left( {{\mathbf{p}}^{ * },{\mathbf{X}}^{ * }}\right)$ is said to be a competitive (or Walrasian) equilibrium of Fisher market $\left( U, \mathbf{b}, \mathbf{s} \right)$ if 1. buyers are utility maximizing, constrained by their budget, i.e., $\forall i \in \left\lbrack n\right\rbrack ,{\mathbf{x}}_{i}^{ * } \in \arg \mathop{\max }\limits_{{\mathbf{x} : \mathbf{x} \cdot {\mathbf{p}}^{ * } \leq {b}_{i}}}{u}_{i}\left( \mathbf{x}\right)$ ; and 2. the market clears, i.e., $\forall j \in \left\lbrack m\right\rbrack ,{p}_{j}^{ * } > 0 \Rightarrow \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{x}_{ij}^{ * } = {s}_{j}$ and ${p}_{j}^{ * } = 0 \Rightarrow \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{x}_{ij}^{ * } \leq {s}_{j}.$
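To make the two equilibrium conditions concrete, the following sketch checks them for the special case of linear utilities ${u}_{i}\left( \mathbf{x}\right) = {\mathbf{v}}_{i} \cdot \mathbf{x}$ with strictly positive prices. This is an illustrative simplification, not code from the paper; the function name and tolerance are our own choices.

```python
import numpy as np

def is_competitive_equilibrium(p, X, b, V, s, tol=1e-6):
    """Check (p, X) against the two Walrasian conditions, assuming
    linear utilities u_i(x) = v_i . x and p > 0 (illustrative special case)."""
    for i in range(len(b)):
        # Utility maximization: with p > 0 and linear utilities, a buyer
        # exhausts their budget and spends only on bang-per-buck maximizing goods.
        if abs(X[i] @ p - b[i]) > tol:
            return False
        bang_per_buck = V[i] / p
        if np.any((X[i] > tol) & (bang_per_buck < bang_per_buck.max() - tol)):
            return False
    # Market clearing: demand equals supply for positively priced goods,
    # and never exceeds supply.
    demand = X.sum(axis=0)
    priced = p > tol
    return bool(np.all(np.abs(demand[priced] - s[priced]) < tol)
                and np.all(demand <= s + tol))
```

For instance, with two buyers who each value a distinct good, unit budgets and unit supplies, prices of one and the identity allocation pass both conditions.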
+
+Goktas and Greenwald (2021) observe that any competitive equilibrium $\left( {{\mathbf{p}}^{ * },{\mathbf{X}}^{ * }}\right)$ of a Fisher market $\left( U, \mathbf{b}, \mathbf{s} \right)$ corresponds to a Stackelberg equilibrium of the following min-max Stackelberg game: ${}^{3}$
+
+$$
+\mathop{\min }\limits_{{\mathbf{p} \in {\mathbb{R}}_{ + }^{m}}}\mathop{\max }\limits_{{\mathbf{X} \in {\mathbb{R}}_{ + }^{n \times m} : \mathbf{X}\mathbf{p} \leq \mathbf{b}}}\mathop{\sum }\limits_{{j \in \left\lbrack m\right\rbrack }}{s}_{j}{p}_{j} + \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{b}_{i}\log \left( {{u}_{i}\left( {\mathbf{x}}_{i}\right) }\right) . \tag{10}
+$$
+
+Let ${\mathcal{L}}_{\mathbf{p}} : {\mathbb{R}}_{ + }^{n \times m} \times {\mathbb{R}}_{ + }^{n} \rightarrow \mathbb{R}$ be the Lagrangian of the outer player's value function in Equation (10), i.e.,
+
+${\mathcal{L}}_{\mathbf{p}}\left( {\mathbf{X},\mathbf{\lambda }}\right) = \mathop{\sum }\limits_{{j \in \left\lbrack m\right\rbrack }}{s}_{j}{p}_{j} + \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{b}_{i}\log \left( {{u}_{i}\left( {\mathbf{x}}_{i}\right) }\right) + \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{\lambda }_{i}\left( {{b}_{i} - {\mathbf{x}}_{i} \cdot \mathbf{p}}\right)$ . One can show the existence of a Lagrangian solution oracle for the Lagrangian of Equation (10) such that ${\mathbf{\lambda }}^{ * } = {\mathbf{1}}_{n}$ . We then have: 1. by Goktas and Greenwald's envelope theorem, the subdifferential of the outer player's value function is given by ${\nabla }_{\mathbf{p}}V\left( \mathbf{p}\right) = \mathbf{s} - \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{\mathbf{x}}_{i}^{ * }\left( \mathbf{p}\right)$ , where ${\mathbf{x}}_{i}^{ * }\left( \mathbf{p}\right) \in \arg \mathop{\max }\limits_{{\mathbf{x} \in {\mathbb{R}}_{ + }^{m} : \mathbf{x} \cdot \mathbf{p} \leq {b}_{i}}}{u}_{i}\left( \mathbf{x}\right)$ ; 2. the gradients of the Lagrangian, given the Lagrangian solution oracle ${\mathbf{\lambda }}^{ * } = {\mathbf{1}}_{n}$ , are ${\nabla }_{\mathbf{p}}{\mathcal{L}}_{\mathbf{p}}\left( {\mathbf{X},{\mathbf{\lambda }}^{ * }}\right) = \mathbf{s} - \mathop{\sum }\limits_{{i \in \left\lbrack n\right\rbrack }}{\mathbf{x}}_{i}$ and ${\nabla }_{{\mathbf{x}}_{i}}{\mathcal{L}}_{\mathbf{p}}\left( {\mathbf{X},{\mathbf{\lambda }}^{ * }}\right) = \frac{{b}_{i}}{{u}_{i}\left( {\mathbf{x}}_{i}\right) }{\nabla }_{{\mathbf{x}}_{i}}{u}_{i}\left( {\mathbf{x}}_{i}\right) - \mathbf{p}$ .
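For concreteness, these two gradient expressions can be combined into one projected simultaneous descent-ascent step, sketched here for linear utilities ${u}_{i}\left( \mathbf{x}\right) = {\mathbf{v}}_{i} \cdot \mathbf{x}$ . The step size, projection onto the nonnegative orthant, and function name are our own illustrative assumptions.

```python
import numpy as np

def lagrangian_gda_step(p, X, b, s, V, eta=0.01):
    """One projected gradient descent-ascent step on the Lagrangian,
    with lambda* = 1, assuming linear utilities u_i(x) = v_i . x."""
    u = (V * X).sum(axis=1)                      # u_i(x_i) = v_i . x_i
    grad_p = s - X.sum(axis=0)                   # grad_p L = s - sum_i x_i
    grad_X = (b / u)[:, None] * V - p[None, :]   # grad_{x_i} L = (b_i/u_i) v_i - p
    p_next = np.maximum(p - eta * grad_p, 0.0)   # outer player: projected descent
    X_next = np.maximum(X + eta * grad_X, 0.0)   # inner player: projected ascent
    return p_next, X_next
```

Note that when supply exceeds aggregate demand, the price gradient is positive, so the step pushes prices down, as one would expect of a price adjustment process.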
+
+We first consider OMD dynamics for Fisher markets in the pessimistic setting, in which the outer player determines their strategy via online projected gradient descent and the inner player best-responds. In this setting, we obtain a dynamic version of a natural price adjustment process known as tâtonnement (Walras 1969); this dynamic variant was first studied by Cheung, Hoefer, and Nakhe (2019) (Algorithm 3, Appendix D).
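For intuition, a minimal tâtonnement loop for a static market with linear utilities might look as follows. The initialization, decaying step size, and price floor are illustrative assumptions of ours, not the paper's Algorithm 3.

```python
import numpy as np

def linear_demand(p, b, V):
    """Inner player's best response under linear utilities: each buyer
    spends their whole budget on one bang-per-buck maximizing good."""
    X = np.zeros_like(V, dtype=float)
    for i, (b_i, v_i) in enumerate(zip(b, V)):
        j = int(np.argmax(v_i / p))
        X[i, j] = b_i / p[j]
    return X

def tatonnement(b, V, s, T=500, eta=0.1):
    """Projected gradient descent on prices using grad V(p) = s - demand(p):
    prices rise on over-demanded goods and fall on over-supplied ones."""
    p = np.full(V.shape[1], 0.5)       # arbitrary positive starting prices
    X = linear_demand(p, b, V)
    for t in range(1, T + 1):
        X = linear_demand(p, b, V)     # inner player best-responds
        p = np.maximum(p - (eta / np.sqrt(t)) * (s - X.sum(axis=0)), 1e-6)
    return p, X
```

In a market with two buyers who each value a distinct good, with unit budgets and supplies, the computed prices approach the equilibrium prices of one.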
+
+We then consider OMD dynamics in the optimistic setting, in which case both the outer and inner players employ online projected gradient descent, which yields myopic best-response dynamics (Monderer and Shapley 1996) (Algorithm 4, Appendix D). In words, at each time step, the (fictional Walrasian) auctioneer takes a gradient descent step to minimize its regret, and then all the buyers take a gradient ascent step to minimize their Lagrangian regret. These gradient descent-ascent dynamics can be seen as myopic best-response dynamics for sellers and buyers who are both boundedly rational (Camerer 1998).
+
+**Experiments** In order to better understand the robustness properties of Algorithms 3 and 4 in a dynamic min-max Stackelberg game that is subject to perturbation across time, we ran a series of experiments with dynamic Fisher markets assuming three different classes of utility functions. ${}^{4}$ Each utility structure endows Equation (10) with different smoothness properties, which allows us to compare the efficiency of the algorithms under varying conditions. Let ${\mathbf{v}}_{i} \in {\mathbb{R}}^{m}$ be a vector of valuation parameters that describes the utility function of buyer $i \in \left\lbrack n\right\rbrack$ . We consider the following utility function classes: 1. linear: ${u}_{i}\left( {\mathbf{x}}_{i}\right) = \mathop{\sum }\limits_{{j \in \left\lbrack m\right\rbrack }}{v}_{ij}{x}_{ij}$ ; 2. Cobb-Douglas: ${u}_{i}\left( {\mathbf{x}}_{i}\right) = \mathop{\prod }\limits_{{j \in \left\lbrack m\right\rbrack }}{x}_{ij}^{{v}_{ij}}$ ; and 3. Leontief: ${u}_{i}\left( {\mathbf{x}}_{i}\right) = \mathop{\min }\limits_{{j \in \left\lbrack m\right\rbrack }}\left\{ \frac{{x}_{ij}}{{v}_{ij}}\right\}$ . To simulate the dynamic Fisher markets, we fix a range for every market parameter and draw from that range uniformly at random during each iteration. Our goal is to understand how closely OMD dynamics track the Stackelberg equilibria of the game as the latter vary with time. To do so, we compare the distance between the iterates $\left( {{\mathbf{p}}^{\left( t\right) },{\mathbf{X}}^{\left( t\right) }}\right)$ computed by the algorithms and the equilibrium of the game at each iteration $t$ . This distance is measured as ${\begin{Vmatrix}{\mathbf{p}}^{{\left( t\right) }^{ * }} - {\mathbf{p}}^{\left( t\right) }\end{Vmatrix}}_{2} + {\begin{Vmatrix}{\mathbf{X}}^{{\left( t\right) }^{ * }} - {\mathbf{X}}^{\left( t\right) }\end{Vmatrix}}_{2}$ , where $\left( {{\mathbf{p}}^{{\left( t\right) }^{ * }},{\mathbf{X}}^{{\left( t\right) }^{ * }}}\right)$ is the Stackelberg equilibrium of the Fisher market $\left( {{U}^{\left( t\right) },{\mathbf{b}}^{\left( t\right) },{\mathbf{s}}^{\left( t\right) }}\right)$ at time $t \in \left\lbrack T\right\rbrack$ .
+
+In our experiments, we ran Algorithms 3 and 4 on 100 randomly initialized dynamic Fisher markets. We depict the distance to equilibrium at each iteration for a randomly chosen experiment in Figures 1 and 2. In these figures, we observe that our OMD dynamics are closely tracking the Stackelberg equilibria as they vary with each iteration. A more detailed description of our experimental setup can be found in Appendix E.
+
+${}^{3}$ The first term in this program is slightly different from the first term in the program presented by Goktas and Greenwald (2021), since supply is assumed to be 1 in their work.
+
+${}^{4}$ Our code can be found at https://anonymous.4open.science/r/Dynamic-Minmax-Games-8153/.
+
+
+Figure 1: In blue, we depict a trajectory of distances between computed allocation-price pairs and equilibrium allocation-price pairs, when Algorithm 3 is run on randomly initialized dynamic linear, Cobb-Douglas, and Leontief Fisher markets. In red, we plot an arbitrary $O\left( {1/\sqrt{T}}\right)$ function.
+
+Figure 2: In blue, we depict a trajectory of distances between computed allocation-price pairs and equilibrium allocation-price pairs, when Algorithm 4 is run on randomly initialized dynamic linear, Cobb-Douglas, and Leontief Fisher markets. In red, we plot an arbitrary $O\left( {1/\sqrt{T}}\right)$ function.
+
+We observe from Figures 1 and 2 that for both Algorithms 3 and 4, we obtain an empirical convergence rate relatively close to $O\left( {1/\sqrt{T}}\right)$ under Cobb-Douglas utilities, and a slightly slower empirical convergence rate under linear utilities. Recall that $O\left( {1/\sqrt{T}}\right)$ is the convergence rate guarantee we obtained for both algorithms, assuming a fixed learning rate in a static Fisher market (Corollaries 7 and 8).
+
+Dynamic Fisher markets with Leontief utilities, in which the objective function is not differentiable, are the hardest markets of the three for our algorithms to solve. Still, we only see a slightly slower than $O\left( {1/\sqrt{T}}\right)$ empirical convergence rate for both Algorithms 3 and 4. In these experiments, the convergence curve generated by Algorithm 4 behaves less erratically than the one generated by Algorithm 3. Due to the non-differentiability of the objective function, the gradient ascent step in Algorithm 4 for buyers with Leontief utilities is very small, effectively dampening any potentially erratic changes in the iterates.
+
+Our experiments suggest that even when the game changes at each iteration, OMD dynamics (Algorithms 3 and 4, Appendix D) are robust enough to closely track the changing Stackelberg equilibria of dynamic Fisher markets. We note that tâtonnement dynamics (Algorithm 3) seem to be more robust than myopic best-response dynamics (Algorithm 4), i.e., the distance to equilibrium allocations is smaller at each iteration of tâtonnement. This result is not surprising, as tâtonnement computes a utility-maximizing allocation for the buyers at each time step. Even though Theorems 10 and 11 only provide theoretical guarantees on the robustness of OMD dynamics in dynamic min-max games (with independent strategy sets), similar theoretical robustness results seem attainable in dynamic min-max Stackelberg games (with dependent strategy sets).
+
+## Conclusion
+
+We began this paper by considering no-regret learning dynamics for min-max Stackelberg games in two settings: a pessimistic setting in which the outer player is a no-regret learner and the inner player best responds, and an optimistic setting in which both players are no-regret learners. For both of these settings, we proved that no-regret learning dynamics converge to a Stackelberg equilibrium of the game. We then specialized the no-regret algorithm employed by the players to online mirror descent (OMD), which yielded two known algorithms, namely max-oracle gradient descent (Jin, Netrapalli, and Jordan 2020) and nested GDA (Goktas and Greenwald 2021) in the pessimistic setting, and a new simultaneous GDA-like algorithm (Nedic and Ozdaglar 2009), which we call Lagrangian GDA, in the optimistic setting. As these algorithms are no-regret learning algorithms, our previous theorems imply convergence to Stackelberg equilibria in $O\left( {1/{\varepsilon }^{2}}\right)$ iterations. Finally, we investigated the robustness of OMD dynamics to perturbations in the parameters of a min-max Stackelberg game. To do so, we analyzed how closely OMD dynamics track Stackelberg equilibria in dynamic min-max Stackelberg games. We proved that in min-max games (with independent strategy sets), OMD dynamics closely track the changing Stackelberg equilibria of a game. As we were not able to extend these theoretical robustness guarantees to min-max Stackelberg games (with dependent strategy sets), we instead ran a series of experiments on dynamic Fisher markets, which are canonical examples of min-max Stackelberg games. Our experiments suggest that OMD dynamics are robust for min-max Stackelberg games, so that the robustness guarantees we have provided for OMD dynamics in min-max games can perhaps be extended to min-max Stackelberg games. The theory developed in this paper opens the door to extending the myriad applications of Stackelberg games in AI to settings with dependent strategy sets. Such models promise to be more expressive, and as a result could provide decision makers with better solutions to problems in security, environmental protection, etc.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/vKc1mLxBebP/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/vKc1mLxBebP/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..dc54d9676287dab3292d1645abfb6f69092cd9b2
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/vKc1mLxBebP/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,213 @@
+# Aliasing coincides with CNNs vulnerability towards adversarial attacks
+
+Anonymous
+
+## Abstract
+
+Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating low model robustness. Adversarial attacks are thereby specifically optimized to reveal model weaknesses by generating small, barely perceptible image perturbations that flip the model prediction. Robustness against such attacks can be gained, for example, by using adversarial examples during training, which effectively reduces the measurable model attackability. In contrast, research analyzing the source of a model's vulnerability is scarce. In this paper, we analyze adversarially trained, robust models in the context of a particularly suspect network operation, the down-sampling layer, and provide evidence that robust models have learned to down-sample more accurately and suffer significantly less from aliasing than baseline models.
+
+## Introduction
+
+Convolutional Neural Networks (CNNs) provide highly accurate predictions in a wide range of applications. Yet, to allow for practical applicability, CNN models should not be fooled by small image perturbations, as they are realized by adversarial attacks (Goodfellow, Shlens, and Szegedy 2015a; Moosavi-Dezfooli, Fawzi, and Frossard 2016; Rony et al. 2019). Such attacks aim to fool the network by perturbing image pixels such that human observers would still easily recognize the correct class label, while the network makes incorrect predictions. Susceptibility to such perturbations is prohibitive for the applicability of CNN models in real world scenarios, as it indicates limited reliability and generalization of the model.
+
+To establish adversarial robustness, many sophisticated methods have been developed (Goodfellow, Shlens, and Szegedy 2015a; Rony et al. 2019; Kurakin, Goodfellow, and Bengio 2017; Goodfellow, Shlens, and Szegedy 2015b). Some can defend only against one specific attack (Goodfellow, Shlens, and Szegedy 2015a), while others propose more general defences against diverse attacks. Another way to protect CNNs against adversarial examples is to detect them. Harder et al. (2021) classify adversarial examples by inspecting each input image and its feature maps in the frequency domain. Similarly, Yin et al. (2020) showed that natural images and adversarial examples differ significantly in their frequency spectra.
+
+
+
+Figure 1: Illustration of down-sampling, with (top right) and without anti-aliasing filter (bottom right) as well as an adversarial example (bottom left). The top left image shows the original, on the top right, this image is correctly down-sampled with an anti-aliasing filter. In the bottom right, no filter is applied, leading to aliasing. The adversarial example (bottom left) shows visually similar artifacts. In this paper, we investigate the role of aliasing for adversarial robustness.
+
+In fact, when considering the architecture of commonly employed CNN models, one could wonder why these models perform so well although they ignore basic sampling theoretic foundations. Concretely, most architectures subsample feature maps without ensuring to sample above the Nyquist rate (Shannon 1949), such that, after each down-sampling operation, spectra of sub-sampled feature maps may overlap with their replica. This is called aliasing and implies that the network should be genuinely unable to fully restore an image from its feature maps. One can only hypothesize that common CNNs learn to (partially) compensate for this effect by learning appropriate filters. Following this line of thought, recently, several works suggest to improve CNNs by including anti-aliasing techniques during down-sampling in CNNs (Zhang 2019; Zou et al. 2020). They aim to make the models more robust against image-translations, such that the class prediction does not suffer from small vertical or horizontal shifts of the content.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+In this paper, we further investigate the relationship between adversarial robustness and aliasing. While previous works (Yin et al. 2020; Harder et al. 2021; Lorenz et al. 2021) focused on adversarial examples, we systematically analyze potential aliasing effects inside CNNs. Specifically, we compare several recently proposed adversarially robust models to models resulting from conventional training schemes in terms of aliasing. We inspect intermediate feature maps before and after the down-sampling operation at inference. Our first observation is that these models indeed fail to sub-sample according to the Nyquist-Shannon theorem (Shannon 1949): we observe severe aliasing. Further, our experiments reveal that adversarially trained networks exhibit less aliasing than standard trained networks, indicating that adversarial training encourages CNNs to learn how to properly down-sample data without severe artifacts.
+
+In summary, our contributions are:
+
+- We introduce a measure for aliasing and show that common CNN down-sampling layers fail to sub-sample the feature maps in a Nyquist-Shannon conform way.
+
+- We analyze various adversarially trained models that are robust against a strong ensemble of adversarial attacks, AutoAttack (Croce and Hein 2020), and show that they exhibit significantly less aliasing than standard models.
+
+## Aliasing in CNNs
+
+CNNs usually have a pyramidal structure in which the data is progressively sub-sampled in order to aggregate spatial information while the number of channels increases. During sub-sampling, no explicit precautions are taken to avoid aliases, which arise from under-sampling. Specifically, when sub-sampling with stride 2, any frequency larger than $N/2$ , where $N$ is the size of the original data, will cause pathological overlaps in the frequency spectra. Those overlaps in the frequency spectra cause ambiguities such that high frequency components appear as low frequency components. Hence, local image perturbations can become indistinguishable from global manipulations.
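The ambiguity described above can be reproduced in a few lines: after naive stride-2 subsampling, a high-frequency signal becomes numerically identical to a low-frequency one. This is a self-contained illustration of ours, not code from the paper.

```python
import numpy as np

N = 16
n = np.arange(N)
high = np.cos(2 * np.pi * 6 / N * n)  # frequency 6 (above the new Nyquist limit)
low = np.cos(2 * np.pi * 2 / N * n)   # frequency 2

# The two signals are clearly different at full resolution...
assert not np.allclose(high, low)
# ...but identical after stride-2 subsampling: frequency 6 aliases onto 8 - 6 = 2.
assert np.allclose(high[::2], low[::2])
```

Once subsampled, no downstream layer can tell which of the two inputs it is looking at, which is exactly the ambiguity an attacker can exploit.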
+
+## Aliasing Metric
+
+To measure the possible amount of aliasing appearing after down-sampling, we compare each down-sampled feature map in the Fourier domain with its aliasing-free counterpart. To this end, we consider a feature map $f\left( x\right)$ of size ${2N} \times {2N}$ before down-sampling. We compute an "aliasing-free" down-sampling by extracting the $N$ lowest frequencies along both axes in Fourier space. W.l.o.g., we specifically consider down-sampling by strided convolutions, since these are predominantly used in adversarially robust models (Zagoruyko and Komodakis 2017).
+
+In each strided convolution, the input feature map $f\left( x\right)$ is convolved with the learned weights $w$ and down-sampled with a stride of two, thus potentially introducing frequency replica (i.e., aliases) in the down-sampled signal ${\widehat{f}}_{s2}$ :
+
+$$
+{\widehat{f}}_{s2} = f\left( x\right) * g\left( {w,2}\right) \tag{1}
+$$
+
+To measure the amount of aliasing, we explicitly construct feature map frequency representations without such aliases. To this end, the original feature map $f\left( x\right)$ is convolved with the learned weights $w$ of the strided convolution without applying the stride, i.e., $g\left( {w,1}\right)$ , to obtain ${\widehat{f}}_{s1}$ :
+
+$$
+{\widehat{f}}_{s1} = f\left( x\right) * g\left( {w,1}\right) \tag{2}
+$$
+
+Afterwards, the 2D FFT of the down-sampled feature map ${\widehat{f}}_{s2}$ is computed, which we denote ${F}_{s2}$ ,
+
+$$
+{F}_{s2}\left( {k, l}\right) = \frac{1}{{N}^{2}}\mathop{\sum }\limits_{{m = 0}}^{{N - 1}}\mathop{\sum }\limits_{{n = 0}}^{{N - 1}}{\widehat{f}}_{s2}\left( {m, n}\right) {e}^{-{2\pi j}\left( {\frac{k}{N}m + \frac{l}{N}n}\right) }, \tag{3}
+$$
+
+for $k, l = 0,\ldots , N - 1$ . For the non-down-sampled feature map ${\widehat{f}}_{s1}$ , we proceed similarly and compute, for $k, l = 0,\ldots ,{2N} - 1$ ,
+
+$$
+{F}_{s1}^{ \uparrow }\left( {k, l}\right) = \frac{1}{4{N}^{2}}\mathop{\sum }\limits_{{m = 0}}^{{{2N} - 1}}\mathop{\sum }\limits_{{n = 0}}^{{{2N} - 1}}{\widehat{f}}_{s1}\left( {m, n}\right) {e}^{-{2\pi j}\left( {\frac{k}{2N}m + \frac{l}{2N}n}\right) }. \tag{4}
+$$
+
+The aliasing-free version ${F}_{s1}$ can be obtained by setting all frequencies above the Nyquist rate to zero before down-sampling,
+
+$$
+{F}_{s1}^{ \uparrow }\left( {k, l}\right) = 0 \tag{5}
+$$
+
+for $k \in \left\lbrack {N/2,{3N}/2}\right\rbrack$ and $l \in \left\lbrack {N/2,{3N}/2}\right\rbrack$ . Then the down-sampled version in the frequency domain corresponds to extracting the four corners of ${F}_{s1}^{ \uparrow }$ and reassembling them as shown in Figure 2,
+
+$$
+\begin{aligned}
+{F}_{s1}\left( {k, l}\right) &= {F}_{s1}^{ \uparrow }\left( {k, l}\right) && \text{for } k, l = 0,\ldots , N/2, \\
+{F}_{s1}\left( {k, l}\right) &= {F}_{s1}^{ \uparrow }\left( {k + N, l}\right) && \text{for } k = N/2,\ldots , N \text{ and } l = 0,\ldots , N/2, \\
+{F}_{s1}\left( {k, l}\right) &= {F}_{s1}^{ \uparrow }\left( {k, l + N}\right) && \text{for } k = 0,\ldots , N/2 \text{ and } l = N/2,\ldots , N, \\
+{F}_{s1}\left( {k, l}\right) &= {F}_{s1}^{ \uparrow }\left( {k + N, l + N}\right) && \text{for } k, l = N/2,\ldots , N.
+\end{aligned} \tag{6}
+$$
+
+This way we guarantee that there are no overlaps, i.e. aliases, in the frequency spectra. Figure 2 illustrates the computation of the aliasing-free down-sampling in the frequency domain. The aliasing-free feature map can be compared to the actual feature map in the frequency domain to measure the degree of aliasing. The full procedure is shown in Figure 3, where we start on the left with the original feature map. Then we obtain the two down-sampled versions (with and without aliases) and compute the difference between both by taking the ${L}_{1}$ norm.
+
+The overall aliasing metric ${AM}$ for a down-sampling operation is calculated by taking the ${L}_{1}$ distance between the down-sampled and alias-free feature maps in the Fourier domain, averaged over the $K$ generated feature maps,
+
+$$
+{AM} = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}{\begin{Vmatrix}{F}_{{s1}, k} - {F}_{{s2}, k}\end{Vmatrix}}_{1}. \tag{7}
+$$
+
+The proposed ${AM}$ measure is zero if none of the down-sampled feature maps exhibit aliasing, i.e., if sampling has been performed above the Nyquist rate. Whenever ${AM}$ is greater than 0, this is not the case, and we should, from a theoretic point of view, expect the model to be easy to attack, since it cannot reliably distinguish between fine details and coarse input structures.
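Under the assumption that the strided convolution can be emulated by computing the unstrided output and then subsampling it, the pipeline of Equations (3)-(7) can be sketched for a single feature map as follows. The helper names and the NumPy-based layout (zero frequency at index [0, 0], matching the reassembly of Eq. (6)) are our own choices.

```python
import numpy as np

def alias_free_spectrum(f_s1):
    """Spectrum of the alias-free downsampling of a 2N x 2N feature map:
    crop the four low-frequency corners of F_s1^ and reassemble (Eq. 6)."""
    two_n = f_s1.shape[0]
    n, h = two_n // 2, two_n // 4
    F_up = np.fft.fft2(f_s1) / f_s1.size          # Eq. (4), zero freq at [0, 0]
    F = np.empty((n, n), dtype=complex)
    F[:h, :h] = F_up[:h, :h]                      # low-low corner
    F[h:, :h] = F_up[two_n - h:, :h]              # high rows wrap from the bottom
    F[:h, h:] = F_up[:h, two_n - h:]              # high cols wrap from the right
    F[h:, h:] = F_up[two_n - h:, two_n - h:]      # high-high corner
    return F

def aliasing_measure(f_s1, stride=2):
    """L1 distance between aliased and alias-free spectra (Eq. 7 with K = 1).
    f_s1 is the feature map after convolution but before striding."""
    F_s1 = alias_free_spectrum(f_s1)
    f_s2 = f_s1[::stride, ::stride]               # strided, possibly aliased
    F_s2 = np.fft.fft2(f_s2) / f_s2.size          # Eq. (3)
    return float(np.abs(F_s1 - F_s2).sum())
```

A band-limited feature map yields a measure near zero, while a map containing frequencies above the new Nyquist limit yields a strictly positive value.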
+
+
+
+Figure 2: Step by step computation of the aliasing free version of a feature map. The left image shows the magnitude of the Fourier representation of a feature map with the zero-frequency in the upper left corner, i.e. high frequencies are in the center. Alias-free downsampling suppresses high frequencies prior to sampling. This can be implemented efficiently in the Fourier domain by cropping and reassembling the low-frequency regions of the Fourier representations, i.e. its four corners. Aliasing would correspond to folding the deleted high frequency components into the constructed representation.
+
+
+
+Figure 3: FFT (Fast Fourier Transformation) of a feature map in the original resolution (left). This feature map is downsampled by striding with a factor of two after aliasing suppression (middle left) and with aliasing (middle right). The difference between the original and aliasing-free FFT of the down-sampled feature map (right).
+
+## Experiments
+
+We conducted an extensive analysis of already existing adversarially robust models trained on CIFAR-10 (Krizhevsky 2012) with two different architectures, namely WideResNet-28-10 (WRN-28-10) (Zagoruyko and Komodakis 2017) and Preact ResNet-18 (He et al. 2016). Both architectures are commonly supported by many adversarial training approaches. As baselines, we trained a plain WRN-28-10 and Preact ResNet-18, both with similar training schemes. All adversarially trained networks are pre-trained models provided by RobustBench (Croce et al. 2020).
+
+The WRN-28-10 networks have four operations in which down-sampling is performed. These operations are located in the second and third block of the network. In comparison, the Preact ResNet-18 networks have six down-sampling operations, located in the second, third and fourth layers of the network.
+
+Both architectures have similar building blocks, and the key operations including down-sampling are shown abstractly in the appendix in Figure 6. Each block starts with a convolution with stride two, followed by additional operations like ReLU and convolutions with stride one. The characteristic skip connection of ResNet architectures also needs to be implemented with stride two if down-sampling is applied in the corresponding block. Consequently, we need to analyze all down-sampling units and skip connections before they are summed up to form the output feature map.
+
+**WideResNet-28-10** In the following, differently trained WRN-28-10 networks are compared in terms of their robust accuracy against AutoAttack (Croce and Hein 2020) and the amount of aliasing in their down-sampling layers. The training procedure of the baseline can be found in the appendix.
+
+Figure 4 indicates significant differences between adversarially trained and standard trained networks. First, the standard trained networks are not able to reach any robust accuracy, meaning their accuracy under adversarial attacks is equal to zero. Second, and this is most interesting for our investigation, standard trained networks exhibit much more aliasing in their down-sampling layers than adversarially trained networks. Through all layers and operations in which down-sampling is applied, the adversarially trained networks (blue dots) have much higher robust accuracy and much less aliasing compared to the standard trained networks. Additionally, we can observe that the amount of aliasing in the second layer is much higher than in the third layer. This can be explained by the different feature map sizes in the two layers, as we calculate the absolute ${L}_{1}$ norm.
+
+When comparing the conventionally trained networks against each other, it can be seen that the specific training scheme used can also influence the amount of aliasing in the network. Concretely, the standard baseline model provided by RobustBench (Croce et al. 2020) exhibits less aliasing than the one trained by us. Unfortunately, there is no further information about the exact training schedule from RobustBench, such that we cannot make any assumptions on the interplay between model hyperparameters and aliasing.
+
+
+
+Figure 4: Adversarial robustness versus aliasing, exemplarily evaluated on different pre-trained WRN-28-10 models from RobustBench (Croce et al. 2020) as well as two baseline models, one from RobustBench (Standard RB) and one trained by us (Baseline). All blue dots represent adversarially trained networks; for the purpose of clarity, we mark three popular models from Carmon et al. (2019), Wang et al. (2020) and Hendrycks, Lee, and Mazeika (2019) by name.
+
+**Preact ResNet-18** We conducted the same measurements for the Preact ResNet-18 as for the WRN-28-10 and used the same training procedure described in the appendix. Additionally, we needed to account for one more layer with two additional down-sampling operations.
+
+The overall results, presented in Figure 5, are similar to those for the WRN-28-10 networks: most adversarially trained networks exhibit much less aliasing and higher robustness than conventionally trained ones. Yet, the additional down-sampling layer allows one further observation. While the absolute aliasing metric is overall lower, the robust networks reduce aliasing predominantly in the earlier layers, i.e., the second and third layers. The aliasing in the fourth layer of adversarially robust models is not significantly different from that of conventionally trained models in the same layer.
+
+## Discussion
+
+Our experiments reveal that common CNNs fail to subsample their feature maps in a Nyquist-Shannon conform way and consequently introduce aliasing artifacts. Further, we provide strong evidence that aliasing and adversarial robustness are highly related: all evaluated robust models exhibit significantly less aliasing than standard trained models.
+
+After the application of down-sampling operations in standard CNNs, all feature maps suffer from aliasing artifacts due to insufficient sub-sampling.
+
+Adversarially trained networks exhibit significantly less aliasing in their feature maps than standard trained networks with the same architecture. As shown above, this holds for different model architectures and training schemes, especially in the early layers, closer to the input. This raises the question whether models with a low amount of aliasing are necessarily more robust. It further entails the question whether there are additional relevant factors in this context, such as padding techniques. These aspects will be subject to future research.
+
+
+
+Figure 5: Adversarial robustness versus aliasing, exemplarily evaluated on different pre-trained Preact ResNet-18 models. The blue dots represent adversarially trained networks, trained with the training schemes of Wong, Rice, and Kolter (2020), Rice, Wong, and Kolter (2020) and Sehwag et al. (2021), provided by RobustBench (Croce et al. 2020). The orange dot is the baseline, trained by us without adversarial training.
+
+## Conclusion
+
+In conclusion, we were able to provide strong evidence that aliasing and the adversarial robustness of CNNs are highly correlated. We hypothesize that aliasing is one of the main underlying factors leading to the vulnerability of CNNs. Recent methods to increase model robustness rather heal the symptoms of the underlying problem than investigate its origins. To overcome this challenge, we might need to start thinking about CNNs in a more signal-processing oriented manner and account for basic principles from this field, like the Nyquist-Shannon theorem, which gives clear instructions on how to prevent aliasing. Still, it is not straightforward to incorporate this knowledge into the architecture and structure of common CNN designs. We aim to give a new and more traditional perspective on CNNs to help improve their performance and reliability, and thereby enable their application in real world use cases.
+
+## References
+
+Carmon, Y.; Raghunathan, A.; Schmidt, L.; Liang, P.; and Duchi, J. C. 2019. Unlabeled Data Improves Adversarial Robustness. arXiv:1905.13736.
+
+Croce, F.; Andriushchenko, M.; Sehwag, V.; Flammarion, N.; Chiang, M.; Mittal, P.; and Hein, M. 2020. Robust-Bench: a standardized adversarial robustness benchmark. arXiv preprint arXiv:2010.09670.
+
+Croce, F.; and Hein, M. 2020. Reliable evaluation of adversarial robustness with an ensemble of diverse parameter-free attacks. In ICML.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2015a. Explaining and Harnessing Adversarial Examples. arXiv:1412.6572.
+
+Goodfellow, I. J.; Shlens, J.; and Szegedy, C. 2015b. Explaining and Harnessing Adversarial Examples. arXiv:1412.6572.
+
+Harder, P.; Pfreundt, F.-J.; Keuper, M.; and Keuper, J. 2021. SpectralDefense: Detecting Adversarial Attacks on CNNs in the Fourier Domain. arXiv:2103.03000.
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Identity Mappings in Deep Residual Networks. arXiv:1603.05027.
+
+Hendrycks, D.; Lee, K.; and Mazeika, M. 2019. Using PreTraining Can Improve Model Robustness and Uncertainty. arXiv:1901.09960.
+
+Krizhevsky, A. 2012. Learning Multiple Layers of Features from Tiny Images. University of Toronto.
+
+Kurakin, A.; Goodfellow, I.; and Bengio, S. 2017. Adversarial Machine Learning at Scale. arXiv:1611.01236.
+
+Lorenz, P.; Harder, P.; Straßel, D.; Keuper, M.; and Keuper, J. 2021. Detecting AutoAttack Perturbations in the Frequency Domain. In ICML 2021 Workshop on Adversarial Machine Learning.
+
+Moosavi-Dezfooli, S.-M.; Fawzi, A.; and Frossard, P. 2016. DeepFool: a simple and accurate method to fool deep neural networks.
+
+Rice, L.; Wong, E.; and Kolter, J. Z. 2020. Overfitting in adversarially robust deep learning. arXiv:2002.11569.
+
+Rony, J.; Hafemann, L. G.; Oliveira, L. S.; Ayed, I. B.; Sabourin, R.; and Granger, E. 2019. Decoupling Direction and Norm for Efficient Gradient-Based L2 Adversarial Attacks and Defenses.
+
+Sehwag, V.; Mahloujifar, S.; Handina, T.; Dai, S.; Xiang, C.; Chiang, M.; and Mittal, P. 2021. Improving Adversarial Robustness Using Proxy Distributions. arXiv:2104.09425.
+
+Shannon, C. 1949. Communication in the Presence of Noise. Proceedings of the IRE, 37(1): 10-21.
+
+Wang, Y.; Zou, D.; Yi, J.; Bailey, J.; Ma, X.; and Gu, Q. 2020. Improving Adversarial Robustness Requires Revisiting Misclassified Examples. In International Conference on Learning Representations.
+
+Wong, E.; Rice, L.; and Kolter, J. Z. 2020. Fast is better than free: Revisiting adversarial training. arXiv:2001.03994.
+
+Yin, D.; Lopes, R. G.; Shlens, J.; Cubuk, E. D.; and Gilmer, J. 2020. A Fourier Perspective on Model Robustness in Computer Vision. arXiv:1906.08988.
+
+Zagoruyko, S.; and Komodakis, N. 2017. Wide Residual Networks. arXiv:1605.07146.
+
+Zhang, R. 2019. Making Convolutional Networks Shift-Invariant Again. arXiv:1904.11486.
+
+Zou, X.; Xiao, F.; Yu, Z.; and Lee, Y. J. 2020. Delving Deeper into Anti-aliasing in ConvNets. In *BMVC*.
+
+## A1: Downsampling Block Preact ResNet
+
+
+
+Figure 6: Abstract Illustration of a building block in Preact ResNet-18 and WRN-28-10. The first operation in a block is a convolution. This convolution is executed with a stride of either one or two. For a stride of one (left) the shortcut simply passes the identity of the feature maps forward. If the first convolution is done with a stride of two, the shortcut needs to have a stride of two (right) too, to guarantee that both representations can be added at the end of the building block.
+
+## A2: Training Procedure
+
+The baseline models for the Preact ResNet-18 and the WRN-28-10 are both trained with the same schedule. Each model is trained for 200 epochs with a batch size of 512, cross-entropy loss, and stochastic gradient descent (SGD) with an adaptive learning rate starting at 0.1 and reduced by a factor of 10 at epochs 100 and 150, a momentum of 0.9, and a weight decay of $5\mathrm{e}{-4}$ .
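The step schedule described above can be sketched as a small helper; the function name and defaults are our illustration of the stated hyperparameters (start at 0.1, divide by 10 at epochs 100 and 150), not code from the paper:

```python
def learning_rate(epoch, base_lr=0.1, milestones=(100, 150), gamma=0.1):
    """Step schedule from the text: start at base_lr and multiply by
    gamma at each milestone epoch (here: epochs 100 and 150 of 200)."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

In a PyTorch training loop this corresponds to `torch.optim.SGD(..., lr=0.1, momentum=0.9, weight_decay=5e-4)` combined with a `MultiStepLR` scheduler with milestones `[100, 150]` and gamma 0.1.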
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/vKc1mLxBebP/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/vKc1mLxBebP/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..34d719825be481ff0243ccd6cf3649325cecf0a4
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/vKc1mLxBebP/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,153 @@
+§ ALIASING COINCIDES WITH CNNS VULNERABILITY TOWARDS ADVERSARIAL ATTACKS
+
+Anonymous
+
+§ ABSTRACT
+
+Many commonly well-performing convolutional neural network models have been shown to be susceptible to input data perturbations, indicating a low model robustness. Adversarial attacks are specifically optimized to reveal model weaknesses by generating small, barely perceptible image perturbations that flip the model prediction. Robustness against such attacks can be gained, for example, by using adversarial examples during training, which effectively reduces the measurable model attackability. In contrast, research on analyzing the source of a model's vulnerability is scarce. In this paper, we analyze adversarially trained, robust models in the context of a specifically suspicious network operation, the downsampling layer, and provide evidence that robust models have learned to downsample more accurately and suffer significantly less from aliasing than baseline models.
+
+§ INTRODUCTION
+
+Convolutional Neural Networks (CNNs) provide highly accurate predictions in a wide range of applications. Yet, to allow for practical applicability, CNN models should not be fooled by small image perturbations, as they are realized by adversarial attacks (Goodfellow, Shlens, and Szegedy 2015a; Moosavi-Dezfooli, Fawzi, and Frossard 2016; Rony et al. 2019). Such attacks aim to fool the network by perturbing image pixels such that human observers would still easily recognize the correct class label, while the network makes incorrect predictions. Susceptibility to such perturbations is prohibitive for the applicability of CNN models in real world scenarios, as it indicates limited reliability and generalization of the model.
+
+To establish adversarial robustness, many sophisticated methods have been developed (Goodfellow, Shlens, and Szegedy 2015a; Rony et al. 2019; Kurakin, Goodfellow, and Bengio 2017; Goodfellow, Shlens, and Szegedy 2015b). Some can defend only against one specific attack (Goodfellow, Shlens, and Szegedy 2015a) while others propose more general defences against diverse attacks. Another way to protect CNNs against adversarial examples is to detect them. Harder et al. (2021) classify adversarial examples through inspecting each input image and its feature maps in the frequency domain. Similarly, Yin et al. (2020) showed that natural images and adversarial examples differ significantly in their frequency spectra.
+
+
+Figure 1: Illustration of down-sampling, with (top right) and without anti-aliasing filter (bottom right) as well as an adversarial example (bottom left). The top left image shows the original, on the top right, this image is correctly down-sampled with an anti-aliasing filter. In the bottom right, no filter is applied, leading to aliasing. The adversarial example (bottom left) shows visually similar artifacts. In this paper, we investigate the role of aliasing for adversarial robustness.
+
+In fact, when considering the architecture of commonly employed CNN models, one could wonder why these models perform so well although they ignore basic sampling-theoretic foundations. Concretely, most architectures subsample feature maps without ensuring that they sample above the Nyquist rate (Shannon 1949), such that, after each down-sampling operation, the spectra of sub-sampled feature maps may overlap with their replica. This is called aliasing and implies that the network should be genuinely unable to fully restore an image from its feature maps. One can only hypothesize that common CNNs learn to (partially) compensate for this effect by learning appropriate filters. Following this line of thought, several recent works have suggested improving CNNs by including anti-aliasing techniques during down-sampling (Zhang 2019; Zou et al. 2020). They aim to make the models more robust against image translations, such that the class prediction does not suffer from small vertical or horizontal shifts of the content.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+In this paper, we further investigate the relationship between adversarial robustness and aliasing. While previous works (Yin et al. 2020; Harder et al. 2021; Lorenz et al. 2021) focused on adversarial examples, we systematically analyze potential aliasing effects inside CNNs. Specifically, we compare several recently proposed adversarially robust models to models resulting from conventional training schemes in terms of aliasing. We inspect intermediate feature maps before and after the down-sampling operation at inference. Our first observation is that these models indeed fail to sub-sample according to the Nyquist-Shannon theorem (Shannon 1949): we observe severe aliasing. Further, our experiments reveal that adversarially trained networks exhibit less aliasing than standard trained networks, indicating that adversarial training encourages CNNs to learn how to properly down-sample data without severe artifacts.
+
+In summary, our contributions are:
+
+ * We introduce a measure for aliasing and show that common CNN down-sampling layers fail to sub-sample the feature maps in a Nyquist-Shannon conform way.
+
+ * We analyze various adversarially trained models that are robust against a strong ensemble of adversarial attacks, AutoAttack (Croce and Hein 2020), and show that they exhibit significantly less aliasing than standard models.
+
+§ ALIASING IN CNNS
+
+CNNs usually have a pyramidal structure in which the data is progressively sub-sampled in order to aggregate spatial information while the number of channels increases. During sub-sampling, no explicit precautions are taken to avoid aliases, which arise from under-sampling. Specifically, when sub-sampling with stride 2, any frequency larger than $N/2$ , where $N$ is the size of the original data, will cause pathological overlaps in the frequency spectra. Those overlaps in the frequency spectra cause ambiguities such that high frequency components appear as low frequency components. Hence, local image perturbations can become indistinguishable from global manipulations.
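The folding of high frequencies onto low ones can be reproduced in a few lines of standard-library Python (a toy sketch, not the paper's code): a pure tone at frequency bin 6 of a 16-sample signal exceeds the Nyquist limit of the stride-2 subsampled signal (bin 4 of 8 samples) and therefore reappears at the aliased bin 8 − 6 = 2.

```python
import cmath
import math

def dft_mag(x):
    # magnitudes of the discrete Fourier transform of a real signal
    n_pts = len(x)
    return [abs(sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / n_pts)
                    for n in range(n_pts))) for k in range(n_pts)]

N, k0 = 16, 6
x = [math.cos(2 * math.pi * k0 * n / N) for n in range(N)]
mags = dft_mag(x)
peak = max(range(N // 2 + 1), key=lambda k: mags[k])        # peak at bin 6

sub = x[::2]                     # stride-2 subsampling, no low-pass filter
mags_sub = dft_mag(sub)
peak_sub = max(range(len(sub) // 2 + 1),
               key=lambda k: mags_sub[k])                   # aliased to bin 2
```

After subsampling, the energy that was at bin 6 is indistinguishable from a genuine low-frequency component at bin 2, which is exactly the ambiguity between local perturbations and global manipulations described above.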
+
+§ ALIASING METRIC
+
+To measure the amount of aliasing appearing after down-sampling, we compare each down-sampled feature map in the Fourier domain with its aliasing-free counterpart. To this end, we consider a feature map $f\left( x\right)$ of size ${2N} \times {2N}$ before down-sampling. We compute an "aliasing-free" down-sampling by extracting the $N$ lowest frequencies along both axes in Fourier space. W.l.o.g., we specifically consider down-sampling by strided convolutions, since these are predominantly used in adversarially robust models (Zagoruyko and Komodakis 2017).
+
+In each strided convolution, the input feature map $f\left( x\right)$ is convolved with the learned weights $w$ and downsampled by strides, thus potentially introducing frequency replica (i.e. aliases) in the downsampled signal ${\widehat{f}}_{s2}$ .
+
+$$
+{\widehat{f}}_{s2} = f\left( x\right) * g\left( {w,2}\right) \tag{1}
+$$
+
+To measure the amount of aliasing, we explicitly construct feature map frequency representations without such aliases. To this end, the original feature map $f\left( x\right)$ is convolved with the learned weights $w$ of the strided convolution, but without applying the stride, $g\left( {w,1}\right)$ , to obtain ${\widehat{f}}_{s1}$ .
+
+$$
+{\widehat{f}}_{s1} = f\left( x\right) * g\left( {w,1}\right) \tag{2}
+$$
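The relation between Eqs. (1) and (2) can be checked numerically: the stride-2 output equals the stride-1 output with every second sample kept, which is exactly the step where aliases can enter. A minimal 1D sketch (our illustration, not the paper's code):

```python
def conv1d(x, w, stride=1):
    """'Valid' cross-correlation of signal x with kernel w at the given stride."""
    out_len = (len(x) - len(w)) // stride + 1
    return [sum(x[i * stride + j] * w[j] for j in range(len(w)))
            for i in range(out_len)]

x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0]
w = [0.5, 0.5]
f_s1 = conv1d(x, w, stride=1)   # dense output, cf. Eq. (2)
f_s2 = conv1d(x, w, stride=2)   # strided output, cf. Eq. (1)
assert f_s2 == f_s1[::2]        # striding == dense convolution + subsampling
```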
+
+Afterwards, the 2D FFT of the down-sampled feature map ${\widehat{f}}_{s2}$ is computed, which we denote ${F}_{s2}$ :
+
+$$
+{F}_{s2}\left( {k,l}\right) = \frac{1}{{N}^{2}}\mathop{\sum }\limits_{{m = 0}}^{{N - 1}}\mathop{\sum }\limits_{{n = 0}}^{{N - 1}}{\widehat{f}}_{s2}\left( {m,n}\right) {e}^{-{2\pi j}\left( {\frac{k}{N}m + \frac{l}{N}n}\right) }, \tag{3}
+$$
+
+for $k,l = 0,\ldots ,N - 1$ . For the non-down-sampled feature maps ${\widehat{f}}_{s1}$ , we proceed similarly and compute, for $k,l = 0,\ldots ,{2N} - 1$ ,
+
+$$
+{F}_{s1}^{ \uparrow }\left( {k,l}\right) = \frac{1}{4{N}^{2}}\mathop{\sum }\limits_{{m = 0}}^{{{2N} - 1}}\mathop{\sum }\limits_{{n = 0}}^{{{2N} - 1}}{\widehat{f}}_{s1}\left( {m,n}\right) {e}^{-{2\pi j}\left( {\frac{k}{2N}m + \frac{l}{2N}n}\right) }. \tag{4}
+$$
+
+The aliasing free version ${F}_{s1}$ can be obtained by setting all frequencies above the Nyquist rate to zero before down-sampling,
+
+$$
+{F}_{s1}^{ \uparrow }\left( {k,l}\right) = 0 \tag{5}
+$$
+
+for $k \in \left\lbrack {N/2,{3N}/2}\right\rbrack$ and $l \in \left\lbrack {N/2,{3N}/2}\right\rbrack$ . Then the down-sampled version in the frequency domain corresponds to extracting the four corners of ${F}_{s1}^{ \uparrow }$ and reassembling them as shown in Figure 2,
+
+$$
+{F}_{s1}\left( {k,l}\right) = \begin{cases} {F}_{s1}^{ \uparrow }\left( {k,l}\right) & \text{for } k,l = 0,\ldots ,N/2 \\  {F}_{s1}^{ \uparrow }\left( {k + N,l}\right) & \text{for } k = N/2,\ldots ,N \text{ and } l = 0,\ldots ,N/2 \\  {F}_{s1}^{ \uparrow }\left( {k,l + N}\right) & \text{for } k = 0,\ldots ,N/2 \text{ and } l = N/2,\ldots ,N \\  {F}_{s1}^{ \uparrow }\left( {k + N,l + N}\right) & \text{for } k,l = N/2,\ldots ,N \end{cases} \tag{6}
+$$
+
+In this way, we guarantee that there are no overlaps, i.e. aliases, in the frequency spectra. Figure 2 illustrates the computation of the aliasing-free down-sampling in the frequency domain. The aliasing-free feature map can then be compared to the actual feature map in the frequency domain to measure the degree of aliasing. The full procedure is shown in Figure 3, where we start on the left with the original feature map. We then obtain the two down-sampled versions (with and without aliases) and compute the difference between both by taking the ${L}_{1}$ norm.
+
+The overall aliasing metric ${AM}$ for a down-sampling operation is calculated by taking the ${L}_{1}$ distance between down-sampled and alias-free feature maps in the Fourier domain, averaged over $K$ generated feature maps,
+
+$$
+{AM} = \frac{1}{K}\mathop{\sum }\limits_{{k = 1}}^{K}\left| {{F}_{{s1},k} - {F}_{{s2},k}}\right| . \tag{7}
+$$
+
+The proposed ${AM}$ measure is zero if no aliasing appears in any of the down-sampled feature maps, i.e. if sampling has been performed above the Nyquist rate. Whenever ${AM}$ is greater than 0, this is not the case, and from a theoretical point of view we should expect the model to be easy to attack, since it cannot reliably distinguish between fine details and coarse input structures.
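A NumPy sketch of the metric under our reading of Eqs. (3)-(7) (function names and the square single-channel inputs are our assumptions, not the paper's code): the alias-free spectrum is built by reassembling the four low-frequency corners of the normalized stride-1 spectrum, and AM is the mean L1 distance to the actual down-sampled spectrum.

```python
import numpy as np

def alias_free_spectrum(f_s1):
    """N x N alias-free spectrum of a 2N x 2N stride-1 output map:
    normalized 2D FFT with high frequencies dropped and the four
    low-frequency corners reassembled (cf. Eqs. 4-6). Assumes the
    side length M = 2N is divisible by 4."""
    M = f_s1.shape[0]                      # M = 2N
    N, h = M // 2, M // 4                  # h = N/2
    F_up = np.fft.fft2(f_s1) / (M ** 2)    # 1/(4 N^2) normalization, Eq. (4)
    out = np.zeros((N, N), dtype=complex)
    out[:h, :h] = F_up[:h, :h]
    out[h:, :h] = F_up[M - h:, :h]
    out[:h, h:] = F_up[:h, M - h:]
    out[h:, h:] = F_up[M - h:, M - h:]
    return out

def aliasing_metric(maps_s1, maps_s2):
    """AM of Eq. (7): mean L1 distance between the actual and the
    alias-free spectra over K (stride-1, stride-2) feature-map pairs."""
    total = 0.0
    for f1, f2 in zip(maps_s1, maps_s2):
        n = f2.shape[0]
        F_s2 = np.fft.fft2(f2) / n ** 2    # 1/N^2 normalization, Eq. (3)
        total += np.abs(alias_free_spectrum(f1) - F_s2).sum()
    return total / len(maps_s1)
```

For a band-limited feature map the metric vanishes, while a map dominated by frequencies above the new Nyquist rate yields a strictly positive value.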
+
+
+Figure 2: Step-by-step computation of the aliasing-free version of a feature map. The left image shows the magnitude of the Fourier representation of a feature map with the zero frequency in the upper left corner, i.e. high frequencies are in the center. Alias-free downsampling suppresses high frequencies prior to sampling. This can be implemented efficiently in the Fourier domain by cropping and reassembling the low-frequency regions of the Fourier representation, i.e. its four corners. Aliasing would correspond to folding the deleted high-frequency components into the constructed representation.
+
+
+Figure 3: FFT (Fast Fourier Transform) of a feature map in the original resolution (left). This feature map is down-sampled by striding with a factor of two after aliasing suppression (middle left) and with aliasing (middle right). The right panel shows the difference between the aliasing-free and the actual FFT of the down-sampled feature map.
+
+§ EXPERIMENTS
+
+We conducted an extensive analysis of existing adversarially robust models trained on CIFAR-10 (Krizhevsky 2012) with two different architectures, namely WideResNet-28-10 (WRN-28-10) (Zagoruyko and Komodakis 2017) and Preact ResNet-18 (He et al. 2016). Both architectures are commonly supported by many adversarial training approaches. As baselines, we trained a plain WRN-28-10 and a plain Preact ResNet-18, both with similar training schemes. All adversarially trained networks are pre-trained models provided by RobustBench (Croce et al. 2020).
+
+The WRN-28-10 networks have four operations in which down-sampling is performed. These operations are located in the second and third block of the network. In comparison, the Preact ResNet-18 networks have six down-sampling operations, located in the second, third and fourth layers of the network.
+
+Both architectures have similar building blocks, and the key operations including down-sampling are shown abstractly in the appendix in Figure 6. Each block starts with a convolution with stride two, followed by additional operations like ReLU and convolutions with stride one. The characteristic skip connection of ResNet architectures also needs to be implemented with stride two if down-sampling is applied in the corresponding block. Consequently, we need to analyze all down-sampling units and skip connections before they are summed up to form the output feature map.
+
+WideResNet-28-10 In the following, differently trained WRN-28-10 networks are compared in terms of their robust accuracy against AutoAttack (Croce and Hein 2020) and the amount of aliasing in their down-sampling layers. The training procedure of the baseline can be found in the appendix.
+
+Figure 4 indicates significant differences between adversarially trained and standard trained networks. First, the standard trained networks are not able to reach any robust accuracy, meaning their accuracy under adversarial attacks is equal to zero. Second, and this is most interesting for our investigation, standard trained networks exhibit much more aliasing in their down-sampling layers than adversarially trained networks. Across all layers and operations in which down-sampling is applied, the adversarially trained networks (blue dots) have much higher robust accuracy and much less aliasing than the standard trained networks. Additionally, we observe that the amount of aliasing in the second layer is much higher than in the third layer. This can be explained by the different feature map sizes in the two layers, as we calculate the absolute ${L}_{1}$ norm.
+
+When comparing the conventionally trained networks against each other, it can be seen that the specific training scheme also influences the amount of aliasing in the network. Concretely, the standard baseline model provided by RobustBench (Croce et al. 2020) exhibits less aliasing than the one trained by us. Unfortunately, RobustBench provides no further information about the exact training schedule, such that we cannot make any assumptions on the interplay between model hyperparameters and aliasing.
+
+
+Figure 4: Adversarial robustness versus aliasing, evaluated on different pre-trained WRN-28-10 models from RobustBench (Croce et al. 2020) as well as two baseline models, one from RobustBench (Standard RB) and one trained by us (Baseline). All blue dots represent adversarially trained networks; for clarity, we mark three popular models from Carmon et al. (2019), Wang et al. (2020) and Hendrycks, Lee, and Mazeika (2019) by name.
+
+Preact ResNet-18 We conducted the same measurements for the Preact ResNet-18 as we did for the WRN-28-10 and used the same training procedure described in the appendix. Additionally, we needed to account for one more layer with two additional down-sampling operations.
+
+The overall results, presented in Figure 5, are similar to those for the WRN-28-10 networks: most adversarially trained networks exhibit much less aliasing and higher robustness than conventionally trained ones. Yet, the additional down-sampling layer allows one further observation. While the absolute aliasing metric is overall lower, the robust networks reduce the aliasing predominantly in the earlier layers, i.e. the second and third layers. The aliasing in the fourth layer of adversarially robust models is not significantly different from that of conventionally trained models in the same layer.
+
+§ DISCUSSION
+
+Our experiments reveal that common CNNs fail to subsample their feature maps in a Nyquist-Shannon conform way and consequently introduce aliasing artifacts. Further, we can give strong evidence that aliasing and adversarial robustness are highly related. All evaluated robust models exhibit significantly less aliasing than standard trained models.
+
+After the application of down-sampling operations in standard CNNs all feature maps suffer from aliasing artifacts occurring due to insufficient sub-sampling.
+
+Adversarially trained networks exhibit significantly less aliasing in their feature maps than standard trained networks with the same architecture. As shown above, this holds for different model architectures and training schemes, especially in the early layers close to the input. This raises the question of whether models with a low amount of aliasing are necessarily more robust. It further entails the question of whether additional factors, such as padding techniques, are relevant in this context. These aspects will be the subject of future research.
+
+
+Figure 5: Adversarial robustness versus aliasing, evaluated on different pre-trained Preact ResNet-18 models. The blue dots represent adversarially trained networks, trained with the training schemes of Wong, Rice, and Kolter (2020), Rice, Wong, and Kolter (2020) and Sehwag et al. (2021) and provided by RobustBench (Croce et al. 2020). The orange dot is the baseline, trained by us without adversarial training.
+
+§ CONCLUSION
+
+In conclusion, we have shown strong evidence that aliasing and the adversarial robustness of CNNs are highly correlated. We hypothesize that aliasing is one of the main underlying factors leading to the vulnerability of CNNs. Recent methods to increase model robustness tend to treat the symptoms of the underlying problem rather than investigate its origins. To overcome this challenge, we may need to start thinking about CNNs from a signal-processing perspective and account for basic principles from this field, such as the Nyquist-Shannon theorem, which gives clear instructions on how to prevent aliasing. Still, it is not straightforward to incorporate this knowledge into the architecture and structure of common CNN designs. We aim to give a new and more traditional perspective on CNNs to help improve their performance and reliability and enable their application in real-world use cases.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/wGkmGrDsco8/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/wGkmGrDsco8/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..a93c4b3908e9b991d710594e602f3b2cabf74197
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/wGkmGrDsco8/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,460 @@
+# Saliency Diversified Deep Ensemble for Robustness to Adversaries
+
+First Author Name, ${}^{1}$ Second Author Name, ${}^{2}$ Third Author Name ${}^{1}$
+
+${}^{1}$ Affiliation 1
+
+firstAuthor@affiliation1.com, secondAuthor@affilation2.com, thirdAuthor@affiliation1.com
+
+## Abstract
+
+Deep learning models have shown incredible performance on numerous image recognition, classification, and reconstruction tasks. Although very appealing and valuable due to their predictive capabilities, one common threat remains challenging to resolve. A specifically trained attacker can introduce malicious input perturbations to fool the network, thus causing potentially harmful mispredictions. Moreover, these attacks can succeed when the adversary has full access to the target model (white-box) and even when such access is limited (black-box setting). The ensemble of models can protect against such attacks but might be brittle under shared vulnerabilities in its members (attack transferability). To that end, this work proposes a novel diversity-promoting learning approach for the deep ensembles. The idea is to promote saliency map diversity (SMD) on ensemble members to prevent the attacker from targeting all ensemble members at once by introducing an additional term in our learning objective. During training, this helps us minimize the alignment between model saliencies to reduce shared member vulnerabilities and, thus, increase ensemble robustness to adversaries. We empirically show a reduced transferability between ensemble members and improved performance compared to the state-of-the-art ensemble defense against medium and high-strength white-box attacks. In addition, we demonstrate that our approach combined with existing methods outperforms state-of-the-art ensemble algorithms for defense under white-box and black-box attacks.
+
+## 1 Introduction
+
+Nowadays, deep learning models have shown incredible performance on numerous image recognition, classification, and reconstruction tasks (Krizhevsky, Sutskever, and Hinton 2012; Lee et al. 2015; LeCun, Bengio, and Hinton 2015; Chen et al. 2020). Due to their great predictive capabilities, they have found widespread use across many domains (Szegedy et al. 2016; Devlin et al. 2019; Deng, Hinton, and Kingsbury 2013). Although deep learning models are very appealing for many interesting tasks, their robustness to adversarial attacks remains a challenging problem to solve. A specifically trained attacker can introduce malicious input perturbations to fool the network, thus causing potentially harmful mispredictions (Goodfellow, Shlens, and Szegedy 2015; Madry et al. 2018). Moreover, these attacks can succeed when the adversary has full access to the target model (white-box) (Athalye and Carlini 2018) and even when such access is limited (black-box) (Papernot et al. 2017), posing a hurdle in security- and trust-sensitive application domains.
+
+
+
+Figure 1: Left. An illustration of the proposed learning scheme for saliency-based diversification of deep ensemble consisting of 3 members. We use the cross-entropy losses ${\mathcal{L}}_{m}\left( x\right) , m \in \{ 1,2,3\}$ and regularization ${\mathcal{L}}_{SMD}\left( x\right)$ for saliency-based diversification. Right. An example of saliency maps for members of naively learned ensemble and learned ensemble with our approach. Red and blue pixels represent positive and negative saliency values respectively.
+
+The ensemble of deep models can offer protection against such attacks (Strauss et al. 2018). Commonly, an ensemble of models has proven to improve the robustness, reduce variance, increase prediction accuracy and enhance generalization compared to the individual models (LeCun, Bengio, and Hinton 2015). As such, ensembles were offered as a solution in many areas, including weather prediction (Palmer 2019), computer vision (Krizhevsky, Sutskever, and Hinton 2012), robotics and autonomous driving (Kober, Bagnell, and Peters 2013) as well as others, such as (Ganaie et al. 2021). However, 'naive' ensemble models are brittle due to shared vulnerabilities in their members (Szegedy et al. 2016). Thus an adversary can exploit attack transferability (Madry et al. 2018) to affect all members and the ensemble as a whole.
+
+In recent years, researchers have tried to improve the adversarial robustness of ensembles by maximizing different notions of diversity between the individual networks (Pang et al. 2019; Kariyappa and Qureshi 2019; Yang et al. 2020). In this way, adversarial attacks that fool one network are much less likely to fool the ensemble as a whole (Chen et al. 2019b; Sen, Ravindran, and Raghunathan 2019; Tramèr et al. 2018; Zhang, Liu, and Yan 2020). The research focusing on ensemble diversity aims to train the neural networks inside the ensemble model diversely, to withstand the deterioration caused by adversarial attacks. The works (Pang et al. 2019; Zhang, Liu, and Yan 2020; Kariyappa and Qureshi 2019) proposed improving the diversity of the ensemble constituents by training the model with a diversity regularization in addition to the main learning objective. (Kariyappa and Qureshi 2019) showed that an ensemble of models with misaligned loss gradients can be used as a defense against black-box attacks and proposed uncorrelated loss functions for ensemble learning. (Pang et al. 2019) proposed an adaptive diversity promoting (ADP) regularizer to encourage diversity between non-maximal predictions. (Yang et al. 2020) minimize a vulnerability diversification objective in order to suppress shared 'weak' features across the ensemble members. However, some of these approaches only focused on white-box attacks (Pang et al. 2019) or black-box attacks (Kariyappa and Qureshi 2019), or were evaluated on a single dataset (Yang et al. 2020).
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+In this paper, we propose a novel diversity-promoting learning approach for deep ensembles. The idea is to promote Saliency Map Diversity (SMD) to prevent the attacker from targeting all ensemble members at once.
+
+Saliency maps (SM) (Gu and Tresp 2019) represent the derivative of the network prediction for the true label with respect to the input image. They indicate the most 'sensitive' image content for the prediction. Intuitively, we would like to learn an ensemble whose members have different sensitivity across the image content while not sacrificing the ensemble's predictive power. Therefore, we introduce a saliency map diversity (SMD) regularization term in our learning objective. Given image data and an ensemble of models, we define the SMD using the inner products between all pairs of saliency maps (each ensemble member has one saliency map per image). Different from our approach with SMD regularization, (Pang et al. 2019) defined the diversity measure using the non-maximal predictions of individual members, and as such might not be able to capture the possible shared sensitivity with respect to the image content related to the correct predictions.
+
+We jointly learn our ensemble members using cross-entropy losses (LeCun, Bengio, and Hinton 2015) for each member and our shared SMD term. This helps us minimize the alignment between model saliency maps and enforces the ensemble members to have misaligned and non-overlapping sensitivity to different image content. Thus, with our approach we try to minimize possible shared sensitivity across the ensemble members that might be exploited as a vulnerability, in contrast to (Yang et al. 2020), who try to minimize shared 'weak' features across the ensemble members. It is also important to note that our regularization differs from (Kariyappa and Qureshi 2019), since it focuses on gradients coming from the correct class predictions (saliencies), which can also be seen as a loss-agnostic approach. We illustrate our learning scheme in Fig. 1, left. In Fig. 1, right, we visualize the saliency maps with respect to one image sample for the members of a naively trained ensemble and an ensemble trained with our approach.
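Under our reading, the SMD term penalizes aligned pairs of member saliency maps. A minimal NumPy sketch of such a pairwise penalty (the normalization by map norm and the absolute value are our assumptions; the paper's exact regularizer may differ):

```python
import numpy as np

def smd_penalty(saliency_maps):
    """Mean absolute cosine similarity between the flattened saliency
    maps of all ensemble-member pairs: 0 for mutually orthogonal maps,
    1 for perfectly aligned maps."""
    flat = [s.ravel() / (np.linalg.norm(s.ravel()) + 1e-12)
            for s in saliency_maps]
    n = len(flat)
    pairs = [(i, j) for i in range(n) for j in range(i + 1, n)]
    return sum(abs(flat[i] @ flat[j]) for i, j in pairs) / len(pairs)
```

A total training objective in the spirit of Fig. 1 would then sum the members' cross-entropy losses and add a weighted `smd_penalty` computed on their saliency maps.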
+
+We perform an extensive numerical evaluation using the MNIST (Lecun et al. 1998), Fashion-MNIST (F-MNIST) (Xiao, Rasul, and Vollgraf 2017), and CIFAR-10 (Krizhevsky 2009) datasets to validate our approach. We use two neural network architectures and conduct experiments for different known attacks at different attack strengths. Our results show a reduced transferability between ensemble members and improved performance compared to the state-of-the-art ensemble defense against medium- and high-strength white-box attacks. Since we minimize the shared sensitivity, which can be seen as the model's attention to prediction-relevant image content, we also suspected that our approach could combine well with other existing methods. To that end, we show that our approach combined with the (Yang et al. 2020) method outperforms state-of-the-art ensemble algorithms for defense under adversarial attacks in both white-box and black-box settings. We summarize our main contributions in the following:
+
+- We propose a diversity-promoting learning approach for deep ensemble, where we introduce a saliency-based regularization that diversifies the sensitivity of ensemble members with respect to the image content.
+
+- We show improved performance compared to the state-of-the-art ensemble defense against medium- and high-strength white-box attacks, as well as on-par performance for black-box attacks.
+
+- We demonstrate that our approach combined with the (Yang et al. 2020) method outperforms state-of-the-art ensemble defense algorithms in white-box and black-box attacks.
+
+## 2 Related Work
+
+In this section, we give an overview of recent related work.
+
+### 2.1 Common Defense Strategies
+
+In the following, we describe the common defense strategies against adversarial attacks, grouping them into four categories.
+
+Adversarial Detection. These methods aim to detect adversarial examples or to restore an adversarial input so that it is closer to the original image space. Adversarial detection methods (Bhambri et al. 2020) include MagNet, Feature Squeezing, and Convex Adversarial Polytope. The MagNet method (Meng and Chen 2017) consists of two parts: a detector and a reformer. The detector aims to recognize and reject adversarial images, while the reformer aims to reconstruct the image as closely as possible to the original using an auto-encoder. Feature Squeezing (Xu, Evans, and Qi 2018) utilizes feature transformation techniques such as squeezing color bits and spatial smoothing. These methods may be prone to rejecting clean examples and may have to severely modify the model input, which can reduce performance on clean data.
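+Both squeezers are simple to state concretely. Below is a minimal NumPy sketch (our own illustration, not the authors' implementation) of the two transformations; the detection rule in Feature Squeezing then compares the model's predictions on `img` and on its squeezed versions and flags inputs where they disagree strongly.
+
+```python
+import numpy as np
+
+def squeeze_color_bits(x, bits):
+    """Quantize pixel intensities (assumed in [0, 1]) to `bits` bits of depth."""
+    levels = 2 ** bits - 1
+    return np.round(x * levels) / levels
+
+def median_smooth(x, k=3):
+    """k x k median filter with reflect padding (the spatial-smoothing squeezer)."""
+    pad = k // 2
+    xp = np.pad(x, pad, mode="reflect")
+    out = np.empty_like(x)
+    for i in range(x.shape[0]):
+        for j in range(x.shape[1]):
+            out[i, j] = np.median(xp[i:i + k, j:j + k])
+    return out
+
+img = np.array([[0.12, 0.48, 0.91],
+                [0.05, 0.50, 0.88],
+                [0.10, 0.52, 0.95]])   # toy 3x3 image, values in [0, 1]
+squeezed = squeeze_color_bits(img, bits=1)   # every pixel becomes 0.0 or 1.0
+smoothed = median_smooth(img)
+```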
+
+Gradient Masking and Randomization Defenses. Gradient masking refers to manipulation techniques that try to hide the gradient of the network model to robustify it against attacks that rely on gradient directions; it includes distillation, obfuscation, shattering, and the use of stochastic, vanishing, or exploding gradients (Papernot et al. 2017; Athalye, Carlini, and Wagner 2018; Carlini and Wagner 2017). The authors in (Papernot et al. 2016b) introduced a method based on distillation, which uses an additional neural network to 'distill' labels for the original neural network in order to reduce the effect of adversarial perturbations. (Xie et al. 2018) used a randomization method during training that consists of random resizing and random padding of the training images. Other examples of such randomization are noise addition at different levels of the system (You et al. 2019) and randomized lossy compression (Das et al. 2018). As a disadvantage, these approaches can reduce accuracy, since they may discard useful information, and they might introduce instabilities during learning. Moreover, it was shown that they can often be easily bypassed by the adversary via expectation-over-transformation techniques (Athalye and Carlini 2018).
+
+Secrecy-based Defenses. The third group comprises defense mechanisms that include randomization explicitly based on a secret key shared between the training and testing stages. Notable examples are random projections (Vinh et al. 2016), random feature sampling (Chen et al. 2019a), and key-based transformations (Taran, Rezaeifar, and Voloshynovskiy 2018). For example, (Taran et al. 2019) introduces randomized diversification in a special transform domain based on a secret key, which creates an information advantage for the defender. Nevertheless, the main disadvantage of the known methods in this group is the loss of performance due to the reduction of useful data, which must be compensated by proper diversification and corresponding aggregation with the required secret key.
+
+Adversarial Training (AT). (Goodfellow, Shlens, and Szegedy 2015; Madry et al. 2018) proposed one of the most common approaches to improving adversarial robustness. The main idea is to train neural networks on both clean and adversarial samples and force them to classify the adversarial examples correctly. The disadvantage of this approach is that it can significantly increase the training time and can reduce model accuracy on unaltered data (Tsipras et al. 2018).
+
+### 2.2 Diversifying Ensemble Training Strategies
+
+Even a naively trained ensemble can improve adversarial robustness. Unfortunately, ensemble members may share a large portion of their vulnerabilities (Dauphin et al. 2014) and do not provide any guarantees of adversarial robustness (Tramèr et al. 2018).
+
+(Tramèr et al. 2018) proposed the Ensemble Adversarial Training (EAT) procedure. The main idea of EAT is to minimize the classification error against an adversary that maximizes the error (which also represents a min-max optimization problem (Madry et al. 2018)). However, this approach is very computationally expensive and, according to the original authors, may be vulnerable to white-box attacks.
+
+Recently, diversifying the models inside an ensemble has gained attention. Such approaches add a mechanism to the learning procedure that tries to shrink the shared adversarial subspace by making the ensemble members diverse and less prone to shared weaknesses.
+
+(Pang et al. 2019) introduced the ADP regularizer to diversify the training of the ensemble model and increase adversarial robustness. To do so, they first defined an Ensemble Diversity ${ED} = {\operatorname{Vol}}^{2}\left( {\begin{Vmatrix}{f}_{m}^{\smallsetminus y}\left( x\right) \end{Vmatrix}}_{2}\right)$, where ${f}_{m}^{\smallsetminus y}\left( x\right)$ is the order-preserving prediction of the $m$-th ensemble member on $x$ without the $y$-th (maximal) element and $\operatorname{Vol}\left( \cdot \right)$ is the total volume of the span of these vectors. The ADP regularizer is calculated as ${\operatorname{ADP}}_{\alpha ,\beta }\left( {x, y}\right) = \alpha \cdot \mathcal{H}\left( \mathcal{F}\right) + \beta \cdot \log \left( {ED}\right)$, where $\mathcal{H}\left( \mathcal{F}\right) = - \mathop{\sum }\limits_{i}{f}_{i}\left( x\right) \log \left( {{f}_{i}\left( x\right) }\right)$ is the Shannon entropy and $\alpha ,\beta > 0$. The ADP regularizer is then subtracted from the original loss during training.
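+To make the quantities above concrete, the following sketch computes the ADP regularizer for a single sample directly from the members' probability vectors, using the Gram-determinant identity $\operatorname{Vol}^2(A) = \det(AA^T)$ for the squared span volume. The toy predictions and the default $\alpha, \beta$ values are our own illustrative choices.
+
+```python
+import numpy as np
+
+def adp(preds, y, alpha=2.0, beta=0.5):
+    """ADP regularizer for one sample (after Pang et al. 2019).
+
+    preds: (M, C) array of per-member class probabilities; y: true class index."""
+    F = preds.mean(axis=0)                              # ensemble prediction
+    H = -(F * np.log(F + 1e-12)).sum()                  # Shannon entropy H(F)
+    A = np.delete(preds, y, axis=1)                     # drop the y-th element
+    A = A / (np.linalg.norm(A, axis=1, keepdims=True) + 1e-12)
+    ED = np.linalg.det(A @ A.T)                         # Vol^2 via the Gram matrix
+    return alpha * H + beta * np.log(ED + 1e-12)
+
+# Diverse non-maximal predictions span a larger volume, hence a larger ADP value.
+diverse   = np.array([[0.8, 0.2, 0.0], [0.8, 0.0, 0.2]])
+identical = np.array([[0.8, 0.2, 0.0], [0.8, 0.2, 0.0]])
+```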
+
+The GAL regularizer (Kariyappa and Qureshi 2019) is intended to diversify the adversarial subspaces and reduce their overlap between the networks inside the ensemble model. GAL is based on the cosine similarity (CS) between the gradients of two different models, ${CS}{\left( {\nabla }_{x}{\mathcal{J}}_{a},{\nabla }_{x}{\mathcal{J}}_{b}\right) }_{a \neq b} = \frac{\left\langle {{\nabla }_{x}{\mathcal{J}}_{a},{\nabla }_{x}{\mathcal{J}}_{b}}\right\rangle }{\begin{Vmatrix}{\nabla }_{x}{\mathcal{J}}_{a}\end{Vmatrix} \cdot \begin{Vmatrix}{\nabla }_{x}{\mathcal{J}}_{b}\end{Vmatrix}}$, where ${\nabla }_{x}{\mathcal{J}}_{m}$ is the gradient of the loss of the $m$-th member with respect to $x$. During training, the authors add the term ${GAL} = \log \left( {\mathop{\sum }\limits_{{1 \leq a < b \leq N}}\exp \left( {{CS}\left( {{\nabla }_{x}{\mathcal{J}}_{a},{\nabla }_{x}{\mathcal{J}}_{b}}\right) }\right) }\right)$ to the learning objective.
+
+With DVERGE (Yang et al. 2020), the authors aim to maximize vulnerability diversity together with the original loss. They define a vulnerability diversity between pairs of ensemble members ${f}_{a}\left( x\right)$ and ${f}_{b}\left( x\right)$ using data consisting of an original sample and its feature-distilled version. In other words, they deploy an ensemble learning procedure where each ensemble member ${f}_{a}\left( x\right)$ is trained on adversarial samples generated by the other members ${f}_{b}\left( x\right), a \neq b$.
+
+### 2.3 Adversarial Attacks
+
+The goal of the adversary is to craft an image ${x}^{\prime }$ that is very close to the original $x$ and would be correctly classified by humans, but fools the target model. Commonly, attackers can act as adversaries in white-box or black-box mode, depending on the level of access they have gained over the target model.
+
+White-box and Black-box Attacks. In the white-box scenario, the attacker is fully aware of the target model's architecture and parameters and has access to the model's gradients. White-box attacks are very effective against the target model, but they are bound by the extent of the attacker's knowledge of the model. In the black-box scenario, the adversary does not have access to the model parameters and may only know the training dataset and, in the grey-box setting, the architecture of the model. The attacks are crafted on a surrogate model but still work to some extent on the target due to transferability (Papernot et al. 2016a).
+
+An adversary can build a white-box or black-box attack using different approaches. In the following text, we briefly describe the methods commonly used for adversarial attacks.
+
+Fast Gradient Sign Method (FGSM). (Goodfellow, Shlens, and Szegedy 2015) generate the adversarial example ${x}^{\prime }$ by adding the sign of the gradient $\operatorname{sign}\left( {{\nabla }_{x}\mathcal{J}\left( {x, y}\right) }\right)$ as a perturbation of strength $\epsilon$, i.e., ${x}^{\prime } = x + \epsilon \cdot \operatorname{sign}\left( {{\nabla }_{x}\mathcal{J}\left( {x, y}\right) }\right)$.
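+Since the update only needs the input gradient, it can be illustrated without autograd. The sketch below applies FGSM to a hypothetical logistic-regression 'model', whose cross-entropy input gradient has the closed form $(p - y) \cdot w$; in a real network the same gradient would come from backpropagation.
+
+```python
+import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+def fgsm(x, y, w, b, eps):
+    """x' = x + eps * sign(grad_x J(x, y)) for a logistic-regression model."""
+    p = sigmoid(w @ x + b)          # predicted probability of class 1
+    grad_x = (p - y) * w            # closed-form cross-entropy input gradient
+    return x + eps * np.sign(grad_x)
+
+w, b = np.array([2.0, -1.0]), 0.0   # toy model parameters (illustrative)
+x, y = np.array([0.5, 0.5]), 1.0    # a correctly classified sample
+x_adv = fgsm(x, y, w, b, eps=0.4)
+p_clean, p_adv = sigmoid(w @ x + b), sigmoid(w @ x_adv + b)
+# p_adv < p_clean: one signed step lowers the confidence in the true class.
+```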
+
+Random Step-FGSM (R-FGSM). The method proposed in (Tramèr et al. 2018) is an extension of FGSM in which a single random step is taken before FGSM, to account for the assumed non-smoothness of the loss function in the neighborhood of data points.
+
+Basic Iterative Method (BIM). (Kurakin, Goodfellow, and Bengio 2017) proposed to compute the attack gradient iteratively, one smaller step at a time, thus generating an attack as ${x}_{i}^{\prime } = {\operatorname{clip}}_{x,\epsilon }\left( {{x}_{i - 1}^{\prime } + \frac{\epsilon }{r} \cdot \operatorname{sign}\left( {g}_{i - 1}\right) }\right)$, where ${g}_{i} = {\nabla }_{x}\mathcal{J}\left( {{x}_{i}^{\prime }, y}\right)$, ${x}_{0}^{\prime } = x$, and $r$ is the number of iterations.
+
+Projected Gradient Descent (PGD). (Madry et al. 2018) presented an attack similar to BIM, with the difference that the initialization ${x}_{0}^{\prime }$ is selected at random in a neighborhood $U\left( {x,\epsilon }\right)$ of $x$.
+
+Momentum Iterative Method (MIM). (Dong et al. 2018) proposed an extension of BIM that updates the gradient with a momentum term $\mu$. Holding the momentum helps the attack avoid small holes and poor local solutions, ${g}_{i} = \mu {g}_{i - 1} + \frac{{\nabla }_{x}\mathcal{J}\left( {{x}_{i - 1}^{\prime }, y}\right) }{{\begin{Vmatrix}{\nabla }_{x}\mathcal{J}\left( {x}_{i - 1}^{\prime }, y\right) \end{Vmatrix}}_{1}}$.
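+The iterative attacks above differ mainly in how the step direction is formed. A minimal sketch of BIM and MIM on the same kind of hypothetical logistic model (closed-form input gradient, no autograd) could look as follows.
+
+```python
+import numpy as np
+
+def sigmoid(z):
+    return 1.0 / (1.0 + np.exp(-z))
+
+def grad_loss(x, y, w, b):
+    """Closed-form cross-entropy input gradient for a logistic model."""
+    return (sigmoid(w @ x + b) - y) * w
+
+def bim(x, y, w, b, eps, r):
+    """BIM: r signed steps of size eps/r, clipped to the eps-ball around x."""
+    x_adv = x.copy()
+    for _ in range(r):
+        x_adv = x_adv + (eps / r) * np.sign(grad_loss(x_adv, y, w, b))
+        x_adv = np.clip(x_adv, x - eps, x + eps)
+    return x_adv
+
+def mim(x, y, w, b, eps, r, mu=1.0):
+    """MIM: accumulate an L1-normalized gradient momentum g, then step on sign(g)."""
+    x_adv, g = x.copy(), np.zeros_like(x)
+    for _ in range(r):
+        grad = grad_loss(x_adv, y, w, b)
+        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
+        x_adv = np.clip(x_adv + (eps / r) * np.sign(g), x - eps, x + eps)
+    return x_adv
+
+w, b = np.array([2.0, -1.0]), 0.0   # illustrative toy model
+x, y = np.array([0.5, 0.5]), 1.0
+x_bim = bim(x, y, w, b, eps=0.4, r=10)
+x_mim = mim(x, y, w, b, eps=0.4, r=10)
+```
+
+PGD would differ from `bim` above only by initializing `x_adv` at a random point inside the $\epsilon$-ball rather than at $x$.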
+
+## 3 Saliency Diversified Ensemble Learning
+
+In this section, we present our diversity-promoting learning approach for deep ensembles. In the first subsection, we introduce the saliency-based regularizer, while in the second subsection we describe our learning objective.
+
+### 3.1 Saliency Diversification Measure
+
+Saliency Map. In (Etmann et al. 2019), the authors investigated the connection between a neural network's robustness to adversarial attacks and the interpretability of the resulting saliency maps. They hypothesized that the increase in interpretability could be due to a higher alignment between the image and its saliency map. Moreover, they concluded that the strength of this connection is strongly linked to how locally similar the network is to a linear model. In (Mangla, Singh, and Balasubramanian 2020), the authors showed that even weak saliency maps suffice to improve adversarial robustness, with no additional effort to generate the perturbations themselves.
+
+We build our approach on prior work connecting saliency maps and adversarial robustness, but in the context of deep ensemble models. In (Mangla, Singh, and Balasubramanian 2020), the authors decrease the sensitivity of the prediction with respect to the saliency map by using a special augmentation during training. We also decrease this sensitivity, but for the ensemble: we do so by enforcing misalignment between the saliency maps of the ensemble members.
+
+We consider a saliency map for model ${f}_{m}$ with respect to data $x$ conditioned on the true class label $y$ . We calculate it as the first order derivative of the model output for the true class label with respect to the input, i.e.,
+
+$$
+{s}_{m} = \frac{\partial {f}_{m}\left( x\right) \left\lbrack y\right\rbrack }{\partial x}, \tag{1}
+$$
+
+where ${f}_{m}\left( x\right) \left\lbrack y\right\rbrack$ is the $y$-th element of the prediction vector ${f}_{m}\left( x\right)$.
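+In PyTorch, Eq. (1) would typically be computed with automatic differentiation; as a dependency-free illustration, the sketch below approximates the same derivative by central finite differences for a toy two-class linear-softmax model of our own construction.
+
+```python
+import numpy as np
+
+def softmax(z):
+    e = np.exp(z - z.max())
+    return e / e.sum()
+
+def saliency(f, x, y, h=1e-5):
+    """Central-difference approximation of Eq. (1): d f(x)[y] / dx."""
+    s = np.zeros_like(x)
+    for i in range(x.size):
+        e = np.zeros_like(x)
+        e[i] = h
+        s[i] = (f(x + e)[y] - f(x - e)[y]) / (2 * h)
+    return s
+
+W = np.array([[1.0, 0.0], [0.0, 1.0]])      # toy 2-class linear-softmax model
+f = lambda x: softmax(W @ x)
+s = saliency(f, np.array([0.3, -0.2]), y=0)
+# For this model the analytic gradient is p[y] * (W[y] - p @ W).
+```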
+
+Shared Sensitivity Across Ensemble Members. Given image data $x$ and an ensemble of $M$ models ${f}_{m}$ , we define our SMD measure as:
+
+$$
+{\mathcal{L}}_{SMD}\left( x\right) = \log \left\lbrack {\mathop{\sum }\limits_{m}\mathop{\sum }\limits_{{l > m}}\exp \left( \frac{{s}_{m}^{T}{s}_{l}}{{\begin{Vmatrix}{s}_{m}\end{Vmatrix}}_{2}{\begin{Vmatrix}{s}_{l}\end{Vmatrix}}_{2}}\right) }\right\rbrack , \tag{2}
+$$
+
+where ${s}_{m} = \frac{\partial {f}_{m}\left( x\right) \left\lbrack y\right\rbrack }{\partial x}$ is the saliency map of ensemble member ${f}_{m}$ with respect to the image data $x$. A high value of ${\mathcal{L}}_{SMD}\left( x\right)$ means alignment and similarity between the saliency maps ${s}_{m}$ of the models ${f}_{m}\left( x\right)$ with respect to the image data $x$. Thus, a high SMD (2) indicates a possible shared sensitivity to particular image content common to all the ensemble members. A pronounced shared sensitivity across the ensemble members points to a vulnerability that might be targeted and exploited by an adversarial attack. To prevent this, we would like ${\mathcal{L}}_{SMD}\left( x\right)$ to be as small as possible, which means that different image content is important to different ensemble members.
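+Given the members' flattened saliency maps, Eq. (2) is a LogSumExp over pairwise cosine similarities. A minimal NumPy sketch follows (with the usual max-shift for numerical stability; the toy saliency vectors are our own).
+
+```python
+import numpy as np
+
+def smd(sals):
+    """Eq. (2): LogSumExp over pairwise cosine similarities of saliency maps."""
+    sims = np.array([s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
+                     for i, s1 in enumerate(sals) for s2 in sals[i + 1:]])
+    c = sims.max()                      # shift for a stable LogSumExp
+    return c + np.log(np.exp(sims - c).sum())
+
+# Aligned saliencies -> high SMD (shared sensitivity); diverse -> lower SMD.
+aligned = [np.array([1.0, 0.0]), np.array([1.0, 0.1]), np.array([0.9, 0.0])]
+diverse = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([0.7, 0.7])]
+```
+
+As expected, the aligned set yields a larger `smd` value than the diverse one, mirroring the shared-sensitivity interpretation above.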
+
+### 3.2 Saliency Diversification Objective
+
+We jointly learn our ensemble members using a common cross-entropy loss per member and our saliency-based sensitivity measure described in the subsection above. We define our learning objective as follows:
+
+$$
+\mathcal{L} = \mathop{\sum }\limits_{x}\mathop{\sum }\limits_{m}{\mathcal{L}}_{m}\left( x\right) + \lambda \mathop{\sum }\limits_{x}{\mathcal{L}}_{SMD}\left( x\right) , \tag{3}
+$$
+
+where ${\mathcal{L}}_{m}\left( x\right)$ is the cross-entropy loss for ensemble member $m$, ${\mathcal{L}}_{SMD}\left( x\right)$ is our SMD measure for image data $x$ and an ensemble of $M$ models ${f}_{m}$, and $\lambda > 0$ is a Lagrangian parameter. By minimizing our learning objective, which includes a saliency-based sensitivity measure, we enforce the ensemble members to have misaligned and non-overlapping sensitivity to different image content. Our regularization enables us to strongly penalize small misalignments ${s}_{m}^{T}{s}_{l}$ between the saliency maps ${s}_{m}$ and ${s}_{l}$, while at the same time ensuring that a large misalignment is not discarded. Additionally, since ${\mathcal{L}}_{SMD}\left( x\right)$ is a LogSumExp function, it has good numerical properties (Kariyappa and Qureshi 2019). Thus, our approach effectively minimizes possible shared sensitivity across the ensemble members that might be exploited as a vulnerability. In contrast to the GAL regularizer (Kariyappa and Qureshi 2019), SMD is loss-agnostic (it can be used with loss functions other than cross-entropy) and does not focus on incorrect-class predictions (which are irrelevant for accuracy). Additionally, it has a clear link to work on interpretability (Etmann et al. 2019) and produces diverse but meaningful saliency maps (see Fig. 1).
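+Putting Eqs. (1)-(3) together for a single sample, a self-contained sketch with a toy ensemble of linear-softmax members (for which the saliency of Eq. (1) has the closed form $p\left\lbrack y\right\rbrack \left( W\left\lbrack y\right\rbrack - {p}^{T}W\right)$) might look as follows; the member count, random weights, and $\lambda$ are illustrative only.
+
+```python
+import numpy as np
+
+def softmax(z):
+    e = np.exp(z - z.max())
+    return e / e.sum()
+
+def saliency(W, x, y):
+    """Eq. (1) in closed form for a linear-softmax member f(x) = softmax(Wx)."""
+    p = softmax(W @ x)
+    return p[y] * (W[y] - p @ W)
+
+def smd(sals):
+    """Eq. (2): LogSumExp over pairwise cosine similarities of saliency maps."""
+    sims = np.array([s1 @ s2 / (np.linalg.norm(s1) * np.linalg.norm(s2))
+                     for i, s1 in enumerate(sals) for s2 in sals[i + 1:]])
+    c = sims.max()                        # shift for a stable LogSumExp
+    return c + np.log(np.exp(sims - c).sum())
+
+def ensemble_loss(Ws, x, y, lam=1.0):
+    """Eq. (3) for one sample: member cross-entropies plus lam * SMD."""
+    ce = sum(-np.log(softmax(W @ x)[y] + 1e-12) for W in Ws)
+    return ce + lam * smd([saliency(W, x, y) for W in Ws])
+
+x, y = np.array([0.4, -0.1, 0.3]), 0
+rng = np.random.default_rng(0)
+Ws = [rng.normal(size=(2, 3)) for _ in range(3)]   # 3-member toy ensemble
+loss = ensemble_loss(Ws, x, y)
+```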
+
+Assuming unit-norm saliencies, the gradient-based update for one data sample $x$ with respect to the parameters ${\theta }_{{f}_{m}}$ of a particular ensemble member can be written as:
+
+$$
+{\theta }_{{f}_{m}} = {\theta }_{{f}_{m}} - \alpha \left( {\frac{\partial {\mathcal{L}}_{m}\left( x\right) }{\partial {\theta }_{{f}_{m}}} + \lambda \frac{\partial {\mathcal{L}}_{SMD}\left( x\right) }{\partial {\theta }_{{f}_{m}}}}\right) =
+$$
+
+$$
+= {\theta }_{{f}_{m}} - \alpha \frac{\partial {\mathcal{L}}_{m}\left( x\right) }{\partial {\theta }_{{f}_{m}}} - {\alpha \lambda }\frac{\partial {f}_{m}\left( x\right) \left\lbrack y\right\rbrack }{\partial x\partial {\theta }_{{f}_{m}}}\mathop{\sum }\limits_{{j \neq m}}{\beta }_{j}\frac{\partial {f}_{j}\left( x\right) \left\lbrack y\right\rbrack }{\partial x}, \tag{4}
+$$
+
+where $\alpha$ is the learning rate and ${\beta }_{j} = \frac{\exp \left( {{s}_{m}^{T}{s}_{j}}\right) }{\mathop{\sum }\limits_{m}\mathop{\sum }\limits_{{k > m}}\exp \left( {{s}_{m}^{T}{s}_{k}}\right) }$. The third term forces the learning of each ensemble member onto optimization paths where the gradient $\frac{\partial {f}_{m}\left( x\right) \left\lbrack y\right\rbrack }{\partial x\partial {\theta }_{{f}_{m}}}$ of its saliency map with respect to ${\theta }_{{f}_{m}}$ is misaligned with the weighted average of the remaining saliency maps $\mathop{\sum }\limits_{{j \neq m}}{\beta }_{j}\frac{\partial {f}_{j}\left( x\right) \left\lbrack y\right\rbrack }{\partial x}$. Also, (4) reveals that with our approach the ensemble members can be learned in parallel, provided that the saliency maps are shared between the models (we leave this direction for future work).
+
+## 4 Empirical Evaluation
+
+This section is devoted to empirical evaluation and performance comparison with state-of-the-art ensemble methods.
+
+### 4.1 Data Sets and Baselines
+
+We performed the evaluation using 3 classical computer vision datasets (MNIST (Lecun et al. 1998), Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017), and CIFAR-10 (Krizhevsky 2009)) and include 4 baselines (a naive ensemble, (Pang et al. 2019), (Kariyappa and Qureshi 2019), and (Yang et al. 2020)) in our comparison.
+
+Datasets. The MNIST dataset (Lecun et al. 1998) consists of 70000 gray-scale images of handwritten digits with dimensions of ${28} \times {28}$ pixels. The F-MNIST dataset (Xiao, Rasul, and Vollgraf 2017) is similar to MNIST, with the same number of images and classes; each image is grayscale with a size of ${28} \times {28}$. It is widely used as an alternative to MNIST for evaluating machine learning models. The CIFAR-10 dataset (Krizhevsky 2009) contains 60000 color images with 3 channels, covering 10 real-life classes; each color channel has a dimension of ${32} \times {32}$.
+
+Baselines. As the simplest baseline, we compare against the performance of a naive ensemble, i.e., one trained without any defense mechanism against adversarial attacks. Additionally, we consider state-of-the-art methods as baselines and compare the performance of our approach with the following: the Adaptive Diversity Promoting (ADP) method (Pang et al. 2019), the Gradient Alignment Loss (GAL) method (Kariyappa and Qureshi 2019), and the Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles (DVERGE, or DV.) method (Yang et al. 2020).
+
+### 4.2 Training and Testing Setup
+
+Used Neural Networks. To evaluate our approach, we use two neural networks LeNet-5 (Lecun et al. 1998) and ResNet-20 (He et al. 2016). LeNet-5 is a classical small neural network for vision tasks, while ResNet-20 is another widely used architecture in this domain.
+
+Training Setup. We run our training algorithm for 50 epochs on MNIST and F-MNIST and 200 epochs on CIFAR-10, using the Adam optimizer (Kingma and Ba 2015), a learning rate of 0.001, a weight decay of 0.0001, and a batch size of 128. We use no data augmentation on MNIST and F-MNIST and use normalization, random cropping, and flipping on CIFAR-10. In all of our experiments, we use 86% of the data for training and 14% for testing. For the regularizers implemented from prior work, we used the $\lambda$ suggested by the respective authors, while we found that an SMD regularizer strength $\lambda$ in the range $\left\lbrack {{0.5},2}\right\rbrack$ gives good results; thus, in all of our experiments, we take $\lambda = 1$. We report all results as averages over 5 independent trials (we include the standard deviations in Appendix A). We report results for ensembles of 3 members in the main paper, and for 5 and 8 members in Appendix C.
+
+We used the LeNet-5 neural network for the MNIST and F-MNIST datasets and ResNet-20 for CIFAR-10. For a fair comparison, we also train ADP (Pang et al. 2019), GAL (Kariyappa and Qureshi 2019), and DVERGE (Yang et al. 2020) under a similar training setup as described above. We made sure that the setup is consistent with the one given by the original authors, with the exception of using the Adam optimizer for training DVERGE. We also added our approach as a regularizer to the DVERGE algorithm; we name this combination SMD+ and run it under the setup described above. All models are implemented in PyTorch (Paszke et al. 2017). We use the AdverTorch library (Ding, Wang, and Jin 2019) for adversarial attacks.
+
+In the adversarial training setting, we follow the EAT approach (Tramèr et al. 2018) by creating adversarial examples on 3 held-out pre-trained ensembles with the same size and architecture as the baseline ensemble. The examples are created via a PGD-${L}_{\infty }$ attack with 10 steps and $\epsilon = {0.1}$.
+
+Adversarial Attacks. To evaluate our proposed approach and compare its performance to the baselines, we use the set of adversarial attacks described in Section 2.3 in both black-box and white-box settings. We construct adversarial examples from the images in the test dataset by modifying them with the respective attack method. We probe with white-box attacks on the ensemble as a whole (not on the individual models). We generate black-box attacks targeting our ensemble model by creating white-box adversarial attacks on a surrogate ensemble model (with the same architecture) trained on the same dataset with the same training routine. We use the following parameters for the attacks: for FGSM, PGD, R-F., BIM, and MIM we use $\epsilon$ in the range $\left\lbrack {0,{0.3}}\right\rbrack$ in steps of 0.05, which covers the range used in our baselines; we use 10 iterations with a step size of $\epsilon /{10}$ for PGD, BIM, and MIM; we use the ${L}_{\infty }$ variant of the PGD attack; and for R-F. we use a random step $\alpha = \epsilon /2$.
+
+Computing Infrastructure and Run Time. As computing hardware, we use half of the available resources of an NVIDIA DGX-2 station with a 3.3 GHz CPU and 1.5 TB of RAM, which has a total of 16 GPUs (1.75 GHz), each with 32 GB of memory. It takes around 4 minutes to train the baseline ensemble of 3 LeNet-5 members on MNIST without any regularizer, whereas it takes around 18 minutes to train the same ensemble under the SMD regularizer, 37 minutes under the DVERGE regularizer, and 48 minutes under their combination. Evaluating the same ensemble under all of the adversarial attacks takes approximately 1 hour. The same experiment takes approximately 3 days when ResNet-20 members are used on CIFAR-10.
+
+
+
+Figure 2: Accuracy vs. attack strength for white-box PGD attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+|  | MNIST |  |  |  |  |  | F-MNIST |  |  |  |  |  | CIFAR-10 |  |  |  |  |  |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+|  | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM |
+| Naive | 99.3 | 20.3 | 73.5 | 2.9 | 4.2 | 5.5 | 91.9 | 15.7 | 33.6 | 5.5 | 7.2 | 6.6 | 91.4 | 10.5 | 2.8 | 1.0 | 3.2 | 2.9 |
+| ADP | 98.8 | 43.8 | 89.6 | 10.4 | 19.6 | 14.8 | 91.4 | 18.3 | 34.8 | 5.8 | 8.8 | 7.5 | 91.7 | 11.4 | 3.7 | 0.8 | 3.6 | 3.4 |
+| GAL | 99.3 | 72.7 | 89.0 | 14.4 | 28.2 | 38.9 | 91.4 | 35.8 | 51.2 | 7.4 | 10.8 | 12.2 | 91.4 | 11.2 | 9.7 | 1.0 | 1.8 | 2.8 |
+| DV. | 99.4 | 44.2 | 85.5 | 10.6 | 16.0 | 20.6 | 91.8 | 27.3 | 44.6 | 7.3 | 10.7 | 9.9 | 91.0 | 11.2 | 6.3 | 1.1 | 5.5 | 4.4 |
+| SMD | 99.3 | 70.7 | 91.3 | 21.4 | 34.3 | 43.8 | 91.1 | 38.2 | 52.0 | 11.0 | 14.9 | 16.4 | 90.1 | 12.0 | 12.0 | 2.3 | 3.2 | 3.9 |
+| SMD+ | 99.4 | 83.4 | 93.8 | 54.7 | 68.0 | 71.0 | 91.6 | 42.9 | 51.9 | 13.3 | 20.5 | 20.5 | 90.5 | 12.1 | 5.8 | 1.2 | 5.9 | 5.2 |
+
+Table 1: White-box attacks of magnitude $\epsilon = {0.3}$ on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset. Columns are attacks and rows are the defenses employed.
+
+### 4.3 Results
+
+Robustness to White-Box Adversarial Attacks. In Table 1, we show the results for ensemble robustness under white-box adversarial attacks with $\epsilon = {0.3}$. We highlight the methods with the highest accuracy in bold. In Figure 2, we depict the results for the PGD attack at different attack strengths $\left( \epsilon \right)$. It can be observed that the accuracy on clean images (without adversarial attacks) slightly decreases for all regularizers, which is consistent with the robustness-accuracy trade-off (Tsipras et al. 2018; Zhang et al. 2019). The proposed SMD and SMD+ outperform the baseline methods on all attack configurations and datasets. This result shows that the proposed saliency diversification approach helps increase adversarial robustness.
+
+Robustness to Black-Box Adversarial Attacks. In Table 2, we show the results for ensemble robustness under black-box adversarial attacks with an attack strength of $\epsilon = {0.3}$. In Figure 3, we also depict the results for the PGD attack at different strengths $\left( \epsilon \right)$. We can see that SMD+ is on par with DVERGE (DV.) on MNIST and consistently outperforms the other methods. On F-MNIST, SMD+ shows a significant performance gap over the baselines, an effect that is even more pronounced on the CIFAR-10 dataset. It is also interesting to note that standalone SMD comes second in performance and is very close to the highest accuracy on multiple attack configurations at $\epsilon = {0.3}$.
+
+Transferability. In this subsection, we investigate the transferability of attacks between the ensemble members, which measures how likely a white-box attack crafted for one ensemble member is to succeed on another. In Figure 5, we present results for PGD attacks on F-MNIST (results for the other datasets and attacks are in Appendix B). The Y-axis represents the member on which the adversary crafts the attack (the source), and the X-axis the member to which the adversary transfers the attack (the target). The on-diagonal values depict the accuracy of a particular ensemble member under a white-box attack. The off-diagonal values show the accuracy of the target members under (black-box) attacks transferred from the source member. In Figure 5, we see that SMD and SMD+ have high ensemble resilience; both seem to reduce the common attack vector between the members. Compared to the naive ensemble and the DV. method, we see improved performance, showing that our approach increases the robustness to transfer attacks.
+
+Robustness Under Adversarial Training. We also present the performance of our method and the comparison methods under AT. We follow the approach of Tramèr et al. as described in Section 4.2. In Figure 4, we show the results for the PGD attack on the MNIST dataset. In the white-box attack setting, we see a major improvement for all regularizers, with SMD and SMD+ consistently outperforming the others. This is consistent with the results of (Tramèr et al. 2018), which showed EAT to perform rather poorly in the white-box setting. In Appendix D, we also show the results for black-box attacks.
+
+
+
+Figure 3: Accuracy vs. attack strength for black-box PGD attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+|  | MNIST |  |  |  |  |  | F-MNIST |  |  |  |  |  | CIFAR-10 |  |  |  |  |  |
+| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
+|  | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM |
+| Naive | 99.3 | 32.2 | 84.2 | 21.7 | 20.7 | 14.5 | 91.9 | 23.8 | 47.5 | 33.1 | 31.5 | 15.2 | 91.4 | 10.6 | 5.8 | 1.3 | 3.7 | 3.3 |
+| ADP | 98.8 | 26.6 | 70.9 | 27.3 | 26.5 | 19.4 | 91.4 | 22.3 | 49.5 | 33.0 | 33.2 | 16.3 | 91.7 | 11.6 | 5.5 | 1.2 | 3.8 | 3.4 |
+| GAL | 99.3 | 38.5 | 85.2 | 32.7 | 31.2 | 22.3 | 91.4 | 29.8 | 55.5 | 44.0 | 41.4 | 21.9 | 91.4 | 11.0 | 8.3 | 4.2 | 3.8 | 4.4 |
+| DV. | 99.4 | 42.2 | 89.1 | 34.5 | 32.2 | 22.0 | 91.8 | 30.7 | 55.7 | 44.7 | 42.3 | 21.4 | 91.0 | 10.1 | 8.4 | 6.8 | 5.8 | 4.0 |
+| SMD | 99.3 | 38.6 | 85.8 | 33.4 | 31.6 | 22.6 | 91.1 | 31.0 | 56.8 | 45.4 | 42.4 | 23.2 | 90.1 | 10.4 | 7.8 | 3.9 | 3.8 | 3.5 |
+| SMD+ | 99.4 | 42.0 | 89.1 | 36.3 | 34.7 | 24.3 | 91.6 | 31.9 | 57.7 | 47.1 | 44.4 | 23.3 | 90.5 | 9.9 | 8.7 | 7.8 | 8.6 | 4.1 |
+
+Table 2: Black-box attacks of magnitude $\epsilon = {0.3}$ on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset. Columns are attacks and rows are the defenses employed.
+
+
+
+Figure 4: Accuracy vs. attack strength for PGD attacks on MNIST under adversarial training.
+
+
+
+Figure 5: Transferability of PGD attacks on F-MNIST. Attacks are crafted on Y-axis members and tested on X-axis members. Higher values indicate better performance.
+
+## 5 Conclusion
+
+In this paper, we proposed a novel diversity-promoting learning approach for the adversarial robustness of deep ensembles. We introduced a saliency diversification measure and presented a saliency diversification learning objective. With our learning approach, we aimed to minimize possible shared sensitivity across the ensemble members to decrease their vulnerability to adversarial attacks. Our empirical results showed reduced transferability between ensemble members and improved performance compared to other ensemble defense methods. We also demonstrated that our approach combined with existing methods outperforms state-of-the-art ensemble algorithms in adversarial robustness.
+
+## References
+
+Athalye, A.; and Carlini, N. 2018. On the Robustness of the CVPR 2018 White-Box Adversarial Example Defenses. arXiv:1804.03286 [cs, stat].
+
+Athalye, A.; Carlini, N.; and Wagner, D. A. 2018. Obfuscated Gradients Give a False Sense of Security: Circumventing Defenses to Adversarial Examples. In ICML.
+
+Bhambri, S.; Muku, S.; Tulasi, A.; and Buduru, A. B. 2020. A Survey of Black-Box Adversarial Attacks on Computer Vision Models. arXiv:1912.01667 [cs, stat].
+
+Carlini, N.; and Wagner, D. 2017. Towards Evaluating the Robustness of Neural Networks. In 2017 IEEE Symposium on Security and Privacy (SP), 39-57.
+
+Chen, T.; Kornblith, S.; Norouzi, M.; and Hinton, G. E. 2020. A Simple Framework for Contrastive Learning of Visual Representations. In International Conference on Machine Learning.
+
+Chen, Z.; Tondi, B.; Li, X.; Ni, R.; Zhao, Y.; and Barni, M. 2019a. Secure Detection of Image Manipulation by Means of Random Feature Selection. IEEE Transactions on Information Forensics and Security, 14(9): 2454-2469.
+
+Chen, Z.; Yang, Z.; Wang, X.; Liang, X.; Yan, X.; Li, G.; and Lin, L. 2019b. Multivariate-Information Adversarial Ensemble for Scalable Joint Distribution Matching. In International Conference on Machine Learning, 1112-1121. PMLR.
+
+Das, N.; Shanbhogue, M.; Chen, S.-T.; Hohman, F.; Li, S.; Chen, L.; Kounavis, M. E.; and Chau, D. H. 2018. SHIELD: Fast, Practical Defense and Vaccination for Deep Learning Using JPEG Compression. In Proceedings of the 24th ACM SIGKDD International Conference on Knowledge Discovery & Data Mining, KDD '18, 196-204. New York, NY, USA: Association for Computing Machinery.
+
+Dauphin, Y. N.; Pascanu, R.; Gulcehre, C.; Cho, K.; Ganguli, S.; and Bengio, Y. 2014. Identifying and Attacking the Saddle Point Problem in High-Dimensional Non-Convex Optimization. In Proceedings of the 27th International Conference on Neural Information Processing Systems - Volume 2, NIPS'14, 2933-2941. Cambridge, MA, USA: MIT Press.
+
+Deng, L.; Hinton, G.; and Kingsbury, B. 2013. New Types of Deep Neural Network Learning for Speech Recognition and Related Applications: An Overview. In 2013 IEEE International Conference on Acoustics, Speech and Signal Processing, 8599-8603.
+
+Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2019. BERT: Pre-Training of Deep Bidirectional Transformers for Language Understanding. arXiv:1810.04805 [cs].
+
+Ding, G. W.; Wang, L.; and Jin, X. 2019. Advertorch v0.1: An Adversarial Robustness Toolbox Based on PyTorch. arXiv:1902.07623 [cs, stat].
+
+Dong, Y.; Liao, F.; Pang, T.; Su, H.; Zhu, J.; Hu, X.; and Li, J. 2018. Boosting Adversarial Attacks with Momentum. arXiv:1710.06081 [cs, stat].
+
+Etmann, C.; Lunz, S.; Maass, P.; and Schönlieb, C. 2019. On the Connection Between Adversarial Robustness and Saliency Map Interpretability. In International Conference on Machine Learning.
+
+Ganaie, M. A.; Hu, M.; Tanveer, M.; and Suganthan, P. 2021. Ensemble Deep Learning: A Review. arXiv preprint.
+
+Goodfellow, I.; Shlens, J.; and Szegedy, C. 2015. Explaining and Harnessing Adversarial Examples. In International Conference on Learning Representations.
+
+Gu, J.; and Tresp, V. 2019. Saliency Methods for Explaining Adversarial Attacks. arXiv:1908.08413 [cs].
+
+He, K.; Zhang, X.; Ren, S.; and Sun, J. 2016. Deep Residual Learning for Image Recognition. In Conference on Computer Vision and Pattern Recognition, 770-778.
+
+Kariyappa, S.; and Qureshi, M. K. 2019. Improving Adversarial Robustness of Ensembles with Diversity Training. arXiv:1901.09981 [cs, stat].
+
+Kingma, D. P.; and Ba, J. 2015. Adam: A Method for Stochastic Optimization. In The International Conference on Learning Representations.
+
+Kober, J.; Bagnell, J.; and Peters, J. 2013. Reinforcement Learning in Robotics: A Survey. The International Journal of Robotics Research, 32: 1238-1274.
+
+Krizhevsky, A. 2009. Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto.
+
+Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. ImageNet Classification with Deep Convolutional Neural Networks. In Conference on Neural Information Processing Systems, volume 60, 84-90.
+
+Kurakin, A.; Goodfellow, I.; and Bengio, S. 2017. Adversarial Examples in the Physical World. ICLR Workshop.
+
+LeCun, Y.; Bengio, Y.; and Hinton, G. 2015. Deep Learning. Nature, 521(7553): 436-444.
+
+Lecun, Y.; Bottou, L.; Bengio, Y.; and Haffner, P. 1998. Gradient-Based Learning Applied to Document Recognition. Proceedings of the IEEE, 86(11): 2278-2324.
+
+Lee, D.-H.; Zhang, S.; Fischer, A.; and Bengio, Y. 2015. Difference Target Propagation. In Joint European Conference on Machine Learning and Knowledge Discovery in Databases, 498-515.
+
+Madry, A.; Makelov, A.; Schmidt, L.; Tsipras, D.; and Vladu, A. 2018. Towards Deep Learning Models Resistant to Adversarial Attacks. In International Conference on Learning Representations.
+
+Mangla, P.; Singh, V.; and Balasubramanian, V. 2020. On Saliency Maps and Adversarial Robustness. ECML/PKDD.
+
+Meng, D.; and Chen, H. 2017. MagNet: A Two-Pronged Defense against Adversarial Examples. In Proceedings of the 2017 ACM SIGSAC Conference on Computer and Communications Security, 135-147. New York, NY, USA: Association for Computing Machinery.
+
+Palmer, T. 2019. The ECMWF Ensemble Prediction System: Looking Back (More than) 25 Years and Projecting Forward 25 Years. Quarterly Journal of the Royal Meteorological Society, 145(S1): 12-24.
+
+Pang, T.; Xu, K.; Du, C.; Chen, N.; and Zhu, J. 2019. Improving Adversarial Robustness via Promoting Ensemble Diversity. In International Conference on Machine Learning, 4970-4979. PMLR.
+
+Papernot, N.; Mcdaniel, P.; Goodfellow, I.; Jha, S.; Celik, Z. B.; and Swami, A. 2017. Practical Black-Box Attacks against Machine Learning. AsiaCCS.
+
+Papernot, N.; Mcdaniel, P.; Jha, S.; Fredrikson, M.; Celik, Z. B.; and Swami, A. 2016a. The Limitations of Deep Learning in Adversarial Settings. In Proceedings - 2016 IEEE European Symposium on Security and Privacy (EuroS&P 2016), 372-387. Institute of Electrical and Electronics Engineers Inc.
+
+Papernot, N.; McDaniel, P.; Wu, X.; Jha, S.; and Swami, A. 2016b. Distillation as a Defense to Adversarial Perturbations Against Deep Neural Networks. In 2016 IEEE Symposium on Security and Privacy (SP), 582-597.
+
+Paszke, A.; Gross, S.; Chintala, S.; Chanan, G.; Yang, E.; DeVito, Z.; Lin, Z.; Desmaison, A.; Antiga, L.; and Lerer, A. 2017. Automatic Differentiation in PyTorch. In NIPS 2017 Workshop Autodiff.
+
+Sen, S.; Ravindran, B.; and Raghunathan, A. 2019. EMPIR: Ensembles of Mixed Precision Deep Networks for Increased Robustness Against Adversarial Attacks. In International Conference on Learning Representations.
+
+Strauss, T.; Hanselmann, M.; Junginger, A.; and Ulmer, H. 2018. Ensemble Methods as a Defense to Adversarial Perturbations Against Deep Neural Networks. arXiv:1709.03423 [cs, stat].
+
+Szegedy, C.; Vanhoucke, V.; Ioffe, S.; Shlens, J.; and Wojna, Z. 2016. Rethinking the Inception Architecture for Computer Vision. In Conference on Computer Vision and Pattern Recognition.
+
+Taran, O.; Rezaeifar, S.; Holotyak, T.; and Voloshynovskiy, S. 2019. Defending Against Adversarial Attacks by Randomized Diversification. In IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 11218-11225.
+
+Taran, O.; Rezaeifar, S.; and Voloshynovskiy, S. 2018. Bridging Machine Learning and Cryptography in Defence against Adversarial Attacks. In ECCV Workshops.
+
+Tramèr, F.; Kurakin, A.; Papernot, N.; Goodfellow, I.; Boneh, D.; and McDaniel, P. 2018. Ensemble Adversarial Training: Attacks and Defenses. In International Conference on Learning Representations.
+
+Tsipras, D.; Santurkar, S.; Engstrom, L.; Turner, A.; and Madry, A. 2018. Robustness May Be at Odds with Accuracy. In International Conference on Learning Representations.
+
+Vinh, N.; Erfani, S.; Paisitkriangkrai, S.; Bailey, J.; Leckie, C.; and Ramamohanarao, K. 2016. Training Robust Models Using Random Projection. In 23rd International Conference on Pattern Recognition (ICPR), 531-536.
+
+Xiao, H.; Rasul, K.; and Vollgraf, R. 2017. Fashion-MNIST: A Novel Image Dataset for Benchmarking Machine Learning Algorithms. arXiv:1708.07747 [cs, stat].
+
+Xie, C.; Wang, J.; Zhang, Z.; Ren, Z.; and Yuille, A. 2018. Mitigating Adversarial Effects Through Randomization. In International Conference on Learning Representations.
+
+Xu, W.; Evans, D.; and Qi, Y. 2018. Feature Squeezing: Detecting Adversarial Examples in Deep Neural Networks. In 25th Annual Network and Distributed System Security Symposium, NDSS 2018, San Diego, California, USA, February 18-21, 2018. The Internet Society.
+
+Yang, H.; Zhang, J.; Dong, H.; Inkawhich, N.; Gardner, A.; Touchet, A.; Wilkes, W.; Berry, H.; and Li, H. 2020. DVERGE: Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles. In Advances in Neural Information Processing Systems, volume 33, 5505-5515. Curran Associates, Inc.
+
+You, Z.; Ye, J.; Li, K.; Xu, Z.; and Wang, P. 2019. Adversarial Noise Layer: Regularize Neural Network by Adding Noise. In IEEE International Conference on Image Processing (ICIP), 909-913.
+
+Zhang, H.; Yu, Y.; Jiao, J.; Xing, E.; Ghaoui, L.; and Jordan, M. I. 2019. Theoretically Principled Trade-off between Robustness and Accuracy. In ICML.
+
+Zhang, S.; Liu, M.; and Yan, J. 2020. The Diversified Ensemble Neural Network. Advances in Neural Information Processing Systems, 33.
+
+## A. Additional Result-Supporting Metrics
+
+In this section, we report the standard deviation of the results from the main paper based on 5 independent trials.
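Concretely, each reported cell is the sample standard deviation of a metric measured once per trial. A minimal sketch of this aggregation (the numbers are illustrative, not the paper's data):

```python
import numpy as np

# Hypothetical accuracies (%) for one defense under one attack,
# measured over 5 independent trials (seeds).
trial_accuracies = np.array([24.8, 26.1, 23.9, 25.4, 24.3])

mean_acc = trial_accuracies.mean()
# ddof=1: sample standard deviation, the usual choice for a small
# number of independent trials.
std_acc = trial_accuracies.std(ddof=1)

print(f"{mean_acc:.1f} ({std_acc:.1f})")  # → 24.9 (0.9)
```

The figures report the mean curves with such deviations, and the tables report the per-cell deviations alone.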
+
+In Fig. 6 and 7, and Tab. 3 and 4, we show the standard deviations. As we can see from the results, SMD has higher variance than SMD+. Nonetheless, we point out that even under such variation SMD retains a significant gain over the competing state-of-the-art algorithms for attacks of high strength. It is also important to note that, for the results on the MNIST and F-MNIST datasets, the DVERGE method also has high variance, lower than but comparable to that of SMD. On the other hand, the combination SMD+ has relatively low variance, and interestingly, in the majority of the results it is lower than both SMD and DVERGE.
+
+We show the average over 5 independent trials (as in the main paper) and the standard deviation for the transferability of the attacks between the ensemble members, which measures how likely a white-box attack crafted for one ensemble member is to succeed on another. In all of the results, the Y-axis represents the member from which the adversary crafts the attack (i.e., the source), and the X-axis the member to which the adversary transfers the attack (i.e., the target).
+
+The on-diagonal values depict the accuracy of a particular ensemble member under a white-box attack. We see that both SMD and SMD+ models have high ensemble resilience. For some of the ensemble members, the variance in the estimate for SMD is high. Interestingly, we found that this is because, among the predictions of the SMD ensemble over the 5 independent runs, one is considerably higher than the rest, which inflates the deviation. This suggests that additional tuning of the hyperparameters for the SMD approach might lead to even better performance, which we leave as future work.
+
+The other (off-diagonal) values show the accuracy of the target members under transferred (black-box) attacks from the source member. Here we see that the variance is at levels comparable with the baseline methods.
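The construction of such a transferability matrix amounts to a double loop over (source, target) pairs: craft an attack against each source member, then measure every member's accuracy on the resulting examples. The sketch below is purely illustrative, not the paper's LeNet-5/ResNet-20 pipeline: the "members" are toy linear classifiers whose FGSM step has an analytic input gradient, and all names (`members`, `fgsm`, `transfer`) are our assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d, eps = 200, 10, 0.3

# Toy "ensemble": three linear classifiers with correlated weights.
w_true = rng.normal(size=d)
members = [w_true + 0.1 * rng.normal(size=d) for _ in range(3)]

x = rng.normal(size=(n, d))
y = (x @ w_true > 0).astype(float)

def accuracy(w, x, y):
    return float(((x @ w > 0).astype(float) == y).mean())

def fgsm(w, x, y, eps):
    # For the logistic loss, d(loss)/dx = (sigmoid(x.w) - y) * w (analytic).
    p = 1.0 / (1.0 + np.exp(-(x @ w)))
    grad = (p - y)[:, None] * w[None, :]
    return x + eps * np.sign(grad)

# transfer[i, j]: accuracy of member j on FGSM examples crafted against
# member i. The diagonal is the white-box accuracy of each member; the
# off-diagonal entries correspond to transferred (black-box) attacks.
transfer = np.array([[accuracy(wj, fgsm(wi, x, y, eps), y)
                      for wj in members] for wi in members])
print(np.round(transfer, 2))
```

Averaging such matrices over independent seeds, and taking the per-cell standard deviation, yields figures in the format of Fig. 8.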
+
+
+
+Figure 6: Accuracy vs. attack strength for white-box PGD attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+
+
+Figure 7: Accuracy vs. attack strength for black-box PGD attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
|  | MNIST |  |  |  |  |  | F-MNIST |  |  |  |  |  | CIFAR-10 |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM |
| Naive | 0.0 | 3.5 | 1.8 | 0.7 | 0.9 | 1.4 | 0.1 | 2.2 | 1.7 | 0.4 | 0.9 | 0.7 | 0.4 | 0.6 | 0.7 | 0.3 | 0.6 | 0.5 |
| ADP | 0.1 | 8.8 | 4.3 | 2.2 | 5.6 | 4.7 | 0.3 | 2.6 | 3.5 | 1.5 | 2.1 | 1.6 | 0.1 | 0.6 | 0.8 | 0.0 | 0.0 | 0.1 |
| GAL | 0.1 | 4.4 | 1.5 | 10.9 | 9.4 | 9.3 | 0.4 | 5.5 | 2.9 | 2.5 | 3.7 | 4.3 | 0.4 | 1.2 | 1.7 | 0.6 | 0.9 | 1.9 |
| DV. | 0.0 | 3.6 | 0.9 | 1.0 | 1.6 | 2.3 | 0.1 | 1.8 | 1.6 | 0.2 | 0.5 | 0.7 | 0.1 | 0.3 | 1.4 | 0.1 | 0.1 | 0.3 |
| SMD | 0.1 | 9.3 | 1.2 | 14.0 | 17.4 | 16.6 | 0.4 | 6.4 | 3.2 | 4.7 | 6.1 | 6.1 | 0.6 | 1.1 | 1.0 | 1.3 | 0.9 | 1.4 |
| SMD+ | 0.0 | 1.3 | 1.1 | 7.9 | 3.7 | 2.2 | 0.2 | 2.6 | 2.1 | 3.6 | 4.5 | 4.2 | 0.3 | 0.4 | 2.2 | 0.2 | 0.3 | 0.2 |
+
+Table 3: Standard deviations for white-box attacks of magnitude $\epsilon = 0.3$ on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset. Columns are attacks and rows are the defenses employed.
+
|  | MNIST |  |  |  |  |  | F-MNIST |  |  |  |  |  | CIFAR-10 |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM |
| Naive | 0.0 | 1.9 | 0.8 | 1.5 | 1.3 | 0.9 | 0.1 | 2.4 | 2.6 | 4.7 | 3.4 | 1.8 | 0.4 | 0.5 | 1.3 | 0.2 | 0.1 | 0.1 |
| ADP | 0.1 | 6.0 | 5.8 | 5.4 | 5.4 | 4.7 | 0.3 | 3.5 | 4.4 | 6.2 | 4.5 | 2.7 | 0.1 | 0.8 | 0.6 | 0.0 | 0.0 | 0.2 |
| GAL | 0.1 | 1.0 | 1.7 | 1.9 | 2.3 | 2.1 | 0.4 | 4.0 | 3.9 | 4.9 | 3.8 | 3.1 | 0.4 | 0.4 | 0.4 | 0.4 | 0.1 | 1.2 |
| DV. | 0.0 | 0.7 | 0.5 | 1.6 | 1.2 | 0.5 | 0.1 | 0.9 | 1.1 | 0.8 | 0.5 | 0.7 | 0.1 | 0.4 | 1.1 | 1.5 | 0.3 | 0.3 |
| SMD | 0.1 | 3.1 | 2.4 | 4.1 | 4.0 | 2.6 | 0.4 | 4.2 | 4.0 | 4.5 | 3.8 | 3.1 | 0.6 | 0.3 | 0.5 | 0.6 | 0.1 | 0.2 |
| SMD+ | 0.0 | 3.6 | 1.5 | 4.9 | 4.2 | 2.6 | 0.2 | 2.2 | 1.8 | 2.1 | 1.2 | 1.5 | 0.3 | 0.2 | 1.7 | 2.2 | 2.0 | 0.3 |
+
+Table 4: Standard deviations for black-box attacks of magnitude $\epsilon = 0.3$ on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset. Columns are attacks and rows are the defenses employed.
+
+
+
+Figure 8: Transferability of PGD attacks on F-MNIST. Attacks are crafted on Y-axis members and tested on X-axis members. Higher values indicate better performance. Standard deviations are in parentheses.
+
+## B. Results for Additional Attacks
+
+In this section, we show results for additional attacks in the white-box and black-box settings. Namely, in addition to the PGD attacks shown in the main text, we present FGSM, R-FGSM, MIM, and BIM attacks here.
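These gradient-based attacks are closely related: FGSM takes a single signed-gradient ascent step on the loss, R-FGSM prepends a small random step, BIM iterates FGSM with projection back into the $\epsilon$-ball, PGD is BIM started from a random point in that ball, and MIM accumulates momentum over normalized gradients. A minimal, framework-free sketch of this family under a toy differentiable loss (the quadratic loss and all helper names here are our assumptions, used only to make the relationships concrete):

```python
import numpy as np

def fgsm(grad_fn, x, eps):
    # Fast gradient sign method: one signed-gradient ascent step
    # on the loss (Goodfellow, Shlens, and Szegedy 2015).
    return x + eps * np.sign(grad_fn(x))

def r_fgsm(grad_fn, x, eps, alpha=0.15, seed=0):
    # R-FGSM: a small random step, then FGSM with the remaining
    # budget (Tramèr et al. 2018).
    rng = np.random.default_rng(seed)
    x0 = x + alpha * np.sign(rng.normal(size=x.shape))
    return x0 + (eps - alpha) * np.sign(grad_fn(x0))

def bim(grad_fn, x, eps, alpha=0.05, steps=10):
    # Basic iterative method: iterated FGSM, projected back into the
    # eps-ball after every step (Kurakin, Goodfellow, and Bengio 2017).
    x_adv = x.copy()
    for _ in range(steps):
        x_adv = np.clip(x_adv + alpha * np.sign(grad_fn(x_adv)), x - eps, x + eps)
    return x_adv

def pgd(grad_fn, x, eps, alpha=0.05, steps=10, seed=0):
    # PGD: BIM started from a random point in the eps-ball (Madry et al. 2018).
    rng = np.random.default_rng(seed)
    x_adv = x + rng.uniform(-eps, eps, size=x.shape)
    for _ in range(steps):
        x_adv = np.clip(x_adv + alpha * np.sign(grad_fn(x_adv)), x - eps, x + eps)
    return x_adv

def mim(grad_fn, x, eps, alpha=0.05, steps=10, mu=1.0):
    # MIM: accumulate L1-normalized gradients with momentum mu before
    # taking the sign (Dong et al. 2018).
    x_adv, g = x.copy(), np.zeros_like(x)
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv

# Toy objective for the attacker to maximize: 0.5 * ||x - t||^2.
t = np.full(10, 2.0)
loss = lambda z: 0.5 * float(np.sum((z - t) ** 2))
grad_fn = lambda z: z - t  # analytic gradient of the loss w.r.t. the input
x = np.zeros(10)

for attack in (fgsm, r_fgsm, bim, pgd, mim):
    print(attack.__name__, round(loss(attack(grad_fn, x, 0.3)), 2))
```

In a real evaluation, `grad_fn` would be the backpropagated gradient of the ensemble's cross-entropy loss with respect to the input image.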
+
+In Fig. 9, 10, 11, 12, 13, 14, 15, 16, we show the results. As in the main paper, we see performance gains for our SMD approach compared to the existing methods. The results are consistent with those presented in the main text, with the SMD and SMD+ methods outperforming the baselines in most cases.
+
+
+
+Figure 9: Accuracy vs. attack strength for white-box FGSM attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+
+
+Figure 10: Accuracy vs. attack strength for black-box FGSM attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+
+
+Figure 11: Accuracy vs. attack strength for white-box R-FGSM attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+
+
+Figure 12: Accuracy vs. attack strength for black-box R-FGSM attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+
+
+Figure 13: Accuracy vs. attack strength for white-box MIM attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+
+
+Figure 14: Accuracy vs. attack strength for black-box MIM attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+
+
+Figure 15: Accuracy vs. attack strength for white-box BIM attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+
+
+Figure 16: Accuracy vs. attack strength for black-box BIM attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+
+
+Figure 17: Transferability of FGSM attacks on F-MNIST. Attacks are crafted on Y-axis members and tested on X-axis members. Higher values indicate better performance. Standard deviations are in parentheses.
+
+
+
+Figure 18: Transferability of R-FGSM attacks on F-MNIST. Attacks are crafted on Y-axis members and tested on X-axis members. Higher values indicate better performance. Standard deviations are in parentheses.
+
+
+
+Figure 19: Transferability of MIM attacks on F-MNIST. Attacks are crafted on Y-axis members and tested on X-axis members. Higher values indicate better performance. Standard deviations are in parentheses.
+
+
+
+Figure 20: Transferability of BIM attacks on F-MNIST. Attacks are crafted on Y-axis members and tested on X-axis members. Higher values indicate better performance. Standard deviations are in parentheses.
+
+## C. Impact of the Number of Ensemble Members
+
+In this section, we show the results for ensembles of 5 and 8 members using the MNIST, F-MNIST and CIFAR-10 datasets under white-box and black-box attacks. For MNIST and F-MNIST we use 5 seeds for the evaluation, while we use 3 seeds for CIFAR-10 due to ResNet-20 being much slower to train.
+
+In Fig. 21 and 22, and Tab. 5 and 6, we can see that when we use an ensemble of 5 members, we still have high accuracy in both the white-box and black-box attack settings. Moreover, in the white-box setting we obtain better results for most of the attacks, while in the black-box setting we still obtain better results for almost all of the attacks compared to the state-of-the-art methods.
+
+The results for 8-member ensembles are shown in Fig. 23 and 24, and Tab. 7 and 8. These results are also consistent, in terms of the performance gains for the SMD and SMD+ methods, with the results for the 3- and 5-member ensembles.
+
+
+
+Figure 21: Accuracy vs. attack strength for white-box PGD attacks on an ensemble of 5 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 5 ResNet-20 models for the CIFAR-10 dataset.
+
+
+
+Figure 22: Accuracy vs. attack strength for black-box PGD attacks on an ensemble of 5 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 5 ResNet-20 models for the CIFAR-10 dataset.
+
|  | MNIST |  |  |  |  |  | F-MNIST |  |  |  |  |  | CIFAR-10 |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM |
| Naive | 99.4 | 24.7 | 79.1 | 5.6 | 7.8 | 8.5 | 92.4 | 18.0 | 37.5 | 6.0 | 8.5 | 7.6 | 92.3 | 10.7 | 2.5 | 1.0 | 3.1 | 2.7 |
| ADP | 99.2 | 46.2 | 89.0 | 13.2 | 24.0 | 18.7 | 91.9 | 19.3 | 37.4 | 7.2 | 11.4 | 9.1 | 92.2 | 11.5 | 4.1 | 0.9 | 3.2 | 2.8 |
| GAL | 99.4 | 81.7 | 91.0 | 20.4 | 47.1 | 54.6 | 92.3 | 37.8 | 50.8 | 6.9 | 12.8 | 12.7 | 92.4 | 10.1 | 9.1 | 0.7 | 1.0 | 1.6 |
| DV. | 99.4 | 48.2 | 88.5 | 18.9 | 27.8 | 28.2 | 92.1 | 26.8 | 47.1 | 8.3 | 13.6 | 12.3 | 91.1 | 12.3 | 5.1 | 1.1 | 5.6 | 5.0 |
| SMD | 99.4 | 75.2 | 91.8 | 24.8 | 41.9 | 49.3 | 92.2 | 37.5 | 51.2 | 8.4 | 15.4 | 15.1 | 92.4 | 10.7 | 6.9 | 0.9 | 1.3 | 0.8 |
| SMD+ | 99.4 | 67.6 | 92.3 | 27.4 | 43.6 | 46.0 | 92.0 | 32.4 | 50.7 | 9.2 | 16.4 | 14.4 | 90.6 | 11.2 | 4.4 | 1.5 | 6.1 | 5.7 |
+
+Table 5: White-box attacks of magnitude $\epsilon = 0.3$ on an ensemble of 5 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 5 ResNet-20 models for the CIFAR-10 dataset. Columns are attacks and rows are the defenses employed.
+
|  | MNIST |  |  |  |  |  | F-MNIST |  |  |  |  |  | CIFAR-10 |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM |
| Naive | 99.4 | 31.1 | 84.0 | 16.7 | 17.2 | 12.6 | 92.4 | 23.5 | 46.7 | 27.6 | 27.1 | 13.0 | 92.3 | 10.9 | 5.6 | 0.5 | 2.7 | 2.2 |
| ADP | 99.2 | 27.3 | 78.3 | 19.7 | 19.6 | 14.4 | 91.9 | 22.9 | 46.2 | 27.7 | 28.1 | 14.1 | 92.2 | 11.3 | 5.7 | 0.6 | 2.7 | 2.3 |
| GAL | 99.4 | 35.9 | 84.6 | 21.2 | 21.5 | 16.7 | 92.3 | 26.7 | 50.6 | 33.6 | 32.8 | 15.6 | 92.4 | 10.7 | 9.5 | 7.3 | 2.7 | 3.1 |
| DV. | 99.4 | 39.1 | 88.2 | 26.6 | 26.2 | 18.3 | 92.1 | 28.4 | 54.2 | 37.6 | 36.8 | 17.3 | 91.1 | 10.3 | 7.1 | 5.6 | 6.2 | 2.4 |
| SMD | 99.4 | 35.5 | 84.9 | 22.5 | 23.2 | 17.9 | 92.2 | 28.0 | 51.3 | 34.4 | 34.3 | 17.3 | 92.4 | 11.4 | 8.6 | 3.9 | 2.7 | 2.1 |
| SMD+ | 99.4 | 41.2 | 88.4 | 27.8 | 27.5 | 20.0 | 92.0 | 29.7 | 55.1 | 39.0 | 38.4 | 18.7 | 90.6 | 10.1 | 5.4 | 5.3 | 10.7 | 2.3 |
+
+Table 6: Black-box attacks of magnitude $\epsilon = 0.3$ on an ensemble of 5 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 5 ResNet-20 models for the CIFAR-10 dataset. Columns are attacks and rows are the defenses employed.
+
+
+
+Figure 23: Accuracy vs. attack strength for white-box PGD attacks on an ensemble of 8 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 8 ResNet-20 models for the CIFAR-10 dataset.
+
+
+
+Figure 24: Accuracy vs. attack strength for black-box PGD attacks on an ensemble of 8 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 8 ResNet-20 models for the CIFAR-10 dataset.
+
|  | MNIST |  |  |  |  |  | F-MNIST |  |  |  |  |  | CIFAR-10 |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM |
| Naive | 99.4 | 22.8 | 78.9 | 5.7 | 8.1 | 8.1 | 92.7 | 16.8 | 39.0 | 6.3 | 8.8 | 7.2 | 92.8 | 10.8 | 1.5 | 0.8 | 2.8 | 2.5 |
| ADP | 99.3 | 38.3 | 83.8 | 11.0 | 18.1 | 15.4 | 92.3 | 15.9 | 37.4 | 8.2 | 11.7 | 7.3 | 92.7 | 11.3 | 2.4 | 0.8 | 3.2 | 2.8 |
| GAL | 99.4 | 59.4 | 90.1 | 18.1 | 28.9 | 31.3 | 92.7 | 32.0 | 50.5 | 8.5 | 14.6 | 12.0 | 92.9 | 10.0 | 7.8 | 0.7 | 1.6 | 0.5 |
| DV. | 99.4 | 54.7 | 90.5 | 27.5 | 37.8 | 34.7 | 92.3 | 28.6 | 47.4 | 11.2 | 18.4 | 14.9 | 90.8 | 11.9 | 3.2 | 1.4 | 5.7 | 5.4 |
| SMD | 99.4 | 73.1 | 91.5 | 21.9 | 40.4 | 43.8 | 92.6 | 37.4 | 52.3 | 9.4 | 18.2 | 15.7 | 93.2 | 9.8 | 8.4 | 0.6 | 1.2 | 0.5 |
| SMD+ | 99.5 | 60.3 | 91.8 | 31.4 | 43.2 | 40.2 | 92.4 | 29.5 | 48.5 | 10.6 | 17.9 | 14.6 | 90.1 | 11.9 | 4.9 | 1.7 | 6.2 | 5.9 |
+
+Table 7: White-box attacks of magnitude $\epsilon = 0.3$ on an ensemble of 8 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 8 ResNet-20 models for the CIFAR-10 dataset. Columns are attacks and rows are the defenses employed.
+
|  | MNIST |  |  |  |  |  | F-MNIST |  |  |  |  |  | CIFAR-10 |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM |
| Naive | 99.4 | 26.4 | 82.0 | 10.5 | 11.5 | 9.5 | 92.7 | 22.5 | 43.7 | 20.4 | 21.2 | 10.8 | 92.8 | 10.9 | 2.5 | 1.1 | 3.1 | 2.5 |
| ADP | 99.3 | 27.9 | 81.2 | 13.2 | 13.8 | 11.7 | 92.3 | 21.3 | 43.5 | 20.8 | 22.4 | 11.4 | 92.7 | 11.4 | 2.7 | 1.1 | 3.2 | 2.6 |
| GAL | 99.4 | 33.2 | 83.9 | 13.8 | 14.8 | 13.1 | 92.7 | 25.8 | 47.5 | 24.7 | 25.2 | 13.2 | 92.9 | 10.2 | 8.1 | 3.4 | 3.1 | 2.6 |
| DV. | 99.4 | 36.9 | 87.9 | 19.6 | 20.0 | 16.2 | 92.3 | 28.6 | 51.0 | 30.0 | 30.7 | 15.3 | 90.8 | 11.0 | 4.7 | 4.6 | 9.0 | 2.6 |
| SMD | 99.4 | 33.8 | 83.8 | 15.0 | 16.0 | 14.1 | 92.6 | 26.1 | 47.9 | 25.1 | 25.8 | 13.5 | 93.2 | 10.1 | 7.8 | 2.7 | 3.0 | 2.5 |
| SMD+ | 99.5 | 37.8 | 87.3 | 19.9 | 20.2 | 16.6 | 92.4 | 28.6 | 51.0 | 30.0 | 30.5 | 15.0 | 90.1 | 10.5 | 6.8 | 7.0 | 12.4 | 2.7 |
+
+Table 8: Black-box attacks of magnitude $\epsilon = 0.3$ on an ensemble of 8 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 8 ResNet-20 models for the CIFAR-10 dataset. Columns are attacks and rows are the defenses employed.
+
+## D. Additional Adversarial Training Results
+
+In this section, we present additional results that complement those in the main paper with variance estimates. In addition, we show results for adversarial training and for black-box attacks, as well as results for the F-MNIST dataset in both the black-box and white-box settings.
+
+In the white-box attack setting for the two datasets, we see major improvements for all regularizers, with SMD and SMD+ consistently outperforming the others. Considering the results in the black-box setting, we do not observe gains. Again, this is consistent with the results from (Tramèr et al. 2018).
+
+
+
+Figure 25: Accuracy vs. attack strength for white-box PGD attacks on an ensemble of 3 LeNet-5 models with adversarial training for the MNIST and F-MNIST datasets.
+
+
+
+Figure 26: Accuracy vs. attack strength for black-box PGD attacks on an ensemble of 3 LeNet-5 models with adversarial training for the MNIST and F-MNIST datasets.
+
|  | MNIST |  |  |  |  |  | F-MNIST |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM |
| Naive | 99.2 | 32.9 | 76.5 | 3.4 | 4.9 | 6.0 | 90.7 | 13.2 | 26.2 | 6.2 | 7.6 | 7.2 |
| ADP | 99.2 | 50.8 | 84.3 | 12.6 | 20.7 | 19.7 | 90.8 | 16.2 | 29.3 | 5.9 | 8.4 | 7.4 |
| GAL | 99.3 | 80.1 | 91.9 | 19.2 | 38.2 | 44.8 | 90.5 | 39.5 | 41.0 | 7.4 | 10.9 | 13.0 |
| DV. | 99.3 | 65.2 | 90.0 | 15.2 | 26.2 | 31.7 | 91.0 | 26.6 | 44.2 | 7.5 | 11.2 | 10.5 |
| SMD | 99.3 | 81.7 | 91.4 | 44.6 | 60.5 | 63.6 | 90.4 | 38.7 | 44.7 | 9.3 | 13.4 | 15.3 |
| SMD+ | 99.3 | 85.1 | 94.3 | 48.1 | 64.3 | 66.3 | 91.1 | 39.1 | 46.4 | 10.7 | 17.8 | 17.4 |
+
+Table 9: White-box attacks of magnitude $\epsilon = 0.3$ on an ensemble of 3 LeNet-5 models with adversarial training for the MNIST and F-MNIST datasets. Columns are attacks and rows are the defenses employed.
+
|  | MNIST |  |  |  |  |  | F-MNIST |  |  |  |  |  |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
|  | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM | Clean | ${\mathrm{F}}_{gsm}$ | R-F. | PGD | BIM | MIM |
| Naive | 99.2 | 85.4 | 97.6 | 92.1 | 90.9 | 84.4 | 90.7 | 62.3 | 77.7 | 80.9 | 84.0 | 69.5 |
| ADP | 99.2 | 71.3 | 95.3 | 80.7 | 79.4 | 66.7 | 90.8 | 57.0 | 75.9 | 76.3 | 82.1 | 63.7 |
| GAL | 99.3 | 81.4 | 96.9 | 88.1 | 87.4 | 78.2 | 90.5 | 63.1 | 78.4 | 81.6 | 85.0 | 70.8 |
| DV. | 99.3 | 76.9 | 96.2 | 82.4 | 79.4 | 68.2 | 91.0 | 52.8 | 74.2 | 73.3 | 74.8 | 52.2 |
| SMD | 99.3 | 78.9 | 96.7 | 85.5 | 84.3 | 74.4 | 90.4 | 63.9 | 78.6 | 81.6 | 84.9 | 71.1 |
| SMD+ | 99.3 | 73.4 | 96.1 | 78.2 | 76.1 | 63.1 | 91.1 | 51.0 | 72.6 | 72.4 | 75.2 | 52.7 |
+
+Table 10: Black-box attacks of magnitude $\epsilon = 0.3$ on an ensemble of 3 LeNet-5 models with adversarial training for the MNIST and F-MNIST datasets. Columns are attacks and rows are the defenses employed.
+
diff --git a/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/wGkmGrDsco8/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/wGkmGrDsco8/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..29bdd99cb2c29ea91506ff13aba0dc764601a31f
--- /dev/null
+++ b/papers/AAAI/AAAI 2022/AAAI 2022 Workshop/AAAI 2022 Workshop AdvML/wGkmGrDsco8/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,249 @@
+§ SALIENCY DIVERSIFIED DEEP ENSEMBLE FOR ROBUSTNESS TO ADVERSARIES
+
+First Author Name, ${}^{1}$ Second Author Name, ${}^{2}$ Third Author Name ${}^{1}$
+
+${}^{1}$ Affiliation 1
+
+firstAuthor@affiliation1.com, secondAuthor@affiliation2.com, thirdAuthor@affiliation1.com
+
+§ ABSTRACT
+
+Deep learning models have shown incredible performance on numerous image recognition, classification, and reconstruction tasks. Although very appealing and valuable due to their predictive capabilities, one common threat remains challenging to resolve. A specifically trained attacker can introduce malicious input perturbations to fool the network, thus causing potentially harmful mispredictions. Moreover, these attacks can succeed when the adversary has full access to the target model (white-box) and even when such access is limited (black-box setting). An ensemble of models can protect against such attacks but may be brittle under shared vulnerabilities in its members (attack transferability). To that end, this work proposes a novel diversity-promoting learning approach for deep ensembles. The idea is to promote saliency map diversity (SMD) on ensemble members to prevent the attacker from targeting all ensemble members at once by introducing an additional term in our learning objective. During training, this helps us minimize the alignment between model saliencies to reduce shared member vulnerabilities and, thus, increase ensemble robustness to adversaries. We empirically show a reduced transferability between ensemble members and improved performance compared to the state-of-the-art ensemble defense against medium and high-strength white-box attacks. In addition, we demonstrate that our approach combined with existing methods outperforms state-of-the-art ensemble algorithms for defense under white-box and black-box attacks.
+
+§ 1 INTRODUCTION
+
+Nowadays, deep learning models have shown incredible performance on numerous image recognition, classification, and reconstruction tasks (Krizhevsky, Sutskever, and Hinton 2012; Lee et al. 2015; LeCun, Bengio, and Hinton 2015; Chen et al. 2020). Due to their great predictive capabilities, they have found widespread use across many domains (Szegedy et al. 2016; Devlin et al. 2019; Deng, Hinton, and Kingsbury 2013). Although deep learning models are very appealing for many interesting tasks, their robustness to adversarial attacks remains a challenging problem to solve. A specifically trained attacker can introduce malicious input perturbations to fool the network, thus causing potentially harmful mispredictions (Goodfellow, Shlens, and Szegedy 2015; Madry et al. 2018). Moreover, these attacks can succeed when the adversary has full access to the target model (white-box) (Athalye, Carlini, and Wagner 2018) and even when such access is limited (black-box) (Papernot et al. 2017), posing a hurdle in security- and trust-sensitive application domains.
+
+
+Figure 1: Left. An illustration of the proposed learning scheme for saliency-based diversification of a deep ensemble consisting of 3 members. We use the cross-entropy losses ${\mathcal{L}}_{m}\left( x\right), m \in \{ 1,2,3\}$ and the regularization ${\mathcal{L}}_{SMD}\left( x\right)$ for saliency-based diversification. Right. An example of saliency maps for members of a naively learned ensemble and an ensemble learned with our approach. Red and blue pixels represent positive and negative saliency values, respectively.
+
+An ensemble of deep models can offer protection against such attacks (Strauss et al. 2018). Commonly, an ensemble of models has proven to improve robustness, reduce variance, increase prediction accuracy, and enhance generalization compared to the individual models (LeCun, Bengio, and Hinton 2015). As such, ensembles have been offered as a solution in many areas, including weather prediction (Palmer 2019), computer vision (Krizhevsky, Sutskever, and Hinton 2012), and robotics and autonomous driving (Kober, Bagnell, and Peters 2013), among others (Ganaie et al. 2021). However, 'naive' ensemble models are brittle due to shared vulnerabilities in their members (Szegedy et al. 2016). Thus, an adversary can exploit attack transferability (Madry et al. 2018) to affect all members and the ensemble as a whole.
+
+In recent years, researchers have tried to improve the adversarial robustness of ensembles by maximizing different notions of diversity between the individual networks (Pang et al. 2019; Kariyappa and Qureshi 2019; Yang et al. 2020). In this way, adversarial attacks that fool one network are much less likely to fool the ensemble as a whole (Chen et al. 2019b; Sen, Ravindran, and Raghunathan 2019; Tramèr et al. 2018; Zhang, Liu, and Yan 2020). Research on ensemble diversity aims to train the neural networks inside the ensemble diversely enough to withstand the deterioration caused by adversarial attacks. The works (Pang et al. 2019; Zhang, Liu, and Yan 2020; Kariyappa and Qureshi 2019) proposed improving the diversity of the ensemble constituents by training the model with a diversity regularization in addition to the main learning objective. (Kariyappa and Qureshi 2019) showed that an ensemble of models with misaligned loss gradients can be used as a defense against black-box attacks and proposed uncorrelated loss functions for ensemble learning. (Pang et al. 2019) proposed an adaptive diversity promoting (ADP) regularizer to encourage diversity between non-maximal predictions. (Yang et al. 2020) minimize a vulnerability diversification objective in order to suppress shared 'weak' features across the ensemble members. However, some of these approaches focused only on white-box attacks (Pang et al. 2019) or black-box attacks (Kariyappa and Qureshi 2019), or were evaluated on a single dataset (Yang et al. 2020).
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+In this paper, we propose a novel diversity-promoting learning approach for deep ensembles. The idea is to promote Saliency Map Diversity (SMD) to prevent the attacker from targeting all ensemble members at once.
+
+Saliency maps (SM) (Gu and Tresp 2019) represent the derivative of the network's prediction for the true label with respect to the input image. They indicate the most 'sensitive' content of the image for the prediction. Intuitively, we would like to learn an ensemble whose members have different sensitivity across the image content while not sacrificing the ensemble's predictive power. Therefore, we introduce a saliency map diversity (SMD) regularization term in our learning objective. Given image data and an ensemble of models, we define the SMD using the inner products between all pairs of saliency maps (for a given input image, each ensemble member has one saliency map). Different from our approach with SMD regularization, (Pang et al. 2019) defined their diversity measure using the non-maximal predictions of individual members, which might not capture possible shared sensitivity to the image content related to the correct predictions.
+
+We jointly learn our ensemble members using cross-entropy losses (LeCun, Bengio, and Hinton 2015) for each member and our shared SMD term. This helps us minimize the alignment between the members' saliency maps and enforces the ensemble members to have misaligned and non-overlapping sensitivity for the different image content. Thus, with our approach, we try to minimize possible shared sensitivity across the ensemble members that might be exploited as a vulnerability, in contrast to (Yang et al. 2020), who try to minimize shared 'weak' features across the ensemble members. It is also important to note that our regularization differs from (Kariyappa and Qureshi 2019), since it focuses on gradients coming from the correct class predictions (saliencies), which can also be seen as a loss-agnostic approach. We illustrate our learning scheme in Fig. 1, left. In Fig. 1, right, we visualize the saliency maps with respect to one image sample for the members of a naively trained ensemble and an ensemble trained with our approach.
+
+We perform an extensive numerical evaluation using the MNIST (Lecun et al. 1998), Fashion-MNIST (F-MNIST) (Xiao, Rasul, and Vollgraf 2017), and CIFAR-10 (Krizhevsky 2009) datasets to validate our approach. We use two neural network architectures and conduct experiments for different known attacks at different attack strengths. Our results show a reduced transferability between ensemble members and improved performance compared to the state-of-the-art ensemble defense against medium- and high-strength white-box attacks. Since we minimize the shared sensitivity, which can be seen as the attention to prediction-relevant image content, we also conjectured that our approach would combine well with other existing methods. To that end, we show that our approach combined with the (Yang et al. 2020) method outperforms state-of-the-art ensemble algorithms for defense under adversarial attacks in both white-box and black-box settings. We summarize our main contributions in the following:
+
+ * We propose a diversity-promoting learning approach for deep ensembles, where we introduce a saliency-based regularization that diversifies the sensitivity of ensemble members with respect to the image content.
+
+ * We show improved performance compared to the state-of-the-art ensemble defense against medium- and high-strength white-box attacks, as well as on-par performance for black-box attacks.
+
+ * We demonstrate that our approach combined with the (Yang et al. 2020) method outperforms state-of-the-art ensemble defense algorithms in white-box and black-box attacks.
+
+## 2 Related Work
+
+In this section, we overview the recent related work.
+
+### 2.1 Common Defense Strategies
+
+In the following, we describe the common defense strategies against adversarial attacks, grouping them into four categories.
+
+Adversarial Detection. These methods aim to detect adversarial examples or to restore the adversarial input to be closer to the original image space. Adversarial detection methods (Bhambri et al. 2020) include MagNet, Feature Squeezing, and Convex Adversarial Polytope. The MagNet (Meng and Chen 2017) method consists of two parts: a detector and a reformer. The detector aims to recognize and reject adversarial images. The reformer aims to reconstruct the image as closely as possible to the original image using an auto-encoder. Feature Squeezing (Xu, Evans, and Qi 2018) utilizes feature transformation techniques such as squeezing color bits and spatial smoothing. These methods might be prone to rejecting clean examples and might have to severely modify the input to the model, which can reduce the performance on clean data.
+
+Gradient Masking and Randomization Defenses. Gradient masking comprises manipulation techniques that try to hide the gradient of the network model to robustify it against attacks built from gradient directions; it includes distillation, obfuscation, shattering, and the use of stochastic and vanishing or exploding gradients (Papernot et al. 2017; Athalye, Carlini, and Wagner 2018; Carlini and Wagner 2017). The authors in (Papernot et al. 2016b) introduced a method based on distillation. It uses an additional neural network to 'distill' labels for the original neural network in order to reduce the perturbations due to adversarial samples. (Xie et al. 2018) used a randomization method during training that consists of random resizing and random padding of the training image data. Other examples of such randomization include noise addition at different levels of the system (You et al. 2019) and randomized lossy compression (Das et al. 2018). As a disadvantage, these approaches can reduce accuracy since they may discard useful information, which might also introduce instabilities during learning. It was shown that they can often be easily bypassed by the adversary via expectation-over-transformation techniques (Athalye and Carlini 2018).
+
+Secrecy-based Defenses. The third group generalizes defense mechanisms that include randomization explicitly based on a secret key shared between the training and testing stages. Notable examples are random projections (Vinh et al. 2016), random feature sampling (Chen et al. 2019a), and key-based transformations (Taran, Rezaeifar, and Voloshynovskiy 2018). As an example, (Taran et al. 2019) introduced randomized diversification in a special transform domain based on a secret key, which creates an information advantage for the defender. Nevertheless, the main disadvantage of the known methods in this group is a loss of performance due to the reduction of useful data, which should be compensated by a proper diversification and corresponding aggregation with the required secret key.
+
+Adversarial Training (AT). (Goodfellow, Shlens, and Szegedy 2015; Madry et al. 2018) proposed one of the most common approaches to improve adversarial robustness. The main idea is to train neural networks on both clean and adversarial samples and force them to correctly classify such examples. The disadvantage of this approach is that it can significantly increase the training time and can reduce the model accuracy on the unaltered data (Tsipras et al. 2018).
+
+### 2.2 Diversifying Ensemble Training Strategies
+
+Even a naively trained ensemble can improve adversarial robustness. Unfortunately, ensemble members may share a large portion of their vulnerabilities (Dauphin et al. 2014) and provide no guarantees of adversarial robustness (Tramèr et al. 2018).
+
+(Tramèr et al. 2018) proposed the Ensemble Adversarial Training (EAT) procedure. The main idea of EAT is to minimize the classification error against an adversary that maximizes the error (a min-max optimization problem (Madry et al. 2018)). However, this approach is very computationally expensive and, according to the original authors, may remain vulnerable to white-box attacks.
+
+Recently, diversifying the models inside an ensemble has gained attention. Such approaches include a mechanism in the learning procedure that tries to shrink the shared adversarial subspace by making the ensemble members diverse and thus less prone to shared weaknesses.
+
+(Pang et al. 2019) introduced the ADP regularizer to diversify the training of the ensemble model and increase adversarial robustness. To do so, they first defined an Ensemble Diversity ${ED} = {\operatorname{Vol}}^{2}\left( \left\{ {f}_{m}^{\smallsetminus y}\left( x\right) /{\begin{Vmatrix}{f}_{m}^{\smallsetminus y}\left( x\right) \end{Vmatrix}}_{2}\right\} \right)$ , where ${f}_{m}^{\smallsetminus y}\left( x\right)$ is the order-preserving prediction of the $m$ -th ensemble member on $x$ without its $y$ -th (maximal) element and $\operatorname{Vol}\left( \cdot \right)$ is the volume spanned by the (normalized) vectors. The ADP regularizer is calculated as ${\operatorname{ADP}}_{\alpha ,\beta }\left( {x,y}\right) = \alpha \cdot \mathcal{H}\left( \mathcal{F}\right) + \beta \cdot \log \left( {ED}\right)$ , where $\mathcal{H}\left( \mathcal{F}\right) = - \mathop{\sum }\limits_{i}{f}_{i}\left( x\right) \log \left( {{f}_{i}\left( x\right) }\right)$ is the Shannon entropy of the ensemble prediction and $\alpha ,\beta > 0$ . The ADP regularizer is then subtracted from the original loss during training.
+
+The GAL regularizer (Kariyappa and Qureshi 2019) was intended to diversify the adversarial subspaces and reduce their overlap between the networks inside the ensemble model. GAL is calculated using the cosine similarity (CS) between the gradients of two different models as ${CS}{\left( {\nabla }_{x}{\mathcal{J}}_{a},{\nabla }_{x}{\mathcal{J}}_{b}\right) }_{a \neq b} = \frac{\left\langle {\nabla }_{x}{\mathcal{J}}_{a},{\nabla }_{x}{\mathcal{J}}_{b}\right\rangle }{{\begin{Vmatrix}{\nabla }_{x}{\mathcal{J}}_{a}\end{Vmatrix}}_{2} \cdot {\begin{Vmatrix}{\nabla }_{x}{\mathcal{J}}_{b}\end{Vmatrix}}_{2}}$ , where ${\nabla }_{x}{\mathcal{J}}_{m}$ is the gradient of the loss of the $m$ -th member with respect to $x$ . During training, the authors added the term ${GAL} = \log \left( {\mathop{\sum }\limits_{{1 \leq a < b \leq N}}\exp \left( {{CS}\left( {{\nabla }_{x}{\mathcal{J}}_{a},{\nabla }_{x}{\mathcal{J}}_{b}}\right) }\right) }\right)$ to the learning objective.
+
+With DVERGE (Yang et al. 2020), the authors aimed to maximize the vulnerability diversity together with the original loss. They defined a vulnerability diversity between pairs of ensemble members ${f}_{a}\left( x\right)$ and ${f}_{b}\left( x\right)$ using data consisting of an original data sample and its feature-distilled version. In other words, they deploy an ensemble learning procedure where each ensemble member ${f}_{a}\left( x\right)$ is trained on adversarial samples generated by the other members ${f}_{b}\left( x\right) ,a \neq b$ .
+
+### 2.3 Adversarial Attacks
+
+The goal of the adversary is to craft an image ${x}^{\prime }$ that is very close to the original $x$ and would be correctly classified by humans but would fool the target model. Commonly, attackers can act as adversaries in white-box and black-box modes, depending on the gained access level over the target model.
+
+White-box and Black-box Attacks. In the white-box scenario, the attacker is fully aware of the target model's architecture and parameters and has access to the model's gradients. White-box attacks are very effective against the target model, but they are bound by the extent of the attacker's knowledge of the model. In the black-box scenario, the adversary does not have access to the model parameters and may only know the training dataset and the architecture of the model (a grey-box setting). The attacks are crafted on a surrogate model but still work to some extent on the target due to transferability (Papernot et al. 2016a).
+
+An adversary can build a white-box or black-box attack using different approaches. In the following text, we briefly describe the methods commonly used for adversarial attacks.
+
+Fast Gradient Sign Method (FGSM). (Goodfellow, Shlens, and Szegedy 2015) generate the adversarial example ${x}^{\prime }$ by adding the sign of the gradient $\operatorname{sign}\left( {{\nabla }_{x}\mathcal{J}\left( {x,y}\right) }\right)$ as a perturbation of strength $\epsilon$ , i.e., ${x}^{\prime } = x + \epsilon \cdot \operatorname{sign}\left( {{\nabla }_{x}\mathcal{J}\left( {x,y}\right) }\right)$ .
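The FGSM step can be sketched in a few lines of NumPy on a toy logistic model whose input gradient is available in closed form; the model, its weights, and the `fgsm_attack` helper are illustrative assumptions, not the implementation used in the paper.

```python
import numpy as np

def fgsm_attack(x, grad, eps):
    """FGSM: perturb x by eps in the sign direction of the loss gradient,
    then clip back into the valid pixel range."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy logistic model: loss J(x) = -log(sigmoid(w.x)) for the true class,
# whose input gradient is (sigmoid(w.x) - 1) * w (analytic, no autograd needed).
w = np.array([0.5, -1.0, 2.0])
x = np.array([0.2, 0.8, 0.5])
p = 1.0 / (1.0 + np.exp(-(w @ x)))
grad = (p - 1.0) * w

x_adv = fgsm_attack(x, grad, eps=0.1)
```

Away from the clipping boundary, every coordinate of `x_adv` differs from `x` by exactly `eps`, which is the defining property of the sign-based step.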
+
+Random Step-FGSM (R-FGSM). The method proposed in (Tramèr et al. 2018) is an extension of FGSM in which a single random step is taken before FGSM, motivated by the assumed non-smoothness of the loss function in the neighborhood of data points.
+
+Basic Iterative Method (BIM). (Kurakin, Goodfellow, and Bengio 2017) proposed computing the attack gradient iteratively over smaller steps, generating the attack as ${x}_{i}^{\prime } = {\operatorname{clip}}_{x,\epsilon }\left( {{x}_{i - 1}^{\prime } + \frac{\epsilon }{r} \cdot \operatorname{sign}\left( {g}_{i - 1}\right) }\right)$ , where ${g}_{i} = {\nabla }_{x}\mathcal{J}\left( {{x}_{i}^{\prime },y}\right)$ , ${x}_{0}^{\prime } = x$ and $r$ is the number of iterations.
+
+Projected Gradient Descent (PGD). (Madry et al. 2018) presented an attack similar to BIM, with the difference that the initialization ${x}_{0}^{\prime }$ is selected randomly in a neighborhood $U\left( {x,\epsilon }\right)$ of $x$ .
+
+Momentum Iterative Method (MIM). (Dong et al. 2018) proposed an extension of BIM. It updates the gradient with a momentum term $\mu$ to stabilize the update direction. Holding the momentum helps to escape small holes and poor local optima: ${g}_{i} = \mu {g}_{i - 1} + \frac{{\nabla }_{x}\mathcal{J}\left( {{x}_{i - 1}^{\prime },y}\right) }{{\begin{Vmatrix}{\nabla }_{x}\mathcal{J}\left( {x}_{i - 1}^{\prime },y\right) \end{Vmatrix}}_{1}}$ .
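The iterative attacks above fit into one loop: with `mu=0` the momentum update reduces to BIM. The toy quadratic loss and the `mim_attack` helper below are illustrative assumptions for the sketch, not the attacks' reference implementations.

```python
import numpy as np

def mim_attack(x, grad_fn, eps, steps=10, mu=1.0):
    """Momentum Iterative Method sketch: accumulate the L1-normalized loss
    gradient into a momentum buffer g, step by eps/steps in sign(g), and
    clip the iterate back into the eps-ball around the original x."""
    x_adv, g = x.copy(), np.zeros_like(x)
    for _ in range(steps):
        grad = grad_fn(x_adv)
        g = mu * g + grad / (np.abs(grad).sum() + 1e-12)
        x_adv = x_adv + (eps / steps) * np.sign(g)
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project to the eps-ball
    return np.clip(x_adv, 0.0, 1.0)              # keep valid pixel range

# Toy loss J(x) = 0.5 * ||x - t||^2 with analytic gradient x - t,
# standing in for the cross-entropy gradient of a real model.
t = np.array([1.0, 0.0, 1.0])
x = np.array([0.5, 0.5, 0.5])
x_adv = mim_attack(x, lambda z: z - t, eps=0.3)
```

Passing `mu=0.0` gives the plain BIM iterate; on this toy loss the gradient sign never flips, so both variants reach the boundary of the eps-ball.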
+
+## 3 Saliency Diversified Ensemble Learning
+
+In this section, we present our diversity-promoting learning approach for deep ensembles. In the first subsection, we introduce the saliency-based regularizer, while in the second subsection we describe our learning objective.
+
+### 3.1 Saliency Diversification Measure
+
+Saliency Map. In (Etmann et al. 2019), the authors investigated the connection between a neural network's robustness to adversarial attacks and the interpretability of the resulting saliency maps. They hypothesized that the increase in interpretability could be due to a higher alignment between the image and its saliency map. Moreover, they concluded that the strength of this connection is strongly linked to how locally similar the network is to a linear model. In (Mangla, Singh, and Balasubramanian 2020), the authors showed that using weak saliency maps suffices to improve adversarial robustness with no additional effort to generate the perturbations themselves.
+
+We build our approach on prior work about saliency maps and adversarial robustness but in the context of deep ensemble models. In (Mangla, Singh, and Balasubramanian 2020) the authors try to decrease the sensitivity of the prediction with respect to the saliency map by using special augmentation during training. We also try to decrease the sensitivity of the prediction with respect to the saliency maps but for the ensemble. We do so by enforcing misalignment between the saliency maps for the ensemble members.
+
+We consider a saliency map for model ${f}_{m}$ with respect to data $x$ conditioned on the true class label $y$ . We calculate it as the first order derivative of the model output for the true class label with respect to the input, i.e.,
+
+$$
+{s}_{m} = \frac{\partial {f}_{m}\left( x\right) \left\lbrack y\right\rbrack }{\partial x}, \tag{1}
+$$
+
+where ${f}_{m}\left( x\right) \left\lbrack y\right\rbrack$ is the $y$ -th element of the predictions ${f}_{m}\left( x\right)$ .
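As an illustrative sketch of Eq. (1), the saliency map can be approximated numerically by central finite differences; in practice one would use autograd, and the toy softmax model below is an assumption made only for demonstration.

```python
import numpy as np

def saliency_map(f, x, y, h=1e-5):
    """Numerical sketch of Eq. (1): derivative of the true-class output
    f(x)[y] with respect to each input coordinate, via central differences."""
    s = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e[i] = h
        s[i] = (f(x + e)[y] - f(x - e)[y]) / (2 * h)
    return s

# Toy softmax model over a flattened two-pixel "image".
W = np.array([[1.0, -0.5], [-1.0, 0.5]])
def model(x):
    z = W @ x
    z = z - z.max()          # numerically stable softmax
    p = np.exp(z)
    return p / p.sum()

x0 = np.array([0.3, 0.7])
s = saliency_map(model, x0, y=0)
```

For this model the derivative is also known in closed form, $p_y (W_y - \sum_k p_k W_k)$, which gives a quick sanity check on the numerical estimate.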
+
+Shared Sensitivity Across Ensemble Members. Given image data $x$ and an ensemble of $M$ models ${f}_{m}$ , we define our SMD measure as:
+
+$$
+{\mathcal{L}}_{SMD}\left( x\right) = \log \left\lbrack {\mathop{\sum }\limits_{m}\mathop{\sum }\limits_{{l > m}}\exp \left( \frac{{s}_{m}^{T}{s}_{l}}{{\begin{Vmatrix}{s}_{m}\end{Vmatrix}}_{2}{\begin{Vmatrix}{s}_{l}\end{Vmatrix}}_{2}}\right) }\right\rbrack , \tag{2}
+$$
+
+where ${s}_{m} = \frac{\partial {f}_{m}\left( x\right) \left\lbrack y\right\rbrack }{\partial x}$ is the saliency map for ensemble model ${f}_{m}$ with respect to the image data $x$ . A high value of ${\mathcal{L}}_{SMD}\left( x\right)$ means alignment and similarity between the saliency maps ${s}_{m}$ of the models ${f}_{m}\left( x\right)$ with respect to the image data $x$ . Thus, the SMD in (2) indicates a possible shared sensitivity to particular image content common to all the ensemble members. A pronounced shared sensitivity across the ensemble members points to a vulnerability that might be targeted and exploited by an adversarial attack. To prevent this, we would like ${\mathcal{L}}_{SMD}\left( x\right)$ to be as small as possible, which means different image content is of different importance to the different ensemble members.
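A minimal NumPy sketch of the SMD measure in Eq. (2); the helper name `smd` and the toy saliency vectors are illustrative, and real saliency maps would first be flattened into vectors.

```python
import numpy as np

def smd(saliencies):
    """Eq. (2): log of the summed exponentials of the pairwise cosine
    similarities between the members' (flattened) saliency maps."""
    S = [s / np.linalg.norm(s) for s in saliencies]   # unit-normalize
    pairs = [S[m] @ S[l]
             for m in range(len(S)) for l in range(m + 1, len(S))]
    return np.log(np.sum(np.exp(pairs)))

# Three perfectly aligned vs. three mutually misaligned saliency maps.
aligned = [np.array([1.0, 0.0]), np.array([1.0, 0.0]), np.array([1.0, 0.0])]
diverse = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
```

Here `smd(aligned)` equals `1 + log(3)` (three pairs with cosine similarity 1), while the misaligned set yields a strictly smaller value, matching the intuition that small SMD means the members attend to different image content.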
+
+### 3.2 Saliency Diversification Objective
+
+We jointly learn our ensemble members using a common cross-entropy loss per member and our saliency based sensitivity measure described in the subsection above. We define our learning objective in the following:
+
+$$
+\mathcal{L} = \mathop{\sum }\limits_{x}\mathop{\sum }\limits_{m}{\mathcal{L}}_{m}\left( x\right) + \lambda \mathop{\sum }\limits_{x}{\mathcal{L}}_{SMD}\left( x\right) , \tag{3}
+$$
+
+where ${\mathcal{L}}_{m}\left( x\right)$ is the cross-entropy loss for ensemble member $m$ , ${\mathcal{L}}_{SMD}\left( x\right)$ is our SMD measure for image data $x$ and an ensemble of $M$ models ${f}_{m}$ , and $\lambda > 0$ is a Lagrangian parameter. By minimizing our learning objective, which includes a saliency-based sensitivity measure, we enforce the ensemble members to have misaligned and non-overlapping sensitivity for the different image content. Our regularization strongly penalizes closely aligned saliency map pairs (large ${s}_{m}^{T}{s}_{l}$ between the saliency maps ${s}_{m}$ and ${s}_{l}$ ), while at the same time ensuring that already large misalignments are not discarded. Additionally, since ${\mathcal{L}}_{SMD}\left( x\right)$ is a log-sum-exp function, it has good numerical properties (Kariyappa and Qureshi 2019). Thus, our approach effectively minimizes possible shared sensitivity across the ensemble members that might be exploited as a vulnerability. In contrast to the GAL regularizer (Kariyappa and Qureshi 2019), SMD is loss agnostic (it can be used with loss functions other than cross-entropy) and does not focus on incorrect-class predictions (which are irrelevant for accuracy). Additionally, it has a clear link to work on interpretability (Etmann et al. 2019) and produces diverse but meaningful saliency maps (see Fig. 1).
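For a single sample, the objective in Eq. (3) can be sketched as the sum of per-member cross-entropy losses plus the weighted SMD penalty; the `ensemble_objective` helper, the toy softmax predictions, and the toy saliencies are illustrative assumptions, not the paper's training code.

```python
import numpy as np

def ensemble_objective(probs, saliencies, y, lam=1.0):
    """Eq. (3) for one sample: sum of per-member cross-entropy losses
    plus lambda times the SMD penalty over the members' saliency maps."""
    ce = sum(-np.log(p[y]) for p in probs)            # cross-entropy terms
    S = [s / np.linalg.norm(s) for s in saliencies]   # unit-normalize
    pairs = [S[m] @ S[l]
             for m in range(len(S)) for l in range(m + 1, len(S))]
    return ce + lam * np.log(np.sum(np.exp(pairs)))   # + lambda * SMD

# Two members: softmax outputs for a 2-class problem and their saliencies.
probs = [np.array([0.7, 0.3]), np.array([0.6, 0.4])]
sals = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
loss = ensemble_objective(probs, sals, y=0)
```

With orthogonal saliencies the single pairwise cosine similarity is 0, so the SMD term contributes `log(exp(0)) = 0` and the objective reduces to the two cross-entropy terms; aligned saliencies would raise the loss via the penalty.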
+
+Assuming unit-norm saliencies, the gradient-based update for one data sample $x$ with respect to the parameters ${\theta }_{{f}_{m}}$ of a particular ensemble member can be written as:
+
+$$
+{\theta }_{{f}_{m}} = {\theta }_{{f}_{m}} - \alpha \left( {\frac{\partial {\mathcal{L}}_{m}\left( x\right) }{\partial {\theta }_{{f}_{m}}} + \lambda \frac{\partial {\mathcal{L}}_{SMD}\left( x\right) }{\partial {\theta }_{{f}_{m}}}}\right) =
+$$
+
+$$
+= {\theta }_{{f}_{m}} - \alpha \frac{\partial {\mathcal{L}}_{m}\left( x\right) }{\partial {\theta }_{{f}_{m}}} - {\alpha \lambda }\frac{\partial {f}_{m}\left( x\right) \left\lbrack y\right\rbrack }{\partial x\partial {\theta }_{{f}_{m}}}\mathop{\sum }\limits_{{j \neq m}}{\beta }_{j}\frac{\partial {f}_{j}\left( x\right) \left\lbrack y\right\rbrack }{\partial x}, \tag{4}
+$$
+
+where $\alpha$ is the learning rate and ${\beta }_{j} = \frac{\exp \left( {{s}_{m}^{T}{s}_{j}}\right) }{\mathop{\sum }\limits_{m}\mathop{\sum }\limits_{{k > m}}\exp \left( {{s}_{m}^{T}{s}_{k}}\right) }$ . The third term enforces the learning of the ensemble members to follow optimization paths where the gradient of their saliency maps $\frac{\partial {f}_{m}\left( x\right) \left\lbrack y\right\rbrack }{\partial x\partial {\theta }_{{f}_{m}}}$ with respect to ${\theta }_{{f}_{m}}$ is misaligned with the weighted average of the remaining saliency maps $\mathop{\sum }\limits_{{j \neq m}}{\beta }_{j}\frac{\partial {f}_{j}\left( x\right) \left\lbrack y\right\rbrack }{\partial x}$ . Also, (4) reveals that with our approach the ensemble members can be learned in parallel, provided that the saliency maps are shared between the models (we leave this direction for future work).
+
+## 4 Empirical Evaluation
+
+This section is devoted to empirical evaluation and performance comparison with state-of-the-art ensemble methods.
+
+### 4.1 Data Sets and Baselines
+
+We performed the evaluation using 3 classical computer vision data sets (MNIST (Lecun et al. 1998), FASHION-MNIST (Xiao, Rasul, and Vollgraf 2017) and CIFAR-10 (Krizhevsky 2009)) and include 4 baselines (naive ensemble, (Pang et al. 2019), (Kariyappa and Qureshi 2019), (Yang et al. 2020)) in our comparison.
+
+Datasets. The MNIST dataset (Lecun et al. 1998) consists of 70000 gray-scale images of handwritten digits with dimensions of ${28} \times {28}$ pixels. The F-MNIST dataset (Xiao, Rasul, and Vollgraf 2017) is similar to MNIST, with the same number of images and classes. Each image is gray-scale and has a size of ${28} \times {28}$ . It is widely used as an alternative to MNIST for evaluating machine learning models. The CIFAR-10 dataset (Krizhevsky 2009) contains 60000 color images with 3 channels of dimension ${32} \times {32}$ each, covering 10 real-life classes.
+
+Baselines. As the simplest baseline, we compare against the performance of a naive ensemble, i.e., one trained without any defense mechanism against adversarial attacks. Additionally, we also consider state-of-the-art methods as baselines. We compare the performance of our approach with the following ones: the Adaptive Diversity Promoting (ADP) method (Pang et al. 2019), the Gradient Alignment Loss (GAL) method (Kariyappa and Qureshi 2019), and the Diversifying Vulnerabilities for Enhanced Robust Generation of Ensembles (DVERGE, or DV.) method (Yang et al. 2020).
+
+### 4.2 Training and Testing Setup
+
+Used Neural Networks. To evaluate our approach, we use two neural networks LeNet-5 (Lecun et al. 1998) and ResNet-20 (He et al. 2016). LeNet-5 is a classical small neural network for vision tasks, while ResNet-20 is another widely used architecture in this domain.
+
+Training Setup. We run our training algorithm for 50 epochs on MNIST and F-MNIST and 200 epochs on CIFAR-10, using the Adam optimizer (Kingma and Ba 2015), a learning rate of 0.001, a weight decay of 0.0001, and a batch size of 128. We use no data augmentation on MNIST and F-MNIST, and use normalization, random cropping, and flipping on CIFAR-10. In all of our experiments, we use 86% of the data for training and 14% for testing. For the regularizers implemented from prior work, we used the $\lambda$ suggested by the respective authors, while we found that an SMD regularizer strength $\lambda$ in the range $\left\lbrack {{0.5},2}\right\rbrack$ gives good results. Thus, in all of our experiments, we take $\lambda = 1$ . We report all results as an average over 5 independent trials (we include the standard deviations in Appendix A). We report results for ensembles of 3 members in the main paper, and for 5 and 8 members in Appendix C.
+
+We used the LeNet-5 neural network for the MNIST and F-MNIST datasets and ResNet-20 for CIFAR-10. For a fair comparison, we also train ADP (Pang et al. 2019), GAL (Kariyappa and Qureshi 2019), and DVERGE (Yang et al. 2020) under a similar training setup as described above. We made sure that the setup is consistent with the one given by the original authors, with the exception of using the Adam optimizer for training DVERGE. We also added our approach as a regularizer to the DVERGE algorithm. We named this combination SMD+ and ran it under the setup described above. All models are implemented in PyTorch (Paszke et al. 2017). We use the AdverTorch (Ding, Wang, and Jin 2019) library for adversarial attacks.
+
+In the setting of adversarial training, we follow the EAT approach (Tramèr et al. 2018) by creating adversarial examples on 3 holdout pre-trained ensembles with the same size and architecture as the baseline ensemble. The examples are created via PGD- ${L}_{\infty }$ attack with 10 steps and $\epsilon = {0.1}$ .
+
+Adversarial Attacks. To evaluate our proposed approach and compare its performance to the baselines, we use the set of adversarial attacks described in Section 2.3 in both black-box and white-box settings. We construct adversarial examples from the images in the test dataset by modifying them using the respective attack method. We probe with white-box attacks on the ensemble as a whole (not on the individual models). We generate black-box attacks targeting our ensemble model by creating white-box adversarial attacks on a surrogate ensemble model (with the same architecture), trained on the same dataset with the same training routine. We use the following parameters for the attacks: for FGSM, PGD, R-FGSM, BIM, and MIM we use $\epsilon$ in the range $\left\lbrack {0,{0.3}}\right\rbrack$ in steps of 0.05, which covers the range used in our baselines; we use 10 iterations with a step size equal to $\epsilon /{10}$ for PGD, BIM, and MIM; we use the ${L}_{\infty }$ variant of the PGD attack; for R-FGSM we use a random step $\alpha = \epsilon /2$ .
+
+Computing Infrastructure and Run Time. As computing hardware, we use half of the available resources of an NVIDIA DGX-2 station with a 3.3 GHz CPU and 1.5 TB of RAM, which has a total of 16 1.75 GHz GPUs, each with 32 GB of memory. One experiment takes around 4 minutes to train the baseline ensemble of 3 LeNet-5 members on MNIST without any regularizer, whereas it takes around 18 minutes to train the same ensemble under the SMD regularizer, 37 minutes under the DVERGE regularizer, and 48 minutes under their combination. Evaluating the same ensemble under all of the adversarial attacks takes approximately 1 hour. The same experiment takes approximately 3 days when ResNet-20 members are used on CIFAR-10.
+
+Figure 2: Accuracy vs. attack strength for white-box PGD attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+| | MNIST | | | | | | F-MNIST | | | | | | CIFAR-10 | | | | | |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| | Clean | FGSM | R-F. | PGD | BIM | MIM | Clean | FGSM | R-F. | PGD | BIM | MIM | Clean | FGSM | R-F. | PGD | BIM | MIM |
+| Naive | 99.3 | 20.3 | 73.5 | 2.9 | 4.2 | 5.5 | 91.9 | 15.7 | 33.6 | 5.5 | 7.2 | 6.6 | 91.4 | 10.5 | 2.8 | 1.0 | 3.2 | 2.9 |
+| ADP | 98.8 | 43.8 | 89.6 | 10.4 | 19.6 | 14.8 | 91.4 | 18.3 | 34.8 | 5.8 | 8.8 | 7.5 | 91.7 | 11.4 | 3.7 | 0.8 | 3.6 | 3.4 |
+| GAL | 99.3 | 72.7 | 89.0 | 14.4 | 28.2 | 38.9 | 91.4 | 35.8 | 51.2 | 7.4 | 10.8 | 12.2 | 91.4 | 11.2 | 9.7 | 1.0 | 1.8 | 2.8 |
+| DV. | 99.4 | 44.2 | 85.5 | 10.6 | 16.0 | 20.6 | 91.8 | 27.3 | 44.6 | 7.3 | 10.7 | 9.9 | 91.0 | 11.2 | 6.3 | 1.1 | 5.5 | 4.4 |
+| SMD | 99.3 | 70.7 | 91.3 | 21.4 | 34.3 | 43.8 | 91.1 | 38.2 | 52.0 | 11.0 | 14.9 | 16.4 | 90.1 | 12.0 | 12.0 | 2.3 | 3.2 | 3.9 |
+| SMD+ | 99.4 | 83.4 | 93.8 | 54.7 | 68.0 | 71.0 | 91.6 | 42.9 | 51.9 | 13.3 | 20.5 | 20.5 | 90.5 | 12.1 | 5.8 | 1.2 | 5.9 | 5.2 |
+
+Table 1: White-box attacks of magnitude $\epsilon = {0.3}$ on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset. Columns are attacks and rows are the defenses employed.
+
+### 4.3 Results
+
+Robustness to White-Box Adversarial Attacks. In Table 1, we show the results for ensemble robustness under white-box adversarial attacks with $\epsilon = {0.3}$ . We highlight the methods with the highest accuracy. In Figure 2, we depict the results for the PGD attack at different attack strengths $\left( \epsilon \right)$ . It can be observed that the accuracy on normal images (without adversarial attacks) slightly decreases for all regularizers, which is consistent with a robustness-accuracy trade-off (Tsipras et al. 2018; Zhang et al. 2019). The proposed SMD and SMD+ outperform the competing baseline methods on all attack configurations and datasets. This result shows that the proposed saliency diversification approach helps to increase adversarial robustness.
+
+Robustness to Black-Box Adversarial Attacks. In Table 2, we show the results for ensemble robustness under black-box adversarial attacks with an attack strength $\epsilon = {0.3}$ . In Figure 3, we also depict the results for the PGD attack at different strengths $\left( \epsilon \right)$ . We can see that SMD+ is on par with DVERGE (DV.) on MNIST and consistently outperforms the other methods. On F-MNIST, SMD+ has a significant performance gap over the baselines, and this effect is even more pronounced on the CIFAR-10 dataset. It is also interesting to note that standalone SMD comes second in performance and is very close to the highest accuracy on multiple attack configurations under $\epsilon = {0.3}$ .
+
+Transferability. In this subsection, we investigate the transferability of attacks between the ensemble members, which measures how likely a white-box attack crafted for one ensemble member is to succeed on another. In Figure 5, we present results for F-MNIST and PGD attacks (results for other datasets and attacks are in Appendix B). The Y-axis represents the member on which the adversary crafts the attack (the source), and the X-axis the member to which the adversary transfers the attack (the target). The on-diagonal values depict the accuracy of a particular ensemble member under a white-box attack. The off-diagonal values show the accuracy of the target members under transferred (black-box) attacks from the source member. In Figure 5, we see that SMD and SMD+ have high ensemble resilience; both appear to reduce the common attack vector between the members. Compared to the naive ensemble and the DV. method, we see improved performance, showing that our approach increases the robustness to transfer attacks.
+
+Robustness Under Adversarial Training. We also present the performance of our method and the competing methods under AT. We follow the approach of Tramèr et al. as described in Section 4.2. In Figure 4, we show the results for the PGD attack on the MNIST dataset. In the white-box attack setting, we see a major improvement for all regularizers, with SMD and SMD+ consistently outperforming the others. This is consistent with the results of (Tramèr et al. 2018), which showed EAT to perform rather poorly in the white-box setting. In Appendix D, we also show the results for black-box attacks.
+
+Figure 3: Accuracy vs. attack strength for black-box PGD attacks on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset.
+
+| | MNIST | | | | | | F-MNIST | | | | | | CIFAR-10 | | | | | |
+|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
+| | Clean | FGSM | R-F. | PGD | BIM | MIM | Clean | FGSM | R-F. | PGD | BIM | MIM | Clean | FGSM | R-F. | PGD | BIM | MIM |
+| Naive | 99.3 | 32.2 | 84.2 | 21.7 | 20.7 | 14.5 | 91.9 | 23.8 | 47.5 | 33.1 | 31.5 | 15.2 | 91.4 | 10.6 | 5.8 | 1.3 | 3.7 | 3.3 |
+| ADP | 98.8 | 26.6 | 70.9 | 27.3 | 26.5 | 19.4 | 91.4 | 22.3 | 49.5 | 33.0 | 33.2 | 16.3 | 91.7 | 11.6 | 5.5 | 1.2 | 3.8 | 3.4 |
+| GAL | 99.3 | 38.5 | 85.2 | 32.7 | 31.2 | 22.3 | 91.4 | 29.8 | 55.5 | 44.0 | 41.4 | 21.9 | 91.4 | 11.0 | 8.3 | 4.2 | 3.8 | 4.4 |
+| DV. | 99.4 | 42.2 | 89.1 | 34.5 | 32.2 | 22.0 | 91.8 | 30.7 | 55.7 | 44.7 | 42.3 | 21.4 | 91.0 | 10.1 | 8.4 | 6.8 | 5.8 | 4.0 |
+| SMD | 99.3 | 38.6 | 85.8 | 33.4 | 31.6 | 22.6 | 91.1 | 31.0 | 56.8 | 45.4 | 42.4 | 23.2 | 90.1 | 10.4 | 7.8 | 3.9 | 3.8 | 3.5 |
+| SMD+ | 99.4 | 42.0 | 89.1 | 36.3 | 34.7 | 24.3 | 91.6 | 31.9 | 57.7 | 47.1 | 44.4 | 23.3 | 90.5 | 9.9 | 8.7 | 7.8 | 8.6 | 4.1 |
+
+Table 2: Black-box attacks of magnitude $\epsilon = {0.3}$ on an ensemble of 3 LeNet-5 models for MNIST and F-MNIST and on an ensemble of 3 ResNet-20 models for the CIFAR-10 dataset. Columns are attacks and rows are the defenses employed.
+
+Figure 4: Accuracy vs. attack strength for PGD attacks on MNIST under adversarial training.
+
+Figure 5: Transferability of PGD attacks on F-MNIST. Attacks are crafted on Y-axis members and tested on X-axis members. Higher values indicate better performance.
+
+## 5 Conclusion
+
+In this paper, we proposed a novel diversity-promoting learning approach for the adversarial robustness of deep ensembles. We introduced a saliency diversification measure and presented a saliency diversification learning objective. With our learning approach, we aim to minimize possible shared sensitivity across the ensemble members to decrease the ensemble's vulnerability to adversarial attacks. Our empirical results showed a reduced transferability between ensemble members and improved performance compared to other ensemble defense methods. We also demonstrated that our approach combined with existing methods outperforms state-of-the-art ensemble algorithms in adversarial robustness.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/1M6fV3HQ3RE/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/1M6fV3HQ3RE/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..dcc9e9b577fa3baba8410b66953b70009efed4d0
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/1M6fV3HQ3RE/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,69 @@
+# Causal Concept Identification in Open World Environments
+
+Anonymous submission
+
+## Abstract
+
+The ability to continually discover novel concepts is a core task in open world learning. For classical learning tasks, new samples might be identified via manual labeling. Since this is a labor-intensive task, we propose to utilize causal information for doing so. Image data provides the ability to directly observe the physical real-world appearance of concepts. However, the information presented in images is usually noisy and unstructured. In this position paper, we propose to leverage causal information to structure and causally connect visual representations. Specifically, we discuss the possibilities of using causal models as a knowledge source for identifying novel concepts in the visual domain.
+
+Overview. Section (1) motivates continuous concept discovery using causal mechanisms. Section (2) outlines a path to continually advance the discovery of visual concepts using causal structures. Section (3) outlines practical issues encountered in (1) and (2) and discusses future steps.
+
+## 1 Motivation
+
+Modern machine learning systems need to process large amounts of annotated image data to identify visual concepts. While the resulting models achieve impressive results and push the limits of the field, they lack human curiosity and fall short in their ability to perform lifelong learning. One drawback of such approaches is the necessity to provide a supervision signal for the image data. While modern approaches utilize large amounts of data available from the internet, the trained models fall short on training data for niche domains and might adopt harmful biases from the data. Continual learning approaches are interested in continually discovering novel concepts to help machine learning models improve and adapt to new environments. A key challenge in this regard is to provide sufficient amounts of accurately annotated data. For this purpose, the field of causality can help to provide the required supervision by leveraging the structure of causal mechanisms.
+
+In the following sections, we lay out a possible path of connecting continual, open world, learning with causality and discuss the challenges encountered along the way. We highlight the importance of causal knowledge to identify and disentangle objects in the visual domain.
+
+How do humans discover novel concepts? Machine learning models are usually trained by presenting them with randomly sampled image-label pairs. These samples usually do not have any connections besides sharing the same set of labels. Machine learning models are expected to identify visual concepts from the ground up. This is in contrast to the way humans discover the world. When presented with novel concepts, we usually start out with some initial knowledge. We relate novel concepts to preexisting knowledge and therefore continually advance our understanding of the world (Chen and Liu 2018; Flesch et al. 2022).
+
+Why do we want causal concept discovery? Causality is concerned with identifying processes that are underlying real world observations. As such, structural causal models (SCM) are designed to model the causal relationship between different concepts. Whenever we detect some concepts of a given SCM, we can utilize the graph structure to infer the presence of other, possibly unknown, concepts. Definitions of a 'concept' might vary, depending on the specific use case. In this paper we will use the term to capture the range from low-level features, such as color or texture, to complex composed entities, e.g. cities. Assume that our system is able to detect matches and matchboxes, but has not yet seen a flame. However, we might have read about the causal effect of sliding matches along the striking surface of the matchbox to light them. We can express this knowledge in a causal graph without having to provide image samples. Now, whenever we observe the interaction of striking matches on a matchbox, we can infer the presence of a flame and in consequence train our vision model on the new concept. This approach helps us in two ways: (1) Since the causal model specifies the exact conditions under which a particular concept is to be expected, we can actively steer our discovery process towards those instances and discover concepts more efficiently. (2) Secondly, causality helps us with disentangling concepts and ruling out confounding factors. While some objects might only appear in strongly correlated settings we know the true causal factors from the SCM.
+
+## 2 Causal Concept Discovery
+
+Discovering concepts under causal supervision provides a way to incrementally discover novel concepts. For every observation, we search for an SCM that contains the already known concepts and follow the causal paths to arrive at novel concepts. Each of those discovery steps allows us to align further causal graphs that still lack observations, and continually broadens the scope of our models.
+
+Structural Causal Models. As we use SCMs to ground and discover novel concepts in the visual domain, we follow the Pearlian notion of Causality (Pearl 2009). An SCM is defined as a 4-tuple $\mathcal{M} \mathrel{\text{:=}} \langle \mathbf{U},\mathbf{V},\mathcal{F}, p\left( \mathbf{U}\right) \rangle$ where the so-called structural equations ${v}_{i} \leftarrow {f}_{i}\left( {{\mathrm{{pa}}}_{i},{u}_{i}}\right) \in \mathcal{F}$ assign values to the respective endogenous variables ${V}_{i} \in \mathbf{V}$ based on the values of their parents ${\mathrm{{Pa}}}_{i} \subseteq \mathbf{V} \smallsetminus {V}_{i}$ and the values of their respective exogenous variables ${\mathbf{U}}_{i} \subseteq \mathbf{U}$ . In particular, any SCM induces a causal graph which represents the causal structure from causes to effects.
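As a concrete illustration of this definition, the earlier match-lighting example can be written as a tiny SCM. This is our own toy encoding, not an implementation from the paper; variable names and the noise distribution are illustrative choices:

```python
import random

# Toy SCM M = <U, V, F, p(U)> for the match example (illustrative sketch):
# exogenous noise U decides whether a strike happens; the structural
# equations F assign the endogenous variables V along the causal graph
# (match, matchbox) -> strike -> flame.
def sample_scm(rng):
    u_strike = rng.random() < 0.5             # u ~ p(U)
    match = True                              # v_match <- f_match()
    matchbox = True                           # v_box   <- f_box()
    strike = match and matchbox and u_strike  # v_strike <- f(match, box, u)
    flame = strike                            # v_flame  <- f(strike)
    return {"match": match, "matchbox": matchbox,
            "strike": strike, "flame": flame}

# Whenever the known concepts 'match', 'matchbox', and 'strike' are
# observed, the graph licenses inferring the unseen concept 'flame'
# without any labeled flame images.
rng = random.Random(0)
samples = [sample_scm(rng) for _ in range(100)]
assert all(s["flame"] == s["strike"] for s in samples)
```

The induced causal graph is exactly the dependency structure between the assignments: each variable is computed only from its parents and its exogenous noise.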
+
+Bootstrapping. SCMs are specifically tailored to represent information about causal systems. However, in practice we may not be able to explicitly provide a list of concepts $C$ learned by the model and might even encounter catastrophic forgetting of already learned concepts (French 1999). This poses a practical problem as it introduces additional uncertainty to our system. For our theoretical discussion, we assume for now that we can reliably detect all concepts that we have already discovered during our process.
+
+For starting up our discovery process, we assume an initial set of known concepts ${C}_{0}$ to be given, which can be reliably detected. This initial knowledge might come from training on manually annotated data sets or other pre-trained models (Lin et al. 2014; Krizhevsky, Sutskever, and Hinton 2012; Minderer et al. 2022). Additionally, we assume to be given a set of SCMs which encode our causal knowledge about the world.
+
+Boundaries of Discovery. Open world settings provide an infinite stream of concepts to discover (Chen and Liu 2018). As with human curiosity, we are not interested in learning random concepts, but in utilizing our existing causal knowledge to efficiently discover concepts that stay close to our already existing knowledge. As such, we define the causal frontier as the set of SCMs that contain at least one concept of ${C}_{i}$ . Importantly, this gives us a method to a priori determine which concepts can and cannot be discovered. Given the set of initial concepts ${C}_{0}$ and the set of given causal graphs, it follows that we can only discover those concepts for which we can find a chain of causal graphs, such that any two adjacent SCMs have a non-empty overlap between their sets of variables.
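The chaining condition can be made concrete with a small fixed-point computation. In this sketch (our own encoding, not from the paper), each SCM is reduced to its set of variable names:

```python
def discoverable(c0, scms):
    """Concepts reachable from the initial set C_0 by chaining SCMs:
    an SCM contributes all of its variables once it shares at least
    one already-known concept (illustrative sketch)."""
    known = set(c0)
    changed = True
    while changed:
        changed = False
        for variables in scms:
            vs = set(variables)
            if known & vs and not vs <= known:
                known |= vs
                changed = True
    return known

# Toy frontier: 'smoke' only becomes reachable via 'flame', while
# 'piano' shares no variables with known concepts and stays out of reach.
scms = [{"match", "matchbox", "flame"}, {"flame", "smoke"}, {"piano", "tuner"}]
reached = discoverable({"match", "matchbox"}, scms)
assert "smoke" in reached and "piano" not in reached
```

The loop is a reachability computation over a hypergraph whose nodes are concepts and whose hyperedges are SCM variable sets, which is exactly the "chain of causal graphs" criterion above.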
+
+Discovering concepts. At this point in our process we can start to advance our causal frontier by continually learning new concepts from observations. As a first step, we identify all known concepts that are present in a new observed image and select the causal graphs that contain those concepts. However, discovering that a concept is contained in the set of endogenous variables of a causal graph does not suffice to infer the presence of the causal system. Some common concepts, such as color, might appear as parameters in many causal graphs. Apples, for example, are typically colored green or red. As such, the concept of color parameterizes the observed object. However, it is not suited to infer the type of an object, as being colored red or green does not make an object an apple. Detecting the typical 'apple-like shape' however would be a strong indicator for the concept. Therefore, we are interested in SCMs for which we discover indicator variables or actual causes (Halpern 2016) that are necessary to infer the presence of a causal system.
+
+Another important insight is the fact that some causal structures may only be identifiable with the help of interventions (Pearl 2009; Bareinboim et al. 2022). Consider, for example, a scenario of two independent variables, $A$ and $B$ , whose appearance is determined by a third variable $C$ . As a consequence, $A$ and $B$ are either present at the same time or not, and we have no way to disentangle them. Causality can help to detect such situations. In consequence, we can actively intervene on the system and identify the individual concepts.
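This entanglement, and how an intervention resolves it, can be simulated directly. The sketch below uses our own variable names and a coin-flip confounder; it is only meant to show the observational/interventional contrast:

```python
import random

def sample(rng, do_a=None):
    """A and B are both driven by the confounder C; do_a overrides A's
    structural equation, modeling the intervention do(A := do_a)."""
    c = rng.random() < 0.5
    a = c if do_a is None else do_a
    b = c
    return a, b

rng = random.Random(0)
observational = [sample(rng) for _ in range(1000)]
assert all(a == b for a, b in observational)   # A and B never come apart

# Under do(A := True), B still follows C alone, so the two concepts
# decouple and can be told apart.
interventional = [sample(rng, do_a=True) for _ in range(1000)]
assert any(a != b for a, b in interventional)
```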
+
+## 3 Challenges and Future Steps
+
+In the previous sections we outlined the high-level idea of causally guided concept discovery. For this section we now continue to discuss the challenges that may arise in practice.
+
+Challenge 1: Identifying causal paths. One problem with identifying unknown variables from SCMs is the fact that we may not know how causes and effects are interacting in the real world. In our initial example of lighting a match we might follow the physical process of the objects interacting. For other examples we might assume to only observe sparse changes. While we can continue to come up with more heuristics for specific problems, we have to recognize that the current formalizations of causality are not well suited to trace causal effects in their underlying systems.
+
+Challenge 2: Abstracting concepts. Another challenge towards identifying unknown concepts from SCMs arises from the fact that causal systems are often modeled using high-level relations. Because of that, we might encounter several low-level entities on the way from cause to effect that are not modeled in the SCM. In order to be able to identify these concepts, we need to consider abstractions and refinements of SCMs (Beckers and Halpern 2019; Rubenstein et al. 2017). This means that we have to come up with ways of identifying intermediate concepts or refine the given SCMs to better reflect the abstraction level of our observations.
+
+Summary and Outlook: In this position paper we highlighted the strengths and challenges of continuous causal concept discovery. We presented ways to leverage causal structures to guide concept discovery and identify novel concepts. While we primarily focused on the application of causal knowledge to discover open world concepts, the inverse problem of inferring causal knowledge from open world settings is also still to be discussed. Identifying relevant concepts and connecting them in a way that leads to meaningful causal concepts poses a challenging problem of its own that is yet to be solved. For future applications, we might consider combined approaches that discover visual concepts via causal guidance while simultaneously refining their causal knowledge using observations.
+
+## References
+
+Bareinboim, E.; Correa, J. D.; Ibeling, D.; and Icard, T. 2022. On pearl's hierarchy and the foundations of causal inference. In Probabilistic and Causal Inference: The Works of Judea Pearl, 507-556.
+
+Beckers, S.; and Halpern, J. Y. 2019. Abstracting causal models. In Proceedings of the aaai conference on artificial intelligence, volume 33, 2678-2685.
+
+Chen, Z.; and Liu, B. 2018. Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(3): 1-207.
+
+Flesch, T.; Nagy, D. G.; Saxe, A.; and Summerfield, C. 2022. Modelling continual learning in humans with Hebbian context gating and exponentially decaying task signals.
+
+French, R. M. 1999. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4): 128-135.
+
+Halpern, J. Y. 2016. Actual causality. MIT Press.
+
+Krizhevsky, A.; Sutskever, I.; and Hinton, G. E. 2012. ImageNet classification with deep convolutional neural networks. NeurIPS.
+
+Lin, T.-Y.; Maire, M.; Belongie, S.; Hays, J.; Perona, P.; Ramanan, D.; Dollár, P.; and Zitnick, C. L. 2014. Microsoft COCO: Common Objects in Context. In Fleet, D.; Pajdla, T.; Schiele, B.; and Tuytelaars, T., eds., European Conference on Computer Vision (ECCV). Springer International Publishing.
+
+Minderer, M.; Gritsenko, A.; Stone, A.; Neumann, M.; Weissenborn, D.; Dosovitskiy, A.; Mahendran, A.; Arnab, A.; Dehghani, M.; Shen, Z.; Wang, X.; Zhai, X.; Kipf, T.; and Houlsby, N. 2022. Simple Open-Vocabulary Object Detection with Vision Transformers. ECCV.
+
+Pearl, J. 2009. Causality. Cambridge university press.
+
+Rubenstein, P. K.; Weichwald, S.; Bongers, S.; Mooij, J. M.; Janzing, D.; Grosse-Wentrup, M.; and Schölkopf, B. 2017. Causal consistency of structural equation models. arXiv preprint arXiv:1707.00819.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/1M6fV3HQ3RE/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/1M6fV3HQ3RE/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..84f284bc3758f5b108138b3a65ea85a93b843a16
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/1M6fV3HQ3RE/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,45 @@
+§ CAUSAL CONCEPT IDENTIFICATION IN OPEN WORLD ENVIRONMENTS
+
+Anonymous submission
+
+§ ABSTRACT
+
+The ability to continually discover novel concepts is a core task in open world learning. For classical learning tasks, new samples might be identified via manual labeling. Since this is a labor-intensive task, we propose to utilize causal information for doing so. Image data provides the ability to directly observe the physical real-world appearance of concepts. However, the information presented in images is usually noisy and unstructured. In this position paper, we propose to leverage causal information to structure and causally connect visual representations. Specifically, we discuss the possibilities of using causal models as a knowledge source for identifying novel concepts in the visual domain.
+
+Overview. Section (1) motivates continuous concept discovery using causal mechanisms. Section (2) outlines a path to continually advance the discovery of visual concepts using causal structures. Section (3) outlines practical issues encountered in (1) and (2) and discusses future steps.
+
+§ 1 MOTIVATION
+
+Modern machine learning systems need to process large amounts of annotated image data to identify visual concepts. While the resulting models achieve impressive results and push the limits of the field, they lack human curiosity and fall short in their ability to perform lifelong learning. One drawback of such approaches is the necessity to provide a supervision signal for the image data. While modern approaches utilize large amounts of data available from the internet, the trained models fall short on training data for niche domains and might adopt harmful biases from the data. Continual learning approaches are interested in continually discovering novel concepts to help machine learning models improve and adapt to new environments. A key challenge in this regard is to provide sufficient amounts of accurately annotated data. For this purpose, the field of causality can help to provide the required supervision by leveraging the structure of causal mechanisms.
+
+In the following sections, we lay out a possible path of connecting continual, open world, learning with causality and discuss the challenges encountered along the way. We highlight the importance of causal knowledge to identify and disentangle objects in the visual domain.
+
+How do humans discover novel concepts? Machine learning models are usually trained by presenting them with randomly sampled image-label pairs. These samples usually do not have any connections besides sharing the same set of labels. Machine learning models are expected to identify visual concepts from the ground up. This is in contrast to the way humans discover the world. When presented with novel concepts, we usually start out with some initial knowledge. We relate novel concepts to preexisting knowledge and therefore continually advance our understanding of the world (Chen and Liu 2018; Flesch et al. 2022).
+
+Why do we want causal concept discovery? Causality is concerned with identifying processes that are underlying real world observations. As such, structural causal models (SCM) are designed to model the causal relationship between different concepts. Whenever we detect some concepts of a given SCM, we can utilize the graph structure to infer the presence of other, possibly unknown, concepts. Definitions of a 'concept' might vary, depending on the specific use case. In this paper we will use the term to capture the range from low-level features, such as color or texture, to complex composed entities, e.g. cities. Assume that our system is able to detect matches and matchboxes, but has not yet seen a flame. However, we might have read about the causal effect of sliding matches along the striking surface of the matchbox to light them. We can express this knowledge in a causal graph without having to provide image samples. Now, whenever we observe the interaction of striking matches on a matchbox, we can infer the presence of a flame and in consequence train our vision model on the new concept. This approach helps us in two ways: (1) Since the causal model specifies the exact conditions under which a particular concept is to be expected, we can actively steer our discovery process towards those instances and discover concepts more efficiently. (2) Secondly, causality helps us with disentangling concepts and ruling out confounding factors. While some objects might only appear in strongly correlated settings we know the true causal factors from the SCM.
+
+§ 2 CAUSAL CONCEPT DISCOVERY
+
+Discovering concepts under causal supervision provides a way to incrementally discover novel concepts. For every observation, we search for an SCM that contains the already known concepts and follow the causal paths to arrive at novel concepts. Each of those discovery steps allows us to align further causal graphs that still lack observations, and continually broadens the scope of our models.
+
+Structural Causal Models. As we use SCMs to ground and discover novel concepts in the visual domain, we follow the Pearlian notion of Causality (Pearl 2009). An SCM is defined as a 4-tuple $\mathcal{M} \mathrel{\text{ := }} \langle \mathbf{U},\mathbf{V},\mathcal{F},p\left( \mathbf{U}\right) \rangle$ where the so-called structural equations ${v}_{i} \leftarrow {f}_{i}\left( {{\mathrm{{pa}}}_{i},{u}_{i}}\right) \in \mathcal{F}$ assign values to the respective endogenous variables ${V}_{i} \in \mathbf{V}$ based on the values of their parents ${\mathrm{{Pa}}}_{i} \subseteq \mathbf{V} \smallsetminus {V}_{i}$ and the values of their respective exogenous variables ${\mathbf{U}}_{i} \subseteq \mathbf{U}$ . In particular, any SCM induces a causal graph which represents the causal structure from causes to effects.
+
+Bootstrapping. SCMs are specifically tailored to represent information about causal systems. However, in practice we may not be able to explicitly provide a list of concepts $C$ learned by the model and might even encounter catastrophic forgetting of already learned concepts (French 1999). This poses a practical problem as it introduces additional uncertainty to our system. For our theoretical discussion, we assume for now that we can reliably detect all concepts that we have already discovered during our process.
+
+For starting up our discovery process, we assume an initial set of known concepts ${C}_{0}$ to be given, which can be reliably detected. This initial knowledge might come from training on manually annotated data sets or other pre-trained models (Lin et al. 2014; Krizhevsky, Sutskever, and Hinton 2012; Minderer et al. 2022). Additionally, we assume to be given a set of SCMs which encode our causal knowledge about the world.
+
+Boundaries of Discovery. Open world settings provide an infinite stream of concepts to discover (Chen and Liu 2018). As with human curiosity, we are not interested in learning random concepts, but in utilizing our existing causal knowledge to efficiently discover concepts that stay close to our already existing knowledge. As such, we define the causal frontier as the set of SCMs that contain at least one concept of ${C}_{i}$ . Importantly, this gives us a method to a priori determine which concepts can and cannot be discovered. Given the set of initial concepts ${C}_{0}$ and the set of given causal graphs, it follows that we can only discover those concepts for which we can find a chain of causal graphs, such that any two adjacent SCMs have a non-empty overlap between their sets of variables.
+
+Discovering concepts. At this point in our process we can start to advance our causal frontier by continually learning new concepts from observations. As a first step, we identify all known concepts that are present in a new observed image and select the causal graphs that contain those concepts. However, discovering that a concept is contained in the set of endogenous variables of a causal graph does not suffice to infer the presence of the causal system. Some common concepts, such as color, might appear as parameters in many causal graphs. Apples, for example, are typically colored green or red. As such, the concept of color parameterizes the observed object. However, it is not suited to infer the type of an object, as being colored red or green does not make an object an apple. Detecting the typical 'apple-like shape' however would be a strong indicator for the concept. Therefore, we are interested in SCMs for which we discover indicator variables or actual causes (Halpern 2016) that are necessary to infer the presence of a causal system.
+
+Another important insight is the fact that some causal structures may only be identifiable with the help of interventions (Pearl 2009; Bareinboim et al. 2022). Consider, for example, a scenario of two independent variables, $A$ and $B$ , whose appearance is determined by a third variable $C$ . As a consequence, $A$ and $B$ are either present at the same time or not, and we have no way to disentangle them. Causality can help to detect such situations. In consequence, we can actively intervene on the system and identify the individual concepts.
+
+§ 3 CHALLENGES AND FUTURE STEPS
+
+In the previous sections we outlined the high-level idea of causally guided concept discovery. For this section we now continue to discuss the challenges that may arise in practice.
+
+Challenge 1: Identifying causal paths. One problem with identifying unknown variables from SCMs is the fact that we may not know how causes and effects are interacting in the real world. In our initial example of lighting a match we might follow the physical process of the objects interacting. For other examples we might assume to only observe sparse changes. While we can continue to come up with more heuristics for specific problems, we have to recognize that the current formalizations of causality are not well suited to trace causal effects in their underlying systems.
+
+Challenge 2: Abstracting concepts. Another challenge towards identifying unknown concepts from SCMs arises from the fact that causal systems are often modeled using high-level relations. Because of that, we might encounter several low-level entities on the way from cause to effect that are not modeled in the SCM. In order to be able to identify these concepts, we need to consider abstractions and refinements of SCMs (Beckers and Halpern 2019; Rubenstein et al. 2017). This means that we have to come up with ways of identifying intermediate concepts or refine the given SCMs to better reflect the abstraction level of our observations.
+
+Summary and Outlook: In this position paper we highlighted the strengths and challenges of continuous causal concept discovery. We presented ways to leverage causal structures to guide concept discovery and identify novel concepts. While we primarily focused on the application of causal knowledge to discover open world concepts, the inverse problem of inferring causal knowledge from open world settings is also still to be discussed. Identifying relevant concepts and connecting them in a way that leads to meaningful causal concepts poses a challenging problem of its own that is yet to be solved. For future applications, we might consider combined approaches that discover visual concepts via causal guidance while simultaneously refining their causal knowledge using observations.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/7_QHeKujaj/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/7_QHeKujaj/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..0126ec2f4ae839ed788944af754a75ae8ccfb954
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/7_QHeKujaj/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,103 @@
+# Spurious Features in Continual Learning
+
+Anonymous
+
+## Abstract
+
+Continual Learning (CL) is the research field addressing learning without forgetting when the data distribution is not static. To solve their tasks, continual learning algorithms need to learn robust and stable representations based only on a subset of the data. Those representations are necessarily biased and should be revisited when new data becomes available. This paper studies the influence of spurious features on continual learning algorithms. We show that in continual learning, algorithms have to deal with local spurious features that correlate well with labels within a single task only, but which are not good representations of the concept to learn. One of the big challenges of continual learning algorithms is to discover causal relationships between features and labels under distribution shifts.
+
+## Introduction
+
+Feature selection is a standard machine learning problem. Its objective is notably to improve prediction performance (Guyon and Elisseeff 2003). In the presence of spurious features, a learning algorithm may overfit those features and learn a solution that cannot generalize to the test set. This problem can notably be caused by a covariate shift between train and test data.
+
+In continual learning (CL) (French 1999; Parisi et al. 2019; Lesort et al. 2020), the training data distribution changes through time. Hence, spurious features (SFs) in one time-step of the data distribution should not last. A CL algorithm relying on a spurious feature could then be resilient and learn better features later - given more data. Algorithms can also learn to ignore past spurious features (Javed, White, and Bengio 2020). An example of a task with spurious features could be a classification task between cars and bikes. In the training data, all cars are red and all bikes are white, while in the test data, both come in a unique blue not present in the training data. A model could easily overfit the color to solve the task, even though it is not discriminative in the test data. Addressing spurious features was one of the major goals of the recent out-of-distribution (OOD) generalization community (Arjovsky et al. 2019; Ahuja et al. 2021; Sagawa et al. 2019; Pezeshki et al. 2020).
+
+
+
+Figure 1: Spurious features and local spurious features. The task is to distinguish the squares from the circles. In Fig. 1a and 1b, the color is a spurious feature because there is a covariate shift between train and test data. In Fig. 1c and 1d, we observe two tasks of a domain-incremental scenario; the colors are locally spurious in tasks 1 and 2. Even if there is no significant covariate shift between the full train and test data distributions, colors appear discriminative while looking at data within a task.
+
+On the other hand, in continual learning, a second type of spurious feature can be described: local spurious features. Local features denote features that correlate well with labels within a task (a state of the data distribution) but not in the full scenario. In contrast to the usual spurious features, this problem is provoked by the unavailability of all data. An example of a classification scenario would be: task 1, blue cars vs yellow bikes, and task 2, yellow cars vs blue bikes. In both cases, the tasks can be solved efficiently with the color feature, but if the test data is composed of cars and bikes of both yellow and blue, then the color is not discriminative anymore, and the model cannot generalize. While there is no covariate shift between all train data and test data, the model cannot generalize because of the distribution shift through time. It is, therefore, a problem specific to continual learning.
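The car/bike example can be made concrete with a classifier that uses only color (our own toy encoding of the two tasks, not an experiment from the paper):

```python
def accuracy(data, rule):
    """Accuracy of classifying (color, label) pairs by color alone."""
    return sum(rule(color) == label for color, label in data) / len(data)

# Task 1: blue cars vs. yellow bikes; Task 2: yellow cars vs. blue bikes.
task1 = [("blue", "car")] * 10 + [("yellow", "bike")] * 10
task2 = [("yellow", "car")] * 10 + [("blue", "bike")] * 10

# A rule fit on task 1 treats color as if it were discriminative ...
rule = lambda color: "car" if color == "blue" else "bike"
assert accuracy(task1, rule) == 1.0          # perfect within the task
# ... but color is a local spurious feature: on the full scenario the
# same rule collapses to chance level.
assert accuracy(task1 + task2, rule) == 0.5
```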
+
+This paper investigates the problem of spurious features (with covariate shift) in continual learning and shows that the continual learning setup leads to a specific type of spurious feature, which we call local spurious features (LSFs) and which arises without covariate shift, as shown in Fig. 1.
+
+## Problem Formulation
+
+This section introduces the spurious features problems in a sequence of tasks. The goal is to present the key types of features, namely: general, local, and spurious features.
+
+General Formalism: We consider a continual scenario of classification tasks. We study a function ${f}_{\theta }\left( \cdot \right)$ , implemented as a neural network, parameterized by a vector of parameters $\theta \in {\mathbb{R}}^{p}$ (where $p$ is the number of parameters) representing the set of weight matrices and bias vectors of a deep network. In continual learning, the goal is to find a solution ${\theta }^{ * }$ by minimizing a loss $L$ on a stream of data formalized as a sequence of tasks $\left\lbrack {{\mathcal{T}}_{0},{\mathcal{T}}_{1},\ldots ,{\mathcal{T}}_{T - 1}}\right\rbrack$ , such that $\forall \left( {{x}_{t},{y}_{t}}\right) \sim {\mathcal{T}}_{t}\left( {t \in \left\lbrack {0, T - 1}\right\rbrack }\right) ,{f}_{{\theta }^{ * }}\left( x\right) = y$ . We do not use the task index for inference (i.e., the single-head setting).
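The data access pattern of this formalism can be sketched with a toy stand-in for $f_\theta$ (a nearest-prototype classifier of our own choosing, not the paper's network): tasks arrive one at a time, only current-task data is seen during training, and no task index is available at prediction time:

```python
def train_continually(tasks):
    """Sequential training over tasks T_0..T_{T-1}: a running mean
    feature per label stands in for minimizing the loss L (toy sketch)."""
    prototypes, counts = {}, {}
    for task in tasks:                    # tasks arrive one at a time
        for x, y in task:                 # only current-task data visible
            counts[y] = counts.get(y, 0) + 1
            mean = prototypes.get(y, 0.0)
            prototypes[y] = mean + (x - mean) / counts[y]
    return prototypes

def predict(prototypes, x):
    """Single head: prediction uses no task index, only the input."""
    return min(prototypes, key=lambda y: abs(prototypes[y] - x))

tasks = [[(0.0, "a"), (1.0, "b")], [(10.0, "c")]]   # T_0 then T_1
protos = train_continually(tasks)
assert predict(protos, 9.0) == "c" and predict(protos, 0.2) == "a"
```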
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+Table 1: Summary of the characteristics of the types of features. For a feature $z$ of a class $c$ , we denote whether it verifies (1) on different data settings: a single task ${\mathcal{T}}_{t}$ , the whole scenario ${\mathcal{C}}_{T}$ , and the test set ${\mathcal{D}}_{te}$ .
+
+| Name | ${\mathcal{T}}_{t}$ | ${\mathcal{C}}_{T}$ | ${\mathcal{D}}_{te}$ |
+|---|---|---|---|
+| Good Feature $\left( {z}_{ + }\right)$ | ✓ | ✓ | ✓ |
+| Spurious Feature $\left( {z}_{\text{spur }}\right)$ | ✓ | ✓ | ✘ |
+| Local Feature $\left( {z}_{loc}\right)$ | ✓ | ? | ? |
+| Local Spurious Feature $\left( {z}_{\text{spur:t }}\right)$ | ✓ | ✘ | ✘ |
+
+To describe the different types of features, let $z$ be a feature and $x \sim \mathcal{D}$ a data point in dataset $\mathcal{D}$ . We define $w\left( \cdot \right)$ , a function which returns 1 if $z$ is in $x$ and 0 otherwise. $w\left( \cdot \right)$ ’s output is binary for simplicity. Then, for all data with a label $y$ in the dataset $\mathcal{D}$ , we can compute the correlation $c\left( {\mathcal{D}, z, y}\right) = \operatorname{correlation}\left( {w\left( {z, x}\right) = 1, Y = y}\right)$ , which estimates how well a feature correlates with the data of a given class. We can then define discriminative features as:
+
+$z$ is discriminative for class $y$ in $\mathcal{D}$ if:
+
+$$
+\forall {y}^{\prime } \in \mathcal{Y}, y \neq {y}^{\prime }\;c\left( {\mathcal{D}, z, y}\right) \gg c\left( {\mathcal{D}, z,{y}^{\prime }}\right) \tag{1}
+$$
+
+$\mathcal{Y}$ is the set of classes in $\mathcal{D}$ . In other words, $z$ is discriminative for $y$ if it correlates significantly more to $y$ ’s data than to the data of any other class. Then a good feature ${z}_{ + }$ for a class $y$ respects (1) for training data ${\mathcal{D}}_{tr}$ and test data ${\mathcal{D}}_{te}$ .
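A minimal sketch of how $c(\mathcal{D}, z, y)$ could be estimated (our illustrative implementation, assuming the binary outputs of $w$ are given): correlate the feature indicator with the class indicator over the dataset, e.g. via a phi coefficient.

```python
import numpy as np

# Hedged sketch of the correlation c(D, z, y) from Eq. (1): w_outputs holds
# the binary detector values w(z, x) for every x in D, and we correlate them
# with the indicator of class y (a phi coefficient via np.corrcoef).
def feature_label_correlation(w_outputs, labels, y):
    w = np.asarray(w_outputs, dtype=float)         # w(z, x) in {0, 1}
    ind = (np.asarray(labels) == y).astype(float)  # 1 iff label == y
    return np.corrcoef(w, ind)[0, 1]

# z present exactly on class "a" data: maximally discriminative for "a".
w_out = [1, 1, 0, 0]
labels = ["a", "a", "b", "b"]
print(feature_label_correlation(w_out, labels, "a"))  # 1.0
print(feature_label_correlation(w_out, labels, "b"))  # -1.0
```

Under this estimate, $z$ is discriminative for $y$ when its correlation with $y$'s indicator clearly dominates its correlation with every other class indicator, matching condition (1).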
+
+## Spurious Features vs Local Spurious Features
+
+A spurious feature ${z}_{\text{spur }}$ for a class $y$ respects (1) for training data ${\mathcal{D}}_{tr}$ but not for test data ${\mathcal{D}}_{te}$ : it is well correlated with labels in the training data but not in the test data.
+
+Hence, learning from ${z}_{\text{spur }}$ may yield a low training error but a high test error. The presence of ${z}_{\text{spur }}$ is due to a covariate shift between the train and test distributions, which changes the feature distribution.
+
+In continual learning, such a covariate shift between train and test data means that ${z}_{\text{spur }}$ may likewise lead to poor generalization. Furthermore, features can be locally spurious, i.e., they correlate well with labels within a task but not within the whole scenario. We name them local spurious features (LSFs). We illustrate the difference between spurious features and local spurious features in Figure 1.
+
+At task $t$ , a local spurious feature ${z}_{\text{spur};t}$ respects (1) for a class ${y}_{t}$ in task ${\mathcal{T}}_{t}$ , but not for the whole scenario ${\mathcal{C}}_{T}$ . Formally, $z$ is an LSF for a class $y$ in ${\mathcal{T}}_{t} \sim {\mathcal{C}}_{T}$ , with $t \in \llbracket 0, T - 1\rrbracket$ :
+
+$$
+\text{if}\forall {y}^{\prime } \in {\mathcal{Y}}_{t}, y \neq {y}^{\prime }\;c\left( {{\mathcal{T}}_{t}, z, y}\right) \gg c\left( {{\mathcal{T}}_{t}, z,{y}^{\prime }}\right) \tag{2}
+$$
+
+$$
+\text{and}\exists {y}^{\prime \prime } \in \mathcal{Y}, y \neq {y}^{\prime \prime }\;c\left( {{\mathcal{C}}_{T}, z, y}\right) \not\gg c\left( {{\mathcal{C}}_{T}, z,{y}^{\prime \prime }}\right)
+$$
+
+${\mathcal{Y}}_{t}$ is the set of classes in task ${\mathcal{T}}_{t}$ and $\mathcal{Y}$ is the set of classes in the full scenario ${\mathcal{C}}_{T}$ composed of $T$ tasks. An LSF ${z}_{\text{spur};t}$ correlates well with a label on the current task but not on the whole scenario. ${z}_{\text{spur};t}$ can be extended from a single task ${\mathcal{T}}_{t}$ to all tasks seen so far ${\mathcal{T}}_{0 : t}$ without loss of generality.
+
+Global vs Local Optimum: We assume that machine learning models solve tasks by learning to detect/select features that correlate well with labels. Then, while learning on a task $t$ , we distinguish a local optimum ${\theta }_{t}^{ * }$ , satisfactory for the current task ${\mathcal{T}}_{t}$ , from a global optimum ${\theta }_{0 : T}^{ * }$ that is satisfactory for the whole scenario ${\mathcal{C}}_{T}$ (past, current, and future tasks).
+
+Similarly, we can differentiate local and global features, leading to local and global optima. The global features are the good features ${z}_{ + }$ that are predictive for the full scenario. Unfortunately, at time $t$ , we cannot know whether a feature is part of ${z}_{ + }$ without access to the future data. Therefore, algorithms should learn with their current data but update their knowledge afterwards, given new data. For example, in classification, the discriminative features for a given class depend on all the classes. Therefore, when new classes arrive, discriminative features can become outdated in class-incremental scenarios.
+
+To learn robust solutions in CL, algorithms should then be able to deal with both spurious features and local spurious features. One trivial solution to deal with local spurious features is the use of replay. Replay can avoid and fix the influence of local spurious features by providing more context on the full data distribution. Nevertheless, replay can be compute- and data-intensive, and better solutions could be developed.
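The replay idea above can be sketched with a generic reservoir-sampling buffer (a standard technique, not a method proposed in this paper): retained past-task examples are mixed into each new task's batches, so that features which are only locally discriminative stop looking discriminative.

```python
import random

# Sketch of replay for mitigating local spurious features: a fixed-capacity
# buffer of past examples, filled by reservoir sampling over the stream.
class ReplayBuffer:
    def __init__(self, capacity, seed=0):
        self.capacity = capacity
        self.buffer = []
        self.seen = 0
        self.rng = random.Random(seed)

    def add(self, example):
        # Reservoir sampling: each of the `seen` examples is retained
        # with equal probability capacity / seen.
        self.seen += 1
        if len(self.buffer) < self.capacity:
            self.buffer.append(example)
        else:
            j = self.rng.randrange(self.seen)
            if j < self.capacity:
                self.buffer[j] = example

    def sample(self, k):
        return self.rng.sample(self.buffer, min(k, len(self.buffer)))

buf = ReplayBuffer(capacity=10)
for ex in [("blue", "car")] * 100:   # task 1 stream
    buf.add(ex)
for ex in [("yellow", "car")] * 100  :  # task 2 stream
    buf.add(ex)
# A training batch for task 2 mixes current data with replayed task-1 data,
# so "colour" no longer separates the classes within the batch.
batch = [("yellow", "bike")] * 5 + buf.sample(5)
```

The buffer's memory cost is fixed at `capacity`, which is exactly the compute/data trade-off the paragraph above points out.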
+
+## Conclusion
+
+Continual learning algorithms are built to learn, accumulate, and memorize knowledge through time in order to reuse it later. Memorizing bad features can have catastrophic repercussions on future performance. Hence, to learn general features, algorithms need to deal with both spurious and local spurious features.
+
+This paper first investigates the question of spurious features in continual learning. Algorithms easily overfit spurious features for one or several tasks, leading to poor generalization, so spurious features are problematic for them. Furthermore, we formalize another type of spurious feature, which we call local spurious features, that can be problematic for continual learning algorithms.
+
+Local spurious features are features that correlate well with labels when only a subset of the data is available but not when all the data is available. These types of features make the discovery of robust features harder. From a causality perspective, local spurious features make it harder to discover the causal relationships between features and labels in continual learning. Causality algorithms could help to find a solution to this issue.
+
+In the continual learning literature, performance decreases are generally attributed to catastrophic forgetting. Our results show that the problem of local spurious features also plays a major role. More research is needed to better understand the impact of local spurious features alongside catastrophic forgetting. Understanding this phenomenon is critical to better address forgetting and feature selection and to enable efficient continual learning.
+
+## References
+
+Ahuja, K.; Caballero, E.; Zhang, D.; Bengio, Y.; Mitliagkas, I.; and Rish, I. 2021. Invariance Principle Meets Information Bottleneck for Out-of-Distribution Generalization. arXiv preprint arXiv:2106.06607.
+
+Arjovsky, M.; Bottou, L.; Gulrajani, I.; and Lopez-Paz, D. 2019. Invariant risk minimization. arXiv preprint arXiv:1907.02893.
+
+French, R. M. 1999. Catastrophic forgetting in connectionist networks. Trends in Cognitive Sciences, 3(4): 128-135.
+
+Guyon, I.; and Elisseeff, A. 2003. An Introduction to Variable and Feature Selection. J. Mach. Learn. Res., 3: 1157-1182.
+
+Javed, K.; White, M.; and Bengio, Y. 2020. Learning causal models online. arXiv preprint arXiv:2006.07461.
+
+Lesort, T.; Lomonaco, V.; Stoian, A.; Maltoni, D.; Filliat, D.; and Díaz-Rodríguez, N. 2020. Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges. Information Fusion, 58: 52-68.
+
+Parisi, G. I.; Kemker, R.; Part, J. L.; Kanan, C.; and Wermter, S. 2019. Continual lifelong learning with neural networks: A review. Neural Networks, 113: 54-71.
+
+Pezeshki, M.; Kaba, S.-O.; Bengio, Y.; Courville, A.; Precup, D.; and Lajoie, G. 2020. Gradient starvation: A learning proclivity in neural networks. arXiv preprint arXiv:2011.09468.
+
+Sagawa, S.; Koh, P. W.; Hashimoto, T. B.; and Liang, P. 2019. Distributionally robust neural networks. In International Conference on Learning Representations.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/7_QHeKujaj/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/7_QHeKujaj/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..7682029578be2cd06a2eaa3fdb60472e80d75b0e
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/7_QHeKujaj/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,95 @@
+§ SPURIOUS FEATURES IN CONTINUAL LEARNING
+
+Anonymous
+
+§ ABSTRACT
+
+Continual Learning (CL) is the research field addressing learning without forgetting when the data distribution is not static. To solve problems, continual learning algorithms need to learn robust and stable representations based only on a subset of the data. Those representations are necessarily biased and should be revisited when new data becomes available. This paper studies spurious features' influence on continual learning algorithms. We show that in continual learning, algorithms have to deal with local spurious features that correlate well with labels within a task only but which are not good representations for the concept to learn. One of the big challenges of continual learning algorithms is to discover causal relationships between features and labels under distribution shifts.
+
+§ INTRODUCTION
+
+Feature selection is a standard machine learning problem. Its objective is notably to improve prediction performance (Guyon and Elisseeff 2003). In the presence of spurious features, a learning algorithm may overfit those features and learn a solution that cannot generalize to the test set. This problem can notably be caused by a covariate shift between train and test data.
+
+In continual learning (CL) (French 1999; Parisi et al. 2019; Lesort et al. 2020), the training data distribution changes through time. Hence, spurious features (SFs) in one time-step of the data distribution should not last. A CL algorithm relying on a spurious feature could then be resilient and learn better features later, given more data. Algorithms can also learn to ignore past spurious features (Javed, White, and Bengio 2020). An example of a task with spurious features could be a classification task between cars and bikes. In the training data, all cars are red and all bikes are white, while in the test data both are in a unique blue not available in the train data. A model could easily overfit the colour to solve the task even though it is not discriminative in the test data. Addressing spurious features was one of the major goals of the recent out-of-distribution (OOD) generalization community (Arjovsky et al. 2019; Ahuja et al. 2021; Sagawa et al. 2019; Pezeshki et al. 2020).
+
+
+
+Figure 1: Spurious features and local spurious features. The task is to distinguish the squares from the circles. In Fig. 1a and 1b, the colour is a spurious feature because there is a covariate shift between train and test data. In Fig. 1c and 1d, we observe two tasks of a domain-incremental scenario; the colours are locally spurious in tasks 1 and 2. Even if there is no significant covariate shift between the full train and test data distributions, colours appear discriminative when looking at data within a task.
+
+On the other hand, continual learning gives rise to a second type of spurious feature: local spurious features. Local features are features that correlate well with labels within a task (one state of the data distribution) but not over the full scenario. In contrast to the usual spurious features, this problem is caused by the unavailability of all the data at once. An example classification scenario would be: task 1, blue cars vs yellow bikes; task 2, yellow cars vs blue bikes. Each task can be solved efficiently with the colour feature alone, but if the test data contains cars and bikes of both colours, then colour is no longer discriminative and the model cannot generalize. Even though there is no covariate shift between the full training data and the test data, the model cannot generalize because of the distribution shift through time. It is, therefore, a problem specific to continual learning.
+
+This paper investigates the problem of spurious features (with covariate shift) in continual learning and shows that the continual learning setup leads to a specific type of spurious feature, which we call local spurious features (LSFs) and which arises without any covariate shift, as shown in Fig. 1.
+
+§ PROBLEM FORMULATION
+
+This section introduces the spurious feature problem in a sequence of tasks. The goal is to present the key types of features, namely: general, local, and spurious features.
+
+General Formalism: We consider a continual scenario of classification tasks. We study a function ${f}_{\theta }\left( \cdot \right)$ , implemented as a neural network, parameterized by a vector of parameters $\theta \in {\mathbb{R}}^{p}$ (where $p$ is the number of parameters) representing the set of weight matrices and bias vectors of a deep network. In continual learning, the goal is to find a solution ${\theta }^{ * }$ by minimizing a loss $L$ on a stream of data formalized as a sequence of tasks $\left\lbrack {{\mathcal{T}}_{0},{\mathcal{T}}_{1},\ldots ,{\mathcal{T}}_{T - 1}}\right\rbrack$ , such that $\forall \left( {{x}_{t},{y}_{t}}\right) \sim {\mathcal{T}}_{t}\left( {t \in \left\lbrack {0,T - 1}\right\rbrack }\right) ,{f}_{{\theta }^{ * }}\left( {x}_{t}\right) = {y}_{t}$ . We do not use the task index for inference (i.e. the single-head setting).
+
+
+
+Table 1: Summary of the characteristics of the types of features. For a feature $z$ of a class $c$ , we indicate whether it verifies (1) in different data settings: a single task ${\mathcal{T}}_{t}$ , the whole scenario ${\mathcal{C}}_{T}$ , and the test set ${\mathcal{D}}_{te}$ .
+
+Name: ${\mathcal{T}}_{t}$ / ${\mathcal{C}}_{T}$ / ${\mathcal{D}}_{te}$
+
+Good Feature $\left( {z}_{ + }\right)$ : ✓ / ✓ / ✓
+
+Spurious Feature $\left( {z}_{\text{spur}}\right)$ : ✓ / ✓ / ✘
+
+Local Feature $\left( {z}_{loc}\right)$ : ✓ / ? / ?
+
+Local Spurious Feature $\left( {z}_{\text{spur};t}\right)$ : ✓ / ✘ / ✘
+
+To describe the different types of features, let $z$ be a feature and $x \sim \mathcal{D}$ a data point in dataset $\mathcal{D}$ . We define $w\left( \cdot \right)$ , a function which returns 1 if $z$ is in $x$ and 0 otherwise. $w\left( \cdot \right)$ ’s output is binary for simplicity. Then, for all data with a label $y$ in the dataset $\mathcal{D}$ , we can compute the correlation $c\left( {\mathcal{D},z,y}\right) = \operatorname{correlation}\left( {w\left( {z,x}\right) = 1,Y = y}\right)$ , which estimates how well a feature correlates with the data of a given class. We can then define discriminative features as:
+
+$z$ is discriminative for class $y$ in $\mathcal{D}$ if:
+
+$$
+\forall {y}^{\prime } \in \mathcal{Y},y \neq {y}^{\prime }\;c\left( {\mathcal{D},z,y}\right) \gg c\left( {\mathcal{D},z,{y}^{\prime }}\right) \tag{1}
+$$
+
+$\mathcal{Y}$ is the set of classes in $\mathcal{D}$ . In other words, $z$ is discriminative for $y$ if it correlates significantly more to $y$ ’s data than to the data of any other class. Then a good feature ${z}_{ + }$ for a class $y$ respects (1) for training data ${\mathcal{D}}_{tr}$ and test data ${\mathcal{D}}_{te}$ .
+
+§ SPURIOUS FEATURES VS LOCAL SPURIOUS FEATURES
+
+A spurious feature ${z}_{\text{spur}}$ for a class $y$ respects (1) for training data ${\mathcal{D}}_{tr}$ but not for test data ${\mathcal{D}}_{te}$ : it is well correlated with labels in the training data but not in the test data.
+
+Hence, learning from ${z}_{\text{spur}}$ may yield a low training error but a high test error. The presence of ${z}_{\text{spur}}$ is due to a covariate shift between the train and test distributions, which changes the feature distribution.
+
+In continual learning, such a covariate shift between train and test data means that ${z}_{\text{spur}}$ may likewise lead to poor generalization. Furthermore, features can be locally spurious, i.e., they correlate well with labels within a task but not within the whole scenario. We name them local spurious features (LSFs). We illustrate the difference between spurious features and local spurious features in Figure 1.
+
+At task $t$ , a local spurious feature ${z}_{\text{spur};t}$ respects (1) for a class ${y}_{t}$ in task ${\mathcal{T}}_{t}$ , but not for the whole scenario ${\mathcal{C}}_{T}$ . Formally, $z$ is an LSF for a class $y$ in ${\mathcal{T}}_{t} \sim {\mathcal{C}}_{T}$ , with $t \in \llbracket 0,T - 1\rrbracket$ :
+
+$$
+\text{ if }\forall {y}^{\prime } \in {\mathcal{Y}}_{t},y \neq {y}^{\prime }\;c\left( {{\mathcal{T}}_{t},z,y}\right) \gg c\left( {{\mathcal{T}}_{t},z,{y}^{\prime }}\right) \tag{2}
+$$
+
+$$
+\text{ and }\exists {y}^{\prime \prime } \in \mathcal{Y},y \neq {y}^{\prime \prime }\;c\left( {{\mathcal{C}}_{T},z,y}\right) \not\gg c\left( {{\mathcal{C}}_{T},z,{y}^{\prime \prime }}\right)
+$$
+
+${\mathcal{Y}}_{t}$ is the set of classes in task ${\mathcal{T}}_{t}$ and $\mathcal{Y}$ is the set of classes in the full scenario ${\mathcal{C}}_{T}$ composed of $T$ tasks. An LSF ${z}_{\text{spur};t}$ correlates well with a label on the current task but not on the whole scenario. ${z}_{\text{spur};t}$ can be extended from a single task ${\mathcal{T}}_{t}$ to all tasks seen so far ${\mathcal{T}}_{0 : t}$ without loss of generality.
+
+Global vs Local Optimum: We assume that machine learning models solve tasks by learning to detect/select features that correlate well with labels. Then, while learning on a task $t$ , we distinguish a local optimum ${\theta }_{t}^{ * }$ , satisfactory for the current task ${\mathcal{T}}_{t}$ , from a global optimum ${\theta }_{0 : T}^{ * }$ that is satisfactory for the whole scenario ${\mathcal{C}}_{T}$ (past, current, and future tasks).
+
+Similarly, we can differentiate local and global features, leading to local and global optima. The global features are the good features ${z}_{ + }$ that are predictive for the full scenario. Unfortunately, at time $t$ , we cannot know whether a feature is part of ${z}_{ + }$ without access to the future data. Therefore, algorithms should learn with their current data but update their knowledge afterwards, given new data. For example, in classification, the discriminative features for a given class depend on all the classes. Therefore, when new classes arrive, discriminative features can become outdated in class-incremental scenarios.
+
+To learn robust solutions in CL, algorithms should then be able to deal with both spurious features and local spurious features. One trivial solution to deal with local spurious features is the use of replay. Replay can avoid and fix the influence of local spurious features by providing more context on the full data distribution. Nevertheless, replay can be compute- and data-intensive, and better solutions could be developed.
+
+§ CONCLUSION
+
+Continual learning algorithms are built to learn, accumulate, and memorize knowledge through time in order to reuse it later. Memorizing bad features can have catastrophic repercussions on future performance. Hence, to learn general features, algorithms need to deal with both spurious and local spurious features.
+
+This paper first investigates the question of spurious features in continual learning. Algorithms easily overfit spurious features for one or several tasks, leading to poor generalization, so spurious features are problematic for them. Furthermore, we formalize another type of spurious feature, which we call local spurious features, that can be problematic for continual learning algorithms.
+
+Local spurious features are features that correlate well with labels when only a subset of the data is available but not when all the data is available. These types of features make the discovery of robust features harder. From a causality perspective, local spurious features make it harder to discover the causal relationships between features and labels in continual learning. Causality algorithms could help to find a solution to this issue.
+
+In the continual learning literature, performance decreases are generally attributed to catastrophic forgetting. Our results show that the problem of local spurious features also plays a major role. More research is needed to better understand the impact of local spurious features alongside catastrophic forgetting. Understanding this phenomenon is critical to better address forgetting and feature selection and to enable efficient continual learning.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/9mk7Quvo7L/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/9mk7Quvo7L/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..c9e3d5f1d5d8425a4f0094f05b476fa39d7edb79
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/9mk7Quvo7L/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,95 @@
+# From Continual Learning to Causal Discovery in Robotics
+
+Anonymous submission
+
+## Abstract
+
+Reconstructing accurate causal models of dynamic systems from time-series of sensor data is a key problem in many real-world scenarios. In this paper, we present an overview, based on our experience, of the practical challenges that causal analysis encounters when applied to autonomous robots, and of how Continual Learning (CL) could help to overcome them. We propose a possible way to leverage the CL paradigm to make causal discovery feasible for robotics applications where the computational resources are limited, while at the same time exploiting the robot as an active agent that helps to increase the quality of the reconstructed causal models.
+
+## 1 Introduction
+
+Causal discovery approaches generally build the causal model of the observed scenario from static or time-series data collected and processed in advance. However, in many real-world robotics applications, this approach could prove inefficient or even unfeasible. The link between Continual Learning (CL) (Lesort et al. 2020) and Causality might represent a stepping stone towards the exploitation of causal discovery algorithms (Glymour, Zhang, and Spirtes 2019), which currently suffer many limitations in autonomous robots.
+
+Causal inference is an active research area in different fields, including robotics and autonomous systems (Hellström 2021; Brawer, Qin, and Scassellati 2020; Cao et al. 2021; Katz et al. 2018; Angelov, Hristov, and Ramamoorthy 2019). However, most of these works overlooked some key features that are important for real-world applications, i.e. the computational cost and the memory needed by causal analysis when long time-series are processed to reconstruct a causal model of the observed scenario. To this end, CL's ability to enable the acquisition of more knowledge by trained models, without forgetting previous information and without using previous data recordings, might help to address these problems and to achieve better results in terms of quality of the causal analysis. For instance, a robot in an automated warehouse with humans and various objects (e.g. see Fig. 1) could observe and intervene in the interactions among them (e.g. worker and shelf) in order to build a causal model, and therefore a deep understanding, of the situation. Given the robot's limited hardware resources, though, its causal analysis might be slow and based on a limited amount of data, leading to a low-quality causal model.
+
+The solutions suggested in this paper would allow the robot to overcome its hardware limitations and, moreover, to improve the quality of the causal models by continually feeding new data to the causal analysis while discarding the previously collected data. This would enable a more efficient use of the robot's memory and computing resources compared to existing causal discovery approaches. To summarise, this paper proposes a Causal Robot Discovery (CRD) approach to overcome current limitations in causal analysis for real-world robotics applications, addressing in particular:
+
+- the computing and memory hardware resources of the robot, which may hinder its capability to perform meaningful causal analysis;
+
+- the update of previous causal models with new observational and interventional data from the robot to generate more accurate ones.
+
+## 2 Related Work
+
+Causal discovery: Several methods have been developed over the last few decades to derive causal relationships from observational data, which can be categorized into two main classes (Glymour, Zhang, and Spirtes 2019). The first one includes constraint-based methods, such as Peter and Clark (PC) and Fast Causal Inference (FCI), which rely on conditional independence tests as constraint-satisfaction to recover the causal graph. The second one includes score-based methods, such as Greedy Equivalence Search (GES), which assign a score to each Directed Acyclic Graph (DAG) and perform a search in this score space. However, many of these algorithms work only with static data (i.e. no temporal information) and are not applicable to time-series of sensor data in many robotics applications, for which time-dependent causal discovery methods are instead necessary. To this end, a variation of the PC algorithm, called PCMCI, was adapted and applied to time-series data (Runge 2018; Runge et al. 2019; Saetia, Yoshimura, and Koike 2021).
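As a heavily simplified, self-contained illustration of the first step of such time-dependent discovery (not the actual PCMCI algorithm; the function, data, and threshold below are our illustrative choices), one can scan lagged pairwise dependencies and keep candidate links, which PCMCI would then prune with momentary conditional independence tests:

```python
import numpy as np

# Sketch: propose candidate lagged links X_{t-tau} -> Y_t by thresholding
# lagged correlations. PCMCI's contribution is the conditional-independence
# machinery that removes the spurious candidates this naive scan keeps.
def lagged_links(data, tau_max=2, threshold=0.5):
    T, n_vars = data.shape
    links = []
    for i in range(n_vars):            # candidate cause variable
        for j in range(n_vars):        # candidate effect variable
            for tau in range(1, tau_max + 1):
                x = data[:-tau, i]     # X at time t - tau
                y = data[tau:, j]      # Y at time t
                r = np.corrcoef(x, y)[0, 1]
                if abs(r) > threshold:
                    links.append((i, j, tau, round(float(r), 2)))
    return links

# Synthetic system: variable 0 drives variable 1 at lag 1.
rng = np.random.default_rng(0)
T = 500
x = rng.normal(size=T)
y = np.empty(T)
y[0] = 0.0
for t in range(1, T):
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.normal()
data = np.column_stack([x, y])
print(lagged_links(data))  # expect the single link (0, 1, 1, ~0.99)
```

This naive scan scales with the length of the stored time-series, which is precisely the computational burden that motivates discarding old data and keeping only the learned model, as proposed below.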
+
+Causal robotics: Causal inference has been recently considered in robotics, for example to build and learn a Structural Causal Model (SCM) from a mix of observation and self-supervised trials for tool affordance with a humanoid robot (Brawer, Qin, and Scassellati 2020). Other applications include the use of PCMCI to derive the causal model of an underwater robot trying to reach a target position (Cao et al. 2021) or to predict human spatial interactions in a social robotics context (Castri et al. 2022). Further causality-based approaches can be found in the robot imitation learning and manipulation area (Katz et al. 2018; Angelov, Hristov, and Ramamoorthy 2019; Lee et al. 2021). However, all these solutions rely on a fixed set of time-series for causal analysis and do not consider the computational cost and complexity for online update of the robot's causal models.
+
+Continual learning: The concept of learning continually from experience has been present in artificial intelligence since early days (Weng et al. 2001). Recently this has been explored more systematically in machine learning (Hadsell et al. 2020; Parisi et al. 2019) and robotics (Lungarella et al. 2003; Lesort et al. 2020; Churamani, Kalkan, and Gunes 2020). To our knowledge though, few applications of the continual learning paradigm can be found in the causality field. Javed, White, and Bengio (2020) incorporate causality and continual learning with an online algorithm that continually detects and removes spurious features from a causal model. In (Kummerfeld and Danks 2012, 2013; Kocacoban and Cussens 2019, 2020), instead, algorithms for online causal structure learning are presented to deal with nonstationary data. This is a key feature of data from real-world environments, which is still under-investigated in robotics and therefore motivates our approach proposed next.
+
+## 3 Causal Robot Discovery
+
+A review of the literature revealed that the possible limitations of autonomous robots doing causal discovery with their own on-board sensors have not been taken into account. Indeed, the computational and memory requirements for long time-series of sensor data are often very demanding, making the use of previous algorithms for causal inference unfeasible on such platforms.
+
+Our approach is partially inspired by the work of Kocacoban and Cussens (2019) for handling non-stationary data, but differs from it in two ways. First, we adopt the current state-of-the-art PCMCI method for causal discovery from time-series data; second, we propose to re-learn the causal model not only when the observed scenario changes, but also with each new set of robot observations/interventions (periodically, e.g. every few minutes). In particular, the introduction of the CL paradigm could help the robot to overcome the challenge of limited hardware resources and to improve the quality of the causal analysis even with non-stationary data. In addition, a CRD approach could benefit from the fact that robots are physically embodied in the environment and can actively influence its dynamic processes (i.e. by performing interventions). That is, CRD could improve the accuracy of the causal model by enriching "passive" observational data from the sensors with "active" interventional data from the robot's actions, aimed at collecting specific time-series for causal discovery.
+
+Therefore, our goal is to decrease the need for hardware resources, often scarce in autonomous systems, and to increase the quality of the causal analysis by using the robot as an active agent in the learning process. The proposed CRD system is designed to limit the demand for hardware resources and allow the robot to perform high-quality causal discovery in a reasonable time using its own on-board sensor data.
+
+
+
+Figure 1: CRD approach: the robot provides observational and interventional data about the human-object interaction to the CRD block. The latter generates a causal model which is stored and used to compare the next causal model built on the next robot's observation and intervention.
+
+The proposed approach is depicted in Fig. 1: (i) starting from a prefixed set of variables, the robot collects meaningful data by observing and intervening in the target scenario; (ii) based on this data, a causal model is estimated using PCMCI (Runge 2018), which computes test statistics and p-values as causal strengths of the DAG's links. At this stage, differently from (Kocacoban and Cussens 2019), to increase the accuracy of the causal discovery, the robot keeps collecting data by observing and intervening in the scenario to create new causal models. Periodically, then, the robot compares the new causal models with the old ones, inheriting from the latter only the links that minimise the p-values of the DAG's causal relations. By repeating this process until the observed scenario changes, a stable version of the causal model with minimum uncertainty levels would be reached. The latter is useful not only for modeling the current scenario, but also when it changes. Indeed, by partially reusing the stored causal model that is consistent with the target scenario, even when the latter changes, the estimation of the new model can be significantly sped up (Kocacoban and Cussens 2019).
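The periodic model-comparison step can be sketched as follows (a hypothetical interpretation of "inheriting the links that minimise the p-values"; the link names and p-values are invented for illustration). Each causal model is represented as a dict mapping a lagged link (cause, effect, lag) to its p-value, and the stored model keeps, for every link, the smaller of the two p-values:

```python
# Merge a stored causal model with a freshly estimated one: every link
# retains its lowest observed p-value, so uncertainty can only decrease
# as the robot keeps observing and intervening.
def merge_models(stored, new):
    merged = dict(new)
    for link, p_old in stored.items():
        if link not in merged or p_old < merged[link]:
            merged[link] = p_old
    return merged

# Hypothetical warehouse links: (cause, effect, lag) -> p-value.
stored = {("human", "shelf", 1): 0.04, ("robot", "human", 1): 0.20}
new    = {("human", "shelf", 1): 0.01, ("shelf", "robot", 2): 0.03}
print(merge_models(stored, new))
# ("human","shelf",1) improves to 0.01, ("robot","human",1) is inherited
# at 0.20, and the new link ("shelf","robot",2) is added at 0.03.
```

Only this dict needs to persist between rounds; the raw time-series behind each estimate can be discarded, which is the memory saving argued for in the next paragraph.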
+
+Note that iteratively discarding the old time-series and storing only the built causal model helps to avoid the combinatorial explosion otherwise affecting the PCMCI discovery, therefore allowing the robot to operate and compute new models within a reasonable time. The catastrophic-forgetting problem is also mitigated by the fact that, during continuous operations, the robot observes similar processes with only small incremental changes, which leads to sequences of similar causal models, and that it reconstructs the latter by exploiting relatively small variations of the previous models.
+
+## 4 Conclusion
+
+In this paper we considered the hardware resource limitations of autonomous robots, which are crucial to performing causal inference, and proposed a new causal robot discovery approach to overcome some of the main challenges. This includes improving the quality of the causal models by using the robot as an active agent in the learning process. Future work will be devoted to the implementation and application of this approach to real-world robotics problems, with a special interest in industrial scenarios involving human-robot interaction and collaboration.
+
+## References
+
+Angelov, D.; Hristov, Y.; and Ramamoorthy, S. 2019. Using causal analysis to learn specifications from task demonstrations. In Proc. of the Int. Joint Conf. on Autonomous Agents and Multiagent Systems, AAMAS.
+
+Brawer, J.; Qin, M.; and Scassellati, B. 2020. A causal approach to tool affordance learning. In IEEE/RSJ Int. Conf. on Intell. Robots & Systems (IROS), 8394-8399.
+
+Cao, Y.; Li, B.; Li, Q.; Stokes, A.; Ingram, D.; and Kiprakis, A. 2021. Reasoning Operational Decisions for Robots via Time Series Causal Inference. In 2021 IEEE Int. Conf. on Robotics and Automation (ICRA), 6124-6131.
+
+Castri, L.; Mghames, S.; Hanheide, M.; and Bellotto, N. 2022. Causal Discovery of Dynamic Models for Predicting Human Spatial Interactions. In International Conference on Social Robotics (ICSR).
+
+Churamani, N.; Kalkan, S.; and Gunes, H. 2020. Continual learning for affective robotics: Why, what and how? In 2020 29th IEEE International Conference on Robot and Human Interactive Communication (RO-MAN), 425-431. IEEE.
+
+Glymour, C.; Zhang, K.; and Spirtes, P. 2019. Review of Causal Discovery Methods Based on Graphical Models. Frontiers in Genetics.
+
+Hadsell, R.; Rao, D.; Rusu, A. A.; and Pascanu, R. 2020. Embracing change: Continual learning in deep neural networks. Trends in cognitive sciences, 24(12): 1028-1040.
+
+Hellström, T. 2021. The relevance of causation in robotics: A review, categorization, and analysis. Paladyn, Journal of Behavioral Robotics, 238-255.
+
+Javed, K.; White, M.; and Bengio, Y. 2020. Learning Causal Models Online. CoRR, abs/2006.07461.
+
+Katz, G.; Huang, D. W.; Hauge, T.; Gentili, R.; and Reggia, J. 2018. A novel parsimonious cause-effect reasoning algorithm for robot imitation and plan recognition. IEEE Trans. on Cognitive and Developmental Systems.
+
+Kocacoban, D.; and Cussens, J. 2019. Online Causal Structure Learning in the Presence of Latent Variables. In 2019 18th IEEE International Conference On Machine Learning And Applications (ICMLA), 392-395.
+
+Kocacoban, D.; and Cussens, J. 2020. Fast Online Learning in the Presence of Latent Variables. Digitale Welt, 4(1): 37-42.
+
+Kummerfeld, E.; and Danks, D. 2012. Online learning of time-varying causal structures. In UAI workshop on causal structure learning.
+
+Kummerfeld, E.; and Danks, D. 2013. Tracking time-varying graphical structure. Advances in neural information processing systems, 26.
+
+Lee, T. E.; Zhao, J. A.; Sawhney, A. S.; Girdhar, S.; and Kroemer, O. 2021. Causal Reasoning in Simulation for Structure and Transfer Learning of Robot Manipulation Policies. In 2021 IEEE Int. Conf. on Robotics and Automation (ICRA), 4776-4782.
+
+Lesort, T.; Lomonaco, V.; Stoian, A.; Maltoni, D.; Filliat, D.; and Díaz-Rodríguez, N. 2020. Continual learning for robotics: Definition, framework, learning strategies, opportunities and challenges. Information Fusion, 58: 52-68.
+
+Lungarella, M.; Metta, G.; Pfeifer, R.; and Sandini, G. 2003. Developmental robotics: a survey. Connection science, 15(4): 151-190.
+
+Parisi, G. I.; Kemker, R.; Part, J. L.; Kanan, C.; and Wermter, S. 2019. Continual lifelong learning with neural networks: A review. Neural Networks, 113: 54-71.
+
+Runge, J. 2018. Causal network reconstruction from time series: From theoretical assumptions to practical estimation. Chaos: An Interdisciplinary Journal of Nonlinear Science, 28: 075310.
+
+Runge, J.; Nowack, P.; Kretschmer, M.; Flaxman, S.; and Sejdinovic, D. 2019. Detecting and quantifying causal associations in large nonlinear time series datasets. Science Advances, 5.
+
+Saetia, S.; Yoshimura, N.; and Koike, Y. 2021. Constructing Brain Connectivity Model Using Causal Network Reconstruction Approach. Frontiers in Neuroinformatics, 15: 5.
+
+Weng, J.; McClelland, J.; Pentland, A.; Sporns, O.; Stockman, I.; Sur, M.; and Thelen, E. 2001. Autonomous mental development by robots and animals. Science, 291(5504): 599-600.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/9mk7Quvo7L/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/9mk7Quvo7L/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..c9f260dbde8f6bc41a3bb0b00863e4fb739f1af8
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/9mk7Quvo7L/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,47 @@
+§ FROM CONTINUAL LEARNING TO CAUSAL DISCOVERY IN ROBOTICS
+
+Anonymous submission
+
+§ ABSTRACT
+
+Reconstructing accurate causal models of dynamic systems from time-series of sensor data is a key problem in many real-world scenarios. In this paper, we present an overview, based on our experience, of the practical challenges that causal analysis encounters when applied to autonomous robots, and of how Continual Learning (CL) could help to overcome them. We propose a possible way to leverage the CL paradigm to make causal discovery feasible for robotics applications where the computational resources are limited, while at the same time exploiting the robot as an active agent that helps to increase the quality of the reconstructed causal models.
+
+§ 1 INTRODUCTION
+
+Causal discovery approaches generally build the causal model of the observed scenario from static or time-series data collected and processed in advance. However, in many real-world robotics applications, this approach can be inefficient or even unfeasible. The link between Continual Learning (CL) (Lesort et al. 2020) and Causality might represent a stepping stone towards the exploitation of causal discovery algorithms (Glymour, Zhang, and Spirtes 2019), which currently suffer from many limitations in autonomous robots.
+
+Causal inference is an active research area in different fields, including robotics and autonomous systems (Hellström 2021; Brawer, Qin, and Scassellati 2020; Cao et al. 2021; Katz et al. 2018; Angelov, Hristov, and Ramamoorthy 2019). However, most of these works overlooked some key features that are important for real-world applications, i.e. the computational cost and the memory needed by causal analysis when long time-series are processed to reconstruct a causal model of the observed scenario. To this end, CL's ability to enable trained models to acquire more knowledge without forgetting previous information, and without using previous data recordings, might help to address these problems and to achieve better results in terms of quality of the causal analysis. For instance, a robot in an automated warehouse with humans and various objects (e.g. see Fig. 1) could observe and intervene in the interactions among them (e.g. worker and shelf) in order to build a causal model and therefore a deep understanding of the situation. Given its limited hardware resources, though, the robot's causal analysis might be slow and based on a limited amount of data, leading to a low-quality causal model.
+
+The solutions suggested in this paper would allow the robot to overcome its hardware limitations and, moreover, to improve the quality of the causal models by continually feeding new data into the causal analysis while discarding the old collected ones. This would enable a more efficient use of the robot's memory and computing resources compared to existing causal discovery approaches. To summarise, this paper proposes a Causal Robot Discovery (CRD) approach to overcome current limitations in causal analysis for real-world robotics applications, addressing in particular:
+
+ * the computing and memory hardware resources of the robot, which may hinder its capability to perform meaningful causal analysis;
+
+ * the update of previous causal models with new observational and interventional data from the robot to generate more accurate ones.
+
+§ 2 RELATED WORK
+
+Causal discovery: Several methods have been developed over the last few decades to derive causal relationships from observational data, which can be categorized into two main classes (Glymour, Zhang, and Spirtes 2019). The first one includes constraint-based methods, such as Peter and Clark (PC) and Fast Causal Inference (FCI), which rely on conditional independence tests, treated as constraints to be satisfied, to recover the causal graph. The second one includes score-based methods, such as Greedy Equivalence Search (GES), which assign a score to each Directed Acyclic Graph (DAG) and perform a search in this score space. However, many of these algorithms work only with static data (i.e. no temporal information) and are not applicable to the time-series of sensor data found in many robotics applications, for which time-dependent causal discovery methods are instead necessary. To this end, a variation of the PC algorithm, called PCMCI, was adapted and applied to time-series data (Runge 2018; Runge et al. 2019; Saetia, Yoshimura, and Koike 2021).
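The conditional independence tests that constraint-based methods such as PC and PCMCI rely on can be illustrated with the simplest such test, partial correlation (the ParCorr test commonly paired with PCMCI). The following is a minimal sketch of the idea, not the actual PCMCI implementation:

```python
import numpy as np

def partial_corr(x, y, z):
    """Partial correlation of x and y given conditioning variables z
    (an n x k array): regress z out of both and correlate the residuals."""
    z = np.column_stack([np.ones(len(x)), z])          # add an intercept
    rx = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]  # residual of x on z
    ry = y - z @ np.linalg.lstsq(z, y, rcond=None)[0]  # residual of y on z
    return float(np.corrcoef(rx, ry)[0, 1])

# Chain x -> y -> w: x and w are dependent, but independent given y,
# which is exactly the kind of constraint PC-style methods exploit.
rng = np.random.default_rng(0)
x = rng.normal(size=5000)
y = 0.8 * x + rng.normal(size=5000)
w = 0.8 * y + rng.normal(size=5000)
r_marginal = partial_corr(x, w, np.empty((5000, 0)))  # empty conditioning set
r_given_y = partial_corr(x, w, y.reshape(-1, 1))      # condition on y
```

A high `r_marginal` with a near-zero `r_given_y` lets the algorithm remove the direct link between `x` and `w` from the graph.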
+
+Causal robotics: Causal inference has been recently considered in robotics, for example to build and learn a Structural Causal Model (SCM) from a mix of observation and self-supervised trials for tool affordance with a humanoid robot (Brawer, Qin, and Scassellati 2020). Other applications include the use of PCMCI to derive the causal model of an underwater robot trying to reach a target position (Cao et al. 2021) or to predict human spatial interactions in a social robotics context (Castri et al. 2022). Further causality-based approaches can be found in the robot imitation learning and manipulation area (Katz et al. 2018; Angelov, Hristov, and Ramamoorthy 2019; Lee et al. 2021). However, all these solutions rely on a fixed set of time-series for causal analysis and do not consider the computational cost and complexity of the online update of the robot's causal models.
+
+Continual learning: The concept of learning continually from experience has been present in artificial intelligence since its early days (Weng et al. 2001). Recently this has been explored more systematically in machine learning (Hadsell et al. 2020; Parisi et al. 2019) and robotics (Lungarella et al. 2003; Lesort et al. 2020; Churamani, Kalkan, and Gunes 2020). To our knowledge, though, few applications of the continual learning paradigm can be found in the causality field. Javed, White, and Bengio (2020) incorporate causality and continual learning with an online algorithm that continually detects and removes spurious features from a causal model. In (Kummerfeld and Danks 2012, 2013; Kocacoban and Cussens 2019, 2020), instead, algorithms for online causal structure learning are presented to deal with non-stationary data. This is a key feature of data from real-world environments, which is still under-investigated in robotics and therefore motivates the approach we propose next.
+
+§ 3 CAUSAL ROBOT DISCOVERY
+
+A review of the literature revealed that the possible limitations of autonomous robots doing causal discovery with their own on-board sensors have not been taken into account. Indeed, the computational and memory requirements for long time-series of sensor data are often very demanding, making the use of previous algorithms for causal inference unfeasible on such platforms.
+
+Our approach is partially inspired by the work of Kocacoban and Cussens (2019) on handling non-stationary data, but differs from it in two ways. First of all, we adopt the current state-of-the-art PCMCI method for causal discovery from time-series data; second, we propose to re-learn the causal model not only when the observed scenario changes, but also with each new set of robot observations/interventions (periodically, e.g. every few minutes). In particular, the introduction of the CL paradigm could help the robot to overcome the challenge of limited hardware resources and to improve the quality of the causal analysis even with non-stationary data. In addition, a CRD approach could benefit from the fact that robots are physically embodied in the environment and can actively influence its dynamic processes (i.e. by performing interventions). That is, CRD could improve the accuracy of the causal model by enriching "passive" observational data from the sensors with "active" interventional data from the robot's actions, aimed at collecting specific time-series for causal discovery.
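The periodic re-learning described above can be sketched as follows, with `estimate_model` and `merge` as hypothetical stand-ins for the PCMCI run and the p-value-based model comparison; the sketch only illustrates how raw time-series are discarded while the causal model persists:

```python
# Illustrative sketch (not the authors' implementation) of the periodic
# re-learning loop: only the current data window and the stored causal model
# persist; raw time-series from earlier windows are discarded.

def continual_discovery(windows, estimate_model, merge):
    """windows: iterable of time-series chunks. `estimate_model` stands in
    for a PCMCI run on one window; `merge` for the p-value-based comparison
    of the new model with the stored one."""
    stored_model = {}
    for chunk in windows:          # old raw data is never kept around
        new_model = estimate_model(chunk)
        stored_model = merge(stored_model, new_model)
    return stored_model
```

Memory use is thus bounded by one observation/intervention window plus the (much smaller) causal model, regardless of how long the robot operates.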
+
+Therefore, our goal is to decrease the need of hardware resources - often scarce in autonomous systems - and to increase the causal analysis quality by using the robot as an active agent in the learning process. The proposed CRD system is thought to limit the demand for hardware resources and allow the robot to perform high quality causal discovery in a reasonable time by using its own on-board sensor data.
+
+
+Figure 1: CRD approach: the robot provides observational and interventional data about the human-object interaction to the CRD block. The latter generates a causal model which is stored and used to compare the next causal model built on the next robot's observation and intervention.
+
+The proposed approach is depicted in Fig. 1: (i) starting from a prefixed set of variables, the robot collects meaningful data by observing and intervening in the target scenario; (ii) based on these data, a causal model is estimated using PCMCI (Runge 2018), which computes test statistics and p-values as causal strengths of the DAG's links. At this stage, differently from (Kocacoban and Cussens 2019), to increase the accuracy of the causal discovery, the robot keeps collecting data by observing and intervening in the scenario to create new causal models. Periodically, the robot then compares the new causal models with the old ones, inheriting from the latter only the links that minimise the p-values of the DAG's causal relations. By repeating this process until the observed scenario changes, a stable version of the causal model with minimum uncertainty levels can be reached. The latter is useful not only for modeling the current scenario, but also when it changes: by partially reusing the stored causal model that is still consistent with the target scenario, even when the latter changes, the estimation of the new model can be significantly sped up (Kocacoban and Cussens 2019).
+
+Note that iteratively discarding the old time-series and storing only the built causal model helps to avoid the combinatorial explosion that would otherwise affect the PCMCI discovery, therefore allowing the robot to operate and compute new models within a reasonable time. The catastrophic-forgetting problem is also mitigated by the fact that, during continuous operations, the robot observes similar processes with only small incremental changes, which leads to sequences of similar causal models, and that it reconstructs the latter by exploiting relatively small variations of the previous models.
+
+§ 4 CONCLUSION
+
+In this paper we considered the hardware resource limitations of autonomous robots, which critically affect their ability to perform causal inference, and proposed a new approach for causal robot discovery to overcome some of the main challenges. This includes improving the quality of the causal models by using the robot as an active agent in the learning process. Future work will be devoted to the implementation and application of this approach to real-world robotics problems, with a special interest in industrial scenarios involving human-robot interaction and collaboration.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/PYLLQ9emxhI/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/PYLLQ9emxhI/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..c65a7ddfaddc92f1c109c7d55f143fc2d9d75e0d
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/PYLLQ9emxhI/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,159 @@
+# Prospects of Continual Causality for Industrial Applications
+
+Anonymous submission
+
+## Abstract
+
+We have been working on causal analysis of industrial plants' process data and its applications, such as material quantity optimization using intervention effects. However, process data often exhibits problems such as non-stationary characteristics, including distribution shifts, which make such applications difficult. Combined with the idea of continual learning, causal models may be able to overcome these problems. We present the potential and prospects of continual causality for industrial applications, reviewing previous work. We also briefly introduce the idea of a specific new causal discovery method that uses a continual framework.
+
+## Our Position and Purpose
+
+We have been working on research and business applications for industrial plants in which predictive models are created from data and their results are then used to drive later actions that achieve specific objectives. We have found that the concepts of continual learning and causality are important to achieve these goals. Regarding the combination of continual learning and causality (continual causality), we present the challenges we have faced so far and the prospects for their solutions. In particular, we are currently considering a new causal discovery method that deals with non-stationarity and non-linearity through continual learning; its basic idea is also discussed later in this paper.
+
+## Discussion
+
+## Causality in Industrial Applications
+
+In many industrial applications of AI, the purpose of prediction is often to stabilize or maximize a specific variable using the predicted value, e.g. optimizing the output product for the material input in plants. Since a mere prediction model may not capture the data generation process, it may not be possible to estimate intervention effects, such as how much the production rate will increase when the material input is increased. Therefore, causal analysis is important for such applications. Causality is also useful from the standpoint of interpretability: plants carry a high risk of accidents and the resulting damage, so it is important to understand the basis and reasons for various types of predictions.
+
+While causality is useful, the complete picture of causal relationships is rarely available for plant process data. This is partly because plant processes often include feedback loops and material reuse, and there may be time-delayed effects among processes, so the causal relationships and their directions are often nontrivial. Therefore, it is important to identify unknown causal relationships as well as to estimate intervention effects. However, it is difficult to conduct experiments such as Randomized Controlled Trials (RCTs) to identify causal relationships under non-operating conditions, because of the risk of accidents, the resulting damage, and business factors in plants. Therefore, the framework of causal discovery, which finds causal relationships and directions only from data, is important.
+
+## Potential of Continual Learning
+
+Modeling plant process data is also difficult in terms of maintenance over time. For example, there are many non-stationary characteristics such as instability at start-up, distribution shifts due to changes in the quantity or type of products, trends due to equipment aging, and seasonality caused by outdoor temperature changes (Kadlec, Gabrys, and Strandt 2009). By using the concept of continual learning, the system can be expected to continuously train models and adapt to changes in system conditions. We have been working on this with a new method, JIT-LiNGAM (Anonymous, submitted to CLeaR 2023), which is described below.
+
+Moreover, it may also be possible to reconsider the aforementioned tasks of stabilizing and maximizing plant process variables not only in the context of system control or causality, but also in the context of reinforcement learning and continual learning (future work).
+
+## Past Efforts and Future Prospects
+
+In this paper, we discuss the following challenges and describe the efforts of researchers, including ourselves, so far, together with future prospects.
+
+## Causal Discovery
+
+Causal discovery is a framework for identifying unknown causal relationships and directions only from data. As mentioned above, this framework is important because causal relationships are often unknown in plant process data. Discovered causal relationships are used for later intervention effect estimation and optimization, as well as for variable selection and model interpretation. LiNGAM (Shimizu et al. 2006) is a representative linear causal discovery method. There are also extensions of it with partial prior knowledge (Shimizu et al. 2011) and with latent variables (Hoyer et al. 2008). Several non-linear methods are also known (Peters et al. 2014; Zheng et al. 2020; Uemura et al. 2022).
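The core idea of LiNGAM, that non-Gaussianity makes the causal direction identifiable, can be illustrated in the two-variable case: only in the true direction is the regression residual independent of the regressor. The sketch below is our own illustration, using higher-order cross-correlations as a crude independence proxy rather than LiNGAM's actual ICA-based estimation:

```python
import numpy as np

def residual(target, regressor):
    """OLS residual of target regressed on regressor (both 1-D arrays)."""
    b = np.cov(regressor, target)[0, 1] / np.var(regressor)
    return target - b * regressor

def dependence(a, b):
    """Crude independence proxy: higher-order cross-correlations, which
    vanish only when the two variables are truly independent."""
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return abs(np.corrcoef(a ** 3, b)[0, 1]) + abs(np.corrcoef(a, b ** 3)[0, 1])

def lingam_direction(u, v):
    """Return 'u->v' if the residual of v on u looks more independent of u
    than the residual of u on v does of v; this is identifiable only
    because the noise below is non-Gaussian."""
    score_uv = dependence(u, residual(v, u))  # small if u -> v
    score_vu = dependence(v, residual(u, v))  # small if v -> u
    return "u->v" if score_uv < score_vu else "v->u"

rng = np.random.default_rng(1)
cause = rng.uniform(-1, 1, 20000)                  # non-Gaussian cause
effect = 2.0 * cause + rng.uniform(-1, 1, 20000)   # linear effect, non-Gaussian noise
```

With Gaussian variables both directions would fit equally well; the uniform noise is what breaks the symmetry and lets the direction be recovered.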
+
+
+
+Figure 1: Flows of the vinyl acetate production plant simulator (Luyben and Tyréus 1998).
+
+
+
+Figure 2: Results of applying VAR-LiNGAM to plant simulator data. Edges represent linear causality coefficients. Each node $x_7 \sim x_{10}$ represents a certain process variable; for example, $x_7(t-1)$ denotes the value of $x_7$ one step before time $t$.
+
+We have applied these methods to actual plant process data, but we have faced the common problem that the "true causal relationships" are unknown, so it is difficult to evaluate the results. However, the causal model could be continuously evaluated indirectly based on the results of later interventional actions, and updated accordingly. This should be considered future work on continual causality.
+
+## Time-series Extension
+
+It is necessary to introduce time-series models to account for time-lagged variables in causal discovery. Furthermore, it enables us to construct causal models without contradiction by expanding feedback loops along the time direction. Specific methods include VAR-LiNGAM (Hyvärinen et al. 2010). We have conducted numerical experiments applying VAR-LiNGAM to simulation data of vinyl acetate plants (Luyben and Tyréus 1998). The results of these experiments are briefly presented in Figures 1 and 2.
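As a rough illustration of the time-lagged backbone that VAR-LiNGAM builds on, a lag-1 VAR model can be fitted by least squares as below; the actual method additionally applies LiNGAM to the VAR residuals to identify instantaneous effects, which this sketch omits:

```python
import numpy as np

def fit_var1(X):
    """Least-squares fit of X[t] = B @ X[t-1] + noise; returns B with
    B[i, j] = effect of variable j at time t-1 on variable i at time t.
    This is only the lagged stage; VAR-LiNGAM would additionally run
    LiNGAM on the residuals for instantaneous effects."""
    past, present = X[:-1], X[1:]
    B, *_ = np.linalg.lstsq(past, present, rcond=None)
    return B.T

# Toy two-variable process: x0 drives x1 with one step of delay.
rng = np.random.default_rng(0)
n = 5000
X = np.zeros((n, 2))
for t in range(1, n):
    X[t, 0] = 0.5 * X[t - 1, 0] + rng.normal(scale=0.1)
    X[t, 1] = 0.9 * X[t - 1, 0] + rng.normal(scale=0.1)
B = fit_var1(X)   # B[1, 0] should recover the lagged effect 0.9
```

The recovered matrix directly yields a lagged causal graph like the one in Figure 2, with `B[i, j]` as the edge weight from $x_j(t-1)$ to $x_i(t)$.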
+
+## Optimal Intervention
+
+Once a complete causal model is available, the optimal amount of intervention on an operable variable can be calculated backward so that a certain target variable takes a specific value (Pearl, Glymour, and Jewell 2016). There are some extensions of this approach, for example, a method including predictive models (Blöbaum and Shimizu 2017) and a method that estimates the optimal individual-level intervention (Kiritoshi et al. 2021).
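Backward calculation of the intervention amount can be made concrete with a toy linear chain (our own example, not taken from the cited works):

```python
# Toy linear structural causal model (our own illustration, not from the
# cited works): material -> rate -> product_yield, with
#   rate          = 2.0 * material
#   product_yield = 0.5 * rate + 10.0
# Under do(material = m), product_yield = 0.5 * 2.0 * m + 10.0, so the
# required material for a target yield is obtained by solving backward.

def required_material(target_yield, b_rate=2.0, b_yield=0.5, bias=10.0):
    """Invert the chain: target_yield = b_yield * b_rate * m + bias."""
    return (target_yield - bias) / (b_yield * b_rate)

m = required_material(25.0)  # intervention needed for a target yield of 25.0
```

In practice the coefficients would come from the discovered causal model rather than being assumed, which is exactly why identifying the correct structure matters for the optimization step.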
+
+
+
+Figure 3: The idea of JIT-LiNGAM
+
+## Continual Causal Discovery : JIT-LiNGAM
+
+We proposed JIT-LiNGAM (Anonymous, submitted to CLeaR 2023), a causal discovery method for non-stationarity, such as distribution shifts, and for non-linear causal relationships, in which LiNGAM is combined with Just-In-Time modeling (JIT) (Stenman, Gustafsson, and Ljung 1996; Bontempi, Birattari, and Bersini 1999). JIT is a method conventionally used for soft sensors (pseudo-sensors in plants for difficult-to-measure locations, using regression models, etc.), where local linear models are trained continually by extracting neighboring samples of the current input sample from a database. Based on Taylor's theorem, non-linear phenomena in plants can be approximated by local linear models, and by using neighboring samples for the modeling, the method can follow continual changes in plants. Moreover, the database can be updated by adding samples online, but due to limitations of memory and computational complexity, efficient use of data is necessary. For example, restricting the database to the most recent several years of data, as well as other developments such as the use of influence functions or of continual learning and reinforcement learning methods, may be considered (future work).
+
+Extensions to time-delayed causality, as described above, and to optimal intervention are also possible. In addition, since the method can capture snapshots of non-linear, non-stationary and dynamically changing causal relationships, it may even be able to deal with cases where causal directions are being reversed. This can be a solution to the plant feedback-loop problem described above.
+
+## Conclusion
+
+We presented our position in the causal analysis research area relevant to continual learning problems. We are currently working on each introduced theme independently, but in the future we will need to integrate them. In particular, JIT-LiNGAM is expected to be extended in various ways. Continual causality is a still largely unexplored area, and much further research will be conducted in the future.
+
+## Appendix: JIT-LiNGAM Algorithm
+
+We show the details of JIT-LiNGAM in Algorithm 1. More details are given in our submitted paper (Anonymous, submitted to CLeaR 2023).
+
+Algorithm 1: JIT Algorithm for Time-Series Causal Discovery (JIT-LiNGAM)
+
+---
+
+**Inputs:** stored data $\mathcal{D} = \left\{ \mathbf{x}^{(t)} \mid t = 1, \ldots, T-1 \right\}$, query point $\mathbf{x}_{\mathrm{Q}} = \mathbf{x}^{(T)}$, distance function $d(\cdot, \cdot)$, number of neighbors $K$.
+
+**Outputs:** weighted adjacency matrix $\mathbf{J}(\mathbf{x}^{(T)})$, representing the causality defined in the neighborhood of the query point $\mathbf{x}^{(T)}$.
+
+**Procedure 1.** Extract the $K$ data points $\mathbf{x}^{(t)}$ from $\mathcal{D}$ based on $d(\mathbf{x}^{(t)}, \mathbf{x}_{\mathrm{Q}})$, the distance from the query point $\mathbf{x}_{\mathrm{Q}}$. (The details of how to extract the $K$ data points are described in the paper (Anonymous, submitted to CLeaR 2023).) The resulting $K$-data subset $\Omega(\mathbf{x}_{\mathrm{Q}}; d, K)$ is:
+
+$$
+\Omega(\mathbf{x}_{\mathrm{Q}}; d, K) = \left\{ \mathbf{x}^{(\sigma(k))} \mid k = 1, \ldots, K \right\},
+$$
+
+where $\sigma(k)$ is a function that returns the time index $t$ of the $k$-th nearest sample in $\mathcal{D}$.
+
+**Procedure 2.** Centralize $\Omega(\mathbf{x}_{\mathrm{Q}}; d, K)$ to obtain $\widetilde{\Omega}(\mathbf{x}_{\mathrm{Q}}; d, K)$, where the mean is subtracted from each element of $\Omega(\mathbf{x}_{\mathrm{Q}}; d, K)$ along each dimension of $\mathbf{x}$.
+
+**Procedure 3.** Train LiNGAM using $\widetilde{\Omega}(\mathbf{x}_{\mathrm{Q}}; d, K)$ and obtain the resulting weighted adjacency matrix $\mathbf{J}(\mathbf{x}^{(T)})$.
+
+---
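The three procedures of Algorithm 1 can be sketched as follows, with a generic `fit` callback standing in for LiNGAM and Euclidean distance as an illustrative choice of $d$ (both are our assumptions, not fixed by the algorithm):

```python
import numpy as np

def jit_neighbors(D, x_q, K):
    """Procedure 1: the K stored samples nearest to the query point x_q
    (Euclidean distance as an illustrative choice of the function d)."""
    dist = np.linalg.norm(D - x_q, axis=1)
    return D[np.argsort(dist)[:K]]

def jit_local_model(D, x_q, K, fit):
    """Procedures 1-3: extract neighbors, centralize them, and train a
    local model. `fit` stands in for LiNGAM and receives the centralized
    K x m data matrix."""
    omega = jit_neighbors(D, x_q, K)          # Procedure 1
    omega_tilde = omega - omega.mean(axis=0)  # Procedure 2: centralize
    return fit(omega_tilde)                   # Procedure 3
```

Any estimator can be plugged in as `fit`; in JIT-LiNGAM it would be a LiNGAM run returning the local weighted adjacency matrix $\mathbf{J}(\mathbf{x}^{(T)})$.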
+
+## References
+
+Anonymous. Submitted to CLeaR 2023. Causal Discovery for Non-stationary Non-linear Time-series Data Using Just-In-Time Modeling.
+
+Blöbaum, P.; and Shimizu, S. 2017. Estimation of interventional effects of features on prediction. In 2017 IEEE 27th International Workshop on Machine Learning for Signal Processing (MLSP), 1-6. IEEE.
+
+Bontempi, G.; Birattari, M.; and Bersini, H. 1999. Lazy learning for local modelling and control design. International Journal of Control, 72(7-8): 643-658.
+
+Hoyer, P. O.; Shimizu, S.; Kerminen, A. J.; and Palviainen, M. 2008. Estimation of causal effects using linear non-Gaussian causal models with hidden variables. International Journal of Approximate Reasoning, 49(2): 362-378.
+
+Hyvärinen, A.; Zhang, K.; Shimizu, S.; and Hoyer, P. O. 2010. Estimation of a structural vector autoregression model using non-Gaussianity. Journal of Machine Learning Research, 11(5).
+
+Kadlec, P.; Gabrys, B.; and Strandt, S. 2009. Data-driven soft sensors in the process industry. Computers & chemical engineering, 33(4): 795-814.
+
+Kiritoshi, K.; Izumitani, T.; Koyama, K.; Okawachi, T.; Asahara, K.; and Shimizu, S. 2021. Estimating individual-level optimal causal interventions combining causal models and machine learning models. In The KDD'21 Workshop on Causal Discovery, 55-77. PMLR.
+
+Luyben, M. L.; and Tyréus, B. D. 1998. An industrial design/control study for the vinyl acetate monomer process. Computers & Chemical Engineering, 22(7-8): 867-877.
+
+Pearl, J.; Glymour, M.; and Jewell, N. P. 2016. Causal Inference in Statistics: A Primer. John Wiley & Sons.
+
+Peters, J.; Mooij, J. M.; Janzing, D.; and Schölkopf, B. 2014. Causal discovery with continuous additive noise models. Journal of Machine Learning Research, 15: 2009-2053.
+
+Shimizu, S.; Hoyer, P. O.; Hyvärinen, A.; Kerminen, A.; and Jordan, M. 2006. A linear non-Gaussian acyclic model for causal discovery. Journal of Machine Learning Research, 7(10).
+
+Shimizu, S.; Inazumi, T.; Sogawa, Y.; Hyvärinen, A.; Kawahara, Y.; Washio, T.; Hoyer, P. O.; and Bollen, K. 2011. DirectLiNGAM: A direct method for learning a linear non-Gaussian structural equation model. The Journal of Machine Learning Research, 12: 1225-1248.
+
+Stenman, A.; Gustafsson, F.; and Ljung, L. 1996. Just in time models for dynamical systems. In Proceedings of 35th IEEE Conference on Decision and Control, volume 1, 1115-1120. IEEE.
+
+Uemura, K.; Takagi, T.; Takayuki, K.; Yoshida, H.; and Shimizu, S. 2022. A multivariate causal discovery based on post-nonlinear model. In Conference on Causal Learning and Reasoning, 826-839. PMLR.
+
+Zheng, X.; Dan, C.; Aragam, B.; Ravikumar, P.; and Xing, E. 2020. Learning sparse nonparametric DAGs. In International Conference on Artificial Intelligence and Statistics, 3414-3425. PMLR.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/PYLLQ9emxhI/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/PYLLQ9emxhI/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..2ebb602e28d4fe73245af7fc71ece9df46d8a30b
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/PYLLQ9emxhI/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,121 @@
+§ PROSPECTS OF CONTINUAL CAUSALITY FOR INDUSTRIAL APPLICATIONS
+
+Anonymous submission
+
+§ ABSTRACT
+
+We have been working on causal analysis of industrial plants' process data and its applications, such as material quantity optimization using intervention effects. However, process data often exhibits problems such as non-stationary characteristics, including distribution shifts, which make such applications difficult. Combined with the idea of continual learning, causal models may be able to overcome these problems. We present the potential and prospects of continual causality for industrial applications, reviewing previous work. We also briefly introduce the idea of a specific new causal discovery method that uses a continual framework.
+
+§ OUR POSITION AND PURPOSE
+
+We have been working on research and business applications for industrial plants in which predictive models are created from data and their results are then used to drive later actions that achieve specific objectives. We have found that the concepts of continual learning and causality are important to achieve these goals. Regarding the combination of continual learning and causality (continual causality), we present the challenges we have faced so far and the prospects for their solutions. In particular, we are currently considering a new causal discovery method that deals with non-stationarity and non-linearity through continual learning; its basic idea is also discussed later in this paper.
+
+§ DISCUSSION
+
+§ CAUSALITY IN INDUSTRIAL APPLICATIONS
+
+In many industrial applications of AI, the purpose of prediction is often to stabilize or maximize a specific variable using the predicted value, e.g. optimizing the output product for the material input in plants. Since a mere prediction model may not capture the data generation process, it may not be possible to estimate intervention effects, such as how much the production rate will increase when the material input is increased. Therefore, causal analysis is important for such applications. Causality is also useful from the standpoint of interpretability: plants carry a high risk of accidents and the resulting damage, so it is important to understand the basis and reasons for various types of predictions.
+
+While causality is useful, a complete picture of the causal relationships is rarely available for plant process data. Plant processes often include feedback loops and material reuse, and there may be time-delayed effects among processes, so the causal relationships and their directions are often nontrivial. It is therefore important to identify unknown causal relationships as well as to estimate intervention effects. However, it is difficult to conduct experiments such as Randomized Controlled Trials (RCTs) to identify causal relationships under non-operating conditions, because of the risk of accidents, the resulting damage, and business factors. The framework of causal discovery, which finds causal relationships and directions from data alone, is therefore important.
+
+§ POTENTIAL OF CONTINUAL LEARNING
+
+Modeling plant process data is also difficult in terms of maintenance over time. For example, there are many non-stationary characteristics, such as instability at start-up, distribution shifts due to changes in the quantity or type of products, trends from equipment aging, and seasonality caused by outdoor temperature changes (Kadlec, Gabrys, and Strandt 2009). Using the concept of continual learning, a system can be expected to continuously train models and adapt to changes in its conditions. We have been working on this with a new method, JIT-LiNGAM (Anonymous submitted to CLeaR 2023), which is described below.
+
+Moreover, it may also be possible to reconsider the aforementioned tasks of stabilizing and maximizing plant process variables not only in the context of system control or causality, but also in the context of reinforcement learning and continual learning (future work).
+
+§ PAST EFFORTS AND FUTURE PROSPECTS
+
+In this paper, we discuss the following challenges and describe the efforts of researchers, including ourselves, so far as well as future prospects.
+
+§ CAUSAL DISCOVERY
+
+Causal discovery is a framework for identifying unknown causal relationships and their directions from data alone. As mentioned above, this framework is important because causal relationships are often unknown in plant process data. Discovered causal relationships are used for subsequent intervention-effect estimation and optimization, as well as for variable selection and model interpretation. LiNGAM (Shimizu et al. 2006) is a representative linear causal discovery method. There are also extensions with partial prior knowledge (Shimizu et al. 2011) and with latent variables (Hoyer et al. 2008). Several non-linear methods are also known (Peters et al. 2014; Zheng et al. 2020; Uemura et al. 2022).
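As a toy illustration of the principle underlying LiNGAM (our own sketch, not the implementation of any cited method): with non-Gaussian noise, regressing the effect on the cause leaves residuals independent of the regressor, while the reverse regression does not. A squared-correlation score is used here as a crude stand-in for the ICA-based independence measures real LiNGAM relies on:

```python
import numpy as np

def dependence(a, b):
    """Regress b on a (OLS) and score dependence between a and the residual
    via the correlation of squares -- a crude independence proxy standing in
    for LiNGAM's ICA-based tests. Lower = more plausibly a -> b."""
    coef = np.dot(a, b) / np.dot(a, a)
    resid = b - coef * a
    return abs(np.corrcoef(a**2, resid**2)[0, 1])

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 5000)             # non-Gaussian cause
y = 2.0 * x + rng.uniform(-1, 1, 5000)   # linear effect + non-Gaussian noise
x, y = x - x.mean(), y - y.mean()

forward, backward = dependence(x, y), dependence(y, x)
print(forward < backward)  # True: only the true direction leaves independent residuals
```

Real implementations (e.g. the `lingam` Python package) estimate a full adjacency matrix over many variables; the two-variable case above only shows why non-Gaussianity makes the direction identifiable.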
+
+
+Figure 1: Flows of the vinyl acetate production plant simulator (Luyben and Tyréus 1998).
+
+
+Figure 2: Results of applying VAR-LiNGAM to plant simulator data. Edges represent linear causality coefficients. Each node $x_7, \ldots, x_{10}$ is a process variable, and, for example, $x_7(t-1)$ denotes the value of $x_7$ one time step before time $t$.
+
+We have applied these methods to actual plant process data, but we have faced the common problem that the "true causal relationships" are unknown, making it difficult to evaluate the results. However, the causal model could potentially be evaluated continuously and indirectly based on the results of subsequent interventional actions, and updated accordingly. This should be considered future work for continual causality.
+
+§ TIME-SERIES EXTENSION
+
+It is necessary to introduce time-series models to account for time-lagged variables in causal discovery. Doing so also enables us to construct causal models without contradiction by unrolling feedback loops along the time axis. A specific method is VAR-LiNGAM (Hyvärinen et al. 2010). We have conducted numerical experiments applying VAR-LiNGAM to simulation data of a vinyl acetate plant (Luyben and Tyréus 1998); the results are briefly presented in Figures 1 and 2.
+
+§ OPTIMAL INTERVENTION
+
+Once a complete causal model is obtained, the optimal amount of intervention on an operable variable can be calculated backward so that a certain variable takes a specific value (Pearl, Glymour, and Jewell 2016). There are some extensions of this approach, for example a method incorporating predictive models (Blöbaum and Shimizu 2017) and a method that estimates the optimal individual-level intervention (Kiritoshi et al. 2021).
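As a minimal, hypothetical illustration (variable names and coefficients are ours, not from the cited methods): in a fully linear causal model, the required intervention follows from inverting the total causal effect along the path from the operable variable to the target.

```python
# Hypothetical linear SEM for a plant: material input m -> intermediate z -> product y,
# with z = b1 * m + noise and y = b2 * z + noise (coefficients are illustrative).
b1, b2 = 0.8, 1.5

def optimal_intervention(y_target):
    """Back-calculate do(m) so that the expected production hits y_target.
    Under do(m), E[y] = b2 * b1 * m, so invert the total causal effect."""
    total_effect = b1 * b2
    return y_target / total_effect

print(optimal_intervention(12.0))  # about 10.0, since 1.5 * 0.8 * 10.0 = 12.0
```

Note that this back-calculation is only valid because an intervention do(m) severs all other influences on m; a regression of m on y would generally give a different, non-causal answer.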
+
+
+Figure 3: The idea of JIT-LiNGAM
+
+§ CONTINUAL CAUSAL DISCOVERY: JIT-LINGAM
+
+We proposed JIT-LiNGAM (Anonymous submitted to CLeaR 2023), a causal discovery method for non-stationarity such as distribution shifts and for non-linear causal relationships, in which LiNGAM is combined with Just-In-Time modeling (JIT) (Stenman, Gustafsson, and Ljung 1996; Bontempi, Birattari, and Bersini 1999). JIT is a method conventionally used for soft sensors (pseudo-sensors for difficult-to-measure locations in plants, built with regression models, etc.), in which local linear models are trained continually by extracting neighbors of the current input sample from a database. By Taylor's theorem, non-linear phenomena in plants can be approximated by local linear models, and by using neighboring samples for modeling, the method can follow continual changes in plants. Moreover, the database can be updated by adding samples online, but limitations of memory and computational complexity call for efficient use of data. For example, using only data from the most recent several years, or other developments such as the use of influence functions or of continual learning and reinforcement learning methods, may be considered (future work).
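A rough sketch of this JIT loop, with ordinary least squares standing in for LiNGAM as the local linear model and a made-up piecewise-linear process as data:

```python
import numpy as np

def jit_local_model(D, x_q, K):
    """Sketch of the JIT procedures: (1) extract the K nearest stored
    samples to the query, (2) centralize them, (3) fit a local linear
    model (plain OLS stands in for LiNGAM in this toy 2-D case)."""
    # Procedure 1: K nearest neighbours by Euclidean distance
    dists = np.linalg.norm(D - x_q, axis=1)
    omega = D[np.argsort(dists)[:K]]
    # Procedure 2: centralize along each dimension
    omega_c = omega - omega.mean(axis=0)
    # Procedure 3: local linear coefficient of x2 on x1
    a, b = omega_c[:, 0], omega_c[:, 1]
    return np.dot(a, b) / np.dot(a, a)

rng = np.random.default_rng(1)
x1 = rng.uniform(-2, 2, 2000)
# A non-linear (piecewise-linear) relationship x1 -> x2
x2 = np.where(x1 > 0, 3.0 * x1, 0.5 * x1) + 0.01 * rng.normal(size=2000)
D = np.column_stack([x1, x2])

slope_pos = jit_local_model(D, np.array([1.5, 4.5]), 50)    # near 3.0
slope_neg = jit_local_model(D, np.array([-1.5, -0.75]), 50)  # near 0.5
print(slope_pos, slope_neg)
```

Because each query re-extracts its own neighborhood, the fitted coefficient tracks the local regime (about 3.0 on one side, about 0.5 on the other), which is how local linear models can follow non-linear and shifting behavior.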
+
+Extensions to time-delayed causality as described above, and development toward optimal intervention, are also possible. In addition, since the method captures snapshots of non-linear, non-stationary, and dynamically changing causal relationships, it may even handle cases where causal directions reverse over time. This can be a solution to the plant feedback-loop problem described above.
+
+§ CONCLUSION
+
+We presented our positions in the area of causal analysis research relevant to continual learning problems. We are currently working on each of the introduced themes independently, but in the future they need to be integrated. In particular, JIT-LiNGAM is expected to be extended in various ways. Continual causality is still a largely unexplored area, and much further research will be conducted in the future.
+
+§ APPENDIX: JIT-LINGAM ALGORITHM
+
+We show the details of JIT-LiNGAM in Algorithm 1. More details are given in our submitted paper (Anonymous submitted to CLeaR 2023).
+
+Algorithm 1: JIT Algorithm for Time-Series Causal Discovery (JIT-LiNGAM)
+
+Inputs: stored data $\mathcal{D} = \{\mathbf{x}^{(t)} \mid t = 1, \ldots, T-1\}$, query point $\mathbf{x}_{\mathrm{Q}} = \mathbf{x}^{(T)}$, distance function $d(\cdot, \cdot)$, number of neighbors $K$.
+
+Outputs: weighted adjacency matrix $\mathbf{J}(\mathbf{x}^{(T)})$, representing the causality defined in the neighborhood of the query point $\mathbf{x}^{(T)}$.
+
+Procedure 1: Extract the $K$ nearest samples $\mathbf{x}^{(t)}$ from $\mathcal{D}$, based on the distance $d(\mathbf{x}^{(t)}, \mathbf{x}_{\mathrm{Q}})$ from the query point $\mathbf{x}_{\mathrm{Q}}$. (The details of how the $K$ samples are extracted are described in the paper (Anonymous submitted to CLeaR 2023).) The resulting $K$-sample subset $\Omega(\mathbf{x}_{\mathrm{Q}}; d, K)$ is
+
+$$
+\Omega(\mathbf{x}_{\mathrm{Q}}; d, K) = \{\mathbf{x}^{(\sigma(k))} \mid k = 1, \ldots, K\},
+$$
+
+where $\sigma(k)$ returns the time index $t$ of the $k$-th nearest sample in $\Omega(\mathbf{x}_{\mathrm{Q}}; d, K)$.
+
+Procedure 2: Centralize $\Omega(\mathbf{x}_{\mathrm{Q}}; d, K)$ to obtain $\widetilde{\Omega}(\mathbf{x}_{\mathrm{Q}}; d, K)$, subtracting the mean from each element along each dimension of $\mathbf{x}$.
+
+Procedure 3: Train LiNGAM on $\widetilde{\Omega}(\mathbf{x}_{\mathrm{Q}}; d, K)$ to obtain the weighted adjacency matrix $\mathbf{J}(\mathbf{x}^{(T)})$.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/dwIudqXzBCL/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/dwIudqXzBCL/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..8bd64cc6e3631245bbb9841b102d4d259a964d12
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/dwIudqXzBCL/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,109 @@
+# Towards Continual Learning of Causal Models
+
+Anonymous submission
+
+## Abstract
+
+A common assumption in causal modelling is that the relations between variables are fixed mechanisms. In reality, however, these mechanisms often change over time, and new data might no longer fit the original model well. Is it then reasonable to regularly train new models, or can we instead update a single model continually? We propose utilizing the field of continual learning to help keep causal models updated over time.
+
+## Introduction
+
+Causal models (Pearl 2009; Peters, Janzing, and Schölkopf 2017) can be useful in a variety of applications (Koch, Eisinger, and Gebharter 2017; Carriger and Barron 2020; Fenton et al. 2020). Various algorithms exist for causal discovery, i.e. learning the causal graph from data (Glymour, Zhang, and Spirtes 2019). With the causal graph known, a causal model can be trained, enabling a causal perspective on the respective problem. Due to a lack of knowledge about unobserved variables, the relations between effect(s) and cause(s) are usually not deterministic, instead relying on probabilities for making predictions.
+
+Continual learning is another research direction which in recent years has attracted the attention of an increasing number of researchers (based on the number of publications (Mundt et al. 2022)). While we are not aware of any universally accepted definition, we would describe continual learning roughly as updating a model over time, given new information and without losing the knowledge encoded in the original model (Mundt et al. 2022; Chen and Liu 2018).
+
+The general idea behind this paper is that continual learning can help causal models stay up-to-date over time.
+
+Imagine there is a causal model trained on some data and we assume that this is the best possible model that can be obtained using known techniques and the provided data. What if something in the problem changes or we get new data describing only some parts of the true underlying model?
+
+For example, consider a company trying out different products to sell. Every day, they put a different type of product (A) up for sale and at the end of the day, they obtain information about the amount of money earned (B). In this example, the company is still trying to find out which products are selling well and which do not, so it is safe to assume that the amount of money earned here does not causally influence the type of product sold. On the other hand, which products are put up for sale certainly determines the profit at the end of the day. Therefore, we have the small causal graph $\mathrm{A} \rightarrow \mathrm{B}$ and, given enough data, it can be calculated how much profit is to be expected depending on the type of product sold. Most likely, in a real-world scenario, there would be more variables included which have an influence on the model, for example the number of guests entering the store or the day of the year. We exclude most of these in our example but it is reasonable to assume that some of those would be included and known, while other variables which influence the model are not included, either because the actual values are unknown (e. g. wealth of the customers that day), they might not have been thought of, or it was impossible to include them for other reasons. Now, imagine one such variable suddenly becomes available. For example, the company could sell its wares in several places and one day the data scientists get access to new data including the location of the store (C) which now is another cause of the amount of money earned (new causal graph: $\mathrm{A} \rightarrow \mathrm{B} \leftarrow \mathrm{C}$ ). How can the existing model be updated to account for the newly acquired data without having to retrain the entire model?
+
+But even if the general structure of the problem (the causal graph) remains unchanged, we could benefit from continual learning. Staying with the previous example, imagine that the popularity of a product type changes. We would assume that much of the relation between the causes $\mathrm{A}$ and $\mathrm{C}$ and the effect B remains the same, but certainly some relations involving that specific product type change. ${}^{1}$ Is it possible to retain the knowledge which still holds and at the same time update the model to represent the new situation?
+
+In this paper, we mainly focus on Neural Causal Models (NCMs) (Xia et al. 2021) as one example for causal models. Here, continual learning provides tools which can help keep NCMs running and up-to-date.
+
+First, we look in more detail at the problem to be solved.
+
+## Problem
+
+The problem addressed in this position paper is concerned with updating causal models given new information (data). Two general problems can be distinguished:
+
+---
+
+${}^{1}$ Arguably, this is not a change of relations but a change of unknown variables not included in the model (lack of knowledge). Even so, it can be useful to think of it as a change of relationships.
+
+---
+
+P1 Parts of the probability distributions of the model change (the causal graph remains unchanged).
+
+P2 The number of variables in the causal model changes.
+
+Since **P2** usually includes **P1**, **P1** can be seen as a sub-task of **P2** and the overall easier problem.
+
+But why is this an important problem? Is it not easy to simply add the new data to the original data or replace some parts of the original data and retrain the model? Depending on the application, this might be a valid possibility and, in that case, continual learning is not needed. However, just retraining the model has several possible downsides:
+
+A) Time and efficiency in general. Training a new causal model can take a lot of time and resources. Especially if the data changes regularly, it could be unreasonable to train a model from scratch every time.
+
+B) Original data is unavailable. Privacy aspects, storage constraints, and other reasons could make it impossible to keep data stored for a longer time. If the new data is not sufficiently large and complete, a retrained new model could end up significantly worse than the original model, while an updated model could benefit from both the information of the original model (indirectly the original data) and the newly acquired data.
+
+C) New data is incomplete. The new data might not be complete and only contain some features, like a new variable or only the features for one cause-effect relationship which presumably changed. Here, retraining could simply be impossible.
+
+## Proposed Solution Strategy
+
+For the purpose of this position paper, we consider neural causal models as the type of causal model used. Here, parent-child relationships are modeled by neural networks (NNs). Applying these ideas to models other than NCMs might very well be possible but is not within the scope of this position paper.
+
+First, it is worth mentioning that simply using a model such as an NCM inherently (to a degree) opens the causal model up to continual learning. Since the "mechanisms" (functions determining a variable based on the parent variables) are usually assumed to be independent of each other, they can also be updated separately. In other words, if it is known that only a certain subset of mechanisms changed, those can be updated while requiring data only for the features (variables) relevant for these mechanisms (the respective child and parent variables).
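A minimal sketch of this modular structure, with linear maps standing in for the per-mechanism neural networks (all names and numbers below are illustrative, not from the NCM paper):

```python
import numpy as np

class Mechanism:
    """One per-variable mechanism of a toy "NCM". A real NCM uses a neural
    network here; a linear least-squares map keeps the sketch minimal."""
    def __init__(self, n_parents):
        self.w = np.zeros(n_parents)
    def fit(self, parents, child):
        self.w, *_ = np.linalg.lstsq(parents, child, rcond=None)
    def predict(self, parents):
        return parents @ self.w

rng = np.random.default_rng(2)
a = rng.normal(size=(500, 1))
b = 2.0 * a[:, 0] + 0.1 * rng.normal(size=500)   # A -> B
c = -1.0 * b + 0.1 * rng.normal(size=500)        # B -> C

model = {"B": Mechanism(1), "C": Mechanism(1)}
model["B"].fit(a, b)
model["C"].fit(b.reshape(-1, 1), c)

# The mechanism for B changes; only that module needs new data and refitting,
# while the independent mechanism for C is left untouched.
b_new = 5.0 * a[:, 0] + 0.1 * rng.normal(size=500)
model["B"].fit(a, b_new)
print(model["B"].w[0], model["C"].w[0])  # roughly 5.0 and -1.0
```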
+
+This can be very useful but is not an exciting new revelation, so let us return to the two aforementioned problems and discuss solution strategies. For the second, more complex problem, these strategies are less specific but might serve as first steps towards tackling it.
+
+## Problem 1: Change of Probabilities
+
+Retrain. Training a new model is a valid strategy in general but this approach also has various problems (refer to the previous parts of this paper).
+
+Continue Training. One can continue training as before but using the new data. If the data now trained on is representative of the entire problem region, this should work out well. However, if the new data is very specific and does not capture the entire region, catastrophic forgetting (Robins 1995; Kirkpatrick et al. 2017) could become a problem: predictions for data points not represented by the new data become incorrect, although they were correct for the original model.
+
+Continual Learning. Continual learning can help a lot, depending on the specific problem formulation. Assume that we have discrete variables ${}^{2}$ and therefore, given a specific model, there is only a finite number of probabilities a variable can take (one for each parent configuration). If an NCM is used and the new data only covers some of these parent configurations, continual learning methods can be used to avoid (or at least reduce) catastrophic forgetting of the other parent configurations. Continual learning methods for this purpose include elastic weight consolidation (Kirkpatrick et al. 2017) (keep neurons for useful input-output relationships fixed), knowledge distillation (Gou et al. 2021) (distill desired input-output relationships and train a new model), and (pseudo-)rehearsal (Robins 1995) (train on the new data but add artificial or actual data points representing the input-output relationships you want to keep).
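The rehearsal variant, for instance, can be sketched for a single discrete mechanism as follows (the explicit conditional probability table is our simplification of what an NCM would represent with a network):

```python
import numpy as np

def fit_cpt(parent, child):
    """Estimate P(child = 1 | parent) for a binary parent variable
    (a stand-in for one discrete NCM mechanism)."""
    return {p: child[parent == p].mean() for p in (0, 1)}

rng = np.random.default_rng(3)
# Original data covers both parent configurations
parent = rng.integers(0, 2, 4000)
child = (rng.random(4000) < np.where(parent == 1, 0.9, 0.2)).astype(int)

# New data covers ONLY parent = 0, whose mechanism changed (0.2 -> 0.6);
# retraining on it alone would lose the parent = 1 configuration entirely.
new_parent = np.zeros(2000, dtype=int)
new_child = (rng.random(2000) < 0.6).astype(int)

# Rehearsal: mix the new data with stored samples of the untouched configuration
keep = parent == 1
mix_parent = np.concatenate([new_parent, parent[keep]])
mix_child = np.concatenate([new_child, child[keep]])
cpt = fit_cpt(mix_parent, mix_child)
print(cpt)  # about {0: 0.6, 1: 0.9}
```

The rehearsed model picks up the changed probability for parent = 0 while retaining the old one for parent = 1, which is exactly the forgetting-avoidance behavior described above.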
+
+## Problem 2: Change of Variables
+
+Continual learning could also be a helpful tool for updating the structure of a causal model. If a variable is added as a new parent of another variable, the existing NN could be extended by additional neurons to increase its expressivity, and maybe even keep some useful connections and neurons which are still helpful (but they should not be fixed in case the relationship changed significantly). Another idea: if a presumably complex relationship involves two or more effects (child variables), those might benefit from sharing part of the NN architecture at the beginning, with the different child variables represented by different output (task-)specific layers at the end of the NN (Li and Hoiem 2017).
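One way to realize the first idea is to widen the input weight matrix with zero-initialized columns for the new parent, so that the extended network initially computes exactly the old function; the layer sizes and values below are made up:

```python
import numpy as np

# Hypothetical single-hidden-layer mechanism for B with one parent (A).
rng = np.random.default_rng(4)
W = rng.normal(size=(8, 1))   # hidden x old-inputs
v = rng.normal(size=8)        # output weights

def forward(W, x):
    return v @ np.tanh(W @ x)

x_old = np.array([0.7])
before = forward(W, x_old)

# A new parent (C) appears: add a zero-initialized input column. The new
# connection contributes nothing yet, so old predictions are preserved,
# and training can then adapt the new weights from this safe starting point.
W_ext = np.hstack([W, np.zeros((8, 1))])
after = forward(W_ext, np.array([0.7, 1.3]))  # any value of the new parent

print(np.isclose(before, after))  # True: the old function is unchanged
```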
+
+## Conclusion and Outlook
+
+Continual learning and causality (NCMs in particular) have several goals in common, including but not limited to model adaptation given new information, invariance of unchanged knowledge, and efficient use of data. We postulate that causal models therefore can benefit from continual learning methods which are designed to update new or changed parts of a model while keeping other parts functional.
+
+One can also think of further areas in which a continual perspective on causality could help. For example, one might even try to create some kind of "meta model" which operates on top of a causal model but, given the previous changes in that causal model, is tasked to predict how the causal model is expected to change in the future.
+
+---
+
+${}^{2}$ The idea could also work for continuous variables but it requires a more sophisticated approach. For this position paper, we consider discrete variables as the simpler version of this problem.
+
+---
+
+## References
+
+Carriger, J. F.; and Barron, M. G. 2020. A Bayesian network approach to refining ecological risk assessments: Mercury and the Florida panther (Puma concolor coryi). Ecological modelling, 418: 108911.
+
+Chen, Z.; and Liu, B. 2018. Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(3): 1-207.
+
+Fenton, N. E.; Neil, M.; Osman, M.; and McLachlan, S. 2020. COVID-19 infection and death rates: the need to incorporate causal explanations for the data and avoid bias in testing. Journal of Risk Research, 23(7-8): 862-865.
+
+Glymour, C.; Zhang, K.; and Spirtes, P. 2019. Review of causal discovery methods based on graphical models. Frontiers in genetics, 10: 524.
+
+Gou, J.; Yu, B.; Maybank, S. J.; and Tao, D. 2021. Knowledge distillation: A survey. International Journal of Computer Vision, 129(6): 1789-1819.
+
+Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A. A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13): 3521-3526.
+
+Koch, D.; Eisinger, R. S.; and Gebharter, A. 2017. A causal Bayesian network model of disease progression mechanisms in chronic myeloid leukemia. Journal of theoretical biology, 433: 94-105.
+
+Li, Z.; and Hoiem, D. 2017. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12): 2935-2947.
+
+Mundt, M.; Lang, S.; Delfosse, Q.; and Kersting, K. 2022. CLEVA-Compass: A Continual Learning Evaluation Assessment Compass to Promote Research Transparency and Comparability. In International Conference on Learning Representations. https://arxiv.org/abs/2110.03331.
+
+Pearl, J. 2009. Causality. Cambridge university press.
+
+Peters, J.; Janzing, D.; and Schölkopf, B. 2017. Elements of causal inference: foundations and learning algorithms. The MIT Press.
+
+Robins, A. 1995. Catastrophic forgetting, rehearsal and pseudorehearsal. Connection Science, 7(2): 123-146.
+
+Xia, K.; Lee, K.-Z.; Bengio, Y.; and Bareinboim, E. 2021. The causal-neural connection: Expressiveness, learnability, and inference. Advances in Neural Information Processing Systems, 34: 10823-10836.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/dwIudqXzBCL/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/dwIudqXzBCL/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6795a540607e07cdf5aea6bde24851e1de5eb70f
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/dwIudqXzBCL/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,73 @@
+§ TOWARDS CONTINUAL LEARNING OF CAUSAL MODELS
+
+Anonymous submission
+
+§ ABSTRACT
+
+A common assumption in causal modelling is that the relations between variables are fixed mechanisms. But in reality, these mechanisms often change over time and new data might not fit the original model as well. But is it reasonable to regularly train new models or can we update a single model continually instead? We propose utilizing the field of continual learning to help keep causal models updated over time.
+
+§ INTRODUCTION
+
+Causal models (Pearl 2009; Peters, Janzing, and Schölkopf 2017) can be useful in a variety of applications (Koch, Eisinger, and Gebharter 2017; Carriger and Barron 2020; Fenton et al. 2020). Various algorithms exist for causal discovery, i. e. learning the causal graph from data (Glymour, Zhang, and Spirtes 2019). With the causal graph known, causal model can be trained, enabling a causal perspective on the respective problem. Due to a lack of knowledge about unobserved variables, the relations between effect(s) and cause(s) are usually not deterministic, instead relying on probabilities for making predictions.
+
+Continual learning is another research direction which in recent years has attracted the attention of an increasing number of people (based on the number of publications (Mundt et al. 2022)). While we are not aware of any universally accepted definition, we would describe continual learning roughly as updating a model over time, given new information and without losing the knowledge encoded in the original model (Mundt et al. 2022; Chen and Liu 2018).
+
+The general idea behind this paper is that continual learning can help causal models stay up-to-date over time.
+
+Imagine there is a causal model trained on some data and we assume that this is the best possible model that can be obtained using known techniques and the provided data. What if something in the problem changes or we get new data describing only some parts of the true underlying model?
+
+For example, consider a company trying out different products to sell. Every day, they put a different type of product (A) up for sale and at the end of the day, they obtain information about the amount of money earned (B). In this example, the company is still trying to find out which products are selling well and which do not, so it is safe to assume that the amount of money earned here does not causally influence the type of product sold. On the other hand, which products are put up for sale certainly determines the profit at the end of the day. Therefore, we have the small causal graph $\mathrm{A} \rightarrow \mathrm{B}$ and, given enough data, it can be calculated how much profit is to be expected depending on the type of product sold. Most likely, in a real-world scenario, there would be more variables included which have an influence on the model, for example the number of guests entering the store or the day of the year. We exclude most of these in our example but it is reasonable to assume that some of those would be included and known, while other variables which influence the model are not included, either because the actual values are unknown (e. g. wealth of the customers that day), they might not have been thought of, or it was impossible to include them for other reasons. Now, imagine one such variable suddenly becomes available. For example, the company could sell its wares in several places and one day the data scientists get access to new data including the location of the store (C) which now is another cause of the amount of money earned (new causal graph: $\mathrm{A} \rightarrow \mathrm{B} \leftarrow \mathrm{C}$ ). How can the existing model be updated to account for the newly acquired data without having to retrain the entire model?
+
+But even if the general structure of the problem (causal graph) remains unchanged, we could benefit from continual learning. Staying with the previous example, imagine that the popularity of a product type changes. We would assume that much of the relations between the causes $\mathrm{A}$ and $\mathrm{C}$ and the effect B remains the same but certainly some relations including that specific product type change. ${}^{1}$ Is it possible to retain the knowledge which still holds and at the same time update the model to represent the new situation?
+
+In this paper, we mainly focus on Neural Causal Models (NCMs) (Xia et al. 2021) as one example for causal models. Here, continual learning provides tools which can help keep NCMs running and up-to-date.
+
+First, we look in more detail at the problem to be solved.
+
+§ $\MATHBF{{PROBLEM}}$
+
+The problem addressed in this position paper is concerned with updating causal models given new information (data). Here, it can be distinguished between two general problems:
+
+${}^{1}$ Arguably, this is not a change of relations but a change of unknown variables not included in the model (lack of knowledge). Even so, it can be useful to think of it as a change of relationships.
+
+P1 Parts of the probability distributions of the model change (the causal graph remains unchanged).
+
+P2 The number of variables in the causal model changes.
+
+Since $\mathbf{{P2}}$ usually includes $\mathbf{{P1}},\mathbf{{P1}}$ can be seen as a sub-task of $\mathbf{{P2}}$ and the overall easier problem.
+
+But why is this an important problem? Is it not easy to simply add the new data to the original data or replace some parts of the original data and retrain the model? Depending on the application, this might be a valid possibility and, in that case, continual learning is not needed. However, just retraining the model has several possible downsides:
+
+A) Time and efficiency in general. Training a new causal model can take a lot of time and resources. Especially if the data changes regularly, it could be unreasonable to train a model from scratch every time.
+
+B) Original data is unavailable. Privacy aspects, storage constraints, and other reasons could make it impossible to keep data stored for a longer time. If the new data is not sufficiently large and complete, a retrained new model could end up significantly worse than the original model, while an updated model could benefit from both the information of the original model (indirectly the original data) and the newly acquired data.
+
+C) New data is incomplete. The new data might not be complete and only contain some features, like a new variable or only the features for one cause-effect relationship which presumably changed. Here, retraining could simply be impossible.
+
+§ PROPOSED SOLUTION STRATEGY
+
+For the purpose of this position paper, we consider neural causal models as the type of causal model used. Here, parent-child relationships are modeled by neural networks (NNs). Applying the ideas on models other than NCMs might very well be possible but is not within the scope of this position paper.
+
+First, it is worth mentioning that simply using a model such as an NCM inherently (to a degree) opens the causal model up to continual learning. Since the "mechanisms" (functions determining a variable based on the parent variables) are usually assumed to be independent of each other, they can also be updated separately. In other words, if it is known that only a certain subset of mechanisms changed, those can be updated while requiring data only for the features (variables) relevant for these mechanisms (the respective child and parent variables).
+
+This can be very useful but is not an exciting new revelation so let us get back to the two aforementioned problems and discuss solution strategies. For the second, more complex problem, these strategies are less specific but might serve as first steps towards tackling that problem.
+
+§ PROBLEM 1: CHANGE OF PROBABILITIES
+
+Retrain. Training a new model is a valid strategy in general but this approach also has various problems (refer to the previous parts of this paper).
+
+Continue Training. One can continue training as before but using the new data. If what you train on now consists of data representative for the entire problem region, this should work out well. However, if the new data is very specific and does not capture the entire region, catastrophic forgetting (Robins 1995; Kirkpatrick et al. 2017) could become a problem, where predictions for data points not represented by the new data are incorrect, although they were correct for the original model.
+
+Continual Learning. Continual learning can help a lot, depending on the specific problem formulation. Assume that we have discrete variables ${}^{2}$ and therefore, given a specific model, there is only a finite amount of probabilities this variable can obtain (one for each parent configuration). If an $\mathrm{{NCM}}$ is used and the new data only covers some of these parents configurations, continual learning methods can be used to avoid (or at least reduce) catastrophic forgetting of the other parent configurations. Continual learning methods for this purpose include elastic weight consolidation (Kirkpatrick et al. 2017) (keep neurons for useful input-output relationships fixed), knowledge distillation (Gou et al. 2021) (distill desired input-output relationships and train a new model), and (pseudo-)rehearsal (Robins 1995) (train on the new data but add artificial or actual data points representing the input-output relationships you want to keep).
+
+§ PROBLEM 2: CHANGE OF VARIABLES
+
+Continual learning could also be a helpful tool for updating the structure of a causal model. If a variable is added as a new parent to another variable, the existing NN could be extended by additional neurons to increase its expressivity, and maybe even keep some connections and neurons within the NN which are still helpful (but they should not be fixed in case the relationship changed significantly). Another idea: if a relationship between some variables is presumably complex but has two or more effects (child variables), those might benefit from sharing a part of the NN architecture at the beginning, with different child variables represented by different output (task-specific) layers at the end of the NN (Li and Hoiem 2017).
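A minimal sketch of the "extend the NN for a new parent" idea, under toy assumptions (random weights, one hidden layer): a zero-initialized input column leaves the old mechanism's behaviour intact, so training only has to grow the new connection.

```python
import numpy as np

rng = np.random.default_rng(0)

# Mechanism for a variable with two parents: one hidden layer, tanh activation.
W1, b1 = rng.normal(size=(4, 2)), np.zeros(4)
W2, b2 = rng.normal(size=(1, 4)), np.zeros(1)

def mechanism(x, W1, b1, W2, b2):
    return W2 @ np.tanh(W1 @ x + b1) + b2

# A third parent is added: extend W1 with a zero-initialized column, so the
# existing input-output behaviour is preserved exactly until training updates it.
W1_ext = np.hstack([W1, np.zeros((4, 1))])

x_old = np.array([0.3, -1.2])
x_new = np.append(x_old, 0.7)  # value of the new parent variable
assert np.allclose(mechanism(x_old, W1, b1, W2, b2),
                   mechanism(x_new, W1_ext, b1, W2, b2))
```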
+
+§ CONCLUSION AND OUTLOOK
+
+Continual learning and causality (NCMs in particular) have several goals in common, including but not limited to model adaptation given new information, invariance of unchanged knowledge, and efficient use of data. We postulate that causal models therefore can benefit from continual learning methods which are designed to update new or changed parts of a model while keeping other parts functional.
+
+One can also think of further areas in which a continual perspective on causality could help. For example, one might even try to create some kind of "meta model" which operates on top of a causal model but, given the previous changes in that causal model, is tasked to predict how the causal model is expected to change in the future.
+
+${}^{2}$ The idea could also work for continuous variables but it requires a more sophisticated approach. For this position paper, we consider discrete variables as the simpler version of this problem.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/hsdU13XxklX/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/hsdU13XxklX/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..c5d4d7f9533a408c1863e0f360eea14b0f11fea5
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/hsdU13XxklX/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,73 @@
+# Treatment Effect Estimation to Guide Model Optimization in Continual Learning
+
+Anonymous submission
+
+## Abstract
+
+Continual Learning systems are faced with potentially large numbers of tasks to be learned, while the models employed have limited capacity, which makes it potentially impossible to learn all required tasks with a single model. In order to detect at which point a model might break, we propose to use treatment effect estimation techniques to estimate the effect of training a model on a new task w.r.t. some performance measure.
+
+## Motivation
+
+Continually learning new concepts and solving new tasks is a key element of human intelligence which accompanies us through our entire life and seems more important than ever these days. For instance, the rising dynamics of the job market demand that employees learn continually, e.g. using new software introduced in a company, while not forgetting how to solve problems faced all the time during work, e.g. communicating properly with new customers. Continual Learning (CL) aims to transfer this ability to Machine Learning (ML) in order to obtain models which are capable of adapting to new tasks without losing the ability to solve tasks seen earlier, and which exploit knowledge gathered by learning to solve former tasks (Delange et al. 2021; Parisi et al. 2019). In CL, a task typically corresponds to one of the widespread problem definitions used in ML, i.e. supervised learning, unsupervised learning, or combinations thereof. One of the most prominent problems in CL is catastrophic forgetting, which describes the observation that ML models (especially Neural Networks) tend to forget tasks they have learned previously once they are trained to solve a different task (Delange et al. 2021; Parisi et al. 2019). For example, training a CNN for image classification on MNIST and then training the same CNN on Fashion-MNIST, starting from the parametrization obtained on MNIST, will yield a model able to classify Fashion-MNIST images, but at the same time the MNIST performance deteriorates. Methods aiming to overcome this issue either train expert models for each task, replay old data while training on new tasks, or fix certain parameters in the model which are considered important for tasks that can already be solved. Expert models suffer from high resource consumption and cannot exploit old knowledge due to isolated parameter sets per expert. Replay-based approaches also suffer from high memory consumption. Fixing parameters deemed important for former tasks thus seems to be the most reasonable approach. However, with such approaches, questions like the following arise: Does our model have enough capacity to learn a new task? Given a parameterized model, which effect will training on a new task have w.r.t. the overall model performance? Given that we have trained our model on a sequence of tasks, what would be the state and performance of our model if we had not trained on the last $k$ tasks?
+
+Answering such questions is crucial in order to have guarantees w.r.t. model performance and robustness. It also increases the flexibility of CL-systems, since answering such questions allows one to determine when model complexity has to be increased. Estimating effects in counterfactual settings enables CL-systems to find proper trade-offs, e.g. when we have to learn a new task but there is not enough capacity, i.e. we are sure that the overall model performance will decrease. Then, with counterfactual reasoning, one could identify knowledge in the model which causes the lowest decrease in performance once it is discarded to make space for the new task to be learned. We will now show that questions of this form can be answered using the framework of treatment effect estimation.
+
+## Treatment Effect Estimation
+
+Treatment effect estimation (TEE) has its grounding in Causal Inference. The goal is to estimate the effect of an intervention in a system on some variable (Becker and Ichino 2002; Imbens 2004). The do-calculus proposed by Pearl (2009) is a powerful framework which can be used to compute the quantities we need for TEE. The do-calculus is able to capture asymmetries rendered by causal structures (i.e. if $A$ is the cause of $B$, changing $A$ changes $B$ but not vice versa). Following this rationale, the average treatment effect (ATE) of a variable $X$ on a variable $Y$ can be defined as follows:
+
+$$
+\mathrm{{ATE}} = \mathbb{E}\left\lbrack {Y \mid {do}\left( {X = 1}\right) }\right\rbrack - \mathbb{E}\left\lbrack {Y \mid {do}\left( {X = 0}\right) }\right\rbrack \tag{1}
+$$
+
+ATE is just one of many treatment effect quantities one can estimate or compute; another important quantity is the individual treatment effect (ITE), where one focuses on the outcome of an individual system configuration instead of taking an expectation (Tabib and Larocque 2019). However, we will focus on ATE here. It is also possible to consider counterfactual scenarios: Instead of asking how the system will behave under an intervention, we ask how the system would have behaved if an intervention had been performed.
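Eq. (1) can be made concrete with a toy simulation. The structural mechanism `Y := 2*X + U_Y` below is an assumption chosen purely for illustration; intervening on $X$ simply means setting it by hand, regardless of its usual causes.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000
u_y = rng.normal(size=n)  # exogenous noise of Y

def f_y(x, noise):
    # Assumed toy structural mechanism: Y := 2*X + U_Y, so X causes Y.
    return 2.0 * x + noise

# do(X = 1) and do(X = 0): set X by intervention, ignoring its usual causes,
# then compare the expected outcomes as in Eq. (1).
ate = f_y(np.ones(n), u_y).mean() - f_y(np.zeros(n), u_y).mean()
print(round(ate, 6))  # 2.0 -- the causal coefficient of X on Y
```

The asymmetry mentioned above also falls out of this picture: intervening on $Y$ would not touch the mechanism generating $X$, so the reverse "effect" would be zero.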
+
+## Connecting TEE and CL
+
+In order to perform TEE, we have to know which variable is caused by which other variable(s). In most cases a causal graph is known or can be designed by hand. For example, Figure 1 shows a causal graph of one "step" in a CL-system: ${t}_{i}$ denotes a task we obtain at step $i$, ${\tau }_{i}$ is a binary decision variable indicating whether we update our model based on ${t}_{i}$, ${\theta }_{i - 1}$ and ${\theta }_{i}$ are the model parameters at steps $i - 1$ and $i$ respectively, ${l}_{i}$ is the model performance w.r.t. all tasks at step $i$, and ${T}_{i - 1}$ refers to the set of all tasks we have trained on before step $i$. Note that all variables except for ${l}_{i}$ are independent of ${T}_{i - 1}$ since we observe ${\theta }_{i - 1}$, which represents the accumulated knowledge over ${T}_{i - 1}$; thus older tasks are not needed to estimate these variables.
+
+TEE in Factual Settings Sticking with the example in Figure 1, a natural question to be answered is: Obtaining a new task ${t}_{i}$, will the average model performance ${l}_{i}$ significantly decrease when updating the current parameters ${\theta }_{i - 1}$ on ${t}_{i}$? Formally, this question corresponds to estimating the conditional average treatment effect (CATE) $\mathbb{E}\left\lbrack {{l}_{i} \mid {do}\left( {{\tau }_{i} = 1}\right) ,{t}_{i},{\theta }_{i - 1}}\right\rbrack - \mathbb{E}\left\lbrack {{l}_{i} \mid {do}\left( {{\tau }_{i} = 0}\right) ,{t}_{i},{\theta }_{i - 1}}\right\rbrack$. Only the ${do}\left( {{\tau }_{i} = 1}\right)$ case needs to be estimated, since the ${do}\left( {{\tau }_{i} = 0}\right)$ case can be approximated by evaluating the current model on all tasks and averaging the performance. The ${do}\left( {{\tau }_{i} = 1}\right)$ case can be estimated with a 2-step procedure: First, estimate a distribution over ${\theta }_{i}$ s.t. the parameters that would result from training on ${t}_{i}$ have high probability, denoted by $p\left( {{\theta }_{i} \mid {t}_{i},{\theta }_{i - 1}}\right)$. Once this distribution is estimated, the expectation of ${l}_{i}$ can be computed by:
+
+$$
+{\int }_{{l}_{i}}{\int }_{{\theta }_{i}}{l}_{i} \cdot p\left( {{l}_{i} \mid {\theta }_{i}}\right) \cdot p\left( {{\theta }_{i} \mid {t}_{i},{\theta }_{i - 1}}\right) \,d{\theta }_{i}\,d{l}_{i} \tag{2}
+$$
+
+One possible approach to estimate this quantity is to use density estimators for $p\left( {{l}_{i} \mid {\theta }_{i}}\right)$ and $p\left( {{\theta }_{i} \mid {t}_{i},{\theta }_{i - 1}}\right)$ and approximate the integral (e.g. using Monte Carlo approaches). Such a quantity allows one to determine when the model should be equipped with additional capacity, e.g. by adding more parameters. Additionally, the estimated distribution over parameters can be used to warm-start the next training stage.
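The Monte Carlo step can be sketched as follows. The two samplers stand in for the estimated densities $p(\theta_i \mid t_i, \theta_{i-1})$ and $p(l_i \mid \theta_i)$; both are simple Gaussian toys chosen for illustration, not estimators proposed here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins for the two estimated densities:
# a sampler for p(theta_i | t_i, theta_{i-1}) and one for p(l_i | theta_i).
def sample_theta(theta_prev, n):
    return theta_prev + rng.normal(0.0, 0.1, size=(n, theta_prev.size))

def sample_l(thetas):
    # Toy assumption: performance is minus the squared parameter norm, plus noise.
    return -np.sum(thetas ** 2, axis=1) + rng.normal(0.0, 0.01, size=len(thetas))

theta_prev = np.array([0.5, -0.5])
thetas = sample_theta(theta_prev, n=50_000)  # Monte Carlo over theta_i
expected_l = sample_l(thetas).mean()         # estimate of E[l_i | do(tau_i = 1)]
```

Comparing `expected_l` against the performance of the current model on all tasks would then approximate the CATE above.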
+
+TEE in Counterfactual Settings Another issue we are confronted with in CL-settings is the following: Assume we have a fixed resource-constraint (i.e. our model has a maximum possible capacity) and we obtain a new task which will decrease the overall model performance. Then we have to identify those parts of knowledge represented by our model which will cause the lowest decrease in performance. This can be considered as identifying the task that contributes the lowest amount of knowledge to our model, which in turn can be formulated as a counterfactual question: What would the model performance be if we had not trained on ${t}_{i - k}$ but on
+
+
+
+Figure 1: Continual Learning represented as a causal graph. The decision ${\tau }_{i}$ whether the current model parameters ${\theta }_{i - 1}$ are updated using task ${t}_{i}$ depends on ${\theta }_{i - 1}$ (which is observed) and ${t}_{i}$ only. The model parameters ${\theta }_{i}$ at timestep $i$ influence the overall model performance ${l}_{i}$ across all $i$ tasks.
+
+${t}_{i}$ ? This question can be answered by estimating a series of ATEs in counterfactual settings s.t.
+
+$$
+\mathbb{E}\left\lbrack {\overline{{l}_{i}} \mid {do}\left( {{\tau }_{i} = 1}\right) ,{t}_{i},\overline{{\theta }_{i - 1}}}\right\rbrack - \mathbb{E}\left\lbrack {{l}_{i} \mid {do}\left( {{\tau }_{i} = 1}\right) ,{t}_{i},{\theta }_{i - 1}}\right\rbrack
+$$
+
+is maximized, where $\overline{{l}_{i}}$ and $\overline{{\theta }_{i - 1}}$ correspond to the values of ${l}_{i}$ and ${\theta }_{i - 1}$ respectively if ${\tau }_{i - k}$ had been 0, i.e. if we had not trained on ${t}_{i - k}$.
+
+Having such a quantity would not only allow for assessing which knowledge does not contribute much to the overall model performance; it can also be used to prune models s.t. we minimize the model size while retaining the ability to solve all tasks in an acceptable manner. Also, we can use the estimated parameter distribution to warm-start the model once the knowledge whose removal causes the lowest performance drop has been identified. The counterfactual case can be estimated similarly to the factual case.
+
+## Conclusion & Further Work
+
+This vision paper looked at the benefits of using the TEE framework to increase the robustness and flexibility of CL-systems. We propose a starting point that can be used to answer factual and counterfactual questions about CL-systems to guide their optimization behavior. Answering such questions is crucial in productive systems in order to give guarantees w.r.t. model performance and to minimize computational costs (e.g. by using parameter estimations as a warm-start). Additionally, viewing CL-systems through a causal lens allows us to make models more transparent, e.g. by identifying knowledge that does not contribute much to the overall model performance.
+
+Further work should start with solving the factual case, followed by solving the counterfactual case. This requires representing tasks and model-parameters properly which can be achieved using learned representations thereof. Then, the NCM framework proposed by Xia et al. (2021) could be employed to estimate CATE to answer the questions mentioned above. Also, instead of ATE other quantities such as ITE can be considered, e.g. to answer questions about specific tasks.
+
+## References
+
+Becker, S. O.; and Ichino, A. 2002. Estimation of Average Treatment Effects Based on Propensity Scores. The Stata Journal, 2(4): 358-377.
+
+Delange, M.; Aljundi, R.; Masana, M.; Parisot, S.; Jia, X.; Leonardis, A.; Slabaugh, G.; and Tuytelaars, T. 2021. A continual learning survey: Defying forgetting in classification tasks. IEEE Transactions on Pattern Analysis and Machine Intelligence, 1-1.
+
+Imbens, G. W. 2004. Nonparametric Estimation of Average Treatment Effects Under Exogeneity: A Review. The Review of Economics and Statistics, 86(1): 4-29.
+
+Parisi, G. I.; Kemker, R.; Part, J. L.; Kanan, C.; and Wermter, S. 2019. Continual lifelong learning with neural networks: A review. Neural Networks, 113: 54-71.
+
+Pearl, J. 2009. Causality. Cambridge, UK: Cambridge University Press, 2 edition. ISBN 978-0-521-89560-6.
+
+Tabib, S.; and Larocque, D. 2019. Non-parametric individual treatment effect estimation for survival data with random forests. Bioinformatics, 36(2): 629-636.
+
+Xia, K.; Lee, K.; Bengio, Y.; and Bareinboim, E. 2021. The Causal-Neural Connection: Expressiveness, Learnability, and Inference. CoRR, abs/2107.00793.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/hsdU13XxklX/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/hsdU13XxklX/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..376278dfd980713734656fa757731472dd760ff9
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/hsdU13XxklX/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,57 @@
+§ TREATMENT EFFECT ESTIMATION TO GUIDE MODEL OPTIMIZATION IN CONTINUAL LEARNING
+
+Anonymous submission
+
+§ ABSTRACT
+
+Continual Learning systems are faced with potentially large numbers of tasks to be learned, while the models employed have limited capacity, which makes it potentially impossible to learn all required tasks with a single model. In order to detect at which point a model might break, we propose to use treatment effect estimation techniques to estimate the effect of training a model on a new task w.r.t. some performance measure.
+
+§ MOTIVATION
+
+Continually learning new concepts and solving new tasks is a key element of human intelligence which accompanies us through our entire life and seems more important than ever these days. For instance, the rising dynamics of the job market demand that employees learn continually, e.g. using new software introduced in a company, while not forgetting how to solve problems faced all the time during work, e.g. communicating properly with new customers. Continual Learning (CL) aims to transfer this ability to Machine Learning (ML) in order to obtain models which are capable of adapting to new tasks without losing the ability to solve tasks seen earlier, and which exploit knowledge gathered by learning to solve former tasks (Delange et al. 2021; Parisi et al. 2019). In CL, a task typically corresponds to one of the widespread problem definitions used in ML, i.e. supervised learning, unsupervised learning, or combinations thereof. One of the most prominent problems in CL is catastrophic forgetting, which describes the observation that ML models (especially Neural Networks) tend to forget tasks they have learned previously once they are trained to solve a different task (Delange et al. 2021; Parisi et al. 2019). For example, training a CNN for image classification on MNIST and then training the same CNN on Fashion-MNIST, starting from the parametrization obtained on MNIST, will yield a model able to classify Fashion-MNIST images, but at the same time the MNIST performance deteriorates. Methods aiming to overcome this issue either train expert models for each task, replay old data while training on new tasks, or fix certain parameters in the model which are considered important for tasks that can already be solved. Expert models suffer from high resource consumption and cannot exploit old knowledge due to isolated parameter sets per expert. Replay-based approaches also suffer from high memory consumption. Fixing parameters deemed important for former tasks thus seems to be the most reasonable approach. However, with such approaches, questions like the following arise: Does our model have enough capacity to learn a new task? Given a parameterized model, which effect will training on a new task have w.r.t. the overall model performance? Given that we have trained our model on a sequence of tasks, what would be the state and performance of our model if we had not trained on the last $k$ tasks?
+
+Answering such questions is crucial in order to have guarantees w.r.t. model performance and robustness. It also increases the flexibility of CL-systems, since answering such questions allows one to determine when model complexity has to be increased. Estimating effects in counterfactual settings enables CL-systems to find proper trade-offs, e.g. when we have to learn a new task but there is not enough capacity, i.e. we are sure that the overall model performance will decrease. Then, with counterfactual reasoning, one could identify knowledge in the model which causes the lowest decrease in performance once it is discarded to make space for the new task to be learned. We will now show that questions of this form can be answered using the framework of treatment effect estimation.
+
+§ TREATMENT EFFECT ESTIMATION
+
+Treatment effect estimation (TEE) has its grounding in Causal Inference. The goal is to estimate the effect of an intervention in a system on some variable (Becker and Ichino 2002; Imbens 2004). The do-calculus proposed by Pearl (2009) is a powerful framework which can be used to compute the quantities we need for TEE. The do-calculus is able to capture asymmetries rendered by causal structures (i.e. if $A$ is the cause of $B$, changing $A$ changes $B$ but not vice versa). Following this rationale, the average treatment effect (ATE) of a variable $X$ on a variable $Y$ can be defined as follows:
+
+$$
+\mathrm{{ATE}} = \mathbb{E}\left\lbrack {Y \mid {do}\left( {X = 1}\right) }\right\rbrack - \mathbb{E}\left\lbrack {Y \mid {do}\left( {X = 0}\right) }\right\rbrack \tag{1}
+$$
+
+ATE is just one of many treatment effect quantities one can estimate or compute; another important quantity is the individual treatment effect (ITE), where one focuses on the outcome of an individual system configuration instead of taking an expectation (Tabib and Larocque 2019). However, we will focus on ATE here. It is also possible to consider counterfactual scenarios: Instead of asking how the system will behave under an intervention, we ask how the system would have behaved if an intervention had been performed.
+
+§ CONNECTING TEE AND CL
+
+In order to perform TEE, we have to know which variable is caused by which other variable(s). In most cases a causal graph is known or can be designed by hand. For example, Figure 1 shows a causal graph of one "step" in a CL-system: ${t}_{i}$ denotes a task we obtain at step $i$, ${\tau }_{i}$ is a binary decision variable indicating whether we update our model based on ${t}_{i}$, ${\theta }_{i - 1}$ and ${\theta }_{i}$ are the model parameters at steps $i - 1$ and $i$ respectively, ${l}_{i}$ is the model performance w.r.t. all tasks at step $i$, and ${T}_{i - 1}$ refers to the set of all tasks we have trained on before step $i$. Note that all variables except for ${l}_{i}$ are independent of ${T}_{i - 1}$ since we observe ${\theta }_{i - 1}$, which represents the accumulated knowledge over ${T}_{i - 1}$; thus older tasks are not needed to estimate these variables.
+
+TEE in Factual Settings Sticking with the example in Figure 1, a natural question to be answered is: Obtaining a new task ${t}_{i}$, will the average model performance ${l}_{i}$ significantly decrease when updating the current parameters ${\theta }_{i - 1}$ on ${t}_{i}$? Formally, this question corresponds to estimating the conditional average treatment effect (CATE) $\mathbb{E}\left\lbrack {{l}_{i} \mid {do}\left( {{\tau }_{i} = 1}\right) ,{t}_{i},{\theta }_{i - 1}}\right\rbrack - \mathbb{E}\left\lbrack {{l}_{i} \mid {do}\left( {{\tau }_{i} = 0}\right) ,{t}_{i},{\theta }_{i - 1}}\right\rbrack$. Only the ${do}\left( {{\tau }_{i} = 1}\right)$ case needs to be estimated, since the ${do}\left( {{\tau }_{i} = 0}\right)$ case can be approximated by evaluating the current model on all tasks and averaging the performance. The ${do}\left( {{\tau }_{i} = 1}\right)$ case can be estimated with a 2-step procedure: First, estimate a distribution over ${\theta }_{i}$ s.t. the parameters that would result from training on ${t}_{i}$ have high probability, denoted by $p\left( {{\theta }_{i} \mid {t}_{i},{\theta }_{i - 1}}\right)$. Once this distribution is estimated, the expectation of ${l}_{i}$ can be computed by:
+
+$$
+{\int }_{{l}_{i}}{\int }_{{\theta }_{i}}{l}_{i} \cdot p\left( {{l}_{i} \mid {\theta }_{i}}\right) \cdot p\left( {{\theta }_{i} \mid {t}_{i},{\theta }_{i - 1}}\right) \,d{\theta }_{i}\,d{l}_{i} \tag{2}
+$$
+
+One possible approach to estimate this quantity is to use density estimators for $p\left( {{l}_{i} \mid {\theta }_{i}}\right)$ and $p\left( {{\theta }_{i} \mid {t}_{i},{\theta }_{i - 1}}\right)$ and approximate the integral (e.g. using Monte Carlo approaches). Such a quantity allows one to determine when the model should be equipped with additional capacity, e.g. by adding more parameters. Additionally, the estimated distribution over parameters can be used to warm-start the next training stage.
+
+TEE in Counterfactual Settings Another issue we are confronted with in CL-settings is the following: Assume we have a fixed resource-constraint (i.e. our model has a maximum possible capacity) and we obtain a new task which will decrease the overall model performance. Then we have to identify those parts of knowledge represented by our model which will cause the lowest decrease in performance. This can be considered as identifying the task that contributes the lowest amount of knowledge to our model, which in turn can be formulated as a counterfactual question: What would the model performance be if we had not trained on ${t}_{i - k}$ but on
+
+
+Figure 1: Continual Learning represented as a causal graph. The decision ${\tau }_{i}$ whether the current model parameters ${\theta }_{i - 1}$ are updated using task ${t}_{i}$ depends on ${\theta }_{i - 1}$ (which is observed) and ${t}_{i}$ only. The model parameters ${\theta }_{i}$ at timestep $i$ influence the overall model performance ${l}_{i}$ across all $i$ tasks.
+
+${t}_{i}$ ? This question can be answered by estimating a series of ATEs in counterfactual settings s.t.
+
+$$
+\mathbb{E}\left\lbrack {\overline{{l}_{i}} \mid {do}\left( {{\tau }_{i} = 1}\right) ,{t}_{i},\overline{{\theta }_{i - 1}}}\right\rbrack - \mathbb{E}\left\lbrack {{l}_{i} \mid {do}\left( {{\tau }_{i} = 1}\right) ,{t}_{i},{\theta }_{i - 1}}\right\rbrack
+$$
+
+is maximized, where $\overline{{l}_{i}}$ and $\overline{{\theta }_{i - 1}}$ correspond to the values of ${l}_{i}$ and ${\theta }_{i - 1}$ respectively if ${\tau }_{i - k}$ had been 0, i.e. if we had not trained on ${t}_{i - k}$.
+
+Having such a quantity would not only allow for assessing which knowledge does not contribute much to the overall model performance; it can also be used to prune models s.t. we minimize the model size while retaining the ability to solve all tasks in an acceptable manner. Also, we can use the estimated parameter distribution to warm-start the model once the knowledge whose removal causes the lowest performance drop has been identified. The counterfactual case can be estimated similarly to the factual case.
+
+§ CONCLUSION & FURTHER WORK
+
+This vision paper looked at the benefits of using the TEE framework to increase the robustness and flexibility of CL-systems. We propose a starting point that can be used to answer factual and counterfactual questions about CL-systems to guide their optimization behavior. Answering such questions is crucial in productive systems in order to give guarantees w.r.t. model performance and to minimize computational costs (e.g. by using parameter estimations as a warm-start). Additionally, viewing CL-systems through a causal lens allows us to make models more transparent, e.g. by identifying knowledge that does not contribute much to the overall model performance.
+
+Further work should start with solving the factual case, followed by solving the counterfactual case. This requires representing tasks and model-parameters properly which can be achieved using learned representations thereof. Then, the NCM framework proposed by Xia et al. (2021) could be employed to estimate CATE to answer the questions mentioned above. Also, instead of ATE other quantities such as ITE can be considered, e.g. to answer questions about specific tasks.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/iWLbLoleZMN/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/iWLbLoleZMN/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..1aeb2332ee2fada3ecdb2effe435a3e6bead513d
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/iWLbLoleZMN/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,87 @@
+# Issues for Continual Learning in the Presence of Dataset Bias
+
+Anonymous submission
+
+## Abstract
+
+While most continual learning algorithms have focused on tackling the stability-plasticity dilemma, they have overlooked the effects of knowledge transfer when it is *biased* by learning unintended spurious correlations that do not capture the true causal structure of the tasks. In this work, we design systematic data experiments and show that such bias is indeed transferred, both forward and backward, during continual learning, and argue that causality-aware design of continual learning algorithms is critical.
+
+## Introduction
+
+Continual learning (CL) is essential for a system that needs to learn (a potentially increasing number of) tasks from sequentially arriving data in an online fashion. The main challenge of CL is to overcome the stability-plasticity dilemma (Mermillod, Bugaiska, and Bonin 2013), a trade-off where a CL model that focuses too much on the stability of learned tasks suffers from low plasticity when integrating a new task (and vice versa). Recent deep neural network (DNN) based CL methods (Kirkpatrick et al. 2017; Jung et al. 2020; Li and Hoiem 2017) attempt to address the dilemma by devising mechanisms to attain stability while improving plasticity thanks to knowledge transferability (Tan et al. 2018), one of the standout properties of DNNs. Namely, while maintaining the learned knowledge, the performance on a new task (resp. past tasks) is improved by transferring knowledge of past tasks (resp. a new task). These phenomena are called forward and backward transfer, respectively.
+
+Unfortunately, it is widely known that DNNs often dramatically fail to generalize to out-of-distribution data due to learning unintended spurious correlations (e.g., dataset bias (Torralba and Efros 2011)) rather than the true causal relations (Sagawa et al. 2020; Bahng et al. 2020). For instance, a DNN that perfectly classifies birds in the sky may fail on images in which birds appear outside the typical sky background when the model has learned a shortcut strategy relying on the background (Geirhos et al. 2020). Furthermore, a recent work (Salman et al. 2022) shows that such bias in a model can be transferred; namely, biases in pre-trained models remain present even after fine-tuning them on downstream tasks. In CL, when a model becomes biased while learning a specific task, such bias transfer is likely to happen and may continuously lead to unexpected failures.
+
+In this paper, we show that when causal learning is not appropriately considered, naively applying CL methods is problematic since they can maintain unwarranted knowledge (e.g., background bias). To this end, we construct a synthetic dataset with color bias and systematically conduct extensive experiments on various two-task scenarios with different degrees of bias. We identify that the bias of a specific task affects other tasks in CL through two sources: the forward and backward transfer of bias. Specifically, a typical CL method preserves knowledge such that its bias is reused when training on a new task (i.e., forward transfer of bias). Furthermore, the biased knowledge learned from the current task causes the decision rules for the past tasks to become biased (i.e., backward transfer of bias). Our experimental results show that it is necessary to learn the causal relations of each task in CL to address these issues. Furthermore, we demonstrate that, when learning these relations, CL should simultaneously take stability into account, since otherwise severe catastrophic forgetting can occur. Therefore, our results strongly argue for a novel method that performs causal learning while preventing forgetting.
+
+## Case Studies of Bias Transfer in CL
+
+## Experimental Settings
+
+Dataset. We use Split CIFAR-100 (Zenke, Poole, and Ganguli 2017; Chaudhry et al. 2019; van de Ven, Siegelmann, and Tolias 2020), which divides CIFAR-100 into 10 tasks with 10 distinct classes each. To study bias transfer, we modify Split CIFAR-100 such that half of the classes in each task are skewed toward the grayscale domain and the other half toward the color domain. Namely, given a skew-ratio $\alpha \geq {0.5}$, the training images of each class are split into $\alpha$ and $1 - \alpha$ ratios for the two domains. We set 6 bias levels by dividing the range from 0.5 to 0.99 evenly on a log scale for systematic control of the degree of bias.
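The dataset construction above can be sketched as follows. The use of `np.geomspace` for "evenly on a log scale" and the per-class index split are our assumptions about the procedure, for illustration only.

```python
import numpy as np

def bias_levels(n=6, lo=0.5, hi=0.99):
    # Skew-ratios spaced evenly on a log scale between 0.5 and 0.99.
    return np.geomspace(lo, hi, n)

def skewed_split(n_images, alpha, rng):
    """Split one class's training images into (majority-domain, minority-domain)
    index sets with ratio alpha : 1 - alpha."""
    idx = rng.permutation(n_images)
    cut = int(round(alpha * n_images))
    return idx[:cut], idx[cut:]

rng = np.random.default_rng(0)
# At the strongest bias level, a 500-image class is split 495 : 5.
gray_idx, color_idx = skewed_split(500, alpha=float(bias_levels()[-1]), rng=rng)
print(len(gray_idx), len(color_idx))  # 495 5
```

For the other half of the classes in a task, the roles of the two domains would simply be swapped.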
+
+CL Scenario. We consider a task-incremental learning scenario (Van de Ven and Tolias 2019) in which the task identifier is given at inference time; we further assume that the domain of each image is known. For simplicity, we only consider the scenario of incrementally learning two tasks: we randomly choose 2 out of the 10 tasks in every run and report the
+
+
+
+Figure 1: Forward transfer of bias. Higher DCA indicates a more biased model. The y-axis shows the degree of focus on plasticity versus stability. Dashed lines connect points with the same learning strategy (hyperparameters).
+
+averaged results over 4 different runs. We denote the first and second tasks as ${T}_{1}$ and ${T}_{2}$ , respectively.
+
+Metrics. We use two evaluation metrics for CL performance and bias: accuracy and the difference of classwise accuracy (DCA) (Berk et al. 2021) for each task. DCA is defined as the average, over classes, of the per-class accuracy difference between the two domains. We also compute the forgetting ($\mathcal{F}$) and intransigence ($\mathcal{I}$) measures (see Section 3.1 of Cha et al. 2021 for details), which evaluate the stability and plasticity, respectively, of a CL method. We use their normalized difference to evaluate the relative weight on plasticity versus stability; i.e., the lower the value, the more the model focuses on stability (and vice versa).
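The two metrics above can be sketched in a few lines of NumPy. This is an illustrative reading, not the paper's evaluation code; in particular, the paper normalizes $\mathcal{F}$ and $\mathcal{I}$ before taking their difference, with details deferred to Cha et al. (2021), so we show only the raw difference:

```python
import numpy as np

def dca(acc_domain_a, acc_domain_b):
    """Difference of Classwise Accuracy (DCA): the average over classes
    of the absolute per-class accuracy gap between the two domains."""
    a = np.asarray(acc_domain_a, dtype=float)
    b = np.asarray(acc_domain_b, dtype=float)
    return float(np.mean(np.abs(a - b)))

def plasticity_stability(forgetting, intransigence):
    """F - I: lower values indicate more focus on stability,
    higher values more focus on plasticity."""
    return forgetting - intransigence
```

A perfectly domain-independent model has per-class accuracies equal across domains and thus a DCA of 0.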
+
+Baselines. We adopt finetuning, which takes no special measures for CL, and three representative CL methods: LWF (Li and Hoiem 2017), EWC (Kirkpatrick et al. 2017), and ER (Chaudhry et al. 2019). LWF and EWC add regularization terms to their training objectives that penalize deviation from the past model, and they balance the stability-plasticity tradeoff by controlling the regularization hyperparameter. In contrast, ER stores some data from past tasks and replays them while learning the current task. Finally, as a model debiasing technique, we employ MFD (Jung et al. 2021), a state-of-the-art method that trains a domain-independent model using an MMD-based feature distillation.
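For intuition, the regularization terms added by EWC and LWF can be sketched in plain NumPy. This is a simplified illustration (diagonal Fisher approximation, fixed temperature, our function names), not the original papers' training code:

```python
import numpy as np

def ewc_penalty(theta, theta_star, fisher, lam):
    """EWC-style regularizer (Kirkpatrick et al. 2017): penalize deviation
    of current parameters theta from the previous-task solution theta_star,
    weighted by the diagonal Fisher information."""
    return float(0.5 * lam * np.sum(fisher * (theta - theta_star) ** 2))

def lwf_distillation(new_logits, old_logits, T=2.0):
    """LWF-style distillation term (Li and Hoiem 2017): cross-entropy
    between temperature-softened old and new predictions."""
    def softmax(z):
        z = z - z.max(axis=-1, keepdims=True)
        e = np.exp(z / T)
        return e / e.sum(axis=-1, keepdims=True)
    p_old, p_new = softmax(old_logits), softmax(new_logits)
    return float(-np.mean(np.sum(p_old * np.log(p_new + 1e-12), axis=-1)))
```

Raising `lam` (or the distillation weight) pushes the model toward stability, which is exactly the knob varied along the y-axis of Figure 1.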
+
+## Study 1: Forward Transfer of Bias
+
+To investigate the influence of bias captured from ${T}_{1}$ in a CL scenario, we evaluate the baseline methods while varying the bias level of ${T}_{1}$ , with that of ${T}_{2}$ fixed to level 2. Figure 1 shows the DCA of ${T}_{2}$ along with $\mathcal{F} - \mathcal{I}$ after learning ${T}_{2}$ with two different bias levels of ${T}_{1}$ , namely levels 0 and 6. The figure plots the results of LWF and EWC with various regularization strengths; the higher a point is, the lower its regularization strength. From the gap between the blue triangles in the figure, we first observe that the bias of ${T}_{1}$ adversely affects the bias of ${T}_{2}$ , i.e., forward transfer of bias exists, even with simple finetuning, which is consistent with Salman et al. (2022). Second, we observe that when applying CL methods, the gaps between connected points become larger than with finetuning. Moreover, when the bias level of ${T}_{1}$ is 6, the DCA of ${T}_{2}$ for EWC and LWF increases more drastically as the focus on stability grows. These results imply that CL methods promote the forward transfer of bias, since they tend to retain the knowledge of past tasks for stability. Finally, we clearly observe that, for similar $\mathcal{F} - \mathcal{I}$ , the DCA of ${T}_{2}$ is always better when learned after ${T}_{1}$ with bias level 0 than with bias level 6. Therefore, we argue that whenever a task in a CL scenario has a bias, that bias should be mitigated before learning future tasks.
+
+Figure 2: Backward transfer of bias. Blue arrows indicate the sequence of stages. Since all baselines are trained in the same way on ${T}_{1}$ , we report the results with one cross marker.
+
+## Study 2: Backward Transfer of Bias
+
+Here, we set the bias levels of ${T}_{1}$ and ${T}_{2}$ to 0 and 6, respectively, and assume a scenario in which bias is detected after learning ${T}_{1}$ (stage 1) and then continually learning ${T}_{2}$ (stage 2) with a CL method. In this situation, one may naively consider applying MFD (stage 3) to debias the model on ${T}_{2}$ . Figure 2 shows the accuracy and DCA of ${T}_{1}$ and ${T}_{2}$ at each stage for each baseline. In the right plot, we observe that points shift toward the bottom left as training progresses from stage 1 to stage 2. This means that the less focus stability receives, the more the bias obtained from ${T}_{2}$ is transferred to ${T}_{1}$ , i.e., the backward transfer of bias. From the stage 3 results in the left plot, we see that the DCA of ${T}_{2}$ can be successfully reduced by employing a debiasing technique while maintaining similar accuracy. However, we also find that the accuracy of ${T}_{1}$ drops significantly. Thus, when debiasing after learning each task, as we argued in Study 1, one should simultaneously consider the forgetting of previously learned tasks; i.e., it is necessary to develop a novel CL method that learns causal relations while accounting for stability.
+
+## Conclusion
+
+We showed the forward and backward transfer of bias in CL. Since bias can be transferred in either direction unexpectedly, we must pursue causal learning, but with stability taken into consideration. In future work, we will investigate bias transfer in more realistic scenarios with long sequences of tasks.
+
+## References
+
+Bahng, H.; Chun, S.; Yun, S.; Choo, J.; and Oh, S. J. 2020. Learning de-biased representations with biased representations. In International Conference on Machine Learning, 528-539. PMLR.
+
+Berk, R.; Heidari, H.; Jabbari, S.; Kearns, M.; and Roth, A. 2021. Fairness in criminal justice risk assessments: The state of the art. Sociological Methods & Research, 50(1): 3-44.
+
+Cha, S.; Hsu, H.; Hwang, T.; Calmon, F.; and Moon, T. 2021. CPR: Classifier-Projection Regularization for Continual Learning. In International Conference on Learning Representations.
+
+Chaudhry, A.; Rohrbach, M.; Elhoseiny, M.; Ajanthan, T.; Dokania, P. K.; Torr, P. H.; and Ranzato, M. 2019. Continual Learning with Tiny Episodic Memories. arXiv preprint arXiv:1902.10486, 2019.
+
+Geirhos, R.; Jacobsen, J.-H.; Michaelis, C.; Zemel, R.; Brendel, W.; Bethge, M.; and Wichmann, F. A. 2020. Shortcut learning in deep neural networks. Nature Machine Intelligence, 2(11): 665-673.
+
+Jung, S.; Ahn, H.; Cha, S.; and Moon, T. 2020. Continual learning with node-importance based adaptive group sparse regularization. Advances in Neural Information Processing Systems, 33: 3647-3658.
+
+Jung, S.; Lee, D.; Park, T.; and Moon, T. 2021. Fair feature distillation for visual recognition. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 12115-12124.
+
+Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences of the United States of America, 114(13): 3521-3526.
+
+Li, Z.; and Hoiem, D. 2017. Learning without forgetting. IEEE transactions on pattern analysis and machine intelligence, 40(12): 2935-2947.
+
+Mermillod, M.; Bugaiska, A.; and Bonin, P. 2013. The stability-plasticity dilemma: Investigating the continuum from catastrophic forgetting to age-limited learning effects.
+
+Sagawa, S.; Koh, P. W.; Hashimoto, T. B.; and Liang, P. 2020. Distributionally robust neural networks. In International Conference on Learning Representations.
+
+Salman, H.; Jain, S.; Ilyas, A.; Engstrom, L.; Wong, E.; and Madry, A. 2022. When does Bias Transfer in Transfer Learning? arXiv preprint arXiv:2207.02842.
+
+Tan, C.; Sun, F.; Kong, T.; Zhang, W.; Yang, C.; and Liu, C. 2018. A survey on deep transfer learning. In International conference on artificial neural networks, 270-279. Springer.
+
+Torralba, A.; and Efros, A. 2011. Unbiased look at dataset bias. In Proceedings of the 2011 IEEE Conference on Computer Vision and Pattern Recognition, 1521-1528.
+
+van de Ven, G. M.; Siegelmann, H. T.; and Tolias, A. S. 2020. Brain-inspired replay for continual learning with artificial neural networks. Nature Communications, 11: 4069.
+
+Van de Ven, G. M.; and Tolias, A. S. 2019. Three scenarios for continual learning. arXiv preprint arXiv:1904.07734.
+
+Zenke, F.; Poole, B.; and Ganguli, S. 2017. Continual learning through synaptic intelligence. In International Conference on Machine Learning, 3987-3995. PMLR.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/iWLbLoleZMN/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/iWLbLoleZMN/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..9cbe93e9d29a323edf434988fe420055d9600382
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/iWLbLoleZMN/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,51 @@
+§ ISSUES FOR CONTINUAL LEARNING IN THE PRESENCE OF DATASET BIAS
+
+Anonymous submission
+
+§ ABSTRACT
+
+While most continual learning algorithms have focused on tackling the stability-plasticity dilemma, they have overlooked the effects of knowledge transfer when it is biased by learning unintended spurious correlations that do not capture the true causal structure of the tasks. In this work, we design systematic data experiments to show that such bias is indeed transferred, both forward and backward, during continual learning, and argue that a causality-aware design of continual learning algorithms is critical.
+
+§ INTRODUCTION
+
+Continual learning (CL) is essential for a system that needs to learn a (potentially increasing) number of tasks from sequentially arriving data in an online fashion. The main challenge of CL is to overcome the stability-plasticity dilemma (Mermillod, Bugaiska, and Bonin 2013), a trade-off whereby a CL model that focuses too much on the stability of learned tasks suffers from low plasticity when integrating a new task (and vice versa). Recent deep neural network (DNN)-based CL methods (Kirkpatrick et al. 2017; Jung et al. 2020; Li and Hoiem 2017) attempt to address the dilemma by devising mechanisms that attain stability while improving plasticity, thanks to knowledge transferability (Tan et al. 2018), one of the standout properties of DNNs. Namely, while the learned knowledge is maintained, the performance on a new task (resp. past tasks) is improved by transferring knowledge from past tasks (resp. the new task). These phenomena are called forward and backward transfer, respectively.
+
+Unfortunately, it is widely known that DNNs often fail dramatically to generalize to out-of-distribution data because they learn unintended spurious correlations (e.g., dataset bias (Torralba and Efros 2011)) rather than the true causal relations (Sagawa et al. 2020; Bahng et al. 2020). For instance, a DNN that perfectly classifies birds against the sky may fail on images in which birds appear outside the typical sky background, because the model has learned a shortcut strategy that relies on the background (Geirhos et al. 2020). Furthermore, recent work (Salman et al. 2022) shows that such bias in a model can be transferred; namely, the bias in pre-trained models remains present even after fine-tuning them on downstream tasks. In CL, when a model becomes biased while learning a specific task, such bias transfer is likely to happen and may lead to continual, unexpected failures.
+
+In this paper, we show that when causal learning is not appropriately considered, naively applying CL methods can be problematic, since they may preserve unwarranted knowledge (e.g., background bias). To this end, we construct a synthetic dataset with color bias and systematically conduct extensive experiments on various two-task scenarios with different degrees of bias. We identify that the bias of a specific task affects other tasks in CL through two channels: the forward and backward transfer of bias. Specifically, a typical CL method preserves knowledge in such a way that the bias in that knowledge is reused when training on a new task (i.e., forward transfer of bias). Conversely, biased knowledge learned from the current task causes the decision rules for past tasks to become biased (i.e., backward transfer of bias). Our experimental results show that, to address these issues, it is necessary to learn the causal relations of each task in CL. Furthermore, we demonstrate that, when learning these relations, CL should simultaneously take stability into account, since otherwise severe catastrophic forgetting can occur. Therefore, our results strongly argue for a novel method that performs causal learning while preventing forgetting.
+
+§ CASE STUDIES OF BIAS TRANSFER IN CL
+
+§ EXPERIMENTAL SETTINGS
+
+Dataset. We use Split CIFAR-100 (Zenke, Poole, and Ganguli 2017; Chaudhry et al. 2019; van de Ven, Siegelmann, and Tolias 2020), which divides CIFAR-100 into 10 tasks with 10 distinct classes each. To study bias transfer, we modify Split CIFAR-100 such that half of the classes in each task are skewed toward the grayscale domain and the other half toward the color domain. Namely, given a skew-ratio $\alpha \geq {0.5}$ , the training images of each class are split between the two domains in proportions $\alpha$ and $1 - \alpha$ . For systematic control of the degree of bias, we set 6 bias levels by dividing the range from 0.5 to 0.99 evenly on a log scale.
+
+CL Scenario. We consider a task-incremental learning scenario (Van de Ven and Tolias 2019) in which the task identifier is given at inference time; we further assume that the domain of each image is known. For simplicity, we only consider the scenario of incrementally learning two tasks: we randomly choose 2 out of the 10 tasks in every run and report the
+
+
+Figure 1: Forward transfer of bias. Higher DCA indicates a more biased model. The y-axis shows the degree of focus on plasticity versus stability. Dashed lines connect points with the same learning strategy (hyperparameters).
+
+averaged results over 4 different runs. We denote the first and second tasks as ${T}_{1}$ and ${T}_{2}$ , respectively.
+
+Metrics. We use two evaluation metrics for CL performance and bias: accuracy and the difference of classwise accuracy (DCA) (Berk et al. 2021) for each task. DCA is defined as the average, over classes, of the per-class accuracy difference between the two domains. We also compute the forgetting ($\mathcal{F}$) and intransigence ($\mathcal{I}$) measures (see Section 3.1 of Cha et al. 2021 for details), which evaluate the stability and plasticity, respectively, of a CL method. We use their normalized difference to evaluate the relative weight on plasticity versus stability; i.e., the lower the value, the more the model focuses on stability (and vice versa).
+
+Baselines. We adopt finetuning, which takes no special measures for CL, and three representative CL methods: LWF (Li and Hoiem 2017), EWC (Kirkpatrick et al. 2017), and ER (Chaudhry et al. 2019). LWF and EWC add regularization terms to their training objectives that penalize deviation from the past model, and they balance the stability-plasticity tradeoff by controlling the regularization hyperparameter. In contrast, ER stores some data from past tasks and replays them while learning the current task. Finally, as a model debiasing technique, we employ MFD (Jung et al. 2021), a state-of-the-art method that trains a domain-independent model using an MMD-based feature distillation.
+
+§ STUDY 1: FORWARD TRANSFER OF BIAS
+
+To investigate the influence of bias captured from ${T}_{1}$ in a CL scenario, we evaluate the baseline methods while varying the bias level of ${T}_{1}$ , with that of ${T}_{2}$ fixed to level 2. Figure 1 shows the DCA of ${T}_{2}$ along with $\mathcal{F} - \mathcal{I}$ after learning ${T}_{2}$ with two different bias levels of ${T}_{1}$ , namely levels 0 and 6. The figure plots the results of LWF and EWC with various regularization strengths; the higher a point is, the lower its regularization strength. From the gap between the blue triangles in the figure, we first observe that the bias of ${T}_{1}$ adversely affects the bias of ${T}_{2}$ , i.e., forward transfer of bias exists, even with simple finetuning, which is consistent with Salman et al. (2022). Second, we observe that when applying CL methods, the gaps between connected points become larger than with finetuning. Moreover, when the bias level of ${T}_{1}$ is 6, the DCA of ${T}_{2}$ for EWC and LWF increases more drastically as the focus on stability grows. These results imply that CL methods promote the forward transfer of bias, since they tend to retain the knowledge of past tasks for stability. Finally, we clearly observe that, for similar $\mathcal{F} - \mathcal{I}$ , the DCA of ${T}_{2}$ is always better when learned after ${T}_{1}$ with bias level 0 than with bias level 6. Therefore, we argue that whenever a task in a CL scenario has a bias, that bias should be mitigated before learning future tasks.
+
+Figure 2: Backward transfer of bias. Blue arrows indicate the sequence of stages. Since all baselines are trained in the same way on ${T}_{1}$ , we report the results with one cross marker.
+
+§ STUDY 2: BACKWARD TRANSFER OF BIAS
+
+Here, we set the bias levels of ${T}_{1}$ and ${T}_{2}$ to 0 and 6, respectively, and assume a scenario in which bias is detected after learning ${T}_{1}$ (stage 1) and then continually learning ${T}_{2}$ (stage 2) with a CL method. In this situation, one may naively consider applying MFD (stage 3) to debias the model on ${T}_{2}$ . Figure 2 shows the accuracy and DCA of ${T}_{1}$ and ${T}_{2}$ at each stage for each baseline. In the right plot, we observe that points shift toward the bottom left as training progresses from stage 1 to stage 2. This means that the less focus stability receives, the more the bias obtained from ${T}_{2}$ is transferred to ${T}_{1}$ , i.e., the backward transfer of bias. From the stage 3 results in the left plot, we see that the DCA of ${T}_{2}$ can be successfully reduced by employing a debiasing technique while maintaining similar accuracy. However, we also find that the accuracy of ${T}_{1}$ drops significantly. Thus, when debiasing after learning each task, as we argued in Study 1, one should simultaneously consider the forgetting of previously learned tasks; i.e., it is necessary to develop a novel CL method that learns causal relations while accounting for stability.
+
+§ CONCLUSION
+
+We showed the forward and backward transfer of bias in CL. Since bias can be transferred in either direction unexpectedly, we must pursue causal learning, but with stability taken into consideration. In future work, we will investigate bias transfer in more realistic scenarios with long sequences of tasks.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/kn6rBT9gotr/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/kn6rBT9gotr/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..0efe85e2d6e71848f8f39fb22d5cb3fc4affe198
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/kn6rBT9gotr/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,63 @@
+# Continual Causal Inference: Challenges and Opportunities
+
+Anonymous submission
+
+## Introduction
+
+A further understanding of cause and effect within observational data is critical across many domains, such as economics, health care, public policy, web mining, online advertising, and marketing campaigns (Yao et al. 2021). Although significant advances have been made to overcome the challenges in causal effect estimation with observational data, such as missing counterfactual outcomes and selection bias between treatment and control groups, the existing methods mainly focus on source-specific and stationary observational data. Such learning strategies assume that all observational data are already available during the training phase and from only one source (Chu, Rathbun, and Li 2020).
+
+Along with the fast-growing segments of industrial applications, this assumption is untenable in practice. Take Alipay as an example: as one of the world's largest mobile payment platforms, offering financial services to billion-scale users, it produces a tremendous amount of data containing privacy-related information daily, collected from different cities or countries. Two observations follow. The first concerns the characteristics of observational data, which become available incrementally and from non-stationary data distributions. For instance, the number of electronic financial records for a marketing campaign grows every day, and these records may be collected from different cities or even different countries. This characteristic implies that one cannot access all observational data at a single point in time or from a single source. The second concerns the realistic issue of accessibility. For example, when new observational data become available and we want to refine a model previously trained on the original data, the original training data may no longer be accessible for a variety of reasons: legacy data may be unrecorded or proprietary, financial data may be sensitive, the data may be too large to store, or personal information may be subject to privacy constraints (Zhang et al. 2020). This practical concern about accessibility is ubiquitous in academic and industrial applications alike. In short, in the era of big data, we face new challenges in causal inference with observational data: extensibility to incrementally available observational data, adaptability to the domain adaptation problem in addition to the imbalance between treatment and control groups, and accessibility for enormous amounts of data.
+
+In this position paper, we formally define the problem of continual treatment effect estimation, describe its research challenges, and then present possible solutions to this problem. Moreover, we will discuss future research directions on this topic.
+
+## Problem Definition
+
+Suppose the observational data contain $n$ units collected from $d$ different domains, and that the $d$ -th dataset ${D}_{d}$ , collected from the $d$ -th domain, contains the data $\{ \left( {x, y, t}\right) \mid x \in X, y \in Y, t \in T\}$ with ${n}_{d}$ units. Let $X$ denote all observed variables, $Y$ the outcomes in the observational data, and $T$ a binary treatment variable. Let ${D}_{1 : d} = \left\{ {{D}_{1},{D}_{2},\ldots ,{D}_{d}}\right\}$ be the collection of the $d$ datasets, separately collected from the $d$ different domains. The $d$ datasets $\left\{ {{D}_{1},{D}_{2},\ldots ,{D}_{d}}\right\}$ share the same observed variables, but because they are collected from different domains, each dataset has a different distribution over $X$ , $Y$ , and $T$ . Each unit in the observational data received one of two treatments. Let ${t}_{i}$ denote the treatment assignment for unit $i$ , $i = 1,\ldots , n$ ; for binary treatments, ${t}_{i} = 1$ indicates the treatment group and ${t}_{i} = 0$ the control group. The outcome for unit $i$ is denoted by ${y}_{t}^{i}$ when treatment $t$ is applied. For observational data, only one of the potential outcomes is observed; the observed outcome is called the factual outcome, and the remaining unobserved potential outcomes are called counterfactual outcomes.
+
+This task can follow the potential outcome framework for estimating treatment effects (Rubin 1974; Splawa-Neyman, Dabrowska, and Speed 1990). The individual treatment effect (ITE) for unit $i$ is the difference between the potential treated and control outcomes, defined as ${\mathrm{{ITE}}}_{i} = {y}_{1}^{i} - {y}_{0}^{i}$ . The average treatment effect (ATE) is the difference between the mean potential treated and control outcomes, defined as $\operatorname{ATE} = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left( {{y}_{1}^{i} - {y}_{0}^{i}}\right)$ . The success of the potential outcome framework rests on the following assumptions (Imbens and Rubin 2015), which ensure that the treatment effect is identifiable. Stable Unit Treatment Value Assumption (SUTVA): the potential outcomes for any unit do not vary with the treatments assigned to other units, and, for each unit, there are no different forms or versions of each treatment level that would lead to different potential outcomes. Consistency: the potential outcome of treatment $t$ equals the observed outcome if the treatment actually received is $t$ . Positivity: for any value of $x$ , treatment assignment is not deterministic, i.e., $P\left( {T = t \mid X = x}\right) > 0$ for all $t$ and $x$ . Ignorability: given the covariates, treatment assignment is independent of the potential outcomes, i.e., $\left( {{y}_{1},{y}_{0}}\right) \perp\!\!\!\perp t \mid x$ .
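The ITE/ATE definitions and the positivity assumption above can be made concrete with a few NumPy helpers. These are illustrative functions of our own, not part of the proposed framework; note that the oracle ATE is computable only on (semi-)synthetic data where both potential outcomes are known:

```python
import numpy as np

def ate_from_potential_outcomes(y1, y0):
    """Oracle ATE = mean over units of (y1 - y0); requires both potential
    outcomes, so it is only computable on synthetic or semi-synthetic data."""
    return float(np.mean(np.asarray(y1, dtype=float) - np.asarray(y0, dtype=float)))

def naive_ate(y, t):
    """Difference of observed group means: equals the ATE only under
    ignorability; biased in the presence of confounding."""
    y, t = np.asarray(y, dtype=float), np.asarray(t)
    return float(y[t == 1].mean() - y[t == 0].mean())

def positivity_violations(propensity, eps=0.01):
    """Flag units whose estimated propensity P(T=1|X=x) lies outside
    (eps, 1 - eps), i.e., near-deterministic treatment assignment."""
    p = np.asarray(propensity, dtype=float)
    return (p <= eps) | (p >= 1 - eps)
```

The gap between `naive_ate` and the oracle ATE on observational data is exactly the selection bias that representation-balancing methods aim to remove.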
+
+Our goal is to develop a novel continual causal inference framework to estimate the causal effect for all available data, including new data ${D}_{d}$ and the previous data ${D}_{1 : \left( {d - 1}\right) }$ , without having access to previous data ${D}_{1 : \left( {d - 1}\right) }$ .
+
+## Research Challenges
+
+Existing causal effect inference methods, however, cannot deal with the aforementioned new challenges of extensibility, adaptability, and accessibility. Although it is possible to adapt existing causal inference methods to these issues, the adjusted methods still have unavoidable defects. Three straightforward adaptation strategies are as follows: (1) if we directly apply a model trained on the original data to new observational data, performance on the new task will be poor because of domain shift between the data sources; (2) if we instead re-train the previously learned model on the newly available data to adapt to changes in the data distribution, old knowledge will be completely or partially overwritten by the new, which can result in severe performance degradation on old tasks, the well-known catastrophic forgetting problem (McCloskey and Cohen 1989; French 1999); (3) to overcome catastrophic forgetting, we may store the old data, combine it with the new data, and retrain the model from scratch; however, this strategy is memory-inefficient and time-consuming, and it raises practical concerns such as copyright or privacy issues when storing data for a long time (Samet, Miri, and Granger 2013). Each of these three strategies, combined with existing causal effect inference methods, is deficient.
+
+## Potential Solution
+
+To address the continual treatment effect estimation problem, we propose a Continual Causal Effect Representation Learning framework (CERL) for estimating causal effects from incrementally available observational data. Instead of keeping access to all previous observational data, we store only a limited subset of the feature representations learned from previous data. By combining selective and balanced representation learning, feature representation distillation, and feature transformation, the framework preserves the knowledge learned from previous data and updates that knowledge with new data, so that it can estimate causal effects continually for newly arriving data without compromising its estimation capability on previous data.
+
+Framework Overview. To handle incrementally available observational data, CERL consists of two main components: (1) a baseline causal effect learning model for the first available observational dataset, for which no domain shift across data sources needs to be considered; this component is equivalent to the traditional causal effect estimation problem; and (2) a continual causal effect learning model for the sequentially arriving observational data, where more complex issues must be handled, such as knowledge transfer, catastrophic forgetting, global representation balance, and memory constraints.
+
+Baseline Causal Effect Learning Model. We first train the baseline causal effect learning model on the initial observational dataset before bringing in subsequent datasets. The task on the initial dataset reduces to a traditional causal effect estimation problem. Owing to the success of deep learning for counterfactual inference (Shalit, Johansson, and Sontag 2017), we propose to learn selective and balanced feature representations for units in the treatment and control groups, and then infer the potential outcomes from the learned representation space.
+
+Sustainability of Model Learning. To avoid catastrophic forgetting when learning from new data, we propose to preserve a subset of lower-dimensional feature representations rather than all original covariates. The number of preserved feature representations can also be adjusted to fit the memory constraint.
+
+Continual Causal Effect Learning. Given the stored memory and the baseline model, we continually estimate causal effects for incrementally available observational data by incorporating feature representation distillation and feature representation transformation, so that the causal effect for all seen data is estimated on a balanced global feature representation space.
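One common way to implement a feature-level distance for both representation balancing and representation distillation is the maximum mean discrepancy (MMD). The sketch below is a generic RBF-kernel MMD in NumPy; it is our illustration of the kind of penalty such a framework could use (e.g., between stored old-data representations and the updated encoder's outputs), not the authors' actual implementation:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy between two sets of feature
    representations (rows = units) under an RBF kernel.
    A distillation or balancing loss can penalize this distance."""
    def k(A, B):
        # Pairwise squared Euclidean distances, then the RBF kernel.
        d2 = (A ** 2).sum(1)[:, None] + (B ** 2).sum(1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma ** 2))
    X = np.asarray(X, dtype=float)
    Y = np.asarray(Y, dtype=float)
    return float(k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean())
```

The penalty is zero when the two representation sets coincide and grows as their distributions drift apart, which is exactly what a distillation term needs to keep the updated encoder consistent with the stored representations.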
+
+## Research Opportunities
+
+Although significant advances have been made in overcoming the challenges of causal effect estimation from an academic perspective, industrial applications based on observational data are more complicated and more difficult. Unlike source-specific and stationary observational data, most real-world data are incrementally available and drawn from non-stationary distributions, and we additionally face the realistic issue of accessibility. This work is the first attempt to investigate the continual lifelong causal effect inference problem and to propose corresponding evaluation criteria. However, constructing comprehensive analytical tools and a theoretical framework for this brand-new problem requires non-trivial effort. Specifically, there are four potential directions for continual causal inference: (1) the basic assumptions of traditional causal effect estimation may not be fully applicable; new assumptions may need to be added, or previous assumptions relaxed; (2) there is a natural connection with continual domain adaptation, both across times or domains ("continual" causal inference) and between treatment and control groups (continual "causal inference"); (3) compared to traditional causal effect estimation on small amounts of medical data, continual causal inference methods will face big data or cloud computing demands due to the nature of the task; and (4) with increasing public concern over privacy leakage, federated learning, which collaboratively trains a machine learning model without directly sharing raw data among data holders, may become a potential solution for continual causal inference.
+
+## References
+
+Chu, Z.; Rathbun, S.; and Li, S. 2020. Continual Lifelong Causal Effect Inference with Real World Evidence.
+
+French, R. M. 1999. Catastrophic forgetting in connectionist networks. Trends in cognitive sciences, 3(4): 128-135.
+
+Imbens, G. W.; and Rubin, D. B. 2015. Causal inference in statistics, social, and biomedical sciences. Cambridge University Press.
+
+McCloskey, M.; and Cohen, N. J. 1989. Catastrophic interference in connectionist networks: The sequential learning problem. In Psychology of learning and motivation, volume 24, 109-165. Elsevier.
+
+Rubin, D. B. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of educational Psychology, 66(5): 688.
+
+Samet, S.; Miri, A.; and Granger, E. 2013. Incremental learning of privacy-preserving Bayesian networks. Applied Soft Computing, 13(8): 3657-3667.
+
+Shalit, U.; Johansson, F. D.; and Sontag, D. 2017. Estimating individual treatment effect: generalization bounds and algorithms. In International Conference on Machine Learning, 3076-3085. PMLR.
+
+Splawa-Neyman, J.; Dabrowska, D. M.; and Speed, T. 1990. On the application of probability theory to agricultural experiments. Essay on principles. Section 9. Statistical Science, 465-472.
+
+Yao, L.; Chu, Z.; Li, S.; Li, Y.; Gao, J.; and Zhang, A. 2021. A survey on causal inference. ACM Transactions on Knowledge Discovery from Data (TKDD), 15(5): 1-46.
+
+Zhang, J.; Zhang, J.; Ghosh, S.; Li, D.; Tasci, S.; Heck, L.; Zhang, H.; and Kuo, C.-C. J. 2020. Class-incremental learning via deep model consolidation. In The IEEE Winter Conference on Applications of Computer Vision, 1131-1140.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/kn6rBT9gotr/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/kn6rBT9gotr/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..6fa7716ac583d8dcd31c4b35452e2937f28a45d1
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/kn6rBT9gotr/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,39 @@
+§ CONTINUAL CAUSAL INFERENCE: CHALLENGES AND OPPORTUNITIES
+
+Anonymous submission
+
+§ INTRODUCTION
+
+A further understanding of cause and effect within observational data is critical across many domains, such as economics, health care, public policy, web mining, online advertising, and marketing campaigns (Yao et al. 2021). Although significant advances have been made to overcome the challenges in causal effect estimation with observational data, such as missing counterfactual outcomes and selection bias between treatment and control groups, the existing methods mainly focus on source-specific and stationary observational data. Such learning strategies assume that all observational data are already available during the training phase and from only one source (Chu, Rathbun, and Li 2020).
+
+Along with the fast-growing segment of industrial applications, this assumption rarely holds in practice. Take Alipay as an example: as one of the world's largest mobile payment platforms, offering financial services to billion-scale users, it produces a tremendous amount of privacy-sensitive data daily, collected from different cities or countries. Two observations follow. The first concerns the characteristics of observational data, which are incrementally available and drawn from non-stationary distributions. For instance, the number of electronic financial records for one marketing campaign grows every day, and those records may be collected from different cities or even different countries. This implies that one cannot access all observational data at a single time point or from a single source. The second concerns the realistic consideration of accessibility. For example, when new observational data become available and we want to refine a model previously trained on the original data, the original training data may no longer be accessible for a variety of reasons: legacy data may be unrecorded or proprietary, financial data may be sensitive, the data may be too large to store, or it may be subject to privacy constraints on personal information (Zhang et al. 2020). This practical concern about accessibility is ubiquitous in academic and industrial applications alike. In short, in the era of big data, we face new challenges in causal inference with observational data: extensibility for incrementally available observational data, adaptability for the additional domain adaptation problem beyond the imbalance between treatment and control groups, and accessibility for enormous amounts of data.
+
+In this position paper, we formally define the problem of continual treatment effect estimation, describe its research challenges, and then present possible solutions to this problem. Moreover, we will discuss future research directions on this topic.
+
+§ PROBLEM DEFINITION
+
+Suppose that the observational data contain $n$ units collected from $d$ different domains, and that the $d$-th dataset ${D}_{d}$ contains the data $\{ \left( {x,y,t}\right) \mid x \in X,y \in Y,t \in T\}$ collected from the $d$-th domain, comprising ${n}_{d}$ units. Let $X$ denote all observed variables, $Y$ denote the outcomes in the observational data, and $T$ be a binary treatment variable. Let ${D}_{1 : d} = \left\{ {{D}_{1},{D}_{2},\ldots ,{D}_{d}}\right\}$ be the combined set of the $d$ datasets, separately collected from $d$ different domains. The $d$ datasets $\left\{ {{D}_{1},{D}_{2},\ldots ,{D}_{d}}\right\}$ share the same observed variables, but because they are collected from different domains, each has a different distribution over $X, Y$, and $T$. Each unit in the observational data received one of two treatments. Let ${t}_{i}$ denote the treatment assignment for unit $i$, $i = 1,\ldots ,n$. For binary treatments, ${t}_{i} = 1$ indicates the treatment group and ${t}_{i} = 0$ the control group. The outcome for unit $i$ is denoted by ${y}_{t}^{i}$ when treatment $t$ is applied to unit $i$. In observational data, only one of the potential outcomes is observed: the observed outcome is called the factual outcome, and the remaining unobserved potential outcomes are called counterfactual outcomes.
+
+This task can follow the potential outcome framework for estimating treatment effects (Rubin 1974; Splawa-Neyman, Dabrowska, and Speed 1990). The individual treatment effect (ITE) for unit $i$ is the difference between the potential treated and control outcomes, and is defined as ${\mathrm{{ITE}}}_{i} =$ ${y}_{1}^{i} - {y}_{0}^{i}$ . The average treatment effect (ATE) is the difference between the mean potential treated and control outcomes, which is defined as $\operatorname{ATE} = \frac{1}{n}\mathop{\sum }\limits_{{i = 1}}^{n}\left( {{y}_{1}^{i} - {y}_{0}^{i}}\right)$ . The success of the potential outcome framework is based on the following assumptions (Imbens and Rubin 2015), which ensure that the treatment effect can be identified. Stable Unit Treatment Value Assumption (SUTVA): The potential outcomes for any unit do not vary with the treatments assigned to other units, and, for each unit, there are no different forms or versions of each treatment level, which lead to different potential outcomes. Consistency: The potential outcome of treatment $t$ is equal to the observed outcome if the actual treatment received is $t$ . Positivity: For any value of $x$ , treatment assignment is not deterministic, i.e., $P\left( {T = t \mid X = x}\right) > 0$ , for all $t$ and $x$ . Ignorability: Given covariates, treatment assignment is independent of the potential outcomes, i.e., $\left( {{y}_{1},{y}_{0}}\right) ⫫ t \mid x$ .
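As a concrete illustration of the ITE and ATE definitions above, the following toy example (our own synthetic setup, not the paper's data) generates both potential outcomes for every unit, so the effects can be computed exactly — something impossible with real observational data, where only the factual outcome is observed:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 1000

# Synthetic units with BOTH potential outcomes known (only possible
# because the data is simulated; real data reveals just one of them).
x = rng.normal(size=n)                          # observed covariate
y0 = 0.5 * x + rng.normal(scale=0.1, size=n)    # outcome under control (t=0)
y1 = y0 + 1.0 + 0.2 * x                         # outcome under treatment (t=1)

ite = y1 - y0            # ITE_i = y1_i - y0_i, here 1.0 + 0.2 * x_i
ate = ite.mean()         # ATE = (1/n) * sum_i (y1_i - y0_i)

print(f"ATE ~ {ate:.2f}")  # close to the true average effect of 1.0
```

With only factual outcomes, `ite` is unobservable per unit, which is exactly why the identification assumptions (SUTVA, consistency, positivity, ignorability) are needed.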
+
+Our goal is to develop a novel continual causal inference framework to estimate the causal effect for all available data, including new data ${D}_{d}$ and the previous data ${D}_{1 : \left( {d - 1}\right) }$ , without having access to previous data ${D}_{1 : \left( {d - 1}\right) }$ .
+
+§ RESEARCH CHALLENGES
+
+Existing causal effect inference methods, however, are unable to deal with the aforementioned new challenges, i.e., extensibility, adaptability, and accessibility. Although it is possible to adapt existing causal inference methods to cater to these issues, such adjusted methods still have inevitable defects. Three straightforward adaptation strategies are described as follows: (1) If we directly apply a model previously trained on the original data to new observational data, performance on the new task will be very poor due to domain shift among the different data sources. (2) Suppose we use newly available data to re-train the previously learned model to adapt to changes in the data distribution. In that case, old knowledge will be completely or partially overwritten by the new, which can result in severe performance degradation on old tasks; this is the well-known catastrophic forgetting problem (McCloskey and Cohen 1989; French 1999). (3) To overcome catastrophic forgetting, we may store the old data, combine it with the new data, and retrain the model from scratch. However, this strategy is memory-inefficient and time-consuming, and it raises practical concerns such as copyright or privacy issues when data are stored for a long time (Samet, Miri, and Granger 2013). Any of these three strategies, combined with existing causal effect inference methods, is therefore deficient.
+
+§ POTENTIAL SOLUTION
+
+To address the continual treatment effect estimation problem, we propose a Continual Causal Effect Representation Learning framework (CERL) for estimating causal effects from incrementally available observational data. Instead of retaining access to all previous observational data, we store only a limited subset of feature representations learned from previous data. Combining selective and balanced representation learning, feature representation distillation, and feature transformation, our framework preserves the knowledge learned from previous data and updates it by leveraging new data, so that it achieves continual causal effect estimation for incrementally arriving data without compromising the estimation capability on previous data.
+
+Framework Overview. To estimate causal effects from incrementally available observational data, the CERL framework is mainly composed of two components: (1) the baseline causal effect learning model handles only the first available observational dataset, so the domain shift issue among different data sources does not yet arise; this component is equivalent to the traditional causal effect estimation problem; (2) the continual causal effect learning model handles the sequentially available observational data, where we must address more complex issues such as knowledge transfer, catastrophic forgetting, global representation balance, and memory constraints.
+
+Baseline Causal Effect Learning Model. We first train the baseline causal effect learning model on the initial observational dataset before bringing in subsequent datasets. The task on the initial dataset reduces to a traditional causal effect estimation problem. Owing to the success of deep learning for counterfactual inference (Shalit, Johansson, and Sontag 2017), we propose to learn selective and balanced feature representations for units in the treatment and control groups, and then infer the potential outcomes from the learned representation space.
+
+Sustainability of Model Learning. To avoid catastrophic forgetting when learning new data, we propose to preserve a subset of lower-dimensional feature representations rather than all original covariates. We also can adjust the number of preserved feature representations according to the memory constraint.
+
+Continual Causal Effect Learning. Given the stored memory and the baseline model, we incorporate feature representation distillation and feature representation transformation to continually estimate the causal effect for all seen data, based on a balanced global feature representation space.
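The feature representation distillation idea can be sketched as follows; the linear encoders, the stored memory subset, and the plain L2 penalty are illustrative assumptions of ours, not the exact CERL formulation:

```python
import numpy as np

rng = np.random.default_rng(1)

# W_old: frozen encoder trained on previous data D_{1:d-1};
# W_new: encoder being updated on the new dataset D_d (here, a drifted copy).
W_old = rng.normal(size=(10, 4))
W_new = W_old + rng.normal(scale=0.05, size=(10, 4))

def distill(X_mem, W):
    """L2 distillation loss and its gradient w.r.t. W: keep the new
    encoder's representations close to the old ones on stored memory."""
    diff = X_mem @ W - X_mem @ W_old        # (m, 4) representation drift
    loss = (diff ** 2).mean()
    grad = 2.0 * X_mem.T @ diff / diff.size
    return loss, grad

X_mem = rng.normal(size=(32, 10))  # stored inputs from previous data

# One gradient step pulls the new encoder back toward the old
# representations on the memory subset, countering forgetting.
loss_before, grad = distill(X_mem, W_new)
W_new -= 0.5 * grad
loss_after, _ = distill(X_mem, W_new)
print(loss_after < loss_before)  # True
```

In practice this penalty would be added to the causal-effect estimation loss on the new data, trading off plasticity on $D_d$ against stability on $D_{1:(d-1)}$.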
+
+§ RESEARCH OPPORTUNITIES
+
+Although significant advances have been made to overcome the challenges in causal effect estimation from an academic perspective, industrial applications based on observational data are considerably more complicated. Unlike source-specific and stationary observational data, most real-world data are incrementally available and drawn from nonstationary distributions. Moreover, we face the realistic consideration of accessibility. This work is the first attempt to investigate the continual lifelong causal effect inference problem and propose the corresponding evaluation criteria. However, constructing comprehensive analytical tools and a theoretical framework for this brand-new problem requires non-trivial effort. Specifically, there are four potential directions for continual causal inference: (1) The basic assumptions of traditional causal effect estimation may not be fully applicable; new assumptions may need to be added, or existing ones relaxed. (2) There is a natural connection with continual domain adaptation, both across times or domains ("continual" causal inference) and between treatment and control groups (continual "causal inference"). (3) Compared to traditional causal effect estimation on small medical datasets, continual causal inference will involve big data or cloud computing due to the scale of its target tasks. (4) With growing public concern over data privacy leakage, federated learning, which collaboratively trains a machine learning model without directly sharing raw data among data holders, may become a potential solution for continual causal inference.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/pKyB5wMnTiy/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/pKyB5wMnTiy/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..8f646539190ebd0c6f9bf7c15464c65aa4dc717e
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/pKyB5wMnTiy/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,149 @@
+# Modeling Uplift from Observational Time-Series in Continual Scenarios
+
+Anonymous submission
+
+## Abstract
+
+As the importance of causality in machine learning grows, we expect models to learn the correct causal mechanisms for robustness even under distribution shifts. Since most prior benchmarks have focused on vision and language tasks, domain and temporal shifts in causal inference tasks have not been well explored. To this end, we introduce the UpliftCRUD dataset for modeling uplift in continual learning scenarios. We build the dataset from CRUD log data and construct continual learning tasks under temporal shifts and out-of-domain scenarios.
+
+## Introduction
+
+Uplift is defined as the Individual Treatment Effect (ITE), but its evaluation metric differs from other causal tasks (Radcliffe and Surry 1999). Following Radcliffe and Simpson (2008), individuals can be segmented into four groups along two axes: whether they received treatment and how they respond to it. Sure Things will stay whether or not they receive treatment, and Lost Causes will leave in either case. Persuadables are likely to stay only if they receive the treatment, whereas Sleeping Dogs would be annoyed by it and eventually leave. The main goal is thus to identify as many Persuadables as possible for treatment while avoiding Sleeping Dogs.
+
+In practice, the bottlenecks of causal models are data availability, scalability, and distribution shifts. In many cases where randomized controlled trials (RCTs) are infeasible, practitioners are given observational data. However, even with a large set of covariates, there may still be unobserved confounders, and the curse of dimensionality sets in. This is particularly problematic in causal inference, as the chance of violating the positivity assumption increases. In addition, distribution changes over time and across domains lead to misleading validation and, eventually, the degradation of the model.
+
+To address the aforementioned issues with causality even in high-dimensional spaces and to bridge the gap between research and practice environments, we constructed the UpliftCRUD dataset ${}^{1}$ , a real-world uplift dataset from mobile game users. The task is to predict the uplift from push notifications by recognizing patterns in each user's CRUD history. A model must learn the underlying causal mechanisms and continuously adapt itself to distributions that vary over time and across games. To the best of our knowledge, UpliftCRUD is the first uplift dataset with time-series data under domain shifts.
+
+## Background
+
+Causal inference and its notations. The potential outcomes framework (Rubin 1974) defines the causal effect as the difference between two potential outcomes $Y\left( 1\right) - Y\left( 0\right)$ : under treatment $\left( {T = 1}\right)$ and under control $\left( {T = 0}\right)$ . The fundamental problem of causal inference (Holland 1986) states that only one of ${Y}_{i}\left( 1\right)$ and ${Y}_{i}\left( 0\right)$ is observable for each unit indexed by $i \in \{ 1,\ldots , n\}$ ; the unobserved outcome is called the counterfactual. To estimate the ITE, or uplift, ${u}_{i} \mathrel{\text{:=}} {Y}_{i}\left( 1\right) - {Y}_{i}\left( 0\right)$ , we therefore model the Conditional Average Treatment Effect (CATE), i.e., $u\left( \mathbf{X}\right) \mathrel{\text{:=}} \mathbb{E}\left\lbrack {Y\left( 1\right) - Y\left( 0\right) \mid \mathbf{X}}\right\rbrack$ . Among the assumptions needed to identify CATE, two are crucial and often likely to be violated (Pearl 2010; Neal 2020): unconfoundedness, i.e., $Y ⫫ T \mid \mathbf{X}$ , and positivity, i.e., $P\left( {T \mid \mathbf{X} = \mathbf{x}}\right) > 0,\forall \mathbf{x} : P\left( {\mathbf{X} = \mathbf{x}}\right) > 0$ .
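To make the CATE definition concrete, here is a minimal two-model ("T-learner") sketch on synthetic randomized data with a known constant uplift of 1.0; the data-generating process and the plain least-squares outcome models are our own toy assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 3))
T = rng.integers(0, 2, size=n)   # randomized treatment, so positivity holds
tau = 1.0                        # true (constant) uplift
Y = X @ np.array([0.5, -0.3, 0.2]) + tau * T + rng.normal(scale=0.1, size=n)

def ols(X, y):
    """Least-squares fit with an intercept column."""
    A = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# T-learner: fit separate outcome models for treated and control units,
# then estimate u(x) = E[Y(1)|x] - E[Y(0)|x] as the prediction difference.
c1 = ols(X[T == 1], Y[T == 1])
c0 = ols(X[T == 0], Y[T == 0])
A = np.column_stack([np.ones(n), X])
cate_hat = A @ c1 - A @ c0

print(f"mean estimated uplift ~ {cate_hat.mean():.2f}")  # close to 1.0
```

With observational (non-randomized) treatment, this naive difference would be biased unless unconfoundedness holds and the confounders are adjusted for.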
+
+Time-series modeling. A time-series is a sequence of discrete-time data. Many previous works have dealt with regular time-series, but in this paper we mainly focus on irregular time-series, where the intervals between two consecutive data points are not constant. RNNs (Rumelhart, Hinton, and Williams 1986; Hochreiter and Schmidhuber 1997; Cho et al. 2014), TCNs (Bai, Kolter, and Koltun 2018) with dilated convolutions (Yu and Koltun 2015), and Transformers (Vaswani et al. 2017) have become popular choices for handling irregular time-series data. However, there is no one-size-fits-all augmentation strategy for irregular time-series (Yue et al. 2022) beyond dropout (Srivastava et al. 2014) or masked autoencoding (Devlin et al. 2018; He et al. 2022).
+
+Continual learning. Continual Learning (CL) aims to effectively learn new tasks and adapt a model to distribution shifts over time while minimizing performance degradation on previously learned scenarios, a failure mode known as catastrophic forgetting (Kirkpatrick et al. 2017). It is also infeasible in practice to fully retrain the model whenever new data become available, due to training costs or the unavailability of previous data. Recent algorithms therefore aim to accumulate knowledge and reuse it in future scenarios without forgetting (iCaRL (Rebuffi et al. 2017), A-GEM (Chaudhry et al. 2019), EWC (Kirkpatrick et al. 2017),
+
+---
+
+${}^{1}$ The dataset will be released to the public in early 2023.
+
+---
+
+
+
+Figure 1: Illustration of dataset construction.
+
+SI (Zenke, Poole, and Ganguli 2017)). Moreover, causal inference tasks require the model to capture the causal mechanism over distributional shifts, on which existing CL algorithms have not focused.
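For reference, EWC — one of the algorithms listed above — adds a quadratic penalty that anchors each parameter in proportion to its estimated (diagonal) Fisher information on previous tasks; a minimal sketch with made-up illustrative values:

```python
import numpy as np

# Parameters learned on task A, the diagonal Fisher information estimated
# on task A, and the current parameters while training task B
# (all numbers below are illustrative, not from any real model).
theta_A = np.array([0.8, -1.2, 0.3])
fisher = np.array([2.0, 0.1, 5.0])   # importance of each parameter to task A
theta = np.array([0.5, 0.4, 0.25])
lam = 1.0                             # regularization strength

# EWC penalty: (lambda / 2) * sum_i F_i * (theta_i - theta_A_i)^2
penalty = 0.5 * lam * np.sum(fisher * (theta - theta_A) ** 2)

# Gradient added to the task-B loss; a large F_i anchors theta_i near
# theta_A_i, while unimportant parameters stay free to adapt to task B.
grad = lam * fisher * (theta - theta_A)

print(round(float(penalty), 3))  # 0.224
```

This is how a regularization-based CL method preserves old knowledge without storing old data, which is exactly the property causal CL benchmarks need to stress-test.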
+
+## Previous Benchmarks
+
+Benchmarks for uplift modeling. Researchers on uplift have relied on (semi-)synthetic data for testing algorithms, since counterfactuals are known in such data, unlike in real observational data. As of now, the largest observational benchmark is the Criteo dataset (Eustache et al. 2018), with 12 static features from $\sim {14}\mathrm{M}$ real-world users. Thus far, there has been little motivation to use deep learning, and related works have therefore been restricted to smaller models (#params $< 1\mathrm{K}$ ) or other ML algorithms. With regard to time-series, a subset of MIMIC II/III (Johnson et al. 2016) has been used for causal discovery or inference. See Moraffah et al. (2021) for a comprehensive review.
+
+Benchmarks for CL. Benchmarks in various fields and tasks with CL scenarios have been introduced, e.g., object recognition in robotics (Fanello et al. 2013; Lomonaco and Maltoni 2017; She et al. 2020), classification tasks in various domains on images (Rebuffi, Bilen, and Vedaldi 2017; Lake, Salakhutdinov, and Tenenbaum 2015; He, Shen, and Cui 2021), videos (Roady et al. 2020), 3D objects (Stojanov et al. 2019), and natural language (Hussain et al. 2021; Srinivasan et al. 2022). However, domain or temporal shifts in causal inference tasks have not been well explored.
+
+## UpliftCRUD Dataset
+
+Background. We collected data from a Backend-as-a-Service (BaaS) company specializing in mobile games, allowing game developers to quickly release their apps without backend servers of their own. One of their features is sending a push notification to all users at the same time.
+
+Construction. In a typical observational dataset, the treatment is not given randomly. In our data, however, only the treatment group exists, as the treatment (push message) is given to the whole population simultaneously. To circumvent this problem, for the train set we sampled a pseudo-control group from exactly one week prior to the push, eliminating time-of-day and weekday effects. We also introduced the concept of a no push area, a window from $-12$ to $+6$ hours around which no other push may occur, to prevent interference from other pushes. Still, some users appear in both groups. For the test set, we randomly split those overlapping users into either group to simulate an RCT.
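A sketch of this construction in code; the concrete timestamps, the helper names, and the window-check logic are simplified assumptions of ours rather than the actual pipeline:

```python
from datetime import datetime, timedelta

WEEK = timedelta(days=7)
NO_PUSH_BEFORE = timedelta(hours=12)  # the -12h side of the "no push area"
NO_PUSH_AFTER = timedelta(hours=6)    # the +6h side

def window_is_clean(t, pushes):
    """True if no push falls inside the [-12h, +6h] window around t."""
    return all(not (t - NO_PUSH_BEFORE <= p <= t + NO_PUSH_AFTER)
               for p in pushes)

def pseudo_control_time(push_time, all_pushes):
    """Pseudo-control observation exactly one week before the push
    (same time of day and weekday), kept only if its window is clean."""
    t = push_time - WEEK
    return t if window_is_clean(t, all_pushes) else None

pushes = [datetime(2022, 5, 2, 18, 0)]
t = pseudo_control_time(pushes[0], pushes)
print(t)  # 2022-04-25 18:00:00, one week prior on the same weekday
```

If another push had landed inside the pseudo-control window, the sample would be dropped, mirroring the interference rule described above.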
+
+Tasks. The dataset consists of three games (A, B, and C) with a total of ${16.7}\mathrm{M}$ lines of CRUD logs from 5,360 users${}^{2}$ . Each example is a triple $\left( {{\mathbf{X}}_{i},{T}_{i},{Y}_{i}}\right)$ , where ${\mathbf{X}}_{i}$ is a sequence of categorical variables with corresponding timestamps, and ${T}_{i},{Y}_{i} \in \{ 0,1\}$ are binary indicators of the treatment and of whether the user logs in within three hours. We experimented on uplift modeling in the proposed tasks:
+
+- In-distribution (ID): train with the game A (APR, MAY) and test on a ${20}\%$ random-split holdout set.
+
+- Temporal shift (TS): train with the game A (APR, MAY) and test on the game A (JUN).
+
+- Out-of-distribution (OOD): train with the game A (APR, MAY) and test on the game B with fine-tuning (OOD w/) or on the game C without fine-tuning (OOD w/o).
+
+## Experiments
+
+| Model | Ckpt | ID | TS | OOD w/ | OOD w/o |
+| --- | --- | --- | --- | --- | --- |
+| Dragon | VAL | .091/.056 | .006/.003 | .118/.038 | .037/.023 |
+| | MAX | .112/.074 | .372/.082 | .123/.081 | |
+| Siamese | VAL | .145/.062 | -.036/-.011 | .154/.057 | -.057/-.030 |
+| | MAX | .249/.067 | .207/.075 | .036/.022 | |
+| $P\left( {Y = 1}\right)$ | | 11.9% | 12.2% | 5.9% | 22.4% |
+
+Table 1: Baseline results. VAL denotes the best checkpoint on the holdout set, and MAX denotes the best metric during entire training, showing the discrepancy of the performance.
+
+We used Dragonnet (Shi, Blei, and Veitch 2019) and a Siamese network (Mouloud, Olivier, and Ghaith 2020) with TCN backbones, and EWC for CL. Table 1 shows Qini coefficients (left) and the area under the uplift curve (AUUC) (right) (Devriendt et al. 2020), both of which measure incremental gains. The difference between VAL and MAX can be attributed to the difficulty of proper validation under distribution shifts. Also, the performance in TS and OOD w/o dropped sharply. We conjecture that the model overfits the train set without learning the causal mechanisms; a model that had learned them should perform equally well in ID and OOD.
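For completeness, here is a simplified version of the cumulative incremental-gain (Qini) curve underlying both metrics; this is a common textbook formulation under our own simplifications, not necessarily the exact estimator of Devriendt et al. (2020):

```python
import numpy as np

def qini_curve(uplift_pred, y, t):
    """Cumulative incremental gains when targeting units in decreasing
    order of predicted uplift; the control response is rescaled to the
    number of treated units seen so far."""
    order = np.argsort(-uplift_pred)
    y, t = y[order], t[order]
    n_t, n_c = np.cumsum(t), np.cumsum(1 - t)
    r_t, r_c = np.cumsum(y * t), np.cumsum(y * (1 - t))
    ratio = np.divide(n_t, n_c, out=np.zeros(len(t)), where=n_c > 0)
    return r_t - r_c * ratio

rng = np.random.default_rng(0)
n = 2000
t = rng.integers(0, 2, size=n)           # treatment indicator
u = rng.uniform(size=n)                  # per-unit "true" uplift score
y = (rng.uniform(size=n) < 0.2 + 0.5 * u * t).astype(int)

good = qini_curve(u, y, t)                    # ranking by the true uplift
rand = qini_curve(rng.uniform(size=n), y, t)  # random ranking
# A better ranking accumulates incremental gains earlier, so the area
# under its Qini curve (and hence the Qini coefficient / AUUC) is larger.
print(good.sum() > rand.sum())
```

The Qini coefficient reported in Table 1 normalizes this area against a random-targeting baseline, which is why a near-zero value indicates a ranking no better than chance.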
+
+## Conclusion and Future Work
+
+In this paper, we propose uplift tasks and the UpliftCRUD dataset, combining causal inference with CL scenarios. We demonstrate that naïvely applying existing methods may fail, as uplift modeling tries to predict future behaviors from historical data. All observational datasets have inherent biases; identifying causal relationships and eliminating undesirable effects would be one of the important follow-up research topics. We believe that learning causal mechanisms that are invariant over time is crucial towards general-level AI, and we hope the dataset will contribute to developing such algorithms.
+
+---
+
+${}^{2}$ More data will be included in the public release.
+
+---
+
+## References
+
+Bai, S.; Kolter, J. Z.; and Koltun, V. 2018. An empirical evaluation of generic convolutional and recurrent networks for sequence modeling. arXiv preprint arXiv:1803.01271.
+
+Chaudhry, A.; Ranzato, M.; Rohrbach, M.; and Elhoseiny, M. 2019. Efficient Lifelong Learning with A-GEM. In International Conference on Learning Representations.
+
+Cho, K.; Van Merriënboer, B.; Bahdanau, D.; and Bengio, Y. 2014. On the properties of neural machine translation: Encoder-decoder approaches. arXiv preprint arXiv:1409.1259.
+
+Devlin, J.; Chang, M.-W.; Lee, K.; and Toutanova, K. 2018. Bert: Pre-training of deep bidirectional transformers for language understanding. arXiv preprint arXiv:1810.04805.
+
+Devriendt, F.; Van Belle, J.; Guns, T.; and Verbeke, W. 2020. Learning to rank for uplift modeling. IEEE Transactions on Knowledge and Data Engineering.
+
+Eustache, D.; Artem, B.; Renaudin, C.; and Massih-Reza, A. 2018. A Large Scale Benchmark for Uplift Modeling. In Proceedings of the AdKDD and TargetAd Workshop, KDD, London, United Kingdom, August, 20, 2018. ACM.
+
+Fanello, S. R.; Ciliberto, C.; Santoro, M.; Natale, L.; Metta, G.; Rosasco, L.; and Odone, F. 2013. iCub World: Friendly Robots Help Building Good Vision Data-Sets. In 2013 IEEE Conference on Computer Vision and Pattern Recognition Workshops, 700-705.
+
+He, K.; Chen, X.; Xie, S.; Li, Y.; Dollár, P.; and Girshick, R. 2022. Masked autoencoders are scalable vision learners. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition, 16000-16009.
+
+He, Y.; Shen, Z.; and Cui, P. 2021. Towards non-iid image classification: A dataset and baselines. Pattern Recognition, 110: 107383.
+
+Hochreiter, S.; and Schmidhuber, J. 1997. Long short-term memory. Neural computation, 9(8): 1735-1780.
+
+Holland, P. W. 1986. Statistics and causal inference. Journal of the American statistical Association, 81(396): 945-960.
+
+Hussain, A.; Holla, N.; Mishra, P.; Yannakoudakis, H.; and Shutova, E. 2021. Towards a robust experimental framework and benchmark for lifelong language learning. In Thirty-fifth Conference on Neural Information Processing Systems Datasets and Benchmarks Track (Round 1).
+
+Johnson, A. E.; Pollard, T. J.; Shen, L.; Lehman, L.-w. H.; Feng, M.; Ghassemi, M.; Moody, B.; Szolovits, P.; Anthony Celi, L.; and Mark, R. G. 2016. MIMIC-III, a freely accessible critical care database. Scientific data, 3(1): 1-9.
+
+Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A. A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the national academy of sciences, 114(13): 3521-3526.
+
+Lake, B. M.; Salakhutdinov, R.; and Tenenbaum, J. B. 2015. Human-level concept learning through probabilistic program induction. Science, 350(6266): 1332-1338.
+
+Lomonaco, V.; and Maltoni, D. 2017. CORe50: a New Dataset and Benchmark for Continuous Object Recognition. In Proceedings of the 1st Annual Conference on Robot Learning, volume 78, 17-26.
+
+Moraffah, R.; Sheth, P.; Karami, M.; Bhattacharya, A.; Wang, Q.; Tahir, A.; Raglin, A.; and Liu, H. 2021. Causal inference for time series analysis: Problems, methods and evaluation. Knowledge and Information Systems, 1-45.
+
+Mouloud, B.; Olivier, G.; and Ghaith, K. 2020. Adapting neural networks for uplift models. arXiv preprint arXiv:2011.00041.
+
+Neal, B. 2020. Introduction to causal inference from a machine learning perspective. Course Lecture Notes (draft).
+
+Pearl, J. 2010. Causal inference. Causality: objectives and assessment, 39-58.
+
+Radcliffe, N.; and Surry, P. 1999. Differential response analysis: Modeling true responses by isolating the effect of a single action. Credit Scoring and Credit Control IV.
+
+Radcliffe, N. J.; and Simpson, R. 2008. Identifying who can be saved and who will be driven away by retention activity. Journal of Telecommunications Management, 1(2).
+
+Rebuffi, S.-A.; Bilen, H.; and Vedaldi, A. 2017. Learning multiple visual domains with residual adapters. Advances in neural information processing systems, 30.
+
+Rebuffi, S.-A.; Kolesnikov, A.; Sperl, G.; and Lampert, C. H. 2017. iCaRL: Incremental classifier and representation learning. In Proceedings of the IEEE conference on Computer Vision and Pattern Recognition, 2001-2010.
+
+Roady, R.; Hayes, T. L.; Vaidya, H.; and Kanan, C. 2020. Stream-51: Streaming Classification and Novelty Detection From Videos. In The IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops.
+
+Rubin, D. B. 1974. Estimating causal effects of treatments in randomized and nonrandomized studies. Journal of educational Psychology, 66(5): 688.
+
+Rumelhart, D. E.; Hinton, G. E.; and Williams, R. J. 1986. Learning representations by back-propagating errors. nature, 323(6088): 533-536.
+
+She, Q.; Feng, F.; Hao, X.; Yang, Q.; Lan, C.; Lomonaco, V.; Shi, X.; Wang, Z.; Guo, Y.; Zhang, Y.; Qiao, F.; and Chan, R. H. M. 2020. OpenLORIS-Object: A Robotic Vision Dataset and Benchmark for Lifelong Deep Learning. In 2020 International Conference on Robotics and Automation (ICRA), 4767-4773.
+
+Shi, C.; Blei, D.; and Veitch, V. 2019. Adapting neural networks for the estimation of treatment effects. Advances in neural information processing systems, 32.
+
+Srinivasan, T.; Chang, T.-Y.; Alva, L. L. P.; Chochlakis, G.; Rostami, M.; and Thomason, J. 2022. CLiMB: A Continual Learning Benchmark for Vision-and-Language Tasks. arXiv preprint arXiv:2206.09059.
+
+Srivastava, N.; Hinton, G.; Krizhevsky, A.; Sutskever, I.; and Salakhutdinov, R. 2014. Dropout: A Simple Way to Prevent Neural Networks from Overfitting. Journal of Machine Learning Research, 15(56): 1929-1958.
+
+Stojanov, S.; Mishra, S.; Thai, N. A.; Dhanda, N.; Humayun, A.; Yu, C.; Smith, L. B.; and Rehg, J. M. 2019. Incremental Object Learning From Contiguous Views. In Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR).
+
+Vaswani, A.; Shazeer, N.; Parmar, N.; Uszkoreit, J.; Jones, L.; Gomez, A. N.; Kaiser, Ł.; and Polosukhin, I. 2017. Attention is all you need. Advances in neural information processing systems, 30.
+
+Yu, F.; and Koltun, V. 2015. Multi-scale context aggregation by dilated convolutions. arXiv preprint arXiv:1511.07122.
+
+Yue, Z.; Wang, Y.; Duan, J.; Yang, T.; Huang, C.; Tong, Y.; and Xu, B. 2022. Ts2vec: Towards universal representation of time series. In Proceedings of the AAAI Conference on Artificial Intelligence, volume 36, 8980-8987.
+
+Zenke, F.; Poole, B.; and Ganguli, S. 2017. Continual learning through synaptic intelligence. In International Conference on Machine Learning, 3987-3995. PMLR.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/pKyB5wMnTiy/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/pKyB5wMnTiy/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..5145eb1c2d930068789b4dc1df8202f9539b22eb
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/pKyB5wMnTiy/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,84 @@
+§ MODELING UPLIFT FROM OBSERVATIONAL TIME-SERIES IN CONTINUAL SCENARIOS
+
+Anonymous submission
+
+§ ABSTRACT
+
+As the importance of causality in machine learning grows, we expect models to learn the correct causal mechanisms so that they remain robust even under distribution shifts. Since most prior benchmarks have focused on vision and language tasks, domain and temporal shifts in causal inference tasks have not been well explored. To this end, we introduce the UpliftCRUD dataset for modeling uplift in continual learning scenarios. We build the dataset from CRUD log data and construct continual learning tasks under temporal shifts and out-of-domain scenarios.
+
+§ INTRODUCTION
+
+Uplift is defined as the Individual Treatment Effect (ITE), but its evaluation metric differs from that of other causal tasks (Radcliffe and Surry 1999). Following Radcliffe and Simpson (2008), individuals can be segmented into four groups along two axes: whether they received treatment and how they responded to it. Sure Things will stay whether or not they receive treatment, and Lost Causes will leave in either case. Persuadables, however, are likely to stay only if they receive the treatment, whereas Sleeping Dogs would be annoyed by it and eventually leave. The main goal is thus to identify as many Persuadables as possible and to avoid treating Sleeping Dogs.
+
+In practice, the bottlenecks of causal models are data availability, scalability, and distribution shifts. In the many cases where randomized controlled trials (RCTs) are infeasible, practitioners are given observational data. However, even with a large set of covariates, there may still be unobserved confounders, and the curse of dimensionality arises. This is particularly problematic in causal inference, as the chance of violating the positivity assumption increases. In addition, distribution changes over time and across domains result in incorrect validation and, eventually, degradation of the model.
+
+To address the aforementioned issues with causality even in high-dimensional spaces, and to bridge the gap between research and practice environments, we constructed the UpliftCRUD dataset ${}^{1}$ , a real-world uplift dataset from mobile game users. The task is to predict the uplift from push notifications by recognizing patterns in each user's CRUD history. A model must learn the underlying causal mechanisms and continuously adapt to distributions that vary over time and across games. To the best of our knowledge, UpliftCRUD is the first uplift dataset with time series under domain shifts.
+
+§ BACKGROUND
+
+Causal inference and its notations. Potential outcomes framework (Rubin 1974) defines the causal effect as the difference between two potential outcomes $Y\left( 1\right) - Y\left( 0\right)$ : when receiving treatment $\left( {T = 1}\right)$ and under control $\left( {T = 0}\right)$ . The fundamental problem of causal inference (Holland 1986) states that either ${Y}_{i}\left( 1\right)$ or ${Y}_{i}\left( 0\right)$ is observable for each unit indexed by $i \in \{ 1,\ldots ,n\}$ , and the unobserved outcome is called counterfactual. To estimate ITE, or uplift, ${u}_{i} \mathrel{\text{ := }}$ ${Y}_{i}\left( 1\right) - {Y}_{i}\left( 0\right)$ , we, therefore, model Conditional Average Treatment Effect (CATE), i.e., $u\left( \mathbf{X}\right) \mathrel{\text{ := }} \mathbb{E}\left\lbrack {Y\left( 1\right) - Y\left( 0\right) \mid \mathbf{X}}\right\rbrack$ . Among the assumptions needed to identify CATE, two assumptions are crucial and often likely to be violated (Pearl 2010; Neal 2020): unconfoundedness, i.e., $Y ⫫ T \mid \mathbf{X}$ , and positivity, i.e., $P\left( {T \mid \mathbf{X} = \mathbf{x}}\right) > 0,\forall \mathbf{x} : P\left( {\mathbf{X} = \mathbf{x}}\right) > 0$ .
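As an illustrative view of these definitions, a minimal two-model ("T-learner") sketch of CATE estimation over a discrete covariate; the frequency-based outcome model below is our stand-in for a real learner, not part of the paper:

```python
from collections import defaultdict

def fit_outcome_model(rows):
    """Empirical P(Y=1 | X=x): mean outcome within each covariate value x."""
    totals, counts = defaultdict(float), defaultdict(int)
    for x, y in rows:
        totals[x] += y
        counts[x] += 1
    return {x: totals[x] / counts[x] for x in counts}

def t_learner_uplift(data):
    """data: iterable of (x, t, y) triples.
    Returns u(x) = E[Y | X=x, T=1] - E[Y | X=x, T=0] for each x seen in both arms."""
    mu1 = fit_outcome_model([(x, y) for x, t, y in data if t == 1])
    mu0 = fit_outcome_model([(x, y) for x, t, y in data if t == 0])
    return {x: mu1[x] - mu0[x] for x in mu1 if x in mu0}

# Toy data: x=1 users are "Persuadables" (respond only under treatment),
# x=0 users are "Lost Causes" (never respond).
data = [(1, 1, 1), (1, 1, 1), (1, 0, 0), (1, 0, 0),
        (0, 1, 0), (0, 1, 0), (0, 0, 0), (0, 0, 0)]
```

Here the toy data yields an uplift of 1.0 for x=1 and 0.0 for x=0, matching the Persuadable / Lost Cause segmentation above.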
+
+Time-series modeling. A time series is a sequence of discrete-time data points. Many previous works have dealt with regular time series, but in this paper we mainly focus on irregular time series, where the intervals between consecutive data points are not constant. RNNs (Rumelhart, Hinton, and Williams 1986; Hochreiter and Schmidhuber 1997; Cho et al. 2014), TCNs (Bai, Kolter, and Koltun 2018) with dilated convolutions (Yu and Koltun 2015), and Transformers (Vaswani et al. 2017) have become popular choices for handling irregular time-series data. However, there is no one-size-fits-all augmentation strategy for irregular time series (Yue et al. 2022) other than dropout (Srivastava et al. 2014) or masked autoencoding (Devlin et al. 2018; He et al. 2022).
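For intuition on why dilated convolutions suit long sequences, a minimal causal 1-D dilated convolution in pure Python (our illustration; TCNs stack such layers with exponentially growing dilation to enlarge the receptive field):

```python
def dilated_causal_conv1d(x, w, dilation=1):
    """y[t] = sum_k w[k] * x[t - k*dilation]; indices before the start are
    zero-padded, so the output at time t never looks into the future."""
    y = []
    for t in range(len(x)):
        s = 0.0
        for k, wk in enumerate(w):
            idx = t - k * dilation
            s += wk * (x[idx] if idx >= 0 else 0.0)
        y.append(s)
    return y
```

With a kernel of size 2 and dilation 2, each output mixes the current input with the one two steps back, skipping the intermediate step.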
+
+Continual learning. Continual Learning (CL) aims to learn new tasks effectively and adapt a model to distribution shifts over time while minimizing performance degradation on previously learned scenarios, known as catastrophic forgetting (Kirkpatrick et al. 2017). It is also infeasible in practice to fully retrain the model whenever new data becomes available, due to training costs or the unavailability of previous data. Therefore, recent algorithms aim to accumulate knowledge and reuse it in future scenarios without forgetting (iCaRL (Rebuffi et al. 2017), A-GEM (Chaudhry et al. 2019), EWC (Kirkpatrick et al. 2017),
+
+${}^{1}$ The dataset will be released to the public in early 2023.
+
+
+Figure 1: Illustration of dataset construction.
+
+SI (Zenke, Poole, and Ganguli 2017)). Moreover, causal inference tasks require the model to capture the causal mechanism over distributional shifts, on which existing CL algorithms have not focused.
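For concreteness, the EWC regularizer mentioned above penalizes movement of the parameters that were important for previous tasks; a schematic sketch with a diagonal Fisher approximation (our notation, not the original implementation):

```python
def ewc_penalty(theta, theta_star, fisher, lam):
    """EWC regularizer: (lam / 2) * sum_i F_i * (theta_i - theta*_i)^2.
    theta_star holds the parameters after the previous task; fisher is the
    diagonal Fisher information, weighting each parameter's importance."""
    return 0.5 * lam * sum(f * (t - ts) ** 2
                           for t, ts, f in zip(theta, theta_star, fisher))
```

The penalty is added to the new task's loss, so parameters with high Fisher values stay close to their previous-task values while unimportant ones remain free to move.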
+
+§ PREVIOUS BENCHMARKS
+
+Benchmarks for uplift modeling. Researchers on uplift have relied on (semi-)synthetic data for testing algorithms, since there the counterfactuals are known. As of now, the largest observational benchmark is the Criteo dataset (Eustache et al. 2018), with 12 static features from $\sim {14}\mathrm{M}$ real-world users. Thus far, there has been little motivation to use deep learning, and therefore related works have been restricted to smaller models (#params $< 1\mathrm{\;K}$ ) or other ML algorithms. With regard to time series, a subset of MIMIC II/III (Johnson et al. 2016) has been used for causal discovery or inference. See Moraffah et al. (2021) for a comprehensive review.
+
+Benchmarks for CL. Benchmarks in various fields and tasks with CL scenarios have been introduced, e.g., object recognition in robotics (Fanello et al. 2013; Lomonaco and Maltoni 2017; She et al. 2020), classification tasks in various domains on images (Rebuffi, Bilen, and Vedaldi 2017; Lake, Salakhutdinov, and Tenenbaum 2015; He, Shen, and Cui 2021), videos (Roady et al. 2020), 3D objects (Stojanov et al. 2019), and natural language (Hussain et al. 2021; Srinivasan et al. 2022). However, domain or temporal shifts in causal inference tasks have not been well explored.
+
+§ UPLIFTCRUD DATASET
+
+Background. We collected data from a Backend-as-a-Service (BaaS) company specializing in mobile games, allowing game developers to quickly release their apps without backend servers of their own. One of their features is sending a push notification to all users at the same time.
+
+Construction. In a typical observational dataset, the treatment is not assigned randomly. In our data, however, only a treatment group exists, as the treatment (push message) is sent to the whole population simultaneously. To circumvent this problem, for the train set we sampled a pseudo-control group from exactly one week prior to the push, to eliminate time-of-day and weekday effects. We also introduced the concept of a no push area, a window from 12 hours before to 6 hours after the push in which no other pushes may occur, to prevent interference from other pushes. Still, some users appear in both groups. For the test set, we randomly split those overlapping users into either group to simulate an RCT.
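The sampling rule above can be sketched as follows; the function and constant names are ours, not from the dataset pipeline:

```python
from datetime import datetime, timedelta

NO_PUSH_BEFORE = timedelta(hours=12)  # window start: 12h before the push
NO_PUSH_AFTER = timedelta(hours=6)    # window end: 6h after the push

def in_no_push_area(t, other_pushes):
    """True iff no other push falls in the [-12h, +6h] window around t."""
    return all(not (t - NO_PUSH_BEFORE <= p <= t + NO_PUSH_AFTER)
               for p in other_pushes)

def pseudo_control_time(push_time):
    """Sample the pseudo-control snapshot exactly one week before the push,
    so time-of-day and weekday effects cancel."""
    return push_time - timedelta(weeks=1)

push = datetime(2022, 4, 18, 20, 0)  # a hypothetical push timestamp
ctrl = pseudo_control_time(push)     # same weekday, same hour, one week earlier
```

Sampling exactly one week earlier keeps the weekday and hour identical by construction, which is the point of the rule.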
+
+Tasks. The dataset consists of three games (A, B, and C) with a total of ${16.7}\mathrm{M}$ lines of CRUD logs from 5,360 users ${}^{2}$ . Each sample consists of a triple $\left( {{\mathbf{X}}_{i},{T}_{i},{Y}_{i}}\right)$ , where ${\mathbf{X}}_{i}$ is a sequence of categorical variables with corresponding timestamps, and ${T}_{i},{Y}_{i} \in \{ 0,1\}$ are binary indicators of the treatment and of whether the user logs in within three hours. We experimented on uplift modeling in the proposed tasks:
+
+ * In-distribution (ID): train on game A (APR, MAY) and test on a ${20}\%$ random-split holdout set.
+
+ * Temporal shift (TS): train on game A (APR, MAY) and test on game A (JUN).
+
+ * Out-of-distribution (OOD): train on game A (APR, MAY) and test on game B with fine-tuning (OOD w/) or on game C without fine-tuning (OOD w/o).
+
+§ EXPERIMENTS
+
+| Model | Ckpt | ID | TS | OOD w/ | OOD w/o |
+| --- | --- | --- | --- | --- | --- |
+| Dragon | VAL | .091/.056 | .006/.003 | .118/.038 | .037/.023 |
+| Dragon | MAX | X | .112/.074 | .372/.082 | .123/.081 |
+| Siamese | VAL | .145/.062 | -.036/-.011 | .154/.057 | -.057/-.030 |
+| Siamese | MAX | X | .249/.067 | .207/.075 | .036/.022 |
+| $P\left( {Y = 1}\right)$ | X | 11.9% | 12.2% | 5.9% | 22.4% |
+
+Table 1: Baseline results. VAL denotes the best checkpoint on the holdout set, and MAX denotes the best metric over the entire training run; the gap between the two illustrates the discrepancy in performance.
+
+We used Dragonnet (Shi, Blei, and Veitch 2019) and a Siamese network (Mouloud, Olivier, and Ghaith 2020), both with TCN backbones, and EWC for CL. Table 1 shows Qini coefficients (left) and the area under the uplift curve (AUUC, right) (Devriendt et al. 2020), both of which measure incremental gains. The difference between VAL and MAX can be attributed to the difficulty of proper validation under distribution shifts. Also, the performance in TS and OOD w/o dropped sharply. We conjecture that this is because the model overfits the train set without learning the causal mechanisms; a model that had learned them should perform equally well in ID and OOD.
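For reference, both metrics derive from an incremental-gains (uplift) curve; a minimal sketch following one common convention (Devriendt et al. 2020 survey several variants), with our own function names:

```python
def uplift_curve(scores, treatments, outcomes):
    """Incremental-gains curve: rank units by predicted uplift (descending);
    at each depth k, estimate gains as R_t(k) - R_c(k) * N_t(k) / N_c(k),
    where R and N are cumulative responders and unit counts per group."""
    order = sorted(range(len(scores)), key=lambda i: -scores[i])
    n_t = n_c = r_t = r_c = 0
    curve = []
    for i in order:
        if treatments[i]:
            n_t += 1
            r_t += outcomes[i]
        else:
            n_c += 1
            r_c += outcomes[i]
        curve.append(r_t - r_c * (n_t / n_c) if n_c else r_t)
    return curve

def auuc(scores, treatments, outcomes):
    """Area under the uplift curve as a simple mean over ranking depths."""
    c = uplift_curve(scores, treatments, outcomes)
    return sum(c) / len(c)
```

A model that ranks all true Persuadables first attains the maximal area; a random ranking yields a flat curve near zero.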
+
+§ CONCLUSION AND FUTURE WORK
+
+In this paper, we propose uplift tasks and the UpliftCRUD dataset, combining causal inference with CL scenarios. We demonstrate that naïvely applying existing methods may fail, as uplift modeling tries to predict future behaviors based on historical data. All observational datasets have inherent biases; identifying causal relationships and eliminating undesirable effects would be an important follow-up research topic. We believe that learning causal mechanisms that are invariant over time is crucial towards general-level AI, and we hope that the dataset will contribute to developing such algorithms.
+
+${}^{2}$ More data will be included in the public release.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/qjh2DRM0JXh/Initial_manuscript_md/Initial_manuscript.md b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/qjh2DRM0JXh/Initial_manuscript_md/Initial_manuscript.md
new file mode 100644
index 0000000000000000000000000000000000000000..012dcdb67ed15193849f4631257f4b52fe46a0a3
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/qjh2DRM0JXh/Initial_manuscript_md/Initial_manuscript.md
@@ -0,0 +1,81 @@
+# From IID to the Independent Mechanisms assumption in continual learning
+
+Author Name
+
+Affiliation
+
+Affiliation Line 2
+
+name @example.com
+
+Current machine learning algorithms are successful in learning clearly defined tasks from large i.i.d. data. Continual learning (CL) requires learning without i.i.d.-ness and developing algorithms capable of knowledge retention and transfer, the latter of which can be boosted through systematic generalization. Dropping the i.i.d. assumption requires replacing it with another hypothesis. While there are several candidates, here we advocate that the independent mechanisms (IM) assumption (Schölkopf et al. 2012) is a useful hypothesis for representing knowledge in a form that makes it easy to adapt to new tasks in CL. Specifically, we review several types of distribution shifts that are common in CL and point out in which ways a system that represents knowledge in the form of causal modules may outperform monolithic counterparts in CL. Intuitively, the efficacy of an IM-based solution emerges because: (i) causal modules learn mechanisms invariant across domains; (ii) if causal mechanisms must be updated, modularity can enable efficient and sparse updates.
+
+Setup. We consider an observation space consisting of variables $X$ and $T$ . We think of $T$ as a subset of observed input variables that carry information about the task to be performed (e.g. operations in a math equation), while $X$ carries contextual information (e.g. input digits) that can be thought of as an argument to the underlying causal mechanisms. Here we assume the setting of supervised learning, where the label $Y$ must be predicted from $X$ and $T$ ; each observation is a tuple $(X, Y, T)$ . Observations are sampled from the joint that factorizes as ${p}_{t}\left( {Y, X, T}\right) = p\left( {Y \mid X, T}\right) {p}_{t}\left( {X, T}\right) =$ $\mathop{\sum }\limits_{Z}p\left( {Y \mid X, T, Z}\right) {p}_{t}\left( {X, T, Z}\right)$ , where $Z$ denotes a set of potentially unobserved attributes and $t$ is the time/task index. Such a setting can be instantiated in the math equations domain, similar to Mittal, Bengio, and Lajoie (2022): ${X}_{1},{X}_{2} \sim$ ${R}^{\left\lbrack -1,1\right\rbrack }$ , and $T$ describes the math operations to be performed (+/-/* etc.), with one or many operations per equation.
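The instantiation above can be sketched as a generative process; the operation set and names below are our illustrative choices:

```python
import random

# T selects a mechanism; X supplies its arguments. The mapping from (X, T)
# to Y, i.e. p(Y | X, T), is the same in every task/environment.
MECHANISMS = {
    "add": lambda x1, x2: x1 + x2,
    "sub": lambda x1, x2: x1 - x2,
    "mul": lambda x1, x2: x1 * x2,
}

def sample_observation(ops=tuple(MECHANISMS)):
    """Draw one (X, T, Y): X = (x1, x2) uniform on [-1, 1]^2, T a math
    operation, Y produced by the invariant mechanism."""
    x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
    t = random.choice(ops)
    return (x1, x2), t, MECHANISMS[t](x1, x2)
```

Shifting which operations are sampled (a shift in p(T)) or the input range (a shift in p(X)) changes the joint while leaving the mechanisms untouched, which is exactly the IM picture.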
+
+The independent mechanisms (IM) assumption states that in the causal factorization of the joint, the mechanism $p\left( {Y \mid X, T, Z}\right)$ contains no information about the causes ${p}_{t}\left( {X, T, Z}\right)$ , and vice versa (Schölkopf et al. 2012). Hence, the true causal mechanism $p\left( {Y \mid X, T, Z}\right)$ is invariant across tasks and environments. For simplicity, here we assume the independence of $X, Z$ and $T : {p}_{t}\left( {X, T, Z}\right) = {p}_{t}\left( X\right) {p}_{t}\left( T\right) {p}_{t}\left( Z\right)$ .
+
+The IM assumption can be extended to the mechanism $p\left( {Y \mid X, T}\right)$ , which can be thought of as a composition of autonomous modules that operate independently of each other (Parascandolo et al. 2018; Goyal et al. 2019). That is, it can be approximated with a learnable function ${f}_{\theta }\left( \cdot \right)$ that is compositional. The most general definition of compositionality is that the meaning of the whole is a function of the meanings of its parts (Hirst 1992). We envision a model ${f}_{\theta }\left( \cdot \right)$ parameterized with a set of $M$ modules that compete with each other to explain the current observation. The benefit of such a system for CL is discussed next.
+
+Different distribution shifts. Compositional solutions can be useful under different types of distribution shifts in CL.
+
+Domain shift: a shift in the joint $p\left( {X, Y, T}\right)$ caused by a shift in $p\left( X\right)$ . Domain shift can be leveraged for learning causal mechanisms, that is, mechanisms invariant across domains, under some structural assumptions (e.g. sparse change in the underlying graph). This principle is used by Arjovsky et al. (2019) for learning invariant (causal) representations. Perry, von Kügelgen, and Schölkopf (2022) showed that domain shifts can provide a useful learning signal for identifying causal structures if the shift in the underlying causal graph is sparse. Importantly, domain annotation is needed for such learning, which is natural in CL: every detected distribution shift signals a new domain. Once the true mechanism is learned, faster generalization to new domains is possible. Importantly, leveraging domain shift for learning causal mechanisms likely requires storing samples from seen domains in a replay buffer (Rolnick et al. 2019).
+
+New tasks: a shift in the joint $p\left( {X, Y, T}\right)$ caused by a shift in $p\left( T\right)$ that introduces a new causal mechanism (e.g. a math operation) to be learned by a new module. Existing CL methods like regularization (Kirkpatrick et al. 2017) or replay (Rolnick et al. 2019) applied to monolithic networks may perform on par with modular solutions under this shift in terms of forgetting. The latter, however, should achieve better transfer and faster learning under the assumption that tasks share mechanisms (e.g. "+" is used in combination with two other distinct operations in two different tasks). Additionally, monolithic architectures have been shown to lose plasticity throughout CL (Dohare, Mahmood, and Sutton 2021), a drawback that may be mitigated through modularity.
+
+---
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+---
+
+Hidden shift: a shift in the joint $p\left( {X, Y, T}\right)$ caused by a shift in $p\left( Z\right)$ . Consider an example where the task is to interpret the meaning of a nodding gesture at some unknown geographical location. When moving, e.g., from Canada to India, the meaning of the nodding gesture can change while the meanings of other gestures (supposedly) remain identical. In the example of math equations, a new environment could hypothetically change the meaning of the multiplication operation to, say, subtraction while leaving other operations unchanged. Since $Z$ is unobserved, this drift requires a sparse update of a single mechanism without affecting the other operations. Standard CL methods are likely to underperform in this setting, as old and new tasks become contradictory.
+
+Data amount shift: knowledge about previously seen mechanisms needs to be updated as more training data becomes available. A modular architecture may be able to sparsely update only the affected modules, while a monolithic solution, with entangled mechanisms, would suffer from forgetting if no measures are taken to prevent it.
+
+Spurious correlation shift: attributes correlate under ${p}_{t}$ but not under ${p}_{k}, t \neq k$ . For example, the operation "+" may have been seen in the same equation as "-" in task $t$ , which may cause the routing mechanism of a modular solution to mistakenly associate the high-level variable "+" with the mechanism of subtraction. For modular solutions this shift might require updating only the routing mechanism, while a monolithic one would require updating the whole network. The problem of spurious features in the context of CL has recently been studied by Lesort (2022).
+
+How to learn modules representing the true causal mechanisms is hence an important open question. While several attempts have been made to design systems capable of discovering the true underlying data-generative modules that comprise $p\left( {Y \mid X, T}\right)$ (Goyal et al. 2019, 2021; Parascandolo et al. 2018), there is no clear recipe yet. Several inductive biases have been proposed that facilitate the learning of independent composable mechanisms, including competition (Parascandolo et al. 2018), information bottlenecks such as attention (Goyal et al. 2019) or functional bottlenecks (i.e. limiting the number of inputs a module can take) (Goyal et al. 2021; Ostapenko et al. 2022), and restricting modular communication to discrete variables (Liu et al. 2021).
+
+Preliminary result with a Mixture of Experts (MoE). Here, we design a simple attention-based MoE model and train it continually on two streams of 5 and 7 tasks. In both streams $X$ is sampled uniformly from ${R}^{\left\lbrack -1,1\right\rbrack }$ , and the corresponding $T$ (here the task description is represented by a single variable) is sampled uniformly from a set of predefined math operations. Labels $Y$ are generated by applying the sampled mechanisms $T$ to the inputs $X$ . Stream 1 represents the new-task shift (i.e. new operations are introduced, with operations overlapping across tasks) and consists of 5 tasks $\left( {t = 0\ldots 5}\right)$ . The first 5 tasks of Stream 2 are identical to Stream 1; tasks 5 to 7 simulate the hidden shift. For example, the operation encoded in the input of $t = 5$ is addition, which is identical to $t = 0$ and $t = 1$ , but the meaning of addition has changed from ${x}_{1} + {x}_{2}$ to $\left( {{x}_{1} + {x}_{2}}\right) /5$ , which is reflected in the training data of these tasks. The modular continual learner (MCL) receives as input a set of 3 entities: ${x}_{1}$ , ${x}_{2}$ and an operation (e.g. addition, subtraction, multiplication etc.). All three variables are first projected into a vector space, whereby we use a fixed embedding table for the operations and an encoder, trained only during the first task, for ${x}_{1}$ and ${x}_{2}$ . MCL performs module selection using a key-value attention mechanism and a functional bottleneck similar to NPS (Goyal et al. 2021); MCL is an adapted version of NPS for CL. We formulate these tasks as regression problems and test on novel randomly sampled $x$ ’s. We use 20,000 samples per task for training and 2,000 for testing.
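A stripped-down sketch of the key-value routing described above (pure Python, hard top-1 selection; all names are ours, and this is far simpler than the actual NPS/MCL model):

```python
import math

def softmax(zs):
    m = max(zs)
    es = [math.exp(z - m) for z in zs]
    s = sum(es)
    return [e / s for e in es]

class ModularLearner:
    """Toy key-value routing: each module owns a key vector; the task
    embedding acts as the query, and the best-matching module fires."""
    def __init__(self, keys, modules):
        self.keys = keys        # one key vector per module
        self.modules = modules  # one callable (mechanism) per module

    def forward(self, query, x1, x2):
        # Dot-product attention scores between the query and each key.
        scores = [sum(q * k for q, k in zip(query, key)) for key in self.keys]
        weights = softmax(scores)
        j = max(range(len(weights)), key=weights.__getitem__)  # hard top-1 routing
        return self.modules[j](x1, x2)

# Two modules specialized on addition and multiplication:
mcl = ModularLearner(keys=[(1.0, 0.0), (0.0, 1.0)],
                     modules=[lambda a, b: a + b, lambda a, b: a * b])
```

Because routing and modules are decoupled, a shift in one mechanism can in principle be absorbed by updating a single module (or only the router), which is the sparsity argument made in the text.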
+
+
+
+Figure 1: Stream 1.
+
+
+
+Figure 2: Stream 2.
+
+In Figure 1 we plot the average mean squared error (MSE) over all tasks after learning each task incrementally. Although MCL has a much larger mean MSE at the beginning, it reaches an MSE comparable to EWC (Kirkpatrick et al. 2017) by the end of the sequence. In Figure 2 we measure MSE averaged over the current state of the world on Stream 2, i.e. if the meaning of "+" changes at $t = 5$ from ${x}_{1} + {x}_{2}$ to $\left( {{x}_{1} + {x}_{2}}\right) /5$ , this change is incorporated into the test sets after this task (i.e. the addition operation in the test sets of all tasks is replaced with addition followed by division by 5). Here, we observe that only MCL is able to perform well on this stream. EWC performs well up until ${T}_{4}$ , as it is able to alleviate forgetting. After ${T}_{4}$ , when the mechanism shifts, EWC's regularization strategy, aimed at reducing plasticity, prevents the model from incorporating knowledge about the shift in the mechanism reflected in the new training data of tasks 5 to 7. MCL is able to sparsely update only the modules that are specialized on the shifted mechanism. Importantly, MCL can alleviate forgetting solely by routing samples to the correct modules.
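The "current state of the world" evaluation can be sketched as follows; the mechanism table and names are our illustration of the stream specification, not the experiment code:

```python
# Hypothetical Stream 2 spec: from task 5 onward, the latent context changes
# the meaning of "add" from x1 + x2 to (x1 + x2) / 5. Test sets for ALL tasks
# are regenerated with the mechanism that holds after the last task seen.
def current_mechanism(op, tasks_seen):
    """Return the mechanism for `op` under the current state of the world."""
    if op == "add" and tasks_seen >= 5:
        return lambda x1, x2: (x1 + x2) / 5
    return {"add": lambda a, b: a + b, "sub": lambda a, b: a - b}[op]
```

This is why a frozen or heavily regularized model keeps predicting the pre-shift addition on post-shift test sets, while a learner that sparsely updated the shifted module tracks the new ground truth.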
+
+Conclusion. We advocate for the usefulness of the IM hypothesis in CL (it is not mutually exclusive with i.i.d.-ness). This may open a door for developing algorithms with better transfer and efficiency. We point out the potential advantages of such solutions under different distribution shifts and show in simple toy experiments that the IM principle can address some problems of CL in practice. Open challenges include determining useful inductive biases and further assumptions for designing modular solutions beyond MoE, where causal mechanisms can be discovered when modules are applied in superposition, resulting in a more fine-grained task decomposition (Ostapenko et al. 2022).
+
+## References
+
+Arjovsky, M.; Bottou, L.; Gulrajani, I.; and Lopez-Paz, D. 2019. Invariant risk minimization. arXiv preprint arXiv:1907.02893.
+
+Dohare, S.; Mahmood, A. R.; and Sutton, R. S. 2021. Continual backprop: Stochastic gradient descent with persistent randomness. arXiv preprint arXiv:2108.06325.
+
+Goyal, A.; Didolkar, A.; Ke, N. R.; Blundell, C.; Beaudoin, P.; Heess, N.; Mozer, M.; and Bengio, Y. 2021. Neural Production Systems. CoRR, abs/2103.01937.
+
+Goyal, A.; Lamb, A.; Hoffmann, J.; Sodhani, S.; Levine, S.; Bengio, Y.; and Schölkopf, B. 2019. Recurrent independent mechanisms. arXiv preprint arXiv:1909.10893.
+
+Hirst, G. 1992. Semantic interpretation and the resolution of ambiguity. Cambridge University Press.
+
+Kirkpatrick, J.; Pascanu, R.; Rabinowitz, N.; Veness, J.; Desjardins, G.; Rusu, A. A.; Milan, K.; Quan, J.; Ramalho, T.; Grabska-Barwinska, A.; et al. 2017. Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13): 3521-3526.
+
+Lesort, T. 2022. Continual Feature Selection: Spurious Features in Continual Learning. arXiv preprint arXiv:2203.01012.
+
+Liu, D.; Lamb, A. M.; Kawaguchi, K.; ALIAS PARTH GOYAL, A. G.; Sun, C.; Mozer, M. C.; and Bengio, Y. 2021. Discrete-valued neural communication. Advances in Neural Information Processing Systems, 34: 2109-2121.
+
+Mittal, S.; Bengio, Y.; and Lajoie, G. 2022. Is a Modular Architecture Enough? arXiv preprint arXiv:2206.02713.
+
+Ostapenko, O.; Rodriguez, P.; Lacoste, A.; and Charlin, L. 2022. Attention for Compositional Modularity. In NeurIPS'22 Workshop on All Things Attention: Bridging Different Perspectives on Attention.
+
+Parascandolo, G.; Kilbertus, N.; Rojas-Carulla, M.; and Schölkopf, B. 2018. Learning independent causal mechanisms. In International Conference on Machine Learning, 4036-4044. PMLR.
+
+Perry, R.; von Kügelgen, J.; and Schölkopf, B. 2022. Causal Discovery in Heterogeneous Environments Under the Sparse Mechanism Shift Hypothesis. arXiv preprint arXiv:2206.02013.
+
+Rolnick, D.; Ahuja, A.; Schwarz, J.; Lillicrap, T.; and Wayne, G. 2019. Experience replay for continual learning. In Advances in Neural Information Processing Systems.
+
+Schölkopf, B.; Janzing, D.; Peters, J.; Sgouritsa, E.; Zhang, K.; and Mooij, J. 2012. On causal and anticausal learning. arXiv preprint arXiv:1206.6471.
\ No newline at end of file
diff --git a/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/qjh2DRM0JXh/Initial_manuscript_tex/Initial_manuscript.tex b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/qjh2DRM0JXh/Initial_manuscript_tex/Initial_manuscript.tex
new file mode 100644
index 0000000000000000000000000000000000000000..a0b835f6f5bbbaaafdac31c34cee3bcccb6db379
--- /dev/null
+++ b/papers/AAAI/AAAI 2023/AAAI 2023 Bridge/AAAI 2023 Bridge CCBridge/qjh2DRM0JXh/Initial_manuscript_tex/Initial_manuscript.tex
@@ -0,0 +1,47 @@
+§ FROM IID TO THE INDEPENDENT MECHANISMS ASSUMPTION IN CONTINUAL LEARNING
+
+Author Name
+
+Affiliation
+
+Affiliation Line 2
+
+name @example.com
+
+Current machine learning algorithms are successful in learning clearly defined tasks from large i.i.d. data. Continual learning (CL) requires learning without i.i.d.-ness and developing algorithms capable of knowledge retention and transfer, the latter of which can be boosted through systematic generalization. Dropping the i.i.d. assumption requires replacing it with another hypothesis. While there are several candidates, here we advocate that the independent mechanisms (IM) assumption (Schölkopf et al. 2012) is a useful hypothesis for representing knowledge in a form that makes it easy to adapt to new tasks in CL. Specifically, we review several types of distribution shifts that are common in CL and point out in which ways a system that represents knowledge in the form of causal modules may outperform monolithic counterparts in CL. Intuitively, the efficacy of an IM-based solution emerges because: (i) causal modules learn mechanisms invariant across domains; (ii) if causal mechanisms must be updated, modularity can enable efficient and sparse updates.
+
+Setup. We consider an observation space consisting of variables $X$ and $T$ . We think of $T$ as a subset of observed input variables that carry information about the task to be performed (e.g. operations in a math equation), while $X$ carries contextual information (e.g. input digits) that can be thought of as an argument to the underlying causal mechanisms. Here we assume the setting of supervised learning, where the label $Y$ must be predicted from $X$ and $T$ ; each observation is a tuple $(X, Y, T)$ . Observations are sampled from the joint that factorizes as ${p}_{t}\left( {Y,X,T}\right) = p\left( {Y \mid X,T}\right) {p}_{t}\left( {X,T}\right) =$ $\mathop{\sum }\limits_{Z}p\left( {Y \mid X,T,Z}\right) {p}_{t}\left( {X,T,Z}\right)$ , where $Z$ denotes a set of potentially unobserved attributes and $t$ is the time/task index. Such a setting can be instantiated in the math equations domain, similar to Mittal, Bengio, and Lajoie (2022): ${X}_{1},{X}_{2} \sim$ ${R}^{\left\lbrack -1,1\right\rbrack }$ , and $T$ describes the math operations to be performed (+/-/* etc.), with one or many operations per equation.
+
+The independent mechanisms (IM) assumption states that in the causal factorization of the joint, the mechanism $p\left( {Y \mid X,T,Z}\right)$ contains no information about the causes ${p}_{t}\left( {X,T,Z}\right)$ , and vice versa (Schölkopf et al. 2012). Hence, the true causal mechanism $p\left( {Y \mid X,T,Z}\right)$ is invariant across tasks and environments. For simplicity, here we assume the independence of $X,Z$ and $T : {p}_{t}\left( {X,T,Z}\right) = {p}_{t}\left( X\right) {p}_{t}\left( T\right) {p}_{t}\left( Z\right)$ .
+
+The IM assumption can be extended to the mechanism $p\left( {Y \mid X,T}\right)$ , which can be thought of as a composition of autonomous modules that operate independently of each other (Parascandolo et al. 2018; Goyal et al. 2019). That is, it can be approximated with a learnable function ${f}_{\theta }\left( \cdot \right)$ that is compositional. The most general definition of compositionality is that the meaning of the whole is a function of the meanings of its parts (Hirst 1992). We envision a model ${f}_{\theta }\left( \cdot \right)$ parameterized with a set of $M$ modules that compete with each other to explain the current observation. The benefit of such a system for CL is discussed next.
+
+Different distribution shifts. Compositional solutions can be useful under different types of distribution shifts in CL.
+
+Domain shift: a shift in the joint $p\left( {X,Y,T}\right)$ caused by a shift in $p\left( X\right)$ . Domain shift can be leveraged for learning causal mechanisms, that is, mechanisms invariant across domains, under some structural assumptions (e.g. sparse change in the underlying graph). This principle is used by Arjovsky et al. (2019) for learning invariant (causal) representations. Perry, von Kügelgen, and Schölkopf (2022) showed that domain shifts can provide a useful learning signal for identifying causal structures if the shift in the underlying causal graph is sparse. Importantly, domain annotation is needed for such learning, which is natural in CL: every detected distribution shift signals a new domain. Once the true mechanism is learned, faster generalization to new domains is possible. Importantly, leveraging domain shift for learning causal mechanisms likely requires storing samples from seen domains in a replay buffer (Rolnick et al. 2019).
+
+New tasks: a shift in the joint $p\left( {X,Y,T}\right)$ caused by a shift in $p\left( T\right)$ that introduces a new causal mechanism (e.g. a math operation) to be learned by a new module. Existing CL methods like regularization (Kirkpatrick et al. 2017) or replay (Rolnick et al. 2019) applied to monolithic networks may perform on par with modular solutions under this shift in terms of forgetting. The latter, however, should achieve better transfer and faster learning under the assumption that tasks share mechanisms (e.g. "+" is used in combination with two other distinct operations in two different tasks). Additionally, monolithic architectures have been shown to lose plasticity throughout CL (Dohare, Mahmood, and Sutton 2021), a drawback that may be mitigated through modularity.
+
+Copyright © 2022, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
+
+Hidden shift: a shift in the joint $p\left( {X,Y,T}\right)$ caused by a shift in $p\left( Z\right)$ . Consider an example where the task is to interpret the meaning of a nodding gesture at some unknown geographical location. When moving, e.g., from Canada to India, the meaning of the nodding gesture can change while the meanings of other gestures (supposedly) remain identical. In the example of math equations, a new environment could hypothetically change the meaning of the multiplication operation to, say, subtraction while leaving other operations unchanged. Since $Z$ is unobserved, this drift requires a sparse update of a single mechanism without affecting the other operations. Standard CL methods are likely to underperform in this setting, as old and new tasks become contradictory.
+
+Data amount shift: knowledge about previously seen mechanisms needs to be updated as more training data becomes available. A modular architecture may be able to sparsely update only the affected modules, while a monolithic solution, with entangled mechanisms, would suffer from forgetting if no measures are taken to prevent it.
+
+Spurious correlation shift: attributes correlate under ${p}_{t}$ but not under ${p}_{k}, t \neq k$ . For example, the operation "+" may have been seen in the same equation as "-" in task $t$ , which can lead the routing mechanism of a modular solution to mistakenly associate the high-level variable "+" with the mechanism of subtraction. For modular solutions this shift might require updating only the routing mechanism, while a monolithic one would require updating the whole network. The problem of spurious features in the context of CL has recently been studied by Lesort (2022).
+
+How to learn modules representing true causal mechanisms is hence an important open question. While several attempts have been made to design systems capable of discovering the true underlying data-generative modules that comprise $p\left( {Y \mid X,T}\right)$ (Goyal et al. 2019, 2021; Parascandolo et al. 2018), there is no clear recipe yet. Several inductive biases that facilitate learning of independent composable mechanisms have been proposed, including competition (Parascandolo et al. 2018), information bottlenecks such as attention (Goyal et al. 2019) or functional bottlenecks (i.e. limiting the number of inputs a module can take) (Goyal et al. 2021; Ostapenko et al. 2022), or restricting modular communication to discrete variables (Liu et al. 2021).
+
+Preliminary result with Mixture of Experts (MoE). Here, we design a simple attention-based MoE model and train it continually on two streams of 5 and 7 tasks. In both streams, $X$ is sampled uniformly from $\left\lbrack {-1,1}\right\rbrack$ , and the corresponding $T$ (here the task description is represented by a single variable) is sampled uniformly from a set of predefined math operations. Labels $Y$ are generated by applying the sampled mechanisms $T$ to the inputs $X$ . Stream 1 represents the new-task shift (i.e. new operations are introduced, with operations overlapping across tasks) and consists of 5 tasks $\left( {t = 0,\ldots ,4}\right)$ . The first 5 tasks of Stream 2 are identical to Stream 1; tasks $t = 5$ and $t = 6$ simulate the hidden shift. For example, the operation encoded in the input of $t = 5$ is addition, which is identical to $t = 0$ and $t = 1$ , but the meaning of addition has changed from ${x}_{1} + {x}_{2}$ to $\left( {{x}_{1} + {x}_{2}}\right) /5$ , which is reflected in the training data of these tasks. The modular continual learner (MCL) receives as input a set of 3 entities: ${x}_{1}$ , ${x}_{2}$ , and an operation (e.g. addition, subtraction, multiplication, etc.). All three variables are first projected into a vector space, whereby we use a fixed embedding table for the operations and an encoder, trained only during the first task, for ${x}_{1}$ and ${x}_{2}$ . MCL performs module selection using a key-value attention mechanism and a functional bottleneck similar to NPS (Goyal et al. 2021) (MCL is an adapted version of NPS for CL). We formulate these tasks as regression problems and test on novel randomly sampled $x$ 's. We use 20,000 samples per task for training and 2,000 for testing.
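+
+The data-generating process of the two streams can be sketched as follows (NumPy; the concrete operation set and task ordering are illustrative assumptions, not the exact protocol used for the figures):
+
+```python
+import numpy as np
+
+rng = np.random.default_rng(0)
+
+OPS = {  # predefined math operations indexed by the task variable T
+    "add": lambda a, b: a + b,
+    "sub": lambda a, b: a - b,
+    "mul": lambda a, b: a * b,
+}
+
+def make_task(op_name, n=20_000, hidden_shift=False):
+    """One regression task: x1, x2 ~ U[-1, 1], y = T(x1, x2)."""
+    x = rng.uniform(-1.0, 1.0, size=(n, 2))
+    y = OPS[op_name](x[:, 0], x[:, 1])
+    if hidden_shift and op_name == "add":
+        y = y / 5.0  # "+" silently becomes (x1 + x2) / 5
+    return x, y
+
+# Stream 1: new-task shift only (operations overlap across tasks).
+stream1 = [make_task(op) for op in ("add", "add", "sub", "mul", "sub")]
+
+# Stream 2: same first 5 tasks, then tasks carrying the drifted "+".
+stream2 = stream1 + [make_task("add", hidden_shift=True),
+                     make_task("add", hidden_shift=True)]
+```
+
+Each element is an (inputs, labels) pair for one task; held-out test sets of 2,000 fresh samples per task would be drawn from the same process.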
+
+
+Figure 1: Stream 1.
+
+
+Figure 2: Stream 2.
+
+In Figure 1 we plot the average mean squared error (MSE) over all tasks after learning each task incrementally. While MCL has a much larger mean MSE at the beginning, it reaches an MSE comparable to EWC (Kirkpatrick et al. 2017) by the end of the sequence. In Figure 2 we measure the MSE averaged over the current state of the world on Stream 2, i.e. if the meaning of "+" changes at $t = 5$ from ${x}_{1} + {x}_{2}$ to $\left( {{x}_{1} + {x}_{2}}\right) /5$ , this change is incorporated in the test sets after this task (i.e. the addition operation in the test sets of all tasks is replaced with addition followed by division by 5). Here, we observe that only MCL is able to perform well on this stream. EWC performs well up until ${T}_{4}$ , as it is able to alleviate forgetting. After ${T}_{4}$ , when the mechanism shifts, EWC's regularization strategy, aimed at reducing plasticity, prevents the model from incorporating knowledge about the shift in the mechanism reflected in the new training data of tasks 5 and 6. MCL is able to sparsely update only the modules which specialize in the shifted mechanisms. Importantly, MCL can alleviate forgetting solely through routing samples to the correct modules.
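+
+The effect behind the Figure 2 comparison can be reproduced in miniature (NumPy; the "stale" and "updated" predictors are idealized stand-ins for a regularized monolithic learner and MCL after the shift, not the trained models themselves):
+
+```python
+import numpy as np
+
+x = np.random.default_rng(1).uniform(-1.0, 1.0, size=(2_000, 2))
+
+# Current state of the world after t = 5: "+" means (x1 + x2) / 5.
+y_current = (x[:, 0] + x[:, 1]) / 5.0
+
+stale_pred = x[:, 0] + x[:, 1]            # learner frozen on the old "+"
+updated_pred = (x[:, 0] + x[:, 1]) / 5.0  # learner that sparsely updated
+
+def mse(pred, target):
+    return float(((pred - target) ** 2).mean())
+
+# The frozen learner is penalized on every test set that uses "+".
+print(mse(updated_pred, y_current) < mse(stale_pred, y_current))  # True
+```
+
+Evaluating against the current state of the world means every earlier task's test set is regenerated with the new meaning of "+", so a learner whose old mechanism is locked in by a plasticity-reducing penalty keeps paying this error on all affected tasks.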
+
+Conclusion. We advocate for the usefulness of the IM hypothesis in CL (it is not mutually exclusive with the i.i.d. assumption). This may open the door to developing algorithms with better transfer and efficiency. We point out the potential advantages of such solutions under different distribution shifts and show in simple toy experiments that the IM principle can address some problems of CL in practice. Open challenges include determining useful inductive biases and further assumptions for designing modular solutions beyond MoE, where causal mechanisms can be discovered when modules are applied in superposition, resulting in a more fine-grained task decomposition (Ostapenko et al. 2022).
\ No newline at end of file